One of the biggest tech stories of the year was the Apple tax ruling, which saw the company ordered to pay Ireland €14 billion.
It was also a year that saw more advances in artificial intelligence (AI) alongside efforts to introduce new rules to regulate the technology.
Regulators here were also kept busy issuing massive data protection fines and launching new online safety rules.
Apple tax ruling
In September, Apple lost its fight against the European Commission’s ruling that it underpaid €14 billion in tax due to Ireland.
The European Court of Justice set aside the judgment of the lower General Court, which had previously overturned the Commission’s decision.
The first €3 billion tranche of the money landed in the State coffers in October.
The Government said that it expected about €8 billion of the €14 billion to be received in 2024 with the remainder coming in 2025.
The windfall was repeatedly referenced throughout the General Election campaign, with the various parties outlining how they would spend the money.
Fianna Fáil said it would allocate €4 billion of it to social and affordable housing, with another €2 billion being spent on a new “Towns Investment Fund”.
Fine Gael said it would use more than half of the Apple money for housing, Sinn Féin proposed using €7.6 billion of the windfall for a public housing programme, and the Labour Party said it would use €6 billion to create a state-owned construction company.
Artificial intelligence
AI advances continued throughout 2024 with the launch of various models by many of the major tech companies.
In its predictions for 2025, Dell Technologies said it would be a pivotal year for AI, as the technology moves from experimentation to execution and becomes an essential driver of business transformation.
“The era of trial and error in AI has come to an end, with businesses of all sizes moving to adopt Generative AI to enhance productivity, efficiency and growth,” said Jason Ward, EMEA North Vice-President and Managing Director of Dell Technologies Ireland.
In 2024 there were efforts by authorities and governments to regulate AI.
The EU AI Act came into force in August and banned artificial intelligence systems considered a clear threat to the safety, livelihoods and rights of people.
It included strict new rules for high-risk AI systems used for example in critical infrastructure, law enforcement or elections.
Foundation models, such as ChatGPT, will be required to comply with transparency obligations before they are put on the market.
Systems that have the ability to create manipulated images and videos, such as ‘deepfakes’, will have to clearly show that their content is AI-generated.
The act will also regulate governments’ use of AI in biometric surveillance.
In October, the Government published a list of nine national public authorities that will be responsible for policing the new rules.
The bodies will be given additional powers under the AI Act to facilitate them in protecting fundamental rights in circumstances where the use of AI poses a high risk to those rights.
The list of authorities includes the Electoral Commission, the media regulator Coimisiún na Meán, the Data Protection Commission, the Environmental Protection Agency, the Financial Services and Pensions Ombudsman, the Irish Human Rights and Equality Commission, the Ombudsman, the Ombudsman for Children and the Ombudsman for the Defence Forces.
One of those bodies, the Data Protection Commission (DPC), sought guidance from its European oversight board on the use of personal data for the development and deployment of AI models.
It prompted the European Data Protection Board (EDPB) to issue an opinion on the matter, addressing questions such as the circumstances under which AI models can be considered anonymous, and how legitimate interest should be assessed as a legal basis for processing personal data to create, update and/or develop an AI model.
Meta, the parent company of Facebook, Instagram and WhatsApp, has been critical of the EU regulatory environment when it comes to AI, describing it as unpredictable and fragmented.
The company has said that regulatory barriers are a step backwards for European innovation and competition in AI development.
In July, Meta said it would withhold the rollout of future ‘multimodal’ AI models in the EU after privacy concerns were raised by the DPC.
Data fines
Aside from regulating AI, the DPC was also kept busy issuing privacy fines for data breaches.
In February, the tenure of Data Protection Commissioner Helen Dixon ended after almost a decade in the post.
As part of a Government plan to expand the leadership team, two new Commissioners, Dr Des Hogan and Dale Sunderland, were appointed.
They continued in the same vein as their predecessor, hitting big tech with big fines.
In October, the DPC imposed penalties totalling €310m on social media platform LinkedIn following an investigation into the company’s processing of the personal data of users for the purposes of behavioural analysis and targeted advertising.
In September, Meta, the parent company of Facebook, Instagram and WhatsApp, was fined €91m over the storage of passwords.
In December, the company was hit with fines of €251m for a data breach that impacted around 29 million Facebook accounts globally.
It brought to €2.8 billion the total fines imposed on Meta by the DPC.
However, just €17m of this has so far been collected due to legal challenges.
Online safety
In October, a new Online Safety Code was formally adopted by the media regulator Coimisiún na Meán.
The rules apply to video-sharing platforms that have their EU headquarters in Ireland.
The code will be legally binding, and companies will face fines of up to €20 million or 10% of a platform’s annual turnover, whichever is greater, for breaches.
Under the rules, social media firms will have to protect children from specific types of harmful online material including cyberbullying, content that promotes eating disorders and content that promotes self-harm or suicide.
Platforms will also have to prevent the uploading or sharing of a range of illegal content, including incitement to hatred or violence.
The code will run in tandem with the EU Digital Services Act and the EU Terrorist Content Online Regulation.
The Government said that the adoption of the Online Safety Code meant that the era of self-regulation for tech companies was over.
In November, a new Dublin-based social media appeals body opened to hear disputes about policy violation decisions made by platforms.
The Appeals Centre Europe can decide cases relating to Facebook, TikTok and YouTube, and is aiming to include more social media platforms over time.
It can hear complaints about decisions by platforms relating to issues such as the removal of content and the suspension of users’ accounts.
The Appeals Centre has been certified by Coimisiún na Meán as an out-of-court dispute settlement (ODS) body under the EU’s set of online safety rules, the Digital Services Act (DSA).
Its decisions will be non-binding.
The start-up funding for the Appeals Centre has been provided through a one-time grant from the Meta Oversight Board Trust, but the centre has insisted that it will be independent of Meta.
Global IT outage
On Friday 19 July, computer systems crashed around the world due to a global IT outage.
There was travel chaos and lengthy delays as airports and airlines experienced major technical issues.
Banks, telecom firms and media companies were also impacted.
Microsoft Windows users were greeted with the so-called ‘blue screen of death’, an error message telling them that the system was down.
The worldwide outage was caused by a technical problem with an update from cybersecurity company CrowdStrike, which provides antivirus software.
“CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts…the issue has been identified, isolated and a fix has been deployed,” CEO George Kurtz said in a message on social media platform X.
In Ireland, Ryanair was forced to cancel flights.
The National Car Test Service experienced disruptions, as did the processing of driving licence applications.
The National Transport Authority said that both the TFI Live and TFI Leap Top-Up apps were impacted but public transport services were unaffected.
Ireland came out of the IT crash relatively unscathed compared to other countries, but it was a stark reminder of how dependent we have become on technology and of how interconnected the systems we rely on every day now are.