The state of IT in 2025 report. Key tech trends shaping our future
I've wrapped up some trends based on events that happened in 2024. AI has become both a blessing and a curse, but it's not the only story worth telling. Beneath the surface of AI-powered chatbots, language models, and generated images lies a complex web of technological shifts, social implications, and economic disruptions that are redefining our world.
Enshittification
Content bloat
It's been around for at least a decade — senseless SEO-optimized articles, content marketing for content marketing's sake, pointless SMM — yet in 2024, it went on a rampage thanks to AI. One of the current problems with AI is that it has made it so easy and fast to produce low-quality content. Users often complain that search results have become useless, largely due to material created, published, and SEO-optimized by LLMs with minimal human input. The same is true for social media: tons of genAI images have flooded Facebook, LinkedIn, Twitter, and other platforms, and are now widely used in ads.
Journalists at 404media conducted an experiment and created a website for just $365 that runs entirely on AI and produces about 50 plagiarized articles per day. Memes can also be AI-automated. Users build a Claude-to-Stable Diffusion pipeline and generate them based on templates (here's a couple of examples: 'Wojak' and 'Stop Doing X').
It's getting harder to select, search for, and to recieve meaningful content, and the situation tends to get worse. 'Less sense, more content' is the motto of a frighteningly large group of people and brands; now they have the tool which empowers that approach and makes it possible with minimal effort.
Data ends up in one place
And the place is datasets for training AI. Everyone wants your data — and if it used to be so to sell ads, it's now to sell ads and train AI on it. Bots generate 30% of world's traffic, according to Cloudflare, and some part of them are scraping any web page they find for AI-training purposes. The naughtiest of all, by the way, is 'Bytespider' from the Chinese company ByteDance, which owns TikTok, it's responsible for 40% of all AI-scrapping queries. Google is also ruthless: it uses the same bot agent for both search indexing and AI, so website owners have no choice: it's either you block it and vanish from search, or let it go. Another point of stress is user data on major platforms: you're happily using Instagram, or Reddit, or Twitter, and have no idea that your comments and posts will end up in some large language model. The latter, for instance, secretly added such an authorization to its settings, turned on by default, and it can't even be turned off from a smartphone.
Site owners, some businesses and users are fighting back. Cloudflare, for example, has launched a one-click solution to combat AI bots. The company that powers half the internet is now able to block scrappers that ignore robots.txt, thanks to its new ML-powered algorithm that can detect bots even when they're masquerading as real users. This feature is available to all users for free, and given Cloudflare's clout and ease of integration, it's likely to be quickly adopted.
Bullshit creates more bullshit
Given the alarming rate at which the Internet is being filled with generative content, and that the data from the Internet is ending up in training datasets, the possible outcome is quite predictable. Researchers have discovered that training models on generated data inevitably leads to their degradation. When you feed an LLM (Large Language Model), VAE (Variational Autoencoder), or GMM (Gaussian Mixture Model) with output from other models, they go haywire: forget what they've learned, the behavior collapses to a minimal variant, and they start producing outright nonsense. The process is called model collapse.
It's tempting to imagine a black market for dusty SSDs labeled '100% AI-free' and 'pre-AI era data', but that's just a humorous exaggeration. In reality, if humanity can muster the will to focus on this issue in advance, we can prevent AI from being poisoned by the data it generates — just as we did with the Y2K problem. And later, ordinary people will walk around saying that since nothing happened, there was no problem at all, and that evil programmers and lying media made it up.
Useless AI everywhere
There is some kind of rush to add AI to every product, and to label everything as 'AI-powered', as if that makes the product better. So-called 'AI assistants' are in many cases ChatGPT, or Claude, or another model wrappers that are poorly integrated, add no value and have no real use cases. When it comes to proprietary models integrated by major platforms, it doesn't get better:
- Musk has introduced a feature on Twitter called 'More about this account', which is powered by Grok and available to paid users. When you click the button, you're supposed to get a brief summary — who they are, what they're known for, and what they write about on their account. But the results are worse than disappointing. The neural network comes up with non-existent facts, doesn't provide key information that's right in front of you, or resorts to vague general statements that could be applied to any profile. In its current form, 'More about this account' is not only useless, it is harmful. Twitter has also added Grok-2 with image generation capabilities. Users are already creating images of Mario smoking, intentionally provocative photos of body parts, and erotic content with celebrities — in short, all the things that are banned in other chatbots. Grok in general easily absorbs and spreads misinformation, and generating content on its own platform opens the doors to a sea of fake news and senseless clickbait.
- Google has added Gemini or various AI features to Gmail, Chrome search bar, Meet. Here's a great example of how it messed up meeting notes. And YouTube has introduced AI-generated chat summaries. Users joke about how perfectly this feature will work. It's understandable — AI can easily mess up even simple tasks, and at scale it only takes a few resonant edge cases to undermine trust. Given the toxic and absurd nature of the Internet's comment sections, Google's initiative is indeed hard to call safe.
- Meta adds its Llama to WhatsApp, Instagram and Facebook, minimizing effort needed to generate and share AI slop.
- SalesForce is so aggressively rebranding its solutions as AI-powered that the phrase 'AI agents' was uttered a hundred times during an investor meeting.
LLMs actually do a poor job of summarizing, as NY Mag argues in their column. It's hard to disagree — AI always acts as an unreliable narrator whose word can't be trusted, and it's easy for them to miss important details and replace them with unimportant ones. Research confirms this — all models hallucinate, and even the best of them can only provide non-hallucinatory answers about 35% of the time at most. In addition, researchers claim that the larger the LLM, the less likely it is to answer simple, basic questions correctly. The problem is that as training datasets grow and model power increases, and especially when trained on feedback, models start to get better at ambiguous questions — but when it comes to simple questions, if they don't know the answer, they're more likely to answer incorrectly rather than admit ignorance.
AI-free zones
In addition to the trend of AI-izing everything, a counter-trend is emerging. 'AI is needed, but not everywhere and not always' — in other words, people value the ability to turn off neural network functions in certain scenarios.
One of the points is that people are not ready to outsource to AI what should be a genuine expression of their thoughts and emotions. Sending an AI-generated email to a contractor or government agency is one thing; writing a love letter, pinging friends, or writing a product review is another. The main question is: why bother writing at all when you can completely replace yourself with an algorithm that doesn't have any "you" in it and produces standardized, generic results that anyone could write? The best illustration of this is the ad Google ran on the Olympics featuring a father asking Gemini to help his daughter write a fan letter to an athlete. The ad had to be pulled.
Another point is connected to creativity and art. One of the notable themes of the year is an essay by sci-fi writer Ted Chiang on why AI won't create art. One of the main points is that the basis of any art is a huge number of micro-decisions made by the author, which AI cannot replicate (it either chooses something mediocre), and detailing all these decisions to instruct AI requires just as much effort as simply doing it without it — which is exactly the opposite of what genAI claims to do (more output than input). Perhaps the best sentence in the essay is: It [AI] reduces the amount of intention in the world.
Speaking of product examples, the Halide Mark II camera app has added a 'no AI at all' option. On modern smartphones, all photos are processed, even if you enable the 'raw' format in the settings. The sky is color-corrected, noise is removed, and some manufacturers even completely replace objects like the moon. The new Mark II feature turns off all processing, leaving the image as it would be on a '00s digital camera.
Procreate, on the other hand, urges companies not to use generative AI features in creative products. The company doesn't want to add genAI to its app, claiming that its use hurts artists and the industry as a whole. The artistic community has greeted this statement with approval.
Social media clusterfuck
The decline of social media isn't a new thing — the short golden age of 2010-15 is long gone, but the cumulative effect of social media enshittification peaked in 2024 and is likely going to evolve in 2025. The main platforms have long since turned into a moldy pot with a salad of stale algorithms, tons of rotten ads, boring gamification for engagement, grime of an overloaded interface, rust from total surveillance, and boils from millions of additional services like 'we-are-yet-another-super-app-for-everything'.
People are fed up with social media and some of them are trying to find a more enjoyable and meaningful experience in niche applications. Users who are desperate to feel connected to real people and ideas that matter to them are leaving social media to find it on specialized services with a social function, such as Goodreads, Strava, Letterboxd, and others. What's interesting is that, according to Bloomberg, the business world is somewhat following suit.
Not everyone is ready to give up on social media, though. These people are switching from old platforms to emerging ones — like Bluesky or Threads. However, the problem is that most of the new social platforms are built with the same principles in mind as the old ones. They reinforce the same pathological patterns, just in a new place, giving the users a brief respite while they are not yet filled with bots, ads, overengineering, etc. — before they turn into their predecessors.
Reality mix
Hard to tell if it's real
The imaginary world continues to merge with the real world. It becomes more and more difficult to distinguish information imagined and created by a machine from information imagined and created by a human. There seems to be a lack of working solutions in this area — and there's no certainty that they will appear.
For example, Instagram and Facebook's 'made with AI' labels erroneously attach themselves to real photos. All you have to do is edit them using Adobe tools — whether it's a little cropping or some light AI retouching, Meta's systems will still flag it as generative fantasy. YouTube is developing a set of tools to detect deepfakes in vloggers' videos, but their effectiveness remains unclear. Google search results are flooded with AI images, making it easier to find a generated peacock than a real one.
Scrolling through social media or websites, it's not always clear whether the people in the photos are real or not. Does this place even exist? Is this fake poison ivy what it looks like? The latter example is no accident — a family in the UK claimed to have been poisoned by mushrooms because of an identification guide containing AI-generated images. The buyers had purchased the guide on a marketplace, only to find out later that at least part of the book had been written and illustrated using AI (the author claims that there was no mention of this in the book, of course). The rise of AI video generators in late 2024 only adds up to the case: now it's not just about texts or photos, video is also shifting into a limbo between reality and fantasy.
Now, if you're looking at something, you are looking at something — and it's unclear what it is.
AR/VR winter
Companies are desperately trying to combine AI, AR/VR, the Internet, and social media in wearables to reboot the category, but so far it's been a clear failure. Smart glasses (remember Google Glass?) raise more questions about use cases, privacy boundaries, and convenience than they provide answers. Pins with AI assistants, in their current form, clearly can't compete with smartphones and feel like unnecessary add-ons. Vision Pro, despite its pass-through mode, is impossible to wear all the time. Perhaps the problem is that no new combination of output, input, compactness, and ergonomics has yet been created that is simultaneously more comfortable, immersive, efficient, and functional than touchscreens and keyboards. But it's still a long way off.
For now, companies are gradually releasing products — such as the new Meta Quest, Apple Vision Pro, Android XR, the Vertex treadmill platform, or the Roto VR rotating chair — but a critical mass of change has yet to be achieved, leaving the market narrow and penetration low for now.
At the same time, the AR community received a big blow from Meta: the company announced the shutdown of Spark Studio. AR Devkit will stop working on January 14, 2025, and all filters, masks, and 3D objects created with it will also be deleted. Meta claims to be shifting its focus from augmented reality to AI, but this excuse is no consolation to the creators who invested in AR-based features that were extremely popular on Instagram (and they are rightfully furious).
AI
Functionality improvements
Every major GPT chatbot company has been hard at work making chatbots more functional and adding tools for more use cases. These include 'projects' to share context (files and system prompts) between chats, web search to get rid of GPTs' constraining bonding to training data, OS and app integrations to gather more context and perform actions, 'memory' to store small chunks of context across chats and sessions, and so on.
In general, this sounds like a healthy way to make improvements toward a more useful technology. Some of the biggest problems with LLMs, besides unreliability/hallucinations and the reluctance to admit incompetence or lack of data, are the lack of user- and task-specific context, which prevents the assistant from doing right things in a right place at a right time, and the near total absence of long-term memory, which makes the conversation flow inconsistent. Looks like companies know this, and are taking steps to address it.
The race for AGI / Software optimizations
At the same time, there is an 'AGI fever'. Millions of dollars are being spent either to scale existing models — which is becoming increasingly difficult because there isn't enough data left — or to redesign the models by tweaking the way they work or by combining different types of models. This is all for one goal: create Artificial General Intelligence, a digital entity comparable to humans in terms of thinking and reasoning. Companies don't have a clear definition of what they aim to build, and have no idea how to measure whether they're getting there. In 2025, we'll see more and more claims of 'nearly achieving AGI' with state-of-the-art experimental models, while the basic and widespread functionality of AI will advance rather gradually.
What's more important, though, is optimization. There have already been some advances in quantization and other methods of running LLMs with fewer resources, but the overall resource consumption for both training and inference is still too high. If the programmers of the 70's-90's hadn't had to optimize everything to push the limits of what was possible on not-so-powerful machines, we wouldn't be at the same point of technological progress. After a surge in available computing resources, the tech industry became relaxed and started paying less and less attention to how neatly and efficiently things were organized under the hood. Now is the time to reverse this trend, because once again we are facing a set of tasks that are promising, but require qualitatively new approaches. We can expect some breakthroughs in this area in the coming years.
One of examples is Recogni: their approach, Pareto, significantly reduces the cost of inference. It uses a logarithmic number system and converts multiplication into addition. According to the company, this saves resources — while maintaining calculation accuracy. Or Outreport, who has presented a system for fast model swapping on a single GPU. The hot swap process takes about ~2 seconds, which is ~150 times faster than traditional methods. The founders claim that the project uses caching to enable fast loading and optimization, which can reduce costs by up to 40%.
Meanwhile, the gap between open source and proprietary LLMs is narrowing, so we shouldn't expect a One True™ Gatekeeper to emerge.
Infra optimizations
Optimizations and advances on the infrastructure side are underway. Everyone is working on three main challenges: balancing the cost-to-performance ratio, creating more powerful hardware, and scaling available resources (data centers, power, GPUs, etc).
Apple is trying to develop a distributed inference where tasks are processed on multiple devices. When you're working with neural networks, it makes sense to share workloads across high-end Macs, iPhones, iPads, and Vision Pro — and that's exactly what Apple has done with its recently announced Apple Intelligence. The combined processing power of the ecosystem makes it possible to perform more complex tasks than any single device can handle. In the meantime, distributed inference with workload management has been already done for consumer devices. With Exo, you can run an AI cluster on Macs, iPhones (support for which is currently suspended), Android phones, and Linux machines. The library connects devices in P2P mode and distributes model layers between nodes in proportion to their performance. It sounds a lot like the Apple patent.
Others, like Cerebras, are trying to challenge the hardware market leaders. The company is launching a cloud-based inference service. They claim it's the fastest inference in the world, supporting 450 tokens/sec with Llama 3.1-70B. The startup uses its own custom-designed chips, Wafer Scale Engines, and aims to compete with Nvidia's GPUs, the de facto industry standard, with a faster and cheaper option.
It's (not) welcome here
The jury is still out on where it's good to use AI and where it's not. Take medicine: medical AI systems are biased. Research shows that the quality of results from neural networks used to make diagnoses based on images (such as X-rays) varies depending on the patient's gender, age, and race. When analyzing data, AI uses 'demographic shortcuts': in other words, overly broad groupings like 'all 60+', 'all dark-skinned 20-25 year olds', and so on. These are essentially old-fashioned stereotypes. But this is a fixable problem, according to experts: it's better to either prevent AI from making predictions and taking demographics into account, or to reward a model for lack of bias in a subgroup and penalize it for bias that leads to worse outcomes.
In bureaucracy, it's not so smooth. In Nevada, Google's AI will review applications for unemployment benefits. The AI will analyze transcripts of court hearings and review related documents before providing a brief summary with a recommendation: approve, deny, or revise the application. This summary will then be reviewed by a human, who will make the final decision. However, experts are skeptical, and for good reason: in addition to the unreliability of modern AI results, the system also creates pressure on individuals to simply click 'approve AI's recommendation', rather than carefully reviewing the details — especially when they need to quickly clear a large backlog of cases. Moreover, this approach only reinforces the lack of attention to detail and context that often characterizes bureaucratic systems.
Judicial? Better forget it. Researchers have trained AI to detect lies better than humans, but the idea poses more problems than solutions. First, at 67% (compared to a human average of about 50%), accuracy is still very low. Second, there are moral dilemmas: even if the system's effectiveness were improved to a hypothetical 95%, would that be enough to blindly trust its results? What about the five percent (in an exaggerated case, it's actually closer to a third) that the AI would falsely accuse of lying? Moreover, research shows that people tend to believe and even accuse others of lying when AI says so. Is it worth delegating such judgments to programs and using them as additional evidence? Can someone be tested for lying against their will? Even polygraph is considered a pseudoscience with low effectiveness and is usually not accepted as legal evidence.
Smoothing the UX? That's a nice thing, but think (and run tests!) twice. The second-hand marketplace Depop has added an AI-generated listing feature based on a single photo. When an item is uploaded, the neural network recognizes it and fills in the brand, color, and description. This is essentially the mythical 'make it look cool' button that people have jokingly fantasized about since the dawn of the Internet. The question remains, however, how to strike a balance between helping users and society by automating routine tasks (that may not even exist), and dehumanizing everything and turning communication into a duel between ghostly simulators. The line is probably far from the idea behind Bumble's use of AI in profiles and messages, or even Tinder's more moderate solution of allowing AI to select profile photos.
Eco
It's bad. What did you think I was going to say? We all know the data: writing a 100-word email using ChatGPT would consume more than half a liter of water and 0.14 kWh of electricity, blah blah blah, and so on. And the corporations don't give a damn about it. They are going full-in on greenwashing: for instance, according to Google's annual report, the company's greenhouse gas emissions have increased by almost half in the last five years. The main culprit is AI, by the company's own admission. The energy consumption of data centers used to compute AI functions has led to a significant increase in emissions. Google's forecast is less than reassuring, stating that 'emissions will continue to grow before eventually declining to target levels'. However, it does not explain how this decline to net zero will occur. The phrase 'the world's understanding of "net zero" remains in a dynamic state and is subject to refinement' is a remarkably feeble and dubious PR excuse.
There is a glimmer of hope, though: Google has partnered with Holocene to remove CO2 from the atmosphere. The startup is still in early stages, having only been around for two years, but it offers a much more affordable solution than its competitors — $100 per ton vs $600+. The cost of capturing and processing CO2 is critical in this equation, and if Holocene is not bluffing and can actually reduce the costs to that level, then the greenhouse gas problem could be solved by a third. It's worth noting, however, that Holocene currently only has a pilot production facility in Tennessee with the capacity to capture just 10 tons per year.
Nostalgia
The Roaring Twenties (I mean 2020s) awake both mournful sadness and passion for the past. Nostalgia manifests itself in the visual domain, such as pixelated designs or retrowave, music, fine art, and, of course, tech.
It's been 20 years since the original Motorola Razr was released, and now the company is back with a new version — with AI, of course. The foldable smartphone market in the U.S. has tripled in the past year, and Motorola dominates it with about 75% of the market share. Users also have fond memories of the Windows Phone menu and OS as a whole. Many claim that it was ahead of its time and better, despite having almost no apps available. In addition, social media has been abuzz with discussions about an Apple Watch case that turns it into iPod Classic, yep, that's right — with a wheel. And, of course, people are reminiscing about the rare case for the iPod Nano that turned it into a watch-like device.
We've come full circle.
Phygital
Phygital (in a broad sense — as the combination of digital practices with physical world) is growing, but also raising concerns.
Walmart is replacing traditional price tags in its stores with e-ink screens. The move has sparked concerns about the potential for price manipulation, with shoppers worried that surge pricing will come to offline retail. One of the most prominent examples of how this technology could be used is to dynamically increase the price of water and ice cream on hot days.
European lawmakers are also trying to regulate surge pricing after a high-profile incident involving the sale of Oasis reunion tickets. Customers who had waited hours for their chance to buy tickets arrived at the store only to find that prices had skyrocketed. Now, 14 European MPs are proposing amendments to the Digital Services Act (DSA) to restrict the use of this technology. The issue is not new, however — concerns about surge pricing have been around since the dot-com bubble era, when it began to cause frustration in areas such as taxis and delivery services. While there may be some justification for dynamic pricing in these industries, its use in other sectors raises legitimate public concerns.
And finally, AI is being introduced into offline shopping and customer service. Amazon is integrating it into its Just Walk Out systems for stores, while Taco Bell is using it to optimize orders at drive-through windows.
Robotics
Robots are becoming smoother, more capable, and more futuristic (read: some multifunctional, some extremely specialized).
The Nadia robot has been optimized to play table tennis, with smooth movements and reduced I/O latency thanks to its VR control system. This was demonstrated in a game of ping-pong against a human opponent, but it's easy to see how robots like this could be used in the future where it would be dangerous or uncomfortable for humans: in mines, in space exploration, or in warfare. There's little doubt that robots will eventually be used in these areas — the Ukrainian military is already using AI in drones for terrain recognition, target identification, and avoiding electronic countermeasures. The next step is coordinating swarms of UAVs, which will require the use of sophisticated algorithms to manage complex variables and rapidly changing environments. In some cases, human intuition simply won't cut it, and AI will be a matter of life and death.
Robots are also getting more industrial, medical, agricultural, FMCG, and delivery jobs:
- An AI-controlled robot has performed its first dental surgery. It uses OCT to create 3D models of teeth, while a human doctor makes the decision and the robot then performs the procedure independently. In this case, the robot prepared the teeth for a dental crown in just over 15 minutes, significantly faster than the usual two or more one-hour visits required by human dentists.
- IKEA has adopted drones for inventory management. Instead of using them for delivery, the company unleashed them to fly around in the stores to scan products in hard-to-reach areas. This reduces tedious and inconvenient work while keeping the store's inventory up to date in near real time.
- In the UK, permission has been granted to test the use of UAVs for delivery, infrastructure inspection and emergency services. Shake Shack is also launching drone food delivery in LA.
- Robots with AI are being developed for agricultural use, including ones that can spray herbicides. John Deere's tractor robots with AI-based plant recognition systems are more accurate at targeting weeds while avoiding crops. Precision AI uses similar technology in its agricultural copters. Smart spraying not only reduces damage to crops, but also significantly reduces chemical use, benefiting the environment and lowering costs.
- Self-driving holds great promise for freight trucks. While others focus on automating deliveries on highways, Kodiak Robotics is taking a different approach in another niche — off-road routes. Not all destinations have roads, especially when it comes to contracts with the U.S. military. They're willing to pay more for it. The solution? Train trucks to drive off-road instead of sticking to traditional logistics. But the application goes beyond working with people in khaki: construction sites need gravel and sand delivered, remote communities need food and medicine, and oil rig workers and archaeologists need equipment. And perhaps one day these technologies will be used to transport goods on the surface of other planets.
Improving robot behavior is getting easier, too. Robots can be trained in simulations created by scanning a room with an iPhone. It's especially important for home robots — each workspace is unique and constantly changing. So users can scan their room with an iPhone, create a virtual replica, and run tens of thousands, hundreds of thousands, or even millions of simulations within it before deploying the robot to perform a task in the real room. The only remaining step is to automate the process of scanning the room itself, eliminating the need for human intervention — but that is already a technical matter.
Hardware
Hoist all sails and full steam ahead for better AI chips. The situation with graphics cards and chips due to the rise of AI is still intense. For example, Nvidia has released a version of its RTX 4070 model with simpler GDDR6 memory instead of GDDR6X; Mitsubishi is struggling to keep up with orders for components used in data centers. DRAM and SSDs are flying off the shelves, pushing manufacturers to develop more efficient devices and governments to support the industry.
Intel has announced new AI processors, while ASML has created a prototype of a new EUV lithography machine that has produced its first results. This technology can print chips with line densities as low as 9.5nm using 2D routing, and can even print DRAM chips in a single exposure, significantly reducing production costs. And Infineon has developed a technology that will significantly reduce the cost of manufacturing the next generation of AI chips. 300mm gallium nitride wafers can produce 2.3 times more chips than before. While this is a quantitative improvement in itself, combined with ongoing work to optimize conductor manufacturing, we could see significant qualitative changes in the performance, compactness and affordability of specialized AI chips in the near future.
Semiconductor sales in the Americas surpassed those in China for the first time in five years, growing 40% YoY. But Europe, which aims to produce 20% of the world's chips by 2030 (up from 10% today), faces a major risk: a shortage of skilled workers. While TSMC is going to build a factory in Dresden and Intel is planning a 'mega-factory' in Magdeburg, industry associations estimate that European bureaucracy and lengthy approvals are slowing down the sector's development.
There is also hope for quantum computers: a new architecture for qubits has been developed that could greatly simplify production. Scientists had previously worked on qubit sandwiches made of superconductors separated by an insulator, but this approach proved more complex and less productive. A simpler and more promising method uses separate superconducting plates connected by a thin superconducting wire.
Meanwhile, consumer device development seems to have plateaued — nothing particularly exciting is happening. Apple, Google, and Samsung have all held their presentations and shown off new products, but there hasn't been any significant change, with the focus instead on AI features. The same stagnation can be seen in the game console market, which is lagging behind computers.
Brains online
Brain-computer interfaces and neural implants are no longer the stuff of cyberpunk fantasy. The devices are here, they work, and they are on the (rather long) road to mass production. It's only a matter of time, the market they eventually land on, and penetration (pun intended).
- Neuralink has tested its devices on two patients and they are able to control a computer with their thoughts. Its Blindsight chip, designed to help restore vision, has received FDA approval for testing.
- Blackrock Neurotech's implants have helped two ALS patients regain the ability to communicate by voice. A paralyzed man and woman received neural chips that convert brain signals into text, which is then read aloud by a synthesized human voice using pre-disease samples.
- A child with the world's first epilepsy implant experienced an 80% reduction in seizures. The patient, who had an unresponsive form of epilepsy, received the implant in October 2023 and has since experienced a significant improvement in quality of life, with no reported falls due to severe daytime seizures.
- Synchron even integrated ChatGPT into the neural implant. The chip was inserted into a patient with ALS through a vein (no brain surgery required!). The AI in the implant is used to enhance communication capabilities: in conversations, ChatGPT generates multiple response options for the patient to mentally choose from, and the LLM takes into account not only the context of the conversation, but also the patient's emotional state, which is read through the implant.
Copyright & Legal
The battle Copyright vs. AI continues to escalate. Companies that sell licensed data for AI training have formed the Dataset Providers Alliance, the first industry association in the field, which promises to address ethical issues. At the same time, US record labels are suing AI music generators Suno and Udio, Center for Investigative Journalism is suing OpenAI and Microsoft, Forbes and Wired accused Perplexity of plagiarism and illegal data collection, while Reddit has updated its robots.txt file to prevent AI from scraping its data. However, many AI companies are known to ignore robots.txt instructions, and website owners have yet to find effective ways to stop them (although there are some interesting ideas).
A notable example of the copyright problem is Figma's temporary shutdown of the AI tool Make Designs shortly after its launch. The model, hastily (and CEO admits it himself) fed with reference images, began generating designs so similar to real-world applications that they were almost indistinguishable. The company's CTO claims that they did not train the model themselves, but I guess that won't come as much of a relief to those who used the new tool in their work and now risk being sued for copyright infringement.
The regulation of AI in its early stages is a global problem. Often it is reactive, as in Brazil, where a new Meta's privacy policy has been suspended due to training AI on user data. In other cases, like China, there's a strategic approach, with an attempt to develop 50+ standards for AI in two years.
Both approaches seem alarmingly unbalanced: reactive solutions fail to consider long-term consequences, but while large strategic policies are written and adopted, technologies leap forward, and the digital landscape changes.
There is still no clear regulation even for social networks, let alone artificial intelligence. For example, the head of Microsoft's AI division and co-founder of DeepMind sparked a discussion. The Verge quotes Mustafa Suleyman's comments under the headline 'Microsoft's AI boss thinks it's perfectly okay to steal content if it's on the open web'. He has a point: some users may underestimate what happens when they post online (and various 'take down' laws are pretty absurd). This is how the Internet works, with information chaotically spreading, combining, being used and transformed. But Mustafa's logic has a fatal flaw: you can't do whatever you want with content just because you found it on the Internet — especially if you're a giant corpo.
There has also been progress: the US, the EU, and the UK have signed the first legally binding treaty on AI. The Framework Convention on Artificial Intelligence includes high-level principles, such as the alignment of any developments with human rights and democratic principles, a commitment to transparency and privacy, and non-discrimination. The framework has also been signed by Norway, Iceland, Israel, Georgia, Moldova, Andorra and San Marino. In addition, the EU AI Act came into force in 2024, regulating AI work based on risk levels. And sixty countries, but not China, have backed the creation of a global framework for the use of AI in military technologies.
Google is recognized as a monopoly. In recent years, tech journalists have regularly discussed the possibility that the company will face a fate similar to that of AT&T in 1982, which was broken up into smaller entities due to its monopolistic position. While Google hasn't been broken up yet, the message is clear. Other tech giants, such as Apple, are at risk of facing similar problems.
Workforce
The debate about the impact of AI on jobs is intensifying. The impact is undeniable: Intuit is laying off 1,800 people to hire another 1,800 to work with AI; at Klarna, AI has taken over the work of 700 people and helped the company save on labor costs. The CEO claims that instead of layoffs, they've simply not filled positions since September 2023, and revenue per employee has increased by 73%. It's clear from the context that this is primarily about customer support, given the common and well-understood use cases of the fintech industry. A similar experience can be seen in the Philippines, where AI is successfully replacing humans in call centers.
OpenAI's CTO joined the discussion with a succinct and relatively objective statement: 'Mostly, jobs that shouldn’t have been there, will go away'. History shows that while technological progress may eliminate certain jobs, it also creates new ones — often with better working conditions or greater efficiency.
Moreover, AI is proving to be an unreliable touchpoint in the job market. According to a LinkedIn study, 71% of managers would rather hire a candidate with strong AI skills than industry experience. FT, in turn, reports that around half of job seekers use ChatGPT to create cover letters (which are then also reviewed by AI instead of HR professionals).
Economics
Tech companies are caught between two fires. On the one hand, they are forced to cut costs and make layoffs after the big correction that started in late 2021 and early 2022 (examples include Intel, Amazon, Alphabet, Microsoft, and IBM). On the other hand, their bets on AI will require them to increase spending on infrastructure and R&D. And investors are starting to get impatient, waiting for AI to become profitable and deliver those coveted 10x/50x/100x returns.
AI is still a work in progress and not very profitable. Some of the funniest examples are that customers are returning Humane's AI pins faster than the company can produce them, while Amazon's new AI feature for its podcasts, 'Topics' has AI mostly solely in its title: AI is only used to transcribe text, while human contributors manually add topic tags (maybe that's for the better).
Tech companies are trying to divest themselves of Chinese assets. Microsoft has asked some employees to relocate to other regions; IBM is following suit, completely shutting down its 1,600-person R&D department in China; Canada has imposed punitive tariffs on Chinese electric vehicles and steel, while the EU increased tariffs on them in July. The Netherlands plans to restrict ASML's ability to repair semiconductor equipment in China.
For too long, a blind eye has been turned to China's totalitarian regime, much like Russia's — but now, as tensions rise, companies are trying to protect their investments. The strategy is sound, although it should have started 10 years ago. But the infrastructure for disengagement is not yet in place, and current actions are limited: Big Tech is taking tentative first steps, unable to scale an exit from China.
Hacks
Hacking attacks are becoming more frequent, faster, and more consequential. According to a survey by Yubico, nearly half of employees have fallen victim to phishing scams or cyber attacks. In 2024, US car dealerships were targeted and it took several weeks to clean up the mess. Then Microsoft was hacked, along with some government agencies. 33 million phone numbers stored in Authy's 2FA app have been stolen, and a database of 10 billion stolen passwords (the largest of its kind) was compiled from previously leaked data over 20 years, plus some new ones added.
It also turns out that OpenAI was hacked earlier in 2023. Unfortunately, it's probably not an isolated incident: the company's new Mac app had stored conversations with ChatGPT in plain text (!). The bug has been fixed, but the damage is done. And to top it all off, 'Global chaos erupts as Windows security update goes bad' — a title that needs no further explanation. BSODs worldwide, flight cancellations at airports, problems with Apple Pay and TV broadcasts, closed stores and clinics, and other delightful consequences of CrowdStrike's poorly planned 'fuck it, LGTM, just push to prod' approach to security updates.
AI is striking in two ways: first, it can be exploited. A vulnerability has been identified in Slack's AI capabilities. An attacker could create a public channel with themselves and insert a text instruction for the model, which would then respond to a user's question by pulling data from this malicious channel and following the attacker's instructions to embed a phishing link. This type of attack is not limited to the Slack Assistant, but affects many other models as well. If there have been available lots of solutions such as SSL, PKI, and 2FA, for a standard MitM attack, it seems that there's still work to be done to prevent prompt injection vulnerabilities. In other incidents, security researchers discovered vulnerabilities in SAP's AI services that allow attackers to steal customer data and even manage their Docker containers.
The second, hackers are now using AI-generated code in attacks. When companies touted the benefits of generative AI, such as saving developers time, they overlooked a crucial point: developers come in all shapes and sizes. While AI can certainly help with routine tasks and make coding more accessible to beginners, it also lowers the barrier of entry for malicious actors.
In the past, would-be attackers needed some programming skills and exploration of attack vectors to create a simple Trojan. But now, thanks to AI-generated code, even novice hackers can create more sophisticated attacks with relative ease. For now, these attempts are often clumsy and amateurish, but in the wrong hands — or if the hacker himself develops greater expertise — AI-generated code could cause significant damage.