The Economist, 17 January, paid article
From oil to AI: Can the Gulf states become tech superpowers?
The region’s rulers want to move away from fossil fuels
Excerpts:
Few middle powers have the towering technological ambitions of the rich Gulf states. As they seek to shift their economies away from fossil fuels, the Emiratis want to lead the world in artificial intelligence (AI) and the Saudis want the kingdom to become home to startups in cutting-edge areas such as robotics. Those aspirations, however, are about to collide with geopolitical reality.
The fascination with tech is not new, but the scale of the plans is. In March the United Arab Emirates (UAE) created MGX, a tech-investment company with a target size of $100bn, which will invest in AI infrastructure, such as data centres and chips. It has also set up a $10bn AI venture-capital fund. In Saudi Arabia a number of different funds with a combined firepower of $240bn will splurge on AI, data centres and advanced manufacturing.
The rulers are making bets in three areas. One is model-making and applications. (…)
Gulf companies are building data centres abroad, too. (…)
A third area is chip manufacturing, which the UAE seems especially keen on. Samsung, a South Korean electronics giant, and TSMC, the world’s largest chipmaker, have held talks with officials to build plants in the UAE. Sam Altman, the boss of OpenAI, has convinced the UAE’s sheikhs, among other investors, to fund his chipmaking plans.
There are early signs the strategy could come together. The total capacity of all data centres currently under construction in Saudi Arabia and the UAE has grown about ten-fold in the past five years. Investment has flowed in. The Gulf recorded almost $8bn of foreign direct investment in tech infrastructure and another $2bn in software in 2024, up three-fold from 2017, according to fDi Markets, a data firm. Talent is moving, too. BCG, a consultancy, says that the AI talent pool in the UAE and Saudi Arabia has grown by over one-third and almost a fifth, respectively, since 2022.
But a big risk looms over the Gulf’s ambitions: souring relations between America and China. The rulers have leaned heavily on America’s big technology firms for partnerships. At the same time, they have struck plenty of deals with large Chinese firms, including Huawei, a tech company, and China Telecom, a communications firm. Saudi Arabia has invested $400m in Zhipu AI, one of China’s most prominent AI companies. Moreover, the data-centre boom relies on China: about a third of imports of servers, chips and storage devices by Saudi Arabia and the UAE come from the country.
American policymakers are clearly wary of this relationship. (…)
The Gulf’s rulers may hope their close ties to big American tech firms will help insulate them from such machinations in Washington. Google, for instance, plans to set up an AI hub in Saudi Arabia. Microsoft has invested $1.5bn in G42.
But ultimately they will face an uncomfortable choice. Geopolitical tensions are likely to intensify during Mr Trump’s second term. America’s tech giants already see themselves in a race with China. Brad Smith, Microsoft’s president, says that “the real key to American leadership from a long-term perspective is to put American technology around the world—and to do it faster than China does.” If the Gulf’s rulers want their tech dreams to materialise, they may eventually be forced to pick a side. ■
https://www.economist.com/business/2025/01/16/can-the-gulf-states-become-tech-superpowers
The Economist, 16 January, paid article
Fat and health: Is obesity a disease?
It wasn’t. But it is now
Excerpts:
For years there has been a push to recognise obesity as a disease in its own right, and therefore something that needs to be treated in and of itself, rather than just as a risk factor for other things, such as diabetes, heart disease, strokes and some cancers. And there is indeed much evidence that being obese can result in exceptionally poor health. But many who are obese are not unwell in the slightest. This argues that obesity per se should not be treated as an illness.
Until two years ago, such discussion was of little practical relevance since there were few treatments for obesity between the extremes of bariatric surgery and the old-fashioned approach of eating less and exercising more. However, the arrival in 2023 of GLP-1 weight-loss drugs in the form of semaglutide (known commercially as Wegovy) changed that. If these drugs are to be prescribed sensibly and fairly, then who among the fat is sick and who is not becomes an important question.
The usual current measure of obesity is body mass index (BMI). This has the advantage of being easily calculated (by dividing a person’s weight by the square of their height). Obesity is then defined as a BMI of more than 30. But some people with a high BMI show no signs of being unwell. And, absurdly, stocky and well-muscled athletes have been known to qualify as obese according to this classification. (…)
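The calculation described above is simple enough to sketch in a few lines of Python. Only the formula (weight divided by the square of height) and the threshold of 30 come from the article; the function names are mine:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

def is_obese_by_bmi(weight_kg: float, height_m: float) -> bool:
    """The conventional cut-off: a BMI above 30 counts as obese."""
    return bmi(weight_kg, height_m) > 30

# A 95 kg person who is 1.75 m tall:
print(round(bmi(95, 1.75), 1))    # 31.0
print(is_obese_by_bmi(95, 1.75))  # True
```

The article's point about athletes follows directly: a heavily muscled 100 kg sprinter of the same height would also clear the threshold, despite carrying little fat.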
To diagnose their newly defined disease, which they call “clinical obesity”, the commissioners require two things. First, the addition of a third measure of body size (waist circumference, waist-to-hip ratio or waist-to-height ratio) to those used to calculate BMI—though measuring body fat directly, with sophisticated modern scanning tools, is even better. Second, if this revised measurement does, indeed, proclaim an individual to be obese, some objective signs and symptoms of reduced organ function, or of a reduced ability to conduct daily activities—such as bathing, eating and dressing—are also needed to declare that obesity to be clinically relevant. (…)
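The commission's two-step rule can be read as a simple conjunction, sketched below. This is my own paraphrase of the logic the article describes, not the commission's actual criteria in full; the function and parameter names are hypothetical:

```python
def clinically_obese(bmi_over_30: bool,
                     second_measure_confirms: bool,
                     organ_or_activity_signs: bool) -> bool:
    """Sketch of the two-step logic: a high BMI must be confirmed by a
    second body-size measure (waist circumference, waist-to-hip or
    waist-to-height ratio), and accompanied by objective signs of
    reduced organ function or limits on daily activities."""
    return bmi_over_30 and second_measure_confirms and organ_or_activity_signs

# A high BMI alone, as for a well-muscled athlete, is not enough:
print(clinically_obese(True, False, False))  # False
print(clinically_obese(True, True, True))    # True
```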
Francesco Rubino, a professor in metabolic and bariatric surgery at King’s College, London, who is also one of the commissioners, reckons describing obesity as an actual disease is quite a radical shift. The next task—one which others have already started, he says—is to work out who among the 1bn or so people on the planet who were classified as obese according to the old definition qualify as being clinically obese under the new one, and thus in need of treatment. Preliminary work, he says, suggests 20-40% of them.
The commission’s approach already seems popular with medical officialdom. Seventy-six of the world’s leading health organisations, including the American Heart Association, the Chinese Diabetes Society and the All Indian Association for Advancing Research in Obesity, have already endorsed it. How quickly it will percolate into medical practice and public perceptions of who is and is not dangerously obese is another matter. ■
https://www.economist.com/science-and-technology/2025/01/15/is-obesity-a-disease
The Economist, 13 January, paid article
The fight over America’s economy: Tech is coming to Washington. Prepare for a clash of cultures
Out of Trumpian chaos and contradiction, something good might just emerge
Excerpts:
Already things have turned nasty. Donald Trump has not even got to the White House, and his raucous court of advisers have rounded on each other. In recent days Elon Musk and other tech tycoons have traded insults with the MAGA crowd over highly skilled migration. What seems like a petty spat over visas is in fact a sign of a much deeper rift. For the first time, tech is coming to Washington—and its worldview is strikingly at odds with the MAGA movement. The ways in which these tensions are resolved, and who gains the upper hand, will profoundly affect America’s economy and its financial markets over the next four years.
As in his first term, Mr Trump has assembled an economic-policy team with disparate, sometimes contradictory goals. The MAGA diehards, such as Stephen Miller, Mr Trump’s choice for deputy chief of staff, are anti-trade, anti-immigration and anti-regulation, and are supported by an energetic base. The Republican mainstreamers, such as Scott Bessent, Mr Trump’s pick for treasury secretary, and Kevin Hassett, the head of the National Economic Council, are primarily low-tax, small-government enthusiasts. This time, though, there is a new faction that makes the mix more volatile still: the tech bros from Silicon Valley.
David Sacks, a venture capitalist, has been appointed Mr Trump’s crypto and artificial-intelligence tsar. He will hope to relax curbs on the crypto industry and, together with other arrivals from Silicon Valley, to loosen controls on AI to encourage faster progress. But the influence of the techies goes beyond tech policy. Mr Musk has been tasked with running the newly created Department of Government Efficiency (DOGE). Marc Andreessen, a renowned venture capitalist, says he has been spending about half his time at Mar-a-Lago as a “volunteer”. Scott Kupor, who worked for Mr Andreessen, will take charge of the Office of Personnel Management, which oversees public-sector hiring. Former employees of Palantir, the Thiel Foundation and Uber have been appointed to roles in the state and health departments and to the Pentagon, respectively. Once the revolving door between Wall Street and the Treasury spun so fast that Goldman Sachs was nicknamed “Government Sachs”. Mr Trump, by contrast, is trying to put the tech into technocracy.
This is new for American politics. (…)
One problem is that, when tech and MAGA say they are signed up to America First, they mean different things. Whereas the MAGA movement hopes to restore a vision of the past, including an impossible return to a manufacturing heyday, tech looks forward. It wants to accelerate progress and disrupt society, leaving the world for which MAGA yearns ever farther in the dust.
These contrasting visions will translate into policy disputes. (…)
A combination of infighting, botched implementation and self-dealing could provoke a backlash that hobbles Mr Trump’s second term.
Yet that dismal scenario is not foreordained. Instead of fighting each other to a standstill, the factions on Mr Trump’s team could moderate each other in some ways and reinforce each other in others, perhaps with benign results for America. For example, the mainstreamers and the tech bosses could limit MAGA’s worst instincts on protectionism and immigration, while tech’s clever ideas for reform could be implemented in a way that is politically astute. Everyone’s agreement on America’s need to deregulate and innovate, meanwhile, could lend the programme useful momentum. (…)
That may sound far-fetched. However, the stockmarket could help steer the administration towards this compromise. Mr Trump is sensitive to share prices, and will not want to endanger the roaring rally that has followed his re-election. By providing a real-time gauge of whether investors think Trumponomics will help the economy, the stockmarket could sway his decisions. If so, the administration could feel its way towards policies that boost growth. Tech’s arrival in Washington is high-risk. It could also—conceivably—be high-reward. ■
Frankfurter Allgemeine Zeitung, 12 January, paid article
Europe is investing in the wrong technologies
Europe invests too little – and in the wrong technologies at that. While US companies pour money forcefully into future technologies such as software and AI, Europe has for 20 years concentrated on traditional industries such as carmaking.
Excerpts:
Companies in the EU invest only 1.2% of gross domestic product in research and development – in the US the figure is 2.3%. If state investment is included, the EU spends about 2% of its GDP on R&D – comparable to Japan, but far below the US or South Korea. The lag is especially dramatic in future technologies: US companies dominate global software development with a 75% share, while the EU manages only 6%, putting it behind even China. And while in the US the big digital groups are the largest R&D investors, in Europe the same carmakers have topped the list for 20 years.
Whereas in America 85% of private-sector investment flows into high-tech industries, in Europe the figure is only 50%, with an emphasis on medium-technology sectors. A new study from Università Bocconi shows that this specialisation not only deprives Europe of growth opportunities but also weakens it geopolitically.
The EU, it argues, must radically rebuild its innovation policy if it is not to fall behind. The study speaks of a “middle-technology trap”: the EU specialises in technologies of medium complexity that exploit technological advances but do not generate them. This path dependency is self-reinforcing – while US companies extend their technological lead through high R&D spending, Europe lacks the resources to build new high-tech industries. Especially problematic: despite heavy R&D investment, the European car industry has been overtaken by US and Chinese manufacturers. Neither in software-heavy vehicles nor in autonomous driving at the relevant levels 4 and 5 do Europe’s manufacturers currently play any role.
In the authors’ view, the EU innovation programme “Horizon Europe”, with an annual budget of more than eleven billion euros, is wrongly set up. Less than 5% of its funds flow into breakthrough innovations. The newly created European Innovation Council (EIC), they argue, concentrates too heavily on financing mature technologies rather than genuine breakthroughs. Decision-making is too political, and collaboration is forced rather than supported.
The authors call for radical reform modelled on DARPA, the US research agency. (…)
The window in which this is possible is closing, however. While the US and China extend their leading positions, Europe has little time left to change course. The “middle-technology trap” is not an unavoidable dead end but an obstacle that can be overcome by decisive action. The question, the authors say, is whether Europe has the will to do so.
The Economist, 11 January, paid article
Meta’s makeover: Mark Zuckerberg’s U-turn on fact-checking is craven—but correct
Social-media platforms should not be in the business of defining truth
Excerpts:
Apart from the million-dollar wristwatch, it had the look of a hostage video. On January 7th Mark Zuckerberg posted a clip to Facebook and Instagram in which he announced changes to his social networks’ content-moderation policies in response to what he called the “cultural tipping point” of Donald Trump’s election. There have been “too many mistakes and too much censorship”, he said, adding that Mr Trump’s return provides an “opportunity to restore free expression”. He also appointed Dana White, an ally of Mr Trump’s, to Meta’s board (as well as John Elkann, the boss of Exor, which part-owns The Economist’s parent company).
For all the talk of freedom, Mr Zuckerberg’s video was another example of the capture of American business by the bullying incoming president. Mr Trump has called Facebook an “enemy of the people” and threatened to ensure that Mr Zuckerberg “spends the rest of his life in prison”. Mr Zuckerberg is not the only executive to submit: everyone from Apple’s Tim Cook to OpenAI’s Sam Altman is said to have donated to Mr Trump’s inauguration vanity fund. This week Amazon announced a $40m biopic of the incoming First Lady.
The circumstances may be grotesque and the motives suspect. But the substance of Meta’s sweeping changes is, in fact, correct. Speech online urgently needs to become freer. Making it so will shore up America’s democracy against whatever tests it faces in the years to come.
Mr Zuckerberg was once a free-speech enthusiast, allowing content such as Holocaust denial on Facebook even as many urged him to block it. But following claims of Russian online interference in Mr Trump’s first election, in 2016, and an outbreak of misinformation around the covid-19 pandemic, in 2020, the company cracked down on a broad range of “lawful but awful” content, from quack medicine to crackpot groups such as QAnon.
What first seemed like common sense has placed a growing cost on users’ freedom of expression. Never mind the freedom to be wrong; in some cases perfectly accurate claims have been blocked, as when Facebook suppressed a New York Post story about Joe Biden’s son, Hunter, which turned out to be true. The definition of hate speech has expanded in a way that limits debate about subjects such as transgender rights. Automated filters are so strict that even Meta says 10-20% of the content it removes is taken down in error. Mr Zuckerberg’s promise to replace fact-checking with user-led “community notes”, and loosen the rules on what can be said about testy topics like gender, is welcome.
There are risks. Mr Zuckerberg acknowledges that moderation involves trade-offs and that his new rules will mean more “bad stuff” online. (…) On X, where Elon Musk has dismantled much of the moderation apparatus, posts inciting violence—a criminal offence—spread rapidly during a recent spate of rioting in Britain. Telegram, a libertarian network popular in Russia, has become a haven for crooks owing to its hands-off approach.
The best way to guard against these dangers is to be transparent about how rules are set. Meta’s Oversight Board, an independent standards watchdog set up in 2020, appears to have been wrongfooted by this week’s announcement, first supporting the measures and then expressing concerns. The rules on what can and cannot be said online should be explained and defended transparently, not overturned by the company’s chief executive in a pre-inauguration panic.
For all that, Meta’s moves are a step in the right direction. Social networks should stamp out illegal content. For the sake of advertisers’ business and users’ enjoyment, they will probably want to keep things civil. But it is past time that they got out of the business of ruling on what is right and wrong. Only a fool would claim that his social network was the truth.■
The Wall Street Journal, 27 December, paid article
The AI Boom May Be Too Good to Be True
Pending copyright-infringement lawsuits could derail the industry’s economic potential.
Excerpts:
Investors rushing to capitalize on artificial intelligence have focused on the technology—the capabilities of new models, the potential of generative tools, and the scale of processing power to sustain it all. What too many ignore is the evolving legal structure surrounding the technology, which will ultimately shape the economics of AI. The core question is: Who controls the value that AI produces? The answer depends on whether AI companies must compensate rights holders for using their data to train AI models and whether AI creations can themselves enjoy copyright or patent protections.
The current landscape of AI law is rife with uncertainty. The New York Times, Getty Images and individual artists have challenged AI companies over their use of copyrighted material in training data sets. How these cases are decided will determine whether AI developers can harvest publicly available data or must license the content used to train their models. Should courts decide in favor of rights holders, AI companies will face increased costs that could reduce profit margins and put many current valuations into question. Investors shouldn’t underestimate the risks.
Equally significant are the questions surrounding intellectual-property rights for AI-generated creations. Can a novel invented by AI be copyrighted? Can a discovery guided by an AI model be patented? Recent rulings have denied such protections, emphasizing that only human creators can claim IP rights under current laws. (…)
Critics argue the legal framework will catch up with technology, and that lawmakers will adapt to accommodate AI’s evolving role in society. They also claim the value of AI lies primarily in its functional capability—its ability to analyze, generate and innovate—and that legal questions around copyrights and patents are a secondary concern. Those arguments underestimate the complexity and the slow-moving nature of legal systems, especially in areas involving fundamental shifts in technology and human rights. (…)
A historical analogy offers a stark warning. In the 1990s, the music encoded on compact discs wasn’t encrypted, leading to a piracy crisis in the music industry. The movie industry learned from those mistakes and pursued a strategy to protect its IP that was primarily legal rather than purely technological. (…)
AI investors take note: Technological strength alone doesn’t suffice when the regulatory and legal environment is left unattended. (…)
The excitement surrounding AI is understandable, but investors must pay close attention to how courts rule on cases involving copyright infringement in training data, and to legislative developments around AI-created intellectual property. These decisions will ultimately determine who profits and who loses in the AI age.
Mr. Harlan is founder and managing partner of Harlan Capital Partners.
Le Figaro, 27 December, paid article
Ageing: why today’s sixty-somethings are in much better shape than those of ten years ago
ANALYSIS – Their cognitive, locomotor, psychological and sensory capacities are better than those of their predecessors at the same age, researchers estimate.
Excerpts:
You are 60, and people tell you that you don’t look your age? Don’t take it as idle flattery. It’s true, and science says so: 70 has become the new 60, according to researchers at Columbia University. To reach this conclusion, they analysed data from more than 14,000 English people born between 1940 and 1950 and assessed their cognitive, locomotor, psychological and sensory capacities. (…)
After analysis, the researchers found that the capacities of a 62-year-old born in 1950 were better than those of someone who had been 62 ten years earlier. “We even found that a 68-year-old participant born in 1950 had a higher intrinsic capacity than a 62-year-old born in 1940,” the study’s authors write. And that is good news, because it means that the rise in life expectancy has been accompanied by a rise in healthy life expectancy.
“This is very encouraging for our society,” says Professor Bruno Vellas, head of the University Hospital Institute (IHU) for healthy ageing in Toulouse, which developed the WHO’s Icope programme for doctors. “We will live 30% of our lives after 60. That can be an opportunity if we remain active, productive, supportive and caring. And this study shows us that it is possible. We are not selling dreams when we say that the pitfall of dependency can be avoided. Ageing well is possible.” Recent data on dependency confirm this optimism: between 2015 and 2022, the rate of people losing their autonomy fell by 2%, according to the health ministry’s statistics directorate. In total, the number of people affected fell by 180,000, whereas 2015 forecasts had estimated it would rise by 130,000! (…)
The Guardian, 26 December, free access
I spent a week working, exercising and relaxing in virtual reality. I’m shocked to say it finally works
Bar some glitches, I think a tipping point has been reached – except when it comes to virtual gigs
Excerpts:
I’m writing this from a room that’s slowly orbiting the Earth. Behind the floating screen in front of me, through a giant opening where a wall should be, the planet slowly spins, so close that it takes up most of my field of vision. It’s morning in Australia to my right; India and the first hints of Europe are dotted with lights up and to my left. The soft drone of the air circulation system hums quietly behind me.
I spent a week doing everything that I could – working, exercising, composing – on my virtual reality headset. This was the year virtual reality threatened to go mainstream, with prices becoming more attainable and Apple entering the market, and so I wanted to see how far VR has come since I first tried it in the mid-2010s, when the main experiences on offer were nausea-inducing rollercoaster simulators. I used a recent model from Meta, called the Quest 3, and the conclusion was clear: this thing now works. It feels a little unfinished, but we’ve reached the point where VR could at last become genuinely useful.
The biggest surprise was working in VR. I cannot recommend this highly enough. Donning a headset, you can summon multiple screens, all connected to your computer, make them as large as you want, and place them anywhere in your environment. “Passthrough” – the ability to see digital objects superimposed on the real world, made possible with cameras built into the front of the headset – means you can carve out a window from the virtual environment to see your keyboard. And you can choose between any number of environments to work in, from minimalist cafes to mountain lodges, switching between them at will. I’ve rapidly got to the stage where, if I’m working on my own, I’d rather work in virtual reality than in reality. (…)
The launch of Apple’s Vision Pro headset earlier this year was meant to be the starting gun for VR. It wasn’t. It is an engineering marvel, magical to use – but it doesn’t yet have enough compelling apps, and the £3,500 price tag rules it out for most people. Stories of headsets gathering dust or being returned have led some to think VR is little more than another hype bubble from a tech industry desperate to find the next big thing.
But VR is not hype. There are kinks to smooth out, sure. But I think we’ve hit a tipping point. If you embrace it as something single-player – and something you’re not going to be using much in public – it is genuinely useful. Work, entertainment, exercise – all are fantastic in VR already. Don’t count on small, rectangular screens being how humanity communicates with machines for ever.
Ed Newton-Rex is the founder of Fairly Trained, a non-profit that certifies generative AI companies that respect creators’ rights, and a visiting scholar at Stanford University
https://www.theguardian.com/commentisfree/2024/dec/25/virtual-reality-work-exercise-relax
Wall Street Journal, 21 December, paid article
The Next Great Leap in AI Is Behind Schedule and Crazy Expensive
OpenAI has run into problem after problem on its new artificial-intelligence project, code-named Orion
Excerpts:
OpenAI’s new artificial-intelligence project is behind schedule and running up huge bills. It isn’t clear when—or if—it’ll work. There may not be enough data in the world to make it smart enough.
The project, officially called GPT-5 and code-named Orion, has been in the works for more than 18 months and is intended to be a major advancement in the technology that powers ChatGPT. OpenAI’s closest partner and largest investor, Microsoft, had expected to see the new model around mid-2024, say people with knowledge of the matter.
OpenAI has conducted at least two large training runs, each of which entails months of crunching huge amounts of data, with the goal of making Orion smarter. Each time, new problems arose and the software fell short of the results researchers were hoping for, people close to the project say.
At best, they say, Orion performs better than OpenAI’s current offerings, but hasn’t advanced enough to justify the enormous cost of keeping the new model running. A six-month training run can cost around half a billion dollars in computing costs alone, based on public and private estimates of various aspects of the training.
OpenAI and its brash chief executive, Sam Altman, sent shock waves through Silicon Valley with ChatGPT’s launch two years ago. AI promised to continually exhibit dramatic improvements and permeate nearly all aspects of our lives. Tech giants could spend $1 trillion on AI projects in the coming years, analysts predict.
The weight of those expectations falls mostly on OpenAI, the company at ground zero of the AI boom.
The $157 billion valuation investors gave OpenAI in October is premised in large part on Altman’s prediction that GPT-5 will represent a “significant leap forward” in all kinds of subjects and tasks.
GPT-5 is supposed to unlock new scientific discoveries as well as accomplish routine human tasks like booking appointments or flights. Researchers hope it will make fewer mistakes than today’s AI, or at least acknowledge doubt—something of a challenge for the current models, which can produce errors with apparent confidence, known as hallucinations. (…)
Generally, AI models become more capable the more data they gobble up. For LLMs, that data is primarily from books, academic publications and other well-respected sources. This material helps LLMs express themselves more clearly and handle a wide range of tasks.
For its prior models, OpenAI used data scraped from the internet: news articles, social-media posts and scientific papers.
To make Orion smarter, OpenAI needs to make it larger. That means it needs even more data, but there isn’t enough.
https://www.wsj.com/tech/ai/openai-gpt5-orion-delays-639e7693?mod=hp_lead_pos8
Le Figaro, 15 December, paid article
From Bezos to Zuckerberg, the carefully calculated allegiance of tech’s top bosses to Donald Trump
REPORT – The heads of the GAFAM are demonstrating their goodwill to the president-elect. An illustration of hope in the wave of deregulation promised by Donald Trump, as much as of fear of reprisals from the president or his unofficial vice-president, Elon Musk.
Excerpts:
Will Donald Trump be the next ambassador for Meta’s Ray-Ban smart glasses? Mark Zuckerberg did not hesitate to play the salesman at Mar-a-Lago, the president-elect’s Florida residence, where he was invited on 27 November. Before sitting down to dinner, the Meta boss demonstrated the merits of his glasses to the winner of the presidential election, then presented them to him with great ceremony. A rather unusual reunion, given the history between the two men. Mark Zuckerberg often served as Trump’s whipping boy during the campaign. The Republican candidate, who never forgave him for being banned from Facebook after the storming of the Capitol on 6 January 2021, had even called him an “enemy of the people” in the spring and saddled him with the mocking nickname “Zuckerschmuck”.
Mark Zuckerberg has “made clear that he wants to support America’s national renewal under President Trump’s leadership,” said Stephen Miller, the future deputy White House chief of staff, after the dinner at Mar-a-Lago. He wasted no time doing so: on 11 December, Meta announced a donation of one million dollars to the fund dedicated to the future head of state’s inauguration.
Amazon will give the same amount, according to the Wall Street Journal. The group will even broadcast the 47th US president’s inauguration ceremony live on its Prime Video service. Its founder, Jeff Bezos, also faced Trump’s wrath during his first term, over subjects as varied as Amazon’s taxes and critical articles in the Washington Post, which he owns. The two men spoke twice by telephone over the summer. They will meet again at Mar-a-Lago in the week of 16 December. Sundar Pichai, Google’s CEO, is expected there on Thursday the 19th.
This diplomatic thaw illustrates the change of course of many Silicon Valley figures since Donald Trump’s election. Traditionally Democratic, this powerful industry saw its heart waver between Donald Trump and Kamala Harris during the campaign. While most of the GAFAM bosses kept a low profile, several billionaires closed ranks around the former real-estate magnate – starting with Elon Musk, who devoted $270 million to the cause, becoming the biggest political donor in the country’s recent history. The Republican candidate’s decisive victory has since shaken this corner of California and upended the geopolitics of the GAFAM. (…)
Derrière ces ralliements, il faut sans doute lire la crainte d’un durcissement de la future administration à l’égard des « Big Tech ». Pendant la campagne, le candidat républicain et son colistier JD Vance n’ont cessé de marteler leur souhait de « démanteler » ces géants, dont Google en premier lieu. Selon JD Vance, la firme créée en 1998 par Larry Page et Sergey Brin est «beaucoup trop grande, beaucoup trop puissante». La « Big Tech s’est déchaînée pendant des années, étouffant la concurrence dans notre secteur le plus innovant », a encore récemment déclaré Trump sur son réseau Truth Social. (…)
Depuis l’élection de Trump, les GAFAM ont vu leur titre progresser entre 10% et 15% en Bourse, tirant la croissance du Nasdaq. Certes, assez loin de Tesla, dont le cours a explosé de 75% en l’espace d’un mois.
The Economist, December 13, paywalled article
The Babel wish: Machine translation is almost a solved problem
But interpreting meanings, rather than just words and sentences, will be a daunting task
Excerpts:
Vasco Pedro had always believed that, despite the rise of artificial intelligence (AI), getting machines to translate languages as well as professional translators do would always need a human in the loop. Then he saw the results of a competition run by his Lisbon-based startup, Unbabel, pitting its latest AI model against the company’s human translators. “I was like…no, we’re done,” he says. “Humans are done in translation.” Mr Pedro estimates that human labour currently accounts for around 95% of the global translation industry. In the next three years, he reckons, human involvement will drop to near zero.
It is hardly a surprise that the AI model-makers are bullish, but the optimism feels apt. Machine translation has become so reliable and ubiquitous so fast that many users no longer see it. The first computerised translations were attempted more than 70 years ago, when an IBM computer was programmed with a vocabulary of 250 words of English and Russian and six grammatical rules. That “rules-based” approach was superseded in the 1990s by a “statistical” approach, based on crunching large datasets, which was still the state of the art when Google Translate was launched in 2006. The field exploded in 2016, though, when Google switched to a “neural” engine—the forebear of today’s large language models (LLMs). Influence flowed both ways: when LLMs became better, so too did machine translation.
In Unbabel’s test, human and machine translators were asked to translate everything from casual text messages to dense legal contracts and the archaic English of an old translation of “Meditations” by Marcus Aurelius. Unbabel’s AI model held its own. Measured by Multidimensional Quality Metrics, a framework that tracks translation quality, humans were better than machines if they were fluent in both languages and also experts in the material being translated (for instance, specialist legal translators dealing with contracts). But the lead was small, says Mr Pedro, who added that it would be hard to see how, two or three years from now, machines would not overtake humans entirely. (…)
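Frameworks like Multidimensional Quality Metrics work by having annotators flag errors by category and severity, then converting the weighted penalties into a length-normalised score. A minimal sketch of that idea (the function name and severity weights here are illustrative assumptions, not Unbabel's or MQM's canonical values):

```python
# Illustrative MQM-style scoring: annotators flag each error with a
# category and a severity; heavier errors cost more penalty points.
# The weights below are assumed for illustration only.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count):
    """Quality score out of 100 (higher is better), normalised by length.

    errors: list of (category, severity) tuples from human annotation.
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return max(0.0, 100.0 - 100.0 * penalty / word_count)

# A 200-word translation with two minor errors and one major one:
score = mqm_score([("terminology", "minor"),
                   ("grammar", "minor"),
                   ("accuracy", "major")],
                  word_count=200)
print(score)  # 96.5
```

Under a scheme like this, a head-to-head contest between a human and a machine translation of the same passage reduces to comparing two scores, which is how results such as Unbabel's can be summarised.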
The problem of translating one sentence to another is “pretty close to solved” for those “high-resource” languages with the most training data, says Isaac Caswell, a research scientist at Google Translate. But going beyond this to make machine translation as good as a multilingual person—especially for languages that do not have reams of available training data—will be a more daunting task. (…)
Complex translations face the same problems that plague LLMs in general. Without the ability to plan, refer to long-term memory, draw from factual sources or revise their output, even the best translation tools struggle with book-length work, or precision tasks such as keeping a translated headline to a certain length. Even tasks that a human finds trivial still trip them up. (…)
Live translation is also in the works. DeepL launched a voice-to-voice translation system in November, offering interpretation for one-on-one conversations in person and multi-member video chats. Unbabel, meanwhile, has demonstrated a device capable of reading small muscle movements in the wrists or eyebrows and pairing them with LLM-generated text to allow communication without the need to speak or type. The firm intends to build the tech into an assistive device for people with motor-neurone disease who can no longer speak by themselves.
Despite the progress, and his part in it, Mr Caswell is hopeful that the value in speaking other languages will not disappear entirely. “Translation tools are very useful for navigating the world, but they’re a tool,” he says. “They can’t replace the human experience of learning a language in terms of actually understanding where other people are coming from, understanding what a different place is like.”
The Economist, December 6, paywalled article
Death from above: How Ukraine uses cheap AI-guided drones to deadly effect against Russia
Ukraine is making tens of thousands of them
Excerpts:
(…) Ukraine’s drone war is evolving rapidly. Once a cheap answer to Russia’s artillery dominance, Ukraine’s small and inexpensive first-person view (FPV) drones are now a force in their own right. They are used on a huge scale, with Ukraine projected to produce 2m this year. Ukraine now observes 1,000 Russian drones in every 24-hour period, says an insider. That has made some sections of the front lines, for example around Siversk in Luhansk province, practically no-go areas for humans. Drones are now responsible for a majority of battlefield losses, overtaking artillery, according to Ukrainian sources. (…)
The biggest change of all is that electronic warfare—essentially jamming—has consumed the battlefield. (…)
Data from the battlefield suggest that the hit rate for these AI-guided drones is currently above 80%. That is higher than the rate of manually piloted drones. (…)
The result is that Ukraine has become the furnace of a new kind of software-defined warfare which combines precision with mass. (…)
In both cases the drones themselves are made in Ukraine, by Ukrainians. One advantage of that is scale. Auterion’s largest partner in Ukraine, one of many, churns out 300,000 drones per year. Although recent Chinese sanctions have threatened to disrupt Ukraine’s drone supply chain, Mr Meier says that alternatives from Taiwan are now available. (…)
The tech entrepreneur rejects talk of military automation as some kind of dystopian future. “Using AI to accurately target is far more ethical than lobbing missiles and artillery,” he says. Ultimately, a human still has to make the final call on any engagement, says Mr Scherf. But Western and Ukrainian companies are busy working on deep-strike drones whose AI systems will be able to hunt for a wide range of potential targets far from the human operator. Mr Azhnyuk of The Fourth Law sees current technology as just the start. He hopes to have a prototype of a fully automated system, from launch to strike, built by early next year.■
Wall Street Journal, December 6, paywalled article
Trump Plans to Appoint Musk Confidant David Sacks as AI, Crypto Czar
Tech investor was one of the most outspoken supporters of Trump in Silicon Valley
Excerpts:
President-elect Donald Trump named a Silicon Valley investor close to Elon Musk as the White House’s artificial intelligence and cryptocurrency policy chief, signaling the growing influence of tech leaders and loyalists in the new administration.
David Sacks, a longtime venture capitalist who worked with Musk at PayPal more than two decades ago, will serve as the “White House A.I. & Crypto Czar,” Trump said on his social-media platform Truth Social.
“In this important role, David will guide policy for the Administration in Artificial Intelligence and Cryptocurrency, two areas critical to the future of American competitiveness,” he posted.
Musk, who has spent close to a quarter-billion dollars to help elect Trump, and Vice President-elect JD Vance chimed in with congratulatory messages on X. (…)
Some crypto executives cheered Sacks’ appointment. Emilie Choi, president and chief operating officer of crypto exchange Coinbase Global, wrote on X: “Time to build in the US!” (…)
The appointment further illustrates the growing influence of Musk and his associates in the incoming Trump administration. The Tesla chief has been appointed to co-lead the Department of Government Efficiency, or DOGE, the mandate of which is to streamline government bureaucracy. (…)
Some of Musk’s rivals fear that he and his associates could target them with their newfound power. Artificial intelligence company OpenAI’s CEO Sam Altman ranks high on the billionaire’s list of enemies. That didn’t stop Altman from congratulating Sacks. “congrats to czar @DavidSacks!” Altman posted on X after Thursday’s announcement.
Musk responded to Altman’s post with an emoji that is laughing so hard it is crying.
The Economist, December 6, paywalled article
Brainpower: Stimulating parts of the brain can help the paralysed to walk again
Implanted electrodes allowed one man to climb stairs unaided
Excerpts:
The spinal cord is the control cable that connects the brain to the rest of the body. If it is severed, people lose the ability to move their body below the site of the injury. But if it is only partly cut, the brain can sometimes adapt to the damage. Some people who are paralysed by a spinal-cord injury can gradually regain at least a limited ability to walk.
Exactly which bits of the brain are involved in this adaptation is not clear. But, in a paper just published in Nature Medicine, a group of researchers led by Jocelyne Bloch of Lausanne University Hospital and Grégoire Courtine at the Swiss Federal Institute of Technology in Lausanne shed some light. In doing so, they demonstrate that stimulation of the right bits of the brain can produce dramatic—and seemingly permanent—improvements in the ability of patients to walk again.
“We already knew that [changes in] the brain were key to regaining walking after a spinal-cord injury,” says Dr Courtine. “But we didn’t know which regions were the most important.” To find out, the researchers built detailed maps of the brains of a dozen mice whose spinal cords had been partially severed. (…)
Optogenetics is not generally approved for use in humans. But an alternative method of stimulating neurons, deep-brain stimulation (DBS), is. Rather than modifying cells to respond to light, this involves inserting fine electrodes into the brain and stimulating neurons with electric currents. Switching to DBS required a second round of testing, this time on rats. (Rats have slightly bigger brains than mice, says Dr Courtine, which makes the delicate job of placing the electrodes a bit easier.)
As hoped, zapping neurons in the brains of injured rats over the course of several weeks helped them, too, to regain the ability to walk. The improvements persisted even when the current was turned off, with analysis of their spinal cords showing an increased density of neuronal wiring below the site of their injuries.
The final step was to try it in people. The researchers recruited two volunteers who had suffered spinal injuries and then relearned how to walk with assistance. The electrodes were implanted in them while they were conscious. This helped the doctors ensure they were in the right place, with both patients reporting an urge to walk when the current was switched on.
After three months of rehabilitation, both reported big improvements in walking, as assessed by tests of how far they could travel in a set time, and by the subjective difficulty they experienced. Before the operation, one of them had hoped to walk without braces; the other to climb and descend a staircase unaided. Both achieved their goals.
Success in two people will not, by itself, be enough to make DBS generally available. The next step, says Dr Courtine, is to investigate whether brain stimulation might boost the power of an existing, similar treatment—stimulation of the spinal cord itself. The first person to get that sort of double-ended treatment, he says, is scheduled to have electrodes implanted within the next three months. ■
The Economist, Guest Essay, December 5, paywalled article
Artificial Intelligence: An agenda to maximise AI’s benefits and minimise harms, by David Patterson
How technologists, researchers and policymakers can reassure people AI will serve the public good
Excerpts:
People are reacting with both fascination and fear to the rapid deployment of artificial intelligence (AI). Some see the next era of humanity and others, imminent dangers. Without visibility into how AI is being developed and brought into their lives, people just aren’t equipped to navigate the rapidly evolving landscape. (…)
Our conversation turned to shared frustration over the polarised discourse between AI “accelerationists” and “doomers”. The reality, we agreed, is more nuanced. We concluded that there is an urgent need for computer scientists to take a more active role in both steering research and shaping the narrative. Rather than simply predict what the impact of AI will be given a laissez-faire approach, our goal was to propose what the impact could be given directed efforts to maximise the upsides and minimise the downsides.
We then assembled nine of the world’s leading computer scientists and rising AI stars, from academia, startups and big tech, to explore the pragmatic near-term impact of AI. We also interviewed two dozen other experts about AI’s impact on their specialties, including John Jumper, a winner of this year’s Nobel prize in chemistry, on science; President Barack Obama on governance; his former UN ambassador and national security adviser Susan Rice on security; and Eric Schmidt, a philanthropist and Google’s former chief executive, on several topics. For those interested, we’ve compiled our learnings into a more detailed 25-page paper.
Five guidelines emerged for harnessing AI for the public good. We believe they should guide our efforts in both the discovery and deployment of this transformative technology.
First, humans and AI systems working as a team do more than either on their own. Applications of AI focused on human productivity produce more positive benefits than those focused on human replacement. Tools that make people more productive increase their employability, satisfaction, and opportunity. (…)
Second, to increase employment, aim for productivity improvements in fields that would create more jobs. Despite tremendous productivity gains in computing and passenger aviation, America in 2020 had 11 times more programmers and eight times more commercial-airline pilots than in 1970. This growth is because programming and air transport are fields for which, as economists say, demand is elastic. Agriculture, on the other hand, is relatively inelastic, so productivity gains meant the number of agriculture jobs fell by three-fourths in one human lifetime (1940 to 2020). If AI practitioners aim to improve productivity in elastic fields, despite public fears, AI can actually increase employment.
Third, AI systems should initially aim at removing the drudgery from current tasks. Releasing time for more valuable work will encourage people to use new AI tools. Doctors and nurses choose their careers because they want to help patients, not do endless documentation. (…)
Fourth, the impact of AI varies by geography. Eric Schmidt emphasises that while rich countries worry about AI displacing highly trained professionals, countries with lean economies face shortages of skilled experts. AI could make such expertise more widely available in such regions, potentially enhancing quality of life and economic growth, becoming as transformative there as mobile phones have become. (…)
And finally, we need better metrics and methods to evaluate AI innovations. At times the marketplace can do this, such as for AI tools for professional programmers. (…)
There is no shortage of concerns about the risks and complexities of AI, which we address in the long paper: data privacy and security, intellectual-property rights, bias, information accuracy, threats to humanity from more advanced AI, and energy consumption (though on this last point, AI accounts for under a quarter of 1% of global electricity use, and the International Energy Agency considers AI’s projected increased energy consumption for 2030 to be modest relative to other trends).
Although there are risks, there are also many opportunities both known and unknown. It can be as big a mistake to ignore the benefits of AI as it is to ignore its risks. AI moves quickly, and governments must keep pace. (…)
At this point, readers might expect that we scientists are about to ask for government funding. But we believe that money for these efforts should come from the philanthropy of the technologists who have prospered in the computer industry. Several have already pledged support, and we expect more to join. (…)
We brainstormed on an AI moonshot. But which goal? We might create an AI mediator that orchestrates conversations across political chasms to pull us out of polarisation and back into pluralism. We can leverage the growing prevalence of smartphones by aiming to create a tutor app for every child in the world in their language, for their culture, and in their best learning style. We might enable biologists and neuroscientists to make a century of progress in a single decade. But if we create the right blueprint for innovation, and bring experts and users together into the conversation, we don’t have to pick just one moon. ■
David Patterson is the Pardee Professor of Computer Science, Emeritus at the University of California at Berkeley.
Wall Street Journal, December 4, paywalled article
AI and the Automation of Work for Gen Z
Students discuss how to use technology to help with their jobs.
Excerpts:
Automate the Mundane
AI isn’t as advanced as we would like it to be. It can write an essay, but the content is typically filled with generic material in a neutral voice. Whether it’s creating test scripts, building workflows or completing due-diligence checks, AI is limited. When I worked in generative AI acquisitions with clients in the telecommunications industry, I found myself spending more time explaining what I wanted from the AI than actually getting results.
Gen Z is comfortable working with AI because it covers the mundane, repetitive tasks, freeing up time for other aspects of the job, such as creating or capturing value for clients. Graphic designers, for example, would need to spend hours creating a mock-up of a chicken holding a flag, but with a generative AI platform the task can be completed in seconds. That gives the designer more time to think about artistic choices, the message and positioning instead of spending hours on a creation. AI will help workers automate the mundane and free them to develop the freethinking aspects of the working environment.
—Shriya Boppana, Duke University, business administration
(…)
It’s Efficient and Effective
I plan to use AI to save time and enhance productivity in my career as a teacher and administrator. AI tools will streamline lesson planning by generating different materials, assessments and activities, allowing me to focus on more-meaningful interactions with students. AI will support my decision-making as an administrator through data-driven insights, optimizing resource allocation and scheduling. Tools such as predictive analytics will help forecast student outcomes and inform strategies for improvement.
Communication platforms will also enhance engagement with staff, students’ families and the community by automating reminders, newsletters and surveys. It will manage administrative tasks such as reports, maintaining compliance documentation, and organizing development schedules. I plan to use AI to build a more efficient and effective educational environment in which both teachers and students thrive.
—Tina Titus, Quinnipiac University, educational leadership
(…)
Don’t Replace Jobs
AI has the power to either turn Gen Z workers into productivity powerhouses or render them obsolete. The difference lies in how responsibly we use it. Today’s companies have rapidly begun integrating AI, focusing on cost savings and efficiency. Many aim to replace humans with AI tools, prioritizing automation over workforce development. For Gen Z, the future feels precarious—like a race to slip through a closing door for increasingly scarce jobs.
But workers shouldn’t be the only ones concerned. Companies often overlook the consequences of widespread job loss from automation. Eliminating jobs reduces consumer buying power, shrinks demand and undermines the economic growth businesses rely on. Companies that neglect workforce development risk weakening their own long-term prospects and erode their competitive advantage. Balancing efficiency with employment ensures short-term gains don’t cause future harm.
When joined with human talent, AI boosts productivity, enables workers to focus on high-value tasks, and strengthens adaptability within businesses. This strategic pairing drives innovation and fosters sustainable growth. The choice is clear: AI must empower, not replace.
—Connor McVey, University of Southern California, business administration
https://www.wsj.com/opinion/ai-and-the-automation-of-work-for-gen-z-artificial-intelligence-1635d53a?mod=opinion_lead_pos9
Le Figaro, December 4, paywalled article
Trisomy 21: new hope for treating patients
New treatments developed in France could reduce cognitive impairment in trisomy 21, and also in Parkinson's and Alzheimer's diseases.
Excerpts:
A world first: the Bordeaux-based company Aelis Farma has just announced that its experimental treatment AEF0217 showed positive effects on the cognitive and behavioural disorders of trisomy 21 (Down's syndrome). The results come from a phase 1/2 clinical trial of 29 young adults aged 18 to 35.
The aim was to assess the safety of the molecule administered to patients, who are more vulnerable than healthy volunteers. No notable adverse effects were reported. But the pleasant surprise was to also observe signs of efficacy. "We saw improvements in important areas such as communication, the ability to express oneself and to write, and in everyday tasks such as taking care of oneself, interacting with one's environment and developing interactions with others," explains Pier Vincenzo Piazza, chief executive of Aelis Farma. (…)
Another promising sign of these improvements: electroencephalographic analyses detected in the brain a reduction in the effort needed to perform working-memory tasks. Notably, these results were obtained in just four weeks, when specialists had hoped for improvements only after months of treatment. (…)
AEF0217 constitutes a new class of drugs, specific signalling inhibitors of the CB1 receptor (CB1-SSi). These molecules act on the CB1 receptor, found on the surface of a great many cells throughout the body, and in particular in the nervous system.
CB1 receptors play a crucial role in modulating pain, mood, appetite, memory, the immune response and metabolism. The whole point of CB1-SSi is that they do not block these receptors but regulate their activity, thereby preserving their normal function. (…)
Patients with trisomy 21 may not, moreover, be the only ones to benefit from these new treatments. Aelis Farma has already obtained evidence of AEF0217's efficacy in animals against the cognitive disorders associated with certain forms of autism and those linked to ageing, for example in the early stages of Alzheimer's disease. The company is currently conducting preclinical studies with its drug candidate to identify new indications, such as the cognitive disorders of Parkinson's disease or of schizophrenia.
Wall Street Journal, December 3, paywalled article
Googling Is for Old People. That’s a Problem for Google.
And it’s not just demographics that are weighing on the search giant. Its core business is under siege from pressures that threaten to dismantle its ecosystem of search dominance and digital advertising.
Excerpts:
If Google were a ship, it would be the Titanic in the hours before it struck an iceberg—riding high, supposedly unsinkable, and about to encounter a force of nature that could make its name synonymous with catastrophe.
The trends moving against Google are so numerous and interrelated that the Justice Department’s attempt to dismantle the company—the specifics of which were unveiled Nov. 20—could be the least of its problems.
The company’s core business is under siege. People are increasingly getting answers from artificial intelligence. Younger generations are using other platforms to gather information. And the quality of the results delivered by its search engine is deteriorating as the web is flooded with AI-generated content. Taken together, these forces could lead to long-term decline in Google search traffic, and the outsize profits generated from it, which prop up its parent company Alphabet’s money-losing bets on things like its Waymo self-driving unit.
The first danger facing Google is clear and present: When people want to search for information or go shopping on the internet, they are shifting to Google’s competitors, and advertising dollars are following them. (…)
This shift is due largely to users’ bypassing Google to start their search for goods on Amazon. (…)
The second threat is the rise of “answer engines” like Perplexity which, well, do what they say on the tin. OpenAI has added internet search to ChatGPT, Meta Platforms is exploring building its own search engine, and even AI chatbots that can’t search the internet are proving increasingly capable at addressing many questions. (…)
“Google had this seemingly insurmountable position in search, until AI came around, and now AI is to search what e-commerce was to Walmart,” says Melissa Schilling, a professor of management at New York University’s Stern School of Business. Another comparable moment was when Microsoft missed the importance of the smartphone, and the iPhone upended its dominance of consumer computing, she adds. (…)
The third trend that threatens Google is one the company may not be able to do much about, and that makes it the most dangerous—the degradation of the overall ecosystem of websites that Google has shaped, and on which it depends. (…)
Even though the rise of AI has the potential to finally unseat Google, it’s likely to take a long time before Google’s dominance truly fades, says David Yoffie, a professor at Harvard Business School.
“We know from behavioral economics that people tend to get into certain routines, and in the absence of a spectacularly better product, people tend to stick with that,” he adds. (…)
As with its attempt to break up Microsoft, the government’s case against Google may be outpaced by competitive forces far more powerful than antitrust enforcement.
UC Berkeley News, November 28, free access
Breakthrough in capturing ‘hot’ CO2 from industrial exhaust
A metal-organic framework, or MOF, is capable of capturing CO2 at extreme temperatures
Excerpts:
Industrial plants, such as those that make cement or steel, emit copious amounts of carbon dioxide, a potent greenhouse gas, but the exhaust is too hot for state-of-the-art carbon removal technology. Lots of energy and water are needed to cool the exhaust streams, a requirement that has limited adoption of CO2 capture in some of the most polluting industries.
Now, chemists at the University of California, Berkeley, have discovered that a porous material can act like a sponge to capture CO2 at temperatures close to those of many industrial exhaust streams. The material — a type of metal-organic framework, or MOF — will be described in a paper to be published in the Nov. 15 print edition of the journal Science. (…)
https://news.berkeley.edu/2024/11/14/breakthrough-in-capturing-hot-co2-from-industrial-exhaust/
The Economist, November 26, paywalled article
Nutrition: Nobody knows why ultra-processed foods are bad for you
But scientists are racing to find out
Excerpts:
For millennia, people have altered food to please their palates. More than 3,000 years ago Mesoamericans, living in what is Mexico and Central America today, cooked corn kernels in a solution of wood ash or limestone. The process, known as nixtamalisation, unlocked nutrients and softened the tough outer shells of the corn, making them easier to grind.
With the invention of canned goods and pasteurisation in the 19th century, alchemy became possible on an industrial scale. Processing innovations made food cheaper, more convenient and plentiful. According to the UN, the average daily food intake of a person in upper- and middle-income countries increased by about 40% between 1975 and 2021, to 3,300 kilocalories. In that time, obesity rates have more than tripled; today, nearly one in three people globally is either obese or overweight.
(…)
Now concerns are growing that the heavy processing used to cook up cheap, tasty nibbles may itself be harmful. A particular target is “ultra-processed foods” (UPFs), a relatively recent label put forward by Carlos Monteiro, a Brazilian scientist. Robert F. Kennedy junior, Donald Trump’s nominee for secretary of health, has likened processed food to “poison” and promised to reduce the share of UPFs in American diets. In November 2023 Colombia imposed a tax on highly processed foods and drinks. Authorities in Brazil, Canada and Peru have advised the public to limit consumption of these foods. In Britain parliamentarians are investigating the effects of UPFs on people’s health.
At the heart of the debate is a question: are UPFs unhealthy because their nutritional content is poor, or does the processing somehow pose risks in itself? New research may soon provide answers that could reformulate what people eat. (…)
Since the 1990s the share of UPFs in diets worldwide has grown; they now account for more than half of the calorie intake in America and Britain (see chart). And for several decades, evidence has also been building that these foods are harmful in some way. Numerous studies show that people who consume diets high in UPFs tend to have more health problems, including obesity, type-2 diabetes, cardiovascular disease, various cancers and mental-health problems. UPFs often contain higher concentrations of fat, sugar and salt than processed foods, which could explain their negative effects. But a recent analysis by Samuel Dicken and Rachel Batterham at University College London reviewed 37 studies and found that even after adjusting for fat, sugar and salt UPFs were still strongly linked to poor health. That suggests there is more to their harm than just a poor nutrient profile.
Where those harms come from is still unclear, however. (…)
A better way to assess the question is with a randomised controlled trial (RCT), in which researchers track a person’s food intake and control for all other variables. In one of the few such trials, published in 2019, Kevin Hall, a researcher at the National Institutes of Health (NIH) in America, and his colleagues admitted 20 adults to the NIH Clinical Centre for four weeks. The participants received either ultra-processed or minimally processed foods for two weeks before swapping diets for the next fortnight. Participants on both diets had access to the same amounts of calories and of nutrients such as sugars, fibre and fat. People were free to eat as much or as little as they wanted.
The results were striking. People on the ultra-processed diet ate about 500 more calories per day than those on the unprocessed one. They also ate faster and gained an average of 1kg (2.2 pounds) over two weeks. On the other diet, participants lost a similar amount of weight. Dr Hall says that, though the study was short and conducted in an artificial setting, the results suggest that excess amounts of salt, sugar and fats might not be fully to blame for the ill effects of processed food.
Further RCTs will be needed to confirm Dr Hall’s results. Even then, a bigger question remains—why do people overeat UPFs? (…)
Even if the results show conclusively that processing, and not just nutrients, leads to poor health, policymakers will face another difficulty: the definition of UPFs remains woolly. The Nova classification has no tolerance at all for artificial ingredients. The mere presence of a chemical additive classifies a food as a UPF, regardless of the amount. This can lead to confusing health outcomes—a recent observational study from Harvard University, for example, found that whereas some UPFs, such as sweetened drinks and processed meats, were associated with a higher risk of heart disease, others, like breakfast cereals, bread and yogurt, were instead linked to lower risks for cardiovascular disease. Dr Astrup warns that the current classification risks “demonising” a lot of healthy food. Insights from Dr Hall’s work could therefore help refine the understanding of UPFs, paving the way for more balanced and useful guidelines. ■
Wall Street Journal, November 25, paid article
Punishing Google for Its Search Success
The DOJ’s meddling in internet search engines could hurt consumers and help China.
Excerpts:
How badly does the Biden Administration want to punish Google? So much that the Justice Department’s antitrust cops are now asking a federal court to hobble the search giant, even though their proposals would hurt consumers and could benefit China. That’s only the start of the reasons to be skeptical of this government market meddling.
In a court filing last week, the DOJ proposed a slew of remedies for Google’s alleged antitrust violations. Federal Judge Amit Mehta ruled in August that Google had maintained an illegal search-engine monopoly by paying web browsers and device manufacturers to be featured by default, even as he acknowledged this wasn’t the primary reason for its success.
“Google has not achieved market dominance by happenstance. It has hired thousands of highly skilled engineers, innovated consistently, and made shrewd business decisions,” Judge Mehta wrote. “The result is the industry’s highest quality search engine, which has earned Google the trust of hundreds of millions of daily users.”
No matter, the government now wants to degrade Google’s search-engine quality to help less successful rivals. Start with its proposal to require Google to divest its popular Chrome browser, which by default uses the company’s web search. DOJ says Chrome lets Google collect more data on users to better target ads and refine search results. Yet if advertisers and users benefit from this product integration, what’s the antitrust problem? (…)
The main beneficiary would be Microsoft’s Bing search engine, which could bid less for default placement. Note that Microsoft’s market capitalization is 50% larger than Google’s. To hamper one tech giant, DOJ would bolster a competing colossus. (…)
DOJ even wants Google to socialize its data. The government’s filing proposes to force Google “to provide rivals and potential rivals both user-side and ads data for a period of ten years, at no cost, on a non-discriminatory basis.” Could this include foreign competitors, such as China’s Tencent or ByteDance? Don’t worry, DOJ’s filing suggests “proper privacy safeguards.” (…)
All of this is taking place as the U.S. is in a high-stakes race with China for the lead in artificial intelligence. Google is an American leader in AI investment. Antitrust policy was designed to police genuine market abuses, not punish companies for success.
The Economist, November 24, paid article
The Human Cell Atlas: Scientists are building a catalogue of every type of cell in our bodies
It has thus far shed light on everything from organ formation to the causes of inflammation
Excerpts:
AN ADULT HUMAN body consists of some 37trn cells. Not so long ago, these were thought to come in 220 different types. That number, the product of painstaking decades spent peering through microscopes at slides bearing tissue sections coloured by chemical stains, gave a sense of the division of cellular labour needed to keep a body running.
A sense, but only a superficial one. Tools now exist that are capable of looking inside the cells, breaking them open one at a time to release their complements of messenger RNA (mRNA), the molecule which carries genetic information from a cell’s nucleus to its protein factories. Molecules of mRNA indicate which genes are active, thus revealing a cell’s inner nature. Cells that look alike under a microscope often turn out to be quite diverse. The cell-type count has thus risen above 5,000.
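The logic behind this reclassification can be sketched with a deliberately toy example: represent each cell by a vector of per-gene mRNA counts and compare profiles with cosine similarity, so that two cells that look alike under the microscope can still land in different expression clusters. All gene counts below are invented for illustration; nothing here is HCA data.

```python
# Toy sketch: two cells with the same microscopic appearance can have very
# different mRNA profiles, and so belong to different expression clusters.

def cosine(a, b):
    """Cosine similarity between two expression vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# mRNA counts for three hypothetical marker genes (invented numbers).
cells = {
    "cell_A": [90, 5, 2],   # strongly expresses gene 1
    "cell_B": [88, 6, 3],   # near-identical profile: same "type" as cell_A
    "cell_C": [3, 80, 70],  # same appearance, very different gene activity
}

ref = cells["cell_A"]
for name, profile in cells.items():
    print(name, round(cosine(ref, profile), 3))  # A and B score near 1, C near 0
```

Real pipelines cluster tens of thousands of genes across millions of cells, but the principle is the same: similarity of expression profiles, not appearance, defines the cell type.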
The leader of this histological revolution is the Human Cell Atlas (HCA) consortium, which was set up in 2016 and currently involves more than 3,600 collaborators in 190 laboratories in 102 countries. Other cell-atlas projects are limited to mapping particular organs or types of tissue. The HCA aspires to catalogue the whole caboodle: identifying and locating all the cell types, healthy and diseased, in every human tissue over the course of a lifetime. Its remit extends even to “organoids”, science’s fumbling first attempts to grow living simulacra of organs.
The hope, according to Sarah Teichmann of Cambridge University and Aviv Regev of Genentech, an American biopharma firm, who set the whole thing up, is to have a first draft of the atlas available next year. Their latest progress report has just been published as a set of papers in Nature and several of its sister journals. (…)
The result is a system that can be (and has been) used not just for enhancing the atlas, but putting it to work. Drug companies are already, for example, using HCA data and models to screen potential drugs “virtually” before they are tested experimentally; to predict side-effects by discovering non-target tissues where the gene a drug candidate interacts with is expressed; and, conversely, to spot opportunities in such non-target tissues to extend a drug’s range of therapeutic targets.
One day all this effort may contribute to a human “digital twin”, which would also incorporate foundation models about how proteins work (such as AlphaFold, a protein-folding model developed by Google DeepMind) and how bodies develop. That day is still far distant. But it now seems more likely to arrive. ■
Wall Street Journal, opinion, November 22, paid article
How to Regulate AI Without Stifling Innovation
Rules can’t solve every potential problem, and the demand for perfect safety has dangers of its own.
Excerpts:
The biggest challenge with artificial intelligence is that we don’t have enough yet. Regulation should aim to help solve this problem. AI could turbocharge the many advanced economies grappling with slow productivity growth. But the technology is still developing, and the European Union’s heavy-handed AI rules have impeded progress there. As the U.S. debates regulation, we should avoid those mistakes by following six principles:
First, balance benefits and risks. This may sound obvious, but many regulatory enthusiasts ignore the technology’s benefits out of an overabundance of caution and instead support delaying AI until it is proven absolutely safe. Cost-benefit analysis requires regulators to think not only about the risks of AI but also the risks from slower AI development, such as more cancer deaths because of delayed drug discovery, worse educational outcomes because students lack personalized digital tutors, more car accidents because of delays in self-driving cars, and worse climate change because of a slowdown in discovering better materials for grid-level battery storage.
Second, compare AI with humans, not to the Almighty. Yes, autonomous cars crash—but how do they compare with human drivers? AI may show biases, but how do these stack up against human prejudices? In some cases, it might even be acceptable for AI to perform slightly worse than humans if it offers significant convenience and has greater potential for improvement over time, as we have seen with autonomous vehicles. AI is learning much faster than humans are and the future gains this learning will generate belong on the benefit side of the ledger.
Third, address how existing regulations are hindering progress. The most obvious are permitting and other obstacles to the expansion of data centers and the power sources they will need. A bigger threat over time is the dozens of state laws regulating AI that have already been passed and the hundreds more that have been proposed. To the degree possible, federal pre-emption with its own framework would help ensure the U.S. remains a digital single market—unlike the fractured EU.
Fourth, where new regulation is warranted, AI should be overseen by existing domain-specific regulators rather than a new superregulator. We don’t have separate regulators for computers or linear regression; instead, our regulators specialize in areas where these are used, such as auto safety, stock trading, and medical devices. Existing regulators should focus on outputs and consequences in their domains, not on inputs and methods. This may require more AI expertise and flexibility within agencies. The Food and Drug Administration has come up with procedures to approve AI-based devices that might fall foul of its standard rules.
Fifth, regulation must not become a moat protecting incumbents. History shows that well-intentioned rules can entrench existing powers, from medieval guilds to hospital certificate-of-need laws. In AI, we risk repeating this pattern. Centralized licensing bodies could easily become gatekeepers stifling competition. A superregulator could be captured by big companies. When tech giants enthusiastically promote regulation, it should raise red flags. Our regulatory framework should nurture a competitive AI landscape, not solidify the dominance of a few early movers.
Sixth, not every problem caused by AI can be solved by regulating AI. I hope this technology will raise wages without hurting employment, with especially large increases for workers with lower-paying skills. Studies have found that less-able writers benefit most from AI-based writing suggestions. But bleak scenarios of swift technological change displacing workers or causing inequality are possible. The answer to this downside risk isn’t to have regulators assess whether each technological advance is job-replacing or inequality-increasing. Rather, the solution lies in more conventional economic policies like training programs that connect people to jobs, wage subsidies, and a more progressive tax and transfer system to ensure that AI’s benefits are shared broadly. As a professor, I wouldn’t expect AI regulations to limit plagiarism—it is on us to figure out how to adapt our teaching.
While some AI regulation is warranted, policymakers should proceed cautiously. Well-intentioned efforts could inadvertently slow progress while falling short of their goals. These six principles can help form a balanced and effective approach to regulating AI, one that harnesses its potential while addressing legitimate concerns.
Mr. Furman, a professor of the practice of economic policy at Harvard, was chairman of the White House Council of Economic Advisers, 2013-17.
The Economist, November 18, paid article
Future imperfect: Artificial intelligence is helping improve climate models
More accurate predictions will lead to better policy-making
Excerpts:
THE DIPLOMATIC ructions at COP29, the United Nations climate conference currently under way in the Azerbaijani capital of Baku, are based largely on computer models. Some model what climate change might look like; others the cost of mitigating it (see Briefing).
No model is perfect. Those modelling climate trends and impacts are forced to exclude many things, either because the underlying scientific processes are not yet understood or because representing them is too computationally costly. This results in significant uncertainty in the results of simulations, which comes with real-world consequences. Delegates’ main fight in Baku, for example, will be over how much money poor countries should be given to help them decarbonise, adapt or recover. The amount needed for adaptation and recovery depends on factors such as sea-level rise and seasonal variation that climate modellers still struggle to predict with much certainty. As negotiations become ever more specific, more accurate projections will be increasingly important.
The models that carry most weight in such discussions are those run as part of the Coupled Model Intercomparison Project (CMIP), an initiative which co-ordinates over 100 models produced by roughly 50 teams of climate scientists from around the world. All of them attempt to tackle the problem in the same way: splitting up the world and its atmosphere into a grid of cells, before using equations representing physical processes to estimate what the conditions in each cell might be and how they might change over time. (…)
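As a loose illustration of that grid-and-equations recipe (not any real CMIP code; the grid size, values and diffusion constant below are invented), the sketch holds one temperature per cell and updates every cell from its neighbours at each time step:

```python
# Toy grid-cell scheme: a small lattice of "atmosphere" cells whose
# temperatures evolve by nearest-neighbour diffusion. Real climate models
# solve coupled physical equations per cell; this only shows the mechanics.

def step(grid, alpha=0.1):
    """Advance the grid one time step with nearest-neighbour diffusion."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            # Average the four neighbours, wrapping at the edges.
            nbrs = (grid[(i - 1) % rows][j] + grid[(i + 1) % rows][j] +
                    grid[i][(j - 1) % cols] + grid[i][(j + 1) % cols])
            new[i][j] = grid[i][j] + alpha * (nbrs / 4 - grid[i][j])
    return new

# A hot "equator" row between two cold rows; diffusion spreads the heat
# while conserving the total.
grid = [[0.0] * 4, [10.0] * 4, [0.0] * 4]
for _ in range(50):
    grid = step(grid)
print(round(sum(map(sum, grid)), 6))  # → 40.0 (total heat is conserved)
```

Halving the cell size in a real model multiplies the number of cells and the computational cost many times over, which is why processes smaller than the grid, such as clouds, must be approximated rather than simulated directly.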
Clever computational tricks can make them more detailed still. They have also grown better at representing the elaborate interactions at play between the atmosphere, oceans and land—such as how heat flows through ocean eddies or how soil moisture changes alongside temperature. But many of the most complex systems remain elusive. Clouds, for example, pose a serious problem, both because they are too small to be captured in 50km cells and because even small changes in their behaviour can lead to big differences in projected levels of warming.
Better data will help. But a more immediate way to improve the climate models is to use artificial intelligence (AI). Model-makers in this field have begun asserting boldly that they will soon be able to overcome some of the resolution and data problems faced by conventional climate models and get results more quickly, too. (…)
Reducing the uncertainties in climate models and, perhaps more important, making them more widely available, will hone their usefulness for those tasked with the complex challenge of dealing with climate change. And that will, hopefully, mean a better response. ■
The Guardian, November 12, free access
Reasons to be hopeful: five ways science is making the world better
If Trump’s re-election is getting you down, these innovations in medicine and technology should cheer you up
L’Express, November 8, paid article
Venki Ramakrishnan, Nobel laureate: “We now know what can delay ageing”
Exclusive interview. In the remarkable “Why We Die”, the Cambridge biologist sorts the genuine avenues of anti-ageing research from the false promises.
Excerpts:
Among the authors of bestsellers promising to push back death, Venki Ramakrishnan has two major assets. First, this Cambridge professor, former president of the prestigious Royal Society and winner of the 2009 Nobel prize in chemistry for his research on ribosomes, the cellular structures responsible for producing proteins, is one of the world’s most eminent biologists. Second, unlike many of his colleagues, he has “no money invested in the sector” and can therefore take an objective, critical view of recent discoveries about longevity, which are often the subject of sensationalist announcements.
In the remarkable Why We Die (Hodder Press, not translated into French), praised this year by English-language critics, Venki Ramakrishnan traces the history of scientific advances in the understanding of ageing and takes stock of the main avenues for delaying the effects of age: calorie restriction, senolytics, cellular reprogramming, transfusions of younger blood… While the Nobel laureate believes we are on the eve of major breakthroughs, he also puts the more arrogant scientists in their place, such as Aubrey de Grey, who has declared that the first humans to reach 1,000 years of age have already been born. Exclusively for L’Express, Venki Ramakrishnan separates serious research from false promises, while recalling that the triad of eating well, sleeping and exercising remains today the best way to prolong one’s life. Interview.
L’Express: You are a renowned biologist, but you have not worked directly on ageing. How did you come to take an interest in this field?
Venki Ramakrishnan: My work is very close to this field of research. That is true of questions relating to protein degradation and turnover, or to the stress response, when the body stops producing proteins if it detects a problem. Researchers in my laboratory study these subjects, which are central to the ageing process. I had also already written a book, on the race to discover the structure of the ribosome, full of behind-the-scenes stories about science. That gave me a taste for writing for the general public. And ageing and death have remained great existential questions ever since humans became aware of their mortality. (…)
Can we really hope to push back the limits of human life?
To date, no one has beaten the record of your compatriot, the Frenchwoman Jeanne Calment, who died in 1997 at the age of 122. Since then, the number of centenarians has nevertheless risen sharply. But once they reach 110, most decline and die. This suggests that the natural limit for our species probably lies around that age, even if some people can occasionally, exceptionally, live a little longer. By curing cancer, heart disease, diabetes or dementia, we may add another ten or fifteen years to average life expectancy, currently around 80. But to go further, we would have to succeed in tackling ageing itself.
In your book you are critical of ageing research, yet your conclusions are ultimately rather optimistic: you say there will be major breakthroughs. Which avenues seem the most encouraging to you?
I criticise the media hype, but not the field as a whole, because a great deal of serious research is under way. With all the money being poured into this area, and all the very good scientists working in it, something will eventually happen. The question is how long it will take. What I denounce are the companies that start marketing products for humans on the basis of laboratory results in mice, without any further trials. That said, there are indeed several interesting avenues.
First, calorie restriction, which might make it possible to be in better health at an advanced age, even though it also has side effects. (…)
Another avenue concerns senescent cells. These have a very important biological function: they signal to the immune system that they are damaged and must be eliminated. (…)
Yet another avenue concerns the synthesis of the metabolites important for the functioning of our organism, which becomes more difficult as we age. (…)
You are quite critical of scientists such as the Harvard geneticist David Sinclair, world-famous for his work on ageing. Why?
My book was carefully reviewed by two lawyers, and I do not want to single out any one researcher. But generally speaking, when scientists create companies on the basis of their own research, there is an inherent conflict of interest. Given the money at stake, they lose their objectivity. The results they rely on should be tested by other experts who are free of conflicts of interest. (…)
Tech billionaires such as Elon Musk, Peter Thiel and others are obsessed with anti-ageing research. Do you think they are backing real science?
For some of them, yes. Several of them, for example, have founded a laboratory, Altos Labs, which has attracted excellent researchers from prestigious universities. But tech billionaires overestimate how quickly this work will yield concrete advances. They come from the software industry, where everything can change utterly within a year: look at what happened with ChatGPT. In biology everything is far more complicated; clinical trials take time, and it often takes some twenty years between a fundamental discovery and a molecule reaching the market. That caveat aside, the science they support is probably quite legitimate.
You explain that Bill Gates is a special case among tech billionaires…
Indeed. To increase the average lifespan of the population of this planet, one of the priorities is to eliminate the diseases linked to infection and malnutrition. In terms of years of life gained, Bill Gates, with his Foundation, probably does more than all those billionaires investing in the fight against ageing… (…)
Le Point, November 4, paid article
Bertrand Duplat, the man who wants to repair the brain
Thanks to a microrobot the size of a grain of rice designed by his company Robeauté, this engineer is on the verge of revolutionising neurosurgery. An interview.
Excerpts:
(…) At the head of Robeauté, the company he founded with Joana Cartocci in 2017, Bertrand Duplat may well give neurosurgery a serious boost. The idea? To develop a miniature robot 1.8 mm in diameter, the size of a grain of rice, capable of moving through the brain, performing biopsies and delivering targeted treatments, while remaining as minimally invasive as possible. (…)
Like a multi-stage rocket, the self-propelled robot will be inserted into the brain through a small hole of 3 to 4 millimetres, then travel through the viscoelastic tissue, gently parting the cerebral walls, following a pre-planned trajectory and guided by an integrated GPS-like positioning system.
“Today we move, at best, in one dimension,” explains Bertrand Duplat, “with neurosurgical needles and in a straight line. Our goal is a non-linear trajectory that can avoid obstacles, and an on-board motor so the robot can be used outside the operating theatre. That would be a world first.” For although robotic arms are increasingly assisting medical teams, with ever more convincing precision, such devices are still very heavy and bulky, and they monopolise operating rooms for hours at a time. (…)
Nothing predestined the researcher to specialise in surgical engineering. But in 2007 Bertrand Duplat lost his mother to a glioblastoma, an extremely aggressive brain tumour. For a tinkerer who had acquired the habit of solving all his problems with inventions, the helplessness was unbearable.
“She was inoperable, and no treatment could shorten her suffering. I promised myself then that one day it would be possible to intervene.” It took him a few years to take the plunge, to build a network of neurologists who remain at his side today, and to make the case for sending his robot into the human brain. (…)
Today the considerable advance the microrobot will bring is no longer in question. Thanks to its ultrasonic tracking system, harmless to both patient and hospital staff, the trajectory of the intervention can be followed in real time, first with millimetric and later with submillimetric precision. The device will then back out in reverse, at the same speed as on the way in: 3 millimetres per minute.
“The big difference from existing techniques,” Duplat continues, “is that currently, to treat pathologies such as cancer or neurodegenerative diseases, drugs are swallowed or delivered into the bloodstream. But the brain is extremely well protected. There is the skull, of course, but also the blood-brain barrier, which isolates the brain from the blood. Drugs do not get in well, and reaching the right dose in the brain can become very toxic for the rest of the body.”
It will now be possible to probe several sites, deliver drugs and place implants far more precisely than current technology allows, but also to take samples, notably molecular ones, and to map exhaustively the extent of a tumour or pathology where imaging remains incomplete. “Mapping the brain from the inside, discovering what is happening in pathological and peripathological zones: that is what we will be able to do at first. Then we will quickly be able to use the robot at the functional, neuronal level. How are neurons connected to one another? Which circuits may be problematic? How can they be treated? Protocols should work better, because the data we will have gathered will confirm that it is the right place, the right timing and the right combination of therapies.”
Trials are under way
Although the road to market authorisation is still long, Bertrand Duplat is already working through the regulations with the Food and Drug Administration. Preclinical trials have been running since 2022 on animal and human cadavers as well as on live animals, and clinical biopsy trials in humans should begin in 2026, with commercialisation in 2030. The fantastic voyage should not be long now.
https://www.lepoint.fr/science/bertrand-duplat-l-homme-qui-veut-reparer-le-cerveau-03-11-2024-2574327_25.php
The Economist, November 4, paid article
The theory of evolution: Darwin and Dawkins: a tale of two biologists
One public intellectual has spent his career defending the ideas of the other
The Genetic Book of the Dead. By Richard Dawkins. Yale University Press; 360 pages; $35. Apollo; £25
Excerpts:
GO TO ANY bookshop, and its shelves will be groaning with works of popular science: titles promising to explain black holes, white holes, the brain or the gut to the uninitiated. Yet the notion that a science book could be a blockbuster is a relatively recent one. Publishers and curious readers have Richard Dawkins to thank.
In 1976 his book “The Selfish Gene”—which argued that natural selection at the level of the gene is the driver of evolution—became a surprise bestseller. It expressed wonder at the variety of the living world and offered a disciplined attempt to explain it. It also announced Dr Dawkins, then 35, as a public intellectual.
A spate of books about evolution followed, as well as full-throated attacks on religion, particularly its creationist aspects. (In one essay he contemplated religion as a kind of “mind virus”.) Dr Dawkins earned the sobriquet “Darwin’s rottweiler”—a nod to Thomas Huxley, an early defender of the naturalist’s ideas, known as “Darwin’s bulldog”.
Dr Dawkins, now 83, has returned with his 19th volume, “The Genetic Book of the Dead”. Its working hypothesis is that modern organisms are, indeed, like books, but of a particular, peculiar, variety. Dr Dawkins uses the analogy of palimpsests: the parchments scraped and reused by medieval scribes that accidentally preserved enough traces of their previous content for the older text to be discerned. (…)
Dr Dawkins’s contention is that, by proper scrutiny of genetics and anatomy, a scientist armed with the tools of the future will be able to draw far more sophisticated and connected inferences than these. This will then illuminate parts of evolutionary history that are currently invisible.
As an analogy, describing organisms as palimpsests is a bit of a stretch. A palimpsest’s original text is unrelated to its new one, rather than being an earlier version of it, so it can tell you nothing about how the later text was composed. But that quibble aside, the tantalising idea is that reading genomes for their history is an endeavour that may form the basis of a new science.
After 19 books and almost 50 years spent contemplating essentially the same theme, lesser authors would be forgiven for getting stale. But, though Dr Dawkins’s topic is unchanging, his approach is always fresh, thanks to new examples and research. Yet he calls his current book tour “The Final Bow”, suggesting that he is exhausted, even if his subject is not.
Dr Dawkins has been an influential figure as much as an important thinker. In the current age, when academics and students are fearful of expressing even slightly controversial opinions, the world needs public intellectuals who are willing to tell it, politely but persuasively, how it is. Dr Dawkins has long been happy to challenge his readers’ orthodoxies (even if he has mellowed on the subject of faith, and refers to himself as a “cultural Christian”). Popular science writers today could take note. ■
https://www.economist.com/culture/2024/10/28/darwin-and-dawkins-a-tale-of-two-biologists
The Economist, November 1, paid article
Think outside the box: ADHD should not be treated as a disorder
Adapting schools and workplaces for it can help far more
Excerpts:
(…) NOT LONG ago, attention-deficit hyperactivity disorder (ADHD) was thought to affect only school-aged boys—the naughty ones who could not sit still in class and were always getting into trouble. Today the number of ADHD diagnoses is rising fast in all age groups, with some of the biggest increases in young and middle-aged women.
The figures are staggering. Some 2m people in England, 4% of the population, are thought to have ADHD, says the Nuffield Trust, a think-tank. Its symptoms often overlap with those of autism, dyslexia and other conditions that, like ADHD, are thought to be caused by how the brain develops. All told, 10-15% of children have patterns of attention and information-processing that belong to these categories.
At the moment, ADHD is treated as something you either have or you don’t. This binary approach to diagnosis has two consequences. The first is that treating everyone as if they are ill fills up health-care systems. Waiting lists for ADHD assessments in England are up to ten years long; the special-needs education system is straining at the seams. The second consequence occurs when ADHD is treated as a dysfunction that needs fixing. This leads to a terrible waste of human potential. Forcing yourself to fit in with the “normal” is draining and can cause anxiety and depression.
The binary view of ADHD is no longer supported by science. Researchers have realised that there is no such thing as the “ADHD brain”. The characteristics around which the ADHD diagnostic box is drawn—attention problems, impulsivity, difficulty organising daily life—span a wide spectrum of severity, much like ordinary human traits. For those at the severe end, medication and therapy can be crucial for finishing school or holding on to a job, and even life-saving, by suppressing symptoms that lead to accidents.
But for most people with ADHD, the symptoms are mild enough to disappear when their environment plays to their strengths. Rather than trying to make people “normal”, it is more sensible—and cheaper—to adjust classrooms and workplaces to suit neurodiversity. (…)
Greater understanding of neurodiversity would reduce bullying in schools and help managers grasp that neurodivergent people are often specialists, rather than generalists. They may be bad in large meetings or noisy classrooms, but exceptional at things like multitasking and visual or repetitive activities that require attention to detail. Using their talents wisely means delegating what they cannot do well to others. A culture that tolerates differences and takes an enlightened view of the rules will help people achieve more and get more out of life. That, rather than more medical appointments, is the best way to help the growing numbers lining up for ADHD diagnoses. ■
https://www.economist.com/leaders/2024/10/30/adhd-should-not-be-treated-as-a-disorder
The everything drugs : It’s not just obesity. Drugs like Ozempic will change the world
As they become cheaper, they promise to improve billions of lives (The Economist, October 29th, paywalled article)
See “Article du Jour”
Identity medicine : The data hinted at racism among white doctors. Then scholars looked again
Science that fits the zeitgeist sometimes does not fit the data (The Economist, October 28th, paywalled article)
Excerpts:
BLACK BABIES in America are more than twice as likely to die before their first birthday as white babies. This shocking statistic has barely changed for many decades, and even after controlling for socioeconomic differences a wide mortality gap persists. Yet in 2020 researchers discovered a factor that appeared to reduce substantially a black baby’s risks. In their study, published in Proceedings of the National Academy of Sciences (PNAS), they wrote that “when Black newborns are cared for by Black physicians, the mortality penalty they suffer, as compared with White infants, is halved.”
This striking finding quickly captured national and international headlines, and generated nearly 700 Google Scholar citations. The study was widely interpreted—incorrectly, say the authors—as evidence that newborns should be matched to doctors of the same race, or that white doctors harboured racial animus against black babies. It even made it into the Supreme Court’s records as an argument in favour of affirmative action, with Justice Ketanji Brown Jackson (mis)citing the findings. A supporting brief by the Association of American Medical Colleges and 45 other organisations mentioned the study as evidence that “For high-risk black newborns, having a black physician is tantamount to a miracle drug.”
Now a new study seems to have debunked the finding, to much less fanfare. A paper by George Borjas and Robert VerBruggen, published last month in PNAS, looked at the same data set from 1.8m births in Florida between 1992 and 2015 and concluded that it was not the doctor’s skin colour that best explained the mortality gap between races, but rather the baby’s birth weight. Although the authors of the original 2020 study had controlled for various factors, they had not included very low birth weight (ie, babies born weighing less than 1,500 grams, who account for about half of infant mortality). Once this was also taken into consideration, there was no measurable difference in outcomes.
The new study is striking for three reasons. First, and most important, it suggests that the primary focus to save young (black) lives should be on preventing premature deliveries and underweight babies. Second, it raises questions about why this issue of controlling for birth weight was not picked up during the peer-review process. And third, the failure of its findings to attract much notice, at least so far, suggests that scholars, medical institutions and members of the media are applying double standards to such studies. Both studies show correlation rather than causation, meaning the implications of the findings should be treated with caution. Yet, whereas the first study was quickly accepted as “fact”, the new evidence has been largely ignored. (…)
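The statistical point at issue, that an apparent group gap can disappear once a strong confounder such as very low birth weight is controlled for, can be sketched with a toy, Simpson's-paradox-style calculation. The numbers below are invented for illustration; they are not the Florida data:

```python
# Illustrative toy numbers (invented, NOT the study's data): an overall
# mortality gap can vanish once births are stratified by very low birth
# weight (<1,500 g), because group composition differs across strata.
from collections import namedtuple

Stratum = namedtuple("Stratum", "births deaths")

# Within each weight stratum both groups face essentially the same
# mortality rate; group B simply has more very-low-birth-weight births.
data = {
    "A": {"vlbw": Stratum(100, 30), "normal": Stratum(9900, 20)},
    "B": {"vlbw": Stratum(300, 90), "normal": Stratum(9700, 19)},
}

def rate(s):
    return s.deaths / s.births

for group, strata in data.items():
    births = sum(s.births for s in strata.values())
    deaths = sum(s.deaths for s in strata.values())
    print(group,
          "overall:", round(deaths / births, 4),
          "vlbw:", round(rate(strata["vlbw"]), 3),
          "normal:", round(rate(strata["normal"]), 4))
```

Run it and group B's overall mortality is more than double group A's, even though the within-stratum rates are nearly identical: the "gap" lives entirely in the different shares of very-low-birth-weight births, which is exactly the kind of artefact a missing control can produce.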
That a flawed social-science finding which fitted neatly within the zeitgeist was widely accepted is understandable. Less understandable is that few people now seem eager to correct the record. The new study has had just one Google Scholar citation and no mainstream news coverage. This suggests that opinion-makers have, at best, not noticed the new article (it was published a month ago; The Economist only spotted it when the Manhattan Institute, a conservative think-tank, last week put out an explanatory note). At worst, they have deliberately ignored it. (…)
Mr Borjas and Mr VerBruggen end on an optimistic note, observing that science has “the capacity for self-correction, and scientists can facilitate this by being open with their methods and data”. Science journalists can help, too. ■
Are You Ready for a Brain Chip? It’ll Change Your Mind
These implants will help us do amazing things. The downside is that they may destroy humanity. (WSJ, October 19th, paywalled article)
Excerpts:
Smartphone ownership is nearly universal. It isn’t mandatory, of course, but you’d be seen as an eccentric if you didn’t have one. Rejecting smartphones means you’re old-fashioned, possibly a bit of a crank.
There are pros and cons to having a smartphone. As a society, we’ve decided that on the whole it’s much better to have one. Of course, there’s an astronomical amount of money at stake. Imagine how much revenue depends, directly and indirectly, on the near-universal ownership of smartphones and tablets. There’s the demand for the hardware itself, for the raw materials to build that hardware and the infrastructure to assemble it, to improve it, to ship it around the world. There’s demand for transmission lines, cell towers and data networks. There’s demand for the operating systems, for the middleware that operates the cameras and regulates the batteries, for the hundreds of thousands of apps you might want to download. There’s demand for the endless content that appears on those apps and for the advertising time and real estate on hundreds of millions of smartphone and tablet screens. Beyond all that, there’s demand for the huge amounts of energy we need to make it all work.
There is very little that is completely independent of these devices. So, yes, you can do without a smartphone. But it isn’t easy.
It won’t be long before there is a similar concerted effort to make brain-implanted chips seem normal. It is a matter of years, not decades. These won’t be chip implants permitting paraplegics to regain their independence. These will be implants marketed to everyone, as smartphones are now. And if you decline to have a chip grafted onto your brain, you’ll be a backward, out-of-touch misanthrope.
The benefits of brain chips will be vastly beyond what external devices offer today. We will be able to take “photos” of anything we see with our eyes, just by thinking. Ditto video—in 3-D. We will be able to send messages to friends by thinking them, and to hear their replies played in our minds. We’ll have conversations with friends remotely, hearing their voices and ours without actually having to speak. We’ll be able to talk to anyone in any language. We’ll be able to remember an infinite amount of information, to retrieve any fact by asking our brain chips. We’ll be able to pay for things without carrying a wallet or a phone. We’ll be able to hear music piped directly into our brains. To watch movies. To take part in movies. To be totally entertained in new virtual worlds. (…)
We’ll be able to get advertising pumped directly into our brains, to have images hover before our eyes that we can’t turn off—except for those opting for the premium subscription. Our memories will be organized for us by artificial intelligence under policies crafted by experts who will have society’s best interests at heart. We won’t have access to information that might be, say, Russian propaganda. If we have criminal ideas, or perhaps just countercultural notions, they will be referred to the proper authorities before it’s too late.
In other words, it will be every dystopian sci-fi drama rolled into one.
But transhumanism—transcending human “limitations” through technology—becomes dangerous when a human, deprived of that technology, would be not only inconvenienced but unrecognizable.
Imagine a world in which not only our friends’ phone numbers but all our experiences with them, and even their names and their faces, are remembered for us and stored remotely on servers somewhere—available for us at any time. Until they’re not. What would be left of a generation of humans who had never had to use their own memory or do any of their own reasoning until, one day, all the chips were turned off? Would there be any human left, or only an empty shell?
It doesn’t take an atomic bomb to destroy humanity. There are other ways. If you don’t get a brain chip, you’ll have a hard time competing or even living in the modern world. You won’t be able to retain endless information, to pick up new skills instantly, to communicate with anyone anywhere. You’ll be out of date. You’ll be an obsolete human. You might be the last human. So maybe you’d better get the brain chip after all. Remember, it’s optional.
Mr. Gelernter is manager of RG Niederhoffer Digital and an expert in artificial intelligence and machine learning.
Are You Ready for a Brain Chip? It’ll Change Your Mind – WSJ
Filling up space : The rockets are nifty, but it is satellites that make SpaceX valuable
Elon Musk’s space venture may soon be more valuable than Tesla (The Economist, October 19th, paywalled article)
Excerpts:
There was no mistaking the feat of engineering. The bottom half of the biggest object ever flown—by itself as tall as a 747 is long—came hurtling out of the sky so fast that it glowed from the friction. With the ground rushing to meet it, a cluster of its engines briefly relit, slowing the rocket and guiding it carefully back towards the same steel tower from which it had launched just seven minutes previously. A pair of arms swung closed to catch it, leaving it suspended and smoking in the early-morning sunshine.
Less obvious than the kinetic marvels, but even more important, are the economics of Starship, as the giant rocket tested on October 13th is known. The firm that built it, SpaceX, was founded in 2002 by Elon Musk, an entrepreneur, with the goal of slashing the expense of flying things to space. For Mr Musk, the purpose of such cost-cutting is to make possible a human settlement on Mars. But it has also made new things possible back on Earth. Over the past four years, SpaceX has become a globe-straddling internet company as well as a rocket-maker. Its Starlink service uses what would, a few years ago, have been an unthinkably large number of satellites (presently around 6,400, and rising fast) to beam snappy internet access nearly anywhere on the planet.
Excitement about Starlink’s prospects has seen SpaceX’s valuation rise to $180bn (see chart 1). Some analysts are even beginning to wonder whether it might one day match or exceed the value of Tesla, an electric-car firm of which Mr Musk is also CEO. If Starship lives up to its promise, its combination of vast size and bargain-basement price could provide a big boost to the economics of space in general—and of Starlink in particular. (…)
Mr Musk is famous for making grand predictions, only some of which come to pass. But when he started Starlink he said his only ambition was not to go bankrupt, with good reason. Something similar had been tried, albeit on a much smaller scale, by firms including Teledesic and GlobalStar at the height of the dotcom boom. All of them went bust. But as far as anyone can tell, Starlink is thriving.
Its distinctive white antennae have popped up everywhere from remote schools in the Amazon to the bunkers and trenches on the front lines of the war in Ukraine. “I’ve [even] seen a Starlink dish tied to a broom handle and mounted on a public toilet in the Lake District,” says Simon Potter of BryceTech. In September the firm announced it had signed up 4m customers. Traffic through its networks has more than doubled in the past year, as SpaceX has signed deals with cruise lines, shipping firms and airlines.
Modelling by Quilty Space, another firm of analysts, suggests that Starlink’s revenue will hit $6.6bn this year, up from $1.4bn in 2022. That is already 50% more than the combined revenue of SES and IntelSat, two big satellite-internet firms that announced a merger in April. A year ago Mr Musk said that Starlink had achieved “break-even cashflow”. “It’s astounding that a constellation of this size can be profitable,” says Chris Quilty. “And it scares the shit out of everyone else in the industry.” (…)
Starlink’s satellites fly in very low orbits, around 500km up. That slashes transmission delays, allowing Starlink to offer a connection similar to ground-based broadband. The trade-off is that each satellite can serve only a small area of Earth. To achieve worldwide coverage you therefore need an awful lot of satellites. According to Jonathan McDowell of the Harvard-Smithsonian Centre for Astrophysics, the 6,400 or so Starlink satellites launched since 2019 account for around three-quarters of all the active satellites in space (see chart 3). SpaceX has firm plans to deploy 12,000 satellites, and has applied to launch as many as 42,000. (…)
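The trade-off described above can be made concrete with a little spherical geometry. The sketch below is a simplification: it ignores orbital mechanics and the overlap real constellations need, and the 25-degree minimum elevation angle is an assumed figure, not a SpaceX specification. It simply divides the Earth's surface area by one satellite's visibility footprint to get a crude lower bound:

```python
import math

def min_satellites(altitude_km, min_elevation_deg, earth_radius_km=6371.0):
    """Crude lower bound on satellites needed for global coverage:
    total sphere area divided by one satellite's visibility footprint
    (a spherical cap). Ignores orbits and the overlap real
    constellations require, so actual numbers are far higher."""
    e = math.radians(min_elevation_deg)
    # Earth-central half-angle of the footprint for a user who must see
    # the satellite at least min_elevation_deg above the horizon.
    lam = math.acos(earth_radius_km / (earth_radius_km + altitude_km)
                    * math.cos(e)) - e
    cap_fraction = (1 - math.cos(lam)) / 2  # cap area / sphere area
    return math.ceil(1 / cap_fraction)

# At 500 km a satellite covers only a tiny patch, so even this
# idealised bound runs into the hundreds...
print(min_satellites(500, 25))
# ...whereas at geostationary altitude a handful suffice.
print(min_satellites(35786, 5))
```

The gap between the idealised bound (hundreds) and the 12,000-42,000 satellites SpaceX actually plans reflects everything the sketch leaves out: satellites spend most of their time over oceans or out of position, and capacity, not bare coverage, drives the count.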
The rockets are nifty, but it is satellites that make SpaceX valuable (economist.com)
Starship sticks the landing : Elon Musk’s SpaceX has achieved something extraordinary
If SpaceX can land and reuse the most powerful rocket ever made, what can’t it do? (The Economist, October 14th, paywalled article)
Excerpts:
THE LAUNCH was remarkable: a booster rocket with twice the power of the Apollo programme’s Saturn V lancing into the early-morning sky on a tight, bright column of blue-tinged flame. But that wonder has been seen four times before. It was the landing of the booster stage of SpaceX’s fifth Starship test flight which was truly extraordinary.
The landing was a triumph for the engineers of SpaceX, a company founded and run by Elon Musk. It strongly suggests that the company’s plans to use a huge reusable booster to launch a huge reusable spacecraft, the Starship proper, on a regular basis are achievable. That means that the amount of cargo that SpaceX can put into orbit for itself and its customers, including the American government, is set to grow spectacularly in the second half of this decade. (…)
Mr Musk’s ambitions for Mars are part of an ambition to safeguard civilisation which also entails, in his eyes, the re-election of Donald Trump (on which he is working hard), and, apparently, the use of X, a social network he owns, as a personal platform and a tool for the spread of misinformation. This is something about which many have strong concerns, and rightly. But with the Super Heavy cooling down in its elevated cradle, the getting to Mars bit, at least, looks more real than it has ever done before. ■
Elon Musk’s SpaceX has achieved something extraordinary (economist.com)
ESCP Business School to transform itself in depth with OpenAI
The ESCP business school will use the American company’s platform to adapt its teaching methods and administrative processes. (Le Figaro, October 14th, free access)
Excerpts:
In its report published in March, the Commission on Artificial Intelligence recommended “generalising (its) deployment across all higher-education programmes”. Some institutions are moving faster than others. ESCP Business School, one of Europe’s most renowned business schools, has concluded a wide-ranging partnership with OpenAI. Some of the students, faculty researchers and administrative staff of the ESCP network will be trained on OpenAI edu, the specialised version of its AI platform for universities. “The first advantage for us is a continuous improvement of the student experience,” says Léon Laulusa, ESCP’s dean. With these technologies the school wants to strengthen personalised, interactive learning for its students. “I believe these technologies will foster anchored learning, the kind that is not forgotten over time,” he insists. The school has created conversational assistants that answer students’ questions in various fields, generate personalised practice tests and offer advice on thesis writing. “We graft ourselves onto ChatGPT’s open data and then close the system over our own data,” adds the dean, who is mindful of data security. Students will also be able to receive personalised feedback on their work in real time. (…)
By working with these technologies every day, the school also intends to align its programmes and curricula as closely as possible with the skills companies expect. “There is a major challenge in scaling up AI training in France, but we must also see how to develop skills that last. We want to make sure ESCP’s programmes remain at the cutting edge of technology and relevant in a rapidly changing business world,” the dean insists. (…)
ESCP Business School va se transformer en profondeur grâce à OpenAI (lefigaro.fr)
The 2024 Nobel prizes : AI wins big at the Nobels
Awards went to the discoverers of micro-RNA, pioneers of artificial-intelligence models and those using them for protein-structure prediction (The Economist, October 14th, paywalled article)
Excerpts:
The scientific Nobel prizes have always, in their way, honoured human intelligence. This year, for the first time, the transformative potential of artificial intelligence (AI) has been recognised as well. That recognition began on Tuesday October 8th, when Sweden’s Royal Academy of Science awarded the physics prize to John Hopfield of Princeton University and Geoffrey Hinton of the University of Toronto for computer-science breakthroughs integral to the development of many of today’s most powerful AI models.
The next day, the developers of one such model also received the coveted call from Stockholm. Demis Hassabis and John Jumper from DeepMind, Google’s AI company, received one half of the chemistry prize for their development of AlphaFold, a program capable of predicting three-dimensional protein structure, a long-standing grand challenge in biochemistry. The prize’s other half went to David Baker, a biochemist at the University of Washington, for his computer-aided work designing new proteins.
The AI focus was not the only thing the announcements had in common. In both cases, the research being awarded would be seen by a stickler as being outside the remit of the prize-giving committees (AI research is computer science; protein research arguably counts as biology). (…)
For the growing number of researchers around the world who rely on AI in their work, the lasting message of this year’s awards may be a different one: that they, too, could one day nab science’s most prestigious gongs. For his part, said Dr Jumper, “I hope…that we have opened the door to many incredible scientific breakthroughs with computation and AI to come.” ■
AI wins big at the Nobels (economist.com)
Albert Moukheiber, doctor of neuroscience: “We think the brain works by zones, but that is wrong”
INTERVIEW – Emotions, personality, decision-making… everything today seems to find an explanation in our brain. That is what Albert Moukheiber, a doctor of neuroscience, denounces in his book Neuromania, le vrai du faux sur votre cerveau. (Madame Figaro, October 7th, paywalled article)
Excerpts:
(…) In his book Neuromania, le vrai du faux sur votre cerveau (1), he denounces the tendency to invoke neuroscience far too often in order to give a scientific “veneer” to things that are not actually science, at the cost of approximations, shortcuts, even untruths. Such reductive discourse is everywhere, he warns, and not without consequences. Interview.
Madame Figaro.- What are the consequences of the reductive discourse on neuroscience that you denounce?
Albert Moukheiber.- First, this “neuromania” affects the way we view our own performance. Put plainly, if you tell someone that they operate mainly with their “left brain” and are therefore supposedly endowed with a mind more Cartesian than intuitive, that can steer their career decisions or life choices, for example. Next, the phenomenon hits us financially, since we are now sold training courses to “use 20% of our brain instead of 10%”, or to develop our “neuro-creativity”. (…)
You also maintain that this reductive discourse has societal consequences…
And for a simple reason: when everything is seen through the prism of the brain, the other levels of explanation are brushed aside. One case illustrates this particularly well: in recent years the media have reported that our failure to act against global warming was down to our brain. More precisely: addicted to immediate pleasure, the organ supposedly gets in the way by pushing us to make as little effort as possible. But by insinuating that the human species is doomed to fail in spite of itself, we render invisible the responsibility of governments, lobbies and polluting companies. All the factors that would explain the problem more pertinently are “erased” in favour of the cerebral thesis. (…)
What impact does our brain really have on our daily life?
It is a central part of who we are. It integrates and coordinates bodily, environmental and cerebral information. It allows us to perceive, think, act and give meaning to existence. But contrary to what one might believe, it is not a control tower. Because the brain is an organ of interrelation, it influences the body and the environment but is also influenced by them. In other words, we are not only our brain. A simple exercise makes this clear. Observe the way you speak to a friend and to a client at work. In both situations you have the same brain, yet you behave completely differently. That is proof that our personality changes with context and that our brain is not the sole determinant of who we are, our tastes, our ideas and our emotions. Though it certainly influences the way we act, the brain remains just a network of neurons that either fire a signal or do not. (…)
Can we nevertheless act on it?
Not really. Take an example: today we are often told that to be less sad, or happier, all we need to do is make up for a “lack of serotonin”, nicknamed the happiness hormone. But we cannot control neurotransmitters; we cannot decide, here and now, to raise our dopamine or serotonin levels. From a neuroscientific point of view, it is not even possible to measure the level of a hormone in the brain! So it is essential to rely on scientific findings. To return to the example of happiness: research has shown that being happy requires good material conditions, healthy social relationships, sound self-esteem, sleeping well, eating healthily and staying properly hydrated. Such information is far more valuable to share, since it allows us truly to act.
Neuromania, le vrai du faux sur votre cerveau, by Albert Moukheiber (Ed. Allary), 288 pages, €21.90.
When artificial intelligence also revolutionises the history of the French language
REPORT – How the language evolved, when words first appeared… Cutting-edge technologies have made it possible to test long-standing hypotheses through the analysis of manuscripts from the 13th to the 17th century. (Le Figaro, October 7th, paywalled article)
Excerpts:
Was Pierre Corneille the real author of Molière’s works? From when did Latin give way to French? Was witchcraft a recurrent subject at the time? Artificial intelligence now makes it possible to answer all of these questions, and it keeps making headlines. A genuine revolution is under way in historical research: until now, a search engine could find occurrences in printed, digitised books, but not in handwritten works.
The available technologies were not advanced enough to interpret and analyse texts of this kind. Archivists long dreamed of a search engine that would let them find words or abbreviations in these manuscripts and so advance their research. Studying and transcribing a parchment written in Old and Middle French meant hours of work. “The problem with human handwriting is that it is extremely variable over time,” explains Jean-François Moufflet, a curator at the Archives nationales who took part in Himanis (“Historical manuscript indexing for user-controlled search”), a European research project launched in 2015 by the Institut de Recherche et d’Histoire des Textes (CNRS) to index the text of the French royal chancery registers of 1302-1483 held at the Archives nationales.
In their indexing work (recording the occurrences of a word or concept in the registers), archivists faced a time-consuming, titanic task. “They had only managed to do it for registers from the first half of the 14th century, and they could not continue because the volume was too great,” the curator adds. For these 200 royal-chancery registers alone, in which the decisions taken by the kings of France were copied out and which form a veritable administrative memory of the period, some 80,000 pages had to be examined. The number of words and expressions to record far exceeds the number of pages. Yet indexing them lets historians analyse the mindset of the period, form hypotheses about past events and reconstruct history.
These extraordinary resources were made possible by technologies for recognising digitised printed text (OCR, for “Optical Character Recognition”) and handwriting (HTR, for “Handwritten Text Recognition”), which have improved in recent years with the dazzling progress of artificial intelligence. (…)
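Once OCR or HTR has produced transcriptions, the indexing step the article describes, mapping each word to the pages where it occurs, is a classic inverted index. A minimal sketch, using invented sample text rather than real chancery transcriptions:

```python
from collections import defaultdict

def build_index(pages):
    """Map each lower-cased word to the set of pages it appears on --
    the 'indexation' step once HTR has transcribed the registers."""
    index = defaultdict(set)
    for page_no, text in pages.items():
        for word in text.lower().split():
            index[word.strip(".,;:")].add(page_no)
    return index

# Toy transcriptions standing in for HTR output (invented sample text,
# not actual register content).
pages = {
    12: "le roy de france mande et ordonne",
    47: "sorcellerie et autres crimes",
    93: "le roy confirme la chartre",
}
index = build_index(pages)
print(sorted(index["roy"]))  # pages where "roy" occurs
```

A real system adds what this sketch omits: normalising the wildly variable medieval spellings and abbreviations (the hard part, and the one HTR models are trained for) before the cheap dictionary lookup shown here.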
Quand l’intelligence artificielle révolutionne aussi l’Histoire de la langue française (lefigaro.fr)
Science of success : This Will Be Your New Favorite Podcast. The Hosts Aren’t Human.
With this Google tool, you can now listen to a show about any topic you could possibly imagine. You won’t believe your ears. (WSJ, October 5th, paywalled article)
Excerpts:
Have you heard about the latest hit podcast? It’s called Deep Dive—and you have to check it out.
Each show is a chatty, 10-minute conversation about, well, any topic you could possibly imagine. The hosts are just geniuses. It’s like they know everything about everything. Their voices are soothing. Their banter is charming. They sound like the kind of people you want to hang out with.
But you can’t. As it turns out, these podcast hosts aren’t real people. Their voices are entirely AI-generated—and so is everything they say.
And I can’t stop listening to them.
This experimental audio feature released last month by Google is not just some toy or another tantalizing piece of technology with approximately zero practical value.
It’s one of the most compelling and completely flabbergasting demonstrations of AI’s potential yet.
“A lot of the feedback we get from users and businesses for AI products is basically: That’s cool, but is it useful, and is it easy to use?” said Kelly Schaefer, a product director in Google Labs.
This one is definitely cool, but it’s also useful and easy to use. All you need to do is drag a file, drop a link or dump text into a free tool called NotebookLM, which can take any chunk of information and make it an entertaining, accessible conversation.
Google calls it an “audio overview.” You would just call it a podcast.
One of the coolest, most useful parts is that it makes podcasts out of stuff that nobody would ever confuse for scintillating podcast material.
Wikipedia pages. YouTube clips. Random PDFs. Your college thesis. Your notes from that business meeting last month. Your grandmother’s lasagna recipe. Your resume. Your credit-card bill! This week, I listened to an entire podcast about my 401(k). (…)
This Will Be Your New Favorite Podcast. The Hosts Aren’t Human. – WSJ
On the fly : An adult fruit fly brain has been mapped—human brains could follow
For now, it is the most sophisticated connectome ever made (The Economist, October 5th, paywalled article)
Excerpts:
FRUIT FLIES are smart. For a start—the clue is in the name—they can fly. They can also flirt; fight; form complex, long-term memories of their surroundings; and even warn one another about the presence of unseen dangers, such as parasitic wasps.
They do each of these things on the basis of sophisticated processing of sound, smell, touch and vision, organised and run by a brain composed of about 140,000 neurons—more than the 300 or so found in a nematode worm, but far fewer than the 86bn of a human brain, or even the 70m in a mouse. This tractable but non-trivial level of complexity has made fruit flies an attractive target for those who would like to build a “connectome” of an animal brain—a three-dimensional map of all its neurons and the connections between them. That attraction is enhanced by fruit flies already being among the most studied and best understood animals on Earth. (…)
Creating a connectome means taking things apart and putting them back together. The taking apart uses an electron microscope to record the brain as a series of slices. The putting back together uses AI software to trace the neurons’ multiple projections across slices, recognising and recording connections as it does so. (…)
Janelia’s second method involved shaving layers from a sample with a diamond knife and recording them using a transmission electron microscope (which sends its beam through the target rather than scanning its surface). This is the data used by FlyWire. With Janelia’s library of 21m images made in this way, Dr Murthy and Dr Seung, ably assisted by 622 researchers from 146 laboratories around the world (as well as 15 enthusiastic “citizen scientist” video-gamers, who helped proofread and annotate the results), bet their software-writing credibility on being able to stitch the images together into a connectome. Which they did.
Besides the numbers of neurons and synapses in the fly brain, FlyWire’s researchers have also counted the number of types of neurons (8,577) and calculated the combined length (149.2 metres) of the message-carrying axons that connect cells. More important still, they have enabled the elucidation not only of a neuron’s links with its nearest neighbours, but also the links those neurons have with those farther afield. (…)
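At bottom, a connectome is a directed graph: neurons as nodes, synaptic connections as weighted edges. The sketch below (toy neuron IDs, not FlyWire data) shows how such a graph supports exactly the kind of query described, a neuron's links with its nearest neighbours and with those farther afield:

```python
from collections import defaultdict

# A connectome as a directed, weighted graph. The neuron IDs and
# synapse counts below are invented for illustration.
synapses = [("ORN1", "PN1", 12), ("PN1", "KC1", 5),
            ("PN1", "KC2", 3), ("KC1", "MBON1", 7)]

graph = defaultdict(dict)
for pre, post, count in synapses:
    graph[pre][post] = count  # count = number of synapses pre -> post

def downstream(neuron, hops):
    """All neurons reachable from `neuron` within `hops` synaptic steps."""
    frontier, seen = {neuron}, set()
    for _ in range(hops):
        frontier = {post for n in frontier for post in graph[n]} - seen
        seen |= frontier
    return seen

print(downstream("ORN1", 1))  # direct partners only
print(downstream("ORN1", 2))  # partners of partners, too
```

Scaled to FlyWire's roughly 140,000 neurons and tens of millions of synapses, the same traversal idea lets researchers trace, say, a sensory neuron's influence several synapses deep into the circuit.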
This sort of thing is scientifically interesting. But to justify the dollars spent on them, projects such as FlyEM and FlyWire should also serve two practical goals. One is to improve the technology of connectome construction, so that it can be used on larger and larger targets—eventually, perhaps, including the brains of Homo sapiens. The other is to discover to what extent non-human brains can act as models for human ones (in particular, models that can be experimented on in ways that will be approved by ethics committees). (…)
These natural experiments, the circuit-diagrams of which connectomes will make available, might even help human computer scientists. Brains are, after all, pretty successful information processors, so reproducing them in silicon could be a good idea. As it is AI models which have made connectomics possible, it would be poetic if connectomics could, in turn, help develop better AI models. ■
An adult fruit fly brain has been mapped—human brains could follow (economist.com)
The brain: the secrets of the ultimate "terra incognita"
Driven by research programmes such as the Human Brain Project, scientists now have complete atlases of the brain's regions. (Le Point, 30 septembre, article payant)
Extraits :
The project was extremely ambitious. Too ambitious, said its detractors. In 2013 the Human Brain Project was inaugurated with great fanfare: a behemoth funded by the European Commission to the tune of more than 600 million euros. The goal of this research programme, designed to bring together 500 scientists from across the continent? To produce a computer simulation of the brain. Nothing less than a virtual twin of the organ, mimicking its anatomy and its dynamic workings, on which all sorts of scientific hypotheses could be tested. Like the space race in its day, the race to explore that terra incognita, the brain, was under way.
That same year the United States devoted 6 billion dollars to a programme to develop new mapping technologies as part of the BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies). In 2014 it was Japan's turn, launching its Brain/MINDS initiative (Brain Mapping by Integrated Neurotechnologies for Disease Studies), a large part of which consists of mapping the neural networks of the common marmoset. Other countries followed, including Canada, Australia, South Korea and China, with comparable research programmes.
More than ten years on, where do we stand? Still far from the goal. "The original ambition, an extremely precise computer model of the human brain, proved impossible," says the neuroscientist Philippe Vernier, co-director general of the Ebrains platform, a direct offshoot of the Human Brain Project. The challenge was colossal. "A human brain is 200 billion cells, more than there are stars in our galaxy," explains Hervé Chneiweiss, research director at the CNRS, neurobiologist and neurologist. "And each cell makes roughly 5,000 connections with neighbouring or slightly more distant cells." (…)
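The arithmetic behind those figures is worth spelling out; a back-of-envelope sketch, using only the numbers quoted above:

```python
# Back-of-envelope scale of a human brain, using the figures quoted
# above: ~200 billion cells, each making ~5,000 connections.
cells = 200e9
connections_per_cell = 5e3

total_connections = cells * connections_per_cell  # ~1e15 connections

# Even a single byte per connection would mean about a petabyte of
# storage, before recording any anatomy or dynamics at all.
petabytes = total_connections / 1e15
```

A quadrillion connections, each of which would need to be traced, identified and stored: one way to see why a full simulation proved out of reach.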
Cerveau : les secrets de l’ultime « terra incognita » (lepoint.fr)
The drugs don’t work : Ara Darzi on why antibiotic resistance could be deadlier than cancer
To get on top of the crisis, stop prescriptions without a proper diagnosis, argues the surgeon and politician (The Economist, 25 septembre, article payant)
Extraits :
I HAVE SPENT much of my career at St Mary’s Hospital, in London, a short walk from the laboratory where in 1928 Sir Alexander Fleming made his epoch-defining discovery of penicillin, the first antibiotic.
Millions of lives have been saved since and the drugs were once thought to have put an end to infectious disease. But that dream has died as bacteria resistant to antibiotics have grown and multiplied. Today untreatable infections, for which there is no antibiotic, cause more than 1m deaths a year worldwide, a toll projected to rise ten-fold by 2050, surpassing all deaths from cancer.
Radical action is needed. For only the second time in its history, the UN General Assembly will meet this week to address this global threat and protect humanity from falling into a post-antimicrobial era in which simple infections kill and routine surgery becomes too risky to perform.
A key problem is that antibiotics are too casually prescribed to people, and too widely used in animal agriculture. This happens because they are cheap and have few immediately harmful effects.
I believe we must set a bold new target: by 2030 no antibiotic should be prescribed without a proper diagnosis that identifies the underlying cause as bacterial infection. (…)
Imagine we had a covid-like test that could be self-administered and would swiftly tell patients and clinicians what they were treating. It would be transformative.
Such tests are becoming available. In June the £8m ($10.4m) Longitude Prize was awarded to a Swedish company, Sysmex Astrego, for developing a test that within 15 minutes can detect which urinary-tract infections are caused by bacteria, and within 45 minutes reveal which antibiotic they are sensitive to.
The challenge in getting the test more widely adopted is that it is currently much more expensive (£25 privately) than antibiotics (measured in pennies). (…)
Anyone anywhere is at risk of contracting a life-threatening, drug-resistant infection. But the crisis is worst in poor and middle-income countries and among patients with multiple medical conditions. Being able to test without the need to access clinics or other traditional health-care settings is crucial to ensuring patients have the information they need to make decisions about their health. (…)
It is eight years since the UN first agreed to stem the growth of drug-resistant infections, but there has been scant progress since. Antibiotics have underpinned medical progress for the past hundred years. We must keep them effective to underpin all that happens in medicine for the next hundred years.
Ara Darzi, Lord Darzi of Denham, is a surgeon, director of the Institute of Global Health Innovation at Imperial College London and chair of the Fleming Initiative. He led the recent report into the performance of the National Health Service in England.
Ara Darzi on why antibiotic resistance could be deadlier than cancer (economist.com)
AI could turn the universe into a realm of darkness, says Yuval Noah Harari
It is all coming to an end. For humanity, for the world: that is the grand story Yuval Noah Harari has been telling for years. In his new book "Nexus" he takes on artificial intelligence. (NZZ, 25 septembre, article payant)
Extraits :
At the very end he does have a piece of good news after all. Or at least a hope. On the penultimate page of his new book Yuval Noah Harari writes that perhaps, despite everything, things will not turn out as badly as he has described, and the destruction of humanity can still be halted. If people find a way to keep in check the powers they have created.
After almost six hundred pages in which the history of humanity is told as a sequence of discoveries, inventions and conquests that have brought mankind to the edge of the abyss, that is cold comfort. Above all because man alone is to blame for the mess, and only he can help himself. God is dead, and nobody believes any longer in the gods who, in the ancient myths, set the world back in order whenever it fell into chaos. Nor is there any sorcerer to hurry to the apprentice's aid. We are our own sorcerers. Because we have made ourselves into our own gods, Harari would say.
Against that background, the confidence that man might pull himself out of the swamp by his own hair sounds almost presumptuous. He has had the past hundred thousand years to do it. There were reasons enough to steer the wagon sliding towards ruin back onto the right path. And the means to do so were placed in his own hands by man himself, the most resourceful of all creatures. But he did not know how to use them. Or used them wrongly.
That is the grand story Harari has been telling for years: it is all coming to an end. For humanity, for the world. Because man oversteps the limits set for him, and in doing so makes himself superfluous. The new book tells the same story, but with heightened urgency. For now, in Harari's view, humanity faces a great decision. And is about to commit yet another devastating mistake.
In "Nexus" the Israeli historian warns of the dangers of artificial intelligence. For him it is the greatest threat humanity has ever faced, because the way we use AI will determine not what our future looks like, but whether humanity has a future at all. Like Harari's earlier books "Sapiens: A Brief History of Humankind" (2013) and "Homo Deus: A History of Tomorrow" (2017), "Nexus" is at bottom not a work of history but a pamphlet. The voice is not that of the history professor at the Hebrew University of Jerusalem but of a well-read, historically versed prophet who sees humanity heading towards its end. Who tries to explain from history why the end is coming, and knows what ought to be done about it. And who is presumably also aware that it is the fate of prophets to go unheeded.
Though on that score Harari can hardly complain. His books have been translated into more than sixty languages and have sold more than forty-five million copies; no historian has ever managed that. "Sapiens" alone reached a print run of twenty-five million. The powerful of the world bring him in as a consultant: Barack Obama recommends his books, Angela Merkel and Emmanuel Macron have met him to discuss the world's problems, Mark Zuckerberg and Bill Gates ask his advice, and at the World Economic Forum he is a welcome guest.
(…) For the step to AI, Harari insists, is not comparable to the technical revolutions that changed the course of history in earlier centuries: the invention of clay tablets in Mesopotamia, for instance, or the introduction of printing or of television.
The decisive difference, for Harari, is that all previous information networks were mere instruments. They spread information, but that information was determined by people, whether news, myths, inventories of goods or entertainment programmes. Now computers can write texts themselves, research and process information themselves, and make decisions.
AI can organise itself and is capable of creating its own information networks. Without humans. (…)
One follows Yuval Noah Harari with a certain pleasure through his excursions, which lead from the Stone Age into the digital future. One is seized with a shudder at the sweep of a gaze that takes in the whole universe and measures history in aeons. And one will agree with Harari that the development of AI demands critical attention. That, however, is not something we heard from him first. And what, concretely, we could do, Harari does not tell us. That is probably not his job either. The seer warns; that is enough. Practical matters are for others to deal with.
Wenn das Universum dunkel wird: Yuval Noah Harari warnt vor KI (nzz.ch)
Rosalind Franklin, the gifted chemist, so unjustly forgotten, to whom we owe the discovery of DNA
She is the great overlooked figure of the Nobel. In 1952 the British chemist identified the structure of DNA. But the credit would go, ten years later, to three other researchers… all men. A story told by Virginie Girod*. (Le Figaro, 23 septembre, article payant)
Extraits :
In 2008 the British scientist Rosalind Franklin was posthumously awarded the Louisa Gross Horwitz Prize, which at last recognised her contribution to fundamental research. One prize she will never receive: the Nobel. Yet it is largely to her that we owe the uncovering of the double-helix structure of DNA in the early 1950s. But in her path stood grasping colleagues who betrayed her.
Born in 1920, Rosalind was a gifted child. Determined, she managed to study at a time when British universities barely tolerated women. In 1950, holding a doctorate and with experience gained in France alongside the Curie family, she joined a laboratory at King’s College London, where she worked on DNA. In that field, everything remained to be discovered. The race for the Nobel was on, and her colleague Maurice Wilkins was its favourite. (…)
Rosalind Franklin was not deprived of her Nobel Prize because she was a woman. She did, however, suffer the immense violence and injustice of the scientific world at a time when the race for honours justified every dirty trick. She died before she could claim the fruits of her work. But fate is sometimes surprising. Today, who remembers her three colleagues, while Rosalind Franklin has been raised to the pinnacle (a plaque was recently installed on Rue Garancière in Paris, where she lived)? She never enjoyed celebrity, but hers is a unique posterity.
Shrink to fit : The semiconductor industry faces its biggest technical challenge yet
As Moore’s law fades, how can more transistors be fitted onto a chip? (The Economist, 19 septembre, article payant)
Extraits :
(…) Gordon Moore’s original observation, in 1965, was that as the making of chips got better, the transistors got smaller, which meant you could make more for less. In 1974 Robert Dennard, an engineer at IBM, noted that smaller transistors did not just lower unit costs, they also offered better performance. As the distance between source and drain shrinks, the speed of the switch increases, and the energy it uses decreases. “Dennard scaling”, as the observation is known, amplifies the amount of good that Moore’s law does.
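The logic of Dennard scaling can be made concrete with a little arithmetic. A sketch under the classic constant-field assumptions (shrink linear dimensions and voltage by a factor k, raise frequency by k; the function name is hypothetical):

```python
def dennard_scale(k):
    """Classic Dennard scaling by a linear factor k (k > 1 means shrink)."""
    capacitance = 1.0 / k   # scales with linear dimension
    voltage = 1.0 / k       # constant-field assumption
    frequency = k           # smaller switches toggle faster
    # Dynamic switching power per transistor: P ~ C * V^2 * f
    power = capacitance * voltage**2 * frequency   # falls as 1/k^2
    area = 1.0 / k**2                              # falls as 1/k^2
    power_density = power / area                   # stays constant
    return power, area, power_density

# Halving feature sizes (k = 2): each transistor uses a quarter of the
# power in a quarter of the area, so power density is unchanged.
power, area, density = dennard_scale(2.0)
```

Once leakage current prevents the voltage from shrinking any further, the `voltage = 1.0 / k` assumption fails and power density climbs with each generation instead of staying flat: the "power wall".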
In 1970 the gate length, a proxy for the distance between the source and drain, was ten microns (ten millionths of a metre, or 10,000nm). By the early 2000s this was down to 90nm. At this level, quantum effects cause current to flow between the two terminals even when the transistor is off. This leakage current increases the power used and causes the chip to heat up.
For chipmakers that was an early indication that their long, sort-of-free ride was ending (see chart). Transistors could still be made smaller but the leakage current placed a limit on how low a chip’s voltage could be reduced. This in turn meant that the chip’s power could not be reduced as before. This “power wall” marked the end of Dennard scaling—transistor sizes shrank, but chip speeds no longer got quicker and their power consumption was now an issue. To keep improving performance, designers started arranging logic gates and other elements of their chips in multiple, connected processing units, or “cores”. With multiple cores, a processor can run many applications simultaneously or run a single application faster by splitting it into parallel streams. (…)
In 2013 Max Shulaker, now at MIT, with Subhasish Mitra and Philip Wong, both of Stanford University, built the first microprocessor using carbon-nanotube (CNT) transistors. The researchers designed an “imperfection-immune” processor that functions even if a certain number of CNTs misbehave. By 2019 Mr Shulaker had devised a microprocessor built with 14,000 CNT transistors (half the number found in the 8086, a chip released by Intel in 1978). In 2023 researchers at Peking University built a transistor using CNTs on manufacturing technology that can be scaled down to the size of a 10nm silicon node. The results may seem basic, but they underscore the potential of CNTs as an alternative to silicon.
In 1959 Richard Feynman, a physicist, gave a lecture that presaged the nanotechnology era. He wondered, “What would happen if we could arrange the atoms one by one the way we want them?” With semiconductor device features now atomic lengths, the world has its answer: build smaller transistors. ■
The semiconductor industry faces its biggest technical challenge yet (economist.com)
The motherlode : Breast milk’s benefits are not limited to babies
Some of its myriad components are being tested as treatments for cancer and other diseases (The Economist, 14 septembre, article payant)
Extraits :
IN A TALK she gave in 2016, Katie Hinde, a biologist from Arizona State University, lamented how little scientific attention was commanded by breast milk. Up until that point, she said, both wine and tomatoes had been far more heavily studied. Eight years on, alas, that remains true.
What is also true—and this was the serious point of Dr Hinde’s talk—is that scientists have been neglecting a goldmine. Unlike wine or tomatoes, breast milk’s physiological properties have been honed by evolution to be healthy. In babies it can reduce inflammation, kill pathogens and improve the health of the immune system. As a result, some components of breast milk are now being studied as potential treatments for a host of adult conditions, including cancer, heart disease, arthritis and irritable bowel syndrome (IBS). Scientists may never look at breast milk in the same way again. (…)
In a recent study of milk from 1,200 mothers on three continents, Dr Azad and her colleagues found roughly 50,000 small molecules, most of them unknown to science. By using artificial-intelligence (AI) models to analyse this list of ingredients, and link them to detailed health data on babies and their mothers, they hope to identify components beneficial for specific aspects of babies’ development. (…)
It is not only the molecules in breast milk that could have health benefits. Until about 15 years ago, says Dr Azad, it was assumed that breast milk was largely sterile. But genetic-sequencing tools have revealed it contains a wide variety of bacteria. Some, such as Bifidobacterium, a particularly beneficial bacterium that feeds exclusively on HMOs (human milk oligosaccharides, sugars abundant in breast milk), can survive the trip into the baby’s gut, where it strengthens the gut barrier; regulates immune responses and inflammation; and prevents pathogenic bacteria from adhering to the lining of the gut. That makes it an ideal candidate for use in probiotics, live bacterial supplements used to remedy the gut’s ecosystem. (…)
Other exciting results have emerged from studying breastfeeding itself. When babies breastfeed, some of the milk ends up in their nasal cavity. It is possible, say scientists, that it could then make its way into the brain. In a small study in 2019 doctors at the Children’s Hospital in Cologne nasally administered maternal breast milk to 16 premature babies with brain injury. The babies subsequently had less brain damage, and required less surgery, than those who did not receive the treatment.
Similar results were reported in May by researchers at the Hospital for Sick Children in Toronto. In a small safety study, they administered intranasal breast milk as a preventive treatment for brain haemorrhage in premature babies; 18 months later, babies so treated had better motor and cognitive development, and fewer vision problems, than those fed only the usual way. Though bigger trials are needed to confirm these results, stem cells in the milk may be repairing some of the damage.
It is too early to tell whether any blockbuster drugs will result. But breast-milk scientists are starting to feel vindicated. For Bruce German from the University of California, the neglect of breast milk will rank “as one of the great embarrassments of scientific history.” ■
Breast milk’s benefits are not limited to babies (economist.com)
The Future of Warfare Is Electronic
An audacious Ukrainian incursion into Russia shows why. Is the Pentagon paying enough attention? (WSJ, 5 septembre, article payant)
Extraits:
The Ukrainian army has launched a stunning offensive into Kursk, Russia, under a shield of advanced electronic weapons. The war in Ukraine is demonstrating that 21st-century conflicts will be won or lost in the arena of electronic warfare.
Think of electronic warfare as casting spells on an invisible battlefield. Combatants strive to preserve their own signals, while disrupting those of the enemy. In Kursk, the Ukrainians took advantage of their technical knowledge to achieve a leap in battlefield tactics. Using a variety of electronic sensing systems, they managed to figure out the key Russian radio frequencies along the invasion route. They jammed these frequencies, creating a series of electronic bubbles that kept enemy drones away from Ukrainian forces, allowing reconnaissance units, tanks and mechanized infantry to breach the Russian border mostly undetected. This is the chaotic way of modern combat: a choreography of lightweight, unmanned systems driven by a spiderweb of electronic signals. (…)
The Russians have so far been unable to dislodge these innovators but have begun using their own jammers to counter the waves of Ukrainian drone fleets supporting them, effectively creating a classic blockade. With the local electronic environment scrambled, Ukrainian drones have difficulty operating. If the Russians succeed, they could isolate the Ukrainian forces on the island. As these struggles reveal, the ultimate prize in modern warfare is spectrum dominance: ensuring one’s own control of drone networks while detecting and denying the adversary’s. (…)
America has a reputation as a global innovator, yet it trails in the dark arts of electronic warfare. Improvised jamming systems and dozens of counter-drone systems have created a spectral environment that the U.S. military isn’t yet prepared to navigate. American drones and munitions frequently can’t overcome the jamming of their guidance systems. Yet we send them to Ukraine, where the Russians often scramble them before they reach their targets. (…)
A military that can’t build a dynamic electronic shield around its own forces will likewise be unable to maneuver in the coming drone wars. Modern electronic-warfare systems mounted on low-cost drones are now as necessary as munitions. New companies are in the early stages of building the right weapons but need the Pentagon to recognize the same future—and spend accordingly.
We aren’t the only ones watching Ukraine. China moves at the speed of war, while the U.S. moves at the speed of bureaucracy. If we retool our approach to electronic warfare, America will tip the scales in favor of deterrence and, if necessary, victory. If not, we will be subject to the harsh lessons inevitably faced by those who fight the last war.
Mr. Smith is a former U.S. Army attack aviator and officer of the 160th Special Operations Aviation Regiment. Mr. Mintz, an aerospace engineer, was founding CEO of the defense startups Epirus, Spartan Radar and now CX2.
The Future of Warfare Is Electronic – WSJ
Top economist: EU must invest more in high tech or lose out to the United States
Philippe Aghion has had a critical influence on the economic understanding of growth, innovation, and the rise and fall of companies. The French native, who taught at Harvard University for many years, believes that Europe must change if it wants to keep up with the United States. (NZZ, entretien, 5 septembre, article payant)
Extraits:
The CEO of Norway’s sovereign wealth fund, Nicolai Tangen, wants to invest more in the United States, and less in Europe. He says that Europeans are too lazy, and no longer want to work hard. What do you say to that, Mr. Aghion?
Mr. Tangen is absolutely right in many respects. Europe regulates too much and too rigidly, has too little presence in the high-tech sector, and makes too few truly groundbreaking innovations.
Why is that?
Europe is a giant when it comes to regulation. In this regard, Europe’s politicians don’t have bad intentions. They are simply trying to drive forward political integration with economic regulations, trying to unite Europe’s nation states more closely into a single entity.
Are you thinking of the monetary union?
The monetary union was adopted as an instrument of political integration. Policymakers then wanted to ensure that the member states didn’t overspend. They have tried to do this with too many overly strict rules. That hasn’t worked. There is the 3% deficit ceiling, which does not differentiate between whether a state simply wants to spend money or wants instead to invest in the future. And there is a competition policy that is too strict, in that it prohibits all state aid. It would be better to allow states to engage actively in industrial policy, but ensure that this is done in a competition-friendly manner.
Are you really telling me that the EU is falling behind the United States because EU states can spend too little money?
Like China, the United States is pursuing a very deliberate industrial policy. The big risk is that Europe will fall behind.
Europe’s productivity has grown more slowly than America’s since the end of the 1990s.
Europe has at least partially missed out on the IT revolution. Why has that happened? America invests much more in cutting-edge high-tech technology than Europe does. The two spend a similar amount on research, but the U.S. focuses much more on high-tech and breakthrough innovations that have helped its leading tech companies grow. (…)
Economist: EU must invest more in high-tech innovation (nzz.ch)
Yuval Harari: What Happens When the Bots Start Competing for Your Love? (NYT, tribune, 4 septembre, quelques articles gratuites / sem.)
Extraits:
Democracy is a conversation. Its function and survival depend on the available information technology. For most of history, no technology existed for holding large-scale conversations among millions of people. In the premodern world, democracies existed only in small city-states like Rome and Athens, or in even smaller tribes. Once a polity grew large, the democratic conversation collapsed, and authoritarianism remained the only alternative.
Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.
This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election. A similar breakdown is happening in numerous other democracies around the world, from Brazil to Israel and from France to the Philippines.
In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with each other, and even more so the ability to listen.
As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information. But the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human. (…)
The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.
However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation. Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them. (…)
In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people. What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs? (…)
A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots.
Opinion | Yuval Harari: A.I. Threatens Democracy – The New York Times (nytimes.com)
How Self-Driving Cars Get Help From Humans Hundreds of Miles Away (NYT, 4 septembre, quelques articles gratuites / sem.)
Extraits:
In places like San Francisco, Phoenix and Las Vegas, robot taxis are navigating city streets, each without a driver behind the steering wheel. Some don’t even have steering wheels:
But cars like this one in Las Vegas are sometimes guided by someone sitting here:
This is a command center in Foster City, Calif., operated by Zoox, a self-driving car company owned by Amazon. Like other robot taxis, the company’s self-driving cars sometimes struggle to drive themselves, so they get help from human technicians sitting in a room about 500 miles away.
Inside companies like Zoox, this kind of human assistance is taken for granted. Outside such companies, few realize that autonomous vehicles are not completely autonomous.
For years, companies avoided mentioning the remote assistance provided to their self-driving cars. The illusion of complete autonomy helped to draw attention to their technology and encourage venture capitalists to invest the billions of dollars needed to build increasingly effective autonomous vehicles. (…)
See How Humans Help Self-Driving Cars Navigate City Streets – The New York Times (nytimes.com)
Why America’s tech giants have got bigger and stronger
Whatever happened to creative destruction? (The Economist, 23 août, article payant)
Extraits:
When your columnist first started writing Schumpeter in early 2019, he had a romantic idea of travelling the world and sending “postcards” back from faraway places that chronicled trends in business, big and small. In his first few weeks, he reported from China, where a company was using automation to make fancy white shirts; Germany, where forest-dwellers were protesting against a coal mine; and Japan, where a female activist was making a ninja-like assault on corporate governance. All fun, but small-bore stuff. Readers, his editors advised him, turn to this column not for its generous travel budget but for its take on the main business stories of the day. So he pivoted, adopting what he called the Linda Evangelista approach. From then on, he declared, he would not get out of bed for companies worth less than $100bn.
This is his final column and, as he looks back, that benchmark seems quaint. At the time, the dominant tech giants were already well above it. Microsoft was America’s biggest company, worth $780bn, closely followed by its big-tech rivals: Apple, Amazon, Alphabet and Meta. Their total value back then was $3.4trn. Today the iPhone-maker alone exceeds that.
Since early 2019 the combined worth of the tech giants has more than tripled, to $11.8trn. Add in Nvidia, the only other American firm valued in the trillions, thanks to its pivotal role in generative artificial intelligence (AI), and they fetch more than one and a half times the value of America’s next 25 firms put together. That includes big oil (ExxonMobil and Chevron), big pharma (Eli Lilly and Johnson & Johnson), big finance (Berkshire Hathaway and JPMorgan Chase) and big retail (Walmart). In other words, while the tech illuminati have grown bigger and more powerful, the rest lag ever further behind.
It is tempting to view this as an aberration. This column is named after Joseph Schumpeter, the late Austrian-American economist who made famous the concept of creative destruction—the relentless tide of disruptive innovation toppling old orders and creating new ones. Surely these tech firms, founded decades ago in dorms, garages and dingy offices, should be vulnerable to the same Schumpeterian forces that they once unleashed on their industrial forebears.
But creative destruction, at least as framed by the original Schumpeter, is more complicated than that. To be sure, he revered entrepreneurs. He considered them, as we do today, the cult heroes of business, driving the economy forward with new products and ways of doing things. But late in life, after he had witnessed decades of dominance by big American corporations, he changed his tune. He decided that large firms, even monopolies, were the big drivers of innovation. They had the money to invest in new technology, they attracted the best brains—and they had most to lose if they did not stay alert. That may disappoint those who see business as a David v Goliath struggle of maverick upstarts against managerial apparatchiks. But it was prescient. It helps explain why today’s tech Goliaths vastly outspend, buy up and outflank startups before they get the chance to sling a stone. (…)
Why America’s tech giants have got bigger and stronger (economist.com)
The new gender gap : Why don’t women use artificial intelligence?
Even when in the same jobs, men are much more likely to turn to the tech (The Economist, 22 August, paid article)
Excerpts:
Be more productive. That is how ChatGPT, a generative-artificial-intelligence tool from OpenAI, sells itself to workers. But despite industry hopes that the technology will boost productivity across the workforce, not everyone is on board. According to two recent studies, women use ChatGPT between 16 and 20 percentage points less than their male peers, even when they are employed in the same jobs or read the same subject.
The first study, published as a working paper in June, explores ChatGPT at work. Anders Humlum of the University of Chicago and Emilie Vestergaard of the University of Copenhagen surveyed 100,000 Danes across 11 professions in which the technology could save workers time, including journalism, software-developing and teaching. The researchers asked respondents how often they turned to ChatGPT and what might keep them from adopting it. By exploiting Denmark’s extensive, hooked-up record-keeping, they were able to connect the answers with personal information, including income, wealth and education level.
Across all professions, women were less likely to use ChatGPT than men who worked in the same industry (see chart 1). For example, only a third of female teachers used it for work, compared with half of male teachers. Among software developers, almost two-thirds of men used it while less than half of women did. This gap shrank only slightly, to 16 percentage points, when directly comparing people in the same firms working on similar tasks. The study concludes that a lack of female confidence may be in part to blame: women who did not use AI were more likely than men to highlight that they needed training to use the technology. (…)
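The gaps above are in percentage points (the difference between two shares), not percent. A quick check of the article's rough fractions (these are the quoted approximations, not the study's raw data):

```python
# Adoption gaps implied by the article's approximate fractions,
# in percentage points: the difference between two shares, times 100.
teacher_gap = (1/2 - 1/3) * 100   # half of male vs a third of female teachers
dev_gap = (2/3 - 1/2) * 100       # "almost two-thirds" vs "less than half"
print(round(teacher_gap), round(dev_gap))  # both about 17 points
```

Both raw gaps sit near 17 points, consistent with the 16-20-point range the studies report before and after controlling for firm and task.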
Why don’t women use artificial intelligence? (economist.com)
Artificial intelligence : Mark Zuckerberg and Daniel Ek on why Europe should embrace open-source AI
It risks falling behind because of incoherent and complex regulation, say the two tech CEOs (The Economist, 22 August, op-ed, paid article)
Excerpts:
THIS IS AN important moment in technology. Artificial intelligence (AI) has the potential to transform the world—increasing human productivity, accelerating scientific progress and adding trillions of dollars to the global economy.
But, as with every innovative leap forward, some are better positioned than others to benefit. The gaps between those with access to build with this extraordinary technology and those without are already beginning to appear. That is why a key opportunity for European organisations is through open-source AI—models whose weights are released publicly with a permissive licence. This ensures power isn’t concentrated among a few large players and, as with the internet before it, creates a level playing field. (…)
Regulating against known harms is necessary, but pre-emptive regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation. Europe’s risk-averse, complex regulation could prevent it from capitalising on the big bets that can translate into big rewards.
Take the uneven application of the EU’s General Data Protection Regulation (GDPR). This landmark directive was meant to harmonise the use and flow of data, but instead EU privacy regulators are creating delays and uncertainty and are unable to agree among themselves on how the law should apply. For example, Meta has been told to delay training its models on content shared publicly by adults on Facebook and Instagram—not because any law has been violated but because regulators haven’t agreed on how to proceed. In the short term, delaying the use of data that is routinely used in other regions means the most powerful AI models won’t reflect the collective knowledge, culture and languages of Europe—and Europeans won’t get to use the latest AI products.
These concerns aren’t theoretical. Given the current regulatory uncertainty, Meta won’t be able to release upcoming models like Llama multimodal, which has the capability to understand images. That means European organisations won’t be able to get access to the latest open-source technology, and European citizens will be left with AI built for someone else.
The stark reality is that laws designed to increase European sovereignty and competitiveness are achieving the opposite. This isn’t limited to our industry: many European chief executives, across a range of industries, cite a complex and incoherent regulatory environment as one reason for the continent’s lack of competitiveness.
Europe should be simplifying and harmonising regulations by leveraging the benefits of a single yet diverse market. Look no further than the growing gap between the number of homegrown European tech leaders and those from America and Asia—a gap that also extends to unicorns and other startups. Europe needs to make it easier to start great companies, and to do a better job of holding on to its talent. Many of its best and brightest minds in AI choose to work outside Europe.
In short, Europe needs a new approach with clearer policies and more consistent enforcement. With the right regulatory environment, combined with the right ambition and some of the world’s top AI talent, the EU would have a real chance of leading the next generation of tech innovation. (…)
While we can all hope that with time these laws become more refined, we also know that technology moves swiftly. On its current course, Europe will miss this once-in-a-generation opportunity. Because the one thing Europe doesn’t have, unless it wants to risk falling further behind, is time. ■
Mark Zuckerberg and Daniel Ek on why Europe should embrace open-source AI (economist.com)
How Did the First Cells Arise? With a Little Rain, Study Finds.
Researchers stumbled upon an ingredient that can stabilize droplets of genetic material: water. (NYT, 22 August, a few free articles per week)
Excerpts:
Rain may have been an essential ingredient for the origin of life, according to a study published on Wednesday.
Life today exists as cells, which are sacs packed with DNA, RNA, proteins and other molecules. But when life arose roughly four billion years ago, cells were far simpler. Some scientists have investigated how so-called protocells first came about by trying to recreate them in labs.
Many researchers suspect that protocells contained only RNA, a single-stranded version of DNA. Both RNA and DNA store genetic information in their long sequences of molecular “letters.”
But RNA can also bend into intricate shapes, turning itself into a tool for cutting or joining other molecules together. Protocells might have reproduced if their RNA molecules grabbed genetic building blocks to assemble copies of themselves. (…)
Dr. Agrawal discovered that the water was responsible for keeping the droplets stable. The water coaxed the molecules in the outer layer of the droplets to link together. “You can imagine a mesh forming around these droplets,” said Dr. Agrawal, now a postdoctoral researcher at the Pritzker School of Molecular Engineering at the University of Chicago. (…)
But rain on the early Earth most likely had a different chemistry from rain today, because it formed in an atmosphere with a different balance of gases. The high level of carbon dioxide believed to be in the air four billion years ago would have made raindrops more acidic. Dr. Agrawal and his colleagues found they could still form stable RNA droplets with water as acidic as vinegar.
Neal Devaraj, a chemical biologist at the University of California, San Diego, who was not involved in the new study, said that it could shed light on the origin of life because the researchers didn’t have to do all that much to make stable RNA droplets: just mix and shake.
“It’s something you can imagine happening on the early Earth,” he said. “Simple is good when you’re thinking about these questions.”
How Did the First Cells Arise? With a Little Rain, Study Finds. – The New York Times (nytimes.com)
Bad news, red wine drinkers: alcohol is only ever bad for your health
We needn’t be puritanical about having a drink, but we can no longer deny that it harms us, even in small quantities (The Guardian, 22 August, free access)
Excerpts:
To say yes to that glass of wine or beer, or just get a juice? That’s the question many people face when they’re at after-work drinks, relaxing on a Friday night, or at the supermarket thinking about what to pick up for the weekend. I’m not here to opine on the philosophy of drinking, and how much you should drink is a question only you can answer. But it’s worth highlighting the updated advice from key health authorities on alcohol. Perhaps it will swing you one way or the other.
It’s well-known that binge-drinking is harmful, but what about light to moderate drinking? In January 2023, the World Health Organization came out with a strong statement: there is no safe level of drinking for health. The agency highlighted that alcohol causes at least seven types of cancer, including breast cancer, and that ethanol (alcohol) directly causes cancer when our cells break it down.
Reviewing the current evidence, the WHO notes that no studies have shown any beneficial effects of drinking that would outweigh the harm it does to the body. A key WHO official noted that the only thing we can say for sure is that “the more you drink, the more harmful it is – or, in other words, the less you drink, the safer it is”. It makes little difference to your body, or your risk of cancer, whether you pay £5 or £500 for a bottle of wine. Alcohol is harmful in whatever form it comes in. (…)
Bad news, red wine drinkers: alcohol is only ever bad for your health | Devi Sridhar | The Guardian
Keeping your marbles : How to reduce the risk of developing dementia
A healthy lifestyle can prevent or delay almost half of cases (The Economist, 6 August, paid article)
Excerpts:
Some of the best strategies for reducing the chances of developing dementia are, to put it kindly, impracticable: don’t grow old; don’t be a woman; choose your parents carefully. But although old age remains by far the biggest risk factor, women are more at risk than men and some genetic inheritances make dementia more likely or even almost inevitable, the latest research suggests that as many as 45% of cases of dementia are preventable—or at least that their onset can be delayed.
That is the conclusion of the latest report, published on July 31st, of the Lancet commission on dementia, which brings together leading experts from around the world, and enumerates risk factors that, unlike age, are “modifiable”. It lists 14 of these, adding two to those in its previous report in 2020: untreated vision loss; and high levels of LDL cholesterol. Most news about dementia seems depressing, despite recent advances in treatments for some of those with Alzheimer’s disease, much the most common cause of the condition. Most cases remain incurable and the numbers with the condition climb inexorably as the world ages. That the age-related incidence of dementia can actually be reduced is a rare beacon of hope.
The modifiable risk factors include: smoking, obesity, physical inactivity, high blood pressure, diabetes and drinking too much alcohol (see chart). The best way a person can reduce their risk of developing dementia is to lead what has long been identified as a healthy life: avoiding tobacco and too much alcohol and taking plenty of exercise (but avoiding forms of it that involve repeated blows to the head or bouts of concussion, like boxing, American football, rugby and lacrosse).
It also means having a good diet, defined in one study cited by the commission as: “eat at least three weekly servings of all of fruit, vegetables and fish; rarely drink sugar-sweetened drinks; rarely eat prepared meat like sausages or have takeaways.” So it is not surprising that LDL cholesterol has been added to its not-to-do list. It is also important to exercise the brain: by learning a musical instrument or a foreign language, for example—or even by doing crossword and sudoku puzzles.
Some of the modifiable risk factors are in fact far beyond any individual’s control. For example, it makes a big difference how many years of education someone has had. Broadly speaking, the higher the level of educational attainment, the lower the risk of dementia. And the only way to escape another risk factor—polluted air—is to move. (…)
Nevertheless, there is plenty of evidence to show that the risk factors outlined by the commission are salient. In the rich West, for example, the incidence rate of dementia has declined by 13% per decade over the past 25 years, consistently across studies. Gill Livingston, a professor in the psychiatry of older people at University College London and leader of the Lancet commission, has summed up the evidence of progress in North America and Europe as “a 25% decrease in the past 20 years”. That can only be as a result of changes in modifiable risk factors.
Despite the upbeat tone of the commission’s report, in some countries, such as China and Japan, the age-related incidence of dementia is climbing. In Japan, the overall age-adjusted prevalence rate doubled from 4.9% in 1985 to 9.6% in 2014. And according to the China Alzheimer Report of 2022, the incidence of Alzheimer’s in China had “steadily increased”, making it the fifth-most important cause of death in the country.
So nobody doubts that the prevalence of dementia is going to climb fast in the next decades as humanity ages. All the more reason for dementia-risk reduction to become a global policy priority. ■
How to reduce the risk of developing dementia (economist.com)
Alzheimer’s disease: two new risk factors identified
45% of dementia cases could be avoided through preventive measures, according to an international working group that has identified 14 parameters with a measurable influence on the onset of the disease. (Le Figaro, 1 August, paid article)
Excerpts:
(…) The 57-page report updates an earlier study. In 2020 the scientists had identified 12 risk factors: a low level of education, hearing loss, hypertension, smoking, obesity, depression, physical inactivity, diabetes, excessive alcohol consumption (defined as more than 17 drinks a week), traumatic brain injury, air pollution and social isolation. “The evidence has accumulated and is now stronger” on these levers of prevention, the researchers say. After a fresh review of the literature, they add two new risk factors: untreated vision loss and a high level of LDL cholesterol (the “bad” cholesterol). (…)
More generally, the study’s authors note that overall health has an impact on the onset of these cognitive disorders. Physical, social and intellectual activities have been shown to strengthen “cognitive reserve” (or “brain resilience”), which delays the appearance of symptoms in individuals who nonetheless show neurological alterations.
Maladie d’Alzheimer: deux nouveaux facteurs de risque identifiés (lefigaro.fr)
The power of algorithms : Should we listen to Elon Musk when he worries about the major ideological biases of the main AI tools?
Behind artificial-intelligence algorithms lie ideological and political biases that have heavy consequences for society. (Atlantico, 1 August, a few free articles per week)
Excerpts:
Atlantico: Elon Musk has raised questions about the ideological biases of artificial-intelligence tools, querying the answers they offer to questions about the assassination attempt against Donald Trump and the election campaign in the United States. As Americans prepare to vote in November, do AI tools contain ideological biases? Can these tools have consequences within society, or even in politics? Can AI influence voters?
Fabrice Epelboin: From the moment AI became the intermediary between the general public and information, between the general public and knowledge, it affects and influences society by its very nature. AI now occupies a role in society that once belonged to the national education system and to the media. That role of intermediary between citizens and information and knowledge has been lost by the media and the education system. Intermediation algorithms now play the most important part. That is why TikTok, Twitter and Facebook are singled out. They have an absolutely central social role, especially among the youngest, who never knew the old great gatekeepers of knowledge, such as municipal libraries, or who were not lucky enough to be born into sufficiently privileged and cultivated circles. All of that has been replaced by algorithms.
This therefore has an absolutely colossal societal impact. This societal shift, and the biases, notably political ones, at work at the heart of AI, have a phenomenal impact on how each of us forms an idea of the world. To form an idea of the world, everyone now goes through these algorithms. (…)
Does this reflect a form of ideological hold on the part of certain tech giants, or a desire to influence society?
The tech giants are an intermediary with knowledge. Google is an intermediary with knowledge. Facebook and Twitter are intermediaries with information. These intermediaries have taken over from the old intermediaries, which were the media and the press.
Today, the main intermediaries with knowledge and information are artificial-intelligence algorithms. Holding such an important position in society has given rise to a multitude of abuses by companies of varying degrees of benevolence.
This intermediary role gives them a political impact on society.
Men are spending more time looking after their children – and it’s not just cultural, it’s in their genes
New research turns on its head the idea that the cascade of hormones brought on by parenthood is limited to mothers (The Guardian, 30 July, opinion, free access)
Excerpt:
(…) Sarah Blaffer Hrdy, another great US anthropologist, points out in her recent book Father Time: A Natural History of Men and Babies that although there are obvious biological differences between men and women, we have almost the same genes and very similar brains. Consequently, men’s bodies retain the potential to do things typically associated with women, and vice versa.
A striking example of this is men’s hormonal response to fatherhood. When dads have prolonged periods of intimacy with babies, their bodies react in similar ways to new mums. Prolactin and oxytocin levels rapidly rise. Levels of testosterone – the male sex hormone – fall.
This is the biochemical basis of the philosopher Roman Krznaric’s observation that fatherhood increased his emotional range “from a meagre octave to a full keyboard of human feelings”. Less poetically, it is why I feel ecstatic when my toddler does a poo, and burst into tears when Clay Calloway walks on stage towards the end of Sing 2.
The maternal endocrine response – the hormone changes women experience during and after pregnancy – arises in the subcortex, the part of the brain that is common to all vertebrates and has remained largely unchanged for millions of years. Hrdy argues that the evolutionary origins of this response can in fact be traced back to male fish.
Piscine mums tend to lay their eggs and then forage for food in preparation to produce more eggs. It won’t surprise anyone who has watched Finding Nemo that fish dads often hover near nests to nurture and protect eggs they have fertilised. In nature, mothers are not always the primary carers; in many instances, it is the father’s role.
The prize for the best fish dads in the world goes to species from the Syngnathidae family. Female seahorses, pipefish and sea dragons inject their eggs into the male’s brood pouch, where they are fertilised and incubated. Not only do daddy Syngnathidae gestate and give birth, but the hormones involved are very similar to those regulating human pregnancies. Prolactin promotes the enzyme that breaks down the egg membranes, creating a nourishing fluid that the embryos feast on; and labour is stimulated by the fishy equivalent of oxytocin.
Human fatherhood is not this full-on, but when culture, choice or happenstance gives men caring responsibilities for infants, it triggers a similar endocrine response to mothers. Oxytocin and prolactin course through the brain, enhancing the father’s emotional wellbeing and social connections. For many fathers spending time with their baby, sharing the burden with their partner, or doing their bit to bring down the patriarchy is enough of a reward. But now we know there is another benefit: access to a part of the human experience that until recently was assumed to be closed to men.
For too long, simplistic interpretations of biology have been used to argue that traditional gender roles, in which women take on primary responsibility for childcare, are natural and immutable. We now know that biology can, in fact, free women and men from these binary straitjackets.
Jonathan Kennedy teaches politics and global health at Queen Mary University of London and is the author of Pathogenesis: How Germs Made History
Artificial Intelligence Gives Weather Forecasters a New Edge
The brainy machines are predicting global weather patterns with new speed and precision, doing in minutes and seconds what once took hours. (NYT, 30 July, a few free articles per week)
Excerpt:
(…) The Texas prediction offers a glimpse into the emerging world of A.I. weather forecasting, in which a growing number of smart machines are anticipating future global weather patterns with new speed and accuracy. In this case, the experimental program was GraphCast, created in London by DeepMind, a Google company. It does in minutes and seconds what once took hours.
“This is a really exciting step,” said Matthew Chantry, an A.I. specialist at the European Center for Medium-Range Weather Forecasts, the agency that got upstaged on its Beryl forecast. On average, he added, GraphCast and its smart cousins can outperform his agency in predicting hurricane paths.
In general, superfast A.I. can shine at spotting dangers to come, said Christopher S. Bretherton, an emeritus professor of atmospheric sciences at the University of Washington. For treacherous heats, winds and downpours, he said, the usual warnings will be “more up-to-date than right now,” saving untold lives.
Rapid A.I. weather forecasts will also aid scientific discovery, said Amy McGovern, a professor of meteorology and computer science at the University of Oklahoma who directs an A.I. weather institute. She said weather sleuths now use A.I. to create thousands of subtle forecast variations that let them find unexpected factors that can drive such extreme events as tornadoes. (…)
“It’s a turning point,” said Maria Molina, a research meteorologist at the University of Maryland who studies A.I. programs for extreme-event prediction. “You don’t need a supercomputer to generate a forecast. You can do it on your laptop, which makes the science more accessible.” (…)
“With A.I. coming on so quickly, many people see the human role as diminishing,” Mr. Rhome added. “But our forecasters are making big contributions. There’s still very much a strong human role.”
Critical moment : AI can predict tipping points before they happen
Potential applications span from economics to epidemiology to war (The Economist, 29 July, paid article)
Excerpt:
ANYONE CAN spot a tipping point after it’s been crossed. Also known as critical transitions, such mathematical cliff-edges influence everything from the behaviour of financial markets and the spread of disease to the extinction of species. The financial crisis of 2007-09 is often described as one. So is the moment that covid-19 went global. The real trick, therefore, is to spot them before they happen. But that is fiendishly difficult.
Computer scientists in China now show that artificial intelligence (AI) can help. In a study published in the journal Physical Review X, the researchers accurately predicted the onset of tipping points in complicated systems with the help of machine-learning algorithms. The same technique could help solve real-world problems, they say, such as predicting floods and power outages, buying valuable time.
To simplify their calculations, the team reduced all such problems to ones taking place within a large network of interacting nodes, the individual elements or entities within a large system. In a financial system, for example, a node might represent a single company, and a node in an ecosystem could stand for a species. The team then designed two artificial neural networks to analyse such systems. The first was optimised to track the connections between different nodes; the other, how individual nodes changed over time. (…)
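The excerpt only sketches the paper’s architecture, so here is an illustration of the underlying phenomenon such predictors exploit rather than the authors’ method: near a saddle-node tipping point, a noisy system recovers from perturbations more slowly, so its fluctuations grow (“critical slowing down”). A minimal NumPy sketch, with dynamics, noise level and parameter values chosen purely for illustration:

```python
import numpy as np

def simulate(r, steps=20_000, dt=0.01, noise=0.05, seed=0):
    """Euler-Maruyama simulation of dx = (r + x**2) dt + noise dW,
    the saddle-node normal form: for r < 0 the stable state is -sqrt(-r)
    and the tipping point sits at r = 0."""
    rng = np.random.default_rng(seed)
    x = -np.sqrt(-r)                      # start at the stable equilibrium
    xs = np.empty(steps)
    for i in range(steps):
        x += (r + x**2) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        x = min(x, 0.0)                   # keep x below the unstable branch
        xs[i] = x
    return xs

far = simulate(r=-1.0)    # far from the tipping point
near = simulate(r=-0.1)   # close to it: recovery slows, fluctuations grow
print(np.var(near) > np.var(far))  # variance rises as the transition nears
```

Learned detectors generalise this idea: instead of hand-picking variance or autocorrelation as the warning indicator, the two networks can learn whatever statistics of the node trajectories and their couplings best precede a transition.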
As with many AI systems, only the algorithm knows what specific features and patterns it identifies to make these predictions. Gang Yan at Tongji University in Shanghai, the paper’s lead author, says his team are now trying to discover exactly what they are. That could help improve the algorithm further, and allow better predictions of everything from infectious outbreaks to the next stockmarket crash. Just how important a moment this is, though, remains difficult to predict. ■
AI can predict tipping points before they happen (economist.com)
How does a champion’s brain work?
According to neurobiologist Jean-Philippe Lachaux, a research director at Inserm and a specialist in attention, top-level athletes have exceptional intellectual abilities (Madame Figaro, 27 July, free access)
Excerpt:
Head and legs. Neurons and muscle. For a long time, intellectual and physical abilities were treated as separate. Dead wrong! In his new book, Dans le cerveau des champions*, Jean-Philippe Lachaux shows that athletes have brains like no others.
The champion’s first superpower is the ability to be “focused”. “This hyperconcentration is that of the tennis player on the ball; of the ultratrail athlete Julien Chorier, concentrated enough on his race not to be disturbed by his competitors; of the pianist Frank Braley, who is ‘like a neutron star’ as he plays each note,” says Jean-Philippe Lachaux. This first level, mobilised in the rear zone of the prefrontal cortex, comes with a global view of the whole. One must “keep in mind an idea of the whole pianistic, artistic or sporting game, have in mind the sequence of moves that will lead to victory. This double level of concentration is one of the champion’s strengths.” To reach this level, one must first master the technique. “The more a movement is automated, the more the brain is freed to concentrate on everything else: the opponent’s game, adapting to the terrain, and so on. That is what happens in top-level sport, and even in the theatre: an actor must know the text by heart in order to then transcend it and imprint all his emotions on it,” the neurobiologist explains. Hence the importance of training. “I champion repetition and learning by heart, in education too: it is the sine qua non of success,” maintains Jean-Philippe Lachaux. (…)
Le cerveau d’un champion, ça marche comment ? (lefigaro.fr)
Africa 2.0 : How to ensure Africa is not left behind by the AI revolution
Weak digital infrastructure is holding the continent back (The Economist, 26 July, paid article)
Excerpts:
More than two decades ago The Economist calculated that all of Africa had less international bandwidth than Brazil. Alas, until 2023 that was still true. Africa’s lack of connectivity is one reason its people could miss out on the benefits promised by artificial intelligence (AI).
For decades, experts have called for better broadband across Africa, citing the gains in productivity and employment. But the economic potential of AI, and its insatiable computing appetite, have renewed the case for urgent investment in the physical sinews needed to sustain a new digital revolution.
Fortunately, Africa has a home-grown model it can emulate. Its embrace of mobile phones in the early 2000s was a stunning feat of economic liberalisation. In most parts of Africa, businesses and consumers used to have to wait years to get a fixed-line phone. Nigeria, which has Africa’s biggest population (now more than 220m-strong), had just 450,000 phone lines in 1999, of which perhaps a third were on the blink. But when governments allowed privately owned mobile-phone companies to offer their services, they rapidly displaced the lethargic state-owned telcos.
It was a lesson in development done effectively, but frugally. (…)
Alas, this spectacular leapfrogging has downsides. The focus on mobile is one reason behind Africa’s underinvestment in fast fibre-optic connections. Although mobile phones have enabled mobile money and government services, such as digital ID, they can take economies only so far, especially when most parts of Africa have relatively slow 2G or 3G networks.
Fibre can carry more traffic, and faster. This allows seamless video calls, reduces dizzying lags in augmented-reality apps for, say, training surgeons and lets people interact with ai chatbots and other online services. Yet Africa is poorly served by subsea internet cables. Moreover, much of the internet bandwidth that lands on the coasts is wasted because of a lack of high-capacity overland cables to carry it to the interior. Worse, the continent does not have enough data centres—the brick-and-mortar sites where cloud computing happens. The Netherlands, population 18m, has more of these than all of Africa and its 1.5bn people. As a result, data must cross half the world and back, leading to painful delays. If Africans are to do movie animation, run sophisticated weather forecasts or train large language models with local content, they will need more computing capacity closer to home.
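The “painful delays” are mostly simple physics: signals in optical fibre travel at roughly two-thirds the speed of light in vacuum, so distance alone sets a latency floor. A back-of-the-envelope sketch (the distances are rough assumed figures, not measured routes):

```python
# Latency floor from distance alone: light in optical fibre travels at
# roughly 200,000 km/s (about two-thirds of c in vacuum).
SPEED_IN_FIBRE_KM_S = 200_000

def round_trip_ms(km_one_way):
    """Best-case round-trip time in milliseconds over a fibre path."""
    return 2 * km_one_way / SPEED_IN_FIBRE_KM_S * 1000

print(round(round_trip_ms(5_000)))  # e.g. West Africa to Europe: ~50 ms
print(round(round_trip_ms(100)))    # an in-country data centre: ~1 ms
```

Real latencies are higher still, since cables do not follow great circles and routing equipment adds delay; the point is that a local data centre removes tens of milliseconds that no amount of extra bandwidth can buy back.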
To fix this, governments should learn from the mobile boom and cut red tape. Starlink, a satellite-internet firm, could be a stopgap, but regulators have blocked it in at least seven countries including South Africa. Heavy taxes on data access drive up costs for consumers, discouraging them from using it and firms from investing in providing it. Governments could do much to help simply by getting out of the way.
Development institutions, for their part, should be doing more to help finance this vital infrastructure because of its widespread benefits for growth and employment. The new digital revolution will create opportunities for Africa to catch up with rich countries. But if the continent lacks the right infrastructure, it will instead fall further behind. ■
How to ensure Africa is not left behind by the AI revolution (economist.com)
How tech has revolutionised war
Lasers, drone swarms, hypersonic missiles… These new weapons are about to upend warfare (Le Point, 25 juillet, article payant)
Extraits :
The art of war is not immune to the acceleration of history. Where it took decades, even centuries, to invent a new metal alloy or change the shape of a shield in antiquity, today a drone becomes obsolete on the battlefield within six months. "A single invention that changes the game all by itself no longer exists, except perhaps the atomic bomb," warns Léo Péria-Peigné, a researcher at the Observatoire des conflits futurs of the French Institute of International Relations (IFRI).
Farewell, then, to the famous game changers, those weapons supposed to deliver a decisive and definitive advantage. "War remains a duel in which there is no miracle solution, only a combination of weapons systems, all of them necessary," adds the author of Géopolitique de l’armement (Le Cavalier bleu). Nevertheless, in every domain, inventions are set to radically transform the conduct of war. Emblematic of this revolution, artificial intelligence (AI) "will permeate every dimension of our work," says General Pierre Schill, chief of staff of the French army, who welcomes the creation last March of the defence ministry's AI agency (Amiad).
"In ten to fifteen years, a third of the American army will be robotised and largely controlled by AI-equipped systems," predicted General Mark Milley, former chairman of the US Joint Chiefs of Staff under Presidents Trump and then Biden, at a conference on July 15th 2024. In the United States, as in China, thousands of engineers are working on algorithms devoted to intelligence analysis, automated surveillance of enemy movements, mission control for drone swarms, and predictive maintenance of the most valuable assets such as aircraft, ships and tanks. Almost everything can be handled by an AI in a fraction of a second; it then falls to humans to keep up with the pace set by the machine. (…)
Comment la tech a révolutionné la guerre (lepoint.fr)
Lights out: Are we prepared for the next global tech shutdown? – opinion
Every organization must know how to continue “business as usual” even in an emergency, even without computers. (The Jerusalem Post, 24 juillet, article payant)
Extraits :
On Friday, the world woke up to the announcement of a global disruption affecting cross-sector operations. Hospitals, health clinics, and banks were affected, airlines grounded their planes, broadcasting companies couldn’t broadcast (Sky News went off the air), emergency numbers like 911 in the US were unreachable, and even here in Israel, MDA experienced numerous issues.
This event had an impact in the US, Australia, and Europe. Critical infrastructure alongside many business operations came to a halt. In Israel, we immediately connected the event to warfare, to the UAV that arrived from Yemen and exploded in Tel Aviv, assuming that Iran was attacking in the cyber dimension.
What exactly happened? And how can one mistake impact the entire world?
Let’s begin with the facts: An American company based in Texas named CrowdStrike, which provides a cybersecurity protection system installed in many companies around the world, announced on Friday morning that there was an issue with the latest version of its system released to customers. The problem caused Windows, Microsoft’s operating system, not to load, displaying a blue screen. Consequently, all organizational systems installed and based on that operating system did not load either. In other words, the organization was paralyzed.
But the issue didn’t end there. During the repair actions distributed by the company, hackers “jumped on the bandwagon,” posing as company employees and distributing instructions that essentially meant inserting malicious code into the organization and deleting its databases. This was the second derivative of the event. (…)
It seems that the world has become much more global and technological than humans want to think about or believe. And yes, a keyboard mistake by one employee in one company can affect the entire world, impacting all our daily lives. This is the reality, and we should understand it quickly and start preparing through structured risk management processes for any event that may come. Every organization must know how to continue “business as usual” even in an emergency, even without computers.
Look at what happened in hospitals in Israel. Due to numerous cyberattacks experienced before the war, but mainly around the Gaza war, staff was trained to work manually, without computers. During last weekend’s event, they continued to operate more or less in a reasonable state.
Therefore, prior preparation prevents chaos and confusion at the critical moment. The state must implement mandatory regulation on the business continuity of organizations for the functional continuity of the economy.
Organizations should be prepared for cyberattacks or shutdowns – The Jerusalem Post (jpost.com)
AI: "The real change will come when a machine is capable of suffering"
To conquer the world, artificial intelligence will have to devise complex, strategic plans, argues researcher Stuart Russell, reacting to OpenAI's Strawberry project (Le Point, 24 juillet, article payant)
Extraits :
A professor of computer science at Berkeley, Stuart Russell is one of the leading researchers in artificial intelligence and, with the former Stanford figure Peter Norvig, the author of the reference text Artificial Intelligence: A Modern Approach. Born in 1962 in Portsmouth, England, he first trained in theoretical physics at Oxford before switching to computer science at Stanford, and co-founded the Berkeley Center for Human-Compatible Artificial Intelligence (CHAI). He is an innovator in probabilistic knowledge representation, reasoning and learning, notably in their application to global seismic monitoring under the Comprehensive Nuclear-Test-Ban Treaty.
His latest book, Human Compatible, deals with the long-term impact of AI on humanity. A former holder of the Blaise Pascal chair at the computer-science laboratory (CNRS) of Sorbonne University, he is also a member of the Future of Life Institute, a think-tank on the impact of artificial intelligence on society. (…)
Can artificial intelligence endow machines with consciousness?
Even if it did, it would change nothing. Even if my computer were conscious, nothing would change, and I would not even have the means to know it. The odds are that it would carry on executing the orders given by the software.
The only thing that would change is if a machine were capable of suffering. It would then have moral rights, which would complicate everything. It would become criminal to switch it off, to be cruel to it or to impose on it things it does not like. But we have absolutely no idea whether that will ever happen. (…)
IA : « Le vrai changement se produira lorsqu’une machine sera capable de souffrir » (lepoint.fr)
Records are tumbling at the Tour de France. New pharmaceuticals widen the scope for cheating. Is there a connection?
Today's dopers have numerous performance-enhancing substances at their disposal, which could explain the record marks at the Tour de France. An expert warns against hasty conclusions (FAZ, 17 juillet, article payant)
Extraits :
(…) The professional doping hunters are worried too. Mario Thevis, head of the Cologne doping-control laboratory, says that revolutions in equipment development and training methodology could explain the current performance gains in cycling. But he also says: "The possibilities for influencing performance through prohibited substances and methods have become more extensive." Thevis explains that anti-doping institutions today have to work harder to expose the new avenues of manipulation. (…)
Yet the current generation of riders is faster than earlier ones, both at the top and across the field. Cycling teams stress that the progress stems from markedly better equipment, whose aerodynamic properties alone can save up to 60 watts in total. Optimised nutrition and better training bring further gains.
There is, however, also a range of pharmaceutical preparations today that enhance performance in training and in competition. (…)
So there are many ways to boost performance illicitly. But the doping hunters are not always able to actually detect the banned substances. That circumstance must be borne in mind when analysing the record performances at the Tour de France. And not every outstanding performance is down to doping. But a connection cannot be ruled out today either.
Tour de France: Die Rekorde der besten Radfahrer werfen Fragen auf (nzz.ch)
How Microsoft’s Satya Nadella Became Tech’s Steely Eyed A.I. Gambler
Microsoft’s all-in moment on artificial intelligence has been defined by billions in spending and a C.E.O. counting on technology with huge potential and huge risks (NYT, 15 juillet, tribune, quelques articles gratuites / sem.)
A new bionic leg can be controlled by the brain alone
Those using the prosthetic can walk as fast as those with intact lower limbs (The Economist, 5 juillet, article payant)
Before Hugh Herr became a professor at the Massachusetts Institute of Technology (MIT), he was a promising rock climber. But after being trapped in a blizzard during a climb at age 17, he lost both his legs below the knee to frostbite. Since then he has worked on creating prosthetic legs that would work and feel like the real thing. He appears to have succeeded.
In an article published on July 1st in Nature Medicine, Dr Herr and his team at MIT describe seven people with below-the-knee amputations who can now walk normally with the help of surgery and new robotic prostheses. For the first time, Dr Herr says, people have been able to walk with bionic legs—mechanical prostheses that mimic their biological counterparts—that can be fully controlled by their brains. (…)
Stanisa Raspopovic from ETH Zurich, who was also not involved, adds that Dr Herr’s “promising and beautiful” approach could be the end goal for below-the-knee amputations. But it remains to be seen if it could achieve similar results for people with amputations involving knees or upper-body limbs. Nor will everyone be able to get the AMIs they need. Decades after his amputation, Dr Herr has only enough muscle mass to construct an AMI for a robotic ankle, but not a whole robotic foot. He says he is considering it regardless. Even if he cannot get the full effect, it may prove a sensible step. ■
A new bionic leg can be controlled by the brain alone (economist.com)
Neurosurgery : A new technique could analyse tumours mid-surgery
It would be fast enough to guide the hands of neurosurgeons (The Economist, 5 juillet, article payant)
Léo Wurpillot was ten years old when he learned he had a brain tumour. To determine its malignancy, sections of the tumour had to be surgically removed and analysed. Now 19, he recalls the anguish that came with the subsequent three-month wait for a diagnosis. The news was good, and today Mr Wurpillot is a thriving first-year biomedical student at Cardiff University. But the months-long post-operative anticipation remains hard for patients to bear. That wait may one day be a thing of the past.
On June 27th a group of brain surgeons, neuropathologists and computational biologists met at Queen’s Medical Centre in Nottingham to hear about an ultrafast sequencing project developed by researchers at Nottingham University and the local hospital. Their work will allow brain tumours to be classified from tissue samples in two hours or less. As brain surgeries typically take many hours, this would allow results to come in before the end of surgery and inform the operation itself. (…)
A new technique could analyse tumours mid-surgery (economist.com)
Eight years of delay for the Iter nuclear-fusion project (Le Figaro, 4 juillet, article payant)
The reactor, established by an international treaty in 2006, is now expected to start up in 2033, with further cost overruns estimated at no less than €5 billion
Huit ans de retard pour le projet de fusion nucléaire Iter (lefigaro.fr)
A sequence of zeroes : What happened to the artificial-intelligence revolution? (The Economist, 3 juillet, article payant)
So far the technology has had almost no economic impact
(…) Almost everyone uses AI when they search for something on Google or pick a song on Spotify. But the incorporation of AI into business processes remains a niche pursuit. Official statistics agencies ask AI-related questions of businesses of all varieties, and in a wider range of industries than do Microsoft and LinkedIn. America’s Census Bureau produces the best estimates. It finds that only 5% of businesses have used AI in the past fortnight (see chart 1). Even in San Francisco many techies admit, when pressed, that they do not fork out $20 a month for the best version of ChatGPT. (…)
Concerns about data security, biased algorithms and hallucinations are slowing the roll-out. (…)
Indeed, there is no sign in the macroeconomic data of a surge in lay-offs. Kristalina Georgieva, head of the IMF, recently warned that AI would hit the labour market like “a tsunami”. For now, however, unemployment across the rich world is below 5%, close to an all-time low. The share of rich-world workers in a job is near an all-time high. Wage growth also remains strong, which is hard to square with an environment where workers’ bargaining power is supposedly fading. (…)
Some economists think AI will transform the global economy without booting people out of jobs. Collaboration with a virtual assistant may improve performance. A new paper by Anders Humlum of the University of Chicago and Emilie Vestergaard of Copenhagen University surveys 100,000 Danish workers. The average respondent estimates ChatGPT can halve time spent on about a third of work tasks, in theory a big boost to efficiency. (…)
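The Danish estimate implies a modest economy-wide gain, which a quick Amdahl's-law calculation (my arithmetic, not the paper's) makes concrete: halving the time spent on a third of tasks saves about a sixth of total work time.

```python
# Amdahl's-law check of the survey estimate:
# ChatGPT halves the time spent on about a third of work tasks.
affected_share = 1 / 3   # fraction of work time that is sped up
speedup = 2.0            # those tasks take half as long

overall_speedup = 1 / ((1 - affected_share) + affected_share / speedup)
time_saved = 1 - 1 / overall_speedup

print(f"overall speedup: {overall_speedup:.2f}x")     # 1.20x
print(f"share of work time saved: {time_saved:.1%}")  # 16.7%
```

A 17% time saving is real but far from revolutionary, which helps square the survey's optimism with the flat macroeconomic data.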
In time, businesses may wake up to the true potential of AI. Most technological waves, from the tractor and electricity to the personal computer, take a while to spread across the economy. Indeed, on the assumption that big tech’s AI revenues grow by an average of 20% a year, investors expect that almost all of big tech’s earnings from AI will arrive after 2032, according to our analysis. If an AI bonanza does eventually materialise, expect the share prices of the users of AI, not only the providers, to soar. But if worries about AI grow, big tech’s capex plans will start to look as extravagant as its valuations.
What happened to the artificial-intelligence revolution? (economist.com)
This biscuit saves lives: how a Swiss man fights hunger with a 14-gram baked good (NZZ, 2 juillet, article payant)
They are not a dessert but pure nourishment, and they are meant to ease hardship in Madagascar. On the difficult fight for a piece of hope.
14 Gramm gegen Hunger: Wie ein Biskuit in Madagaskar Leben rettet (nzz.ch)
Viruses : A deadly new strain of mpox is raising alarm (The Economist, 29 juin, article payant)
Health officials warn it could soon spread beyond the Democratic Republic of Congo
Extraits :
(…) The situation in the region is complicated by war, displacement and food insecurity. Containment efforts are made harder still by the likelihood of asymptomatic cases, where individuals do not know they are infected but can nevertheless spread the virus to others. Dr Lang emphasises that this and the number of mild cases of the infection are the biggest unknowns in the current outbreak. Preventing this new mpox strain from becoming another global health crisis requires swift and co-ordinated action.
A deadly new strain of mpox is raising alarm (economist.com)
Political consultant Juri Schnöller: "Either we rethink democracy with AI, or it will slowly die" (NZZ, Interview, 19 juin, article payant)
While others fear artificial intelligence, Juri Schnöller argues for putting it to use as soon as possible. For him, AI is not the future but the present – "and whoever fails to recognise that is already planning their own political irrelevance"
Extraits :
The question of whether AI will arrive was settled long ago. The more decisive question is how it will be used. At the moment we are leaving the development of artificial intelligence to large profit-driven corporations. Yet we need a form of artificial intelligence that also yields social value for everyone in pluralist democracies. (…)
Despite everything, you remain optimistic about our ability to cope with AI?
Yes. We tend to see ourselves as the end point of human history. Yet it is entirely possible that future generations will carry progress further. Perhaps they will also find new forms of democracy. (…)
The media are full of AI's harmful effects on democracy, for instance deepfakes or Russia's large-scale disinformation campaigns. You say AI is an opportunity for democracy. Are the media too pessimistic for your taste?
Yes, the media love doomsday stories. I don't think things are that bad. The fact is: either we rethink democracy with AI, or it will die a slow death. (…)
Politberater Juri Schnöller im Interview über Chancen von KI für die Demokratie (nzz.ch)
Artificial intelligence : Ray Kurzweil on how AI will transform the physical world (June 18)
Pay wall: The changes will be particularly profound in energy, manufacturing and medicine, says the futurist (The Economist, Guest Essay)
Excerpt :
(…) Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet?
The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress.
By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free. (…)
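The essay's 99.7%-since-1975 figure implies a remarkably steady compound decline. A back-of-the-envelope sketch, assuming roughly 49 years between 1975 and the essay's publication:

```python
# Implied compound annual decline in solar cost per watt, given the
# essay's figure of a 99.7% drop since 1975 (assumed span: ~49 years).
remaining_cost_fraction = 1 - 0.997   # 0.3% of the 1975 cost remains
years = 49

annual_factor = remaining_cost_fraction ** (1 / years)
annual_decline = 1 - annual_factor
print(f"implied cost decline: ~{annual_decline:.1%} per year")  # ~11% per year
```

An 11%-a-year decline, sustained for half a century, is the kind of learning curve the essay is betting AI-discovered materials will steepen further.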
Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. Once annual increases in life expectancy reach 12 months, we’ll achieve “longevity escape velocity”. For people diligent about healthy habits and using new therapies, I believe this will happen between 2029 and 2035—at which point ageing will not increase their annual chance of dying. And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available.
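Kurzweil's threshold can be put in numbers (my arithmetic on the essay's own figures): today's gains of six to seven weeks per year would have to grow roughly eightfold to reach the 12-months-per-year escape velocity.

```python
# Distance from "longevity escape velocity", using the essay's figures:
# ~6-7 weeks of added life expectancy per year now, versus the
# 12-month (52-week) threshold.
current_gain_weeks = 6.5   # midpoint of the essay's 6-7 weeks
threshold_weeks = 52       # 12 months per year

required_multiple = threshold_weeks / current_gain_weeks
print(f"annual gains must grow ~{required_multiple:.0f}x")  # ~8x
```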
This is ai’s most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings.
Ray Kurzweil on how AI will transform the physical world (economist.com)
“Why machines won’t save us from the labor shortage” (June 12)
Pay wall: Automation is seen as a beacon of hope in the fight against labor shortages. However, the hoped-for relief requires more than just the availability of machines (NZZ, Opinion)
Warum der technische Fortschritt Arbeit nicht ersetzt (nzz.ch)
“Like people, elephants call each other by name” (June 11)
Pay wall :Trunk calls : Like people, elephants call each other by name – And anthropoexceptionalism takes another tumble (The Economist)
Excerpt :
As with dolphin whistles, it has long been known that elephant rumbles are individually recognisable. One thing to establish, therefore, was whether, when communicating with another elephant, the caller was mimicking the recipient. The software suggested this was not the case. It was, however, the case that calls were receiver-specific. This showed up in several ways. First, for a given caller, the receiver could be predicted from the sonic spectrum of its rumble. Second, rumbles directed by a particular caller to a particular recipient were more similar to each other than those made by that caller to other recipients. Third, recipients responded more strongly to playbacks of calls originally directed towards them than to those originally intended for another animal.
On top of this, rumbles directed by different callers towards the same recipient were more similar to each other than to other calls within the data set, suggesting that everyone uses the same name for a given recipient. All of which adds to the evidence that elephant intelligence does indeed parallel the human sort in many ways—and makes their slaughter by humans, which threatens many of their populations, even more horrifying.
Like people, elephants call each other by name (economist.com)
“The war for AI talent is heating up” (June 9)
Pay wall : Retention is all you need : The war for AI talent is heating up – Big tech firms scramble to fill gaps as brain drain sets in (The Economist)
The war for AI talent is heating up (economist.com)
“Fourth time lucky : Elon Musk’s Starship makes a test flight without exploding” (June 8)
Pay wall : Crucially, the upper stage of the giant rocket survived atmospheric re-entry (The Economist)
Elon Musk’s Starship makes a test flight without exploding (economist.com)
Smallest known great ape, which lived 11m years ago, found in Germany (June 8)
Free access: Smallest known great ape, which lived 11m years ago, found in Germany: Buronius manfredschmidi is estimated to have weighed just 10kg and to have been about the size of a human toddler (The Guardian)
Smallest known great ape, which lived 11m years ago, found in Germany | Fossils | The Guardian
“Robots are suddenly getting cleverer. What’s changed?” (June 7)
Pay wall : Robotics : Robots are suddenly getting cleverer. What’s changed? – There is more to AI than ChatGPT (The Economist)
Robots are suddenly getting cleverer. What’s changed? (economist.com)
“SpaceX’s monumental Starship makes a spectacular 4th test flight full of promise” (June 7)
Pay wall: SpaceX's monumental Starship makes a spectacular 4th test flight, full of promise: the giant rocket, designed to be fully reusable, successfully splashed down both its first stage and the ship itself after its descent from orbit (Le Figaro)