VI. 3. Technology & Science


Wall Street Journal, opinion, November 22, paywalled

How to Regulate AI Without Stifling Innovation

Rules can’t solve every potential problem, and the demand for perfect safety has dangers of its own.

Excerpts:

The biggest challenge with artificial intelligence is that we don’t have enough yet. Regulation should aim to help solve this problem. AI could turbocharge the many advanced economies grappling with slow productivity growth. But the technology is still developing, and the European Union’s heavy-handed AI rules have impeded progress there. As the U.S. debates regulation, we should avoid those mistakes by following six principles:

First, balance benefits and risks. This may sound obvious, but many regulatory enthusiasts ignore the technology’s benefits out of an overabundance of caution and instead support delaying AI until it is proven absolutely safe. Cost-benefit analysis requires regulators to think not only about the risks of AI but also the risks from slower AI development, such as more cancer deaths because of delayed drug discovery, worse educational outcomes because students lack personalized digital tutors, more car accidents because of delays in self-driving cars, and worse climate change because of a slowdown in discovering better materials for grid-level battery storage.

Second, compare AI with humans, not to the Almighty. Yes, autonomous cars crash—but how do they compare with human drivers? AI may show biases, but how do these stack up against human prejudices? In some cases, it might even be acceptable for AI to perform slightly worse than humans if it offers significant convenience and has greater potential for improvement over time, as we have seen with autonomous vehicles. AI is learning much faster than humans are and the future gains this learning will generate belong on the benefit side of the ledger.

Third, address how existing regulations are hindering progress. The most obvious are permitting and other obstacles to the expansion of data centers and the power sources they will need. A bigger threat over time is the dozens of state laws regulating AI that have already been passed and the hundreds more that have been proposed. To the degree possible, federal pre-emption with its own framework would help ensure the U.S. remains a digital single market—unlike the fractured EU.

Fourth, where new regulation is warranted, AI should be overseen by existing domain-specific regulators rather than a new superregulator. We don’t have separate regulators for computers or linear regression; instead, our regulators specialize in areas where these are used, such as auto safety, stock trading, and medical devices. Existing regulators should focus on outputs and consequences in their domains, not on inputs and methods. This may require more AI expertise and flexibility within agencies. The Food and Drug Administration has come up with procedures to approve AI-based devices that might fall foul of its standard rules.

Fifth, regulation must not become a moat protecting incumbents. History shows that well-intentioned rules can entrench existing powers, from medieval guilds to hospital certificate-of-need laws. In AI, we risk repeating this pattern. Centralized licensing bodies could easily become gatekeepers stifling competition. A superregulator could be captured by big companies. When tech giants enthusiastically promote regulation, it should raise red flags. Our regulatory framework should nurture a competitive AI landscape, not solidify the dominance of a few early movers.

Sixth, not every problem caused by AI can be solved by regulating AI. I hope this technology will raise wages without hurting employment, with especially large increases for workers with lower-paying skills. Studies have found that less-able writers benefit most from AI-based writing suggestions. But bleak scenarios of swift technological change displacing workers or causing inequality are possible. The answer to this downside risk isn’t to have regulators assess whether each technological advance is job-replacing or inequality-increasing. Rather, the solution lies in more conventional economic policies like training programs that connect people to jobs, wage subsidies, and a more progressive tax and transfer system to ensure that AI’s benefits are shared broadly. As a professor, I wouldn’t expect AI regulations to limit plagiarism—it is on us to figure out how to adapt our teaching.

While some AI regulation is warranted, policymakers should proceed cautiously. Well-intentioned efforts could inadvertently slow progress while falling short of their goals. These six principles can help form a balanced and effective approach to regulating AI, one that harnesses its potential while addressing legitimate concerns.

Mr. Furman, a professor of the practice of economic policy at Harvard, was chairman of the White House Council of Economic Advisers, 2013-17.

https://www.wsj.com/opinion/how-to-regulate-ai-without-stifling-innovation-regulation-eu-licensing-a2f0d8af?mod=opinion_lead_pos7


The Economist, November 18, paywalled

Future imperfect: Artificial intelligence is helping improve climate models

More accurate predictions will lead to better policy-making

Excerpts:

THE DIPLOMATIC ructions at COP29, the United Nations climate conference currently under way in the Azerbaijani capital of Baku, are based largely on computer models. Some model what climate change might look like; others the cost of mitigating it (see Briefing).

No model is perfect. Those modelling climate trends and impacts are forced to exclude many things, either because the underlying scientific processes are not yet understood or because representing them is too computationally costly. This results in significant uncertainty in the results of simulations, which comes with real-world consequences. Delegates’ main fight in Baku, for example, will be over how much money poor countries should be given to help them decarbonise, adapt or recover. The amount needed for adaptation and recovery depends on factors such as sea-level rise and seasonal variation that climate modellers still struggle to predict with much certainty. As negotiations become ever more specific, more accurate projections will be increasingly important.

The models that carry most weight in such discussions are those run as part of the Coupled Model Intercomparison Project (CMIP), an initiative which co-ordinates over 100 models produced by roughly 50 teams of climate scientists from around the world. All of them attempt to tackle the problem in the same way: splitting up the world and its atmosphere into a grid of cells, before using equations representing physical processes to estimate what the conditions in each cell might be and how they might change over time. (…)
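
To make the grid-cell approach concrete, here is a minimal, illustrative sketch, not the code of any actual CMIP model: a toy temperature field on a latitude-longitude grid, advanced one time step at a time by a simple diffusion-plus-forcing rule. The resolution, coefficients and forcing are arbitrary assumptions chosen for clarity.

    import numpy as np

    # Toy "climate model": a temperature field on a coarse lat-lon grid.
    # Each cell is updated from physical rules (here just diffusion
    # between neighbouring cells plus a uniform radiative forcing term).
    n_lat, n_lon = 36, 72                        # 5-degree cells (arbitrary)
    temp = 15.0 + np.random.randn(n_lat, n_lon)  # initial field, deg C
    kappa, forcing, dt = 0.1, 0.01, 1.0          # illustrative constants

    def step(t):
        # Sum over the four neighbours: longitude wraps around the
        # globe, latitude edges are clamped for simplicity.
        north = np.vstack([t[:1], t[:-1]])
        south = np.vstack([t[1:], t[-1:]])
        east, west = np.roll(t, -1, axis=1), np.roll(t, 1, axis=1)
        laplacian = north + south + east + west - 4 * t
        return t + dt * (kappa * laplacian + forcing)

    for _ in range(100):                         # 100 time steps
        temp = step(temp)
    print(f"Global mean temperature: {temp.mean():.2f} C")

Real models couple many such fields (wind, humidity, ocean heat and more) with far richer physics, but everything still reduces to per-cell update rules. That is also why resolution is so expensive: halving the cell size quadruples the number of cells and usually forces a smaller time step too, which is what keeps clouds below the grid.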

Clever computational tricks can make them more detailed still. They have also grown better at representing the elaborate interactions at play between the atmosphere, oceans and land—such as how heat flows through ocean eddies or how soil moisture changes alongside temperature. But many of the most complex systems remain elusive. Clouds, for example, pose a serious problem, both because they are too small to be captured in 50km cells and because even small changes in their behaviour can lead to big differences in projected levels of warming.

Better data will help. But a more immediate way to improve the climate models is to use artificial intelligence (AI). Model-makers in this field have begun asserting boldly that they will soon be able to overcome some of the resolution and data problems faced by conventional climate models and get results more quickly, too. (…)

Reducing the uncertainties in climate models and, perhaps more important, making them more widely available, will hone their usefulness for those tasked with the complex challenge of dealing with climate change. And that will, hopefully, mean a better response. ■

https://www.economist.com/science-and-technology/2024/11/13/artificial-intelligence-is-helping-improve-climate-models


The Guardian, November 12, free access

Reasons to be hopeful: five ways science is making the world better

If Trump’s re-election is getting you down, these innovations in medicine and technology should cheer you up

https://www.theguardian.com/world/2024/nov/09/reasons-to-be-hopeful-five-ways-science-is-making-the-world-better


L’Express, November 8, paywalled

Venki Ramakrishnan, Nobel laureate: “We now know what makes it possible to delay ageing”

Exclusive interview. In the remarkable “Why we die”, the Cambridge biologist sorts genuine avenues of anti-ageing research from false promises.

Excerpts:

In the list of bestselling authors promising to push back death, Venki Ramakrishnan has two major assets. First, this Cambridge professor, former president of the prestigious Royal Society and winner of the 2009 Nobel prize in chemistry for his research on ribosomes, the cellular structures responsible for producing proteins, is one of the world's most eminent biologists. Second, unlike many of his colleagues, he has “no money invested in the sector” and can therefore afford an objective, critical look at recent discoveries in longevity, which are often the subject of sensationalist announcements.

In the remarkable Why we die (Hodder Press, not yet translated), praised this year by Anglo-American critics, Venki Ramakrishnan traces the history of scientific advances in understanding ageing and reviews the main avenues for delaying the effects of age: caloric restriction, senolytics, cellular reprogramming, transfusions of younger blood... While the Nobel laureate believes we are on the eve of major advances, he also puts overly arrogant scientists in their place, such as Aubrey de Grey, who declared that the first humans to reach 1,000 years of age have already been born. Exclusively for L’Express, Venki Ramakrishnan sorts serious research from false promises, while reminding us that the triptych of eating well, sleeping and exercising remains the best way to prolong one's life today. Interview.

L’Express: You are a renowned biologist, but you have not worked directly on ageing. How did you come to take an interest in this field?

Venki Ramakrishnan: My work is very close to this field of research. That is the case for questions related to protein degradation and turnover, or to the stress response, when the body stops producing proteins if it detects a problem. Researchers in my laboratory study these subjects, which are central to the ageing process. Moreover, I had already written a book, on the race to discover the structure of the ribosome, full of stories about the backrooms of science. That gave me a taste for writing for the general public. And ageing and death have remained great existential questions ever since humankind became aware of its mortality. (…)

Can we really hope to push back the limits of human life?

To this day, no one has beaten the record of your compatriot, the Frenchwoman Jeanne Calment, who died in 1997 at the age of 122. Yet the number of centenarians has risen sharply since then. But once they reach 110, most decline and die. This suggests that the natural limit of our species probably lies around that age, even if some people can occasionally, exceptionally, live a little longer. By curing cancer, heart disease, diabetes or dementia, we may add another ten or fifteen years to average life expectancy, currently around 80. But to go further, we would have to succeed in tackling ageing itself.

In your book you are critical of ageing research, yet your conclusions are ultimately rather optimistic: you say there will be major advances. Which avenues strike you as the most encouraging?

I criticise the media hype, but not the field as a whole, because a great deal of serious research is under way. With all the money being poured into this area, and all the very good scientists working in it, something will eventually happen. The question is how long it will take. What I denounce are those companies that start marketing products for humans on the basis of laboratory results obtained in mice, without any further trials. That said, there are indeed several interesting avenues.

First, caloric restriction, thanks to which it may be possible to be in better health at an advanced age, even though it also has side effects. (…)

Another avenue concerns senescent cells. They have a very important biological function: they signal to the immune system that they are damaged and must be eliminated. (…)

Another avenue concerns the synthesis of metabolites important for the functioning of our organism, which becomes more difficult as we age. (…)

You are quite critical of scientists such as the Harvard geneticist David Sinclair, world-famous for his work on ageing. Why?

My book was carefully reviewed by two lawyers, and I do not want to speak about any researcher in particular. But generally speaking, when scientists create companies on the basis of their research, there is an intrinsic conflict of interest. Given the money at stake, they lose their objectivity. The results they rely on should be tested by other experts free of conflicts of interest. (…)

Tech billionaires, such as Elon Musk, Peter Thiel and others, are obsessed with anti-ageing research. Do you think they are backing real science?

For some of them, yes. For example, several of them founded a laboratory, Altos Labs, which has attracted excellent researchers from prestigious universities. But tech billionaires overestimate how quickly this work will lead to concrete advances. They come from the software industry, where everything can change completely within a year: look at what happened with ChatGPT. In biology everything is far more complicated, clinical trials take time, and it often takes about twenty years between a fundamental discovery and a molecule reaching the market. But beyond that limitation, the science they support is probably quite legitimate.

You explain that Bill Gates is a special case among tech billionaires…

Indeed. To increase the average lifespan of the population on this planet, one of the priorities is to eliminate diseases linked to infection or malnutrition. In terms of years of life gained, Bill Gates, with his Foundation, probably does more than all those billionaires investing in the fight against ageing… (…)

https://www.lexpress.fr/sciences-sante/venki-ramakrishnan-prix-nobel-on-sait-maintenant-ce-qui-permet-de-retarder-le-vieillissement-S2REZ33IGVBIRIRT52LLVS7DHE/


Le Point, November 4, paywalled

Bertrand Duplat, the man who wants to repair the brain

Thanks to a microrobot the size of a grain of rice designed by his company Robeauté, this engineer is on the way to revolutionising neurosurgery. A profile.

Excerpts:

(…) At the head of Robeauté, a company founded with Joana Cartocci in 2017, Bertrand Duplat may well give neurosurgery a serious boost. The idea? To develop a miniature robot 1.8 mm in diameter, the size of a grain of rice, capable of moving through the brain, performing biopsies and delivering targeted treatments, while being as minimally invasive as possible. (…)

Like a multi-stage rocket, the self-propelled robot will be inserted into the brain through a small hole of 3 to 4 millimetres, then travel through the viscoelastic tissue, gently parting the cerebral walls, following a pre-established trajectory and guided by a built-in GPS.

“Today, at best we move in one dimension,” explains Bertrand Duplat, “with neurosurgical needles and in a straight line. Our goal is a non-linear trajectory that can avoid obstacles, and an onboard motor so the robot can be used outside the operating theatre. It would be a world first.” For while robotic arms increasingly assist medical teams, and with ever more convincing precision, the devices are still very heavy and bulky and monopolise operating rooms for hours at a time. (…)

Nothing predestined the researcher to specialise in surgical engineering. But in 2007 Bertrand Duplat lost his mother to a glioblastoma, an extremely aggressive brain tumour. For a jack-of-all-trades who had acquired the habit of solving all his problems with inventions, the sense of powerlessness was unbearable.

“She was inoperable, and no treatment could shorten her suffering. I promised myself then that one day it would be possible to intervene.” It took him several years to take the plunge, to build a network of neurologists who still work alongside him today, and to make the case for sending his robot into the human brain. (…)

Today, the considerable advance the microrobot will bring no longer needs proving. Thanks to its ultrasonic tracking system, harmless to the patient and hospital staff, the trajectory of the procedure can be followed in real time with a precision that will be millimetric at first, then submillimetric. The device will then back out in reverse, at the same speed as on the way in: 3 millimetres per minute.

“The big difference from existing techniques,” Duplat continues, “is that currently, to treat pathologies such as cancer or neurodegenerative diseases, drugs are swallowed or sent into the bloodstream. But the brain is extremely well protected. There is the skull, of course, but also the blood-brain barrier, which isolates the brain from the blood. Drugs do not get in well, and if you want to achieve the right dose in the brain, it can become very toxic for the rest of the body.”

It will now be possible to explore several points, to deliver drugs, to place implants far more precisely than current technology allows, but also to take samples, notably molecular ones, and to draw up an exhaustive map of the extent of a tumour or pathology where imaging is still incomplete. “Mapping the brain from the inside, discovering what is happening in pathological and peripathological zones: that is what we will know how to do at first. Then we will quickly be able to use the robot at the functional, neuronal level. How are neurons connected to one another? Which circuits may be problematic? How can they be treated? Treatment protocols should work better, because the data we will have gathered will confirm that it is the right place, the right timing and the right combination of therapies.”

Trials are under way

While the road to market authorisation is still long, Bertrand Duplat is already studying the regulatory pathway with the Food and Drug Administration. Preclinical trials have been under way since 2022 on animal and human cadavers as well as on live animals, and clinical biopsy trials in humans are expected to begin in 2026, with commercialisation in 2030. The fantastic voyage should not be long now.

https://www.lepoint.fr/science/bertrand-duplat-l-homme-qui-veut-reparer-le-cerveau-03-11-2024-2574327_25.php


The Economist, November 4, paywalled

The theory of evolution: Darwin and Dawkins: a tale of two biologists

One public intellectual has spent his career defending the ideas of the other

The Genetic Book of the Dead. By Richard Dawkins. Yale University Press; 360 pages; $35. Apollo; £25

Excerpts:

GO TO ANY bookshop, and its shelves will be groaning with works of popular science: titles promising to explain black holes, white holes, the brain or the gut to the uninitiated. Yet the notion that a science book could be a blockbuster is a relatively recent one. Publishers and curious readers have Richard Dawkins to thank.

In 1976 his book “The Selfish Gene”—which argued that natural selection at the level of the gene is the driver of evolution—became a surprise bestseller. It expressed wonder at the variety of the living world and offered a disciplined attempt to explain it. It also announced Dr Dawkins, then 35, as a public intellectual.

A spate of books about evolution followed, as well as full-throated attacks on religion, particularly its creationist aspects. (In one essay he contemplated religion as a kind of “mind virus”.) Dr Dawkins earned the sobriquet “Darwin’s rottweiler”—a nod to Thomas Huxley, an early defender of the naturalist’s ideas, known as “Darwin’s bulldog”.

Dr Dawkins, now 83, has returned with his 19th volume, “The Genetic Book of the Dead”. Its working hypothesis is that modern organisms are, indeed, like books, but of a particular, peculiar, variety. Dr Dawkins uses the analogy of palimpsests: the parchments scraped and reused by medieval scribes that accidentally preserved enough traces of their previous content for the older text to be discerned. (…)

Dr Dawkins’s contention is that, by proper scrutiny of genetics and anatomy, a scientist armed with the tools of the future will be able to draw far more sophisticated and connected inferences than these. This will then illuminate parts of evolutionary history that are currently invisible.

As an analogy, describing organisms as palimpsests is a bit of a stretch. A palimpsest’s original text is unrelated to its new one, rather than being an earlier version of it, so it can tell you nothing about how the later text was composed. But that quibble aside, the tantalising idea is that reading genomes for their history is an endeavour that may form the basis of a new science.

After 19 books and almost 50 years spent contemplating essentially the same theme, lesser authors would be forgiven for getting stale. But, though Dr Dawkins’s topic is unchanging, his approach is always fresh, thanks to new examples and research. Yet he calls his current book tour “The Final Bow”, suggesting that he is exhausted, even if his subject is not.

Dr Dawkins has been an influential figure as much as an important thinker. In the current age, when academics and students are fearful of expressing even slightly controversial opinions, the world needs public intellectuals who are willing to tell it, politely but persuasively, how it is. Dr Dawkins has long been happy to challenge his readers’ orthodoxies (even if he has mellowed on the subject of faith, and refers to himself as a “cultural Christian”). Popular science writers today could take note. ■

https://www.economist.com/culture/2024/10/28/darwin-and-dawkins-a-tale-of-two-biologists


The Economist, November 1, paywalled

Think outside the box: ADHD should not be treated as a disorder

Adapting schools and workplaces for it can help far more

Excerpts:

(…) NOT LONG ago, attention-deficit hyperactivity disorder (ADHD) was thought to affect only school-aged boys—the naughty ones who could not sit still in class and were always getting into trouble. Today the number of ADHD diagnoses is rising fast in all age groups, with some of the biggest increases in young and middle-aged women.

The figures are staggering. Some 2m people in England, 4% of the population, are thought to have ADHD, says the Nuffield Trust, a think-tank. Its symptoms often overlap with those of autism, dyslexia and other conditions that, like ADHD, are thought to be caused by how the brain develops. All told, 10-15% of children have patterns of attention and information-processing that belong to these categories.

At the moment, ADHD is treated as something you either have or you don’t. This binary approach to diagnosis has two consequences. The first is that treating everyone as if they are ill fills up health-care systems. Waiting lists for ADHD assessments in England are up to ten years long; the special-needs education system is straining at the seams. The second consequence occurs when ADHD is treated as a dysfunction that needs fixing. This leads to a terrible waste of human potential. Forcing yourself to fit in with the “normal” is draining and can cause anxiety and depression.

The binary view of ADHD is no longer supported by science. Researchers have realised that there is no such thing as the “ADHD brain”. The characteristics around which the ADHD diagnostic box is drawn—attention problems, impulsivity, difficulty organising daily life—span a wide spectrum of severity, much like ordinary human traits. For those at the severe end, medication and therapy can be crucial for finishing school or holding on to a job, and even life-saving, by suppressing symptoms that lead to accidents.

But for most people with ADHD, the symptoms are mild enough to disappear when their environment plays to their strengths. Rather than trying to make people “normal”, it is more sensible—and cheaper—to adjust classrooms and workplaces to suit neurodiversity. (…)

Greater understanding of neurodiversity would reduce bullying in schools and help managers grasp that neurodivergent people are often specialists, rather than generalists. They may be bad in large meetings or noisy classrooms, but exceptional at things like multitasking and visual or repetitive activities that require attention to detail. Using their talents wisely means delegating what they cannot do well to others. A culture that tolerates differences and takes an enlightened view of the rules will help people achieve more and get more out of life. That, rather than more medical appointments, is the best way to help the growing numbers lining up for ADHD diagnoses. ■

https://www.economist.com/leaders/2024/10/30/adhd-should-not-be-treated-as-a-disorder


The everything drugs: It’s not just obesity. Drugs like Ozempic will change the world

As they become cheaper, they promise to improve billions of lives (The Economist, October 29, paywalled)

See “Article du Jour”

https://www.economist.com/leaders/2024/10/24/its-not-just-obesity-drugs-like-ozempic-will-change-the-world


Identity medicine: The data hinted at racism among white doctors. Then scholars looked again

Science that fits the zeitgeist sometimes does not fit the data (The Economist, October 28, paywalled)

Excerpts:

BLACK BABIES in America are more than twice as likely to die before their first birthday as white babies. This shocking statistic has barely changed for many decades, and even after controlling for socioeconomic differences a wide mortality gap persists. Yet in 2020 researchers discovered a factor that appeared to reduce substantially a black baby’s risks. In their study, published in Proceedings of the National Academy of Sciences (PNAS), they wrote that “when Black newborns are cared for by Black physicians, the mortality penalty they suffer, as compared with White infants, is halved.”

This striking finding quickly captured national and international headlines, and generated nearly 700 Google Scholar citations. The study was widely interpreted—incorrectly, say the authors—as evidence that newborns should be matched to doctors of the same race, or that white doctors harboured racial animus against black babies. It even made it into the Supreme Court’s records as an argument in favour of affirmative action, with Justice Ketanji Brown Jackson (mis)citing the findings. A supporting brief by the Association of American Medical Colleges and 45 other organisations mentioned the study as evidence that “For high-risk black newborns, having a black physician is tantamount to a miracle drug.”

Now a new study seems to have debunked the finding, to much less fanfare. A paper by George Borjas and Robert VerBruggen, published last month in PNAS, looked at the same data set from 1.8m births in Florida between 1992 and 2015 and concluded that it was not the doctor’s skin colour that best explained the mortality gap between races, but rather the baby’s birth weight. Although the authors of the original 2020 study had controlled for various factors, they had not included very low birth weight (ie, babies born weighing less than 1,500 grams, who account for about half of infant mortality). Once this was also taken into consideration, there was no measurable difference in outcomes.
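
The methodological point at issue, that omitting a strong confounder can manufacture an apparent effect, is easy to reproduce on synthetic data. Below is a minimal sketch with invented numbers, not the Florida birth data: mortality here is driven entirely by very low birth weight, which is deliberately made rarer among race-concordant pairs, so the concordance coefficient looks protective until the confounder enters the regression.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Synthetic set-up: very-low-birth-weight (vlbw) babies drive mortality
    # and, by construction, are less often cared for by race-concordant
    # doctors (e.g. because they are referred to specialists).
    vlbw = rng.random(n) < 0.015
    concordant = (rng.random(n) < np.where(vlbw, 0.10, 0.35)).astype(float)
    mortality = (rng.random(n) < np.where(vlbw, 0.10, 0.002)).astype(float)

    def ols(y, *cols):
        # Ordinary least squares with an intercept column.
        X = np.column_stack([np.ones(n), *cols])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]  # drop the intercept

    print(ols(mortality, concordant))                      # looks protective
    print(ols(mortality, concordant, vlbw.astype(float)))  # effect collapses

Both regressions show only correlation; the sketch simply illustrates why the choice of controls, here birth weight, can make or unmake a headline finding.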

The new study is striking for three reasons. First, and most important, it suggests that the primary focus to save young (black) lives should be on preventing premature deliveries and underweight babies. Second, it raises questions about why this issue of controlling for birth weight was not picked up during the peer-review process. And third, the failure of its findings to attract much notice, at least so far, suggests that scholars, medical institutions and members of the media are applying double standards to such studies. Both studies show correlation rather than causation, meaning the implications of the findings should be treated with caution. Yet, whereas the first study was quickly accepted as “fact”, the new evidence has been largely ignored. (…)

That a flawed social-science finding which fitted neatly within the zeitgeist was widely accepted is understandable. Less understandable is that few people now seem eager to correct the record. The new study has had just one Google Scholar citation and no mainstream news coverage. This suggests that opinion-makers have, at best, not noticed the new article (it was published a month ago; The Economist only spotted it when the Manhattan Institute, a conservative think-tank, last week put out an explanatory note). At worst, they have deliberately ignored it. (…)

Mr Borjas and Mr VerBruggen end on an optimistic note, observing that science has “the capacity for self-correction, and scientists can facilitate this by being open with their methods and data”. Science journalists can help, too. ■

https://www.economist.com/united-states/2024/10/27/the-data-hinted-at-racism-among-white-doctors-then-scholars-looked-again


Are You Ready for a Brain Chip? It’ll Change Your Mind

These implants will help us do amazing things. The downside is that they may destroy humanity. (WSJ, October 19, paywalled)

Excerpts:

Smartphone ownership is nearly universal. It isn’t mandatory, of course, but you’d be seen as an eccentric if you didn’t have one. Rejecting smartphones means you’re old-fashioned, possibly a bit of a crank.

There are pros and cons to having a smartphone. As a society, we’ve decided that on the whole it’s much better to have one. Of course, there’s an astronomical amount of money at stake. Imagine how much revenue depends, directly and indirectly, on the near-universal ownership of smartphones and tablets. There’s the demand for the hardware itself, for the raw materials to build that hardware and the infrastructure to assemble it, to improve it, to ship it around the world. There’s demand for transmission lines, cell towers and data networks. There’s demand for the operating systems, for the middleware that operates the cameras and regulates the batteries, for the hundreds of thousands of apps you might want to download. There’s demand for the endless content that appears on those apps and for the advertising time and real estate on hundreds of millions of smartphone and tablet screens. Beyond all that, there’s demand for the huge amounts of energy we need to make it all work.

There is very little that is completely independent of these devices. So, yes, you can do without a smartphone. But it isn’t easy.

It won’t be long before there is a similar concerted effort to make brain-implanted chips seem normal. It is a matter of years, not decades. These won’t be chip implants permitting paraplegics to regain their independence. These will be implants marketed to everyone, as smartphones are now. And if you decline to have a chip grafted onto your brain, you’ll be a backward, out-of-touch misanthrope.

The benefits of brain chips will be vastly beyond what external devices offer today. We will be able to take “photos” of anything we see with our eyes, just by thinking. Ditto video—in 3-D. We will be able to send messages to friends by thinking them, and to hear their replies played in our minds. We’ll have conversations with friends remotely, hearing their voices and ours without actually having to speak. We’ll be able to talk to anyone in any language. We’ll be able to remember an infinite amount of information, to retrieve any fact by asking our brain chips. We’ll be able to pay for things without carrying a wallet or a phone. We’ll be able to hear music piped directly into our brains. To watch movies. To take part in movies. To be totally entertained in new virtual worlds. (…)

We’ll be able to get advertising pumped directly into our brains, to have images hover before our eyes that we can’t turn off—except for those opting for the premium subscription. Our memories will be organized for us by artificial intelligence under policies crafted by experts who will have society’s best interests at heart. We won’t have access to information that might be, say, Russian propaganda. If we have criminal ideas, or perhaps just countercultural notions, they will be referred to the proper authorities before it’s too late.

In other words, it will be every dystopian sci-fi drama rolled into one.

But transhumanism—transcending human “limitations” through technology—becomes dangerous when a human, deprived of that technology, would be not only inconvenienced but unrecognizable.

Imagine a world in which not only our friends’ phone numbers but all our experiences with them, and even their names and their faces, are remembered for us and stored remotely on servers somewhere—available for us at any time. Until they’re not. What would be left of a generation of humans who had never had to use their own memory or do any of their own reasoning until, one day, all the chips were turned off? Would there be any human left, or only an empty shell?

It doesn’t take an atomic bomb to destroy humanity. There are other ways. If you don’t get a brain chip, you’ll have a hard time competing or even living in the modern world. You won’t be able to retain endless information, to pick up new skills instantly, to communicate with anyone anywhere. You’ll be out of date. You’ll be an obsolete human. You might be the last human. So maybe you’d better get the brain chip after all. Remember, it’s optional.

Mr. Gelernter is manager of RG Niederhoffer Digital and an expert in artificial intelligence and machine learning.

Are You Ready for a Brain Chip? It’ll Change Your Mind – WSJ


Filling up space: The rockets are nifty, but it is satellites that make SpaceX valuable

Elon Musk’s space venture may soon be more valuable than Tesla (The Economist, October 19, paywalled)

Excerpts:

There was no mistaking the feat of engineering. The bottom half of the biggest object ever flown—by itself as tall as a 747 is long—came hurtling out of the sky so fast that it glowed from the friction. With the ground rushing to meet it, a cluster of its engines briefly relit, slowing the rocket and guiding it carefully back towards the same steel tower from which it had launched just seven minutes previously. A pair of arms swung closed to catch it, leaving it suspended and smoking in the early-morning sunshine.

Less obvious than the kinetic marvels, but even more important, are the economics of Starship, as the giant rocket tested on October 13th is known. The firm that built it, SpaceX, was founded in 2002 by Elon Musk, an entrepreneur, with the goal of slashing the expense of flying things to space. For Mr Musk, the purpose of such cost-cutting is to make possible a human settlement on Mars. But it has also made new things possible back on Earth. Over the past four years, SpaceX has become a globe-straddling internet company as well as a rocket-maker. Its Starlink service uses what would, a few years ago, have been an unthinkably large number of satellites (presently around 6,400, and rising fast) to beam snappy internet access nearly anywhere on the planet.

Excitement about Starlink’s prospects has seen SpaceX’s valuation rise to $180bn (see chart 1). Some analysts are even beginning to wonder whether it might one day match or exceed the value of Tesla, an electric-car firm of which Mr Musk is also CEO. If Starship lives up to its promise, its combination of vast size and bargain-basement price could provide a big boost to the economics of space in general—and of Starlink in particular. (…)

Mr Musk is famous for making grand predictions, only some of which come to pass. But when he started Starlink he said his only ambition was not to go bankrupt, with good reason. Something similar had been tried, albeit on a much smaller scale, by firms including Teledesic and GlobalStar at the height of the dotcom boom. All of them went bust. But as far as anyone can tell, Starlink is thriving.

Its distinctive white antennae have popped up everywhere from remote schools in the Amazon to the bunkers and trenches on the front lines of the war in Ukraine. “I’ve [even] seen a Starlink dish tied to a broom handle and mounted on a public toilet in the Lake District,” says Simon Potter of BryceTech. In September the firm announced it had signed up 4m customers. Traffic through its networks has more than doubled in the past year, as SpaceX has signed deals with cruise lines, shipping firms and airlines.

Modelling by Quilty Space, another firm of analysts, suggests that Starlink’s revenue will hit $6.6bn this year, up from $1.4bn in 2022. That is already 50% more than the combined revenue of SES and IntelSat, two big satellite-internet firms that announced a merger in April. A year ago Mr Musk said that Starlink had achieved “break-even cashflow”. “It’s astounding that a constellation of this size can be profitable,” says Chris Quilty. “And it scares the shit out of everyone else in the industry.” (…)

Starlink’s satellites fly in very low orbits, around 500km up. That slashes transmission delays, allowing Starlink to offer a connection similar to ground-based broadband. The trade-off is that each satellite can serve only a small area of Earth. To achieve worldwide coverage you therefore need an awful lot of satellites. According to Jonathan McDowell of the Harvard-Smithsonian Centre for Astrophysics, the 6,400 or so Starlink satellites launched since 2019 account for around three-quarters of all the active satellites in space (see chart 3). SpaceX has firm plans to deploy 12,000 satellites, and has applied to launch as many as 42,000. (…)
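
The link between altitude and responsiveness is simple geometry. Here is a back-of-the-envelope comparison, idealised by assuming a straight vertical signal path, light-speed propagation and zero processing delay:

    C = 299_792_458                # speed of light, m/s
    ALTITUDES = {"Starlink (LEO)": 500e3, "geostationary": 35_786e3}

    # A request and its reply each traverse user -> satellite -> ground
    # station, i.e. four altitude-length hops in the best case.
    for name, h in ALTITUDES.items():
        print(f"{name}: {4 * h / C * 1e3:.0f} ms minimum round trip")
    # Starlink (LEO): 7 ms minimum round trip
    # geostationary: 477 ms minimum round trip

Seven milliseconds is comparable to terrestrial broadband; nearly half a second is not. That is the trade-off the article describes: low orbits buy latency at the cost of needing thousands of satellites for coverage.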

The rockets are nifty, but it is satellites that make SpaceX valuable (economist.com)


Starship sticks the landing: Elon Musk’s SpaceX has achieved something extraordinary

If SpaceX can land and reuse the most powerful rocket ever made, what can’t it do? (The Economist, October 14, paywalled)

Excerpts:

THE LAUNCH was remarkable: a booster rocket with twice the power of the Apollo programme’s Saturn V lancing into the early-morning sky on a tight, bright column of blue-tinged flame. But that wonder has been seen four times before. It was the landing of the booster stage of SpaceX’s fifth Starship test flight which was truly extraordinary.

The landing was a triumph for the engineers of SpaceX, a company founded and run by Elon Musk. It strongly suggests that the company’s plans to use a huge reusable booster to launch a huge reusable spacecraft, the Starship proper, on a regular basis are achievable. That means that the amount of cargo that SpaceX can put into orbit for itself and its customers, including the American government, is set to grow spectacularly in the second half of this decade. (…)

Mr Musk’s ambitions for Mars are part of an ambition to safeguard civilisation which also entails, in his eyes, the re-election of Donald Trump (on which he is working hard), and, apparently, the use of X, a social network he owns, as a personal platform and a tool for the spread of misinformation. This is something about which many have strong concerns, and rightly. But with the Super Heavy cooling down in its elevated cradle, the getting to Mars bit, at least, looks more real than it has ever done before. ■

Elon Musk’s SpaceX has achieved something extraordinary (economist.com)


ESCP Business School to transform itself in depth thanks to OpenAI

The ESCP business school will use the American company's platform to adapt its teaching methods and administrative processes. (Le Figaro, October 14, free access)

Excerpts:

In its report published in March, France's Artificial Intelligence Commission recommended “generalising (its) deployment across all higher-education courses”. Some institutions are moving faster than others. ESCP Business School, one of the most highly regarded European business schools, has thus concluded a wide-ranging partnership with OpenAI. Part of the students, faculty researchers and administrative staff of the ESCP network will be trained on OpenAI edu, the specialised version of its AI platform for universities. “The first advantage for us is a continuous improvement of the student experience,” sums up Léon Laulusa, ESCP's director general. With these technologies, the school wants to strengthen personalised, interactive learning for its students. “I think these technologies will help foster anchored learning, the kind that is not forgotten over time,” he insists. The school has created conversational assistants to answer students' questions in various fields, create personalised knowledge tests for practice, and offer advice on thesis writing. “We graft ourselves onto ChatGPT's open data and close it off on our own data,” adds the director, mindful of data security. Students will also be able to receive personalised feedback on their work in real time. (…)

By working with these technologies daily, the school also intends to tailor its programmes and curricula as closely as possible to the skills expected in business. “There is a major challenge in scaling up AI training in France, but we must also see how to develop skills that last over time. We want to ensure that ESCP's programmes remain at the cutting edge of technology and relevant in a fast-changing business world,” the director insists. (…)

ESCP Business School va se transformer en profondeur grâce à OpenAI (lefigaro.fr)


The 2024 Nobel prizes: AI wins big at the Nobels

Awards went to the discoverers of micro-RNA, pioneers of artificial-intelligence models and those using them for protein-structure prediction (The Economist, October 14, paywalled)

Excerpts:

The scientific Nobel prizes have always, in their way, honoured human intelligence. This year, for the first time, the transformative potential of artificial intelligence (AI) has been recognised as well. That recognition began on Tuesday October 8th, when Sweden’s Royal Academy of Science awarded the physics prize to John Hopfield of Princeton University and Geoffrey Hinton of the University of Toronto for computer-science breakthroughs integral to the development of many of today’s most powerful AI models.

The next day, the developers of one such model also received the coveted call from Stockholm. Demis Hassabis and John Jumper from DeepMind, Google’s AI company, received one half of the chemistry prize for their development of AlphaFold, a program capable of predicting three-dimensional protein structure, a long-standing grand challenge in biochemistry. The prize’s other half went to David Baker, a biochemist at the University of Washington, for his computer-aided work designing new proteins.

The AI focus was not the only thing the announcements had in common. In both cases, the research being awarded would be seen by a stickler as being outside the remit of the prize-giving committees (AI research is computer science; protein research arguably counts as biology). (…)

For the growing number of researchers around the world who rely on AI in their work, the lasting message of this year’s awards may be a different one: that they, too, could one day nab science’s most prestigious gongs. For his part, said Dr Jumper, “I hope…that we have opened the door to many incredible scientific breakthroughs with computation and AI to come.” ■

AI wins big at the Nobels (economist.com)


Albert Moukheiber, doctor of neuroscience: “We think the brain works by zones, but that is wrong”

INTERVIEW – Emotions, personality, decision-making… everything today seems to find its explanation in our brain. That is what Albert Moukheiber, a doctor of neuroscience, denounces in his book Neuromania, le vrai du faux sur votre cerveau. (Madame Figaro, October 7, paywalled)

Excerpts:

(…) In his book Neuromania, le vrai du faux sur votre cerveau (1), he denounces the tendency to invoke neuroscience far too often in order to give a scientific “veneer” to things that are not actually science, at the price of approximations, shortcuts, even untruths. An omnipresent reductive discourse that is not without consequences, he warns. Interview.

Madame Figaro: What are the consequences of the reductive discourse on neuroscience that you denounce?

Albert Moukheiber: First, this “neuromania” affects the way we view our own performance. Put plainly, if you tell someone that they function mainly with their “left brain” and are therefore supposedly endowed with a more Cartesian than intuitive mind, that can steer their career decisions or life choices, for example. Second, the phenomenon hits us financially, since we are now sold training courses to “use 20% of our brain instead of 10%”, or to develop our “neuro-creativity”. (…)

You also maintain that this reductive discourse has consequences for society…
And for a simple reason: when everything is viewed through the prism of the brain, all the other levels of explanation are brushed aside. One case illustrates this particularly well: in recent years the media have reported that our inability to act against global warming is linked to our brain. More precisely: addicted to immediate pleasure, the organ supposedly gets in the way by pushing us to make as little effort as possible. But by insinuating that the human species is doomed to fail in spite of itself, we render invisible the responsibility of governments, lobbies and polluting companies. All the factors that would explain the problem more pertinently are “erased” in favour of the cerebral thesis. (…)

What impact does our brain really have on our daily lives?
It is a central part of who we are. It integrates and coordinates bodily, environmental and cerebral information. It allows us to perceive, think, act and give meaning to existence. But contrary to what one might believe, it is not a control tower. Since the brain is an organ of interrelation, it influences the body and the environment but is also influenced by them. In other words, we are not only our brain. A simple exercise makes this clear: observe how you address a friend and how you address a client at work. In both situations we have the same brain, yet we behave completely differently. This is proof that our personality changes with context and that our brain alone does not determine who we are, our tastes, our ideas and our emotions. Though it certainly influences how we act, the brain remains just a network of neurons that either send a signal or do not. (…)

Can we nevertheless act on it?
Not really. Take an example: today we are often told that to be less sad, or happier, we need only make up for a “lack of serotonin”, nicknamed the happiness hormone. But we cannot control neurotransmitters; we cannot decide, here and now, to raise our dopamine or serotonin levels. From a neuroscientific point of view, it is not even possible to measure the level of a hormone in the brain! So it is essential to rely on scientific findings. To return to the example of happiness: research has shown that being happy requires maintaining good material conditions, healthy social relationships and good self-esteem, sleeping well, eating healthily and staying properly hydrated. And such information is far more important to share, since it allows us genuinely to act.

Neuromania, le vrai du faux sur votre cerveau, by Albert Moukheiber (Éd. Allary), 288 pages, €21.90.

Albert Moukheiber, docteur en neurosciences : «On pense que le cerveau fonctionne par zones, mais c’est faux» (lefigaro.fr)


When artificial intelligence also revolutionises the history of the French language

FEATURE – The evolution of the language, the appearance of words over time… Cutting-edge technologies have made it possible to test hypotheses long since put forward, through the analysis of manuscripts from the 13th to the 17th century. (Le Figaro, October 7, paywalled)

Excerpts:

Was Pierre Corneille the true author of Molière's works? From what point did Latin disappear in favour of French? Was witchcraft a recurring subject at the time? Artificial intelligence now makes it possible to answer all these long-asked questions. The technology never stops making headlines, and of late we have been witnessing a genuine revolution in historical research. Until now, while a search engine could find occurrences in printed, digitised books, the same was not true of manuscripts.

The available technologies were not advanced enough to interpret and analyse texts of this kind. For a long time, archivists dreamed of a search engine that would let them find words or abbreviations in these manuscripts to advance research. Yet studying and transcribing a parchment written in Old and Middle French represented hours of work. “The problem with human handwriting is that it is extremely variable over time,” explains Jean-François Moufflet, curator at the Archives nationales, who took part in the Himanis project (“Historical manuscript indexing for user-controlled search”), a European research project launched in 2015 by the Institut de Recherche et d'Histoire des Textes (CNRS) that aims to index the text of the registers of the French royal chancery from the years 1302-1483 held at the Archives nationales.

In their indexing work (recording the occurrences of a word or concept in the registers), archivists faced a time-consuming, titanic task. “They had only managed to do it for registers from the first half of the 14th century, and they could not continue because the volume was too great,” the curator continues. For these 200 registers of the royal chancery alone (in which the decisions taken by the kings of France were copied, a veritable administrative memory of the period), we are talking about 80,000 pages to study. The number of words and expressions to record far exceeds the number of pages. Indexing them allows historians to analyse the mindset of the period, to form hypotheses about past events and to reconstruct history.
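
Computationally, the indexing task the archivists describe is a classic inverted index built over transcriptions. Here is a minimal sketch with invented page transcriptions; the register names and spellings are hypothetical, not Himanis output:

    from collections import defaultdict

    # Hypothetical HTR output: (register, page) -> recognised text.
    pages = {
        ("JJ35", 12): "lettre de remission donnee a paris",
        ("JJ35", 13): "donnee a paris le roy",
        ("JJ80", 4): "remission pour le suppliant",
    }

    # Inverted index: word -> set of locations where it occurs.
    index = defaultdict(set)
    for loc, text in pages.items():
        for word in text.split():
            index[word].add(loc)

    print(sorted(index["remission"]))  # [('JJ35', 12), ('JJ80', 4)]
    print(sorted(index["paris"]))      # [('JJ35', 12), ('JJ35', 13)]

A production system must additionally normalise the wild spelling variation of medieval French and cope with uncertain handwriting readings, which is where the recent progress in AI matters most.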

These extraordinary resources were made possible by technologies for recognising digitised text (OCR, “Optical Character Recognition”, for printed characters) and handwriting (HTR, “Handwritten Text Recognition”), which have improved in recent years with the lightning-fast progress of artificial intelligence. (…)

Quand l’intelligence artificielle révolutionne aussi l’Histoire de la langue française (lefigaro.fr)


Science of success: This Will Be Your New Favorite Podcast. The Hosts Aren’t Human.

With this Google tool, you can now listen to a show about any topic you could possibly imagine. You won’t believe your ears. (WSJ, October 5, paywalled)

Excerpts:

Have you heard about the latest hit podcast? It’s called Deep Dive—and you have to check it out. 

Each show is a chatty, 10-minute conversation about, well, any topic you could possibly imagine. The hosts are just geniuses. It’s like they know everything about everything. Their voices are soothing. Their banter is charming. They sound like the kind of people you want to hang out with. 

But you can’t. As it turns out, these podcast hosts aren’t real people. Their voices are entirely AI-generated—and so is everything they say.  

And I can’t stop listening to them. 

This experimental audio feature released last month by Google is not just some toy or another tantalizing piece of technology with approximately zero practical value. 

It’s one of the most compelling and completely flabbergasting demonstrations of AI’s potential yet. 

“A lot of the feedback we get from users and businesses for AI products is basically: That’s cool, but is it useful, and is it easy to use?” said Kelly Schaefer, a product director in Google Labs. 

This one is definitely cool, but it’s also useful and easy to use. All you need to do is drag a file, drop a link or dump text into a free tool called NotebookLM, which can take any chunk of information and make it an entertaining, accessible conversation. 

Google calls it an “audio overview.” You would just call it a podcast. 

One of the coolest, most useful parts is that it makes podcasts out of stuff that nobody would ever confuse for scintillating podcast material. 

Wikipedia pages. YouTube clips. Random PDFs. Your college thesis. Your notes from that business meeting last month. Your grandmother’s lasagna recipe. Your resume. Your credit-card bill! This week, I listened to an entire podcast about my 401(k). (…)

This Will Be Your New Favorite Podcast. The Hosts Aren’t Human. – WSJ


On the fly: An adult fruit fly brain has been mapped—human brains could follow

For now, it is the most sophisticated connectome ever made (The Economist, October 5, paywalled)

Excerpts:

FRUIT FLIES are smart. For a start—the clue is in the name—they can fly. They can also flirt; fight; form complex, long-term memories of their surroundings; and even warn one another about the presence of unseen dangers, such as parasitic wasps.

They do each of these things on the basis of sophisticated processing of sound, smell, touch and vision, organised and run by a brain composed of about 140,000 neurons—more than the 300 or so found in a nematode worm, but far fewer than the 86bn of a human brain, or even the 70m in a mouse. This tractable but non-trivial level of complexity has made fruit flies an attractive target for those who would like to build a “connectome” of an animal brain—a three-dimensional map of all its neurons and the connections between them. That attraction is enhanced by fruit flies already being among the most studied and best understood animals on Earth. (…)

Creating a connectome means taking things apart and putting them back together. The taking apart uses an electron microscope to record the brain as a series of slices. The putting back together uses AI software to trace the neurons’ multiple projections across slices, recognising and recording connections as it does so. (…)
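
The end product of that pipeline is, in essence, a very large directed weighted graph. Here is a minimal sketch of the kind of structure connectome queries run against, with invented toy neurons rather than anything from the FlyWire data:

    from collections import defaultdict

    # Connectome as a directed graph: synapses[pre][post] = synapse count.
    synapses = defaultdict(dict)
    synapses["ORN_1"]["PN_7"] = 12  # toy receptor neuron -> projection neuron
    synapses["PN_7"]["KC_42"] = 3   # projection neuron -> Kenyon cell
    synapses["PN_7"]["KC_43"] = 5

    def downstream(neuron, depth):
        """All neurons reachable from `neuron` within `depth` synaptic hops."""
        frontier, seen = {neuron}, set()
        for _ in range(depth):
            frontier = {post for pre in frontier for post in synapses[pre]} - seen
            seen |= frontier
        return seen

    # Reachable set: {'PN_7', 'KC_42', 'KC_43'} (printed order may vary).
    print(downstream("ORN_1", 2))

Elucidating links with neurons “farther afield”, as the article puts it, is exactly this kind of multi-hop traversal, only over roughly 140,000 nodes and tens of millions of edges.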

Janelia’s second method involved shaving layers from a sample with a diamond knife and recording them using a transmission electron microscope (which sends its beam through the target rather than scanning its surface). This is the data used by FlyWire. With Janelia’s library of 21m images made in this way, Dr Murthy and Dr Seung, ably assisted by 622 researchers from 146 laboratories around the world (as well as 15 enthusiastic “citizen scientist” video-gamers, who helped proofread and annotate the results), bet their software-writing credibility on being able to stitch the images together into a connectome. Which they did.

Besides the numbers of neurons and synapses in the fly brain, FlyWire’s researchers have also counted the number of types of neurons (8,577) and calculated the combined length (149.2 metres) of the message-carrying axons that connect cells. More important still, they have enabled the elucidation not only of a neuron’s links with its nearest neighbours, but also the links those neurons have with those farther afield. (…)
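
In graph terms, the "farther afield" queries the researchers describe are bounded-hop reachability searches. A toy sketch, with invented wiring:

```python
# Every neuron reachable from a seed neuron within k synaptic hops.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E")])

def downstream(graph, seed, k):
    """Neurons reachable from `seed` in at most k hops (excluding seed)."""
    dist = nx.single_source_shortest_path_length(graph, seed, cutoff=k)
    return {n for n, d in dist.items() if d > 0}

print(downstream(g, "A", 1))   # {'B'}            nearest neighbours
print(downstream(g, "A", 2))   # {'B', 'C', 'E'}  one hop farther afield
```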

This sort of thing is scientifically interesting. But to justify the dollars spent on them, projects such as FlyEM and FlyWire should also serve two practical goals. One is to improve the technology of connectome construction, so that it can be used on larger and larger targets—eventually, perhaps, including the brains of Homo sapiens. The other is to discover to what extent non-human brains can act as models for human ones (in particular, models that can be experimented on in ways that will be approved by ethics committees). (…)

These natural experiments, the circuit-diagrams of which connectomes will make available, might even help human computer scientists. Brains are, after all, pretty successful information processors, so reproducing them in silicon could be a good idea. As it is AI models which have made connectomics possible, it would be poetic if connectomics could, in turn, help develop better AI models. ■

An adult fruit fly brain has been mapped—human brains could follow (economist.com)


The brain: the secrets of the ultimate "terra incognita"

Driven by research programmes such as the Human Brain Project, scientists now have complete atlases of the brain's regions. (Le Point, 30 septembre, article payant) 

Extraits :

The project was hugely ambitious. Too ambitious, said its detractors. In 2013 the Human Brain Project was inaugurated with great fanfare: a behemoth funded by the European Commission to the tune of more than 600 million euros. The goal of this research programme, meant to bring 500 scientists into collaboration across the whole continent? To produce a computer simulation of the brain. Nothing less than a virtual twin of the organ, imitating its anatomy and its dynamic functioning, on which all sorts of scientific hypotheses could be tested. As with the space race in its day, the race to explore that terra incognita, the brain, was on.

That same year, the United States devoted 6 billion dollars to a programme to develop new mapping technologies under the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies). In 2014 it was Japan's turn to launch its Brain/MINDS initiative (Brain Mapping by Integrated Neurotechnologies for Disease Studies), much of which consists of mapping the neural networks of the common marmoset. Other countries followed, among them Canada, Australia, South Korea and China, with comparable research programmes.

More than ten years on, where do we stand? Still a long way from the goal. "The initial ambition, namely an extremely precise computer model of the human brain, proved impossible," notes the neuroscientist Philippe Vernier, co-director general of the Ebrains platform, a direct offshoot of the Human Brain Project. The challenge was colossal. "A human brain is 200 billion cells, more than there are stars in our galaxy," explains Hervé Chneiweiss, research director at the CNRS, neurobiologist and neurologist. "And each cell makes roughly 5,000 connections with neighbouring or somewhat more distant cells." (…)
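
Multiplied out, the figures quoted above put the brain's wiring at around a quadrillion connections; a one-line sanity check of that arithmetic:

```python
# Back-of-envelope check: 200 billion cells x ~5,000 connections each
cells = 200e9
connections_per_cell = 5e3
print(f"{cells * connections_per_cell:.0e} connections")  # 1e+15 connections
```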

Cerveau : les secrets de l’ultime « terra incognita » (lepoint.fr)


The drugs don’t work : Ara Darzi on why antibiotic resistance could be deadlier than cancer

To get on top of the crisis, stop prescriptions without a proper diagnosis, argues the surgeon and politician (The Economist, 25 septembre, article payant) 

Extraits :

I HAVE SPENT much of my career at St Mary’s Hospital, in London, a short walk from the laboratory where in 1928 Sir Alexander Fleming made his epoch-defining discovery of penicillin, the first antibiotic.

Millions of lives have been saved since and the drugs were once thought to have put an end to infectious disease. But that dream has died as bacteria resistant to antibiotics have grown and multiplied. Today untreatable infections, for which there is no antibiotic, cause more than 1m deaths a year worldwide, a toll projected to rise ten-fold by 2050, surpassing all deaths from cancer.

Radical action is needed. For only the second time in its history, the UN General Assembly will meet this week to address this global threat and protect humanity from falling into a post-antimicrobial era in which simple infections kill and routine surgery becomes too risky to perform.

A key problem is that antibiotics are too casually prescribed to people, and too widely used in animal agriculture. This happens because they are cheap and have few immediately harmful effects.

I believe we must set a bold new target: by 2030 no antibiotic should be prescribed without a proper diagnosis that identifies the underlying cause as bacterial infection. (…)

Imagine if we had a covid-like test that could be self-administered and could swiftly tell patients and clinicians what they were treating. It would be transformative.

Such tests are becoming available. In June the £8m ($10.4m) Longitude Prize was awarded to a Swedish company, Sysmex Astrego, for developing a test that within 15 minutes can detect which urinary-tract infections are caused by bacteria, and within 45 minutes reveal which antibiotic they are sensitive to.

The challenge in getting the test more widely adopted is that it is currently much more expensive (£25 privately) than antibiotics (measured in pennies). (…)

Anyone anywhere is at risk of contracting a life-threatening, drug-resistant infection. But the crisis is worst in poor and middle-income countries and among patients with multiple medical conditions. Being able to test without the need to access clinics or other traditional health-care settings is crucial to ensuring patients have the information they need to make decisions about their health. (…)

It is eight years since the UN first agreed to stem the growth of drug-resistant infections, but there has been scant progress since. Antibiotics have underpinned medical progress for the past hundred years. We must keep them effective to underpin all that happens in medicine for the next hundred years.

Ara Darzi, Lord Darzi of Denham, is a surgeon, director of the Institute of Global Health Innovation at Imperial College London and chair of the Fleming Initiative. He led the recent report into the performance of the National Health Service in England.

Ara Darzi on why antibiotic resistance could be deadlier than cancer (economist.com)


AI could turn the universe into a realm of darkness, says Yuval Noah Harari

It is coming to an end. For humankind, for the world: that is the grand story Yuval Noah Harari has been telling for years. In his new book "Nexus" he takes on artificial intelligence. (NZZ, 25 septembre, article payant) 

Extraits :

At the very end, he does have a piece of good news to offer. Or at least a hope. On the penultimate page of his new book, Yuval Noah Harari writes that perhaps things will not turn out as badly as he has described them, and that the destruction of humanity can still be halted. If humans can find a way to keep the powers they have created in check.

After almost six hundred pages in which the history of humankind is told as a sequence of discoveries, inventions and conquests that have brought humanity to the edge of the abyss, that is faint consolation. Above all because humans alone are to blame for the predicament. And because only they can help themselves. God is dead, and nobody believes any more in the gods who, in the ancient myths, put the world back in order when it has fallen into disarray. Nor is there any sorcerer to hurry to the apprentice's aid. We are our own sorcerers. Because we have made ourselves our own gods, Harari would say.

Against that backdrop, the confidence that humans could pull themselves out of the swamp by their own hair sounds almost presumptuous. They have had enough time for it over the past hundred thousand years. There were reasons enough, too, to steer the wagon gliding towards ruin back onto the right path. And the means to do so were in the hands of humans, the most resourceful of all living beings, who had given themselves those means. But they did not know how to use them. Or used them wrongly.

That is the grand story Harari has been telling for years: it is coming to an end, for humankind and for the world. Because humans overstep the limits set for them. And because in doing so they make themselves superfluous. The new book tells the same story, but with heightened urgency. For now, in Harari's eyes, humanity faces a great decision. And it is on the verge of committing, once again, a devastating mistake.

In "Nexus" the Israeli historian warns of the dangers of artificial intelligence. For him it is the greatest threat humanity has ever faced. Because the way we use AI does not decide what our future will look like; it decides whether humanity has a future at all. Like Harari's previous books "Sapiens: A Brief History of Humankind" (2013) and "Homo Deus: A History of Tomorrow" (2017), "Nexus" is at bottom not a work of history but a pamphlet. It is not the history professor of the Hebrew University of Jerusalem who speaks here, but a well-read, historically versed prophet who sees humanity heading towards its end. Who tries to explain from history why the end is coming, and knows what ought to be done about it. And who is presumably also aware that it is the fate of prophets not to be heard.

Not that Harari can really complain on that score. His books have been translated into more than sixty languages and have sold more than forty-five million copies. No historian has ever managed that. "Sapiens" alone reached a print run of twenty-five million copies. The powerful of the world bring Harari in as a consultant, Barack Obama recommends his books, Angela Merkel and Emmanuel Macron have met him to talk about the world's problems. Mark Zuckerberg and Bill Gates ask him for advice, and at the World Economic Forum Harari is among the most welcome guests.

(…) For the step to AI, Harari insists, is not comparable to the technical revolutions that changed the course of history in earlier centuries: the invention of clay tablets in Mesopotamia, for example, or the introduction of printing or of television.

The decisive difference, for Harari, is that all previous information networks were mere instruments. They spread information. But that information was determined by humans, whether it was news, myths, inventories of goods or entertainment programmes. Now computers can write texts themselves, research and process information themselves, and take decisions.

AI can organise itself and is able to create information networks of its own. Without humans. (…)

One follows Yuval Noah Harari with a certain pleasure through his excursions, which lead from the Stone Age into the digital future. One shudders at the breadth of a gaze that takes in the whole universe and measures history in aeons. And one will agree with Harari that the development of AI demands critical attention. That, however, is not something one hears from him first. And what we could concretely do, Harari does not tell us. That is probably not his task either. The seer warns, and that is enough. Others are to see to the practical matters.

Wenn das Universum dunkel wird: Yuval Noah Harari warnt vor KI (nzz.ch)


Rosalind Franklin, the gifted chemist, so unjustly forgotten, to whom we owe the discovery of DNA

She is the Nobel's great overlooked figure. In 1952 the British chemist identified the structure of DNA. But the credit would go, ten years later, to three other researchers... all men. A story told by Virginie Girod*. (Le Figaro, 23 septembre, article payant) 

Extraits :

In 2008 the British scientist Rosalind Franklin was posthumously awarded the Louisa Gross Horwitz Prize, which at last recognised her contribution to fundamental research. One prize she will never have: the Nobel. Yet it is to her that we largely owe the uncovering, in the heart of the 1950s, of the double-helix structure of DNA. But on her path stood grasping colleagues who betrayed her. 

Born in 1920, Rosalind was a gifted child. Determined, she managed to study at a time when British universities barely tolerated women. In 1950, with a doctorate and experience gained in France alongside the Curie family, she joined a laboratory at King's College London, where she worked on DNA. In this field everything remained to be discovered. The race for the Nobel was on, and her colleague Maurice Wilkins was its favourite. (…)

Rosalind Franklin was not deprived of her Nobel Prize because she was a woman. She did, however, suffer the immense violence and injustice of the scientific world at a time when the race for prizes justified every low blow. She died before she could claim the fruits of her work. But fate is sometimes surprising. Today, who remembers her three colleagues, while Rosalind Franklin has been raised to the pinnacle (a plaque was recently installed on Rue Garancière in Paris, where she lived)? She never had celebrity, but she enjoys a unique posterity.

Rosalind Franklin, la chimiste surdouée si injustement oubliée à qui l’on doit la découverte de l’ADN (lefigaro.fr)


Shrink to fit : The semiconductor industry faces its biggest technical challenge yet

As Moore’s law fades, how can more transistors be fitted onto a chip? (The Economist, 19 septembre, article payant) 

Extraits :

(…) Gordon Moore’s original observation, in 1965, was that as the making of chips got better, the transistors got smaller, which meant you could make more for less. In 1974 Robert Dennard, an engineer at IBM, noted that smaller transistors did not just lower unit costs, they also offered better performance. As the distance between source and drain shrinks, the speed of the switch increases, and the energy it uses decreases. “Dennard scaling”, as the observation is known, amplifies the amount of good that Moore’s law does.
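
Since the paragraph compresses a fair amount of algebra, here is textbook Dennard scaling as a toy calculator (an idealised sketch, not a model of any real process node): shrink linear dimensions and supply voltage by a factor k, so per-transistor capacitance falls as 1/k while switching frequency rises as k.

```python
# Classical Dennard-scaling rules. Idealised textbook scaling only;
# real post-90nm chips leak too much current for this to hold.
def dennard(k):
    area = 1 / k**2          # transistor area shrinks as 1/k^2
    delay = 1 / k            # switching delay falls -> higher clock speed
    power = 1 / k**2         # per-transistor power: C*V^2*f ~ (1/k)(1/k^2)(k)
    return {"area": area, "delay": delay, "power": power,
            "power_density": power / area}   # constant: the "free ride"

print(dennard(2))
# {'area': 0.25, 'delay': 0.5, 'power': 0.25, 'power_density': 1.0}
```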

In 1970 the gate length, a proxy for the distance between the source and drain, was ten microns (ten millionths of a metre, or 10,000nm). By the early 2000s this was down to 90nm. At this level, quantum effects cause current to flow between the two terminals even when the transistor is off. This leakage current increases the power used and causes the chip to heat up.

For chipmakers that was an early indication that their long, sort-of-free ride was ending (see chart). Transistors could still be made smaller but the leakage current placed a limit on how low a chip’s voltage could be reduced. This in turn meant that the chip’s power could not be reduced as before. This “power wall” marked the end of Dennard scaling—transistor sizes shrank, but chip speeds no longer got quicker and their power consumption was now an issue. To keep improving performance, designers started arranging logic gates and other elements of their chips in multiple, connected processing units, or “cores”. With multiple cores, a processor can run many applications simultaneously or run a single application faster by splitting it into parallel streams. (…)
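
The "parallel streams" idea is easy to see in miniature. A sketch using only Python's standard library, splitting one computation across cores (illustrative only; nothing here is from the article):

```python
# Split one job into parallel streams, one per core.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor() as pool:        # one stream per core
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the serial loop, computed in parallel
```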

In 2013 Max Shulaker, now at MIT, with Subhasish Mitra and Philip Wong, both of Stanford University, built the first microprocessor using carbon-nanotube (CNT) transistors. The researchers designed an “imperfection-immune” processor that functions even if a certain number of CNTs misbehave. By 2019 Mr Shulaker had devised a microprocessor built with 14,000 CNT transistors (half the number found in the 8086, a chip released by Intel in 1978). In 2023 researchers at Peking University built a transistor using CNTs on manufacturing technology that can be scaled down to the size of a 10nm silicon node. The results may seem basic, but they underscore the potential of CNTs as an alternative to silicon.

In 1959 Richard Feynman, a physicist, gave a lecture that presaged the nanotechnology era. He wondered, “What would happen if we could arrange the atoms one by one the way we want them?” With semiconductor device features now atomic lengths, the world has its answer: build smaller transistors. ■

The semiconductor industry faces its biggest technical challenge yet (economist.com)


The motherlode : Breast milk’s benefits are not limited to babies

Some of its myriad components are being tested as treatments for cancer and other diseases (The Economist, 14 septembre, article payant)  

Extraits :

IN A TALK she gave in 2016, Katie Hinde, a biologist from Arizona State University, lamented how little scientific attention was commanded by breast milk. Up until that point, she said, both wine and tomatoes had been far more heavily studied. Eight years on, alas, that remains true.

What is also true—and this was the serious point of Dr Hinde’s talk—is that scientists have been neglecting a goldmine. Unlike wine or tomatoes, breast milk’s physiological properties have been honed by evolution to be healthy. In babies it can reduce inflammation, kill pathogens and improve the health of the immune system. As a result, some components of breast milk are now being studied as potential treatments for a host of adult conditions, including cancer, heart disease, arthritis and irritable bowel syndrome (IBS). Scientists may never look at breast milk in the same way again. (…)

In a recent study of milk from 1,200 mothers on three continents, Dr Azad and her colleagues found roughly 50,000 small molecules, most of them unknown to science. By using artificial-intelligence (AI) models to analyse this list of ingredients, and link them to detailed health data on babies and their mothers, they hope to identify components beneficial for specific aspects of babies’ development. (…)
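
The article gives no methodological detail, but a hedged sketch of this kind of analysis might look as follows, on entirely invented data: score components by how strongly a model associates them with a health outcome.

```python
# Invented data, for illustration only: rank milk components by how
# strongly a random forest links them to an infant-health outcome.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_mothers, n_components = 1200, 500              # stand-ins for real scale
X = rng.normal(size=(n_mothers, n_components))   # component concentrations
outcome = 2.0 * X[:, 42] - 1.0 * X[:, 7] + rng.normal(scale=0.5, size=n_mothers)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, outcome)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("candidate components:", top)   # components 42 and 7 should rank high
```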

It is not only the molecules in breast milk that could have health benefits. Until about 15 years ago, says Dr Azad, it was assumed that breast milk was largely sterile. But genetic-sequencing tools have revealed it contains a wide variety of bacteria. Some, such as Bifidobacterium, a particularly beneficial bacterium that feeds exclusively on HMOs (human milk oligosaccharides), can survive the trip into the baby’s gut, where it strengthens the gut barrier; regulates immune responses and inflammation; and prevents pathogenic bacteria from adhering to the lining of the gut. That makes it an ideal candidate for use in probiotics, live bacterial supplements used to remedy the gut’s ecosystem. (…)

Other exciting results have emerged from studying breastfeeding itself. When babies breastfeed, some of the milk ends up in their nasal cavity. It is possible, say scientists, that it could then make its way into the brain. In a small study in 2019 doctors at the Children’s Hospital in Cologne nasally administered maternal breast milk to 16 premature babies with brain injury. The babies subsequently had less brain damage, and required less surgery, than those who did not receive the treatment.

Similar results were reported in May by researchers at the Hospital for Sick Children in Toronto. In a small safety study, they administered intranasal breast milk as a preventive treatment for brain haemorrhage in premature babies; 18 months later, babies so treated had better motor and cognitive development, and fewer vision problems, than those fed only the usual way. Though bigger trials are needed to confirm these results, stem cells in the milk may be repairing some of the damage.

It is too early to tell whether any blockbuster drugs will result. But breast-milk scientists are starting to feel vindicated. For Bruce German from the University of California, the neglect of breast milk will rank “as one of the great embarrassments of scientific history.” ■

Breast milk’s benefits are not limited to babies (economist.com)


The Future of Warfare Is Electronic

An audacious Ukrainian incursion into Russia shows why. Is the Pentagon paying enough attention? (WSJ, 5 septembre, article payant)  

Extraits:

The Ukrainian army has launched a stunning offensive into Kursk, Russia, under a shield of advanced electronic weapons. The war in Ukraine is demonstrating that 21st-century conflicts will be won or lost in the arena of electronic warfare.

Think of electronic warfare as casting spells on an invisible battlefield. Combatants strive to preserve their own signals, while disrupting those of the enemy. In Kursk, the Ukrainians took advantage of their technical knowledge to achieve a leap in battlefield tactics. Using a variety of electronic sensing systems, they managed to figure out the key Russian radio frequencies along the invasion route. They jammed these frequencies, creating a series of electronic bubbles that kept enemy drones away from Ukrainian forces, allowing reconnaissance units, tanks and mechanized infantry to breach the Russian border mostly undetected. This is the chaotic way of modern combat: a choreography of lightweight, unmanned systems driven by a spiderweb of electronic signals. (…)
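
The sensing half of that story rests on a standard signal-processing idea: sample the spectrum and look for the dominant frequencies. A generic toy example (illustrative only; real electronic-warfare gear works on RF hardware, and nothing below describes the Ukrainian systems):

```python
# Find the dominant frequencies in a sampled signal with an FFT.
import numpy as np

fs = 10_000                                   # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
signal = (np.sin(2 * np.pi * 1200 * t)        # strong carrier at 1200 Hz
          + 0.5 * np.sin(2 * np.pi * 3400 * t)
          + 0.1 * np.random.default_rng(1).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(spectrum)[::-1][:2]]
print(sorted(peaks))                          # ~[1200.0, 3400.0] Hz
```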

The Russians have so far been unable to dislodge these innovators but have begun using their own jammers to counter the waves of Ukrainian drone fleets supporting them, effectively creating a classic blockade. With the local electronic environment scrambled, Ukrainian drones have difficulty operating. If the Russians succeed, they could isolate the Ukrainian forces on the island. As these struggles reveal, the ultimate prize in modern warfare is spectrum dominance: ensuring one’s own control of drone networks while detecting and denying the adversary’s. (…)

America has a reputation as a global innovator, yet it trails in the dark arts of electronic warfare. Improvised jamming systems and dozens of counter-drone systems have created a spectral environment that the U.S. military isn’t yet prepared to navigate. American drones and munitions frequently can’t overcome the jamming of their guidance systems. Yet we send them to Ukraine, where the Russians often scramble them before they reach their targets. (…)

A military that can’t build a dynamic electronic shield around its own forces will likewise be unable to maneuver in the coming drone wars. Modern electronic-warfare systems mounted on low-cost drones are now as necessary as munitions. New companies are in the early stages of building the right weapons but need the Pentagon to recognize the same future—and spend accordingly.

We aren’t the only ones watching Ukraine. China moves at the speed of war, while the U.S. moves at the speed of bureaucracy. If we retool our approach to electronic warfare, America will tip the scales in favor of deterrence and, if necessary, victory. If not, we will be subject to the harsh lessons inevitably faced by those who fight the last war.

Mr. Smith is a former U.S. Army attack aviator and officer of the 160th Special Operations Aviation Regiment. Mr. Mintz, an aerospace engineer, was founding CEO of the defense startups Epirus, Spartan Radar and now CX2.

The Future of Warfare Is Electronic – WSJ


Top economist: EU must invest more in high tech or lose out to the United States

Philippe Aghion has had a decisive influence on how economists understand economic growth, innovation, and the rise and fall of companies. The French native, who taught at Harvard University for many years, believes that Europe must change if it wants to keep up with the United States. (NZZ, entretien, 5 septembre, article payant) 

Extraits:

The CEO of Norway’s sovereign wealth fund, Nicolai Tangen, wants to invest more in the United States, and less in Europe. He says that Europeans are too lazy, and no longer want to work hard. What do you say to that, Mr. Aghion?

Mr. Tangen is absolutely right in many respects. Europe regulates too much and too rigidly, has too little presence in the high-tech sector, and makes too few truly groundbreaking innovations.

Why is that?

Europe is a giant when it comes to regulation. In this regard, Europe’s politicians don’t have bad intentions. They are simply trying to drive forward political integration with economic regulations, trying to unite Europe’s nation states more closely into a single entity.

Are you thinking of the monetary union?

The monetary union was adopted as an instrument of political integration. Policymakers then wanted to ensure that the member states didn’t overspend. They have tried to do this with too many overly strict rules. That hasn’t worked. There is the 3% deficit ceiling, which does not differentiate between whether a state simply wants to spend money or wants instead to invest in the future. And there is a competition policy that is too strict, in that it prohibits all state aid. It would be better to allow states to engage actively in industrial policy, but ensure that this is done in a competition-friendly manner.

Are you really telling me that the EU is falling behind the United States because EU states can spend too little money?

Like China, the United States is pursuing a very deliberate industrial policy. The big risk is that Europe will fall behind.

Europe’s productivity has grown more slowly than America’s since the end of the 1990s.

Europe has at least partially missed out on the IT revolution. Why has that happened? America invests much more in cutting-edge technology than Europe does. The two spend a similar amount on research, but the U.S. focuses much more on high-tech and breakthrough innovations that have helped its leading tech companies grow. (…)

Economist: EU must invest more in high-tech innovation (nzz.ch)


Yuval Harari: What Happens When the Bots Start Competing for Your Love? (NYT, tribune, 4 septembre, quelques articles gratuites / sem.)

Extraits:

Democracy is a conversation. Its function and survival depend on the available information technology. For most of history, no technology existed for holding large-scale conversations among millions of people. In the premodern world, democracies existed only in small city-states like Rome and Athens, or in even smaller tribes. Once a polity grew large, the democratic conversation collapsed, and authoritarianism remained the only alternative.

Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.

This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election. A similar breakdown is happening in numerous other democracies around the world, from Brazil to Israel and from France to the Philippines.

In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with each other, and even more so the ability to listen.

As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information. But the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human. (…)

The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.

However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation. Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them. (…)

In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people. What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs? (…)

A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots.

Opinion | Yuval Harari: A.I. Threatens Democracy – The New York Times (nytimes.com)


How Self-Driving Cars Get Help From Humans Hundreds of Miles Away (NYT, 4 septembre, quelques articles gratuites / sem.)

Extraits:

In places like San Francisco, Phoenix and Las Vegas, robot taxis are navigating city streets, each without a driver behind the steering wheel. Some don’t even have steering wheels.

But cars like the ones in Las Vegas are sometimes guided from a command center in Foster City, Calif., operated by Zoox, a self-driving car company owned by Amazon. Like other robot taxis, the company’s self-driving cars sometimes struggle to drive themselves, so they get help from human technicians sitting in a room about 500 miles away.

Inside companies like Zoox, this kind of human assistance is taken for granted. Outside such companies, few realize that autonomous vehicles are not completely autonomous.

For years, companies avoided mentioning the remote assistance provided to their self-driving cars. The illusion of complete autonomy helped to draw attention to their technology and encourage venture capitalists to invest the billions of dollars needed to build increasingly effective autonomous vehicles. (…)

See How Humans Help Self-Driving Cars Navigate City Streets – The New York Times (nytimes.com)


Why America’s tech giants have got bigger and stronger

Whatever happened to creative destruction? (The Economist, 23 août, article payant) 

Extraits:

When your columnist first started writing Schumpeter in early 2019, he had a romantic idea of travelling the world and sending “postcards” back from faraway places that chronicled trends in business, big and small. In his first few weeks, he reported from China, where a company was using automation to make fancy white shirts; Germany, where forest-dwellers were protesting against a coal mine; and Japan, where a female activist was making a ninja-like assault on corporate governance. All fun, but small-bore stuff. Readers, his editors advised him, turn to this column not for its generous travel budget but for its take on the main business stories of the day. So he pivoted, adopting what he called the Linda Evangelista approach. From then on, he declared, he would not get out of bed for companies worth less than $100bn.

This is his final column and, as he looks back, that benchmark seems quaint. At the time, the dominant tech giants were already well above it. Microsoft was America’s biggest company, worth $780bn, closely followed by its big-tech rivals: Apple, Amazon, Alphabet and Meta. Their total value back then was $3.4trn. Today the iPhone-maker alone exceeds that.

Since early 2019 the combined worth of the tech giants has more than tripled, to $11.8trn. Add in Nvidia, the only other American firm valued in the trillions, thanks to its pivotal role in generative artificial intelligence (AI), and they fetch more than one and a half times the value of America’s next 25 firms put together. That includes big oil (ExxonMobil and Chevron), big pharma (Eli Lilly and Johnson & Johnson), big finance (Berkshire Hathaway and JPMorgan Chase) and big retail (Walmart). In other words, while the tech illuminati have grown bigger and more powerful, the rest lag ever further behind.

It is tempting to view this as an aberration. This column is named after Joseph Schumpeter, the late Austrian-American economist who made famous the concept of creative destruction—the relentless tide of disruptive innovation toppling old orders and creating new ones. Surely these tech firms, founded decades ago in dorms, garages and dingy offices, should be vulnerable to the same Schumpeterian forces that they once unleashed on their industrial forebears.

But creative destruction, at least as framed by the original Schumpeter, is more complicated than that. To be sure, he revered entrepreneurs. He considered them, as we do today, the cult heroes of business, driving the economy forward with new products and ways of doing things. But late in life, after he had witnessed decades of dominance by big American corporations, he changed his tune. He decided that large firms, even monopolies, were the big drivers of innovation. They had the money to invest in new technology, they attracted the best brains—and they had most to lose if they did not stay alert. That may disappoint those who see business as a David v Goliath struggle of maverick upstarts against managerial apparatchiks. But it was prescient. It helps explain why today’s tech Goliaths vastly outspend, buy up and outflank startups before they get the chance to sling a stone. (…)

Why America’s tech giants have got bigger and stronger (economist.com)


The new gender gap : Why don’t women use artificial intelligence?

Even when in the same jobs, men are much more likely to turn to the tech (The Economist, 22 août, article payant)  

Extraits:

Be more productive. That is how ChatGPT, a generative-artificial-intelligence tool from OpenAI, sells itself to workers. But despite industry hopes that the technology will boost productivity across the workforce, not everyone is on board. According to two recent studies, women use ChatGPT between 16 and 20 percentage points less than their male peers, even when they are employed in the same jobs or read the same subject.

The first study, published as a working paper in June, explores ChatGPT at work. Anders Humlum of the University of Chicago and Emilie Vestergaard of the University of Copenhagen surveyed 100,000 Danes across 11 professions in which the technology could save workers time, including journalism, software development and teaching. The researchers asked respondents how often they turned to ChatGPT and what might keep them from adopting it. By exploiting Denmark’s extensive, hooked-up record-keeping, they were able to connect the answers with personal information, including income, wealth and education level.

Across all professions, women were less likely to use ChatGPT than men who worked in the same industry (see chart 1). For example, only a third of female teachers used it for work, compared with half of male teachers. Among software developers, almost two-thirds of men used it while less than half of women did. This gap shrank only slightly, to 16 percentage points, when directly comparing people in the same firms working on similar tasks. The study concludes that a lack of female confidence may be in part to blame: women who did not use AI were more likely than men to highlight that they needed training to use the technology. (…)

Why don’t women use artificial intelligence? (economist.com)


Artificial intelligence : Mark Zuckerberg and Daniel Ek on why Europe should embrace open-source AI

It risks falling behind because of incoherent and complex regulation, say the two tech CEOs (The Economist, 22 août, tribune, article payant)  

Extraits:

THIS IS AN important moment in technology. Artificial intelligence (AI) has the potential to transform the world—increasing human productivity, accelerating scientific progress and adding trillions of dollars to the global economy.

But, as with every innovative leap forward, some are better positioned than others to benefit. The gaps between those with access to build with this extraordinary technology and those without are already beginning to appear. That is why a key opportunity for European organisations is through open-source AI—models whose weights are released publicly with a permissive licence. This ensures power isn’t concentrated among a few large players and, as with the internet before it, creates a level playing field. (…)

Regulating against known harms is necessary, but pre-emptive regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation. Europe’s risk-averse, complex regulation could prevent it from capitalising on the big bets that can translate into big rewards.

Take the uneven application of the EU’s General Data Protection Regulation (GDPR). This landmark directive was meant to harmonise the use and flow of data, but instead EU privacy regulators are creating delays and uncertainty and are unable to agree among themselves on how the law should apply. For example, Meta has been told to delay training its models on content shared publicly by adults on Facebook and Instagram—not because any law has been violated but because regulators haven’t agreed on how to proceed. In the short term, delaying the use of data that is routinely used in other regions means the most powerful AI models won’t reflect the collective knowledge, culture and languages of Europe—and Europeans won’t get to use the latest AI products.

These concerns aren’t theoretical. Given the current regulatory uncertainty, Meta won’t be able to release upcoming models like Llama multimodal, which has the capability to understand images. That means European organisations won’t be able to get access to the latest open-source technology, and European citizens will be left with AI built for someone else.

The stark reality is that laws designed to increase European sovereignty and competitiveness are achieving the opposite. This isn’t limited to our industry: many European chief executives, across a range of industries, cite a complex and incoherent regulatory environment as one reason for the continent’s lack of competitiveness.

Europe should be simplifying and harmonising regulations by leveraging the benefits of a single yet diverse market. Look no further than the growing gap between the number of homegrown European tech leaders and those from America and Asia—a gap that also extends to unicorns and other startups. Europe needs to make it easier to start great companies, and to do a better job of holding on to its talent. Many of its best and brightest minds in AI choose to work outside Europe.

In short, Europe needs a new approach with clearer policies and more consistent enforcement. With the right regulatory environment, combined with the right ambition and some of the world’s top AI talent, the EU would have a real chance of leading the next generation of tech innovation. (…)

While we can all hope that with time these laws become more refined, we also know that technology moves swiftly. On its current course, Europe will miss this once-in-a-generation opportunity. Because the one thing Europe doesn’t have, unless it wants to risk falling further behind, is time. ■

Mark Zuckerberg and Daniel Ek on why Europe should embrace open-source AI (economist.com)


How Did the First Cells Arise? With a Little Rain, Study Finds.

Researchers stumbled upon an ingredient that can stabilize droplets of genetic material: water. (NYT, 22 août, quelques articles gratuites / sem.)

Extraits:

Rain may have been an essential ingredient for the origin of life, according to a study published on Wednesday.

Life today exists as cells, which are sacs packed with DNA, RNA, proteins and other molecules. But when life arose roughly four billion years ago, cells were far simpler. Some scientists have investigated how so-called protocells first came about by trying to recreate them in labs.

Many researchers suspect that protocells contained only RNA, a single-stranded version of DNA. Both RNA and DNA store genetic information in their long sequences of molecular “letters.”

But RNA can also bend into intricate shapes, turning itself into a tool for cutting or joining other molecules together. Protocells might have reproduced if their RNA molecules grabbed genetic building blocks to assemble copies of themselves. (…)

Dr. Agrawal discovered that the water was responsible for keeping the droplets stable. The water coaxed the molecules in the outer layer of the droplets to link together. “You can imagine a mesh forming around these droplets,” said Dr. Agrawal, now a postdoctoral researcher at the Pritzker School of Molecular Engineering at the University of Chicago. (…)

But rain on the early Earth most likely had a different chemistry from rain today, because it formed in an atmosphere with a different balance of gases. The high level of carbon dioxide believed to be in the air four billion years ago would have made raindrops more acidic. Dr. Agrawal and his colleagues found they could still form stable RNA droplets with water as acidic as vinegar.

Neal Devaraj, a chemical biologist at the University of California, San Diego, who was not involved in the new study, said that it could shed light on the origin of life because the researchers didn’t have to do all that much to make stable RNA droplets: just mix and shake.

“It’s something you can imagine happening on the early Earth,” he said. “Simple is good when you’re thinking about these questions.”

How Did the First Cells Arise? With a Little Rain, Study Finds. – The New York Times (nytimes.com)


Bad news, red wine drinkers: alcohol is only ever bad for your health

We needn’t be puritanical about having a drink, but we can no longer deny that it harms us, even in small quantities (The Guardian, 22 août, libre accès)

Extraits:

To say yes to that glass of wine or beer, or just get a juice? That’s the question many people face when they’re at after-work drinks, relaxing on a Friday night, or at the supermarket thinking about what to pick up for the weekend. I’m not here to opine on the philosophy of drinking, and how much you should drink is a question only you can answer. But it’s worth highlighting the updated advice from key health authorities on alcohol. Perhaps it will swing you one way or the other.

It’s well-known that binge-drinking is harmful, but what about light to moderate drinking? In January 2023, the World Health Organization came out with a strong statement: there is no safe level of drinking for health. The agency highlighted that alcohol causes at least seven types of cancer, including breast cancer, and that ethanol (alcohol) directly causes cancer when our cells break it down.

Reviewing the current evidence, the WHO notes that no studies have shown any beneficial effects of drinking that would outweigh the harm it does to the body. A key WHO official noted that the only thing we can say for sure is that “the more you drink, the more harmful it is – or, in other words, the less you drink, the safer it is”. It makes little difference to your body, or your risk of cancer, whether you pay £5 or £500 for a bottle of wine. Alcohol is harmful in whatever form it comes in. (…)

Bad news, red wine drinkers: alcohol is only ever bad for your health | Devi Sridhar | The Guardian


Keeping your marbles : How to reduce the risk of developing dementia

A healthy lifestyle can prevent or delay almost half of cases (The Economist, 6 août, article payant)  

Extraits:

Some of the best strategies for reducing the chances of developing dementia are, to put it kindly, impracticable: don’t grow old; don’t be a woman; choose your parents carefully. But although old age remains by far the biggest risk factor, women are more at risk than men, and some genetic inheritances make dementia more likely or even almost inevitable, the latest research suggests that as many as 45% of cases of dementia are preventable—or at least that their onset can be delayed.

That is the conclusion of the latest report, published on July 31st, of the Lancet commission on dementia, which brings together leading experts from around the world, and enumerates risk factors that, unlike age, are “modifiable”. It lists 14 of these, adding two to those in its previous report in 2020: untreated vision loss; and high levels of LDL cholesterol. Most news about dementia seems depressing, despite recent advances in treatments for some of those with Alzheimer’s disease, much the most common cause of the condition. Most cases remain incurable and the numbers with the condition climb inexorably as the world ages. That the age-related incidence of dementia can actually be reduced is a rare beacon of hope.

The modifiable risk factors include: smoking, obesity, physical inactivity, high blood pressure, diabetes and drinking too much alcohol (see chart). The best way a person can reduce their risk of developing dementia is to lead what has long been identified as a healthy life: avoiding tobacco and too much alcohol and taking plenty of exercise (but avoiding forms of it that involve repeated blows to the head or bouts of concussion, like boxing, American football, rugby and lacrosse).

It also means having a good diet, defined in one study cited by the commission as: “eat at least three weekly servings of all of fruit, vegetables and fish; rarely drink sugar-sweetened drinks; rarely eat prepared meat like sausages or have takeaways.” So it is not surprising that LDL cholesterol has been added to its not-to-do list. It is also important to exercise the brain: by learning a musical instrument or a foreign language, for example—or even by doing crossword and sudoku puzzles.

Some of the modifiable risk factors are in fact far beyond any individual’s control. For example, it makes a big difference how many years of education someone has had. Broadly speaking, the higher the level of educational attainment, the lower the risk of dementia. And the only way to escape another risk factor—polluted air—is to move.  (…)

Nevertheless, there is plenty of evidence to show that the risk factors outlined by the commission are salient. In the rich West, for example, the incidence rate of dementia has declined by 13% per decade over the past 25 years, consistently across studies. Gill Livingston, a professor in the psychiatry of older people at University College London and leader of the Lancet commission, has summed up the evidence of progress in North America and Europe as “a 25% decrease in the past 20 years”. That can only be as a result of changes in modifiable risk factors.

Despite the upbeat tone of the commission’s report, in some countries, such as China and Japan, the age-related incidence of dementia is climbing. In Japan, the overall age-adjusted prevalence rate doubled from 4.9% in 1985 to 9.6% in 2014. And according to the China Alzheimer Report of 2022, the incidence of Alzheimer’s in China had “steadily increased”, making it the fifth-most important cause of death in the country.

So nobody doubts that the prevalence of dementia is going to climb fast in the next decades as humanity ages. All the more reason for dementia-risk reduction to become a global policy priority. ■

How to reduce the risk of developing dementia (economist.com)


Alzheimer's disease: two new risk factors identified

45% of dementia cases could be avoided through preventive measures, estimates an international working group that has identified 14 parameters with a measurable influence on the onset of the disease. (Le Figaro, 1er août, article payant)  

Extraits:

(…) The 57-page report updates an earlier study. In 2020 the scientists had identified 12 risk factors: a low level of education, hearing loss, hypertension, smoking, obesity, depression, physical inactivity, diabetes, excessive alcohol consumption (defined as more than 17 drinks per week), head injury, air pollution and social isolation. "The evidence has accumulated and is now stronger" on these levers of prevention, the researchers say. After a new review of the literature, they add two further risk factors: untreated vision loss and a high level of LDL cholesterol (the "bad cholesterol"). (…)

More generally, the study's authors point out that overall health has an impact on the onset of these cognitive disorders. Physical, social and intellectual activities have been shown to strengthen the "cognitive reserve" (or "brain resilience"), which delays the appearance of symptoms even in individuals whose brains already show neurological damage.

Maladie d’Alzheimer: deux nouveaux facteurs de risque identifiés (lefigaro.fr)


The power of algorithms : Should we listen to Elon Musk when he worries about the major ideological biases of the main AI tools?

Behind artificial-intelligence algorithms lie ideological and political biases that carry heavy consequences for society. (Atlantico, 1er août, quelques articles gratuites / sem.)

Extraits:

Atlantico: Elon Musk has raised questions about the ideological biases of artificial-intelligence tools, querying the answers they give to questions about the assassination attempt on Donald Trump and about the electoral campaign in the United States. As Americans prepare to vote in November, do AI tools contain ideological biases? Can these tools have consequences within society, or even in politics? Can AI influence voters?

Fabrice Epelboin: From the moment AI became the intermediary between the general public and information, between the general public and knowledge, it by its very nature affects and influences society. AI now occupies a role in society that used to belong to the national education system and to the media. That role of intermediary between citizens and information and knowledge has been lost by the media and by education. Intermediation algorithms now occupy the most important role. That is why TikTok, Twitter and Facebook are singled out. They play an absolutely central social role, especially among the youngest, who never knew the old great gatekeepers of knowledge, such as municipal libraries, or who were not lucky enough to be born into sufficiently privileged and cultivated milieus. All of that has been replaced by algorithms. 

This therefore has an absolutely colossal societal impact. This societal shift, and the biases, notably political ones, at work at the heart of AI, have a phenomenal impact on the way each of us forms an idea of the world. To form an idea of the world, everyone goes through these algorithms. (…)

Does this point to a form of ideological hold on the part of certain tech giants, or to a desire to influence society?

The tech giants are an intermediary to knowledge. Google is an intermediary to knowledge. Facebook and Twitter are intermediaries to information. These intermediaries have taken the place of the old intermediaries, which were the media and the press. 

Today the main intermediaries to knowledge and information are artificial-intelligence algorithms. Holding such an important position in society has given rise to a multitude of abuses by companies of varying degrees of benevolence. 

This intermediary role allows them to have a political impact on society.

Faut-il écouter Elon Musk quand il s’inquiète des biais idéologiques majeurs des principaux outils d’IA ? | Atlantico.fr


Men are spending more time looking after their children – and it’s not just cultural, it’s in their genes

New research turns on its head the idea that the cascade of hormones brought on by parenthood is limited to mothers (The Guardian, 30 juillet, opinion, libre accès)

Excerpt:

(…) Sarah Blaffer Hrdy, another great US anthropologist, points out in her recent book Father Time: A Natural History of Men and Babies that although there are obvious biological differences between men and women, we have almost the same genes and very similar brains. Consequently, men’s bodies retain the potential to do things typically associated with women, and vice versa.

A striking example of this is men’s hormonal response to fatherhood. When dads have prolonged periods of intimacy with babies, their bodies react in similar ways to new mums. Prolactin and oxytocin levels rapidly rise. Levels of testosterone – the male sex hormone – fall.

This is the biochemical basis of the philosopher Roman Krznaric’s observation that fatherhood increased his emotional range “from a meagre octave to a full keyboard of human feelings”. Less poetically, it is why I feel ecstatic when my toddler does a poo, and burst into tears when Clay Calloway walks on stage towards the end of Sing 2.

The maternal endocrine response – the hormone changes women experience during and after pregnancy – arises in the subcortex, the part of the brain that is common to all vertebrates and has remained largely unchanged for millions of years. Hrdy argues that the evolutionary origins of this response can in fact be traced back to male fish.

Piscine mums tend to lay their eggs and then forage for food in preparation to produce more eggs. It won’t surprise anyone who has watched Finding Nemo that fish dads often hover near nests to nurture and protect eggs they have fertilised. In nature, mothers are not always the primary carers; in many instances, it is the father’s role.

The prize for the best fish dads in the world goes to species from the Syngnathidae family. Female seahorses, pipefish and sea dragons inject their eggs into the male’s brood pouch, where they are fertilised and incubated. Not only do daddy Syngnathidae gestate and give birth, but the hormones involved are very similar to those regulating human pregnancies. Prolactin promotes the enzyme that breaks down the egg membranes, creating a nourishing fluid that the embryos feast on; and labour is stimulated by the fishy equivalent of oxytocin.

Human fatherhood is not this full-on, but when culture, choice or happenstance gives men caring responsibilities for infants, it triggers a similar endocrine response to mothers. Oxytocin and prolactin course through the brain, enhancing the father’s emotional wellbeing and social connections. For many fathers spending time with their baby, sharing the burden with their partner, or doing their bit to bring down the patriarchy is enough of a reward. But now we know there is another benefit: access to a part of the human experience that until recently was assumed to be closed to men.

For too long, simplistic interpretations of biology have been used to argue that traditional gender roles, in which women take on primary responsibility for childcare, are natural and immutable. We now know that biology can, in fact, free women and men from these binary straitjackets.

Jonathan Kennedy teaches politics and global health at Queen Mary University of London and is the author of Pathogenesis: How Germs Made History

Men are spending more time looking after their children – and it’s not just cultural, it’s in their genes | Jonathan Kennedy | The Guardian


Artificial Intelligence Gives Weather Forecasters a New Edge

The brainy machines are predicting global weather patterns with new speed and precision, doing in minutes and seconds what once took hours. (NYT, 30 juillet, quelques articles gratuits / sem.)

Excerpt:

(…) The Texas prediction offers a glimpse into the emerging world of A.I. weather forecasting, in which a growing number of smart machines are anticipating future global weather patterns with new speed and accuracy. In this case, the experimental program was GraphCast, created in London by DeepMind, a Google company. It does in minutes and seconds what once took hours.

“This is a really exciting step,” said Matthew Chantry, an A.I. specialist at the European Center for Medium-Range Weather Forecasts, the agency that got upstaged on its Beryl forecast. On average, he added, GraphCast and its smart cousins can outperform his agency in predicting hurricane paths.

In general, superfast A.I. can shine at spotting dangers to come, said Christopher S. Bretherton, an emeritus professor of atmospheric sciences at the University of Washington. For treacherous heat, winds and downpours, he said, the usual warnings will be “more up-to-date than right now,” saving untold lives.

Rapid A.I. weather forecasts will also aid scientific discovery, said Amy McGovern, a professor of meteorology and computer science at the University of Oklahoma who directs an A.I. weather institute. She said weather sleuths now use A.I. to create thousands of subtle forecast variations that let them find unexpected factors that can drive such extreme events as tornadoes. (…)
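
A rough sketch of the ensemble idea described above, assuming a generic fast AI forecaster (the ai_forecast function, grid size and perturbation scale are hypothetical placeholders, not GraphCast’s actual interface):

```python
import numpy as np

def ai_forecast(state: np.ndarray) -> np.ndarray:
    """Placeholder for a fast learned forecast model (e.g. a neural net that
    maps today's gridded atmosphere to tomorrow's). Toy dynamics here."""
    return 0.95 * state + 0.5

def ensemble(initial_state: np.ndarray, members: int = 1000,
             noise: float = 0.01, seed: int = 0) -> np.ndarray:
    """Run many forecasts from slightly perturbed initial conditions.

    Because AI inference costs seconds rather than supercomputer hours,
    thousands of members become affordable, letting researchers see which
    tiny initial differences steer a run toward an extreme outcome."""
    rng = np.random.default_rng(seed)
    runs = [ai_forecast(initial_state + rng.normal(0.0, noise, initial_state.shape))
            for _ in range(members)]
    return np.stack(runs)

runs = ensemble(np.zeros((64, 64)))  # toy 64x64 "atmosphere"
print(runs.mean(), runs.std())       # ensemble mean and spread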

“It’s a turning point,” said Maria Molina, a research meteorologist at the University of Maryland who studies A.I. programs for extreme-event prediction. “You don’t need a supercomputer to generate a forecast. You can do it on your laptop, which makes the science more accessible.” (…)

“With A.I. coming on so quickly, many people see the human role as diminishing,” Mr. Rhome added. “But our forecasters are making big contributions. There’s still very much a strong human role.”

How AI Speeds Up Forecasting for Hurricanes and Global Weather Patterns – The New York Times (nytimes.com)


Critical moment : AI can predict tipping points before they happen

Potential applications span from economics to epidemiology (The Economist, 29 juillet, article payant) 

Excerpt:

Anyone can spot a tipping point after it’s been crossed. Also known as critical transitions, such mathematical cliff-edges influence everything from the behaviour of financial markets and the spread of disease to the extinction of species. The financial crisis of 2007-09 is often described as one. So is the moment that covid-19 went global. The real trick, therefore, is to spot them before they happen. But that is fiendishly difficult.

Computer scientists in China now show that artificial intelligence (AI) can help. In a study published in the journal Physical Review X, the researchers accurately predicted the onset of tipping points in complicated systems with the help of machine-learning algorithms. The same technique could help solve real-world problems, they say, such as predicting floods and power outages, buying valuable time.

To simplify their calculations, the team reduced all such problems to ones taking place within a large network of interacting nodes, the individual elements or entities within a large system. In a financial system, for example, a node might represent a single company, and a node in an ecosystem could stand for a species. The team then designed two artificial neural networks to analyse such systems. The first was optimised to track the connections between different nodes; the other, how individual nodes changed over time. (…)
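
A minimal sketch of that two-network division of labour, assuming a PyTorch-style model (the class name, layer sizes and pooling are illustrative guesses, not the architecture from the Physical Review X paper):

```python
import torch
import torch.nn as nn

class TippingPointPredictor(nn.Module):
    """Two cooperating networks, echoing the description above: one reads
    the wiring between nodes, the other reads how each node's state
    evolves over time. All sizes are illustrative."""

    def __init__(self, n_nodes: int, hidden: int = 64):
        super().__init__()
        # Network 1: mixes information across the adjacency structure.
        self.structural = nn.Sequential(
            nn.Linear(n_nodes, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Network 2: summarises each node's time series with a recurrent net.
        self.temporal = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        # Fuse both views into a single "transition imminent?" logit.
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, adjacency: torch.Tensor, series: torch.Tensor) -> torch.Tensor:
        # adjacency: (n_nodes, n_nodes); series: (n_nodes, t_steps)
        structure = self.structural(adjacency).mean(dim=0)    # pooled wiring view
        _, last_hidden = self.temporal(series.unsqueeze(-1))  # (1, n_nodes, hidden)
        dynamics = last_hidden.squeeze(0).mean(dim=0)         # pooled dynamics view
        return self.head(torch.cat([structure, dynamics]))    # tipping-point score

model = TippingPointPredictor(n_nodes=50)
score = model(torch.rand(50, 50), torch.rand(50, 100))  # toy inputs
```

The split matters because neither view suffices on its own: an approaching transition announces itself jointly in how the system is wired and in how its parts have been drifting over time.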

As with many AI systems, only the algorithm knows what specific features and patterns it identifies to make these predictions. Gang Yan at Tongji University in Shanghai, the paper’s lead author, says his team are now trying to discover exactly what they are. That could help improve the algorithm further, and allow better predictions of everything from infectious outbreaks to the next stockmarket crash. Just how important a moment this is, though, remains difficult to predict. ■

AI can predict tipping points before they happen (economist.com)


How does a champion’s brain work?

According to neurobiologist Jean-Philippe Lachaux, research director at Inserm and a specialist in attention, elite athletes have exceptional intellectual abilities (Madame Figaro, 27 juillet, libre accès)

Excerpt:

The head and the legs. Neurons and muscle. For a long time, intellectual and physical abilities were treated as separate. Completely wrong! In his new book, Dans le cerveau des champions*, Jean-Philippe Lachaux shows that athletes have brains like no other.

The champion’s first superpower is the ability to stay focused. “This hyperconcentration is that of the tennis player on the ball; that of ultra-trail athlete Julien Chorier, concentrated enough on his race not to be thrown by his rivals; that of pianist Frank Braley, who is ‘like a neutron star’ as he plays each note,” says Jean-Philippe Lachaux. This first level – mobilised in the rear zone of the prefrontal cortex – is accompanied by a global view of the whole. One must “keep in mind an idea of the whole pianistic, artistic or sporting game, and have in mind the succession of moves that will lead to victory. This double level of concentration is one of the champion’s strengths.” To reach that level, one must first master one’s technique. “The more a movement is automated, the more the brain is freed up to concentrate on everything else – the opponent’s game, adapting to the terrain, and so on. That is what happens in elite sport, and even in the theatre: an actor must know his lines by heart in order to then transcend them and imprint all his emotions on them,” explains the neurobiologist. Hence the importance of training. “I campaign for repetition and rote learning, in education too, as a sine qua non of success,” argues Jean-Philippe Lachaux. (…)

Le cerveau d’un champion, ça marche comment ? (lefigaro.fr)


Africa 2.0 : How to ensure Africa is not left behind by the AI revolution

Weak digital infrastructure is holding the continent back (The Economist, 26 juillet, article payant)  

Extraits :

More than two decades ago The Economist calculated that all of Africa had less international bandwidth than Brazil. Alas, until 2023 that was still true. Africa’s lack of connectivity is one reason its people could miss out on the benefits promised by artificial intelligence (AI).

For decades, experts have called for better broadband across Africa, citing the gains in productivity and employment. But the economic potential of AI, and its insatiable computing appetite, have renewed the case for urgent investment in the physical sinews needed to sustain a new digital revolution.

Fortunately, Africa has a home-grown model it can emulate. Its embrace of mobile phones in the early 2000s was a stunning feat of economic liberalisation. In most parts of Africa, businesses and consumers used to have to wait years to get a fixed-line phone. Nigeria, which has Africa’s biggest population (now more than 220m-strong), had just 450,000 phone lines in 1999, of which perhaps a third were on the blink. But when governments allowed privately owned mobile-phone companies to offer their services, they rapidly displaced the lethargic state-owned telcos.

It was a lesson in development done effectively, but frugally. (…)

Alas, this spectacular leapfrogging has downsides. The focus on mobile is one reason behind Africa’s underinvestment in fast fibre-optic connections. Although mobile phones have enabled mobile money and government services, such as digital ID, they can take economies only so far, especially when most parts of Africa have relatively slow 2G or 3G networks.

Fibre can carry more traffic, and faster. This allows seamless video calls, reduces dizzying lags in augmented-reality apps for, say, training surgeons and lets people interact with AI chatbots and other online services. Yet Africa is poorly served by subsea internet cables. Moreover, much of the internet bandwidth that lands on the coasts is wasted because of a lack of high-capacity overland cables to carry it to the interior. Worse, the continent does not have enough data centres—the brick-and-mortar sites where cloud computing happens. The Netherlands, population 18m, has more of these than all of Africa and its 1.5bn people. As a result, data must cross half the world and back, leading to painful delays. If Africans are to do movie animation, run sophisticated weather forecasts or train large language models with local content, they will need more computing capacity closer to home.
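
A rough sense of scale for those delays (my illustrative arithmetic, not the article’s): signals in optical fibre travel at about two-thirds the speed of light, roughly 200,000 km/s, so a request served from a data centre 10,000 km away faces a round trip of at least

$$ t_{\text{round trip}} \;=\; \frac{2d}{v} \;\approx\; \frac{2 \times 10\,000\ \text{km}}{200\,000\ \text{km/s}} \;=\; 100\ \text{ms} $$

before any queuing or processing, versus about 10 ms from a server 1,000 km away. Hence the case for hosting computation locally.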

To fix this, governments should learn from the mobile boom and cut red tape. Starlink, a satellite-internet firm, could be a stopgap, but regulators have blocked it in at least seven countries including South Africa. Heavy taxes on data access drive up costs for consumers, discouraging them from using it and firms from investing in providing it. Governments could do much to help simply by getting out of the way.

Development institutions, for their part, should be doing more to help finance this vital infrastructure because of its widespread benefits for growth and employment. The new digital revolution will create opportunities for Africa to catch up with rich countries. But if the continent lacks the right infrastructure, it will instead fall further behind. ■

How to ensure Africa is not left behind by the AI revolution (economist.com)


How tech has revolutionised war

Lasers, drone swarms, hypersonic missiles… These new weapons are about to upend the conduct of conflict (Le Point, 25 juillet, article payant)  

Extraits :

The art of war does not escape the acceleration of history. Where it took decades, even centuries, to invent a new metal alloy or change the shape of a shield in antiquity, today it takes barely six months for a drone to become obsolete on the battlefield. “A single invention that changes the game all by itself no longer exists, except perhaps the atomic weapon,” warns Léo Péria-Peigné, a researcher at the Observatory of Future Conflicts of the French Institute of International Relations (Ifri).

So farewell to the famous game changers, those weapons supposed to deliver a decisive and definitive advantage. “War remains a duel in which there is no miracle solution, only a combination of weapon systems, all of them necessary,” adds the author of Géopolitique de l’armement (Le Cavalier bleu). Nevertheless, in every domain, inventions are set to radically transform the conduct of war. Emblematic of this revolution, artificial intelligence (AI) “will permeate every dimension of our work,” says General Pierre Schill, chief of staff of the French army, who welcomes the creation last March of the defence ministry’s AI agency (Amiad).

“In ten to fifteen years, a third of the American army will be robotic and largely controlled by AI-equipped systems,” General Mark Milley, former chairman of the US Joint Chiefs of Staff under Presidents Trump and then Biden, went so far as to predict at a conference on July 15th 2024. In the United States as in China, thousands of engineers are working on algorithms devoted to intelligence analysis, automated surveillance of enemy movements, mission control for drone swarms, and predictive maintenance of the most precious assets such as aircraft, ships and tanks. Almost everything can be handled by an AI in a fraction of a second; it then falls to humans to keep up with the tempo set by the machine. (…)

Comment la tech a révolutionné la guerre (lepoint.fr)


Lights out: Are we prepared for the next global tech shutdown? – opinion

Every organization must know how to continue “business as usual” even in an emergency, even without computers. (The Jerusalem Post, 24 juillet, article payant) 

Extraits :

On Friday, the world woke up to the announcement of a global disruption affecting cross-sector operations. Hospitals, health clinics, and banks were affected, airlines grounded their planes, broadcasting companies couldn’t broadcast (Sky News went off the air), emergency numbers like 911 in the US were unreachable, and even here in Israel, MDA experienced numerous issues. 

This event had an impact in the US, Australia, and Europe. Critical infrastructure alongside many business operations came to a halt. In Israel, we immediately connected the event to warfare, to the UAV that arrived from Yemen and exploded in Tel Aviv, assuming that Iran was attacking in the cyber dimension.

What exactly happened? And how can one mistake impact the entire world?

Let’s begin with the facts: An American company based in Texas named CrowdStrike, which provides a cybersecurity protection system installed in many companies around the world, announced on Friday morning that there was an issue with the latest version of its system released to customers. The problem caused Windows, Microsoft’s operating system, not to load, displaying a blue screen. Consequently, all organizational systems installed and based on that operating system did not load either. In other words, the organization was paralyzed.

But the issue didn’t end there. During the repair actions distributed by the company, hackers “jumped on the bandwagon,” posing as company employees and distributing instructions that essentially meant inserting malicious code into the organization and deleting its databases. This was the second derivative of the event. (…)

It seems that the world has become much more global and technological than humans want to think about or believe. And yes, a keyboard mistake by one employee in one company can affect the entire world, impacting all our daily lives. This is the reality, and we should understand it quickly and start preparing through structured risk management processes for any event that may come. Every organization must know how to continue “business as usual” even in an emergency, even without computers.

Look at what happened in hospitals in Israel. Due to numerous cyberattacks experienced before the war, but mainly around the Gaza war, staff was trained to work manually, without computers. During last weekend’s event, they continued to operate more or less in a reasonable state. 

Therefore, prior preparation prevents chaos and confusion at the critical moment. The state must implement mandatory regulation on the business continuity of organizations for the functional continuity of the economy.

Organizations should be prepared for cyberattacks or shutdowns – The Jerusalem Post (jpost.com)


AI: “The real change will come when a machine is capable of suffering”

To conquer the world, artificial intelligence would have to devise complex, strategic plans, argues researcher Stuart Russell in reaction to OpenAI’s Strawberry project (Le Point, 24 juillet, article payant)  

Extraits :

A professor of computer science at Berkeley, Stuart Russell is one of the leading researchers in artificial intelligence and, together with the former Stanford figure Peter Norvig, the author of the reference text Artificial Intelligence: A Modern Approach. Born in 1962 in Portsmouth, England, he first trained in theoretical physics at Oxford before turning to computer science at Stanford, and co-founded the Berkeley Center for Human-Compatible Artificial Intelligence (CHAI). He is an innovator in probabilistic knowledge representation, reasoning and learning, notably as applied to worldwide seismic monitoring under the Comprehensive Nuclear-Test-Ban Treaty.

His latest book, Human Compatible, deals with the long-term impact of AI on humanity. A former holder of the Blaise Pascal chair at the CNRS computer-science laboratory of Sorbonne University, he is also a member of the Future of Life Institute, a think tank that reflects on the impact of artificial intelligence on society. (…)

Can artificial intelligence endow machines with consciousness?

Even if it did, it would change nothing. Even if my computer were conscious, nothing would change, and I would not even have any way of knowing. The odds are it would carry on executing the orders issued by the software.

The only thing that would change is if a machine were capable of suffering. It would then have moral rights, which would complicate everything. It would become criminal to switch it off, to be cruel to it, or to impose on it things it does not like. But we have absolutely no idea whether that will ever happen. (…)

IA : « Le vrai changement se produira lorsqu’une machine sera capable de souffrir » (lepoint.fr)


Records are tumbling at the Tour de France. New pharmaceuticals are widening the scope for cheating. Is there a connection?

Dopers today have a wide range of performance-enhancing substances at their disposal. These could explain the record marks at the Tour de France. An expert warns against hasty conclusions (NZZ, 17 juillet, article payant) 

Extraits :

(…) The professional doping hunters are concerned too. Mario Thevis, head of the Cologne doping-control laboratory, says that revolutions in equipment development and training methodology could explain the current performance gains in cycling. But he also says: “The possibilities for influencing performance through prohibited substances and methods have become more extensive.” Thevis explains that anti-doping institutions today have to work harder to expose the new avenues of manipulation. (…)

The current generation of riders, however, is faster than earlier ones, both at the top and across the field. Cycling teams stress that the progress follows from much better equipment, which can bring savings of up to 60 watts in total from aerodynamic properties alone. Optimised nutrition and better training are said to deliver further gains.

That said, there is now also a range of pharmaceutical preparations that boost performance in training and in competition. (…)

So there are many routes to illicit performance enhancement. But the doping hunters do not always have the ability to actually detect the banned substances. That circumstance must be borne in mind when analysing the record performances at this Tour de France. And not every outstanding performance is owed to doping. But a connection cannot be ruled out today either.

Tour de France: Die Rekorde der besten Radfahrer werfen Fragen auf (nzz.ch)


How Microsoft’s Satya Nadella Became Tech’s Steely Eyed A.I. Gambler

Microsoft’s all-in moment on artificial intelligence has been defined by billions in spending and a C.E.O. counting on technology with huge potential and huge risks (NYT, 15 juillet, tribune, quelques articles gratuits / sem.)

How Microsoft’s Satya Nadella Became Tech’s Steely Eyed A.I. Gambler – The New York Times (nytimes.com)


A new bionic leg can be controlled by the brain alone

Those using the prosthetic can walk as fast as those with intact lower limbs (The Economist, 5 juillet, article payant)

Extraits :

Before Hugh Herr became a professor at the Massachusetts Institute of Technology (MIT), he was a promising rock climber. But after being trapped in a blizzard during a climb at age 17, he lost both his legs below the knee to frostbite. Since then he has worked on creating prosthetic legs that would work and feel like the real thing. He appears to have succeeded.

In an article published on July 1st in Nature Medicine, Dr Herr and his team at MIT describe seven people with below-the-knee amputations who can now walk normally with the help of surgery and new robotic prostheses. For the first time, Dr Herr says, people have been able to walk with bionic legs—mechanical prostheses that mimic their biological counterparts—that can be fully controlled by their brains. (…)

Stanisa Raspopovic from ETH Zurich, who was also not involved, adds that Dr Herr’s “promising and beautiful” approach could be the end goal for below-the-knee amputations. But it remains to be seen if it could achieve similar results for people with amputations involving knees or upper-body limbs. Nor will everyone be able to get the AMIs (agonist-antagonist myoneural interfaces) they need. Decades after his amputation, Dr Herr has only enough muscle mass to construct an AMI for a robotic ankle, but not a whole robotic foot. He says he is considering it regardless. ■

A new bionic leg can be controlled by the brain alone (economist.com)


Neurosurgery : A new technique could analyse tumours mid-surgery

It would be fast enough to guide the hands of neurosurgeons (The Economist, 5 juillet, article payant)

Extraits :

Léo Wurpillot was ten years old when he learned he had a brain tumour. To determine its malignancy, sections of the tumour had to be surgically removed and analysed. Now 19, he recalls the anguish that came with the subsequent three-month wait for a diagnosis. The news was good, and today Mr Wurpillot is a thriving first-year biomedical student at Cardiff University. But the months-long post-operative anticipation remains hard for patients to bear. That wait may one day be a thing of the past.

On June 27th a group of brain surgeons, neuropathologists and computational biologists met at Queen’s Medical Centre in Nottingham to hear about an ultrafast sequencing project developed by researchers at Nottingham University and the local hospital. Their work will allow brain tumours to be classified from tissue samples in two hours or less. As brain surgeries typically take many hours, this would allow results to come in before the end of surgery and inform the operation itself. (…)

A new technique could analyse tumours mid-surgery (economist.com)


Eight years of delay for the Iter nuclear-fusion project (Le Figaro, 4 juillet, article payant)

The reactor, agreed by international treaty in 2006, is now expected to start up in 2033, with new cost overruns estimated at no less than €5 billion

Huit ans de retard pour le projet de fusion nucléaire Iter (lefigaro.fr)


A sequence of zeroes : What happened to the artificial-intelligence revolution? (The Economist, 3 juillet, article payant)

So far the technology has had almost no economic impact

Extraits :

(…) Almost everyone uses AI when they search for something on Google or pick a song on Spotify. But the incorporation of AI into business processes remains a niche pursuit. Official statistics agencies ask AI-related questions to businesses of all varieties, and in a wider range of industries than do Microsoft and LinkedIn. America’s Census Bureau produces the best estimates. It finds that only 5% of businesses have used AI in the past fortnight (see chart 1). Even in San Francisco many techies admit, when pressed, that they do not fork out $20 a month for the best version of ChatGPT. (…)

Concerns about data security, biased algorithms and hallucinations are slowing the roll-out. (…)

Indeed, there is no sign in the macroeconomic data of a surge in lay-offs. Kristalina Georgieva, head of the IMF, recently warned that AI would hit the labour market like “a tsunami”. For now, however, unemployment across the rich world is below 5%, close to an all-time low. The share of rich-world workers in a job is near an all-time high. Wage growth also remains strong, which is hard to square with an environment where workers’ bargaining power is supposedly fading. (…)

Some economists think AI will transform the global economy without booting people out of jobs. Collaboration with a virtual assistant may improve performance. A new paper by Anders Humlum of the University of Chicago and Emilie Vestergaard of Copenhagen University surveys 100,000 Danish workers. The average respondent estimates ChatGPT can halve time spent on about a third of work tasks, in theory a big boost to efficiency. (…)
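
Taken at face value (my arithmetic, not the paper’s), halving the time spent on a third of tasks leaves total working time at

$$ \tfrac{2}{3} + \tfrac{1}{3}\times\tfrac{1}{2} \;=\; \tfrac{5}{6} \;\approx\; 83\% $$

of its former level: a saving of about 17%, or roughly a 20% rise in output per hour, assuming tasks of equal weight.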

In time, businesses may wake up to the true potential of AI. Most technological waves, from the tractor and electricity to the personal computer, take a while to spread across the economy. Indeed, on the assumption that big tech’s AI revenues grow by an average of 20% a year, investors expect that almost all of big tech’s earnings from AI will arrive after 2032, according to our analysis. If an AI bonanza does eventually materialise, expect the share prices of the users of AI, not only the providers, to soar. But if worries about AI grow, big tech’s capex plans will start to look as extravagant as its valuations.

What happened to the artificial-intelligence revolution? (economist.com)


This biscuit saves lives: how a Swiss man is fighting hunger with a 14-gram baked good (NZZ, 2 juillet, article payant)

They are not a dessert but pure nourishment. And they are meant to ease the hardship in Madagascar. On the difficult fight for a piece of hope.

14 Gramm gegen Hunger: Wie ein Biskuit in Madagaskar Leben rettet (nzz.ch)


Viruses : A deadly new strain of mpox is raising alarm (The Economist, 29 juin, article payant)

Health officials warn it could soon spread beyond the Democratic Republic of Congo

Extraits :

(…) The situation in the region is complicated by war, displacement and food insecurity. Containment efforts are made harder still by the likelihood of asymptomatic cases, where individuals do not know they are infected but can nevertheless spread the virus to others. Dr Lang emphasises that this and the number of mild cases of the infection are the biggest unknowns in the current outbreak. Preventing this new mpox strain from becoming another global health crisis requires swift and co-ordinated action.

A deadly new strain of mpox is raising alarm (economist.com)


Political consultant Juri Schnöller: “Either we rethink democracy with AI, or it will slowly die” (NZZ, Interview, 19 juin, article payant)

While others fear artificial intelligence, Juri Schnöller argues for putting it to use as soon as possible. For him, AI is not the future but the present – “and whoever fails to recognise that is already planning their own political irrelevance”

Extraits :

The question of whether AI is coming was settled long ago. The decisive question is how it will be used. For now we are leaving the development of artificial intelligence to big profit-driven companies. Yet we need a form of artificial intelligence that also produces social value, in the interest of everyone, in pluralistic democracies. (…)

You nevertheless remain optimistic about our ability to cope with AI?

Yes. We tend to see ourselves as the end point of human history. Yet it is entirely possible that future generations will carry progress further. Perhaps they will also find new forms of democracy. (…)

The media are full of reports about AI’s harmful effects on democracy, for example deepfakes or large-scale Russian disinformation campaigns. You say AI is an opportunity for democracy. Are the media too pessimistic for your taste?

Yes, the media love end-of-the-world stories. I don’t think things are that bad. The fact is: either we rethink democracy with AI, or it will die a slow death. (…)

Politberater Juri Schnöller im Interview über Chancen von KI für die Demokratie (nzz.ch)


Artificial intelligence : Ray Kurzweil on how AI will transform the physical world (June 18)

Pay wall : The changes will be particularly profound in energy, manufacturing and medicine, says the futurist (The Economist, Guest Essay)

Excerpt :

(…) Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet?
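
That 0.01% figure checks out on the back of an envelope (my numbers, not the essay’s): the Earth intercepts roughly

$$ P \;=\; S\,\pi R^2 \;\approx\; 1361\ \tfrac{\text{W}}{\text{m}^2} \times \pi \left(6.4 \times 10^6\ \text{m}\right)^2 \;\approx\; 1.7 \times 10^{17}\ \text{W} $$

of sunlight, so 0.01% of it is about 17 TW, in the same range as humanity’s average rate of primary-energy consumption of roughly 18-19 TW.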

The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress.

By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free. (…)
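
A toy illustration of such in-silico sifting (the element list and scoring function are placeholders of my own; systems such as GNoME instead use graph neural networks trained on quantum-mechanical calculations):

```python
import itertools
import random

ELEMENTS = ["Li", "Na", "Mg", "Al", "Si", "S", "Mn", "Fe", "Co", "Ni"]

def predicted_stability(composition):
    """Stand-in for a learned surrogate model estimating how far a candidate
    compound sits above the convex hull of known stable phases
    (lower = more likely synthesisable). Random numbers here."""
    random.seed(hash(composition))
    return random.random()

# Enumerate candidate ternary compositions and keep the most promising;
# a real pipeline would then verify the shortlist with expensive
# quantum-mechanical simulation or lab synthesis.
candidates = itertools.combinations(ELEMENTS, 3)
scored = [(predicted_stability(c), c) for c in candidates]
for energy, comp in sorted(scored)[:5]:
    print(f"{'-'.join(comp)}: predicted hull distance {energy:.3f} eV/atom")
```

The economics come from the funnel shape: a cheap learned model discards billions of hopeless candidates so that costly verification is spent only on the few that look stable.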

Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. Once annual increases in life expectancy reach 12 months, we’ll achieve “longevity escape velocity”. For people diligent about healthy habits and using new therapies, I believe this will happen between 2029 and 2035—at which point ageing will not increase their annual chance of dying. And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available.
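
The threshold has a simple formalisation (my notation, not Kurzweil’s): if a person’s expected age at death E(t) rises with calendar time t while their age a(t) rises at exactly one year per year, then

$$ \frac{d}{dt}\bigl(E(t) - a(t)\bigr) \;=\; \frac{dE}{dt} - 1 \;\ge\; 0 \quad\Longleftrightarrow\quad \frac{dE}{dt} \;\ge\; 1, $$

so once medicine adds at least twelve months of life expectancy per year, expected remaining years stop shrinking as one ages.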

This is AI’s most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings. 

Ray Kurzweil on how AI will transform the physical world (economist.com)


“Why machines won’t save us from the labor shortage” (June 12)

Pay wall : Automation is seen as a beacon of hope in the fight against labor shortages. However, the hoped-for relief requires more than just the availability of machines (NZZ, Opinion)

Warum der technische Fortschritt Arbeit nicht ersetzt (nzz.ch)


“Like people, elephants call each other by name” (June 11)

Pay wall : Trunk calls : And anthropoexceptionalism takes another tumble (The Economist)

Excerpt :

As with dolphin whistles, it has long been known that elephant rumbles are individually recognisable. One thing to establish, therefore, was whether, when communicating with another elephant, the caller was mimicking the recipient. The software suggested this was not the case. It was, however, the case that calls were receiver-specific. This showed up in several ways. First, for a given caller, the receiver could be predicted from the sonic spectrum of its rumble. Second, rumbles directed by a particular caller to a particular recipient were more similar to each other than those made by that caller to other recipients. Third, recipients responded more strongly to playbacks of calls originally directed towards them than to those originally intended for another animal.
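
A minimal sketch of the first analysis described above, predicting a rumble’s intended receiver from its sonic spectrum, assuming generic spectral features and scikit-learn (the study’s actual features and models are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: one row of spectral features per recorded rumble,
# labelled with the identity of the elephant the call was directed to.
n_calls, n_features = 600, 40
X = rng.normal(size=(n_calls, n_features))     # e.g. band energies per call
receivers = rng.integers(0, 12, size=n_calls)  # 12 candidate recipients

# If receiver identity is predictable above chance from the spectrum alone,
# calls are receiver-specific -- the "name" hypothesis.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, receivers, cv=5)
print(f"accuracy {scores.mean():.2f} vs chance {1/12:.2f}")
```

On the random data used here the score hovers at chance, as it should; above-chance accuracy on real recordings is what would indicate receiver-specific calls.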

On top of this, rumbles directed by different callers towards the same recipient were more similar to each other than to other calls within the data set, suggesting that everyone uses the same name for a given recipient. All of which adds to the evidence that elephant intelligence does indeed parallel the human sort in many ways—and makes their slaughter by humans, which threatens many of their populations, even more horrifying.

Like people, elephants call each other by name (economist.com)


“The war for AI talent is heating up” (June 9)

Pay wall : Retention is all you need : Big tech firms scramble to fill gaps as brain drain sets in (The Economist)

The war for AI talent is heating up (economist.com)


“Fourth time lucky : Elon Musk’s Starship makes a test flight without exploding” (June 8)

Pay wall : Crucially, the upper stage of the giant rocket survived atmospheric re-entry (The Economist)

Elon Musk’s Starship makes a test flight without exploding (economist.com)


Smallest known great ape, which lived 11m years ago, found in Germany (June 8)

Free access : Buronius manfredschmidi is estimated to have weighed just 10kg and was about the size of a human toddler (The Guardian)

Smallest known great ape, which lived 11m years ago, found in Germany | Fossils | The Guardian


“Robots are suddenly getting cleverer. What’s changed?” (June 7)

Pay wall : Robotics : There is more to AI than ChatGPT (The Economist)

Robots are suddenly getting cleverer. What’s changed? (economist.com)


“SpaceX’s monumental Starship makes a spectacular 4th test flight full of promise” (June 7)

Pay wall : The giant rocket, designed to be fully reusable, successfully splashed down both its first stage and the ship itself after its descent from orbit (Le Figaro)

Le monumental Starship de SpaceX effectue un 4e vol d’essai spectaculaire et plein de promesses (lefigaro.fr)


Thème 15 Articles d’avant le 7 juin 2024