Links
- Author: Jonathan Reades, Yingjie Hu, Emmanouil Tranos, Elizabeth Delmelle
 - Category: pdf
 - Document Tags: paper
 - URL: www.nature.com/articles/…
 - Author: Arvind Narayanan, Sayash Kapoor
 - Category: article
 - URL: www.normaltech.ai/p/ai-as-n…
 - Author: Frank Chimero
 - Category: article
 - URL: frankchimero.com/blog/2025…
 - Author: Gaël Varoquaux
 - Category: article
 - URL: https://gael-varoquaux.info/personnal/a-national-recognition-but-science-and-open-source-are-bitter-victories.html
 - Author: JA Westenberg
 - Category: rss
 - URL: www.joanwestenberg.com/p/why-you…
 - Author: Jarrett Walker
 - Category: article
 - URL: www.bloomberg.com/news/arti…
 - Author: Krzysztof Janowicz, Gengchen Mai, Weiming Huang, Rui Zhu, Ni Lao & Ling Cai
 - Category: article
 - Document Tags: paper
 - URL: www.tandfonline.com/doi/full/…
 - Author: Nathan Zimmerman
 - Category: article
 - URL: element84.com/machine-l…
 - Author: Naomi Klein
 - Category: article
 - URL: www.theguardian.com/us-news/n…
 - Author: turing.ac.uk
 - Category: article
 - URL: www.turing.ac.uk/blog/why-…
- Author: Thomas Piketty
 - Category: article
 - URL: www.lemonde.fr/blog/pike…
 - Author: Ethan Mollick
 - Category: rss
 - URL: www.oneusefulthing.org/p/against…
 - Author: Aravind
 - Category: rss
 - URL: newsletter.terrawatchspace.com/last-week…
 
🔗 The City As Text
Neat paper by a great gang.
Metadata
Highlights
This Review seeks to ground this opportunity in an introduction to the kinds of text and tools available to researchers, providing examples of the state of the art in urban research while contextualizing these applications in the broader framework within which this interest in textual data evolved.
🔗 AI as Normal Technology
Finally got around to reading (well, listening to) this one, after seeing it referred to by seemingly everyone whose views on AI I respect. Well worth the time. Particularly the first part (the regulatory and risk sections leave me a bit colder); it provides such a useful framing to think about AI as a rather than the technology.
Metadata
Highlights
AI as normal technology is a worldview that stands in contrast to the worldview of AI as impending superintelligence.
[…] we assume that, despite the obvious differences between AI and past technologies, they are sufficiently similar that we should expect well-established patterns, such as diffusion theory to apply to AI, in the absence of specific evidence to the contrary.
🔗 Beyond the Machine
Metadata
Highlights
Thinking of AI as an instrument recenters the focus on practice.
In other words, instruments can surprise you with what they offer, but they are not automatic. In the end, they require a touch. You use a tool, but you play an instrument.
We may not be in AI winter, but I am hoping for an AI autumn. Autumn is amazing; the air cools, the mania of summer dissipates, things slow down.
Here’s Eno again from earlier this year: “I can see from the little acquaintance that I have with using AI programs to make music, that what you spend nearly all your time doing is trying to stop the system becoming mind numbingly mediocre. You really feel the pull of the averaging effect of AI, given that what you are receiving is a kind of averaged out distillation of stuff from a lot of different sources.” An average email or line of code is fine. Average art isn’t.
🔗 A national recognition, but science and open source are bitter victories
scikit-learn has always been an example, in many ways, of how to do things well (APIs, open source, community). One more reason to add to the list.
Metadata
Highlights
And two decades later, we have won. Open source is everywhere. Statistical algorithms raise billions of dollars. But what good will this free software, these algorithms, have been if an Elon Musk can buy their vector of action and transform it into a fascist machine? This victory is bitter.
And it is these battles that today’s medal rewards. I have always been wary of individual distinctions. Success is rarely the work of a single person. We need more collective effort and fewer heroes, less ego.
🎧 How Silicon Valley enshittified the internet
Man, Cory Doctorow is on fire on this one.
Decoder: How Silicon Valley enshittified the internet, Oct 30, 2025
… “And so, you know, one day Mark Zuckerberg arises from his sarcophagus and says, harken to me, brothers and sisters, I know I told you that the future was arguing with your most racist uncle using this text interface, but actually, I’m going to transform you and everyone you love into a legless, sexless, low-polygon, heavily surveilled cartoon character so I can imprison you in a virtual world I stole from a 25-year-old cyberpunk novel. I call it the metaverse, right? And that’s end stage enshittification, the giant pile of shit.”
🔗 Why You Should Write Every Day (Even if You’re Not a Writer)
Metadata
Highlights
When you write, you can’t handwave. You can’t bluster and obfuscate your own ideas into oblivion. When you’re alone with a blank page, there’s nobody to rescue you with a charitable interpretation.
🔗 Should We Let Public Transit Die?
No.
Metadata
Highlights
What does it take to replace transit? You have to match the fare, capacity and travel time of the services being provided now. Most alternatives are offering only one or two of these.
two big types of benefit that justify their subsidies.
First, dense cities do not have room for everyone to drive alone in a car, so their functionality depends on large numbers of people traveling in ways that use space efficiently.
Second, all communities have people who cannot, don’t want to, or shouldn’t be driving.
Most of these inventions seem to have a common theme: They will protect us from the unwanted company of strangers.
The biggest danger is that we will let transit die long before any technology could be ready to replace it.
in the US we have constructed our transportation funding streams to make transit’s costs visible, while the costs of car dependence are mostly concealed.
the future lies in making these decisions as locally as possible.
🔗 GeoFM: how will geo-foundation models reshape spatial data science and GeoAI?
This was a much more insightful read than I anticipated. The first part is a fantastic introduction to the state of the art in foundation models today, in particular in the Geo space. The second is more prospective, and thus a little more speculative. Either way, very good food for thought.
Metadata
Highlights
Given these three motivating factors, GeoFM can be defined as follows: Geo-foundational models are foundation models specifically trained on heterogeneous spatiotemporal data, capable of reliably performing advanced spatiotemporal reasoning, and designed to incorporate spatial, temporal, and other contextual factors into their output to support a wide range of (geo)spatial downstream tasks in geography and neighboring disciplines that benefit from a spatial or geographic perspective.
Just as LLMs encode the syntax, semantics, and pragmatics of human language, GeoFM could encode the language of space, i.e., the place-agnostic properties that define geography – spatial dependence and heterogeneity (Anselin, 1988) and its related concepts such as scale, adjacency, spatial and temporal scopes, and so on.
why do we need geo-foundation models (GeoFM) at all, and what exactly are they or will they be? First, foundation models can only generalize within the scope of their training data.
Second, many geospatial tasks are highly specific
Third, geography is inherently local/regional or contextual.
GeoAI advances along two major dimensions: (1) it applies novel methods and technologies from the broader AI and machine learning community to geographic and geospatial research questions and (2) it feeds its own, novel theoretical and methodological contributions back to the broader AI community
location embeddings can be trained separately and concatenated with the embeddings representing learned building footprints, land classes, and so on (Mac Aodha et al. 2019; Yan et al. 2019; Mai et al. 2020).
Note: This is an idea I’ve had for a while; it would be good to check some of these references to see how they approach it.
Although modern foundation models were not yet on the horizon in the early 2010s, it was already clear that the era of custom, single-purpose models was slowly giving way to workflows developed around reuse and transferability. This shift raises a key question for GeoAI research: how can we distinguish progress driven by GeoAI-specific innovation from improvements mostly gained through the application of transfer learning (and related methods) from general-purpose models?
The successful combination of few-shot, prompt engineering, and transfer-learning methods on top of powerful general-purpose models raises the old question again: is spatial really special?
we can roughly classify the existing GeoFM-related research into the following categories: 1) adapting existing FMs on geospatial tasks via prompt engineering and task-specific fine-tuning; 2) developing advanced LLM agent frameworks for geospatial tasks; and 3) developing novel geo-foundation models via geo-aware model training and fine-tuning.
we further classify the current GeoFMs in four categories based on the data modalities they support and their application scenarios: geospatial language foundation models, geospatial vision foundation models, geospatial graph foundation models, and geospatial multimodal foundation models.
three major ways of realizing GeoFM or using generalist FM
For now, it is unclear whether one of the paths is preferred to approach the vision of generally capable GeoFM so that the research community could consolidate our efforts, or if this is task-dependent, and, hence, varying paths should be taken for different types of tasks.
Designing architectures that can jointly process such heterogeneous data, scale to large datasets, and accomplish effective cross-modality alignment remains a major open challenge.
A fundamental question is whether those subjective and complex human experiences should become part of GeoFM.
this raises concerns about GeoFM misrepresenting geography, be it by introducing bias or by learning representations that do not align with those of groups or societies.
spatial priors should ideally be incorporated into the pre-training of GeoFM […] those priors change across scale, resolution, modality, and so forth, and it is presently not clear how to best handle those. For instance, should they be explicitly engineered or implicitly learned?
Without co-evolving our data and benchmarks, the true potential of GeoFMs will remain constrained.
most present work on AI alignment does not account for regional, e.g., cultural, differences. However, as geographers, we know that the aforementioned societal goals, values, and norms vary greatly across geographic space and time – without any being inherently superior to others.
skills that help us better interact with such agents, critically think about their outputs, align AI with societal goals, and so on, will increase in importance.
🔗 Mission Critical -- Satellite Data is a Distinct Modality in Machine Learning
Some of the arguments made here would ring obvious to traditional spatial analysts, others to traditional remote sensers, and many, I suppose, might seem “bread and butter” to the ML crowd. But it is not each individual argument that is the point here; it is putting them together, now, in a contemporary and fresh framework, that makes this paper worth a read. Well worth it indeed.
Metadata
Authors: Esther Rolf, Konstantin Klemmer, Caleb Robinson, Hannah Kerner
Category: article
URL: arxiv.org/abs/2402…
🔗 Why We’re Talking About a Centralized Vector Embeddings Catalog Now
The white paper mentioned below is well worth a read. It puts in much more eloquent words many of the reasons why I’m very excited about the new generation of satellite foundation models and the potential embeddings have to make satellite data more useful, usable, and used! A lot of food for thought and great argumentation for why we need to think of satellite images more and more as abstract tables than as images of pixels.
Metadata
Highlights
our team published a detailed white paper in which we make the case for how Earth Observation (EO) data providers such as NASA can dramatically improve access to their data by creating a centralized vector embeddings catalog
🔗 The rise of end times fascism
Metadata
Highlights
The governing ideology of the far right in our age of escalating disasters has become a monstrous, supremacist survivalism.
Three recent material developments have accelerated end times fascism’s apocalyptic appeal. The first is the climate crisis. While some high-profile figures might still publicly deny or minimize the threat, global elites, whose ocean-front properties and datacenters are intensely vulnerable to rising temperatures and sea levels, are well-versed in the ramifying perils of an ever-heating world. The second is Covid-19: epidemiological models had long predicted the possibility of a pandemic devastating our globally networked world; the actual arrival of one was taken by many powerful people as a sign that we have officially arrived at what US military analysts forecasted as “the Age of Consequences”. No more predictions, it’s going down. The third factor is the rapid advancement and adoption of AI, a set of technologies that have long been associated with sci-fi terrors about machines turning on their makers with ruthless efficiency – fears expressed most forcefully by the same people who are developing these technologies. All of these existential crises are layered on top of escalating tensions between nuclear-armed powers.
The startup country contingent is clearly foreseeing a future marked by shocks, scarcity and collapse.
the most powerful people in the world are preparing for the end of the world, an end they themselves are frenetically accelerating.
contemporary far-right movements lack any credible vision for a hopeful future. The average voter is offered only remixes of a bygone past, alongside the sadistic pleasures of dominance over an ever-expanding assemblage of dehumanized others.
But it also opens up powerful possibilities for resistance. To bet against the future on this scale – to bank on your bunker – is to betray, on the most basic level, our duties to one another, to the children we love, and to every other life form with whom we share a planetary home.
bunkered nation lies at the heart of the Maga agenda, and of end times fascism.
We should think of this less as old-school imperialism than super-sized prepping, at the level of the national state.
End times fascism is a darkly festive fatalism – a final refuge for those who find it easier to celebrate destruction than imagine living without supremacy.
🔗 Why we still need small language models - even in the age of frontier AI
This is a cool example of how less can be more. The article is also surprisingly informative for a post of this kind.
Metadata
Highlights
In a six week sprint, we set out to see how far a small, open-weight language model could be pushed using lightweight tools and without massive infrastructure. By combining retrieval-augmented generation, reasoning trace fine-tuning, and budget forcing at inference time, our 3B model achieved near-frontier reasoning performance on real-world health queries – and is small enough to run locally on a laptop. We’re open-sourcing everything, and we believe this approach has enormous potential for public sector and compute-constrained environments.
🔗 Trump, national-capitalism at bay
Metadata
Highlights
Let’s be clear: Trump’s national capitalism likes to flaunt its strength, but it is actually fragile and at bay. Europe has the means to confront it, provided it regains confidence in itself, forges new alliances and calmly analyzes the strengths and limitations of this ideological framework.
This is the first weakness of national capitalism: when powers reach a boiling point, they end up devouring each other. The second is that the dream of prosperity promised by national capitalism always ends up disappointing public expectations because it is, in reality, built on exacerbated social hierarchies and an ever-growing concentration of wealth.
When measured in terms of purchasing power parity, the reality is very different: the productivity gap with Europe disappears entirely.
The reality is that the US is on the verge of losing control of the world, and Trump’s rhetoric won’t change that.
In the face of Trumpism, Europe must, first and foremost, remain true to itself.
Europe must heed the calls from the Global South for economic, fiscal and climate justice.
🔗 Against “Brain Damage”
The final quote, that “[o]ur fear of AI ‘damaging our brains’ is actually a fear of our own laziness”, has a lot of power, although it also overlooks the many “nudges” technology creates that push us to act lazily.
Metadata
Highlights
ways of using AI to help, rather than hurt, your mind.
If you outsource your thinking to the AI instead of doing the work yourself, then you will miss the opportunity to learn.
the harm happens even when students have good intentions.
we have increasing evidence that, when used with teacher guidance and good prompting based on sound pedagogical principles, AI can greatly improve learning outcomes.
Moving away from asking the AI to help you with homework to helping you learn as a tutor is a useful step.
find more in the Wharton Generative AI Lab prompt library.
while AI is more creative than most individuals, it lacks the diversity that comes from multiple perspectives.
The deeper risk is that AI can actually hurt your ability to think creatively by anchoring you to its suggestions. This happens in two ways.
the anchoring effect. Once you see AI’s ideas, it becomes much harder to think outside those boundaries.
Second, as the MIT study showed, people don’t feel as much ownership in AI generated ideas, meaning that you will disengage
how do you get AI’s benefits without the brain drain? The key is sequencing. Always generate your own ideas before turning to AI.
Every post I write, like this one, I do a full draft entirely without any AI
Only when it is done do I turn to a number of AI models and give it the completed post and ask it to act as a reader: Was this unclear at any point, and how, specifically, could I clarify the text for a non-technical reader? And sometimes like an editor: I don’t like how this section ends, can you give me 20 versions of endings that might fit better.
there is the option to have it help make us better. One interesting example is using AI as a facilitator.
If you want to keep the human part of your work: think first, write first, meet first.
Our fear of AI “damaging our brains” is actually a fear of our own laziness.
🔗 Last Week in Earth Observation: May 26, 2025
Aravind has a really interesting take on why the hyperscalers are moving into the weather model space.
Metadata
Highlights
I think the real story here is not about weather accuracy, and it is definitely not about replacing ECMWF or NOAA in the future.
This is really about weather becoming part of the cloud infrastructure, about turning forecasting into a cloud-native service that’s deeply embedded within their compute ecosystems.
For sectors such as energy, insurance, agriculture, logistics and finance, weather is not just data, it is a key decision driver. If you can offer native, on-demand, customisable forecasts, users will start building their products and workflows around you: models, simulations, dashboards, alerts and triggers, aka a sticky service layer.
TL;DR: I think Google and Microsoft are trying to make weather foundational by turning it into a programmable infrastructure layer, that powers the horizontal layer of weather intelligence and climate services.