Last week, I contributed to the OECD’s 48th session of the Working Party on Territorial Indicators. Working Parties are a mostly internal affair where the different members of the OECD meet around specific topics to discuss progress, share experiences and coordinate. This was my second one, and they are a very interesting experience for an academic: much of what is discussed is applied research in various forms, but the format and delivery are rather different from those of a normal academic meeting.

I contributed a ten-minute talk on geospatial AI for land use, and I thought I’d summarise here what I presented. I labelled it an “opinionated version of our recent report on the topic”, and I stand by that description. I tried to give an overview of the opportunities I see for (geospatial) AI to support the understanding, modelling and management of land use. Much of what I said is in the report (possibly in a slightly more eloquent and formal form), but I also mixed in some of the lessons we’ve learnt from our ongoing EuroFab project with the OECD, Charles University and ESA.

To me, these are the three areas where (geospatial) AI intersects with land use:

  • More data
  • Better modelling
  • More intuitive interfaces

Start with data. Much of what modern AI/ML does is make data that traditionally sat beyond the realm of computation computable. Think of text, audio, or images. These are all sources of (unstructured) information that land use experts have only been able to access in a qualitative, manual way. Modern techniques such as foundation models for both text (LLMs) and imagery make it possible to treat these sources as quantitative data. And this is a big deal. Think of the volume of imagery coming from satellites, or how much information about, say, the planning system is locked in PDFs and Word documents. Making these sources as accessible as a Census table will provide new, more timely bases for evidence.
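To make that concrete, here is a toy sketch (not from the report) of what “making text computable” looks like in practice: a few lines of Python that turn planning-document snippets into numeric vectors with an off-the-shelf embedding model. The model name and snippets are illustrative placeholders, not anything we actually used.

```python
# Minimal sketch: turning planning-document text into numeric vectors
# with an off-the-shelf embedding model (sentence-transformers).
# The model name and example snippets are illustrative placeholders.
from sentence_transformers import SentenceTransformer

snippets = [
    "The proposed development comprises 120 residential units on brownfield land.",
    "Change of use from agricultural land to light industrial.",
    "Retention of the existing green belt boundary in the local plan.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(snippets)  # shape: (3, 384)

# Once text is a matrix of numbers, it is as computable as a Census table:
# you can cluster applications, search by semantic similarity, or use the
# vectors as features in a downstream land use model.
print(embeddings.shape)
```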

Then there’s modelling. Data in themselves are not information, insight, or knowledge; the bridge between them is often modelling. Modelling in land use is of course not new, but recent advances in AI/ML are giving it a notable boost. We can both build better models with traditional data (e.g., tree-based algorithms such as random forests and XGBoost have revolutionised prediction with structured data) and extend such models to natively accommodate the sources of data I mention above. In land use, this is also a pretty big advance. Think of the fully fledged industry of land use regression models, where different outcomes (e.g., air quality, land use change) are explained as a function of land use characteristics. How that function is modelled, and how land use is characterised and measured, are poised to change radically in the coming years.
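As an illustration of the land use regression pattern with a tree-based model, here is a minimal sketch on fully synthetic data: an outcome (air quality, here an invented NO2-style index) explained as a function of land use characteristics. The feature names and coefficients are made up for the example.

```python
# Minimal sketch of a land use regression with a tree-based model:
# an outcome (here, a synthetic air quality measure) explained as a
# function of land use characteristics. All data are fabricated.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000
X = pd.DataFrame({
    "pct_residential": rng.uniform(0, 1, n),
    "pct_industrial": rng.uniform(0, 1, n),
    "road_density": rng.uniform(0, 10, n),
    "green_space_share": rng.uniform(0, 1, n),
})
# Synthetic outcome: pollution rises with industry and road density,
# falls with green space, plus noise.
y = (20 + 15 * X["pct_industrial"] + 2 * X["road_density"]
     - 10 * X["green_space_share"] + rng.normal(0, 2, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```

The point of the boost is in that function: a random forest learns non-linearities and interactions between land use characteristics that a classic linear regression would force you to specify by hand.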

And then there are interfaces. How much impact the results of the modelling exercises above can have is mediated by how they are presented and made available to the public. It’s 2025, so I don’t have to spend many words on how modern (Gen)AI is revolutionising human-computer interfaces. An example of how this could pan out is the chatbot interface we built on top of our DemoLand tool for exploring urban land use scenarios, which features in Section 2 of the report. This is of course not an area specific to land use, but I think land use is particularly well placed to benefit from this trend. The outputs of land use modelling are usually sophisticated and non-trivial to understand; at the same time, if we want them to have an effect, they need to be understood by non-technical audiences such as policy makers, practitioners and the general public.
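For flavour, here is one way such a natural-language layer could sit on top of model outputs. To be clear, this is not how DemoLand’s chatbot is built; it is a minimal sketch of the pattern (structured results in, plain-language explanation out), using an OpenAI-style API as an example and invented scenario numbers.

```python
# Minimal sketch of a natural-language layer over model outputs.
# NOT DemoLand's actual implementation; just illustrates the pattern of
# passing structured results plus a question to an LLM.
# Requires the openai package and an API key in the environment.
from openai import OpenAI

client = OpenAI()

scenario_results = {  # invented numbers for illustration
    "scenario": "densify inner suburbs",
    "air_quality_change": -0.12,
    "house_price_change_pct": 3.4,
    "job_accessibility_change_pct": 5.1,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Explain land use scenario results to a non-technical "
                    "policy audience in two or three sentences."},
        {"role": "user",
         "content": f"Results: {scenario_results}. What do these mean?"},
    ],
)
print(response.choices[0].message.content)
```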

And that, pretty much, covered my ten minutes! It was fun to think about how to present results from our research to folks who are not academics but deeply care about the topic. This year I had to participate online, but I hope I have future opportunities to attend in person and meet participants over coffee breaks (or even during the timetabled cocktail!).