Goethe's take on...
AI-supported interaction to encourage personal reflection on current issues
What would Goethe’s take on issues like climate change be? What would he think about Trump? In the initial research phase, I started asking an AI those questions to get a quick introduction to Goethe’s thinking. Much to my surprise, this interaction with an AI ‘personal tutor’ inspired further questions in me, drawing me deeper and deeper into the subject. While the typical interaction in museums often feels passive and lacks personal relevance for visitors, this shift from initial distance to genuine curiosity through asking my own questions is exactly the kind of experience I wish for every visitor.
‘Goethe's Take On…’ aims to deliver a more accessible pedagogical experience by encouraging visitors to enter into their own discourse with Goethe’s thinking and prompting them to reflect on current issues. AI makes it possible to process large quantities of archive objects in a short amount of time and curate them for museum visitors. The artificial intelligence acts as a mediator and an expert ‘scholar’ of Goethe, and responds to visitors’ questions by connecting them to published sources.
Users hand-write their questions on cards or select a question from an archive of previously asked questions. In this way, the feeling of personal dialogue is activated haptically. The archive-like structure stores past interactions as a reference to ensure that everyone can benefit from previous explorations. The question is then scanned and answered by the specially programmed AI. The interaction design is intended to communicate transparency and openness, emphasising that the AI is just a machine and can make mistakes.






00 Starting Point
The starting point of the project was a visit to the Goethe National Museum in Weimar. We got a quick tour through the museum, during which they showed us the interactions they had already implemented. They also talked about their large archive, which they are trying to make more accessible.
What struck me was that most of the interactions in the museum felt passive and, most importantly, lacked personal relevance.
01 Initial Research
I started by getting an overview of the content and online tools the Klassik Stiftung Weimar already offers and explored them. I found that the archive is very large but not really accessible. In the archive I came across diary entries and letters from Goethe that sounded very interesting. That's when I realized that Goethe thought and wrote about topics that are still relevant today.
Then I tried to map out the problems I saw and wanted to address with my project: no opportunities for exchange, too little reflection, and no connection or context to the present.
During that initial idea mapping, I wondered what Goethe would think of current issues like climate change, so I had the idea to ask ChatGPT about it. That was a big turning point for the project, because this interaction with AI inspired further questions in me, drawing me deeper and deeper into the subject. I realized that this shift from initial distance to genuine curiosity through asking my own questions is exactly the kind of experience I wish for every visitor.
From this, I formulated a ‘mantra’ that guided me through the entire design process: From a distance to the subject, curiosity is aroused through personal discovery rather than prescribed information.
02 Overall Concept
So at that point I had picked a route and set myself some goals to keep me on track:
- A low-threshold interaction that invites personal engagement
- Relating Goethe's thinking to individual questions and new perspectives
- Making huge amounts of information from the archive more accessible
Once my goals were set, I started to go wild with mind mapping and brainstorming ideas. How can the archive provide data? Which problems are addressed? Can I (personally) work with AI, or does my critique of it hold me back? And then: how would AI be implemented? What do the input and output have to look like? What's my target group? You can see my mapping in the screenshot below.

03 Design Concept
So the overall concept was settled, and from there I had to figure out how to execute my vision.
After presenting the concept, I immediately started testing interactions in simple and rough ways (rapid prototyping). To do this, I created four completely different tests and tried them out with people: they ranged from completely digital to completely analogue, and from predefined choices and suggestions to completely free input. I tried to simulate very different situations to analyze what works, what doesn't, and how users feel during the test.
My main takeaways from these tests:
- Basically all of the texts generated by the AI are too long to actually be read.
- Handwritten questions work better than digital input, because writing activates you directly.
- A certain amount of guidance is needed to help visitors formulate a question; the questions should not become too complicated.
- Asking your own question (instead of choosing one) creates a personal connection and requires a bit of your own thinking and reflection on what actually interests you.
- It would be helpful to be able to ask follow-up questions after an answer, or even to have follow-up questions suggested.
The screenshot below gives an insight into what the tests and the analysis looked like.

But there are a hundred ways to implement these findings in a design, right? Yes… so to narrow down the direction I should take, I generated ideas and organised them into three sliders. Then, based on the results of the initial interaction tests and my own interest in implementation, I made the following decisions and used them as a guide for the next steps:
- Analogue input
- Digital output (e.g. AI response visible on screen)
- A more public experience (meaning you are not doing this on your own device, such as an iPad, but rather in a public setting where others can see it)
- Archive of past questions/answers
- Active participation with some sort of assistance
You can see the whole mapping of ideas in the following screenshot.

All these findings, but aren't there still lots of possibilities? Yes, that's why I tried to sketch out various ideas with different vibes (functional, artistic, kitschy, immersive, etc.) and to work with metaphors (see pictures below).

04 Design Development
‘Training’ the AI
Then it was time to put it into practice. I started by “training” the AI by writing different system prompts.
In this prompt, you define how the AI should respond: what tone it should use, how the answer should be structured, and what it should contain at a minimum and a maximum, so that it includes enough information without becoming too long to actually be read by users.
I didn't want museum visitors to ‘talk to Goethe’, but rather to receive a response based on Goethe's sources. However, I also wanted the interaction to stand out from the experience you can have at home with ChatGPT. That's why I created an arrogant and snappy, but clearly non-human, personality that is well versed in Goethe. The AI responds with ‘its opinion’ and specific source references, thus clearly communicating that everything is only an interpretation.
It was also important that the AI output the answer in a way that the Python and HTML code could read later on. That's why I trained the AI to output a finished JSON file with named IDs. This allows the HTML to access and display the individual sections of the answer.
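As a rough illustration, a setup along these lines might look like the sketch below, assuming an OpenAI-style chat API; the prompt wording, model name and JSON field names are placeholders, not the actual ones used in the project.

```python
# A minimal sketch of the prompting setup, assuming an OpenAI-style chat API.
# Prompt wording, model name and JSON field names are illustrative placeholders.
import json
from openai import OpenAI

SYSTEM_PROMPT = """You are a snappy, slightly arrogant scholar of Goethe (not Goethe himself).
Answer visitor questions on the basis of Goethe's published texts, diaries and letters.
Always reply with a JSON object containing exactly these fields:
  "short_answer": one or two sentences,
  "long_answer": a longer answer of at most about 120 words,
  "quotes": one or two short quotes with their sources,
  "disclaimer": a note that this is an AI interpretation, not Goethe's actual opinion.
"""

client = OpenAI()  # reads the API key from the environment

def ask_goethe(question: str) -> dict:
    """Send a visitor question to the model and return the parsed JSON answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        response_format={"type": "json_object"},  # forces valid JSON output
    )
    return json.loads(response.choices[0].message.content)
```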
The following image shows how the system prompt is structured and what the response looks like when output as a JSON file.

Designing the ‘Workstation’
To determine how the interaction should look, I divided the task into three parts: the design of the workstation at which the interaction takes place, the design of the input and output, and the way the archive of previously asked questions should function.
When designing the workstation, my goal was to clearly show who you are currently communicating with. After all, AI is a machine; it makes mistakes and is not perfect. It was important to me to work with AI in a reflective manner and to communicate this effectively. So the design should look technical and radiate openness and transparency. On this basis, I created a mood board, which is how I arrived at a modular metal frame. The technology and cables remain partially visible to show this transparency.
It was clear that users need guidance on how to use the workstation, or rather how to interact with it. So I used the ‘table’ surfaces of the workstation to communicate: I laser-engraved a marking showing where the index card has to be placed so that it can be scanned, and labelled the question archive with a note saying that you can write your own question on a card there.
Designing Output & Input
I tried out various things when designing the output. I knew that I wanted the output to be digital and the input analogue in order to bridge the gap between Goethe's text archive and the technical, futuristic vibe of AI. So I tested everything from projectors to screens and tried out different layouts, eventually settling on three screens. Then I tested different layouts for screen and text. Keeping in mind the results of the first rapid prototyping tests, I landed on a design with a short answer, a longer answer with quotes from the archive, and a note that the answer was generated by AI and that there is no guarantee of its accuracy. Once I knew how the output should look in terms of graphics and content, I wrote HTML/CSS code to implement the planned design.
For the input, I wanted to design a question archive that lowers the inhibition threshold for interaction by allowing users to ask questions that have already been asked instead of their own, or to be inspired by them. I also created a mood board for this, ranging from index cards to hanging files. Index cards are ideal because they are handy and, for many people, a familiar format that stands for reflection and learning.








Python Code
Then it slowly became technical: I had to think about what the interaction process should look like in order to know how to program the code that runs everything. Users come to interact, need a short briefing, then search through the archive or take a blank card and write down a question. The question then has to be recognised and transcribed somehow, the AI answers it, the answer is displayed on the screens, and the card is removed and filed in the question archive.
This made it pretty clear what the displays have to communicate to users so that they understand what's happening and the process stays as intuitive as possible. Based on the interaction process I identified, I defined three states: a wait state, a loading state and an answer state. The wait state is displayed whenever no question is being asked, the loading state while the AI is answering a question, and the answer state once the AI has returned an answer.
With the assistance of Simon von Schmude from eLab, I wrote the Python code. The code continuously checks whether there is a question under the camera. The camera detects this based on pixel brightness. Since the background is black and the index cards are white, this works very well. As soon as the brightness changes (i.e. there is a question under the camera), the camera takes a photo and sends it to the AI, which transcribes the handwriting and sends the answer back as a JSON file. This allows the HTML code to access the individual sections of the answer and display them correctly on the different screens.
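The detection step could be sketched roughly as below, assuming OpenCV and a standard USB camera; the threshold value and function names are my own illustrations, not the project's actual code.

```python
# Rough sketch of the brightness-based card detection, assuming OpenCV and a USB camera.
# Threshold value and function names are illustrative, not the project's actual code.
import cv2

BRIGHTNESS_THRESHOLD = 100  # mean pixel value above which a white card is assumed

def card_present(frame) -> bool:
    """Return True if the mean brightness suggests a white card on the black surface."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray.mean() > BRIGHTNESS_THRESHOLD

def capture_question(camera_index: int = 0) -> bytes:
    """Wait until a card appears under the camera, then return a snapshot as JPEG bytes."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if ok and card_present(frame):
                ok, jpeg = cv2.imencode(".jpg", frame)
                return jpeg.tobytes()  # later sent to the AI for transcription
    finally:
        cap.release()
```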
The code decides which state to display based on defined events. The loading state is shown as soon as the camera has taken an image because the brightness changed. It is designed very simply and communicates that ‘the machine is working’ by shifting the background color from white to grey and back. The answer state is shown when the AI sends the answer back as a JSON file. The wait state is activated as soon as the camera detects the brightness dropping back to dark. It shows the title of the project and also functions as an invitation to use the interaction.
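Expressed in code, the three states could look roughly like this; the names and the transition helper are my own sketch, not taken from the project's code.

```python
# Rough sketch of the three display states described above; names are illustrative.
from enum import Enum, auto

class State(Enum):
    WAIT = auto()     # no card detected: show the project title / invitation
    LOADING = auto()  # card photographed: pulse the background while the AI works
    ANSWER = auto()   # JSON answer received: render it on the three screens

def next_state(card_detected: bool, answer_ready: bool) -> State:
    """Derive the display state from what the camera and the AI currently report."""
    if not card_detected:
        return State.WAIT      # card removed: back to the invitation screen
    if answer_ready:
        return State.ANSWER    # the AI has returned its JSON answer
    return State.LOADING       # card present, still waiting for the AI
```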
05 How it works
This is how the final interaction works:
- You browse through the archive or take a blank card and write down a question.
- Then you place the card under the camera.
- The camera recognises that there is a card there and takes a photo.
- The AI transcribes the handwriting and answers the question.
- The answer is displayed on the screens.
- As soon as the card is removed, the AI is ready for the next question.
- The card can be placed in the archive, creating a collective knowledge base.
