https://github.com/xemantic/xemantic-neo4j-kotlin-driver
There is also an associated demo project showing how to use this driver with Ktor, with full-stack asynchronicity and the structured concurrency of coroutines:
https://github.com/xemantic/xemantic-neo4j-demo
I use Neo4j a lot with my AI agents, letting them store private memory as a knowledge graph, but also letting them explore this graph in an automated scientific process. I've discovered that reducing the cognitive load on an LLM is crucial for the quality of machine reasoning, and this is the intention behind this library: no explicit "async", DSLs for idiomatic resource management, and automatic mapping of Cypher input and output data to multiplatform data classes. All of this can be executed as a script, while remaining strongly typed and compiled, giving additional feedback to an autonomous chain-of-code style agent. This allows agents to define ad hoc data ingestion and retrieval schemas, while avoiding the double-task inference challenge of encoding intents while simultaneously comprehending their own intents.
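To make the intended ergonomics concrete, here is a minimal sketch. The names below (`withSession`, `query`, the stub session) are illustrative stand-ins, not the actual xemantic-neo4j-kotlin-driver API; a real session would talk to a Neo4j instance, while the stub only shows the shape of the DSL: scoped resource management, no visible "async", and record-to-data-class mapping.

```kotlin
// Hypothetical sketch, NOT the real driver API. A stub session pretends to
// run a Cypher query and maps each record (a Map of column name to value)
// to a multiplatform data class.

data class Person(val name: String, val born: Int)

class StubSession {
    fun <T> query(
        cypher: String,
        params: Map<String, Any?> = emptyMap(),
        map: (Map<String, Any?>) -> T
    ): List<T> = listOf(map(mapOf("name" to "Ada", "born" to 1815)))
}

// DSL-style resource management: the caller never sees close() or "async";
// the session's lifecycle is handled entirely by the scope function.
fun <T> withSession(block: StubSession.() -> T): T = StubSession().block()

fun main() {
    val people = withSession {
        query("MATCH (p:Person) WHERE p.born < \$year RETURN p", mapOf("year" to 1900)) {
            Person(it["name"] as String, it["born"] as Int)
        }
    }
    println(people)
}
```

The point of the scope-function shape is that an agent writing a script only has to express the query and the mapping, never the connection lifecycle.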
Initially I was quite impressed with its problem-solving capabilities when it was outputting code through the chat interface. It addressed certain problems much better than Claude or Gemini. However, as soon as I switched to Alibaba Cloud's API to provide a DashScope-based implementation of the cognizer interface of my new generation of AI agents (chain of code), the whole charm was gone.
Qwen3 struggles with structured generation attempts, quite often falling into an infinite loop while emitting tokens.
It has trouble crossing language boundaries, which is crucial for my agents that are "thinking in code": writing a Kotlin script containing JavaScript, containing SQL, etc. It will therefore not work well as an automated software engineer.
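A small illustration of the nesting these agents produce: Kotlin code that emits JavaScript, which itself embeds SQL. Three quoting and escaping regimes in one snippet is exactly the boundary-crossing described above (the `db.query` call is a generic placeholder, not any specific JavaScript library).

```kotlin
// Kotlin emitting JavaScript which embeds SQL: a model generating code like
// this must keep three languages' syntax and escaping rules straight at once.

fun buildJsQuery(table: String): String {
    // SQL inside JavaScript inside Kotlin
    val sql = "SELECT name, born FROM $table WHERE born < 1900"
    return """
        |const rows = await db.query("$sql");
        |console.log(rows.length);
    """.trimMargin()
}

fun main() {
    println(buildJsQuery("person"))
}
```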
It is "stubborn": even when a syntax error in the generated code is clearly indicated, it tends to output the same erroneous code again and again instead of testing another hypothesis.
It lacks a theory of mind and an understanding of the context and the environment. For example, when asked to check recent news, it always responds by constructing a BBC API URL with an unfilled API key as part of the request, and then passes this URL to the Files tool instead of the WebBrowser tool, which obviously fails.
Last but not least: censorship. For example, Qwen3 will refuse to search for information on the most recent anti-government protests in China. I wouldn't be surprised if these censorship blocks were partially responsible for the poor quality of cognition in other areas.
Maybe I'm doing something wrong. Are you getting much better results with this model for fully autonomous agents with a feedback loop?
1. Introduce the IntelliJ IDEA IDE and tools
2. Showcase my Unix-omnipotent educational open source AI agent called Claudine (which can basically do what Claude Code can do, but I already released it in October 2024)
3. Go through a glossary of AI-related terms
4. Explore demo code snippets, gradually introducing more and more abstract concepts
5. Work together on ideas brought by participants
In theory, attendees of the workshop should learn enough to be able to build an agent like Claudine themselves. During this workshop I introduce my open source AI development stack (a Kotlin multiplatform SDK based on the Anthropic API). Many examples use the OPENRNDR creative coding framework, which makes the whole process more playful. I'm an OPENRNDR contributor and I often call it "an operating system for media art installations". This is why the workshop is called "Agentic AI & Creative Coding". Here is the list of demos:
- Demo010HelloWorld.kt
- Demo015ResponseStreaming.kt
- Demo020Conversation.kt
- Demo030ConversationLoop.kt
- Demo040ToolsInTheHandsOfAi.kt
- Demo050OpenCallsExtractor.kt
- Demo061OcrKeyFinancialMetrics.kt
- Demo070PlayMusicFromNotes.kt
- Demo090ClaudeAiArtist.kt
- Demo090DrawOnMonaLisa.kt
- Demo100AffirmationMirror.kt
- Demo110TruthTerminal.kt
- Demo120AiAsComputationalArtist.kt
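To give a flavour of the progression, here is a minimal sketch in the spirit of Demo030ConversationLoop.kt. The `Cognizer` interface is a hypothetical stand-in so the snippet is self-contained; the actual demos use my Kotlin multiplatform SDK for the Anthropic API.

```kotlin
// Conversation-loop sketch; Cognizer is a hypothetical stand-in for a model.

fun interface Cognizer {
    fun respond(history: List<String>): String
}

// The core of a conversation loop: the full message history grows with each
// turn and is re-sent, which is also why managing the token window matters.
fun conversationLoop(cognizer: Cognizer, userTurns: List<String>): List<String> {
    val history = mutableListOf<String>()
    for (turn in userTurns) {
        history += "user: $turn"
        history += "assistant: ${cognizer.respond(history)}"
    }
    return history
}

fun main() {
    val echo = Cognizer { history -> "I heard ${history.size} messages so far" }
    conversationLoop(echo, listOf("hello", "what can you do?")).forEach(::println)
}
```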
And I would like to extend it even further.
Each code example is annotated with "What you will learn" comments, which I split into 3 categories:
- AI Dev: techniques, e.g. how to maintain the token window, optimal prompt engineering
- Cognitive Science: philosophical and psychological underpinnings, e.g. emergent theory of mind and reasoning, the importance of role-playing
- Kotlin: in this case the language is just the simplest possible vehicle for delivering the other abstract AI development concepts
I am collecting lots of feedback from participants of my workshops, and I hope to improve them even further. Now I am considering recording this workshop as a series of YouTube videos.
Are you teaching how to write AI agents? How do you do it? Do you have any recommendations for my workshops?
https://xemantic.com/ai/workshops/