The general idea is to have a floorplan dashboard that is easy to set up. I was never good with SVG editors or 3D modeling, and wanted something simpler.
The goal is that it connects to devices you already have in Home Assistant and visualizes things like lights, sensors, and potential issues directly on the floorplan.
Still early days and very much a work in progress, but it’s free to use. Would love feedback.
New features shipped last month:
- Adaptive practice: LLM generates and grades questions in real-time, then uses Item Response Theory (IRT) to estimate your ability and schedule the optimal next question. Replaces flashcards; especially for math and topics where each question needs to be fresh even when covering the same concept. - Interactive math graphs (JSXGraph) that are gradable - Single-image Docker deployment for easy self-hosting
Open source: https://github.com/SamDc73/Talimio
the flow is basically:
When practice questions are generated, the model generates the question + the reference answer together, but the user only sees the question. then on submit, a smaller model grades the learner answer against that reference answer + the grading criteria.
I benchmarked a bunch of judge models for this on a small multi-subject set, and `gpt-oss-20b` ended up being a very solid sweet spot for quality/speed/structured-output reliability. on one of the internal benchmarks it got ~98.3% accuracy over 60 grading cases, with ~1.6s p50 latency, so it feels fast enough to use live.
for math, it’s not just LLM grading though:
- `SymPy` for latex/math expressions, so if the learner writes an equivalent answer in a different form, it still gets marked correct; so `(x+2)(x+3)` and `x^2 + 5x + 6` can both pass. (but might remove that one since it might be easily replaced by an LLM? And it's a niche use that add some maintenance cost)
- tolerance-based checks for the JSXGraph board state stuff; so on the graph if you plotted x = 5.2 instead of 5.3 it will be within the margin of error to pass but will give you a message about it
I also tried embedding/similarity checking early on, but it was noticeably worse on tricky answers, so I didn’t use that as the main path.
Five modules, one package: data utilities (Arkhe), schema validation (Kanon), Result/Option types (Zygos), typed error classes (Sphalma), and a Lodash migration bridge (Taphos).
The idea is that these patterns compose natively: Validate with Kanon, get a typed Result back via Zygos, chain transformations with Arkhe. One pipeline, no try/catch, full type inference.
Benchmarks: ~4x smaller and 5-11x faster than Zod 4, ~21x smaller than Lodash, ~3x smaller than Neverthrow.
Available on npm as @pithos/core
I track everything in my Google Calendar — work blocks, side projects, gym, social time. But I could never answer 'where did my time actually go this week?' Google Workspace has Time Insights, but it's locked to paid accounts and doesn't work for personal Google Calendar.
Calens fills that gap: GitHub-style heatmap showing 52 weeks of calendar activity, weekly/monthly time breakdowns by calendar or tag, a progress chart of planned vs completed time, and a cleaner in-page event editor. Everything runs on-device — no servers, no tracking, no data leaving the browser.
Early-stage, looking for people who already log their life in Google Calendar and want better data on their habits. Happy to give free lifetime access in exchange for honest feedback.
Most injection evasion works by making text look different to a scanner than to the LLM. Homoglyphs, leet speak, zero-width characters, base64 smuggling, ROT13, Unicode confusables — the LLM reads through all of it, but pattern matchers don't.
The project is two curated layers, not code:
Layer 1 — what attackers say. ~35 canonical intent phrases across 8 categories (override, extraction, jailbreak, delimiter, semantic worm, agent proxy, rendering...), multilingual, normalized.
Layer 2 — how they hide it. Curated tables of Unicode confusables, leet speak mappings, LLM-specific delimiters (<|system|>, [INST], <<SYS>>...), dangerous markup patterns. Each table is a maintained dataset that feeds a normalisation stage.
The engine itself is deliberately simple — a 10-stage normalisation pipeline that reduces evasion to canonical form, then strings.Contains + Levenshtein. Think ClamAV: the scan loop is trivial, the definitions are the product.
Long term I'd like both layers to become community-maintained — one curated corpus of injection intents and one of evasion techniques, consumable by any scanner regardless of language or engine.
Everything ships as go:embed JSON, hot-reloadable without rebuild. No regex (no ReDoS), no API calls, no ML in the loop. Single dependency (golang.org/x/text). Scans both inputs and LLM outputs.
result := injection.Scan(text, injection.DefaultIntents()) if result.Risk == "high" { ... }
- Linetris ( https://apps.apple.com/us/app/linetris-daily-line-puzzle/id6... ), a daily puzzle game where you fill an 8x8 grid with Tetris-like pieces to clear lines. Think Wordle meets Tetris. Daily challenges, leaderboards, and competititve play against friends.
- Kvile ( https://kvile.app ) — A lightweight desktop HTTP client built with Rust + Tauri. Native .http file support (JetBrains/VS Code/Kulala compatible), Monaco editor, JS pre/post scripts, SQLite-backed history. Sub-second startup. MIT licensed, no cloud, your requests stay on your machine. Think Postman without the bloat and login walls.
- Mockingjay ( https://apps.apple.com/us/app/mockingjay-secure-recorder/id6... ) — iOS app that records video and streams AES-256-GCM encrypted chunks to your Google Drive in real-time. By the time someone takes your phone, the footage is already safe in the cloud. Built for journalists, activists, and anyone who needs tamper-proof evidence. Features a duress PIN that wipes local keys while preserving cloud backups, and a fake sleep mode that makes the phone look powered off during recording.
- Stao ( https://stao.app ) — A simple sit/stand reminder for standing desk users. Runs in the system tray, tracks your streaks, zero setup. Available on macOS, Windows, Linux, iOS, and Android.
- MyVisualRoutine ( https://myvisualroutine.com ) — This one is personal. I have three kids, two with severe disabilities. Visual schedules (laminated cards, velcro boards) are a lifeline for non-verbal children, but they're a nightmare to manage and they don't leave the house. So I built an app that lets you create a full visual routine in about 20 seconds and take it anywhere. Choice boards, First/Then boards, day plans, 50+ preloaded activities, works fully offline. Free tier is genuinely usable. Available on iOS and Android.
- Biblewise — a Bible trivia game I originally built for my niece and nephew but ended up with three modes: adventure (progressive levels across 6 categories), daily challenges with streak tracking, and a timed mode. Built with SwiftUI + SwiftData, offline-first. https://apps.apple.com/us/app/biblewise-bible-quiz-game/id67...
- Neimr — a collaborative naming app with Tinder-style swiping. Create a survey for baby names, pet names, business names, etc., invite your partner/friends, and it finds which names you all agree on. Built with Flutter + Firebase. https://apps.apple.com/us/app/neimr-swipe-find-names/id67582...