https://github.com/mitoma/kashiki2?tab=readme-ov-file#%E5%AE...
This kind of effect works especially well for Japanese, with its curved strokes inside square character boxes.
https://github.com/mitoma/kashiki2/blob/main/doc/assets/psyc...
https://github.com/mitoma/kashiki2/blob/main/doc/assets/ar-m...
Also, a hotkey (or a set of them) for predefined isometric camera views would seem useful; maybe I'm just not seeing it? https://github.com/mitoma/kashiki2/blob/main/kashikishi/asse...
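Sketching what I mean (hypothetical key bindings and types, not kashiki2's actual API; Rust, since that's what the project appears to be written in):

    // Hypothetical sketch (not kashiki2's real API): number keys snap
    // the camera to predefined views around the text plane at the origin.

    #[derive(Clone, Copy, Debug)]
    struct CameraPose {
        eye: [f32; 3],    // camera position
        target: [f32; 3], // look-at point
    }

    const PRESETS: [(char, CameraPose); 4] = [
        ('1', CameraPose { eye: [0.0, 0.0, 5.0], target: [0.0; 3] }), // front
        ('2', CameraPose { eye: [5.0, 0.0, 0.0], target: [0.0; 3] }), // right side
        ('3', CameraPose { eye: [0.0, 5.0, 0.0], target: [0.0; 3] }), // top down
        ('4', CameraPose { eye: [3.0, 3.0, 3.0], target: [0.0; 3] }), // isometric
    ];

    fn preset_for(key: char) -> Option<CameraPose> {
        PRESETS.iter().find(|(k, _)| *k == key).map(|(_, p)| *p)
    }

    fn main() {
        if let Some(pose) = preset_for('4') {
            println!("snap camera to {:?}", pose); // hand off to the renderer here
        }
    }

Snapping (or quickly animating) to these beats free-flying the camera every time you want to read the text straight-on.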
I asked ChatGPT 4o to translate the README: https://chatgpt.com/share/6700bed9-1198-8004-8eed-07f5055d07...
The translation seemed largely consistent with what Google Translate provided, but some of ChatGPT’s translation differences seemed more plausible to me, and it certainly reads more coherently. It also doesn’t keep forgetting that it’s dealing with the proper name of the program.
I didn’t try Gemini for this, but I imagine it has to be decent at translation too, so I wonder if/when Google will use Gemini to assist, replace, or otherwise complement Google Translate.
https://ja.wikipedia.org/wiki/%E8%8D%89%E6%9B%B8%E4%BD%93#/m...
which is actually a fairly legible example. Admittedly, the more flowing styles that you see in old poetry and the like effectively require specialized training to read. Beautiful, though!
Example: https://www.cjvlang.com/Writing/writmongol/websitesinmongolb...
At first I thought it was a descendant of Arabic, but a Wikipedia detour shows that their common ancestor is actually the Aramaic script.
But I still think pitch accent is even harder. That one's just impossible lol
For instance, a lot of the obvious brush strokes are gone, as in: うえらおや
The top stroke is supposed to look either like a droplet of water or like the upper part of ふ.
Certain details are gone: に and こ no longer have the half-arrow hook that used to sit vertically at the end of the horizontal stroke.
ふ has lost a lot of its detail.
Still, I would argue it looks better now, for the most part.
Or if anyone wants to build an IDE like this, take it as inspiration. Raskin thought it through very well.
It's likely not technically feasible at the moment (not without sacrificing font quality and too many other features of code editing).
With the jump from 2D screens to AR-based UI, we have the chance to rethink all of the conventions that have gripped UI/UX design over the past few decades. How many apps would benefit from being able to visualize data in a 3D space? How many new ways could we interact with computers if we could reach out and touch things? Text editing, video editing, image editing (visualizing Photoshop layers?), 3D modeling, sketching, gaming: all revolutionized by a new input paradigm.

That's partially what I thought Apple would accomplish. They have a history of totally rethinking every part of software when a new input device comes around. I mean, think about the jump from the iMac to the iPhone. ["I just take my finger, and I scroll."](https://www.youtube.com/watch?v=FSv5x3V_KHY) I shudder to think how many drugs Apple employees had to take in order to think around traditional desktop conventions and come up with this stuff.

I figured with the Vision Pro we'd see traditional apps reformed to a new, never-before-seen standard, but I have unfortunately seen very little of that. If you scrape off all of the high-budget polish, the Vision Pro feels like a device that another company would create and that Apple would then do correctly. By extension, the Meta Quest lineup feels the same way.
But this is the kind of thing I absolutely want to see more of. There's a physicality to this text editor that feels intuitive, but more importantly, it feels comforting. When things appear and disappear on screens instantaneously, without any animation, our brains register that something is wrong, because that's unusual behavior. Animation has a purpose; it's not always just for show. Bringing physicality like this to a 3D interface in mixed reality is, in my opinion, the next step in UI design. This text editor isn't going crazy with its effects, but you can already see the potential.

As these devices come down in price and more developers get their hands on them, I hope to see more like this. Hell, seeing this is the closest I've ever gotten to splurging on a Meta Quest so I could whip up a 3D modal text editor. I want a digital kitchen timer I can physically wind and unwind for Pomodoro timing. I want to pick an album to listen to on Apple Music from a stack of records projected onto my floor. Impractical? Perhaps. But look at early skeuomorphic iPhone apps and tell me those are practical. If all we cared about was using computers to get from point A to point B, we'd all work in TUIs, and r/unixporn wouldn't exist.
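On the animation point: even a one-line easing curve is enough to make an appearance read as physical rather than jarring. A minimal sketch, no particular framework assumed, names purely illustrative:

    // Fade a panel in over a quarter second instead of popping it in.
    // Easing makes the motion decelerate like a real object would.

    fn ease_out_cubic(t: f32) -> f32 {
        1.0 - (1.0 - t).powi(3) // fast start, gentle landing
    }

    fn main() {
        let duration = 0.25; // seconds
        let dt = 0.05;       // simulated frame time
        let mut elapsed = 0.0_f32;
        while elapsed <= duration {
            let t = (elapsed / duration).clamp(0.0, 1.0);
            println!("opacity = {:.2}", ease_out_cubic(t));
            elapsed += dt;
        }
    }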
I don't know what it is, but I sense a fundamental lack of interest in this new input paradigm, both from companies like Apple and Meta and from developers. Hopefully open-source projects like this will show people the real potential of this new hardware.
The alpha release builds an AR app that pulls in code and renders it in space.
I genuinely love Emacs people.
Every once in a long while I get the urge to give Emacs a proper go, but I really can't live without some conveniences like "change inner string" or what have you.
How the hell do you do that in Emacs? I haven't been able to find out.
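For what it's worth, evil-mode brings vim's text objects to Emacs, so a literal ci" works there. The underlying operation is also simple enough to sketch. Here it is in Rust (matching the project under discussion), with a deliberately naive model that ignores escaped quotes:

    // Sketch of the "change inner string" text object: given a cursor
    // offset, find the nearest enclosing pair of double quotes and
    // return the span between them (the part a ci" would delete).

    fn inner_string(text: &str, cursor: usize) -> Option<std::ops::Range<usize>> {
        let open = text[..cursor].rfind('"')?;      // quote before the cursor
        let close_rel = text[cursor..].find('"')?;  // quote after the cursor
        Some(open + 1..cursor + close_rel)
    }

    fn main() {
        let line = r#"println!("hello, world");"#;
        let cursor = 14; // somewhere inside "hello, world"
        if let Some(span) = inner_string(line, cursor) {
            println!("would replace: {:?}", &line[span]);
        }
    }

A real implementation also has to handle escapes and the cursor sitting outside any string, which is exactly the bookkeeping an editor does for you.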
I've seen close to zero spam pull requests. Are these common?
"Although Japanese is primarily written vertically, there are not many text editors that fully support vertical writing. However, you can gain deeper insight by reading a text vertically or horizontally, or by flexibly changing the layout and rereading it. With Sakishi, you can instantly switch between vertical and horizontal writing while editing a document."
That was my "aha!" moment when reading this. Japanese has been made to fit Western convention a lot of the time, but it's good to have another option.
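A minimal sketch of what that switch amounts to: the character stream stays the same, and only the per-glyph advance changes. Horizontal text fills rows left-to-right, top-to-bottom; vertical Japanese fills columns top-to-bottom, right-to-left. Hypothetical types, assuming uniform full-width cells (real CJK layout also has to rotate some punctuation and handle half-width runs):

    #[derive(Clone, Copy)]
    enum Direction {
        Horizontal,
        Vertical,
    }

    // Map each character to a grid cell; `wrap` is the line length.
    fn layout(text: &str, dir: Direction, wrap: usize) -> Vec<(char, i32, i32)> {
        text.chars()
            .enumerate()
            .map(|(i, ch)| {
                let (line, pos) = ((i / wrap) as i32, (i % wrap) as i32);
                match dir {
                    // lines stack downward, characters advance rightward
                    Direction::Horizontal => (ch, pos, line),
                    // lines stack leftward, characters advance downward
                    Direction::Vertical => (ch, -line, pos),
                }
            })
            .collect()
    }

    fn main() {
        for (ch, x, y) in layout("縦書きも横書きも", Direction::Vertical, 4) {
            println!("{ch} at cell ({x}, {y})");
        }
    }

Since the text model never changes, flipping between the two really can be instant; it's just a different mapping from index to position.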
If it's just for normal people, then go wild with all the useless, CPU-wasting frills. Feel proud about it, even. In 2024 a snappy user interface only requires a few GBs of RAM, several intercommunicating processes and the ENTIRE FREAKING WEB STACK.