Funny story: using kilo was the final straw [1] in getting me to give up on terminals. These days I try to do all my programming atop a simple canvas I can draw pixels on.
Here's the text editor I use all the time these days (and base lots of forks off of): https://git.sr.ht/~akkartik/text2.love. 1200 LoC, proportional font, word-wrap, scrolling, clipboard, unlimited undo. Can edit Moby Dick.
[1] https://git.sr.ht/~akkartik/teliva
Hey Akkartik! That's really interesting! At the moment you're still using a terminal to launch the individual apps or something else?
akkartik 34 minutes ago [-]
Whatever works! I mostly use LÖVE, and it supports both. Some reasons to run it from the terminal rather than simply double-clicking or a keyboard shortcut in the OS:
* While I'm building an app I want to run from a directory rather than a .love file.
* I want to pass additional arguments. Though I also extensively use drag and drop for filenames.
* I want to print() while debugging.
pabs3 9 hours ago [-]
Someone else who eschews terminals and replaced them:
https://arcan-fe.com/2025/01/27/sunsetting-cursed-terminal-e...
I really enjoyed the plan9 way of an application slurping up the terminal window (not a real terminal anyway) and then using it as a full-fledged GUI window. No weird terminal windows floating around in the background, and you could still return to it when quitting for any logs or outputs.
volemo 9 hours ago [-]
> These days I try to do all my programming atop a simple canvas I can draw pixels on.
Why?
alpaca128 7 hours ago [-]
Not GP but the terminal is inefficient and limiting for input and UI. For one you cannot detect key-up and key-down events, only a full key press. The press of multiple (non-modifier) keys at once can't be recognized either. Also there are some quirks, like in many terminals your application cannot distinguish between the Tab key and Ctrl-I as they look the same. But in some (e.g. Alacritty) it can work, so now if you have two different keybindings for Tab & Ctrl-I your program will behave differently in different terminals.
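That collision is baked into the encoding: a traditional terminal sends Ctrl+&lt;key&gt; as the key's ASCII code with the upper bits masked off, so Tab and Ctrl-I are literally the same byte. A quick sketch:

```python
def ctrl(ch):
    """Byte a classic terminal transmits for Ctrl+<ch> (char & 0x1f)."""
    return ord(ch.upper()) & 0x1f

print(ctrl('I'))   # 9
print(ord('\t'))   # 9 -- identical, so the application can't tell them apart
print(ctrl('H'))   # 8 -- likewise Ctrl-H vs. Backspace in some configurations
```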
If you want to do anything that's not printing unformatted text right where the cursor is, you need to print out control sequences that tell the terminal where to move the cursor or format the upcoming text. So you build weird strings, print them out and then the terminal has to parse the string to know what to do. As you can imagine this is kind of slow.
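For example, positioning the cursor and turning on bold both mean splicing escape bytes into the output stream. These are standard CSI/SGR sequences; a sketch:

```python
ESC = "\x1b"

def move_to(row, col):
    # CSI row;col H -- position the cursor (1-based coordinates)
    return f"{ESC}[{row};{col}H"

def bold(text):
    # SGR 1 turns bold on, SGR 0 resets all attributes
    return f"{ESC}[1m{text}{ESC}[0m"

# Drawing "hello" in bold at row 5, column 10 means printing this string,
# which the terminal then has to parse character by character:
frame = move_to(5, 10) + bold("hello")
print(repr(frame))   # '\x1b[5;10H\x1b[1mhello\x1b[0m'
```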
If you accidentally print a line that's too long it might break and shift the rest of the UI. That's not too bad because it's a monospaced font, so you only have to count the Unicode symbols (not bytes)...until you realize Chinese characters are rendered twice as wide. Text is weird, and in the terminal there is nothing but text. But to be fair it's still a lot simpler than proportional fonts and a lot of fun, but I definitely understand why someone would decide to just throw pixels on a canvas and not deal with the historical quirks.
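Unicode's East Asian Width property is what makes the column count diverge from the character count. A simplified width function (real implementations also have to handle zero-width and combining characters):

```python
import unicodedata

def display_width(s):
    # Count East Asian Wide ('W') and Fullwidth ('F') characters as two
    # terminal columns, everything else as one. This is a simplification.
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
               for ch in s)

print(len("你好"), display_width("你好"))     # 2 4
print(len("hello"), display_width("hello"))   # 5 5
```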
vidarh 1 hour ago [-]
I think there's lots of scope for improvements to terminals, but I feel like this is more a question of "nobody has asked for it".
There's been plenty of recent innovation in terminals (support for a variety of new underline styles to enable "squigglies" for error reporting is one example; new image support is another), and adding a code to enable more detailed key reporting, the same way we have upgraded mouse event reporting over the years, wouldn't be hard. These things tend to spread quickly.
With respect to "accidentally printing a line that's too long", you can turn off auto-wrap in any terminal that supports DECAWM (\033[?7l to disable, \033[?7h to re-enable).
That it's "kinda slow" really shouldn't be an issue - it was fast enough for hardware orders of magnitude slower than today's. Parsing it requires a fairly simple state machine. If your parser can't keep up with VT100/ANSI escape sequences, it is doing something very wrong.
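Such a parser really is small. A toy sketch that handles just CSI sequences (real parsers, like Paul Williams' VT500 state machine, have more states, but the shape is the same):

```python
def parse(stream):
    """Split a character stream into plain text and CSI escape sequences."""
    out, state, buf = [], "ground", ""
    for ch in stream:
        if state == "ground":
            if ch == "\x1b":
                state = "esc"
            else:
                out.append(("text", ch))
        elif state == "esc":
            if ch == "[":
                state, buf = "csi", ""
            else:
                state = "ground"          # other escape types omitted here
        elif state == "csi":
            buf += ch
            if "@" <= ch <= "~":          # final byte terminates the sequence
                out.append(("csi", buf))
                state = "ground"
    return out

print(parse("a\x1b[2Jb"))  # [('text', 'a'), ('csi', '2J'), ('text', 'b')]
```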
The difficulty of unicode is fair enough, and sadly largely unavoidable, but that part is even worse in a GUI; the solution there is to use code to measure the rendered string, and it's not much harder to get that right for terminals either. It'd be nice if unicode had done this in a nicer way (e.g. indicated it in the encoding).
For my own terminal, I'm toying with the idea of allowing proportional text with an escape code, and make use of it in my editor. If I do, it'll be strictly limited: Indicate a start and end column where the text is proportional, and leave it to the application to specify a font and figure out the width itself.
Worst case scenario would be that you send the escape, and the editor doesn't get an escape acknowledging it has been enabled back, and falls back on monospaced text and keeps working fine in a regular terminal. This way, evolving terminal capabilities can be done fairly easily with backwards compatibility.
miki123211 5 hours ago [-]
And to make matters worse, unlike a GUI, the terminal doesn't provide any semantic information about the content it displays to the OS.
This is a problem for accessibility software, screen readers, UI automation, voice control etc.
If you want a screen reader to announce that a menu option is selected, you need some way to signal to the OS that there's a menu open, that some text is a menu option, and that the option has the "selected" state. All serious GUI frameworks let you do this (and mostly do it automatically for native controls), so does the web.
TUIs do not (and currently can not) do this. While this is not really a problem for shells or simple Unix utilities, as they just output text which you can read with a screen reader just fine, it gets really annoying with complicated, terminal-based UIs. The modern AI coding agents are prominent examples of how not to do this.
vidarh 4 hours ago [-]
TUIs could be made to do this relatively easily. "All" you need is to pick an escape sequence that assigns a semantic label to the following span of text, and have the terminal use whatever OS mechanism is available to expose that label to assistive tech.
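To make the idea concrete, here's a purely hypothetical sketch in the style of existing OSC sequences (like OSC 8 for hyperlinks). The number 7700 is invented; no terminal implements anything like this today:

```python
ESC, ST = "\x1b", "\x1b\\"   # ST = string terminator

def semantic(role, text):
    # Hypothetical: wrap a span of text in an OSC sequence carrying a
    # semantic role, terminated by an empty sequence of the same kind.
    return f"{ESC}]7700;{role}{ST}{text}{ESC}]7700;{ST}"

# A TUI menu could then emit something a terminal might relay to a
# screen reader as "menu item, selected":
menu_item = semantic("menuitem:selected", "Open File...")
print(repr(menu_item))
```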
Of course, that doesn't help unless/until at least one prominent terminal actually does it and a few major terminal applications add support for it.
akkartik 9 hours ago [-]
Terminals are full of hacks. For example, in my terminal project linked above, the Readme says this:
"Backspace is known to not work in some configurations. As a workaround, typing ctrl-h tends to work in those situations." (https://git.sr.ht/~akkartik/teliva#known-issues)
This is a problem with every TUI out there built using ncurses. "What escape code does your terminal emit for backspace?" is a completely artificial problem at this point.
There are good reasons to deal with the terminal: I need programs built for it, or I need to interface with programs built for it. Programs that deal with 1D streams of bytes for stdin and stdout are simpler in text mode. But for anything else, I try to avoid it.
ayrtondesozzla 8 hours ago [-]
Sorry for jumping off topic but I came across mu recently - looks very interesting! Hope to try it out properly when I get a moment
akkartik 8 hours ago [-]
Thank you! Hit me up any time.
giancarlostoro 28 minutes ago [-]
I made a similar editor using Lazarus... since it has syntax highlighting components... I guess that's cheating. The more I think about it though, I wonder if Freepascal could produce a nice GUI for Neovim.
I did try to build one in Qt in C++ years ago, stopped at trying to figure out how to add syntax highlighting since I'm not really that much into C++. Pivoted it to work like Notepad, so I was still happy with how it wound up.
https://github.com/Giancarlos/qNotePad
My own editor is an array of lines in Ruby, and in about 8 years of using it daily (with the actual editor interacting with the buffer storage via IPC to a server holding all the buffers), it's just not been a problem.
It does become a problem if you insist on trying to open files of hundreds of MB of text, but my thinking is that I simply don't care to treat that as a text editing problem for my main editor, because files that size are usually something I only ever care to view or am better off manipulating with code.
If you want to be able to open and manipulate huge files, you're right, and then an editor using these kind of simple methods isn't for you. That's fine.
As it stands now, my editor holds every file I've ever opened and not explicitly closed in the last 8 years in memory constantly (currently, 5420 buffers; the buffer storage is persisted to disk every minute or so, so if I reboot and open the same file, any unsaved changes are still there unless I explicitly reload), and it's not even breaking the top 50 or so of memory use on my machine usually (those are all browser tabs...)
I'm not suggesting people shouldn't use "fancier" data structures when warranted. It's great some editors can handle huge files. Just that very naive approaches will work fine for a whole lot of use cases.
E.g. the 5420 open buffers in my editor currently are there because even the naive approach of never garbage collecting open buffers just hasn't become an issue yet - my available RAM has increased far faster than the size of the buffer storage so adding a mechanism for culling them just hasn't become a priority.
lor_louis 11 minutes ago [-]
Oh, by "more complex" operations I referred to multiple cursors and multi-line regex searches. I've noticed some performance problems in my own editor, but it's mostly because "lines" become fragmented: if you allocate each line separately, they might end up far away from each other in memory. It's especially true when programming, where lines are relatively short.
Regex searches and code highlight might introduce some hitches due to all of the seeking.
Anyway here's what I built: https://github.com/lorlouis/cedit
If I were to do it again I'd use a piece table[1]. The VS code folks wrote a fantastic blog post about it some time ago[2].
[1] https://en.m.wikipedia.org/wiki/Piece_table [2] https://code.visualstudio.com/blogs/2018/03/23/text-buffer-r...
userbinator 12 hours ago [-]
> The core data structure (array of lines) just isn't that well suited to more complex operations.
Modern CPUs can read and write memory at dozens of gigabytes per second.
Even when CPUs were 3 orders of magnitude slower, text editors using a single array were widely used. Unless you introduce some accidentally-quadratic or worse algorithm in your operations, I don't think complex datastructures are necessary in this application.
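The array-of-lines structure being defended here is about as simple as data structures get; a minimal sketch:

```python
class Buffer:
    """Naive array-of-lines text buffer: fine for everyday file sizes."""
    def __init__(self, text=""):
        self.lines = text.split("\n")

    def insert(self, row, col, s):
        line = self.lines[row]
        # Rebuilding one line is O(len(line)) -- cheap for typical lines.
        self.lines[row] = line[:col] + s + line[col:]

    def insert_line(self, row, s=""):
        # O(number of lines), but it only shifts pointers, not text.
        self.lines.insert(row, s)

buf = Buffer("hello\nworld")
buf.insert(1, 0, ">> ")
print(buf.lines)   # ['hello', '>> world']
```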
lifthrasiir 10 hours ago [-]
The actual latency budget would be less than a single frame to be completely unnoticeable, so you are in fact limited to less than 1 GB to move per keystroke. And each character may hold additional metadata like syntax highlighting state, so 1 GB of movable memory doesn't translate to 1 GB of text either. You are still correct in that a line-based array is enough for most cases today, but I don't think it's generally true.
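The budget math, roughly, assuming a 60 Hz frame and an optimistic 60 GB/s of sequential memory bandwidth (both are illustrative figures):

```python
frame_budget_s = 1 / 60      # ~16.7 ms to finish within one 60 Hz frame
bandwidth_gb_s = 60          # optimistic sequential memory bandwidth
movable_gb = bandwidth_gb_s * frame_budget_s
print(round(movable_gb, 1))  # 1.0 -- roughly 1 GB per keystroke, at best
```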
RetroTechie 41 minutes ago [-]
Movement of GB's of data being noticeable should be considered a feature, imho.
And if those GB's represent text, with user trying to edit that as a single file, well then... PEBKAC.
lelanthran 8 hours ago [-]
> The core data structure (array of lines) just isn't that well suited to more complex operations.
Just how big (and how many lines) does your file have to be before it is a problem? And what are the complex operations that make it a problem?
(Not being argumentative - I'd really like to know!)
On my own text editor (to which I lost the sources way back in 2004) I used an array of bytes, had syntax highlighting (using single-byte start/stop codes), and used a moving "window" into the array for rendering. I never saw a latency problem back then on a Pentium Pro, even with files as large as 20MB.
I am skeptical of the piece table as used in VS Code being that much faster; right now on my 2011 desktop, a VS Code with no extra plugins has visible latency when scrolling by holding down the up/down arrow keys and a really high keyboard repeat setting. Same computer, same keyboard repeat and same file using Vim in a standard xterm/uxterm has visibly better scrolling; takes half as much time to get to the end of the file (about 10k lines).
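For reference, the piece table under discussion can be sketched in a few lines. This is a toy illustration of the idea (original text is never modified; edits append to an add-buffer and split descriptors), not VS Code's actual implementation:

```python
class PieceTable:
    """Toy piece table: pieces describe spans of two immutable buffers."""
    def __init__(self, text):
        self.original, self.added = text, ""
        self.pieces = [("orig", 0, len(text))]   # (buffer, start, length)

    def insert(self, pos, s):
        new_piece = ("add", len(self.added), len(s))
        self.added += s
        out, offset = [], 0
        for buf, start, length in self.pieces:
            if 0 <= pos - offset <= length:
                left = pos - offset          # split point inside this piece
                if left:
                    out.append((buf, start, left))
                out.append(new_piece)
                if length - left:
                    out.append((buf, start + left, length - left))
                pos = -1                     # insertion done
            else:
                out.append((buf, start, length))
            offset += length
        self.pieces = out

    def text(self):
        bufs = {"orig": self.original, "add": self.added}
        return "".join(bufs[b][s:s + l] for b, s, l in self.pieces)

pt = PieceTable("hello world")
pt.insert(5, ",")
print(pt.text())   # hello, world
```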
ofalkaed 7 hours ago [-]
From what I have experienced, the complex data structures used here are more about maintaining responsiveness when overall system load is high, and that may result in slightly slower performance overall. Say you used the variable "x" a thousand times in your 10k lines of code and you want to do a find and replace on it to give it a more descriptive name like "my_overused_variable": think about all of the memory copying that happens if all 10k lines are in a single array. If those 10k lines are in 10k arrays which are each twice the size of their line, you reduce that a fair amount. It might be slower than simpler methods when the system load is low, but it will stay responsive longer.
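For illustration, with one string per line only the lines that actually contain the target get rebuilt; the rest keep their existing allocations (a toy sketch):

```python
# Toy 400-line "file" where half the lines mention the variable "x":
lines = ["int x = 0;", "print(1);", "x += 1;", "print(2);"] * 100

# Per-line storage: renaming "x" rebuilds only the lines containing it.
new_lines = [ln.replace("x", "my_overused_variable") if "x" in ln else ln
             for ln in lines]

changed = sum(1 for old, new in zip(lines, new_lines) if old != new)
print(changed)   # 200 -- half the lines rebuilt, the other half untouched
```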
I think vim uses a gap structure, not a single array but don't remember.
I am not a programmer; my experience could very well be due to failings elsewhere in my code and my reasoning could be hopelessly flawed, so hopefully someone will correct me if I am wrong. It has also been a while since I dug into this; the project that got me to dig into this is one of the things that finally got me to make an account on HN, and one of my first submissions was Data Structures for Text Sequences.
https://www.cs.unm.edu/~crowley/papers/sds.pdf
VS Code used 40-60 bytes per line, so a file with 15 million single character lines balloons from 30 MB to 600+ MB. kilo uses 48 bytes per line on my 64-bit machine (though you can make it 40 if you move the last int with the other 3 ints instead of wasting space on padding for memory alignment), so it would have the same issue.
https://github.com/antirez/kilo/blob/323d93b29bd89a2cb446de9...
I have never seen a file like this in my life, let alone opened one. I'm sure they exist and people will want to open them in text editors instead of processing with sed/awk/Python, but now we're well into the 5-sigma of edge cases.
Would highly recommend the tutorial as it is really well done.
stevekemp 3 hours ago [-]
I remember that tutorial fondly.
I played around with kilo when it was released, and eventually made a multi-buffer version with support for scripting via embedded Lua. Of course it was just a fun hack, not a serious thing; I continue to do all my real editing with Emacs. But it did mean I got to choose the best project name:
https://github.com/skx/kilua
Here’s a second recommendation for that tutorial. It’s the first coding tutorial I’ve finished because it’s really good and I enjoyed building the foundational software program that my craft relies on. I don’t use that editor but it was fun to create it.
cies 3 minutes ago [-]
Last serious work on this was in 2020. Lacks newsworthiness imho.
anonzzzies 1 hour ago [-]
Ah darn. Closing in on retirement age (will never happen, coding is too much fun, for profit or charity), I've resisted building an editor but I want to. Need to. I hacked so much on vim, emacs, eclipse, vs code and it's all crap (the newer, the worse: all these useless gimmicks you won't use past grade school, aaarrr, while lacking power user features). Can I do better? This seems a good start.
JdeBP 1 hours ago [-]
One interesting thing is that even some of those 1000 lines could have been eliminated.
It duplicates the C library's cfmakeraw() function, for instance.
https://man.freebsd.org/cgi/man.cgi?query=cfmakeraw&sektion=...
Reading through this code is a veritable rite of passage. You learn how C works, how text editors work, how VT codes work, how syntax highlighting works, how find works, and how little code it really takes to make anything when you strip away almost all conveniences, edge cases, and error handling.
Although it does cheat a bit in an effort to better handle Unicode:
> unicode-width is used to determine the displayed width of Unicode characters. Unfortunately, there is no way around it: the unicode character width table is 230 lines long.
lifthrasiir 13 hours ago [-]
Personally, this is the reason I don't really buy the extreme size reduction; such projects generally have to sacrifice some essential features that demand a certain, irreducible amount of code.
vidarh 4 hours ago [-]
A lot of those features are only "essential" for a subset of possible users.
My own editor exists because I realised it was possible to write an editor smaller than my Emacs configuration. While my editor lacks all kinds of features that are "essential" for lots of other people, it doesn't lack any features essential for me.
So in terms of producing a perfect all-round editor that will work for everyone, sure, editors like Kilo will always be flawed.
Their value is in providing a learning experience, something that works for the subset who don't need those features, or a basis for people to customise something just right for their needs in a compact way. E.g. my own editor has quirks that are custom-tailored to my workflow, and even to my environment.
lifthrasiir 4 hours ago [-]
You are right, but then there is not much reason to make it public because it can't be very useful for general users. I have lots of code that was written only for myself and I don't intend to publish at all.
vidarh 2 hours ago [-]
There's plenty of reason to make it public as basis for others to make it their own, or to learn from.
I have lots of code I've published not because it's useful to a lot of people as-is, but because it might be helpful. And a lot of my projects are based on code written by others that was not "very useful for general users".
E.g. my editor started out based on Femto [1], a very minimalist example of how small an editor can be. It saved some time over starting from scratch, even though there's now practically nothing left of the original.
Similarly, my terminal relies on a Ruby rewrite of a minimalist truetype renderer that in itself would be of little value for most people, who should just use FreeType. But it was highly valuable to me - allowing me to get a working pure-Ruby TrueType renderer in a day.
Not "very useful for general users" isn't a very useful metric for whether something is worthwhile.
(While the current state of my editor isn't open, yet, largely for lack of time, various elements of it are, in the form of Ruby gems where I've extracted various functionality.)
[1] There are at least 3 editors named Femto, presumably inspired by being smaller than Nano, the way Nano followed Pico, but this is the one I started with: https://github.com/agorf/femto
The original in C: https://git.timshomepage.net/tutorials/kilo
Go: https://git.timshomepage.net/timw4mail/gilo
Rust: https://git.timshomepage.net/timw4mail/rs-kilo
And the more rusty tutorial version (Hecto): https://git.timshomepage.net/tutorials/hecto
PHP: https://git.timshomepage.net/timw4mail/php-kilo
...and Typescript: https://git.timshomepage.net/timw4mail/scroll
And these projects:
https://github.com/antirez/kilo/forks
Why are all the commenters so eager to get out of terminals?
go figure.
;)