My career in computing and my life are largely inseparable. Computers were, and still are, a tool for living. In hindsight, the real milestones weren’t specific platforms or programming languages – though both have played major roles – but the arrival of tools that fundamentally changed what I could accomplish. Said tools weren’t just accelerators. They reshaped how I worked and lived. What follows is a look back at several of those shifts, and a look ahead at where LLMs – often inaccurately labeled “AI” – belong in that lineage.

When I was young, I told my parents I didn’t need to learn to write because I would have a machine that would write for me. I was 4 or 5¹. Not too many years later, we had a family Apple ][+ that I fully monopolized, teaching myself programming, various hackery, and causing a bit of trouble.

The computer was revelatory for me. It enabled me to write down my thoughts and ideas, recorded with the assistance of this incredibly flexible tool that would oft aid in gathering said thoughts in the first place. Obviously, the early days were primitive. “? SYNTAX ERROR” was the grand sum total of feedback at the time.

The Apple ][+ was where I got my first taste of “hey, this might be a career”, doing a bit of data recovery here, rewriting a good chunk of the school’s library check-in/check-out system there. I also worked for a local family electronics hobbyist shop and wrote some Micro MUMPS on a CP/M microcomputer to manage the store’s inventory and track employee hours (no, I didn’t augment my own hours).

I was also right in my prediction to my parents. Sort of. Of course, I did learn to write (cursive, even). But I don’t think I have ever written out anything longer than a paragraph without typing and editing using a computer.

A few years after that, we had a family Macintosh Plus (again, I monopolized it). Instead of all keyboard all the time, you could now interact directly with the system by pointing at things! Graphics were possible! To be fair, all of these things were possible on the Apple ][+; the difference was that the Mac enforced them as a standard interaction model across the system. Any app that forced you to drop back to keyboard-only felt like an anachronism.

The Mac, though, broke the barrier of it being just me organizing my thoughts. Desktop publishing enabled me – or a team of people – to put together publications. I ended up helping my high school overhaul its journalism class with a Mac-based publishing system and took advantage of the LaserWriter on a regular basis (I also ended up designing many of the advertisements that ran in the paper for various local businesses).

Then came NeXT. I was in college by that time. One look at the NeXT announcement and it was immediately obvious that it was the future. Not only was it a fully GUI OS, but it was “large scale”. The focus was on being able to see a whole page of a document at once, and whatever was on screen was exactly what would come out of the printer. The fidelity in rendering and user interaction was unmatched (or promised to be – it took more than a year to actually ship). But the thing that really nailed it for me was the development tools. Whereas the Mac was just HARD to develop for, the NeXT was designed such that developing apps for it should be as easy and intuitive as using a word processor. And it was a fully network-enabled computer. When I worked at NeXT in the Pittsburgh office, we could just as easily print to a printer in the Tokyo office as to the printer on our own desk. “The network is the computer”.

Along the NeXT journey, there was the advent of the World Wide Web. We ran one of the top 10 most popular sites on the ‘net in the early days. Off of a NeXT machine connected to a T1 line. That was also about the time that my computing hobby became an actual career.

…. lots of history skipped. NeXT acquires Apple (or was it the other way around?). The iPhone (which I was lucky enough to contribute to). The Intel and 64-bit transitions at Apple. ….

And now we come to a few years ago. I had been toying with LLMs, various “AI” solutions, etc. off and on for quite some time, but it was all toys. The potential was obvious, but actual use was slow, rife with hallucinations (or just producing pure garbage), and a net tax on productivity.

That started to change about the time ChatGPT came online (with Google, Anthropic, and others quickly following). It was clear that there was something here. That with careful “prompting” – carefully written instructions with well-framed context – you could get a lot more signal out of the system for comparatively little effort. Sure, the image generators still put eight fingers on a person’s hand or blended faces into the backgrounds, but it was occasionally actually useful.

The potential was obvious. The question was “How long until this is actually useful more often than not? When will it be approachable by ‘mere mortals’³?”

And those answers seem to have turned into “Now” and “Now” sometime in the last year.

My computing-focused life is radically different today than it was a year ago.

It isn’t that I know more or discovered some way to make the work easier. In fact, the tasks I’m taking on now are far more complex than the ones I would have taken on in the past, simply because of time constraints. Instead, the LLMs have collapsed the cost of even attempting complex solutions.

A concrete example: my family has accumulated over a thousand recipes across generations. It’s a real treasure trove. And a complete mess! PDF scans, 20-year-old Pages documents, Word files, handwritten notes. For years, I wanted to organize it into a clean, navigable set of web pages for the family. But it was always a “someday” project: days of focused effort, lots of tedious and hard-to-automate work across hundreds of documents. Permanently relegated to the “maybe when I retire” list.

Enter the LLM. With Claude Code, I was able to outline the goal, describe the trove of documents (including their sources), and let it “figure out” what steps should be taken. It cleaned up the file hierarchy, made an inventory of what kinds of conversions were necessary, and then guided the implementation of the automation tools needed to convert everything to web pages. And it did so while preserving all the handwritten notes. Did it do a perfect job? Nope. There are transcription errors in the handwritten bits, for sure (the typed stuff is spot on). But, most importantly, this task that was likely to be put off forever is NOW DONE!
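To give a flavor of what that looked like in practice, here is a minimal sketch of the kind of inventory pass described above. This is not the script Claude Code actually produced – the recipes_raw directory, the extension-to-strategy table, and the inventory.json output are all hypothetical – but it shows the shape of the first step: classify every file by the kind of conversion it will need before converting anything.

```python
#!/usr/bin/env python3
"""Sketch of an inventory pass over a messy folder of recipe documents.

Walks a source tree, notes what kind of conversion each file will need,
prints a summary, and saves the full inventory for later conversion passes.
The directory name and the extension-to-strategy mapping are illustrative.
"""

import json
from collections import Counter
from pathlib import Path

# Hypothetical source tree; a real collection's layout will differ.
SOURCE_DIR = Path("recipes_raw")

# Assumed mapping from file type to conversion strategy.
STRATEGIES = {
    ".pdf": "ocr-or-text-extract",    # scans and exported PDFs
    ".pages": "export-from-pages",    # decades-old Pages documents
    ".doc": "convert-word",
    ".docx": "convert-word",
    ".jpg": "transcribe-handwriting",
    ".png": "transcribe-handwriting",
    ".txt": "already-text",
}


def build_inventory(root: Path) -> list[dict]:
    """Return one record per file: relative path, suffix, planned strategy."""
    records = []
    for path in sorted(root.rglob("*")):
        if not path.is_file():
            continue
        suffix = path.suffix.lower()
        records.append({
            "path": str(path.relative_to(root)),
            "suffix": suffix,
            "strategy": STRATEGIES.get(suffix, "needs-manual-review"),
        })
    return records


if __name__ == "__main__":
    inventory = build_inventory(SOURCE_DIR)

    # Summarize how much of each kind of work lies ahead.
    counts = Counter(record["strategy"] for record in inventory)
    for strategy, count in sorted(counts.items()):
        print(f"{strategy:24} {count:4d} files")

    # Persist the full inventory so later conversion passes can consume it.
    Path("inventory.json").write_text(json.dumps(inventory, indent=2))
```

Splitting the work this way is what makes the rest tractable: each conversion category (Pages exports, Word conversions, handwriting transcription) can then be automated and spot-checked independently.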

And that’s the key: using an LLM as a tool fundamentally changes what is worth doing. They say that “time is the most valuable resource”, and I would claim that proper use of an LLM is an incredible time-efficiency optimizer.

Really, the AI tools – and they are just tools, ones that can be used productively or destructively – are not just moderate enablers but, in many cases, 10x+ enablers. I’m able to organize complex projects far more effectively and to solve problems that I would never have been able to otherwise. Not because they were beyond my technical skills, but because I would never have been able to devote the days’ worth of effort to get done what I can now do with AI in a matter of hours.

That isn’t quite fair, though. In fact, I’m able to take on problems where I do lack the technical skills, by leaning on the LLM to provide me with a deep dive into whatever is necessary to achieve the task.

Where do these tools fit into the grand scheme of things? In terms of profundity of impact on how I get stuff done?

Likely second on the list, behind the computer itself. I briefly considered the ubiquity of search – most of the sum total of human knowledge available in seconds – as #2, but realized that ChatGPT’s research mode vastly outperforms generic searches in all ways (I still use direct searches all the time, but use ChatGPT for anything that requires more than a simple answer).

As with the widespread deployment of any incredibly powerful, general-purpose tool, there will be mass disruption. Some of that disruption is going to be – has already been – really stupid, largely avoidable, and harmful. People will suffer because of it.

With that said, I remain optimistic that these tools can be used to improve life for everyone.

Still, teaching sand to think may turn out to have been a terrible idea.


  1. I’m just the right age that I grew up with personal computers from pretty much the time they were first targeted at the consumer market through to the “supercomputer in your pocket” age (and whatever comes next). I embraced the tech from the beginning because it was both fascinating and quite clear that how the next generation of computing tech would unfold would be a surprise. And surprise it has been! If you’d asked me a couple of decades ago whether I would drive a car capable of driving me from A to B without intervention, I’d have been doubtful, but I had seen enough change to at least say “Well, maybe”². What a fun ride. It ain’t over yet.

  2. While at CMU in the late 80s, I do remember seeing ALVINN roaming about on occasion. It was a self-driving car. Very limited in speed and context, but it worked. IIRC, it might even have been pulled over by the cops once, but I don’t remember the details. Which is kind of a common theme. What’s old is new again. Oft, what is hailed as a modern breakthrough is really just a tool that only those with extreme skills and time on their hands previously had access to, refined to the point of being accessible to a much wider audience. The tool isn’t new. The tool being usable by “mere mortals” is new.

  3. “Mere mortals” refers to non-expert users—people without specialized technical knowledge. The phrase predates modern computing, but has long been used informally to distinguish expert-only systems from tools usable by everyone. Historically, the inflection point where a technology becomes approachable by “mere mortals” has mattered far more than raw capability.