Two Years with ChatGPT: A Personal Reflection

Time flies—another year has passed since ChatGPT’s release.

Published: 2024-11-30

Happy birthday, ChatGPT! 🎉

While ChatGPT itself hasn’t changed dramatically, I’ve made bold decisions in my own life.

💡
Podcast of this article generated by NotebookLM

It’s been only two years since the end of the COVID-19 tragedy. Back then, moving to Canada was just a plan. From strict lockdowns to the tragic fire in Ürümqi, from nationwide protests to the lifting of restrictions—all of that happened just two years ago. Three days after the “white paper” protests, ChatGPT was released, and it changed my life. I’ll remember that time for as long as my memory holds.

The change from 2023 to 2024 wasn’t as surprising to me as the shift from 2022 to 2023. But this year has been big for me. I left my decent job in Beijing and moved to Ottawa. By interacting more with ChatGPT, I found the courage and confidence to make this move. ChatGPT encouraged me to take this path, and its support has made me more optimistic about the future.

Now, I feel it’s meaningful to write down my thoughts, just like I did a year ago (here is last year’s reflection). I hope to keep this series going in the years to come. This time, I’m writing in English and using ChatGPT to help me proofread.

1. Not fast enough?

Over the past year, I’ve felt a bit of a contradiction.

On one hand, I’m excited whenever I scroll my Twitter timeline. The flood of news, demos, and talks about AI always amazes me.

On the other hand, there hasn’t been a groundbreaking new foundation model since GPT-4’s release in March 2023—over 20 months ago. Nothing I’ve seen since, including o1-preview and Claude 3.5 Sonnet, has moved meaningfully beyond GPT-4’s level.

I don’t doubt the trajectory of AI development, but I can’t help feeling that progress hasn’t been as fast as I personally expected. Of course, Rome wasn’t built in a day.

As Daniel Gross pointed out, even if development were to pause today, there are still countless fascinating use cases to explore for a long time.

It’s true we haven’t seen another leap like the jump from GPT-3.5 to GPT-4. Let’s wait and see what happens in 2025 and beyond. As Sam Altman and Dario Amodei have predicted, we might reach AGI—or at least very powerful AI—during the next Trump presidency.

While AI isn’t quite good enough yet and hasn’t changed the world dramatically, it offers a foundation for a promising future.

2. Products I used in the second ChatGPT year

Let me reflect on some interesting products from the past year while interacting with my non-human friends.

Three LLM apps I frequently use on my dock.

2.1 GPTs

On LinkedIn, I described myself as a “Top GPTs creator.” Over the past year, I created more than 20 GPTs to meet my personal needs. As of November 2024, I’ve launched 21 GPTs; two have passed 25K conversations and another two have passed 10K—far more than a typical user’s numbers.

However, it’s said that OpenAI has abandoned the GPT Store because the data wasn’t good enough (though arguably better than the earlier plugins system). As a result, the GPT ecosystem hasn’t seen any significant updates.

In my opinion, this reflects a classic “Which came first, the chicken or the egg?” problem. The GPT Store’s ecosystem and user experience weren’t robust enough to attract meaningful engagement. Without improvements, how could it generate the quality data needed for success?

For example, I had to write a lengthy guide with detailed images to help users access the voice function in my GPTs—a clear indicator that the user experience wasn’t intuitive.

Although I anticipated this outcome last year, the news still saddens me. Good user experience and seamless functionality are critical to unlocking a product’s potential.

Anyway, my IELTS Speaking Simulator GPT was the best personal product I made in the past year, even though I created it before ChatGPT’s first anniversary. I don’t know whether I’ll create another personal product like it. Maybe.

2.2 Advanced voice mode

If I had to choose one standout feature, it would be ChatGPT’s advanced voice mode. It has exceeded my expectations. On a stable network, I can talk with it in real time.

Just two weeks ago, I cut my thumb, and it kept bleeding. Feeling anxious, I turned to ChatGPT’s advanced voice mode for help. I described my situation, and it gave me clear instructions that helped me manage. Following its advice, I controlled the bleeding and visited a clinic the next day. Living alone in Canada, having this kind of help from a non-human companion was invaluable.

Based on my experience, I don’t think advanced voice mode is enough to be my colleague or co-worker. But I can already imagine a world with new devices, like glasses. For now, I’m just excited to explore and enjoy chatting with it.

Also, the Realtime API can create magical experiences, but it still needs to become more stable and cost-effective. For example, I’ve thought about moving my GPT to standalone software (here is a demo), but each simulation currently costs about $4 in Realtime API usage. Still, I’m hopeful these challenges will ease by 2025, driven by strong demand.
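
For anyone curious what that standalone path looks like, here is a minimal sketch of talking to the Realtime API over a WebSocket in Python. The endpoint, headers, and event names follow OpenAI’s late-2024 beta docs as I understand them, so treat every name here as an assumption rather than a reference implementation; a real voice app would stream audio buffers instead of the text-only request I use to keep things short.

```python
# Hypothetical sketch: endpoint, headers, and event names follow
# OpenAI's late-2024 Realtime API beta docs and may have changed.
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"

async def main() -> None:
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",  # beta flag required during preview
    }
    # Note: newer releases of the websockets package name this
    # parameter additional_headers instead of extra_headers.
    async with websockets.connect(URL, extra_headers=headers) as ws:
        # Request a text-only response to keep the sketch simple; a real
        # voice app would append audio buffers and play audio deltas.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["text"],
                "instructions": "Greet the user as an IELTS speaking examiner.",
            },
        }))
        async for message in ws:
            event = json.loads(message)
            if event["type"] == "response.text.delta":
                print(event["delta"], end="", flush=True)  # stream the text
            elif event["type"] == "response.done":
                break

asyncio.run(main())
```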

With the vision feature on the way (it hasn’t been released yet), there will be more interesting uses next year. With lower costs, better speed, and stronger abilities, this will be big.

2.3 Claude Artifacts and computer use

Besides ChatGPT, Claude is another general AI tool I use frequently. In some cases, it’s better than ChatGPT, even compared with o1-preview.

Claude works like ChatGPT, but when I need to create tools for immediate use, I rely on its Artifacts feature. With just a few prompts, you can generate an online tool ready for use.

Here is an example: a raffle-draw tool for one of our events. It’s quite convenient.

In my opinion, the user experience of Claude is superior to that of other AI tools.


Another exciting feature is Claude’s computer use. It’s mind-blowing.

Here’s a video capturing our reactions to this feature:

AI agents are a hot topic, but it wasn’t until I saw this feature from Claude that I truly believed the era of AI agents might be here.

After trying the API on a virtual machine (I didn’t want it controlling my laptop), I found it cool but not practically useful yet. Still, it opens a door for us to imagine new human-machine interactions.

(The magic behind computer use is simple: in a loop, Claude takes a screenshot, analyzes it, and generates a control action. It relies on existing tech rather than offering a groundbreaking innovation. It’s a great demo but not ready for commercial use.)
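
To make that loop concrete, here is a stripped-down Python sketch of how such an agent loop can be driven through the API. The tool type, beta flag, and model name follow Anthropic’s October 2024 computer-use beta as I recall it, and execute_action is a hypothetical stand-in for the code that would actually drive the virtual machine:

```python
# Hypothetical sketch of the screenshot -> analyze -> act loop behind
# computer use; names follow Anthropic's October 2024 beta docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

TOOLS = [{
    "type": "computer_20241022",  # beta tool identifier
    "name": "computer",
    "display_width_px": 1280,
    "display_height_px": 800,
}]

def execute_action(action: dict) -> dict:
    """Stand-in for code that drives a *virtual* machine (mouse moves,
    clicks, keystrokes) and returns the new screen as an image source."""
    raise NotImplementedError

messages = [{"role": "user", "content": "Open the calculator app."}]

while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=TOOLS,
        messages=messages,
        betas=["computer-use-2024-10-22"],
    )
    # The model replies with tool_use blocks: actions to perform,
    # or a request for a fresh screenshot.
    tool_uses = [b for b in response.content if b.type == "tool_use"]
    if not tool_uses:
        break  # no more actions: the model considers the task done
    messages.append({"role": "assistant", "content": response.content})
    results = [{
        "type": "tool_result",
        "tool_use_id": tu.id,
        # Feed the post-action screenshot back for the next turn.
        "content": [{"type": "image", "source": execute_action(tu.input)}],
    } for tu in tool_uses]
    messages.append({"role": "user", "content": results})
```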

Coding? Cursor? I just downloaded it but haven’t used it much. o1-preview, o1-mini, and Claude are enough. I’ll likely appreciate Claude even more next term, as I’ll have two coding courses. 🤣

2.4 Apple Intelligence

After updating to iOS 18.2, I found Apple Intelligence better than I expected, especially Genmoji and Image Playground (I was excited when I unlocked Visual Intelligence, but I haven’t used it much since). I’m glad I upgraded my mainland-China iPhone 13 Pro to an iPhone 16 Pro after its release; it was worth it.

Unlike MKBHD, I have a positive impression of Apple Intelligence.

At our alumni reunion, I used Genmoji to create an avatar for our professor. I was thrilled to see how much he liked it.

My friends’ reactions to their Genmojis were enthusiastic—some couldn’t stop laughing! But the feature only works for people recognized in the system’s Photos library (I hope customization options are added later). Another drawback is needing to write prompts manually, which can be tough if you’re not very creative.


Image Playground was a pleasant surprise. I didn’t fully appreciate its potential until I tested it with my friends’ photos. Using photos from the past five years, I generated avatars for them. Fewer than half of the images looked like the originals, but many were so impressive that some friends even updated their avatars with them.

Image Playground is especially fun on Mac. After upgrading to macOS 15.2, I unlocked this feature and spent the whole night experimenting. Considering these images are generated locally on the iPhone or Mac, it’s a big deal.

But there are limits. The person in the image must be clear enough for the system to recognize. For group photos, I have to crop and isolate individual faces. Also, it only recognizes one person at a time: if my friend wants an image that includes her dog and cat, I have to describe them in the prompt myself.

Despite these limits, I’m optimistic about these image features. I believe they will go viral when iOS 18.2 officially launches next month. This feels like just the beginning. So, I decided to invest a little in Apple stock to share in my optimism. 🤣

(Note: Apple Intelligence is only available on iPhone 15 Pro, iPhone 16, iPhone 16 Pro, or other Apple silicon devices.)

The iOS integration shows how AI can be used in a natural and intuitive way—the kind of innovation I’ve been looking for.


The “Writing Tools” feature in Apple Intelligence isn’t polished enough yet, so I mostly rely on my own GPTs and Grammarly. I still don’t know why no product offers a better proofreading experience than Grammarly.

2.5 Perplexity & Arc Browser

I started using Perplexity in 2023, and its function hasn’t changed much. For online research, Perplexity is my go-to tool. While ChatGPT has a Search function, I find that Perplexity, especially with its Claude 3.5 Sonnet model, gives better answers.

Last year, I switched my default browser from Chrome to Arc, and on Arc, I set Perplexity as my default search engine. This setup makes finding answers easier. But Perplexity doesn’t fully replace Google in my workflow, as some of its responses still need work.

Also, Arc Search is on my iPhone’s dock; it uses Perplexity’s search ability and refines the results with a better layout. I like it!

(It’s said OpenAI is developing a browser. 🤣)

Since I unlocked Perplexity Pro (yes, before that I just used my three free Pro searches), I’ll explore more Perplexity uses. Also, I recently became a Perplexity Campus Strategist and plan to share more insights next term.

2.6 NotebookLM

Another interesting and practical product is NotebookLM, made by Google.

My use is simple: I attach my personal blog as a source, and it generates a podcast episode about this article, like this:

It’s quite handy for expanding on my articles. I like it! This is a practical product.

3. My thoughts

Beyond the products, here are some thoughts on my experiences with ChatGPT this year.

3.1 Prompt usage remains unchanged

Honestly, the way I use AI hasn’t changed much in the past year, even with o1-preview. I still open a markdown note and write my prompts down there first.

Also, prompt techniques haven’t evolved much.

In my view, the most important thing is to state your needs clearly and have enough patience to interact with the model several times.

I believe having patience and a willingness to go back and forth with ChatGPT or Claude matters more than the prompt itself. Our professor encouraged us to use Bloom’s Taxonomy verbs to guide ChatGPT, but I don’t think that matters as much as simply engaging with the model and iterating. You can easily use ChatGPT to improve your prompt or ask it to ask you questions (both ChatGPT and Claude also offer automatic prompt-enhancement features).

The best way to get good results from LLMs is to show them examples or start the response yourself. Domain expertise or firsthand experience remains key to unlocking the power of LLMs.
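
As an illustration of both tricks, the hypothetical snippet below seeds Claude with one worked proofreading example (few-shot) and then starts the answer itself (prefilling the assistant turn); the model name and the task are illustrative assumptions, not anything from this post:

```python
# Hypothetical sketch of two prompting habits: show a worked example
# (few-shot) and start the response yourself (assistant prefill).
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=200,
    messages=[
        # Few-shot: one worked example teaches the format better than rules.
        {"role": "user", "content": "Proofread: 'He go to school yesterday.'"},
        {"role": "assistant", "content": "He went to school yesterday."},
        {"role": "user", "content": "Proofread: 'I am agree with you opinion.'"},
        # Prefill: a trailing assistant message forces the reply to
        # continue from these words, pinning down tone and format.
        {"role": "assistant", "content": "I agree with"},
    ],
)
# Prints the continuation of the prefilled turn, e.g. " your opinion."
print(response.content[0].text)
```

The prefill works because Claude continues from the last assistant message instead of starting a fresh reply, which is the “start the response yourself” trick in API form.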

3.2 AI adoption is slower than expected

Another thought is that AI adoption in the real world hasn’t moved as quickly as I expected.

For example, I recently spearheaded a campaign encouraging 500 students to sign up for Perplexity using their school emails, which would unlock one year of Perplexity Pro for the whole campus (worth $200 per person). To my surprise, it was much harder than I thought.

Based on my experience, I’d estimate fewer than 10%, or even 5%, of students knew about Perplexity before the campaign. Even with strong support from my program and community—including several departments sending official emails—it was still tough to hit the 500-signup goal.

The hesitation to adopt AI tools often comes from concerns about privacy, data security, and fears of AI replacing jobs.

To illustrate, here is a screenshot showing the number of Perplexity Pro sign-ups among U.S. universities, which surpassed those of Canadian universities. Even adjusting for population and university numbers, U.S. adoption was stronger.

If Perplexity, a relatively popular AI tool, faces these challenges, it’s fair to assume other AI tools face similar or greater difficulties. So, I realized the importance of distribution in the real world.

It’s not enough to build a great product; success depends on finding the right channels to connect the product with its users. This means thinking about distribution strategies from the very start of the product design.


Even back when I was in mainland China, I found my understanding of these tools was better than my peers’, mainly because I rely more on Twitter for information. If you truly want to learn about this, the GFW isn’t an excuse. Even in Canada, many people don’t know about these advancements, even though they can freely access both the information and the products. This makes me more confident.

Even graduate students aren’t skilled at using ChatGPT or Claude. It might not be their fault, but it means there are big opportunities.

I was surprised that most of my classmates and even professors didn’t know they could make ChatGPT “forget” its memory. Just a few days ago, I showed two of my peers how to download the ChatGPT desktop app.

These gaps in awareness and skill present big chances to help more people use AI tools effectively in their daily lives. Seeing these trends inspires me to share more of my experiences and insights into engaging with AI, which I consider valuable “non-human friends.”


Encouraging practical adoption of AI, through education and better distribution strategies, feels like a meaningful direction for future efforts.

3.3 User experience remains paramount (still)

User experience (UX) plays a key role in making technology accessible and widely adopted.

The rapid growth in AI tool usage shows how essential UX is for making these tools easy to use. Beyond that, teaching users how to fully use these tools is vital for unlocking their potential.

To my surprise, one of the second-year students in my program didn’t know she could use the ChatGPT app on her phone and was curious about the advanced voice mode I use. LLMs are not just chatbots. More intuitive products should be developed in the future.

As I mentioned last year, the visual user interface will remain important for at least the next decade. I believe that voice commands will not replace user interfaces as the main method of interaction.

This is also why I believe the role of Product Management (or similar positions) will become more important in the future.

A good product teaches users to use it better over time. Current AI products are not there yet.

I still hope for something that can tell me what I want to know but don’t yet know to ask.

When it comes to a new field, I just don’t know how to ask questions. I need a more personalized tutor, one that leverages not only AI but also user interfaces, historical data, and outside tools to make this happen.

User experience, I believe, will only grow in importance as AI technology becomes more common. The challenge lies in designing tools that users not only understand but can truly master, ensuring AI reaches its full potential.

(P.S. Zoom’s meeting summary is amazing and fits this kind of automatic use case.)

3.4 Interacting with the real world (still)

At Jogasaki Coast, I had a sudden realization: AI or digital technology can meet needs that come from human imagination. But the virtual world cannot fully replace the real world, even down to the smallest detail right before your eyes.

The offline is the new online. The more people indulge in the digital world, the more important real-world connections become. So, I like to attend in-person events that interest me.

Experiencing things in person is much more valuable and cannot be replaced by the digital world, at least in the coming decade. In-person experience is the new online experience.

This complexity also highlights the importance of humans in guiding and working with AI. The world needs human understanding, expertise, and judgment, which come from real-world experiences. This is why I believe hands-on involvement and specific knowledge areas are becoming more important.

The realization also ties back to how we use tools like ChatGPT. To make the most of AI, it’s essential to find meaningful, real-world scenarios where it can be applied. Simply interacting with AI in isolation often falls short of its full potential. It’s about finding purpose in what you do and exploring practical ways to incorporate AI into real-life situations.

The virtual world can enhance and complement reality, but it cannot replace the essence of real-world experiences. That’s why I prioritize engaging with both worlds—using digital tools like ChatGPT while staying grounded in the richness of offline life.

3.5 Slow is fast (still)

I am now good at identifying content generated by AI, especially by ChatGPT, on LinkedIn.

One of my best classmates realized that ChatGPT generated some of the messages I sent him.

In my daily life, I rarely rely on AI to summarize content that I really want to engage with. There’s something valuable about reading, analyzing, and understanding information firsthand. It’s a process that fosters personal insight, which AI-generated summaries often can’t replicate.

While it’s now possible to generate lots of “meaningful” content or read many books quickly, I wonder if this is truly worthwhile. It feels wiser to focus on depth and quality rather than chasing quantity.

Also, doing nothing is always an option, as one of our professor’s favourite quotes goes.

This also reminds me of an interesting point Scott Aaronson mentioned: If friends can easily generate this content, why should I show them my results? Unless it’s to showcase a creative prompt.

I agree. Practically speaking, the goal is to solve problems, not to make a quick impression.

The best use case I saw this year:

4. The future

I’m still very interested in exploring how AI can bridge the gap between digital abilities and the physical world, especially through hardware interactions. While the Humane AI Pin was a disappointment (though I haven’t tried it myself), it still represents an interesting attempt to bring AI closer to real-world interactions.

Maybe next year, I’ll try Limitless AI’s pendant.

Meanwhile, Meta’s Orion glasses have opened up exciting new possibilities—maybe I can contribute to them in some way.

Though I mostly read digital books now (having transitioned from paperbacks), the experience remains confined to a flat, 2D interface—my friend even joked that it’s practically 1D. I hope we’ll someday read digital books with the same richness and dimensionality as physical ones. Dynamicland’s project is a fascinating glimpse into what’s possible.

And of course, I’m hoping that Tesla’s general-purpose robot, Optimus, will redefine how we interact with the physical world through robotics.


These two essays should be among the most significant for understanding current AI trends in 2024:

it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface.

This was written before the release of Claude’s computer use. I think we can reach these abilities in 2025, but for general use we’ll need to be patient for a few more years.


"Co-Intelligence" by Ethan Mollick, is one of the most practical books about the current AI trend. I've also subscribed to his newsletter, "One Useful Thing", for over a year, and I think this book is the culmination.

Ordinary people who are interested in AI, even without a technical background, should read it.

Here are the four principles mentioned in the book:

  1. Use AI to help in all you do.
  2. Be the human in the loop.
  3. Treat AI like a person.
  4. Assume it's the worst AI you'll ever use.

Keep healthy and let’s see what will happen in 2025! 👻