2023-05-17
I built this blog from scratch using ChatGPT. I want to use that experience to explore what it taught me about using LLMs in my work and to forecast their potential impact.
First, some context around my skill level and the project. By trade, I’m a product manager without a deep technical background. I spend a lot of time around engineers, but I don’t code. About 10 years ago, I took a “learn to code” class which culminated in building a personal blog as the capstone assignment. I finished the project, but barely. From that point on, I settled in at the level of being comfortable scripting in Python, but uncomfortable doing anything beyond that.
About 3 weeks ago, I decided to use a Friday morning to see how far I could get in replacing my blog hosted on Squarespace with one I wrote myself using ChatGPT.
The project is a simple Python Flask app with minimal styling. It incorporates a blog, a few static pages, a CMS, and an authentication system. The backend is powered by a PostgreSQL database, which I’ve been told is overkill, but it was important to me to prove to myself that I could build an app with a database.
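For context on what "simple" means here, a minimal sketch of the shape of such an app looks something like this (illustrative, not my actual code; the model and route names are made up, though the real routes.py and config.py show up in the example at the end of this post):

```python
# Minimal Flask + PostgreSQL sketch: one model, one route listing blog posts.
from flask import Flask, render_template
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://user:pass@localhost:5432/blog'
db = SQLAlchemy(app)

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200), nullable=False)
    body = db.Column(db.Text, nullable=False)

@app.route('/')
def index():
    # Newest posts first
    posts = Post.query.order_by(Post.id.desc()).all()
    return render_template('index.html', posts=posts)
```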
If you had asked me how long this would take me to do on my own, I would’ve estimated 2-3 weeks of full-time, focused work. Doing it on nights and weekends, at least 2 months, maybe more. Using ChatGPT (GPT4), I had a working prototype by the end of the first hour, and ~25 total working hours later (spread across 3 weeks of morning sessions, so 30-45 minutes at a time), it was complete and deployed in production.
I’m not going to pretend that this is the most complicated piece of software ever written, but it is well outside my skill level without LLM assistance. In a way, focusing on the time spent to accomplish the task misses the point; it was fun. I was hesitant at first, but as the project took shape I grew confident. I began to expect that I would be able to solve problems in a domain that had been out of my reach.
My workflow
My workflow has evolved through the project, but at this point it looks like this: I’ve got a main tab open with a GPT4 chat focused on the project (the same chat every time); the next tab over is a GPT3.5 chat (usually a new chat each time).
The GPT4 chat is where the main action happens, and the GPT3.5 chat is where I ask the easy stuff that doesn’t require project context (e.g., remind me how to check what branch I’m in in the terminal using git). This helps me conserve GPT4 queries and is much faster overall. I tried to do more serious coding with GPT3.5, but the quality was so much lower that it wasn’t worth it.
At the beginning of the project I would remind the GPT4 chat of the overall project and my goals at the start of each session, but I’ve done less of this as time has gone on and haven’t seen a decline in results.
In the main GPT4 thread I run through this basic loop:
- Product scoping: I suggest the next feature for us to work on and outline what I think the requirements are, both what is in scope and what can safely be out of scope. I play an important role here in keeping the scope manageable. As an example, “Let’s start by organizing the directory structure; how should I do that?” is much better than “I want to build a blog, can you do that for me?” I’ve also had the best results when I ask ChatGPT what else it thinks we should consider in scope and then explicitly include or exclude it.
- Implementation path: I then ask ChatGPT to suggest a plan for implementing this feature. So far, I’ve had the most success when I don’t suggest an implementation path at the outset. My biggest mistakes have come when I had preconceived notions about how to implement something. ChatGPT and I usually discuss the pros and cons of different paths, and then we select one. Once we’ve chosen an implementation path, I copy and paste the most relevant files or a skeleton outline of the directory (see the sketch after this list) so that GPT has context for the coding.
- Coding: ChatGPT does almost all of the coding for me. We move step-by-step through the implementation plan. Occasionally I’ll catch ChatGPT drifting on a variable name and fix it, but it’s not uncommon for me to copy and paste entire files in and ask ChatGPT to work the new code into the file for me. The biggest benefit of using ChatGPT here is its knowledge of Python packages that I can leverage and of how to use those packages.
- Testing: Once the code is implemented, I test the app on my local machine, feeding any bugs I find back to ChatGPT. I copy and paste the error message and ask how we should fix it. This then kicks off its own mini-loop of implementation path discussion -> coding -> more testing. Once the app runs locally, I repeat the same loop in production. When everything works, I move on to the next feature.
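To make the "skeleton outline" idea concrete, the kind of outline I paste in looks something like this (reconstructed from the project files that appear in the example at the end of this post; the templates/ directory is an assumption):

```
jdillaxyz/
├── run.py
└── app/
    ├── __init__.py
    ├── routes.py
    ├── config.py
    ├── templates/
    └── static/
        └── img/
```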
Typically a small feature (e.g., moving to a secrets file, sketched below) can be built in ~30-45 minutes. Something more complex like an image uploader or a tags feature might take 1-3 hours.[0]
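Since the secrets file is a nice self-contained example: that change mostly amounts to moving hard-coded values into a .env file loaded with python-dotenv, the same pattern visible in the config.py excerpt at the end of this post. A minimal sketch, with illustrative variable names:

```python
# .env (kept out of git):
#   SECRET_KEY=change-me
#   LOCAL_DB_PASS=hunter2

import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

class Config:
    SECRET_KEY = os.environ.get('SECRET_KEY')
    LOCAL_DB_PASS = os.getenv('LOCAL_DB_PASS')
```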
I’ve included a sample feature at the end of this post as a real-life example of the above.
Pitfalls
As mentioned above, the largest problems came when I assumed a particular implementation path. One example is choosing a hosting provider. Because I’d worked with Firebase/Google Cloud before on a project with a friend, I suggested we use that as a backend. (That sound you hear in the background is experienced programmers laughing at me!)
This added a lot of complexity to my deployment. I got it working, but ultimately ended up switching to Render, which was simpler and cheaper. If I had gone through the loop above, I would’ve avoided this delay. My lesson is that ChatGPT isn’t going to push back on a choice if I don’t explicitly ask for pushback. It’s going to try to make it work.
The second biggest pitfall is managing ChatGPT’s context window. Even skimming the code, I noticed directories or files would get misnamed (e.g., image/ would turn into img/).[1] This usually isn’t a big deal, but a big reason I end up copying and pasting my files into ChatGPT is to catch exactly this type of mistake.[2] Worst case, I catch it running the app locally.
Finally, I had one issue in deployment where GPT became pretty distracted by the possibility that there was a bug in Render’s deployment engine. I was pretty sure this wasn’t the case, as the app had worked in the previous deploy, but every answer kept coming back to contacting Render customer support until I explicitly asked it to rule that possibility out. Once I did, we found and fixed the problem.
It’s worth reflecting for a moment on just how human these pitfalls are. Anyone who has ever been a manager has had a situation where the team doesn’t push back on a direction because they weren’t explicitly asked to; small changes in naming are akin to typos; and who among us has not fixated on the idea that a bug in someone else’s code is causing our problems?
Impacts
Economic impacts
The potential economic impacts of LLMs are a hot topic of conversation. I think it’s interesting to look at this project through that lens.
Previously, I was using Squarespace for hosting and website design, costing ~$168 per year. Now I’m using Render for hosting, costing ~$84 per year, plus a ChatGPT membership at $20 per month. So the LLM has reduced the time cost of customization and shifted spending toward hosting services and, of course, LLM access. Including the full cost of the ChatGPT membership for the month I spent building, I come out at a savings of $64 ($168 - $84 - $20).
At least at first glance, this fits for me:
- There’s a productivity benefit, both in terms of what I can accomplish and in terms of savings.
- The LLM provider takes a portion of the benefits.
- Services that provide customization (in this case, Squarespace) are being commoditized.
- Customizability at the most basic layer becomes even more important, because it’s the limiting factor (I’m no longer limited by my ability, but by what is possible in the programming language I’ve chosen).
- Services that manage physical infrastructure benefit, since they complement the customization gains, particularly if they are friendly to beginners like yours truly.
Reducing the cost of something means you get more of it. It just became a lot less expensive to create software tools. I would predict this means that we get a lot more of them in a lot more niche areas. Business problems that couldn’t support a custom solution before will absolutely get one.
Labor market impacts
Somewhere over the course of the last month I saw a paper that studied the impact of LLMs in a call center. That study, which of course I can’t find now, found that the LLMs had the greatest impact on the lowest-productivity workers: new workers and underperforming workers.
This definitely matches my experience. For about 6 months now, I’ve been using ChatGPT on things directly related to my expertise: product management. I’ll regularly feed in product requirement docs, strategy docs, and important emails. The feedback I get is useful, but marginal. It makes my work 5-10% better, not 50% better.
But programming is well outside my area of expertise. I’m not even sure I can characterize the impact of ChatGPT in percentage terms. It’s an order of magnitude difference.
The world I know well is software engineering teams. Coming out of this experience, it became clear to me that a lot of change is coming to this world.
At minimum, committing LLM-generated code is going to become part of the Product Manager and Designer workflow. I suspect that within 2-3 years, a PM at my level will be expected to come to the meeting with a strategy and a prototype product (maybe 2-3) running in production.[3]
Somewhat more speculatively, I’d assume that Product, Engineering, and Design are going to become more similar as disciplines, with the differences becoming ones of emphasis, sort of like a Product Manager vs. a Technical Product Manager today, with TPMs focused on more heavily technical codebases. So instead of teams with ~1 PM, ~5 Engineers, and ~1 Designer, you’ll have a mix of Product-, Design-, and Engineering-focused creators.
Finally, I want to endorse this tweet from Amjad Masad:
At the time I first saw the tweet, I sort of dismissed it, but after this experience, I think he’s right. The little bit of technical skill I had (e.g., understanding how to open the terminal, knowing how to write a little Python) has opened up a whole new world of potential for me.
Where does this ultimately take us? I suspect that software development, along with a lot of other things, is going to look a lot more like the creator economy: low barriers to entry, high returns for the biggest stars, and a good living for those who can find their niche.
I suspect that this extends beyond software as well. There has never been a better time to be curious and self taught.
Feature requests
I wouldn’t be a PM if I didn’t come out of this experience with a list of feature requests.
- Threads. ChatGPT absolutely needs the ability for me to ask follow-up questions about one of the steps in the context of that step. I get my 5 steps to implement a feature and then I’m constantly scrolling back up to figure out what step 3 was again.
- I need some way to give ChatGPT a view of my project’s directory structure, particularly the pieces that are unique to the project and aren’t inherited from a package. I spend a lot of time reminding ChatGPT of what the project looks like (maybe an easy way to do this exists! In the meantime, a workaround is sketched after this list).
- It would be awesome to integrate my terminal with the LLM and be able to reference what happens within it. It would save me a lot of copying and pasting.
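For the second request, a small script can at least generate the outline to paste in. A rough sketch (the ignore list is illustrative):

```python
import os

IGNORE = {'.git', '__pycache__', 'venv'}  # noise ChatGPT doesn't need to see

def print_tree(root, indent=''):
    """Print an indented outline of a directory, skipping ignored folders."""
    for name in sorted(os.listdir(root)):
        if name in IGNORE:
            continue
        path = os.path.join(root, name)
        if os.path.isdir(path):
            print(f'{indent}{name}/')
            print_tree(path, indent + '  ')
        else:
            print(f'{indent}{name}')

print_tree('.')
```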
[0]: When looking at these time estimates, it’s worth keeping in mind that these are features I would be very unlikely to be able to build on my own.
[1]: GPT3.5 has way more problems with this than GPT4.
[2]: I find it also helps with managing the context window.
[3]: I see some echoes here of analytical tools. Some PMs I’ve known a generation ahead of me have told me that when they got started, knowing SQL wasn’t expected, and that over the course of their careers it became part of the PM job.
Example feature: Image storage and hosting
This is potentially not the best example since not much code was written, but it’s a good example of a problem I didn’t know how to solve myself. Total time, including redeploys and testing: about 1 hour.
Me: hello! I would like to work on how images are stored and used on my blog.
right now, I have a /images route where I upload images. this ends up in an app/static/img directory which I reference in my markdown to get the images to show up. this works really well.

however, I’ve noticed a problem: every time I redeploy the site, all of my images get deleted. the reason for this is that Render rebuilds my app from github and none of these images are on github; they’re only in the production environment.
What are my options to solve this problem? Ideally it will stay pretty simple and low cost.
ChatGPT: You have several options to tackle this issue. Here are a few:
- Cloud Storage: One of the most common approaches is to store your images on a cloud storage service, such as Amazon S3, Google Cloud Storage, or Azure Blob Storage. These services are designed for storing and retrieving any amount of data at any time, and they provide high durability and availability. When you upload an image, instead of saving it to the local filesystem, you would upload it to the cloud storage. In your posts, you would then reference the URL of the image on the cloud storage. This approach is scalable and doesn't depend on your application's deployment.
- Database Storage: Another approach is to store the images in your database. However, this can quickly use up your database storage space and may not be as performant as other solutions, especially for large images.
- Use a Content Delivery Network (CDN): Services like Cloudinary or Imgix can store your images and also offer additional features like on-the-fly image resizing and optimization. They provide SDKs to upload images directly from your server and give back a URL that can be saved in your database and used in your application.
- Volume Storage (Render Specific): If you're using a platform like Render, you could consider using their SSD-based persistent disk option. This allows you to have a separate disk that isn't wiped out when your app is redeployed. You would have to configure your app to store images on this disk.
Choosing the right option depends on your specific use case, the volume of images you expect to handle, your budget, and the complexity you're willing to introduce into your application. The cloud storage option is often a good compromise between cost, complexity, and performance.
Me: Which of these options will be the lowest cost?
ChatGPT: The cost of each option can vary significantly based on the amount of storage you need, the amount of data transferred, and any additional features you use. Here's a general comparison:
- Cloud Storage (Amazon S3, Google Cloud Storage, Azure Blob Storage): These services typically charge based on the amount of data stored per month, the amount of data transferred out of their network, and the number of requests made (e.g., upload, download). For a small to moderate amount of usage, these costs can be quite low (a few dollars per month).
- Database Storage: If you're already using a database, it might not cost you any more to store a small number of images in it. However, for larger amounts of data, this could require upgrading to a more expensive database plan. Also, databases are typically not optimized for storing large binary files, which can lead to performance issues.
- Content Delivery Network (CDN) services (Cloudinary, Imgix): These services often have a free tier for a limited amount of storage and data transfer, but costs can increase quickly with heavier usage. They also offer additional features, like image resizing and optimization, which can justify the cost.
- Volume Storage (Render SSD-based persistent disk): Render charges $0.10 per GB per month for SSD-backed persistent disks. If you're storing a large amount of images, this could be a more cost-effective option compared to others.
In summary, for a small blog with moderate image usage, a cloud storage service like Amazon S3 or Google Cloud Storage is likely to be the most cost-effective option. However, if you expect your blog to grow and image usage to increase significantly over time, you might want to consider a CDN service or Render's SSD-based persistent disk, which offers better scalability and performance.
Me: Let’s rule out database storage. What would the simplest option be, given that I’m already using Render?
ChatGPT: If you're already using Render, the simplest option would likely be to use Render's SSD-based persistent disk storage. It's directly integrated with the rest of Render's services, so you wouldn't need to set up any additional accounts or learn to use a new platform. Here's a basic outline of the steps you'd need to take:
- Create a new disk in your Render dashboard. You can specify the size of the disk (in GB) when you create it.
- Attach the disk to your service. You'll need to specify a mount path for the disk, which is the directory in your service where the contents of the disk will be accessible. For example, you might choose to mount the disk at `/app/static/img` to match your current image directory.
- Update your application to save uploaded images to the correct directory. If you're already saving images to `app/static/img`, and you chose to mount the disk at `/app/static/img`, you might not need to make any changes.
- Update your image serving logic to serve images from the correct directory. Again, if you're already serving images from `app/static/img` and you mounted the disk at `/app/static/img`, you might not need to make any changes.
With this setup, uploaded images would be saved to the persistent disk, and they would survive redeployments of your service. Note that the disk is tied to the specific service it's attached to; if you delete the service, the disk and all of its contents will be deleted as well.
Me: okay, I’ve created a new disk on render mounted at `app/static/img`.
below is my image uploading logic from my routes.py file and my config.py file. do you see any changes I need to make?
image logic from routes.py:
```python
def images():
    form = UploadImageForm()
    if request.method == 'POST':
        if 'image' not in request.files:
            flash('No file part', 'danger')
            return redirect(request.url)
        file = request.files['image']
        if file.filename == '':
            flash('No selected file', 'danger')
            return redirect(request.url)
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            img = Image.open(file)
            img.save(os.path.join(app.config['UPLOADED_IMAGES_DEST'], filename))
            flash('Image uploaded successfully', 'success')
            return redirect(url_for('images'))
    # Get a list of files in the directory
    image_path = app.config['UPLOADED_IMAGES_DEST']
    all_files = os.listdir(image_path)
    # Filter out any directories and create image info dictionaries
    image_files = [f for f in all_files if os.path.isfile(os.path.join(image_path, f)) and f != '.gitkeep']
    images = [{'filename': f, 'size': os.stat(os.path.join(image_path, f)).st_size} for f in image_files]
    return render_template('images.html', images=images, form=form)
```
here is the config.py file:
```python
import os
from dotenv import load_dotenv

load_dotenv()

BASE_DIR = os.path.abspath(os.path.dirname(__file__))

class Config:
    SECRET_KEY = os.environ.get('SECRET_KEY')
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    LOCAL_DB_PASS = os.getenv('LOCAL_DB_PASS')
    PROD_DB_PASS = os.getenv('PROD_DB_PASS')

class LocalConfig(Config):
    DB_NAME = "db"
    DB_USER = "user"
    DB_PASS = Config.LOCAL_DB_PASS
    DB_PORT = 5432
    DB_HOST = "localhost"
    SQLALCHEMY_DATABASE_URI = f'postgresql+psycopg2://{DB_USER}:{DB_PASS}@{DB_HOST}:{DB_PORT}/jdillaxyz_local'
    UPLOADED_IMAGES_DEST = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'static', 'img')
    ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}

class ProductionConfig(Config):
    DB_NAME = "db"
    DB_USER = "user"
    DB_PASS = Config.PROD_DB_PASS
    DB_PORT = 5432
    DB_HOST = "sample-url.render.com"
    SQLALCHEMY_DATABASE_URI = f'postgresql+psycopg2://{DB_USER}:{DB_PASS}@{DB_HOST}:{DB_PORT}/{DB_NAME}'
    UPLOADED_IMAGES_DEST = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'static', 'img')
    ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}
```
ChatGPT: The image uploading logic and your config file look correct, assuming the mount path of the disk you created on Render is `app/static/img`. This path should be the same as `UPLOADED_IMAGES_DEST` in your `ProductionConfig` class.
Your application is saving the uploaded images in the `app/static/img` directory and also serving images from the same directory. So, if the persistent disk on Render is correctly mounted at `app/static/img`, there should be no issues and your images should persist across deployments.
Just remember to commit and push any code changes to your GitHub repository so that the latest version of your application is deployed on Render.
If you still face any issues, then the problem might be related to some other part of your code or it could be a deployment issue on Render. In that case, you might want to check the Render logs for more details on the error.
Me: hmm, okay when I test in production images aren’t saving. I’m wondering if this is related to my file structure.
here’s what my directory structure looks like:
```
jdillaxyz/
└── app/
    └── static/
        └── img/
```

in render, I’ve made the mount path `/app/static/img`. is that correct?
ChatGPT: If the root directory of your project (where your application starts running from) is `jdillaxyz`, then your mount path in Render should be `app/static/img`, without the leading slash.
The leading slash in `/app/static/img` would point to a directory at the root of your file system, which is not where your application is.
Change your mount path in Render to `app/static/img`, and then check if your images are being saved correctly. If they still aren’t, it may be worth checking the application logs to see if any errors are being thrown when you attempt to save an image.
Me: When I try to use `app/static/img` I get an error message from Render: “Must be an absolute, non-root path.” What should I do instead?
ChatGPT: I apologize for the confusion, you’re correct. The mount path in Render should be an absolute path, so it should start with a slash.
The path you mentioned (`/app/static/img`) should be correct if the app directory is at the root level of your project.

However, this will mount the disk to the absolute path `/app/static/img` within the container that Render creates. If your application starts in a directory other than the root directory of the container, you will need to adjust the path accordingly.

If your application starts in the `/jdillaxyz` directory within the container, the mount path would need to be `/jdillaxyz/app/static/img`. The exact mount path will depend on the structure of the Docker container that Render is creating.
One way to verify this is by checking the logs of the running service in Render to see what the current working directory is when your application starts. You could add a log line in your application to print out the current working directory. In Python, you can do this with:
```python
import os
print(os.getcwd())
```
This should help you determine the correct absolute path for the mounted disk.
Me: I’m pretty certain it starts from the root directory because this is where my run.py file is. I’ll try `/jdillaxyz/app/static/img` and report back.
ChatGPT: Sounds like a good plan! Sometimes, the exact structure can vary depending on how the Docker container is configured, even if the run.py file is in the root directory of your project. If `/jdillaxyz/app/static/img` doesn’t work, you might want to try adding that log line to print out the current working directory, just to be sure.
Feel free to update here if you encounter any issues or if you have further questions. I’m here to help!
Me: hmm, okay, that didn’t work. where should I put the print statement? in my `__init__.py` file?
ChatGPT: Yes, you can put the print statement in your `__init__.py` file. Specifically, you can add it right at the beginning of the file, before the imports.
Here’s an example:
```python
import os
print("Current working directory: ", os.getcwd())

# your imports start here
from flask import Flask
from .config import LocalConfig, ProductionConfig
...
```
This will print out the current working directory when your application starts. This should help you understand the root directory from which your application is running in the production environment, and thus help you set up the correct path for your disk mount.
Remember to check the application logs in Render to see the printed output.
After you’ve confirmed the working directory, don’t forget to remove the print statement.
Me: wouldn’t this always be the directory the `__init__.py` file is in?
ChatGPT: The `os.getcwd()` function returns the current working directory, which is the directory from which your Python script is executed, not necessarily the directory where the `__init__.py` file is located.
In a local development environment, this will often be the directory where you run your python or flask run command. However, in a production environment like Render, it may be set differently according to the platform’s configuration.
By printing out the current working directory in the production environment, you can determine the correct absolute path to use for your disk mount.
Me: okay, the print statement gave me: `/opt/render/project/src`
ChatGPT: That means the root directory for your application in Render's environment is `/opt/render/project/src`. Now, considering your project structure, where `app/static/img` is a directory within your project, you should be able to use `/opt/render/project/src/app/static/img` as the mount path for your disk in Render's settings.
After you make this change, adjust your `UPLOADED_IMAGES_DEST` in `ProductionConfig` accordingly:

```python
UPLOADED_IMAGES_DEST = os.path.join('/opt/render/project/src/app', 'static', 'img')
```
Once these changes are made, redeploy your application and try uploading an image again. It should now be stored persistently on the disk you attached.
From there everything worked as expected. Feature done!