Author: christian@clickfoundry.co

  • Trying to be better at cold outreach, I wrote a program that put me out of a job


    It’s probably just me. I hate cold-calling. Cold-emailing. It’s vulnerable and uncomfortable.

    I was talking to a colleague about outreach. And they made a point I hadn’t considered. Up until that point, I was scraping data about the company or the website to personalize the outreach. Look at their website, put together my opinions and offer suggestions or a service.

    I had been building tools that audit websites. That way, I’d at least have something real to say.

    His insight? Nobody wants to hear “hi, your website is broken in these specific ways, I can help.” Even if it’s true. Especially if it’s true.

    So, I developed an internal tool that could make an interactive mockup webpage from their existing website.

    Look, this is a homepage that took about six minutes to generate:

    So, instead of “hi, your website is messed up,” I can say, “hey, I made this for you.” It’s hard to argue; that’s definitely a better pitch. This idea is in the air.

    The website builder tool

    How it works

    You add a project by entering a URL.

    The tool crawls the page and takes a screenshot of the site. It uses the stored information to generate an analysis.

    Visual Impression

    The site projects genuine design confidence — it leads with a single, beautifully photographed room rather than a cluttered homepage, trusting the work to speak first. The typographic system is unusually disciplined for a firm website, with the small-caps serif navigation feeling closer to an art book or luxury magazine than a typical service business site. The primary weakness is that the single-section scroll and sparse structural hierarchy may frustrate discovery for first-time visitors, and the lack of a visible H1 represents a meaningful SEO and accessibility gap that warrants attention.

    -click foundry web design pipeline internal tool

    This is honestly a good assessment of the page, although it overrates the “sections” and counts markup that isn’t visible to the user as part of the analysis.

    This part generates “sections the website has” after reading the markdown.

    The site was structured in a weird way. There weren’t actual pages, rather buttons that hide or show sections of the website. An absolute nightmare for UX and SEO, but for the mockup generator it was an ideal scenario. *

    *According to the tool, it’s a B-. I think it’s practically a teardown, a D+.

    Then we drop some more tokens and it creates a critique of the site. It’s hit or miss. Generally, the observations were good. The way it weights and comes up with a score, however, left room for improvement.

    This is also when you can add your own criticism to the list.

    Then you add an image of a reference website (or several) and describe why you like it. As Click Foundry, I make custom WordPress websites for architects, so I’m always looking for inspiration. I went with the OH architecture website.

    Then it’s time to generate.

    In summary, the process looks like this:

    1. Drop in a URL
    2. It scrapes and stores colors, copy, and images
    3. Generates a site audit, pauses for human review and approval
    4. Takes a reference page as design input
    5. Rebuilds the page in clean markup
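
The five steps can be sketched as a single orchestration function. This is my own skeleton of the flow described above, with each stage stubbed out as a parameter (the real tool's function names and signatures are unknown to me):

```javascript
// Sketch of the five-step pipeline from the list above. Each step is passed
// in as a function so the control flow is visible without the real
// Puppeteer/Claude plumbing: scrape -> audit -> human review -> generate -> PDF.
async function runPipeline(url, { scrape, audit, reviewCritique, generateMockup, renderPdf }) {
  const collateral = await scrape(url);            // colors, copy, images, screenshot
  const analysis = await audit(collateral);        // first AI call
  const critique = await reviewCritique(analysis); // pauses here for human edits
  const html = await generateMockup({ collateral, critique }); // third AI call
  return renderPdf(html);                          // the deliverable
}
```

The point of the shape is that the human-review step sits in the middle of the chain, not bolted on at the end.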

    As someone who started with HTML5 templates, there was something comforting about the output. As someone who does this for a living, it was kind of unsettling how quickly I could put myself out of a job. I’m not saying I couldn’t do better myself, but if you’re trying to put out a “pretty good” website quickly, then it’s pretty good.

    ==INSERT A TOKEN TO CONTINUE PLAYING==

    Originally, I recorded myself putting together a site using the tool, but to get this posted without having to deal with Premiere, I cut the video. Somewhere in the first minute I had to regenerate the page because of maximum-token and timing issues on the backend. The process takes a while: TWO and a HALF minutes.

    I want to keep playing with it, so if you have or know somebody who has a bad site send it my way and I’ll give it a go and send you back a redraft of the page.

    THE GUTS

    The first working version was built in two sessions, maybe eight hours total. Most of that was fighting with timeouts and token limits, not actual architecture decisions.

    It runs locally on my machine. Node.js server, Express for the API routes, SQLite for storage. Nothing hosted, nothing fancy. I wanted something I could run from my desktop without paying for infrastructure or worrying about someone else’s uptime.

    The automation layer is Puppeteer — it launches a headless Chrome browser, navigates to whatever URL you give it, waits for the page to settle, then takes a full-page screenshot at 1440px. While it’s in there, it runs a script that pulls everything off the page: headings, body copy, links, images, computed font stacks, every color value on every element, background treatments. It tries to classify each visible section by what it’s doing — is this a hero with a split layout, a card grid, a CTA banner, a testimonial block. That classification matters later.
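
The classification step is the interesting part. Here's a rough sketch of what a heuristic like that could look like, written as a pure function over per-section stats rather than the in-page script the tool actually runs (the thresholds and category names are my guesses, not the real implementation):

```javascript
// Simplified sketch of the section-classification heuristic described above.
// The real tool runs inside the page via Puppeteer's page.evaluate(); here the
// section stats arrive as a plain object so the logic is easy to follow.
function classifySection({ headingLevel, imageCount, linkCount, textLength, columns }) {
  if (headingLevel === 1 && imageCount >= 1 && textLength < 400) return "hero";
  if (columns >= 3 && imageCount >= columns) return "card-grid";
  if (linkCount === 1 && textLength < 200) return "cta-banner";
  if (textLength > 100 && imageCount === 1 && columns === 1) return "testimonial";
  return "content";
}
```

A short heading plus a big image reads as a hero; three-plus columns of images reads as a card grid; one lonely link with little text reads as a CTA banner.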

    All of that — the screenshot, the structured data, the section map — gets sent to Claude’s API. That’s the first AI call. It comes back with a summary of what the business does, who the site is talking to, what the color palette communicates, what the typography says about the brand. It also describes each image it can see in the screenshot so we know what’s a headshot versus a project photo versus a decorative element.
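
For what it's worth, Claude's Messages API accepts mixed image and text content blocks in a single request, so a first call like this might be assembled along these lines (the model name, prompt wording, and function name are placeholders of mine, not the tool's actual code):

```javascript
// Sketch of how the first AI call's request body could be built: the
// screenshot and the structured scrape data travel in one user message.
function buildAuditRequest(screenshotBase64, scrapeData) {
  return {
    model: "claude-sonnet-4-5", // placeholder model name
    max_tokens: 2048,
    messages: [{
      role: "user",
      content: [
        { type: "image", source: { type: "base64", media_type: "image/png", data: screenshotBase64 } },
        { type: "text", text: "Audit this site. Structured scrape data:\n" + JSON.stringify(scrapeData) },
      ],
    }],
  };
}
```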

    Second AI call takes that audit and turns it into a critique. Structured, categorized, scored by severity. This is where I step in — I can agree with a point, throw it out, rewrite it, or add my own. That human layer is the whole point. The AI gets you 70% of the way, you close the gap with taste.
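
That agree/throw-out/rewrite/add loop can be modeled as a small merge step. A sketch under my own assumed data shapes, not the tool's real ones:

```javascript
// Human-review step: each AI critique item gets a decision
// (keep / drop / rewrite), and the reviewer can append their own points.
function applyReview(aiItems, decisions, humanItems = []) {
  const kept = aiItems.flatMap((item, i) => {
    const d = decisions[i] || { action: "keep" };  // default: agree with the AI
    if (d.action === "drop") return [];            // throw it out
    if (d.action === "rewrite") return [{ ...item, text: d.text }];
    return [item];
  });
  // Reviewer-added points are tagged so the next step knows they carry weight.
  return kept.concat(humanItems.map(text => ({ text, source: "human" })));
}
```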

    Third call is the big one. It takes the locked critique, whatever reference images I’ve uploaded with notes about what I liked, the original site’s actual image URLs with descriptions, and builds a complete HTML page from scratch. Tailwind CSS via CDN so it never has to write custom stylesheets — just utility classes. That was a lesson learned the hard way. When I let it write raw CSS, it would burn through tokens on redundant style rules and the output would get cut off halfway through the page.
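
One cheap way to catch that cut-off-output failure mode, which may or may not be what the tool actually does, is a completeness check before accepting the generated page:

```javascript
// Sanity check for truncated model output: a complete document should
// start and end the way an HTML page does. Crude, but catches the
// "stopped halfway through a div" case before it reaches Puppeteer.
function looksComplete(html) {
  const t = html.trim().toLowerCase();
  return t.startsWith("<!doctype html") && t.endsWith("</html>");
}
```

If the check fails, you regenerate instead of rendering half a page to PDF.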

    Puppeteer picks up the generated HTML and renders it to PDF. That’s the deliverable.

    Each step feeds the next. The audit feeds the critique, the critique feeds the mockup. Small sequential context instead of one massive prompt. I tried the everything-at-once approach first and the output was noticeably worse — the model compresses what it knows when you give it too much at once, and details get lost.

    The whole pipeline costs about 40 cents per site in API calls and takes somewhere between 4 and 7 minutes depending on how long I spend editing the critique. The expensive part isn’t the AI — it’s the two and a half minutes of Puppeteer and Claude thinking while I sit there watching a progress bar.

    What I Learned Building It

    Simplify. HTML is too open-ended for AI output. I had it use Tailwind as the CSS library so it never had to write custom styles outside of color variables. Cleaner output, faster process.

    Process is everything. My actual design process starts with gathering collateral — colors, fonts, copy — then remixing it into a framework I know well. The tool mirrors that workflow. It’s not magic, it’s systems-based thinking.

    Avoid context overload. Too much input forces the LLM to compact its memory and things get lost. Small, sequential steps produce better results than trying to do everything in one shot.

    I built this as a way to get my foot in the door during cold outreach.

    Frankly, I still like making websites. I like to make things. So it’s not like I’m going to outsource the entire process to AI, but just because I’m not doing it doesn’t mean someone else won’t.

    The Taste Gap

    I circled back to the colleague who started all this, and somewhere in that conversation I realized something about AI. And something about me. The machine makes you honest about how you actually work.

    Looking at my processes can sometimes be embarrassing because there’s a “taste gap” between what I produce and what I want to produce. And then to have a machine attempt it, I immediately become critical of it, instead of spending the time refining it. This mirrors my own tendencies.

    The moment I prove something works, I lose interest in making it great. Understanding how is the fun part for me. Refinement is just work. I’ve left a lot of projects at the 90% finish line. That last 10% is the hardest part, though.

    The ten percent. Here’s what sucks about the tool:

    1. The rationale and critique aren’t always on point. It needs someone to train it, creating a human grade vs. an AI grade.
    2. It defaults to Unsplash stock photos. It does know what it’s looking for in the photo, but it would be way cooler if we could replace the stock photography with photos that actually live on the prospect’s website.
    3. It doesn’t do mobile. Despite being written in Tailwind, the tool didn’t create a version that collapsed well for mobile.
    4. It had some coding errors (some JSON wasn’t executed and just stayed in a div as text).

    Early results are mixed, but the economics work.

    It took a good deal of work to get this tool good enough to generate a solid mockup. But it’s not great, and I’ve reached out to 2 firms so far with mockups it’s generated and I haven’t gotten an open (that’s on me though; bad headline, I think).

    I’ve sent two. Each one cost me 40 cents. I think I just need to send a hundred, pair it with a cold call and a follow up email.

    Did you find this piece interesting, useful, inspirational? Sign up to the newsletter.

      Did you hate this article? Sign up to the newsletter and let me know.

        I’m just kidding, they’re the same newsletter.

        notes on notes in notes

        In the aim of always improving, the edge-case question becomes: what do you do when the initial webpage has little to no content?

      • This Post Was Supposed to Write Itself. How to publish in under 20 minutes with SuperWhisper and Claude


        A weekend making AI video, a 6-minute voice memo, and the dead internet we’re all building together.

        I’m going to put my money where my mouth is. This entire post started as a rant into SuperWhisper — a voice-to-text app I love — with the plan to run it through a Claude pipeline I built and have it spit out something publishable. More on that process at the end. But first, let me tell you about a weekend I spent trying to make a short film with AI.

        I guess it wasn’t six minutes, but there is a good deal of silence in there. I wouldn’t suggest listening to it but you can.

        The Experiment

        I’d been meaning to test the current crop of AI video generation tools. Not with a big budget — I didn’t want to burn through a pile of credits — so I kept the scope tight.

        I’d seen a lot of what people are making with these tools, and most of it feels… spectacular. And I mean that literally — it’s spectacle. A kitten fighting Godzilla. Beautiful, surreal, directionless. There’s no human hand behind it other than the text prompt, and while I know that’s an oversimplification (there’s real craft emerging in this space), the technology still has this quality where the machine kind of misunderstands you at first. Which, conveniently, burns through your tokens. If you’re doing this professionally, you’re going to start measuring your marketing budget in tokens instead of crew hours. The landscape is already shifting — some hybrid role between old-school production, marketing, post, and a little web savviness is taking shape.

        So rather than make another spectacle, I wanted to try something with a point of view.

        Guerrilla Radio as a Template

        These tools give you about five seconds per clip. That constraint reminded me of a short film format I’d seen — four shots, four seconds each, tell a story. I didn’t follow that exactly, but I went looking for a visual template and landed on Rage Against the Machine’s “Guerrilla Radio” video.

        It opens with this incredible shot: rows of workers — all brown, all hunched over sewing machines — against a stark, sterile white background. They form this triangle pushing out toward the camera. Then it cuts to the band playing on a similarly blank stage.

        I thought there were echoes worth chasing. The original video was talking about wages, outsourcing, sweatshops. And here we are again with a new kind of class stratification. Now it’s the laptop class’s turn to get annihilated. The people who thought they were safe behind a screen are watching AI come for their work in real time.

        So I recreated that structure. But instead of cutting to the band, I cut to the people I think are the real arbiters of this AI moment. Jensen Huang, obviously. The big tech companies sitting on mountains of cash: Google, Apple, Microsoft, Meta. Because if the AI bubble bursts and the unit economics don’t work out for the smaller players, these are the ones left standing. They’re the band on the white stage.

        The Technicals

        This part felt missing, so I had to add another recording.

        At a tool level, I bounced between three platforms: OpenArt, ArtList, and Kling AI. FREE CREDITS, why not? I’d feed them reference images of that robot character you see in the final piece and then just prompt it with text — “turn this into a crowd,” that kind of thing. Pretty straightforward on that end.

        The CEO shots were trickier. I took standalone photos of the tech CEOs — Jensen, Zuckerberg, the usual suspects — and needed to place them on that empty white stage from the original Rage Against the Machine video. So I removed the band, dropped the CEOs in, and then had to match the feet. That was honestly the hardest part. I’d feed ChatGPT a reference image of the Rage members’ feet and positioning so it could match the stance for each CEO. I did most of the generation through ChatGPT specifically because you don’t want to be typing names and likenesses into text-based generation tools. That stuff gets flagged. Having reference images to work from sidesteps a lot of those guardrails.

        The group shot of all the tech CEOs together I mocked up in Photoshop. Then I brought everything into Premiere to cut and sequence the clips.

        And honestly? The worst parts of the final video are my fault. The cuts, the push-ins. I did this linear zoom that should have been eased in. It looks stiff. I just didn’t have the time to finesse it. Because here’s the thing people don’t talk about enough: making stuff this way is still really time-consuming… compared to where this is headed. Think about what it replaces. Coordinating a crew. Renting a studio. Hiring extras. I did this over a weekend with free tools and a laptop. That gap is closing fast. There isn’t a workflow right now, at least not one that isn’t absurdly expensive, where you can just generate a cohesive video end to end.

        But I can see it coming. Something like Premiere building its own generative workspace. There’s ComfyUI, which popped into my world about four days ago, and it has these incredible node-based workflows that can generate different angles, find compositions, chain processes together. The potential for producing lower-cost content is obvious. Two weekends ago when I actually built this thing, that tooling wasn’t really there yet. The speed at which this space is moving is genuinely hard to keep up with.

        Literally a screen recording of me typing it in Photoshop. Cuz, why not.

        So to sum it up: a lot of reference images, some Photoshop compositing, basic Premiere editing, and a willingness to show an AI a picture of a person with a laptop next to that original sewing machine shot and let it figure out the rest from a text prompt. That’s really all it took.

        The Dead Internet Feeling

        Here’s where my cynicism kicks in. I want to use AI for production — for content, and I say that word sarcastically. But if everyone is doing this, what are we actually building? We’re making AI-generated content so that other AI can read it, hoping that somewhere down the funnel a human actually sees it. It’s the dead internet theory playing out in real time.

        Martin Scorsese said something when digital cameras went mainstream that I keep coming back to: there will be great creators in this space, but you’re going to drown in bad film for a while. That’s a pretty lucid take on what any new technology does. It makes things accessible. Some people who never had a shot will make something incredible. But there’s going to be a lot of trash first.

        So in my little video, I kept cutting back to robots staring at billionaires. That felt about right. Draw your own conclusions.

        The Pipeline (Or: How This Post Actually Got Made)

        This whole thing — the rambling you just read in polished form — started as about six minutes of me talking into a microphone **plus the second recording we had**. I ran the transcript through a Claude project I built that works in steps:

        1. It takes raw speech and rewrites it as a technical explanation from a specific perspective — in this case, a marketing person breaking down what I actually did and why.
        2. That explanation gets turned into a set number of paragraphs.
        3. Those paragraphs get simplified to a seventh-to-eighth grade reading level. Warren Buffett’s reading level. If it’s good enough for him, it’s good enough for me and whoever’s reading this.
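
The three steps amount to a chain of prompts where each model output feeds the next. A minimal sketch, with prompt wording that's mine rather than the actual Claude project's:

```javascript
// The three steps above as a chain of prompt builders. Each step's model
// output becomes the input to the next prompt, mirroring the small-sequential-
// context approach from the website tool.
const steps = [
  t => `Rewrite this raw speech as a technical explanation from a marketing person's perspective:\n${t}`,
  t => `Turn this explanation into a set number of paragraphs:\n${t}`,
  t => `Simplify to a 7th-8th grade reading level, keeping the meaning:\n${t}`,
];

// callModel is whatever function sends a prompt to the model and returns text.
async function runWritingPipeline(transcript, callModel) {
  let text = transcript;
  for (const buildPrompt of steps) {
    text = await callModel(buildPrompt(text));
  }
  return text;
}
```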

        I think building internal tools like this is one of the genuinely useful things AI offers right now. Not replacing the thinking — organizing it. Taking the messy, meandering version and giving it shape so you can decide what stays.

        **Pieces in highlight are places where I had to do hand edits. Let me know if you think this piece is engaging or if it blows and you hated reading it. Maybe a 10-point scale system.