
AI’s impact on jobs: A look at front-end engineering

And how these lessons apply to other roles

👋 Hey, this is Ben with a 🔒 subscriber-only issue 🔒 of Ben’s Bites Pro, a weekly newsletter covering AI trends, ideas, business breakdowns, and how companies are using AI internally.

I’m writing a series exploring AI’s impact on specific jobs. Today I’m breaking down what AI is changing for front-end engineers and how these lessons apply to other roles.

I spoke with several front-end engineers and designers to get their opinions.

Thank you, Ben Ormos, Telman - Senior Product Designer @ Microsoft, Glenn - Senior Frontend Developer at Mercury, Josh Payne - CEO of Coframe, Arnaud & Helen - Co-Founders of Galileo, Roy Herman - CEO of supercreator.ai, and Alex - CEO of Magic Patterns, for your thoughts.

Here’s what stood out to me from this post:

  • AI’s impact on job roles is blurring the lines

  • How AI aids various aspects of front-end engineering

  • Tooling bridges the gap between different roles

  • The secondary benefits of using AI in your job

  • What the future of front-end engineering looks like

  • Designing interfaces to make them real

  • Anyone can start building products

  • Ever-improving websites and dynamic interfaces

  • Similarities across the entire organisation

AI is rapidly changing many job landscapes, shifting people to facilitator roles.

I’m already using AI in my day-to-day work. I think my manager would be shocked if they knew how much code in my Pull Requests was written by ChatGPT. But I still have to be there to ask the right questions, fix bugs, and shepherd code into our larger codebase, following our conventions.

AI is blurring the lines between designer and front-end engineer, as its capabilities allow it to perform tasks traditionally divided between the roles.

Designers typically focus on the visual aspect and user experience, creating the layout, colour schemes, and graphics.

Front-end engineers typically bring designs to life with code. They develop the functional and interactive aspects of a website (or app).

Now, you can potentially merge these roles with AI tools.

That may be seen negatively, but AI lets anyone handle the simplest work, which in turn increases demand for top-1% expertise.

This is true across all job roles.

The traditional role of a front-end engineer

There are many elements to this job, namely:

  • Develop User Interfaces (UI): Build and implement visually appealing, responsive, and user-friendly web interfaces using HTML, CSS, and JavaScript.

  • Implement Web Design: Translate visual and interaction designs from UI/UX designers into functional web pages, ensuring cross-browser and device compatibility, responsiveness, and accessibility.

  • Optimise Performance: Ensure web applications are optimised for speed and scalability by optimising code, graphics, and other elements for quick loading and smooth running.

  • Collaborate with Designers & Back-End Developers: Work with designers to implement designs and back-end developers to integrate APIs and services, ensuring seamless data exchange and functionality within the application.

  • Debug and Test: Identify and fix bugs in code, and implement testing frameworks to ensure the web application's stability and reliability.

  • Develop Prototypes: Build functional prototypes for new features or products to gather feedback and iterate on the design before final implementation.

Developing & Implementing User Interfaces

Here are a few AI tools that have impacted front-end engineering:

  • v0.dev: a generative user interface system developed by Vercel. It uses AI to generate copy-and-paste friendly React code based on shadcn/ui and Tailwind CSS. The tool lets developers describe the interface they want to build, and v0 produces the code using open-source tools.

  • Coframe/coffee: a tool developed by the Coframe team that helps build and iterate on UI 10x faster with AI, right from your IDE. (I’m an investor)

  • MakeReal by TLDRaw: a tool that lets users sketch an interface, press a button, and get a working website. It uses the GPT-4 API from OpenAI to transform vector drawings into functional software code, using Tailwind CSS and JavaScript.

  • abi/screenshot-to-code: This tool takes screenshots of a reference web page and builds single-page apps with Tailwind, HTML, and JS to replicate the screenshot.

  • Galileo AI: a text-to-UI tool that quickly produces high-fidelity designs from natural language prompts, trained on thousands of top user experience designs. (I’m an investor)

  • MagicPatterns: an AI-powered platform that helps with front-end development, enabling designers and engineers to rapidly build UIs by generating them from a text prompt. Users can create patterns from any design system, including their own, and export the outcome to code or Figma.

There are many more!

Many of these tools generate UI elements and provide optimized code for easy implementation, significantly easing this part of the job.
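To make that concrete, here's a rough sketch of the kind of copy-and-paste output these tools aim for: a self-contained React component styled entirely with Tailwind utility classes. The component name and copy are illustrative, not actual v0.dev output.

```tsx
// Illustrative only: the shape of a "copy-and-paste friendly" React + Tailwind
// component that tools like v0.dev aim to generate from a text description.
export function PricingCard() {
  return (
    <div className="max-w-sm rounded-2xl border border-gray-200 p-6 shadow-sm">
      <h3 className="text-lg font-semibold text-gray-900">Pro plan</h3>
      <p className="mt-2 text-3xl font-bold text-gray-900">
        $20<span className="text-base font-normal text-gray-500">/month</span>
      </p>
      {/* Tailwind utilities handle spacing, colour, and hover states inline */}
      <button className="mt-6 w-full rounded-lg bg-black px-4 py-2 text-sm font-medium text-white hover:bg-gray-800">
        Subscribe
      </button>
    </div>
  );
}
```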

Blurring the lines between designer and front-end engineer.

I’ve only played around with AI tools, but ultimately I still end up fully making or customizing things myself (with no code). I’m biased as a designer; I usually have a specific idea in mind. But for more standard components, such as an FAQ, a Help section, etc., I’d definitely just use AI.

In the future, I can 100% see these tools becoming better than most designers.

AI allows people to explore design solutions faster. They can generate dozens of variants of the same page in a few minutes with tools like Galileo AI. It lets them try a lot of ideas before getting to the implementation stage. Not all front-end developers know how to design, and AI can give them design superpowers.

Enabling founders to do more in less time.

AI is drastically helping me write code faster, as well as implement more technically complex features and functionality that I might not have been able to build before, that would have taken me a lot more time, or that I’d have needed to delegate to someone else. It started with ChatGPT, then we got GitHub’s Copilot, and now I use Cursor, which gives me the benefits of all worlds as well as learning/indexing my codebase to give me even more ROI as I’m building. These are the first- and second-degree implications, but then you also have things like Vercel’s v0 or no-code platforms that wrap these new AI capabilities and let you potentially skip even more steps and save more time during the process.

As with most creative work, AI can’t generate UI unlike anything it has seen before, but a lot of front-end work uses standard elements: testimonials, hero sections, FAQs, and product feature lists.

This is why Tailwind CSS has taken off over the last few years. It’s a CSS framework for building user interfaces—it simplifies implementing good design in apps.
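As a quick illustration of why that matters, a standard element like a testimonial card is just markup plus utility classes in Tailwind, with no separate stylesheet to write or name. (The classes below are standard Tailwind; the content is placeholder.)

```tsx
// A standard testimonial card in Tailwind: layout, spacing, and typography are
// all utility classes, so there is no bespoke CSS file to maintain.
export function Testimonial() {
  return (
    <figure className="mx-auto max-w-md rounded-xl bg-gray-50 p-8">
      <blockquote className="text-lg text-gray-900">
        "Placeholder quote for an illustrative testimonial."
      </blockquote>
      <figcaption className="mt-4 text-sm text-gray-600">
        Jane Doe, Example Co.
      </figcaption>
    </figure>
  );
}
```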

I expect Tailwind to release an AI product soon, but others have built their own versions for now.

But some external libraries can be tricky to work with.

The biggest issue with external libraries is that they aren't updated frequently and often suit only some of your needs: you pull in a large library when you only need a portion of it, adding extra complexity to the app.

You can ask agents to develop more complex interactions natively, so you can maintain these much better.
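To sketch what "natively" can mean here: a single FAQ accordion item is a few lines of React state rather than a dependency on a large component library. (This is a generic pattern, not code from any tool mentioned above.)

```tsx
import { useState } from "react";

// A minimal FAQ accordion item implemented natively with React state,
// instead of importing a large external library for one widget.
export function FaqItem({ question, answer }: { question: string; answer: string }) {
  const [open, setOpen] = useState(false);
  return (
    <div className="border-b py-4">
      <button
        className="flex w-full items-center justify-between text-left font-medium"
        onClick={() => setOpen(!open)}
        aria-expanded={open}
      >
        {question}
        <span aria-hidden="true">{open ? "−" : "+"}</span>
      </button>
      {open && <p className="mt-2 text-gray-600">{answer}</p>}
    </div>
  );
}
```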

The default way to work with AI is through a chat interface, which may not be the right form factor for design-first work.

A lot of the AI products we’ve seen to date are largely chat- or text-based because that’s the easiest way to integrate with an LLM. But I think with time, the greatest products will home in on the best way for users to interact with their AI products. Spoiler: more often than you’d think, it’s not a chat interface!

For example, at Magic Patterns we help customers 1) integrate with Figma, 2) connect their component library via Storybook and GitHub, and then 3) generate UI based on Figma mockups using their custom components.

There’s likely a balance: if, for example, you don’t have your own component library, you may first want to generate one using the tools mentioned above. Many companies, especially large ones, already have their own libraries.
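One way to picture that balance is to give the model an explicit manifest of the components you already ship, so generated UI is constrained to your own design system. This is a hypothetical sketch (the ComponentSpec shape and prompt wording are invented for illustration), not Magic Patterns' actual implementation.

```ts
// Hypothetical sketch: constrain AI UI generation to an existing component
// library by describing it in the prompt. The ComponentSpec shape and the
// prompt wording are invented for illustration.
type ComponentSpec = { name: string; props: string[]; description: string };

const library: ComponentSpec[] = [
  { name: "Button", props: ["variant", "onClick"], description: "Primary action button" },
  { name: "Card", props: ["title", "children"], description: "Generic content container" },
];

export function buildPrompt(request: string, components: ComponentSpec[] = library): string {
  const manifest = components
    .map((c) => `- <${c.name}> props: ${c.props.join(", ")} (${c.description})`)
    .join("\n");
  return [
    "Generate React UI using ONLY the following components:",
    manifest,
    `User request: ${request}`,
  ].join("\n\n");
}
```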

AI is amazing at generating boilerplate code for components, and we can extrapolate that it will be able to do even more in the future. Still, mature front-end codebases have a lot of interdependencies and might be too complex for AI today. In the future, prompting within a front-end codebase could be the leading use case.

It’s worth noting that Figma AI is coming soon to generate UI, design components, recommend designs, add mock data, and generate code.

Similarities across different roles:

Subscribe to Ben's Bites Pro to read the rest.
