During the process of writing AI Engineering, I went through many papers, case studies, blog posts, repos, tools, etc. The book itself has 1200+ reference links and I've been tracking 1000+ generative AI GitHub repos. This document contains the resources I found the most helpful to understand different areas.
While you don't need an ML background to start building with foundation models, a rough understanding of how AI works under the hood is useful for avoiding misuse. Familiarity with ML theory will make you much more effective.
Foundational, comprehensive, though a bit intense. This used to be many of my friends' go-to book when preparing for theory interviews for research positions.
OpenAI (2023) has excellent research on how exposed different occupations are to AI. They defined a task as exposed if AI and AI-powered software can reduce the time needed to complete the task by at least 50%. An occupation with 80% exposure means that 80% of that occupation's tasks are considered exposed. According to the study, occupations with 100% or close to 100% exposure include interpreters and translators, tax preparers, web designers, and writers. Some of them are shown in Figure 1-5. Not surprisingly, occupations with no exposure to AI include cooks, stonemasons, and athletes. This study gives a good idea of what use cases AI is good for.
One of the best reports I've read on deploying LLM applications: what worked and what didn't. They discussed structured outputs, latency vs. throughput tradeoffs, the challenges of evaluation (they spent most of their time creating annotation guidelines), and the last-mile challenge of building gen AI applications.