
Developing our position on AI


If you’re not familiar with us, RC is a 6 or 12 week retreat for programmers, with an integrated recruiting agency. Ours is a special kind of learning environment, where programmers of all stripes grow by following their curiosity and building things that are exciting and important to them. There are no teachers or curricula. We make money by recruiting for tech companies, primarily early-stage startups.

This post started as a question: How should RC respond to AI? Regardless of whether you think large language models present a big opportunity, a looming threat, or something in between, we can likely agree that AI is everywhere, especially in the world of programming, and almost impossible to ignore.

As operators of a programming retreat and recruiting agency, we’ve found ourselves grappling with many of the questions AI raises: What does the existence of code generation tools mean for the craft of programming? In what ways do language models help or harm our ability to learn? Which skills do these tools make less important, and which ones do they make more important? What impact has the proliferation of coding agents and other LLM-powered tools had on software engineering jobs today, and what impact might it have in the coming years?

Our interest in these questions is not academic; it’s practical. AI has popped up in every aspect of our work, from our admissions process (should we let applicants use Cursor?) to our retreats (are AI tools helping or hindering people’s growth?) to our recruiting business (what should people focus on to be competitive candidates?) to community management (how do we support productive discussion when people have such strong feelings and divergent views?).

There are no simple answers to these questions. Nevertheless, I think it’s important that we at RC have a thoughtful perspective on AI; this post is about how we’ve tried to develop one.

Our approach

We chose at the outset to limit our focus to the personal and professional implications of LLMs for Recursers, since that’s what we’re knowledgeable about. You won’t find positions or pontification in this post on energy usage, misinformation, industry disruption, centralization of power, existential risk, the potential for job displacement, or responsible training data.

That’s not because we don’t think discussion of any of these issues has merit (it does) but because we think it best to remain focused on the areas closest to our expertise and that are core to our business, and to avoid those that are more inherently political. While the broader societal questions are still being debated, every programmer here has to answer the question of whether and how best to use LLMs in their work during their time at RC.

Our greatest asset is our community. With nearly 3,000 alums, we have easy and direct access to programmers of all stripes, with deep expertise and unique perspectives, from industry to academia. So we started this work by assembling an informal AI advisory group of alums who embody our most important educational values: curiosity, volition, rigor, kindness, learning by doing, and a growth mindset. We sought diversity not only in demographics (age, race, and gender), but also in role type, industry, seniority, how recently they attended RC, and, most importantly, their views on AI. For the latter, in several cases we were surprised to find people either more or less skeptical of AI than we had originally assumed. In all cases we were pleased to find people’s views thoughtful, nuanced, and more complex than the dominant discourse online.

What we’ve learned from talking with our alums
