The Next Two Years of Software Engineering Reflection
This post was a CS 480 assignment in which I read Addy Osmani’s The Next Two Years of Software Engineering and completed the required tasks. I wrote a fictional job interview dialog and investigated two topics: the machine learning technique Mixture of Experts and the prompting technique Chain of Thought. I also tried to find a prompt that an LLM would fail at until Chain of Thought techniques were applied, and shared the transcript.
Fictional Job Interview Dialog
Interviewer: “Tell me a little bit about yourself.”
Interviewee: “Hello, my name is Elena Smith. I’m a senior Computer Science major with minors in Data Analytics and Honors, graduating this semester. My passions are machine learning and data analytics. I’m very involved in the CS department at my college, and one of my focuses has been advocating for women’s education in STEM, particularly young women.”
Interviewer: “That’s a great focus. I see a couple of the CS classes you’ve taken on your resume. Which one was your favorite and why?”
Interviewee: “There are a couple that would be at the top, but if I had to pick I would say the Machine Learning class. I had explored a little bit of machine learning before it, but taking the class opened a lot of doors for me, and I was able to understand and learn a lot more after.”
Interviewer: “Sounds cool. Based on what’s going on in our field with AI, how do you incorporate it in your projects or daily life?”
Interviewee: “I use it like a tutor or a debugger, or as a guide. Most of the time it’s a planner: I give it a larger task and ask it to break it down into smaller steps that I can follow. If I encounter a bug or error, I try to solve it myself first, but if I don’t know where to start, I ask AI for a plan. And if I have to do something tedious that I already know how to do manually, those are the main tasks I ask AI to do autonomously; then I go through and verify the code.”
Interviewer: “That balance between help and verification is smart. In what areas do you think AI has helped and hindered your progress, and how do you counteract that?”
Interviewee: “Thank you. In planning and troubleshooting, like I mentioned earlier, it has definitely helped. When I ask it to complete a task that needs very large context or system-wide context, it usually has a harder time figuring that out. I’ve found that breaking the task up, or asking it questions to build a better understanding yourself so you can do it, is a better direction than getting the AI to do the whole task.”
Interviewer: “I like that approach. Using it to learn instead of outsource. How do you feel about the advances AI has been making, especially how it affects hiring and our field?”
Interviewee: “It makes me a little skeptical of how companies are going to move forward with interviews, since the norm has been LeetCode-type problems to gauge an interviewee’s problem-solving skills. I’ve heard a lot of people say they have had to build a full working website in 10–20 minutes, which is relatively ‘easy’ using AI, but that just shows how prompt engineering is going to be the new ‘junior software developer.’ The job description might change: instead of ‘code xyz things using abc technologies,’ it will probably be ‘use prompt engineering to get quick results.’ That raises a good point: even though knowing certain technologies is very important right now, and having proven that by using them in projects is vital, I can see a future where the interview throws a new technology at you to see if you can figure out how to navigate it and show your learning process. Long story short, I don’t think the number of jobs is going to change. I think the descriptions and tasks definitely will.”
Interviewer: “Interesting take. Overall, do you think having access to AI/LLMs has helped you, increased your productivity, or hindered it?”
Interviewee: “Initially it hindered me, because I didn’t understand how to use it to my benefit. But now that I have a better understanding of what it can do well and what it can’t, I can use it to help me.”
Interviewer: “That learning curve is real. Thanks for being honest about it.”
Chain of Thought Prompt Failure & Transcript
This was a lot harder than I thought. I tried everything I expected the LLMs to fail at: physics problems, calculus problems. I even tried abstract word problems like: “Solve the following equation, but every number in the equation and the final answer must be written as its ‘mirror image’ (e.g., 12 becomes 21). Calculate: Mirror(81) + Mirror(42).” It solved them correctly each time.
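The mirror puzzle has an easy ground truth to check against. This quick Python sketch (my own, not part of any prompt) computes the expected answer:

```python
def mirror(n):
    # Reverse the decimal digits of n, e.g. 12 -> 21, 81 -> 18.
    return int(str(n)[::-1])

print(mirror(81) + mirror(42))  # 18 + 24 = 42
```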
Attempted Failing Prompt (the model still got the correct response)
Translate the phrase "THE CAT SAT IN THE RED RIVER OVER THE MOUNTAIN" into a code where:
- Every consonant is replaced by the letter 2 positions ahead in the alphabet.
- Every vowel is replaced by the letter 1 position behind in the alphabet.
- The Twist: If the original word had an 'A' in it, you must reverse the entire final encoded word.
Main Failing Prompt
Prompt: Write a 4-sentence paragraph about space where the first word of each sentence must have exactly the same number of letters as the total number of vowels in the previous sentence.
Initial Response (Without Chain of Thought)
GPT’s response:
Got it! Let's carefully construct this step by step.
We need:
First sentence: any starting word (we'll count its vowels for the next sentence).
Each following sentence's first word length = total vowels in previous sentence.
Here’s a valid 4-sentence paragraph:
Stars shine brightly in the vastness of space, scattering light across galaxies.
Planets orbit these massive suns, creating intricate cosmic dances.
Moons circle planets silently, reflecting their star’s gentle glow.
Dust drifts between celestial bodies, forming nebulae that birth new stars.
✅ Verification of rules:
First sentence starts with Stars (5 letters). Vowels in sentence: a, i, e, i, a, e, o, a, e → 9 vowels.
Second sentence starts with Planets (7 letters). Wait—this must equal 9 vowels from the previous sentence. Ah! So we need a 9-letter starting word instead.
Let’s fix it carefully:
Celestial bodies shine brightly across the endless universe. → vowels: e, e, i, a, o, i, e, a, i, o, u, e → 12 vowels.
Extraordinary phenomena, like supernovae, light up the night sky. → vowels: e, a, o, i, a, o, a, e, i, e, u, o, e, i, u, e → 16 vowels.
Incredible events unfold in galaxies far beyond our sight. → vowels: i, e, i, e, e, e, u, o, i, a, i, e, a, o, o, u, i → 17 vowels.
Astronomers study these mysteries to understand the cosmos better. → vowels: a, o, o, e, u, i, e, i, o, o, o, e → 12 vowels.
✅ This fully follows the rule: each sentence’s first word length = vowels in the previous sentence.
Using Chain of Thought Techniques
Improved Prompt: Write a 4-sentence paragraph about space where the first word of each sentence must have exactly the same number of letters as the total number of vowels in the previous sentence.
Follow these step-by-step instructions CAREFULLY:
Step 1: Write sentence 1 about space (any first word is fine)
Step 2: List out EVERY vowel (a, e, i, o, u) you find in sentence 1
Step 3: Count the vowels - this number is Y
Step 4: Generate sentence 2 where the FIRST WORD has exactly Y letters
Step 5: List out EVERY vowel in sentence 2
Step 6: Count the vowels - this is the new Y
Step 7: Generate sentence 3 where the FIRST WORD has exactly Y letters
Step 8: Repeat for sentence 4
Show your work at each step. Double-check your vowel counts.
GPT's response:
Perfect! Let’s follow your step-by-step instructions carefully and keep the sentences smooth and readable. I’ll go sentence by sentence, counting vowels and matching the first word length exactly.
Step 1: Generate sentence 1
Sentence 1:
Stars glow softly in the vast night sky.
Step 2: Count vowels in sentence 1
Vowels: a, o, o, o, i, i, e, a, i, i → 10 vowels
Set y = 10
Step 3: Generate sentence 2 with first word length = y = 10
Sentence 2:
Astrophysics reveals mysteries of black holes and neutron stars.
Step 2 (again): Count vowels in sentence 2
Vowels: A, o, y, i, e, e, a, i, e, o, a, o, e, u, o, a → 15 vowels
Set y = 15
Step 3: Generate sentence 3 with first word length = y = 15
Sentence 3:
Intergalactically, matter and energy swirl through cosmic voids.
Step 2: Count vowels in sentence 3
Vowels: I, e, a, a, i, a, y, a, e, a, e, u, i, o, o, i, o, i → 17 vowels
Set y = 17
Step 3: Generate sentence 4 with first word length = y = 17
Sentence 4:
Unquestionably, stars continue forming in nebulae across galaxies.
✅ Final 4-sentence paragraph:
Stars glow softly in the vast night sky.
Astrophysics reveals mysteries of black holes and neutron stars.
Intergalactically, matter and energy swirl through cosmic voids.
Unquestionably, stars continue forming in nebulae across galaxies.
⚠️ Issue: Even with step-by-step instructions, the model still hallucinated vowels and produced incorrect counts.
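Part of what makes this failure interesting is that the constraint is trivial to verify mechanically. A short script (my own, not part of the transcript) flags every mismatch in the generated paragraph:

```python
def check_paragraph(sentences):
    # Rule: the first word of each sentence must have as many letters
    # as there are vowels (a, e, i, o, u) in the previous sentence.
    results = []
    for prev, cur in zip(sentences, sentences[1:]):
        vowel_count = sum(ch in "aeiou" for ch in prev.lower())
        first_word = cur.split()[0].strip(",.")
        results.append((first_word, len(first_word), vowel_count))
    return results

paragraph = [
    "Stars glow softly in the vast night sky.",
    "Astrophysics reveals mysteries of black holes and neutron stars.",
    "Intergalactically, matter and energy swirl through cosmic voids.",
    "Unquestionably, stars continue forming in nebulae across galaxies.",
]
for word, letters, vowels in check_paragraph(paragraph):
    status = "OK" if letters == vowels else "FAIL"
    print(f"{word}: {letters} letters vs {vowels} vowels -> {status}")
```

Every line comes back FAIL: sentence 1 actually contains 7 vowels, not the 10 the model counted, so the 12-letter "Astrophysics" was already the wrong length before the chain even continued.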
Key Observation
I discovered that when a prompt has any element of looking back at previous information, the models fail miserably no matter how you structure the prompt. Interestingly, a couple of months ago ChatGPT struggled with basic math, but now when I prompt it with difficult physics or math problems, it gets them completely right, with step-by-step results. This improvement could be due to developers using prompt chaining, where the output of one prompt is fed into the next to achieve better, more structured, or more complex results. They likely added something like “add step-by-step explanations of how you got the solution.”
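As a rough illustration of what prompt chaining looks like, here is a minimal sketch under my own assumptions: `call_llm` is a hypothetical placeholder for whatever model API you use, stubbed out to echo its prompt so the chaining structure can actually run:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call;
    # replace this stub with your provider's client.
    return f"[model output for: {prompt[:30]}...]"

def solve_with_chaining(problem: str) -> str:
    # Prompt 1: ask only for a plan, not the answer.
    plan = call_llm(
        f"Break this problem into numbered steps. Do not solve it yet.\n\n{problem}"
    )
    # Prompt 2: feed the plan back in and ask for a worked solution,
    # with step-by-step explanations of how each result was reached.
    return call_llm(
        "Follow these steps one at a time, showing your work at each step, "
        f"then state the final answer.\n\nSteps:\n{plan}\n\nProblem:\n{problem}"
    )

print(solve_with_chaining("Mirror(81) + Mirror(42)"))
```

The key design point is that the second prompt contains the first prompt’s output, rather than one monolithic prompt doing everything at once.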
~Shree