I'm not sure which forum this belongs in, but since the test was in C++ I thought I'd post it here; feel free to move it.

I recently completed a test for a job application. It was a choose-2-of-3 test, and I completed two of the three questions. I also tried the third question (a Qt one) but couldn't get it done in an evening and had to ask a question on here about it, so I left it.

Yesterday I got told they did not want to continue the application process because of my test. I asked for feedback and they said "none of the responses you gave worked". I was somewhat surprised at this: although I hadn't written a full automated testing script (I was trying to rattle the test off in a few hours), I had run several examples and they all seemed fine. I got home tonight and checked, and sure enough, even with some more tests I can't find the problem. The responses do work.

Should I ask for more specifics, or should I send a screenshot of the program working? Should I send them a script to run the program with dummy arguments? I included a readme and example usage. Maybe they have found a case that does not work, or there's a limitation on the size of the input that they don't like, but if that were the case the style of feedback seems odd given the tone of the email.

I'd mention that the code works on my end and ask how they're testing it on their side. Make it clear that you respect their decision to halt the interview process and that you're most interested in learning what you may have done wrong.

Yeah, I respect their decision; it's more the confusion of not knowing what is wrong!

Yeah, I respect their decision

That's irrelevant. The idea is to make sure that they know you respect it so as to avoid burning bridges. Why? Because you're basically questioning their ability to run your code by suggesting that it's correct when they say it's not.

It may be that the questions dealt with edge cases and they wanted to see if you'd catch them. It might also be the case that they had a candidate in mind and you were part of the process the company requires to allow a hire (must interview X candidates...). I don't like these approaches to interviewing, but they are common enough.

In either case I think you are well within reason to ask about the specifics of the failure.

Thanks for the advice.

@Logic You're correct, I don't want to burn any bridges as they seem an interesting company, and as you say, you never know when paths may cross again.

Just ask them how they evaluated whether the programs are working or not, so that you can use that to learn what you did wrong in the code. Don't talk / whine about how it works on your end; there's no point in that, and they're probably not going to be interested in taking a second look at your programs and reviewing whatever "evidence" you give them. Assume they are right (i.e., your programs don't meet the requirements), and inquire how they evaluated that, so you can use the same method to find the mistakes and correct them.

You should also expect not to get anything, either because they don't care, or they don't want to give away the tools (or scripts) they use to verify the quality of their applications (maybe they use the same ones at every round of hiring).

My guess is that edge cases are the problem. The general rule with quality assurance is to test all edge cases (e.g., completely meaningless inputs, pathological cases, out-of-memory conditions, etc.). There is no point in repeatedly testing the "normal case": you test the normal cases a few times with a representative set of inputs, and then you systematically test each special case (including the "user is having an epileptic seizure" type of inputs). If I were giving job interviews for a programming job, that's probably what I would do: give candidates an assignment with a number of more or less difficult pitfalls or edge cases, and then see who gracefully dealt with the greatest number of edge cases. I think you were mistaken if you thought that they just wanted a basic working program that did what was asked (i.e., works in the "normal cases"), because these "interview assignments" are usually trivial enough that "getting it to work" is a given; the main interest is how good and robust your solution is.

So, when the interviewer says "it wasn't working", it might mean something completely different from what you think. A program that fails even the simplest edge cases (like an accidental letter input when entering a number at the console prompt) is a program that is not working; at least, that's what most experienced programmers would say.