Monday, December 14, 2009

Software Engineering

I was once given a theoretical challenge: write a software engine that can learn, on its own, to interface with another system, and then write the software to do it. A kind of AI, but not AI; at the end it must produce its own new program. Yes, it is conceivable, given enough time, for one process to connect up to another process, try through brute force to start sharing data, and finally save the result as a new program. It's conceivable only because computers are good at repetitive tasks; so long as a human starts both ends off with a fighting chance and there are some rules in place, it should work, shouldn't it?
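Something like this minimal sketch of the brute-force idea, say. The target process, port, and candidate messages are all illustrative assumptions on my part, not part of the original challenge; the probe simply fires candidate byte sequences at another process and records any that elicit a reply.

```python
import socket

# Hypothetical target: a process listening on localhost:9000 (the host,
# port, and candidate messages are assumptions for illustration). The probe
# fires candidate byte sequences at the other process and records any that
# get a reply -- the crude rule being "any response counts as sharing data".
CANDIDATES = [b"HELO\n", b"PING\n", b"GET /\n", b"\x00\x01", b"LOGIN\n"]

def probe(host="127.0.0.1", port=9000, timeout=0.5):
    learned = []  # (sent, received) pairs the target responded to
    for msg in CANDIDATES:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(msg)
                s.settimeout(timeout)
                reply = s.recv(1024)
                if reply:
                    learned.append((msg, reply))
        except OSError:
            continue  # no connection or no reply; try the next candidate
    # "learned" is the raw material you would save as the new program
    return learned

if __name__ == "__main__":
    for sent, received in probe():
        print(sent, "->", received)
```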

Well, the problem was a bit more complicated than that, but I nonetheless went off to consider it. At first with fear, because if it could be done, software developers would start putting themselves out of work; but once convinced it could not be done, I relaxed a little. This was a long time ago, though. Perhaps, a few decades hence, I could re-formulate the original question properly and on paper (I won't now because that would lose IP), and ask it again.

Engineering is full of questions and solutions that just do not quite work at the time. How many new technologies have to arrive before their completion becomes possible?

2 comments:

FreeWildebeest said...

Hmmm. This is sort of an inverse of a code fuzzing tool. Might be useful in reverse engineering APIs. It'd be really, really hard to do right though: how would you score API calls as being correct if you don't know what the API should do?

It also reminds me of the mouse brains controlling a flight simulator: http://interactive.usc.edu/archives/002993.html

Something about meaning coming out of chaos would be appropriate here I think.

Conrad Braam said...

I thought my description of the problem might raise some questions (this was about 6 years ago), but as you point out, "it's un-fuzzy".

I foresaw too many problems with the high-level concept. If you think of the problem in terms of an API, to start you have only one known rule: all functions must return true to indicate success.
Calling functions in the wrong sequence will cause one or more calls to fail. You can call functions many times. You have to record your attempts, and then present all valid sequences. But how valid is the end result? We all know that software has bugs, and just because a sequence passes does not mean you guessed the expected outcome.
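A minimal sketch of that search might look like the following. The toy Device, its three functions, and the sequence length are all my illustrative assumptions; the only rule carried over from the problem is that every function returns true on success.

```python
from itertools import permutations

# A toy API obeying the one known rule: every function returns True on
# success. The Device and its three functions are illustrative assumptions,
# not part of the original problem statement.
class Device:
    def __init__(self):
        self.open = False
    def connect(self):
        self.open = True
        return True
    def send(self):
        return self.open      # fails if called before connect()
    def disconnect(self):
        was_open = self.open
        self.open = False
        return was_open       # fails if nothing was open

def valid_sequences(names, length=3):
    """Try every ordering; record those in which every call succeeds."""
    results = []
    for seq in permutations(names, length):
        dev = Device()  # fresh state for each attempt
        if all(getattr(dev, name)() for name in seq):
            results.append(seq)
    return results  # a human still has to judge which look "most correct"

print(valid_sequences(["connect", "send", "disconnect"]))
# -> [('connect', 'send', 'disconnect')]  (only the natural order passes)
```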

The hard part comes in the form of the mouse and the flight-sim. The mouse needs to get input from the flight-sim; in my problem domain, the fuzzing tool needs a human to pick out the most correct-looking results at the end.
I'm not holding my breath for AI gurus enabling a solution in the near future.

The brain-in-a-Petri-dish article raises slightly more questions than it answers.