Saturday, December 29, 2012

Programmer Creates 800,000 Books Using Algorithmic System, Starts Selling Them


When I read this article the other day, I was absolutely blown away: jaw on the floor, totally overwhelmed.

Marketing professor Philip M. Parker has created an algorithmic system that can write a book on virtually any subject in just a few minutes, pulling in data from around the internet and compiling and reorganizing it into book form.
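
The article doesn't reveal the internals, but conceptually it sounds like a template-driven compiler: fetch facts about a topic from structured sources, slot them into a fixed chapter skeleton, and render the result. Here's a minimal, purely hypothetical Python sketch of that idea (the chapter templates and the fetch_facts stub are my own inventions, not anything from Parker's patent):

```python
# Hypothetical sketch of a template-driven "book compiler".
# This is NOT Parker's actual (patented) method; it only illustrates
# the pool-data / reorganize / render idea described above.

CHAPTER_TEMPLATES = [
    ("Overview of {topic}", "summary"),
    ("Key figures on {topic}", "statistics"),
    ("Glossary of {topic} terms", "definitions"),
]

def fetch_facts(topic: str, kind: str) -> list[str]:
    """Stand-in for querying databases / the web for facts of a
    given kind about the topic; a real system would hit live sources."""
    return [f"[{kind} fact about {topic}]"]

def compile_book(topic: str) -> str:
    """Assemble fetched facts into the fixed chapter structure."""
    chapters = []
    for title_template, kind in CHAPTER_TEMPLATES:
        title = title_template.format(topic=topic)
        body = "\n".join(fetch_facts(topic, kind))
        chapters.append(f"{title}\n\n{body}")
    return "\n\n".join(chapters)

print(compile_book("some delightfully obscure niche topic"))
```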

According to the article:

"In a fascinating piece covering the news the sheer power of this system was revealed. Countless topics can be listed on sites like Amazon — everything you’d ever want to know. The funny part is that the books don’t even have to be written yet. Thanks to digital distribution and print-on-demand solutions, a whole new book can be generated on an incredibly obscure topic as soon as someone buys it. The system will be able to compile an entire book on the subject in the range of ten minutes to a few hours. It’s that simple."
He has even patented this system.
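
The on-demand part of the quote is easy to picture in code: nothing is generated until a sale actually comes in. Another purely hypothetical sketch (generate_book here is a stub standing in for a pipeline like the one sketched above):

```python
# Hypothetical purchase-triggered generation: the title is listed,
# but the text only gets compiled once somebody orders it.
# All names here are invented for illustration.

def generate_book(topic: str) -> str:
    """Stub for a compilation pipeline (see the earlier sketch)."""
    return f"[full generated text of a book about {topic}]"

def send_to_print_on_demand(order_id: str, text: str) -> None:
    """Stub for handing the finished text to a print-on-demand service."""
    print(f"Order {order_id}: {len(text)} characters sent to the printer.")

def on_purchase(order: dict) -> None:
    """Called when a listed-but-not-yet-written title is bought."""
    text = generate_book(order["topic"])  # minutes, not months
    send_to_print_on_demand(order["id"], text)

on_purchase({"id": "A-1001", "topic": "an incredibly obscure topic"})
```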

The one word that sums this system up best: INGENIOUS.

Here's a video presentation of the system by Philip Parker himself (he even sounds like an algorithmically generated narrator; that's a compliment, by the way):


There have been criticisms of this system: that he has essentially created another spam bot; that companies like Google and Amazon, which these days are cracking down hard on mass-produced automated content, would never allow such books to be listed, let alone sold; and that the system mostly produces junk (garbage in => garbage out). Some have dismissed it as an exaggerated, over-hyped claim. Perhaps we are not well enough informed about how the system truly works (even I am still baffled) to really see its value in real-world applications.

However, I do see HUGE potential in such a system for scientific research: sifting through the plethora of scientific literature, compiling much-needed data, and producing reports and analyses of it. That work is tedious, labor-intensive, time-consuming, monotonous, and, let's be honest, DEAD BORING. Automating it would have tremendous value not only for scientific researchers but also for market analysts and business people who need this kind of information in the shortest time possible. If such a process could be automated to the quality humans achieve, if not better, it would free up more of our time for other critical tasks and projects, while accelerating the rate at which knowledge can be acquired.
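
To make that concrete, here's a toy example of the kind of literature-sifting step I mean, assuming the abstracts have already been downloaded as plain text. The keyword scoring is deliberately naive; a real system would need proper language processing:

```python
# Toy literature sifter: rank abstracts by keyword relevance and
# emit a short report. Purely illustrative; real pipelines would
# use actual NLP rather than raw keyword counts.

def score(abstract: str, keywords: list[str]) -> int:
    """Count keyword occurrences in an abstract (case-insensitive)."""
    text = abstract.lower()
    return sum(text.count(k.lower()) for k in keywords)

def report(abstracts: dict[str, str], keywords: list[str], top: int = 3) -> str:
    """Rank abstracts by score and format the best ones as a report."""
    ranked = sorted(abstracts.items(),
                    key=lambda item: score(item[1], keywords),
                    reverse=True)
    lines = [f"Top {top} abstracts for keywords {keywords}:"]
    for title, text in ranked[:top]:
        lines.append(f"- {title} (score {score(text, keywords)})")
    return "\n".join(lines)

papers = {  # made-up abstracts for demonstration
    "Paper A": "Tumor suppressor genes in breast cancer cell lines...",
    "Paper B": "A survey of deep-sea sediment microbial communities...",
    "Paper C": "Cancer immunotherapy and the tumor microenvironment...",
}
print(report(papers, ["cancer", "tumor"], top=2))
```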

This reminds me of an article I read a long time ago on Physorg entitled 'Mining the Language of Science', which describes essentially what I have in mind. Here's an excerpt:

Ask any biomedical scientist whether they manage to keep on top of reading all of the publications in their field, let alone an adjacent field, and few will say yes. New publications are appearing at a double-exponential rate, as measured by MEDLINE – the US National Library of Medicine’s biomedical bibliographic database – which now lists over 19 million records and adds up to 4,000 new records daily.
For a prolific field such as cancer research, the number of publications could quickly become unmanageable and important hypothesis-generating evidence may be missed. But what if scientists could instruct a computer to help them?
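
Incidentally, MEDLINE's scale is something you can poke at yourself: PubMed is queryable programmatically through NCBI's E-utilities. A quick sketch using Biopython's Entrez wrapper (pip install biopython; substitute a real contact email, which NCBI requires):

```python
# Check how many PubMed/MEDLINE records match a query, via
# Biopython's wrapper around NCBI's E-utilities.
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a real address

# retmax=0 means we only want the match count, not the record IDs.
handle = Entrez.esearch(db="pubmed", term="cancer", retmax=0)
result = Entrez.read(handle)
handle.close()

print(f"PubMed records matching 'cancer': {result['Count']}")
```
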
I've said it before and I'll say it again. These are exciting times.
