Can machines be entrepreneurial?

Stephen Schwartz has an op-ed in the Daily Telegraph today talking about machine marking in education. All up I agree with his arguments – but this paragraph struck me as being wrong.

Critics also claim that computers will never be able to measure creativity. This is not true. Computers learn to assign marks by mimicking those given by humans. If the marking rules require human markers to assign higher marks to creative essays, the computer will mimic the markers and reward creativity as well.

Okay – now maybe we’re just quibbling over the definition of a “creative essay”. If it is possible to identify a priori what a creative essay is (for example, an original short story crafted by a student) then mechanistically the machine could add a random mark to all the grades and “reward” the creativity. That is trivial – but to my mind it doesn’t “measure” creativity and then reward it.

The issue to my mind is actually identifying creativity when you see it. Now I’m happy to accept that computers and machines generally can work longer and harder than humans, are more consistent than humans, and so on. They are also profoundly and fundamentally stupid. Any deviation from predetermined and pre-programmed norms is an innovation to a machine – but we know not all innovations are valuable or correct. Sometimes things students say are incredibly profound, other times just wrong. I’m not convinced that a machine can tell the difference unless a human foresaw every instance of student creativity in advance and programmed an appropriate response into the machine. Yet if it can be foreseen, how is it creative?
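To make the disagreement concrete: “mimicking the markers” is, in machine-learning terms, supervised learning – fit a model to human-assigned marks, then predict marks for new essays. Here is a minimal sketch of that idea; the features, data, and nearest-neighbour approach are invented for illustration, not anything from the op-ed, and real marking systems use far richer models:

```python
# Minimal sketch: a marker that "mimics" human grades via nearest-neighbour
# lookup over crude text features. Features and training data are invented
# for illustration only.

def features(essay):
    """Two toy features: word count and vocabulary richness."""
    words = essay.lower().split()
    return (len(words), len(set(words)) / max(len(words), 1))

def train(essays_with_marks):
    """'Training' here is just memorising human-marked examples."""
    return [(features(e), mark) for e, mark in essays_with_marks]

def predict(model, essay):
    """Return the human mark of the most similar training essay."""
    fx = features(essay)
    def dist(item):
        fy, _ = item
        return sum((a - b) ** 2 for a, b in zip(fx, fy))
    _, mark = min(model, key=dist)
    return mark

corpus = [
    ("the cat sat on the mat the cat sat", 55.0),
    ("a wandering albatross mourns the drowned horizon", 90.0),
]
model = train(corpus)
print(predict(model, "the dog sat on the log the dog sat"))  # → 55.0
```

Note what the sketch makes obvious: the machine can only hand back a mark a human already gave to a similar-looking essay. An essay unlike anything in the training set gets whatever mark its nearest – possibly very distant – neighbour happened to receive, which is exactly why genuinely novel work is the hard case.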

This entry was posted in Education. Bookmark the permalink.

25 Responses to Can machines be entrepreneurial?

  1. Entropy

    I think it would be pretty easy to game the system such creativity-assessing machines would use. A few novel phrases and words here and there, and voila!
    Mind you, in some ways the effort would be creative anyway, just not in the way intended.

  2. zyconoclast

    As long as the machine is not programmed to ‘massage’ the results according to the victimhood totem pole.

  3. Rafe Champion

    In short, yes.
    Similarly we can’t expect machines to make scientific discoveries although a lot of effort by Simon and Newell et al went into algorithms for discovery. The usual response is “wait for the next generation of super computers”.

  4. Tim Neilson

    In the Angry Studies pseudo-disciplines machines could hardly do worse at assessing than people do. After all, computers do as well as humans at writing the stuff. There have been several cases of computer generated pseudo-papers being accepted for publication in assiduously “progressive” journals.
    Of course the great Ern Malley caper was pre-computer, but McAuley and Stewart did their best to avoid any real exercise of human intellect in producing the “poetry”, thus setting a standard that post-modernism has consistently conformed to.

  5. Nicholas (Unlicensed Joker) Gray

    It might just check for plagiarism. No plagiarism means you’re creative!

  6. Ƶĩppʯ (ȊꞪꞨV)

    No machine can measure creativity… yet. It will come though.

  7. duncanm

    If computers could measure (useful) creativity, then they would be able to be creative themselves.

    They’re not, therefore they can’t.

  8. cynical1

    It’s bullshit.

    Emotions.

    Only living creatures have them.

    No machine ever will.

    It’s why freaks are hanging out to buy rubber wanking dolls that scream with joy.

    Ya think it’s really having a great time?

  9. Bruce of Newcastle

    Nice article linked at Drudge today:

    This Company’s Robots Are Making Everything—and Reshaping the World

    It’s long but well worth reading. The impression I get from it is that the Japanese aren’t too worried about population decline, and see no need to import unskilled labour, because robots will it all.

    Then there’s this one from yesterday:

    Report: Google A.I. Writes Better Machine-Learning Code Than the Humans Who Created It

    I wonder what’s left for mere humans to do?

  10. Bruce of Newcastle

    That should be “robots will do it all.”
    I suspect they’d also write better comments on the Cat than I can.

  11. Given what we see from some of today’s writers, journalists et al, I suspect that an AI will be as good, if not better, at judging creativity and probably being creative in their own right. An AI doesn’t have to go to journalism school to learn how to plagiarise the work of others and use Twitter as a source of factual information.

  12. Arnost

    Yet if it can be foreseen, how is it creative?

    That is indeed the nub. I would suggest that it’s impossible to program a machine to recognise innovative genius. History has shown that (human) masters of their own disciplines in the majority of cases don’t recognise it when they see it – so how could they program a machine to identify it?

    But it is interesting that whilst it is difficult to program machines to recognise genius, they can be programmed to create / innovate. Their strength is that they can search through myriads of combinations and permutations to find optimal outcomes. They can manipulate over and over what to us may be random patterns to find reason… They can even paint!

    https://newatlas.com/creative-ai-algorithmic-art-painting-fool-aaron/36106/#gallery

    And they can even fool experts! Here is an AI text generator [click refresh for a brand new post-modernist paper in the style of the Alan Sokal hoax complete with citations!].

    http://www.elsewhere.org/pomo/
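    Generators like the one linked above typically work by recursively expanding a hand-written grammar until only plain words remain. A toy sketch of that mechanism – the grammar rules here are invented for illustration, and the real generator’s grammar is vastly larger:

```python
import random

# Toy sketch of a grammar-expansion text generator, in the spirit of the
# Postmodernism Generator linked above. These rules are invented for
# illustration; the real generator's grammar is vastly larger.
GRAMMAR = {
    "SENTENCE": ["If one examines NOUN, one is faced with a choice: "
                 "either accept NOUN or conclude that NOUN is ADJ."],
    "NOUN": ["presemantic dialectic theory", "the paradigm of context",
             "neodeconstruction"],
    "ADJ": ["meaningless", "a significant form", "unattainable"],
}

def expand(symbol, rng):
    """Recursively replace grammar symbols until only plain words remain."""
    if symbol in GRAMMAR:
        tokens = rng.choice(GRAMMAR[symbol]).split()
        return " ".join(expand(t, rng) for t in tokens)
    core = symbol.rstrip(".,:;")  # keep punctuation attached to tokens
    if core != symbol and core in GRAMMAR:
        return expand(core, rng) + symbol[len(core):]
    return symbol

print(expand("SENTENCE", random.Random(0)))
```

    Every “paper” it produces is grammatical and jargon-laden but carries no meaning at all – which is rather the point of the hoax.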

  13. Senile Old Guy

    Yet if it can be foreseen, how is it creative?

    Does this mean “foreseen” by a computer? If it means “foreseen” in general, including in people, then it is clearly wrong. Artists routinely “foresee” — imagine or see in their minds — their creations. The exceptions are the splatter paint mob, who I do not really consider artists (except in the BS artists sense).

  14. Senile Old Guy

    This is the gist:

    The system runs thousands of simulations to determine which areas of the code can be improved, makes the changes, and continues the process ad infinitum, or until its goal is reached.

    And here is how Google puts it:

    This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large — a typical 10-layer network can have ~10^10 candidate networks!

    It’s a brute force approach that has been used in other situations for years. It is commonly used in statistical modelling when conventional methods (e.g. least squares) do not work.

    Having said that, what Google is doing is well beyond that and has some neat outcomes:

    However, there are some notable new elements — for example, the machine-chosen architecture incorporates a multiplicative combination (the left-most blue node on the right diagram labeled “elem_mult”). This type of combination is not common for recurrent networks, perhaps because researchers see no obvious benefit for having it. Interestingly, a simpler form of this approach was recently suggested by human designers, who also argued that this multiplicative combination can actually alleviate gradient vanishing/exploding issues, suggesting that the machine-chosen architecture was able to discover a useful new neural net architecture.

    What it is doing is still relatively simple, compared to what humans can do. As with most things like this, the computers can process things much faster than we can so can do simple things quickly and often. Carrying on a conversation, not so much.
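    The “thousands of simulations” loop described above can be caricatured in a few lines: sample a candidate configuration, score it, keep the best. The search space and the scoring function below are invented stand-ins for illustration, not Google’s actual system:

```python
import random

# Caricature of architecture search: repeatedly sample a candidate
# configuration, evaluate it, keep the best. The search space and the
# scoring function are invented stand-ins for illustration.
SEARCH_SPACE = {
    "layers": range(1, 11),
    "width": [16, 32, 64, 128],
    "combiner": ["add", "concat", "elem_mult"],
}

def score(cfg):
    """Stand-in for 'train the network and measure validation accuracy'.
    Here we simply pretend elem_mult and moderate depth are good."""
    return (cfg["combiner"] == "elem_mult") * 2.0 - abs(cfg["layers"] - 5) * 0.1

def search(trials, seed=0):
    """Random search: the brute-force loop described in the article."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: rng.choice(list(v)) for k, v in SEARCH_SPACE.items()}
        s = score(cfg)
        if s > best_score:
            best, best_score = cfg, s
    return best

print(search(200))
```

    The sketch shows why it is brute force rather than insight: the loop has no idea *why* a configuration scores well, it just keeps whatever the scoring function happens to favour.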

  15. Senile Old Guy

    Here’s a sample:

    If one examines presemantic dialectic theory, one is faced with a choice: either accept the dialectic paradigm of context or conclude that the purpose of the artist is significant form. Several theories concerning not deconstruction as such, but neodeconstruction exist.

    It only fools “experts” in fields which are inherently devoid of meaning in the first place. The meaningless text generated by the computer is no different from the meaningless text generated by the people. That’s why the Sokal hoax worked.

  16. RobK

    I’m with Sinc on this one. There is a lot to innovation and creativity, not the least of which is understanding the problem in the first place.

  17. JohnA

    Rafe Champion #2527556, posted on October 19, 2017, at 3:56 pm

    In short, yes.
    Similarly, we can’t expect machines to make scientific discoveries although a lot of effort by Simon and Newell et al went into algorithms for discovery. The usual response is “wait for the next generation of super computers”.

    Or as Isaac Asimov put it in many of his short stories: “Insufficient data to answer the question.” Eventually, the computer swallows up the entire universe of the story, and the final sentence is “Let there be Light!”

    Poor (atheist and staunch anti-creationist) Asimov couldn’t get God out of his thinking…

  18. Rafe Champion

    On the topic of robots, this is a nice piece by our learned colleague Dr Peter Smith.

  19. 2dogs

    Dealing with the unforeseen requires evaluation by authentic experience. The “Chinese Room” thought experiment proves machines can’t do that.

  20. Tim Neilson

    The “Chinese Room” thought experiment proves machines can’t do that.

    I don’t know whether it actually “proves” that. It does prove the negative, i.e. that, no matter how successfully a computer mimics such cognition, that can never prove that it has the cognition.

    Wasn’t there a scene in one of the Terminator films where Schwarzenegger is shown being spoken to derogatorily, and the film shows the myriad of data being processed in his computer chips leading to a computerised instruction, in response to which he says to his interlocutor “go fuck yourself”?

  21. True Aussie

    Technology has already made 99% of academics redundant. So many lecturers could easily be replaced by a video recording. All economics courses in Australia could be taught by one person in his spare time, only updating his lectures periodically to keep up with the changing events. Imagine all the freed up government money from no longer having to support glorified welfare queens who can’t handle the real world.

  22. Lutz

    I can’t really see machines developing initiative. No computer will ever say: Oh, I think I’ll learn to play the piano, or learn watercolour painting – or for that matter decide to reinvent itself and start a different career. Judging creativity is dubious at best, as different evaluators will judge a creative work differently, depending on what suits their own minds.

  23. Kneel

    “No computer will ever say: Oh, I think I’ll learn to play the piano, or learn watercolour painting – or for that matter decide to reinvent itself and start a different career.”

    That’s a bold statement – “will ever” means just that. Ever.

    100 years ago, no-one believed a machine could beat a human master at chess, but that was shown to be wrong nearly 2 decades ago.

    10 years ago, recognising different objects in a scene from a photograph required a human – now, you can do it in software, in real time, on a smart phone.

    Self driving cars are near and the biggest issue is likely to be the ethics of the choices they must make when someone is unavoidably going to be injured or killed.

    AI is chasing an ever-moving target – as soon as it manages a task that “only humans can do”, what constitutes “intelligence” changes.

    “Proving” sentience is impossible – prove to me that you are sentient! You cannot. You can only prove that something is NOT sentient. Even if you keep passing my tests, I can always say “that doesn’t prove it”. If it looks sentient, sounds sentient and acts sentient, then for all intents and purposes, you should treat it as sentient.

    The same is true for emotions and creativity. “A computer can never create an artistic masterpiece” is a trite and uninformative test – the fact that you cannot do so either does not mean you are not intelligent, sentient, creative or human! That I sound and act angry is enough for you to conclude that I am, in fact, angry – even if you can’t objectively prove that I am even human, let alone angry.

    Machines can already outperform humans in many tasks, and the list is growing. The advantage humans currently have is the breadth of ability and the ability to learn. These too will eventually be outdone by machines – assuming we don’t destroy ourselves first. Even if we do destroy ourselves, the machines we have created may continue long after we are gone.

  24. egg_

    Machines can already outperform humans in many tasks, and the list is growing. The advantage humans currently have is the breadth of ability and the ability to learn. These too will eventually be outdone by machines – assuming we don’t destroy ourselves first. Even if we do destroy ourselves, the machines we have created may continue long after we are gone.

    Ask the machine if it will allow itself to be powered by 100% ruinables.

Comments are closed.