Abstract
Why interdisciplinary research in AI is so important, according to Jurassic Park.
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
I think this quote resonates with us now more than ever, especially in the world of technological development; the writers of Jurassic Park were years ahead of their time.
As we build new technology and push to see what can actually be achieved, there is an undertone of sales to whatever we build: the end product must be sold somewhere, and to someone. This can derail any good intentions. Just as a resort full of dinosaurs was sold as a fun attraction, we see later in the film that it became a resort of terror.
In the field of AI we are certainly late to the party with ethics and regulation. Indeed, even existing modelling protocols have, in many cases, been circumvented or simply ignored. This has widened the gap between the disciplines that make up AI research, a gap compounded by pop-culture representations of AI and by ethicists' potential lack of knowledge of technical progress in the field.
There are now two seemingly separate branches that ought to be in sync. The first is a group of philosophers led by Nick Bostrom, author of Superintelligence, who argue that a singularity could arrive in which AI takes over the world and begins to kill off humans. The second is the technical cohort, led by companies such as Google and DeepMind. The remit of these developers is to see what can be developed and produced; ultimately, a separate sales team will determine what products can be sold.
We have already seen multiple failures of AI caused by a lack of interdisciplinary discourse, including cases where financial support or healthcare was misallocated or cut off entirely. This indicates the prevalence of issues that will only grow more complex.
So, we have to ask ourselves two questions:
One: do the philosophers have a point? Two: could humans program AI in such a way that the destruction of the human race becomes its optimal aim, in service of creating a better future world?
Is the development of AI really so different from an island of genetically engineered dinosaurs?
The only solution, in my view, is interdisciplinary discourse.