Video: What can we expect from the intersection between academia and industry? Some of the answers come from the past.
What is the role of both academia and industry in developing all of these neat new technologies that we’re learning about?
Fredo Durand speaks about the “watershed moment” he believes we are in now with AI and related advances.
“Unfortunately, I see a lot of anguish,” he says, “especially among some of the graduate students: that maybe academia just can’t compete with the resources in industry, especially in terms of scale of the input data, in terms of scale of the compute clusters, and the scale of the engineering teams.”
Durand expands on this by drawing on his own experience, comparing how academic and industrial research measure up to one another in the real world.
First, he compares today’s challenge of keeping up with industry’s ability to mobilize technology to a similar struggle from roughly 25 years ago, when he worked in computer graphics and academia tended, in some ways, to lag behind what industry was doing.
You can see, from some of the examples Durand presents, clues about what researchers were doing in labs: work that perhaps seemed lackluster in comparison to bold new projects like Toy Story coming out of industry, where there was an economic incentive to develop ever more powerful rendering programs.
Academia, though, he says, did not give up: instead, academics explored different ideas, such as lighting simulation and other more conceptual work, while industry tended to focus on the biggest, boldest, and brightest designs for the screen.
“We just explored radically different ideas, and we moved the field forward,” he says.
Check out his enumeration of the research directions academia pursued even as the graphics industry of that time dismissed them!
“At the time, industry was all about artistic control,” he says. “They kept telling us that they didn’t care about physical reality, and they didn’t want lighting simulation, video, and simulation of motion, or anything. They just wanted the artists to be able to tell the story and control every single thing that was going on, on the screen. And meanwhile, in academia, people were exploring the very thing that industry was telling us was not useful. People were looking at numerical algorithms for lighting simulation, for fluid and cloth simulation, and all sorts of other crazy ideas like appearance models for hair, skin, even machine learning for animation.”
Fast forward to today: Durand argues that computer rendering is now mainly based on academic research. In other words, all of that esoteric work academics were doing back then is now standard practice in the field.
In the graphics world, he contrasts industry’s focus on fixed-function rasterization with academic work on new ways of structuring hardware and innovative GPU design.
“People even had this crazy idea that maybe you could run computation on the GPUs, not just render images,” he says. “All of these ideas are now fundamental to modern graphics hardware, and in particular, to running general computation (on) GPUs, which powered the recent deep learning revolution.”
Academia has also made graphics more mathematical, Durand notes, while still producing practical tools.
He talks about the history of something called Halide, where two researchers, Jonathan Ragan-Kelley and Andrew Adams, worked on open source technology that became fundamental for companies like YouTube, Google, and Facebook.
“We were able to have more impact than similar projects in (the) industry, because we were able to step further away from existing practices,” Durand says. “In particular, we gave much more control to the programmer than what people were doing (with) traditional compilers. Also, we open sourced the compiler – it was eventually picked up by people in industry, in particular at Google and Adobe, who really made it industrial strength.”
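To make that “control to the programmer” concrete, here is a minimal sketch of Halide’s central idea, adapted from the canonical blur example in the Halide paper; the tile sizes, vector widths, and file names below are illustrative choices, not details from Durand’s talk. The algorithm part declares what to compute, and a separate schedule part declares how to organize that computation across tiles, vector lanes, and threads.

```cpp
// Minimal Halide sketch: algorithm vs. schedule (adapted from the
// canonical 3x3 blur example; parameters here are illustrative).
#include "Halide.h"
using namespace Halide;

int main() {
    ImageParam input(UInt(16), 2);   // a 2-D 16-bit image supplied at run time
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    // Algorithm: a horizontal then vertical 3-tap box blur.
    // (Halide infers that the input must extend one pixel past the output.)
    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // Schedule: tile the output, vectorize within tiles, parallelize rows,
    // and compute blur_x on demand inside each tile. Changing these lines
    // changes performance, never the result.
    blur_y.tile(x, y, xi, yi, 256, 32)
          .vectorize(xi, 8)
          .parallel(y);
    blur_x.compute_at(blur_y, x)
          .vectorize(x, 8);

    // Ahead-of-time compile to an object file callable from C.
    blur_y.compile_to_file("fast_blur", {input}, "fast_blur");
    return 0;
}
```

Because the schedule cannot change the computed result, a programmer can aggressively explore optimizations without risking correctness, which is exactly the kind of control over performance that a traditional compiler keeps to itself.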
Of those two creators, Durand notes, one went into academia and the other went into industry.
Lest we think of industry as the side blind to the larger world of innovation, Durand suggests that academia can have blind spots of its own.
He also describes some of the trends that lead researchers astray:
“Some real-world issues get ignored all the time,” he says. “And there’s sometimes a herd mentality for just incremental work on the topic that everybody’s working on.”
As for lessons from this history, he talks about the need to take a long-term view, learn to see change coming, and build a diverse set of skills and techniques.
“Don’t worry about field boundaries and whether something belongs to a field,” he suggests. “Develop theory and understanding of what’s going on. Learn skills and techniques outside the mainstream of your field. Learn about the real world, but don’t let it constrain you. And try to see change before everyone else, via nascent capabilities, shifts (in) bottlenecks or resources… exponentials happening that will change how the field works.”
On the practical side, he also recommends small teams and open source work: watch the video for more.
Then he presents what he calls the most important lesson of all:
“I think that there should be no anguish about the current situation,” Durand concludes. “The state of the field is very exciting, both the field of graphics, and the field of AI. I think everyone should really focus on having fun and enjoying the moment.”
What do you think?
Read the full article here