This posting reveals many fears and concerns, but presents no evidence that there is actual “control” of the research agenda. Have you considered the alternative hypothesis that by hiring so many faculty into their research labs, the companies find themselves in a situation where the research agenda is being set by academically-minded current and former professors rather than by the C-suite?
More importantly, there are many research topics where there is wide consensus across academia, industry, and government about research priorities. We all want ML to generalize better, to model causal relationships, to use fewer computational resources, to be more explainable/interpretable/debuggable, and to be more robust to domain shift, measurement error, labeling error, and so on. We all want systems with deeper understanding than current large language and vision models evince. We are all interested in making ML more private and secure. We want to figure out how to create federated learning systems where parties can learn collaboratively while preserving privacy. We all want to strengthen ML operations, continuous deployment, continuous improvement, and so on. When there is such broad consensus about the important questions, there is no issue of “control”.
The most contentious topics are those that extend beyond purely technical questions to questions of socio-technical systems, power dynamics, and so on. These range from human-centered AI in the small (e.g., building AI to empower individual people), to the study of feedback loops between recommender systems and human behavior, to structural questions about who is empowered vs disempowered by