Every week, Fenwick’s intellectual property associate Zach Harned puts together a handful of amusing and/or informative AI-related stories trending around the world.
- Labeling data for use in a machine learning model can be an arduous, lengthy, and expensive undertaking. This is particularly true in the medical AI world, where label accuracy is paramount and the task may require expert medical professionals to do the labeling. Take a look at this impressive new work from Harvard, where a machine learning model used medical notes to teach itself to spot disease on chest X-rays, minimizing the need for painstaking data labeling.
- A cool application of DeepMind’s AlphaFold: accelerating the fight against plastic pollution — Unfolded
- New blog post on reconstructing 3D structure and appearance from a single 2D image. Google AI Blog: LOLNeRF: Learn from One Look
- Quantifying GitHub Copilot’s impact on developer productivity and happiness. Stanford’s Dr. Curt Langlotz says AI will not replace radiologists, but radiologists who use AI will replace radiologists who don’t. Will that be the case for software engineers using tools like Copilot and Codex?
- Clearly, large language models (LLMs) can be put to impressive uses, such as GitHub’s Copilot tool discussed above. However, it can be quite difficult to update LLMs, leaving potentially large deficits in the “knowledge” of the LLM (e.g., an LLM trained in 2019 not knowing about COVID). Researchers have developed SERAC, an add-on system to help deal with this issue.
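To make the idea concrete, here is a highly simplified sketch of a SERAC-style setup: edits live in an external memory that is consulted before the frozen base model ever answers. The names, the stand-in `base_model`, and the trivial exact-match "scope check" below are all illustrative assumptions, not the paper's actual components (SERAC trains a scope classifier and a counterfactual model rather than using lookups).

```python
# Toy sketch of a SERAC-style wrapper (illustrative only, not the paper's
# implementation): edits are stored in an external memory, and queries that
# fall within the scope of a stored edit are answered from that memory
# instead of the frozen, possibly out-of-date base model.

def base_model(query: str) -> str:
    """Stand-in for a frozen LLM with stale knowledge."""
    stale_answers = {"capital of country X": "Old City"}
    return stale_answers.get(query, "I don't know.")

class EditMemory:
    def __init__(self):
        self.edits = {}  # query -> updated answer

    def add_edit(self, query: str, answer: str):
        self.edits[query] = answer

    def in_scope(self, query: str) -> bool:
        # Trivial exact-match scope check; real SERAC learns a classifier
        # that generalizes to paraphrases of the edited fact.
        return query in self.edits

    def answer(self, query: str) -> str:
        if self.in_scope(query):
            return self.edits[query]   # stand-in for the counterfactual model
        return base_model(query)       # fall back to the frozen LLM

memory = EditMemory()
memory.add_edit("capital of country X", "New City")
print(memory.answer("capital of country X"))  # New City (edited)
print(memory.answer("unrelated question"))    # I don't know. (untouched)
```

The appeal of this design is that the base model's weights never change, so an edit can be added, inspected, or rolled back without retraining.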
- Anthropic takes a deep dive into polysemanticity, the phenomenon in which a neural network stuffs a number of unrelated concepts into a single neuron, making the model harder to interpret. Anthropic’s new paper not only explores this important phenomenon, but also gives you an excuse to use the word “superposition” around your non-quantum-mechanics pals.