resources

some helpful pointers

Early researchers!!!

Check out my blog post on getting involved with ML at Rice. You should also be aware of…

General research.

More related to my focus…

  • A primer on optimal transport by Marco Cuturi and Justin Solomon.
  • Rémi Flamary’s wonderful lightning view of Gromov-Wasserstein for graph learning.
  • A goldmine of talks on theoretical properties of Gromov-Wasserstein and entropic Gromov-Wasserstein spaces, proven by Ziv and his group. Follow him!
  • Been Kim has many insightful talks about interpretability, including the dangers of misusing the term.
  • Many leading researchers spoke about the state of deep learning (2023) in this piece, with an insightful subsection on interpretability. I especially concur with Zachary Lipton’s statement below:

    Zachary Lipton: Interpretability may be one of the most confused topics in all of machine learning, fraught with confusion and conflict. To begin, the word is badly overloaded. Read an interpretability paper selected at random and you’ll find representations (or insinuations) that the work is addressing “trust”, “insights”, “fairness”, “causality”. Then look at what the authors actually do and you’ll be hard-pressed to tie back the method to any of these underlying motivations. Half the papers produce a set of feature importance scores, describing this “importance” in cryptic ways: “what the model is looking at”, “what its internal logic depends on to make this particular prediction”.
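If the optimal-transport pointers above feel abstract, the core idea of entropic OT can be sketched in a few lines of NumPy via Sinkhorn iterations. This is a minimal illustration I wrote for intuition, not code from any of the linked resources; the function name and the toy point clouds are my own:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iters=1000):
    """Entropic optimal transport via Sinkhorn iterations.

    Finds a coupling P minimizing <P, C> - eps * H(P)
    whose marginals are a (rows) and b (columns).
    """
    K = np.exp(-C / eps)           # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)          # rescale columns toward marginal b
        u = a / (K @ v)            # rescale rows toward marginal a
    return u[:, None] * K * v[None, :]

# Toy example: transport mass between two point clouds on the line.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5])
C = (x[:, None] - y[None, :]) ** 2   # squared-distance cost
a = np.full(3, 1 / 3)                # uniform source marginal
b = np.full(2, 1 / 2)                # uniform target marginal
P = sinkhorn(a, b, C)                # 3x2 coupling matrix
```

The coupling `P` is nonnegative and (approximately) satisfies both marginal constraints; shrinking `eps` moves it toward the unregularized OT plan at the cost of slower convergence. Gromov-Wasserstein generalizes this by comparing pairwise-distance structures instead of a shared cost matrix.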

Misc.