resources

some helpful pointers

Early researchers!

Check out my blog post on getting involved with ML at Rice. You should also be aware of:

  • REU programs, which are funded summer research programs for U.S. students. Many target students with little prior research experience and those from underrepresented backgrounds.
  • Barry Goldwater Scholarship for sophomores and juniors. Schools usually pre-select candidates.
  • Cornell, Maryland, and Max Planck Pre-Doctoral Summer School for those debating whether to pursue CS research. You get a fun, fully funded trip to Europe as well!
  • NSF GRFP, Hertz, and DOE CSGF fellowships for Ph.D. applicants. Apply: winning one can make you an incredibly competitive applicant.
    • Alex Lang wrote an insightful guide for the NSF GRFP specifically. It also contains many example essays submitted by the broader community.
  • Questions to ask prospective research advisors.
  • The widely shared GitHub repository of Ph.D. and grad school advice.

General research.

More related to my focus…

  • A primer on optimal transport by Marco Cuturi and Justin Solomon.
  • Rémi Flamary’s wonderful lightning view of Gromov-Wasserstein for graph learning (a small code sketch follows this list).
  • A goldmine of talks on theoretical properties of the Gromov-Wasserstein and entropic Gromov-Wasserstein spaces, proven by Ziv and his group. Follow him!
  • Been Kim has many insightful talks about interpretability, including on the dangers of misusing the term.
  • Many leading researchers spoke about the state of deep learning (2023) in this piece, which has an insightful subsection on interpretability. I especially concur with Zachary Lipton’s statement below:

    Zachary Lipton: Interpretability may be one of the most confused topics in all of machine learning, fraught with confusion and conflict. To begin, the word is badly overloaded. Read an interpretability paper selected at random and you’ll find representations (or insinuations) that the work is addressing “trust”, “insights”, “fairness”, “causality”. Then look at what the authors actually do and you’ll be hard-pressed to tie the method back to any of these underlying motivations. Half the papers produce a set of feature importance scores, describing this “importance” in cryptic ways: “what the model is looking at”, “what its internal logic depends on to make this particular prediction”.
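If you'd rather poke at the optimal-transport pointers above in code, here is a minimal sketch using POT (Python Optimal Transport), the library Rémi Flamary maintains. The toy point clouds, the normalization, and the epsilon value are my own illustrative choices, not anything prescribed by the talks above:

```python
# Toy Gromov-Wasserstein matching between two point clouds living in
# *different* spaces, so only intra-space distances can be compared.
# Requires: pip install pot numpy
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)

# Two metric spaces represented only by their intra-space distance matrices.
xs = rng.normal(size=(20, 2))   # source points in R^2
xt = rng.normal(size=(30, 3))   # target points in R^3 (different dimension!)

C1 = ot.dist(xs, xs)            # pairwise squared Euclidean distances
C2 = ot.dist(xt, xt)
C1 /= C1.max()                  # normalize scales so the losses are comparable
C2 /= C2.max()

p = ot.unif(len(xs))            # uniform marginals on each space
q = ot.unif(len(xt))

# Exact (non-entropic) Gromov-Wasserstein coupling.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')

# Entropic variant (Sinkhorn-style regularization); epsilon is illustrative.
T_ent = ot.gromov.entropic_gromov_wasserstein(
    C1, C2, p, q, 'square_loss', epsilon=5e-3
)

print(T.shape)   # (20, 30): a soft matching between the two point sets
print(T.sum())   # ~1.0: T is a coupling of the two marginals
```

The coupling matrix T tells you which source points correspond to which target points using only the internal geometry of each space, which is exactly what makes Gromov-Wasserstein attractive for graph learning.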

Misc.