Tag: notes

  • Weight-sparse transformers have interpretable circuits

    Weight-sparse transformers have interpretable circuits: train models with sparse weights, using pruning to study interpretability and to find connections between sparse and dense models. Transform: Encoder, Decoder; from tokens to embeddings to tokens | from electricity to magnetics to electricity | Fourier Transform | LLM Visualization Overall Setup | Superposition | Sparse Model Training: sparse models contain…
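
    A minimal sketch of one way to get weight sparsity, assuming magnitude-based top-k pruning of linear layers in PyTorch; the function name apply_topk_weight_sparsity and the keep_fraction parameter are illustrative, not from the note, and the actual work enforces sparsity during training rather than as a one-shot post-hoc prune.

      import torch
      import torch.nn as nn

      def apply_topk_weight_sparsity(module: nn.Module, keep_fraction: float = 0.05) -> None:
          # Hypothetical sketch: zero out all but the largest-magnitude weights
          # in every nn.Linear layer of the given module.
          for layer in module.modules():
              if isinstance(layer, nn.Linear):
                  w = layer.weight.data
                  k = max(1, int(keep_fraction * w.numel()))
                  # Threshold at the k-th largest absolute weight; smaller weights become zero.
                  threshold = w.abs().flatten().topk(k).values.min()
                  layer.weight.data = torch.where(w.abs() >= threshold, w, torch.zeros_like(w))

      # Usage: prune a toy MLP block and check the resulting weight sparsity.
      mlp = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
      apply_topk_weight_sparsity(mlp, keep_fraction=0.05)
      total = sum(p.numel() for p in mlp.parameters() if p.dim() > 1)
      nonzero = sum((p != 0).sum().item() for p in mlp.parameters() if p.dim() > 1)
      print(f"nonzero weight fraction: {nonzero / total:.3f}")

    With only a few percent of weights left nonzero, each output unit depends on a small, enumerable set of inputs, which is the property that makes the resulting circuits easier to inspect.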

  • The Secret by Rhonda Byrne

    The Secret. Abstract: Make a list of some secret shifters to have up your sleeve. These are the things that can change your feelings in a snap. It might be beautiful memories, future events, funny moments, nature, a person you love, your favourite music. Different things will shift you at different times, so if one…