
Transformers for Natural Language Processing and Computer Vision

Explore Generative AI and Large Language Models with Hugging Face, ChatGPT, GPT-4V, and DALL-E 3

Mar 2024

Table of Contents

  1. What are Transformers?
  2. Getting Started with the Architecture of the Transformer Model
  3. Emergent vs Downstream Tasks: The Unseen Depths of Transformers
  4. Advancements in Translations with Google Trax, Google Translate, and Gemini
  5. Diving into Fine-Tuning through BERT
  6. Pretraining a Transformer from Scratch through RoBERTa
  7. The Generative AI Revolution with ChatGPT
  8. Fine-Tuning OpenAI GPT Models
  9. Shattering the Black Box with Interpretable Tools
  10. Investigating the Role of Tokenizers in Shaping Transformer Models
  11. Leveraging LLM Embeddings as an Alternative to Fine-Tuning
  12. Toward Syntax-Free Semantic Role Labeling with ChatGPT and GPT-4
  13. Summarization with T5 and ChatGPT
  14. Exploring Cutting-Edge LLMs with Vertex AI and PaLM 2
  15. Guarding the Giants: Mitigating Risks in Large Language Models
  16. Beyond Text: Vision Transformers in the Dawn of Revolutionary AI
  17. Transcending the Image-Text Boundary with Stable Diffusion
  18. Hugging Face AutoTrain: Training Vision Models without Coding
  19. On the Road to Functional AGI with HuggingGPT and its Peers
  20. Beyond Human-Designed Prompts with Generative Ideation
PDF ISBN: 978-1-80512-374-3
Publisher: Packt Publishing Limited
Copyright owner: © 2024 Packt Publishing Limited
Publication date: 2024
Language: English
Pages: 730