Query Relaxation And Scoping As Part Of Semantic Search


Make it easier for your users to find you. Discover how query relaxation and query scoping can help in this in-depth article.
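
As a rough illustration: query scoping restricts a search to a chosen facet (for example a category field), while query relaxation progressively drops constraints when a scoped query returns too few results. The sketch below is hypothetical toy code, not taken from the article; the document set, the `search` helper, and the drop-the-last-term relaxation policy are all illustrative assumptions.

```python
# Toy document set; in practice this would be a search index.
DOCS = [
    {"title": "red running shoes", "category": "footwear"},
    {"title": "blue running shoes", "category": "footwear"},
    {"title": "red rain jacket",    "category": "outerwear"},
]

def search(terms, category=None):
    """Scoped search: every term must match, optionally within one category."""
    hits = []
    for doc in DOCS:
        if category and doc["category"] != category:
            continue  # query scoping: restrict results to the requested facet
        if all(term in doc["title"] for term in terms):
            hits.append(doc["title"])
    return hits

def relaxed_search(terms, category=None, min_hits=1):
    """Query relaxation: drop terms until the query returns enough hits."""
    while terms:
        hits = search(terms, category)
        if len(hits) >= min_hits:
            return hits
        terms = terms[:-1]  # relax: remove the last (assumed least important) term
    return []

print(search(["red", "shoes"], category="footwear"))        # scoped exact match
print(relaxed_search(["red", "sandals"], category="footwear"))  # relaxes to ["red"]
```

A real engine would relax more carefully (by term weight, synonyms, or fuzziness) rather than simply truncating, but the control flow is the same.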

Semantic Search Explained

This video is one of a series which explains some of the research being carried out at DERI, NUI Galway.

This video explains how semantic search technologies will make it easier to find specific information on the Web and allow that information to be conveniently shared.

OpenAI Q&A: Finetuning GPT-3 vs Semantic Search – which to use, when, and why?


Text embeddings & semantic search

Learn how Transformer models can be used to represent documents and queries as vectors called embeddings. In this video, we apply this technique to create a semantic search engine!
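
To make the idea concrete, here is a minimal, self-contained sketch of embedding-based search. It deliberately replaces a real Transformer model with a toy bag-of-words "embedding" so it runs with no downloads; the `embed` and `semantic_search` helpers are illustrative assumptions, and a real setup would produce dense vectors from a Transformer as shown in the course.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "how to load a dataset with the datasets library",
    "fine-tuning a transformer model for classification",
    "building a semantic search engine with embeddings",
]

def semantic_search(query, top_k=1):
    """Embed the query, then rank documents by vector similarity."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

print(semantic_search("search engine using embeddings"))
```

The point of real embeddings is that they also match *paraphrases* with no shared words, which this bag-of-words toy cannot do; the ranking machinery, however, is identical.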

This video is part of the Hugging Face course: http://huggingface.co/course
Open in colab to run the code samples:

Related videos:
– Loading a custom dataset — https://youtu.be/HyQgpJTkRdE
– Slice and dice a dataset — https://youtu.be/tqfSFcPMgOI

Don’t have a Hugging Face account? Join now: http://huggingface.co/join
Have a question? Check out the forums: https://discuss.huggingface.co/c/course/20
Subscribe to our newsletter: https://huggingface.curated.co/

OpenAI’s New GPT 3.5 Embedding Model for Semantic Search

In this video we’ll learn how to use OpenAI’s new embedding model text-embedding-ada-002.

We will learn how to use the OpenAI Embedding API to generate language embeddings, and then index those embeddings in the Pinecone vector database for fast and scalable vector search.

This is a powerful and common combination for building semantic search, question-answering, threat-detection, and other applications that rely on NLP and search over a large corpus of text data.

Everything will be implemented with OpenAI's new GPT-3.5-class embedding model, text-embedding-ada-002: their latest embedding model, which is 10x cheaper than its predecessors, more performant, and able to index roughly 10 pages of text in a single vector embedding.
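
The embed-then-index flow described above can be sketched locally. The `VectorIndex` class below is a hypothetical, brute-force stand-in for a vector database such as Pinecone: it mirrors the common upsert/query pattern but keeps vectors in memory and ranks by exact cosine similarity, and it uses hand-written toy vectors instead of real ada-002 embeddings, so no API keys are required.

```python
import math

class VectorIndex:
    """In-memory stand-in for a vector database (illustrative, not Pinecone)."""

    def __init__(self):
        self.items = {}  # id -> embedding vector

    def upsert(self, vectors):
        """Insert or update (id, vector) pairs, mirroring the upsert pattern."""
        for vid, vec in vectors:
            self.items[vid] = vec

    def query(self, vector, top_k=3):
        """Return the ids of the top_k most similar stored vectors."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(self.items,
                        key=lambda i: cos(vector, self.items[i]),
                        reverse=True)
        return ranked[:top_k]

# Toy 3-dimensional vectors; a real pipeline would store 1536-dim
# ada-002 embeddings returned by the OpenAI Embedding API.
index = VectorIndex()
index.upsert([("doc-1", [1.0, 0.0, 0.2]),
              ("doc-2", [0.1, 1.0, 0.0]),
              ("doc-3", [0.9, 0.1, 0.3])])
print(index.query([1.0, 0.0, 0.25], top_k=2))
```

A managed service like Pinecone adds approximate-nearest-neighbor indexing so queries stay fast at millions of vectors, which brute-force scanning cannot do.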

Pinecone docs:
Colab notebook:


00:30 Semantic search with OpenAI GPT architecture
03:43 Getting started with OpenAI embeddings in Python
04:12 Initializing connection to OpenAI API
05:49 Creating OpenAI embeddings with ada
07:24 Initializing the Pinecone vector index
09:04 Getting dataset from Hugging Face to embed and index
10:03 Populating vector index with embeddings
12:01 Semantic search querying
15:09 Deleting the environment
15:23 Final notes