Dataset dimensions

#2 by idotr7

I am looking to use this dataset.
I have tried both LangChain and LlamaIndex, but both give me trouble over the embedding dimensions.
How can I convert the embedding column to 1536 dimensions, or has anyone else run into this and solved it another way?

idotr7 changed discussion title from DB dimensions to Dataset dimensions
MongoDB org

Hi @idotr7 ,

This dataset comes with 256-dimensional embeddings already generated. If you wish to re-encode the data with 1536 dimensions, you need to delete this column and reinsert the data with the new encoding.

https://github.com/mongodb-developer/GenAI-Showcase/blob/main/notebooks/rag/mongodb-langchain-cache-memory.ipynb

See this notebook for how embeddings can be regenerated.

Yes, for now I am using those embeddings, but I don't know how to change the query from 1536 to 256 dimensions.
I tried with $vectorSearch, and with LlamaIndex and LangChain.

It is not clear how to re-encode the dataset, because other tutorials index strings and integers, while here the relevant data is a dict, and that seems to cause problems.

MongoDB org

When you specify the embedding model, you can set the dimensions parameter.

Make sure that the vector index is also built with those dimensions:

            response = client.embeddings.create(
                input=search,
                model="text-embedding-3-small",
                dimensions=256
            )

It should be a similar idea with the LangChain config. If you want, share your code.

It is already configured that way.

This is the error:
pymongo.errors.OperationFailure: PlanExecutor error during aggregation :: caused by :: vector field is indexed with 1536 dimensions but queried with 256, full error: {'ok': 0.0, 'errmsg': 'PlanExecutor error during aggregation :: caused by :: vector field is indexed with 1536 dimensions but queried with 256', 'code': 8, 'codeName': 'UnknownError', '$clusterTime': {'clusterTime': Timestamp(1718643379, 12), 'signature': {'hash': b'\xe5\x93\x05\xferJ\x8b\xfa\x97\x17yW\xffi\x06\xd1\xb3\xe0\xc8\xed', 'keyId': 7327356602321731586}}, 'operationTime': Timestamp(1718643379, 12)}
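The error says the query vector's length does not match the index's dimensions. A minimal sketch of a $vectorSearch pipeline illustrates where the two must agree (the index name "vector_index" here is a hypothetical placeholder, not from the thread):

```python
# Stand-in for a 256-dimensional query embedding; in practice this comes
# from the embedding model called with dimensions=256.
query_vector = [0.1] * 256

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",      # hypothetical Atlas Vector Search index name
            "path": "embedding",          # field holding the stored vectors
            "queryVector": query_vector,  # its length must equal the index's numDimensions
            "numCandidates": 100,
            "limit": 5,
        }
    }
]

# results = restaurants_collection.aggregate(pipeline)
```

If the index was built with numDimensions: 1536, a 256-element queryVector produces exactly the OperationFailure shown above.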

MongoDB org

The vector index you built has 1536 dimensions, but you need to rebuild it specifying 256.
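As a sketch of what rebuilding could look like, here is a vector index definition with 256 dimensions; the index name and the cosine similarity metric are assumptions, not specified in this thread:

```python
# Atlas Vector Search index definition for 256-dimensional embeddings.
# "vector_index" and "cosine" are illustrative choices.
index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",   # field that stores the embedding array
            "numDimensions": 256,  # must match the dimensions of the stored vectors
            "similarity": "cosine",
        }
    ]
}

# Against an Atlas cluster, this could be applied with pymongo (4.6+):
# from pymongo.operations import SearchIndexModel
# restaurants_collection.create_search_index(
#     SearchIndexModel(definition=index_definition,
#                      name="vector_index", type="vectorSearch")
# )
```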

Thank you, I tried it. When I embed a query, it seems the embedding does not include enough of the data.
If I want to re-encode with 1536 dimensions and include more data, not just the menu and attributes, how can I do it?

Can you please help, @Pash1986?
I am quite stuck on this...

MongoDB org

You can use json.dumps(...) and include any set of fields. It will basically serialize all keys and values into one string.

But I wonder: which fields do you want to embed?
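For example, a minimal sketch of this flattening step, with made-up stand-in values for a restaurant's fields:

```python
import json

# Illustrative stand-in values; the real dataset's fields differ.
menu = ["falafel", "hummus"]
attributes = {"delivery": True, "outdoor_seating": False}

# json.dumps turns the selected fields into one string that an
# embedding model can consume as a single input.
combined_text = json.dumps({"menu": menu, "attributes": attributes})
print(combined_text)
# → {"menu": ["falafel", "hummus"], "attributes": {"delivery": true, "outdoor_seating": false}}
```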

In addition to what is already embedded, the cuisine field, but mainly I want to make the embedding 1536 dimensions.

MongoDB org

I haven't tested this code, but it should be along these lines:

import os
from pymongo import MongoClient
import datasets
from datasets import load_dataset
from bson import json_util
from openai import OpenAI

# Set up OpenAI client
openai_api_key = os.environ.get('OPENAI_API_KEY')
openai_client = OpenAI(api_key=openai_api_key)

# Function to create a new embedding of 1536 dimensions from menu and attributes using OpenAI API
def create_new_embedding(menu, attributes):
    combined_text = ' '.join(menu) + ' ' + ' '.join(attributes)
    response = openai_client.embeddings.create(
        input=combined_text,
        model='text-embedding-ada-002'
    )
    new_embedding = response.data[0].embedding
    return new_embedding

uri = os.environ.get('MONGODB_ATLAS_URI')
mongo_client = MongoClient(uri)
db_name = 'whatscooking'
collection_name = 'restaurants'

restaurants_collection = mongo_client[db_name][collection_name]
restaurants_collection.delete_many({})

dataset = load_dataset("MongoDB/whatscooking.restaurants")

insert_data = []

for restaurant in dataset['train']:
    # Extract the menu and attributes
    menu = restaurant.get('menu', [])
    attributes = restaurant.get('attributes', [])

    # Create the new embedding
    new_embedding = create_new_embedding(menu, attributes)

    # Overwrite the embedding field
    restaurant['embedding'] = new_embedding

    doc_restaurant = json_util.loads(json_util.dumps(restaurant))
    insert_data.append(doc_restaurant)

    if len(insert_data) == 1000:
        restaurants_collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

if len(insert_data) > 0:
    restaurants_collection.insert_many(insert_data)
    insert_data = []

print("Data Ingested")

I will check it out
thank you!

MongoDB org

Hey @idotr7 ,

I've tested the following code:

import os
from pymongo import MongoClient
import datasets
from datasets import load_dataset
from bson import json_util
import openai
import json

# Set up OpenAI client
openai_api_key = os.environ.get('OPENAI_API_KEY')

openai.api_key = openai_api_key


# Function to create a new embedding of 1536 dimensions from menu and attributes using OpenAI API
def create_new_embedding(menu, attributes):
    combined_text = json.dumps({
        'menu': menu,
        'attributes': attributes
    })
    response = openai.embeddings.create(
        input=combined_text,
        model='text-embedding-3-small'
    )
    new_embedding = response.data[0].embedding
    return new_embedding

uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)
db_name = 'whatscooking'
collection_name = 'restaurants'

restaurants_collection = client[db_name][collection_name]
restaurants_collection.delete_many({})

dataset = load_dataset("MongoDB/whatscooking.restaurants")

insert_data = []

for restaurant in dataset['train']:
    # Extract the menu and attributes
    menu = restaurant.get('menu', [])
    attributes = restaurant.get('attributes', [])

    # Create the new embedding
    new_embedding = create_new_embedding(menu, attributes)

    # Overwrite the embedding field
    restaurant['embedding'] = new_embedding

    doc_restaurant = json_util.loads(json_util.dumps(restaurant))
    insert_data.append(doc_restaurant)

    if len(insert_data) == 1000:
        restaurants_collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

if len(insert_data) > 0:
    restaurants_collection.insert_many(insert_data)
    insert_data = []

print("Data Ingested")

Hope that helps.

Hi @Pash1986 , thank you!
The dict fields that get embedded will just be one long string, right?

MongoDB org

In essence, yes. It is the string of joined data that is being encoded into an embedding.
