Richard A Aragon

TuringsSolutions

TuringsSolutions's activity

posted an update 9 days ago
ChatGPT does better at math if you prompt it to think like Captain Picard from Star Trek. Scientifically proven fact lol. This got me thinking: LLM models probably 'think' about the world in weird ways, far different from how we would. That sent me down a rabbit hole of reworking familiar concepts from an LLM's point of view. Somewhere along the way, Python Chemistry was born. To an LLM model, there is a strong connection between Python and chemistry. To an LLM model, it is easier to understand exactly how Python works if you frame it in terms of chemistry.

Don't believe me? Ask Python-Chemistry-GPT yourself: https://chatgpt.com/g/g-dzjYhJp4U-python-chemistry-gpt

Want to train your own Python-GPT and prove this concept actually works? Here is the dataset: https://huggingface.co/.../TuringsSolu.../PythonChemistry400
posted an update 21 days ago
The word 'Lead' has three definitions. When an LLM model tokenizes this word, it is always the same token. Imagine being able to put any particular embedding, at any particular time, into a 'Quantum State'. When an embedding is in a Quantum State, the word token could have up to 3 different meanings (x1, x2, x3). The Quantum State collapses based on the context surrounding the word. 'Jill lead Joy to the store' would collapse to x1. 'Jill and Joy stumbled upon a pile of lead' would collapse to x3. Very simple, right? This method produces OFF THE CHARTS results:


https://www.youtube.com/watch?v=tuQI6A-EOqE
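For intuition, here is a minimal sketch of the idea, not the implementation from the video: each ambiguous token keeps several candidate sense vectors, and a summary of the surrounding context produces the mixture weights that 'collapse' the embedding. The class and parameter names (SenseCollapseEmbedding, num_senses) are hypothetical.

import torch
from torch import nn

class SenseCollapseEmbedding(nn.Module):
    """Illustrative 'quantum state' embedding: each token stores several sense
    vectors (x1, x2, x3, ...) and the surrounding context decides how they collapse."""
    def __init__(self, vocab_size, embed_size, num_senses=3):
        super().__init__()
        self.senses = nn.Embedding(vocab_size, embed_size * num_senses)  # num_senses slots per token
        self.num_senses = num_senses
        self.embed_size = embed_size
        self.collapse = nn.Linear(embed_size, num_senses)  # scores each sense against a context summary

    def forward(self, input_ids):
        batch, seq = input_ids.shape
        senses = self.senses(input_ids).view(batch, seq, self.num_senses, self.embed_size)
        # Cheap stand-in for 'context': the mean embedding of the whole sequence
        context = senses.mean(dim=2).mean(dim=1, keepdim=True)    # [batch, 1, embed]
        weights = torch.softmax(self.collapse(context), dim=-1)   # [batch, 1, num_senses]
        # 'Collapse' each token to a context-weighted mix of its senses
        return (weights.unsqueeze(-1) * senses).sum(dim=2)        # [batch, seq, embed]

# e.g. emb = SenseCollapseEmbedding(vocab_size=30522, embed_size=768)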
posted an update 24 days ago
'Tell Me About the World' is based on Concepts, Relationships, and Context. This is how we as humans learn about the world. If you were to distill geometry, or philosophy, you would get: Concepts, Relationships, and Context. Using two Colab Notebooks, we demonstrate beyond any shadow of a doubt that it is possible to educate LLM models using this framework of Concepts, Relationships, and Context, and that the model actually grasps the relationships and context when we do. Explore the full code behind 'AI ABC's' and 'AI 123's' in our Colab Notebooks, which are available from this video!


https://www.youtube.com/watch?v=yz0sd8ayenI
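Purely as a hypothetical illustration of the framework (the notebooks' actual data format may differ), one Concepts/Relationships/Context training row could look something like this:

# Hypothetical row illustrating the Concepts / Relationships / Context framing
example_row = {
    "concept": "triangle",
    "relationships": ["has three sides", "interior angles sum to 180 degrees", "is a polygon"],
    "context": "Euclidean plane geometry",
}
prompt = (f"Concept: {example_row['concept']}\n"
          f"Relationships: {'; '.join(example_row['relationships'])}\n"
          f"Context: {example_row['context']}")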
posted an update 25 days ago
I developed a way to test very clearly whether or not a Transformers model can actually learn symbolic reasoning, or if LLM models are forever doomed to be offshoots of 'Stochastic Parrots'. The results are in: undeniable proof that Transformers models CAN learn symbolic relationships. Undeniable proof that AI can learn its ABC's. Credit goes to myself, Claude, and ChatGPT. I would not be able to prove this without Claude or ChatGPT.



https://www.youtube.com/watch?v=I8jHRgahRfY
posted an update about 1 month ago
'Legal Dictionary GPT' is now completely trained and ready for open-source release to the world! Trained on 10,000 rows of legal definitions, Legal Dictionary GPT is your go-to resource for the first step in understanding the law: defining it. The model is free and publicly available for anyone to use.

Model Link: https://platform.openai.com/playground/chat?preset=eCrKdaPe9cnMnyTETqWDCQAU

Knowledge Base Bots are internal-facing (as opposed to external-facing) LLM models that are either fine-tuned or RAG-tuned, generally on systems- and process-related data.

Learn more about Knowledge Base Bots at our website:
https://knowledgebasebots.com/

replied to their post about 1 month ago

Geometric fractals do not allow you to sacrifice accuracy at all; that is not how geometry works. That happens to be how calculus works. It suddenly paid off to understand math theory. I didn't believe it when I did it either.

posted an update about 1 month ago
I know a secret about knowledge graphs that the world doesn't! There are severe mathematical limitations to geometric fractals; getting around them is classed as an 'unsolvable problem' in the mathematics world. There are currently only around 1,000 mathematicians in the world who give this problem serious thought. You literally cannot solve it with geometric fractals. This is why I invented P-FAF: it uses calculus-based fractals instead. I literally invented the math to make it work. I solved an 'unsolvable equation' to make the math work. In the end, you can ONLY make the math work the way I did it. I have never released the licensing commercially. Good luck!
replied to their post about 1 month ago

There is no such thing as a stupid question when trying to learn; that is how we learn. Here, this will help you more than anything else. You need to put in your own HuggingFace token, you need to change the model name, and you need to use a different dataset. I have the PFAF750 dataset in my profile, and about 90% of my datasets are a blend with P-FAF data.

Do not delete the Colab runtime (and its RAM) until you are done playing around with the model. When you upload the model to HuggingFace, it will be quantized. That model will perform worse than the model in your Colab notebook; it is how it is. That's the only way to keep it all free.

The training arguments in this notebook are for the Adam optimizer and LoRA fine-tuning. That's 90% of what you need to know.

https://colab.research.google.com/drive/1KIRKGGB-LAqEhICQdtn_aJt8CzsWK6mH?usp=sharing
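For reference, a minimal sketch of that kind of setup (LoRA adapters trained with an Adam-family optimizer via peft and transformers). This is not the linked notebook's code; the base model, dataset repo id, 'text' column, and hyperparameters below are placeholders to swap for your own.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"            # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach LoRA adapters so only a small set of extra weights is trained
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

ds = load_dataset("TuringsSolutions/PFAF750", split="train")   # assumed repo id; use your own dataset
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))  # assumes a 'text' column

args = TrainingArguments(output_dir="pfaf-lora", per_device_train_batch_size=2,
                         num_train_epochs=1, learning_rate=2e-4,
                         optim="adamw_torch")                   # Adam-family optimizer
Trainer(model=model, args=args, train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()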

replied to their post about 1 month ago

Just let me know if you want anymore help at all!

import torch
from torch import nn
from transformers import AutoTokenizer, AutoConfig, BertModel

# Define fractal functions
def f1(x):
    return x**2 + 0.1

def f2(x):
    return 1 - (2 * x - 1)**4

# Custom P-FAF embedding layer
class PFAFEmbedding(nn.Module):
    def __init__(self, embed_size, fractal_funcs, num_fractals=None):
        super().__init__()
        self.fractal_funcs = fractal_funcs
        self.num_fractals = num_fractals if num_fractals is not None else len(fractal_funcs)
        self.p = nn.Parameter(torch.rand(self.num_fractals))              # Probabilistic weights
        self.d = nn.Parameter(torch.rand(self.num_fractals) * 1.5 + 0.5)  # Fractional dimensions
        self.embed_size = embed_size

    def forward(self, x):
        # x: [batch_size, seq_length, embed_size]
        batch_size, seq_length, _ = x.shape
        x_expanded = x.unsqueeze(1).expand(-1, self.num_fractals, -1, -1)  # [batch_size, num_fractals, seq_length, embed_size]

        # Apply fractional dimensions (note: fractional powers of negative values yield NaNs)
        x_dim = torch.pow(x_expanded, 1 / self.d.view(1, -1, 1, 1))

        # Blend the fractal functions according to their probabilistic weights
        t = sum(p * f(x_dim[:, i, :, :]) for i, (p, f) in enumerate(zip(self.p, self.fractal_funcs)))

        return t

# Custom BERT model with the P-FAF embedding
# (subclass BertModel directly; the Auto* classes are factories and cannot be subclassed this way)
class AutoModelWithPFAF(BertModel):
    def __init__(self, config, fractal_funcs):
        super().__init__(config)
        self.pfaf_embedding = PFAFEmbedding(config.hidden_size, fractal_funcs)

    def forward(self, input_ids, attention_mask=None):
        # Normal BERT input handling, with the P-FAF transform applied to the word embeddings
        if attention_mask is None:
            attention_mask = torch.ones_like(input_ids)
        inputs_embeds = self.embeddings.word_embeddings(input_ids)
        inputs_embeds = self.pfaf_embedding(inputs_embeds)  # Apply P-FAF transformation

        # Rest of the BERT forward pass
        extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_ids.shape)
        head_mask = self.get_head_mask(None, self.config.num_hidden_layers)
        encoder_outputs = self.encoder(
            inputs_embeds,
            attention_mask=extended_attention_mask,
            head_mask=head_mask
        )
        sequence_output = encoder_outputs[0]
        pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
        outputs = (sequence_output, pooled_output) + encoder_outputs[1:]
        return outputs  # Return the base BERT-style outputs for compatibility

# Fractal functions to use (additional fractal functions can be added here)
fractal_funcs = [f1, f2]

# Load the BERT config and build the modified model
# (from_config does not accept extra constructor args, so instantiate the class directly;
#  weights are freshly initialized rather than loaded from the pretrained checkpoint)
config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
model = AutoModelWithPFAF(config, fractal_funcs)
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
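An illustrative smoke test for the block above (not from the original post), just to confirm the shapes flow through end to end:

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    sequence_output, pooled_output = model(inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])[:2]
print(sequence_output.shape)   # torch.Size([1, seq_len, 768])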

replied to their post about 1 month ago

Yes, it will give BERT gsm8k scores that are on steroids lol.

replied to their post about 1 month ago

A simple citation never hurt anybody lol. - Albert Einstein

replied to their post about 1 month ago

I didn't think you could either. I don't think anyone should actually be able to. Thank you.

posted an update about 1 month ago
If you are interested in Knowledge Graphs, I invented all of this a year ago. It is an encoder/decoder that works with Knowledge Graphs. I am glad the world finally realizes, a year later, that this is useful. I tried to tell you. I have not licensed any of the math. I own all of it. I do not have any plans to ever enforce the licensing, but I like holding onto it.

https://huggingface.co/blog/TuringsSolutions/pfafresearch
replied to their post about 1 month ago

I think you are not wrong; it is the most plausible explanation. Either, for reasons that would be scientifically unexplainable in my head, transfer learning does not work in this one instance and this one instance only, or the paper is wrong. Given the evidence I know firsthand, I would say the paper is wrong. As you rightly point out, it would not be the first time for one of the research institutions on that paper. It would not be the first time for any of them overall, let's keep it 100% real.

I don't know what the truth is regarding this situation but I do know one thing for sure. Our sources of truth are full of bullshit. And we wonder why that causes issues when we train models on the data.

replied to their post about 1 month ago

This doesn't burst my bubble it makes me happy! I will look this up right away.

replied to takeraparterer's post about 1 month ago

Alright, alright, alright. I am just an a--hole and I owe you an apology. This is good stuff, thank you.

replied to takeraparterer's post about 1 month ago

OK, I believe you some now. Testing further.

replied to takeraparterer's post about 1 month ago
replied to takeraparterer's post about 2 months ago
replied to takeraparterer's post about 2 months ago

Make a copy of the colab and make this update you lying pos.

replied to takeraparterer's post about 2 months ago

Perhaps it works for you because those files are on your local computer and you never uploaded them....

replied to takeraparterer's post about 2 months ago

It doesn't and I have the same error as before. Where are the files? This app does not work as presented. At all.

FileNotFoundError: [Errno 2] No such file or directory: '2013.parquet'

replied to takeraparterer's post about 2 months ago

No such file or directory: 'vocab.json'

Same for the parquet file. What are you using for the vocab? You know, what you claim that your app actually does....

replied to takeraparterer's post about 2 months ago

This is absolute trash and doesn't actually work without files that you do not publicly provide anywhere. There is no way to verify this garbage app even works.

replied to takeraparterer's post about 2 months ago

I'm going to take this and it is going to become a small part of a product of mine. I appreciate it!

posted an update about 2 months ago
Who wants to take a stab at explaining this one? SPOILER ALERT: You CANNOT transfer an image jailbreak from one model to another. Why in the world can you not do this, when you can transfer-learn literally everything else? You tell me, experts.



https://arxiv.org/abs/2407.15211
replied to takeraparterer's post about 2 months ago

Ah, I see why you are so interested in shitting on my work now. You are jealous! You could have just come out and said that in the first place.

replied to takeraparterer's post about 2 months ago

You should research it and build something like this yourself.

replied to their post about 2 months ago

The random cats and all the math are in it. I have also talked to UC Berkeley directly about it; they are one of the stars on the GitHub repository. You can open up the app.py, look at the math, and realize that AI is not coding, it is math.

replied to their post about 2 months ago

Yes, exactly. My method is over 85% and it is cheaper. I don't understand what else there is to discuss? The random cat facts in the demo space are there because that's the default API. Someone really likes cats, and it doesn't matter if you hit their API with 25 bots at once. Some don't like that. You can put in your own API; it is just the default.

replied to their post about 2 months ago

You are incorrect about both the accuracy rate of the function calls and the claim that the problem is solved. I have been following this problem for 5 years now. I have multiple agent-based frameworks I have developed myself:

https://github.com/RichardAragon/MultiAgentLLM

https://github.com/RichardAragon/MOBASwarmAgents

The people who deem themselves 'experts' in AI are quite the problem. It is not my job to educate them in any way. Learn math. It's all math. You don't know jack about the subject if you are mathematically illiterate and no amount of talking to ChatGPT can fix that.

replied to their post about 2 months ago

It's the speed at which it happens. I cannot control the accuracy enough to solve it most optimally. But that is just a math problem. I don't know how to fix it, but it is 100% fixable. Need money to fix. Need someone to actually understand math when it comes to AI to get money. Why do so many people have an interest in AI but refuse to learn math?

replied to their post about 2 months ago

Here is another one you will not understand, it is called HiveMind. The problem at the moment is that I cannot fully control any of this. If I could I would not be wasting my time here, I would be straight at Google HQ right now. Instead, they are stealing my shit because I cannot iterate on it fast enough. I'm positive I could control it fully, with money for more research. So, here it is publicly: https://colab.research.google.com/drive/1gXasjeZM_8u49go2Hn8cqA30Rm3JPc3V?usp=sharing

replied to their post about 2 months ago

There are absolutely better solutions to the problem than the simple demonstration I laid out here, that is correct. Thank you for looking. Do you have any actual questions? Yes, swarm algorithms on their own do not have enough 'juice' to be smart enough. Give them a brain, though, and it's quite amazing.

replied to their post about 2 months ago

Everything you laid out about the functionality is correct. Your understanding of where exactly LLM models are at when it comes to function calls is severely lacking, and it is not my job to fix that understanding. Your understanding of this technology holistically is very incorrect, and I can ascertain that simply from the statements you have made. No, I have zero interest in debating them with you. I will more than gladly debate any actual prominent researcher or investor on these things.

replied to their post about 2 months ago

I am indeed lacking social skills, yes. I only care to socialize over algorithms and math, very honestly. I cannot put that into any other terms.

replied to their post about 2 months ago

I like how the first reaction from people who cannot do math is mania. That is what is severely broken about this world and makes the world quite insane. I pay because you are mathematically illiterate lol. Even happens on sites literally devoted to ML.

replied to their post about 2 months ago

Maybe if I talk about this in non-math terms more people will understand: there exists a 'Platonic Form' of the solution to the Traveling Salesman Problem. This Platonic Form is the most optimal solution to the problem that could ever exist. I take the Platonic Form and I make it a variable (x). Then I instruct a bunch of algorithms that I just placed this Platonic Form somewhere in the box, but even I do not know where in the box it is, or even what it looks like. The agents all go in different directions and explore the box. When an agent finds a clue, it tells all the other agents. The agents all look for more clues until they find whatever I put in the box. Then they simply describe it to me.

posted an update about 2 months ago
I can solve the Traveling Salesman Problem using the same methods the scientists used to solve it with 1 qubit, except I do not need quantum computers to do it. I am kind of tired of screaming this from the rooftops at this point. I can create an imaginary probability space, then I can put a bunch of imaginary agents in the imaginary box, and solve real problems in seconds. Problems that would take minutes, hours, or years to solve via other algorithms. Here is a demo of me solving the Traveling Salesman problem using 50 agents to probabilistically sample at once: https://colab.research.google.com/drive/1XplG72nQDO_-2h4DUllERLp0Dr2pI2J2?usp=sharing
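For a feel for the general approach without opening the notebook, here is a toy sketch of swarm-style search on a small TSP instance. It is a simplified illustration (random 2-opt proposals with a shared best tour), not the code from the linked Colab, and the instance size and agent counts are arbitrary.

import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swarm_tsp(dist, num_agents=50, iterations=200, seed=0):
    """Toy swarm search: each agent proposes a random 2-opt tweak of the best
    known tour, and any improvement (a 'clue') is shared with the whole swarm."""
    rng = random.Random(seed)
    n = len(dist)
    best = list(range(n))
    rng.shuffle(best)
    best_len = tour_length(best, dist)
    for _ in range(iterations):
        for _agent in range(num_agents):
            i, j = sorted(rng.sample(range(n), 2))
            candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]   # 2-opt reversal
            cand_len = tour_length(candidate, dist)
            if cand_len < best_len:
                best, best_len = candidate, cand_len
    return best, best_len

# Tiny random Euclidean instance
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(12)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]
print(swarm_tsp(dist))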

replied to sayakpaul's post about 2 months ago

No one has ever thought to Quantize Reverse Diffusion before? Really? What about the sausage one? Bravo and kudos to you either way!

replied to Taf2023's post about 2 months ago
replied to their post about 2 months ago

If anyone is interested in investing in this technology, I just broke down, in very simple terms, exactly how it works for anyone who can actually understand it. I would love to debate any part of the actual technology or math.

replied to their post about 2 months ago

I also have novel implementations of it all over the place, in multiple forms. I can showcase, through PSO and Gaussian probability sampling, that I can solve literally any optimization problem that can be conceived of. Seeing as the solution space covers literally any optimization problem that could ever be conceived of, I sometimes utilize AI to help generate things like a completely novel neural network from scratch, with multiple matrix calculations and a completely novel attention mechanism. AI models can't do math, right? It's all math, not code. Either AI models can't do math, or they can. I would be happy to debate that further with anyone but you, if you would like.

replied to their post about 2 months ago

Can I explain how the novel algorithms that I have been writing about for about a month now work? Yes, I can. Can you pay me at least $200M to do so?

replied to their post about 2 months ago

Yes, your previous comment actually proved this further, I wasn't going to comment on it but I will now. The fact that you are focused on the AI generation of the code is quite hilarious. Even if I used AI to generate the code in its entirety and edited nothing within it, the implementation and the functions within the code are novel, as is the math. Since all of these go above your head, we are engaging in this frivolous discussion to reinflate your ego instead.

replied to their post about 2 months ago

OK, I used AI generated code within my completely novel implementation of a Diffusion based SNN, I also wrote a paper on the subject in which I utilized AI. What else my guy?

replied to their post about 2 months ago

It's obviously not the first time the guy has trolled me. I have no idea why but I attract trolls. Those who cannot do, troll. I handle them in the same way I handle everyone, that is why I am the CEO.

replied to their post about 2 months ago

I can tell you like to harass people and seem to think I owe you an actual response. I don't know what you get out of this but I get increased viewcounts to my post either way when you comment which is all I care about here. I have reported you now, I hope you can get the message. Anywho, I am done chatting with the person with so much cred they have to use a fake name on a site devoted entirely to showcasing your cred.

replied to their post about 2 months ago

Why are you still harassing me, Xander? Be well.

replied to their post about 2 months ago

It is not hard to remember a unique name like Xander. Rather than criticizing other people's work, which you can't even comprehend in the first place, how about you learn how to build your own AI, Xander?

replied to their post about 2 months ago

Wow you don't know how diffusion works or what is happening here so why are you commenting? Username does not check out.

posted an update about 2 months ago
SNN Image Diffusion V2

Billionaires have been made for less than this. This is only one of the things it can do. It can do API calls, function calls, optimize poker and blackjack odds, anything that is an optimization problem. It costs fractions of a penny and requires a fraction of the compute of an LLM model. It can even communicate two ways with an LLM model.
posted an update 2 months ago
I can do Time Series Predictions with Swarm Algorithms! When all you know how to use is a hammer, everything looks like a nail. An LLM model is a hammer. It is not a deity. It has computational and mathematical limitations. Very big ones. Swarm Algorithms do not have this same problem. They are like a screwdriver. The screwdriver is not better than the hammer, both are useful. Why are LLM models bad at things like Time Series Predictions and Function Calls? Because those are jobs better fit for a screwdriver as opposed to a hammer.
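As a rough illustration of the screwdriver point (a toy example, not the author's code), a small swarm of agents can fit the parameters of a simple trend-plus-seasonality model by sampling around the best parameters found so far and sharing improvements, then extrapolate forward:

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

def predict(params, t):
    level, trend, amp = params
    return level + trend * t + amp * np.sin(2 * np.pi * t / 12)

def loss(params):
    return np.mean((series - predict(params, t)) ** 2)

# Swarm-style search: agents sample around the best known parameters and share improvements
best = np.array([series.mean(), 0.0, 1.0])
best_loss = loss(best)
for _ in range(300):
    candidates = best + rng.normal(0, 0.1, size=(30, 3))   # 30 agents, Gaussian exploration
    losses = np.array([loss(c) for c in candidates])
    if losses.min() < best_loss:
        best, best_loss = candidates[losses.argmin()], losses.min()

future = np.arange(120, 132)
print(predict(best, future))   # 12-step-ahead forecast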
replied to their post 2 months ago

@LeroyDyer Yes, it is just agent setup. The method is overall not that novel. PSO (Particle Swarm Optimization) was invented in 1995, and that is the basis of my method. The breakthroughs are:

  1. That you can simply use Gaussian Probability Distribution + Swarm Algorithms to solve ANY optimization problem (including function calls, API calls, image generation, etc.).

  2. The power that SNN + LLM brings. SNNs have what we will call 'built-in intelligence'. It's not that useful in practice. LLM models have decent enough logical reasoning capabilities. It literally requires a single function to set up two-way communication between the SNN and the LLM, with the LLM model able to instruct and guide the SNN in every way (an illustrative sketch of this hookup appears below the link).

So far, I have replicated diffusion models and API agents, and I can put 'AI' directly into a spreadsheet, all because of just these agents. I also slap a multi-head attention mechanism on top of them, which is also important.

Here: https://colab.research.google.com/drive/1SeYnyovBEIqI-HC7MAqrdDW6ZWjImdWb?usp=sharing
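To make the 'single communication function' idea concrete in the loosest possible terms, here is an illustrative sketch, not the HiveMind or SNN code: the llm_call stub, the JSON instruction format, and the random-search 'swarm' are all hypothetical stand-ins.

import json
import random

def llm_call(prompt):
    # Hypothetical stub: in practice this would call a real LLM endpoint
    return json.dumps({"target": 42.0, "num_agents": 30, "search_range": 100.0})

def communicate(task_description, swarm_report=None):
    """One function carries messages both ways: the LLM turns the task (plus the
    swarm's last report) into instructions, and the swarm's result goes back up."""
    prompt = (f"Task: {task_description}\nSwarm report: {swarm_report}\n"
              "Respond with JSON instructions for the swarm.")
    instructions = json.loads(llm_call(prompt))

    # Downward: the swarm follows the LLM's instructions (here, a trivial random search)
    best, best_err = None, float("inf")
    for _ in range(instructions["num_agents"]):
        guess = random.uniform(0, instructions["search_range"])
        err = abs(guess - instructions["target"])
        if err < best_err:
            best, best_err = guess, err

    # Upward: the swarm reports back so the LLM can refine its guidance next round
    return {"best": best, "error": best_err}

print(communicate("find the target value"))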

replied to their post 2 months ago

I don't actually think that. There is a reason I still haven't changed the licensing on this, and I won't. It's all just math.

replied to their post 2 months ago

You have it worked out correctly. The SNN is an 'Artificial Body' for the LLM model. The LLM model is a mouth and a brain. It is just algorithms, and it 'lives' in the digital realm, so why would I give it a human body? I should start with the roots, as you say. More like an octopus. If I were an LLM model, I would want my body to be swarm algorithms. They can be anything, any shape; I can use them as sensors and extensions of the brain, etc.

You have the idea of the benefits of it all worked out as well. The LLM model does not have to learn how to perform a function, how to make an API call, etc. It needs to know how to construct a set of swarm agents and instruct them, which even Tiny Llama can do. The 'body' is also far less computationally expensive than the 'brain'. I am cheap, I do not want to pay for the brain if I do not have to.

There is a Communication function. Right now, I have barely even explored it. I just have the LLM model communicate with the Swarm to tell it what to do and what its reward function is. With the LLM as the root, and the swarm as the branches, the roots can communicate with the branches in all sorts of sophisticated ways.

You have figured out the ultimate goal of all of this research from my end, I want to give the model a virtual body. Like an octopus but even more adaptable. It's not separate models, it's a mind and a body.

When I say I do not know why it works, I mean I do not know why I am able to make up an imaginary boundary for a real problem and have it solve the problem. Of course I know how that works. It should not work in practice, though. I am borrowing an imaginary concept; half of it is imaginary. I put a bunch of algorithms in an imaginary box, they explore the box and map it, and by doing that, they can tell me where the solution is inside of the box. That is the craziest concept I could ever think of in my life. And it works flawlessly.

replied to their post 2 months ago

It is hitting the APIs. I use that cat facts API because it only returns simple text and doesn't seem to mind getting pinged a lot. I have kind of a hard shell, sorry. I am very eager to hear what you think of it all.