TuringsSolutions 
posted an update Aug 5
Who wants to take a stab at explaining this one? SPOILER ALERT: You CANNOT transfer an image jailbreak from one model to another. Why in the world can you not do this, when you can transfer-learn literally everything else? You tell me, experts.



https://arxiv.org/abs/2407.15211
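For context on what "transferring an image jailbreak" means mechanically, here is a toy sketch in plain Python, not the paper's actual method: an FGSM-style perturbation is crafted against a surrogate model, then applied unchanged to a different target model. The linear "models", weights, and epsilon below are all made up for illustration.

```python
# Toy "models": a linear score w . x standing in for a vision-language
# model's compliance logit. All numbers are invented for illustration.
w_surrogate = [0.9, -0.4, 0.7, 0.2]
w_target    = [0.8, -0.5, 0.6, 0.1]   # similar but NOT identical weights

x = [0.1, 0.3, -0.2, 0.5]             # the "image"
eps = 0.1

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# FGSM-style step: for a linear score, the gradient w.r.t. x is just w,
# so the attack direction is sign(w_surrogate).
x_adv = [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w_surrogate)]

# The perturbation crafted on the surrogate also moves the target's score,
# because the two weight vectors point in similar directions.
print(score(w_surrogate, x), score(w_surrogate, x_adv))
print(score(w_target, x), score(w_target, x_adv))
```

The whole transferability question is whether real models are "aligned" enough in this sense for one model's perturbation to move another model's output; the paper claims they are not.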

THANKS MATE!

I think you will find here that there are too many heads working on this tiny lightbulb issue.

These papers are like propaganda, my friend, as Stanford is famous for this! They set their students these questions, forcing them to produce papers on all the commercially desirable topics, i.e. trying to lead industry: using students to do their work.

As these questions have been given to them by lazy commercial developers, and as in the open-source community we are already working on all of their proposed ideas, we need to consider the validity of these papers and the requirement for the initial paper in the first instance. Is this even an issue?
Has this been something that has plagued the development of AI? What does it actually solve that seemed so hard? And who, commercially, will benefit from such a niche task?

I personally am highly focused on the Mistral models, and on converting this decoder-only model into other formats by adding other encoders to its input, essentially turning it back into a full encoder-decoder model, but with the newly added feature extractor and input processor!

(So, cross-attention.) Hence here is the magic of creating multimodal models! :)
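A minimal sketch in plain Python of what that cross-attention bridge does, assuming the usual single-head formulation (this is not Mistral's actual code; learned projections and multiple heads are omitted, and the numbers are made up): queries come from the text decoder, while keys and values come from the bolted-on encoder, which is what lets a decoder-only LM attend over another modality.

```python
import math

def matmul(A, B):
    # (n x d) @ (d x m) as nested lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(decoder_states, encoder_feats, d):
    # Queries from the decoder, keys/values from the new encoder.
    # scores = Q @ K^T, scaled by sqrt(d), then softmax over encoder positions.
    enc_T = [list(col) for col in zip(*encoder_feats)]
    scores = matmul(decoder_states, enc_T)
    attn = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    # Output is a convex combination of encoder features per decoder position.
    return matmul(attn, encoder_feats)

# 2 decoder positions attending over 3 encoder features, model dim 4
dec = [[0.1, 0.2, 0.0, 0.3], [0.4, 0.0, 0.1, 0.2]]
enc = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
out = cross_attention(dec, enc, 4)
```

Because each row of `enc` sums to 1 here, each output row is a convex mix of encoder features and also sums to 1, which makes the mixing behaviour easy to eyeball.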

I also recently downloaded the FAKE NVIDIA model, riding on the coattails of a basic Mistral model! They did not create anything either, but are only using a Triton-based setup (very efficient, but not universally compatible), only to find an old Mistral and Llama at its heart, and some other models. Again I ask: how many doctors does it take to change a lightbulb?

As this is what this paper displays: another line of misdirection, another garden path created! It's called swamping: they are saturating the arena with bad papers and useless pathways, attempting to cover every aspect (none very well), just to get their name in place!

So the fact is, they have not understood cross-attention, which is the magic of creating multimodal models!

Or even Ring embeddings? The most powerful embedding system, yet not implemented, allowing you to extend your model to unlimited sequence length without loss (really important for video translation and diffusion!).
They are not working on it, they are TALKING!
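"Ring embeddings" reads like a reference to rotary position embeddings (RoPE), the scheme behind most long-context extension tricks, so assuming that is what is meant, here is a minimal sketch in plain Python (the vectors and positions are made up): each pair of dimensions is rotated by an angle proportional to the token position, so an attention score between two rotated vectors depends only on their relative offset, which is the property that helps length extrapolation.

```python
import math

def rope(vec, pos, base=10000.0):
    # Rotate each (even, odd) dimension pair by an angle that grows with
    # the position; per-pair frequencies fall off geometrically, as in RoPE.
    out = []
    for i in range(0, len(vec), 2):
        theta = pos / (base ** (i / len(vec)))
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = [0.3, 0.1, -0.2, 0.5]
k = [0.2, -0.4, 0.1, 0.3]

# The score depends only on the *relative* offset between positions:
s1 = dot(rope(q, 10), rope(k, 7))      # offset 3, early in the sequence
s2 = dot(rope(q, 110), rope(k, 107))   # same offset 3, much later
```

Since rotations are orthogonal, `R(a)q . R(b)k = q . R(b - a)k`, so `s1` and `s2` agree up to floating-point error even though the absolute positions differ by 100.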

LOL!

If a paper was written by maybe one person, then it's going to be interesting! These ones have too many heads. Who actually wrote the paper, and who said what in it, or did they all get free rides? I went to uni too (BSc Business Intelligence / Big Data; MSc Artificial Intelligence in Data Science). As you know, the Masters are even easier than the BSc; the PhD was a giveaway!

SO we know the value of these things! We, the open source community, are more powerful, and truly we are doing the work, my friend. AGI will come from the people of the world and the people in the industry! As I search for the data I need, I find it now, as data is all you need! Attention is only for focusing, and methodologies and chain of thought give models a realistic ability to understand the world! And silly concepts such as Speech and Voice (which we could ALWAYS make) just fool the users into believing, and being amazed, and missing the flaws!


I think you are not wrong; it is the most plausible explanation. Either, for reasons that would be scientifically unexplainable in my head, transfer learning does not work in this one instance and one instance only, or the paper is wrong. Given the evidence I know firsthand, I would say the paper is wrong. As you rightly point out, it would not be the first time for one of the research institutions on that paper. It would not be the first time for any of them overall, let's keep it 100% real.

I don't know what the truth is regarding this situation but I do know one thing for sure. Our sources of truth are full of bullshit. And we wonder why that causes issues when we train models on the data.