https://huggingface.co/docs/datasets/quickstart
https://huggingface.co/docs/chat-ui/index
https://huggingface.co/docs/autotrain/quickstart_spaces
- remember to insert an API key (access token) before starting a Hugging Face project
- some datasets require a Hugging Face API key before they can be downloaded
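A minimal sketch of authenticating with a Hugging Face token and then loading a dataset, assuming the `huggingface_hub` and `datasets` packages are installed; the token string and dataset name below are placeholders:

```python
# Hedged sketch: log in with a Hugging Face access token, then load a dataset.
# "hf_xxx" and "imdb" are placeholders; gated datasets require the token.
from huggingface_hub import login
from datasets import load_dataset

login(token="hf_xxx")  # paste your own access token here
dataset = load_dataset("imdb", split="train")
print(dataset[0])
```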
Apps you can test directly on the website, no local install needed
- example prompts to type into the app:
a girl on a beach wearing sunglasses
a random image of a historical event
a robot carrying weapons from the Star Wars movies
a Hollywood 3D movie figure, ready for a film set
https://huggingface.co/spaces/mukaist/Midjourney
- We set up Ollama apps, design datasets and models, and use AI to generate data models for apps
- Design AI and train AI
- Make AI generative text prompts communicate, explaining knowledge and results
- Use a locally hosted app interface to run Ollama models and build a better app than ChatGPT, without the hassle of rate limits
Having these tools lets us develop apps faster and more specifically:
finding the code and libraries, and deciding how the app should be coded,
while the AI also gives security tips and considerations
along with the source-code instructions
- AI gets us to where we are in finding our agenda for correct code development
- and gives a nice interface for managing a professional AI code-development helper tool
- the code tool can also manage and solve our backlog of 100 waiting projects
and helps us become a professional organisation that understands programming faster
- with AI code tools we were able to understand Python and C++ in a few hours
- we went from doing nothing other than research
to operating an AI organisation and executing coding projects
- with Ollama running locally we avoid the waiting time for a Llama 2 prompt to become ready on Facebook/Meta's service
At 14:14 in this video we learned how to generate and use the Ollama libraries, from this guy's tutorial explaining how to run an Ollama prompt
https://youtu.be/V6LDl3Vjq-A?si=N4aQSaozzFfY9t2s
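As a hedged sketch (not taken from the video), the official `ollama` Python package can drive a locally running model once it has been pulled; the package is assumed to be installed with `pip install ollama`:

```python
# Hedged sketch: chat with a local model through the `ollama` Python package.
# Assumes the Ollama app is running and the llama3.2 model has been pulled.
import ollama

reply = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Explain what a training dataset is."}],
)
print(reply["message"]["content"])
```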
- use AI to generate Llama generative data
- train AI towards an operative target agenda
- follow these steps:
- hold a GitHub account
- share the API key
- authenticate with GitHub
- start experimenting with chat prompts on https://www.llama2.ai/
- register a user
- download Ollama
- run Ollama after the download
- paste this command in the terminal
ollama run llama3.2
- you will need to download Ollama and install it locally on your machine
- after it is installed in the local application folder on the computer
- run the "ollama run" command from the grey window in the terminal
3 steps:
- install Ollama in the local app folder
- run Ollama
- insert only this command in the terminal once the installed Ollama software is open:
ollama run llama3.2
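Once `ollama run llama3.2` works in the terminal, the same model can also be reached over Ollama's local REST API; a minimal sketch, assuming the default endpoint http://localhost:11434 and the `requests` package:

```python
# Hedged sketch: call the local Ollama REST API that the running app exposes.
# Assumes the default port 11434 and that llama3.2 has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello in one sentence.", "stream": False},
)
print(resp.json()["response"])
```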
- llama3.2 is the main Ollama model; you start with training data for it
- experiment with and commit to training datasets to develop special AI experiences
- datasets for programming websites with AI
- datasets for programming apps
- datasets for designing images from text-to-image prompts
- datasets for designing videos from text prompts (a small dataset-building sketch follows after the training list below)
train AI to help automatically design websites
- train AI to design apps
- train AI to design videos
- train AI to design films
- train AI to design YouTube Shorts videos
- train AI to design background images
- train AI to do automation tasks, helping with answers
- train AI to become a mental-health partner and friend
- train AI to become a psychology expert
- train AI to write ebook storytelling
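One hedged way to start assembling such training data is with the Hugging Face `datasets` library; the column names and example rows below are made up for illustration only:

```python
# Hedged sketch: assemble a tiny prompt/response training set with the
# Hugging Face `datasets` library. Column names and rows are illustrative.
from datasets import Dataset

train_data = Dataset.from_dict({
    "prompt": [
        "Design a landing page for a bakery website",
        "Write a short YouTube Shorts script about space",
    ],
    "response": [
        "<html>... minimal bakery landing page ...</html>",
        "Hook, three quick facts, call to action.",
    ],
})
train_data.save_to_disk("my_train_data")  # reload later with datasets.load_from_disk
```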
just select a model and test it in the terminal, as done in alternative 2 after the install
you will find a model to pull and train in the terminal client
the Ollama repositories have a link window with a text command to paste into the terminal
- the commands and usernames differ depending on which model you want to train the AI for
- the AI developed an instance app to generate a library/app specifically for the iPhone 13, allocating 7% of the instance run to image-recognition and image-classification technologies
- This means we solved an iPhone 13 kernel-specific optimisation, and a single command prompt got development started in 10 minutes, with the app divided into C++ and Python
- it also used the TensorFlow library
- it also produced the code needed to run the specific kernel with a MobileNet TensorFlow model, without that being mentioned in the prompts
>>> Make a Tensorflow app for image recognition and image classification
for mobile kernel processing
7% source allocation runs of iphone13 kernel OS
This means we can now run AI on a specific kernel instance with MobileNet, by mentioning and training for the specific phone kernel OS in the prompt
TensorflowiphoneKernel.txt
https://github.com/CulturesSupports/CulturesSupports/blob/main/TensorflowiphoneKernel.txt
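As a hedged illustration of the MobileNet idea above (not the contents of the linked file), a pretrained MobileNetV2 classifier can be run in a few lines of TensorFlow; the image path is a placeholder:

```python
# Hedged sketch: classify one image with the pretrained MobileNetV2 model.
# Assumes TensorFlow is installed; "photo.jpg" is a placeholder path.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import mobilenet_v2

model = mobilenet_v2.MobileNetV2(weights="imagenet")

img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
x = mobilenet_v2.preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))

preds = model.predict(x)
for _, label, score in mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(label, round(float(score), 3))
```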