#OnX
cloveswifey · 11 months
Text
Horribly Wrong
Tumblr media
Pairings: JJ Maybank x Fem!Reader
Type: Angst - Fluff
Words: 0.3k
Y/N was excited to go on a date with Nate. They had been talking for a while, and she thought that they had a real connection. But as soon as they sat down at the restaurant, she knew that something was off.
Nate was rude to the waiter, snapping his fingers and demanding that he bring him a drink.
"Hey you!" Nate called out to the waiter who was passing their table. "Yeah you!"
"Is there anything I can help you with?" the dark-haired waiter asked politely.
"Bring me a beer and a water for the lady," Nate demanded, snapping his fingers.
"Of course, right away," the waiter replied, trying to hide his annoyance.
Nate muttered under his breath, "Fucking slow," as the waiter walked away.
Y/N cringed, feeling embarrassed by Nate's behavior towards the waiter. She tried to make small talk, but Nate seemed more interested in checking his phone than talking to her.
The waiter returned with their drinks, placing them gently on the table.
"Are you both ready to order?" he asked, pulling out his notepad.
Nate spoke up without any consideration. "I'll have the steak and chips, and the lady will have a salad."
Y/N tried to object, "I don't really like-"
"The salad!" Nate interrupted her.
Y/N was disappointed; she wanted the chicken carbonara. She guessed that Nate was one of those guys.
"Of course," the waiter replied, writing down the order.
"Hurry!" Nate demanded, already engrossed in his phone.
Feeling embarrassed, Y/N smiled at the waiter and mouthed "sorry." He nodded and walked away to submit the order to the kitchen.
As the night went on, Y/N grew more and more frustrated. She couldn't believe that she had wasted her time on this guy.
"Call me pretty girl," Nate said before he took off towards his car, not even offering to drive her home.
Y/N let out a sigh and started to walk home. She was feeling disappointed when she saw her friend JJ walking towards her.
JJ was cute, with blonde hair and bright blue eyes. He smiled at her, and she felt her heart skip a beat. She couldn't help but smile back.
"Is everything okay?" JJ asked, noticing that she looked upset.
Y/N shook her head. "Not really," she said, feeling tears welling up in her eyes.
JJ took her hand and led her to a nearby bench. They sat down, and Y/N told him everything that had happened. JJ listened, his eyes full of empathy.
"I'm sorry," he said, taking her hand. "That sounds terrible."
Y/N felt a warmth spreading through her body. She had never felt so understood before. She looked at JJ, and he leaned in to kiss her.
The kiss was electric, and Y/N felt a rush of excitement. She didn't care about Nate anymore. All she wanted was to be with JJ.
As they pulled away, JJ looked into her eyes. "Do you want to go out sometime?" he asked, smiling.
Y/N nodded, feeling a huge weight lifted off her shoulders. She had never been happier.
180 notes · View notes
heyupyoursjr · 9 months
Text
Tumblr media Tumblr media Tumblr media
Art Fight 2023 batch 5
The last of the art I drew for Art Fight 2023 ^.=.^
ElektronXz on Twitter · HeyAnkey on FA · Predatoria on FA
2 notes · View notes
zhao-tianyou · 5 months
Text
we finally got a date!! onx launching december 9th
1 note · View note
gwydionmisha · 1 year
Link
2 notes · View notes
801boy · 2 years
Note
Tumblr media
Character assignment??? Ohohoho [hovering in ur kitchen and stealing ur fish sauce]
HELLO DEAR FRIEND sorry this took me a couple of days. Anyways you get takane enomoto (left) AKA ene (right) from kagerou project. They've got two different character designs but they're the same girl technically <3 Her whole thing is that she can project herself into the Cyber World or wtv so that's why she's blue as ene. As takane though, she has narcolepsy and therefore can never get enough rest. She is cranky and snappy but cares for her friends dearly. She sucks at flirting. She has a crush on her friend + classmate haruka (who is so sweet and so wonderful) but doesn't really wanna admit it - or more like doesn't get the chance to. Also they're in a special ed class that consists of just the two of them. She is a closet expert gamer and has gamer rage. "Ene" was part of her username. As ene she is a lot more energetic and bratty and cheerful (bc she doesn't need sleep here) and calls herself a Super Pretty Cyber Girl. The creator of this series refuses to give us her birthday. For some reason. Overall very good very silly very [EXPLODES] I love her
Tumblr media Tumblr media
3 notes · View notes
govindhtech · 3 days
Text
Microsoft's Open Phi-3 Mini Language Model Accelerated with NVIDIA
Tumblr media
NVIDIA revealed that NVIDIA TensorRT-LLM, an open-source framework for optimising large language model inference on NVIDIA GPUs from PC to cloud, has accelerated Microsoft's new Phi-3 Mini open language model.
Phi-3 Mini advances beyond Phi-2's research-only origins, bringing the power of models 10x its size to the masses, and it is licenced for both commercial and research use. Workstations with NVIDIA RTX GPUs or PCs with GeForce RTX GPUs have the performance to run the model locally through TensorRT-LLM or Windows DirectML.
With 512 NVIDIA H100 Tensor Core GPUs, the model with 3.8 billion parameters was trained on 3.3 trillion tokens in just seven days.
Phi-3 Mini comes in two versions: one supporting 4K tokens of context and one supporting up to 128K tokens, the first model in its class built for extremely long contexts. This lets developers prompt the model with up to 128,000 tokens (the atomic units of language the model processes) and get more pertinent answers.
At ai.nvidia.com, developers can test Phi-3 Mini with the 128K context window. It is packaged as an NVIDIA NIM, a microservice with a standard API that can be deployed anywhere.
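Because a NIM exposes a standard OpenAI-compatible API, trying the hosted model takes only a few lines of client code. Here is a minimal sketch using the openai Python client; the endpoint URL and model id are assumptions based on NVIDIA API catalog conventions, not details given in this post.

```python
# Minimal sketch: calling a hosted Phi-3 Mini NIM through its
# OpenAI-compatible API. The endpoint URL and model id are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # your API catalog key
)

completion = client.chat.completions.create(
    model="microsoft/phi-3-mini-128k-instruct",      # assumed model id
    messages=[{"role": "user", "content": "Summarise in-flight batching."}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```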
Developing Effectiveness for the Advantage
Through community-driven tutorials, such as those on Jetson AI Lab, developers working on autonomous robots and embedded devices can learn how to build and deploy generative AI, including running Phi-3 on NVIDIA Jetson.
The Phi-3 Mini variant has 3.8 billion parameters, small enough to run well on edge devices. Parameters are like knobs in memory, fine-tuned during model training so that the model responds to input prompts with high accuracy.
Phi-3 can help in use cases where resources and costs are limited, particularly with simpler tasks. On key language benchmarks, the model can outperform some larger models while still meeting latency requirements.
TensorRT-LLM employs a variety of optimisations and kernels, including LongRoPE, FP8, and in-flight batching, to increase inference throughput and reduce latency, and it will also support Phi-3 Mini's extended context window. The TensorRT-LLM implementations will soon be available in the examples folder on GitHub. Developers can then convert to the TensorRT-LLM checkpoint format, which is optimised for inference and readily deployable with NVIDIA Triton Inference Server.
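As a sketch of what local inference can look like once those examples land, recent TensorRT-LLM releases also ship a high-level Python LLM API; the model id and the exact API surface below are assumptions, since this post only points at the lower-level checkpoint-conversion examples.

```python
# Hedged sketch: Phi-3 Mini through TensorRT-LLM's high-level LLM API.
# Assumes a recent TensorRT-LLM release; the model id is a placeholder.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="microsoft/Phi-3-mini-4k-instruct")  # builds/loads a TensorRT engine

outputs = llm.generate(
    ["Explain in-flight batching in one paragraph."],
    SamplingParams(max_tokens=128, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```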
Creating Open Systems
Having released more than 500 projects under open-source licences, NVIDIA is a prominent participant in the open-source ecosystem.
NVIDIA supports a wide range of open-source foundations and standards bodies in addition to contributing to numerous external projects like JAX, Kubernetes, OpenUSD, PyTorch, and the Linux kernel.
The announcement today builds on long-standing NVIDIA partnerships with Microsoft, which have facilitated advancements in DirectML, Azure cloud, generative AI research, healthcare, and life sciences.
What is ONNX Runtime?
ONNX Runtime enhances machine learning model execution. Its key functions:
Acceleration: it speeds up machine learning model inference and prediction.
Cross-platform: it works on Windows, Linux, macOS, mobile devices, and web browsers.
Framework interoperability: it supports models from PyTorch, TensorFlow, scikit-learn, and others, eliminating the need to keep the original training framework around to run a model.
ONNX Runtime simplifies machine learning model deployment and performance optimisation in various contexts.
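For a concrete sense of that workflow, here is a minimal inference sketch in Python; the model path and input shape are placeholders for whatever ONNX model you export.

```python
# Minimal sketch: run any exported ONNX model with ONNX Runtime.
# Model path and input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped input

outputs = session.run(None, {input_name: dummy})  # None = fetch all outputs
print(outputs[0].shape)
```

The same session code runs unchanged whether the model came from PyTorch, TensorFlow, or scikit-learn; only the export step differs.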
ONNX Runtime Tutorial
Thanks to ONNX Runtime and DirectML, Microsoft's most recent in-house Phi-3 models can now run on a wide variety of hardware and operating systems. Both the phi3-mini-4k-instruct and phi3-mini-128k-instruct flavours of Phi-3 are supported from day one. The optimised ONNX models are available from the phi3-mini-4k-instruct-onnx and phi3-mini-128k-instruct-onnx model pages.
While most language models are too big to run locally on most systems, Phi-3 is a notable exception: this compact yet powerful suite of models performs on par with models ten times larger. The Phi-3 Mini variant is also unique in its weight class in accommodating long contexts of up to 128K tokens.
Scaling Phi-3 Mini on Windows with DirectML and ONNX Runtime
Phi-3 is small enough to run on a wide range of Windows machines as-is, so why stop there? Making Phi-3 even smaller through quantization would greatly increase the model's reach on Windows, but not all quantization methods are created equal. The goal was to guarantee both scalability and model accuracy.
Phi-3 Mini can be quantized using Activation-Aware Quantization (AWQ), which captures quantization's memory savings with negligible accuracy loss. To do this, AWQ identifies the top 1% of salient weights that are essential to preserving model accuracy and quantizes the remaining 99% of weights. Quantizing with AWQ loses significantly less accuracy than many other methods.
DirectML runs on any DirectX 12-capable GPU on Windows, whether from AMD, Intel, or NVIDIA. And because DirectML and ONNX Runtime now support INT4 AWQ, developers can run and distribute this quantized version of Phi-3 across hundreds of millions of Windows machines!
In the upcoming weeks, driver updates developed in collaboration with hardware manufacturer partners will further enhance performance. To find out more, come to their Build talk in late May!
ONNX Runtime Mobile
ONNX Runtime is a genuinely cross-platform framework: in addition to Windows, it can run the Phi-3 Mini models on mobile devices and Mac CPUs. By supporting quantization techniques such as RTN, it makes these models practical on a wide range of hardware.
With ONNX Runtime Mobile, developers can use AI models for on-device inference on mobile and edge devices. By eliminating client-server connections, ORT Mobile provides privacy protection at no cost. Using RTN INT4 quantization, the size of the state-of-the-art Phi-3 Mini models can be greatly reduced, and both variants run at a moderate pace on a Samsung Galaxy S21. RTN INT4 quantization exposes a tuning parameter: the int4 accuracy level.
This option balances accuracy against performance by setting the minimum compute precision used for the MatMul operations in int4 quantization. Two versions of the RTN-quantized models have been made available: int4_accuracy_level=1, optimised for accuracy, and int4_accuracy_level=4, optimised for performance. If you prefer higher performance with a minor accuracy trade-off, use the model with int4_accuracy_level=4.
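For illustration, here is a hedged sketch of producing such an INT4 model with ONNX Runtime's MatMul4BitsQuantizer; the parameter names (block_size, is_symmetric, accuracy_level) are assumptions drawn from the onnxruntime.quantization module, and the file paths are placeholders.

```python
# Hedged sketch: RTN INT4 quantization with the int4 accuracy-level knob,
# assuming ONNX Runtime's MatMul4BitsQuantizer API. Paths are placeholders.
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

model = onnx.load("phi3-mini-4k-instruct.onnx")

quantizer = MatMul4BitsQuantizer(
    model,
    block_size=128,      # weights quantized in blocks of 128
    is_symmetric=True,   # symmetric round-to-nearest (RTN)
    accuracy_level=4,    # 4 = performance-optimised, 1 = accuracy-optimised
)
quantizer.process()
quantizer.model.save_model_to_file(
    "phi3-mini-4k-instruct-int4.onnx",
    use_external_data_format=True,  # weights too large for a single protobuf
)
```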
ONNX Runtime Server
For Linux developers and beyond, ONNX Runtime with CUDA is a fantastic solution, supporting a wide range of NVIDIA GPUs, including both consumer and data centre GPUs. With ONNX Runtime on CUDA, Phi-3 Mini-128K-Instruct outperforms PyTorch in all batch-size and prompt-length combinations.
ONNX Runtime vs. PyTorch
Phi-3 Mini-128K-Instruct with ORT outperforms PyTorch by up to 5X and up to 9X for FP16 CUDA and INT4 CUDA, respectively. Llama.cpp does not yet support Phi-3 Mini-128K-Instruct.
Phi-3 Mini-4K-Instruct with ORT outperforms PyTorch by up to 5X and up to 10X for FP16 CUDA and INT4 CUDA, respectively. For large sequence lengths, it also runs up to three times faster than Llama.cpp.
With ONNX Runtime, there is a way to infer models effectively on Windows, Linux, Android, and Mac!
Use ONNX Runtime Generate()
They are excited to unveil the new Generate() API, which wraps several features of generative AI inferencing to make Phi-3 models easier to run across devices, platforms, and execution-provider backends. With the Generate() API, dropping LLMs into your app is straightforward. Give it a try to run these early models with ONNX Runtime.
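A minimal sketch of that API in Python, assuming the onnxruntime-genai package and a local folder containing one of the optimised Phi-3 Mini ONNX models; the prompt follows Phi-3's chat template, and the model folder path is a placeholder.

```python
# Minimal sketch: token-by-token generation with the ONNX Runtime
# Generate() API (onnxruntime-genai). Model folder path is a placeholder.
import onnxruntime_genai as og

model = og.Model("phi3-mini-4k-instruct-onnx")  # folder with the optimised model
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(
    "<|user|>\nWhat is DirectML?<|end|>\n<|assistant|>"  # Phi-3 chat template
)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()       # one forward pass
    generator.generate_next_token()  # pick the next token

print(tokenizer.decode(generator.get_sequence(0)))
```

The same loop runs against DirectML, CUDA, or CPU builds of the package; only the installed execution provider changes.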
Performance Metrics: DirectML
ONNX Runtime + DirectML performance was measured on Windows for Phi-3 Mini (4K sequence length) quantized with AWQ at a block size of 128. The test machine had an Intel Core i9-13900K CPU and an NVIDIA GeForce RTX 4090 GPU. As the measurements in the original post's table show, DirectML delivers good token throughput even with longer prompts and generation lengths.
AMD, Intel, and NVIDIA all support DirectML, which lets developers deploy models across Windows and achieve exceptional performance. Best of all, AWQ gives developers both scale and model accuracy.
Their hardware partners’ optimised drivers and ONNX Generate() API upgrades will boost performance in the coming weeks.
ONNX Runtime FP16
The Phi-3 Mini 128K Instruct ONNX model's average throughput for the first 256 generated tokens (tps) improved as seen in the original post's table, which compares CUDA FP16 and INT4 precisions on one A100 80GB GPU. (Image credit: ONNX Runtime)
Try Phi-3 with ONNX Runtime
This blog post describes Phi-3 model optimisation with ONNX Runtime and DirectML. Instructions for running Phi-3 on Windows and other platforms, along with early benchmarking data, are provided. Stay tuned for ONNX Runtime 1.18 in early May for more changes and perf optimisations!
Read more on govindhtech.com
0 notes
dreamketchers · 2 years
Photo
Tumblr media
Sugar Creek Loop, Oklahoma #onx #onxoffroad #dirttrails #backroads #waterfalls (at Oklahoma) https://www.instagram.com/p/CjWfSVVtPAH/?igshid=NGJjMDIxMWI=
0 notes
fairycosmos · 15 hours
Text
thank you adblock for my life
65 notes · View notes
heartonxions · 4 months
Text
Tumblr media
i had artblock.. and i wanted to make myself get out of my comfort zone (the same 5 characters) .. so … this!! phoenix’s black gray design literally is the best thing i’ve ever seen and so is zai’s design for wendy!! literally cannot get these designs out of my brain .. frosted tips gray… because. ice magic……. ok bye guys
@fairytail-redraw
113 notes · View notes
firapolemos05 · 6 months
Text
Have I mentioned that I love Gajeel whump? Yes? Good.
Tumblr media
"Damn beast nearly bit my arm off!"
"I'm surprised the legal guilds even allow monsters like that into their cities."
"Well if he's going to act like an animal, might as well treat him like one."
Whumptober 2023
Day 24
"A mouth full of ridicule."
(In which I am very mean to my favorite blorbo.)
64 notes · View notes
arkngth · 2 months
Text
Some GTA RP doodle requests from twt
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
20 notes · View notes
cloveswifey · 1 year
Text
Blind date
Tumblr media
Pairings: Pope Heyward x Fem!Reader
Type: Fluff
Words: 0.5k
Sarah had been friends with both Pope and Y/N for years, but they had never met each other. She knew they would be perfect for each other, so she decided to set them up on a blind date.
Pope was hesitant at first. He had never been on a blind date before, and he wasn't sure if he was ready to start dating again after his last relationship ended badly. But Sarah was persistent, and eventually, he agreed to meet Y/N.
Y/N was nervous too. She had been set up on blind dates before, and they never went well. But she trusted Sarah's judgment, and she was excited to meet someone new.
The day of the date arrived, and Pope and Y/N met at a local coffee shop. They were both surprised at how easy it was to talk to each other. They had so much in common, and they quickly discovered that they shared many of the same interests.
"So, what are your plans for the future?" Pope asked as he sipped his coffee.
"I want to be a journalist," Y/N smiled. "How about you?"
"As weird as it sounds, I want to be a coroner," Pope replied, chuckling.
As they sipped their coffee and chatted, they both felt a spark of attraction. They were both shy at first, but as the conversation flowed, they grew more comfortable with each other.
After a few hours, they decided to go for a walk in the park. They strolled along the path, talking and laughing, enjoying each other's company.
"Where's your favorite place on earth?" Pope randomly asked.
"Hm..." Y/N paused to think. "Probably New York. It's always been a dream to go. What about you?"
"Honestly, I don't know. Weirdly enough, I've never been off the island, other than going to the mainland," Pope responded.
As the sun began to set, they found a quiet spot by the lake and sat down to watch the sunset.
As they watched the sky turn pink and orange, Pope took Y/N's hand in his.
"Let's play 'Would you rather?'" Pope suggested.
"Hmm, okay," Y/N giggled.
"Would you rather lounge by the beach or the pool?" Pope asked.
"The beach! 100% no, 110%!" Y/N chuckled. She had always had a passion for the beach.
"Would you rather be able to fly or become invisible?" Y/N asked.
"Invisible!" Pope quickly answered as if the answer had always been the same.
Y/N began to answer the next question, but was cut off by Pope's lips.
She pulled away and looked up at him, and he leaned in to kiss her again. It was a sweet, gentle kiss, but it was enough to spark a flame between them.
From that moment on, Pope and Y/N were inseparable. They went on more dates, and each one was better than the last. They fell in love, and Sarah was thrilled to have played a part in bringing them together.
Years later, at their wedding, Sarah stood up to make a toast. She looked at Pope and Y/N, and she smiled. "I always knew you two were meant to be together," she said. "I'm so happy I could be a part of your love story."
9 notes · View notes
plaguechan-cute · 4 months
Text
Tumblr media
Criken's GTA RP OC Vigo Hardman
19 notes · View notes
zhao-tianyou · 23 days
Text
when we get a morsel of stubble on onx
Tumblr media
0 notes
phantasmagoricl · 2 months
Text
Tumblr media
hiro maybe you should look into part-time acting...
10 notes · View notes
angelpuns · 7 months
Text
I hope ya'll know the next arc is literally in 4 parts with an interlude-
35 notes · View notes