SiteGround Review – Is This The Best Premium Webhost?
New Post has been published on https://thedigitalinsider.com/siteground-review-is-this-the-best-premium-webhost/
As a website owner and hosting expert, I have tested hundreds of hosting providers and can tell you that SiteGround is one of the very best on the market. Plus, when 3 million website owners trust you with their domains, you are doing something right.
In the rest of this SiteGround review, we’ll take a tour through the web host’s plans, pricing, features, real-life performance stats, customer support, and the other important factors every webmaster should weigh. Within the hour, you’ll be able to decide for yourself whether it’s the web host for you.
Fun fact: Unite AI is hosted on SiteGround!
SiteGround Review
SiteGround was founded in 2004 and has grown over two decades into one of the most reputable privately owned web hosts globally. The provider offers shared hosting, cloud hosting, reseller hosting, and WordPress-specific plans. They also have hosting plans tailored to agencies and a built-in email marketing tool. 
SiteGround currently hosts over 3 million domain names across 11 data centers. For whichever of SiteGround’s plans you go for, you get a free SSL certificate, free email, daily backups, free CDN integration, and a 30-day money-back guarantee.
SiteGround is also a fan favorite because they are eco-conscious. Their data centers match 100% of the energy consumed by their global operations with renewable energy. The company has been reviewed by over 16,000 people on TrustPilot and scored an impressive 4.8 stars.
Pros and Cons
SiteGround has excellent performance stats
They offer affordable hosting plans
They are one of the 3 WordPress-endorsed hosts
SiteGround is incredibly beginner-friendly
They offer daily backups on all plans
Exceptionally reliable resources built on Google Cloud
No dedicated hosting is listed on their site
Their shared hosting plans have limited storage
Their plans get pricier upon renewal
SiteGround Rating – My Personal Take
Do a quick search on Google and you’ll find thousands of web hosting providers on the market, all claiming to offer the best services for every website need. In my experience reviewing hosts, I have found it helpful to use a standard framework to compare web hosts and rate them fairly.
Considering the important features SiteGround offers and how they perform, here’s how I’d rate the web host on a scale of 1.0-5.0. Note that these scores are not static and may change as the host improves their offering:
Features and specs – 4.8: Free SSL, CDN, email migration, renewable energy match, free WordPress migration, and out-of-the-box caching are some of the features that make SiteGround amazing. However, the lack of dedicated hosting and free domains is why the host doesn’t get full points.
Pricing – 4.9: Starting at $3.99 per month, SiteGround is one of the most affordable hosts on the market. There are, however, cheaper hosting plans from other providers.
Performance stats – 5.0: An incredible time-to-first-byte of 92 ms puts SiteGround at the very top in terms of server response time. SiteGround also uses SSD storage and gives you a 99.9% guaranteed uptime.
Ease of use – 4.9: SiteGround has its own native control panel called Site Tools. In addition, free WordPress integration and an intuitive user interface make the provider’s software very easy to use.
Customer support – 4.7: SiteGround has a thriving support center – live chat, email tickets, and a knowledge base. They also have a wealth of WordPress-specific tutorials. However, to enjoy their support you need to be a customer – i.e., buy one of their plans.
SiteGround Hosting Plans & Pricing – 2024
SiteGround offers shared hosting, cloud hosting, reseller hosting, WordPress hosting, WooCommerce hosting, and web hosting for agencies. For most of their plans, you get a 30-day guarantee, giving you enough time to decide if it’s for you.
Once you have decided on a plan from SiteGround, you can pay via card – VISA, MasterCard, American Express, and Discover.
SiteGround’s shared hosting plans
StartUp
Space offered – 10 GB
Bandwidth – Unmetered traffic
Number of websites – 1 website allowed
Price – $3.99 per month billed annually
GrowBig
Space offered – 20 GB
Bandwidth – Unmetered traffic 
Number of websites – Unlimited websites allowed
Price – $6.69 per month billed annually
GoGeek
Space offered – 40 GB
Bandwidth – Unmetered traffic
Number of websites – Unlimited websites allowed
Price – $10.69/month billed annually
SiteGround’s GrowBig shared hosting plan gives you the most value for money. 20 GB of storage, unlimited websites, and unmetered traffic at $6.69/month is a great deal.
Who this is for:
As with any shared hosting, SiteGround’s shared plans involve sharing server resources with other website owners. They are best for small or new websites – portfolio sites, dropshipping landing pages, blogs, and the like – that don’t need a lot of resources. If you won’t be using more than 40 GB of storage, their shared hosting plans are the best fit for you.
SiteGround’s cloud hosting plans
Jump Start
Space offered – 40 GB SSD storage
Bandwidth – 5 terabytes
Memory – 8 Gigabytes
Price – $100/month billed annually
Business
Space offered – 80 GB SSD storage
Bandwidth – 5 terabytes
Memory – 12 Gigabytes
Price – $200/month billed annually
Business Plus
Space offered – 120 GB SSD storage
Bandwidth – 5 terabytes
Memory – 16 Gigabytes
Price – $300/month billed annually
Super Power
Space offered – 160 GB SSD storage
Bandwidth – 5 terabytes
Memory – 20 Gigabytes
Price – $400/month billed annually
SiteGround’s Business cloud hosting is best for you if you have outgrown your shared hosting plan. For $200/month, you get 12 GB of memory, 80 GB SSD storage, and a bandwidth of 5 TB. 
SiteGround Custom Cloud
Aside from these fixed plans, SiteGround also offers Custom Cloud plans that allow you to tweak exactly how many CPU cores, how much memory, and storage space you want, without overspending.
These are super awesome if you need more resources than the Jump Start plan but less than the higher tiers – you can make fine adjustments to your resource demand.
Who this is for:
SiteGround’s cloud hosting plans are best for websites and stores that need more resources than shared hosting offers or have traffic that peaks at intervals – e.g. eCommerce stores during sales events or brands that sell seasonal products, web application providers, etc.
With cloud hosting, you have more control over the resources you use and how much you spend.
SiteGround’s WordPress hosting plans
StartUp
Space offered – 10 GB
Bandwidth – Unmetered traffic
Number of websites – 1 website allowed
Price – $3.99 per month billed annually
Extra features – Free WP installation, Free WP migrator, WordPress auto-updates, WP-CLI and SSH included, Managed WordPress hosting.
GrowBig
Space offered – 20 GB
Bandwidth – Unmetered traffic 
Number of websites – Unlimited websites allowed
Price – $6.69 per month billed annually
Extra features – Everything in StartUp plus access to on-demand backup copies, faster PHP, and a website staging tool built-in.
GoGeek
Space offered – 40 GB
Bandwidth – Unmetered traffic
Number of websites – Unlimited websites allowed
Price – $10.69/month billed annually
Extra features – Everything in GrowBig plus free private DNS and priority when requesting support.
As with their shared hosting plans, I recommend SiteGround’s GrowBig WordPress hosting plan if you’re just starting out with them. SiteGround’s WordPress plans are essentially their shared hosting plans plus WordPress-specific features.
Who this is for:
SiteGround’s WordPress plans are meant for… you guessed it!… websites built on – or that you intend to build on – WordPress. You get free WordPress website migration, automatic updates of the WordPress software, the WordPress command line (WP-CLI), and more, all in a comprehensive managed WordPress package.
SiteGround’s reseller hosting plans
GrowBig
Space offered – 20 GB storage
Features – Free WP installation, WordPress auto-updates, Free WP Migrator plugin, free SSL, CDN & email, WP-CLI & SSH
Price – $6.69/month billed annually
GoGeek
Space offered – 40 GB storage
Features – GrowBig features plus free private DNS and priority support
Price – $10.69/month billed annually
Cloud
Space offered – 40 GB storage
Features – GoGeek’s features plus access to customize client access and tailor your resources
Price – $100/month billed annually.
SiteGround’s GoGeek reseller hosting plan is excellent for new hosting resellers: 40 GB of storage, WordPress-specific features, free private DNS, and priority support, all for $10.69/month.
I like how their reseller hosting plans are pooled from their shared and cloud hosting plans. Their Cloud reseller hosting plan is essentially their basic cloud hosting plan, allowing you to configure your resource demand.
Who this is for:
SiteGround’s reseller hosting plans are designed for website developers, marketing agencies, IT professionals, and web hosting entrepreneurs who want to resell hosting plans under their own brand without needing to invest in actual physical data centers.
You can integrate these hosting plans as cross-sells to your core services to make more money from your clients.
SiteGround’s WooCommerce hosting
SiteGround’s WooCommerce hosting plans are an exact match with their WordPress plans. However, this time, we recommend the GoGeek plan priced at $10.69/month. On all plans you get WooCommerce pre-installed, built-in payment acceptance methods, a free CDN, a custom web application firewall (WAF), and other features adapted for WordPress eCommerce sites.
Who this is for:
SiteGround’s WooCommerce plans are best for business owners who want to run their eCommerce stores on WordPress. WooCommerce comes built-in and is one of the best WordPress eCommerce store-building tools.
SiteGround’s Features
Here’s an overview of the core features offered by SiteGround across its hosting plans:
SSD storage
Free website builders – WordPress and Weebly
Daily backups
WordPress acceleration features
Free email tools
Out-of-the-box caching
Renewable energy match
Free SSL and CDN
SiteGround offers many of the industry-standard features you’d expect from a world-class host, but one thing that makes them stand out is that their technology is built on Google Cloud, making them one of the swiftest and most reliable hosts on the market.
They are also one of the few hosts on the market that have ditched the traditional cPanel in favor of a custom control panel called Site Tools. The result? A dashboard that makes managing your website’s backend incredibly easy.
SiteGround Performance Tests
When considering a new web host, you should evaluate their real-life performance and not just assume the figures they claim on their websites. Some important performance parameters to consider include speed (average server response time), uptime, and overall performance in search engines.
A web host’s speed is how quickly their server starts to send back data after a user tries to visit a website hosted by them. The uptime measures how reliably the website is available online and is typically scored as a percentage – 99.9% is the industry standard. 
The lower the web host’s response time, the quicker your website will load, meaning less customer bounce and more potential conversions. And high uptime figures mean your website will experience little to no downtime.
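To put those uptime percentages in perspective, here’s a quick back-of-envelope sketch (plain Python, purely illustrative – the figures below are generic arithmetic, not SiteGround-specific guarantees):

```python
# Rough illustration: how much downtime per year a given uptime
# percentage allows. Simple arithmetic only – not a quote of any
# host's actual service-level agreement.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def max_downtime_hours(uptime_percent: float) -> float:
    """Hours of downtime per year permitted by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for uptime in (99.0, 99.9, 99.99):
    print(f"{uptime}% uptime -> up to {max_downtime_hours(uptime):.1f} hours of downtime per year")
```

In other words, the jump from 99% to 99.9% is the difference between roughly 88 hours and roughly 9 hours of potential downtime a year – which is why that extra decimal place matters.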
To save you the hassle of finding a website hosted on SiteGround, I did the hard work for you and used GTMetrix to measure the host’s average speed and performance. These were the results:
SiteGround is super impressive and their servers responded in as little as 92 milliseconds – 0.092 seconds. The overall performance of the website hosted on their platform was 95% which is also incredible.
To test their uptime, I used Uptime Robot and evaluated the website’s availability over the last 30 days:
Over the last 30 days, the website was available 100% of the time, proving their 99.99% uptime claim.
SiteGround’s Customer Support
SiteGround has a thriving customer support hub to answer and solve queries 24/7 via:
Live chat
SiteGround allows you to chat live with their agents 24/7. However, I was a bit disappointed because you’d have to be a paying customer before you can chat with their live agents. Many other web hosts allow you to chat with an agent and make inquiries before you choose a plan.
Phone support
SiteGround also offers different phone lines to reach their agents and talk in real-time. However, like their live chat, you must be a paying customer to see their phone lines and place a call. Also, the phone lines are limited to English.
An extensive knowledge base
It’s always great to see a knowledge base that answers questions on everything from how to use the host’s particular features to everything else like domain name issues, WordPress maintenance, the control panel, etc. – and SiteGround doesn’t disappoint.
The web host’s knowledge base page also has a built-in search engine to make finding the right resources as easy as possible.
Email support tickets
SiteGround also offers support via email. You can create support tickets on their website that will automatically be forwarded to your email where an agent will reach out to you.
A blog section
Want to stay up-to-date with industry news, marketing trends, tips, and strategies? SiteGround’s blog section is rich with informative posts you’ll surely find helpful.
WordPress tutorials
If you plan to build your website on WordPress, you’ll be spoilt silly with resources on SiteGround. The provider has several help portals specifically tailored for WordPress owners including a WordPress Tutorials page, a WordPress optimization ebook, a WordPress security ebook, and a WooCommerce ebook.
SiteGround’s security features
SiteGround takes your website’s security seriously – and you should carefully evaluate the security features offered by any host you are considering, to protect yourself from DDoS attacks, cross-site request forgery, malware, and other security threats.
On every one of SiteGround’s plans, you get their custom web application firewall (WAF) built-in with regularly updated security rules. SiteGround also boasts an AI-powered anti-bot system to protect your site from bad bots trying to scrape your website, take over your account, or steal card details via carding.
And on every plan, you get a free SSL certificate, giving you the ‘padlock’ seal of trust and ‘HTTPS’ badge that is now a Google ranking factor.
SiteGround’s website builders
Many website owners like you want to consolidate their web hosting, domain name registration, and website building from the same provider. While SiteGround doesn’t offer free domains, you can register domains via their platform.
And when it’s time to build your website, you can use their free website builders –  WordPress and Weebly offered free of charge on every package:
It’s important to note that these website builders are somewhat limited – well, they are free. If you want more functionality, you’ll have to pay for premium website builders like Wix or Squarespace.
SiteGround’s User-Friendliness
How easy is SiteGround to use – setting up an account, managing your website’s backend via the control panel, and installing WordPress? Let’s see:
Registering an account with SiteGround
To register an account with the hosting provider, you only need to pay for one of their hosting plans. 
Step 1:
Navigate to the hosting type you want and choose a package. Click on ‘Get plan’, and you’ll be redirected to the ‘specify your domain’ page:
Step 2:
If you have an existing domain you want to use, check the ‘Existing domain’ box and move forward. If you don’t, you can type in a new domain you are interested in; if it’s available, you’ll move on to the checkout page with the domain price added to your overall cost.
Step 3:
Here, you type in your email and confirm your password. Next, fill in your personal details, address, and company name. Finally, fill in your payment details and choose your payment schedule – monthly, annually, or every 2 years:
You can also choose the Site Scanner as an addon that runs daily scans on your website to detect and quarantine malware.
Step 4:
Finally, agree to SiteGround’s policies and click ‘Pay now’. And voila! Your account is created, and you can enjoy the web host’s support and services to the fullest.
SiteGround’s intuitive Control Panel
SiteGround is one of the few hosts on the market who have dared to ditch the traditional cPanel and build their own intuitive custom control panel – Site Tools.
And boy, was I impressed! Site Tools is very easy to use, even for web hosting newbies. From the dashboard, you can install WordPress in a minute, set up email accounts, manage your website files, and access the Site Scanner and the content delivery network.
One really fun feature of Site Tools is the built-in analytics tool:
You can see how many unique visitors your domain gets per day, along with your page views.
From the left menu bar, you can access every setting related to your website, security, loading speed, domain name, email, and – if you are no stranger to code – developer tools.
How to install WordPress on SiteGround
Installing WordPress on SiteGround is incredibly easy and the quickest way is using Site Tools. Simply click on ‘Install and manage WordPress’ on your dashboard:
Next, choose whether you want to create a WordPress website alone or WordPress + WooCommerce. 
Now, fill in your domain name, select your language, and specify the installation path. Choose a username and password and click ‘Install’:
And in just one click, WordPress will be installed on your website:
Server footprint and CDN
You should also consider the server footprint and content delivery network (CDN) on any web host. The more servers your web host has and the more widely spread they are, the quicker your average website load speed will be for website visitors globally.
On the other hand, using a content delivery network (CDN), your web host caches your website’s data and distributes it across a network of servers to improve your website load speed.
Websites hosted on SiteGround have access to a network of data centers spread across four continents. This network is further extended by 170 CDN edge locations around the world. SiteGround’s website caching solution is powered by Google Cloud – no wonder their performance is so impressive.
Conclusion: Should you choose SiteGround?
SiteGround is an incredible web hosting provider and one of the best on the market – and thousands of customer reviews sing its praises. I recommend SiteGround as your next hosting provider, whether you are creating a new website or looking for better service than your current provider offers.
Their server response speeds and reliability are simply unmatched. And if you are new to web hosting, you’ll find Site Tools incredibly easy to use to manage your website’s backend. 
SiteGround gives you everything you need to get your website running from scratch – well almost. You’ll have to pay extra to register a domain name. Their Site Scanner tool also comes with a fee. However, we recommend SiteGround!
FAQs
Does SiteGround offer free hosting?
SiteGround doesn’t offer a free web hosting package like some other hosts do – Hostinger, for example. Their most basic shared hosting plan, StartUp, is priced at $3.99/month, billed annually.
Is WordPress free with SiteGround?
On every SiteGround plan – and especially their WordPress hosting plans – you get WordPress for free. You also get a free WordPress migrator and WP-CLI built in.
Which one is better, Bluehost or SiteGround?
Bluehost and SiteGround are both WordPress-endorsed hosts, so this is a tough matchup. In terms of performance, SiteGround trumps Bluehost. On the other hand, Bluehost’s plans are cheaper and start at $2.95/month. Plus, Bluehost’s platform is easy to use.
Overall, I think SiteGround is a bit better than Bluehost.
What are some great SiteGround alternatives?
Some excellent SiteGround alternatives – if you are looking for cheaper packages, free hosting, or easier-to-use backends – include Hostinger, Bluehost, Cloudways, and A2 Hosting.
Pocket-Sized Powerhouse: Unveiling Microsoft’s Phi-3, the Language Model That Fits in Your Phone
New Post has been published on https://thedigitalinsider.com/pocket-sized-powerhouse-unveiling-microsofts-phi-3-the-language-model-that-fits-in-your-phone/
In the rapidly evolving field of artificial intelligence, while the trend has often leaned towards larger and more complex models, Microsoft is adopting a different approach with its Phi-3 Mini. This small language model (SLM), now in its third generation, packs the robust capabilities of larger models into a framework that fits within the stringent resource constraints of smartphones. With 3.8 billion parameters, the Phi-3 Mini matches the performance of large language models (LLMs) across various tasks including language processing, reasoning, coding, and math, and is tailored for efficient operation on mobile devices through quantization.
Challenges of Large Language Models
The development of Microsoft’s Phi SLMs is in response to the significant challenges posed by LLMs, which require more computational power than typically available on consumer devices. This high demand complicates their use on standard computers and mobile devices, raises environmental concerns due to their energy consumption during training and operation, and risks perpetuating biases with their large and complex training datasets. These factors can also impair the models’ responsiveness in real-time applications and make updates more challenging.
Phi-3 Mini: Streamlining AI on Personal Devices for Enhanced Privacy and Efficiency
The Phi-3 Mini is strategically designed to offer a cost-effective and efficient alternative for integrating advanced AI directly onto personal devices such as phones and laptops. This design facilitates faster, more immediate responses, enhancing user interaction with technology in everyday scenarios.
Phi-3 Mini enables sophisticated AI functionalities to be processed directly on mobile devices, which reduces reliance on cloud services and enhances real-time data handling. This capability is pivotal for applications that require immediate data processing, such as mobile healthcare, real-time language translation, and personalized education, driving advancements in these fields. The model’s cost-efficiency not only reduces operational costs but also expands the potential for AI integration across various industries, including emerging markets like wearable technology and home automation.
Because Phi-3 Mini processes data directly on local devices, it also boosts user privacy – which could be vital for managing sensitive information in fields such as personal health and financial services. Moreover, the model’s low energy requirements contribute to environmentally sustainable AI operations, aligning with global sustainability efforts.
Design Philosophy and Evolution of Phi
Phi’s design philosophy is based on the concept of curriculum learning, which draws inspiration from the educational approach where children learn through progressively more challenging examples. The main idea is to start training the AI with easier examples and gradually increase the complexity of the training data as the learning process progresses. Microsoft has implemented this educational strategy by building a dataset from textbooks, as detailed in their study “Textbooks Are All You Need.”
The Phi series was launched in June 2023, beginning with Phi-1, a compact model with 1.3 billion parameters. This model quickly demonstrated its efficacy, particularly in Python coding tasks, where it outperformed larger, more complex models. Building on this success, Microsoft later developed Phi-1.5, which maintained the same number of parameters but broadened its capabilities in areas like common-sense reasoning and language understanding. The series reached new heights with the release of Phi-2 in December 2023. With 2.7 billion parameters, Phi-2 showcased impressive skills in reasoning and language comprehension, positioning it as a strong competitor against significantly larger models.
Phi-3 vs. Other Small Language Models
Expanding upon its predecessors, Phi-3 Mini extends the advancements of Phi-2 by surpassing other SLMs, such as Google’s Gemma, Mistral’s Mistral, Meta’s Llama3-Instruct, and GPT 3.5, in a variety of industrial applications. These applications include language understanding and inference, general knowledge, common sense reasoning, grade school math word problems, and medical question answering, showcasing superior performance compared to these models.
The Phi-3 Mini has also undergone offline testing on an iPhone 14 for various tasks, including content creation and providing activity suggestions tailored to specific locations. For this purpose, Phi-3 Mini has been condensed to 1.8GB using a process called quantization, which optimizes the model for limited-resource devices by converting the model’s numerical data from 32-bit floating-point numbers to more compact formats like 4-bit integers. This not only reduces the model’s memory footprint but also improves processing speed and power efficiency, which is vital for mobile devices. Developers typically utilize frameworks such as TensorFlow Lite or PyTorch Mobile, incorporating built-in quantization tools to automate and refine this process.
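To see why quantization shrinks the model so dramatically, here is a quick back-of-envelope sketch (plain Python, purely illustrative). The parameter count comes from the article; the real on-device figure of roughly 1.8GB differs slightly because of metadata, runtime buffers, and any layers kept at higher precision.

```python
# Approximate size of the raw weights of a 3.8B-parameter model at
# different numeric precisions. Illustrative arithmetic only.

PARAMS = 3.8e9  # reported parameter count of Phi-3 Mini

def weight_footprint_gb(bits_per_param: int) -> float:
    """Size of the weights alone, in gigabytes, at a given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for label, bits in [("32-bit float", 32), ("16-bit float", 16), ("8-bit int", 8), ("4-bit int", 4)]:
    print(f"{label}: ~{weight_footprint_gb(bits):.1f} GB")
```

Going from 32-bit floats (about 15 GB of weights) down to 4-bit integers (about 1.9 GB) is what makes it plausible to hold the model in a phone’s memory at all.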
Feature Comparison: Phi-3 Mini vs. Phi-2 Mini
Below, we compare some of the features of Phi-3 with its predecessor Phi-2.
Model Architecture: Phi-2 operates on a transformer-based architecture designed to predict the next word. Phi-3 Mini also employs a transformer decoder architecture but aligns more closely with the Llama-2 model structure, using the same tokenizer with a vocabulary size of 320,641. This compatibility ensures that tools developed for Llama-2 can be easily adapted for use with Phi-3 Mini.
Context Length: Phi-3 Mini supports a context length of 8,000 tokens, which is considerably larger than Phi-2’s 2,048 tokens. This increase allows Phi-3 Mini to manage more detailed interactions and process longer stretches of text.
Running Locally on Mobile Devices: Phi-3 Mini can be compressed to 4-bits, occupying about 1.8GB of memory, similar to Phi-2. It was tested running offline on an iPhone 14 with an A16 Bionic chip, where it achieved a processing speed of more than 12 tokens per second, matching the performance of Phi-2 under similar conditions.
Model Size: With 3.8 billion parameters, Phi-3 Mini has a larger scale than Phi-2, which has 2.7 billion parameters. This reflects its increased capabilities.
Training Data: Unlike Phi-2, which was trained on 1.4 trillion tokens, Phi-3 Mini has been trained on a much larger set of 3.3 trillion tokens, allowing it to achieve a better grasp of complex language patterns.
Addressing Phi-3 Mini’s Limitations
While the Phi-3 Mini demonstrates significant advancements in the realm of small language models, it is not without its limitations. A primary constraint of the Phi-3 Mini, given its smaller size compared to massive language models, is its limited capacity to store extensive factual knowledge. This can impact its ability to independently handle queries that require a depth of specific factual data or detailed expert knowledge. This however can be mitigated by integrating Phi-3 Mini with a search engine. This way the model can access a broader range of information in real-time, effectively compensating for its inherent knowledge limitations. This integration enables the Phi-3 Mini to function like a highly capable conversationalist who, despite a comprehensive grasp of language and context, may occasionally need to “look up” information to provide accurate and up-to-date responses.
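As a rough illustration of that idea, the sketch below wires a small model to a search step so it answers from retrieved snippets rather than from memory alone. Note that web_search and phi3_mini_generate are hypothetical stand-ins – placeholders for a real search API and a local Phi-3 Mini runtime – not functions from any specific library.

```python
# Minimal sketch of search-augmented generation: the small model is asked
# to answer using retrieved context instead of relying only on what it
# memorized during training. Both helpers below are placeholders.

def web_search(query: str, k: int = 3) -> list:
    """Hypothetical search call: return the top-k text snippets for the query."""
    return [f"[snippet {i + 1} about: {query}]" for i in range(k)]

def phi3_mini_generate(prompt: str) -> str:
    """Hypothetical local-model call: run the SLM on the prompt and return its reply."""
    return "(model answer grounded in the snippets above)"

def answer_with_lookup(question: str) -> str:
    snippets = web_search(question)
    context = "\n".join(snippets)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return phi3_mini_generate(prompt)

print(answer_with_lookup("What did the latest Phi model announcement cover?"))
```

The pattern is the same one used in retrieval-augmented systems: keep the model small, and let the lookup step supply the facts.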
Availability
Phi-3 is now available on several platforms, including Microsoft Azure AI Studio, Hugging Face, and Ollama. On Azure AI, the model incorporates a deploy-evaluate-finetune workflow, and on Ollama, it can be run locally on laptops. The model has been tailored for ONNX Runtime and supports Windows DirectML, ensuring it works well across various hardware types such as GPUs, CPUs, and mobile devices. Additionally, Phi-3 is offered as a microservice via NVIDIA NIM, equipped with a standard API for easy deployment across different environments and optimized specifically for NVIDIA GPUs. Microsoft plans to further expand the Phi-3 series in the near future by adding the Phi-3-small (7B) and Phi-3-medium (14B) models, providing users with additional choices to balance quality and cost.
The Bottom Line
Microsoft’s Phi-3 Mini is making significant strides in the field of artificial intelligence by adapting the power of large language models for mobile use. This model improves user interaction with devices through faster, real-time processing and enhanced privacy features. It minimizes the need for cloud-based services, reducing operational costs and widening the scope for AI applications in areas such as healthcare and home automation. With a focus on reducing bias through curriculum learning and maintaining competitive performance, the Phi-3 Mini is evolving into a key tool for efficient and sustainable mobile AI, subtly transforming how we interact with technology daily.
Integrating Artificial Intelligence and Behavioral Economics: New Frontiers in Decision-Making
New Post has been published on https://thedigitalinsider.com/integrating-artificial-intelligence-and-behavioral-economics-new-frontiers-in-decision-making/
The recent passing of Nobel laureate Daniel Kahneman, a pioneer in blending psychological research with economics, especially in understanding how people make decisions under uncertainty, prompts a moment of reflection in both academic and business circles. Kahneman and Vernon L. Smith’s groundbreaking work laid the foundation for understanding the complex interplay of heuristics and biases in economic decisions, a legacy that continues to influence emerging fields.
At the turn of the millennium, when Kahneman received the Nobel Prize, artificial intelligence was still nascent in its development. Yet, in a prescient statement made a few years before his passing, Kahneman foresaw the profound implications of advanced AI on leadership and decision-making, posing the question, “Once it’s demonstrably true that you can have an AI that has far better business judgment, what will that do to human leadership?” This question underscores the transformative potential of AI in reshaping decision-making processes by integrating insights from behavioral economics.
In the rapidly evolving and intricately complex landscape of today’s business world, the art and science of decision-making stand as a paramount differentiator, often yielding winners and losers. Yet these critical decisions are besieged by the challenges of navigating through the dense fog of human emotion, bias, and irrationality. Traditional decision-making models, anchored in rational choice theory, which were challenged by Kahneman, frequently overlook these subtle yet powerful influences. It is within this context that the convergence of AI and behavioral economics emerges as a revolutionary force, promising to redefine the foundations of decision-making for business leaders.
Behavioral economics brings to light the role of heuristics—cognitive shortcuts that streamline decision-making at the expense of accuracy. These mental shortcuts are a breeding ground for biases, such as overconfidence, sunk cost, and loss aversion, which can skew judgment and impact organizational outcomes. Artificial intelligence, with its unmatched capacity for data analysis, presents a novel solution for dissecting and understanding these biases. By sifting through extensive datasets, AI can unveil patterns in decision-making that remain opaque to human observation, offering a new lens through which to view the cognitive biases that shape our choices.
The practical implications of this synergy between AI and behavioral economics are vast and varied. AI systems, informed by behavioral insights, can guide financial analysts away from biased conservative strategies, propel HR platforms to counteract unconscious bias in recruitment, implement marketing campaigns based on patterns influenced by behavioral tendencies, and much more. These are not speculative scenarios but attainable realities that leverage the predictive power of AI to inform more nuanced and effective decision-making strategies.
However, the path to integrating AI with behavioral economics is strewn with challenges, particularly the ethical quandaries presented by human biases in AI development. The creation of AI technologies is intrinsically linked to human knowledge and, by extension, our biases. These predispositions can inadvertently influence AI algorithms, perpetuating and even amplifying biases on a scale previously unimaginable.
Addressing these ethical concerns necessitates a multifaceted approach. It calls for the establishment of robust ethical frameworks, the cultivation of diverse development teams, and a commitment to transparency throughout the AI development process. Furthermore, AI systems must be capable of continuous learning, adapting not only to new data but also to evolving ethical standards and societal expectations.
The integration of AI and behavioral economics holds the promise of a new era of decision-making, one that harnesses the power of technology to illuminate and mitigate the biases that cloud human judgment. As we advance into this uncharted territory, guided by the legacy of visionaries like Kahneman, our success will hinge on our ability to navigate the ethical complexities inherent in this integration.
By embracing diversity, ensuring transparency, and fostering an environment of continuous adaptation, we can unlock AI’s full potential to enhance decision-making in a manner that is both innovative and ethically sound. This journey is not merely a technological endeavor but a moral imperative, paving the way for a future where AI and human insight converge to create a smarter, more just, and ethically informed business landscape.
OpenAI faces complaint over fictional outputs
New Post has been published on https://thedigitalinsider.com/openai-faces-complaint-over-fictional-outputs/
European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.
“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 
“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”
The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.
“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.
The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.
“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.
OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without preventing ChatGPT from filtering all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.
“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”
European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.
In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.
You can read the full complaint here (PDF)
(Photo by Eleonora Francesca Grotto)
See also: Igor Jablokov, Pryon: Building a responsible AI future
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, artificial intelligence, ethics, eu, europe, european union, gdpr, government, hallucinations, large language model, law, legal, noyb, openai, privacy, Society
Things That Keep the Grumpy Designer Awake at Night
New Post has been published on https://thedigitalinsider.com/things-that-keep-the-grumpy-designer-awake-at-night/
I’ve learned many lessons in my years as a grumpy designer. One is to try and separate work from the rest of your life. It’s a healthy practice – one that leads to (slightly) less stress.
But, try as I might, certain things stick with me. Things that stay at the front of my mind all day and night. This vicious cycle results in less sleep and more grump. Yes, that’s wonderful for inspiring columns. Not so good for the soul, though.
I’m willing to bet that others are facing the same issue. The growing complexity of web design is making it harder to relax.
Feeling better starts with sharing. So, allow me to dig into the depths of my psyche. The following is a look at web-related issues that keep me up at night. Make a cup of coffee and join me on this nerve-wracking journey.
The Never-Ending Quest for Web Security
Security has long been a thorn in our sides. We can build websites in any number of ways. However, they all seem to be a target for malicious actors.
I work primarily with WordPress. I love the flexibility it offers. But securing these sites is a constant battle.
Hackers have numerous points of attack. They might take advantage of a plugin vulnerability. Or they might crack a weak password. They’re even stealing session cookies these days.
WordPress isn’t alone in the struggle for security. But working with it each day seems to magnify the issue. It has become a constant presence in my mind.
Sometimes, the situation feels hopeless. You plug one security hole – only to see another one pop up. Cleaning up a hacked site is tedious at best. Plus, the thought of data theft is enough to make anyone nervous.
Perhaps the answer lies in not going it alone. Web security is a vast subject. Threats continue to evolve. Thus, working with expert tools (and humans) is worth the price.
Even so, security issues make it harder to fall asleep.
The Always-on Work Culture of a Web Designer
Remember my goal of separating work and life? I’m terrible at it. Sure, I do well enough during slow times. But I drown when things get busy.
The web industry has a 24/7 work culture that’s hard to escape. A website won’t wait until business hours to break. Most clients won’t consider the clock when making a request, either.
It used to be easier to get away. Before smartphones, you could leave your desk and inbox behind.
I can recall vacationing in places that had no internet access. I could go an entire week without email. How quaint!
Good luck avoiding your inbox these days. You’ll need self-discipline and clients who can temporarily live without you.
Yes, I try to turn my brain off. I’ll even abstain from replying to an email – for a while. Eventually, my brain gets the best of me. Things stay on my mind until I address them. So, why not respond?
That makes sense on the surface. It doesn’t lead to much peace after hours, though.
The Things Out of My Control
Web designers can only control so much. Security is one example – but there are others. Modern websites tend to rely on third-party providers.
That covers everything from web hosting to SaaS (software as a service) to plugins. We may get to choose which tools to use. But we must also trust them to deliver.
What happens when something goes wrong? We might be able to contact a support person. However, some providers take days to respond. Plus, some companies are using chatbots as their first point of contact. Navigating these tools is no picnic.
The result leaves us stuck in the middle. Our clients want to know what’s going on. Meanwhile, we can only rely on what the provider tells us. A lack of communication can be frustrating and worrisome.
It’s about more than downtime, though. Sometimes, a product makes a significant change that impacts your website. Things may not work the way we (or our clients) expect. That leaves us scrambling to figure it out.
Gmail’s recent bulk-sender policy changes are an example. The change’s impact went beyond my expectations. That led to a lot of rushing around to fix email deliverability issues.
Sure, we can try to prepare for the inevitable. But sometimes, all we can do is react.
The Expectations of Clients and Myself
Expectations can keep any web designer up at night. Clients are asking more from us. They want high-end features in exchange for bargain-bin pricing.
That leads us on a wild goose chase. The quest to be faster, cheaper, and better. How do we squeeze in more projects in the same timeframe?
The expectations we have for ourselves are also a burden. I pride myself on getting things done. I want to create the layouts, pick the colors, and write the code. It’s the way I’ve done things for over two decades.
That’s becoming harder, though. The right tools can help. But there’s still a massive responsibility to do the job right.
Part of this may be cultural. Growth is expected and encouraged. After all, who wants to stay the same?
We don’t prioritize comfort nearly enough. Doing so may be perceived as accepting the status quo. Nobody wants to look like they’re stagnating.
All told, this adds to the pressure we feel. We must move onward and upward, regardless of the consequences.
Making Sense and Making Peace
So, what lessons have I learned? That was the point of writing this down, right?
I think web designers need to create boundaries – and stick to them. Otherwise, it’s too easy to get pulled into that vicious cycle. It’s hard – but better than the alternative.
Self-forgiveness is also a factor. It’s OK if you don’t know how to do something. There’s no shame in needing extra time to complete a project.
Sometimes, we’re harder on ourselves than any client could be. So, permit yourself to be imperfect. Give yourself some grace. None of us go through life without experiencing adversity.
Finally, don’t let your job become your only source of identity. It took me a while to understand that advice. But we all need time away from the online world.
Will the things above still keep me awake? I’m betting that they will. Perhaps it’s better to accept it instead of fighting it. Tomorrow can always be better.
5 Best AI Apps for Couples (April 2024)
New Post has been published on https://thedigitalinsider.com/5-best-ai-apps-for-couples-april-2024/
In the age of artificial intelligence, couples are discovering innovative ways to strengthen their relationships and foster deeper connections. From AI-powered dating apps that prioritize compatibility to virtual relationship coaches offering personalized guidance, technology is changing the way couples navigate the complexities of love and partnership.
In this blog post, we’ll explore the top AI apps designed to help couples enhance communication, build intimacy, and create lasting bonds.
Image: Flamme
Flamme is an innovative platform that goes beyond traditional dating apps by offering personalized advice and insights to help couples strengthen their bond and overcome challenges. With its advanced AI algorithms, Flamme analyzes user data to provide tailored recommendations that cater to each couple’s unique needs and preferences.
What sets Flamme apart is its comprehensive approach to relationship support. The app not only helps users navigate the early stages of dating but also offers guidance for maintaining long-term relationships. From communication tips to date night ideas, Flamme has it all covered. The AI-powered love guru acts as a virtual relationship coach, empowering couples to grow together and build a stronger connection.
Key Features of Flamme:
Personalized Relationship Roadmap: Flamme creates a customized plan for each couple based on their goals, personalities, and relationship history.
Emotion Analyzer: The app’s AI technology can detect emotional cues in conversations, helping users better understand their partner’s feelings and needs.
Date Night Generator: Flamme suggests creative and engaging date ideas tailored to each couple’s interests and preferences.
Relationship Health Monitor: The app tracks key indicators of relationship health and provides proactive advice to help couples address potential issues before they escalate.
Image: Maia
Maia takes a fresh approach to couples’ apps by focusing on the power of meaningful conversations. This innovative platform uses AI-driven prompts to encourage partners to engage in deep, thought-provoking discussions that foster emotional intimacy and understanding.
One of Maia’s standout features is its ability to learn and adapt to each couple’s unique dynamics. The more users interact with the app, the more personalized the prompts and suggestions become. This ensures that every couple receives a tailored experience that addresses their specific needs and challenges.
Key Features of Maia:
AI-Powered Conversation Starters: Maia generates thought-provoking questions and topics to help couples engage in meaningful discussions.
Relationship Insights: The app analyzes user interactions to provide valuable insights into communication patterns, emotional needs, and areas for growth.
Mood Tracker: Maia helps users track their emotional well-being and provides personalized suggestions for self-care and relationship maintenance.
Relationship Milestones: The app celebrates important milestones and achievements, fostering a sense of progress and shared accomplishment.
Ringi is an AI app that aims to simplify relationship maintenance for busy couples. With just five minutes of daily interaction, Ringi provides a convenient and effective way to nurture and strengthen your bond with your partner. The app’s intuitive interface and personalized features make it easy to integrate into your daily routine, ensuring that your relationship remains a top priority.
What makes Ringi unique is its focus on practicality and efficiency. The app understands that modern couples often struggle to find time for lengthy relationship exercises or therapy sessions. By condensing key relationship-building activities into bite-sized, five-minute interactions, Ringi ensures that even the busiest couples can invest in their partnership. The app’s AI-powered tools adapt to your specific needs, providing targeted advice and activities that maximize the impact of every interaction.
Key Features of Ringi:
Five-Minute Relationship Booster: Ringi offers a daily five-minute activity designed to strengthen your connection with your partner.
AI-Powered Advice: The app’s AI technology provides personalized guidance and insights based on your unique relationship dynamics.
Relationship Health Snapshot: Ringi offers a quick and easy way to assess the overall health of your relationship, identifying areas for improvement.
Progress Tracker: The app tracks your relationship growth over time, celebrating your successes and helping you stay motivated.
Multi-Language Support: Ringi is available in English and Japanese, with plans to expand to other languages in the future.
Image: Relish
Relish is a relationship coaching app that harnesses the power of AI to help couples strengthen their bond and overcome challenges. Designed by a team of experienced relationship experts, Relish combines AI-driven analysis with engaging quizzes, activities, and personalized guidance to provide couples with the tools they need to build a thriving partnership.
At the heart of Relish’s approach is its ability to assess a couple’s communication patterns and interactions using advanced AI technology. By analyzing this data, the app generates tailored advice and exercises that address each couple’s specific needs and challenges. From improving conflict resolution skills to increasing intimacy and maintaining a healthy relationship, Relish offers comprehensive support every step of the way.
Key Features of Relish:
Expert-Crafted Content: Relish’s quizzes, activities, and educational materials are designed by licensed relationship therapists and coaches, ensuring a solid foundation in proven relationship science.
Personalized Coaching: The app’s AI-powered coaching adapts to each couple’s unique dynamics, providing targeted guidance and support.
Accessible and Affordable: Relish makes professional relationship support more accessible and affordable compared to traditional in-person couples therapy.
Couples’ Engagement: Interactive features encourage couples to complete exercises and activities together, fostering a sense of teamwork and strengthening their bond.
Iris Dating AI is an innovative dating app that takes a fresh approach to helping couples find their perfect match. By prioritizing physical attraction and compatibility, Iris aims to create more meaningful connections between users. The app’s advanced AI algorithms analyze user preferences, physical attributes, and compatibility factors to suggest potential matches, ensuring a higher likelihood of mutual attraction and long-term success.
What sets Iris Dating AI apart is its unique “training” process for new users. Upon joining the app, users are shown images of potential matches and asked to indicate their level of interest. This data is then used by the AI to refine the user’s preferences and provide more accurate match recommendations. By emphasizing mutual attraction from the start, Iris reduces the chances of mismatches and dead-end conversations, saving users time and frustration.
Key Features of Iris Dating:
AI-Powered Matchmaking: Iris’s advanced algorithms analyze user data to suggest highly compatible matches based on physical attraction and shared interests.
Attraction-Based Training: New users undergo a “training” process to help the AI better understand their preferences and refine match recommendations.
Streamlined Interface: The app’s user-friendly design allows for quick and easy identification of mutual interest, facilitating more efficient connections.
Compatibility Assessments: Iris evaluates a range of factors to ensure matched couples have a strong foundation for a lasting relationship.
Success Stories: The app showcases real-life couples who found love through Iris, inspiring users and demonstrating the effectiveness of its approach.
The Power of AI in Strengthening Relationships
As technology continues to advance, AI-powered apps like the ones in this blog are changing the way couples approach relationship building and maintenance. By leveraging the power of artificial intelligence, these innovative tools provide personalized insights, tailored advice, and engaging experiences that help couples deepen their connection and navigate the complexities of modern relationships.
From facilitating meaningful conversations and providing expert guidance to streamlining the dating process and offering accessible coaching, these AI apps are empowering couples to invest in their partnerships and build stronger, more resilient bonds. As more couples embrace the potential of AI-driven relationship support, we can expect to see a new era of more fulfilling, successful, and long-lasting relationships.
Nobody Likes a Know-It-All: Smaller LLMs are Gaining Momentum
New Post has been published on https://thedigitalinsider.com/nobody-likes-a-know-it-all-smaller-llms-are-gaining-momentum/
Phi-3 and OpenELM, two major small model releases this week.
Created Using Ideogram
Next Week in The Sequence:
Edge 391: Our series about autonomous agents continues with the fascinating topic of function calling. We explore UC Berkeley’s research on LLMCompiler for function calling and we review the PhiData framework for building agents.
Edge 392: We dive into RAFT, UC Berkeley’s technique for improving RAG scenarios.
📝 Editorial: Nobody Likes a Know-It-All: Smaller LLMs are Gaining Momentum
Last year, Microsoft coined the term ‘small language model’ (SLM) following the publication of the influential paper ‘Textbooks Are All You Need’, which introduced the initial Phi model. Since then, there has been a tremendous market uptake in this area, and SLMs are starting to make inroads as one of the next big things in generative AI.
The case for SLMs is pretty clear. Massively large foundation models are likely to dominate generalist use cases, but they remain incredibly expensive to run, plagued with hallucinations, security vulnerabilities, and reliability issues when applied in domain-specific scenarios. Add to that environments such as mobile or IoT, which are computation-constrained by definition. SLMs are likely to fill that gap in the market with hyper-specialized models that are more secure and affordable to execute. This week we had two major developments in the SLM space:
Microsoft released the Phi-3 family of models. Although not that small anymore at 3.8 billion parameters, Phi-3 continues to outperform much larger alternatives. The model also boasts an impressive 128k token window. Again, not that small, but small enough 😉
Apple open-sourced OpenELM, a family of LLMs optimized for mobile scenarios. Obviously, OpenELM has raised speculations about Apple’s ambitions to incorporate native LLM capabilities in the iPhone.
Large foundation models have commanded the narrative in generative AI and will continue to do so while the scaling laws hold. But SLMs are certainly going to capture an important segment of the market. After all, nobody likes a know-it-all ;)
🔎 ML Research
Phi-3
Microsoft Research published the technical report of Phi-3, their famous small language model that excels at math and computer science tasks. The new models are not that small anymore, with phi-3-mini at 3.8B parameters and phi-3-small and phi-3-medium at 7B and 14B parameters respectively —> Read more.
The Instruction Hierarchy
OpenAI published a paper introducing the instruction hierarchy, which defines model behavior when confronting conflicting instructions. The method has profound implications for LLM security scenarios such as preventing prompt injections, jailbreaks, and other attacks —> Read more.
MAIA
Researchers from MIT published a paper introducing the Multimodal Automated Interpretability Agent (MAIA), an AI agent that can design experiments to answer queries about other AI models. The method is an interesting approach to interpretability, probing generative AI models to understand their behavior —> Read more.
LayerSkip
Meta AI Research published a paper introducing LayerSkip, a method for accelerated inference in LLMs. The method introduces modifications to both the pretraining and inference processes of LLMs, as well as a novel decoding solution —> Read more.
Gecko
Google DeepMind published a paper introducing Gecko, a new benchmark for text-to-image models. Gecko is structured as a skill-based benchmark that can discriminate between models across different human templates —> Read more.
🤖 Cool AI Tech Releases
OpenELM
Apple open sourced OpenELM, a family of small LLMs optimized to run on devices —> Read more.
Arctic
Snowflake open sourced Arctic, an MoE model specialized in enterprise workloads such as SQL, coding and RAG —> Read more.
Meditron
Researchers from EPFL’s School of Computer and Communication Sciences and Yale School of Medicine released Meditron, an open source family of models tailored to the medical field —> Read more.
Cohere Toolkit
Cohere released a new toolkit to accelerate generative AI app development —> Read more.
Penzai
Google DeepMind open sourced Penzai, a research toolkit for editing and visualizing neural networks and injecting custom logic —> Read more.
🛠 Real World ML
Fixing Code Builds
Google discusses how they trained a model to predict and apply build fixes —> Read more.
Data Science Teams at Lyft
Lyft shared some of the best practices and processes followed for building its data science teams —> Read more.
📡AI Radar
Perplexity announced it has raised $63 million at an over $1 billion valuation.
Elon Musk’s xAI is closing in on a $6 billion valuation.
Microsoft and Alphabet beat Wall Street expectations with strong earnings fueled by AI adoption.
NVIDIA is acquiring AI infrastructure startup Run:ai for a reported $700 million.
Cognition, the startup behind coding assistant Devin, raised a $175 million round at a $2 billion valuation.
Salesforce released Einstein Copilot Actions to bring actionability to its AI platform.
Adobe introduced Firefly 3 with new image generation capabilities.
Higher-than-expected AI investments had a negative impact on Meta’s earnings report.
Augment emerged from stealth mode with a monster $227 million round.
AI-biotech company Xaira Therapeutics launched with $1 billion in funding.
AI sales platform Nooks raised $22 million.
Snorkel AI announced major generative AI updates to its Snorkel Flow platform.
Flex AI raised $30 million for a new AI compute platform.
The OpenAI Fund closed a $15 million tranche.
0 notes
jcmarchi · 1 day
Text
Exploring the history of data-driven arguments in public life
New Post has been published on https://thedigitalinsider.com/exploring-the-history-of-data-driven-arguments-in-public-life/
Exploring the history of data-driven arguments in public life
Political debates today may not always be exceptionally rational, but they are often infused with numbers. If people are discussing the economy or health care or climate change, sooner or later they will invoke statistics.
It was not always thus. Our habit of using numbers to make political arguments has a history, and William Deringer is a leading historian of it. Indeed, in recent years Deringer, an associate professor in MIT’s Program in Science, Technology, and Society (STS), has carved out a distinctive niche through his scholarship showing how quantitative reasoning has become part of public life.
In his prize-winning 2018 book “Calculated Values” (Harvard University Press), Deringer identified a time in British public life from the 1680s to the 1720s as a key moment when the practice of making numerical arguments took hold — a trend deeply connected with the rise of parliamentary power and political parties. Crucially, freedom of the press also expanded, allowing greater scope for politicians and the public to have frank discussions about the world as it was, backed by empirical evidence.
Deringer’s second book project, in progress and under contract to Yale University Press, digs further into a concept from the first book — the idea of financial discounting. This is a calculation to estimate what money (or other things) in the future is worth today, to assign those future objects a “present value.” Some skilled mathematicians understood discounting in medieval times; its use expanded in the 1600s; today it is very common in finance and is the subject of debate in relation to climate change, as experts try to estimate ideal spending levels on climate matters.
“The book is about how this particular technique came to have the power to weigh in on profound social questions,” Deringer says. “It’s basically about compound interest, and it’s at the center of the most important global question we have to confront.”
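For readers who want to see the arithmetic behind the concept, here is a minimal sketch of the discounting calculation described above; the interest rate and cash flow are invented purely for illustration.

```python
# Minimal sketch of financial discounting (illustrative numbers only).
def present_value(future_value: float, annual_rate: float, years: int) -> float:
    """Discount an amount received in the future back to a 'present value' today."""
    return future_value / (1 + annual_rate) ** years

# e.g. 1,000 received in 10 years, discounted at a hypothetical 5% per year
print(round(present_value(1_000, 0.05, 10), 2))  # ~613.91
```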
Numbers alone do not make a debate rational or informative; they can be false, misleading, used to entrench interests, and so on. Indeed, a key theme in Deringer’s work is that when quantitative reasoning gains more ground, the question is why, and to whose benefit. In this sense his work aligns with the long-running and always-relevant approach of the Institute’s STS faculty, in thinking carefully about how technology and knowledge are applied to the world.
“The broader culture has become more attuned to STS; whether it’s conversations about AI or algorithmic fairness or climate change or energy, these are simultaneously technical and social issues,” Deringer says. “Teaching undergraduates, I’ve found the awareness of that at MIT has only increased.” For both his research and teaching, Deringer received tenure from MIT earlier this year.
Dig in, work outward
Deringer has been focused on these topics since he was an undergraduate at Harvard University.
“I found myself becoming really interested in the history of economics, the history of practical mathematics, data, statistics, and how it came to be that so much of our world is organized quantitatively,” he says.
Deringer wrote a college thesis about how England measured the land it was seizing from Ireland in the 1600s, and then, after graduating, went to work in the finance sector, which gave him a further chance to think about the application of quantification to modern life.
“That was not what I wanted to do forever, but for some of the conceptual questions I was interested in, the societal life of calculations, I found it to be a really interesting space,” Deringer says.
He returned to academia by pursuing his PhD in the history of science at Princeton University. There, in his first year of graduate school, in the archives, Deringer found 18th-century pamphlets about financial calculations concerning the value of stock involved in the infamous episode of speculation known as the South Sea Bubble. That became part of his dissertation; skeptics of the South Sea Bubble were among the prominent early voices bringing data into public debates. It has also helped inform his second book.
First, though, Deringer earned his doctorate from Princeton in 2012, then spent three years as a Mellon Postdoctoral Research Fellow at Columbia University. He joined the MIT faculty in 2015. At the Institute, he finished turning his dissertation into the “Calculated Values” book — which won the 2019 Oscar Kenshur Prize for the best book from the Center for Eighteenth-Century Studies at Indiana University, and was co-winner of the 2021 Joseph J. Spengler Prize for best book from the History of Economics Society.
“My method as a scholar is to dig into the technical details, then work outward historically from them,” Deringer says.
A long historical chain
Even as Deringer was writing his first book, the idea for the second one was taking root in his mind. Those South Sea Bubble pamphlets he had found while at Princeton incorporated discounting, which was intermittently present in “Calculated Values.” Deringer was intrigued by how adept 18th-century figures were at discounting.
“Something that I thought of as a very modern technique seemed to be really well-known by a lot of people in the 1720s,” he says.
At the same time, a conversation with an academic colleague in philosophy made it clear to Deringer how differing conclusions about discounting had become a matter of debate in climate change policy. He soon resolved to write the “biography of a calculation” about financial discounting.
“I knew my next book had to be about this,” Deringer says. “I was very interested in the deep historical roots of discounting, and it has a lot of present urgency.”
Deringer says the book will incorporate material about the financing of English cathedrals, the heavy use of discounting in the mining industry during the Industrial Revolution, a revival of discounting in 1960s policy circles, and climate change, among other things. In each case, he is carefully looking at the interests and historical dynamics behind the use of discounting.
“For people who use discounting regularly, it’s like gravity: It’s very obvious that to be rational is to discount the future according to this formula,” Deringer says. “But if you look at history, what is thought of as rational is part of a very long historical chain of people applying this calculation in various ways, and over time that’s just how things are done. I’m really interested in pulling apart that idea that this is a sort of timeless rational calculation, as opposed to a product of this interesting history.”
Working in STS, Deringer notes, has helped encourage him to link together numerous historical time periods into one book about the numerous ways discounting has been used.
“I’m not sure that pursuing a book that stretches from the 17th century to the 21st century is something I would have done in other contexts,” Deringer says. He is also quick to credit his colleagues in STS and in other programs for helping create the scholarly environment in which he is thriving.
“I came in with a really amazing cohort of other scholars in SHASS,” Deringer notes, referring to the MIT School of Humanities, Arts, and Social Sciences. He cites others receiving tenure in the last year such as his STS colleague Robin Scheffler, historian Megan Black, and historian Caley Horan, with whom Deringer has taught graduate classes on the concept of risk in history. In all, Deringer says, the Institute has been an excellent place for him to pursue interdisciplinary work on technical thought in history.
“I work on very old things and very technical things,” Deringer says. “But I’ve found a wonderful welcoming at MIT from people in different fields who light up when they hear what I’m interested in.”
0 notes
jcmarchi · 3 days
Text
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
New Post has been published on https://thedigitalinsider.com/mini-gemini-mining-the-potential-of-multi-modality-vision-language-models/
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
The advancements in large language models have significantly accelerated the development of natural language processing, or NLP. The introduction of the transformer framework proved to be a milestone, facilitating the development of a new wave of language models, including OPT and BERT, which exhibit profound linguistic understanding. Furthermore, the inception of GPT, or Generative Pre-trained Transformer models, introduced a new paradigm with autoregressive modeling and established a robust method for language prediction and generation. The advent of language models like GPT-4, ChatGPT, Mixtral, LLaMA, and others has further fueled rapid evolution, with each model demonstrating enhanced performance in tasks involving complex language processing. Among existing methods, instruction tuning has emerged as a key technique for refining the output of pre-trained large language models, and the integration of these models with specific tools for visual tasks has highlighted their adaptability and opened doors for future applications. These extend far beyond the traditional text-based processing of LLMs to include multimodal interactions.
Furthermore, the convergence of natural language processing and computer vision models has given rise to VLMs, or Vision Language Models, which combine linguistic and vision models to achieve cross-modal comprehension and reasoning capabilities. The integration and advent of visual and linguistic models have played a crucial role in advancing tasks that require both language processing and visual understanding. The emergence of revolutionary models like CLIP has further bridged the gap between vision tasks and language models, demonstrating the feasibility and practicality of cross-modal applications. More recent frameworks like LLaMA and BLIP leverage tailored instruction data to devise efficient strategies that demonstrate the potent capabilities of the model. Additionally, combining large language models with image outputs is the focus of recent multimodal research, with recent methods being able to bypass direct generation by utilizing the image retrieval approach to produce image outputs and interleaved texts.
With that being said, and despite the rapid advancements in vision language models facilitating basic reasoning and visual dialogue, there still exists a significant performance gap between advanced models like GPT-4, and vision language models. Mini-Gemini is an attempt to narrow the gap that exists between vision language models and more advanced models by mining the potential of VLMs for better performance from three aspects: VLM-guided generation, high-quality data, and high-resolution visual tokens. To enhance visual tokens, the Mini-Gemini framework proposes to utilize an additional visual encoder for high-resolution refinement without increasing the count of visual tokens. The Mini-Gemini framework further constructs a high-quality dataset in an attempt to promote precise comprehension of images and reasoning-based generation. Overall, the Mini-Gemini framework attempts to mine the potential of vision language models, and aims to empower existing frameworks with image reasoning, understanding, and generative capabilities simultaneously. This article aims to cover the Mini-Gemini framework in depth, and we explore the mechanism, the methodology, the architecture of the framework along with its comparison with state of the art frameworks. So let’s get started. 
Over the years, large language models have evolved, and they now boast remarkable multi-modal capabilities and are becoming an essential part of current vision language models. However, there exists a gap between the multi-modal performance of large language models and vision language models, with recent research looking for ways to combine vision with large language models using images and videos. For vision tasks, image resolution is a crucial element for explicitly depicting the surrounding environment with minimal visual hallucinations. To bridge the gap, researchers are developing models to improve the visual understanding in current vision language models, and two of the most common approaches are: increasing the resolution, and increasing the number of visual tokens. Although increasing the number of visual tokens with higher resolution images does enhance the visual understanding, the boost is often accompanied by increased computational requirements and associated costs, especially when processing multiple images. Furthermore, the capabilities of existing models, the quality of existing data, and applicability remain inadequate for an accelerated development process, leaving researchers with the question: how can the development of vision language models be accelerated at an acceptable cost?
The Mini-Gemini framework is an attempt to answer the question as it attempts to explore the potential of vision language models from three aspects: VLM-guided generation or expanded applications, high-quality data, and high-resolution visual tokens. First, the Mini-Gemini framework implements a ConvNet architecture to generate higher-resolution candidates efficiently, enhancing visual details while maintaining the visual token counts for the large language model. The Mini-Gemini framework amalgamates publicly available high-quality datasets in an attempt to enhance the quality of the data, and integrates these enhancements with state of the art generative and large language models with an attempt to enhance the performance of the VLMs, and improve the user experience. The multifaceted strategy implemented by the Mini-Gemini framework enables it to explore hidden capabilities of vision language models, and achieves significant advancements with evident resource constraints. 
In general, the Mini-Gemini framework employs an any-to-any paradigm since it is capable of handling both text and images as input and output. In particular, the Mini-Gemini framework introduces an efficient pipeline for enhancing visual tokens for input images, and features a dual-encoder system comprising twin encoders: the first encoder is for high-resolution images, while the second encoder is for low-resolution visual embeddings. During inference, the encoders work in an attention mechanism, where the low-resolution encoder generates visual queries, while the high-resolution encoder provides keys and values for reference. To augment the data quality, the Mini-Gemini framework collects and produces more data based on public resources, including task-oriented instructions, generation-related data, and high-resolution responses, with the increased amount and enhanced quality improving the overall performance and capabilities of the model. Furthermore, the Mini-Gemini framework supports concurrent text and image generation as a result of the integration of the vision language model with advanced generative models.
Mini-Gemini : Methodology and Architecture
At its core, the Mini-Gemini framework is conceptually simple, and comprises three components. 
The framework employs dual vision encoders to provide low-resolution visual embeddings and high resolution candidates. 
The framework proposes to implement patch info mining to conduct mining at patch level between low-resolution visual queries, and high-resolution regions. 
The Mini-Gemini framework utilizes a large language model to marry text with images for both generation and comprehension simultaneously. 
Dual-Vision Encoders
The Mini-Gemini framework can process both text and image inputs, with the option to handle them either individually or in a combination. As demonstrated in the following image, the Mini-Gemini framework starts the process by employing bilinear interpolation to generate a low-resolution image from its corresponding high-resolution image. 
The framework then processes these images and encodes them into multi-grid visual embeddings in two parallel image flows. More specifically, the Mini-Gemini framework maintains the traditional pipeline for the low-resolution flow and employs a CLIP-pretrained Vision Transformer to encode the visual embeddings, enabling the model to preserve the long-range relations between visual patches for subsequent interactions in large language models. For the high-resolution flow, the Mini-Gemini framework adopts a CNN (Convolutional Neural Network) based encoder for adaptive and efficient high-resolution image processing.
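The dual-flow idea can be summarised in a short, hedged sketch; note that clip_vit and conv_encoder below are placeholder modules standing in for the actual encoders, and the 336-pixel low-resolution size is only an assumed example.

```python
import torch
import torch.nn.functional as F

def dual_encode(image_hr: torch.Tensor, clip_vit, conv_encoder, lr_size=(336, 336)):
    """Sketch of the dual vision flow: a bilinearly downsampled copy of the image
    feeds a CLIP-style ViT, while the original high-resolution image feeds a CNN."""
    # Low-resolution branch: bilinear interpolation, then ViT patch embeddings
    image_lr = F.interpolate(image_hr, size=lr_size, mode="bilinear", align_corners=False)
    lr_tokens = clip_vit(image_lr)        # (B, N_lr, D) visual embeddings / queries
    # High-resolution branch: convolutional features kept as a spatial map
    hr_features = conv_encoder(image_hr)  # (B, D, H, W) candidates for patch info mining
    return lr_tokens, hr_features
```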
Patch Info Mining
With the dual vision encoders generating the LR embeddings and HR features, the Mini-Gemini framework proposes to implement patch info mining with the aim of extending the potential of vision language models with enhanced visual tokens. In order to maintain the number of visual tokens for efficiency in large language models, the Mini-Gemini framework takes the low-resolution visual embeddings as the query, and aims to retrieve relevant visual cues from the HR feature candidates, with the framework taking the HR feature map as the key and value.
As demonstrated in the above image, the formula encapsulates the process of refining and synthesizing visual cues, which leads to the generation of advanced visual tokens for the subsequent large language model processing. The process ensures that the framework is able to confine the mining for each query to its corresponding sub region in the HR feature map with the pixel-wise feature count, resulting in enhanced efficiency. Owing to this design, the Mini-Gemini framework is able to extract the HR feature details without enhancing the count of visual tokens, and maintains a balance between computational feasibility and richness of detail. 
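A simplified sketch of that mining step is shown below, assuming standard scaled dot-product attention; the per-query restriction to a local HR sub-region is omitted for brevity, and the projection matrices wq, wk, and wv are generic placeholders rather than the framework's actual parameters.

```python
import torch

def patch_info_mining(lr_tokens, hr_features, wq, wk, wv):
    """Simplified patch info mining: low-resolution tokens query the
    high-resolution feature map via attention (sub-region masking omitted)."""
    B, D, H, W = hr_features.shape
    hr_tokens = hr_features.flatten(2).transpose(1, 2)   # (B, H*W, D) keys/values
    q = lr_tokens @ wq                                   # (B, N_lr, D) queries
    k = hr_tokens @ wk
    v = hr_tokens @ wv
    attn = torch.softmax(q @ k.transpose(1, 2) / D ** 0.5, dim=-1)
    refined = lr_tokens + attn @ v                       # enriched visual tokens
    return refined                                       # token count is unchanged
```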
Text and Image Generation
The Mini-Gemini framework concatenates the visual tokens and input text tokens as the input to the large language model for auto-regressive generation. Unlike traditional vision language models, the Mini-Gemini framework supports text-only as well as text-image generation as input and output, i.e. any-to-any inference, and it is as a result of these outstanding image-text understanding and reasoning capabilities that Mini-Gemini is able to generate high-quality images. Unlike recent works that focus on the domain gap between the text embeddings of generation models and large language models, the Mini-Gemini framework attempts to optimize the gap in the domain of language prompts by translating user instructions into high-quality prompts that produce contextually relevant images in latent diffusion models. Furthermore, for a better understanding of instruction finetuning and cross-modality alignment, the Mini-Gemini framework collects samples from publicly available high-quality datasets, and uses the GPT-4 Turbo model to further construct a 13K instruction-following dataset to support image generation.
Mini-Gemini : Experiments and Results
To evaluate its performance, the Mini-Gemini framework is instantiated with the pre-trained ConvNext-L framework for the HR vision encoder, and with a CLIP-pre-trained Vision Transformer for the LR vision encoder. To ensure training efficiency, the Mini-Gemini framework keeps the two vision encoders fixed, and optimizes the projectors of patch info mining in all stages, and optimizes the large language model during the instruction tuning stage itself. 
The following table compares the performance of the Mini-Gemini framework against state of the art models across different settings, and also takes private models into consideration. As it can be observed, Mini-Gemini outperforms existing frameworks across a wide range of LLMs consistently at normal resolution, and demonstrates superior performance when configured with Gemma-2B in the category of efficient models. Furthermore, when larger large language models are employed, the scalability of the Mini-Gemini framework is evident.
To evaluate its performance on high resolution and extended visual tokens, the experiments are performed with an input size of 672 for the LR vision encoder, and 1536 for the visual encoder. As mentioned earlier, the main purpose of the HR visual encoder is to offer high-resolution candidate information. As it can be observed, the Mini-Gemini framework delivers superior performance when compared against state of the art frameworks. 
Furthermore, to assess the visual comprehension prowess of the Mini-Gemini framework in real-world settings, developers apply the model to a variety of reasoning and understanding tasks as demonstrated in the following image. As it can be observed, the Mini-Gemini framework is able to solve a wide array of complex tasks thanks to the implementation of patch info mining and high-quality data. But what’s more impressive is the fact that the Mini-Gemini framework demonstrates a keen attention to detail that extends beyond mere recognition prowess, describing intricate elements in detail.
The following figure provides a comprehensive evaluation of the generative abilities of the Mini-Gemini framework. 
When compared against recent models like ChatIllusion and AnyGPT, the Mini-Gemini framework demonstrates stronger multi-modal understanding abilities, allowing it to generate text-to-image captions that align better with the input instructions, and resulting in image-to-text answers with stronger conceptual similarity. What’s more impressive is the fact that the Mini-Gemini framework demonstrates remarkable proficiency in generating high-quality content using multi-modal human instructions with only text training data, a capability that illustrates Mini-Gemini’s robust semantic interpretation and image-text alignment skills.
Final Thoughts
In this article we have talked about Mini-Gemini, a potent and streamlined framework for multi-modality vision language models. The primary aim of the Mini-Gemini framework is to harness the latent capabilities of vision language models using high quality data, strategic design of the framework, and an expanded functional scope. Mini-Gemini is an attempt to narrow the gap that exists between vision language models and more advanced models by mining the potential of VLMs for better performance from three aspects: VLM-guided generation, high-quality data, and high-resolution visual tokens. To enhance visual tokens, the Mini-Gemini framework proposes to utilize an additional visual encoder for high-resolution refinement without increasing the count of visual tokens. The Mini-Gemini framework further constructs a high-quality dataset in an attempt to promote precise comprehension of images and reasoning-based generation. Overall, the Mini-Gemini framework attempts to mine the potential of vision language models, and aims to empower existing frameworks with image reasoning, understanding, and generative capabilities simultaneously.
0 notes
jcmarchi · 3 days
Text
Three from MIT awarded 2024 Guggenheim Fellowships
New Post has been published on https://thedigitalinsider.com/three-from-mit-awarded-2024-guggenheim-fellowships/
Three from MIT awarded 2024 Guggenheim Fellowships
MIT faculty members Roger Levy, Tracy Slatyer, and Martin Wainwright are among 188 scientists, artists, and scholars awarded 2024 fellowships from the John Simon Guggenheim Memorial Foundation. Working across 52 disciplines, the fellows were selected from almost 3,000 applicants for “prior career achievement and exceptional promise.”
Each fellow receives a monetary stipend to pursue independent work at the highest level. Since its founding in 1925, the Guggenheim Foundation has awarded over $400 million in fellowships to more than 19,000 fellows. This year, MIT professors were recognized in the categories of neuroscience, physics, and data science.
Roger Levy is a professor in the Department of Brain and Cognitive Sciences. Combining computational modeling of large datasets with psycholinguistic experimentation, his work furthers our understanding of the cognitive underpinning of language processing, and helps to design models and algorithms that will allow machines to process human language. He is a recipient of the Alfred P. Sloan Research Fellowship, the NSF Faculty Early Career Development (CAREER) Award, and a fellowship at the Center for Advanced Study in the Behavioral Sciences.
Tracy Slatyer is a professor in the Department of Physics as well as the Center for Theoretical Physics in the MIT Laboratory for Nuclear Science and the MIT Kavli Institute for Astrophysics and Space Research. Her research focuses on dark matter — novel theoretical models, predicting observable signals, and analysis of astrophysical and cosmological datasets. She was a co-discoverer of the giant gamma-ray structures known as the “Fermi Bubbles” erupting from the center of the Milky Way, for which she received the New Horizons in Physics Prize in 2021. She is also a recipient of a Simons Investigator Award and Presidential Early Career Awards for Scientists and Engineers.
Martin Wainwright is the Cecil H. Green Professor in Electrical Engineering and Computer Science and Mathematics, and affiliated with the Laboratory for Information and Decision Systems and Statistics and Data Science Center. He is interested in statistics, machine learning, information theory, and optimization. Wainwright has been recognized with an Alfred P. Sloan Foundation Fellowship, the Medallion Lectureship and Award from the Institute of Mathematical Statistics, and the COPSS Presidents’ Award from the Joint Statistical Societies. Wainwright has also co-authored books on graphical and statistical modeling, and solo-authored a book on high dimensional statistics.
“Humanity faces some profound existential challenges,” says Edward Hirsch, president of the foundation. “The Guggenheim Fellowship is a life-changing recognition. It’s a celebrated investment into the lives and careers of distinguished artists, scholars, scientists, writers and other cultural visionaries who are meeting these challenges head-on and generating new possibilities and pathways across the broader culture as they do so.”
0 notes
jcmarchi · 3 days
Text
Zero Trust strategies for navigating IoT/OT security challenges - CyberTalk
New Post has been published on https://thedigitalinsider.com/zero-trust-strategies-for-navigating-iot-ot-security-challenges-cybertalk/
Zero Trust strategies for navigating IoT/OT security challenges - CyberTalk
Travais ‘Tee’ Sookoo leverages his 25 years of experience in network security, risk management, and architecture to help businesses of all sizes, from startups to multi-nationals, improve their security posture. He has a proven track record of leading and collaborating with security teams and designing secure solutions for diverse industries.
Currently, Tee serves as a Security Engineer for Check Point, covering the Caribbean region. He advises clients on proactive risk mitigation strategies. He thrives on learning from every challenge and is always looking for ways to contribute to a strong cyber security culture within organizations.
In this informative interview, expert Travais Sookoo shares insights into why organizations need to adopt a zero trust strategy for IoT and how to do so effectively. Don’t miss this!
For our less technical readers, why would organizations want to implement zero trust for IoT systems? What is the value? What trends are you seeing?
For a moment, envision your organization as a bustling apartment building. There are tenants (users), deliveries (data), and of course, all sorts of fancy gadgets (IoT devices). In the old days, our threat prevention capabilities might have involved just a single key for the building’s front door (the network perimeter). Anyone with that key could access everything: the mailbox, deliveries, gadgets.
That’s how traditional security for some IoT systems worked. Once the key was obtained, anyone could gain access. With zero trust, instead of giving everyone the master key, the application of zero trust verifies each device and user ahead of provisioning access.
The world is getting more connected, and the number of IoT devices is exploding, meaning more potential security gaps. Organizations are realizing that zero trust is a proactive way to stay ahead of the curve and keep their data and systems safe.
Zero trust also enables organizations to satisfy many of their compliance requirements and to quickly adapt to ever-increasing industry regulations.
What challenges are organizations experiencing in implementing zero trust for IoT/OT systems?
While zero trust is a powerful security framework, the biggest hurdle I hear about is technology and personnel.
In terms of technology, the sheer number and variety of IoT devices can be overwhelming. Enforcing strong security measures with active monitoring across this diverse landscape is not an easy task.  Additionally, many of these devices lack the processing power to run security or monitoring software, thus making traditional solutions impractical.
Furthermore, scaling zero trust to manage the identities and access controls for potentially hundreds, thousands, even millions of devices can be daunting.
Perhaps the biggest challenge is that business OT systems must prioritize uptime and reliability above all else. Implementing zero trust may require downtime or potentially introduce new points of failure.  Finding ways to achieve zero trust without compromising the availability of critical systems takes some manoeuvring.
And now the people aspect: Implementing and maintaining a zero trust architecture requires specialized cyber security expertise, which many organizations may not have. The talent pool for these specialized roles can be limited, making it challenging to recruit and retain qualified personnel.
Additionally, zero trust can significantly change how people interact with OT systems. Organizations need to invest in training staff on new procedures and workflows to ensure a smooth transition.
Could you speak to the role of micro-segmentation in implementing zero trust for IoT/OT systems? How does it help limit lateral movement and reduce the attack surface?
With micro-segmentation, we create firewalls/access controls between zones, making it much harder for attackers to move around. We’re locking the doors between each room in the apartment; even if an attacker gets into the thermostat room (zone), they can’t easily access the room with our valuables (critical systems).
The fewer devices and systems that an attacker can potentially exploit, the better. Micro-segmentation reduces the overall attack surface and the potential blast radius by limiting what devices can access on the network.
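As a rough illustration of that default-deny, allow-list approach, here is a toy sketch; the zone names, ports, and rules are invented for the example and are not drawn from any specific product.

```python
# Toy zone-based allow-list: traffic between micro-segments is denied unless
# an explicit rule permits it.
ALLOWED_FLOWS = {
    ("hvac_sensors", "building_mgmt"): {502},   # hypothetical Modbus-style flow
    ("cameras", "video_storage"): {443},
}

def is_flow_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny check: lateral moves outside the allow-list are blocked."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_flow_allowed("cameras", "building_mgmt", 22))  # False: blocked lateral move
```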
Based on your research and experience, what are some best practices or lessons learned in implementing zero trust for IoT and OT systems that you can share with CISOs?
From discussions I’ve had and my research:
My top recommendation is to understand the device landscape. What are the assets you have, their purpose, how critical are they to the business? By knowing the environment, organizations can tailor zero trust policies to optimize both security and business continuity.
Don’t try to boil the ocean! Zero trust is a journey, not a destination. Start small, segmenting critical systems and data first. Learn from that experience and then expand the implementation to ensure greater success with declining margins of error.
Legacy OT systems definitely throw a wrench into plans and can significantly slow adoption of zero trust. Explore how to integrate zero trust principles without compromising core functionalities. It might involve a mix of upgrades and workarounds.
The core principle of zero trust is granting only the minimum access required for a device or user to function (least privilege). Document who needs what and then implement granular access controls to minimize damage from a compromised device.
Continuous monitoring of network activity and device behaviour is essential to identify suspicious activity and potential breaches early on. Ensure that your monitoring tools encompass everything and that your teams can use them expertly.
Automating tasks, such as device onboarding, access control enforcement, and security patching can significantly reduce the burden on security teams and improve overall efficiency.
Mandate regular review and policy updates based on new threats, business needs, and regulatory changes.
Securing IoT/OT systems also requires close collaboration between OT and IT teams. Foster teamwork, effective communications and understanding between these departments to break down silos. This cannot be stressed enough. Too often, the security team is the last to weigh in, often after it’s too late.
What role can automation play in implementing and maintaining Zero Trust for IoT/OT systems?
Zero trust relies on granting least privilege access. Automation allows us to enforce these granular controls by dynamically adjusting permissions based on device type, user role, and real-time context.
Adding new IoT devices can be a tedious process and more so if there are hundreds or thousands of these devices. However, automation can greatly streamline device discovery, initial configuration, and policy assignment tasks, thereby freeing up security teams to focus on more strategic initiatives.
Manually monitoring a complex network with numerous devices is overwhelming, but we can automate processes to continuously monitor network activity and device behaviour, and identify anomalies that might indicate a potential breach. And if a security incident occurs, we can automate tasks to isolate compromised devices, notify security teams, and initiate remediation procedures.
Through monitoring, it’s possible to identify IoT/OT devices that require patching, which can be crucial, but also time-consuming. It’s possible to automate patch deployment with subsequent verification, and even launch rollbacks in case of unforeseen issues.
If this sounds like a sales pitch, then hopefully you’re sold. There’s no doubt that automation will significantly reduce the burden on security teams, improve the efficiency of zero trust implementation and greatly increase our overall security posture.
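To make the idea concrete, here is a skeletal sketch of an automated response hook of the kind described above; the threshold, device names, and the quarantine and notify functions are placeholders rather than any real product API.

```python
def handle_anomaly(device: str, anomaly_score: float, quarantine, notify, threshold=0.9):
    """Placeholder automation hook: isolate a suspicious device and alert the
    security team, so humans review the incident instead of chasing every alert."""
    if anomaly_score >= threshold:
        quarantine(device)  # e.g. move the device into an isolated network segment
        notify(f"{device} quarantined (anomaly score {anomaly_score:.2f})")
        return "quarantined"
    return "monitored"
```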
What metrics would you recommend for measuring the effectiveness of zero trust implementation in IoT and OT environments?
A core tenet of zero trust is limiting how attackers move between devices or otherwise engage in lateral movement. The number of attempted lateral movements detected and blocked can indicate the effectiveness of segmentation and access controls.
While some breaches are inevitable, a significant decrease in compromised devices after implementing zero trust signifies a positive impact. This metric should be tracked alongside the severity of breaches and the time it takes to identify and contain them. With zero trust, it is assumed any device or user, regardless of location, could be compromised.
The Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) are metrics that you can use to measure how quickly a security incident is identified and contained. Ideally, zero trust should lead to faster detection and response times, minimizing potential damage.
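Both metrics reduce to simple averages over incident timestamps, as in the sketch below; the record fields and the single example incident are invented for illustration.

```python
from datetime import datetime

incidents = [  # illustrative incident record
    {"started": datetime(2024, 4, 1, 9, 0),
     "detected": datetime(2024, 4, 1, 9, 20),
     "resolved": datetime(2024, 4, 1, 11, 0)},
]

def mean_minutes(rows, start_key, end_key):
    """Average elapsed minutes between two timestamps across incidents."""
    deltas = [(r[end_key] - r[start_key]).total_seconds() / 60 for r in rows]
    return sum(deltas) / len(deltas)

mttd = mean_minutes(incidents, "started", "detected")   # mean time to detect
mttr = mean_minutes(incidents, "detected", "resolved")  # mean time to respond
print(f"MTTD={mttd:.0f} min, MTTR={mttr:.0f} min")      # MTTD=20 min, MTTR=100 min
```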
Zero trust policies enforce granular access controls. Tracking the number of least privilege violations (users or devices accessing unauthorized resources) can expose weaknesses in policy configuration or user behaviour and indicate areas for improvement.
Security hygiene posture goes beyond just devices. It includes factors like patch compliance rates, and the effectiveness of user access.
Remember the user experience? Tracking user satisfaction with the zero trust implementation process and ongoing security measures can help identify areas for improvement and ensure a balance between security and usability.
It’s important to remember that zero trust is a journey, not a destination. The goal is to continuously improve our security posture and make it more difficult for attackers to exploit vulnerabilities in our IoT/OT systems. Regularly review your metrics and adjust zero trust strategies as needed.
Is there anything else that you would like to share with the CyberTalk.org audience?
Absolutely! As we wrap up this conversation, I want to leave the CyberTalk.org audience with a few key takeaways concerning securing IoT and OT systems:
Zero trust is a proactive approach to security. By implementing zero trust principles, organizations can significantly reduce the risk of breaches and protect their critical infrastructure.
Don’t go it alone: Security is a team effort. Foster collaboration between IT, OT, and security teams to ensure that everyone is on the same page when it comes to adopting zero trust.
Keep learning: The cyber security landscape is constantly evolving. Stay up-to-date on the latest threats and best practices. Resources like Cybertalk.org are a fantastic place to start.
Focus on what matters: A successful zero trust implementation requires a focus on all three pillars: people, process, and technology. Security awareness training for employees, clearly defined policies and procedures, and the right security tools are all essential elements.
Help is on the way: Artificial intelligence and machine learning will play an increasingly important role in automating zero trust processes and making them even more effective.
Thank you, CyberTalk.org, for the opportunity to share my thoughts. For more zero trust insights, click here.
0 notes
jcmarchi · 3 days
Text
A musical life: Carlos Prieto ’59 in conversation and concert
New Post has been published on https://thedigitalinsider.com/a-musical-life-carlos-prieto-59-in-conversation-and-concert/
A musical life: Carlos Prieto ’59 in conversation and concert
World-renowned cellist Carlos Prieto ’59 returned to campus for an event to perform and to discuss his new memoir, “Mi Vida Musical.”
At the April 9 event in the Samberg Conference Center, Prieto spoke about his formative years at MIT and his subsequent career as a professional cellist. The talk was followed by performances of J.S. Bach’s “Cello Suite No. 3” and Eugenio Toussaint’s “Bachriation.” Valerie Chen, a 2022 Sudler Prize winner and Emerson/Harris Fellow, also performed Philip Glass’s “Orbit.”
Prieto was born in Mexico City and began studying the cello when he was 4. He graduated from MIT in 1959 with BS degrees in Course 3, then called Metallurgical Engineering and today Materials Science and Engineering, and in Course 14 (Economics). He was the first cello and soloist of the MIT Symphony Orchestra. While at MIT, he took all available courses in Russian, which allowed him, years later, to study at Lomonosov University in Moscow.
After graduation from MIT, Prieto returned to Mexico, where he rose to become the head of an integrated iron and steel company.
“When I returned to Mexico, I was very active in my business life, but I was also very active in my music life,” he told the audience. “And at one moment, the music overcame all the other activities and I left my business activities to devote all my time to the cello and I’ve been doing this for the past 50 years.”
During his musical career, Prieto played all over the world and has played and recorded the world premieres of 115 compositions, most of which were written for him. He is the author of 14 books, some of which have been translated into English, Russian, and Portuguese.
Prieto’s honors include the Order of the Arts and Letters from France, the Order of Civil Merit from the King of Spain, and the National Prize for Arts and Sciences from the president of Mexico. In 1993 he was appointed member of the MIT Music and Theater Advisory Committee. In 2014, the School of Humanities, Arts, and Social Sciences awarded Prieto the Robert A. Muh Alumni Award.
0 notes
jcmarchi · 3 days
Text
TopSpin 2K25 Review - A Strong Return - Game Informer
New Post has been published on https://thedigitalinsider.com/topspin-2k25-review-a-strong-return-game-informer/
TopSpin 2K25 Review - A Strong Return - Game Informer
In the heyday of the tennis-sim video game genre, Top Spin and Virtua Tennis were the best players in the crowded space. However, in the time since the genre’s boom settled, the offerings have fallen off considerably, with both franchises going more than a decade without a new release. TopSpin 2K25 signals the reemergence of the critically acclaimed series, and though it’s been a while since it stepped on the court, it’s evident the franchise hasn’t lost its stroke.
TopSpin 2K25 faithfully recreates the high-speed chess game of real-world tennis. Positioning, spin, timing, and angles are critical to your success. For those unfamiliar with those fundamental tennis tenets, 2K25 does a superb job of onboarding players with TopSpin Academy, which covers everything from where you should stand to how to play different styles. Even as someone who played years of tennis in both real life and video games, I enjoyed going through the more advanced lessons to refamiliarize myself with the various strategies at play.
Once on the court, you learn how crucial those tactics are. The margin of error is extremely thin, as the difference between a winner down the baseline and a shot into the net is often a split-second on the new timing meter. This meter ensures you release the stroke button timed with when the ball is in the ideal striking position relative to your player. Mastering this is pivotal, as it not only improves your shot accuracy but also your power.
TopSpin 2K25 is at its best when you’re in sustained rallies against an evenly-matched opponent. Getting off a strong serve to immediately put your opponent on the defensive, then trying to capitalize on their poor positioning as they struggle to claw back into the point, effectively captures the thrill of the real-world game. I also love how distinct each play style feels in action; an offensive baseline player like Serena Williams presents different challenges than a serve-and-volleyer like John McEnroe.
You can hone your skills in one-off exhibition matches, but I spent most of my time in TopSpin 2K25 in MyCareer. Here, you create your player, with whom you’ll train and climb the ranks. As you complete challenges and win matches, you raise your status, which opens new features like upgradeable coaches, equippable skills, and purchasable homes to alleviate the stamina drain from travel. Managing your stamina by sometimes resting is essential to sustain high-level play; pushing yourself too hard can even cause your player to suffer injuries that sideline you for months.
I loved most of my time in MyCareer, but some design decisions ruined the immersion. For example, I ignored portions of the career goals necessary to rank up my player for hours, so while I was in the top 10 global rankings, I was unable to participate in a Grand Slam because I was still at a lower status than my ranking would typically confer. And since repetition is the path to mastery, it’s counterintuitive that repeated training minigames award fewer benefits, particularly since the mode as a whole is a repetitive loop of training, special events, and tournaments. Additionally, MyCareer shines a light on the shallow pool of licensed players on offer. Most of my matches were against created characters, even hours deep. 2K has promised free licensed pros in the post-launch phase, but for now, the game is missing multiple top players.
I’m pleasantly surprised by how unintrusive the use of VC is. In the NBA 2K series, VC, which can be earned slowly or bought using real money, is used to directly improve your player. In TopSpin 2K25, it’s used primarily for side upgrades, like leveling up your coach, relocating your home, earning XP boosts, resetting your attribute distribution, or purchasing cosmetics. Though I’m still not a fan of microtransactions affecting a single-player mode – particularly since it’s almost certainly why you need to be online to play MyCareer – it’s much more palatable than its NBA contemporary.
If you’d rather play against real opponents, you can show off your skills (and your created character) in multiple online modes. World Tour pits your created player against others across the globe in various tournaments and leaderboard challenges, while 2K Tour leverages the roster of licensed players with daily challenges to take on. Outside of minor connection hiccups, I had an enjoyable time tackling the challenges presented by other players online. However, World Tour’s structure means that despite the game’s best efforts, mismatches occur; it’s no fun to play against a created character multiple levels higher than you. Thankfully, these mismatches were the outlier rather than the exception in my experience.
TopSpin 2K25 aptly brings the beloved franchise back to center court, showing that not only does the series still have legs, but so does the sim-tennis genre as a whole. Though its modes are somewhat repetitive and it’s missing several high-profile pros at launch, TopSpin 2K25 serves up a compelling package for tennis fans.
0 notes
jcmarchi · 3 days
Text
1000 AI-powered machines: Vision AI on an industrial scale
New Post has been published on https://thedigitalinsider.com/1000-ai-powered-machines-vision-ai-on-an-industrial-scale/
1000 AI-powered machines: Vision AI on an industrial scale
This article is based on Bart Baekelandt’s brilliant talk at the Computer Vision Summit in London. Pro and Pro+ members can enjoy the complete recording here. For more exclusive content, head to your membership dashboard. 
Hi, I’m Bart Baekelandt, Head of Product Management at Robovision. 
Today, we’re going to talk about the lessons we’ve learned over the last 15 years of applying AI to robots and machines at scale in the real world. 
Robovision’s journey: From flawed to fantastic
First, let’s look at what these machines were like in the past.
15 years ago, our machine was very basic with extremely rudimentary computer vision capabilities. It used classical machine vision techniques and could only handle very basic tasks like recognizing a hand. Everything was hard-coded, so if you needed the machine to recognize something new, you’d have to recode the entire application. It was expensive and required highly skilled personnel.
Nowadays, we don’t have just one machine – we have entire populations of machines, with advanced recognition capabilities. There’s a continuous process of training AI models and applying them to production so the machines can tackle the problem at hand.
For example, there are machines that can take a seedling from a conveyor belt and plant it in a tray. We have entire fleets of these specialized machines. One day they’re trained to handle one type of seedling, and the next day they’re retrained to perform optimally for a different variety of plant. 
So yeah, a lot has happened in 15 years. We’ve gone from initially failing to scale AI, to figuring out how to apply AI at scale with minimal support from our side. As of today, we’ve produced over 1000 machines with game-changing industrial applications.
Let’s dive into a few of the key lessons we’ve picked up along the way.
Lesson one: AI success happens after the pilot
The first lesson is that AI success happens after the pilot phase. We learned this lesson the hard way in the initial stages of applying AI, around 2012.
Let me share a quick anecdote. When we were working on the machine that takes seedlings from a conveyor belt and plants them in trays, we spent a lot of time applying AI and building the algorithm to recognize the right breaking point on each seedling and plant it properly. 
Eventually, we nailed it – the algorithm worked perfectly. The machine builder who integrated it was happy, and the customer growing the seedlings was delighted because everything was functioning as intended.
However, the congratulations were short-lived. Within two weeks, we got a call – the system wasn’t picking the seedlings well anymore. What had happened? They were now trying to handle a different seedling variety, and the images looked just different enough that our AI model struggled. The robot started missing the plants entirely or planting them upside down.
We got new image data from the customer’s operations and retrained the model. Great, it worked again! But sure enough, two weeks later, we got another call reporting the same problem all over again. 
This highlighted a key problem. The machine builder wanted to sell to many customers, but we couldn’t feasibly support each one by perpetually retraining models on their unique data. That approach doesn’t scale. 
That painful lesson was the genesis of our products. We realized the end customers needed to be able to continuously retrain the models themselves without our assistance. So, we developed tooling for them to capture new data, convert it to retrained models, deploy those models to the machines, and interface with the machines for inference. 
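A rough sketch of that capture-retrain-deploy loop is shown below; every function name is a placeholder for illustration and not Robovision's actual tooling or API.

```python
def retraining_cycle(capture_images, label_samples, train_model, deploy, machines):
    """Placeholder loop for the tooling described above: operators collect new
    images, label them, retrain the model, and push it to the machine fleet."""
    images = capture_images()        # e.g. frames of a new seedling variety
    dataset = label_samples(images)  # annotations provided by machine operators
    model = train_model(dataset)     # retrained without vendor involvement
    for machine in machines:
        deploy(model, machine)       # each machine then runs inference locally
    return model
```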
Our product philosophy stems directly from those harsh real-world lessons about what’s required to successfully scale AI in real-world production.
Lesson two: It’s about getting the odd couple to work together
When you’re creating working AI solutions at scale, there typically are two types of people involved. They’re your classic “odd couple,” but they need to be able to collaborate effectively.
On one hand, you have the data scientists – they generally have advanced degrees like Masters in Engineering or even PhDs. Data scientists are driven by innovation. They live to solve complex problems and find solutions to new challenges. 
Once they’ve cracked the core issue, however, they tend to lose interest. They want to move on to the next big innovation, rather than focusing on continuous improvement cycles or incremental optimizations.
On the other hand, you have the machine operators who run the manufacturing systems and processes where AI gets applied at scale – whether that’s a factory, greenhouse, or another facility. 
The machine operators have intricate knowledge of the products being handled by the machines. If you’re deploying AI to handle seedlings, for example, no one understands the nuances, variations, and defects of those plants better than the operator.
This post is for paying subscribers only
0 notes
jcmarchi · 3 days
Text
A snapshot of bias, the human mind, and AI
New Post has been published on https://thedigitalinsider.com/a-snapshot-of-bias-the-human-mind-and-ai/
A snapshot of bias, the human mind, and AI
Introducing bias & the human mind  
The human mind is a landscape filled with curiosity and, at times, irrationality, motivation, confusion, and bias. The latter results in levels of complexity in how both the human and, more recently, the artificial slant affect artificial intelligence systems from concept to scale.
Bias is something that in many cases unintentionally appears – whether it be in human decision-making or the dataset – but its impact on output can be sizeable. With several cases over the years highlighting the social, political, technological, and environmental impact of bias, this piece will explore this important topic and some thoughts on how such a phenomenon can be managed.
Whilst there are many variations and interpretations (which in some cases could themselves be biased), instead of referring to a definition, let’s explore how the human mind might work in certain scenarios.
Imagine two friends (friend A and friend B) at school who’ve had a falling out and make up again after apologies are exchanged. With friend A’s birthday coming up, they’re going through their invite list and land on Person B (who they fell out with).
Do they want to invite them back and risk the awkwardness if another falling out occurs, or should they take the view that they should only invite those they’ve always got along with? The twist, though, is that Person A, in choosing the attendees for the party, may have had minor falling outs with them in the past, but is interpreting it through the lens that any previous falling outs are insignificant enough to be overlooked.
The follow-up from the above example turns to whether Person A’s decision is fair. Now, fairness adds to the difficulty as there’s no scientific definition of what fairness really is.
However, some might align fairness with making a balanced judgment after considering the facts or doing what is right (even if that’s biased!). These are just a couple of ways in which the mind can distort, and mould the completion of tasks, whether they’re strategic or technical.
Before going into the underlying ways in which bias can be managed in AI systems, let’s start from the top: leadership. 
Leadership, bias, and Human In the Loop Systems  
The combination of leadership and bias introduces important discussions about how such a trait can be managed. “The fish rots from the head down” is a common phrase used to describe leadership styles and their impact across both the wider company and their teams, but this phrase can also be extended to how bias weaves down the chain of command.
For example, if a leader within the C-suite doesn’t get along with the CEO or has had several previous tense exchanges, they may ultimately, subconsciously have a blurred view of the company vision that then spills down, with distorted conviction, to the teams.
Leadership and bias will always remain an important conversation in the boardroom, and there have been some fascinating studies exploring this in more depth – for example, Shaan Madhavji's piece on the identification and management of leadership bias [1]. It's an incredibly eye-opening subject, and one that in my view will only become more topical as time moves on.
As we shift from leadership styles and bias to addressing bias in artificial intelligence-based systems, an area that will come under further scrutiny is the effectiveness of human-in-the-loop (HITL) systems.
Whilst their usefulness varies across industries, in summary, HITL systems fuse the art of human intuition with the efficiency of machines: an incredibly valuable partnership where complex decision-making at speed is concerned.
Additionally, when it comes to bias, the human link in the chain can be key to identifying bias early on and ensuring adverse effects aren't felt later. On the other hand, HITL won't always be a tidy fix: the complexity of assembling a sizeable batch of training data, combined with finding practitioners who can integrate effectively into a HITL environment, can blur the productivity-versus-efficiency balance the company is aiming to achieve.
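To illustrate the routing idea at the heart of a HITL setup, here is a minimal sketch of a review loop in which predictions the model is uncertain about get queued for a human reviewer, who can then catch biased or low-quality outputs before they propagate. This is an assumption-laden illustration – the class names, confidence threshold, and reviewer callback are all hypothetical, not taken from any specific HITL product.

```python
# Minimal human-in-the-loop (HITL) routing sketch (all names hypothetical).
# Low-confidence predictions are queued for a human reviewer; the rest are
# auto-accepted, so people spend their time only where the model is unsure.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float  # 0.0 - 1.0, as reported by the model


@dataclass
class HITLRouter:
    confidence_threshold: float = 0.8
    review_queue: List[Prediction] = field(default_factory=list)
    auto_accepted: List[Prediction] = field(default_factory=list)

    def route(self, prediction: Prediction) -> str:
        # Send low-confidence predictions to a human; accept the rest automatically.
        if prediction.confidence < self.confidence_threshold:
            self.review_queue.append(prediction)
            return "human_review"
        self.auto_accepted.append(prediction)
        return "auto_accept"

    def resolve_queue(self, reviewer: Callable[[Prediction], str]) -> List[Tuple[str, str]]:
        # Apply a human decision to every queued prediction, then clear the queue.
        decisions = [(p.item_id, reviewer(p)) for p in self.review_queue]
        self.review_queue.clear()
        return decisions


if __name__ == "__main__":
    router = HITLRouter(confidence_threshold=0.8)
    for pred in [
        Prediction("record-001", "approve", 0.95),
        Prediction("record-002", "reject", 0.55),  # uncertain -> human review
    ]:
        print(pred.item_id, "->", router.route(pred))

    # A human reviewer (simulated here) makes the final call on queued items.
    print(router.resolve_queue(lambda p: "reject"))
```

The value of the human step is exactly the point made above: a person with domain knowledge inspects the cases the model is least sure about, which is where bias and defects are most likely to slip through.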
Conclusions & the future of bias  
In my view, irrespective of how much better HITL systems might (or might not) become, I don't believe bias can be eliminated – no matter how advanced and intelligent AI becomes, we won't be able to get rid of it.
It's so deeply woven in that it's not always possible to see, or even discern. Furthermore, bias is sometimes only revealed when someone else points it out, and even then there can be bias on top of bias!
As we look to the future of generative AI, its increasingly challenging ethical considerations, and the wide-ranging debate over how far its usefulness will stretch at scale, one important thought will always remain at heart: on occasion, we won't be able to mitigate the impact of bias until we're right in the moment and the impact is being felt there and then.
Bibliography  
[1] shaan-madhavji.medium.com. (n.d.). Leadership Bias: 12 cognitive biases to become a decisive leader. [online] Available at: https://hospitalityinsights.ehl.edu/leadership-bias. 
jcmarchi · 3 days
Text
Blizzard Announces It's Skipping BlizzCon This Year
New Post has been published on https://thedigitalinsider.com/blizzard-announces-its-skipping-blizzcon-this-year/
Blizzard Announces It's Skipping BlizzCon This Year
Blizzard has decided to cancel this year’s BlizzCon. The company states the event will return in the future, but it plans to showcase upcoming games in a different manner over the coming months. 
First announced in a blog post, Blizzard plans to share details on upcoming games like World of Warcraft: The War Within and Diablo IV’s Vessel of Hatred expansion at other trade shows, such as Gamescom. The company also plans to launch “multiple, global, in-person events” for Warcraft’s 30th anniversary, which are described as being “distinct” from BlizzCon. 
“Our hope is that these experiences – alongside several live-streamed industry events where we’ll keep you up to date with what’s happening in our game universes – will capture the essence of what makes the Blizzard community so special,” Blizzard states in the blog post.
A Blizzard representative tells Windows Central that Blizzard made the call to cancel BlizzCon and not Microsoft, which completed its acquisition of the company last year. In a statement to the outlet, the representative says, “This is a Blizzard decision. We have explored different event formats in the past, and this isn’t the first time we’re skipping BlizzCon or trying something new. While we have great things to share in 2024, the timing just doesn’t line up for one single event at the end of the year.”
BlizzCon began in 2005 as an annual convention celebrating all things Blizzard. Last year’s show saw the reveal of World of Warcraft’s next expansion, The War Within, as well as two other expansions coming after it. It’s good that the event is only taking a year off as opposed to being canned for good, and we’re curious to see how the alternative events shape up over the coming months.
jcmarchi · 3 days
Text
Generative AI’s Role in Job Satisfaction
New Post has been published on https://thedigitalinsider.com/generative-ais-role-in-job-satisfaction/
Generative AI’s Role in Job Satisfaction
Generative AI (GenAI) is a pivotal technology that enhances work in a myriad of ways. From automating complex analysis to simulating scenarios that assist in decision-making, GenAI use cases are making a big impact across a broad swath of industries, including financial services, consultancies, information technology, legal, telecommunication and more.
Organizations certainly recognize GenAI’s potential, as shown by their increasing adoption of AI. According to a PWC survey, 73% of U.S. companies have adopted AI in some areas of their business. Yet discussion persists about GenAI’s role within the workplace, given fears over job displacement, bias, decision-making transparency and more. Despite this, GenAI has made AI technology much more accessible to employees within organizations, regardless of their specific roles.
In fact, a LexisNexis Future of Work survey showed that 72% of professionals anticipate a positive impact from GenAI, and only 4% see it as a threat to job security. GenAI can automate mundane tasks, allowing users to focus on more specialized, impactful and strategic tasks. This, in turn, can increase employee productivity and job satisfaction while ensuring human ambition and innovation walk hand in hand.
AI’s Productivity Boost
GenAI’s rapid rise marks a crucial shift in how organizations must operate and strategize to augment every role. GenAI applications are as diverse as they are impactful. It’s not just hype; GenAI is already poised to increase labor productivity by 0.1 to 0.6% annually through 2040.
GenAI has also created value across multiple sectors and industries. Significant business functions, including Sales, Marketing, Customer Operations and Technology have leveraged GenAI to increase productivity. In technology, for example, GenAI-based coding assistants are a massive help to software developers in suggesting code snippets, refactoring code, fixing bugs, understanding complex code, writing unit tests, documentation and creating complete end-to-end applications.
As employees experiment and explore with GenAI tools, their comfort level with the technology increases. Eighty-six percent of professionals ‘agree’ or ‘strongly agree’ with a willingness to embrace GenAI for both creative and professional work. Sixty-eight percent of employees plan to use GenAI tools for work purposes, while 69% are already using these tools to assist with daily tasks. The data makes it clear that organizations that adopt GenAI can boost productivity, and employees are willing to use it to accelerate efficiency.
Productivity Gains Are a Given, But AI Also Helps with Job Satisfaction
One of the most significant opportunities around GenAI lies in its power to help with job satisfaction. While professionals have fairly balanced expectations on how far adoption will go, 82% expect generative AI to take over a range of repetitive administrative tasks by automating routine tasks and data analysis, freeing them to focus on more strategic aspects of their work.
When asked how they perceive GenAI’s role in the work environment, more than two-thirds of professionals see it as a ‘helpful tool’ or ‘supportive co-worker.’ As a result, they recognize AI’s potential to enhance, not hinder, job performance and are embracing it with a positive mindset toward eliminating repetitive tasks and freeing up time for more rewarding, higher-value work.
Most professionals do not see generative AI as a detriment to job satisfaction, either. Over half (51%) say job satisfaction has improved significantly or moderately thanks to GenAI, while only 10% feel that it decreases job satisfaction. Still, a fundamental rethink is necessary of where and how organizations implement GenAI tools within the workplace.
Recommendations to Improve Engagement and Job Satisfaction
Organizations need to consider employee engagement throughout the adoption process of GenAI tools. Here are some recommendations to improve engagement and thereby increase job satisfaction:
Engage your employees to identify the use cases that are most impactful for a particular role or group. Pick tasks that are most time-consuming and tedious, such that solving them would free up time to focus on more critical items.
Identify the GenAI tools and large language models (LLMs) that are most effective for solving the identified use case. Take the time to experiment, test and validate the output. Ensure that you account for a diverse set of inputs for the use case and measure the output quality, including the hallucination rate, to help build trust within your employee base using the solution.
Provide training to your team. Take advantage of the vast information available on the web, with videos, code samples, tool vendor resources and tutorials on using the specific tool, LLM, associated prompts and guardrails. Create mentors and experts within the team to help coach the rest. Showcase examples of lessons learned and success stories to inspire team members who may not see the value.
Identify and measure KPIs. These could include adoption, productivity gains, costs saved or repurposed, employee satisfaction, quality improvement and other KPIs that may be specific to the team or business (see the sketch after this list for one way such measurements might be tracked).
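To make the measurement step concrete, here is a minimal, illustrative sketch of how a team might track two of the metrics mentioned above – adoption and a simple hallucination rate derived from human spot checks. This is a sketch under stated assumptions, not the API of any particular GenAI tool; all class names, fields, and numbers are hypothetical.

```python
# Illustrative-only KPI tracker for a GenAI rollout (all names hypothetical).
# "Hallucination rate" here is simply the share of spot-checked outputs that a
# human reviewer marked as factually wrong - a deliberately simple proxy metric.

from dataclasses import dataclass
from typing import List


@dataclass
class SpotCheck:
    output_id: str
    reviewer_marked_hallucination: bool  # set by a human during review


@dataclass
class RolloutMetrics:
    team_size: int
    active_users: int            # employees who used the tool this period
    hours_saved_reported: float  # self-reported time savings
    spot_checks: List[SpotCheck]

    @property
    def adoption_rate(self) -> float:
        # Share of the team actively using the tool.
        return self.active_users / self.team_size if self.team_size else 0.0

    @property
    def hallucination_rate(self) -> float:
        # Share of reviewed outputs flagged as hallucinations.
        if not self.spot_checks:
            return 0.0
        flagged = sum(1 for c in self.spot_checks if c.reviewer_marked_hallucination)
        return flagged / len(self.spot_checks)


if __name__ == "__main__":
    metrics = RolloutMetrics(
        team_size=40,
        active_users=28,
        hours_saved_reported=65.5,
        spot_checks=[
            SpotCheck("doc-001", False),
            SpotCheck("doc-002", True),
            SpotCheck("doc-003", False),
        ],
    )
    print(f"Adoption rate: {metrics.adoption_rate:.0%}")
    print(f"Hallucination rate (spot-checked): {metrics.hallucination_rate:.0%}")
    print(f"Hours saved (self-reported): {metrics.hours_saved_reported}")
```

Tracking even these two numbers per team, per period, provides a baseline against which the recommendations above can be judged.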
Gen AI isn’t just for technologists anymore; it’s making potent tools accessible to everyone. Most business professionals who once viewed these technologies with skepticism now accept and even welcome them. And it’s no secret why, given GenAI’s power to present organizations and employees alike with unprecedented opportunities toward the future of work.