globalmediacampaign ¡ 3 years
Photo
As per government data, retail prices of edible oils have risen over 62 per cent in a little over a year, adding to the woes of consumers already reeling under the economic distress induced by the Covid-19 pandemic. https://economictimes.indiatimes.com/news/economy/agriculture/centre-discusses-with-stakeholders-abnormal-rise-in-edible-oil-prices-asks-for-steps-to-soften-rates/articleshow/82912295.cms
globalmediacampaign ¡ 3 years
Text
Modify an Amazon RDS for SQL Server instance from Standard Edition to Enterprise Edition
Microsoft SQL Server is available in various editions, and each edition brings unique features, performance, and pricing options. The edition that you install also depends on your specific requirements. Many of our customers want to change from the Standard Edition of Amazon RDS for SQL Server to Enterprise Edition to utilize its higher memory and high-availability features. To do so, you need to upgrade your existing RDS for SQL Server instance from Standard Edition to Enterprise Edition. This post walks you through that process. Prerequisites Before you get started, make sure you have the following prerequisites: Amazon RDS for SQL Server Access to the AWS Management Console SQL Server Management Studio Walkthrough overview Amazon RDS supports DB instances running several versions and editions of SQL Server. For the full list, see Microsoft SQL Server versions on Amazon RDS. For this post, we discuss the following editions: Standard – This edition enables database management with minimal IT resources, with limited feature offerings, a lack of some high-availability features, and few online DDL operations compared to Enterprise Edition. Additionally, Standard Edition has a limitation of 24 cores and 128 GB of memory. Enterprise: This is the most complete edition to use with your mission-critical workloads. With Enterprise Edition, you have all the features with no limitation of CPU and memory. The upgrade process includes the following high-level steps: Take a snapshot of the existing RDS for SQL Server Standard Edition instance. Restore the snapshot as an RDS for SQL Server Enterprise Edition instance. Verify the RDS for SQL Server Enterprise Edition instance. Upgrade your RDS for SQL Server instance on the console We first walk you through modifying your RDS for SQL Server edition via the console. We take a snapshot of the existing RDS for SQL Server instance and then restore it as a different edition of SQL Server. You can check your version of RDS for SQL Server on the SQL Server Management Studio. On the Amazon RDS console, choose Databases. Select your database and on the Actions menu, choose Take snapshot. For Snapshot name, enter a name. Choose Take snapshot. On the Snapshots page, verify that snapshot is created successfully and check that the status is Available. Select the snapshot and on the Actions menu, choose Restore snapshot. Under DB specifications, choose the new edition of SQL Server (for this post, SQL Server Enterprise Edition). For DB instance identifier, enter a name for your new instance. Select your instance class. Choose Restore DB instance. Wait for the database to be restored. After the database is restored, verify the version of SQL Server. The following screenshot shows the new RDS for SQL Server database created from the snapshot, with all databases, objects, users, permissions, passwords, and other RDS for SQL Server parameters, options, and settings restored with the snapshot. 
Upgrade your RDS for SQL Server instance via the AWS CLI
You can also use the AWS Command Line Interface (AWS CLI) to modify the RDS for SQL Server instance.

Create a DB snapshot using the create-db-snapshot command:

aws rds create-db-snapshot ^
  --db-instance-identifier mydbinstance ^
  --db-snapshot-identifier mydbsnapshot

Restore the database from the snapshot using the restore-db-instance-from-db-snapshot command:

aws rds restore-db-instance-from-db-snapshot ^
  --db-instance-identifier mynewdbinstance ^
  --db-snapshot-identifier mydbsnapshot ^
  --engine sqlserver-ee

Clean up
To avoid incurring future costs, delete your RDS for SQL Server Standard Edition resources, because they're no longer required. On the Amazon RDS console, choose Databases. Select your old database and, on the Actions menu, choose Delete.

Conclusion
In this post, I showed how to modify Amazon RDS for SQL Server from Standard Edition to Enterprise Edition using the snapshot restore method. Upgrading to Enterprise Edition allows you to take advantage of higher memory and the edition's high-availability features. To learn more about the most effective way of working with Amazon RDS for SQL Server, see Best practices for working with SQL Server.

About the author
Yogi Barot is a Microsoft Specialist Senior Solution Architect at AWS. She has 22 years of experience working with different Microsoft technologies; her specialty is SQL Server and different database technologies. Yogi has in-depth AWS knowledge and expertise in running Microsoft workloads on AWS. https://aws.amazon.com/blogs/database/modify-an-amazon-rds-for-sql-server-instance-from-standard-edition-to-enterprise-edition/
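If you prefer to script the same snapshot-and-restore flow, the following Python sketch uses boto3 to mirror the CLI commands above. It is illustrative only: the instance identifiers, snapshot name, and instance class are placeholders that you should replace with your own values.

import boto3

rds = boto3.client("rds")

SOURCE_INSTANCE = "mydbinstance"    # existing Standard Edition instance (placeholder)
SNAPSHOT_ID = "mydbsnapshot"        # snapshot to create (placeholder)
NEW_INSTANCE = "mynewdbinstance"    # new Enterprise Edition instance (placeholder)

# 1. Take a snapshot of the existing Standard Edition instance.
rds.create_db_snapshot(
    DBInstanceIdentifier=SOURCE_INSTANCE,
    DBSnapshotIdentifier=SNAPSHOT_ID,
)

# 2. Wait until the snapshot is available.
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=SNAPSHOT_ID)

# 3. Restore the snapshot as an Enterprise Edition (sqlserver-ee) instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier=NEW_INSTANCE,
    DBSnapshotIdentifier=SNAPSHOT_ID,
    Engine="sqlserver-ee",
    DBInstanceClass="db.r5.xlarge",  # choose a class that suits your workload
)

# 4. Wait for the new instance to become available, then verify the edition
#    from SQL Server Management Studio (SELECT @@VERSION).
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=NEW_INSTANCE)
print("Restore complete")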
globalmediacampaign ¡ 3 years
Photo
Hafez al-Assad, Syria's defence minister and the father of Bashar al-Assad, takes power in a military coup on November 16, 1970 that ousts president Nureddin al-Atassi. Assad, who leads the pan-Arab nationalist Baath Party, is elected president on March 12, 1971. He is the only candidate. https://economictimes.indiatimes.com/news/defence/syria-assad-dynastys-half-century-in-power/articleshow/82908949.cms
globalmediacampaign ¡ 3 years
Photo
Total FDI, including equity, re-invested earnings and capital, rose 10 per cent during 2020-21 to its highest-ever level of USD 81.72 billion, as against USD 74.39 billion in 2019-20. https://economictimes.indiatimes.com/news/economy/finance/fdi-jumps-19-to-usd-59-64-billion-in-2020-21-govt-data/articleshow/82909023.cms
globalmediacampaign ¡ 3 years
Text
Heads-Up: TokuDB Support Changes and Future Removal from Percona Server for MySQL 8.0
Back in December 2018, when we announced the general availability of Percona Server for MySQL 8.0, we also announced that the TokuDB Storage Engine has been marked as “deprecated” in this release, recommending to use the MyRocks Storage Engine as an alternative. We believe that MyRocks provides similar benefits for the majority of workloads and is better optimized for modern hardware. Since then, we have continued maintaining the storage engine in the 8.0 release, e.g. by incorporating bug fixes. However, the ongoing amount of changes that are still occurring in the upstream MySQL 8.0 codebase have been a constant challenge and a cause for concern. Maintaining TokuDB as part of the 8.0 codebase has become increasingly difficult and time-consuming. Based on the number of support requests and conversations on our forums, we have seen very little to no adoption of TokuDB in Percona Server for MySQL 8.0. We have therefore decided that the TokuDB storage engine will be disabled in future versions of Percona Server for MySQL 8.0. Beginning with Percona Server version 8.0.25, we’ll add a respective warning notice to the release notes, to inform users about this upcoming change. Timeline Starting with Percona Server version 8.0.26 (expected in Q3 2021), the storage engine will still be included in the binary builds and packages, but disabled by default. Users upgrading from previous versions will still be able to re-enable it manually, so they can perform the necessary migration steps. Starting with Percona Server for MySQL version 8.0.28 (expected to ship in Q1 2022), the TokuDB storage will no longer be supported and will be removed from the installation packages. It will still be part of the 8.0 source code, but not enabled in our binary builds. We intend to fully remove it in the next major version of Percona Server for MySQL (9.0 or whatever version that will be). Note that this change only applies to Percona Server for MySQL version 8.0 – TokuDB remains enabled and supported in Percona Server 5.7 until this release has reached the end of its support period. In case you’re still using TokuDB in Percona Server 8.0, we recommend switching to InnoDB or MyRocks instead as soon as possible. Please consult our documentation on how to migrate and remove the TokuDB storage engine. If you need any assistance with migrating your data or if you have any questions or concerns about this, don’t hesitate to reach out to us – our experts are here to help! https://www.percona.com/blog/2021/05/21/tokudb-support-changes-and-future-removal-from-percona-server-for-mysql-8-0/
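If you are unsure whether any of your schemas still use TokuDB, a small script along the following lines can inventory the affected tables and convert them ahead of the change. This is a rough sketch rather than a Percona-supplied tool; the connection settings are placeholders, ALTER TABLE rewrites each table, and you should test the conversion on a copy of your data first (swap in MyRocks as the target engine if that suits your workload better).

import mysql.connector

# Placeholder connection settings.
conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()

# Find every table still using the TokuDB storage engine.
cur.execute(
    "SELECT table_schema, table_name FROM information_schema.tables "
    "WHERE engine = 'TokuDB'"
)
tokudb_tables = cur.fetchall()
print(f"Found {len(tokudb_tables)} TokuDB table(s)")

# Convert each one to InnoDB (use ENGINE=ROCKSDB if MyRocks is the target).
for schema, table in tokudb_tables:
    print(f"Converting {schema}.{table} ...")
    cur.execute(f"ALTER TABLE `{schema}`.`{table}` ENGINE=InnoDB")

conn.close()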
globalmediacampaign ¡ 3 years
Text
How to Setup Three Node MySQL 8 Cluster on Debian 10
MySQL is a widely-used open source relational database management system. In this tutorial, we will use one master node to store the cluster's configuration and two data nodes to store the cluster data. https://www.howtoforge.com/how-to-setup-three-node-mysql-8-cluster-on-debian-10/
globalmediacampaign ¡ 3 years
Text
MySQL Connect Dialog
About a month ago, I published how you can connect to MySQL with a small form. One suggestion, or lets promote it to a request, from that post was: “Nice, but how do you create a reusable library for the MySQL Connection Dialog box?” That was a good question but I couldn’t get back until now to write a new blog post. This reusable MySQL connection dialog lets you remove MySQL connection data from the command-line history. This post also shows you how to create and test a Powershell Module. The first step to create a module requires that you set the proper %PSModulePath% environment variable. If you fail to do that, you can put it into a default PowerShell module location but that’s not too effective for testing. You launch the System Properties dialog and click the Environment Variables button: Then, you edit the PSModulePath environment variable in the bottom list of environment variables and add a new path to the PSModulePath. My development path in this example is: C:Datacit225mysqlpsmod I named the file the same as the function Get-Credentials.psm1 consistent with the Microsoft instructions for creating a PowerShell module and their instructions for Pascal case name with an approved verb and singular noun. Below is the code for the Get-Credentials.psm1 file: function Get-Credentials { # Add libraries for form components. Add-Type -AssemblyName System.Windows.Forms Add-Type -AssemblyName System.Drawing # Define a user credential form. $form = New-Object System.Windows.Forms.Form $form.Text = 'User Credential Form' $form.Size = New-Object System.Drawing.Size(300,240) $form.StartPosition = 'CenterScreen' # Define a button and assign it and its controls to a form. $loginButton = New-Object System.Windows.Forms.Button $loginButton.Location = New-Object System.Drawing.Point(60,160) $loginButton.Size = New-Object System.Drawing.Size(75,23) $loginButton.Text = 'Login' $loginButton.DialogResult = [System.Windows.Forms.DialogResult]::OK $form.AcceptButton = $loginButton $form.Controls.Add($loginButton) # Define a button and assign it and its controls to a form. $cancelButton = New-Object System.Windows.Forms.Button $cancelButton.Location = New-Object System.Drawing.Point(155,160) $cancelButton.Size = New-Object System.Drawing.Size(75,23) $cancelButton.Text = 'Cancel' $cancelButton.DialogResult = [System.Windows.Forms.DialogResult]::Cancel $form.CancelButton = $cancelButton $form.Controls.Add($cancelButton) # Define a label and assign it and its controls to a form. $userLabel = New-Object System.Windows.Forms.Label $userLabel.Location = New-Object System.Drawing.Point(30,15) $userLabel.Size = New-Object System.Drawing.Size(100,20) $userLabel.Text = 'Enter User Name:' $form.Controls.Add($userLabel) # Define a TextBox and assign it and its controls to a form. $userTextBox = New-Object System.Windows.Forms.TextBox $userTextBox.Location = New-Object System.Drawing.Point(140,15) $userTextBox.Size = New-Object System.Drawing.Size(100,20) $form.Controls.Add($userTextBox) # Define a label and assign it and its controls to a form. $pwdLabel = New-Object System.Windows.Forms.Label $pwdLabel.Location = New-Object System.Drawing.Point(30,40) $pwdLabel.Size = New-Object System.Drawing.Size(100,20) $pwdLabel.Text = 'Enter Password:' $form.Controls.Add($pwdLabel) # Define a TextBox and assign it and its controls to a form. 
$pwdTextBox = New-Object System.Windows.Forms.TextBox $pwdTextBox.Location = New-Object System.Drawing.Point(140,40) $pwdTextBox.Size = New-Object System.Drawing.Size(100,20) $pwdTextBox.PasswordChar = "*" $form.Controls.Add($pwdTextBox) # Define a label and assign it and its controls to a form. $hostLabel = New-Object System.Windows.Forms.Label $hostLabel.Location = New-Object System.Drawing.Point(30,65) $hostLabel.Size = New-Object System.Drawing.Size(100,20) $hostLabel.Text = 'Enter Hostname:' $form.Controls.Add($hostLabel) # Define a TextBox and assign it and its controls to a form. $hostTextBox = New-Object System.Windows.Forms.TextBox $hostTextBox.Location = New-Object System.Drawing.Point(140,65) $hostTextBox.Size = New-Object System.Drawing.Size(100,20) $form.Controls.Add($hostTextBox) # Define a label and assign it and its controls to a form. $portLabel = New-Object System.Windows.Forms.Label $portLabel.Location = New-Object System.Drawing.Point(30,90) $portLabel.Size = New-Object System.Drawing.Size(100,20) $portLabel.Text = 'Enter Port #:' $form.Controls.Add($portLabel) # Define a TextBox and assign it and its controls to a form. $portTextBox = New-Object System.Windows.Forms.TextBox $portTextBox.Location = New-Object System.Drawing.Point(140,90) $portTextBox.Size = New-Object System.Drawing.Size(100,20) $form.Controls.Add($portTextBox) # Define a label and assign it and its controls to a form. $dbLabel = New-Object System.Windows.Forms.Label $dbLabel.Location = New-Object System.Drawing.Point(30,115) $dbLabel.Size = New-Object System.Drawing.Size(100,20) $dbLabel.Text = 'Enter DB Name:' $form.Controls.Add($dbLabel) # Define a TextBox and assign it and its controls to a form. $dbTextBox = New-Object System.Windows.Forms.TextBox $dbTextBox.Location = New-Object System.Drawing.Point(140,115) $dbTextBox.Size = New-Object System.Drawing.Size(100,20) $form.Controls.Add($dbTextBox) $form.Topmost = $true $form.Add_Shown({$userTextBox.Select()}) $result = $form.ShowDialog() if ($result -eq [System.Windows.Forms.DialogResult]::OK) { # Assign inputs to connection variables. $uid = $userTextBox.Text $pwd = $pwdTextBox.Text $server = $hostTextBox.Text $port= $portTextBox.Text $dbName = $dbTextBox.Text # Declare connection string. $credentials = 'server=' + $server + ';port=' + $port + ';uid=' + $uid + ';pwd=' + $pwd + ';database=' + $dbName } else { $credentials = $null } return $credentials } You must create a Get-Connection directory in your C:Datacit225mysqlpsmod directory that you added to the PSModulePath. Then, you must put your module code in the Get-Connection subdirectory as the Get-Connection.psm1 module file. The test.ps1 script imports the Get-Credentials.psm1 PowerShell module, launches the MySQL Connection Dialog form and returns the connection string. The test.ps1 code is: # Import your custom module. Import-Module Get-Credentials # Test the Get-Credentials function. if (($credentials = Get-Credentials) -ne $undefinedVariable) { Write-Host($credentials) } You can test it from the local any directory with the following command-line: powershell .test.ps1 It should print something like this to the console: server=localhost;port=3306;uid=student;pwd=student;database=studentdb If you got this far, that’s great! You’re ready to test a connection to the MySQL database. Before you do that, you should create the same avenger table I used in the initial post and insert the same or some additional data. 
Connect to the any of your test databases and rung the following code to create the avenger table and nine rows of data. -- Create the avenger table. CREATE TABLE db_connect ( db_connect_id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT , version VARCHAR(10) , user VARCHAR(24) , db_name VARCHAR(10)); -- Seed the avenger table with data. INSERT INTO avenger ( first_name, last_name, avenger ) VALUES ('Anthony', 'Stark', 'Iron Man') ,('Thor', 'Odinson', 'God of Thunder') ,('Steven', 'Rogers', 'Captain America') ,('Bruce', 'Banner', 'Hulk') ,('Clinton', 'Barton', 'Hawkeye') ,('Natasha', 'Romanoff', 'Black Widow') ,('Peter', 'Parker', 'Spiderman') ,('Steven', 'Strange', 'Dr. Strange') ,('Scott', 'Lange', 'Ant-man'); Now, let’s promote our use-case test.ps1 script to a testQuery.ps1 script, like: # Import your custom module. Import-Module Get-Credentials # Test the Get-Credentials function. if (($credentials = Get-Credentials) -ne $undefinedVariable) { # Connect to the libaray MySQL.Data.dll Add-Type -Path 'C:Program Files (x86)MySQLConnector NET 8.0Assembliesv4.5.2MySql.Data.dll' # Create a MySQL Database connection variable that qualifies: # [Driver]@ConnectionString # ============================================================ # You can assign the connection string before using it or # while using it, which is what we do below by assigning # literal values for the following names: # - server= or 127.0.0.1 for localhost # - uid= # - pwd= # - port= or 3306 for default port # - database= # ============================================================ $Connection = [MySql.Data.MySqlClient.MySqlConnection]@{ConnectionString=$credentials} $Connection.Open() # Define a MySQL Command Object for a non-query. $sqlCommand = New-Object MySql.Data.MySqlClient.MySqlCommand $sqlDataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter $sqlDataSet = New-Object System.Data.DataSet # Assign the connection and command text to the MySQL command object. $sqlCommand.Connection = $Connection $sqlCommand.CommandText = 'SELECT CONCAT(first_name," ",last_name) AS full_name ' + ', avenger ' + 'FROM avenger' # Assign the connection and command text to the query method of # the data adapter object. $sqlDataAdapter.SelectCommand=$sqlCommand # Assign the tuples of data to a data set and return the number of rows fetched. $rowsFetched=$sqlDataAdapter.Fill($sqlDataSet, "data") # Print to console the data returned from the query. foreach($row in $sqlDataSet.tables[0]) { write-host "Avenger:" $row.avenger "is" $row.full_name } # Close the MySQL connection. $Connection.Close() } It should give you the MySQL Connection Dialog and with the correct credentials print the following to your console: Avenger: Iron Man is Anthony Stark Avenger: God of Thunder is Thor Odinson Avenger: Captain America is Steven Rogers Avenger: Hulk is Bruce Banner Avenger: Hawkeye is Clinton Barton Avenger: Black Widow is Natasha Romanoff Avenger: Spiderman is Peter Parker Avenger: Dr. Strange is Steven Strange Avenger: Ant-man is Scott Lange As always, I hope this helps those looking to exploit technology. https://blog.mclaughlinsoftware.com/2021/05/21/mysql-connect-dialog/
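For readers who want the same "prompt for credentials, then query" pattern outside of Windows and PowerShell, here is a hypothetical Python equivalent using getpass and MySQL Connector/Python. It assumes the avenger table with first_name, last_name, and avenger columns that the INSERT statements above populate.

import getpass
import mysql.connector

# Prompt for connection details instead of passing them on the command line.
user = input("Enter User Name: ")
pwd = getpass.getpass("Enter Password: ")
host = input("Enter Hostname: ") or "localhost"
port = input("Enter Port #: ") or "3306"
db = input("Enter DB Name: ")

conn = mysql.connector.connect(
    host=host, port=int(port), user=user, password=pwd, database=db
)
cur = conn.cursor()
cur.execute(
    "SELECT CONCAT(first_name, ' ', last_name) AS full_name, avenger FROM avenger"
)
for full_name, avenger in cur:
    print(f"Avenger: {avenger} is {full_name}")
conn.close()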
globalmediacampaign ¡ 3 years
Text
Deploy Moodle on OCI with MDS
Moodle is the world’s most popular learning management system. Moodle is Open Source and of course it’s compatible with the most popular Open Source Database : MySQL ! I’ve already posted an article on how to install Moodle on OCI before we released MySQL Database Service. In this article we will see how to deploy Moodle very easily in OCI and using MDS. Once again we will use the easiest way to deploy an complete architecture on OCI: Resource Manager. We will then use a stack I’ve created that is available on GitHub This stack includes Terraform code allowing to deploy different architectures that we can use for Moodle. I’ve tried to cover the main possible architecture directly in the stack. It’s also possible to just download the Terraform code and modify it if you need. You also have the possibility to generate again a stack from your modified code. I’ve already multiple stacks you can deploy directly on OCI that allows you to deploy the same architectures as I cover in this article but for other solutions directly from this page: Deploy to OCI. Let’s have a look at some of the possible architectures we can deploy directly by clicking on the “deploy to OCI” button. Simplest Deployment This deployment, is the most simple to deploy. One single MySQL Database Service Instance and one compute instance as the Moodle Web Server. The architecture is composed by the following components: Availability domains: Availability domains are standalone, independent data centers within a region. The physical resources in each availability domain are isolated from the resources in the other availability domains, which provides fault tolerance. Availability domains don’t share infrastructure such as power or cooling, or the internal availability domain network. So, a failure at one availability domain is unlikely to affect the other availability domains in the region.Virtual cloud network (VCN) and subnets: a VCN is a customizable, software-defined network that you set up in an Oracle Cloud Infrastructure region. Like traditional data center networks, VCNs give you complete control over your network environment. A VCN can have multiple non-overlapping CIDR blocks that you can change after you create the VCN. You can segment a VCN into subnets, which can be scoped to a region or to an availability domain. Each subnet consists of a contiguous range of addresses that don’t overlap with the other subnets in the VCN. You can change the size of a subnet after creation. A subnet can be public or private.Internet gateway: the internet gateway allows traffic between the public subnets in a VCN and the public internet.Network security group (NSG): NSGs act as virtual firewalls for your cloud resources. With the zero-trust security model of Oracle Cloud Infrastructure, all traffic is denied, and you can control the network traffic inside a VCN. An NSG consists of a set of ingress and egress security rules that apply to only a specified set of VNICs in a single VCN.MySQL Database Service (MDS): MySQL Database Service is a fully managed Oracle Cloud Infrastructure (OCI) database service that lets developers quickly develop and deploy secure, cloud native applications. Optimized for and exclusively available in OCI, MySQL Database Service is 100% built, managed, and supported by the OCI and MySQL engineering teams.Compute Instance: OCI Compute service enables you provision and manage compute hosts in the cloud. 
You can launch compute instances with shapes that meet your resource requirements (CPU, memory, network bandwidth, and storage). After creating a compute instance, you can access it securely, restart it, attach and detach volumes, and terminate it when you don’t need it. Apache, PHP and Moodle are installed on the compute instance.Let’s see the different steps to deploy this architecture directly from here: You will redirected the OCI’s dashboard create stack page: As soon as you accept the Oracle Terms of Use, the form will be pre-filled by some default values. You can of course decide in which compartment you want to deploy the architecture: The second screen of the wizard the most important form where we need to fill all the required variables and also change the architecture as we will see later: The second part of the form looks like this. Note that we can enable High Availability for MDS, use multiple Web Server Instances or use existing infrastructure. This means that we have the possibility to use an existing VCN, subnets, etc… And of course we can also specify the Shapes for the compute instances (from a dropdown list of the available shapes in your tenancy and compartment) and for the MDS instance (this one needs to be entered manually). When we click next, we reach the last screen which summarize the choices and we can click on “Create”. By default the Architecture will be automatically applied (meaning all necessary resources will be deployed): Now we need to be a little bit of patience while everything is deployed… Other Possible Architectures As we could see earlier on the second screen of the stack’s creation wizard, we could also specify the use of multiple Web Servers. Then we have the possibility to deploy them on different Fault Domains (default) or use different Availability Domains: It’s possible to also specify if all Moodle servers will use their own database and user or share the same schema in case we want to use a load balancer in front of all the web servers and spread the load for the same site/application. The default architecture with 3 web servers looks like this: And if you want to enable High Availability for the MDS instance, you just need to check the box: And you will have an architecture like this: Finishing Moodle’s Installation When the deployment using the stack is finished, you will the a nice large green square with “SUCCEEDED” and in the log you will also see some important information: This information is also available in the Output section on the left: Now, we just need to open a web browser and enter that public ip to finish the installation of Moodle: And we follow the wizard until the database configuration section: On the screen above, we use the information that we can find in the Stack’s output section. Then we continue the installation process until it’s completed and finally we can enjoy our new Moodle deployment: As you can see, it has never been so easy to deploy applications using MySQL in Oracle Cloud Infrastructure. https://lefred.be/content/deploy-moodle-on-oci-with-mds/
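Before running the Moodle web installer, it can be worth confirming that the web server instance can actually reach the MDS endpoint reported in the stack output; security lists are the usual reason the installer's database step fails. The short sketch below is not part of the stack, and the host, user, and password are placeholders you would take from your own stack outputs.

import socket
import mysql.connector

MDS_HOST = "10.0.1.10"            # private IP of the MDS instance (placeholder)
MDS_PORT = 3306
MDS_USER = "moodle"               # placeholder credentials from the stack output
MDS_PASSWORD = "change-me"

# 1. Basic TCP reachability from the Moodle compute instance.
with socket.create_connection((MDS_HOST, MDS_PORT), timeout=5):
    print("TCP connection to the MDS endpoint succeeded")

# 2. Authenticate and confirm the server version Moodle will see.
conn = mysql.connector.connect(
    host=MDS_HOST, port=MDS_PORT, user=MDS_USER, password=MDS_PASSWORD
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print("MySQL version:", cur.fetchone()[0])
conn.close()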
globalmediacampaign ¡ 3 years
Photo
The Cyberspace Administration of China (CAC) said that the 105 apps violated laws by excessively collecting and illegally accessing users' personal information, according to a statement posted on its site Friday. https://economictimes.indiatimes.com/news/international/world-news/china-authorities-name-105-apps-for-improper-data-practices/articleshow/82829487.cms
globalmediacampaign ¡ 3 years
Text
How to run MySQL Enterprise Edition on Oracle Cloud Infrastructure from Marketplace?
Deploy MySQL Enterprise Edition with the OCI Marketplace application

In this blog we will look at the process of deploying MySQL Enterprise Edition into Oracle Cloud Infrastructure. We will cover the following agenda:
1. Introduction to MySQL EE and OCI Marketplace
2. Deploy MySQL EE into Oracle Cloud Infrastructure
3. Accessing the MySQL database from on-premises
4. Licensing/purchase model for using MySQL EE from OCI Marketplace
5. Conclusion

Introduction
If you want to run MySQL Enterprise Edition (EE) on an Oracle Cloud compute instance (VM), the usual approach is to first download the MySQL EE binaries from the Oracle website, manually push them to the Oracle Cloud instance where MySQL EE needs to be installed, and then follow the installation steps. Another approach is to use the MySQL EE image from Oracle Cloud Marketplace. It automates this manual process of downloading the binaries, pushing them to Oracle Cloud and installing them; it is a pre-loaded package which by default installs the following software:
- MySQL Server EE
- MySQL Enterprise Backup
- MySQL Shell
- MySQL Router
- MySQL Enterprise Thread Pool

The MySQL Enterprise Edition Marketplace application is an OCI compute instance, running Oracle Linux 7.7, with MySQL EE 8.0.x. The MySQL EE installation on the deployed image is similar to the RPM installation. This solution is user-managed, meaning you are responsible for upgrades and maintenance. MySQL Enterprise Edition version 8.0 is available in OCI Marketplace; MySQL 5.7 and 5.6 are not available in Oracle Cloud Infrastructure Marketplace. If you need a previous version, you have to create an OCI compute Linux instance and download and install MySQL EE yourself, similar to on-premises.

What is Oracle Cloud Infrastructure Marketplace?
Oracle Cloud Infrastructure Marketplace is an online store that offers solutions specifically for customers of Oracle Cloud Infrastructure. You can find listings for two types of solutions from Oracle and trusted partners:
1. Image – Images are templates of virtual hard drives that determine the operating system and software to run on an instance. You can deploy image listings on an Oracle Cloud Infrastructure Compute instance.
2. Stack – Stacks represent definitions of groups of Oracle Cloud Infrastructure resources that you can act on as a group.
More info: https://docs.oracle.com/en-us/iaas/Content/Marketplace/Concepts/marketoverview.htm

What is MySQL Enterprise Edition?
MySQL Enterprise Edition is a commercial product. Like MySQL Community Edition, MySQL Enterprise Edition includes MySQL Server, a fully integrated transaction-safe, ACID-compliant database with full commit, rollback, crash-recovery, and row-level locking capabilities. In addition, MySQL Enterprise Edition includes components designed to provide monitoring and online backup, as well as improved security, scalability and high availability. More info: https://dev.mysql.com/doc/refman/8.0/en/mysql-enterprise.html

Deploy MySQL EE into Oracle Cloud Infrastructure
Prerequisites: make sure you have created an account with Oracle Cloud, and make sure a virtual cloud network (VCN) is created: https://docs.oracle.com/en-us/iaas/Content/GSG/Tasks/creatingnetwork.htm

Step 01: Create a compute instance.
Step 02: Change the shape based on your requirements (in terms of CPU and RAM).
Step 03: Configure the network.
Step 04: Add SSH key files.
Step 05: Create the instance.
Step 06: View the created instance.
Step 07: Note down the instance details:
Instance Public IP: 140.238.166.106
Instance User Name: opc
Instance Private IP: 10.0.0.92
MySQL Host Name: localhost
MySQL UserName: root
MySQL Port: 3306
Step 08: Let's connect to the instance from the on-premises network using PuTTY. Make sure PuTTY is installed on your local machine; if not, download it from https://www.putty.org/. Enter the correct public IP address as well as the .ppk file to connect. When you click Open, you will see that you are connected to Oracle Cloud Infrastructure.
Step 09: Get the temporary password:
$ sudo grep 'temporary password' /var/log/mysqld.log
Step 10: Log in to the MySQL DB.
Step 11: Change the password for the root user from the temporary password to a new password:
alter user root@'localhost' identified by 'MySQL8.0';
Step 12: You are now ready to execute commands with MySQL Enterprise Edition. That's it. You can also connect to MySQL Server using the MySQL Shell utility.

You may have noticed that I did not manually install any MySQL clients, libraries, dependencies or MySQL Shell, because they are bundled and available as a pre-installed image at Oracle Cloud Marketplace. It's easy to plug in and play with MySQL Enterprise Edition. More info is available on the MySQL website: https://dev.mysql.com/doc/refman/8.0/en/mysql-oci-marketplace.html

Licensing Model
This is a BYOL (bring your own license) product, with additional fees for the infrastructure usage. Hence:
Price of MySQL EE on OCI Marketplace = Price of MySQL EE + Price of Oracle Cloud Infrastructure VM usage
Pricing for MySQL EE goes via MySQL Sales; you can talk with sales to get the detailed pricing matrix (https://www.mysql.com/about/contact/). The MySQL EE licensing model is "per server", for 2 classes: servers with 1-4 sockets, and servers with 5+ sockets.
MySQL Enterprise Edition 1-4 socket server: USD 5,000
MySQL Enterprise Edition 5+ socket server: USD 10,000
More info: https://www.mysql.com/products/
The price of Oracle Cloud Infrastructure VM usage depends on the kind of shape (OCPU, RAM, storage) you configure to run MySQL Enterprise Edition (EE). You can calculate it using the OCI cost estimator: https://www.oracle.com/in/cloud/cost-estimator.html
If you are bringing your on-premises license, this offering is available at no additional charge. Fees for usage of Oracle infrastructure resources, such as compute instances, are charged separately and based on the rate of usage. Learn about pricing: https://www.oracle.com/cloud/compute/pricing.html

Conclusion
MySQL Enterprise Edition includes the most comprehensive set of advanced features (solutions for scalability, performance, high availability and security), management tools (unified monitoring and hot/online backup) and technical support (24x7 support backed by the MySQL engineering team). It is easy to deploy on private and public clouds, and it is available as a managed service in Oracle Cloud, named MySQL Database Service (MDS). MySQL Enterprise Edition is a BYOL product sold as an annual subscription, which has to be renewed annually. https://mysqlsolutionsarchitect.blogspot.com/2021/05/how-to-run-mysql-enterprise-editionee.html
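As an optional check, you can confirm from the instance itself that the Marketplace image really is running a commercial build: the version comment of Enterprise builds identifies the server as a commercial edition, and enterprise plugins such as the thread pool ship with it. The sketch below uses MySQL Connector/Python with the root password set in step 11; run it on the compute instance, since root access is limited to localhost by default.

import mysql.connector

# Run on the OCI compute instance itself (root@localhost).
conn = mysql.connector.connect(host="localhost", user="root", password="MySQL8.0")
cur = conn.cursor()

cur.execute("SELECT VERSION(), @@version_comment")
version, comment = cur.fetchone()
print(f"Server version: {version}")
print(f"Build: {comment}")  # commercial builds report an enterprise/commercial comment

# List any enterprise plugins that are already registered.
cur.execute("SHOW PLUGINS")
enterprise = [name for name, *_ in cur.fetchall() if "thread_pool" in name or "audit" in name]
print("Enterprise plugins found:", enterprise or "none loaded yet")
conn.close()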
globalmediacampaign ¡ 3 years
Text
New Features in Percona Server for MySQL 8.0.23-14
Percona Server for MySQL 8.0.23-14 was released last week and I wanted to take a minute to call out some of the interesting new features that we have introduced in this release. These are included in addition to the features and improvements in MySQL 8.0.23 that were introduced by the Oracle MySQL team (and to which Percona also contributed). Hashicorp Vault Plugin Support for KV Secrets Engine – Version 2 (PS-5364) As of Percona Server for MySQL 8.0.23-14, the Hashicorp Vault plugin can be configured to specifically use either V1 or V2 Secrets Engine API or it can be configured to probe and auto-detect the best version to use. See the documentation here. Adaptive Network Buffers (PS-7364) When long-lived connections are used or connections are “pooled”, network buffers tend to grow towards max_allowed_packet size and never shrink automatically until the connection is terminated. For example, an occasional big query will eventually increase the size of the network buffers of all of the connections within a connection pool (assuming connections are picked at random from the pool). A large value of max_allowed_packet_size, like 64-128Mb, combined with a significant number of connections (> 256) may lead to enormous memory overhead. This feature adds the ability to periodically shrink a connection’s network buffer size at a specified interval. See the documentation here. Credit for this goes to Evgeniy Firsov, who submitted the original idea to us. X Plugin – Reconfigure TLS Certificate at Runtime (PS-7125) As of 8.0.16, it is possible to configure the server’s TLS certificate at runtime, but this did not extend to the TLS context used by other enabled server plugins or components such as the X Plugin. As of Percona Server for MySQL 8.0.23-14, we have added a callback to the plugin interface, which notifies plugins when a user executes ALTER INSTANCE RELOAD TLS; The X Plugin now implements this callback and reloads certificates correctly. MyRocks – Secondary Index and Non-Binary Collation Performance Optimization (PS-6780) Prior to Percona Server for MySQL 8.0.23-14, character indexes that were based on non-binary collations required a Primary Key lookup in order to obtain the ‘decoded’ or original column data that the index was based upon. Now, as of Percona Server for MySQL 8.0.23-14, new indexes will also store the original decoded column values with the encoded index value, and covering index scans can be truly covered with MyRocks. Existing indexes will need to be rebuilt (and will occupy more space) in order to take advantage of this performance improvement. As a result of this change, the need for the options and functionality behind “rocksdb_strict_collation_check” and “rocksdb_strict_collation_exceptions” no longer exists, and as such these options have been deprecated. MyRocks – DEFAULT Value Expressions (PS-5846) Prior to Percona Server 8.0.23-14, MyRocks did support the use of DEFAULT value expressions on CREATE or ALTER TABLE statements. For example, if were to execute:CREATE TABLE `t1` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `subject` text COLLATE utf8mb4_0900_ai_ci NOT NULL DEFAULT (''), PRIMARY KEY (`id`) ) ENGINE=ROCKSDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;You would receive the error “3774 – ‘Specified storage engine’ is not supported for default value expressions.”. As of 8.0.23-14, DEFAULT value expressions are now supported with MyRocks. 
MyRocks – GENERATED Columns (PS-4894)
I saved the best for last: as of Percona Server for MySQL 8.0.23-14, MyRocks now supports the use of generated columns, which may be either VIRTUAL or STORED. This allows indexes to be created against these generated columns for some pretty powerful functionality. This is particularly useful for indexing JSON types. Let us take a very quick look at a contrived example. Start with a simple table:

CREATE TABLE `t` (
  `id` int NOT NULL AUTO_INCREMENT,
  `doc` json DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=RocksDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

Now, we know our JSON documents have a field called 'name' that we would like to index on, so, first, we create a generated column:

ALTER TABLE `t` ADD `name` char(80) AS (doc->>"$.name");

Then we add our new index on that column:

ALTER TABLE `t` ADD INDEX `n` (`name`);

Voila! Read more about this release here and download it here. Thank you for your continued interest in our Percona Software! https://www.percona.com/blog/2021/05/20/new-features-in-percona-server-for-mysql-8-0-23-14/
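To see the generated-column feature end to end, the following sketch drives the same kind of example from Python. It assumes a scratch schema named test, a server running Percona Server for MySQL 8.0.23-14 or later with MyRocks enabled, and placeholder connection settings.

import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1", user="root", password="secret", database="test"
)
cur = conn.cursor()

cur.execute("DROP TABLE IF EXISTS t")
cur.execute(
    "CREATE TABLE t ("
    " id INT NOT NULL AUTO_INCREMENT,"
    " doc JSON DEFAULT NULL,"
    " name CHAR(80) AS (doc->>'$.name'),"  # generated column over the JSON document
    " PRIMARY KEY (id),"
    " KEY n (name)"                        # secondary index on the generated column
    ") ENGINE=RocksDB"
)

cur.executemany(
    "INSERT INTO t (doc) VALUES (%s)",
    [('{"name": "alice", "team": "dba"}',), ('{"name": "bob", "team": "dev"}',)],
)
conn.commit()

# This lookup can now be resolved through the index on the generated column.
cur.execute("SELECT id, doc FROM t WHERE name = 'alice'")
print(cur.fetchall())
conn.close()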
globalmediacampaign ¡ 3 years
Text
Extending the MySQL Shell with Python
One of the problems that comes with age is that there is so much 'baggage' filling valuable brain area that it sometimes requires a mental sweeping before trying to learn a new skill. I am still getting rid of FORTRAN and COBOL factoids but need to come up to speed on extending MySQL for some talks I am giving. So away with the PROCEDURE DIVISION stuff...

The MySQL Shell, or mysqlsh, is very extensible, and it is easy to create a plugin for some handy routines. Your routines can go in functions, and there are examples below. The tricky part is that you have to tell mysqlsh that you are creating an extension and then hook up your function(s) to that extension object. Finally, that extension object has to be registered with the shell.

Below are two functions which query the MySQL world database for the contents of the world.city and the world.country tables. The first thing for both of these functions is to make sure we are connected to the MySQL database instance, or, in other words, that our shell has a session established with the server. Then it is a quick query and a dump of the results.

Now the tricky stuff! We need an object for our extension:
plugin_obj = shell.create_extension_object()
Then we provide the information for our functions with:
shell.add_extension_object_member(plugin_obj, "city", show_tables, {"brief" : "Dave's demo 2 - city"} )
and
shell.add_extension_object_member(plugin_obj, "countryinfo", show_country, {"brief" : "Dave's demo 2 - country"} )
before we tell the shell to use our new plugin with:
shell.register_global("world", plugin_obj, {"brief": "A 2nd demo for Dave"})
which tells the shell we want these functions available under the name world.

When a new mysqlsh session is started, we can use \h to list the available plugins and look for our new plugin. If you type 'world' and then the TAB key, you will be able to see that the two functions are ready! Then you can type the name of the function desired and the query will run. I have omitted the output from the query.
Please note that you can use plugins written in either JavaScript or Python, and that you can run programs written in either language from the other language mode.

The Code

def show_tables(session=None):
    """ Lists all the records in the world.city table

    Simple function to query records in table
    Args:
        session (object): Optional session object or use current mysqlsh session
    Returns: Nothing
    """
    if session is None:
        session = shell.get_session()
    if session is None:
        print("No session specified - pass a session or connect shell to database")
        return
    if session is not None:
        r = session.run_sql("SELECT * FROM world.city")
        shell.dump_rows(r)

def show_country(session=None):
    """  Yada - yada -> see above function
    """
    if session is None:
        session = shell.get_session()
    if session is None:
        print("No session specified - pass a session or connect shell to database")
        return
    if session is not None:
        r = session.run_sql("SELECT * FROM world.country")
        shell.dump_rows(r)

plugin_obj = shell.create_extension_object()
shell.add_extension_object_member(plugin_obj, "city", show_tables, {"brief" : "Dave's demo 2 - city"} )
shell.add_extension_object_member(plugin_obj, "countryinfo", show_country, {"brief" : "Dave's demo 2 - country"} )
shell.register_global("world", plugin_obj, {"brief": "A 2nd demo for Dave"})

All opinions expressed in this blog are those of Dave Stokes, who is actually amazed to find anyone else agreeing with him. https://elephantdolphin.blogspot.com/2021/05/extending-mysql-shell-with-python.html
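If you would rather have the shell load the extension automatically instead of sourcing the script by hand, MySQL Shell also picks up plugins from folders under ~/.mysqlsh/plugins/ at startup, each folder containing an init.py. The following is only a sketch of how the functions above could be packaged that way; the folder name world_plugin is an arbitrary example.

# ~/.mysqlsh/plugins/world_plugin/init.py
# Executed by mysqlsh at startup; the 'shell' global is provided by the shell itself.

def city(session=None):
    """Lists all the records in the world.city table."""
    if session is None:
        session = shell.get_session()
    if session is None:
        print("No session specified - pass a session or connect shell to database")
        return
    shell.dump_rows(session.run_sql("SELECT * FROM world.city"))

plugin_obj = shell.create_extension_object()
shell.add_extension_object_member(plugin_obj, "city", city, {"brief": "List world.city"})
shell.register_global("world", plugin_obj, {"brief": "World database helpers"})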
globalmediacampaign ¡ 3 years
Text
Options for legacy application modernization with Amazon Aurora and Amazon DynamoDB
Legacy application modernization can be complex. To reduce complexity and risk, you can choose an iterative approach by first replatforming the workload to Amazon Aurora. Then you can use the cloud-native integrations in Aurora to introduce other AWS services around the edges of the workload, often without changes to the application itself. This approach allows teams to experiment, iterate, and modernize legacy workloads iteratively. Modern cloud applications often use several database types working in unison, creating rich experiences for customers. To that end, the AWS database portfolio consists of multiple purpose-built database services that allow you to use the right tool for the right job based on the nature of the data, access patterns, and scalability requirements. For example, a modern cloud-native ecommerce solution can use a relational database for customer transactions and a nonrelational document database for product catalog and marketing promotions. If you’re migrating a legacy on-premises application to AWS, it can be challenging to identify the right purpose-built approach. Furthermore, introducing purpose-built databases to an application that runs on an old-guard commercial database might require extensive rearchitecture. In this post, I propose a modernization approach for legacy applications that make extensive use of semistructured data such as XML in a relational database. Starting in the mid-90s, developers began experimenting with storing XML in relational databases. Although commercial and open-source databases have since introduced native support for nonrelational data types, an impedance mismatch still exists between the relational SQL query language and access methods that may introduce data integrity and scalability challenges for your application. Retrieval of rows based on the value of an XML attribute can involve a resource-consuming full table scan, which may result in performance bottlenecks. Because enforcing accuracy and consistency of relationships between tables, or referential integrity, on nonrelational data types in a relational database isn’t possible, it may lead to orphaned records and data quality challenges. For such scenarios, I demonstrate a way to introduce Amazon DynamoDB alongside Amazon Aurora PostgreSQL-compatible edition, using the native integration of AWS Lambda with Aurora, without any modifications to your application’s code. DynamoDB is a fully managed key-value and document database with single-millisecond query performance, which makes it ideal to store and query nonrelational data at any scale. This approach paves the way to gradual rearchitecture, whereby new code paths can start to query DynamoDB following the Command-Query Responsibility Segregation pattern. When your applications are ready to cut over reads and writes to DynamoDB, you can remove XML from Aurora tables entirely. Solution overview The solution mirrors XML data stored in an Aurora PostgreSQL table to DynamoDB documents in an event-driven and durable way by using the Aurora integration with Lambda. Because of this integration, Lambda functions can be called directly from within an Aurora database instance by using stored procedures or user-defined functions. The following diagram details the solution architecture and event flows. 
The solution deploys the following resources and configurations: Amazon Virtual Private Cloud (Amazon VPC) with two public and private subnets across two AWS Availability Zones An Aurora PostgreSQL cluster in the private subnets, encrypted by an AWS KMS managed customer master key (CMK), and bootstrapped with a orders table with sample XML A pgAdmin Amazon Elastic Compute Cloud (Amazon EC2) instance deployed in the public subnet to access the Aurora cluster A DynamoDB table with on-demand capacity mode A Lambda function to transform XML payloads to DynamoDB documents and translate INSERT, UPDATE, and DELETE operations from Aurora PostgreSQL to DynamoDB An Amazon Simple Queue Service (Amazon SQS) queue serving as a dead-letter queue for the Lambda function A secret in AWS Secrets Manager to securely store Aurora admin account credentials AWS Identity and Access Management (IAM) roles granting required permissions to the Aurora cluster, Lambda function and pgAdmin EC2 instance The solution registers the Lambda function with the Aurora cluster to enable event-driven offloading of data from the postgres.orders table to DynamoDB, as numbered in the preceding diagram: When an INSERT, UPDATE, or DELETE statement is run on the Aurora orders table, the PostgreSQL trigger function invokes the Lambda function asynchronously for each row, after it’s committed. Every function invocation receives the operation code (TG_OP), and—as applicable—the new row (NEW) and the old row (OLD) as payload. The Lambda function parses the payload, converts XML to JSON, and performs the DynamoDB PutItem action in case of INSERT or UPDATE and the DeleteItem action in case of DELETE. If an INSERT, UPDATE or DELETE event fails all processing attempts or expires without being processed, it’s stored in the SQS dead-letter queue for further processing. The source postgres.orders table stores generated order data combining XML with relational attributes (see the following example of a table row with id = 1). You can choose which columns or XML attributes get offloaded to DynamoDB by modifying the Lambda function code. In this solution, the whole table row, including XML, gets offloaded to simplify querying and enforce data integrity (see the following example of a corresponding DynamoDB item with id = 1). Prerequisites Before deploying this solution, make sure that you have access to an AWS account with permissions to deploy the AWS services used in this post through AWS CloudFormation. Costs are associated with using these resources. See AWS Pricing for details. To minimize costs, I demonstrate how to clean up the AWS resources at the end of this post. Deploy the solution To deploy the solution with CloudFormation, complete the following steps: Choose Launch Stack. By default, the solution deploys to the AWS Region, us-east-2, but you can change this Region. Make sure you deploy to a Region where Aurora PostgreSQL is available. For AuroraAdminPassword, enter an admin account password for your Aurora cluster, keeping the defaults for other parameters. Acknowledge that CloudFormation might create AWS Identity and Access Management (IAM) resources. Choose Create stack. The deployment takes around 20 minutes. When the deployment has completed, note the provisioned stack’s outputs on the Outputs The outputs are as follows: LambdaConsoleLink and DynamoDBTableConsoleLink contain AWS Management Console links to the provisioned Lambda function and DynamoDB table, respectively. 
You can follow these links to explore the function’s code and review the DynamoDB table items. EC2InstanceConnectURI contains a deep link to connect to the pgAdmin EC2 instance using SSH via EC2 Instance Connect. The EC2 instance has PostgreSQL tooling installed; you can log in and use psql to run queries from the command line. AuroraPrivateEndpointAddress and AuroraPrivateEndpointPort contain the writer endpoint address and port for the Aurora cluster. This is a private endpoint only accessible from the pgAdmin EC2 instance. pgAdminURL is the internet-facing link to access the pgAdmin instance. Test the solution To test the solution, complete the following steps: Open the DynamoDB table by using the DynamoDBTableConsoleLink link from the stack outputs. Some data is already in the DynamoDB table because we ran INSERT operations on the Aurora database instance as part of bootstrapping. Open a new browser tab and navigate to the pgAdminURL link to access the pgAdmin instance. The Aurora database instance should already be registered. To connect to the Aurora database instance, expand the Servers tree and enter the AuroraAdminPassword you used to create the stack. Choose the postgres database and on the Tools menu, and then choose Query Tool to start a SQL session. Run the following INSERT, UPDATE, and DELETE statements one by one, and return to the DynamoDB browser tab to observe how changes in the Aurora postgres.orders table are reflected in the DynamoDB table. -- UPDATE example UPDATE orders SET order_status = 'pending' WHERE id 10; -- INSERT example INSERT INTO orders (order_status, order_data) VALUES ('malformed_order', ' error retrieving kindle id '); The resulting set of items in the DynamoDB table reflects the changes in the postgres.orders table. You can further explore the two triggers (sync_insert_update_delete_to_dynamodb and sync_truncate_to_dynamodb) and the trigger function sync_to_dynamodb() that makes calls to the Lambda function. In the pgAdmin browser tab, on the Tools menu, choose Search Objects. Search for sync. Choose (double-click) a search result to reveal it in the pgAdmin object hierarchy. To review the underlying statements, choose an object (right-click) and choose CREATE Script. Security of the solution The solution incorporates the following AWS security best practices: Encryption at rest – The Aurora cluster is encrypted by using an AWS KMS managed customer master key (CMK). Security – AWS Secrets Manager is used to store and manage Aurora admin account credentials. Identity and access management – The least privilege principle is followed when creating IAM policies. Network isolation – For additional network access control, the Aurora cluster is deployed to two private subnets with a security group permitting traffic only from the pgAdmin EC2 instance. To further harden this solution, you can introduce VPC endpoints to ensure private connectivity between the Lambda function, Amazon SQS, and DynamoDB. Reliability of the solution Aurora is designed to be reliable, durable, and fault tolerant. The Aurora cluster in this solution is deployed across two Availability Zones, with the primary instance in Availability Zone 1 and a replica in Availability Zone 2. In case of a failure event, the replica is promoted to the primary, the cluster DNS endpoint continues to serve connection requests, and the calls to the Lambda function continue in Availability Zone 2 (refer to the solution architecture earlier in this post). 
Aurora asynchronous calls to Lambda retry on errors, and when a function returns an error after running, Lambda by default retries two more times by using exponential backoff. With the maximum retry attempts parameter, you can configure the maximum number of retries between 0 and 2. Moreover, if a Lambda function returns an error before running (for example, due to lack of available concurrency), Lambda by default keeps retrying for up to 6 hours. With the maximum event age parameter, you can configure this duration between 60 seconds and 6 hours. When the maximum retry attempts or the maximum event age is reached, an event is discarded and persisted in the SQS dead-letter queue for reprocessing. It’s important to ensure that the code of the Lambda function is idempotent. For example, you can use optimistic locking with version number in DynamoDB by ensuring the OLD value matches the document stored in DynamoDB and rejecting the modification otherwise. Reprocessing of the SQS dead-letter queue is beyond the scope of this solution, and its implementation varies between use cases. It’s important to ensure that the reprocessing logic performs timestamp or version checks to prevent a newer item in DynamoDB from being overwritten by an older item from the SQS dead-letter queue. This solution preserves the atomicity of a SQL transaction as a single, all-or-nothing operation. Lambda calls are deferred until a SQL transaction has been successfully committed by using INITIALLY DEFERRED PostgreSQL triggers. Performance efficiency of the solution Aurora integration with Lambda can introduce performance overhead. The amount of overhead depends on the complexity of the PostgreSQL trigger function and the Lambda function itself, and I recommend establishing a performance baseline by benchmarking your workload with Lambda integration disabled. Upon reenabling the Lambda integration, use Amazon CloudWatch and PostgreSQL Statistics Collector to analyze the following: Aurora CPU and memory metrics, and resize the Aurora cluster accordingly Lambda concurrency metrics, requesting a quota increase if you require more than 1,000 concurrent requests Lambda duration and success rate metrics, allocating more memory if necessary DynamoDB metrics to ensure no throttling is taking place on the DynamoDB side PostgreSQL sustained and peak throughput in rows or transactions per second If your Aurora workload is bursty, consider Lambda provisioned concurrency to avoid throttling To illustrate the performance impact of enabling Lambda integration, I provisioned two identical environments in us-east-2 with the following parameters: AuroraDBInstanceClass – db.r5.xlarge pgAdminEC2InstanceType – m5.xlarge AuroraEngineVersion – 12.4 Both environments ran a simulation of a write-heavy workload with 100 INSERT, 20 SELECT, 200 UPDATE, and 20 DELETE threads running queries in a tight loop on the Aurora postgres.orders table. One of the environments had Lambda integration disabled. After 24 hours of stress testing, I collected the metrics using CloudWatch metrics, PostgreSQL Statistics Collector, and Amazon RDS Performance Insights. From an Aurora throughput perspective, enabling Lambda integration on the postgres.orders table reduces the peak read and write throughput to 69% of the baseline measurement (see rows 1 and 2 in the following table). 
#   Throughput measurement                     INSERT/sec   UPDATE/sec   DELETE/sec   SELECT/sec   % of baseline throughput
1   db.r5.xlarge without Lambda integration    772          1,472        159          10,084       100% (baseline)
2   db.r5.xlarge with Lambda integration       576          887          99           7,032        69%
3   db.r5.2xlarge with Lambda integration      729          1,443        152          10,513       103%
4   db.r6g.xlarge with Lambda integration      641          1,148        128          8,203        81%

To fully compensate for the reduction in throughput, one option is to double the vCPU count and memory size and change to the higher db.r5.2xlarge Aurora instance class at an increase in on-demand cost (row 3 in the preceding table). Alternatively, you can choose to retain the vCPU count and memory size, and move to the AWS Graviton2 processor-based db.r6g.xlarge Aurora instance class. Because of Graviton's better price/performance for Aurora, the peak read and write throughput is at 81% of the baseline measurement (row 4 in the preceding table), at a 10% reduction in on-demand cost in us-east-2.

As shown in the following graph, the DynamoDB table consumed between 2,630 and 2,855 write capacity units, and Lambda concurrency fluctuated between 259 and 292. No throttling was detected. You can reproduce these results by running a load generator script located in /tmp/perf.py on the pgAdmin EC2 instance:

# Lambda integration on
/tmp/perf.py 100 20 200 20 true

# Lambda integration off
/tmp/perf.py 100 20 200 20 false

Additional considerations
This solution doesn't cover the initial population of DynamoDB with XML data from Aurora. To achieve this, you can use AWS Database Migration Service (AWS DMS) or CREATE TABLE AS. Be aware of certain service limits before using this solution. The Lambda payload limit is 256 KB for asynchronous invocation, and the DynamoDB maximum item size limit is 400 KB. If your Aurora table stores more than 256 KB of XML data per row, an alternative approach is to use Amazon DocumentDB (with MongoDB compatibility), which can store up to 16 MB per document, or offload XML to Amazon Simple Storage Service (Amazon S3).

Clean up
To avoid incurring future charges, delete the CloudFormation stack. In the CloudFormation console, change the Region if necessary, choose the stack, and then choose Delete. It can take up to 20 minutes for the clean up to complete.

Summary
In this post, I proposed a modernization approach for legacy applications that make extensive use of XML in a relational database. Heavy use of nonrelational objects in a relational database can lead to scalability issues, orphaned records, and data quality challenges. By introducing DynamoDB alongside Aurora via native Lambda integration, you can gradually rearchitect legacy applications to query DynamoDB following the Command-Query Responsibility Segregation pattern. When your applications are ready to cut over reads and writes to DynamoDB, you can remove XML from Aurora tables entirely. You can extend this approach to offload JSON, YAML, and other nonrelational object types. As next steps, I recommend reviewing the Lambda function code and exploring the multitude of ways Lambda can be invoked from Aurora, such as synchronously; before, after, and instead of a row being committed; per SQL statement; or per row.

About the author
Igor is an AWS enterprise solutions architect, and he works closely with Australia's largest financial services organizations. Prior to AWS, Igor held solution architecture and engineering roles with tier-1 consultancies and software vendors. Igor is passionate about all things data and modern software engineering.
Outside of work, he enjoys writing and performing music, a good audiobook, or a jog, often combining the latter two. https://aws.amazon.com/blogs/database/options-for-legacy-application-modernization-with-amazon-aurora-and-amazon-dynamodb/
globalmediacampaign ¡ 3 years
Photo
Adani’s wealth rose by $625 million today to $66.5 billion, as per data available on the Bloomberg Billionaires’ Index. https://economictimes.indiatimes.com/markets/stocks/news/gautam-adani-beats-chinas-zong-shanshan-to-become-asias-second-richest-man/articleshow/82802400.cms
globalmediacampaign ¡ 3 years
Photo
CMIE data shows that the consumer sentiment index dipped further by 1.5% for the fifth consecutive week with the cumulative index down by 9.1% since the last week of March. In April 2021, the index of consumer sentiment was nearly 49% lower than it was in 2019-20. In April 2020, it was nearly 57% lower than the 2019-20 average. https://economictimes.indiatimes.com/news/economy/indicators/unemployment-rises-to-14-5-in-may-highest-since-last-years-lockdown/articleshow/82796137.cms
globalmediacampaign ¡ 3 years
Text
Integrate Amazon Managed Blockchain identities with Amazon Cognito
When you authenticate with a web or mobile application, you typically do so with a username and password, and you're authenticated against a user database such as Amazon Cognito. You're expected to secure your password and rotate it periodically or when it has been compromised. When you're building a user-facing application that runs on a Hyperledger Fabric blockchain network, there are no usernames and passwords native to the blockchain. Instead, user credentials consist of a private key and one or more public certificates. The private key is used to validate ownership of your account (similar to a password), whereas the public certificate contains publicly consumable information such as your name, role, and organization.

Every read or write transaction against a blockchain network requires the transaction to be signed with the private key to prove user authenticity. This is similar to requiring you to enter your password for every action you perform on a web application, which would be a very poor user experience. In this post, we look at how we can continue providing a familiar username and password authentication experience to users, while securely managing private keys with AWS Secrets Manager and using AWS Lambda to sign blockchain transactions on the user's behalf. This post shows a step-by-step approach to implement this, but you can automate many of these steps to provide a simple mechanism for administering users across Amazon Cognito and the blockchain.

Certificate Authorities, private keys, and public certificates

A blockchain network consists of multiple members, each representing a different organization participating in the network. Each member manages their own users' credentials through a Certificate Authority (CA), which serves two primary purposes:

Registration – The first step is to create accounts for new users, which get created within the CA.
Enrollment – The second step is to generate cryptographic material (private keys and public certificates) for the user.

Private keys should be managed with the same level of security as a password, whereas public certificates are designed to be shared. A user's private key is used to sign a blockchain transaction before it's submitted to the blockchain network. When a user has signed a transaction, their identity and associated attributes (such as name, role, and organization) are available to the network through their public certificate. This enables role-based access control within a smart contract, such as ensuring only administrators can perform a particular function. You can also use this to include the identity of the user in the smart contract data, or to assign them ownership of a digital asset.

Blockchain credentials as part of web application authentication

This post builds on the blockchain application deployed in the following GitHub repo. The application lets users donate to their favorite NGOs, while having transparency into how their donations are being spent. Application data and functionality are contained within a smart contract deployed on the blockchain. We continue to build out this application by creating users and defining their roles within our CA. We also upgrade our smart contract to restrict access based on the user's role, using Hyperledger Fabric's native attribute-based access control.

To build this, we start by registering and enrolling two users within our CA with the usernames ngoDonor and ngoManager. Next, we create an Amazon Cognito user pool with two users: bobdonor and alicemanager. Each user is tied to their respective CA user (ngoDonor or ngoManager) by setting Amazon Cognito custom attributes on their user profile.
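The walkthrough provisions the user pool with its own scripts from the GitHub repo; purely as an illustration of how such a custom attribute can be defined, a boto3 sketch with hypothetical pool and client names might look like the following.

import boto3

cognito = boto3.client("cognito-idp")

pool = cognito.create_user_pool(
    PoolName="ngo-blockchain-users",  # hypothetical name
    Schema=[
        {
            # Surfaced on user profiles as custom:fabricUsername
            "Name": "fabricUsername",
            "AttributeDataType": "String",
            "Mutable": True,
        }
    ],
)

client = cognito.create_user_pool_client(
    UserPoolId=pool["UserPool"]["Id"],
    ClientName="ngo-app-client",  # hypothetical name
    # Allow the username/password flow used later when testing the solution.
    ExplicitAuthFlows=["ALLOW_USER_PASSWORD_AUTH", "ALLOW_REFRESH_TOKEN_AUTH"],
)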
We then deploy an Amazon API Gateway and secure access to it by requiring authentication against the user pool. For more details on how API Gateway uses Amazon Cognito user pools for authorization, see Control access to a REST API using Amazon Cognito user pools as authorizer.

Solution overview

The following diagram illustrates the solution architecture. The blockchain user is registered (created) in the Certificate Authority, and the application takes custody of the user's private key and public certificate in AWS Secrets Manager. The user never needs to know or supply their private key; instead a Lambda function signs their blockchain transactions for them after they have successfully authenticated. A corresponding user account is created within an Amazon Cognito user pool, with a custom attribute, fabricUsername, that identifies this user within the Certificate Authority, and also identifies their credentials within Secrets Manager.

A user attempting to access an API Gateway endpoint is challenged to authenticate via username and password against the Amazon Cognito user pool. When a user successfully authenticates, Amazon Cognito returns an identity token, which is a JSON Web Token (JWT). A JWT is a standards-based method for transferring credentials between two parties in a trusted manner. The client application includes this JWT in requests sent to the API Gateway, which authorizes the user to invoke the API route. API Gateway retrieves the fabricUsername custom attribute from the JWT, and sends this to the Lambda function that runs the blockchain transaction. The Lambda function retrieves the blockchain user's private key from Secrets Manager and retrieves the connection profile for connecting to the Amazon Managed Blockchain network from AWS Systems Manager (Parameter Store). We use AWS Identity and Access Management (IAM) policies to restrict access to Secrets Manager and Systems Manager to only the Lambda function.
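The actual signing function ships with the post's GitHub repo; as a minimal sketch only, the credential lookup described above could look like this in Python, where the secret and parameter names are hypothetical.

import json
import boto3

secrets = boto3.client("secretsmanager")
ssm = boto3.client("ssm")

def load_fabric_credentials(fabric_username):
    # Private key and signing certificate stored per CA user (hypothetical secret name).
    secret = secrets.get_secret_value(SecretId=f"fabric/users/{fabric_username}")
    credentials = json.loads(secret["SecretString"])

    # Connection profile for the Managed Blockchain network (hypothetical parameter name).
    parameter = ssm.get_parameter(Name="/fabric/connection-profile")
    connection_profile = parameter["Parameter"]["Value"]

    return credentials, connection_profile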
Up to this point, we have a solution for issuing blockchain transactions on behalf of a user that has authenticated using a username and password. The last step is to enforce attribute-based access within our Fabric chaincode. We upgrade our chaincode to add new methods that add restrictions as to who can run them. The following is a snippet of a Node.js function that retrieves information about the invoking user:

function getClientIdentity(stub) {
    const ClientIdentity = shim.ClientIdentity;
    let cid = new ClientIdentity(stub);
    let result = {};
    result['getID'] = cid.getID();
    result['getMSPID'] = cid.getMSPID();
    result['getX509Certificate'] = cid.getX509Certificate();
    result['role'] = cid.getAttributeValue("role"); // e.g. acme_operations
    result['affiliation'] = cid.getAttributeValue("hf.Affiliation"); // member name, e.g. ACME
    result['enrollmentID'] = cid.getAttributeValue("hf.EnrollmentID"); // the username, e.g. ngoDonor
    result['fullname'] = cid.getAttributeValue("fullname"); // e.g. Bob B Donor
    return result;
}

The following diagram shows the sequence of events that transpire to authenticate a user and invoke blockchain transactions on their behalf.

Prerequisites

Prior to completing the step-by-step walkthrough, you must complete Part 1, Part 2, and Part 6 in the GitHub repo. These parts include creating the Managed Blockchain network, deploying the NGO chaincode, and deploying an API Gateway with an associated Lambda function to query the blockchain.

Walkthrough overview

In our solution, we create two blockchain users, ngoDonor and ngoManager, each with a corresponding Amazon Cognito user we use to authenticate with. Next, we define API routes in API Gateway that require authorization via Amazon Cognito. One of these routes allows access only to ngoManager users. We then test our setup using the AWS Command Line Interface (AWS CLI) and curl. For more information about each step in this process, see the GitHub repo. Each step listed in this post has a matching step in the repo in which you run the script or command that completes that step. As you read each step, you can refer to the corresponding step in the repo to gain a more in-depth understanding.

The walkthrough includes the following steps:

1. Create Fabric users in the Certificate Authority.
2. Deploy an Amazon Cognito user pool.
3. Deploy API Gateway routes that require Amazon Cognito authentication.
4. Create users in the Amazon Cognito user pool.
5. Upgrade the chaincode.

We conclude by testing the solution.

Create Fabric users in the Certificate Authority

On our Certificate Authority, we create two users, one named ngoDonor that represents a donor, and one named ngoManager that represents a manager. We define various attributes on the users' certificates, including the users' fullname and role. These attributes are available within the smart contract and can be used to enforce attribute-based access. For more information about creating Fabric users, see Fabric CA client. These credentials are stored in Secrets Manager, where they're retrieved when processing an API request.

Deploy an Amazon Cognito user pool

Next, we create an Amazon Cognito user pool. We enable user password authentication to allow users to authenticate just like they do with a typical web application. We also define a custom attribute called fabricUsername that we populate with the user's identity within the Certificate Authority.

Deploy API Gateway routes that require Amazon Cognito authentication

In this step, we deploy an API Gateway with the following routes:

/donors – Returns information about all donors. This route is available to all authenticated users.
/donorsmanager – Same as the preceding route, but restricted to authenticated users with the necessary attributes.
/user – Returns information about the calling user. Useful for debugging.

API Gateway fulfills these routes using the Lambda function that was deployed in Part 6. Each route requires authentication with the Amazon Cognito user pool we created in the previous step. When the user is authenticated, API Gateway parses the user's fabricUsername from the provided identity token. This value is sent to the Lambda function with each invocation so it can retrieve the corresponding user's CA credentials stored in Secrets Manager.

Create users in the Amazon Cognito user pool

Next, we create two users within the Amazon Cognito user pool. User bobdonor represents a donor, and we set a corresponding fabricUsername of ngoDonor. User alicemanager represents a manager and has a fabricUsername of ngoManager.
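The repo's scripts handle this step; a rough boto3 equivalent, with a placeholder user pool ID and temporary passwords, might look like the following.

import boto3

cognito = boto3.client("cognito-idp")

users = [("bobdonor", "ngoDonor"), ("alicemanager", "ngoManager")]

for username, fabric_username in users:
    cognito.admin_create_user(
        UserPoolId="us-east-1_EXAMPLE",    # placeholder pool ID
        Username=username,
        TemporaryPassword="ChangeMe123!",  # placeholder; users reset it on first sign-in
        MessageAction="SUPPRESS",          # skip the invitation message in this sketch
        UserAttributes=[
            # Ties the Cognito account to its Certificate Authority identity.
            {"Name": "custom:fabricUsername", "Value": fabric_username},
        ],
    )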
Upgrade the chaincode

The final step before we test is to upgrade the NGO chaincode with the new methods needed to support our application. We deploy this on our peer node and upgrade the chaincode on the channel. In a high availability deployment, blockchain members have multiple peer nodes, and you should first install the chaincode on all peer nodes before upgrading the chaincode version on the channel.

Test the solution

You've now provisioned all the components and are ready to test. First, we verify that an unauthenticated user can't access the APIs. We issue the following command:

curl -s -X GET "https://<API Gateway endpoint>"

We get the following expected response:

{"message":"Unauthorized"}

Next, we access that same URL, but first we authenticate as the bobdonor user. We do this using the aws cognito-idp initiate-auth AWS CLI command. The command returns a JWT that contains various information about the authenticated user. The following code is an example of a parsed JWT; note the fabricUsername attribute that identifies this user in the Certificate Authority.

{
  "sub": "e6add7b3-xxxxx",
  "aud": "3q0bnxxxxx",
  "event_id": "d41d634a-xxxxx",
  "token_use": "id",
  "custom:fabricUsername": "ngoDonor",
  "auth_time": 1597273756,
  "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxx",
  "cognito:username": "bobdonor",
  "exp": 1597277356,
  "iat": 1597273757
}

We call the API again, but this time we send an Authorization header with the JWT token:

curl -H "Authorization: <identity token>" -s -X GET "https://<API Gateway endpoint>/donors"

This returns a successful response containing a list of all the donors. We try using this same JWT to access the API route that is restricted to only users with the ngoManager role:

curl -H "Authorization: <identity token>" -s -X GET "https://<API Gateway endpoint>/donorsmanager"

As expected, this fails because our user is not a manager, and the returned response tells us the following:

User must have ngo_manager role to call this function.

To complete our testing, we authenticate as user alicemanager, who maps to our ngoManager user in our CA. We use the returned JWT to query the restricted API route, and we can now view the list of donors:

{"Key":"donormelissa","Record":{"donorUserName":"melissa","email":"[email protected]","docType":"donor"}}

View user attributes within the smart contract

It can be helpful during debugging to know what attributes are available within the smart contract. When we upgraded our chaincode, we included a method that returns the attributes associated with the caller. We can see these attributes by calling the following route:

curl -H "Authorization: $ID_TOKEN" -s -X GET "https://<API Gateway endpoint>/user"

This returns the following result:

{
  "getID":"x509::/C=US/ST=North Carolina/O=Hyperledger/OU=user+OU=member/CN=ngoManager::/C=US/ST=Washington/L=Seattle/O=Amazon Web Services, Inc./OU=Amazon Managed Blockchain/CN=member Amazon Managed Blockchain Root CA",
  "getMSPID":"m-abc123...",
  "getX509Certificate":{...},
  "role":"ngo_manager",
  "affiliation":"member",
  "enrollmentID":"ngoManager",
  "fullname":"'Alice Manager'"
}

Summary

In this post, I walked you through how to authenticate blockchain users with API Gateway and Amazon Cognito, while securing their blockchain credentials in Secrets Manager. Developers can use this approach to easily integrate blockchain capabilities into their web and mobile applications. Although this post explored implementing this integration step by step, you can automate user administration behind a user API. For example, with a single request, the API can atomically create a user within the CA, create the user within the Amazon Cognito pool, and store their credentials in Secrets Manager. You can also extend this solution beyond Amazon Cognito user pools.
For example, you can implement a similar solution that uses other identity providers such as Facebook or Google for authentication. For more information, see Amazon Cognito Identity Pools (Federated Identities).

About the author

Emile Baizel is a Senior Blockchain Architect on the AWS Professional Services team. He has been working with blockchain technology since 2018 and is excited by its potential across a wide range of industries and use cases. In his free time, he enjoys trail running and spending tech-free time with his wife and two young children. https://aws.amazon.com/blogs/database/integrate-amazon-managed-blockchain-identities-with-amazon-cognito/
globalmediacampaign ¡ 3 years
Text
How to Create a New User Account in MySQL and Grant Permissions on a Database
This article provides a complete overview of how to create a new user account in MySQL and grant different types of privileges on a MySQL database. Learn the basics of user account management and pick up some useful hints.

Introduction: First, let's figure out why we need users and privileges. When you install MySQL Server on your system […]

The post How to Create a New User Account in MySQL and Grant Permissions on a Database appeared first on Devart Blog. https://blog.devart.com/how-to-create-a-new-user-and-grant-privileges.html