
Oracle Real Application Clusters (RAC) Installation 11gR2

 


Oracle 11g R2 RAC on Oracle Enterprise Linux 5

       Real Application Clusters (RAC) is the Oracle trade name for its database server cluster product. It provides load balancing, scalability, elasticity, and high availability by keeping the Oracle database server available and running across a set of server nodes that access a common database residing on shared storage.

      Oracle Real Application Clusters (RAC) is Oracle's clustered version of its database server product, introduced and popularized with the release of 9i in 2001. The technology existed before 9i under a different name, Oracle Parallel Server, but Parallel Server was nowhere near as fast or advanced as RAC in its architecture, application, and deployment.

This document shows, step by step, how to install and set up a 2-node 11gR2 RAC cluster. The setup uses an IP-based iSCSI Openfiler SAN
as the shared storage subsystem. It does not have IPMI or Grid Naming Service (GNS) configured; the SCAN is resolved through DNS.

Hardware Used in setting up 2-node 11g R2 RAC using iSCSI SAN (Openfiler):


  • Total machines: 4 (2 for RAC nodes + 1 for SAN + 1 for DNS)
  • Network switches: 3 (for the public, private, and shared-storage networks)
  • Extra network adapters: 5 (2 extra for each RAC node and 1 for the storage server)
  • Network cables: 9 (for the RAC nodes (3 per node), the shared storage server and the DNS server)


Software Used for the 2-node RAC Setup using NAS (Openfiler):


  • SAN Storage Solution: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686)
  • Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE)
  • Clusterware: Oracle 11g R2 Grid Infrastructure (11.2.0.3)
  • Oracle RAC: Oracle RDBMS 11g R2 (11.2.0.3)


2-Node RAC Setup

Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE):

Server: All the RAC Nodes + DNS server

Grid Infrastructure Software (Clusterware + ASM 11.2.0.3):

Server: All the RAC Nodes
ORACLE_BASE: /u/oracle/
ORACLE_HOME:  /u/oracle/server/grid203
Owner: grid (Primary Group: oinstall, Secondary Group: dba)
Permissions: 755
OCR/Voting Disk Storage Type: ASM
Oracle Inventory Location: /u01/app/oraInventory
           
Oracle Database Software (RAC 11.2.0.3):

Server: All the RAC Nodes
ORACLE_BASE:  /u/oracle/
ORACLE_HOME: /u/oracle/server/database203
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 755
Oracle Inventory Location: /u01/app/oraInventory

Database Name: labdb
Listener: LAB_LISTENER (TCP:1525)
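The layout above can be staged with a few commands. A minimal sketch, using a scratch root (BASE, my choice) so it runs unprivileged; on the real nodes the commented groupadd/useradd/chown steps run as root and BASE is simply /:

```shell
#!/bin/sh
# Sketch of the Grid/DB home layout described above. BASE is a scratch
# root for illustration only; the user/group steps below need root.
BASE=/tmp/rac-layout-demo

# groupadd oinstall && groupadd dba
# useradd -g oinstall -G dba grid
# useradd -g oinstall -G dba oracle

mkdir -p "$BASE/u/oracle/server/grid203" \
         "$BASE/u/oracle/server/database203" \
         "$BASE/u01/app/oraInventory"
chmod 755 "$BASE/u/oracle" \
          "$BASE/u/oracle/server/grid203" \
          "$BASE/u/oracle/server/database203"
# chown -R grid:oinstall   "$BASE/u/oracle/server/grid203"
# chown -R oracle:oinstall "$BASE/u/oracle/server/database203"
```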

Openfiler 2.3:
Server: single dedicated server acting as SAN
OS: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686).


Server Hardware Requirements:

  • Each node in the cluster must meet the requirements below.
  • At least 1024 x 768 display resolution, so that OUI displays correctly.
  • 1 GB of space in the /tmp directory.
  • 5.5 GB of space for the Oracle Grid Infrastructure home.
  • At least 2.5 GB of RAM and an equivalent amount of swap space (for a 32-bit installation, as in this case).
  • All RAC nodes must share the same instruction set architecture. For a test setup, however, a mix of 32-bit Intel and 32-bit AMD servers with different memory sizes and CPU speeds will still work.
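The RAM and /tmp minimums above can be spot-checked from the shell. A rough sketch (thresholds in MB taken from the list; the helper names are mine, and Oracle's cluvfy utility remains the authoritative check):

```shell
#!/bin/sh
# Quick pre-flight check against the minimums listed above (values in MB).
ram_ok() { [ "$1" -ge 2560 ]; }   # at least 2.5 GB RAM
tmp_ok() { [ "$1" -ge 1024 ]; }   # at least 1 GB free in /tmp

mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
ram_mb=$(( ${mem_kb:-0} / 1024 ))
tmp_mb=$(df -Pm /tmp | awk 'NR==2 {print $4}')

ram_ok "$ram_mb" && echo "RAM:  ${ram_mb} MB ok"      || echo "RAM:  ${ram_mb} MB too small"
tmp_ok "$tmp_mb" && echo "/tmp: ${tmp_mb} MB free ok" || echo "/tmp: ${tmp_mb} MB free too small"
```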


Minimum Required RPMs for OEL 5.5 (All the RAC Nodes):

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
numactl-devel-0.9.8.i386
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11
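To see which of these are missing on a node, compare the list against `rpm -qa --qf '%{NAME}\n'`. A sketch using stub files so it runs anywhere; `missing_rpms` is my helper name:

```shell
#!/bin/sh
# Print required packages that do not appear in the installed list.
missing_rpms() {
    # $1 = file of required package names, $2 = file of installed names
    grep -Fvx -f "$2" "$1"
}

# Stub data for illustration; on a real node generate the installed list with:
#   rpm -qa --qf '%{NAME}\n' > /tmp/installed.txt
cat > /tmp/required.txt  <<'EOF'
binutils
libaio
sysstat
EOF
cat > /tmp/installed.txt <<'EOF'
binutils
libaio
EOF

missing_rpms /tmp/required.txt /tmp/installed.txt    # prints: sysstat
```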

Set the kernel parameters below to the recommended values in /etc/sysctl.conf:
# Kernel sysctl configuration file for Oracle Enterprise Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 8192

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 1073741824
# For 11g, recommended value for file-max is 6815744
fs.file-max = 6815744
# For 10g, uncomment 'fs.file-max 327679', comment other entries for this parameter and re-run sysctl -p
# fs.file-max:327679
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
net.core.rmem_default = 262144
# For 11g, recommended value for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p
# net.core.rmem_max=2097152
net.core.wmem_default = 262144
# For 11g, recommended value for wmem_max is 1048576
net.core.wmem_max = 1048576
# For 10g, uncomment 'net.core.wmem_max 262144', comment other entries for this parameter and re-run sysctl -p
# net.core.wmem_max:262144
fs.aio-max-nr = 3145728
# For 11g, recommended value for ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
# For 10g, uncomment 'net.ipv4.ip_local_port_range 1024 65000', comment other entries for this parameter and re-run sysctl -p
# net.ipv4.ip_local_port_range:1024 65000
# Added min_free_kbytes 50MB to
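After editing, apply the settings with `sysctl -p`. The sketch below pulls a single value out of a sysctl.conf-style file so it can be compared against the live kernel (`sysctl -n <key>`); `get_param` is my helper name, demonstrated against a sample file:

```shell
#!/bin/sh
# Extract a parameter's value from a sysctl.conf-style file.
get_param() {
    # $1 = config file, $2 = parameter name
    awk -F= -v k="$2" '$1 ~ "^"k"[ \t]*$" { sub(/^[ \t]+/, "", $2); print $2 }' "$1"
}

cat > /tmp/sysctl_demo.conf <<'EOF'
fs.file-max = 6815744
kernel.sem = 250 32000 100 142
EOF

get_param /tmp/sysctl_demo.conf fs.file-max    # prints: 6815744
# Compare with the running kernel:  sysctl -n fs.file-max
```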


Grid Infrastructure Installation Steps

Database Installation Steps

SQL Server 2008 to Oracle 11gR2 Migration

Six stages to complete the migration (SQL Server to Oracle):


Stage 1: Connect

1. Download Oracle SQL Developer.

2. Create a new connection: connect to the destination Oracle database with the SYSTEM credentials using SQL Developer.





3. Set Up a Migration Repository

Enter the following code in the SQL Worksheet to create a user for the migration repository.

CREATE USER MWREP
IDENTIFIED BY mwrep
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;

GRANT CONNECT, RESOURCE, CREATE SESSION, CREATE VIEW TO MWREP;
GRANT CREATE MATERIALIZED VIEW TO MWREP;
ALTER USER MWREP QUOTA UNLIMITED ON USERS;



4. Create the Migration Repository

Before you create the repository, you need to create a connection as the MWREP user. In the Connections tab, right-click Connections and select New Connection. A New / Select Database Connection window appears.




5. Configure the Third-Party Database Connection

You now need to configure a connection for the database you want to migrate. There are two steps to this process:
1. Download the JDBC driver (jtds-1.3.1-dist.zip) and save it (here: D:\Drivers\jtds-1.3.1-dist).
2. Configure the third-party database connection and connect to the third-party database.






        
Stage 2: Capture



        
Stage 3: Convert



        
Stage 4: Translate


          


Stage 5: Move Data




Create a new connection for the migrated schema: the user name is the SQL Server database name, and the password is the same as the user ID.



Basic Configuration for Linux Server and Open Filer Server

 Linux Server:

--------------------
1. Create a new virtual machine for Red Hat Linux or Oracle Enterprise Linux with the configuration below.
a. RAM: 1.6 GB minimum
b. Hard disk: 60 GB
c. Use custom partitioning when installing Linux:
* /boot 100 MB
* / 4000 MB
* /usr 8000 MB
* /tmp 3000 MB
* swap 3000 MB
* /u 40000 MB

2. Install all required packages.
3. Disable the firewall and SELinux.
4. Set your time zone as required.
5. Set the root password.
6. Complete the installation.


OpenFiler SAN Server

--------------------------
1. Create a new virtual machine for the Openfiler SAN with the configuration below.
a. RAM: 512 MB
b. Hard disks: IDE: 8 GB
SCSI: 20 GB
SCSI: 20 GB (you can add more as required)

c. Use custom partitioning on the 8 GB hard disk (hda):

* /boot 100 MB
* / 1500 MB
* /usr 1500 MB
* /tmp 3000 MB
* swap 3000 MB


2. Set your time zone as required.
3. Set the root password.
4. Complete the installation.

Setup NFS Mount Point Using Open Filer NAS storage

 


NFS (Network File System) lets client servers mount a directory exported by the storage server just like any other mount point, so it acts as shared storage for every configured server on the network.


1. Log in to Openfiler (user ID: openfiler, password: password).

2. Go to the Volume Manager.



3. Create a new physical volume: select one of the attached hard disks (/dev/sda, /dev/sdb, /dev/sdc or /dev/sdd) and give the volume a name.

4. Add a volume:
 a. Select the physical volume.
 b. Choose the XFS file system.
 c. Specify the size.
 d. Click Create.




5. Enable the NFS service in the Services section.



6. Go to the Shares section.
a. Create a folder.
b. Share the folder.




7. Go to the shared folder and update access for the client servers with the options below.
(Note: set UID:GID mapping to "no root squash" so clients can change the ownership of this folder.)






8. Go to the client system and add the NFS mounts to /etc/fstab:

[root@test3 oradba]# cat /etc/fstab
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/tmp              /tmp                    ext3    defaults        1 2
LABEL=/u                /u                      ext3    defaults        1 2
LABEL=/usr              /usr                    ext3    defaults        1 2
LABEL=/boot             /boot                   ext3    defaults        1 2
/dev/hda8               /backup                 ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-hda5         swap                    swap    defaults        0 0
# device       mountpoint     fs-type     options      dump fsckorder
vsan:/mnt/svol/shared/oradba /oradba nfs rw 0 0
vsan:/mnt/backup/backup/backup /backup nfs rw 0 0

[root@test3 oradba]#
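Each fstab line has six whitespace-separated fields: device, mount point, filesystem type, options, dump flag, and fsck order. A small sketch splitting a sample NFS entry from above; `fstab_field` is my helper name, for illustration:

```shell
#!/bin/sh
# Split an fstab line into its fields (sample NFS entry from above).
fstab_field() {
    # $1 = field number (1=device .. 6=fsck order), stdin = one fstab line
    awk -v n="$1" '{ print $n }'
}

line="vsan:/mnt/svol/shared/oradba /oradba nfs rw 0 0"
echo "$line" | fstab_field 2    # prints the mount point: /oradba
echo "$line" | fstab_field 3    # prints the fs type:     nfs
```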


Create LUNs for database servers using Open-filer (virtual storage server)

Openfiler Configuration:


1. Open Firefox on the Linux box with the URL below:

  https://192.168.18.133:446 (IP of the Openfiler server : port number)
Or
Use https://openfiler.home.com:446

(DNS name and port number, if it is configured on your DNS server)

2. Log in with username: openfiler

      password: password

3. Go to the System menu.

Scroll down to Network Access Configuration.

** Add your client server names and IPs:

Name  IP  Netmask  Type
rac1  192.168.2.1  255.255.255.255  shared

Add more entries if required.


4. Go to the Volumes section.

Create new physical volumes using the disks below:

Block Device Management
Edit Disk  Type  Description  Size  Label type  Partitions
/dev/hda  IDE  VMware Virtual IDE Hard Drive  8.00 GB  msdos  6 (view)
/dev/sda  SCSI  VMware, VMware Virtual S  19.99 GB  msdos  1 (view)
/dev/sdb  SCSI  VMware, VMware Virtual S  19.99 GB  msdos  1 (view)
/dev/sdc  SCSI  VMware, VMware Virtual S  19.99 GB  msdos  0 (view)
/dev/sdd  SCSI  VMware, VMware Virtual S  19.99 GB  msdos  0 (view)
/dev/sde  SCSI  VMware, VMware Virtual S  19.99 GB  msdos  0 (view)

Edit partitions in /dev/sdc (2610 cylinders with "msdos" label):

Mode  Partition Type  Starting cylinder  Ending cylinder  Size

Create a primary partition of type "physical volume", then click Create.

5. Go to Volume Groups.

Enter the volume group name <VOL1>, select /dev/sda, and click Add Volume Group.

6. Select volume group <VOL1> and add a volume:

Volume Name: linux1
Description: SCSI

Click Create.

7. Go to iSCSI Targets.



-----> Target Configuration: give the LUN name and update it.

Target IQN Add
iqn.2006-01.com.openfiler:tsn.ff77104d727a
(LUN NAME)

Update

-----> LUN Mapping: map the selected volumes.

Map New LUN to Target: "iqn.2006-01.com.openfiler:LINUX2"

Name LUN Path R/W Mode SCSI Serial No.  SCSI Id. Transfer Mode Map LUN
LINUX-SCSI /dev/vol1/linux1 kvKX5n-GWP5-2rtZ kvKX5n-GWP5-2rtZ Map

-----> Network ACL : 

Name  Network/Host  Netmask  Access
test1.home.com  192.168.18.134  255.255.255.255  allow
test2.home.com  192.168.18.135  255.255.255.255  allow

Update

8. Services: start the iSCSI target service.


9. Configure the iSCSI disk on the Linux server.



a. Open a terminal on the Linux box.
b. Install the required package for the iSCSI service (iscsi-initiator-utils).

[root@openfiler1 ~]# service iscsi-target restart
Stopping iSCSI target service: [  OK  ]
Starting iSCSI target service: [  OK  ]

[root@linux3 ~]# chkconfig iscsid on
[root@linux3 ~]# chkconfig iscsi on

c. Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server:


[root@linux3 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san    (you can use the IP address instead of the openfiler1-san name)
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:scsi.linux3-data-1
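The discovery output pairs a `portal,tpgt` value with the target IQN, and those two values feed the login command in the next step. A sketch splitting the sample line with shell parameter expansion:

```shell
#!/bin/sh
# Split a sendtargets discovery line (sample from above) into portal and IQN.
line="192.168.2.195:3260,1 iqn.2006-01.com.openfiler:scsi.linux3-data-1"

portal_tpgt=${line%% *}      # 192.168.2.195:3260,1
portal=${portal_tpgt%%,*}    # 192.168.2.195:3260
iqn=${line#* }               # iqn.2006-01.com.openfiler:scsi.linux3-data-1

echo "portal=$portal iqn=$iqn"
# These feed straight into:  iscsiadm -m node -T "$iqn" -p "${portal%%:*}" --login
```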


d. Manually log in to the target.

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful

e. Configure Automatic Login

The next step is to make certain the client will automatically login to the target(s) listed above when the machine is booted (or the iSCSI initiator service  is started/restarted):


[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --op update -n node.startup -v automatic

f. Now the iSCSI disk is visible as /dev/sda on the Linux server:

[root@linux3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:scsi.linux3-data-1 -> ../../sda

Current iSCSI Target Name to Local SCSI Device Name Mappings:

iSCSI Target Name  SCSI Device Name
iqn.2006-01.com.openfiler:scsi.linux3-data-1  /dev/sda

g. As you can see, it is impractical to rely on the local SCSI device names like /dev/sda or /dev/sdb, given there is no way to predict the iSCSI target mappings after a reboot.

 What we need is a consistent device name we can reference like /dev/iscsi/linux3-data-1 that will always point to the appropriate iSCSI target through  reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the  actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its  configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information  or specify a device node name and multiple symlink names and instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.

The first step is to create a new rules file. This file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs  used to receive events we are interested in. It will also define a call-out SHELL script (/etc/udev/scripts/iscsidev.sh) to handle the event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on the client node linux3:


# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"

Next, create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on the linux3 node where udev  scripts can be stored:


[root@linux3 ~]# mkdir -p /etc/udev/scripts

Finally, create the UNIX shell script /etc/udev/scripts/iscsidev.sh:


#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
   exit 1
fi

# Check if this is a QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ "${check_qnap_target_name}" = "iqn.2004-04.com.qnap" ]; then
     target_name=${target_name%.*}
fi

echo "${target_name##*.}"
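The script's final `echo` is what names the symlink under /dev/iscsi: everything after the last dot of the target IQN. A one-liner confirming that with the target used in this setup:

```shell
#!/bin/sh
# Why /dev/iscsi/linux3-data-1 appears: ##*. strips up to the last dot.
target_name="iqn.2006-01.com.openfiler:scsi.linux3-data-1"
echo "${target_name##*.}"    # prints: linux3-data-1
```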

After creating the UNIX shell script, make it executable:


[root@linux3 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh

Now that udev is configured, restart the iSCSI initiator service:


[root@linux3 ~]# service iscsi stop
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon: /etc/init.d/iscsi: line 33:  5143 Killed                  /etc/init.d/iscsid stop


[root@linux3 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon: [  OK  ]
[  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful
[  OK  ]


[root@linux3 ~]# ls -l /dev/iscsi/
total 0
drwxr-xr-x 2 root root 60 Apr  7 01:57 linux3-data-1


[root@linux3 ~]# ls -l /dev/iscsi/linux3-data-1/
total 0
lrwxrwxrwx 1 root root 9 Apr  7 01:57 part -> ../../sda

Post MySQL Installation Generic Queries

 


1. Connect to the MySQL instance.

[root@xcell ~]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.7.28-log MySQL Community Server (GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>



2. Set up password-less login for root.


Create the file .my.cnf in root's home directory.



[root@xcell ~]# cat .my.cnf
[client]
password = XXXXXXXXXX
socket = /u01/mysql/mysql-5.7.28-el7-x86_64/mysql.sock
[root@xcell ~]#
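Because .my.cnf stores the password in clear text, it should be readable only by its owner. A sketch (written to /tmp here so it is safe to run anywhere; on the real host the file is $HOME/.my.cnf):

```shell
#!/bin/sh
# Create a client option file and lock it down to the owner.
CNF=/tmp/demo.my.cnf    # use "$HOME/.my.cnf" on the real host

cat > "$CNF" <<'EOF'
[client]
password = XXXXXXXXXX
socket = /u01/mysql/mysql-5.7.28-el7-x86_64/mysql.sock
EOF

chmod 600 "$CNF"
ls -l "$CNF"
```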

[root@xcell ~]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.7.28-log MySQL Community Server (GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql>


3. List the databases (schemas) that reside in the MySQL server:



mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |---- all db object info etc
| mydb |
| mysql |---- Default database store all user db info
| performance_schema |---- gather performance data
| sys |
+--------------------+
5 rows in set (0.03 sec)




4. Select a database to work in:



mysql> use mydb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select database();
+------------+
| database() |
+------------+
| mydb |
+------------+
1 row in set (0.00 sec)


mysql>


5. List the tables in the mydb database:



mysql> show tables;
+----------------+
| Tables_in_mydb |
+----------------+
| CORONA |
| DEPT |
| state_wise |
| travel_history |
+----------------+
4 rows in set (0.00 sec)


mysql>


6. See all connection details:




mysql> show processlist;
+----+---------+-----------+------+---------+------+----------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+---------+-----------+------+---------+------+----------+------------------+
| 4 | oggadm1 | localhost | mydb | Sleep | 1739 | | NULL |
| 5 | oggadm1 | localhost | NULL | Sleep | 1740 | | NULL |
| 9 | root | localhost | mydb | Query | 0 | starting | show processlist |
+----+---------+-----------+------+---------+------+----------+------------------+
3 rows in set (0.10 sec)





[root@xcell ~]# mysqladmin processlist -p

Enter password:

+----+---------+-----------+------+---------+------+----------+------------------+

| Id | User | Host | db | Command | Time | State | Info |
+----+---------+-----------+------+---------+------+----------+------------------+
| 4 | oggadm1 | localhost | mydb | Sleep | 1871 | | |
| 5 | oggadm1 | localhost | | Sleep | 1872 | | |
| 15 | root | localhost | | Query | 0 | starting | show processlist |
+----+---------+-----------+------+---------+------+----------+------------------+
[root@xcell ~]#


7. See the status of the server:





[root@xcell ~]# mysqladmin status

Uptime: 6256 Threads: 3 Questions: 134 Slow queries: 0 Opens: 157 Flush tables: 1 Open tables: 150 Queries per second avg: 0.021

8. Server version





[root@xcell ~]# mysqladmin version
mysqladmin Ver 8.42 Distrib 5.7.28, for el7 on x86_64
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Server version 5.7.28-log
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /u01/mysql/mysql-5.7.28-el7-x86_64/mysql.sock
Uptime: 1 hour 45 min 41 sec
Threads: 3 Questions: 138 Slow queries: 0 Opens: 157 Flush tables: 1 Open tables: 150 Queries per second avg: 0.021
[root@xcell ~]#





mysql> status
--------------
mysql Ver 14.14 Distrib 5.7.28, for el7 (x86_64) using EditLine wrapper
Connection id: 17
Current database:
Current user: root@localhost
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 5.7.28-log MySQL Community Server (GPL)
Protocol version: 10
Connection: Localhost via UNIX socket
Server characterset: latin1
Db characterset: latin1
Client characterset: utf8
Conn. characterset: utf8
UNIX socket: /u01/mysql/mysql-5.7.28-el7-x86_64/mysql.sock
Uptime: 1 hour 46 min 4 sec
Threads: 3 Questions: 143 Slow queries: 0 Opens: 157 Flush tables: 1 Open tables: 150 Queries per second avg: 0.022
--------------
mysql>



[root@xcell ~]# mysqladmin -u root -p ping
Enter password:
mysqld is alive
[root@xcell ~]#

9. Find connected user details:


mysql> select Host,user,account_locked,plugin from mysql.user;
+-----------+---------------+----------------+-----------------------+
| Host | user | account_locked | plugin |
+-----------+---------------+----------------+-----------------------+
| localhost | root | N | mysql_native_password |
| localhost | mysql.session | Y | mysql_native_password |
| localhost | mysql.sys | Y | mysql_native_password |
| % | deepak | N | mysql_native_password |
| localhost | ogguser1 | N | mysql_native_password |
| localhost | oggadm1 | N | mysql_native_password |
+-----------+---------------+----------------+-----------------------+
6 rows in set (0.00 sec)


mysql>



mysql> select Host,Db,User from mysql.db;
+-----------+--------------------+---------------+
| Host | Db | User |
+-----------+--------------------+---------------+
| localhost | performance_schema | mysql.session |
| localhost | sys | mysql.sys |
+-----------+--------------------+---------------+
2 rows in set (0.00 sec)




10. Show available engines.

mysql> show engines;
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| Engine | Support | Comment | Transactions | XA | Savepoints |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| InnoDB | DEFAULT | Supports transactions, row-level locking, and foreign keys | YES | YES | YES |
| MRG_MYISAM | YES | Collection of identical MyISAM tables | NO | NO | NO |
| MEMORY | YES | Hash based, stored in memory, useful for temporary tables | NO | NO | NO |
| BLACKHOLE | YES | /dev/null storage engine (anything you write to it disappears) | NO | NO | NO |
| MyISAM | YES | MyISAM storage engine | NO | NO | NO |
| CSV | YES | CSV storage engine | NO | NO | NO |
| ARCHIVE | YES | Archive storage engine | NO | NO | NO |
| PERFORMANCE_SCHEMA | YES | Performance Schema | NO | NO | NO |
| FEDERATED | NO | Federated MySQL storage engine | NULL | NULL | NULL |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
9 rows in set (0.00 sec)

mysql>


11. See current datafiles.



mysql> select * from information_schema.INNODB_SYS_DATAFILES;
+-------+---------------------------------------+
| SPACE | PATH |
+-------+---------------------------------------+
| 2 | ./mysql/plugin.ibd |
| 3 | ./mysql/servers.ibd |
| 4 | ./mysql/help_topic.ibd |
| 5 | ./mysql/help_category.ibd |
| 6 | ./mysql/help_relation.ibd |
| 7 | ./mysql/help_keyword.ibd |
| 8 | ./mysql/time_zone_name.ibd |
| 9 | ./mysql/time_zone.ibd |
| 10 | ./mysql/time_zone_transition.ibd |
| 11 | ./mysql/time_zone_transition_type.ibd |
| 12 | ./mysql/time_zone_leap_second.ibd |
| 13 | ./mysql/innodb_table_stats.ibd |
| 14 | ./mysql/innodb_index_stats.ibd |
| 15 | ./mysql/slave_relay_log_info.ibd |
| 16 | ./mysql/slave_master_info.ibd |
| 17 | ./mysql/slave_worker_info.ibd |
| 18 | ./mysql/gtid_executed.ibd |
| 19 | ./mysql/server_cost.ibd |
| 20 | ./mysql/engine_cost.ibd |
| 21 | ./sys/sys_config.ibd |
| 29 | ./mydb/CORONA.ibd |
| 296 | ./mydb/DEPT.ibd |
| 297 | ./mydb/travel_history.ibd |
| 298 | ./mydb/state_wise.ibd |
+-------+---------------------------------------+
24 rows in set (0.01 sec)


mysql>


12. See variables and global variables.




mysql> SHOW GLOBAL VARIABLES LIKE 'port';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port | 3307 |
+---------------+-------+
1 row in set (0.14 sec)


mysql> select @@PORT;
+--------+
| @@PORT |
+--------+
| 3307 |
+--------+
1 row in set (0.00 sec)


mysql>
