
Increase the file system size in VMware Workstation

Increase the file system size on a Red Hat Linux server using VMware



 

 

Select the VM guest OS and go to Edit Settings. This opens the VM properties window.

 

Go to the Add option, select Hard Disk, and click Next.

 

Choose Create a new virtual disk and click Next.

Enter the capacity in GB and select Thin Provision.

 

Thick provisioning is a type of storage pre-allocation. With thick provisioning, the complete amount of virtual disk storage capacity is pre-allocated on the physical storage when the virtual disk is created.

A thick-provisioned virtual disk consumes all the space allocated to it in the datastore right from the start, so the space is unavailable for use by other virtual machines. 

Thin provisioning, in contrast, allocates storage on demand. A thin-provisioned virtual disk consumes only the space that it needs initially and grows over time as data is written.
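A rough way to see the difference on the datastore (a sketch only; the .vmdk path is a placeholder, not part of this setup): the file's reported size is the provisioned capacity, while du shows the space actually allocated, so a thin disk shows a gap between the two until it fills up.

ls -lh /vmfs/volumes/datastore1/Test/Test_1-flat.vmdk   # provisioned size
du -h /vmfs/volumes/datastore1/Test/Test_1-flat.vmdk    # space actually consumed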

 

Go to the Location option. It shows two options:

1. Store with the virtual machine: the disk is created on the physical server's local datastore.

2. Specify a datastore: the disk is created on a datastore backed by SAN storage.

 

 

Click Next and then Finish.
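The wizard above can also be replaced by the ESXi command line if this is a vSphere host rather than Workstation (a sketch under that assumption; the datastore path and disk name are placeholders, and the new disk still has to be attached to the VM afterwards):

vmkfstools -c 70G -d thin /vmfs/volumes/datastore1/Test/Test_1.vmdk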

 

 

Log in to the Red Hat Linux server:

Username : root

Password : xxxxxxx

 

Step 1: Check the prerequisites, such as the current partition sizes, using the pvs, vgs, lsblk, and df outputs.

[root@Test ~]# pvs

  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda3  Test lvm2 a--  197.52g       0
  /dev/sdb1  Test lvm2 a--  <50.00g  <20.13g

 

[root@Test ~]# vgs

  VG    #PV #LV #SN Attr   VSize    VFree
  Test    2   9   0 wz--n- <247.52g 20.12g

[root@Test ~]# lsblk

NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0                         2:0    1     4K  0 disk
sda                         8:0    0   200G  0 disk
├─sda1                      8:1    0   485M  0 part /boot
├─sda2                      8:2    0     2G  0 part [SWAP]
└─sda3                      8:3    0 197.5G  0 part
  ├─Test-root             253:0    0    15G  0 lvm  /
  ├─Test-usr              253:1    0     5G  0 lvm  /usr
  ├─Test-home             253:2    0    10G  0 lvm  /home
  ├─Test-var              253:3    0    10G  0 lvm  /var
  ├─Test-Uploads          253:4    0    10G  0 lvm  /Uploads
  ├─Test-Test             253:5    0    82G  0 lvm  /Test
  ├─Test-opt              253:6    0    10G  0 lvm  /opt
  ├─Test-tmp              253:7    0     5G  0 lvm  /tmp
  └─Test-DB               253:8    0  80.4G  0 lvm  /db
sdb                         8:16   0    50G  0 disk
└─sdb1                      8:17   0    50G  0 part
  └─Test-DB               253:8    0  80.4G  0 lvm  /db
sr0                        11:0    1  1024M  0 rom

[root@Test ~]# df -h /db
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/Test-DB    81G   68G   13G  85% /db
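It can also help to record the current logical volume sizes before making any change (a small addition to the checks above):

[root@Test ~]# lvs Test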

Step 2: Scan for the new SAN storage LUN

 [root@Test ~]# echo "- - -" > /sys/class/scsi_host/host0/scan

[root@Test ~]# echo "- - -" > /sys/class/scsi_host/host1/scan

[root@Test ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
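The host numbers vary from server to server; instead of listing them one by one, every SCSI host can be rescanned in a single loop (a sketch):

[root@Test ~]# for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done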

 

[root@Test ~]# lsblk

NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0                         2:0    1     4K  0 disk
sda                         8:0    0   200G  0 disk
├─sda1                      8:1    0   485M  0 part /boot
├─sda2                      8:2    0     2G  0 part [SWAP]
└─sda3                      8:3    0 197.5G  0 part
  ├─Test-root             253:0    0    15G  0 lvm  /
  ├─Test-usr              253:1    0     5G  0 lvm  /usr
  ├─Test-home             253:2    0    10G  0 lvm  /home
  ├─Test-var              253:3    0    10G  0 lvm  /var
  ├─Test-Uploads          253:4    0    10G  0 lvm  /Uploads
  ├─Test-Test             253:5    0    82G  0 lvm  /Test
  ├─Test-opt              253:6    0    10G  0 lvm  /opt
  ├─Test-tmp              253:7    0     5G  0 lvm  /tmp
  └─Test-DB               253:8    0  80.4G  0 lvm  /db
sdb                         8:16   0    50G  0 disk
└─sdb1                      8:17   0    50G  0 part
  └─Test-DB               253:8    0  80.4G  0 lvm  /db
sdc                         8:32   0    70G  0 disk
sr0                        11:0    1  1024M  0 rom

 

Step 3: Initialize the new disk and extend the volume group

[root@Test ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 75.2 GB, 75161927680 bytes, 146800640 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes
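In this example the whole disk is handed to LVM without a partition table. If a partition is preferred instead (as was done for /dev/sdb1), a sketch with parted would be:

[root@Test ~]# parted -s /dev/sdc mklabel gpt mkpart primary 0% 100% set 1 lvm on
[root@Test ~]# pvcreate /dev/sdc1

The remaining steps are the same, using /dev/sdc1 in place of /dev/sdc.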

 

pvcreate:

[root@Test ~]# pvcreate /dev/sdc

  Physical volume "/dev/sdc" successfully created.

Check with pvs:

[root@Test ~]# pvs

  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda3  Test lvm2 a--  197.52g       0
  /dev/sdb1  Test lvm2 a--  <50.00g  <20.13g
  /dev/sdc        lvm2 ---   70.00g   70.00g

 vgextend:

[root@Test ~]# vgextend Test /dev/sdc

  Volume group "Test" successfully extended

Check with vgs:

[root@Test ~]# vgs

  VG    #PV #LV #SN Attr   VSize    VFree
  Test    3   9   0 wz--n- <317.52g 90.12g

lvextend (online LV resize):
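The extend command itself is not captured in the output below; a command that would produce this result, assuming the new 70 GiB is added to the DB logical volume, is:

[root@Test ~]# lvextend -L +70G /dev/Test/DB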

  Size of logical volume Test/DB changed from 80.39 GiB (20580 extents) to 150.39 GiB (38500 extents).

  Logical volume Test/DB successfully resized.
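The file system then has to be grown to use the new space. The output below is from xfs_growfs, which for the XFS file system mounted at /db would be run as:

[root@Test ~]# xfs_growfs /db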

meta-data=/dev/mapper/Test-DB    isize=512    agcount=7, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=21073920, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 21073920 to 39424000

 

Step 4: Check the new file system size

[root@Test ~]# df -h /db
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/Test-DB   151G   68G   83G  46% /db
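The LVM view can be cross-checked as well (optional):

[root@Test ~]# vgs Test
[root@Test ~]# lvs Test/DB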
