
Converting an epoch time string to a date value

echo 1621230001 | awk '{print strftime("%d/%m/%y %T",$1)}'
17/05/21 14:40:01
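The reverse conversion can be done without awk; a minimal sketch using GNU date (the output above is local time, apparently UTC+9, so the UTC results below differ by 9 hours):

```shell
# Epoch -> date and date -> epoch with GNU date; -u pins everything to UTC
# so the results are reproducible regardless of the machine's timezone.
date -u -d '2021-05-17 14:40:01' +%s        # -> 1621262401
date -u -d @1621230001 '+%d/%m/%y %T'       # -> 17/05/21 05:40:01
```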

# Required Perl packages
yum install perl-core
yum install perl-LWP-Protocol-https
 
# Run the IP2Location download
perl download.pl -package DB1 -token YOUR_TOKEN -output /home/test/COUNTRY/test.ZIP
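IP2Location databases are updated monthly, so the download is usually scheduled; a hypothetical crontab entry (the schedule, paths, and YOUR_TOKEN are placeholders):

```shell
# Run at 03:00 on the 5th of every month; adjust paths and token to your setup.
0 3 5 * * perl /home/test/download.pl -package DB1 -token YOUR_TOKEN -output /home/test/COUNTRY/test.ZIP
```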

A CloudWatch alarm is configured to watch GP3 IOPS on the MongoDB server; when the alert comes through Slack, handle it as follows.

 

Log in to the server and monitor how IOPS is actually being used:

$ iostat -dx -c 1
 
Linux 3.10.0-1062.12.1.el7.x86_64 ()    06/15/2021   _x86_64_    (2 CPU)
 
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.14    0.00    0.09    0.00    0.10   99.68
 
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme0n1           0.00     0.02    0.00    0.19     0.06     2.04    22.34     0.00    1.20    1.13    1.20   0.16   0.00
 
Check CPU usage and %iowait, plus the per-device read/write rates and %util.
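Those columns can also be watched programmatically; a small sketch that flags a device when %util (the last iostat column) crosses a threshold. The sample line and the 80% cutoff are made up for illustration:

```shell
# Feed one iostat-style device line through awk and flag high %util.
# $NF is the last field (%util); 80 is an arbitrary alert threshold.
echo 'nvme0n1 0.00 0.02 0.00 0.19 0.06 2.04 22.34 0.00 1.20 1.13 1.20 0.16 95.00' |
awk '$NF+0 > 80 {print $1, "util:", $NF "%"}'
# -> nvme0n1 util: 95.00%
```

In practice you would pipe live output instead, e.g. iostat -dx 1 | awk '...'.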

 

If heavy IOPS usage is confirmed, install iotop:

# yum install iotop
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: d36uatko69830t.cloudfront.net
 * extras: d36uatko69830t.cloudfront.net
 * updates: d36uatko69830t.cloudfront.net
base                                                                                                                        | 3.6 kB  00:00:00
extras                                                                                                                      | 2.9 kB  00:00:00
updates                                                                                                                     | 2.9 kB  00:00:00
(1/2): extras/7/x86_64/primary_db                                                                                           | 242 kB  00:00:00
(2/2): updates/7/x86_64/primary_db                                                                                          | 8.8 MB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package iotop.noarch 0:0.6-4.el7 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
===================================================================================================================================================
 Package                           Arch                               Version                               Repository                        Size
===================================================================================================================================================
Installing:
 iotop                             noarch                             0.6-4.el7                             base                              52 k
 
Transaction Summary
===================================================================================================================================================
Install  1 Package
 
Total download size: 52 k
Installed size: 156 k
Is this ok [y/d/N]: y
Downloading packages:
iotop-0.6-4.el7.noarch.rpm                                                                                                  |  52 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : iotop-0.6-4.el7.noarch                                                                                                          1/1
  Verifying  : iotop-0.6-4.el7.noarch                                                                                                          1/1
 
Installed:
  iotop.noarch 0:0.6-4.el7
 
Complete!

Check per-process disk usage with the iotop command:

# iotop -P
Total DISK READ :   0.00 B/s | Total DISK WRITE :       0.00 B/s
Actual DISK READ:   0.00 B/s | Actual DISK WRITE:       0.00 B/s
   PID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
     1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % systemd --system --deserialize 15
     2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
     4 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/0:0H]
     6 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
     7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
     8 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_bh]
     9 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_sched]
    10 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [lru-add-drain]
    11 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
    12 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/1]
    13 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
    14 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
    16 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/1:0H]
    18 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kdevtmpfs]
    19 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [netns]
    20 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [khungtaskd]
    21 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [writeback]
    22 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kintegrityd]
    23 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [bioset]
    24 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [bioset]
    25 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [bioset]
    26 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kblockd]
    27 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [md]
    28 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [edac-poller]
    29 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdogd]
    35 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kswapd0]
    36 be/5 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksmd]
    37 be/7 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [khugepaged]
    38 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [crypto]
 16818 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % rsyslogd -n
    46 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthrotld]
    48 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kmpath_rdacd]
    49 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kaluad]
    51 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kpsmoused]
    53 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ipv6_addrconf]
 
This shows disk read/write I/O per process.

 

Also check disk usage for active processes, broken down by thread:

# iotop -o
 
Total DISK READ :   0.00 B/s | Total DISK WRITE :      11.90 K/s
Actual DISK READ:   0.00 B/s | Actual DISK WRITE:       0.00 B/s
   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 56499 be/4 mongodb     0.00 B/s    0.00 B/s  0.00 %  0.00 % mongos -f /usr/local/mongodb/conf/router.conf [ftdc]
 
I/O can also be inspected per thread of a process.

 

If, for example, PMM looks like the cause, look up its processes and kill them all:

$ ps -ef | grep pmm
$ kill <PID>
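The same lookup can be done with pgrep, which avoids the classic pitfall of the grep command matching its own process; pkill then signals every match at once:

```shell
# List PIDs and command lines of anything matching "pmm".
pgrep -af pmm || echo "no pmm processes running"
# To terminate them all (SIGTERM), run:
# pkill -f pmm
```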

Finding and deleting files older than 2 days (the syntax below is Unix-style find; on Windows it needs a Unix-like environment such as Git Bash, Cygwin, or WSL)

find /log/upload -name '*.log' -mtime +2 -delete
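The expression can be sanity-checked in a scratch directory before pointing it at real logs; this sketch backdates a file with GNU touch:

```shell
# Create one old and one fresh file, run the same find expression, and list survivors.
d=$(mktemp -d)
touch -d '4 days ago' "$d/old.log"   # backdate mtime past the -mtime +2 cutoff
touch "$d/new.log"
find "$d" -name '*.log' -mtime +2 -delete
ls "$d"                              # -> new.log
```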


List ZFS datasets

zfs list

 

Change a ZFS mount point

zfs set mountpoint=/oradata_bk datapool/oradata
zfs set mountpoint=/oradata purevol/oradata

 

Verify

zfs get mountpoint datapool/oradata

 

mount

zfs mount  datapool/oradata

zfs mount purevol/oradata

 

Destroy a ZFS dataset

zfs destroy datapool/Arch

 

zpool list

NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
dbpool    556G  6.38G   550G   1%  1.00x  ONLINE  -
purevol  11.9T  16.4G  11.9T   0%  1.00x  ONLINE  -
rpool     556G   179G   377G  32%  1.00x  ONLINE  -

 

zpool status

root@dbserver1 # zpool status
  pool: datapool
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        datapool                                 ONLINE       0     0     0
          c0t624A9370B130E0A67E0B480800011011d0  ONLINE       0     0     0

errors: No known data errors

  pool: dbpool
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        dbpool                     ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000CCA02F613828d0  ONLINE       0     0     0
            c0t5000CCA02F53C340d0  ONLINE       0     0     0

errors: No known data errors

  pool: purevol
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        purevol                                  ONLINE       0     0     0
          c0t624A9370B130E0A67E0B480800011013d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: resilvered 114G in 12m23s with 0 errors on Mon Oct 22 14:31:21 2018
config:

        NAME                       STATE     READ WRITE CKSUM
        rpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000CCA02F540950d0  ONLINE       0     0     0
            c0t5000CCA02F613E14d0  ONLINE       0     0     0

Removing the datapool pool itself: zfs destroy cannot delete a pool; use zpool destroy instead.

zpool destroy -f datapool

 

 

 


On the NFS server

Allow the client in /etc/exports:

 

/appnas    10.200.11.40(rw,anonuid=1100,anongid=1100)

(10.200.11.40 here is the client's IP.)

Reload the export policy with exportfs -ra.

 

On the NFS client

Register the path in /etc/fstab:

 

10.200.31.48:/appnas    /data1/contents    nfs    defaults    0    0

 

After registering, either command works:

mount -a
mount -t nfs 10.200.31.48:/appnas /data1/contents

In vi

:%s/old/new/gc

 

From the command line (sed -i edits the file in place, so it takes the filename directly; piping from cat does not work with -i):

sed -i 's/old/new/g' test.log
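A quick demo on a throwaway file; with GNU sed, appending a suffix to -i (here .bak) keeps a backup of the original:

```shell
# Create a sample file, replace in place, and keep the original as .bak.
printf 'old line\n' > /tmp/sed_demo.log
sed -i.bak 's/old/new/g' /tmp/sed_demo.log
cat /tmp/sed_demo.log        # -> new line
cat /tmp/sed_demo.log.bak    # -> old line
```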

 


This is how to exempt a specific user from the OS-level security policy (PAM) on Linux.

The order of the lines matters: with [success=1 default=ignore], pam_succeed_if skips the next module in the stack when the user matches, so it must sit directly above the module you want that user to bypass.

 

Exclude the account in the PAM modules:

 

[root@test ~]# cat /etc/pam.d/system-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth [success=1 default=ignore] pam_succeed_if.so user in TESTUSER

 

 

[root@test ~]# cat /etc/pam.d/password-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth [success=1 default=ignore] pam_succeed_if.so user in TESTUSER

 
