Keeping Redo Log Groups Out of a Highly Active State

  • Source: Internet | Author: rocket | 2008-03-19 11:27
  • Platform: SunOS 5.8 Generic_108528-23 sun4u sparc SUNW,Ultra-Enterprise

    Database: 8.1.5.0.0

    Symptom: the database is responding slowly, and application requests are no longer returning.

    Logging into the database, we find that every redo log group except the CURRENT one is in the ACTIVE state:

    oracle:/oracle/oracle8>sqlplus "/ as sysdba"

    SQL*Plus: Release 8.1.5.0.0 - Production on Thu Jun 23 18:56:06 2005

    (c) Copyright 1999 Oracle Corporation. All rights reserved.

    Connected to:
    Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
    With the Partitioning and Java options
    PL/SQL Release 8.1.5.0.0 - Production
    SQL> select * from v$log;

        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
    ---------- ---------- ---------- ---------- ---------- --- -------- ------------- ---------
             1          1     520403   31457280          1 NO  ACTIVE      1.3861E+10 23-JUN-05
             2          1     520404   31457280          1 NO  ACTIVE      1.3861E+10 23-JUN-05
             3          1     520405   31457280          1 NO  ACTIVE      1.3861E+10 23-JUN-05
             4          1     520406   31457280          1 NO  CURRENT     1.3861E+10 23-JUN-05
             5          1     520398   31457280          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             6          1     520399   31457280          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             7          1     520400  104857600          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             8          1     520401  104857600          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             9          1     520402  104857600          1 NO  ACTIVE      1.3861E+10 23-JUN-05

    9 rows selected.

    SQL> /

        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
    ---------- ---------- ---------- ---------- ---------- --- -------- ------------- ---------
             1          1     520403   31457280          1 NO  ACTIVE      1.3861E+10 23-JUN-05
             2          1     520404   31457280          1 NO  ACTIVE      1.3861E+10 23-JUN-05
             3          1     520405   31457280          1 NO  ACTIVE      1.3861E+10 23-JUN-05
             4          1     520406   31457280          1 NO  CURRENT     1.3861E+10 23-JUN-05
             5          1     520398   31457280          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             6          1     520399   31457280          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             7          1     520400  104857600          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             8          1     520401  104857600          1 NO  ACTIVE      1.3860E+10 23-JUN-05
             9          1     520402  104857600          1 NO  ACTIVE      1.3861E+10 23-JUN-05

    9 rows selected.

    A log group remains ACTIVE until the checkpoint triggered at its log switch completes, i.e. until DBWR has written every dirty buffer protected by that log out to disk. Re-running the query returns identical results: with all eight non-current groups stuck in ACTIVE, DBWR's writes clearly cannot keep up with the checkpoints triggered by log switches.
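    As a quick sanity check, the same picture can be summarized in one query (a minimal sketch added here, not part of the original session); on a healthy instance, ACTIVE groups should be the exception rather than the rule:

    -- count redo groups per state; a pile of ACTIVE rows means
    -- checkpointing is falling behind the log switches
    select status, count(*) from v$log group by status;

    When an instance is in this state, the alert log typically also fills with "Checkpoint not complete, cannot allocate new log" messages while LGWR waits for a group to become reusable.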

    Next, let's check how busy DBWR actually is:

    SQL> !
    oracle:/oracle/oracle8>ps -ef|grep ora_
      oracle  2273     1  0   Mar 31 ?       57:40 ora_smon_hysms02
      oracle  2266     1  0   Mar 31 ?      811:42 ora_dbw0_hysms02
      oracle  2264     1 16   Mar 31 ?    16999:57 ora_pmon_hysms02
      oracle  2268     1  0   Mar 31 ?     1649:07 ora_lgwr_hysms02
      oracle  2279     1  0   Mar 31 ?        8:09 ora_snp1_hysms02
      oracle  2281     1  0   Mar 31 ?        4:22 ora_snp2_hysms02
      oracle  2285     1  0   Mar 31 ?        9:40 ora_snp4_hysms02
      oracle  2271     1  0   Mar 31 ?       15:57 ora_ckpt_hysms02
      oracle  2283     1  0   Mar 31 ?        5:37 ora_snp3_hysms02
      oracle  2277     1  0   Mar 31 ?        5:58 ora_snp0_hysms02
      oracle  2289     1  0   Mar 31 ?        0:00 ora_d000_hysms02
      oracle  2287     1  0   Mar 31 ?        0:00 ora_s000_hysms02
      oracle  2275     1  0   Mar 31 ?        0:04 ora_reco_hysms02
      oracle 21023 21012  0 18:52:59 pts/65    0:00 grep ora_

    The DBWR process ID is 2266.
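    Before turning to OS-level tools, it is also worth asking the database itself what DBWR is waiting on. A minimal sketch (not part of the original session) that maps the OS pid from ps back to a session; v$session_wait exists on 8i, though the exact columns can vary by release:

    -- find the session belonging to OS pid 2266 and show its current wait
    select s.sid, w.event, w.state, w.seconds_in_wait
      from v$process p, v$session s, v$session_wait w
     where p.spid = '2266'
       and s.paddr = p.addr
       and w.sid = s.sid;

    A DBWR stuck on an event such as 'db file parallel write' would point the same way as the OS statistics that follow.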

    Let's observe it with the top command:

    oracle:/oracle/oracle8>top

    last pid: 21145;  load averages: 3.38, 3.45, 3.67    18:53:38
    725 processes: 711 sleeping, 1 running, 10 zombie, 3 on cpu
    CPU states: 35.2% idle, 40.1% user, 9.4% kernel, 15.4% iowait, 0.0% swap
    Memory: 3072M real, 286M free, 3120M swap in use, 1146M swap free

      PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
    11855 smspf      1  59    0 1355M 1321M cpu/0   19:32 16.52% oracle
     2264 oracle     1   0    0 1358M 1316M run    283.3H 16.36% oracle
    11280 oracle     1  13    0 1356M 1321M sleep   79.8H  0.77% oracle
     6957 smspf     15  29   10   63M   14M sleep  107.7H  0.76% java
    17393 smspf      1  30    0 1356M 1322M cpu/18  33:05  0.58% oracle
    29299 smspf      5  58    0 8688K 5088K sleep   18.5H  0.38% fee_ftp_get
    21043 oracle     1  43    0 3264K 2056K cpu/9    0:01  0.31% top
    20919 smspf     17  29   10   63M   17M sleep  247:02  0.29% java
    25124 smspf      1  58    0   16M 4688K sleep    0:35  0.25% smif_status_rec
     8086 smspf      5  23    0   21M   13M sleep   41.1H  0.24% fee_file_in
    16009 root       1  35    0 4920K 3160K sleep    0:03  0.21% sshd2
    25126 smspf      1  58    0 1355M 1321M sleep    0:26  0.20% oracle
     2266 oracle     1  60    0 1357M 1317M sleep  811:42  0.18% oracle
    11628 smspf      7  59    0 3440K 2088K sleep    0:39  0.16% sgip_client_ltz
    26257 smspf     82  59    0  447M  178M sleep  533:04  0.15% java

    Notice that process 2266 is consuming a mere 0.18% of CPU: DBWR is clearly not CPU-bound, so the bottleneck is most likely in I/O.
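    A quick cross-check from inside the database (again a sketch added here, not part of the original session) is the cumulative wait statistics in v$system_event; times are in centiseconds on this release, and a large average wait on 'db file parallel write' would corroborate a write-I/O bottleneck:

    -- cumulative I/O-related waits since instance startup
    select event, total_waits, time_waited, average_wait
      from v$system_event
     where event in ('db file parallel write',
                     'log file parallel write',
                     'log file sync');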

    Use the iostat tool to examine the I/O situation:

    gqgai:/home/gqgai>iostat -xn 3
                        extended device statistics
       r/s   w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    ......
       0.0   0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t6d0
       1.8  38.4    32.4  281.0  0.0  0.7    0.0   16.4   0  29 c0t10d0
       1.8  38.4    32.4  281.0  0.0  0.5    0.0   13.5   0  27 c0t11d0
      24.8  61.3  1432.4  880.1  0.0  0.5    0.0    5.4   0  26 c1t1d0
       0.0   0.0     0.0    0.0  0.0  0.0    0.0    9.1   0   0 hurraysms02:vold(pid238)
                        extended device statistics
       r/s   w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    ........
       0.0   0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t6d0
       0.3   8.3     0.3   47.0  0.0  0.1    0.0    9.2   0   8 c0t10d0
       0.0   8.3     0.0   47.0  0.0  0.1    0.0    8.0   0   7 c0t11d0
      11.7  65.3   197.2  522.2  0.0  1.6    0.0   20.5   0 100 c1t1d0
       0.0   0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 hurraysms02:vold(pid238)
                        extended device statistics
       r/s   w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    ........
       0.0   0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t6d0
       0.3  13.7     2.7   68.2  0.0  0.2    0.0   10.9   0  12 c0t10d0
       0.0  13.7     0.0   68.2  0.0  0.1    0.0    9.6   0  11 c0t11d0
      11.3  65.3    90.7  522.7  0.0  1.5    0.0   19.5   0  99 c1t1d0
       0.0   0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 hurraysms02:vold(pid238)
                        extended device statistics
       r/s   w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    ........
       0.0   0.0     0.0    0.0  0.0  0.0    0.0    0
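    The c1t1d0 device stands out: it is sustaining roughly 65 writes per second with %b pinned at 99-100, meaning the disk is effectively saturated with writes. To confirm which database files sit on the saturated device, list the datafile and redo log member paths and match them against the mount points (a minimal sketch; the path-to-device mapping depends on your volume layout):

    -- list every datafile and redo log member with its full path
    select name from v$datafile
    union all
    select member from v$logfile;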
     

