Channel: SCN : All Content - SAP on Oracle

SAP_SLD_DATA_COLLECT job hangs intermittently


Dear Experts,

We have scheduled the SAP_SLD_DATA_COLLECT job via RZ70 in our ECC system. Most of the time it runs successfully, but once every day it hangs.

I have checked the SLD system as well and everything is fine there. I have also checked the RFCs SLD_UC and SLD_NUC, and both are working fine.

What might be the issue? Kindly suggest. Please find the dev_wx file below.

 

Environment:

SAP ECC6

Windows 2008 R2

Oracle 11.3

 

 

Warm Regards,

Sumit Jha


brarchive is failing


Hi,

We have Oracle 11g on Red Hat Linux.

We are trying to run /sapmnt/<SID>/exe/brarchive -u / -k yes -d disk -c -sd, but it fails with the error message below.

 

BR0002I BRARCHIVE 7.20 (10)
BR0006I Start of offline redolog processing: aemalgbk.svd 2013-09-06 05.10.44
BR0484I BRARCHIVE log file: /oracle/<SID>/saparch/aemalgbk.svd
BR0280I BRARCHIVE time stamp: 2013-09-06 05.12.44
BR0301E SQL error -1031 at location BrInitOraCreate-2, SQL statement:
'CONNECT / AT PROF_CONN IN SYSOPER MODE'
ORA-01031: insufficient privileges
BR0303E Determination of Oracle version failed

BR0007I End of offline redolog processing: aemalgbk.svd 2013-09-06 05.12.44
BR0280I BRARCHIVE time stamp: 2013-09-06 05.12.44
BR0005I BRARCHIVE terminated with errors

 

We have tried SAP Note 776505 (BR*Tools fail with ORA-01017 / ORA-01031 on Linux), but the issue remains.

Please help.

Regards

Ganesh Tiwari
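
Not an answer from the thread, just a hedged diagnostic sketch: BRARCHIVE is connecting with 'CONNECT / ... IN SYSOPER MODE', i.e. via OS authentication, so it is worth checking that the OS user running brarchive really maps to the SYSOPER (OSOPER) group and that a manual OS-authenticated connect works at all. The group names dba/oper below are the usual SAP defaults and are assumptions here.

# run as the user that starts brarchive (usually ora<sid> or <sid>adm)
id                              # should list the dba/oper groups (assumed group names)
ls -l $ORACLE_HOME/bin/oracle   # typically owned by ora<sid>:dba with setuid/setgid bits (-rwsr-s--x)
echo $ORACLE_HOME $ORACLE_SID   # confirm the environment points to the right Oracle home and SID

# try the same OS-authenticated connect that brarchive attempts
sqlplus /nolog <<'EOF'
connect / as sysoper
show user
exit
EOF

If the manual connect fails with the same ORA-01031, the problem lies in the OS authentication setup rather than in BRARCHIVE itself.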

Moving mirrorlog files


Hi,

 

I need to move the mirror log files (Oracle 11G)  from one drive to another on Windows Server 2008 R2.

 

I have gone through a couple of posts but they describe different ways of doing it.

 

I am not entirely sure as to how to proceed.

 

Thanks.

 

Best Regards,

Anita
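
Not from the original thread, but one common approach, sketched here with purely illustrative drive letters and file names: relocate the mirrored online redo log members group by group by adding a new member on the target drive, making sure the group is no longer CURRENT, and then dropping the old member (its file must afterwards be deleted manually at OS level). The block is written as a generic SQL*Plus session; on Windows the same statements are simply entered in SQL*Plus.

sqlplus / as sysdba <<'EOF'
-- show the current groups and members (SAP systems usually keep mirrors under mirrlogA/B)
select group#, member from v$logfile order by group#;
select group#, status from v$log;

-- example for group 1: add a member on the new drive (path is illustrative)
alter database add logfile member 'F:\ORACLE\SID\MIRRLOGA\LOG_G11M2.DBF' to group 1;

-- make sure group 1 is no longer CURRENT before dropping the old member
alter system switch logfile;
alter system checkpoint;

-- drop the old member (path is illustrative), then delete the file at OS level
alter database drop logfile member 'E:\ORACLE\SID\MIRRLOGA\LOG_G11M2.DBF';
exit
EOF

Repeat the add/switch/drop sequence for each log group whose mirror member has to move.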

SAP Oracle Upgrade issue


Hello,

 

I'm going through an Oracle upgrade from 10.2.0.4 to 11.2.0.3. I have already completed the pre-upgrade tasks without problems; however, I'm getting a dump which says that a data block is corrupted. Should I still go ahead with DBUA, or will it give me an error during the upgrade or the post-upgrade tasks? I'd like to open an OSS message, but as my database is still on 10g I'd probably not get much support.

 

Regards,

 

JAM

 

OS: AIX 7.1
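
Not part of the original question, but a hedged way to narrow the corruption down before deciding on DBUA: an RMAN logical validation reads every block without writing a backup and records any corrupt blocks in V$DATABASE_BLOCK_CORRUPTION, which tells you which file and block (and therefore which segment) is affected.

# run with the Oracle 10.2 environment of the source database
rman target / <<'EOF'
backup validate check logical database;
EOF

sqlplus -s / as sysdba <<'EOF'
-- blocks flagged by the validation run
select file#, block#, blocks, corruption_type from v$database_block_corruption;
exit
EOF

Mapping FILE#/BLOCK# to the owning segment via DBA_EXTENTS then shows whether the corruption sits in a real object or only in free space, which is useful input for an OSS message.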

Oracle Dataguard and SAP licensing policy


Hi Gurus

We are planning to set up a Disaster Recovery site using Oracle 11g Data Guard.

Would it be possible for you to answer the following queries?

1) Is Oracle Data Guard part of the Oracle DVD set provided or downloaded from SAP?

2) Do we need to procure an additional license for Oracle Data Guard from SAP?

3) Would it be possible to provide any relevant SAP Note numbers?

 

Thanks and Regards

Upendra

Oracle offline backup and restore to tape


Dear all,

 

Earlier we were taking offline and online backups using TSM (the backup device used was util_file).

Now we are setting up Disaster Recovery, and the target system's system copy installation has stopped in the "Backup Restore" phase.

Please tell me step by step how to trigger an Oracle offline backup to tape.

Regards,

gayathri
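
Not from the original post, just a hedged sketch of a native tape-based offline backup with BR*Tools, assuming a locally attached tape drive and an init<SID>.sap in which backup_dev_type = tape, the tape_address/tape_address_rew devices and the volume names have been maintained:

# run as ora<sid> or <sid>adm on the database host
brbackup -u / -t offline -d tape -m all -c force     # whole-database offline backup to tape
brarchive -u / -d tape -s -c force                   # then save the offline redo logs to tape

Whether the stalled "Backup Restore" phase of the target system copy expects a native tape backup or a util_file (TSM/BACKINT) backup depends on how the restore is configured there, so this is only the native tape illustration.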

Locking in R3


Hi All,

 

 

I have a doubt: Oracle by itself provides row-level locking, and since SAP R/3 runs on top of it, update queries should also use row-level locking.

My understanding is that a row-level lock does not stay on the database for long after the record has been updated.

Then how can there be 8 thousand lock entries in the system at a given point in time? These locks sometimes stay for more than 9 to 10 hours.

Kindly provide some links to understand this concept. Are these lock entries not following row-level locking?

 

Thanks,

Swadesh
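
A hedged explanation rather than an authoritative one: the thousands of long-lived entries you see (for example in SM12) are SAP enqueue locks, which are managed by the enqueue server at the application level and are held for the duration of the SAP transaction/LUW, for instance while a user sits in change mode for hours. They are independent of Oracle row locks, which only exist for the duration of the database transaction. To contrast the two layers, the database-side transaction and table locks can be listed roughly like this:

sqlplus -s / as sysdba <<'EOF'
-- current Oracle locks: TX = row-level transaction lock, TM = table lock
select s.sid, s.username, s.program, l.type, l.lmode, l.ctime as held_seconds
from   v$lock l
join   v$session s on s.sid = l.sid
where  l.type in ('TX', 'TM')
order by l.ctime desc;
exit
EOF

Comparing this output with SM12 usually makes clear that the long-lived entries live in the SAP enqueue layer, not in the database.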

Help needed for Oracle 11.2 tuning memory parameters


Hi Experts,

 

Can anyone help me set the recommended Oracle parameters for our SAP environment and number of work processes?

 

Performance tuning parameters for Oracle

 

Our Environment:

 

Oracle 11.2.0.3 

Windows Server 2008 R2

Physical memory 36GB

Swap memory : 20GB

721 EXT Patchlevel 100 Unicode 64bit

2 servers (1 central instance + 1 dialog instance)

 

Warm Regards,

Vasan
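
Concrete sizes depend on the workload and on the current SAP recommendations for Oracle 11.2, so rather than suggesting values, here is a hedged first step: capture what the instance currently uses, which is also what any sizing discussion will start from.

sqlplus -s / as sysdba <<'EOF'
show sga
show parameter sga_target
show parameter sga_max_size
show parameter pga_aggregate_target
show parameter memory_target
show parameter db_cache_size
show parameter shared_pool_size
exit
EOF

On the SAP side, ST04/DBACOCKPIT shows buffer quality and PGA statistics over time, which is usually the basis for deciding how much of the 36 GB to give to the SGA and PGA versus the SAP work processes.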


ReturnCode -1403


Dear All,

FI users are facing a delay when they try to save data in F-29. Please check the attached trace and guide me on how to rectify this error, ReturnCode -1403.

The system is running on Windows Server 2008 with Oracle 10.2.0.5.

 

 

Regards,

ORACLE 11 and Secure Storage in File System


Note 1622837 - Secure connection of AS ABAP to Oracle via SSFS

Note 1639578 - SSFS as password storage for primary database connect

Preparing and securing the file system

 

In general, we recommend storing the secure storage in the file system and the optional external encryption key on SAPGLOBALHOST under $(DIR_GLOBAL)/security/rsecssfs/data and $(DIR_GLOBAL)/security/rsecssfs/key respectively; these directories should be secured accordingly.

----------------------------------------------------------------------
2.1 Creating the directories
----------------------------------------------------------------------
Determine the value for DIR_GLOBAL (for example, from transaction AL11) on SAPGLOBALHOST. Replace $(DIR_GLOBAL) in the following description with the determined value <dir_global>. Create the required directories as described below if they do not already exist.

 

[Screenshot: ssf1.png]
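
The screenshot presumably shows the directory creation; as a minimal sketch, assuming the standard <sid>adm user and the <dir_global> value determined above, the directories can be created like this:

# as <sid>adm on SAPGLOBALHOST; <dir_global> = value of DIR_GLOBAL from AL11
mkdir -p <dir_global>/security/rsecssfs/data
mkdir -p <dir_global>/security/rsecssfs/key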

----------------------------------------------------------------------
2.2 Securing the directories created
----------------------------------------------------------------------
In the following, make the directories that were created in step 2.1 available exclusively for the users of the SAP system <sid>.
On Linux and UNIX, this is the user <sid>adm. On Windows, all relevant users are merged into the groups SAP_<sid>_LocalAdmin and SAP_<sid>_GlobalAdmin.
In particular, cross-SAP system users and groups should not have any authorizations in these directories.
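
A minimal sketch of how this can be done on UNIX/Linux, assuming the standard ownership <sid>adm:sapsys; the listing below shows the resulting layout and permissions:

# as root (or <sid>adm if it already owns the tree) on SAPGLOBALHOST
chown -R <sid>adm:sapsys <dir_global>/security/rsecssfs
chmod -R 700 <dir_global>/security/rsecssfs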

texadm@saptex:/usr/sap/TEX/SYS/global/security/rsecssfs>ls -lart /usr/sap/TEX/SYS/global/security/rsecssfs
total 16
drwx------. 5 texadm sapsys 4096 Sep  5 16:09 ..
drwx------. 2 texadm sapsys 4096 Sep  5 16:09 key
drwx------. 4 texadm sapsys 4096 Sep  5 16:09 .
drwx------. 2 texadm sapsys 4096 Sep  6 11:40 data

 

----------------------------------------------------------------------
3.  Maintaining the SSFS profile parameters
----------------------------------------------------------------------
Set the following profile parameters that point to the previously created directories as the location for the secure storage and the external key. We recommend that you add the parameters to the default profile DEFAULT.PFL. Otherwise, you must maintain all of the instance profiles. Add the following entries:
rsec/ssfs_datapath = $(DIR_GLOBAL)$(DIR_SEP)security$(DIR_SEP)rsecssfs$(DIR_SEP)data
rsec/ssfs_keypath  = $(DIR_GLOBAL)$(DIR_SEP)security$(DIR_SEP)rsecssfs$(DIR_SEP)key

[Screenshot: ssf2.png]

 

----------------------------------------------------------------------
4.  Maintaining the SSFS environment variable
----------------------------------------------------------------------
The profile parameters rsec/ssfs_datapath and rsec/ssfs_keypath are interpreted only by the SAP system. They do not apply to the SAP tools R3trans, R3load, and so on. For these, you must set a corresponding environment variable on each application server, including the central instance. Depending on the operating system, proceed as follows:

----------------------------------------------------------------------
Application server on UNIX and Linux
----------------------------------------------------------------------
For this, first determine the value <dir_global> for DIR_GLOBAL on the relevant application server (for example, using transaction AL11). Then add the following lines to the logon script for <sid>adm on this application server:

  • For C shell scripts:

    setenv RSEC_SSFS_DATAPATH <dir_global>/security/rsecssfs/data
    setenv RSEC_SSFS_KEYPATH <dir_global>/security/rsecssfs/key
  • For Korn shell scripts:

    export RSEC_SSFS_DATAPATH=<dir_global>/security/rsecssfs/data
    export RSEC_SSFS_KEYPATH=<dir_global>/security/rsecssfs/key

  texadm@saptex:/usr/sap/texadm>grep RSEC $HOME/.profile 

export RSEC_SSFS_DATAPATH=/usr/sap/TEX/SYS/global/security/rsecssfs/data

export RSEC_SSFS_KEYPATH=/usr/sap/TEX/SYS/global/security/rsecssfs/key

 

 

----------------------------------------------------------------------
5.  Setting up the SSFS data storage and checking the access rights
----------------------------------------------------------------------

----------------------------------------------------------------------
5.1 Setting up the SSFS storage
----------------------------------------------------------------------
In the following, you must fill the secure storage in the file system with the required access information for the ABAP database user. This information consists at least of the name of the ABAP database user and the password of this user.
In some database types, you must also make specifications about the target database. In all other cases, this information is still derived from the SAP profile.

Note that storage differentiates between uppercase and lowercase characters.

  • DB_CONNECT/DEFAULT_DB_USER
    ABAP database connect user (usually "SAPSR3")
    The storage in the secure storage should take place in an unencrypted manner for Support reasons.
  • DB_CONNECT/DEFAULT_DB_PASSWORD
    Password of the ABAP database user
    The storage in the secure storage takes place in an encrypted manner.
  • DB_CONNECT/DEFAULT_DB_CON_ENV
    Specifications about the ABAP target database
    The storage in the secure storage takes place in an unencrypted manner. This parameter is currently required for the SAP HANA database only.

 
Refer to the relevant platform note for the name of the database connect user, for the information about whether the parameter DB_CONNECT/DEFAULT_DB_CON_ENV is required, and its exact format, if required.

Proceed as follows: 

  • Log on to SAPGLOBALHOST as the <sid>adm user.
  • Make sure that the environment variables RSEC_SSFS_DATAPATH and RSEC_SSFS_KEYPATH are set.
texadm@saptex:/usr/sap/texadm>env | grep RSEC

RSEC_SSFS_DATAPATH=/usr/sap/TEX/SYS/global/security/rsecssfs/data

RSEC_SSFS_KEYPATH=/usr/sap/TEX/SYS/global/security/rsecssfs/key

 

  • Use the command line tool of the secure storage rsecssfx from the SAP kernel to add entries for the user <name> and the password <pwd>, and to add any information about the target database as follows:

    rsecssfx put DB_CONNECT/DEFAULT_DB_USER <name> -plain
    rsecssfx put DB_CONNECT/DEFAULT_DB_PASSWORD <pwd>

texadm@saptex:/usr/sap/texadm>rsecssfx put DB_CONNECT/DEFAULT_DB_USER  SAPSR3 -plain

texadm@saptex:/usr/sap/texadm>rsecssfx put DB_CONNECT/DEFAULT_DB_PASSWORD Kart0on1

 

           If required, also use:

           rsecssfx put DB_CONNECT/DEFAULT_DB_CON_ENV <con_env> -plain

           Note the following: In non-Unicode systems, only characters from the 7-bit ASCII area are permitted.

           To avoid code page problems, we generally recommend that you adhere to this rule. If you want to use other characters in Unicode systems, you must convert these using the ABAP report RSECSSFX_ESCAPE into characters that can be used by rsecssfx.

  • Check the content of the secure storage as follows:

    rsecssfx list

[Screenshot: ssf3.png]

   

texadm@saptex:/usr/sap/texadm>rsecssfx list

|---------------------------------------------------------------------------------|
| Record Key                     | Status             | Timestamp of last Update  |
|---------------------------------------------------------------------------------|
| DB_CONNECT/DEFAULT_DB_PASSWORD | Encrypted          | 2013-09-10  09:52:11 UTC  |
| DB_CONNECT/DEFAULT_DB_USER     | Plaintext          | 2013-09-10  09:51:43 UTC  |
|---------------------------------------------------------------------------------|

Summary
-------
Active Records    : 2 (Encrypted : 1, Plain : 1, Wrong Key : 0, Error : 0)
Outdated Records  : 4 (occupied space can be released by the "compact" command)
Datafile Location : /usr/sap/TEX/SYS/global/security/rsecssfs/data/SSFS_TEX.DAT (when existing)
Keyfile Location  : /usr/sap/TEX/SYS/global/security/rsecssfs/key/SSFS_TEX.KEY (when existing)

 

           Refer to the command line help for further commands for the administration of the secure storage:

rsecssfx help


----------------------------------------------------------------------
5.2 Setting and checking the authorization of the SSFS data storage
----------------------------------------------------------------------
The first call of "rsecssfx put" also creates the data file of the secure storage. The directory $(DIR_GLOBAL)/security/rsecssfs/data should now contain the file SSFS_<sid>.DAT.

 

texadm@saptex:/usr/sap/texadm>ls -lart /usr/sap/TEX/SYS/global/security/rsecssfs/*/

/usr/sap/TEX/SYS/global/security/rsecssfs/key/:
total 8
drwx------. 4 texadm sapsys 4096 Sep  5 16:09 ..
drwx------. 2 texadm sapsys 4096 Sep  5 16:09 .

/usr/sap/TEX/SYS/global/security/rsecssfs/data/:
total 12
drwx------. 4 texadm sapsys 4096 Sep  5 16:09 ..
-rw-r--r--. 1 texadm sapsys 1458 Sep 10 11:52 SSFS_TEX.DAT
drwx------. 2 texadm sapsys 4096 Sep 10 11:52 .


----------------------------------------------------------------------
SAPGLOBALHOST on Windows
----------------------------------------------------------------------
If your SAPGLOBALHOST runs on Windows, no action is required because the access rights are inherited from the directory when the file is created.

----------------------------------------------------------------------
SAPGLOBALHOST on UNIX or Linux
----------------------------------------------------------------------
Otherwise, you must correct the access rights for the file, in the same way as for step 2.2, so that only <sid>adm is authorized.

  • chmod 600 <dir_global>/security/rsecssfs/data/SSFS_<sid>.DAT


For security reasons, also check the access rights here using "ls -al":
-rw------- <sid>adm  sapsys  SSFS_<sid>.DAT

 

texadm@saptex:/usr/sap/texadm>chmod 600 /usr/sap/TEX/SYS/global/security/rsecssfs/data/SSFS_TEX.DAT

texadm@saptex:/usr/sap/texadm>ls -lart /usr/sap/TEX/SYS/global/security/rsecssfs/data/
total 12
drwx------. 4 texadm sapsys 4096 Sep  5 16:09 ..
-rw-------. 1 texadm sapsys 1458 Sep 10 11:52 SSFS_TEX.DAT
drwx------. 2 texadm sapsys 4096 Sep 10 11:52 .

 

 

----------------------------------------------------------------------
7.  Changing to the new connection method
----------------------------------------------------------------------
----------------------------------------------------------------------
7.1 Setting the required parameters
----------------------------------------------------------------------
If you have executed all of the previous steps correctly, the SAP system should now be able to retrieve the password information that is required for the connection to the primary ABAP database from the secure storage in the file system. However, the conventional password storage is consulted by default.

The changeover to the new method is controlled by an additional profile parameter and an additional environment variable. Proceed in the same way as described in steps 3 and 4 to set the profile parameter (on SAPGLOBALHOST) and the environment variable (on all of the application servers).

  • Profile parameter : rsdb/ssfs_connect = 1

    [Screenshot: ssf4.png]

  • Environment variable: rsdb_ssfs_connect = 1

texadm@saptex:/usr/sap/texadm>env |  grep rsdb

rsdb_ssfs_connect=1

texadm@saptex:/usr/sap/texadm>grep rsdb_ssfs_connect .*

.sapenv.csh:setenv rsdb_ssfs_connect 1

.sapenv_saptex.csh:setenv rsdb_ssfs_connect 1

.sapenv_saptex.sh:rsdb_ssfs_connect=1; export rsdb_ssfs_connect

.sapenv.sh:rsdb_ssfs_connect=1; export rsdb_ssfs_connect


(To use the conventional storage, you must set the values of the profile parameter and environment variable to the value '0'. This corresponds to the default.)


----------------------------------------------------------------------
7.2 Checking the successful changeover
----------------------------------------------------------------------
Restart the SAP system and check whether the connect was successful. If the changeover was successful, the developer trace (SM50) should contain the following entry:
B read_con_info_ssfs(): DBSL supports extended connect protocol
B   ==> connect info for default DB will be read from ssfs


Check this for all of the application servers.

In addition, make sure that the SAP tools are still able to connect to the database. To do this, perform an R3trans testconnect on the application servers as <sid>adm.
R3trans -d

If R3trans was able to connect to the database successfully, the message "R3trans finished (0000)." should be displayed. You must now also check trans.log in the current directory for the following entry:

B read_con_info_ssfs(): DBSL supports extended connect protocol
B   ==> connect info for default DB will be read from ssfs



----------------------------------------------------------------------
8.  Removing the user data from the platform-specific storage
----------------------------------------------------------------------
After you make sure that the SAP system and its tools are able to retrieve the password information that is required for the initial connect to the ABAP database from the secure storage, you should remove the old platform-specific password storage. Otherwise, you will not benefit from the potential security-relevant improvements in comparison with the old method.

To do this, follow the instructions in the relevant platform notes.

 

SQL> show parameter remote_os_authent

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_os_authent                    boolean     TRUE

SQL> alter system reset remote_os_authent scope=spfile;

System altered.
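
For Oracle, the conventional storage that this section refers to is usually the SAPUSER table owned by the OPS$ user of <sid>adm; the exact removal steps are in the relevant Oracle platform note. As a hedged check before removing anything, you can first see whether such remnants still exist:

sqlplus -s / as sysdba <<'EOF'
-- remaining OPS$ users and SAPUSER tables of the old password storage
select username from dba_users where username like 'OPS$%';
select owner, table_name from dba_tables where table_name = 'SAPUSER';
exit
EOF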

 

 

R3trans -d

cat trans.log:

4 ETW000  [     dev trc,00000]  RSecSSFs: Entering function "RSecSSFsGetRecord" [/bas/740_REL/src/krn/rsec/rsecssfs.c 874]

4 ETW000                                                                                                  83  0.021348

4 ETW000  [     dev trc,00000]  RSecSSFs: Configuration data read from environment parameters [/bas/740_REL/src/krn/rsec/rsecssfs.c 4448]

4 ETW000                                                                                               40479  0.061827

4 ETW000  [     dev trc,00000]  RSecSSFs: Data file "/usr/sap/TEX/SYS/global/security/rsecssfs/data/SSFS_TEX.DAT" opened for read [/bas/740_REL/src/krn/rsec/rsecssfs.c 2563]

4 ETW000 83  0.061910

4 ETW000  [     dev trc,00000]  RSecSSFs: Key file "/usr/sap/TEX/SYS/global/security/rsecssfs/key/SSFS_TEX.KEY" not found, using default key [/bas/740_REL/src/krn/rsec/rsecssfs.c 1426]

4 ETW000 36  0.061946

4 ETW000  [     dev trc,00000]  RSecSSFs: Exiting function "RSecSSFsGetRecord" with return code 0 (message: <No message available>) [/bas/740_REL/src/krn/rsec/rsecssfs.c 942]

4 ETW000 354  0.062300

4 ETW000  [     dev trc,00000]  read_ssfs_record(): DB_CONNECT/DEFAULT_DB_USER read successfully from ssfs

Oracle Update BSP 11.2.0.3.0 to 11.2.0.3.7


Dear Experts,

 

Due to a GoLive Check recommendation, we have been tasked with the update of our Oracle patch from 11.2.0.3.0 to 11.2.0.3.7. Sadly, the installation of this patch has not gone as expected.

 

I have installed the patch as per this link following all the recommendations such as making sure everything is stopped when needed, usage of command fuser for stale sessions, updated both OPatch and MOPatch to the latest available version, and so on.

 

However, during the installation, out of the 61 patches that were supposed to be installed, only 30 were installed successfully. The remaining 31 patches were not installed because of missing prerequisites or conflicts, except for 3 patches of the BSP that actually failed during the installation (9584028, 9458152, 14488478).

 

These 3 patches all failed with the same error. It seems OPatch is trying to copy a file from one folder to another and gets a "file doesn't exist" error.

Note: A lot of people on the Internet / forums have issues during the installation because of authorization problems; this is NOT the case here.

 

All with the similar error: Copy Action: Source file /oracle/S11/112_64/.patch_storage/9584028_Jun_22_2012_11_39_40/files/sap/ora_upgrade/post_upgrade/post_upgrade_checks.sql" does not exist. 'oracle.rdbms, 11.2.0.3.0': Cannot copy file from 'post_upgrade_checks.sql

 

The odd thing is that the patch was compiled on May 15, 2013, but it is somehow referencing a folder from Jun 22, 2012...

 

I can no longer restore the backup that was taken before the update, as several days have passed since then and I cannot make the consultants lose 7 days of work. So, I wonder:

 

1. How can I fix this issue? Has anyone encountered the same problem with these patches?

2. If it cannot be fixed, are these patches critical? SAP said that these patches do NOT modify Oracle binaries, so I don't think they are that critical... but are they a must?

 

Thank you for your time,

Kind regards,

PIU
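
Not from the original post, but a hedged way to see where you actually stand: the OPatch inventory lists which fixes made it into the Oracle home, so you can confirm whether the three failed patches are really missing before deciding how important they are.

# as the Oracle software owner, with ORACLE_HOME set
$ORACLE_HOME/OPatch/opatch lsinventory > /tmp/lsinv.txt
grep -E "9584028|9458152|14488478" /tmp/lsinv.txt || echo "patches not found in inventory"

Whether the missing fixes are critical for your release is something only SAP/Oracle support can confirm, so the inventory output is mainly useful as an attachment to an OSS message.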

Kernel upgrade from 700 to 720


Hello All,

 

The system is running SAP NetWeaver 7.0 with Oracle 10g.

 

Is it possible to upgrade the kernel from 700 to 720?

 

Please let me know of any prechecks to perform before the kernel upgrade.

 

Please advise.
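
As a hedged precheck sketch (the 720/721 kernel is delivered as a downward-compatible kernel for 7.00-based systems, but the exact prerequisites are in the corresponding SAP Notes): first record the currently active kernel release and patch level on each server so you can compare after the switch, and verify that the database connect still works.

# as <sid>adm on each application server
disp+work -V | head -20     # kernel release, patch number and compile information
R3trans -d                  # quick DB connect check; result is written to trans.log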

How to troubleshoot tablespace related issues during/after upgrade

  • Even before starting the upgrade, it is better to check whether the assignment of containers is correct. This can be done with the help of the following SAP Notes, which give the correct mapping between data classes (TABART) and tablespaces:

 

            541542: Upgrade phase INIT_CNTRANS: Container inconsistency

            777615: Incorrect data class/database container assignment

            778784: Inconsistencies between data class and database container

If you find any discrepancy, it is better to correct before the upgrade than having an error during upgrade.

 

  • The upgrade may fail even if the mapping is correct but does not match note 541542. In such cases, it is fine to ignore the error and proceed if the tool permits. If you are not able to proceed, search for notes or KBAs, or raise a message with SAP. A few known issues are covered in the following notes/KBAs:

 

            1589777: Missing owner in menu "System -> Status"

             946135: Error in RSUPDTEC creates incorrect entries in IAORA

 

  • If the upgrade has finished and you go to delete the old tablespace, it may not let you because there are still objects left in it. In such cases, do not delete it using force mode; instead check for objects in the tablespace using the following SQL queries (a complementary DBA_SEGMENTS check is sketched at the end of this article):

                

             select table_name from dba_tables where tablespace_name='PSAPSR3<OLDREL>';

             select index_name from dba_indexes where tablespace_name='PSAPSR3<OLDREL>';

 

       You may need to reorganize these objects using BR*Tools in order to move them to the correct tablespaces. The following KBA describes the situation and helps to resolve such issues:

             

               1715052: tablespace cannot be deleted after upgrade
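
In addition to the table and index views used above, a complementary check against DBA_SEGMENTS (a hedged suggestion, not part of the referenced notes/KBAs) also catches LOB, LOB index and partition segments that are easy to overlook:

sqlplus -s / as sysdba <<'EOF'
select owner, segment_name, segment_type
from   dba_segments
where  tablespace_name = 'PSAPSR3<OLDREL>'
order by segment_type, segment_name;
exit
EOF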

Build a new system using the DB export of a source system


Hello All,

 

I need to build a new system from an existing system. We have already started the DB export on the source system.

 

The new system is also ready. Once the export has completed on the source system, I need to build the new system using the DB export of the source.

 

Please advise on any prechecks to perform before the DB import on the target system.

 

Thanks .

 

Source system :

AIX, Oracle11G and ECC6.0

 

The target is also AIX, with Oracle 11g already installed.

User creation error in "Operating System Users and Groups"


Hello, experts.

 

Now I'm doing a system copy of a distributed system with R3load. The DB instance is Oracle Exadata.

When I execute "Operating System Users and Groups" under the "Additional Preparations Options", an error occurs.

And in the "Create users for SAP system" phase, sapinst disconnects suddenly.

 

Could you help me to solve this?

 

Error message in my console

terminate called after throwing an instance of 'ESyAccountSystemCallFailedImpl_<ESyAccountSystemCallFailed>'

  1. iauxsysex.c:365: child /u01/app/instlog/20130903_3/sapinst_exe.32131.1378186356/sapinst (pid 32143) has crashed. Executable directory is /u01/app/instlog/20130903_3/sapinst_exe.32131.1378186356. Contact Support.
  2. iaextract.c:1094: child has signaled an exec error (-134). Keeping directory /u01/app/instlog/20130903_3/sapinst_exe.32131.1378186356

--------------------------------

Sep 3, 2013 2:37:17 PM [Info]: Stopping service "SAPinstService" ...

Sep 3, 2013 2:37:17 PM [Info]: Service "SAPinstService" stopped.

Sep 3, 2013 2:37:17 PM [Info]: Services stopped.

Sep 3, 2013 2:37:17 PM [Info]: Server shutdown by SAPinstService

 

=======

 

I also tried to execute the DB Instance Installation.

However, a similar error occurred. This time I found the message below in sapinst_dev_user_create.log.

 

sapinst_dev_user_create.log

…………………………

At line 2362 file syuxcuser.cpp

Call stack:

  1. iaxxbprocess.cpp: 36: CIaOsProcess::CEIdJanitor::~CEIdJanitor()
  2. syuxccuren.cpp: 233: CSyCurrentProcessEnvironmentImpl::setEffectiveUser(PSyUserInt, const iastring&)
  3. syxxbuser.cpp: 130: *** syslib entry point CSyUser::getPrimaryGroup(void) const ***
  4. syuxcuser.cpp: 625: PSyGroupImpl CSyUserImpl::getPrimaryGroup()const
  5. syuxcuser.cpp: 2317: CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

 

Return value of function getpwnam(root) is NULL.

Failed action:  with parameters

Error number 207 error type SPECIFIC_CODE

 

 

INFO       2013-09-02 19:43:40.455 [syuxccuren.cpp:285]

           CSyCurrentProcessEnvironmentImpl::setEffectiveGroup(PSyGroupInt)

           lib=syslib module=syslib

Effective group id set to 2005.

 

ERROR      2013-09-02 19:43:40.456 [syuxcuser.cpp:2360]

           CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

           lib=syslib module=syslib

FSH-00006  Return value of function getpwnam(root) is NULL.

 

TRACE      2013-09-02 19:43:40.456 [syuxcuser.cpp:231]

           CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

           lib=syslib module=syslib

Exception thrown near line 2362 in file syuxcuser.cpp

Stack trace:

  1. syuxccuren.cpp: 377: CSyCurrentProcessEnvironmentImpl::set(PSyProcessEnvironmentInt)
  2. syuxccuren.cpp: 233: CSyCurrentProcessEnvironmentImpl::setEffectiveUser(PSyUserInt, const iastring&)
  3. syxxbuser.cpp: 130: *** syslib entry point CSyUser::getPrimaryGroup(void) const ***
  4. syuxcuser.cpp: 625: PSyGroupImpl CSyUserImpl::getPrimaryGroup()const
  5. syuxcuser.cpp: 2317: CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

 

 

At line 2362 file syuxcuser.cpp

Call stack:

  1. syuxccuren.cpp: 377: CSyCurrentProcessEnvironmentImpl::set(PSyProcessEnvironmentInt)
  2. syuxccuren.cpp: 233: CSyCurrentProcessEnvironmentImpl::setEffectiveUser(PSyUserInt, const iastring&)
  3. syxxbuser.cpp: 130: *** syslib entry point CSyUser::getPrimaryGroup(void) const ***
  4. syuxcuser.cpp: 625: PSyGroupImpl CSyUserImpl::getPrimaryGroup()const
  5. syuxcuser.cpp: 2317: CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

 

Return value of function getpwnam(root) is NULL.

Failed action:  with parameters

Error number 207 error type SPECIFIC_CODE

…………………………

 

Regards,

Naomi Yamane
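
Not from the original post, but a hedged reading of the error: getpwnam(root) returning NULL means the operating system cannot resolve the user root through its name service configuration, which usually points to an NSS/LDAP or /etc/nsswitch.conf problem on the host rather than to sapinst itself. A quick check outside sapinst:

# on the host where sapinst crashes
getent passwd root                 # should print the root entry; empty output reproduces the sapinst symptom
id root                            # should resolve uid/gid 0
grep ^passwd /etc/nsswitch.conf    # shows which name services are consulted (files, ldap, nis, ...)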


Lost 000 DDIC PW for installation


Hello,

Now I'm doing an ERP installation with R3load.

 

I exported the R3 data from the source system successfully; however, I noticed that the password I know for the client 000 DDIC user is incorrect.

 

I hear there can be a way to proceed with the installation even though I don't know the 000 DDIC password, but I don't know the details.

 

If someone knows it, please help me.

 

Regards,

Naomi Yamane

Redo Log backup


Dear All;

 

Every week, at the weekend, I take an offline backup plus a redo log backup.

 

I have already used the offline backup for many things, such as the quality system refresh, but I have never used the redo log backup.

 

Can anyone tell me what the redo log backups are used for?

 

Best Regards

~Amal Aloun
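
A hedged illustration of what the saved redo logs are for: they allow a restored backup to be rolled forward to a later point in time instead of only back to the moment of the weekend offline backup. After the data files (and the needed offline redo logs, e.g. via brrestore -a) have been restored, a point-in-time recovery applies the archived redo logs roughly like this; the timestamp is purely illustrative, and in an SAP landscape brrestore/brrecover normally drive these steps:

sqlplus / as sysdba <<'EOF'
startup mount
set autorecovery on
recover database until time '2013-09-10:12:00:00'
alter database open resetlogs;
exit
EOF

Without the redo log backups, recovery can only return the database to the state of the last offline backup; with them, you can recover up to (or close to) the moment of a failure.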

SAP online backup with ARCserve


Hello Team,

 

I'm setting up online backup to tape with ARCserve, but I'm having a problem when BR*Tools hands over the Oracle data files.

 

I'm using ECC 6.0 with an Oracle database on AIX.

 

Does anyone have a step-by-step document for configuring online SAP BR*Tools backups with ARCserve?

 

=========================================================================================

 

'/oracle/SRQ/sapdata2/sr3_5/sr3.data5'.
 
  09/11 17:42:06(18808906) -
  09/11 17:42:06(18808906) - - DSAOpenDataFile(): cannot open file '/oracle/SRQ/sapdata2/sr3_8/sr3.data8'.
 
  09/11 17:42:12(18808906) -
  09/11 17:42:12(18808906) - - DSAOpenDataFile(): cannot open file '/oracle/SRQ/sapdata3/sr3_3/sr3.data3'.
 

===============================================================================

 


The initSRQ.sap settings follow:

 

 

=========================================================================================

 

# @(#) $Id: //bas/720_REL/src/ccm/rsbr/initAIX.sap#11 $ SAP

########################################################################

#                                                                      #

# SAP BR*Tools sample profile.                                         #

# The parameter syntax is the same as for init.ora parameters.         #

# Enclose parameter values which consist of more than one symbol in    #

# double quotes.                                                       #

# After any symbol, parameter definition can be continued on the next  #

# line.                                                                #

# A parameter value list should be enclosed in parentheses, the list   #

# items should be delimited by commas.                                 #

# There can be any number of white spaces (blanks, tabs and new lines) #

# between symbols in parameter definition.                             #

# Comment lines must start with a hash character.                      #

#                                                                      #

########################################################################

# backup mode [all | all_data | full | incr | sap_dir | ora_dir

# | all_dir | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

# | <generic_path> | (<object_list>)]

# default: all

backup_mode = all

# restore mode [all | all_data | full | incr | incr_only | incr_full

# | incr_all | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

# | <generic_path> | (<object_list>) | partial | non_db

# redirection with '=' is not supported here - use option '-m' instead

# default: all

restore_mode = all

# backup type [offline | offline_force | offline_standby | offline_split

# | offline_mirror | offline_stop | online | online_cons | online_split

# | online_mirror | online_standby | offstby_split | offstby_mirror

# default: offline

backup_type = online

# backup device type

# [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk

# | disk_copy | disk_standby | stage | stage_copy | stage_standby

# | util_file | util_file_online | util_vol | util_vol_online

# | rman_util | rman_disk | rman_stage | rman_prep]

# default: tape

backup_dev_type = util_file_online

# backup root directory [<path_name> | (<path_name_list>)]

# default: $SAPDATA_HOME/sapbackup

backup_root_dir = /oracle/SRQ/sapbackup

# stage root directory [<path_name> | (<path_name_list>)]

# default: value of the backup_root_dir parameter

stage_root_dir = /oracle/SRQ/sapbackup

# compression flag [no | yes | hardware | only | brtools]

# default: no

#compress = no

# compress command

# first $-character is replaced by the source file name

# second $-character is replaced by the target file name

# <target_file_name> = <source_file_name>.Z

# for compress command the -c option must be set

# recommended setting for brbackup -k only run:

# "compress -b 12 -c $ > $"

# no default

compress_cmd = "compress -c $ > $"

# uncompress command

# first $-character is replaced by the source file name

# second $-character is replaced by the target file name

# <source_file_name> = <target_file_name>.Z

# for uncompress command the -c option must be set

# no default

uncompress_cmd = "uncompress -c $ > $"

# directory for compression [<path_name> | (<path_name_list>)]

# default: value of the backup_root_dir parameter

compress_dir = /oracle/SRQ/sapbackup

# brarchive function [save | second_copy | double_save | save_delete

# | second_copy_delete | double_save_delete | copy_save

# | copy_delete_save | delete_saved | delete_copied]

# default: save

archive_function = save_delete

# directory for archive log copies to disk

# default: first value of the backup_root_dir parameter

archive_copy_dir = /oracle/SRQ/sapbackup

# directory for archive log copies to stage

# default: first value of the stage_root_dir parameter

archive_stage_dir = /oracle/SRQ/sapbackup

# delete archive logs from duplex destination [only | no | yes | check]

# default: only

# archive_dupl_del = only

# new sapdata home directory for disk_copy | disk_standby

# no default

# new_db_home = /oracle/C11

# stage sapdata home directory for stage_copy | stage_standby

# default: value of the new_db_home parameter

# stage_db_home = /oracle/C11

# original sapdata home directory for split mirror disk backup

# no default

# orig_db_home = /oracle/C11

# remote host name

# no default

# remote_host = <host_name>

# remote user name

# default: current operating system user

# remote_user = <user_name>

# tape copy command [cpio | cpio_gnu | dd | dd_gnu | rman | rman_gnu

# | rman_dd | rman_dd_gnu | brtools | rman_brt]

# default: cpio

tape_copy_cmd = cpio

# disk copy command [copy | copy_gnu | dd | dd_gnu | rman | rman_gnu

# | rman_set | rman_set_gnu | ocopy]

# ocopy - only on Windows

# default: copy

disk_copy_cmd = rman_set

# stage copy command [rcp | scp | ftp | wcp]

# wcp - only on Windows

# default: rcp

stage_copy_cmd = rcp

# pipe copy command [rsh | ssh]

# default: rsh

pipe_copy_cmd = rsh

# flags for cpio output command

# default: -ovB

cpio_flags = -ovB

# flags for cpio input command

# default: -iuvB

cpio_in_flags = -iuvB

# flags for cpio command for copy of directories to disk

# default: -pdcu

# use flags -pdu for gnu tools

cpio_disk_flags = -pdcu

# flags for dd output command

# default: "obs=16k"

# recommended setting:

# Unix:    "obs=nk bs=nk", example: "obs=64k bs=64k"

# Windows: "bs=nk",        example: "bs=64k"

dd_flags = "obs=64k bs=64k"

# flags for dd input command

# default: "ibs=16k"

# recommended setting:

# Unix:    "ibs=nk bs=nk", example: "ibs=64k bs=64k"

# Windows: "bs=nk",        example: "bs=64k"

dd_in_flags = "ibs=64k bs=64k"

# number of members in RMAN save sets [ 1 | 2 | 3 | 4 | tsp | all ]

# default: 1

saveset_members = 1

# additional parameters for RMAN

# following parameters are relevant only for rman_util, rman_disk or

# rman_stage: rman_channels, rman_filesperset, rman_maxsetsize,

# rman_pool, rman_copies, rman_proxy, rman_parms, rman_send

# rman_maxpiecesize can be used to split an incremental backup saveset

# into multiple pieces

# rman_channels defines the number of parallel sbt channel allocations

# rman_filesperset = 0 means:

# one file per save set - for non-incremental backups

# up to 64 files in one save set - for incremental backups

# the others have the same meaning as for native RMAN

# rman_channels = 1

# rman_filesperset = 0

# rman_maxopenfiles = 0

# rman_maxsetsize = 0      # n[K|M|G] in KB (default), in MB or in GB

# rman_maxpiecesize = 0    # n[K|M|G] in KB (default), in MB or in GB

# rman_sectionsize = 0     # n[K|M|G] in KB (default), in MB or in GB

# rman_rate = 0            # n[K|M|G] in KB (default), in MB or in GB

# rman_diskratio = 0

# rman_duration = 0        # <min> - for minimizing disk load

# rman_keep = 0            # <days> - retention time

# rman_pool = 0

# rman_copies = 0 | 1 | 2 | 3 | 4

# rman_proxy = no | yes | only

# rman_parms = "BLKSIZE=65536 ENV=(BACKUP_SERVER=HOSTNAME)"

# rman_send = "'<command>'"

# rman_send = ("channel sbt_1 '<command1>' parms='<parameters1>'",

#              "channel sbt_2 '<command2>' parms='<parameters2>'")

# rman_compress = no | yes

# rman_maxcorrupt = (<dbf_name>|<dbf_id>:<corr_cnt>, ...)

# rman_cross_check = none | archive | arch_force

# remote copy-out command (backup_dev_type = pipe)

# $-character is replaced by current device address

# no default

copy_out_cmd = "dd ibs=8k obs=64k of=$"

# remote copy-in command (backup_dev_type = pipe)

# $-character is replaced by current device address

# no default

copy_in_cmd = "dd ibs=64k obs=8k if=$"

# rewind command

# $-character is replaced by current device address

# no default

# operating system dependent, examples:

# HP-UX:   "mt -f $ rew"

# TRU64:   "mt -f $ rewind"

# AIX:     "tctl -f $ rewind"

# Solaris: "mt -f $ rewind"

# Windows: "mt -f $ rewind"

# Linux:   "mt -f $ rewind"

rewind = "tctl -f $ rewind"

# rewind and set offline command

# $-character is replaced by current device address

# default: value of the rewind parameter

# operating system dependent, examples:

# HP-UX:   "mt -f $ offl"

# TRU64:   "mt -f $ offline"

# AIX:     "tctl -f $ offline"

# Solaris: "mt -f $ offline"

# Windows: "mt -f $ offline"

# Linux:   "mt -f $ offline"

rewind_offline = "tctl -f $ offline"

# tape positioning command

# first $-character is replaced by current device address

# second $-character is replaced by number of files to be skipped

# no default

# operating system dependent, examples:

# HP-UX:   "mt -f $ fsf $"

# TRU64:   "mt -f $ fsf $"

# AIX:     "tctl -f $ fsf $"

# Solaris: "mt -f $ fsf $"

# Windows: "mt -f $ fsf $"

# Linux:   "mt -f $ fsf $"

tape_pos_cmd = "tctl -f $ fsf $"

# mount backup volume command in auto loader / juke box

# used if backup_dev_type = tape_box | pipe_box

# no default

# mount_cmd = "<mount_cmd> $ $ $ [$]"

# dismount backup volume command in auto loader / juke box

# used if backup_dev_type = tape_box | pipe_box

# no default

# dismount_cmd = "<dismount_cmd> $ $ [$]"

# split mirror disks command

# used if backup_type = offline_split | online_split | offline_mirror

# | online_mirror

# no default

# split_cmd = "<split_cmd> [$]"

# resynchronize mirror disks command

# used if backup_type = offline_split | online_split | offline_mirror

# | online_mirror

# no default

# resync_cmd = "<resync_cmd> [$]"

# additional options for SPLITINT interface program

# no default

# split_options = "<split_options>"

# resynchronize after backup flag [no | yes]

# default: no

# split_resync = no

# pre-split command

# no default

# pre_split_cmd = "<pre_split_cmd>"

# post-split command

# no default

# post_split_cmd = "<post_split_cmd>"

# pre-shut command

# no default

# pre_shut_cmd = "<pre_shut_cmd>"

# post-shut command

# no default

# post_shut_cmd = "<post_shut_cmd>"

# pre-archive command

# no default

# pre_arch_cmd = "<pre_arch_cmd> [$]"

# post-archive command

# no default

# post_arch_cmd = "<post_arch_cmd> [$]"

# pre-backup command

# no default

# pre_back_cmd = "<pre_back_cmd> [$]"

# post-backup command

# no default

# post_back_cmd = "<post_back_cmd> [$]"

# volume size in KB = K, MB = M or GB = G (backup device dependent)

# default: 1200M

# recommended values for tape devices without hardware compression:

# 60 m   4 mm  DAT DDS-1 tape:    1200M

# 90 m   4 mm  DAT DDS-1 tape:    1800M

# 120 m  4 mm  DAT DDS-2 tape:    3800M

# 125 m  4 mm  DAT DDS-3 tape:   11000M

# 112 m  8 mm  Video tape:        2000M

# 112 m  8 mm  high density:      4500M

# DLT 2000     10/20 GB:         10000M

# DLT 2000XT   15/30 GB:         15000M

# DLT 4000     20/40 GB:         20000M

# DLT 7000     35/70 GB:         35000M

# recommended values for tape devices with hardware compression:

# 60 m   4 mm  DAT DDS-1 tape:    1000M

# 90 m   4 mm  DAT DDS-1 tape:    1600M

# 120 m  4 mm  DAT DDS-2 tape:    3600M

# 125 m  4 mm  DAT DDS-3 tape:   10000M

# 112 m  8 mm  Video tape:        1800M

# 112 m  8 mm  high density:      4300M

# DLT 2000     10/20 GB:          9000M

# DLT 2000XT   15/30 GB:         14000M

# DLT 4000     20/40 GB:         18000M

# DLT 7000     35/70 GB:         30000M

tape_size = 100G

# volume size in KB = K, MB = M or GB = G used by brarchive

# default: value of the tape_size parameter

# tape_size_arch = 100G

# tape block size in KB for brtools as tape copy command on Windows

# default: 64

# tape_block_size = 64

# rewind and set offline for brtools as tape copy command on Windows

# yes | no

# default: yes

# tape_set_offline = yes

# level of parallel execution

# default: 0 - set to number of backup devices

exec_parallel = 0

# address of backup device without rewind

# [<dev_address> | (<dev_address_list>)]

# no default

# operating system dependent, examples:

# HP-UX:   /dev/rmt/0mn

# TRU64:   /dev/nrmt0h

# AIX:     /dev/rmt0.1

# Solaris: /dev/rmt/0mn

# Windows: /dev/nmt0

# Linux:   /dev/nst0

tape_address = /dev/rmt0.1

# address of backup device without rewind used by brarchive

# default: value of the tape_address parameter

# operating system dependent

# tape_address_arch = /dev/rmt0.1

# address of backup device with rewind

# [<dev_address> | (<dev_address_list>)]

# no default

# operating system dependent, examples:

# HP-UX:   /dev/rmt/0m

# TRU64:   /dev/rmt0h

# AIX:     /dev/rmt0

# Solaris: /dev/rmt/0m

# Windows: /dev/mt0

# Linux:   /dev/st0

tape_address_rew = /dev/rmt0

# address of backup device with rewind used by brarchive

# default: value of the tape_address_rew parameter

# operating system dependent

# tape_address_rew_arch = /dev/rmt0

# address of backup device with control for mount/dismount command

# [<dev_address> | (<dev_address_list>)]

# default: value of the tape_address_rew parameter

# operating system dependent

# tape_address_ctl = /dev/...

# address of backup device with control for mount/dismount command

# used by brarchive

# default: value of the tape_address_rew_arch parameter

# operating system dependent

# tape_address_ctl_arch = /dev/...

# volumes for brarchive

# [<volume_name> | (<volume_name_list>) | SCRATCH]

# no default

volume_archive = (SRQA01, SRQA02, SRQA03, SRQA04, SRQA05,

                  SRQA06, SRQA07, SRQA08, SRQA09, SRQA10,

                  SRQA11, SRQA12, SRQA13, SRQA14, SRQA15,

                  SRQA16, SRQA17, SRQA18, SRQA19, SRQA20,

                  SRQA21, SRQA22, SRQA23, SRQA24, SRQA25,

                  SRQA26, SRQA27, SRQA28, SRQA29, SRQA30)

# volumes for brbackup

# [<volume_name> | (<volume_name_list>) | SCRATCH]

# no default

volume_backup = (SRQB01, SRQB02, SRQB03, SRQB04, SRQB05,

                 SRQB06, SRQB07, SRQB08, SRQB09, SRQB10,

                 SRQB11, SRQB12, SRQB13, SRQB14, SRQB15,

                 SRQB16, SRQB17, SRQB18, SRQB19, SRQB20,

                 SRQB21, SRQB22, SRQB23, SRQB24, SRQB25,

                 SRQB26, SRQB27, SRQB28, SRQB29, SRQB30)

# expiration period in days for backup volumes

# default: 30

expir_period = 30

# recommended usages of backup volumes

# default: 100

tape_use_count = 100

# backup utility parameter file

# default: no parameter file

# null - no parameter file

# util_par_file = initSRQ.utl

# backup utility parameter file for volume backup

# default: no parameter file

# null - no parameter file

# util_vol_par_file = initSRQ.vol

# additional options for BACKINT interface program

# no default

# "" - no additional options

# util_options = "<backint_options>"

# additional options for BACKINT volume backup type

# no default

# "" - no additional options

# util_vol_options = "<backint_options>"

# path to directory BACKINT executable will be called from

# default: sap-exe directory

# null - call BACKINT without path

# util_path = <dir>|null

# path to directory BACKINT will be called from for volume backup

# default: sap-exe directory

# null - call BACKINT without path

# util_vol_path = <dir>|null

# disk volume unit for BACKINT volume backup type

# [disk_vol | sap_data | all_data | all_dbf]

# default: sap_data

# util_vol_unit = <unit>

# additional access to files saved by BACKINT volume backup type

# [none | copy | mount | both]

# default: none

# util_vol_access = <access>

# negative file/directory list for BACKINT volume backup type

# [<file_dir_name> | (<file_dir_list>) | no_check]

# default: none

# util_vol_nlist = <nlist>

# mount/dismount command parameter file

# default: no parameter file

# mount_par_file = initSRQ.mnt

# Oracle connection name to the primary database

# [primary_db = <conn_name> | LOCAL]

# no default

# primary_db = <conn_name>

# Oracle connection name to the standby database

# [standby_db = <conn_name> | LOCAL]

# no default

# standby_db = <conn_name>

# description of parallel instances for Oracle RAC

# parallel_instances = <inst_desc> | (<inst_desc_list>)

# <inst_desc_list>   - <inst_desc>[,<inst_desc>...]

# <inst_desc>        - <Oracle_sid>:<Oracle_home>@<conn_name>

# <Oracle_sid>       - Oracle system id for parallel instance

# <Oracle_home>      - Oracle home for parallel instance

# <conn_name>        - Oracle connection name to parallel instance

# Please include the local instance in the parameter definition!

# default: no parallel instances

# example for initRAC001.sap:

# parallel_instances = (RAC001:/oracle/RAC/920_64@RAC001,

# RAC002:/oracle/RAC/920_64@RAC002, RAC003:/oracle/RAC/920_64@RAC003)

# local Oracle RAC database homes [no | yes]

# default: no - shared database homes

# loc_ora_homes = yes

# handling of Oracle RAC database services [no | yes]

# default: no

# db_services = yes

# database owner of objects to be checked

# <owner> | (<owner_list>)

# default: all SAP owners

# check_owner = SAPSR3

# database objects to be excluded from checks

# all_part | non_sap | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# check_exclude = (SDBAH, SAPSR3.SDBAD)

# special database check conditions

# ("<type>:<cond>:<active>:<sever>:[<chkop>]:[<chkval>]:[<unit>]", ...)

# check_cond = (<cond_list>)

# database owner of SDBAH, SDBAD and XDB tables for cleanup

# <owner> | (<owner_list>)

# default: all SAP owners

# cleanup_owner = SAPSR3

# retention period in days for brarchive log files

# default: 30

# cleanup_brarchive_log = 30

# retention period in days for brbackup log files

# default: 30

# cleanup_brbackup_log = 30

# retention period in days for brconnect log files

# default: 30

# cleanup_brconnect_log = 30

# retention period in days for brrestore log files

# default: 30

# cleanup_brrestore_log = 30

# retention period in days for brrecover log files

# default: 30

# cleanup_brrecover_log = 30

# retention period in days for brspace log files

# default: 30

# cleanup_brspace_log = 30

# retention period in days for archive log files saved on disk

# default: 30

# cleanup_disk_archive = 30

# retention period in days for database files backed up on disk

# default: 30

# cleanup_disk_backup = 30

# retention period in days for brspace export dumps and scripts

# default: 30

# cleanup_exp_dump = 30

# retention period in days for Oracle trace and audit files

# default: 30

# cleanup_ora_trace = 30

# retention period in days for records in SDBAH and SDBAD tables

# default: 100

# cleanup_db_log = 100

# retention period in days for records in XDB tables

# default: 100

# cleanup_xdb_log = 100

# retention period in days for database check messages

# default: 100

# cleanup_check_msg = 100

# database owner of objects to adapt next extents

# <owner> | (<owner_list>)

# default: all SAP owners

# next_owner = SAPSR3

# database objects to adapt next extents

# all | all_ind | special | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: all abjects of selected owners, example:

# next_table = (SDBAH, SAPSR3.SDBAD)

# database objects to be excluded from adapting next extents

# all_part | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# next_exclude = (SDBAH, SAPSR3.SDBAD)

# database objects to get special next extent size

# allsel:<size>[/<limit>] | [<owner>.]<table>:<size>[/<limit>]

# | [<owner>.]<index>:<size>[/<limit>]

# | [<owner>.][<prefix>]*[<suffix>]:<size>[/<limit>]

# | (<object_size_list>)

# default: according to table category, example:

# next_special = (SDBAH:100K, SAPSR3.SDBAD:1M/200)

# maximum next extent size

# default: 2 GB - 5 * <database_block_size>

# next_max_size = 1G

# maximum number of next extents

# default: 0 - unlimited

# next_limit_count = 300

# database owner of objects to update statistics

# <owner> | (<owner_list>)

# default: all SAP owners

# stats_owner = SAPSR3

# database objects to update statistics

# all | all_ind | all_part | missing | info_cubes | dbstatc_tab

# | dbstatc_mon | dbstatc_mona | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# | harmful | locked | system_stats | oradict_stats | oradict_tab

# default: all abjects of selected owners, example:

# stats_table = (SDBAH, SAPSR3.SDBAD)

# database objects to be excluded from updating statistics

# all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# stats_exclude = (SDBAH, SAPSR3.SDBAD)

# method for updating statistics for tables not in DBSTATC

# E | EH | EI | EX | C | CH | CI | CX | A | AH | AI | AX | E= | C= | =H

# | =I | =X | +H | +I

# default: according to internal rules

# stats_method = E

# sample size for updating statistics for tables not in DBSTATC

# P<percentage_of_rows> | R<thousands_of_rows>

# default: according to internal rules

# stats_sample_size = P10

# number of buckets for updating statistics with histograms

# default: 75

# stats_bucket_count = 75

# threshold for collecting statistics after checking

# <threshold> | (<threshold> [, all_part:<threshold>

# | info_cubes:<threshold> | [<owner>.]<table>:<threshold>

# | [<owner>.][<prefix>]*[<suffix>]:<threshold>

# | <tablespace>:<threshold> | <object_list>])

# default: 50%

# stats_change_threshold = 50

# number of parallel threads for updating statistics

# default: 1

# stats_parallel_degree = 1

# processing time limit in minutes for updating statistics

# default: 0 - no limit

# stats_limit_time = 0

# parameters for calling DBMS_STATS supplied package

# all:R|B|H|G[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | all_part:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | info_cubes:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | [<owner>.]<table>:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | [<owner>.][<prefix>]*[<suffix>]:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0

# |<degree>|A|D | (<object_list>) | NO

# R|B - sampling method:

# 'R' - row sampling, 'B' - block sampling,

# 'H' - histograms by row sampling, 'G' - histograms by block sampling

# [<buckets>|A|S|R|D] - buckets count:

# <buckets> - histogram buckets count, 'A' - auto buckets count,

# 'S' - skew-only, 'R' - repeat, 'D' - default buckets count (75)

# [A|I|P|X|D] - columns with histograms:

# 'A' - all columns, 'I' - indexed columns, 'P' - partition columns,

# 'X' - indexed and partition columns, 'D' - default columns

# 0|<degree>|A|D - parallel degree:

# '0' - default table degree, <degree> - dbms_stats parallel degree,

# 'A' - dbms_stats auto degree, 'D' - default Oracle degree

# default: ALL:R:0

# stats_dbms_stats = ([ALL:R:1,][<owner>.]<table>:R:<degree>,...)

# definition of info cube tables

# default | rsnspace_tab | [<owner>.]<table>

# | [<owner>.][<prefix>]*[<suffix>] | (<object_list>) | null

# default: rsnspace_tab

# stats_info_cubes = (/BIC/D*, /BI0/D*, ...)

# special statistics settings

# (<table>:[<owner>]:<active>:[<method>]:[<sample>], ...)

# stats_special = (<special_list>)

# update cycle in days for dictionary statistics within standard runs

# default: 0 - no update

# stats_dict_cycle = 100

# method for updating Oracle dictionary statistics

# C - compute | E - estimate | A - auto sample size

# default: C

# stats_dict_method = C

# sample size for updating dictionary statistics (stats_dict_method = E)

# <percent> (1-100)

# default: auto sample size

# stats_dict_sample = 10

# parallel degree for updating dictionary statistics

# auto | default | null | <degree> (1-256)

# default: Oracle default

# stats_dict_degree = 4

# update cycle in days for system statistics within standard runs

# default: 0 - no update

# stats_system_cycle = 100

# interval for updating Oracle system statistics

# 0 - NOWORKLOAD, >0 - interval in minutes

# default: 0

# stats_system_interval = 0

# database objects to be excluded from validating structure

# null | all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: value of the stats_exclude parameter, example:

# valid_exclude = (SDBAH, SAPSR3.SDBAD)

# recovery type [complete | dbpit | tspit | reset | restore | apply

# | disaster]

# default: complete

# recov_type = complete

# directory for brrecover file copies

# default: $SAPDATA_HOME/sapbackup

# recov_copy_dir = /oracle/SRQ/sapbackup

# time period in days for searching for backups

# 0 - all available backups, >0 - backups from n last days

# default: 30

# recov_interval = 30

# degree of paralelism for applying archive log files

# 0 - use Oracle default parallelism, 1 - serial, >1 - parallel

# default: Oracle default

# recov_degree = 0

# number of lines for scrolling in list menus

# 0 - no scrolling, >0 - scroll n lines

# default: 20

# scroll_lines = 20

# time period in days for displaying profiles and logs

# 0 - all available logs, >0 - logs from n last days

# default: 30

# show_period = 30

# directory for brspace file copies

# default: $SAPDATA_HOME/sapreorg

# space_copy_dir = /oracle/SRQ/sapreorg

# directory for table export dump files

# default: $SAPDATA_HOME/sapreorg

# exp_dump_dir = /oracle/SRQ/sapreorg

# database tables for reorganization

# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

# no default

# reorg_table = (SDBAH, SAPSR3.SDBAD)

# table partitions for reorganization

# [[<owner>.]<table>.]<partition>

# | [[<owner>.]<table>.][<prefix>]%[<suffix>]

# | [[<owner>.]<table>.][<prefix>]*[<suffix>] | (<tabpart_list>)

# no default

# reorg_tabpart = (PART1, PARTTAB1.PART2, SAPSR3.PARTTAB2.PART3)

# database indexes for rebuild

# [<owner>.]<index> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<index_list>)

# no default

# rebuild_index = (SDBAH~0, SAPSR3.SDBAD~0)

# index partitions for rebuild

# [[<owner>.]<index>.]<partition>

# | [[<owner>.]<index>.][<prefix>]%[<suffix>]

# | [[<owner>.]<index>.][<prefix>]*[<suffix>] | (<indpart_list>)

# no default

# rebuild_indpart = (PART1, PARTIND1.PART2, SAPSR3.PARTIND2.PART3)

# database tables for export

# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

# no default

# exp_table = (SDBAH, SAPSR3.SDBAD)

# database tables for import

# <table> | (<table_list>)

# no default

# do not specify table owner in the list - use -o|-owner option for this

# imp_table = (SDBAH, SDBAD)

# Oracle system id of ASM instance

# default: +ASM

# asm_ora_sid = <asm_inst> | (<db_inst1>:<asm_inst1>,

# <db_inst2>:<asm_inst2>, <db_inst3>:<asm_inst3>, ...)

# asm_ora_sid = (RAC001:+ASM1, RAC002:+ASM2, RAC003:+ASM3, RAC004:+ASM4)

# asm_ora_sid = +ASM

# Oracle home of ASM instance

# no default

# asm_ora_home = <asm_home> | (<db_inst1>:<asm_home1>,

# <db_inst2>:<asm_home2>, <db_inst3>:<asm_home3>, ...)

# asm_ora_home = (RAC001:/oracle/GRID/11202, RAC002:/oracle/GRID/11202,

# RAC003:/oracle/GRID/11202, RAC004:/oracle/GRID/11202)

# asm_ora_home = /oracle/GRID/11202

# Oracle ASM root directory name

# default: ASM

# asm_root_dir = <asm_root>

# asm_root_dir = ASM

===========================================================================================
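
One hedged observation on the profile above, not a confirmed diagnosis: with backup_dev_type = util_file_online, brbackup hands the data files to the external backup tool through the BACKINT interface, and in that setup the util_par_file parameter normally has to point to the vendor's backint parameter file; in the profile it is still commented out (# util_par_file = initSRQ.utl). The actual file name and contents are defined by the ARCserve SAP agent documentation, so the following line is only a placeholder:

util_par_file = initSRQ.utl

(Name and location assumed; use the parameter file delivered with or configured for the ARCserve SAP agent.) Independently of that, the DSAOpenDataFile "cannot open file" messages can also be a simple permission problem between the user running the backup agent and the sapdata files, so both are worth checking.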

 
Regards,

Thiago

SAP Bundle Patch error


Hello Gurus,

 

I am not able to install SBP patches on my Oracle database.

Kindly advise on this...

 

Getting pre-run patch inventory...
Getting pre-run patch inventory...done.

Analyzing installed patches...
Analyzing installed patches...failed.

Cannot verify lists of installed patches.
Refer to log file
  $ORACLE_HOME/cfgtoollogs/mopatch/mopatch-2013_09_12-13-11-26.log
for more information.
rubidium:oraxh1 126>

 

Thanks and Regards,

Prasad
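
Not from the original post, but a hedged first check: MOPatch drives OPatch under the hood, so when it reports "Cannot verify lists of installed patches" it is worth confirming that OPatch itself can read the inventory, and then looking at the log file it names.

# as the Oracle software owner, with ORACLE_HOME set
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lsinventory
more $ORACLE_HOME/cfgtoollogs/mopatch/mopatch-2013_09_12-13-11-26.log

If opatch lsinventory already fails, the problem lies in the Oracle inventory (or the OPatch version), not in the SBP itself.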

ORA-20003 error on the following job: brconnect -u / -c -f stats -t oradict_stats


Hi all. When I try to run brconnect -u / -c -f stats -t oradict_stats, I get the following errors. I have followed SAP Note 838725 (Oracle Database 10g: New database statistics), but I am not clear on how to overcome this error.

 

 

sapql2:oraql2 1> brconnect -u / -c -f stats -t oradict_stats

BR0801I BRCONNECT 7.00 (16)

BR0805I Start of BRCONNECT processing: cebliuwm.sta 2009-09-11 23.25.28

 

BR0280I BRCONNECT time stamp: 2009-09-11 23.25.29

BR0807I Name of database instance: QL2

BR0808I BRCONNECT action ID: cebliuwm

BR0809I BRCONNECT function ID: sta

BR0810I BRCONNECT function: stats

BR0812I Database objects for processing: ORADICT_STATS

BR1314I Oracle dictionary statistics will be collected with default options

BR0126I Unattended mode active - no operator confirmation required

 

BR0280I BRCONNECT time stamp: 2009-09-11 23.25.29

BR1311I Starting collection of Oracle dictionary statistics...

BR0285I This function can take several seconds/minutes - be patient...

BR0280I BRCONNECT time stamp: 2009-09-11 23.25.30

 

BR0301E SQL error -20003 at location stats_oradict_collect-1, SQL statement:

'BEGIN DBMS_STATS.GATHER_DICTIONARY_STATS (ESTIMATE_PERCENT => NULL, METHOD_OPT

=> 'FOR ALL COLUMNS SIZE AUTO', GRANULARITY => 'ALL', CASCADE => TRUE, OPTIONS =

> 'GATHER', NO_INVALIDATE => FALSE); END;'

ORA-20003: Specified bug number (5099019) does not exist

ORA-06512: at "SYS.DBMS_STATS", line 14379

ORA-06512: at "SYS.DBMS_STATS", line 14725

ORA-06512: at "SYS.DBMS_STATS", line 17028

ORA-06512: at "SYS.DBMS_STATS", line 17070

ORA-06512: at line 1

BR1313E Collection of Oracle dictionary statistics failed

 

 

BR0806I End of BRCONNECT processing: cebliuwm.sta 2009-09-11 23.25.30

BR0280I BRCONNECT time stamp: 2009-09-11 23.25.30

BR0804I BRCONNECT terminated with errors

 

regards,

rahul
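
A hedged pointer rather than a confirmed fix: ORA-20003 "Specified bug number (...) does not exist" is raised inside DBMS_STATS when it looks up a fix control that this instance does not know about, which is often a sign that a patch's post-installation SQL steps or the catalog scripts were not run completely. As a first check, see whether the bug number is known to the instance at all:

sqlplus -s / as sysdba <<'EOF'
select bugno, value, description
from   v$system_fix_control
where  bugno = 5099019;
exit
EOF

If the row is missing, that supports the dictionary/patch-level explanation and is useful information for an OSS message.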
