Channel: SCN : All Content - SAP MaxDB

Error upgrading from MaxDB 7.6.00.19 to 7.8 in SAP ECC 6.0 EHP4


Hello All,

 

I want to upgrade MaxDB to 7.8.01.14, as it is required for the EHP5 upgrade. The OS is Windows 2003 64-bit, SAP ECC 6.0 EHP4 (NW 7.01).

I start DBUPDATE.BAT from the command prompt:

 

C:\RDBMS_MaxDB_7.8.01_Build_14\DATA_UNITS\MAXDB_WINDOWS_X86_64>DBUPDATE.BAT -s WT9 -d WT9 -u superdba,zapkmd00

 

Below is the error I am getting. I am pasting the error log present in E:\sdb\data\wrk\SDBUPDMsg1303306914.log:

 

 

ERR:  unhandled exception: Can't call method "IsDummy" on an undefined value at SDB/Install/Installation/Compat.pm line 910.

 

ERR:  Upgrade failed

ERR:    unhandled exception: Can't call method "IsDummy" on an undefined value at SDB/Install/Installation/Compat.pm line 910.

 

Please help!

 

Regards

Jain Pankaj


MaxDB: index queueing during system copy


Hello MaxDB specialists,

 

Maybe you are familiar with the index queueing feature used by the R3load method during a migration / system copy.

Sometimes this sequential method does not perform well when the needed data is no longer in the data cache.

 

During a migration / system copy you may face the following scenario: during the import you see one last R3load process of a package (before the SAPVIEW package) taking a lot of time, looking like this:

R3load -i SWWLOGHIST__DTP.cmd -dbcodepage 1100 -k <MIGkey> -l SWWLOGHIST__DTP.log -nolog -c 50000 -force_repeat -loadprocedure dbsl

 

The name of the package is not relevant because it is chosen randomly, but it is always a DTP package.

 

  1. You want to know how long it will take to finish.
  2. You want to know how many indexes are still queued.
  3. Maybe you want to speed up the procedure and skip this index creation, just to finish the import and create the missing indexes in your post-processing.

 

 

First, about the index queueing (SAP Note 1464560 - FAQ: R3load in MaxDB):

 

Q: Why are the indexes created after loading?

 

A: MaxDB has an optimized "internal parallel create index" procedure, which is much faster than maintaining the indexes during loading. You also cannot use the fast loader if indexes already exist on the table.

 

 

Q: What is index queueing in the R3load for MaxDB?

 

A: The optimized "internal parallel create index" can be used by a user task of the database. Other create index statements in a user task are executed serially in the slow create index mode.

 

Exactly one R3load uses exactly one user task of the database, which means that parallel R3loads cannot use the "internal parallel create index" simultaneously.

For MaxDB, therefore, an index queueing via the table "/MAXDB/INDEXSTATEMENTS" is used so that only one R3load executes "create index" at a time.

 

 

Q: What is the importance of the R3load option "-para_cnt <count>"?

 

A: The R3load option "-para_cnt <count>" specifies how many R3load processes the user has started, and it affects the index queueing.

 

R3load does not use the "internal parallel create index" for tables in which indexes are to be created and that use less space in the database than 1/n of the data cache (where n = count).

R3load creates these indexes serially in create index mode. For these tables, you can assume that the data records are still in the data cache during the create index.

In this case, it is useful to avoid the "internal parallel create index" for the entire runtime of the import because the serial mode is only marginally slower than the "internal parallel create index" if the data records are in the cache.

 

So all big indexes won't be created in their own package, even if a separate DTP package (a package without load data, create table, and create primary index) was created via the omit option (DTPIV). They will be queued in the table /MAXDB/INDEXSTATEMENTS.

A short explanation of the omit options (source: Migration Monitor documentation):

 

-o D: omit data; do not load data

-o T: omit tables; do not create tables

-o P: omit primary keys; do not create primary keys

-o I: omit indexes; do not create indexes

-o V: omit views; do not create views

Only in the case of the HANA database:

-o M: omit merge; do not merge

-o U: omit unload; do not unload table

 

 

 

So when an index is chosen for the index queueing method, you will see this in the log of the affected package:

JEST__DTP.log:(DB) INFO: JEST~I created later with other R3load process#20141108094527

JEST__DTP.log:(DB) INFO: JEST~Z01 created later with other R3load process#20141108094527

 

If you want to know the current content of the queue, you can use sqlcli:

 

sqlcli -d <DBSID> -u <SchemaUser>,<PW>

sqlcli <SID>=> \dc /MAXDB/INDEXSTATEMENTS

Table "<SCHEMA>./MAXDB/INDEXSTATEMENTS"

| Column Name | Type          | Length | Nullable | KEYPOS |

| ----------- | ------------- | ------ | -------- | ------ |

| TABLESIZE   | FIXED         | 18     | NO       | 1      |

| TABLENAME   | VARCHAR ASCII | 40     | NO       | 2      |

| INDEXNAME   | VARCHAR ASCII | 40     | NO       | 3      |

| SESSION     | FIXED         | 10     | YES      |        |

| THISNODE    | VARCHAR ASCII | 64     | YES      |        |

| STMT        | CLOB ASCII    | -      | YES      |        |

 

 

sqlcli <SID>=> select * from /MAXDB/INDEXSTATEMENTS

| TABLESIZE | TABLENAME                                 | INDEXNAME      | SESSION | THISNODE          |
| --------- | ----------------------------------------- | -------------- | ------- | ----------------- |
| -1        | ## this row locks parallel create index## | #              | 247     | <hostname.DOMAIN> |
| 227353    | JEST                                      | JEST~Z01       | ?       | ?                 |
| 487161    | SWW_CONTOB                                | SWW_CONTOB~A   | ?       | ?                 |
| 487161    | SWW_CONTOB                                | SWW_CONTOB~ZZ1 | ?       | ?                 |
| 2709569   | COEJ                                      | COEJ~1         | ?       | ?                 |
| 2892158   | COEP                                      | COEP~4         | ?       | ?                 |
| 2892158   | COEP                                      | COEP~Z1        | ?       | ?                 |
| 2892158   | COEP                                      | COEP~Z2        | ?       | ?                 |

 

 

But there is no estimate of how long one or all indexes will take to finish. You can only refer to the table size and compare it with other index creation runtimes.
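For a rough overview you can count the queued indexes and sum up their table sizes. A minimal sketch, assuming the column definitions shown above (TABLESIZE > 0 filters out the lock row):

sqlcli <SID>=> SELECT COUNT(*), SUM(TABLESIZE) FROM /MAXDB/INDEXSTATEMENTS WHERE TABLESIZE > 0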

 

 

If you want to skip this, just kill the R3load process and delete/truncate or drop the table /MAXDB/INDEXSTATEMENTS. In the TSK file (if used) of the aborted DTP package, set any "err" status to "ok" or "ign".

Set the status of the DTP package in import_monitor_cmd to "+" and restart the import monitor; the procedure will continue with the last package, SAPVIEW.
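A hedged sketch of the cleanup step just described, following the interactive sqlcli style from above (DELETE keeps the table for later reuse, DROP removes it entirely):

sqlcli <SID>=> DELETE FROM /MAXDB/INDEXSTATEMENTS

or

sqlcli <SID>=> DROP TABLE /MAXDB/INDEXSTATEMENTS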

You can create the missing indexes with a mass create action in DB02/SE14

 

or

 

if you still want to use the R3load index queueing feature, then you must not drop or delete the entries in /MAXDB/INDEXSTATEMENTS.

Just skip the DTP package and wait for the SAPVIEW package; note that if any error occurs during the index creation, the package will also abort.

Check the following tables:

          /MAXDB/INDEXSTATE

          /MAXDB/INDEXSTMTS

          /MAXDB/INDEXSTMTS_NEW

          /MAXDB/INDEXSTATEMENTS

If they have entries, back them up to temporary tables, because the tables will be dropped by SWPM in the post-procedure. Just copy the tables back after sapinst has completed its work.
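A hedged sketch of such a backup, assuming CREATE TABLE ... AS SELECT is available in your MaxDB version; the temp-table name ZQINDEXSTATEMENTS is hypothetical, and the key definition shown above is not carried over, so this is only a data copy:

sqlcli <SID>=> CREATE TABLE ZQINDEXSTATEMENTS AS SELECT * FROM /MAXDB/INDEXSTATEMENTS

After SWPM has finished and the original table exists again:

sqlcli <SID>=> INSERT INTO /MAXDB/INDEXSTATEMENTS SELECT * FROM ZQINDEXSTATEMENTS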

Afterwards, start R3load (with the import environment) with the option "-create_queued_indexes". This will process the queue again.
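For illustration, a hypothetical invocation; the exact accompanying options depend on your import environment:

R3load -create_queued_indexes -dbcodepage 1100 -l create_queued_indexes.log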

Finally, drop the /MAXDB/INDEX* tables to avoid issues with unknown DDIC objects (SAP Note 780043 - Additional "/MAXDB/INDEX*" tables exist in the system).

 

I hope this helps you understand the feature and optimize/speed up your import procedure.

 

 

If you have any further questions, don't hesitate to comment on the blog or contact me or one of my colleagues at Q-Partners ( info_at_qpcm_dot_de ).

 

 

 

Best Regards,

Jens Gleichmann

 

Technology Consultant at Q-Partners (www.qpcm.eu)

MaxDB restoration error


When I try to restore MaxDB data from source to destination, I get the error below:

 

-24988 SQL error [backup_restore "<SID>_DB_BACKUP" DATA]; -7900, Different Block Sizes
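Error -7900 suggests the block size of the backup does not match the medium definition on the target. A hedged first check is to list the medium definitions on both sides with dbmcli and compare the block size column:

dbmcli -d <SID> -u <dbm-user>,<password> medium_getall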

 

Please help me in resolving this.

Help -- DBStudio not starting


Hi all,

 

MaxDB Studio is not starting after some Windows updates. I've reinstalled it, but nothing changes; it simply does nothing, and I don't receive any error message.

I execute the program as Administrator and MaxDB is up & running.

My DB Studio version is maxdb-studio-win-64bit-x86_64-7_8_02_21 under Windows 7 x64 with Java 1.8.0_25.

 

Does anybody have any idea what the problem could be?

 

Thanks in advance

R3trans fails from RHEL6 application server


I have an SCM 7.0 Linux/MaxDB application server that was working fine, but now that it has been upgraded to RHEL6, it won't connect to the DB server. From the application server, I run R3trans -dwv, and it returns this:

 

> R3trans -dwv

This is R3trans version 6.24 (release 720 - 08.07.13 - 20:13:05 ).

unicode enabled version

2EETW169 no connect possible: "maybe someone set invalid values for DIR_LIBRARY ('/usr/sap/APP/SYS/exe/run') or dbms_type ('ada')"

R3trans finished (0012).

 

I've already verified that dbms_type = ada and DIR_LIBRARY = /usr/sap/APP/SYS/exe/run in the environment.

 

When I look at the trans.log file that is created, here is what I'm seeing:

 

> more trans.log

4 ETW000 R3trans version 6.24 (release 720 - 08.07.13 - 20:13:05 ).

4 ETW000 unicode enabled version

4 ETW000 ===============================================

4 ETW000

4 ETW000 date&time   : 10.11.2014 - 17:05:32

4 ETW000 control file: <no ctrlfile>

4 ETW000 R3trans was called as follows: R3trans -dwv

4 ETW000  trace at level 2 opened for a given file pointer

4 ETW000  [     dev trc,00000]  Mon Nov 10 17:05:32 2014                                                  44  0.0

00044

4 ETW000  [     dev trc,00000]  db_con_init called                                                        13  0.0

00057

4 ETW000  [     dev trc,00000]  set_use_ext_con_info(): usage of ssfs switched off (rsdb/ssfs_connect=0)

4 ETW000                                                                                                  18  0.0

00075

4 ETW000  [     dev trc,00000]  determine_block_commit: no con_hdl found as blocked for con_name = R/3

4 ETW000                                                                                                  13  0.0

00088

4 ETW000  [     dev trc,00000]  create_con (con_name=R/3)                                                  7  0.0

00095

4 ETW000  [     dev trc,00000]  Loading DB library '/usr/sap/APP/SYS/exe/run/dbsdbslib.so' ...            25  0.0

00120

4 ETW000  [    dlux.c  ,00000]  *** ERROR => DlLoadLib()==DLENOACCESS - dlopen("/usr/sap/APP/SYS/exe/run/dbsdbslib.so") FAILED

4 ETW000                          "libSQLDBC77.so: cannot open shared object file: No such file or directory"

4 ETW000                                                                                                 709  0.0

00829

4 ETW000  [    dbcon.c ,00000]  *** ERROR => Couldn't load library '/usr/sap/APP/SYS/exe/run/dbsdbslib.so'

4 ETW000                                                                                                  61  0.0

00890

2EETW169 no connect possible: "maybe someone set invalid values for DIR_LIBRARY ('/usr/sap/APP/SYS/exe/run') or dbms_type ('ada')"

 

I'm concerned about the libSQLDBC77.so error - as it lives in /sapdb/programs/lib on the db server, but /sapdb isn't mounted on the application server.
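A quick diagnostic sketch with standard Linux tools: the loader error above says libSQLDBC77.so cannot be resolved when dbsdbslib.so is opened, so check the dependency and the loader path on the application server:

ldd /usr/sap/APP/SYS/exe/run/dbsdbslib.so | grep -i sqldbc
echo $LD_LIBRARY_PATH
ls -l /sapdb/programs/lib/libSQLDBC77.so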

 

Any help will be appreciated...

 

Thanks,

John

MaxDB / MAXCPUS


Hi Everyone,

 

after moving the productive ERP system to a new HP box, we are experiencing bad performance (that is what users "feel").

The new box is an HP server making use of Intel E5-4640 CPUs, which means 4 sockets with 8 cores each and Hyper-Threading activated. This makes at least 64 CPUs show up in the Windows Task Manager.

 

We configured MaxDB to use 64 GB RAM (of 128 GB total) and set MAXCPU = 24. On the system there is an SAP central instance running (ERP 6.0 EHP4).

But still we can see that the CPU cores are not used more than 10 to 15 % over the whole day. So it seems the server is bored.

 

What would be the suggestion for setting MAXCPU? Is it relevant in a performance context?
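A hedged way to inspect the currently active value, assuming the standard dbmcli parameter command:

dbmcli -d <SID> -u <dbm-user>,<password> param_directget MAXCPU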

 

Thanks a lot in advance.

 

Kind Regards,

Carsten

MaxDB I/Os on Windows


Hello,

 

I'm looking for some hints that can help me analyse the disk activity for a single "left outer join" SQL statement. Unfortunately, the business is not able to add a further WHERE clause.

 

Our database (version 7.9.08.14 / OLTP) is running on a virtual Windows 2008 R2 server (ESXi 5.1.0). All data volumes (50 of them) are on a single LUN.

 

If I use the "x_cons <DB> show active 1 25", I get this reponse from, where T128 is my ID. During the whole runtime of the statement, there is no high cpu (Task Manager) or high disk activity (Resource Monitor of Windows). If I copy a file (3,5 GB) on the LUN, the performance is OK.

 

SERVERDB: xxx

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0     27        39077695(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T54    9 0x1A60 User      6884* IO Wait (R)             0     15        121292233(s)

T128  11 0x13DC User      1240* IO Wait (R)             0     24        39077820(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T35    8 0x1770 User      1812* IO Wait (R)             0     26        32578072(s)

T54    9 0x1A60 User      6884* IO Wait (R)             0     47        121292754(s)

T128  11 0x13DC User      1240* IO Wait (R)             0      9        39078040(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T54    9 0x1A60 User      6884* IO Wait (R)             0     20        121292947(s)

T128  11 0x13DC User      1240* IO Wait (R)             0     26        39078296(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0     29        39078437(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0     13        39078563(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0      4        39078667(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0     20        39078759(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0      8        39078856(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0      3        39079019(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0     37        39079176(s)

 

ID   UKT  Win   TASK       APPL Current        Timeout/ Region     Wait

          tid   type        pid state          Priority cnt try    item

T128  11 0x13DC User      1240* IO Wait (R)             0     11        39079282(s)

 

 

I know there are many components involved, but maybe with some hints I can exclude one or more of them. I already checked the good scripts from http://maxdb.sap.com/traning, but unfortunately without success.
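A hedged sketch of two further data points, assuming the standard x_cons and Database Analyzer command lines: "show io" lists the I/O counters per volume, and the Database Analyzer samples performance data in intervals (here a 15-second interval, 240 snapshots):

x_cons <DBSID> show io
dbanalyzer -d <DBSID> -u <user>,<password> -t 15,240 -o <output-directory>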

 

 

Thank you!

 

best regards

Lukas Meusburger

Error "-9400 AK Cachedirectory full"


Hello,

 

I'm writing back following an old thread from 2009 on this forum, related to MaxDB and "AK Cachedirectory full" problems. You can find the previous thread here: What can we so against Error "-9400 AK Cachedirectory full"?

 

The problem was actually never resolved: we could more or less live with it, and we managed to reduce it for a while, but we are having that problem again almost every day now. We actually fixed various points since 2009 and our system has changed quite a lot.

 

We use MaxDB 7.8.02 (BUILD 038-121-249-252) with the JDBC Driver sapdbc-7.6.09_000-000-010-635.jar. Note that we don't use MaxDB in a SAP environment as we have our own business application.

 

Following some very helpful feedback from Lars Breddemann, we fixed various points in our system: for example, result sets were not always properly closed; this is now done immediately after the query has been executed and the result rows have been read. We also follow the advice from Elke Zietlow to always close a connection and its associated prepared statements when the error occurs. This also helps in most cases, but sometimes when the error occurs, even closing the connection and its prepared statements does not help, and the problem "escalates" until we have to restart the database to fix it.

 

Back to the discussion in 2009, I used the two statements given by Lars to monitor the catalog cache usage: when I run this multiple times, I see that all result sets are properly closed as I only see the ones currently being used and they disappear.

 

One important point is that our java application keeps many prepared statements open in a cache, to have them ready to be reused. We can have up to 10'000 prepared statements open, with up to 100 jdbc connections. Actually the AK Cachedirectory full problem happens sometimes very soon after we restart our system and db, so at that time the number of prepared statements can be very low, which seems to indicate that the number of prepared statements being open is not necessarily linked to the problem.

 

Also in the discussion in 2009, Lars mentioned the fact that we use prepared statements of the type TYPE_SCROLL_INSENSITIVE, and he asked if we could not use TYPE_FORWARD_ONLY. Would this really make a difference? We need TYPE_SCROLL_INSENSITIVE in many cases because we use iterators to scroll up and down the result sets, so using TYPE_FORWARD_ONLY would require changing quite some code. I also saw in the MaxDB code that using the type TYPE_SCROLL_INSENSITIVE adds the string "FOR REUSE" to the SQL statement; what exactly does it mean?

 

Any help to fix that problem would be greatly appreciated.

Christophe


MaxDB content server on cluster


Dear Friends

 

I have a requirement to set up a MaxDB content server in a cluster environment.

 

Below are the details:

 

DB: MaxDB

 

OS: AIX

 

 

Please let me know if anyone has done this, and suggest the possibilities.

 

 

If possible, please give details, step by step.

 

Thanks

Sadiq

Different execution plan for the same SQL statement


Hi all,

 

When I check the same SQL statement in two different systems, system P and system Q, it shows a different execution plan for the inner join.

SQL statement: (screenshot: SQL statement.PNG)
Execution plan:

 

For system P, the value of MANDT is 1; table EKPO is accessed first with a full table scan. (screenshot: Explain P.PNG)

For system Q, the value of MANDT is 4; table EKKO is accessed first with index EKKO~1 (P also has this index). (screenshot: Explain Q.PNG)

The data volume in Q is a little larger than in P.

Could you please help find out why there is such a difference? Which one is correct?

How does the optimizer work when processing the inner join in MaxDB?
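One hedged way to narrow this down at the SQL level, assuming you run sqlcli with the SAP schema user (the join itself is only visible in the screenshots, so it is elided here): refresh the optimizer statistics on both tables and compare the plans again:

sqlcli <SID>=> UPDATE STATISTICS EKKO
sqlcli <SID>=> UPDATE STATISTICS EKPO
sqlcli <SID>=> EXPLAIN <the inner join from the screenshot>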

 

Thanks & Regards,

Chris

Multiple MaxDB DR data files


We are using MaxDB for liveCache, version 7.9.08.08, with log shipping to the DR server.

Currently we have 7 data files:

DATA0001 6,291,456 KB FILE /sapdb/L1P/sapdata1

DATA0002 6,291,456 KB FILE /sapdb/L1P/sapdata2

DATA0003 6,291,456 KB FILE /sapdb/L1P/sapdata3

DATA0004 6,291,456 KB FILE /sapdb/L1P/sapdata4

DATA0005 6,291,456 KB FILE /sapdb/L1P/sapdata5

DATA0006 6,291,456 KB FILE /sapdb/L1P/sapdata6

DATA0007 6,291,456 KB FILE /sapdb/L1P/sapdata1

 

The filesystems have been sized at 35GB each.

 

Today we found that the filesystem /sapdb/L1P/sapdata1 has become full, and when we view the filesystem we see the following:

root@<hostname>:/sapdb/L1P/sapdata1# ls -lrt

total 64678928

drwxr-xr-x    2 root     system          256 15 Jul 14:58 lost+found

-rw-rw----    1 sdb      sdba     6442450944 06 Dec 06:45 DISKD0007.9

-rw-rw----    1 sdb      sdba     6442450944 06 Dec 06:45 DISKD0007.11

-rw-rw----    1 sdb      sdba     6442450944 06 Dec 06:45 DISKD0007.10

-rw-rw----    1 sdb      sdba     6442450944 06 Dec 06:45 DISKD0007

-rw-rw----    1 sdb      sdba     6442450944 06 Dec 06:45 DISKD0001

-rw-rw----    1 sdb      sdba      903356416 08 Dec 12:36 DISKD0007.12

 

Can anyone tell me why we have 4 copies of DISKD0007?

 

Regards

 

David

liveCache 7.7 for SCM5.1 on RHEL 6


Hello,

 

I am experiencing an issue while doing a test OS/DB migration for APO from Oracle on HP-UX to Oracle on Linux x64 (RHEL6).

 

The migration of ABAP went without trouble; however, I am unable to install liveCache (MaxDB) on RHEL 6.0.

 

According to note 916649, SCM 5.1 with liveCache 7.7.06.22 should be supported:

 

The following versions constitute the minimum requirement for the use of SAP liveCache on Red Hat 6:

SCM 5.1      : SAP LC/LCAPPS  5.1 SP25, SAP liveCache 7.7.06.22

 

I have used the installation package LC770624A6028_2-20002022.SAR to update the LC version. The software update went through, but I am unable to start the x_server.

 

It always core dumps with a segmentation fault; specifically, the part that core dumps is vserver. I am not even able to check the version of vserver with vserver -V; it immediately core dumps. I am sure I have the right version (CPU architecture etc.), and all MaxDB verification tools returned no errors at all.

 

The OS parametrization should be in line with all the relevant notes; at least, the OS colleagues confirmed that.

 

Thank you for any suggestion,

Branislav GREGER

 

EDIT:

I will post here the solution that worked for us after contacting SAP/Red Hat:

 

Why MaxDB is not starting after updating the operative system? - Red Hat Customer Portal

backint - MaxDB


Hello all,

 

We are trying to back up our MaxDB using the backint interface. Unfortunately, the backup process fails with the error messages shown in the appended KNLDIAG log and external backup log.

 

KNLDIAG

0:32:53 31276     11560 COMMUNIC Releasing  T132

2008-07-31 20:32:53 31276     12827 COMMUNIC wait for connection T132

2008-07-31 20:32:54 31258     11561 COMMUNIC Connecting T125 local 3626

2008-07-31 20:32:54 31276     12929 TASKING  Task T125 started

2008-07-31 20:32:54 31276     11007 COMMUNIC wait for connection T125

2008-07-31 20:32:54 31276     11561 COMMUNIC Connected  T125 local 3626

2008-07-31 20:32:54 31276     11560 COMMUNIC Releasing  T125

2008-07-31 20:32:54 31276     12827 COMMUNIC wait for connection T125

2008-07-31 20:32:55  3634 ERR 11000 devio    write error (fd = 54): Broken pipe

2008-07-31 20:32:55 31272     11000 vasynclo '/tmp/backintdb-pipe' devno 34 T72

2008-07-31 20:32:55 31259     12822 TASKING  Thread 3634 joining

2008-07-31 20:32:55  3634     11566 stop     DEVi stopped

2008-07-31 20:32:55 31272     52024 SAVE     63992 pages -> "/tmp/backintdb-pipe"

2008-07-31 20:32:55 31276     52012 SAVE     new tape required 4300

2008-07-31 20:32:55 31276         1 Backup   Backupmedium #1 (/tmp/backintdb-pipe) end of file

2008-07-31 20:32:55 31276         6 KernelCo  +   Backup error occured, Errorcode 4300 "new_hostfile_required"

2008-07-31 20:32:56 31276     12929 TASKING  Task T142 started

2008-07-31 20:32:56 31258     11561 COMMUNIC Connecting T142 local 3626

2008-07-31 20:32:56 31276     11007 COMMUNIC wait for connection T142

2008-07-31 20:32:56 31276     11561 COMMUNIC Connected  T142 local 3626

2008-07-31 20:32:56 31276     11560 COMMUNIC Releasing  T142

2008-07-31 20:32:56 31276     12827 COMMUNIC wait for connection T142

2008-07-31 20:36:23 31276     11000 vasynclo '/sapdata/SOM/DISKD0001' devno 17 T143

2008-07-31 20:36:23 31259     12822 TASKING  Thread 3629 joining

 

 

EXTERNAL BACKUP

2008-07-31 20:32:36

Setting environment variable 'TEMP' for the directory for temporary files and pipes to default ''.

Setting environment variable 'TMP' for the directory for temporary files and pipes to default ''.

Using connection to Backint for MaxDB Interface.

 

2008-07-31 20:32:36

Checking existence and configuration of Backint for MaxDB.

    Reading the Backint for MaxDB configuration file '/sapdb/data/wrk/SOM/bsi.env'.

        Found keyword 'BACKINT' with value '/sapdb/SOM/db/bin/backint'.

        Found keyword 'INPUT' with value '/tmp/tsm-logs/som-backint4maxdb.in'.

        Found keyword 'OUTPUT' with value '/tmp/tsm-logs/som-backint4maxdb.out'.

        Found keyword 'ERROROUTPUT' with value '/tmp/tsm-logs/som-backint4maxdb.err'.

        Found keyword 'PARAMETERFILE' with value '/sapdb/SOM/db/bin/backintmaxdbconfig.par'.

        Found keyword 'TIMEOUT_SUCCESS' with value '600'.

        Found keyword 'TIMEOUT_FAILURE' with value '300'.

        Found keyword 'ORIGINAL_RUNDIRECTORY' with value '/sapdb/data/wrk/SOM'.

    Finished reading of the Backint for MaxDB configuration file.

 

    Using '/sapdb/SOM/db/bin/backint' as Backint for MaxDB program.

    Using '/tmp/tsm-logs/som-backint4maxdb.in' as input file for Backint for MaxDB.

    Using '/tmp/tsm-logs/som-backint4maxdb.out' as output file for Backint for MaxDB.

    Using '/tmp/tsm-logs/som-backint4maxdb.err' as error output file for Backint for MaxDB.

    Using '/sapdb/SOM/db/bin/backintmaxdbconfig.par' as parameter file for Backint for MaxDB.

    Using '600' seconds as timeout for Backint for MaxDB in the case of success.

    Using '300' seconds as timeout for Backint for MaxDB in the case of failure.

    Using '/sapdb/data/wrk/SOM/dbm.knl' as backup history of a database to migrate.

    Using '/sapdb/data/wrk/SOM/dbm.ebf' as external backup history of a database to migrate.

    Checking availability of backups using backint's inquire function.

Check passed successful.

 

2008-07-31 20:32:36

Checking medium.

Check passed successfully.

 

2008-07-31 20:32:36

Preparing backup.

    Setting environment variable 'BI_CALLER' to value 'DBMSRV'.

    Setting environment variable 'BI_REQUEST' to value 'NEW'.

    Setting environment variable 'BI_BACKUP' to value 'FULL'.

    Constructed Backint for MaxDB call '/sapdb/SOM/db/bin/backint -u SOM -f backup -t file -p /sapdb/SOM/db/bin/backintmaxdbconfig.par -i /tmp/tsm-logs/som-backint4maxdb.in -c'.

    Created temporary file '/tmp/tsm-logs/som-backint4maxdb.out' as output for Backint for MaxDB.

    Created temporary file '/tmp/tsm-logs/som-backint4maxdb.err' as error output for Backint for MaxDB.

    Writing '/tmp/backintdb-pipe #PIPE' to the input file.

Prepare passed successfully.

 

2008-07-31 20:32:36

Creating pipes for data transfer.

    Creating pipe '/tmp/backintdb-pipe' ... Done.

All data transfer pipes have been created.

 

2008-07-31 20:32:36

Starting database action for the backup.

    Requesting 'SAVE DATA QUICK TO '/tmp/backintdb-pipe' PIPE BLOCKSIZE 8 NO CHECKPOINT MEDIANAME 'backindb'' from db-kernel.

The database is working on the request.

 

2008-07-31 20:32:36

Waiting until database has prepared the backup.

    Asking for state of database.

    2008-07-31 20:32:36 Database is still preparing the backup.

    Waiting 1 second ... Done.

    Asking for state of database.

    2008-07-31 20:32:37 Database is still preparing the backup.

    Waiting 2 seconds ... Done.

    Asking for state of database.

    2008-07-31 20:32:39 Database is still preparing the backup.

    Waiting 3 seconds ... Done.

    Asking for state of database.

    2008-07-31 20:32:42 Database is still preparing the backup.

    Waiting 4 seconds ... Done.

    Asking for state of database.

    2008-07-31 20:32:46 Database is still preparing the backup.

    Waiting 5 seconds ... Done.

    Asking for state of database.

    2008-07-31 20:32:51 Database has finished preparation of the backup.

The database has prepared the backup successfully.

 

2008-07-31 20:32:51

Starting Backint for MaxDB.

    Starting Backint for MaxDB process '/sapdb/SOM/db/bin/backint -u SOM -f backup -t file -p /sapdb/SOM/db/bin/backintmaxdbconfig.par -i /tmp/tsm-logs/som-backint4maxdb.in -c >>/tmp/tsm-logs/som-backint4maxdb.out 2>>/tmp/tsm-logs/som-backint4maxdb.err'.

    Process was started successfully.

Backint for MaxDB has been started successfully.

 

2008-07-31 20:32:51

Waiting for end of the backup operation.

    2008-07-31 20:32:51 The backup tool is running.

    2008-07-31 20:32:51 The database is working on the request.

 

    2008-07-31 20:32:56 The backup tool process has finished work with return code 2.

    2008-07-31 20:32:56 The backup tool is not running.

    2008-07-31 20:32:56 The database has finished work on the request.

    Receiving a reply from the database kernel.

    Got the following reply from db-kernel:

        SQL-Code              :-8020

        Date                  :20080731

        Time                  :00203248

        Database              :SOM

        Server                :bssomp02

        KernelVersion         :Kernel    7.6.00   Build 035-123-139-084

        PagesTransfered       :64000

        PagesLeft             :4907326

        MediaName             :backindb

        Location              :/tmp/backintdb-pipe

        Errortext             :end of file

        Label                 :DAT_000000055

        IsConsistent          :true

        FirstLogPageNo        :3321207

        DBStamp1Date          :20080731

        DBStamp1Time          :00203244

        BDPageCount           :4971302

        DevicesUsed           :1

        DatabaseID            :bssomp02:SOM_20070910_162640

        Max Used Data Page   

        Converter Page Count  :2676

The backup operation has ended.

 

2008-07-31 20:32:56

Filling reply buffer.

    Have encountered error -24920:

        The backup tool failed with 2 as sum of exit codes. The database request failed with error -8020.

 

    Constructed the following reply:

        ERR

        -24920,ERR_BACKUPOP: backup operation was unsuccessful

        The backup tool failed with 2 as sum of exit codes. The database request failed with error -8020.

Reply buffer filled.

 

 

It seems to me that it is a must to use Tivoli Data Protection for MaxDB. Can anybody out there tell me if this is right? Or is it possible to directly transfer backup data from MaxDB to Tivoli Storage Manager (TSM) via a pipe?
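A hedged sketch of a plain pipe backup without Backint, assuming the standard dbmcli medium and backup commands; an external tool (for example a TSM client) must read from the pipe while the backup runs, and you lose the Backint bookkeeping this way:

dbmcli -d SOM -u <dbm-user>,<password> medium_put BackPipe /tmp/backintdb-pipe PIPE DATA
dbmcli -d SOM -u <dbm-user>,<password> -uUTL backup_start BackPipe DATA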

 

Thanks and kind regards

Anette Feierabend

Database Studio not starting


I've installed maxdb_studio_win_64bit_x86_64_7_9_08_18 on Windows 8 Pro version 6.2.9200. The Java runtime is 32-bit jre1.8.0_25. Database Studio doesn't run; there is no error message. The xServer services are all running. I've verified the installation using the installation manager.

 

Is there any way to diagnose what's wrong - any log or trace file I could look at?
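A hedged diagnostic sketch, assuming Database Studio's Eclipse-based launcher accepts the standard Eclipse options; the installation path and executable name below are hypothetical, so adjust them to your installation:

cd "C:\Program Files\sdb\DatabaseStudio"
dbstudio.exe -clean -consoleLog

The Eclipse workspace log (<workspace>\.metadata\.log) may also contain startup errors.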

hanging NAGIOS processes



Hello,

 

I have a question concerning Nagios. We monitor our productive systems with Nagios; for this, a user NAGIOS is created, and Nagios connects to the system with this user to monitor it. We have the phenomenon that there are hanging NAGIOS sessions on our BW system: there are multiple login sessions of the user NAGIOS, and I can only resolve the issue by logging the user NAGIOS out of the system manually. My questions: Why does the user NAGIOS connect to the system more than once? Why are there these hanging NAGIOS sessions? How can this be prevented?

 

Kind Regards

 

Hartwig Latz


Statistics Update via Reports (RSADAUP1 - 3) obsolete/alternative


Hello,

 

Until now it was possible to update statistics via the reports RSADAUP1 - 3. In new releases this is obsolete, as SAP changed the DB13 behaviour.

But we still want to use them.

Why? We have an external job scheduling tool which is supposed to start and check the whole statistics update. So far, we started the report and got a return code from it.

So how can we start it without the report?
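A hedged sketch, assuming the DBM command sql_updatestat is available in your version; since dbmcli reports OK/ERR and sets its exit code accordingly, the external scheduler could evaluate that instead of the report's return code:

dbmcli -d <SID> -u <dbm-user>,<password> sql_updatestat *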

 

We don't want to use DB13 itself.

 

Thanks

Jan

Recovering CDB and SDB from MaxDB


Hi Everyone.

 

I am recovering a Content Server database as part of the upgrade from Content Server 6.40 (32-bit) to Content Server 6.50 (64-bit).

 

The source database version is MaxDB 7.5.00.51; the target database version is 7.9.08.08. I am using DB Studio version 7.9.08.25 and following SAP Notes 9620109 and 129352.

 

I have created backups of the source system using DB Studio and transferred the files to the target server. I can recover the SDB database without any issues; however, when I try to recover the CDB instance, I get the following error:

 

(screenshot: CDB receover error.PNG)

 

Am I missing something? The file definition is the same for both the source and the target.

 

Thanks

main_newbas/job_dbdif_upg fails during upgrade from 7.31 to 7.4 on MaxDB


The RADDBDIF job stops with shortdump PERFORM_NOT_FOUND when it tries to call the non-existing form EXECUTE of RSXPLADA.

 

The cause is an (incorrect?) entry in table DBDIFF for PLAN_TABLE_EXTERNAL, which has RSXPLADA in column SOURCE instead of SDB3FADA like the other objects with DBSYS = ADABAS D.

I changed it to SDB3FADA and the update continued without further problems.
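For reference, the change expressed as SQL; this is a hedged sketch, the key column name OBJNAME is an assumption, so verify the actual DBDIFF structure (and prefer SE16) before running anything like it:

UPDATE DBDIFF SET SOURCE = 'SDB3FADA' WHERE DBSYS = 'ADABAS D' AND OBJNAME = 'PLAN_TABLE_EXTERNAL'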

 

Software versions:

 

NetWeaver 7.31 SPS 14 before upgrade

NetWeaver 7.4 SPS 9 after upgrade

MaxDB 7.9.08.27

Software Update Manager 1.0 SP 12

-2 ERR_USRFAIL: User authorization failed


Hello guys,

First, my system details:

OS: Windows 2003 server

DB: MaxDB  7_7_06_17

 

I have a problem connecting through Database Manager to a DB instance that I have just installed.

What I want to do is install an MDM server.

Below are the steps already performed:

- installed the Java server with sapinst (this created its database ZJD)

- after the Java server installation was finished, I started to create a DB instance MDM using SDBSETUP.exe

- I was able to create this MDM instance successfully

 

And here is the problem: this MDM database instance was created with the default users DBA and DBM. When the creation of MDM was finished, I tried to connect to the MDM instance with both Database Manager and DBMCLI using these users, but I am not able to connect. I get the "-2 ERR_USRFAIL: User authorization failed" error. Can you tell me what the reason is?

I am sure that I typed the right user and password. I tried to uninstall everything and install again, including the OS, and got the same error. Note that I am able to connect to instance ZJD.

I tried the default MaxDB users (DBA with password SECRET or DBA, and DBM with password SECRET or DBM), but this doesn't work.

Can you help me to solve this problem?

Best regards,

Florin Radulea

How can I configure my database with SSL? I can't find useful information!


How can I configure my database with SSL? I can't find useful information! If you know, please contact me via email... <deleted by moderator>

 

Message was edited by Thorsten Zielke: Please do not post an email address here inviting everyone to reply to. Use this forum for discussions instead, or contact a single person directly via mail...
