The prefix value which is to be assigned for the logical file. The default prefix value is the logical file number (length 3).
Physical Dataset: the type of VSAM file.
Specifies whether the file is to be compressed or not. The default is.
Indicates that the file is not to be compressed.
Indicates that the file is to be written in variable record length.
During compression, the record is scanned backwards for default values, which are blank for alphanumeric fields, low values for binary fields, low values with a zone for packed fields, and X'F0' for numeric fields. Compression stops as soon as the first non-default value is detected or the first descriptor is found. This is the default.
The name used to reference the DDM in a Natural program.
The name must be unique within the specified Natural system file. The database that contains the file to be accessed with the DDM.
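The backward compression scan described earlier can be sketched as follows. This is a toy model: it assumes a record consisting of a single field format, and the EBCDIC byte values (X'40' for blank, X'00' for low value) are assumptions added here; only X'F0' for numeric fields is stated in the text.

```python
# Toy sketch of the trailing-default compression scan described above.
# Assumed default byte per field format (EBCDIC values are assumptions):
#   "A" alphanumeric -> blank (X'40'), "B" binary -> low value (X'00'),
#   "N" numeric (unpacked) -> X'F0' (stated in the text).
DEFAULTS = {"A": 0x40, "B": 0x00, "N": 0xF0}

def compressed_length(record: bytes, fmt: str) -> int:
    """Scan backwards, dropping trailing default values for one field format."""
    default = DEFAULTS[fmt]
    end = len(record)
    while end > 0 and record[end - 1] == default:
        end -= 1          # stop at the first non-default byte
    return end

# A numeric field padded with X'F0' compresses down to its significant bytes.
print(compressed_length(b"\xF1\xF2\xF0\xF0", "N"))  # -> 2
```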
The number of the file being referenced. Line indicator: this field is used by the DDM editor to mark lines. Field Type: G. Level number assigned to the field. Valid level numbers are 1 - 7 and must be specified in consecutive ascending order. A 3- to character external field name; this is the field name used in Natural programs to reference the field. Field format. Standard field length; this length can be overridden in a Natural program. For numeric fields (format N), the length is specified as nn.
Descriptor Option. Indicates that the field is an alternate index for a VSAM file. Indicates that the field is a primary subdescriptor or superdescriptor; that is, a primary key for a VSAM file. Indicates that the field is an alternate subdescriptor or superdescriptor; that is, an alternate index for a VSAM file. If the field references a VSAM alternate index or a path (denoted by an A in column D), the index or path name must be entered here.
The number of occurrences for a multiple-value field or a periodic group (denoted by an M or P in column T). The following flags apply only to alternate indexes and not to paths: If this option is marked with an X, the alternate index is to be read in ascending or descending value order.
If this option is marked with an X, Natural ensures that the values of the alternate index field are unique. An attempt to update with a non-unique value results in an error message.
The default value is a blank. A value of S indicates that null values for the alternate index field are suppressed. Specifies either of the following function codes: G, for retrieval statements; the current record length is determined for parm5. Specifies or returns the record length, depending on the setting of parm1.
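The uniqueness guarantee for alternate index fields described above can be modeled with a small Python class; the class and method names here are invented for illustration and do not correspond to any Natural or VSAM API.

```python
# Toy model of a unique alternate index: updating with a value already
# indexed under a different primary key is rejected, mirroring the error
# described above.
class UniqueAlternateIndex:
    def __init__(self):
        self._index = {}  # alternate-key value -> primary key

    def update(self, primary_key, alt_value):
        owner = self._index.get(alt_value)
        if owner is not None and owner != primary_key:
            raise ValueError(f"non-unique alternate index value: {alt_value!r}")
        self._index[alt_value] = primary_key

idx = UniqueAlternateIndex()
idx.update("REC1", "SMITH")
try:
    idx.update("REC2", "SMITH")   # duplicate value under another record -> error
except ValueError as err:
    print(err)
```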
Overriding concatenated data sets: to override the first DSN in the concatenated list, coding only the first one is enough. To override any DSN other than the first, all the DSNs must be coded.
Each record can contain binary data, like the record and recordv formats. On the Data File Editor menu, select a file by positioning the cursor in the Selection field next to the file. The Input dataset fields contain file information from the Data File Editor menu.
Your cursor is located on the first blank field. In the Output sequential file area Filename field, type the name of the sequential file to be built.
In the Directory field, type the name of the directory where the sequential file will be built. You can also use an environment variable that represents the destination directory. The following figure shows a completed Build Sequential File screen. If you want to change any of the entries you made, press the Enter key to move the cursor to the Filename field. If the file already exists, the following message is displayed:
The Dump File function dumps some or all the records of a VSAM dataset to a disk file or the system printer in the form of a formatted report.
You can use the dump file to examine the data in a dataset. The table lists the field descriptions and acceptable values. If you want to change any of the entries you made, press the Enter key to move the cursor to the first editable field. The following table describes the Dump File screen fields. You can also specify an arbitrary name, which is passed to the kixprint shell script as the -p parameter.
Depending on your installation, the kixprint shell script can use this name as a physical printer name such as lp0 or lp1 or an arbitrary name such as lp0-comp or lp0-norm that might indicate lp0 compressed print or lp0 normal print. The kixprint shell script is then responsible for sending different control options to the print spooler daemon.
Name of a disk file to which to write the formatted dump. The name cannot exceed 14 characters and must begin with an alphabetic character. Environment variable that specifies the directory in which to store the file. This field is required if a file name is specified in the Destination filename field. Number of records to dump. If you do not supply a value, all records in the file are dumped. Specifies how to interpret the record key field data.
CHAR: Character key (default). HEX: Hexadecimal. Key of the first record at which to begin the dump. For all dataset types, if you do not specify a key, the dump begins with the first record of the file. For a KSDS dataset, a partial key is used to start the dump. The length of this partial key is determined by scanning the key input: the last significant non-blank character determines the key length used to begin the dump.
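The partial-key rule just described can be expressed directly; the helper name below is invented for illustration.

```python
# The length of a KSDS partial key is determined by the position of the
# last significant non-blank character in the key input.
def partial_key_length(key_input: str) -> int:
    return len(key_input.rstrip(" "))

# "AB  " yields a 2-byte partial key: the dump starts at the first record
# whose key begins with "AB".
print(partial_key_length("AB  "))  # -> 2
```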
For an ESDS dataset, type the relative byte address. For an RRDS dataset, type the relative record number. The following function keys are active on the Dump File screen:
Use either method:
Open the dump file in a text editor.
You must use kixfile from batch shell scripts to close, manipulate, and open VSAM files. You might need to access VSAM files for a variety of reasons:
To use appropriate commands or methods on the file to replace it with a new file
To reserve the file for batch
To change the recovery attributes of the file
Refer to the Sun Mainframe Transaction Processing Software Reference Guide for a description of the kixfile utility and its options.
The unikixbld utility allows you to perform a variety of tasks on VSAM files while the region is operating. You must use unikixbld from batch shell scripts.
To ensure VSAM file integrity, use a combination of the software's built-in recovery facilities and accepted administrative procedures. If configured, built-in recovery facilities take effect after a transaction abort, a deadlock, or a system crash.
Maintaining backups of datasets ensures that you can restore a file from a known good copy in the event of a disaster. You might need to restore a VSAM file if you experience any of the following events:
A bug in an application that corrupts some of the data records
A breach in security
A UNIX system crash involving non-recoverable VSAM files
Always make sure that you have a valid copy of your datasets.
If you must go back to this copy, make sure that the data is current enough to meet your needs. For example, if you create a backup every night, make sure that returning to last night's file in the middle of a business day is good enough to meet your needs.
If it is not, you need to create backup copies more frequently, or implement another backup method to keep current. To ensure a valid backup copy of a dataset, all update activity must be terminated and all the VSAM blocks must be written out to disk.
You can only guarantee this if the region is down or if the dataset is closed, locked, or set to a read-only state. Note - You must use kixfile in a batch shell script. Alternatively, lock the dataset exclusively in the batch shell script that contains the backup commands, using the command:
You can then back up the dataset in a manner you choose. If you are backing up a KSDS file, you must back up both the. Change to the directory containing TEST1 and type the following commands:
To go back to the last known good copy of a dataset, the region must be shut down or the dataset must be closed. You can then restore the dataset in the manner you choose. Example: Building on the example in the previous section, use the following commands to restore a copy of the dataset:
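As a rough illustration of the backup step, the sketch below simply copies every component file that shares the dataset's base name into a backup directory. It performs no locking itself, and the base-name-plus-suffix pattern is an assumption for illustration: the actual component file suffixes are installation-specific and are not spelled out in the text.

```python
# Hypothetical backup helper for a closed or locked dataset: copy all
# component files (data, index, alternate indexes) that share the
# dataset's base name. The ".*" suffix pattern is an assumption.
import glob
import shutil

def backup_dataset(base_name: str, backup_dir: str) -> list:
    copied = []
    for component in glob.glob(base_name + ".*"):
        shutil.copy2(component, backup_dir)  # preserves timestamps/permissions
        copied.append(component)
    return copied
```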
The software uses multiple files to implement the VSAM data storage method. If the KSDS file is spanned, up to seven additional files can be used. If the KSDS file has alternate indexes, two additional files are used for each alternate index. Activity counts are used to maintain integrity. Each file header has an activity count that is updated each time the region opens and closes the file. When the file is opened, the activity count is incremented, then set to its negative-valued complement.
When the file is closed, the activity count is reset to its positive complement. When the region opens the VSAM file, all components of that file must have equal activity counts. This means the index and data components of the primary file and all alternate index files must have the same activity counts. If the file is spanned, each segment must have that same activity count. If the activity count of one component of a file is not consistent with the other components, the file cannot be opened.
The opening of the file is bypassed to allow time to determine why the activity counts are inconsistent and to either override or correct the error.
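The activity-count scheme described above can be modeled in a few lines; the function names are invented, and this is only a sketch of the documented behavior (increment and negate on open, restore to positive on close, equal counts required across all components).

```python
# Model of the activity-count scheme: on open the count is incremented and
# stored as its negative complement; on close it is made positive again.
# Mismatched counts across components block the open.
def open_component(count: int) -> int:
    return -(abs(count) + 1)      # incremented, then negated while open

def close_component(count: int) -> int:
    return abs(count)             # reset to its positive complement

def can_open(counts) -> bool:
    return len(set(counts)) == 1  # all components must have equal counts

data = close_component(open_component(3))   # normal open/close cycle -> 4
index = 3                                    # e.g. restored from an older backup
print(can_open([data, index]))  # -> False: counts are inconsistent
```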
Activity counts may be inconsistent if all components of a file were not restored from the same backup. Use the kixverify utility to display and modify activity counts. If you suspect that a dataset is corrupted, use kixvalfle with the -ik options on a closed dataset to check its integrity. Capture the output in a file in case technical support staff need it for problem analysis.
If the file is a KSDS or alternate index file, the links between the blocks of the index file are also checked. If kixvalfle reports no errors, your dataset is not corrupted and no further action is necessary. However, if a significant proportion of the file contains free blocks, you might want to reorganize it. See Reorganizing a Dataset.
If kixvalfle reports errors, you must perform corrective actions on the dataset. The error message types dictate the appropriate actions. If the index file or the alternate index files are not synchronized with the main data file, use the unikixbld utility to rebuild the index files without having to rebuild the data file.
The unikixbld options to rebuild index files are as follows:
The logical records are in ascending order within a VSAM block, and the blocks are chained together, but not necessarily in physical sequence.
Example: Block 1 is chained to block 7, which is chained to block 3, and so on. Block 1 has the group of logical records with the lowest key values, and within Block 1, these records are in ascending sequence. Block 7 has the group of logical records that are next in ascending sequence, then Block 3, and so on. As logical records are deleted, the blocks may become empty (no logical records are present).
These empty blocks are marked as free and are not returned to the operating system. These free blocks are then reused when new records are inserted into the dataset. If no free blocks are available, a new block is requested from the operating system. Reorganizing datasets reclaims free disk space and may improve disk access, but this depends on the fragmentation level of the dataset and your file system at file creation time, file update time, and file reorganization time.
It can also depend on your operating system and the type of disk controllers you use. Physical disk access behavior is difficult to predict and it changes over the life of the dataset.
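The free-block behavior described above (emptied blocks marked free and reused before any new block is requested from the operating system) can be modeled with a toy allocator; the class and block numbering are illustrative only.

```python
# Toy allocator mirroring the free-block reuse described above.
class BlockPool:
    def __init__(self):
        self.next_new = 0
        self.free = []

    def release(self, block_no):          # block became empty: mark it free
        self.free.append(block_no)

    def allocate(self):
        if self.free:                     # reuse a free block first
            return self.free.pop()
        self.next_new += 1                # otherwise request a new block
        return self.next_new

pool = BlockPool()
b1, b2 = pool.allocate(), pool.allocate()  # blocks 1 and 2
pool.release(b1)                           # all records in block 1 deleted
print(pool.allocate())  # -> 1: the freed block is reused, not a new one
```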
1. Execute the kixfile command to make the dataset read-only:
2. Use the unikixbld utility to write the output to a sequential file:
3. Use unikixbld to restore the contents from the sequential file:
The command used in Step 3 results in the default fill percentage of 0, which means all blocks contain as many logical records as can fit. This is a good choice if you know that new records inserted in the future will have keys with values higher than your present set.
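As a rough illustration of this trade-off (with invented block and record sizes, and assuming the fill percentage specifies the free space to leave per block, consistent with the default of 0 meaning fully packed):

```python
# Illustration of the fill-percentage trade-off: leaving free space per
# block means fewer records per block at load time, but room for later
# insertions. Block and record sizes are made up for the example.
def records_per_block(block_bytes, record_bytes, fill_free_pct):
    usable = block_bytes * (100 - fill_free_pct) // 100
    return usable // record_bytes

full = records_per_block(4096, 100, 0)    # default: pack blocks completely
loose = records_per_block(4096, 100, 20)  # leave ~20% free for insertions
print(full, loose)  # -> 40 32
```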
However, if you expect new records to be inserted into existing VSAM blocks, it makes sense to leave space for these insertions. Use the fill percentage option to specify the amount of space. Use the kixsalvage utility to salvage a dataset. Carefully examine the output from kixsalvage and determine if it is better to go back to your last backed-up copy. Type the kixsalvage command to generate a new dataset. If you omit the -c option, kixsalvage generates a recordv format file that you can sort with an external sort utility.
Use unikixbld from a batch script to initialize the corrupted dataset and reload the contents from the sorted file that you just created. You must also specify recovery at the individual file level in the FCT. In addition to recovery for databases, the software supports recovery for temporary storage queues, transient data queues (TDQs), and asynchronous transaction start (ATI) requests.
Occurs when a transaction aborts. Any database updates that were performed by the transaction are backed out so that the failed transaction does not affect the database. This is called dynamic transaction back-out. Other transactions cannot access updated records until the updating transaction terminates successfully. This prevents the contamination of the database by data from failed transactions getting passed to successful transactions prior to the abort.
See Recovering From a Transaction Abort. Occurs after a system crash caused by a hardware problem, or by a software problem in any of the Sun MTP components or the operating system. See Recovering From a System Crash. When recovery is not enabled, the actions described above do not take place and the application environment will be chaotic.
A transaction abort, system crash, or region crash may make your database invalid. Even if none of these occur, applications can read in-process updates of other transactions. The application designer must understand these inconsistencies and plan for them. Other recovery issues to consider when designing your application are:
Conversational Transactions: How conversational transactions can cause problems in a multiuser environment.
See Conversational Transactions and Recovery. See Maintaining Database Integrity. The size of a logical record may be greater than the VSAM block size. Dynamic transaction back-out is used to roll back database updates when a transaction fails. When a record is written to the database, a copy of the original record is written to the recovery file.
This copy, called the before image , identifies the transaction that created it. Marker records indicating the start and end of each transaction that updates the database, as well as any syncpoints, are also written to the recovery file. The recovery file is a circular file, meaning that when the file reaches a predefined maximum size, records are reused starting from the beginning.
Sun MTP also stores in memory an offset into the recovery file for each record written to the database. When a transaction aborts, the software uses these offsets to read the recovery file records associated with the transaction. Each before image associated with the failed transaction is restored to the database.
At this point, all the records that were updated by the failed transaction are backed out and the state of the database is the same as it was before the transaction was executed. After the recovery file back-outs are complete, the software rolls back all updates that the failed transaction made to Temporary Storage for queues that are defined as recoverable in the TST.
It also rolls back updates to intrapartition transient data and all recoverable asynchronous START requests that the failed transaction may have issued. However, such requests do not get scheduled until the transaction issues a syncpoint or completes successfully.
This ensures that all updates made by a failed transaction to recoverable resources VSAM files, temporary storage, intrapartition transient data, and asynchronous START s are rolled back during dynamic transaction back-out. Although the system may pause for a moment while the transaction back-out takes place, other transactions in the system are not affected.
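A minimal model of the before-image mechanism described above, assuming a simple in-memory list in place of the actual circular recovery file format (the function names and account records are invented):

```python
# Minimal model of dynamic transaction back-out: before images are appended
# to a recovery log, and an abort restores them, returning the database to
# its pre-transaction state.
def run_transaction(db, txn_id, updates, recovery_log):
    for key, new_value in updates:
        recovery_log.append((txn_id, key, db.get(key)))  # write before image
        db[key] = new_value

def back_out(db, txn_id, recovery_log):
    for logged_txn, key, before in reversed(recovery_log):
        if logged_txn == txn_id:
            db[key] = before                 # restore the before image

db = {"ACCT1": 100, "ACCT2": 50}
log = []
run_transaction(db, "T1", [("ACCT1", 70), ("ACCT2", 80)], log)
back_out(db, "T1", log)                      # the transaction aborts
print(db)  # -> {'ACCT1': 100, 'ACCT2': 50}
```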
After a system crash, start the region in the normal way. If the VCT was configured with recovery in effect, the recovery procedure described in this section occurs. The region's Recovery Server backs out the database updates of any transactions that were incomplete at the time of a system crash.
If header file information in the recovery file indicates that the system did not end normally, the Recovery Server restores the before images to the database just as it does with a single-transaction abort. However, it backs out all the transactions that were in progress at the time the last record of the recovery file was written. The effect is the same as doing an individual back-out for each transaction that did not complete successfully.
When it encounters temporary storage records, it creates Temporary Storage Blocks just as in the dynamic transaction back-out. As it encounters each start record, it creates an Asynchronous Start Queue entry and sends a message to the start processor to schedule the asynchronous START.
If a system crash occurs while recovery is in effect, recovery must still be in effect when the system is restarted so that recovery can be performed. If you do not want recovery performed, it is not enough to simply turn off the recovery flag in the VCT:
How Do Deadlocks Affect Recovery? A deadlock occurs when two or more transactions are each waiting for a resource that is currently owned by one of the transactions. Because each transaction is waiting for one of the other transactions to release a needed resource, the transactions remain hung unless intervention occurs.
The simplest deadlock involves two transactions and two resources. Transaction 1 cannot continue until Transaction 2 releases Resource B, while Transaction 2 cannot continue until Transaction 1 releases Resource A.
Sun MTP contains special logic to detect deadlock conditions. When it detects a deadlock, it forces one of the transactions to abend. The transaction that abends is the last transaction to enter the group; that is, the one whose request, when added to the others already outstanding, results in the deadlock. When this type of abend occurs, it does not mean that the particular transaction that abended has a design error. It means that the transactions involved in the deadlock, when taken as a group, have a design error.
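One common way to detect such a condition is a wait-for graph; the source does not describe Sun MTP's actual detection algorithm, so the sketch below is a generic illustration of the two-transaction, two-resource case.

```python
# Tiny wait-for graph: an edge A -> B means "A waits for a resource held
# by B". A cycle in the graph is a deadlock.
def has_deadlock(wait_for):
    def reachable(start, target, seen):
        for nxt in wait_for.get(start, ()):
            if nxt == target or (nxt not in seen and
                                 reachable(nxt, target, seen | {nxt})):
                return True
        return False
    return any(reachable(txn, txn, {txn}) for txn in wait_for)

# T1 holds Resource A and waits for B; T2 holds B and waits for A.
print(has_deadlock({"T1": ["T2"], "T2": ["T1"]}))  # -> True
print(has_deadlock({"T1": ["T2"]}))                # -> False
```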
This differs from a pseudo-conversational transaction in which there is no interaction with the user during the course of the transaction. While the user types in a response, the transaction is not active. Using the Sun MTP recovery capability has important design implications for conversational transactions:.
The maximum time span of a transaction becomes unbounded if the transaction requires a response from a user before it completes. The update may appear to the user to be complete, but it may not be. Until the transaction completes, that update is not committed to the database.
If the transaction fails many minutes later, the user may not realize that the update was rolled back. Sun MTP uses a recovery file to store before images of database records. The recovery file provides a rollback capability in the event of an abort.
The before image data is not retained beyond a database commitment or rollback. Each third-party RDBMS has its own log files that provide both database rollback and roll forward capabilities.
The application designer manages database integrity within a particular application implicitly or explicitly. Implicit database actions occur at various points in the execution of an application:.
An implicit database commitment at the successful end of each transaction
An implicit database rollback at the abnormal termination of a transaction
Sun MTP automatically handles the implicit commitment or rollback of a VSAM database, and the RDBMS software manages the implicit commitment or rollback of an RDBMS transaction through a user module. The database administrator must prepare the user module, then bind it with the Transaction Server unikixtran and the Batch Processor unikixvsam.
The user module must be developed in consultation with the application designer to guarantee the consistency of the application. For information about developing user modules, see Chapter.
An application program can request a database commit explicitly.
Here, all RDBMS software that was incorporated into the transaction processor is called to commit its changes to the appropriate database and mark its log file that the transaction has successfully completed. In either case, the operation is only executed on behalf of the data managed by that particular RDBMS. Use this method of explicitly requesting a commitment where there is just one RDBMS in use by the application.
Otherwise, inconsistent application databases can result.