Invoking dpthost.exe corresponds to what happens when you EXEC PGM=ONLINE/BATCH204 when running M204 on the mainframe. The program can perform work itself, or just sit and act as a server for humans or "robot" threads, or both. The following sections cover the various ways you can configure the host, and give some background in various areas. For most programming tasks the default options should be OK.
There is only one program to use, regardless of whether the equivalent Model 204 task would be suited to PGM=ONLINE or PGM=BATCH204.
The host GUI itself should be self-explanatory, except perhaps for the "UnHalt" menu option. This corresponds to replying to the HALT message on the operator's console on MVS, and wakes up user 0 if it is in a HALTed state (see below). In other words this is usually how to bring the system down. When running DPT as a Windows service, without its GUI, issuing "stop service" from Windows Control Panel (or other service management front end) does the same thing.
When the console is shown (see NOGUI mode later), the window title is populated from the system parameter value for SYSNAME. The output windows can be scrolled using the PC's PgUp/PgDn/Home etc. keys, and also with a more mainframe-style F7/8/10/11. By default the display will try to keep up with the latest system activity (i.e. remain at the bottom of the audit trail or sysprint file), but this is annoying if you are trying to copy down an error message, so you can turn it off in the options menu. The GUI appears wherever on the screen you last left it. Often you might run it minimized and not look at the audit trail at all during a run.
Host auxiliary file location
When you start the host application for the first time, a number of auxiliary files and sub-directories will be created if they're not already there. Some of these are output files such as the audit trail, and some are input files which are created empty to serve as a hint to help you configure the system in various ways, when or if you need to.
Two important output files are audit.txt and sysprint.txt (default names), corresponding to CCAAUDIT and CCAPRINT on Model 204. The text written to these mirrors exactly the text you see in the GUI panes (although only the last few thousand lines are shown in the GUI, for obvious memory-usage reasons).
By default, checkpointing will be turned on, which triggers the creation of a file usually called checkpoint.ckp.
All the auxiliary files are created in the directory where dpthost.exe was invoked from. In other words not necessarily the directory where the .exe file resides, but the "current working directory", or what you put in the "start in" box of a Windows shortcut. To run several concurrent copies of the host from the same .exe file on the same machine, it's necessary to use a different working directory for each. As with the client, different "skin colours" can be associated with each copy of the host, to help visually distinguish them.
In all cases lines starting with '*' or '//' are treated as comments.
Parameter values are normally assumed to be the first word after the equals sign, and are also usually uppercased before applying them. To get round these limitations specify them in the same way as you would on the RESET command, as C'ascii-string' or X'hex-value' (numeric parameter) or X'hex-string' (string parameter). Simple quoted strings are also allowed, and mean the same as C'ascii-string'.
For example:
SYSNAME = C'Test System'     This text
RCVOPT=X'03'                 is treated
NUSERS = 100                 as comments
PRSUFFIX = '.txt'            and ignored
*etc.
These notes assume DPT is started in non-service mode (see next section).
The tools/options menu gives you the chance to change the names/locations of the config files described above, plus one or two other items of information that dpthost.exe requires before it can start its internal database server and read parms.ini for the hundreds of remaining parameter values. If the output streams are to go to directories other than the working directory, those directories must exist.
Option notes
dpthost [option=value]...

e.g.

dpthost sysin=myjob.txt parms=batchparms.txt

The accepted parameters are:
It is not possible to specify "regular" parameter values (e.g. MAXBUF, NUSERS etc.) on the command line like M204 allows you to do on the JCL EXEC statement. Note also that there is no SYSOPT parameter on DPT, since all the bit switches it contains are redundant or implemented differently.
In addition to not showing the usual DPT "console" with its view of the audit trail, there are some other implications of NOGUI mode:
Installing and uninstalling services
Setting up DPT as a service requires several options that are easy to forget or mistype, so the tools menu in the regular DPT console contains a handy dialog for installing and uninstalling services. If you use some other means of installing the service(s), remember to specify the command line parameter NOGUI, and probably WORKINGDIR too (services otherwise start in C:\Windows\System32). If the DPT built-in install function is used, these will be added automatically in the correct format.
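If you do install a service by hand, something along the lines of the following sc.exe sketch would apply (the service name, display name, directories and start type are purely illustrative, and the DPT install dialog remains the recommended route):

sc create DPTHost binPath= "C:\DPT\dpthost.exe NOGUI WORKINGDIR=C:\DPT\prod" DisplayName= "DPT Host (prod)" start= auto

Note that sc.exe requires a space after each "keyword=" token, and that the whole command string, DPT parameters included, goes inside the binPath value.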
To use the built-in function, DPT host must be "run as administrator". To view the definition for one of the listed services, select it in the list. Then you can uninstall or modify it if desired. Uninstallation of a service may occasionally only succeed "partially", with the service remaining visible. In such cases the service has been "marked for deletion" and will disappear at some point soon, when Windows gets round to it. This can happen if the service is currently running, or if the Windows services dialog is open.
Service IDs and display names must both be unique. The description can be as long as you like. Each copy of the executable can be associated with several different services, but each must have a different working directory if they will be run at the same time.
Starting and stopping services
When a DPT host service is started it performs everything it would if you ran it "normally", say by double clicking on dpthost.exe in Explorer (see also the following main doc section on starting and stopping DPT).
The command line parameters that you might normally put in a Windows shortcut definition, or batch file, are given when installing the service, so in that sense installing the service is much like defining a shortcut. However, it is made a little more confusing because the Windows Control Panel "start service" screen also allows these parameters to be given again if the service is started manually. DPT chooses to ignore any such manual parameters, and always sticks to the installed ones.
Using the "stop" function in Windows Control Panel, (or closing down Windows itself), invokes the same DPT processing that would be performed normally when using the DPT GUI "UnHalt" function, or issuing the =UNHALT command at a user chevron. In other words user zero wakes up and starts to perform any commands after the HALT in its input stream. If this processing might take a while, you should set the DPT system parameter SRVSTPWT to the expected number of seconds with some wiggle room. This is so that if Windows is shutting down, DPT can tell it to wait rather than possibly terminating the DPT process and leaving files corrupted. The default is 20, which is how long Windows waits by default (see Windows tech notes for registry key WaitToKillServiceTimeout).
DPT does not currently support being "paused" as a service. Logically this might be considered to be similar to issuing an "EOD" command, and is something that could be added later if required.
Viewing the audit trail
In NOGUI and/or service mode DPT still generates its audit file as normal, but the console is not there for you to see messages scrolling past. To look at the audit trail the options are (1) open it in its current state with notepad, or (2) use the new DISPLAY AUDIT command on a user session, which offers more functionality.
For a batch run, place the required commands in sysin.txt, with no HALT at the end. When the system thread finishes working its way through the commands, it will close the application. For an online run (including service mode) leave the HALT command in place, and the system thread waits there to be un-halted, which can be done from the DPT console window, or from a normal user thread with the =UNHALT command, or (in service mode) stopping the service.
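As a hedged illustration of the difference (the file and procedure names here are invented), the two styles of sysin.txt might look like this:

* batch-style sysin.txt - no HALT, so the run ends when the last command finishes
OPEN SALES
INCLUDE NIGHTLY.LOAD
CLOSE SALES

* online-style sysin.txt - user 0 waits at the HALT until un-halted
OPEN SALES
HALT
* anything after the HALT runs during shutdown
CLOSE SALES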
When closing down, the system thread automatically issues a BUMP ALL and then waits for all other user threads to terminate. Therefore you don't need to code these explicitly in the sysin file.
As on M204, the EOD command can be used to prevent any more users from logging on, and users can also be bumped so that their processing breaks at a "convenient moment" and they log off.
The host application sets a return code which can be accessed for example in a DOS batch (xxx.bat) file. The return code is the highest RETCODE set by any message by any user during the run, or the highest parameter value passed to $JOBCODE if that is higher. In the current version there is no distinction between "online" and "batch" type return codes.
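As a rough sketch (the directory, file names and the failure threshold of 8 are assumptions for illustration), a calling .bat file might look like this:

rem nightly.bat - run a DPT batch job and act on its return code
cd /d C:\DPT\nightly
start /wait "" dpthost.exe sysin=nightly.txt parms=batchparms.txt NOGUI
if errorlevel 8 (
    echo DPT job failed, return code %ERRORLEVEL%
    exit /b %ERRORLEVEL%
)
echo DPT job finished OK

The start /wait makes the batch file pause until the host process ends, so ERRORLEVEL reflects the return code described above.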
The default codes associated with messages do not follow a very structured scheme, but a rule of thumb is:
Using Alt+F4, or clicking the main window [X] button, or initiating Windows overall closedown/restart, do not by default automatically close down DPT, but instead present a prompt dialog asking for confirmation. This is a reminder that there may be an impact on active online users or database transactions. The console options screen lets you set a flag to disable this prompt. Note that none of this is relevant in NOGUI and/or service mode (qv).
The same option
IODev 3
IODev3 threads are initiated by defining them in file iodev3.ini. This file is automatically processed at system start-up, assuming NUSERS > 1, and should contain a single line defining each IODev3 thread to be started, for example:
infile_A.txt outfile_A.txt 5
'IODev3 Inputs\B.txt' 'Iodev3 Outputs\B.txt' 5
Each line contains three parameters: the first two are the input and output file to be used, and the third is the number of generations of output file to save (default 0). The first line in an IODev3's input file is taken as the user name. In the current release IODev3 threads may not share an input file, even if the input stream, including user name, is to be exactly the same. If the output is going to a different directory, that directory must exist.
The file names can be quoted if desired (e.g. if they contain spaces). Lines beginning with asterisk are ignored. Any text after the third parameter on a line is also ignored.
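For illustration, a small iodev3.ini might look like this (the file names are invented; note the comment line, the quoting, and the ignored trailing text):

* start two IODev3 threads: input file, output file, generations to save
robot1_in.txt robot1_out.txt 3
'IODev3 Inputs\robot2.txt' 'IODev3 Outputs\robot2.txt' 0   daily extract robot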
IODev 15
This type supports telnet or any other custom socket program, which would then work similarly to an M204 IFDIAL or RCL thread. Most interesting operations like EDIT and full-screen READ/WRITE are disallowed, although UL debugging is OK.
The Windows telnet client (telnet.exe) works OK, if not perfectly. Only two "special" control sequences are understood: GA (go-ahead), which is sent to the client just in case it's turned on, but is not required, and is ignored, in the other direction; and BRK (break), which if sent from the client has the usual M204 "attention" effect - i.e. the same as typing *CANCEL in most cases. The $READINV function operates just the same as $READ on IODev15 - i.e. no attempt is made to turn LOCAL_ECHO off or on at the client. There is no negotiation of telnet operational parameters at any time. Often the defaults you get are OK though, e.g. when you just say:
telnet localhost 13204
Socket threads in general
IODevs 7 and 15 are both socket connections. The host application listens for connections on the port number specified by parameter SYSPORT. This parameter can be reset during system operation if desired, but the new setting will not take effect until the listening thread is restarted, using the =SOCKET command.
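So moving the listener to another port mid-run might, as a sketch, look like the two commands below (the port number is arbitrary, and the exact RESET and =SOCKET operand forms should be checked against the command reference):

RESET SYSPORT 13205
=SOCKET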
The custom DPT client establishes itself as an IODev7 thread by (automatically) responding correctly to an initial challenge by the host. Any other "incorrect" response here makes the host treat the thread as an IODev15. When connecting with e.g. telnet, just press ENTER at this challenge.
BATCH2 threads
You can create a BATCH2-style configuration in various ways. One is to use the DPT client with appropriate config options. Another, possibly more flexible, method might be to use a commercial terminal emulator or telnet client which has a client-side scripting facility, and use it on IODev 15. Cheaper than that would be to write your own simple line-mode socket client.
On Model 204, IODev 29 is an IFDIAL thread, commonly using BATCH2. It is not supported as a separate IODev type on DPT, although the IODEV parameter is resettable, allowing some ability for threads to masquerade as other types.
Daemon threads on DPT - IODev 99
In most respects a daemon thread behaves the same as an IODev 3. It writes its output to a file, and all the default parameter settings are the same. The main difference is that the input to the thread consists of one or more command lines supplied when spawning the daemon, rather than a file as with IODev 3.
Daemon threads can be spawned by any user to perform arbitrary work, and are also used by the DPT host to perform various important housekeeping and background tasks. For example:
Internally a trade-off was made when deciding not to build a DPT internal scheduler. On the plus side, all operating systems DPT is ever likely to run on supply thread-management services, and reinventing the wheel should be avoided unless the available wheels really are unsuitable. On the down side, all processing within DPT must be prepared to be time-sliced at any moment - in other words all threads are permanently "swappable". This does impose some internal overheads compared to Model 204, which can time-slice at the moment of its choosing. Of course, DPT takes care of all this, and the end user need not worry about it unless they want to.
Another big advantage of using OS threading services, which is becoming more important these days, is that multi-user workloads on DPT automatically take advantage of multi-processor machines.
Thread priorities
DPT does have a PRIORITY command like on Model 204, although the meaning of HIGH, NORMAL and LOW is not exactly the same. Since thread scheduling is performed by the operating system, the type of control we can exert is determined by what the OS provides. The following comments are particular to Windows 2000, which is the OS DPT was written on, and probably apply in all the important ways to later Windows as well.
The DPT host as a whole runs with whatever system priority on your PC you assign it - by default the Windows "Normal" priority. Threads within the host run at a priority level relative to that of the host system as a whole. When you use the PRIORITY command to set a thread to HIGH, DPT gives it a one point advantage, and LOW a one point disadvantage. The Windows scheduler works on a 32-point system when dispatching threads, but since most applications run at "Normal", even a small advantage is usually decisive.
Those interested should consult their Windows documentation for full details on this. For example with CPU-bound threads, the Windows 2000 scheduler is quite brutal - those with even slightly lower priorities simply do not get a look in. At all. On the other hand the scheduler is quite generous with even low priority threads that resume after waiting for disk or user IO.
* Warning * As a general rule it's probably best not to mess with thread priorities. In fact the chances are that anything other than the default setting, where all DPT user threads have the same priority, is liable to slow down the very threads that were supposed to speed up, and several others too, because of the implications of resource sharing. If you were unlucky with the exact moment you issued the command you might even disable DPT entirely. One possible reasonable use of PRIORITY would be to downgrade a thread which is known to be grinding CPU and not using any shared resources.
NUSERS has the same meaning as on M204. If any IODev tries to start a session and there are already NUSERS active threads (including user 0 and any PSTs - see elsewhere), the session is rejected. In the case of the IODev3s, if there are more defined than NUSERS, the system does not wait for earlier ones to finish before trying to start the later ones. This means the later ones will most likely fail.
On the plus side, it's all somewhat simpler from a user point of view than Model 204.
However, all this aside, let's say that ideally all structures would be allocated from a general pool representing "all available memory", and we would not have the hassle of deciding in advance the relative importance of, say, buffers compared to user servers, or percent variables versus request line count. On your PC, where there is usually going to be just one user, this is a more realistic aim than in the multi-user melee that is the mainframe. DPT attempts to walk the line between idealistic and pragmatic, and adopts a simpler approach than Model 204 with most of this stuff.
At the system level this means parameters like SPCORE, LENQTBL and NSERVS are not used, and at the user level it means all the UTABLE parameters like LQTBL and LSTBL are also not used. The only memory-related parameters used are MAXBUF, BUFAGE (DPT-specific), LGTBL and LVARTBL (DPT-specific). Anything not controlled by these is requested as required from the operating system. To cater for the (generally infrequent) situations when such requests fail, DPT has a specific error condition that can be caught as the error global value in a User Language subsystem (see later).
In addition, since users are not limited to a fixed-size server, there is little reason to force you to juggle all the UTABLE parameters. However in this case we keep one or two of them. One is LGTBL, which is retained to help rein in requests that might go global crazy. A new parameter added is LVARTBL, which is a simple check on the number of variables of all types in a request (arrays count once per element), and is also there as a sanity check, for example when there are auto-built requests. There is no check on the size of a request in terms of number of executable statements (i.e. emulating LQTBL).
Following on from the above, DPT cannot be used to gauge expected UTABLE usage in code ultimately bound for M204.
On DPT, the MAXBUF parameter has a slightly different meaning in that it refers purely to memory used for buffered database file pages. Model 204 uses its "disk buffer pool" for significantly more than that - there it's really a general purpose pool of memory, and at any given time contains a mixture of database page copies, found sets, sorted sets, TBO logs, and lots of other stuff, in varying amounts. In fact the proportion of the buffer pool filled with actual file pages might often be quite small on M204.
Another difference is that on DPT the amount of real, physical, memory actually used is not fixed, but starts off low and increases as required. MAXBUF represents the maximum, and is not resettable. The real memory used will also, by default, shrink again if the system has been quiet for a while. This is controlled by the BUFAGE parameter.
As with M204, a busy DPT host system uses a LRU (least recently used) scheme to decide which pages to discard and overwrite first when the buffer pool is "full".
The default is 10,000 pages, or 80 Mb, which at the time of writing is quite small by PC standards, so one of the first pieces of parameter customization you might do would be to increase the buffer pool to something more capacious. For example if you have 512Mb of real memory, tripling the default is something that would almost certainly improve the operation of the DPT file system.
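For example, given the figures above (10,000 pages is roughly 80 Mb, i.e. about 8 KB per page), tripling the default on a 512 Mb machine would be a parms.ini line along these lines:

* triple the default buffer pool: 30,000 pages, roughly 240 Mb
MAXBUF = 30000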
The theoretical highest value you would be able to set and still let DPT start up cleanly would be a little less than 2Gb. This is because "Win32" means 31-bit application-usable *virtual* address spaces (ignoring extended addressing for now). However, using a value higher than the quantity of *real* memory available (e.g. 256Mb, 512Mb etc.) would usually not be beneficial. You might think a page read from the Windows swap file would be just as quick as from a file's "home" location, but it doesn't work out that way in practice.
Atypical workloads
Probably the most important kind of jobs in this category are data loads. This subject is discussed more in the DBA guide.
Another time you might deliberately want to deviate from a "general workload" configuration and set MAXBUF higher is if you wanted to monitor the DKRD/DKPR effects of tuning efforts on UL programs destined for a production situation where there will be lots of memory (or more than you've got on your PC anyway). Database accesses that are satisfied from the "buffer pool" will not clock up DKRDs regardless of whether the buffer page in question was in real memory or the Windows swap file. Allowing for this kind of testing is the reason DPT doesn't simply insist on all real memory for buffers.
One time you might want to use a very low MAXBUF setting is in a batch run that performs largely sequential database processing, but otherwise could make good use of memory in other ways, perhaps by doing lots of sorting. In such cases few database pages are reused, so buffering them is not worthwhile, and hogging memory with pointless buffer pages leaves less available for the other work. Model 204 DBAs will be familiar with similar considerations in reorgs and similar types of run.
Another small-buffer-pool scenario would be if many programmers were working on a single host which only had small test data files attached. Then the system would be better off using memory for compiler and debugger data structures.
Finally of course the best idea is always to keep an eye on the statistics at the end of the audit trail, and make adjustments or experiment accordingly.
Memory exhaustion in most situations should be handled fairly well by DPT. TBO will be invoked, if it's turned on, and valid for the current activity (some operations are non-backoutable). A small amount of virtual memory is permanently reserved for TBO use. An active user language request is terminated, without routing to the ON ERROR unit first. After this the error is handled much like a UL run time error, which means the user is either returned immediately to command level, or if a subsystem is in use the user either leaves the subsystem or is routed to the error procedure with a new error global value "MEMORY" - see later. See also the section on user restarts later for more about how database errors are handled.
The following notes assume some familiarity with Model 204 recovery topics.
Taking checkpoints
The actual activity of taking a checkpoint usually does not involve writing updated pages to disk, since this is done by the last updating thread at the end of update units. This can be thought of as similar to the situation when M204's DKUPDTWT parameter is zero (this parameter is not emulated at all).
Taking a checkpoint is therefore usually a simple case of clearing down the checkpoint file and unflagging the updated buffer pages. The approach taken on DPT is that the CHECKPOINT command works synchronously on the issuing user's thread rather than "submitting a request" to the checkpointing pseudo-subtask (daemon). This has the side-benefit of allowing a real-time implied checkpoint to happen during some commands (see later).
If CPTIME is non-zero, there should be a daemon thread running which periodically wakes up and issues a CHECKPOINT command. This is usually much nicer than having to remember to do it yourself. If CPTIME is zero at the start of a run, RESETting it to a non-zero value will spawn a new checkpointing daemon. Setting CPTIME to zero will tell any current checkpointing daemon to log off before taking its next checkpoint. Changes to CPTIME take effect the next time the daemon wakes up. If you want a change to take effect earlier than that, just bump the daemon before issuing RESET. If the daemon abends for some reason you can check the reason with the =SPAWNINFO command, using the special daemon ID PST_C.
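As a sketch of the operations just described (assuming RESET and =SPAWNINFO take the simple operand forms shown; check the command manual entries):

* stop periodic checkpoints: the current daemon logs off before its next checkpoint
RESET CPTIME 0
* start them again at ten-minute intervals: a new daemon is spawned
RESET CPTIME 600
* if a checkpoint daemon has abended, ask why, using the special daemon ID
=SPAWNINFO PST_C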
During a checkpoint, if there are any in-flight updates the daemon will wait up to CPTO seconds for them to finish before giving up. In this event things are similar to Model 204 but not quite the same. What happens is that every end-of-transaction from then on (until a successful checkpoint is taken) causes the updating user to effectively attempt an immediate CHECKPOINT (i.e. no CPTO waiting). The checkpoint daemon itself simply gives up and goes back to sleep for CPTIME seconds.
Going by the descriptions in the M204 manuals, the DPT checkpointing algorithm is certainly much simpler in other respects too. For example during the quiesce period other threads are simply blocked from starting any updates. Also there is no "extended quiesce" period, whatever that is. Simple but effective is our motto.
The CHKABORT command does pretty much what it does on Model 204.
CPTO can be zero (wait for ever) or a custom value of -1 (don't wait at all if the initial attempt fails).
Rollback
DPThost invokes rollback automatically based on the presence of a non-empty checkpoint file and the appropriate RCVOPT bit being set. That is, if you crash the system, then change the parms file to specify RCVOPT=0, recovery will not be invoked next time you start the system.
The RESTART command is therefore redundant. The automatic recovery is the equivalent of "RESTART ROLL BACK". Since all file allocation is dynamic, the NODYNAM option would always be off. You can if you want simulate a run with no RESTART command, either by simply deleting or renaming the old checkpoint file before starting the system, or clicking on the "bypass" button in the rollback phase of recovery, if interactive mode is on.
When dpthost.exe is started, if roll-back looks as if it's required, the system by default takes all the necessary action automatically. Setting the (DPT custom) RCVOPT X'10' bit means the process is interrupted at a handful of key points for the user to make decisions. Note that DPT does not allow CPMAX to be anything but 1 in the current release, so there is no point where you have to choose which checkpoint to roll back to.
Low level details
Full information about the checkpoint logging and rollback processes is perhaps beyond the scope of this document. However, anybody who's interested should email DPT HQ.
One particular point concerns the taking of backups before recovery is attempted, and restore to them if recovery fails. If your files are large, this will take a non-trivial amount of time, and the default in automatic mode is not to do it. However, if you set the custom RCVOPT X'20' bit, automatic recovery will do it, so that everything will be left exactly as it was before starting the failed run.
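Pulling the two custom bits in this section together, a parms.ini sketch might read as follows. The value shown contains only the two bits discussed here, so in practice you would add it to whatever other RCVOPT bits you normally run with (for example the bit that enables roll-back itself, or X'02' for TBO):

* interactive recovery (X'10') plus automatic backup/restore around roll-back (X'20')
RCVOPT = X'30'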
Because of the way DPT checkpointing works, there is in theory no reason that a second rollback attempt should care whether the files in question had been partially rolled back in a previous attempt, and this is why the default is to skip backup/restore. In addition, if recovery fails you may often find yourself reverting to a complete older set of file backups, in which case reinstating the (broken) pre-rollback files will just be a waste of time anyway.
Transaction boundaries
No distinction is made on DPT between a "transaction" and an "update unit". In this documentation the two terms are used interchangeably, as the mood suits.
The rules governing transaction boundaries are very similar to Model 204's, both in the obvious cases (COMMIT, BACKOUT statements), and the less obvious (return to command level or end-of-procedure when APSY auto-commit is on).
All file-updating commands include an implied commit when they finish. In addition the following commands commit any existing update still open at the moment they are invoked. This list is an amalgamation of the lists in the M204 File Manager's Guide pp 19-17 onwards. Note that because of the differences in procedure handling on DPT the procedure-related commands are irrelevant.
ALLOCATE
BROADCAST FILE
CHECKPOINT
CLOSE
CREATE
DEFINE FIELD
DELETE FIELD
FREE
INCREASE
INITIALIZE
REDEFINE FIELD
RENAME FIELD
RESET (file parameter)
Plus these DPT custom commands:
=BUFFTIDY
Restarts
Like Model 204, DPT will terminate a user's session if there are serious file updating problems. DPT's processing may be a little simpler but the following rules of thumb apply:
TBO
The transaction backout mechanism works similarly to M204, although it is turned on and off at system level, via a new X'02' bit in the system RCVOPT parameter, thus making FRCVOPT/X'08' redundant. M204 users often run TBO at system level anyway, so this is hopefully no great loss. It not only makes things simpler to use, but has made it much easier to implement internally to DPT.
There may be some other slight differences in the exact behaviour of TBO in some cases, for example in the exact order of fields on records after backout of DELETE EACH OCCURRENCE. Whatever happens should however be "reasonable", so that for example occurrences will always reappear in the correct relative order.
The behaviour of MSTRADD and MSTRDEL during TBO is the same as on M204 (when backing out a record deletion the same slot has to be used for the primary extent, to ensure invisible fields are not "lost"). Record numbers of deleted records are in fact the only items tracked in the constraints log - in other cases we just accept the possibility of TBO failure.
To cater for cases where TBO is invoked because of memory problems somewhere, the system keeps a small amount of memory permanently reserved for use purely by TBO. This can be thought of as similar to the DPGSRES mechanism, which is also used. In other words at the start of a backout the reserved memory is released for use and all files are told that they can use the DPGSRES pages, typically for re-inserting deleted b-tree values. After backout has finished both restrictions are reinstated (if possible).
The same set of activities can be backed out on DPT as on M204, except for procedure updates, which are completely outside the recovery mechanisms.
No FRCVOPT parameter
All recovery-related activity is enabled and disabled at system level, not file level, so the relevant FRCVOPT parameter bits are not emulated (X'80', X'40', X'20', X'04', X'02'). In addition, the X'01' bit (completely apply all updates) is irrelevant since there is no roll forward.
The one remaining bit (X'10') is also not emulated. It is always effectively set, disallowing discontinuities. Aside from removing the need for FRCVOPT, it also means we can do away with the internal ENQCTL mechanism too! However, given this situation, the only way to enforce it is to control the FREE command. If a file has been updated but not checkpointed, FREE is not necessarily allowed - the user is prompted to take a checkpoint as part of the FREE processing. Losing the possibility of discontinuities seemed a small price to pay for a nice simplification to the system. In fact this trade-off seemed so convenient internally that one or two other commands (e.g. ALLOCATE, CREATE) also involve a "hidden" checkpoint.
Just an extra little note about this parameter.
Force-commit in "batch update mode"
In the normal "online" course of events, the User Language COMMIT statement, or any other transaction-ending event for that matter, causes these things to happen:
When running with TBO and checkpointing turned off (which is what is meant by "batch update mode"), the disk writes would usually have little purpose, since the LRU algorithm writes pages out if it has to. Forcing it explicitly usually just means physical disk writes happen many times for some pages when once would have sufficed. By default therefore, the COMMIT statement is inactive, but the RCVOPT X'04' bit allows you to re-enable it if for some reason you really want to incur the (often huge) overhead.
Note that this issue only relates to the User Language COMMIT statement - all other transaction-ending events write pages as normal. For example closing the file. Note also that this bit switch only has any effect if the X'01' and X'02' bits are off. Otherwise COMMIT always has its full functionality. Finally note that on DPT "batch" need not mean that NUSERS is 1. You might often want to log on and use MONITOR commands to check on the progress of a large update.
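So a "batch update mode" run that still wants a working COMMIT statement might, as a sketch, use a parms.ini line like this (the X'01' and X'02' bits are left off, as the note above requires):

* checkpointing and TBO off, but keep the UL COMMIT statement active
RCVOPT = X'04'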
The COMMIT RELEASE statement by default functions as RELEASE ALL RECORDS under batch update conditions.
DPT has no pre-compiled procedures, at least in the initial release, so APSY consists of not much more than its driver functionality. Because compilations are not saved, subsystems have no need to place enqueues on files or groups, and following on from that there is no need to actually start or stop subsystems! Once they are defined, they are available for use. Initialization procedures can simply be included at the time the subsystem would normally have been started.
During program development the fact that procedures are not precompiled can be useful because you don't have to leave the subsystem to make changes to them. This is similar to the situation on Model 204 when using TEST DEBUG.
A sometimes-annoying feature of Model 204 is that files are closed when leaving a subsystem. This is optional on DPT, and is controlled by the CSTFLAGS parameter X'0100' bit.
There is no SUBSYSMGMT, and subsystems are created using a new CREATE SUBSYS command, and deleted with DELETE SUBSYS. Subsystems must be recreated in each run, as there is no CCASYS-equivalent permanent storage file. This mirrors the treatment of groups, where there is no equivalent of CCAGRP, and also like groups it would be usual for user 0 to define subsystems as the system starts up.
The different values of the error global are somewhat fewer in number than on M204, as follows. If the subsystem has no error global defined, no global is set, but an error procedure may still be invoked.
After the error procedure, subsystem processing continues with NEXT-style routing as normal, except with bumps and restarts (CAN/SFT/HRD). In those cases the error procedure is followed by leaving the subsystem and thread termination. If there is no error procedure defined when one is required, the user simply leaves the subsystem. Any kind of error encountered whilst the error procedure is being processed also results in the user leaving the subsystem.
From DPT version 2.0 AUTOSYS is supported for interactive thread types (IODev 15 and IODev 7). If the AUTOSYS parameter is defined, all users connecting on these IODevs go directly into the named subsystem after the logon exchange, without getting a command prompt. If the auto-logout option is defined for this subsystem, they will also not get a command prompt when they leave the subsystem, except if $SUBSYS is called with the correct option to disable auto-logout.
If the DPT client (IODev7) has some updated procedures which require synchronization at logon time, this is skipped if there is an AUTOSYS. They will get synchronized the next time the client connects and there is no AUTOSYS. This is because synchronizing procedures requires the command line.
In some cases, for example when setting up DPT to run in a BATCH2 "robot" configuration, a client mini-script would be more appropriate. Create a mini-script with a single line containing the name of the desired subsystem.
DPT's initial releases had no support for any of the access-control hooks that are present in many Model 204 features. Wherever an access level was required DPT simply ignored the supplied ID/password and allowed full access, or when reporting available access levels, DPT reported "full access is allowed".
Starting with Version 2.16 DPT will try to make slow piece-by-piece progress towards a better level of support. This section contains a summary of the access-control features currently supported, with links to the appropriate command manual entries etc.
User IDs and access level values are stored in the file unencrypted. However, passwords are represented by an SHA1 hash value, and there is also an SHA1 checksum to prevent tampering with the file while the DPT host is not running.
The categories of stats are also fewer, and this makes things less confusing in practice. Essentially there are "normal" stats and "file" stats. Normal stats, such as AUDIT and CPU, are collected at system, user and since-last level. File stats, such as BADD and DKPR, are collected at those 3 levels, plus for each file. No parameter settings are required to turn on and off the collection of any statistics. There are no partial system statistics.
The set of activities which create since-last groups is smaller than on M204. EVAL and CMPL mean the same thing, and there are INIT, CREA and BLDX statistics for file processing. The $SLSTATS function does the same thing.
Stats appear on the audit trail with line type STT. The REQUEST option on the T[IME] command simply prints a saved copy of the most recent "LAST=EVAL" or "LAST=CMPL" statistics line that was sent to the audit trail.
The DPT specific =VSTAT command works like normal VIEW but can be used to show individual stat values, although normally the MONITOR STATS command is more useful. In addition, there is a custom option on the $VIEW function which allows you to use it to see file statistics if you want.
All statistics are unsigned 64 bit integers, which means they would theoretically "clock" at 2 to the power 64, or roughly 18,400,000,000,000,000,000 decimal.
Since DPT has no internal scheduler, nothing to do with scheduler queues is shown. On the other hand the operating system thread ID for each user IS shown, and can help tie together the information from different displays.
Hardware
Operating system
DPT is basically a 32 bit Windows application which means in theory it will run on all Windows back to '95.
Strictly speaking DPT does not need to be "installed". If you have the two executables (dpthost.exe and dptclient.exe) you can put them in a directory and execute them and they'll work. Add in the documentation ".html" files so F1 help works in the applications and you're good to go. Different DPT version files can be placed in different directories, and you can run them in parallel, even make them talk to each other.
It's the same download for casual and business users
Much of the discussion below concerns the downloadable installer package which automates the copying of the executables onto your machine, and adds configuration files to set DPT up in a suitable way for new users to get started immediately, e.g. via desktop shortcuts. The installer runs a wizard which creates a Windows application called "DPT", in the usual sense you would understand that with e.g. Photoshop, Skype, etc., where "DPT" appears on your Start menu and you can uninstall it via Windows Control Panel.
It is understood however that with software like DPT, serious users might often want several versions active at the same time on the same machine. Such custom installations must be created by hand, using whatever system management and version control methods pertain at your company. The installer wizard should then be thought of really as just a way to unzip the DPT files onto your machine, and the Windows application "DPT" that gets installed could really be called "DPT Demo Configuration". Calling it "DPT" is a compromise to keep the download simple (same one for everybody) and satisfy both casual users who'll never touch any of the configuration settings at all, and business users/system managers who definitely will, and might uninstall the installed demo system as soon as they've taken a copy of the exe files and docs.
More about the installer wizard
In addition to the executables, the wizard offers a variety of other choices. The default "full" install gives a selection of example code like web server scripts and User Language applications, as well as system configuration files to set up the demo system. A "compact" install only copies the executables and documentation, which is useful with most DPT releases where there are no changes to files like SYSPROC and parms.ini, meaning any customizations you've made can be retained.
If DPT is already installed, the wizard by default assumes you're going to put the new version in the same directory as before, which is the recommended thing to do. You can also choose to have it make automatic backup copies of any existing files that get replaced, with any such backups being placed in the "Backup" subdirectory.
Uninstallation
When the installer wizard runs it creates an uninstaller program and stores it in the DPT directory. This uninstaller program can be invoked directly, or via Windows Control Panel ("remove DPT"), which just does the same thing indirectly. The uninstaller removes DPT as completely as it can, as follows:
Falling back to a previous version
This section assumes casual use where you only have the one version of DPT on your machine. It is assumed "serious" users would have a variety of old version configurations set up that they could switch to using whatever local methods they have for managing them, as mentioned above.
So, a previous version of the installed "DPT" demo application can be installed over the top of newer one with no problems, just by running the older installer package if you still have it. Or you could uninstall the current one completely first and then do a fresh install of the older one. If you don't have the old installer but you took backups when installing the newer version, it's also OK to just promote some or all of the files from the Backup directory, and it won't mess up any later runs of the installer or the uninstaller.
Creating custom DPT configurations
For casual users who only need a single DPT configuration it makes sense to just customize the config files in the install directory, cribbing from the config files for the demo system as required. The same single installed copy of the .exe files and documentation can also be used to run as many different configurations as desired, by creating (manually) a different working directory for each which contains shortcuts to the master .exes plus any specific configuration files. This can be thought of as a halfway-house between "casual" and "business" usage, since you still retain the single installed Windows application called "DPT" as created by the wizard, but can have several configurations for different tasks.
It is not recommended to run the install script again giving a different directory each time. It works up to a point, but Control Panel gets confused about which was the official installation of "DPT", and you have extra version control work to do when installing new DPT versions later.
System managers in a business setting will probably want to create different levels of indirection so that DPT versions and configuration files are managed and version-controlled separately. It's all up to you how you do that.
(See also "running DPT as a Windows service")
When are administrator privileges required to install or run DPT?
On pre-Vista operating systems, never. On Vista it depends on whether you install into "C:\Program Files" or not, and to a lesser extent on how you're going to use DPT.
For DPT versions 2.26 and before, the install wizard defaulted to "C:\Program Files\DPT", but now the default is just "C:\DPT", because that makes things simpler for casual users while not making anything harder for business users who'll be configuring things by hand anyway. The change is to work around the feature of Windows Vista where the "C:\Program Files" directory is treated as a special case, with administrator privileges required to modify any of its contents. This Vista feature complicates things because it means regular (non-administrator) Windows accounts can't use an application effectively unless it's configured to only access data in other directories, rather than the common and simple previous convention that a program's default "working directory" is the one containing the .exe file.
You can get the DPT install wizard to set things up in "C:\Program Files" on Vista, but it requires extra attention when running both the installer and DPT itself, unless you have turned UAC off entirely on your PC (and who would blame you?). Firstly, the installer needs to be "run as administrator" to complete successfully (it can normally be run under any account). Then DPT itself either has to be "run as administrator" to avoid triggering the dreaded "file virtualization", or configured to not touch any data locally. This means using appropriate fully-qualified ALLOCATE commands within applications, and modifying the DPT shortcut properties for "working directory" and/or command line options.
So in summary, if you want an easy life take the default install location of "C:\DPT" and don't give it another thought. In a more strictly controlled business environment still take the default, then import the files into your local version control systems etc. or whatever you have to do, probably involving "C:\Program Files" somewhere along the line. Then run the DPT uninstall program if you don't want to keep the demo system hanging around.
The code table issue of course also applies to all other characters that sometimes take different codes. The "not" sign (¬) in User Language is X'AC' on the machine where DPT is being coded but that may not be standard. (See V1.2 fix on this subject).
Another issue that might cause obscure problems concerns PC "control characters". If the conversion table used during transfer to the PC maps any characters to ASCII codes X'00' - X'20' the DPT host may ignore them or treat them as spaces. Some more details on this are given in the IDE user guide. If your procedures have names containing strange characters, this is a show-stopper and DPT will not be able to access them.
Apart from the ASCII/EBCDIC conversion issues, the basic reorg method of unloading procedure files with "D (LABEL) ALL" and feeding the resulting output into DPT should transfer procedures successfully.
User Language
The User Language compiler by default builds stub "quads" to represent unsupported Model 204 statements, resulting in run-time messages which at least allow the surrounding logic to be coded and tested. Hopefully this will prove usable.
Commands
Commands are in many cases emulated less accurately than User Language. This is for the good reason that DPT is operating in a different environment, and standard Model 204 command options relating to mainframe issues would be meaningless or confusing. Despite the good reason, this may still be a major problem for existing code. By default, unsupported M204 commands are ignored with an informational message, which means that they do not break code. However, in many cases what is really required is to issue a DPT command or a DPT variation of a 204 command, and here the only solution is to change the code to make it run.
It has been suggested that all non-standard commands should have names prefixed with asterisks so that they can be left in when code is moved to the mainframe. Another idea would be to translate unsupported M204 commands and options into their nearest PC equivalents, allowing fully M204-compatible code to at least try to run correctly. These are ideas that will be investigated in future DPT releases.
Message codes
The message numbers and text on DPT bear no relation to the equivalent messages on Model 204. This not only means that applications issuing MSGCTL commands will not work as expected, but that UL code which parses message codes out of $ERRMSG or $FSTERR will be more than a little confused. Who would do such a thing anyway?
Site specific features
See FAQ item...
The bottom line
Any Model 204 feature could theoretically be added to DPT, but some would obviously be more work to build in than others. Therefore some applications will be likely candidates to be runnable on DPT in the near future, and others will perhaps never be runnable. This is something we will just have to accept.
More complex host configurations are covered in the DPT installation chapter, and configuration of the developer front end is covered in a separate document.
In some situations the host may have failed to open the specified port (SYSPORT parameter for terminal/DPT client connections and WEBPORT parameter for browser connections), and this will then mean the client can't connect. Socket open failure can be seen as an audit trail message early in the run, and might be caused by security settings on the host machine. An example is when running on Linux/Wine as a non-root user and a port in the range 0-1023 is specified.
MSGCTL DPT.xxxx NOAUDIT

where xxxx is the message number that you don't like. Insert further similar lines for any other troublesome messages.
In extreme cases (perhaps the host will not allow any more incoming connections for some reason) you can always use Windows Task Manager to end the host process. If checkpointing was turned on in the run that got cancelled, any in-flight updates will be backed out next time you start the host. If checkpointing was not on, the files being updated will be left "physically inconsistent" and will need to be mended by hand. Apart from that there are no other bad things that can happen by killing dpthost.exe in this way.