This is file: 01_96QA.TXT

This file contains questions asked of the Driver Development Support Center
(DDSC) and the answers given to that particular question.

Each entry starts with a header line that has a KEY WORD (or words) included
to allow the reader to "search" on KEY WORDS of particular interest.  Each KEY
WORD starts with the exclamation point character (!).  This allows for search
arguments that will limit matches to the header lines.

The following KEY WORDS are supported in this file.

KEYWORD                      DEVICES/TYPES OF QUESTIONS
===========   ==========================================================
!BASE         Loader, Memory Management, Strategy/Architecture
!I/O          Serial/Parallel, Pointing Device, Trackball, Keyboard, Pen
!MULTIMEDIA   Motion, Sound
!NETWORK      LAN
!OTHER        PCMCIA, APM, Miscellaneous
!PRINT        Printer, Scanner
!STORAGE      DASD, SCSI, Tape, CD-ROM, ASPI, IFS
!VIDEO        VGA, SVGA, XGA, 8514, Display
**********    Entry separator (10 *'s)
==============================================================================




!OTHER__________________________________**********
QUESTION: (Ref: HT3)
I want to build my PDD with Watcom 10.0 C++.

I have the example driver from the DUDE and followed it to set up my build
commands, and my driver compiles and links OK.  I then tried to run MAPSYM so
I could use the kernel debugger (or better yet Periscope's debugger), but the
MAP file generated by WLINK is in a different format; MAPSYM seems to be
looking for the string "Public by value" but is not finding it.

If I try the old Microsoft linker it says the OBJ files are invalid format.

Is there any way to get a MAPSYM-compatible MAP file from WLINK or MS
link-compatible OBJ files from WPP, or otherwise debug a driver built with
Watcom?


ANSWER:
Samples using the Watcom compiler with the Microsoft linker are available on
DevCon DDK 2 (MAD16 & PCMCIA clsample).  A utility is also available that
converts a .MAP file produced by Watcom WLINK into a format that MAPSYM can
use to generate a .SYM file.

!OTHER_______**********
QUESTION: (Ref: HS9)
We are failing on a call to the devhelp routine LockSeg.

We allocate memory on the stack in a DLL for the DosDevIoctl parameter and
data buffers.  Inside the device driver we read these buffers and verify that
the data we read is valid data.  This works; data is correct in both buffers.
We then call LockSeg; the selector is 157, type = 1 (long term, any memory),
wait = 0 (wait until available).  This call fails.  We then call LockSeg again
(no change to the parameters passed).  This call passes.

Why does the first LockSeg fail and the second LockSeg pass?

ANSWER:
We do not have any DevHlp function called LockSeg.  You must be using Steve
Mastrianni's library.

However, you can get the real return code from the DevHlp call as follows:

In the kernel debugger, step into the LockSeg code.  You should find the
following ASM code:

   MOV   DL,  DevHlp_Lock            ; setting Lock function code
   CALL  [Device_Help ]              ; calling DevHlp function entry point

 ==> At this point just use "p" as the command

Control will return from the DevHlp function.  If the DevHlp function fails,
the Carry flag is set and AX contains the real error code; if it succeeds, the
Carry flag is clear.  The error codes are defined in DDK\H\BSEERR.H.

Possible error codes include:
170 - Segment unavailable
6   - Invalid handle

If you are allocating the information on the stack, use a DosAllocMem call
instead; this is required for DosDevIOCtl calls.  You could use the OBJ_TILE
option with DosAllocMem, which ensures the allocated memory object lies in
the first 512MB of virtual address space.

Alternatively you could also try DevHlp_VMLock.



!MULTIMEDIA_____________________**********
QUESTION: (Ref: IB7)
The card is a combination SVGA display and image capture board.  The design
is to use the SAME memory for both the SVGA display and the
image captured/overlayed.  This is a problem as far as I can see.

The APIs assume that a KeyColor is used to determine where to overlay the
video into the output stream.  The card has a separate memory block
which is a bit field as to whether to use the overlay live/capture or the SVGA
image.


1) We would like the VSD written entirely at ring 3 and are willing to
forgo the streaming protocol.  Are there any obvious pitfalls that I should be
concerned about in this sort of methodology?  At some point I suspect that I
will want to perform streaming, but not at this stage.

Potential problem 1) Generally slow behavior.

(Based upon my Targa+ driver, it LOOKS like there should be no problems, but
there will not be a file handle for the MMOS/2 subsystem directly.  It will
have one indirectly from their support DLL.)

2) When is the key color written?
Solution 2) Monitor the whole display for the key color to use and when
found, set the bit image.  This would be very slow and ugly.
Additionally, I would have to monitor the PM message queue to
determine whether the window has been covered.

3) Is there some mechanism (as in Windows) by which a callback can be set up
to be
invoked whenever the video painting region changes so that I could change the
board's bit map?

4) Can I suppress the painting of the video keyer color?
I can use the destination rectangle to determine where to punch my hole in the
screen.  I think if I try this and the window is partly covered, I will have a
video mess.  Integral is willing to allow a messy display (covering windows
get trashed) if it will work most of the time.

As stated, the board does NOT use color keying AT ALL to determine where to
capture/display.


ANSWER:
Good news and bad news!  The bad news is that the MMPM/2 architecture/code
does not support video overlay via a shared frame buffer.  The good news is
that we are working on it, and it does not appear to be a big change.  The
changes will involve the MCD passing the list of visible rectangles for the
window to the VSD as part of the VSD_SETVIDEORECT command.  The MCD already
uses the visible rectangles for software video monitoring, so passing them to
the VSD should be fairly simple.  Some shared frame buffer devices may also
require us to allocate offscreen VRAM from the display driver through the
ENDIVE interface.  In answer to your specific questions:

1) Writing the driver entirely at Ring 3.  This shouldn't be a problem other
than the ones you noted.  Currently the MCD expects that any card that can
monitor can also record, but this is also being fixed as we speak.

2) When is the key color written?  Currently the key color is only painted
when the window is uncovered or resized.  As part of the shared frame buffer
updates to our MCD, we expect that we will not paint the monitor window for
cards that use clipping rectangles instead of a key color.  So with the new
updates, when the window is covered or uncovered the MCD will skip the window
paint and call the VSD with the new list of visible rectangles.

3) Callback function for window visible regions.  Yes, OS/2 has such a
function.  The MCD uses the callback today and will now report it to the VSD.

4) Can key color painting be suppressed?  Not at the present time, but we are
fixing this as noted above.  Currently it looks like we will be defining a new
card type that acts like overlay (inlay) except that the key color is not
painted, and the VSD_SETVIDEORECT call from the MCD to the VSD will have a
visible rectangle list appended to it.

Additionally, you can use PM_ED_HOOKS to intercept the dispatching of Gre
functions for monitoring or other purposes.  You can hook the functions
GreDeath and GreResurrection; whenever GreResurrection is called, restore the
screen to the PM interface.
Refer: 1. OS2_PM_DRV_ENABLE : Subfunction 0Ch (PM_ED_HOOKS), and GreDeath and
       GreResurrection (mandatory simulated Gre functions), in the
       Presentation Driver reference.
       2. Article "Monitoring Display Driver Interface Calls" in
       DevCon News Vol 6.


!PRINTER________________________________**********
QUESTION: (Ref: IF9)
I want to develop a new 32-bit printer driver that works with both OS/2 2.11
and Warp.  Which sample code do I start with- Minidriver 1, Minidriver 2, or
the OMNI code?  I noticed that the OMNI code is not on the latest DDK.  Where
can I get it (if that is the correct code to start with)?

ANSWER:
There are no plans for the OMNI code to be included in the DDK.  MiniDriver 1
code is available on DevCon DDK 2.  It is a good starting point for a 32-bit
printer driver that works in both OS/2 2.11 and Warp.  The MiniDriver 2 code
works only in Warp (GRE version 220).
Also, download MINIDRV1.TXT from the MAIN file area.

!PCMCIA__________________________________**********
QUESTION: (Ref: HR8)
I do NOT require immediate action for my specific problem.  This report
concerns the handling of an error condition by OS/2's implementation of PCMCIA
Card Services.  Proper coding in my driver allows me to avoid this error
condition, but Card Services should handle it nonetheless.

From the current PCMCIA Card Services spec (yes, I know OS/2 uses CS level
2.0, but I don't think this detail has changed), the criterion for the
starting address is that it be "on a boundary that is a multiple of the
number of contiguous ports requested rounded up to the nearest power of two."

During my client driver testing, I inadvertently passed a starting address of
0x108 with a range of 16 bytes to Request_IO.  The result was bizarre address
decoding at the socket.  Changing the starting address to a correctly aligned
value eliminated the problem.

I think that OS/2 PCMCIA Card Services should flag a bad starting address and
return a BAD_BASE error (0x03) status on the Request_IO call.

Note that this problem will only occur if the client specifies the base
address going into the Request_IO call.  If the input value is zero, Card
Services assigns the base address itself and returns a properly aligned value.


ANSWER:
The APAR PJ19355 quotes from an erroneous PCMCIA 2.0 Card Services document.
The Card Services driver is coded per specifications 2.01.  Please refer to
Card Services specifications 2.01 document.  According to the document, if
multiple ranges are being requested in a RequestIO() call, the Base Port field
must be non-zero.  See 5.44 RequestIO (1FH), page 5-70, of the Card Services
Specifications (2.01).  Any validation of the Base Port address must be done
by the caller prior to issuing the call (this is not made clear in the
document, but it is how development has interpreted the design).



!MULTIMEDIA__________________________________**********
QUESTION: (Ref: IA3)
.AVI plays over Win/OS2 fullscreen and wipes out DOS fullscreen.  We have seen
this problem with other SVGA adapters/drivers we have tried.  Our quick and
dirty solution is to check the foreground-session flag set by
GreDeath/GreResurrection when AcquireFB is called, and return DEVESC_ERROR to
the calling application if PM is not in the foreground.  Is there a problem
with this approach?  If so, what is the correct fix?  It seems to me the VDDs
and BVHs should somehow be handling this case.  Any ideas?

ANSWER:
A display driver supporting software motion video (such as playing .AVI
files) must support the multimedia hook functions:

    DEVESC_ACQUIREFB
    DEVESC_DEACQUIREFB
    DEVESC_SWITCHBANK

The Death and Resurrection functions are used when switching to and from
full-screen DOS or OS/2 sessions.  The Death function handles the switching
of PM into the background, and the Resurrection function performs the inverse
task.
You are required to implement these functions in your PM display driver.

Ref: Chapter on "S3 Display driver", section "Multimedia hooks" in "Display
     Device Driver Reference for OS/2" on DevCon DDK V2.0.

You can also refer to the software motion video DevEscape functions in the
source code:

    \ddkx86\src\pmvideo\s3tiger\s3qesc.c in DevCon DDK V2.0
        or
    \ddk\src\pmvideo\32bit\eddqesc.c in DDK V1.2


!OTHER___________________________________**********
QUESTION: (Ref: IE8)
Could you please help me with the following problem:
I am currently writing a VDD for a network application.  One of this VDD's
functions is to provide the DOS/WIN application with a memory buffer.  I have
allocated this buffer with the VDHAllocMem function, but that function
returns a FLAT pointer.  I have been trying a lot of macros to convert this
pointer to a pointer usable by the DOS/WIN application, but what I end up
with is not the pointer I got from VDHAllocMem.  Could you please provide a
sequence of steps to pass the FLAT pointer (allocated with VDHAllocMem) to
the DOS/WIN application and then to restore its value in order to free it?

ANSWER:
1. For conversion of Linear Address to Sel:Offset you could invoke the
DosSelToFlat or DosFlatToSel macros (available in DDK).

2. Pointers in a VDM:  DOS applications running in a VDM utilize real mode
addressing.  A 20-bit real mode address in the segment:offset form can refer
to a physical address within the VDM's one megabyte address space.  If the VDM
makes an IOCTL call to your device driver with pointers in the private data
and/or parameter buffers, the driver must take an extra step to ensure the
pointers are converted correctly.  The driver checks the TypeProcess variable
in the local infoseg structure to determine if the application is a VDM
application.

If it is a DOS application, the driver allocates a GDT selector and converts
the segment:offset address to a VDM-relative physical address by shifting the
segment left 4 bits and adding in the offset.  This is the same way the
physical address is calculated in real mode for a real-mode application.  The
driver then calls LinToGDTSelector with the 20-bit physical address of the VDM
application's buffer and/or parameter address.  This call maps the 20-bit
physical address to the caller's address using a GDT selector which can be
accessed at kernel or interrupt time.  The selector should be released by a
call to FreeGDTSelector when the driver is finished with it.  It is important
to note that normally, LinToGDTSelector requires a 32-bit linear address and
not a 20-bit physical address.

3. You could also use VDHMapPages Virtual DevHelp call to map the linear
address region in the V86 address space.

Refer "Virtual Device Driver Reference", Chapter 6 "C Language Virtual DevHlp
Services", Section on "VDHMapPages".


!STORAGE__________________________________**********
QUESTION: (Ref: IA6)
I am using DEBUGO.EXE on the Pentium connected to a clone 386SX-16
with two 40mb drives.  It has 5mb of memory.  I have a piece of code loaded by
the boot sequence before the OS/2 boot manager boot record is loaded.  It
handles INT 13h in a very limited way for this test.  I need to be able to
locate that code, which is hidden about 12kb below the top of base memory,
from the Filter's initialization code.  It would be nice to be able to access
it
during processing of the message that tells the BASEDEVs that access to the
physical drives will switch to them.

I cannot get DEBUGO to cause the Kernel Debugger to stop in real mode by
sending an 'r', nor will it stop in response to a control-c in real mode.  I
cannot locate my piece of real mode code when I can get control.  I have used
'%%' and '&' to dump all of memory in those locations and been unsuccessful.
Do any OS/2 programs or components access the fixed disk drives without
going through any filters?  Should we take control of the fixed disk drives or
just hook our filter into the appropriate place?  Are there any IOCtl commands
that are used for access to the drives that are not documented?

Currently our test only messes with the master boot record, but the full
product will encrypt various partitions.  If anyone can bypass our filter,
then the system will probably become corrupted.


ANSWER:
You are unable to see your transient piece of code from DEBUGO when the
filter was doing its INIT processing.  OS/2 probably rewrites the master
boot record and eliminates yours.Your filter can block the write from
OS/2 FDISK.

!OTHER__________________________________**********
QUESTION: (Ref: IE7)
I am writing the program that has to redirect the data stream from parallel
port to my device (really to the network).  I found that the character monitor
provides this possibility.  But there is no info (samples, description) about
DosMon commands, because those commands are unchanged since OS/2 1.3.  All
the books and docs we have refer to OS/2 1.3 books, which we do not have.  I
didn't
find any description of the character monitor commands in the IBM C Set++
references.  There is also no information in the Toolkit.  Only the DDK
provides some info, but this is not enough for the application layer.  My
working environments are:  OS/2 V2.1/3.0, IBM C Set++, DDK, Toolkit 2.1.

I have found the DosMon prototypes in the header files and found out that the
monitor functions are placed in MONCALLS.DLL.  I have tried to load this DLL
(successfully), get the procedure addresses (successfully), and call them
(failure).  I didn't find any library that provides the monitor command
definitions and can be linked with my program.

The questions are:
- What do I have to do in order to use the character monitors under OS/2 v2.1
and higher?
- Can anybody provide pointers to samples and documentation?
- Can I use IBM C++ (it is 32-bit; is the library 32-bit as well)?
- What libraries have to be involved in the link process?
(I have tried DOSCALLS, OS2286, OS2386 but to no avail).


ANSWER:
1) Refer to the "Physical Device Driver Reference" chapter on "Character
Device Monitors" for details regarding device monitors.

2) There is a sample monitor available on the diskette included with the book
"Writing OS/2 2.1 Device Drivers in C, 2nd Edition", by Steven J. Mastrianni.

3) Yes; the sample mentioned above uses IBM CSet++.

4) The Dos16Monxxx calls are available in os2286.lib and os2386.lib.  The
sample monitor uses the icc compiler and needs os2386.lib and dde4mbs.lib.

!I/O__________________________________**********
QUESTION: (Ref: J47)
I'm trying to write a driver for an enhanced keyboard clone with an
embedded phone.  The keyboard uses non-standard scancodes to represent
phone events like ring, off-hook, and on-hook.  Non-standard scancodes
are also used to represent dialed numbers.  The scancodes used are DC-DF and
E2-EB.

When kbdbase.sys delivers these scancodes to the SQDD at PutInSQB in
kbsubs.asm, the SQDD refuses the keystroke.  I need to "see" these
keystrokes in PM, if not as a WM_CHAR, then as something else that
will be globally available like in the PM input hook.

I am perfectly willing to transform these keystrokes into something
acceptable to the SQDD from within kbdbase.sys.  Luckily, all of the
keystrokes that I'm looking for come through the XXKey routine in
kbdxlat.asm.  So, there would be an economical place to trap and
transform the keystroke.

Before I do that, I would need to know what SQDD really wants in the
way of valid keystroke data from kbdbase.sys and I would need to know
what has to be set so that I could identify my translated keystrokes
from within PM.  For example, what can I set that would show through
in the fsflags of the WM_CHAR message.

Finally, all of the above stems from my flailing around in the kbdbase
code.  If there is an easier/better way to get these keystrokes into
PM, boy I would sure like to hear about it!


ANSWER:
1. If there is no matching make code, the system may discard the key.

The KBD DD passes the scan code to the SingleQ driver to pass on to the PM
application.  If both the make and break codes are not present, the SingleQ
driver ignores the packet.  Ensure that you are sending both the make and
break packets to the SingleQ driver.

2. You could write a device driver for your Device 2000 which could get the
scan codes.  You could have a device monitor which is registered with the
CompuPhone driver and the keyboard device driver.  The device monitor could
take the scan codes from the CompuPhone driver and pass them to the keyboard
device driver in the appropriate packet format.  It can use the standard OS/2
API functions DosMonReg, DosMonOpen, DosMonClose, DosMonRead and DosMonWrite.
Refer to the chapter "Character Device Monitors" in the Physical Device
Driver reference for more info.  Also refer to the device driver sample
source code on the diskette included with the book "Writing OS/2 Device
Drivers in C", 2nd edition, by Steve J. Mastrianni.


!STORAGE__________________________________**********
QUESTION: (Ref: JA3)
I was wondering if there is source code available for any
installable file systems (preferably HPFS). I'm interested in augmenting
an existing IFS with additional features w/o changing any of the existing
features. Is there code available for IFS's as there is for device drivers?

ANSWER:
The source code for HPFS is not available.


!OTHER__________________________________**********
QUESTION: (Ref: JA2)
I need to wait for specific timing at init time, in a physical device driver,
and I do not know how to manage it.
My need is to build a function with the following prototype :
void Wait_at_least_for_n_mili_seconds(USHORT n);
Under MS-DOS I can write it using the timer address:
#define GET_TIME(t)    *(t)=*(unsigned long far *)0x0040006C;
(it increments by 1 every 55 milliseconds)
I can't find any simple solution using DevHlp functions in the documentation.
Could you help me to solve this problem ?

ANSWER:
You could incorporate a delay in your driver at init time in two ways:
a) If you require granularity finer than the 32ms timer tick, you could use
the IODelay macro for an accurate delay; it gives a granularity of 500ns.
For delays greater than 500ns, you could call the macro multiple times
as required.
Refer to the article "An Accurate Software Delay for OS/2 Device Drivers" in
DevCon News Vol.8 for more info.

b) In the INIT case of your strategy routine, register a tick handler using
DevHelp_TickCount and wait for a TimerFlag to be set.  In the tick handler,
reset the timer and set the TimerFlag.

Init case of strategy routine:

    Call DevHelp_TickCount( TickHandler, TickCount )
    while( TimerFlag not set ) ;   /* wait for the timer flag to be set */
    proceed with Init Code to be executed after the delay
    return( RPDONE )

TickHandler()
  {
    ResetTimer( TickHandler )
    Set TimerFlag
    return
  }

DevHelp_TickCount is used to register a timer handler to be called every "n"
timer ticks.  With this method you can obtain delays in multiples of the
timer tick, which occurs every 32ms.
Refer to the sections "TickCount" and "ResetTimer" in the chapter "Device
Helper Services" of the Physical Device Driver reference for more info.


!VIDEO__________________________________**********
QUESTION: (Ref: JA0)
To configure our Display Adapter to use the correct monitor refresh rate,
the user currently has to run our customised application.

I have heard that it is possible to add a Refresh Rate menu
to the System Icon object, so that the user can choose the refresh
frequency at the same time as he chooses the resolution/pixel depth.
Can you tell me how to implement this?

I've seen this option with the S3 drivers included with Warp.
Currently, we don't have SVGADATA.PMI support for our driver; is
it possible to implement the refresh feature without PMI
support? Also would this implementation work under OS/2 2.11?

ANSWER:
It is not possible to implement the refresh feature without PMI support. You
need to have SVGADATA.PMI support for your driver.
VIDEOCFG issues a DevQueryDisplayResolution call to the PM driver to obtain a
list of resolutions supported by the driver. These resolutions are displayed
on Page 1 of the System Icon.
VIDEOCFG searches through the MONITOR.DIF file to get information about
supported modes and maximum refreshes for specific monitor. The page 2 of
System Icon displays the list of monitors.
Refer section "Video Configuration Manager" on the Display Device Driver
reference for OS/2.
If you implement refresh feature with PMI support, you can utilize under
OS/2 2.11.
Refer section "Files required for OS/2 2.1 and 2.11" on the Display Driver
reference for more info.


!OTHER__________________________________**********
QUESTION: (Ref: JA1)
This is the continuing story of our efforts to port a 16-bit
video capture board OS/2 driver to an 8-bit video capture board.  We
have been able to get all of the code to compile and link, but the
operating system will not load the driver, giving the error "not a valid
driver".
At your suggestion we have purchased the Watcom compiler version
10.5.  Do you have a sample makefile for an OS/2 driver using the Watcom
compiler?  That would give us the proper compiler, assembler, and linker
switches.
What is the device driver loader looking for when it loads a device
driver?  I have both the Mastrianni book and the Cannavino book.  Cannavino
says that the first thing in the .sys file should be an .exe header, then
the device driver header; Mastrianni doesn't say anything about this.  If
the .exe header should be first, how do we ensure that the linker inserts
the proper .exe header?  If you can tell us the format of the .exe and
device driver headers, we can use a hex editor to see what is missing.

ANSWER:
1) You could refer to
   \ddkx86\src\dev\pcmcia\clsample\makefile and
   \ddkx86\mmos2\samples\mad16\pdd\makefile
for examples of makefiles using the Watcom compiler.
Refer to the article "Writing OS/2 Device Driver with Watcom C" in DevCon
News Vol.7 for more info.

2) The Device Driver structure is as follows:
a) EXE Header
b) Device Driver Header
c) Data Segment
d) Code Segment
e) Initialization Code ( Discarded )
f) Optional extra Code &/ Data Segments

You could check that your device driver header is correct.  The device driver
header format is as follows:

a) Far Pointer to Next Device Header   ( DWORD   )
b) Device Attribute                    ( WORD    )
c) 16-bit offset to strategy routine   ( WORD    )
d) 16-bit offset to IDC entry point    ( WORD    )
e) Driver name/Units( if block device) ( 8 BYTES )
f) Reserved                            ( 8 BYTES )
g) Capabilities Strip                  ( DWORD   )

The driver header must be located at the start of the data segment, and the
data segment must appear before the code segment.
Refer to the section "Physical Device Driver Header" in the chapter "Physical
Device Driver Architecture and Structure" of the Physical Device Driver
reference for more info.
You could check the .MAP file for the location of the Code and Data Segments
instead of using a HEX editor.


!OTHER__________________________________**********
QUESTION: (Ref: J99)
Where can I get a kernel debugger for OS/2 Warp Connect?  The ones
that came with DevCon #9 won't install on this rev of the kernel.  Nor can
I get ASDT32 to not lock up the system...

ANSWER:
The debug kernel is available on the Warp Connect CD-ROM.  Look in the
directory OS2IMAGE\DEBUG.


!OTHER__________________________________**********
QUESTION: (Ref: J91)

I would like to load data to RAM on a custom-built adapter.  Prior to the
load, my PDD waits for an I/O signal to go high (driven by external logic).
I would like to set up a timer on the I/O signal so I can time out if the
signal is not ready within a certain amount of time.  My question is how to
set up a timer in the driver, and how do I look at the I/O bit while the
timer is running?  I am using OS/2 Warp on a ValuePoint PC.

ANSWER:
We do not know what granularity you require for the timeouts.  However, you
could adapt the following procedure to implement a timer while waiting for an
I/O signal to go high.

1. Set a sampling timer (say Timer A), which is used to check the I/O port,
using a DevHelp_TickCount call with the ticks parameter set to the
appropriate value.  The timer tick occurs once every 32ms.
2. Set a timeout timer (say Timer B), which is used to time out in case the
signal does not go high within the required time.
3. You could check the I/O port level in your Timer A handler.  If the level
goes high, you could disable Timer B, since the level has gone high before
the timeout occurred.  You could then load the data to the RAM.
4. If the timeout for Timer B occurs, take the appropriate action for the
signal not going high in the required time.  You could then disable both
Timer A and Timer B.
Two timers are required because it is not good design to poll for a long time
on the CPU.  The driver runs at ring 0, and a thread running at ring 0 is not
preemptible.  Also, OS/2 guarantees that a time-critical thread that is made
ready to run will be dispatched within 4ms.  Therefore, for performance
reasons, a physical device driver has to check the TCYield flag once every
3ms and, if the flag is set, call TCYield.
You could also refer to the article in DevCon News Vol.8, "An Accurate
Software Delay for OS/2 Device Drivers".  This article gives you an alternate
approach for generating delays of less than 32ms.

!PCMCIA__________________________________**********
QUESTION: (Ref: I94)
1. PCMCIA/eIDE -- a PCMCIA card (Type II) converts the PCMCIA interface into
an Enhanced IDE interface for CD-ROM, tape, HD and MO.  We had a client
driver debugged and working with IBM2SS01.SYS and PCMCIA.SYS.  The
Plug-and-Play icon can now see the card as an I/O card correctly.  We are
trying to save ourselves some effort by using IBMIDECD.FLT rather than
developing the IDE adapter device driver.  However, the client drivers are
modified from the CLSAMPLE.SYS in the DDK, and since it is a device driver,
the initialization and insertion callback is only executed after all of the
ADDs are initialized.  By the time the PCMCIA card is inserted and Card
Services issues the callback, IBMIDECD.ADD has already gone through
initialization and failed, because I/O ports 170-177 and IRQ 15 were not
open for access.  We tried changing the client driver to a BASEDEV (ADD) so
as to open the channel before IBMIDECD.FLT initialization.  It works only
some of the time, because the callback is not synchronized with IBMIDECD.FLT,
and it is too complicated to chain OS2CDROM.DMD, OS2DASD.DMD (they are all
BASEDEVs), the client driver, and IBMIDECD.FLT.  My questions are:  a. How
can I modify IBMIDECD.FLT quickly so that it initializes itself after the
client driver is loaded?  OR b. How can I fake the IBMIDECD.FLT
initialization to reserve at least one unit?  c. How can we do the
initialization later (after the card is inserted)?  Apparently, each
insertion may change the device table contents.  2. The second project is
PCMCIA/SCSI.  We have the ADD driver done and ready to debug.  The questions
are identical to those for project 1.

Two more questions: will IBMIDECD.FLT be suitable for HD, MO and CD-R?
Would it be possible to modify the CIS information on the PCMCIA/eIDE card so
as to use PCM2ATA.SYS to open the secondary I/O ports 170-177 and 376-377 and
IRQ 15?  If that is possible, we don't need the client driver at all.  Is it
possible, and if so, how do I modify the CIS?


ANSWER:
a. If you want to use both drivers with DEVICE= statements, the ordering in
CONFIG.SYS is important.  Put the CLSAMPLE statement first, followed by
IBMIDECD, in CONFIG.SYS.
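For example, following the answer's DEVICE= ordering (drive letters and paths
here are illustrative, not from the answer):

```
REM CONFIG.SYS fragment -- client driver must load first
DEVICE=C:\DRIVERS\CLSAMPLE.SYS
DEVICE=C:\OS2\IBMIDECD.FLT
```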

b. At the present time, there is no way to reinitialize an OS/2 DD that
controls DASD after the system has booted.  You are on the right track with
the "device faking" approach.  We have developed, with Communica Inc.,
Bourne, MA, a set of drivers that fake the presence of certain devices
regardless of the actual HW present.  The faking is controlled by device
driver CONFIG.SYS parameters.

We are in the process of writing a "How To" document which should provide
sufficient information for an IHV to implement this type of Faking logic.  This
is planned to be available to IHVs, for no-cost, within 90 days.

You can also contact Communica directly for technical assistance in designing
such an interface.  Communica is free to release this information however they
are not under contract with IBM to supply consulting services to IHVs seeking
such assistance.  As such, they will likely charge you for their service.  The
correct people to contact at Communica are:

             Matt Trask or Barry Kasindorf
             508-759-6714

c. Convert your IBMIDECD to a BASEDEV.  This should be a level 3 device
driver, and bit 4 should be set in the capabilities bit strip in the device
driver header.  Consequently, your client PDD will receive the Initialization
Complete (1Fh) command once all PDDs have been loaded, which allows it to set
up any IDC.
When the CARD_INSERTION event takes place, the handler could invoke AttachDD
to IBMIDECD for the relevant initialization processing.

d. No, IBMIDECD.FLT is specific to CD-ROMs and will not attempt to
communicate with any other type of ATAPI device.

e. This sounds like a question for the HW manufacturer.

!NETWORK__________________________________**********
QUESTION: (Ref: IB5)
I was trying to install our LAN driver on the OS/2 Warp Connect box.  It
fails because our microcode PRORAPM.DWN didn't get copied to the
D:\IBMCOM\MACS directory.
When I did the first-time install, the three files NDIS39XR.OS2, LS139XR.NIF
and PRORAPM.DWN (which are our LAN driver, installation file and the
microcode) were copied to the D:\GRPWARE\CLIENTS\LADCLT\MACS\ directory.  But
later on, only the LAN driver NDIS39XR.OS2 and the installation file
LS139XR.NIF got copied to the D:\IBMCOM\MACS directory, and our driver needs
the microcode file PRORAPM.DWN to be in the same directory to function.
We use the same installation file LS139XR.NIF on the OS/2 Warp system, and it
seems OK.  And after the first install fails, if we use MPTS to install our
driver, it works.
Below is the part of our installation file that I think matters to this
problem:

        [PX]
        Type = NDIS
        Title = "Token-Ring Adapters"
        Version = 2.18
        DriverName = NDXR$
        Xports = NETBEUI LANDD
        Copyfile = PRORAPM.DWN

        [FILE]
        Name = NDISXR.OS2
        Path = IBMCOM\MACS

        [INTLEVEL]
              

ANSWER:
The problem you described is a bug in the Warp Connect installation code.
(APAR IC11004).The Warp Connect developers are looking into it and hopefully 
it will be fixed for the next release/csd.