Hi - We're looking at using toke and detok in our development process.
Our current setup to tokenize device drivers uses a "script" similar to this:

----- start kng_sr_script -----
\ define these words for the tokenizer
tokenizer[
   0 constant ?SYSTEM-ROM
   0 constant COMMENT-OUT
   0 constant ?DEBUG-TX
   0 constant ?DEBUG-RX
   h# 0110   constant ibm-Code-Revision-Level
   h# 17D5   constant ibm-VendorId
   h# 5831   constant ibm-king-DeviceId
   h# 020000 constant ClsCode
]tokenizer
fload kng_main.of
----- end kng_sr_script -----
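For reference, we run the tokenizer over that script with something like the following (assuming the usual toke command line; the output file name is just an example):

   toke -o kng.fc kng_sr_script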
Then in kng_main.of, we do:

----- start kng_main.of -----
....
tokenizer[ hex ibm-Code-Revision-Level decimal ]tokenizer SET-REV-LEVEL
tokenizer[ hex ibm-VendorId ibm-king-DeviceId ClsCode decimal ]tokenizer PCI-HEADER

FCODE-VERSION2
...
----- start of code... -----
The version of toke I downloaded doesn't like this, for two reasons (a minimal test case that reproduces both is sketched after the list):

1. Constant declarations inside tokenizer[ ... ]tokenizer blocks emit bytes/tokens to the output, so we get extra stuff before the PCI header. I don't think it should do this; code inside tokenizer blocks should not appear in the output FCode. Is this correct?

2. Previously declared constants aren't being looked up/interpreted when back in tokenizer mode. For example,

   tokenizer[ hex ibm-Code-Revision-Level decimal ]tokenizer SET-REV-LEVEL

causes an "empty stack" error. I have to change it to

   tokenizer[ 0110 ]tokenizer SET-REV-LEVEL

to make it work. Is this a bug?
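Here is the distilled test case (the file name and constant name are made up for illustration, not taken from our real driver source):

----- start repro.fth -----
\ Issue 1: this definition should be tokenizer-only,
\ but its tokens show up in the output FCode image.
tokenizer[
   h# 0110 constant my-rev-level
]tokenizer

\ Issue 2: re-entering tokenizer mode and referencing the
\ constant gives an "empty stack" error; replacing
\ my-rev-level with the literal 0110 makes it work.
tokenizer[ hex my-rev-level decimal ]tokenizer SET-REV-LEVEL

FCODE-VERSION2
   ." hello" cr
FCODE-END
----- end repro.fth -----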
I'm a little new to this; basically, what I'm asking is whether this behaviour is intended, or whether we should attempt to fix it and submit a patch to you.