Does Cobrix handle the Easytrieve layout? #516
Comments
Hi, could you attach an example copybook and a link to the documentation for the data type, please?
The COBOL copybook says PIC X(2); however, the data itself comes from an Easytrieve layout with a data type of U (Packed Unsigned). Data type reference: https://www.mvsforums.com/manuals/EZT_PL_APP_63_MASTER.pdf
Easyterieve_Layout_sample.xlsx Hi @yruslan, this is the Excel file we created from the Easytrieve layout; only sample fields are added here.
I see. The data types look parsable at first glance. The only thing is that you need a proper copybook that matches the data in order to parse records like that, and for that you would need a mapping between Easytrieve data types and COBOL data types. Do I understand correctly that the fields specified in the Excel file are not all the fields of the record? Field 'CRSCON' with length 1 at offset 10 is followed by 'CRADTR' at offset 20, which means there are other fields between CRSCON and CRADTR that fill the remaining 9 bytes.
Hello. I am adding a comment because I also need to request this same support for Unsigned Packed fields in mainframe records. For example, let's say we have an account date value of '20220425'.

As a U (Unsigned Packed) number, that field currently has to be defined in COBOL as a PIC X field, because COBOL does not support the Unsigned Packed format. The Unsigned Packed field cannot be declared as a COBOL BINARY (COMP) field because it does not contain a binary value; it contains a Packed value without the sign nibble.

Adding support for Unsigned Packed fields would be pretty simple in Cobrix. You could add an 'Unsigned Packed' flag to the 'decodeBCDIntegralNumber' function that handles Packed (COMP-3) values, and just leave out the sign nibble if it's the Unsigned Packed format. Please let me know if you'd like to chat more about this. Thank you very much.
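The decoding described above can be sketched in Python (a sketch only, not Cobrix's actual Scala implementation; the function name is hypothetical):

```python
def decode_unsigned_packed(data: bytes) -> int:
    """Decode an Unsigned Packed (U) field: BCD digits with no sign nibble."""
    digits = []
    for b in data:
        digits.append((b >> 4) & 0x0F)  # high nibble
        digits.append(b & 0x0F)         # low nibble
    if any(d > 9 for d in digits):
        raise ValueError("non-decimal nibble in unsigned packed field")
    return int("".join(str(d) for d in digits))

# The date '20220425' fits in 4 bytes because no nibble is spent on a sign
print(decode_unsigned_packed(bytes([0x20, 0x22, 0x04, 0x25])))  # → 20220425
```

The only difference from signed COMP-3 decoding is that the final nibble is treated as a digit rather than a sign.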
Hi @mike-childs, makes sense. I might ask a couple more questions as we go. The first one: when you have
I see the answer to the question in your description. Sorry.
Hi @yruslan, |
Great, thanks for the answer and for such a detailed description! Will implement it soon.
One more question. Would it be okay if the PIC required for unsigned packed numerics had to be '9', not 'X'?
Yes, requiring the '9' (as in 'PIC 9(4) COMP-3U') makes perfect sense because the field should contain only numeric data. The field would have all the same rules as a normal Packed field, other than the lack of a sign nibble.
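Under that convention, a copybook entry for the 8-digit date example discussed above might look like this (record and field names are hypothetical):

```cobol
      01  CLAIM-RECORD.
          05  ACCT-DATE        PIC 9(8) COMP-3U.
```

With the sign nibble omitted, an 8-digit unsigned packed field occupies 4 bytes instead of the 5 a signed COMP-3 field would need.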
This parses 'unsigned packed' format, that is BCD without the sign nibble.
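To see why dropping the sign nibble saves a byte for even digit counts, compare how the same value packs in the two formats (illustrative helpers, not part of Cobrix):

```python
def to_signed_packed(n: int) -> bytes:
    """Pack a non-negative int as signed COMP-3: digits plus a trailing sign nibble."""
    nibbles = [int(d) for d in str(n)] + [0x0C]  # 0xC = positive sign
    if len(nibbles) % 2:
        nibbles = [0] + nibbles                  # pad to a whole number of bytes
    return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2))

def to_unsigned_packed(n: int) -> bytes:
    """Pack a non-negative int as unsigned packed: BCD digits only, no sign."""
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits
    return bytes((int(digits[i]) << 4) | int(digits[i + 1]) for i in range(0, len(digits), 2))

print(len(to_signed_packed(20220425)))    # 5 bytes: 02 02 20 42 5C
print(len(to_unsigned_packed(20220425)))  # 4 bytes: 20 22 04 25
```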
This is added. You can try building
Thank you very much @yruslan! We have a story in our backlog to pull in the latest Cobrix version and do thorough testing with the new COMP-3U type parm. I will add an update here once we have done that work. We really appreciate you adding this functionality.
Hello @yruslan. We have finished our testing with the new COMP-3U parm, and it correctly converted the Unsigned Packed fields. I have attached a screenshot showing my input, output, and test results. Please let me know if you need any further information. Thank you very much.
Hi @mike-childs, thanks a lot for confirming! Glad it works as expected.
@yruslan I am getting the below error when updating the copybook to COMP-3U. za.co.absa.cobrix.cobol.parser.exceptions.SyntaxErrorException: Syntax error in the copybook at line 28: Invalid input 'COMP-3U' at position 28:64
Use
@yruslan I have upgraded to spark-cobol 2.6.4 and am getting this error: java.lang.NoClassDefFoundError: scala/$less$colon$less. Here is the command: class_poc_df = spark.read.format("cobol")
The error suggests that you are using a spark-cobol build for a Scala version different from the one in your Spark environment. Use the artifact that matches your Scala version:
or build one that matches your environment exactly using 'sbt assembly' (the full command is in the README)
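For example, with a Spark distribution built on Scala 2.12, the matching coordinates would look like this (adjust the `_2.12` suffix to your environment's Scala version):

```
spark-shell --packages za.co.absa.cobrix:spark-cobol_2.12:2.6.4
```

The `NoClassDefFoundError: scala/$less$colon$less` above is a typical symptom of mixing a Scala 2.13-compiled jar into a 2.11/2.12 runtime, or vice versa.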
Hi @diddyp20, I have faced a similar error
Background [Optional]
I have an Easytrieve layout which has Packed Unsigned fields (data type U in Easytrieve), unsigned binary fields (data type B in Easytrieve), and alphanumeric fields (data type A in Easytrieve, storing Hexbit). The data file that we are trying to convert is EBCDIC data.
Question
Is there a way we can convert this data through Cobrix by providing the above-mentioned Easytrieve layout? @yruslan