string to varchar with length #517

Question
We want to write the DataFrame to SQL Server. The DataFrame has string-type columns whose types we want to change to varchar with the correct lengths. Is there a way to get the field name, data type, and length from the copybook?

Comments
Hi, you can get lengths and other parameters from the AST generated by parsing a copybook. Example: https://github.com/AbsaOSS/cobrix#spark-sql-schema-extraction
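
For illustration, a minimal sketch of that approach, following the README section linked above. The AST class and member names (`Group`, `Primitive`, `children`, `binaryProperties.dataSize`) reflect one reading of the Cobrix parser and may differ between versions, so treat them as assumptions to verify:

```scala
import za.co.absa.cobrix.cobol.parser.CopybookParser
import za.co.absa.cobrix.cobol.parser.ast.{Group, Primitive, Statement}

val copybookContents =
  """        01  RECORD.
    |            05  ID     PIC 9(4).
    |            05  NAME   PIC X(30).
    |""".stripMargin

// Parse the copybook into an AST.
val parsed = CopybookParser.parseTree(copybookContents)

// Recursively walk the AST and print each primitive field's
// name, COBOL data type, and size in bytes.
def printFields(stmt: Statement): Unit = stmt match {
  case g: Group     => g.children.foreach(printFields)
  case p: Primitive => println(s"${p.name}, ${p.dataType}, ${p.binaryProperties.dataSize} bytes")
}

printFields(parsed.ast)
```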
OK, thanks, let me try.
I'm also thinking of adding a metadata field to the generated Spark schema that will contain maximum lengths of string fields, so converting this question to a feature request.
Thanks, Ruslan, the same idea came to my mind as well. Our use case is to load the data into an RDBMS; currently, all strings default to the maximum length (nvarchar). If we have lengths available, we can add an option like this:
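
A hypothetical sketch of this idea, not the commenter's actual proposal: it reuses Spark's existing `createTableColumnTypes` JDBC writer option, and assumes `df`, `jdbcUrl`, and `connectionProperties` already exist and that `maxLength` is stored as a numeric metadata value:

```scala
import org.apache.spark.sql.types.StringType

// Hypothetical sketch: build VARCHAR(n) column definitions from the
// proposed 'maxLength' metadata and pass them to Spark's JDBC writer.
val columnTypes = df.schema.fields
  .filter(f => f.dataType == StringType && f.metadata.contains("maxLength"))
  .map(f => s"${f.name} VARCHAR(${f.metadata.getLong("maxLength")})")
  .mkString(", ")

df.write
  .option("createTableColumnTypes", columnTypes) // standard Spark JDBC writer option
  .jdbc(jdbcUrl, "target_table", connectionProperties)
```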
The new metadata field ('maxLength') for each Spark schema column is now available in the 'master' branch. Here are details on this: https://github.com/AbsaOSS/cobrix#spark-schema-metadata. You can try it out by cloning master and building from source, or you can wait for the release of Cobrix 2.6.0, which should be soon.
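
A minimal sketch of reading that metadata back from a loaded DataFrame. The key name 'maxLength' is taken from the comment above; the paths are placeholders, and the assumption that the value is read with `getLong` should be verified against the linked README section:

```scala
// Read an EBCDIC file with Cobrix (paths are placeholders).
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")
  .load("/path/to/data")

// Print the recorded maximum length of every column that carries it.
df.schema.fields.foreach { f =>
  if (f.metadata.contains("maxLength"))
    println(s"${f.name}: maxLength = ${f.metadata.getLong("maxLength")}")
}
```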
Thanks for the quick turnaround. Will check it out.
Hi Ruslan, another question: we have a data file with records of length x (x > 90), but I want to parse only the first 90 bytes. Is that possible with the current approach? I tried the record_length option, but it did not work. Please let me know your thoughts.
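
For context, a sketch of the kind of invocation being attempted. `record_format` and `record_length` are documented Cobrix options for fixed-length records; whether this combination truncates longer records as intended is exactly the open question above:

```scala
// Sketch of the attempted approach: treat records as fixed-length and
// cap the record length at 90 bytes.
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")
  .option("record_format", "F")   // fixed-length record format
  .option("record_length", "90")  // override the copybook-derived record length
  .load("/path/to/data")
```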