I'm also using ODBC 3.5, and I was unable to reproduce your issue.

Here's what I tried:

import pyodbc
import pandas as pd
cnxn = pyodbc.connect('DSN=MYDSN', autocommit=True)
Data = pd.read_sql('SELECT TOP 10 DOB, RandomTime, TS FROM isc_py_test.Person', cnxn)
Data
Data.info()

And here's the output I got:

          DOB RandomTime                  TS
0  1977-07-18   07:49:03 1993-10-31 17:23:25
1  2001-11-08   07:45:05 2005-12-25 04:11:22
2  2004-02-20   23:17:49 1981-08-31 02:08:10
3  1995-11-22   01:46:31 2010-05-20 11:25:31
4  1974-01-09   15:20:03 1974-12-22 13:49:00
5  1987-10-19   23:14:52 1974-10-02 17:48:37
6  1985-03-29   17:47:12 1978-02-24 06:40:51
7  2015-10-21   23:09:15 2006-08-29 16:30:29
8  1972-12-26   15:53:23 1996-12-06 03:13:26
9  1990-09-25   05:53:25 2000-03-22 05:54:57

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
DOB           10 non-null object
RandomTime    10 non-null object
TS            10 non-null datetime64[ns]
dtypes: datetime64[ns](1), object(2)
memory usage: 320.0+ bytes
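If you want DOB and RandomTime as native dtypes rather than object, you can convert them after reading. A minimal sketch (the DataFrame is built inline here to mirror the result above, so it runs without a DSN):

```python
import pandas as pd

# Sample rows shaped like the query result above
Data = pd.DataFrame({
    "DOB": ["1977-07-18", "2001-11-08"],
    "RandomTime": ["07:49:03", "07:45:05"],
    "TS": pd.to_datetime(["1993-10-31 17:23:25", "2005-12-25 04:11:22"]),
})

# DOB comes back as strings; parse it into datetime64[ns]
Data["DOB"] = pd.to_datetime(Data["DOB"])
# RandomTime has no date component; timedelta64[ns] is the closest native dtype
Data["RandomTime"] = pd.to_timedelta(Data["RandomTime"])

print(Data.dtypes)  # DOB and TS: datetime64[ns], RandomTime: timedelta64[ns]
```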

Here's the source data I used (import and run Populate).

I recommend PythonGateway for Python workloads with InterSystems IRIS.

Can you share your dataset?

The idea of document DBs is as follows. You always have constraints on your data: which fields exist, what their datatypes are, and so on. In the case of a relational DB the constraints are checked and enforced on the database side: you define a required timestamp field once and, when you request the data back, a hundred times out of a hundred you get a valid timestamp field.

On the one hand this is good (separation of concerns and all that); on the other hand, what if your application's data requirements change often? Or what if you need to store data for several versions of your application at once? In that case, maintaining constraints on the DB side becomes cumbersome.

Document databases were created to work in these situations. On the DBMS side you now enforce only the existence of the collection, the id key, and maybe some required properties if you're 100% sure they are always available. Everything else is optional and entirely the concern of the application. A document, after all, can have any number of fields, or none at all, and it is the application's job to make sense of them.
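To make that concrete, here is a minimal sketch of the application-side handling a document DB implies (plain Python dicts stand in for stored documents; the field names are invented for illustration):

```python
# Two documents from the same collection, written by different app versions
docs = [
    {"_id": 1, "title": "First post", "tags": ["news"]},
    {"_id": 2, "title": "Second post"},  # no "tags" field at all
]

# The DB guarantees only the id; the application supplies defaults
# for everything optional.
for doc in docs:
    tags = doc.get("tags", [])  # make sense of optional fields here
    print(doc["_id"], tags)
```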

In your case the schema is extremely stable: the standard defining RSS was approved in 2005 and, frankly, is not likely to change, ever. It is, as one might say, feature complete.

Using the XSD schema you can easily generate the classes in InterSystems IRIS and import your RSS data into them.

I'm not really sure about the use case.

PDF is a publishing format (used to present documents and make sure that they look the same everywhere).

RTF is a simple editing format.

Generally, you can easily convert editing formats into publishing formats, but the reverse is practically impossible.

Furthermore, PDF is far more feature-rich than RTF, so not all PDF features can be converted into corresponding RTF features.

What are you trying to do?

Please add a code sample to demonstrate your issue.

Here's my simple class:

Class test.json Extends (%RegisteredObject, %JSON.Adaptor)
{

Property int As %Integer [ InitialExpression = 4 ];

/// do ##class(test.json).test()
ClassMethod test()
{
    set obj = ..%New()
    set sc = obj.%JSONExport()
}

}

And in the terminal I get the expected output:

do ##class(test.json).test()
{"int":4}

As Ens.StreamContainer has an OriginalFilename property, you can use that in your custom BO by specifying a subfolder in the request (so OriginalFilename = /subfolderA/name.txt).

I would try to avoid mixing:

  • technical issues - outputting a file into one of several subfolders based on the filename
  • business issues - determining which subfolder to output the file to based on conditions outside the message itself

An operation/adapter should solve only one type of issue, either technical or business.

It might be sufficient to put your code into a Code block of a BP generated with the wizard.

If you need to execute raw ObjectScript from BPL, add an assign activity and set the status variable to the value of your classmethod call. No need to add a code block.

Still, BPs without BPL are quite easy to implement; here's the guide.

What's your file I/O default?

zn "%SYS"
do ^NLS
Choose 4) Display loaded settings
Choose 3) Display system defaults for I/O tables

Here's what I got:

-------------- System defaults for I/O tables --------------
 
Table               Name
-----------------   ---------------
Process             RAW
Cache Terminal      UTF8
Other terminal      UTF8
File                UTF8
Magtape             UTF8
TCP/IP              RAW
System call         RAW
Printer             CP1251