sqlalchemy: sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.

Hi, so I am trying to update a table in my DB through SQLAlchemy mixed with pandas, which might actually be the culprit of my issue. I am new to SQLAlchemy, pandas, and Python, so I used a template my mentor gave me as a starting point.

Anyway, my script processes CSV files. If a file has a format error, the script updates that file's status in the table to “error” and moves the file to an “error” directory. To that end, I wrote an update method:

def update_csv_status_db(db_instance, name_of_db_instance_tabledict, csvfile_path, dir_status):
    table_dict = db_instance[name_of_db_instance_tabledict]
    csvfile_name = csvfile_path.name
    sql = update(table_dict['table']).where(table_dict['table'].c.CSV_FILENAME == csvfile_name).values(CSV_STATUS=dir_status)
    df = pd.read_sql(sql, table_dict['connection'])
    #df.to_sql('odfs_tester_history_files', con=table_dict['engine'], if_exists='append', index=False)

Which is called in this function:

def odf_history_from_csv_to_dbtable(db_instance):
    odfsdict = db_instance['odfs_tester_history']
    #table_row = {}
    totalresult_list = []

    dir_dict, dictofpdir_flist = make_dict_of_csvprocessing_dirs()
    print(dir_dict)
    csvbase_path_list = dictofpdir_flist["csvbase_path_list"]
    archivefiles_path_set = dictofpdir_flist["archivefiles_path_set"]
    errorfiles_path_set = dictofpdir_flist["errorfiles_path_set"]
    emptyfiles_path_set = dictofpdir_flist["emptyfiles_path_set"]

    for csv in csvbase_path_list:  # is there a faster way to compare the list of files in archive and history?
        if csv.name in archivefiles_path_set:
            print(csv.name + " is in archive folder already")
        elif csv.name in errorfiles_path_set:
            print(csv.name + " is in error folder already")
        elif csv.name in emptyfiles_path_set:
            print(csv.name + " is in empty folder already")
        else:
            csvhistoryfilelist_to_dbtable(csv, db_instance)
            df_tuple = process_csv_formatting(csv)
            df_cnum, odfscsv_df = df_tuple
            if df_cnum == 1:
                trg_path = Path(dir_dict['empty_dir'])
                csv.rename(trg_path.joinpath(csv.name))
                update_csv_status_db(db_instance, 'odfs_tester_history_files', csv, 'empty')
            elif df_cnum == 0 and len(odfscsv_df.index != 0):
                result = odfscsv_df.to_sql('odfs_tester_history', con=odfsdict['engine'], if_exists='append', index=False)
                totalresult_list.append(result)
                trg_path = Path(dir_dict['archive_dir'])
                csv.rename(trg_path.joinpath(csv.name))
                update_csv_status_db(db_instance, 'odfs_tester_history_files', csv, 'archive')

    return totalresult_list

So a very weird error happens after this runs. Testing it with one file that contains bad formatting (an “error” file), it actually processes the file, moves it to the error directory, and updates the table. However, at the end, the program breaks and returns the following error:

Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\result.py", line 1161, in _fetchall_impl
    return self.cursor.fetchall()
AttributeError: 'NoneType' object has no attribute 'fetchall'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/sys_nsgprobeingestio/Documents/dozie/odfs/odfshistory3.py", line 269, in <module>
    odf_history_from_csv_to_dbtable(db_instance)
  File "C:/Users/sys_nsgprobeingestio/Documents/dozie/odfs/odfshistory3.py", line 244, in odf_history_from_csv_to_dbtable
    update_csv_status_db(db_instance, 'odfs_tester_history_files', csv, 'error')
  File "C:/Users/sys_nsgprobeingestio/Documents/dozie/odfs/odfshistory3.py", line 178, in update_csv_status_db
    df = pd.read_sql(sql, table_dict['connection'])
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 438, in read_sql
    chunksize=chunksize,
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1231, in read_query
    data = result.fetchall()
  File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\result.py", line 1216, in fetchall
    e, None, None, self.cursor, self.context
  File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1478, in _handle_dbapi_exception
    util.reraise(*exc_info)
  File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\compat.py", line 153, in reraise
    raise value
  File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\result.py", line 1211, in fetchall
    l = self.process_rows(self._fetchall_impl())
  File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\result.py", line 1163, in _fetchall_impl
    return self._non_result([])
  File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\result.py", line 1168, in _non_result
    "This result object does not return rows. "
sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.

I have noticed that when I have multiple files, it will process some of them and leave the rest once it hits this error, and it seems random when the error occurs. How can I fix this?

Is there a way to update the table without needing to use pandas `read_sql`? Could this be the issue, the fact that I am using pandas `to_sql`? And if so, what alternative can I use to commit the update?

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 34 (18 by maintainers)

Most upvoted comments

This error occurs when you try to read rows from a SQL statement that does not return rows. Examples of such statements are INSERT, UPDATE, and DELETE statements that do not feature RETURNING. A SELECT statement, on the other hand, always returns a result set, even if the number of rows is zero.
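For the question above, that means executing the UPDATE on a connection directly rather than through `pd.read_sql`, which always tries to fetch rows. A minimal sketch using an in-memory SQLite table whose schema is a hypothetical reconstruction from the column names in the question:

```python
import sqlalchemy as sa

# In-memory SQLite stand-in; table and columns mirror the question's names.
engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
files = sa.Table(
    "odfs_tester_history_files", meta,
    sa.Column("CSV_FILENAME", sa.String, primary_key=True),
    sa.Column("CSV_STATUS", sa.String),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(files.insert(), [{"CSV_FILENAME": "a.csv", "CSV_STATUS": "new"}])

# Execute the UPDATE directly; no rows come back, but rowcount is available.
stmt = (
    sa.update(files)
    .where(files.c.CSV_FILENAME == "a.csv")
    .values(CSV_STATUS="error")
)
with engine.begin() as conn:
    result = conn.execute(stmt)
    matched = result.rowcount  # number of rows matched by the WHERE clause
```

`engine.begin()` commits the transaction on exit, so no explicit `commit()` call is needed.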

A second condition that causes this error is when the database driver has failed on a previous statement and the application attempts to continue using the connection without rolling back the transaction first. However, this requires additional conditions to be in place that are not easy to replicate unless the SQL statement is triggering some specific kind of driver issue.

From our end, we can’t provide any insight into this without extra information, and I am noticing that if you used the “question” template then we didn’t ask for it, so fixing that now:

Versions

  • OS:
  • Python:
  • SQLAlchemy:
  • Database:
  • DBAPI:

Then you want to add SQL logging output right before the error, so that it shows the SQL statement that was emitted just before these non-existent rows were fetched. Set echo=True on create_engine() to show this.
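For example, assuming an in-memory SQLite database, every emitted statement is written to the log:

```python
from sqlalchemy import create_engine, text

# echo=True logs each SQL statement via Python's logging module,
# making it easy to see what ran just before a failure.
engine = create_engine("sqlite://", echo=True)
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1)"))
```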

This issue appears in sqlalchemy==1.4.2; with sqlalchemy==1.3.2 it works.

Thank you @wuwenrufeng. This error broke the Jupyter %sql magic, which I use as a stepping stone when my students start to transition gently from SQL to Python (in Colab). The default Colab install has this broken combination, so downgrading sqlalchemy is the temporary solution.

I faced the same problem. However, adding the statement `SET NOCOUNT ON` before the UPDATE, DELETE, stored procedures, etc. solved the problem for me.

The broken function call is not using pandas, but it is using Records; the line is essentially as below. I’m on PostgreSQL 12.x using psycopg2-binary==2.8. I can try to create a SQLAlchemy-only example, but I will need to find some time to experiment.

db = records.Database(...)
db.query("CREATE SCHEMA IF NOT EXISTS foo")
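In the meantime, a Records-free reproduction is possible with SQLAlchemy alone; a minimal sketch using an in-memory SQLite database as a stand-in for the PostgreSQL setup above, where fetching from a DDL result raises the same error:

```python
import sqlalchemy as sa

# DDL statements return no rows, so fetching from their result
# raises ResourceClosedError, matching the error in this thread.
engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    result = conn.execute(sa.text("CREATE TABLE foo (x INTEGER)"))
    try:
        result.fetchall()  # nothing to fetch from a CREATE TABLE
        caught = None
    except sa.exc.ResourceClosedError as exc:
        caught = str(exc)
```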