go-ora: rows.Next() halts with no exceptions

I have a problem on go-ora/v2 v2.4.27: when I iterate over sql.Rows, the iteration sometimes halts with no error and no timeout. The same code runs fine on v1.2.1.

The discussion is here: https://stackoverflow.com/questions/73048029/rows-next-halts-after-some-number-of-rows

The maximum length of the field is 7000, but it does not affect where the iteration stops. It stops even when I ORDER BY length(TEXT) ASC.
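
For context, the iteration looks roughly like this (a simplified sketch: the connection string and column list are placeholders, not the real ones). The loop just stops returning rows part-way through, and rows.Err() reports nothing:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/sijms/go-ora/v2"
)

func main() {
	// Placeholder connection string.
	db, err := sql.Open("oracle", "oracle://user:password@host:1521/service")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(`SELECT ID, TEXT FROM TICKET ORDER BY length(TEXT) ASC`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	count := 0
	for rows.Next() { // sometimes stops partway through the result set, silently
		var id int64
		var text sql.NullString
		if err := rows.Scan(&id, &text); err != nil {
			log.Fatal(err)
		}
		count++
	}
	// rows.Err() stays nil even though the iteration ended early.
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("fetched", count, "rows")
}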

DDL of the TICKET table:

-- auto-generated definition
create table TICKET
(
    ID                   NUMBER           not null
        constraint TICKET_ID_PK
            primary key,
    APPLICANT_ID         NUMBER           not null
        constraint TICKET_APPLICANT_FK
            references USERS,
    SOLUTION_USER_PHONE  VARCHAR2(56),
    PRIORITY_ID          NUMBER
        constraint TICKET_PRIORITY_FK
            references TICKET_PRIORITY,
    STATE_ID             NUMBER           not null
        constraint TICKET_STATE_FK
            references TICKET_STATE,
    SERVICE_ID           NUMBER           not null
        constraint TICKET_SERVICE_FK
            references SERVICE,
    CHANGE_ID            VARCHAR2(255),
    GUID                 VARCHAR2(56),
    REMEDY_ID            VARCHAR2(50),
    HEADER               VARCHAR2(100)    not null,
    CREATE_DATE          NUMBER           not null,
    LAST_EDIT_DATE       NUMBER,
    SLA_DATE             NUMBER,
    FINISH_DATE          NUMBER,
    SYNC_MASK            NUMBER default 0 not null,
    TEXT                 CLOB,
    SOLUTION             CLOB,
    TEXT_COPY            CLOB,
    USER_CONTACT_ID      NUMBER
        constraint TICKET_USER_CONTACT_ID_FK
            references USER_CONTACT,
    LAST_SOLUTION_DATE   NUMBER,
    WAIT_PERIOD          NUMBER,
    STATE_REASON         VARCHAR2(255),
    IMPACT_ID            NUMBER
        constraint TICKET_IMPACT_FK
            references IMPACT,
    URGENCY_ID           NUMBER
        constraint TICKET_URGENCY_FK
            references URGENCY,
    WEIGHT_ID            NUMBER,
    REQUEST_TYPE_ID      NUMBER
        constraint TICKET_REQUEST_TYPE_FK
            references REQUEST_TYPE,
    SOURCE_ID            NUMBER
        constraint TICKET_SOURCE_FK
            references SOURCE,
    STATE_REASON_ID      NUMBER
        constraint TICKET_STATE_REASON_FK
            references TICKET_STATE_REASON,
    OPER_CAT_1           VARCHAR2(60),
    OPER_CAT_2           VARCHAR2(60),
    OPER_CAT_3           VARCHAR2(60),
    PROD_CAT_1           VARCHAR2(60),
    PROD_CAT_2           VARCHAR2(60),
    PROD_CAT_3           VARCHAR2(60),
    WORKK                VARCHAR2(255),
    WORKK_RE             VARCHAR2(255),
    SUB_WORK             VARCHAR2(255),
    SUB_WORK_RE          VARCHAR2(255),
    SOLUTION_CAT_1       VARCHAR2(60),
    SOLUTION_CAT_2       VARCHAR2(60),
    SOLUTION_CAT_3       VARCHAR2(60),
    ASSIGNED_NOTE        VARCHAR2(256),
    SOLUTION_USER_LOGIN  VARCHAR2(128),
    SOLUTION_GROUP_NAME  VARCHAR2(128),
    SOLUTION_USER_ID     NUMBER
        constraint TICKET_SOLUTION_USER_FK
            references USERS,
    SOLUTION_GROUP_ID    NUMBER
        constraint TICKET_SOLUTION_GROUP_FK
            references SUPPORT_GROUP,
    LAST_EDIT_USER_LOGIN VARCHAR2(255)
)
/

create index IDX_TICKET
    on TICKET (STATE_ID, APPLICANT_ID)
/

create index IDX_TICKET_SOL_GROUP_ID
    on TICKET (SOLUTION_GROUP_ID)
/

create index IDX_TICKET_SOL_USER_ID
    on TICKET (SOLUTION_USER_ID)
/

create index IDX_TICKET_STATE_ID
    on TICKET (STATE_ID)
/

create index IDX_TICKET_GUID
    on TICKET (GUID)
/

create index IDX_TICKET_REMEDY_ID
    on TICKET (REMEDY_ID)
/

create index IDX_TICKET_CREATE_DATE
    on TICKET (CREATE_DATE)
/

Most upvoted comments

First of all, thanks for the work on this driver!

We are facing (kind of) the same issue. rows.Next() fails silently, and we sometimes only receive e.g. 50 records (2 iterations) instead of the expected 240 for our specific query.

We’re using v2.5.17 of the driver against ‘Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.7.0.0.0’.

We tested this by setting prefetch_rows to 20, at which point the result sometimes contained 40 records and sometimes all of them.

For now we ‘fixed’ this by setting prefetch_rows to 1000, but that is of course not a real solution.
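
For completeness, this is roughly how we apply the workaround (a sketch only; host, credentials and service name are placeholders, and we assume the PREFETCH_ROWS url option is the knob behind the prefetch_rows setting mentioned above):

package main

import (
	"database/sql"
	"log"

	go_ora "github.com/sijms/go-ora/v2"
)

func main() {
	// Raise the prefetch size so a single round trip covers the whole result set.
	// This only papers over the problem; it does not fix the silent stop.
	urlOptions := map[string]string{
		"PREFETCH_ROWS": "1000",
	}
	connStr := go_ora.BuildUrl("dbhost", 1521, "service", "user", "password", urlOptions)

	db, err := sql.Open("oracle", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// ... run the mapping query below as usual.
}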

Our Oracle instance is in our own datacenter, and the functions that use this driver run in an AWS Lambda function. Running the same functions in our internal network (with lower latency to the Oracle DB) does not show the problem.

If rows.Next() failed with a catchable error, that would already work for us. Please let us know what information we can provide to help out.
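
Until then, the crude guard we can live with is to count the expected rows separately and compare with what the cursor actually delivered, so the truncation at least turns into an error we can act on. A sketch (the filter here is a simplified stand-in for the real query below, and fetchWithGuard is a hypothetical helper of ours):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/sijms/go-ora/v2"
)

// fetchWithGuard compares the number of rows the cursor delivered against a
// separate COUNT(*), so a silent truncation becomes a returned error instead
// of going unnoticed.
func fetchWithGuard(db *sql.DB) error {
	const filter = `FROM SYSADM.MAPPING_TBL f WHERE f.transformcode = 'PRODUCT' AND f.columnname = '0#ADDRESSORIGINCODE'`

	var expected int
	if err := db.QueryRow(`SELECT COUNT(*) ` + filter).Scan(&expected); err != nil {
		return err
	}

	rows, err := db.Query(`SELECT f.value ` + filter)
	if err != nil {
		return err
	}
	defer rows.Close()

	got := 0
	for rows.Next() {
		var v sql.NullString
		if err := rows.Scan(&v); err != nil {
			return err
		}
		got++
	}
	if err := rows.Err(); err != nil {
		return err // the error we would like rows.Next() itself to surface
	}
	if got != expected {
		return fmt.Errorf("result truncated: got %d rows, expected %d", got, expected)
	}
	return nil
}

func main() {
	db, err := sql.Open("oracle", "oracle://user:password@dbhost:1521/service") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := fetchWithGuard(db); err != nil {
		log.Fatal(err)
	}
}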

The table structure we use is:

CREATE TABLE 
    MAPPING_TBL 
    ( 
        RECNO                          NUMBER(10) NOT NULL, 
        TRANSFORMCODE                  VARCHAR2(10) NOT NULL, 
        FILTERROW                      NUMBER(10) NOT NULL, 
        COLUMNNAME                     VARCHAR2(60) NOT NULL, 
        VALUE                          VARCHAR2(254), 
        SYS_STSLUAFVULIWJ#2ABCQY09S_4H NUMBER GENERATED ALWAYS AS (SYS_OP_COMBINED_HASH 
        ("TRANSFORMCODE","COLUMNNAME")), 
        SYS_STS_N$BNEAJEV0X66MP#LA9RKR NUMBER GENERATED ALWAYS AS (SYS_OP_COMBINED_HASH 
        ("TRANSFORMCODE","COLUMNNAME","VALUE")), 
        CONSTRAINT PK_MAPPING_TBL_TRANSFORMCODEFILTER PRIMARY KEY (TRANSFORMCODE, FILTERROW, COLUMNNAME), 
        CONSTRAINT UX_MAPPING_TBLFILTERECNO UNIQUE (RECNO) 
    );

The query we use is:

SELECT
    f.value source, was.value from_productcode, wase.value from_variantcode, wordt.value to_productcode, wordte.value to_variantcode
FROM
    SYSADM.MAPPING_TBL f
	, SYSADM.MAPPING_TBL was
	, SYSADM.MAPPING_TBL wordt
	, SYSADM.MAPPING_TBL wordte
	, SYSADM.MAPPING_TBL wase 
WHERE 
    f.transformcode = 'PRODUCT' AND wase.transformcode = 'PRODUCT' AND wordte.transformcode = 'PRODUCT' 
	AND f.columnname = '0#ADDRESSORIGINCODE' 
	AND was.filterrow = f.filterrow AND wordt.filterrow = f.filterrow
	AND wase.filterrow = f.filterrow AND wordte.filterrow = f.filterrow
	AND was.transformcode = 'PRODUCT' AND wordt.transformcode = 'PRODUCT' 
	AND was.columnname = '0#PRODUCTCODE'
	AND wase.columnname = '0#VARIANTCODE'
	AND wordt.columnname = '1#PRODUCTCODE'
	AND wordte.columnname = '1#VARIANTCODE'

Our data set contains approximately 8,000 rows.

SOLUTION is a CLOB; I have tested it before with lengths up to 100,000 and it was OK. I will try to create the table and put some data in it. How many rows are you trying to fetch from the table?