lightdash: Fails to fetch data with Spark adapter
I tried to use the Spark adapter with the following Docker image.
FROM lightdash/lightdash:0.2.7
# libsasl2-dev is needed to compile the sasl package that PyHive depends on
RUN apt-get update && apt-get install -y --no-install-recommends libsasl2-dev
# pin dbt and add the Spark adapter with the PyHive (Thrift) extras
RUN pip install -U dbt==0.19.2
RUN pip install 'dbt-spark[PyHive]'
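For reference, the image can be built with a plain docker build (the lightdash-spark tag is my own choice):
docker build -t lightdash-spark .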
When I run a query, the results field is empty, and the exported CSV is also empty (500 blank lines).

I suspect the generated SQL includes two extra 'd' characters, as follows (metric: Num_users, dimension: Org_1st):
SELECT
testmart.org_1st AS dtestmart_org_1std,
COUNT(testmart.count_user) AS dtestmart_num_usersd
FROM dbt.testmart AS testmart
GROUP BY 1
I think it should be:
SELECT
testmart.org_1st AS "testmart_org_1st",
COUNT(testmart.count_user) AS "testmart_num_users"
FROM dbt.testmart AS testmart
GROUP BY 1
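Those stray d characters look like Lightdash writing the literal letter d where the adapter's identifier quote string belongs. Note that Spark SQL quotes identifiers with backticks rather than ANSI double quotes, so (assuming the same model and fields) the Spark-compatible form would presumably be:
SELECT
testmart.org_1st AS `testmart_org_1st`,
COUNT(testmart.count_user) AS `testmart_num_users`
FROM dbt.testmart AS testmart
GROUP BY 1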
/usr/app/dbt/logs/dbt.log includes the right numbers:
2021-07-05 12:59:19.606216 (Thread-1): TFetchResultsResp(status=TStatus(statusCode=0, infoMessages=None, sqlState=None, errorCode=None, errorMessage=None), hasMoreRows=False, results=TRowSet(startRowOffset=0, rows=[], columns=[TColumn(boolVal=None, byteVal=None, i16Val=None, i32Val=None, i64Val=None, doubleVal=None, stringVal=TStringColumn(values=[( snip ), i64Val=TI64Column(values=[9492, 1704, 13997, 755, 1650, 4686, 912, 2132, 912, 4260, 912, 3830, 4664, 7477, 10262, 912, 4686, 852, 2280, 1638, 4200, 4039, 6384, 2966, 5964, 5599, 3192, 3437, 12218, 5837, 4681, 4260, 10741, 1212, 3072, 3499, 1764, 570], nulls=b'\x00'), doubleVal=None, stringVal=None, binaryVal=None)], binaryColumns=None, columnCount=None))
I also tried dbt-presto, and it worked fine.
Commits related to this issue
- Fix error when getting quote strings from dbt adapter Closes #177 — committed to lightdash/lightdash by owlas 3 years ago
That’s awesome! I’m really pleased we’ve got this working. Please let us know if you hit any other problems with pyhive / pyspark.
I’m going to close this issue for now.
Would love to hear more about what you’re working on. You can always chat with the devs at:
Thanks @skame - I think I’ve found the problem. See #201
I’ll let you know when I’ve released the update.
Thanks!
I use the Spark Thrift interface to connect to a Hadoop Hive cluster, so the SQL dialect is not ANSI. My environment is Spark SQL 2.3.
The result of SELECT 2147483647 + 1; is -2147483648.
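That wraparound is 32-bit integer overflow: 2147483647 is the maximum INT value, and pre-ANSI Spark SQL silently wraps instead of raising an error. As a sketch of a workaround, widening either operand to BIGINT before the addition avoids the wrap:
SELECT CAST(2147483647 AS BIGINT) + 1;
-- returns 2147483648 instead of wrapping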