
Sqoop 1.4.5 import into Hive fails: IOException running import job: java.io.IOException: Hive exited with status 1


Importing into Hive with Sqoop

hive.HiveImport: Exception in thread "main" java.lang.NoSuchMethodError: org.apache.thrift.EncodingUtils.setBit(BIZ)B

ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive exited with status 1

The errors above appeared when running the following import:

[linuxidc@jifeng02 sqoop]$ bin/sqoop import --connect jdbc:mysql://10.X.X.X:3306/lir --table project --username dss -P --hive-import -- --default-character-set=utf-8
Warning: /home/linuxidc/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/linuxidc/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: $HADOOP_HOME is deprecated.

14/09/08 01:25:36 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
Enter password:
14/09/08 01:25:40 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
14/09/08 01:25:40 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
14/09/08 01:25:40 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/09/08 01:25:40 INFO tool.CodeGenTool: Beginning code generation
14/09/08 01:25:40 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:25:40 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:25:40 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/linuxidc/hadoop/hadoop-1.2.1
Note: /tmp/sqoop-linuxidc/compile/84b064476bf25fd09fa7171d6baf7a96/project.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/09/08 01:25:41 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-linuxidc/compile/84b064476bf25fd09fa7171d6baf7a96/project.jar
14/09/08 01:25:41 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/09/08 01:25:41 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/09/08 01:25:41 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/09/08 01:25:41 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/09/08 01:25:41 INFO mapreduce.ImportJobBase: Beginning import of project
14/09/08 01:25:42 INFO db.DBInputFormat: Using read commited transaction isolation
14/09/08 01:25:42 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `project`
14/09/08 01:25:42 INFO mapred.JobClient: Running job: job_201409072150_0002
14/09/08 01:25:43 INFO mapred.JobClient:  map 0% reduce 0%
14/09/08 01:25:52 INFO mapred.JobClient:  map 66% reduce 0%
14/09/08 01:25:53 INFO mapred.JobClient:  map 100% reduce 0%
14/09/08 01:25:54 INFO mapred.JobClient: Job complete: job_201409072150_0002
14/09/08 01:25:54 INFO mapred.JobClient: Counters: 18
14/09/08 01:25:54 INFO mapred.JobClient:  Job Counters
14/09/08 01:25:54 INFO mapred.JobClient:    SLOTS_MILLIS_MAPS=13548
14/09/08 01:25:54 INFO mapred.JobClient:    Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/08 01:25:54 INFO mapred.JobClient:    Total time spent by all maps waiting after reserving slots (ms)=0
14/09/08 01:25:54 INFO mapred.JobClient:    Launched map tasks=3
14/09/08 01:25:54 INFO mapred.JobClient:    SLOTS_MILLIS_REDUCES=0
14/09/08 01:25:54 INFO mapred.JobClient:  File Output Format Counters
14/09/08 01:25:54 INFO mapred.JobClient:    Bytes Written=201
14/09/08 01:25:54 INFO mapred.JobClient:  FileSystemCounters
14/09/08 01:25:54 INFO mapred.JobClient:    HDFS_BYTES_READ=295
14/09/08 01:25:54 INFO mapred.JobClient:    FILE_BYTES_WRITTEN=204759
14/09/08 01:25:54 INFO mapred.JobClient:    HDFS_BYTES_WRITTEN=201
14/09/08 01:25:54 INFO mapred.JobClient:  File Input Format Counters
14/09/08 01:25:54 INFO mapred.JobClient:    Bytes Read=0
14/09/08 01:25:54 INFO mapred.JobClient:  Map-Reduce Framework
14/09/08 01:25:54 INFO mapred.JobClient:    Map input records=3
14/09/08 01:25:54 INFO mapred.JobClient:    Physical memory (bytes) snapshot=163741696
14/09/08 01:25:54 INFO mapred.JobClient:    Spilled Records=0
14/09/08 01:25:54 INFO mapred.JobClient:    CPU time spent (ms)=1490
14/09/08 01:25:54 INFO mapred.JobClient:    Total committed heap usage (bytes)=64421888
14/09/08 01:25:54 INFO mapred.JobClient:    Virtual memory (bytes) snapshot=1208795136
14/09/08 01:25:54 INFO mapred.JobClient:    Map output records=3
14/09/08 01:25:54 INFO mapred.JobClient:    SPLIT_RAW_BYTES=295
14/09/08 01:25:54 INFO mapreduce.ImportJobBase: Transferred 201 bytes in 12.6733 seconds (15.8601 bytes/sec)
14/09/08 01:25:54 INFO mapreduce.ImportJobBase: Retrieved 3 records.
14/09/08 01:25:54 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:25:54 WARN hive.TableDefWriter: Column create_at had to be cast to a less precise type in Hive
14/09/08 01:25:54 WARN hive.TableDefWriter: Column update_at had to be cast to a less precise type in Hive
14/09/08 01:25:54 INFO hive.HiveImport: Removing temporary files from import process: hdfs://linuxidc01:9000/user/linuxidc/project/_logs
14/09/08 01:25:54 INFO hive.HiveImport: Loading uploaded data into Hive
14/09/08 01:25:55 INFO hive.HiveImport:
14/09/08 01:25:55 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/home/linuxidc/hadoop/hive-0.12.0-bin/lib/hive-common-0.12.0.jar!/hive-log4j.properties
14/09/08 01:25:55 INFO hive.HiveImport: Exception in thread "main" java.lang.NoSuchMethodError: org.apache.thrift.EncodingUtils.setBit(BIZ)B
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.metastore.api.StorageDescriptor.setNumBucketsIsSet(StorageDescriptor.java:464)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.metastore.api.StorageDescriptor.setNumBuckets(StorageDescriptor.java:451)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.metadata.Table.getEmptyTable(Table.java:132)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.metadata.Table.<init>(Table.java:105)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.metadata.Hive.newTable(Hive.java:2493)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:904)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:8999)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8313)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:441)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:977)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:456)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:737)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
14/09/08 01:25:55 INFO hive.HiveImport:        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
14/09/08 01:25:55 INFO hive.HiveImport:        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
14/09/08 01:25:55 INFO hive.HiveImport:        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
14/09/08 01:25:55 INFO hive.HiveImport:        at java.lang.reflect.Method.invoke(Method.java:606)
14/09/08 01:25:55 INFO hive.HiveImport:        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
14/09/08 01:25:55 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive exited with status 1
        at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:385)
        at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:335)
        at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:239)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:511)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

[linuxidc@jifeng02 sqoop]$ bin/sqoop import --connect jdbc:mysql://10.X.X.X:3306/lir --table project --username dss -P --hive-import -- --default-character-set=utf-8
Warning: /home/linuxidc/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/linuxidc/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: $HADOOP_HOME is deprecated.

14/09/08 01:28:52 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
Enter password:
14/09/08 01:28:54 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
14/09/08 01:28:54 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
14/09/08 01:28:55 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/09/08 01:28:55 INFO tool.CodeGenTool: Beginning code generation
14/09/08 01:28:55 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:28:55 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:28:55 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/linuxidc/hadoop/hadoop-1.2.1
Note: /tmp/sqoop-linuxidc/compile/b281ae9014edf3aae02818af8d90c978/project.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/09/08 01:28:56 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-linuxidc/compile/b281ae9014edf3aae02818af8d90c978/project.jar
14/09/08 01:28:56 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/09/08 01:28:56 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/09/08 01:28:56 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/09/08 01:28:56 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/09/08 01:28:56 INFO mapreduce.ImportJobBase: Beginning import of project
14/09/08 01:28:56 INFO mapred.JobClient: Cleaning up the staging area hdfs://linuxidc01:9000/home/linuxidc/hadoop/tmp/mapred/staging/linuxidc/.staging/job_201409072150_0003
14/09/08 01:28:56 ERROR security.UserGroupInformation: PriviledgedActionException as:linuxidc cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory project already exists
14/09/08 01:28:56 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory project already exists
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:973)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
        at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
        at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
        at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:247)
        at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:665)
        at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

Solution:

 

HBase and Hive ship different libthrift versions: libthrift-0.8.0.jar and libthrift-0.9.0.jar respectively, and the Hive CLI that Sqoop spawns was picking up the older one.

Copying libthrift-0.9.0.jar into the sqoop/lib directory resolves the NoSuchMethodError.
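The copy step above can be sketched as a small helper. This is a sketch under assumptions: `sync_libthrift` is a hypothetical function name, and the installation paths in the commented invocation are taken from the log output; adjust them to your layout.

```shell
#!/bin/sh
# Sketch: copy Hive's libthrift jar into Sqoop's lib directory so the
# Hive CLI that Sqoop spawns and Sqoop itself load the same thrift version.
sync_libthrift() {
  hive_lib="$1/lib"; sqoop_lib="$2/lib"
  # Pick the highest-versioned libthrift jar that Hive ships.
  jar=$(ls "$hive_lib"/libthrift-*.jar 2>/dev/null | sort | tail -1)
  [ -n "$jar" ] || { echo "no libthrift jar under $hive_lib" >&2; return 1; }
  cp "$jar" "$sqoop_lib"/ && echo "copied $(basename "$jar") -> $sqoop_lib"
}

# Typical invocation for the installation shown in the log:
# sync_libthrift /home/linuxidc/hadoop/hive-0.12.0-bin /home/linuxidc/sqoop
```

After the copy, re-run the same `bin/sqoop import` command; the NoSuchMethodError from `EncodingUtils.setBit` should no longer appear.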

 

Running the import statement again then fails because the output directory already exists:

14/09/08 01:28:56 ERROR security.UserGroupInformation: PriviledgedActionException as:linuxidc cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory project already exists
14/09/08 01:28:56 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory project already exists

Deleting the directory resolves the problem:

[linuxidc@jifeng01 ~]$ hadoop dfs -rmr /user/linuxidc/project
Warning: $HADOOP_HOME is deprecated.

Deleted hdfs://linuxidc01:9000/user/linuxidc/project
[linuxidc@jifeng01 ~]$
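The same cleanup can be sketched with the non-deprecated `hadoop fs` syntax (`-rmr` is deprecated in favor of `-rm -r`). `clean_target` is a hypothetical helper name; the directory in the commented invocation is the one from the log.

```shell
#!/bin/sh
# Sketch: remove a stale Sqoop target directory before re-running an import.
clean_target() {
  dir="$1"
  # Only delete if the directory exists; -skipTrash frees the space
  # immediately instead of moving the old export into .Trash.
  hadoop fs -test -d "$dir" && hadoop fs -rm -r -skipTrash "$dir"
}

# For the failed run above:
# clean_target /user/linuxidc/project
```

Depending on your Sqoop release, the import tool may also offer a `--delete-target-dir` option that performs this cleanup automatically; check your version's documentation before relying on it.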

 

[linuxidc@jifeng02 sqoop]$ bin/sqoop import --connect jdbc:mysql://10.X.X.:3306/lir --table project --username dss -P --hive-import -- --default-character-set=utf-8
Warning: /home/linuxidc/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/linuxidc/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: $HADOOP_HOME is deprecated.

14/09/08 01:58:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
Enter password:
14/09/08 01:58:07 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
14/09/08 01:58:07 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
14/09/08 01:58:07 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/09/08 01:58:07 INFO tool.CodeGenTool: Beginning code generation
14/09/08 01:58:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:58:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:58:07 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/linuxidc/hadoop/hadoop-1.2.1
Note: /tmp/sqoop-linuxidc/compile/437963d234f778a27f8aa27fec8e18aa/project.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/09/08 01:58:08 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-linuxidc/compile/437963d234f778a27f8aa27fec8e18aa/project.jar
14/09/08 01:58:08 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/09/08 01:58:08 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/09/08 01:58:08 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/09/08 01:58:08 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/09/08 01:58:08 INFO mapreduce.ImportJobBase: Beginning import of project
14/09/08 01:58:08 INFO db.DBInputFormat: Using read commited transaction isolation
14/09/08 01:58:08 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `project`
14/09/08 01:58:09 INFO mapred.JobClient: Running job: job_201409072150_0005
14/09/08 01:58:10 INFO mapred.JobClient:  map 0% reduce 0%
14/09/08 01:58:15 INFO mapred.JobClient:  map 33% reduce 0%
14/09/08 01:58:16 INFO mapred.JobClient:  map 66% reduce 0%
14/09/08 01:58:18 INFO mapred.JobClient:  map 100% reduce 0%
14/09/08 01:58:20 INFO mapred.JobClient: Job complete: job_201409072150_0005
14/09/08 01:58:20 INFO mapred.JobClient: Counters: 18
14/09/08 01:58:20 INFO mapred.JobClient:  Job Counters
14/09/08 01:58:20 INFO mapred.JobClient:    SLOTS_MILLIS_MAPS=11968
14/09/08 01:58:20 INFO mapred.JobClient:    Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/08 01:58:20 INFO mapred.JobClient:    Total time spent by all maps waiting after reserving slots (ms)=0
14/09/08 01:58:20 INFO mapred.JobClient:    Launched map tasks=3
14/09/08 01:58:20 INFO mapred.JobClient:    SLOTS_MILLIS_REDUCES=0
14/09/08 01:58:20 INFO mapred.JobClient:  File Output Format Counters
14/09/08 01:58:20 INFO mapred.JobClient:    Bytes Written=201
14/09/08 01:58:20 INFO mapred.JobClient:  FileSystemCounters
14/09/08 01:58:20 INFO mapred.JobClient:    HDFS_BYTES_READ=295
14/09/08 01:58:20 INFO mapred.JobClient:    FILE_BYTES_WRITTEN=206338
14/09/08 01:58:20 INFO mapred.JobClient:    HDFS_BYTES_WRITTEN=201
14/09/08 01:58:20 INFO mapred.JobClient:  File Input Format Counters
14/09/08 01:58:20 INFO mapred.JobClient:    Bytes Read=0
14/09/08 01:58:20 INFO mapred.JobClient:  Map-Reduce Framework
14/09/08 01:58:20 INFO mapred.JobClient:    Map input records=3
14/09/08 01:58:20 INFO mapred.JobClient:    Physical memory (bytes) snapshot=163192832
14/09/08 01:58:20 INFO mapred.JobClient:    Spilled Records=0
14/09/08 01:58:20 INFO mapred.JobClient:    CPU time spent (ms)=1480
14/09/08 01:58:20 INFO mapred.JobClient:    Total committed heap usage (bytes)=64421888
14/09/08 01:58:20 INFO mapred.JobClient:    Virtual memory (bytes) snapshot=1208586240
14/09/08 01:58:20 INFO mapred.JobClient:    Map output records=3
14/09/08 01:58:20 INFO mapred.JobClient:    SPLIT_RAW_BYTES=295
14/09/08 01:58:20 INFO mapreduce.ImportJobBase: Transferred 201 bytes in 11.6303 seconds (17.2825 bytes/sec)
14/09/08 01:58:20 INFO mapreduce.ImportJobBase: Retrieved 3 records.
14/09/08 01:58:20 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `project` AS t LIMIT 1
14/09/08 01:58:20 WARN hive.TableDefWriter: Column create_at had to be cast to a less precise type in Hive
14/09/08 01:58:20 WARN hive.TableDefWriter: Column update_at had to be cast to a less precise type in Hive
14/09/08 01:58:20 INFO hive.HiveImport: Removing temporary files from import process: hdfs://linuxidc01:9000/user/linuxidc/project/_logs
14/09/08 01:58:20 INFO hive.HiveImport: Loading uploaded data into Hive
14/09/08 01:58:21 INFO hive.HiveImport:
14/09/08 01:58:21 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/home/linuxidc/hadoop/hive-0.12.0-bin/lib/hive-common-0.12.0.jar!/hive-log4j.properties
14/09/08 01:58:27 INFO hive.HiveImport: OK
14/09/08 01:58:27 INFO hive.HiveImport: Time taken: 6.069 seconds
14/09/08 01:58:27 INFO hive.HiveImport: Loading data to table default.project
14/09/08 01:58:27 INFO hive.HiveImport: Table default.project stats: [num_partitions: 0, num_files: 4, num_rows: 0, total_size: 201, raw_data_size: 0]
14/09/08 01:58:27 INFO hive.HiveImport: OK
14/09/08 01:58:27 INFO hive.HiveImport: Time taken: 0.345 seconds
14/09/08 01:58:27 INFO hive.HiveImport: Hive import complete.
14/09/08 01:58:27 INFO hive.HiveImport: Export directory is empty, removing it.
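With the import reported complete, a quick sanity check is to count the rows that landed in Hive. This is a sketch: `count_rows` is a hypothetical helper, it assumes the `hive` CLI is on PATH, and `default.project` is the table the log above shows being loaded.

```shell
#!/bin/sh
# Sketch: verify the finished import by counting rows in the new Hive table.
count_rows() {
  # -S suppresses Hive's informational output so only the count is printed.
  hive -S -e "SELECT COUNT(*) FROM $1;"
}

# For this import (3 records were retrieved from MySQL):
# count_rows default.project
```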

Related reading

Moving data between MySQL/Oracle and HDFS/HBase with Sqoop http://www.linuxidc.com/Linux/2013-06/85817.htm

[Hadoop] Sqoop installation walkthrough http://www.linuxidc.com/Linux/2013-05/84082.htm

Transferring data between MySQL and HDFS with Sqoop http://www.linuxidc.com/Linux/2013-04/83447.htm

Hadoop Oozie study notes: working around Oozie's lack of Sqoop support http://www.linuxidc.com/Linux/2012-08/67027.htm

Building a Hadoop ecosystem (hadoop hive hbase zookeeper oozie Sqoop) http://www.linuxidc.com/Linux/2012-03/55721.htm

Hadoop learning log: importing MySQL data into Hive with Sqoop http://www.linuxidc.com/Linux/2012-01/51993.htm


 

Copyright: original article by 星锅, published 2022-01-20. Unless otherwise noted, articles on this site are released under the CC 4.0 license; please credit the source when reposting.