Wednesday, December 17, 2014

WSO2 BAM - APIM error - Failed to write data to database


When viewing API statistics in WSO2 API Manager 1.7.0 via WSO2 BAM, you may have come across the following error [1].

As you might know, API Manager sends events about API requests to WSO2 BAM. BAM stores this data in its Cassandra storage, which is later processed by Hive analytics scripts. For API Manager, that script is the am_stats_analyzer. After analyzing, the summarized information is written to an RDBMS instance. The summary database could be MySQL, Oracle, or even an in-memory H2 database.
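For context, here is a minimal sketch of how such a Hive script wires a summary table to the RDBMS through the JDBC storage handler. The Hive table name, the shortened column list, and the WSO2AM_STATS_DB datasource name are illustrative assumptions, not the exact script contents:

-- illustrative sketch; column list and datasource name are assumptions
CREATE EXTERNAL TABLE IF NOT EXISTS APIResourceUsageSummary
    (api STRING, resourcePath STRING, total_request_count INT)
STORED BY 'org.wso2.carbon.hadoop.hive.jdbc.storage.JDBCStorageHandler'
TBLPROPERTIES (
    'wso2.carbon.datasource.name' = 'WSO2AM_STATS_DB',
    'hive.jdbc.table.create.query' = 'CREATE TABLE API_Resource_USAGE_SUMMARY
        (api VARCHAR(100), resourcePath VARCHAR(100), total_request_count INT)' );

The hive.jdbc.table.create.query property is the one we will come back to below; it holds the DDL that gets executed against the summary database when the table does not exist yet.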

Error


This error [1] occurs when a column in a summary database table is too small to hold a given value. In this case, it is the resourcePath of an API. By default, the resourcePath column is defined as VARCHAR(100). If the resourcePath of an API is longer than 100 characters, this error is thrown.
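You can reproduce the same failure against a plain MySQL table (assuming strict SQL mode, or a JDBC connection with Connector/J's default jdbcCompliantTruncation behavior; otherwise MySQL only warns and silently truncates):

-- minimal reproduction of the truncation failure (table name is arbitrary)
CREATE TABLE truncation_demo (resourcePath VARCHAR(100));
-- 150 characters do not fit into VARCHAR(100):
-- ERROR 1406 (22001): Data too long for column 'resourcePath' at row 1
INSERT INTO truncation_demo VALUES (REPEAT('x', 150));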

Solution

The 'resourcePath' column is defined in the API_Resource_USAGE_SUMMARY table in the summary database.

  • If the system is already up and running, the summary MySQL tables have already been created. Therefore, we need to alter the table to widen the column. You can use the following steps for that.

1. Since the issue is in an RDBMS such as MySQL, first log in to a console where you can execute SQL statements.
2. Then, execute the following statement.

ALTER TABLE API_Resource_USAGE_SUMMARY MODIFY resourcePath MEDIUMTEXT;
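3. Optionally, verify the change. The database name am_stats_db below is a placeholder; use your actual summary database name.

USE am_stats_db;
-- resourcePath should now be reported as mediumtext
DESCRIBE API_Resource_USAGE_SUMMARY;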


  • If the summary tables have not been created yet, you can instead modify the table creation query in the am_stats_analyzer.
  1. Open the API_Manager_Analytics.tbox.
  2. In there, you will find the am_stats_analyzer analytics script.
  3. Open it, and look for the hive.jdbc.table.create.query property, which holds the following summary table creation SQL statement.
 CREATE TABLE API_Resource_USAGE_SUMMARY (
     api VARCHAR(100), version VARCHAR(100), apiPublisher VARCHAR(100),
     consumerKey VARCHAR(100), resourcePath VARCHAR(100), context VARCHAR(100),
     method VARCHAR(100), total_request_count INT, hostName VARCHAR(100),
     year SMALLINT, month SMALLINT, day SMALLINT, time VARCHAR(30),
     PRIMARY KEY(api, version, apiPublisher, consumerKey, context, method, time))


Change the type of resourcePath from VARCHAR(100) to MEDIUMTEXT. Save the script.
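With that change, the create query would look like the following. Note that this is safe here because resourcePath is not part of the PRIMARY KEY; MySQL would not accept a MEDIUMTEXT column in a primary key without a prefix length.

 CREATE TABLE API_Resource_USAGE_SUMMARY (
     api VARCHAR(100), version VARCHAR(100), apiPublisher VARCHAR(100),
     consumerKey VARCHAR(100), resourcePath MEDIUMTEXT, context VARCHAR(100),
     method VARCHAR(100), total_request_count INT, hostName VARCHAR(100),
     year SMALLINT, month SMALLINT, day SMALLINT, time VARCHAR(30),
     PRIMARY KEY(api, version, apiPublisher, consumerKey, context, method, time))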

[1]
ERROR {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} - Failed to write data to database {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation}
com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'resourcePath' at row 1
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3885)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2530)
        at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1907)
        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2141)
        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2077)
        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2062)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.insertData(DBOperation.java:175)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.writeToDB(DBOperation.java:63)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBRecordWriter.write(DBRecordWriter.java:35)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:589)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:964)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:781)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:707)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:248)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:419)
