Wednesday, December 17, 2014

WSO2 BAM - APIM error - Failed to write data to database


When viewing statistics of APIs in WSO2 API Manager 1.7.0 via WSO2 BAM, you may come across the following issue [1].

As you might know, API Manager publishes events about API requests to WSO2 BAM. BAM stores these events in Cassandra storage, which is later processed by Hive analytics scripts; for API Manager, this is the am_stats_analyzer script. After analysis, the summarized information is written to an RDBMS instance. The summary database could be MySQL, Oracle, or even an in-memory H2 database.
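For context, the summary tables in such Hive scripts are mapped to the RDBMS through BAM's Hive JDBC storage handler. The following is only a rough, illustrative sketch of that mapping; the Hive table name, column list and data source name are placeholders, and the exact TBLPROPERTIES in your script may differ.

 -- illustrative only: Hive table backed by the summary RDBMS via the JDBC storage handler
 CREATE EXTERNAL TABLE IF NOT EXISTS APIResourceUsageSummary (
     api STRING, resourcePath STRING, total_request_count INT, time STRING)
 STORED BY 'org.wso2.carbon.hadoop.hive.jdbc.storage.JDBCStorageHandler'
 TBLPROPERTIES (
     'wso2.carbon.datasource.name' = 'WSO2AM_STATS_DB',
     'hive.jdbc.table.create.query' = 'CREATE TABLE API_Resource_USAGE_SUMMARY (api VARCHAR(100), resourcePath VARCHAR(100), total_request_count INT, time VARCHAR(30))' );

It is the create query given in hive.jdbc.table.create.query that defines the VARCHAR(100) column sizes in the summary database.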

Error


This error [1] occurs when a column in a summary database table is too small to store a given value. In this case, it is the resourcePath of an API. By default, the resourcePath column is defined as VARCHAR(100). If the resourcePath of an API is longer than 100 characters, this error is thrown.

Solution

The 'resourcePath' column is defined in the API_Resource_USAGE_SUMMARY table of the summary database.
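You can check its current definition first; for example, on a MySQL summary database the following should show the column type:

SHOW COLUMNS FROM API_Resource_USAGE_SUMMARY LIKE 'resourcePath';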

  • If the system is already up and running, the summary MySQL tables have already been created. Therefore, we need to alter the table to increase the column length. You can use the following steps for that.

1. Since the issue is in an RDBMS such as MySQL, you first need to log in to a console where you can execute SQL statements (e.g. the mysql client).
2. Then, execute the following statement.

ALTER TABLE API_Resource_USAGE_SUMMARY MODIFY resourcePath MEDIUMTEXT;


  • If the summary tables are not created yet, you can instead modify the table creation query in the am_stats_analyzer script.
  1. Open the API_Manager_Analytics.tbox.
  2. In there, you will find the am_stats_analyzer analytics script.
  3. Open it and look for the hive.jdbc.table.create.query property, which contains the following summary table creation SQL statement.
 CREATE TABLE API_Resource_USAGE_SUMMARY (
     api VARCHAR(100), version VARCHAR(100), apiPublisher VARCHAR(100),
     consumerKey VARCHAR(100), resourcePath VARCHAR(100), context VARCHAR(100),
     method VARCHAR(100), total_request_count INT, hostName VARCHAR(100),
     year SMALLINT, month SMALLINT, day SMALLINT, time VARCHAR(30),
     PRIMARY KEY(api, version, apiPublisher, consumerKey, context, method, time))


Change the type of resourcePath from VARCHAR(100) to MEDIUMTEXT, as shown below, and save the script.
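Assuming a MySQL summary database (as in the error above), the modified create query inside hive.jdbc.table.create.query would then look like the following; only the resourcePath type changes.

 CREATE TABLE API_Resource_USAGE_SUMMARY (
     api VARCHAR(100), version VARCHAR(100), apiPublisher VARCHAR(100),
     consumerKey VARCHAR(100), resourcePath MEDIUMTEXT, context VARCHAR(100),
     method VARCHAR(100), total_request_count INT, hostName VARCHAR(100),
     year SMALLINT, month SMALLINT, day SMALLINT, time VARCHAR(30),
     PRIMARY KEY(api, version, apiPublisher, consumerKey, context, method, time))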

[1]
ERROR {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} - Failed to write data to database {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation}
com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'resourcePath' at row 1
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3885)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2530)
        at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1907)
        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2141)
        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2077)
        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2062)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.insertData(DBOperation.java:175)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.writeToDB(DBOperation.java:63)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBRecordWriter.write(DBRecordWriter.java:35)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:589)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:964)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:781)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:707)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:248)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:419)

Tuesday, December 2, 2014

Fixing 101505 WSO2 Connection close issues


Error:

The error code 101505 means that the HTTP connection initiated by the ESB has been terminated by the backend. From the log, you can determine whether the message was completely written to the backend before the connection was closed (see [Status SendingCompleted : true] in the log). Users will usually get an error message such as the following.

{
  "fault": {
    "code": "101505",
    "type": "Status report",
    "message": "Runtime Error",
    "description": "Connection close For : x.x.x.x For Request : Axis2Request [Message ID : urn:uuid:84445830-ed4e-5c15-cd87-bb5b21869e3f] [Status Completed : false] [Status SendingCompleted : true]"
  }
}


Possible fix:


Some backend servers do not support persistent connections, so they tend to close connections intermittently. If this is the case, we should disable Keep-Alive on the ESB side for such backends.


By default, Keep-Alive is enabled in WSO2 ESB/APIM. You can switch it off per proxy/API by adding the following property mediator.


<property name="NO_KEEPALIVE" value="true" scope="axis2"/> 
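For example, in a proxy service this property can be set in the inSequence before the message is sent out to the backend. A minimal sketch is shown below; the proxy name and endpoint URL are placeholders.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="SampleProxy">
    <target>
        <inSequence>
            <!-- disable Keep-Alive for the outgoing call to this backend -->
            <property name="NO_KEEPALIVE" value="true" scope="axis2"/>
            <send>
                <endpoint>
                    <address uri="http://backend.example.com/service"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>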


But if you need to disable Keep-Alive globally for all connections initiated from the ESB, then the configuration should be specified in the transport configuration files.

If you are using the NIO (nhttp) transport, add the following property to ESB_HOME/repository/conf/nhttp.properties. If you are using the Pass-Through transport, add it to ESB_HOME/repository/conf/passthru-http.properties.


http.connection.disable.keepalive=1