
Analysis and Improvement of iBatis Paging Query Performance on Large Data Sets


Thanks to http://www.iteye.com/topic/544765 and http://www.iteye.com/topic/566605

First, let's look at how iBatis executes a paged query.

In iBatis, the class that actually executes SQL is com.ibatis.sqlmap.engine.execution.SqlExecutor.

A paged query runs through executeQuery —> handleMultipleResults —> handleResults. The source of handleResults is as follows:

private void handleResults(RequestScope request, ResultSet rs, int skipResults, int maxResults, RowHandlerCallback callback) throws SQLException {
    try {
      request.setResultSet(rs);
      ResultMap resultMap = request.getResultMap();
      if (resultMap != null) {
        // Skip Results
        if (rs.getType() != ResultSet.TYPE_FORWARD_ONLY) {
          if (skipResults > 0) {
            rs.absolute(skipResults);
          }
        } else {
          for (int i = 0; i < skipResults; i++) {
            if (!rs.next()) {
              return;
            }
          }
        }

        // Get Results
        int resultsFetched = 0;
        while ((maxResults == SqlExecutor.NO_MAXIMUM_RESULTS || resultsFetched < maxResults) && rs.next()) {
          Object[] columnValues = resultMap.resolveSubMap(request, rs).getResults(request, rs);
          callback.handleResultObject(request, columnValues, rs);
          resultsFetched++;
        }
      }
    } finally {
      request.setResultSet(null);
    }
  }

 

As the code shows, the paging logic in iBatis first checks the type of the ResultSet:

If the ResultSet type is ResultSet.TYPE_FORWARD_ONLY, it calls next() repeatedly, moving the cursor one row at a time until it reaches the first record of the requested page, and then calls next() again to read one page of data;

If the ResultSet type is not ResultSet.TYPE_FORWARD_ONLY, it calls absolute() to jump the cursor directly to the first record of the requested page, and then calls next() to read one page of data.

 

The ResultSet type is configured in the iBatis mapping file, for example:

<select id="queryAllUser" resultMap="user" resultSetType="FORWARD_ONLY">
    select id,name from user_tab
</select>

 

The allowed values of resultSetType are FORWARD_ONLY | SCROLL_INSENSITIVE | SCROLL_SENSITIVE.

If it is not configured, the default is FORWARD_ONLY. A FORWARD_ONLY ResultSet does not support absolute(), so the cursor is positioned with next().

In most cases we use a FORWARD_ONLY ResultSet. The SCROLL types can move the cursor both forward and backward and support exact positioning via absolute(), but the driver has to buffer the result set up to the target row (if the query starts at row 1,000,000 and takes 100 rows, the first 1,000,000 rows end up in memory as well), which easily leads to out-of-memory errors and poor performance, so they are best avoided unless really necessary.
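
To make the two strategies concrete, here is a minimal JDBC sketch of both (illustrative only; it assumes an already-open Connection and the user_tab table used throughout this article):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ResultSetPositioning {

	// FORWARD_ONLY: the cursor can only be advanced one row at a time with next()
	static void skipWithNext(Connection conn, int skip) throws Exception {
		Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
		ResultSet rs = stmt.executeQuery("select id,name from user_tab");
		for (int i = 0; i < skip && rs.next(); i++) {
			// discard the skipped rows one by one
		}
		rs.close();
		stmt.close();
	}

	// SCROLL_INSENSITIVE: absolute() jumps straight to the target row, but the
	// driver may buffer every row up to that position in memory
	static void skipWithAbsolute(Connection conn, int skip) throws Exception {
		Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
		ResultSet rs = stmt.executeQuery("select id,name from user_tab");
		if (skip > 0) {
			rs.absolute(skip);
		}
		rs.close();
		stmt.close();
	}
}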

 

As you can see, iBatis paging relies entirely on the JDBC ResultSet's next() or absolute() methods.

Hibernate does much better here: it rewrites the SQL differently for each database dialect and then executes the optimized statement.

For example, on Oracle, if the original SQL is select * from user_tab and we want 100 rows starting from row 1,000,001, the SQL Hibernate generates is:

    

select *
  from (select row_.*, rownum rownum_
          from (SELECT * FROM user_tab) row_
         where rownum <= 1000100)
 where rownum_ > 1000000
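
For reference, this is roughly how the same page is requested through the Hibernate API (a hedged sketch: it assumes a configured SessionFactory and a mapped User entity; Hibernate's Oracle dialect turns setFirstResult/setMaxResults into the rownum wrapper shown above):

import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class HibernatePagingExample {

	public static List fetchPage(SessionFactory sessionFactory) {
		Session session = sessionFactory.openSession();
		try {
			return session.createQuery("from User")
					.setFirstResult(1000000) // skip the first 1,000,000 rows
					.setMaxResults(100)      // then take 100 rows
					.list();
		} finally {
			session.close();
		}
	}
}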

 

Let's write a small program to compare the query performance of the two approaches:

  

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Test {
	public static void main(String[] args) throws Exception {
		Class.forName("oracle.jdbc.driver.OracleDriver");
		Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:db", "db",
				"xwdb");
		long a = System.currentTimeMillis();
		testIbatisPageQuery(conn);
		//testHibernatePageQuery(conn);
		long b = System.currentTimeMillis();
		System.out.println(b - a);
	}

	// Simulates iBatis paging: walk the cursor forward row by row.
	public static void testIbatisPageQuery(Connection conn) throws Exception {
		String sql = "SELECT * FROM user_tab ";

		Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
		ResultSet rs = stmt.executeQuery(sql);

		// Move the cursor past the first 1,000,000 rows
		int j = 0;
		while (j++ < 1000000 && rs.next()) {
		}
		// Then read 100 rows, one at a time
		int i = 0;
		while (i++ < 100 && rs.next()) {
		}

		rs.close();
		stmt.close();
	}

	// Simulates Hibernate paging: let the database do the paging via rownum.
	public static void testHibernatePageQuery(Connection conn) throws Exception {
		String sql = "SELECT * FROM user_tab ";

		StringBuffer pagingSelect = new StringBuffer(sql.length() + 100);
		pagingSelect.append("select * from ( select row_.*, rownum rownum_ from ( ");
		pagingSelect.append(sql);
		pagingSelect.append(" ) row_ where rownum <= 1000100) where rownum_ > 1000000");

		Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
		ResultSet rs = stmt.executeQuery(pagingSelect.toString());

		// The result set already contains just the requested 100 rows
		while (rs.next()) {
		}

		rs.close();
		stmt.close();
	}
}

 

testIbatisPageQuery takes more than ten seconds, while testHibernatePageQuery finishes in a few hundred milliseconds; the difference is huge.

If instead we start at row 1,000 (or even earlier) and take 100 rows, the two approaches perform almost identically.

 

In summary: if the system queries very large data sets and users may page deep into the results, we should replace iBatis's paging implementation; otherwise there is no need to. In most applications users never page that far back, which is probably why iBatis has never changed its paging implementation.

 

If we do decide to replace it, there are three options.

      The first is to write our own class that extends iBatis's SqlExecutor and inject it into com.ibatis.sqlmap.engine.impl.ExtendedSqlMapClient. Because the SqlExecutor is held in a private field with no public setter, it has to be injected via reflection, as sketched below;
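
A minimal sketch of that reflection-based injection follows. The field name "sqlExecutor" and its location on SqlMapExecutorDelegate are assumptions based on the iBatis 2.3.x source; check them against the version you actually use:

import java.lang.reflect.Field;

import com.ibatis.sqlmap.client.SqlMapClient;
import com.ibatis.sqlmap.engine.execution.SqlExecutor;
import com.ibatis.sqlmap.engine.impl.ExtendedSqlMapClient;
import com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate;

public class SqlExecutorInjector {

	// Swap in a custom SqlExecutor subclass via reflection, since there is no public setter.
	public static void inject(SqlMapClient sqlMapClient, SqlExecutor customExecutor) {
		try {
			SqlMapExecutorDelegate delegate = ((ExtendedSqlMapClient) sqlMapClient).getDelegate();
			// Assumed field name; verify it in your iBatis version
			Field field = SqlMapExecutorDelegate.class.getDeclaredField("sqlExecutor");
			field.setAccessible(true);
			field.set(delegate, customExecutor);
		} catch (Exception e) {
			throw new IllegalStateException("Failed to inject custom SqlExecutor", e);
		}
	}
}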

      The second is to drop iBatis's paging method queryForList(String id, Object param, int skip, int max) and use the plain query method queryForList(String id, Object param) instead, passing the skip and max values into the SQL as properties of the parameter object, as in the mapped statement below (a sketch of the calling code follows it):

     

<select id="queryAllUser" resultMap="user">
    select *
      from (select row_.*, rownum rownum_
              from (SELECT * FROM user_tab) row_
             where rownum <= #_maxResult#)
     where rownum_ > #_skipResult#
</select>
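
A hedged sketch of the calling code for this option (the DAO shape is illustrative; the _skipResult/_maxResult keys simply mirror the placeholders above, and note that the value bound to #_maxResult# is skip + page size, matching the rownum <= condition):

import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.ibatis.sqlmap.client.SqlMapClient;

public class UserDao {

	private final SqlMapClient sqlMapClient;

	public UserDao(SqlMapClient sqlMapClient) {
		this.sqlMapClient = sqlMapClient;
	}

	public List queryUsersByPage(int skipResults, int pageSize) throws SQLException {
		Map params = new HashMap();
		// rownum_ > skipResults  and  rownum <= skipResults + pageSize
		params.put("_skipResult", new Integer(skipResults));
		params.put("_maxResult", new Integer(skipResults + pageSize));
		return sqlMapClient.queryForList("queryAllUser", params);
	}
}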

 

      The third option is to create, in your own project, a class with exactly the same package and class name as iBatis's SqlExecutor. In a web application, classes under WEB-INF/classes are loaded before the jars under WEB-INF/lib, so your copy neatly shadows iBatis's SqlExecutor class;

 This approach works because:

     1. A class is identified by its fully qualified name, so a class you write named com.ibatis.sqlmap.engine.execution.SqlExecutor has exactly the same identity as the one inside the iBatis jar;
    2. A web application's classes are typically loaded in this order: the container's own classes and jars first, then everything under WEB-INF/classes, and only then the jars under WEB-INF/lib;

    With these two facts in hand, create the package com.ibatis.sqlmap.engine.execution under your project's src directory, add a class SqlExecutor to it, copy in the source of the iBatis class, and make a few small changes: rename the original executeQuery method and make it private (any name will do), then add a new public executeQuery method that implements the paging logic.
The web container will then find com.ibatis.sqlmap.engine.execution.SqlExecutor under WEB-INF/classes and ignore the copy inside the iBatis jar, giving us custom paging without modifying the iBatis jar itself.
    One more note: you can also package this custom class into its own jar and put it in lib, but then the jar name matters. On many containers the jars in WEB-INF/lib are scanned in alphabetical order (the servlet spec does not guarantee this), so your jar must sort before the iBatis jar; for example, next to ibatis-2.3.4.726.jar you could name it ibatis-2.3.4.725.jar, or anything that sorts earlier, so that your class is the one that gets loaded.

 

 SqlExecutor.java

 

/*
 *  Copyright 2004 Clinton Begin
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */
package com.ibatis.sqlmap.engine.execution;

import java.sql.BatchUpdateException;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.ibatis.sqlmap.engine.impl.SqlMapClientImpl;
import com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate;
import com.ibatis.sqlmap.engine.mapping.parameter.ParameterMap;
import com.ibatis.sqlmap.engine.mapping.parameter.ParameterMapping;
import com.ibatis.sqlmap.engine.mapping.result.ResultMap;
import com.ibatis.sqlmap.engine.mapping.result.ResultObjectFactoryUtil;
import com.ibatis.sqlmap.engine.mapping.statement.DefaultRowHandler;
import com.ibatis.sqlmap.engine.mapping.statement.MappedStatement;
import com.ibatis.sqlmap.engine.mapping.statement.RowHandlerCallback;
import com.ibatis.sqlmap.engine.scope.ErrorContext;
import com.ibatis.sqlmap.engine.scope.SessionScope;
import com.ibatis.sqlmap.engine.scope.StatementScope;

/**
 * Class responsible for executing the SQL
 */
@SuppressWarnings ("unchecked")
public class SqlExecutor {

	private static final Log log = LogFactory.getLog(SqlExecutor.class);
	//
	// Constants
	//
	/**
	 * Constant to let us know not to skip anything
	 */
	public static final int NO_SKIPPED_RESULTS = 0;
	/**
	 * Constant to let us know to include all records
	 */
	public static final int NO_MAXIMUM_RESULTS = -999999;
	
	public SqlExecutor() {
		log.info("Custom class 'SqlExecutor' Initialization");
	}

	//
	// Public Methods
	//

	/**
	 * Execute an update
	 * 
	 * @param statementScope
	 *            - the request scope
	 * @param conn
	 *            - the database connection
	 * @param sql
	 *            - the sql statement to execute
	 * @param parameters
	 *            - the parameters for the sql statement
	 * @return - the number of records changed
	 * @throws SQLException
	 *             - if the update fails
	 */
	public int executeUpdate(StatementScope statementScope, Connection conn,
			String sql, Object[] parameters) throws SQLException {
		ErrorContext errorContext = statementScope.getErrorContext();
		errorContext.setActivity("executing update");
		errorContext.setObjectId(sql);
		PreparedStatement ps = null;
		setupResultObjectFactory(statementScope);
		int rows = 0;
		try {
			errorContext
					.setMoreInfo("Check the SQL Statement (preparation failed).");
			ps = prepareStatement(statementScope.getSession(), conn, sql);
			setStatementTimeout(statementScope.getStatement(), ps);
			errorContext
					.setMoreInfo("Check the parameters (set parameters failed).");
			statementScope.getParameterMap().setParameters(statementScope, ps,
					parameters);
			errorContext.setMoreInfo("Check the statement (update failed).");
			ps.execute();
			rows = ps.getUpdateCount();
		} finally {
			closeStatement(statementScope.getSession(), ps);
		}
		return rows;
	}

	/**
	 * Adds a statement to a batch
	 * 
	 * @param statementScope
	 *            - the request scope
	 * @param conn
	 *            - the database connection
	 * @param sql
	 *            - the sql statement
	 * @param parameters
	 *            - the parameters for the statement
	 * @throws SQLException
	 *             - if the statement fails
	 */
	public void addBatch(StatementScope statementScope, Connection conn,
			String sql, Object[] parameters) throws SQLException {
		Batch batch = (Batch) statementScope.getSession().getBatch();
		if (batch == null) {
			batch = new Batch();
			statementScope.getSession().setBatch(batch);
		}
		batch.addBatch(statementScope, conn, sql, parameters);
	}

	/**
	 * Execute a batch of statements
	 * 
	 * @param sessionScope
	 *            - the session scope
	 * @return - the number of rows impacted by the batch
	 * @throws SQLException
	 *             - if a statement fails
	 */
	public int executeBatch(SessionScope sessionScope) throws SQLException {
		int rows = 0;
		Batch batch = (Batch) sessionScope.getBatch();
		if (batch != null) {
			try {
				rows = batch.executeBatch();
			} finally {
				batch.cleanupBatch(sessionScope);
			}
		}
		return rows;
	}

	/**
	 * Execute a batch of statements
	 * 
	 * @param sessionScope
	 *            - the session scope
	 * @return - a List of BatchResult objects (may be null if no batch has been
	 *         initiated). There will be one BatchResult object in the list for
	 *         each sub-batch executed
	 * @throws SQLException
	 *             if a database access error occurs, or the driver does not
	 *             support batch statements
	 * @throws BatchException
	 *             if the driver throws BatchUpdateException
	 */
	public List executeBatchDetailed(SessionScope sessionScope)
			throws SQLException, BatchException {
		List answer = null;
		Batch batch = (Batch) sessionScope.getBatch();
		if (batch != null) {
			try {
				answer = batch.executeBatchDetailed();
			} finally {
				batch.cleanupBatch(sessionScope);
			}
		}
		return answer;
	}

	/**
	 * Long form of the method to execute a query
	 * 
	 * @param statementScope
	 *            - the request scope
	 * @param conn
	 *            - the database connection
	 * @param sql
	 *            - the SQL statement to execute
	 * @param parameters
	 *            - the parameters for the statement
	 * @param skipResults
	 *            - the number of results to skip
	 * @param maxResults
	 *            - the maximum number of results to return
	 * @param callback
	 *            - the row handler for the query
	 * @throws SQLException
	 *             - if the query fails
	 */
//------------------------------- Paging override (start) ------------------------------------//
	// Overridden executeQuery: if this is a paged query, first build the dialect-specific
	// paging SQL, then hand it to iBatis's default query execution.
	public void executeQuery(StatementScope statementScope, Connection conn,
			String sql, Object[] parameters, int skipResults, int maxResults,
			RowHandlerCallback callback) throws SQLException {
		// Get the database product name
		String dbName = conn.getMetaData().getDatabaseProductName();

		int len = sql.length();

		// Is this a paged query?
		if ((skipResults != NO_SKIPPED_RESULTS || maxResults != NO_MAXIMUM_RESULTS)) {
			// Build the dialect-specific paging SQL for this database
			sql = Dialect.getLimitString(dbName, sql, skipResults, maxResults);

			// If the SQL was rewritten, the database now does the skipping and
			// limiting, so switch off iBatis's cursor-based skip/limit
			if (sql.length() != len) {
				skipResults = NO_SKIPPED_RESULTS;
				maxResults = NO_MAXIMUM_RESULTS;
			}
		}
		iBatisExecuteQuery(statementScope, conn, sql, parameters, skipResults,
				maxResults, callback);
	}
	
	// The default executeQuery method copied unchanged from the iBatis jar
	private void iBatisExecuteQuery(StatementScope statementScope,
			Connection conn, String sql, Object[] parameters, int skipResults,
			int maxResults, RowHandlerCallback callback) throws SQLException {
		ErrorContext errorContext = statementScope.getErrorContext();
		errorContext.setActivity("executing query");
		errorContext.setObjectId(sql);
		PreparedStatement ps = null;
		ResultSet rs = null;
		setupResultObjectFactory(statementScope);
		try {
			errorContext
					.setMoreInfo("Check the SQL Statement (preparation failed).");
			Integer rsType = statementScope.getStatement().getResultSetType();
			if (rsType != null) {
				ps = prepareStatement(statementScope.getSession(), conn, sql,
						rsType);
			} else {
				ps = prepareStatement(statementScope.getSession(), conn, sql);
			}
			setStatementTimeout(statementScope.getStatement(), ps);
			Integer fetchSize = statementScope.getStatement().getFetchSize();
			if (fetchSize != null) {
				ps.setFetchSize(fetchSize.intValue());
			}
			errorContext
					.setMoreInfo("Check the parameters (set parameters failed).");
			statementScope.getParameterMap().setParameters(statementScope, ps,
					parameters);
			errorContext.setMoreInfo("Check the statement (query failed).");
			ps.execute();
			errorContext
					.setMoreInfo("Check the results (failed to retrieve results).");

			// Begin ResultSet Handling
			rs = handleMultipleResults(ps, statementScope, skipResults,
					maxResults, callback);
			// End ResultSet Handling
		} finally {
			try {
				closeResultSet(rs);
			} finally {
				closeStatement(statementScope.getSession(), ps);
			}
		}
	}
//-------------------- Paging override (end) -------------------------------------//
	/**
	 * Execute a stored procedure that updates data
	 * 
	 * @param statementScope
	 *            - the request scope
	 * @param conn
	 *            - the database connection
	 * @param sql
	 *            - the SQL to call the procedure
	 * @param parameters
	 *            - the parameters for the procedure
	 * @return - the rows impacted by the procedure
	 * @throws SQLException
	 *             - if the procedure fails
	 */
	public int executeUpdateProcedure(StatementScope statementScope,
			Connection conn, String sql, Object[] parameters)
			throws SQLException {
		ErrorContext errorContext = statementScope.getErrorContext();
		errorContext.setActivity("executing update procedure");
		errorContext.setObjectId(sql);
		CallableStatement cs = null;
		setupResultObjectFactory(statementScope);
		int rows = 0;
		try {
			errorContext
					.setMoreInfo("Check the SQL Statement (preparation failed).");
			cs = prepareCall(statementScope.getSession(), conn, sql);
			setStatementTimeout(statementScope.getStatement(), cs);
			ParameterMap parameterMap = statementScope.getParameterMap();
			ParameterMapping[] mappings = parameterMap.getParameterMappings();
			errorContext
					.setMoreInfo("Check the output parameters (register output parameters failed).");
			registerOutputParameters(cs, mappings);
			errorContext
					.setMoreInfo("Check the parameters (set parameters failed).");
			parameterMap.setParameters(statementScope, cs, parameters);
			errorContext
					.setMoreInfo("Check the statement (update procedure failed).");
			cs.execute();
			rows = cs.getUpdateCount();
			errorContext
					.setMoreInfo("Check the output parameters (retrieval of output parameters failed).");
			retrieveOutputParameters(statementScope, cs, mappings, parameters,
					null);
		} finally {
			closeStatement(statementScope.getSession(), cs);
		}
		return rows;
	}

	/**
	 * Execute a stored procedure
	 * 
	 * @param statementScope
	 *            - the request scope
	 * @param conn
	 *            - the database connection
	 * @param sql
	 *            - the sql to call the procedure
	 * @param parameters
	 *            - the parameters for the procedure
	 * @param skipResults
	 *            - the number of results to skip
	 * @param maxResults
	 *            - the maximum number of results to return
	 * @param callback
	 *            - a row handler for processing the results
	 * @throws SQLException
	 *             - if the procedure fails
	 */
	public void executeQueryProcedure(StatementScope statementScope,
			Connection conn, String sql, Object[] parameters, int skipResults,
			int maxResults, RowHandlerCallback callback) throws SQLException {
		ErrorContext errorContext = statementScope.getErrorContext();
		errorContext.setActivity("executing query procedure");
		errorContext.setObjectId(sql);
		CallableStatement cs = null;
		ResultSet rs = null;
		setupResultObjectFactory(statementScope);
		try {
			errorContext
					.setMoreInfo("Check the SQL Statement (preparation failed).");
			Integer rsType = statementScope.getStatement().getResultSetType();
			if (rsType != null) {
				cs = prepareCall(statementScope.getSession(), conn, sql, rsType);
			} else {
				cs = prepareCall(statementScope.getSession(), conn, sql);
			}
			setStatementTimeout(statementScope.getStatement(), cs);
			Integer fetchSize = statementScope.getStatement().getFetchSize();
			if (fetchSize != null) {
				cs.setFetchSize(fetchSize.intValue());
			}
			ParameterMap parameterMap = statementScope.getParameterMap();
			ParameterMapping[] mappings = parameterMap.getParameterMappings();
			errorContext
					.setMoreInfo("Check the output parameters (register output parameters failed).");
			registerOutputParameters(cs, mappings);
			errorContext
					.setMoreInfo("Check the parameters (set parameters failed).");
			parameterMap.setParameters(statementScope, cs, parameters);
			errorContext
					.setMoreInfo("Check the statement (update procedure failed).");
			cs.execute();
			errorContext
					.setMoreInfo("Check the results (failed to retrieve results).");

			// Begin ResultSet Handling
			rs = handleMultipleResults(cs, statementScope, skipResults,
					maxResults, callback);
			// End ResultSet Handling
			errorContext
					.setMoreInfo("Check the output parameters (retrieval of output parameters failed).");
			retrieveOutputParameters(statementScope, cs, mappings, parameters,
					callback);

		} finally {
			try {
				closeResultSet(rs);
			} finally {
				closeStatement(statementScope.getSession(), cs);
			}
		}
	}

	private ResultSet handleMultipleResults(PreparedStatement ps,
			StatementScope statementScope, int skipResults, int maxResults,
			RowHandlerCallback callback) throws SQLException {
		ResultSet rs;
		rs = getFirstResultSet(statementScope, ps);
		if (rs != null) {
			handleResults(statementScope, rs, skipResults, maxResults, callback);
		}

		// Multiple ResultSet handling
		if (callback.getRowHandler() instanceof DefaultRowHandler) {
			MappedStatement statement = statementScope.getStatement();
			DefaultRowHandler defaultRowHandler = ((DefaultRowHandler) callback
					.getRowHandler());
			if (statement.hasMultipleResultMaps()) {
				List multipleResults = new ArrayList();
				multipleResults.add(defaultRowHandler.getList());
				ResultMap[] resultMaps = statement.getAdditionalResultMaps();
				int i = 0;
				while (moveToNextResultsSafely(statementScope, ps)) {
					if (i >= resultMaps.length)
						break;
					ResultMap rm = resultMaps[i];
					statementScope.setResultMap(rm);
					rs = ps.getResultSet();
					DefaultRowHandler rh = new DefaultRowHandler();
					handleResults(statementScope, rs, skipResults, maxResults,
							new RowHandlerCallback(rm, null, rh));
					multipleResults.add(rh.getList());
					i++;
				}
				defaultRowHandler.setList(multipleResults);
				statementScope.setResultMap(statement.getResultMap());
			} else {
				while (moveToNextResultsSafely(statementScope, ps))
					;
			}
		}
		// End additional ResultSet handling
		return rs;
	}

	private ResultSet getFirstResultSet(StatementScope scope, Statement stmt)
			throws SQLException {
		ResultSet rs = null;
		boolean hasMoreResults = true;
		while (hasMoreResults) {
			rs = stmt.getResultSet();
			if (rs != null) {
				break;
			}
			hasMoreResults = moveToNextResultsIfPresent(scope, stmt);
		}
		return rs;
	}

	private boolean moveToNextResultsIfPresent(StatementScope scope,
			Statement stmt) throws SQLException {
		boolean moreResults;
		// This is the messed up JDBC approach for determining if there are more
		// results
		moreResults = !(((moveToNextResultsSafely(scope, stmt) == false) && (stmt
				.getUpdateCount() == -1)));
		return moreResults;
	}

	private boolean moveToNextResultsSafely(StatementScope scope, Statement stmt)
			throws SQLException {
		if (forceMultipleResultSetSupport(scope)
				|| stmt.getConnection().getMetaData()
						.supportsMultipleResultSets()) {
			return stmt.getMoreResults();
		}
		return false;
	}

	private boolean forceMultipleResultSetSupport(StatementScope scope) {
		return ((SqlMapClientImpl) scope.getSession().getSqlMapClient())
				.getDelegate().isForceMultipleResultSetSupport();
	}

	private void handleResults(StatementScope statementScope, ResultSet rs,
			int skipResults, int maxResults, RowHandlerCallback callback)
			throws SQLException {
		try {
			statementScope.setResultSet(rs);
			ResultMap resultMap = statementScope.getResultMap();
			if (resultMap != null) {
				// Skip Results
				if (rs.getType() != ResultSet.TYPE_FORWARD_ONLY) {
					if (skipResults > 0) {
						rs.absolute(skipResults);
					}
				} else {
					for (int i = 0; i < skipResults; i++) {
						if (!rs.next()) {
							return;
						}
					}
				}

				// Get Results
				int resultsFetched = 0;
				while ((maxResults == SqlExecutor.NO_MAXIMUM_RESULTS || resultsFetched < maxResults)
						&& rs.next()) {
					Object[] columnValues = resultMap.resolveSubMap(
							statementScope, rs).getResults(statementScope, rs);
					callback.handleResultObject(statementScope, columnValues,
							rs);
					resultsFetched++;
				}
			}
		} finally {
			statementScope.setResultSet(null);
		}
	}

	private void retrieveOutputParameters(StatementScope statementScope,
			CallableStatement cs, ParameterMapping[] mappings,
			Object[] parameters, RowHandlerCallback callback)
			throws SQLException {
		for (int i = 0; i < mappings.length; i++) {
			ParameterMapping mapping = ((ParameterMapping) mappings[i]);
			if (mapping.isOutputAllowed()) {
				if ("java.sql.ResultSet".equalsIgnoreCase(mapping
						.getJavaTypeName())) {
					ResultSet rs = (ResultSet) cs.getObject(i + 1);
					ResultMap resultMap;
					if (mapping.getResultMapName() == null) {
						resultMap = statementScope.getResultMap();
						handleOutputParameterResults(statementScope, resultMap,
								rs, callback);
					} else {
						SqlMapClientImpl client = (SqlMapClientImpl) statementScope
								.getSession().getSqlMapClient();
						resultMap = client.getDelegate().getResultMap(
								mapping.getResultMapName());
						DefaultRowHandler rowHandler = new DefaultRowHandler();
						RowHandlerCallback handlerCallback = new RowHandlerCallback(
								resultMap, null, rowHandler);
						handleOutputParameterResults(statementScope, resultMap,
								rs, handlerCallback);
						parameters[i] = rowHandler.getList();
					}
					rs.close();
				} else {
					parameters[i] = mapping.getTypeHandler().getResult(cs,
							i + 1);
				}
			}
		}
	}

	private void registerOutputParameters(CallableStatement cs,
			ParameterMapping[] mappings) throws SQLException {
		for (int i = 0; i < mappings.length; i++) {
			ParameterMapping mapping = ((ParameterMapping) mappings[i]);
			if (mapping.isOutputAllowed()) {
				if (null != mapping.getTypeName()
						&& !mapping.getTypeName().equals("")) { // @added
					cs.registerOutParameter(i + 1, mapping.getJdbcType(),
							mapping.getTypeName());
				} else {
					if (mapping.getNumericScale() != null
							&& (mapping.getJdbcType() == Types.NUMERIC || mapping
									.getJdbcType() == Types.DECIMAL)) {
						cs.registerOutParameter(i + 1, mapping.getJdbcType(),
								mapping.getNumericScale().intValue());
					} else {
						cs.registerOutParameter(i + 1, mapping.getJdbcType());
					}
				}
			}
		}
	}

	private void handleOutputParameterResults(StatementScope statementScope,
			ResultMap resultMap, ResultSet rs, RowHandlerCallback callback)
			throws SQLException {
		ResultMap orig = statementScope.getResultMap();
		try {
			statementScope.setResultSet(rs);
			if (resultMap != null) {
				statementScope.setResultMap(resultMap);

				// Get Results
				while (rs.next()) {
					Object[] columnValues = resultMap.resolveSubMap(
							statementScope, rs).getResults(statementScope, rs);
					callback.handleResultObject(statementScope, columnValues,
							rs);
				}
			}
		} finally {
			statementScope.setResultSet(null);
			statementScope.setResultMap(orig);
		}
	}

	/**
	 * Clean up any batches on the session
	 * 
	 * @param sessionScope
	 *            - the session to clean up
	 */
	public void cleanup(SessionScope sessionScope) {
		Batch batch = (Batch) sessionScope.getBatch();
		if (batch != null) {
			batch.cleanupBatch(sessionScope);
			sessionScope.setBatch(null);
		}
	}

	private PreparedStatement prepareStatement(SessionScope sessionScope,
			Connection conn, String sql, Integer rsType) throws SQLException {
		SqlMapExecutorDelegate delegate = ((SqlMapClientImpl) sessionScope
				.getSqlMapExecutor()).getDelegate();
		if (sessionScope.hasPreparedStatementFor(sql)) {
			return sessionScope.getPreparedStatement((sql));
		} else {
			PreparedStatement ps = conn.prepareStatement(sql,
					rsType.intValue(), ResultSet.CONCUR_READ_ONLY);
			sessionScope.putPreparedStatement(delegate, sql, ps);
			return ps;
		}
	}

	private CallableStatement prepareCall(SessionScope sessionScope,
			Connection conn, String sql, Integer rsType) throws SQLException {
		SqlMapExecutorDelegate delegate = ((SqlMapClientImpl) sessionScope
				.getSqlMapExecutor()).getDelegate();
		if (sessionScope.hasPreparedStatementFor(sql)) {
			return (CallableStatement) sessionScope.getPreparedStatement((sql));
		} else {
			CallableStatement cs = conn.prepareCall(sql, rsType.intValue(),
					ResultSet.CONCUR_READ_ONLY);
			sessionScope.putPreparedStatement(delegate, sql, cs);
			return cs;
		}
	}

	private static PreparedStatement prepareStatement(
			SessionScope sessionScope, Connection conn, String sql)
			throws SQLException {
		SqlMapExecutorDelegate delegate = ((SqlMapClientImpl) sessionScope
				.getSqlMapExecutor()).getDelegate();
		if (sessionScope.hasPreparedStatementFor(sql)) {
			return sessionScope.getPreparedStatement((sql));
		} else {
			PreparedStatement ps = conn.prepareStatement(sql);
			sessionScope.putPreparedStatement(delegate, sql, ps);
			return ps;
		}
	}

	private CallableStatement prepareCall(SessionScope sessionScope,
			Connection conn, String sql) throws SQLException {
		SqlMapExecutorDelegate delegate = ((SqlMapClientImpl) sessionScope
				.getSqlMapExecutor()).getDelegate();
		if (sessionScope.hasPreparedStatementFor(sql)) {
			return (CallableStatement) sessionScope.getPreparedStatement((sql));
		} else {
			CallableStatement cs = conn.prepareCall(sql);
			sessionScope.putPreparedStatement(delegate, sql, cs);
			return cs;
		}
	}

	private static void closeStatement(SessionScope sessionScope,
			PreparedStatement ps) {
		if (ps != null) {
			if (!sessionScope.hasPreparedStatement(ps)) {
				try {
					ps.close();
				} catch (SQLException e) {
					// ignore
				}
			}
		}
	}

	/**
	 * @param rs
	 */
	private static void closeResultSet(ResultSet rs) {
		if (rs != null) {
			try {
				rs.close();
			} catch (SQLException e) {
				// ignore
			}
		}
	}

	private static void setStatementTimeout(MappedStatement mappedStatement,
			Statement statement) throws SQLException {
		if (mappedStatement.getTimeout() != null) {
			statement.setQueryTimeout(mappedStatement.getTimeout().intValue());
		}
	}

	//
	// Inner Classes
	//

	private static class Batch {
		private String currentSql;
		private List statementList = new ArrayList();
		private List batchResultList = new ArrayList();
		private int size;

		/**
		 * Create a new batch
		 */
		public Batch() {
			this.size = 0;
		}

		/**
		 * Getter for the batch size
		 * 
		 * @return - the batch size
		 */
		public int getSize() {
			return size;
		}

		/**
		 * Add a prepared statement to the batch
		 * 
		 * @param statementScope
		 *            - the request scope
		 * @param conn
		 *            - the database connection
		 * @param sql
		 *            - the SQL to add
		 * @param parameters
		 *            - the parameters for the SQL
		 * @throws SQLException
		 *             - if the prepare for the SQL fails
		 */
		public void addBatch(StatementScope statementScope, Connection conn,
				String sql, Object[] parameters) throws SQLException {
			PreparedStatement ps = null;
			if (currentSql != null && currentSql.equals(sql)) {
				int last = statementList.size() - 1;
				ps = (PreparedStatement) statementList.get(last);
			} else {
				ps = prepareStatement(statementScope.getSession(), conn, sql);
				setStatementTimeout(statementScope.getStatement(), ps);
				currentSql = sql;
				statementList.add(ps);
				batchResultList.add(new BatchResult(statementScope
						.getStatement().getId(), sql));
			}
			statementScope.getParameterMap().setParameters(statementScope, ps,
					parameters);
			ps.addBatch();
			size++;
		}

		/**
		 * TODO (Jeff Butler) - maybe this method should be deprecated in some
		 * release, and then removed in some even later release.
		 * executeBatchDetailed gives much more complete information. <p/>
		 * Execute the current session's batch
		 * 
		 * @return - the number of rows updated
		 * @throws SQLException
		 *             - if the batch fails
		 */
		public int executeBatch() throws SQLException {
			int totalRowCount = 0;
			for (int i = 0, n = statementList.size(); i < n; i++) {
				PreparedStatement ps = (PreparedStatement) statementList.get(i);
				int[] rowCounts = ps.executeBatch();
				for (int j = 0; j < rowCounts.length; j++) {
					if (rowCounts[j] == Statement.SUCCESS_NO_INFO) {
						// do nothing
					} else if (rowCounts[j] == Statement.EXECUTE_FAILED) {
						throw new SQLException(
								"The batched statement at index " + j
										+ " failed to execute.");
					} else {
						totalRowCount += rowCounts[j];
					}
				}
			}
			return totalRowCount;
		}

		/**
		 * Batch execution method that returns all the information the driver
		 * has to offer.
		 * 
		 * @return a List of BatchResult objects
		 * @throws BatchException
		 *             (an SQLException sub class) if any nested batch fails
		 * @throws SQLException
		 *             if a database access error occurs, or the driver does not
		 *             support batch statements
		 * @throws BatchException
		 *             if the driver throws BatchUpdateException
		 */
		public List executeBatchDetailed() throws SQLException, BatchException {
			List answer = new ArrayList();
			for (int i = 0, n = statementList.size(); i < n; i++) {
				BatchResult br = (BatchResult) batchResultList.get(i);
				PreparedStatement ps = (PreparedStatement) statementList.get(i);
				try {
					br.setUpdateCounts(ps.executeBatch());
				} catch (BatchUpdateException e) {
					StringBuffer message = new StringBuffer();
					message.append("Sub batch number ");
					message.append(i + 1);
					message.append(" failed.");
					if (i > 0) {
						message.append(" ");
						message.append(i);
						message
								.append(" prior sub batch(s) completed successfully, but will be rolled back.");
					}
					throw new BatchException(message.toString(), e, answer, br
							.getStatementId(), br.getSql());
				}
				answer.add(br);
			}
			return answer;
		}

		/**
		 * Close all the statements in the batch and clear all the statements
		 * 
		 * @param sessionScope
		 */
		public void cleanupBatch(SessionScope sessionScope) {
			for (int i = 0, n = statementList.size(); i < n; i++) {
				PreparedStatement ps = (PreparedStatement) statementList.get(i);
				closeStatement(sessionScope, ps);
			}
			currentSql = null;
			statementList.clear();
			batchResultList.clear();
			size = 0;
		}
	}

	private void setupResultObjectFactory(StatementScope statementScope) {
		SqlMapClientImpl client = (SqlMapClientImpl) statementScope
				.getSession().getSqlMapClient();
		ResultObjectFactoryUtil.setResultObjectFactory(client
				.getResultObjectFactory());
		ResultObjectFactoryUtil.setStatementId(statementScope.getStatement()
				.getId());
	}
}

 

Dialect.java  

 

 

package com.ibatis.sqlmap.engine.execution;

public class Dialect {
	private static final String SQL_END_DELIMITER = ";";

	public static String getLimitString(String dbName, String sql, int offset,
			int limit) {
		String limitString = sql;
		if (dbName.toLowerCase().indexOf("mysql") != -1) {
			limitString = getMysqlLimitString(sql, offset, limit);
		}
		if (dbName.toLowerCase().indexOf("microsoft sql server") != -1) {
			limitString = getMssqlLimitString(sql, offset, limit);
		}
		if (dbName.toLowerCase().indexOf("oracle") != -1) {
			limitString = getOracleLimitString(sql, offset, limit);
		}
		if (dbName.toLowerCase().indexOf("db2") != -1) {
			limitString = getDB2LimitString(sql, offset, limit);
		}
		
		return limitString;
	}

	private static String getMysqlLimitString(String sql, int offset, int limit) {
		sql = trim(sql);
		StringBuffer sb = new StringBuffer(sql.length() + 20);
		sb.append(sql);
		if (offset > 0) {
			sb.append(" limit ").append(offset).append(',').append(limit);
		} else {
			sb.append(" limit ").append(limit);
		}
		return sb.toString();
	}

	private static String getOracleLimitString(String sql, int offset, int limit) {
		sql = trim(sql);
		StringBuffer sb = new StringBuffer(sql.length() + 100);
		if (offset > 0) {
			sb.append("select * from ( select row_.*, rownum rownum_ from ( ")
					.append(sql).append(" ) row_ where rownum <= ").append(
							offset + limit).append(") where rownum_ > ")
					.append(offset);
		} else {
			sb.append("select * from ( ").append(sql).append(
					" ) where rownum <= ").append(limit);
		}
		return sb.toString();
	}

	private static String getMssqlLimitString(String sql, int offset, int limit) {
		// Not implemented; return the SQL unchanged so iBatis falls back to its
		// default cursor-based skipping instead of failing on a null statement.
		return sql;
	}

	private static String getDB2LimitString(String sql, int offset, int limit) {
		// Not implemented; see getMssqlLimitString.
		return sql;
	}

	private static String trim(String sql) {
		sql = sql.trim();
		if (sql.endsWith(SQL_END_DELIMITER)) {
			// Strip the trailing ";" so the statement can be wrapped in a subquery
			sql = sql.substring(0, sql.length() - SQL_END_DELIMITER.length());
		}
		return sql;
	}
}
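
As a quick sanity check (illustrative only), the following prints the paging SQL that Dialect produces for the product names reported by the Oracle and MySQL JDBC drivers:

import com.ibatis.sqlmap.engine.execution.Dialect;

public class DialectDemo {
	public static void main(String[] args) {
		String sql = "SELECT * FROM user_tab";
		// Oracle: wrapped in the rownum pattern shown earlier in the article
		System.out.println(Dialect.getLimitString("Oracle", sql, 1000000, 100));
		// MySQL: appends " limit 1000000,100"
		System.out.println(Dialect.getLimitString("MySQL", sql, 1000000, 100));
	}
}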

 

 
