Purpose: This paper proposes an expert assignment method for scientific project review that considers both accuracy and impartiality. As impartial and accurate peer review is extremely important to ensure the quality and feasibility of scientific projects, enhanced methods for managing the process are needed.

Design/methodology/approach: To ensure both accuracy and impartiality, we design four criteria — the reviewers' fitness degree, research intensity, academic association, and potential conflict of interest — to express the characteristics of an appropriate peer review expert. We first formalize the expert assignment problem as an optimization problem based on the designed criteria, and then propose a randomized algorithm to solve the expert assignment problem of identifying reviewer adequacy.

Findings: Simulation results show that the proposed method is quite accurate and impartial during expert assignment.

Research limitations: Although the criteria used in this paper can properly show the characteristics of a good and appropriate peer review expert, more criteria/conditions can be included in the proposed scheme to further enhance the accuracy and impartiality of the expert assignment.

Practical implications: The proposed method can help project funding agencies (e.g. the National Natural Science Foundation of China) find better experts for project peer review.

Originality/value: To the authors' knowledge, this is the first publication that proposes an algorithm applying an impartial approach to the project review expert assignment process. The simulation results show the effectiveness of the proposed method.
Peer review is a standard professional practice designed to assess the quality and feasibility of scientific projects and academic papers. Since appointed peer review experts directly create and/or contribute to the review results, and thus influence responses to the applicants' project proposals, expert assignment has been one of the most important tasks in project management (Huang & Zhong, 2016).
The core issue in peer review expert assignment is how to ensure its accuracy and impartiality (Gandhi & Sauser, 2008). Accuracy in this context means that the experts appointed for project review should be very familiar with the related research field and able to judge proposals correctly and precisely. Impartiality means that these experts should make unbiased (independent and fair) reports on all the projects they review (Wu, 1996). A suitable expert, one who is informed and honest, can be expected to make more objective and impartial comments on the quality of the reviewed project. A good fit between peer reviewers and the applicants' work they assess can help maintain the prestige of the reviewers and better ensure the reputation of the project designers, authors, and affiliated institutions.
The expert assignment problem (EAP) has attracted considerable research interest in recent years. Most existing works focus on the accuracy of the expert assignment (Wang, 2007). Aiming to guarantee such assignment accuracy, Li and colleagues proposed two heuristic algorithms to solve the EAP over the last decade: a genetic algorithm (Kumar et al., 2010; Li et al., 2007) and an ant colony optimization algorithm (Dorigo & Blum, 2005; Li et al., 2008). Li, Peng, and Wei (2013) further proposed an adaptive parallel genetic algorithm, focused on assignment accuracy and computational efficiency, to address the EAP. While these algorithms can fulfill the fundamental task of expert assignment to some extent, two main problems with them need to be addressed. First, all of these algorithms assume a closeness (or similarity) measurement between the research fields of every applicant and expert, but a formal definition and a calculation method for this measurement are needed to clarify these relationships. Second, reviewer impartiality has not been considered in these methods. To deal with the closeness/similarity limitation, Ho et al. (2017) recently built a proposal-reviewer recommendation system using keywords with fuzzy weights based on big data, so the topic is beginning to receive the attention it needs.
Impartiality in peer reviews is supposed to be supported or guaranteed by project sponsors or government agencies using administrative means (Wu, 1996). For example, the National Natural Science Foundation of China (NSFC) — the most important science foundation in China — requires that applicants' relatives be excluded from their peer review expert team (Zhang et al., 2016). But professional or personal relationships that can affect peer review dynamics may be more numerous and meaningful (if less obvious) than family ties when addressing EAP concerns. Other criteria for judging potential conflict of interest are therefore necessary to ensure impartiality. Outside of certain discussions on rules and regulations (Agee, 2007; Wang et al., 2002), however, the impartiality of expert assignment has rarely been considered in designing algorithms or data assessment techniques to address EAP concerns.
To tackle the above problems, we propose a peer review expert assignment method that considers both accuracy and impartiality. During the expert assignment, the method simultaneously takes into account the fitness degree between the project proposals and review experts, the research intensity of the experts, their academic association(s), and potential conflict of interest in the peer review process. The fitness degree and research intensity criteria are designed to ensure assignment accuracy, while the academic association and conflict of interest criteria help guarantee impartiality. In summary, the contributions of this paper are as follows: (1) we propose four criteria designed to express the characteristics of a good and appropriate peer review expert; (2) we formalize the expert assignment problem as an optimization problem (finding the best among multiple solutions) on the designed criteria; and (3) we implement a randomized algorithm to help solve the EAP and perform simulations to verify the effectiveness and feasibility of the proposed method.
The remainder of this paper is organized as follows. Section 2 defines the four criteria, gives the formal definition of the expert assignment problem, and presents the proposed randomized algorithm. Section 3 presents the simulation analysis of the algorithm. Section 4 offers the conclusion, limitations, discussion, and future research directions.
An appropriate peer review expert needs to have three main qualifications:
(1) sufficient knowledge of the area/field under review; (2) enough frontline research experience to grasp the area/field's key research points and frontiers, so as to better guarantee an accurate assessment of the project proposal's quality; and (3) few or no interest associations with the applicant, to ensure the fairness of the project review. Interest association in this context refers to (a) the applicants' academic association (e.g. same organization, co-authored/co-funded works, and other relationships/links, past or present) with the review experts, and (b) other interest associations, such as the experts having project proposals similar to the applicants'.
We then map the four (1, 2, 3a, and 3b) conditions into the proposed four criteria (fitness degree, research intensity, academic association, and conflict of interest) and explain how every criterion can be intuitively designed. A formal definition of each criterion is presented, along with details on how the expert assignment problem is addressed.
The reviewing experts who fit the project proposal's research background are first selected. The most intuitive way of measuring the fitness between an expert and a project proposal is to assume each expert and proposal applicant has a vector of description keywords (related to the research area); the similarity between the two vectors is then taken as the fitness degree. The problem here is that keywords provided by different researchers may be highly idiosyncratic, and one research point may be expressed differently depending on different researchers' word choices. We thus make two assumptions about the research descriptions: (1) the topic description is hierarchically structured, in that it has more than one level of detail; and (2) the keywords in the description are (semi-)controlled, so that most of the words can be matched during similarity calculation.
These assumptions about the research descriptions have in fact been met by the NSFC Committee, which set up an Internet-based Science Information System (ISIS) to manage users' research resumes. A user of ISIS can log in as either an applicant or an expert^{(1)} (^{(1)} Only researchers who have been selected as peer review experts have the role of expert.). With either role, the system requires users to register their research resumes. Among other variables, the research field section is used to collect scientific research backgrounds and consists of three fields: familiarity code (FC), research direction (RD), and keywords (KW).
To link with appropriate peer review experts, applicants will usually find or accept recommendations on keywords that best fit their project proposal's research area. Consequently, we calculate the two parties' fitness degree hierarchically as follows. If the proposal applicant and the expert have the same familiarity code, they receive a fitness score of 0.2. Further, if they have the same research direction, they gain a fitness score of 0.3. Given the two keyword vectors, the cosine similarity of the vectors is first calculated and then scaled by 0.5 as the keywords fitness score. The three fitness scores are then added to produce the final fitness degree between the applicant and the expert. Considering that the same keywords may have completely different meanings across research fields, the keywords only contribute to the fitness degree if the familiarity code score is not 0.
Let RF = {FC, RD, \(\vec{K}\)} denote a researcher's research field description, where \(\vec{K}\) is the keywords vector; in what follows, the superscripts a and e distinguish applicants from experts.
Definition 1 Fitness Degree. Given a proposal P_{i} and an expert E_{j}, the fitness degree FD_{ij} between P_{i} and E_{j} can be calculated as Equation (1):
\[FD_{ij}=\left\{\begin{array}{ll} \Gamma (FC^a_i,FC^e_j)\times (0.2+0.3 \times \Gamma (RD^a_i,RD^e_j)+0.5\,{∠}(\vec{K}^a_i,\vec{K}^e_j)) & if \ i \neq j\\ -1 & else \end{array}\right.\ \ (1)\]
where ∠(·,·) denotes cosine similarity and Γ(·,·) is a determination function with Γ(a, b) = 1 if a = b and Γ(a, b) = 0 if a ≠ b. Note here that FD_{ij} = −1 is more of a flag than a score, as it indicates that an expert cannot review his/her own proposal.
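As a minimal sketch of Equation (1) (the function and field names are ours, not from any NSFC system; keyword vectors are represented here as term-weight dictionaries):

```python
import math

def cosine(u, v):
    # Cosine similarity between two keyword-weight dictionaries.
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def fitness_degree(applicant, expert):
    # Equation (1): FD = -1 flags a self-review; the familiarity code
    # gates the whole score, so keywords only count when FC matches.
    if applicant["id"] == expert["id"]:
        return -1.0
    if applicant["FC"] != expert["FC"]:
        return 0.0
    score = 0.2
    if applicant["RD"] == expert["RD"]:
        score += 0.3
    score += 0.5 * cosine(applicant["KW"], expert["KW"])
    return score
```

Under this sketch an identical FC, RD, and keyword vector yields the maximum fitness degree of 0.2 + 0.3 + 0.5 = 1.0, while a mismatched familiarity code yields 0 regardless of the keywords.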
A researcher's research status is described herein so that, when the researcher is appointed as a peer review expert, the confidence level in the value of the expert's comments can be identified. Because an expert should have enough frontline research experience to grasp the key research points and frontiers of the field and make accurate review comments, we use the H-index and the publication timeline to characterize the expert's status.
Given an expert's H-index and publication history, the ranking of her/his H-index is first derived among all experts from the same research field (i.e. researchers having the same FC^{e} as the given expert). A ranking percentage can then be calculated by dividing the expert's ranking by the total number of experts in the same research field. This ranking percentage indicates a researcher's relative academic competence in the research field and thus conveys preference: an expert with a higher H-index may be a preferred candidate for proposal evaluation. Next, the expert's latest publication year is used to characterize her/his research vitality. The core idea is that a researcher's ability to follow academic frontiers declines with the lack of continuous output; an aging model is used to describe this vitality decline. The academic achievement and research vitality scores are finally combined into the expert's research intensity score as follows.
Definition 2 Research Intensity. Let RP be an expert's ranking percentage, and PY his/her latest publication year. The research intensity RI of the expert is defined as Equation (2):

\[RI=(1-RP)\times \gamma^{\Delta_t},\ \ (2)\]

where ∆_{t} = T_{c} − PY, T_{c} is the current year, and γ ∈ (0, 1) is the attenuation factor that determines the expert's declining speed in research intensity along with the number of years without research output. For instance, if γ = 0.95, and a researcher's H-index ranking is 245/1000, with the latest publication year of 2015, then RP = 245/1000 = 0.245 and RI = (1 − 0.245) × 0.95^{2017−2015} = 0.681.
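A one-line sketch of Equation (2), reproducing the worked example above (the function and parameter names are ours):

```python
def research_intensity(rank, total, latest_pub_year, current_year, gamma=0.95):
    # Equation (2): RI = (1 - RP) * gamma ** (T_c - PY),
    # where RP = rank / total is the H-index ranking percentage.
    rp = rank / total
    return (1.0 - rp) * gamma ** (current_year - latest_pub_year)
```

With rank = 245, total = 1000, a latest publication year of 2015, and a current year of 2017, this returns approximately 0.681, matching the example in the text.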
Note that in practice, however, an expert's H-index may not be easy to acquire. This problem can be solved with information provided by the expert's funding agencies. For example, NSFC is collaborating with ScholarMate^{(2)} (^{(2)} www.scholarmate.com) to allow users to maintain their research resumes online. These data can be used to calculate H-indexes and construct academic social networks (used in the next subsection).
Together, fitness degree and research intensity can be used to assign an adequately accurate expert for peer review. It is then necessary to ensure impartiality: there should be little academic association and no conflict of interest between an applicant and the expert.
Academic association is defined based on the network distance between the applicant and expert on academic social networks, which are set up based on various relationships among applicants and experts. Experts having the least degree of association with the applicant can thus be assigned for peer review, where academic association is defined as follows.
Definition 3 Academic Association. Given an applicant A_{i} and an expert E_{j}, let H_{ij} be the number of “hops” in the shortest path between A_{i} and E_{j} in the academic social network. The academic association AA_{ij} between A_{i} and E_{j} is then defined as Equation (3):

\[AA_{ij}=\xi^{-(H_{ij}-1)},\ \ (3)\]
where ξ > 1 is the attenuation factor that determines the academic association's declining speed with respect to the network distance. For instance, let ξ = 2; if the shortest path between an applicant A_{i} and an expert E_{j} has only one hop (i.e. they have a direct relationship), then AA_{ij} = 2^{−1+1} = 1, the highest degree of association two researchers can have. If the shortest path instead has four hops, then AA_{ij} = 2^{−4+1} = 0.125, indicating very little association between them.
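A small sketch of Equation (3), assuming the academic social network is given as an adjacency list (the names and representation are ours): hop counts come from a breadth-first search, then the attenuation is applied.

```python
from collections import deque

def hops(adj, src, dst):
    # Shortest-path hop count between src and dst via BFS; None if unreachable.
    seen, frontier, h = {src}, deque([src]), 0
    while frontier:
        h += 1
        for _ in range(len(frontier)):
            for nb in adj[frontier.popleft()]:
                if nb == dst:
                    return h
                if nb not in seen:
                    seen.add(nb)
                    frontier.append(nb)
    return None

def academic_association(adj, applicant, expert, xi=2.0):
    # Equation (3): AA = xi ** (-(H - 1)); 0 when no path exists at all.
    h = hops(adj, applicant, expert)
    return 0.0 if h is None else xi ** (-(h - 1))
```

On a chain A1–E2–E3–E4–E5 with ξ = 2, this gives AA = 1 for the directly connected pair and AA = 0.125 for the four-hop pair, matching the worked examples above.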
It is now important to ensure that a review expert does not review a proposal in a research area in which he/she has also applied for support. Since it is hard to say to what extent the similarity between the expert's own proposal and the proposal under review can influence the review result, an alarm value is set: the proposal will not be assigned to the expert if the similarity is larger than this threshold. Otherwise, the expert is regarded as having no potential/actual conflict of interest with the applicant. Conflict of interest is defined as follows.
Definition 4 Conflict of Interest. Given an applicant A_{i} and an expert E_{j}, if E_{j} submitted proposal P_{j}, the conflict of interest CoI_{ij} between A_{i} and E_{j} can be calculated as Equation (4):

\[CoI_{ij}=\left\{\begin{array}{ll} -1 & if\ c_{ij}>\tau\\ 1 & else \end{array}\right. ,\ \ (4)\]
where τ is a given threshold and \(c_{ij}=\Gamma (FC^a_i,FC^a_j)\times (0.2+0.3 \times \Gamma (RD^a_i,RD^a_j)+0.5{∠}(\vec{K}^a_i,\vec{K}^a_j))\), i.e. the fitness formula applied between the two proposals.
Based on Definitions 1 to 4, we can assign the expert that best fits the proposal, has the highest research intensity, least degree of academic association, and no conflict of interest with the applicant for project review. The selectivity degree is used to unify the four concepts as follows.
Definition 5 Selectivity Degree. Given a proposal P_{i} and an expert E_{j}, the selectivity degree SD_{ij} of P_{i} on E_{j} is defined in Equation (5).
\[SD_{ij}=\left\{\begin{array}{ll} -1 & if\ \ CoI_{ij}\,|\,FD_{ij}=-1 \\ \sqrt[3]{CoI_{ij}\times FD_{ij} \times RI_{j}\times \min\left(\frac{AA_{ij}^{-1}}{\xi ^s},\,1\right)} & else \end{array}\right. ,\ \ (5)\]

where | denotes logical OR over the flag values, min(a, b) is the minimum function that returns the smaller of a and b, and s is a positive integer.
Under Definition 5, if SD_{ij} = −1, then the proposal will not be assigned to the expert for review, due either to a conflict of interest or to avoiding self-reviews. Otherwise, the larger the fitness degree and research intensity, the larger the selectivity degree: a proposal is more likely to be assigned to an expert with considerable research intensity and a better fit. As to academic association, the min term rewards experts who are farther from the applicant in the academic social network, capped at s hops. That is, suppose ξ = 2 and s = 3: if H_{ij} = 1 (direct collaboration), then AA_{ij}^{−1}/ξ^{s} = 1/8 = 0.125; if H_{ij} = 4, then AA_{ij}^{−1}/ξ^{s} = 8/8 = 1, the maximum value of the association term.
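The combination in Equation (5) can be sketched as follows, under the convention that a flag value of −1 (for CoI, or for FD marking a self-review) blocks the pairing; the function and parameter names are ours:

```python
def selectivity_degree(coi, fd, ri, aa, xi=2.0, s=3):
    # Equation (5): a -1 flag (conflict of interest, or a self-review
    # flagged by FD = -1) blocks the pairing entirely.
    if coi == -1.0 or fd == -1.0:
        return -1.0
    # Reward distance in the academic social network, capped at s hops.
    assoc = min((1.0 / aa) / xi ** s, 1.0)
    return (coi * fd * ri * assoc) ** (1.0 / 3.0)
```

With ξ = 2 and s = 3, a directly connected pair (AA = 1) has its selectivity degree damped by a factor of 0.125 inside the cube root, while a pair four or more hops apart (AA ≤ 0.125) receives the full association term of 1.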
Figure 1 gives an example of expert assignment based on selectivity degree. Figure 1(a) shows an academic social network among six researchers, where applicant A_{1} (who submitted P_{1}) has not been selected as a peer review expert, E_{2} to E_{6} are review experts, and E_{2} and E_{3} submitted project proposals P_{2} and P_{3}, respectively. Figure 1(a) also gives the research intensity values of all the experts, calculated using the information provided in Figure 1(b) by setting γ = 0.95 (it is assumed here that all the proposal applicants and experts belong to the same research area, i.e. have the same FC). Figure 1(c) gives the fitness degrees among the proposals and experts. If experts are assigned based merely on fitness degree (per the existing works noted above), and if each proposal needs to be reviewed by two experts, then P_{1} will be assigned to E_{6} and E_{3}, and P_{2} and P_{3} will both be assigned to E_{4} and E_{5}. Figure 1(d) (ξ = 2) shows that A_{1} is closely related academically to E_{6}, just as A_{2} is related to E_{5}, and A_{3} to E_{4} and E_{5}. Compared with A_{2} and A_{3}, E_{6} may be more willing to support A_{1}. Furthermore, it can be seen that E_{5} has not directly contributed to research work for five years, so it may not be appropriate for him/her to serve as a review expert. By also considering the CoIs presented in Figure 1(f) (determined based on the values given in Figure 1(e) for τ = 0.5), the selectivity degrees can be identified as shown in Figure 1(g). If we use selectivity degree for expert assignment, then P_{1} will be assigned to E_{3} and E_{4}, and P_{2} and P_{3} to E_{4} and E_{6}.
It can be seen that the experts with considerable research intensity, an adequate fitness degree with the proposals, and relatively little academic association and conflict of interest^{(3)} (^{(3)} Note that if we do not consider CoI, then we will get SD_{12} = 0.63 > 0.56 = SD_{14}.) with the applicants are selected for peer review.
In practice, a number of proposals need to be reviewed at the same time. Moreover, every proposal needs to be reviewed by more than one expert, and every expert can only receive a limited number of proposals. Hence, given m project proposals and n review experts, let N be the number of review experts a proposal should have, M the largest number of proposals an expert can receive, and S an m × n 0/1 matrix in which S_{ij} = 1 indicates that proposal P_{i} is assigned to expert E_{j} (and S_{ij} = 0 otherwise).
Definition 6 Expert Assignment Problem. Find a set of 0/1 appointments S that maximizes Equation (6) and satisfies both Equations (7) and (8):

\[max \sum^m_{i=1}\sum^n_{j=1}S_{ij}\times SD_{ij}\ \ (6)\]
\[\forall i\in [1,m],\sum^n_{j=1}S_{ij}=N \ \ (7)\] \[\forall j\in [1,n],\sum^m_{i=1} S_{ij}\leq M \ \ (8)\]
Definition 6 defines a version of the 0–1 knapsack problem (Freville, 2004), a famous NP-complete problem in computer science, which means that no polynomial-time algorithm for it is known. Hence, the 0–1 knapsack problem is typically handled using dynamic programming, greedy algorithms, or randomized algorithms (Martello & Toth, 1987). In this paper, we adopt a randomized algorithm to solve the problem, since it can efficiently find an acceptable solution within a reasonable time period.
Before carrying out the proposed algorithm, an example of possible assignments is given (Figure 2). Based on the selectivity degrees presented in Figure 1(g), Figure 2 presents two possible assignments (each containing a set of 0/1 appointments), assuming every proposal should be reviewed by two experts (i.e. N = 2) and each expert can receive no more than two proposals (M = 2). We can then calculate the total selectivity degree to identify the better assignment. The second assignment (Figure 2(b)) achieves a total selectivity degree of 3.46, much larger than that of the first assignment (Figure 2(a)), 2.95.
The number of possible assignments increases exponentially with the numbers of experts and applicants, so it is hard to find the best assignment in a limited amount of time when these numbers are large. Hence, we design a randomized algorithm to find an adequately good assignment in a relatively short period of time. The idea is to randomly assign proposals to experts until every proposal has N review experts, while guaranteeing that no expert has more than M appointments. The total selectivity degree is then calculated. The process is repeated ROUND (e.g. 10^{5}) times, and the assignment with the largest total selectivity degree is adopted, as shown in Algorithm 1 (Figure 3).
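The randomized procedure just described can be sketched as follows (a simplified, hypothetical implementation with names of our own choosing; a round that cannot complete a feasible assignment is simply discarded):

```python
import random

def random_assignment(sd, n_reviews, max_load, rounds=10_000, seed=0):
    # Randomized search for Definition 6: each of the m proposals gets
    # n_reviews experts, no expert exceeds max_load proposals, and -1
    # entries in the selectivity-degree matrix sd are never assigned.
    rng = random.Random(seed)
    m, n = len(sd), len(sd[0])
    best, best_total = None, float("-inf")
    for _ in range(rounds):
        load = [0] * n
        picks, total, ok = [], 0.0, True
        for i in range(m):
            cand = [j for j in range(n) if sd[i][j] != -1 and load[j] < max_load]
            if len(cand) < n_reviews:
                ok = False  # infeasible round; discard it
                break
            chosen = rng.sample(cand, n_reviews)
            for j in chosen:
                load[j] += 1
                total += sd[i][j]
            picks.append(chosen)
        if ok and total > best_total:
            best, best_total = picks, total
    return best, best_total
```

On the toy matrix sd = [[0.9, 0.1, −1], [0.2, 0.8, 0.5]] with N = 2 and M = 2, the search converges on the optimal total of 2.3 (proposal 0 must take experts 0 and 1; proposal 1 then does best with experts 1 and 2).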
In this section, we perform simulations to verify the effectiveness and feasibility of the proposed algorithm. During the simulation, four matrices are randomly generated, corresponding to the four proposed criteria. The matrix of selectivity degrees is then calculated, after which the randomized algorithm is run ROUND times to find the best assignment (i.e. the assignment having the maximum total selectivity degree), and the intermediate results are recorded for analysis.
For the generation of fitness degree, we assume applicants and experts are from the same research field, i.e. the familiarity code score of 0.2 in FD is always obtained.
As to research intensity, we assume a uniform distribution U(0, 1) for generation. According to Definition 2, the ranking percentage part (i.e. 1 − RP) is subject to a uniform distribution, and if we assume all researchers' publication years are also subject to a uniform distribution, the attenuation factor part (i.e. γ^{∆_t}) can be generated accordingly.
For academic association, we generate network distances based on the theory of six degrees of separation (the small-world phenomenon), which proposes that in a social network there are no more than six hops between a person and any stranger in the network (Milgram, 1967). This phenomenon also applies closely to academic social networks, due to the particularity of academic circles (which consist of people with similar research backgrounds who are very willing to know each other) (Cainelli et al., 2015). Hence, we assume a researcher in the academic social network can reach 70% of other researchers in no more than three hops, and can reach anyone in the network in no more than six hops. More specifically, we assign the probability of reaching an expert in h hops as 0.1, 0.2, 0.4, 0.1, 0.1, and 0.1 for h = 1, 2, ..., 6, respectively. Finally, for conflict of interest, we randomly generate a matrix that contains 10% of −1s and 90% of 1s.
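The hop distribution above can be generated with a simple inverse-transform sketch (the names are ours); drawing many samples confirms that roughly 70% of applicant-expert pairs fall within three hops:

```python
import random

def sample_hops(rng):
    # Inverse-transform sampling of the hop distribution P(h), h = 1..6:
    # probabilities 0.1, 0.2, 0.4, 0.1, 0.1, 0.1.
    probs = [0.1, 0.2, 0.4, 0.1, 0.1, 0.1]
    r, acc = rng.random(), 0.0
    for h, p in enumerate(probs, start=1):
        acc += p
        if r < acc:
            return h
    return 6  # guard against floating-point rounding at the tail

rng = random.Random(42)
draws = [sample_hops(rng) for _ in range(100_000)]
within_three = sum(d <= 3 for d in draws) / len(draws)  # about 0.7
```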
During the simulation, the algorithm parameters are set as m = 200, n = 500, N = 4, M = 8, and ROUND = 10^{8}. Note that 200 is a sufficiently large number of applicants in practice: according to NSFC statistics, 8,293 proposals were submitted in management science in 2016, and there are 57 familiarity codes in management science, for an average of about 145 proposals per familiarity code. Also note that while the proposed algorithm is expected to find a better result with a larger ROUND, our experimental results show that ROUND = 10^{8} can return a good assignment for m = 200^{(4)} (^{(4)} A larger m may need a larger ROUND due to the increase of the solution (search) space.).
Since the output of the randomized algorithm is nondeterministic, we run the algorithm 100 times (see Figures 4 and 5 for analysis). For each run, we record the average fitness degree, research intensity, and academic association of the best assignment (best value), of all the ROUND assignments (overall value), and of the worst assignment (worst value). We then calculate the averages of the best, overall, and worst values across the 100 runs.
Figure 4 plots the maximum, average, and minimum values of best, overall, and worst FD/RI (of the 100 runs), where it is found that (1) after large amounts of sampling, the average values of the overall FD and RI converge to their corresponding theoretical value, and these values are the expected values when the experts are randomly assigned for peer review; (2) the maximum, average, and minimum values of the best FD and RI are significantly larger than their corresponding overall and worst values; (3) the average fitness degree among the experts and applicants of the best assignment is approximately 10% higher than the overall assignments, and 22% higher than the worst assignment; and (4) the average research intensity of the experts of the best assignment is approximately 28% higher than the overall assignments, and 56% higher than the worst assignment. That is, the proposed algorithm can always find experts with considerable research intensity having a better fit with proposal applicants.
For academic association, we plot the maximum, average, and minimum hops of the best, overall, and worst assignments (see Figure 5). The results are very similar to those of fitness degree and research intensity. The converged (i.e. expected) average hops (between an expert and the applicant) is 3.2. Compared to the overall assignments and the worst assignment, on average our algorithm can find the experts for peer review with one or two hops away from the applicants.
The time overhead of the algorithm is analyzed to verify whether it is feasible for expert assignment in practice. The same parameters are used as in Section 3.1, except that m is varied over 10, 20, 50, 100, 200, and 500, because according to Algorithm 1, besides ROUND, the algorithm's efficiency is mainly determined by m. Correspondingly, the algorithm needs approximately 50, 100, 250, 500, 1,000, and 2,500 seconds to return the best assignment. The results show that: (1) when ROUND is fixed, the algorithm's time overhead is linearly correlated with the input m; (2) an assignment of 200 proposals needs less than 17 minutes on a computer with an Intel Core i7-6700 (3.4 GHz) and 8 GB of memory, which is adequately efficient for the non-real-time expert assignment problem in this paper; and (3) when large numbers of proposals need to be assigned, a p-node parallel computing network can easily provide a speedup of p times (e.g. the assignment of 200 proposals would need less than one minute if p = 20), and the parallelization can be deployed very easily, since each round of assignments can be executed independently and the results of each round can be efficiently integrated and compared in one run.
In this paper, we formally define the expert assignment problem as an optimization problem while considering both accuracy and impartiality of experts based on four carefully designed criteria. The criteria characterize the properties that a good peer review expert should have. With the help of the criteria, the integrated criterion (i.e. selectivity degree) is defined for expert selection, where a randomized algorithm is proposed to solve the optimization problem. Simulation results show that the proposed method can always identify experts with considerable research intensity, as well as adequate fitness degree and relatively fewer academic associations or conflict of interest with the proposal applicants for project review. Furthermore, the proposed algorithm can return results in an acceptable amount of time.
A limitation of this study is that real data (rather than simulation data) would be more convincing in proving the effectiveness of the proposed method. The authors are actively contacting officers of project funding agencies (e.g. NSFC) for potential collaboration. Hence, our future work will be dedicated to improving the proposed method in terms of practical applications. For example, more criteria may be considered and adopted for expert characterization to further promote the accuracy and impartiality of the assignment based on real data. Also, an algorithm with more sophisticated strategies (e.g. backtracking) may be designed to further improve the efficiency of the assignment when the data volume is extremely large.
M.L. Yue (yueml@whlib.ac.cn) proposed the research question, designed the methods, and designed and implemented the algorithm. K.L. Tian (23950839@qq.com) wrote the manuscript. T.C. Ma (atc@whlib.ac.cn, corresponding author) designed the method and revised the manuscript.
This work is supported by the National Natural Science Foundation of China under grant No. 7160325, and by the Young Talent Field Frontier Project of the Wuhan Documentation and Information Center, Chinese Academy of Sciences.
The authors have declared that no competing interests exist.
[1] 
URL
[Cite within: 1]
The article discusses the Professional Review Committee (PRC), a peer review board developed by Banner Estrella Medical Center in the U.S. The PRC is composed of physicians responsible for resolving physicianrelated issues including clinical skills, conduct/behavioral complaints and documentation issues. In addition, the PRC consists of the chief medical officer (CMO), who serves as chairman, and at least four active physicians at Banner Estrella, who are chosen by the selection committee.

[2] 
DOI:10.1007/s1119201414215
URL
[Cite within: 1]
Increased specialization and extensive collaboration are common behaviours in the scientific community, as well as the evaluation of scientific research based on bibliometric indicators. This paper ai

[3] 
DOI:10.1016/j.tcs.2005.05.020
URL
[Cite within: 1]
Research on a new metaheuristic for optimization is often initially focused on proofofconcept applications. It is only after experimental work has shown the practical interest of the method that researchers try to deepen their understanding of the method's functioning not only through more and more sophisticated experiments but also by means of an effort to build a theory. Tackling questions such as “how and why the method works’’ is important, because finding an answer may help in improving its applicability. Ant colony optimization, which was introduced in the early 1990s as a novel technique for solving hard combinatorial optimization problems, finds itself currently at this point of its life cycle. With this article we provide a survey on theoretical results on ant colony optimization. First, we review some convergence results. Then we discuss relations between ant colony optimization algorithms and other approximate methods for optimization. Finally, we focus on some research efforts directed at gaining a deeper understanding of the behavior of ant colony optimization algorithms. Throughout the paper we identify some open questions with a certain interest of being solved in the near future.

[4] 
DOI:10.1016/S03772217(03)002741
URL
[Cite within: 1]
The multidimensional 0–1 knapsack problem is one of the most wellknown integer programming problems and has received wide attention from the operational research community during the last four decades. Although recent advances have made possible the solution of medium size instances, solving this NPhard problem remains a very interesting challenge, especially when the number of constraints increases. This paper surveys the main results published in the literature. The focus is on the theoretical properties as well as approximate or exact solutions of this special 0–1 program.

[5] 
DOI:10.1080/10429247.2008.11431752
In today's dynamic, competitive business environment, project reviews are fundamental to most organizations' ability to complete their projects successfully; however, it is important to know how to execute project reviews effectively. One important quality of an effective project review is the independence of its reviewers, the central focus of this article. Through four case studies chosen from different scenarios, we show that, despite independence being an important quality of project reviews, most organizations choose reviewers who are within their own social network. This issue can partially be resolved by the use of knowledge network theory and analysis (KNA), which helps characterize relationships that can significantly influence the effective execution of project reviews.

[6] 

[7] 
DOI:10.1177/0165551516644168
The development of microarray technologies has made it possible to obtain the gene expression patterns of thousands of genes in a single cell simultaneously. Based on such microarray data, assessment of gene variations, including the classification and developmental status of cancer cells, is possible. The objective of this paper is to predict and classify gene expression information by means of analysis of...

[8] 
Genetic algorithms are considered a search process used in computing to find exact or approximate solutions to optimization and search problems. They are also termed global search heuristics. These techniques are inspired by evolutionary biology, including inheritance, mutation, selection, and crossover. These algorithms provide a technique for programs to automatically improve their parameters. This paper is an introduction to the genetic algorithm approach, including various applications, and describes the integration of genetic algorithms with object-oriented programming approaches.
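The mechanisms listed in [8] (selection, crossover, mutation) can be shown in a minimal sketch. The code below is a generic illustration, not the algorithm of any cited paper; the function name `genetic_maximize` and all parameter values are invented, and the fitness function is the standard "OneMax" toy problem (count of ones in a bit string).

```python
import random

def genetic_maximize(fitness, n_bits=12, pop_size=30, gens=60,
                     p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal genetic algorithm over bit strings: tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # selection: better of two random individuals survives
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < p_cross:               # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):              # bit-flip mutation
                    if rng.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Maximize the number of ones ("OneMax"), a standard GA toy problem
best = genetic_maximize(sum)
print(sum(best))
```

On this trivial landscape the population converges quickly; the EAP papers cited below apply the same selection/crossover/mutation loop to chromosomes encoding expert-to-project assignments.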

[9] 
DOI:10.1109/ICMLC.2007.4370276
Expert assignment is a chief and basic task of project review in project management, so it is significant to research how to solve the expert assignment problem (EAP). In previous papers, we established the mathematical model of the expert assignment problem and proposed a genetic algorithm (GA) and a GA using heuristic mutation guided by pheromone (HMP) to solve the EAP. Though GA and HMP have been proven effective for the EAP, they have the disadvantages of massive redundant iteration in the later period and inferior local search ability. In this paper, a modification of the GA that introduces adaptive mutation into HMP is proposed to solve the EAP. The simulation results show that the new algorithm improves the local search ability and generates solutions of better quality.

[10] 
DOI:10.1109/ICMLC.2008.4620487
Expert assignment is a chief and basic task of project review in project management, so it is significant to research how to solve the expert assignment problem (EAP). In previous papers, we established the mathematical model of the expert assignment problem and proposed genetic algorithms (GAs) to solve the EAP. Though GAs have been proven effective for the EAP, they have the disadvantage of slow convergence speed. In this paper, ant colony optimization (ACO), which has a more powerful ability to solve complicated discrete optimization problems, is introduced to solve the EAP. The simulation results show that ACO improves convergence and generates solutions of better quality.
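References [9] and [10] both treat the EAP as a discrete optimization problem and compare metaheuristics against simpler baselines. As a point of reference, the sketch below implements the kind of randomized baseline such algorithms are measured against: pick reviewers for each project to maximize a total suitability score under a per-expert workload cap. The function name `random_assign`, the score matrix, and all parameters are invented for illustration and are not taken from the cited papers.

```python
import random

def random_assign(score, reviewers_per_project, max_load, tries=2000, seed=0):
    """Randomized baseline for a toy expert assignment problem (EAP).

    score[p][e] = suitability of expert e for project p
    Each project gets `reviewers_per_project` experts; no expert may
    review more than `max_load` projects. Returns the best total score
    and assignment plan found over `tries` random samples.
    """
    rng = random.Random(seed)
    n_proj, n_exp = len(score), len(score[0])
    best_total, best_plan = float("-inf"), None
    for _ in range(tries):
        load = [0] * n_exp
        plan, total, ok = [], 0.0, True
        for p in range(n_proj):
            pool = [e for e in range(n_exp) if load[e] < max_load]
            if len(pool) < reviewers_per_project:
                ok = False          # this random draw ran out of experts
                break
            chosen = rng.sample(pool, reviewers_per_project)
            for e in chosen:
                load[e] += 1
                total += score[p][e]
            plan.append(sorted(chosen))
        if ok and total > best_total:
            best_total, best_plan = total, plan
    return best_total, best_plan

scores = [[0.9, 0.2, 0.5],   # suitability of 3 experts for project 0
          [0.4, 0.8, 0.3]]   # suitability of 3 experts for project 1
print(random_assign(scores, reviewers_per_project=2, max_load=2))
```

GA- and ACO-based EAP solvers aim to beat exactly this kind of blind sampling on large instances, where the feasible assignment space is far too big to cover by random tries.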

[11] 
DOI:10.1109/ISCID.2013.13
As an evaluation method, peer review is used in many business fields. The quality of the experts appointed will affect the final evaluation results, since the experts' expertise has a direct impact on the evaluation. This paper first analyzes the expert assignment problem, then researches its mathematical model and how to obtain the Pareto optimum, and finally discusses the feasibility of solving the expert assignment problem through the application of genetic algorithms. It mainly discusses how to apply the APGA in the expert assignment process and then gives the process of the APGA solution to the problem. Tests prove that the APGA can effectively solve the expert assignment problem and that, compared with a random search algorithm and the simple genetic algorithm (SGA), the APGA has obvious advantages in convergence speed and search ability.

[12] 

[13] 

[14] 
DOI:10.1016/S1872-5791(07)60033-5
Targeted at the difficult problem of quantitatively judging experts' appraisal behavior during science and technology appraisal, and at the difficulty science and technology administrative departments have in selecting objective and impartial appraisal experts, this paper puts forward an algorithm for judging experts' appraisal behavior and selecting experts. The algorithm can be realized through software. It was tested and verified in practice, and the test results are well consistent with the actual outcomes, so it provides a basis and method for science and technology administrative departments to select excellent appraisal experts.

[15] 

[16] 
