In academic publishing there is no standard decision-making process. Commonly, the process is set by an editorial board led by an editor. As a result, decisions vary between journals: the same manuscript may simultaneously receive an 'acceptance' recommendation from one journal and a 'rejection' from another. This situation is called the 'misery of recommendation'.

Of all kinds of academic publication, journal publication warrants more care and attention than the others. A journal processes most potential submissions to assess their suitability for publication. An evaluation process is the most common technique for supporting this decision. Typically, an editor makes the end-decision based on two or three double-blind evaluation reports; open evaluation is seldom used. The reports often differ from one another in their recommendations, so an editor's final decision based on them may be subject to human-factor bias.
At the evaluator's stage, a recommendation on a manuscript reflects individual opinion. There is therefore little basis for disputing the comments beyond the experience and expertise of the evaluator. For all these reasons, the decision an editor must make is critical to a great extent.
An editor's or editorial board's decision is in fact a synthesis of the evaluation reports. Since the basis of the decision is the evaluation process, one must be aware of that process's shortfalls. Decision making based on an instrument-based evaluation is not very accurate because of human-factor bias. Whatever comments individual evaluators provide, the final decision depends on the editor's or editorial board's view. A manuscript may contain publishable content and yet be rejected because of a single evaluator's view. In other words, the traditional evaluation process does not produce a fully accurate decision.
The traditional approach to decision making depends solely on instruments or on evaluators' notes on certain aspects of a manuscript. To date, this remains the method used by publication authorities. Here we introduce a new quantitative approach that addresses the limitations of the traditional decision-making process by minimizing bias. The new approach combines the traditional Instrument Based Assessment (IBA) approach with mathematical tools. It is called the "Standardized Acceptance Factor Average (SAFA™)" and provides many conveniences in reaching an end-decision. The details of this approach are discussed in the following section.
A manuscript is a written account of scientific work with several aspects, such as logic, consistency, and so on. Does an evaluator always have enough knowledge to address all of them? Even if a reviewer is sufficiently qualified, how his or her opinion on the different aspects of a manuscript should be measured and summarized remains unanswered. However it is done, the human factor becomes one of the most concerning issues in the evaluation process. Incorporating an evaluator's (reviewer's) efficiency into the decision may therefore produce a better decision than relying completely on their 'Yes' or 'No', as the sketch below illustrates.
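
To make the idea concrete, the following minimal Python sketch weights each reviewer's verdict by a hypothetical efficiency score. The reviewer labels, the efficiency values, and the 0.5 acceptance threshold are assumptions made for this illustration only; they are not part of the SAFA™ procedure.

    # Purely illustrative: weight each verdict by a hypothetical reviewer
    # "efficiency" score instead of counting every 'Yes'/'No' equally.
    # Efficiency values and the 0.5 threshold are assumptions, not SAFA(TM).
    reviews = [
        {"reviewer": "A", "verdict": 1, "efficiency": 0.9},  # 1 = accept
        {"reviewer": "B", "verdict": 0, "efficiency": 0.4},  # 0 = reject
        {"reviewer": "C", "verdict": 1, "efficiency": 0.7},
    ]

    def weighted_decision(reviews, threshold=0.5):
        """'accept' if the efficiency-weighted mean verdict reaches threshold."""
        total = sum(r["efficiency"] for r in reviews)
        score = sum(r["verdict"] * r["efficiency"] for r in reviews) / total
        return ("accept" if score >= threshold else "reject"), score

    decision, score = weighted_decision(reviews)
    print(decision, round(score, 3))  # -> accept 0.8

Under a plain majority of verdicts the outcome here would be the same, but with a weaker reviewer casting the dissenting vote the weighted score (0.8) expresses how decisive the acceptance actually is.
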
The traditional review process uses a standard review format, here called the 'Instrument Based Assessment' (IBA) approach. The IBA is to date the most popular approach to decision making in publication. The instruments used in IBA differ from one journal to another, which raises a further issue of variability in quality. In IBA there is always a list of recommendation options attached to the principal instrument, which consists of several items; the number of items typically ranges from 6 to 12. The purpose of the recommendation list accompanying the principal instrument is to converge the rating of each item into a common recommendation such as 'accept' or 'reject' (often there are more options). Reducing the principal instrument to a recommendation chosen from a few options introduces another bias: by examining many reports, I found that there are always inconsistencies between an evaluator's recommendation and the item scores. In the traditional end-decision approach this bias is mostly irremovable. It is named here the 'inconsistent recommendation bias'.
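
As a purely illustrative sketch, the following Python fragment shows one way such an inconsistency could be flagged automatically. The 1-5 item scale and the mapping of score bands to recommendations are assumptions made for this example, not part of the SAFA™ definition.

    # Purely illustrative: flag a report whose stated recommendation
    # disagrees with the recommendation implied by its own item scores.
    # The 1-5 scale and the score bands below are assumptions.
    def implied_recommendation(mean_score):
        if mean_score < 2.5:
            return "reject"
        if mean_score < 3.5:
            return "major revision"
        if mean_score < 4.25:
            return "minor revision"
        return "accept"

    def is_inconsistent(item_scores, stated):
        mean = sum(item_scores) / len(item_scores)
        return implied_recommendation(mean) != stated, mean

    # Eight items scored mostly 4-5, yet the reviewer ticked 'reject':
    flag, mean = is_inconsistent([4, 5, 4, 4, 5, 4, 3, 4], "reject")
    print(flag, round(mean, 2))  # -> True 4.12
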
An end-decision is largely directed by the recommendations in the evaluation reports. This again introduces bias: if, for instance, three review reports carry recommendations that are not identical, the editor must still reach a decision, and that decision may not be accurate. This issue is termed 'end-decision bias'.
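
A small illustration of this point: given the same three conflicting recommendations, two equally defensible aggregation rules can reach different end-decisions. The numeric coding and both rules are assumptions chosen only to demonstrate the ambiguity.

    # Purely illustrative: the same three conflicting reports lead two
    # equally defensible aggregation rules to different end-decisions.
    recommendations = ["accept", "minor revision", "reject"]
    code = {"reject": 0, "major revision": 1, "minor revision": 2, "accept": 3}

    # Rule 1: accept only on a strict majority of 'accept' votes.
    accepts = sum(r == "accept" for r in recommendations)
    rule1 = "accept" if accepts > len(recommendations) / 2 else "reject"

    # Rule 2: average the numeric codes and pick the nearest category.
    mean = sum(code[r] for r in recommendations) / len(recommendations)
    rule2 = min(code, key=lambda r: abs(code[r] - mean))

    print(rule1, "|", rule2)  # -> reject | minor revision
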
In conclusion, the IBA is associated with three major biases: 'reviewers' attribute bias', 'inconsistent recommendation bias', and 'end-decision bias'. These three biases reduce the efficiency of decision making in academic publication. The SAFA™ system has been proposed in order to minimize the total bias caused by these three issues.