Design for a PhD Dissertation
The chosen article is "Evolution of self-organized task specialization in robot swarms". To undertake a PhD-level study that extends this research, I would first select a specific domain. The researchers conclude that task partitioning is favored by evolution when environmental conditions offer opportunities that, when exploited, reduce switching costs, and that self-organized task specialization can be developed through partitioning. Building on these findings, I would extend the knowledge on task specialization evolved from scratch, specifically within swarm intelligence. The research would proceed in the area of multi-robot systems, in the context of swarm intelligence that can be used in real-world conditions, using an experimental research design. An area where such research is useful is search and rescue, such as finding survivors during or after natural disasters such as earthquakes or fires. This focus would enhance knowledge in the field of multi-robot swarm intelligence, since search and rescue operations and robotic evolution from scratch are highly complex areas of study. The thesis would build on particle swarm optimization in the context of the evolutionary principles developed by Darwin, applied to robots, and would revolve around the principle of survival of the fittest, a social exclusion mechanism. The research would be based around the question:
Can real world missions involving search and rescue be accomplished through the use of a swarm of robots?
The thesis statement would be:
Swarm robotics can learn in an evolutionary manner based on a Robotic Darwinian Particle Swarm Optimization architecture in which the robots are dynamically partitioned on the basis of the survival-of-the-fittest principle.
The thesis of focus is wide, so it would be addressed most effectively by tackling subsidiary areas within its domain. Autonomous robot deployment would be evaluated first; then communication and information sharing in faulty environments would be tackled; adaptability in these environments would be addressed next; and finally, provable performance would be estimated through the development of novel algorithms. This will be the crux of the research and of testing the hypothesis: it must be shown, to a reasonable degree of statistical significance, that the proposed robotic swarm architecture (algorithms) will succeed in a real-life mission. This matters because swarm robotic algorithms have an inherent stochasticity that makes it almost impossible to predict the performance of a swarm of robots under different and/or specific conditions. The thesis will be tested through analytical and mathematical approaches, by accurately estimating the collective performance of the robot swarms. These will be evaluated by analyzing the dynamism of the robots, their communication constraints, their ability to avoid obstacles, and their evolutionary properties, which will then determine whether the null hypothesis is accepted or rejected. The thesis will be tackled experimentally through novel algorithms, either developed from scratch or improvements of existing ones; in essence, a new algorithm (software) will be developed while using existing hardware, along with existing simulation and testing software for ANN-based evolutionary learning.
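As a concrete illustration of the optimization core the proposed architecture builds on, the following is a minimal sketch of canonical particle swarm optimization in Python, applied to a toy sphere objective. The function names, parameter values, and objective are illustrative assumptions; the RDPSO-specific mechanisms (dynamic partitioning of the population, punishment and reward of sub-swarms) are deliberately omitted.

```python
import random

def sphere(x):
    # toy benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin
    return sum(v * v for v in x)

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # each particle's personal best
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the Darwinian extension, entire sub-swarms running this update rule would be spawned or deleted according to their fitness progress, which is the "social exclusion" mechanism the thesis refers to.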
A Biological Area Where Computational Evolution Approach is Preferable
A biological area of interest that has sometimes baffled scientists is the collective and cooperative intelligence demonstrated by Cetaceans, the family of mammals that includes whales and dolphins, with dolphins being an especially interesting subject of research. This group of mammals has demonstrated remarkable cognitive abilities; dolphins, for instance, have been trained to detect underwater mines. Studying the evolutionary process by which these mammals evolved through direct observation would be draining and time consuming, requiring periods of study so long as to be infeasible for scientific purposes. As such, modern computational techniques are better suited for modeling such scenarios, enabling the evolutionary development of Cetacean cognitive abilities to be studied more effectively. Evolution is a long process that can take thousands to millions of years; further, the evolution being evaluated in this case is perceptive (intelligence) rather than physical-biological, such as the development of motion. Naturally, making real-world observations to understand the evolution of a phenomenon such as intelligence can be a very long-drawn-out affair. Cognitive development and intelligence have long been considered to have been triggered through social interactions, where social cooperation and working together can drive brain evolution. These phenomena can best be studied and understood computationally through the use of ANNs (artificial neural networks), where challenging neural tasks, and how they are tackled, can be examined directly.
ANNs consist of interconnected artificial neurons; this interconnection is highly analogous to, and inspired by, the biological neurons found in animal brains. The neurons in human brains are interconnected through synapses, across which signals are transmitted and exchanged. As with biological neural networks, ANNs are interconnected, and the entire network is capable of communication and progressive learning to undertake tasks; this makes them effective for studying evolutionary intelligence in groups like the Cetaceans. ANNs are applied to real-world problems such as pattern classification and robotics; using them entails developing suitable algorithms, where a neural model is selected and used to study and understand biological phenomena. A learning algorithm can be developed and a range of tests and conditions provided to enable a better understanding of how Cetaceans developed over time; powerful algorithms can then be used to study and observe the evolution of Cetacean intelligence, or that of any other group of mammals, over a relatively short period. As discussed by Floreano, Durr and Mattiussi (2008) in "Neuroevolution: from architectures to learning", there are several prominent methods, based on evolutionary algorithms, for evolving ANNs that can be used to synthesize and understand natural learning and the evolution of intelligence in mammals. A computational approach involving ANNs will enable the researcher to vary a wider range of parameters than is possible in the natural world with a work-bench approach; the ANNs can be made to perform tasks that are more difficult than those occurring in the natural environment. Further, the computational approach allows greater control over the parameters and treatments given, yielding a better understanding of the evolutionary cognitive behavior of Cetaceans in a relatively short time.
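To make the idea concrete, the following is a minimal, hypothetical sketch of neuroevolution in Python: the weights of a tiny fixed-topology network are evolved with a simple (1+1) evolutionary strategy to solve the XOR task, a classic stand-in for a "challenging neural task". All names and parameter values are assumptions chosen for illustration; the methods surveyed by Floreano, Durr and Mattiussi (2008) are considerably richer (evolving topologies as well as weights).

```python
import math
import random

def forward(weights, x):
    # tiny 2-2-1 feedforward network; the 9 weights are packed as a flat list:
    # [w_h1_x1, w_h1_x2, b_h1, w_h2_x1, w_h2_x2, b_h2, w_o_h1, w_o_h2, b_o]
    w = weights
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # negative squared error over the four XOR cases (0 is perfect)
    return -sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(generations=3000, sigma=0.3, seed=1):
    # (1+1) evolutionary strategy: mutate the parent, keep the fitter of the two
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(9)]
    best = fitness(parent)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        f = fitness(child)
        if f >= best:  # survival of the fitter variant
            parent, best = child, f
    return parent, best
```

In an actual study, the fitness function would encode the social or cooperative task of interest rather than XOR, and the same evolutionary loop would reveal which conditions drive the network toward more capable behavior.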
Situation where Agent Based Modeling is Unsuitable
Agent-based modeling refers to a class of computational models used to simulate the actions and interactions of autonomous agents, both individually and collectively, with the aim of assessing how they affect the system as a whole. ABM (agent-based modeling) combines elements of complex systems, game theory, computational sociology, emergence, evolutionary programming, and multi-agent systems. ABMs also work with Monte Carlo methods as a way of introducing randomness. While ABM is good for theoretical studies, such as how low-level evolutionary behavior gives rise to collective behavior, it is unsuitable for real-world studies. Real-life studies that depend on empirical data are not suitable applications of ABM: for example, how specific mutations within an organism affect its adaptation to its environment, or metapopulation studies. ABM is incapable, or at least widely considered incapable, of supporting inferences about real-life phenomena and observations. ABM is suitable only for theoretical studies and modeling, so grounding ABMs in empirical data becomes a huge challenge that often results in wrong analyses and outcomes. Agent-based simulation results depend largely on initial conditions; they are uncertain and are also not transparent. Further, ABMs are only moderately reproducible and comparable, and as such they are unable to contribute to an understanding of real-world phenomena and unsuitable for doing so. They should therefore be limited to theoretical studies and modeling. Simulated results normally depend on the parameter values used in the simulation and the details of every internal structure of the model.
Using ABM in actual real-world studies requires that a researcher have some form of accurate data at the macroscopic level. For example, to study how two dispersed populations of the same species adapt, the researcher must have some initial empirical data and information; even then, it is only possible to accurately employ ABM in a limited manner. Where there is insufficient empirical data for parameterization, model analysis is possible only through sensitivity analysis. Thus, studies of population growth, or of how certain factors affect the population growth of a given species, cannot be undertaken using ABM without initial empirical data. ABM, as stated earlier, must have a basis for its use; this basis is usually empirical data that grounds the ABM simulation. ABM is also affected by the internal structure of the parameters under study; for example, in evaluating the adaptability and evolutionary learning of robots, ABM becomes unsuitable as a modeling tool because there is a great deal of internal variability. The variability has to be quantified and described, which entails replicating the simulation several times for every set of parameters. In such a scenario, a researcher must be creative in how they present the results; the outcome is a high level of uncertainty in the results, which means the studies are not scientifically robust. In empirical studies, ABM models produce results that are not transparent and cannot be reproduced or replicated, meaning that, in a scientific context, the results can be considered unreliable.
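The replication requirement can be illustrated with a deliberately simple, hypothetical agent-based model in Python: a toy information-spread simulation on a grid whose final outcome varies from run to run, so a single run says little and many replicates per parameter set are needed before anything can be reported. All names and parameter values here are assumptions for illustration only.

```python
import random

def run_abm(n_agents=50, steps=100, spread_prob=0.05, seed=0):
    # toy contagion model: agents random-walk on a 10x10 toroidal grid;
    # an "informed" agent informs an agent sharing its cell with
    # probability spread_prob; one agent starts informed
    rng = random.Random(seed)
    pos = [(rng.randrange(10), rng.randrange(10)) for _ in range(n_agents)]
    informed = [False] * n_agents
    informed[0] = True
    for _ in range(steps):
        # every agent takes one random step (wrapping at the edges)
        pos = [((x + rng.choice([-1, 0, 1])) % 10,
                (y + rng.choice([-1, 0, 1])) % 10) for x, y in pos]
        for i in range(n_agents):
            if not informed[i]:
                for j in range(n_agents):
                    if informed[j] and pos[i] == pos[j] \
                            and rng.random() < spread_prob:
                        informed[i] = True
                        break
    return sum(informed)  # how many agents ended up informed

# identical parameters, different seeds: single runs can differ widely,
# which is why every parameter set must be replicated many times
outcomes = [run_abm(seed=s) for s in range(10)]
```

The spread of `outcomes` across seeds is exactly the internal variability discussed above: without replication and statistical summary, any one run would misrepresent the model's behavior.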
Summary of Article on Computational Evolution
The paper is based on the ABC (artificial bee colony) algorithm, an optimization algorithm used in operations research and computer science in general, inspired by the foraging behavior of bees, a behavior considered highly intelligent. The paper investigates how the ABC algorithm performs relative to particle swarm optimization, differential evolution, and evolutionary algorithms when used to solve multi-dimensional numeric problems. The article compares the performance of the ABC algorithm with the evolutionary algorithm (EA), the particle swarm optimization (PSO) algorithm, and the differential evolution (DE) algorithm. The comparison is made by evaluating how real bees behave and by setting up experiments to compare the performance of ABC with the other three algorithms, based on the test setup of Krink et al. (2004). The paper discusses the findings from the experiment, which was repeated thirty times with classical simulations using different random seeds. Different control parameters were used for the comparison of ABC with PSO, DE, and EA, and the results were analyzed. During experimentation, the maximum cycle number for the ABC algorithm was taken as 1000 for f1(x) and f2(x) and as 5000 for f3(x), f4(x), and f5(x), so as to equalize the total number of evaluations at 100,000 for the first two functions and 500,000 for the remaining three. The experiment was designed so that half (50%) of the colony were employed bees, half (50%) were onlooker bees, and the number of scout bees per cycle was at most one. The average function values of the best solutions were recorded, and the means and standard deviations for DE, PSO, and EA were computed, along with those of the ABC algorithm.
Interesting findings from the study show that the performance of the ABC algorithm compared with the other algorithms (PSO, EA, and DE) was very good, in terms of both global and local optimization. This is attributed to the ABC algorithm's selection schemes and the neighbor-production mechanism used in the experiment, which yielded the best results. The simulation results for all the algorithms, and the tests carried out, demonstrate that the ABC algorithm is highly flexible, and equally robust and easy to use, as an optimization algorithm. According to the experimental (simulation) findings, the ABC algorithm can be employed efficiently to optimize multivariable and multimodal problems. Based on the results, the ABC algorithm can be applied efficiently and effectively to engineering problems that are complex and highly dimensional (Karaboga & Basturk, 2008). The ABC algorithm belongs to the meta-heuristic methods, which have evolved and become popular because of their ability to solve complex, high-dimensional optimization problems. Meta-heuristic methods such as ABC are independent of any initial solution and do not require derivatives in order to be used effectively. This quality makes meta-heuristic methods such as ABC very useful in solving problems because they overcome the major limitations of conventional, deterministic optimization methods, as the results of the study in question showed.
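For illustration, the employed-bee / onlooker-bee / scout-bee cycle summarized above can be sketched in Python as follows. This is a minimal reading of the general ABC scheme, not the authors' implementation; the objective, parameter values, and function names are assumptions, and the derivative-free, initialization-independent character of the method is visible in that only function evaluations are used.

```python
import random

def abc_optimize(f, dim=2, n_food=10, iters=200, limit=20, bound=5.0, seed=0):
    # minimal artificial bee colony for minimization of f (assumed >= 0)
    rng = random.Random(seed)
    foods = [[rng.uniform(-bound, bound) for _ in range(dim)]
             for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food  # stagnation counter per food source

    def neighbor(i):
        # perturb one dimension of source i using a random other source
        k, d = rng.randrange(n_food), rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        return cand

    def greedy(i, cand):
        # greedy selection: keep the candidate only if it improves source i
        v = f(cand)
        if v < vals[i]:
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                  # employed bee phase
            greedy(i, neighbor(i))
        fit = [1.0 / (1.0 + v) for v in vals]    # fitness for minimization
        total = sum(fit)
        for _ in range(n_food):                  # onlooker bee phase:
            r = rng.random() * total             # fitness-proportional choice
            i, acc = 0, fit[0]
            while acc < r and i < n_food - 1:
                i += 1
                acc += fit[i]
            greedy(i, neighbor(i))
        worst = max(range(n_food), key=lambda i: trials[i])
        if trials[worst] > limit:                # scout phase: at most one
            foods[worst] = [rng.uniform(-bound, bound) for _ in range(dim)]
            vals[worst], trials[worst] = f(foods[worst]), 0

    b = min(range(n_food), key=lambda i: vals[i])
    return foods[b], vals[b]
```

The greedy replacement in the employed and onlooker phases provides local search, the fitness-proportional onlooker choice provides global selection pressure, and the scout resets abandoned sources, which together account for the global/local balance reported in the study.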
Floreano, D., Durr, P., & Mattiussi, C. (2008). Neuroevolution: from architectures to learning. Evolutionary Intelligence, 1, 47-62.
Karaboga, D., & Basturk, B. (2008). On the performance of artificial bee colony (ABC) algorithm. Applied Soft Computing, 8, 687-697.