Browsing by Author "Shynkarenko, Viktor I."
Now showing 1 - 20 of 32
Item Analytic Hierarchy Process Sustainability at the Significant Number of Alternatives Ranking (CEUR-WS Team, Aachen, Germany, 2020) Shynkarenko, Viktor I.; Vasetska, Tetiana M.; Vyshnyakova, Iryna M.
ENG: This paper addresses the need to evaluate the stability of the Analytic Hierarchy Process when ranking more than 10 alternatives. The proposed method is based on simulation modeling of the process of improving expert pair-wise comparison judgments and provides a stepwise improvement of the transitivity of the pair-wise comparison matrix. The average discrepancy and the coincidence of ranks across multiple modeling runs are proposed as estimates of rating stability. The method was studied on a statistical sample formed from the final tables of the England, Germany and Spain football championships. A method for determining the probability of particular alternatives' ranks is also developed. The method can be modified to predict the results of sports competitions and to handle ranking with partially missing expert ratings.

Item Application of Constructive Modeling and Process Mining Approaches to the Study of Source Code Development in Software Engineering Courses (Split: Croatian communications and information society, Croatia, 2021) Shynkarenko, Viktor I.; Zhevaho, Oleksandr O.
ENG: We present an approach to constructing a source code history for modern code review. In practice, it is intended for use in programming training, especially at the initial stages. The developed constructor uses constructive-synthesizing modeling tools to classify the source code history by fine-grained changes and to build an event log file that provides information on the students' coding process. The research applies Process Mining techniques to the software engineering domain to identify software engineering skills. By better understanding the way students design programs, we can help novices learn programming. The research offers an innovative method of using code and development-process review in teaching programming skills and aims to encourage the use of code review and coding-practice monitoring for educational purposes. The standard method of evaluation considers only the final result, which does not meet modern requirements for teaching programming.

Item Authorship Determination of Natural Language Texts by Several Classes of Indicators with Customizable Weights (CEUR-WS Team, Aachen, Germany, 2021) Shynkarenko, Viktor I.; Demidovich, Inna
ENG: In this work we improve the results of attributing texts and their fragments using least-distance classification in a Euclidean space of images, by selecting a weight for each of the image measures. A genetic algorithm was used to determine the weights. Images are formed using statistical analysis, modified recurrent analysis and text complexity indicators, and the effectiveness of each of them is assessed. It was found that this approach improves the efficiency of text attribution; the reliability of authorship determination for texts from the control sample reaches 80-91%.

Item Automated Monitoring of Content Demand in Distance Learning (CEUR-WS Team, Aachen, Germany, 2021) Shynkarenko, Viktor I.; Raznosilin, Valentyn V.; Snihur, Yuliia
ENG: This paper presents research on means, and the development of software, for matching the student's gaze focus with the structure of information on the computer monitor during distance learning. Widespread hardware is assumed. Primary processing of the face image and separation of the eye regions are performed with the OpenCV library. An algorithm for calculating the center of the eye's pupil has been developed. The influence of the system calibration process (the scheme of calibration point display, its delay time on the screen, and the location of the additional camera) on the accuracy of calculating the gaze focus coordinates is investigated. The experiments showed that the error of gaze focus recognition can be reduced to 4-10% when two cameras are used. The proposed approach makes it possible to objectively measure the working time of each student with a particular part of the content. The lecturer can then improve the content by highlighting significant parts that receive little attention and simplifying elements that students spend an unreasonable amount of time on. Integration of the developed software with the LMS Moodle is planned.
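The paper's own two-camera calibration algorithm is not reproduced here; as a rough illustration of the OpenCV pipeline the abstract describes (face detection, eye-region separation, pupil-center estimation), the following sketch uses OpenCV's stock Haar cascades and estimates the pupil center as the centroid of the darkest pixels in the eye region. The cascade choice, the threshold value and the single-webcam usage are assumptions of this sketch.

```python
import cv2

# Stock OpenCV Haar cascades (shipped with opencv-python under cv2.data.haarcascades).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_centers(frame_bgr):
    """Return approximate pupil centers (x, y) in frame coordinates."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    centers = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            eye = face[ey:ey + eh, ex:ex + ew]
            # The pupil is the darkest compact region: threshold it and take the centroid.
            _, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)  # threshold is an assumption
            m = cv2.moments(mask)
            if m["m00"] > 0:
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                centers.append((fx + ex + cx, fy + ey + cy))
    return centers

# Usage with a single webcam frame (the paper uses two cameras plus a calibration step).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(pupil_centers(frame))
cap.release()
```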
Item Automation of Template Formation to Identify the Structure of Natural Language Documents (CEUR-WS Team, Aachen, Germany, 2021) Kuropiatnyk, Olena S.; Shynkarenko, Viktor I.
ENG: In the task of detecting text borrowings and plagiarism, it is important to take the structure of the document into account. This allows a more accurate assessment of the text and reduces the volume of material to compare. A template is used to identify the structure of the document. The paper presents a constructive-synthesizing model for automating the construction of a structural template of a document. Possible implementations of some of the algorithms in C# are considered and assessed comparatively. A possible modification of the template is presented that increases the importance of keywords and simplifies the XML tree that serves as the template.

Item Conceptualization of the Tabular Representation of Knowledge (IEEE, 2021) Shynkarenko, Viktor I.; Zhuchyi, Larysa I.; Ivanov, Oleksandr P.
ENG: Tabular representation is used as a basis for knowledge extraction. The structure of knowledge is built from a generalized concept down to the data structures used in applied software. The resulting formalizations allow the control of information in the form of natural language texts (regulatory documents), databases and spreadsheets of automated transport systems. The knowledge extracted from the tabular representation serves as a basis for decision-making and data mining.

Item Constructive Model of the Natural Language (Institute of Informatics, University of Szeged, Hungary, 2018) Shynkarenko, Viktor I.; Kuropiatnyk, Olena S.
ENG: The paper deals with a model of natural language. Elements of the model (the language constructions) are images with attributes such as sounds, letters, morphemes, words and other lexical and syntactic components of the language. Based on an analysis of the processes of world perception and of visual and associative thinking, the operations of forming and transforming images are identified. The model can be applied in semantic NLP.
Item Constructive Modeling of Lightning Activity in Thunderstorm Front (Institute of Electrical and Electronics Engineers (IEEE), 2019) Shynkarenko, Viktor I.; Lytvynenko, Kostiantyn; Chyhir, Robert; Sansiieva, Iryna
ENG: Using the tools of constructive-synthesizing modeling, constructors of fractal time series are developed that determine the location, magnitude and damping rate of lightning discharges. Model video images of lightning in the thunderstorm front are formed according to the constructors' implementation. The adequacy of the model is verified by comparing the model video image with one produced by a NASA satellite.

Item Constructive-Synthesizing Modeling of Lightning Flashes in the Dynamic Thunderstorm Front (Springer, Cham, 2020) Shynkarenko, Viktor I.; Nikitina, Iryna M.; Chyhir, Robert R.
ENG: First, a deep analysis was performed of processes for modeling the dynamic behavior of lightning flashes against a static background and in moving thunderstorm fronts with highly mobile clouds, using NASA satellite videos. The capabilities of color models were explored for extracting lightning flashes in highly dynamic thunderstorm fronts and cloudiness. It was shown that the greatest recognition efficiency is provided by combining Lab- and LCH-based features. The ranges of the color channels were defined for lightning aureoles, and these data were used to detect them in frame series from meteorological satellites. Linear and quadratic filters that process the current video frame were developed for lightning detection and filtering, and their effectiveness was analyzed. Modeling of the lightning flashes was implemented using the constructive-synthesizing approach: a set of constructors was developed in which parametric multi-character constructors form fractal sequences of characters, and a constructor-converter turns the character string into fractal time series that determine the location, magnitude and decay rate of lightning discharges. Model video images of lightning in the thunderstorm front are formed in accordance with the implementation of the constructor-assembler. Methods and software were developed for lightning extraction from NASA satellite video and for the realization of constructive-synthesizing models.
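The specific channel ranges and filters reported for lightning aureoles are not reproduced here; the sketch below only illustrates the general mechanism of color-range extraction in the Lab space with OpenCV, using placeholder bounds and a hypothetical frame file name.

```python
import cv2
import numpy as np

# Load one frame of satellite video (the file name is hypothetical).
frame = cv2.imread("thunderstorm_frame.png")
assert frame is not None, "frame not found"

# Convert BGR -> Lab; lightning aureoles are then isolated by a per-channel range check.
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)

# Placeholder (L, a, b) bounds -- the actual ranges for lightning aureoles
# are determined experimentally in the paper.
lower = np.array([200, 120, 120], dtype=np.uint8)
upper = np.array([255, 140, 140], dtype=np.uint8)
mask = cv2.inRange(lab, lower, upper)

# Light smoothing of the mask (a simple stand-in for the paper's linear/quadratic filters).
mask = cv2.medianBlur(mask, 5)

print("candidate lightning pixels:", cv2.countNonZero(mask))
```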
Item Constructive-Synthesizing Modeling of Natural Language Texts (Khmelnytskyi National University, Khmelnytskyi, 2023) Shynkarenko, Viktor I.; Demidovich, Inna M.
ENG: Means for solving the problem of establishing the authorship of natural language texts were developed. The theoretical tools consist of a set of constructors built on the basis of structural and production modeling, which are presented in this work; some experimental results based on this approach have been published in the authors' previous works, and the main results are to be published in subsequent ones. The constructors developed include a converter of natural language text into tagged text, a converter of tagged text into a formal stochastic grammar, and a constructor that establishes the degree of stylistic similarity of two natural language works from the coincidence of the corresponding stochastic grammars (their substitution rules). The paper thus presents constructors that model a natural language text as a stochastic grammar reflecting the structures of its sentences. This approach highlights the syntactic features of how the author constructs phrases, which is characteristic of his or her speech. Working with the sentence as the unit of text makes it possible to capture the author's style more accurately in terms of word use, word sequences and characteristic language constructions; it is not tied to specific parts of speech but reveals the general logic of phrase construction, which can be more informative about the author's style for any text. The presented work is a theoretical basis for solving the problems of establishing text authorship and identifying borrowings. Experimental studies have also been carried out: a statistical similarity between the solutions to the authorship-establishment and borrowing-identification problems was revealed, which will be presented in the authors' next article. It is planned to use the created model to determine the authorship of natural language texts of various kinds, both fiction and technical literature.
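The constructors themselves are defined formally in the paper; as a rough illustration of the underlying idea (a text represented by weighted substitution rules over sentence structure, with style compared through those rules), the sketch below counts adjacent POS-tag pairs as "rules" in already-tagged sentences and scores two texts by the cosine similarity of the rule frequencies. The tag set and the similarity measure are assumptions made for this example, not the authors' formalism.

```python
from collections import Counter
from math import sqrt

def rule_frequencies(tagged_sentences):
    """Count simple 'substitution rules' (here: adjacent POS-tag pairs) with probabilistic weights."""
    counts = Counter()
    for tags in tagged_sentences:
        for left, right in zip(tags, tags[1:]):
            counts[(left, right)] += 1
    total = sum(counts.values()) or 1
    return {rule: n / total for rule, n in counts.items()}

def style_similarity(freq_a, freq_b):
    """Cosine similarity between two rule-frequency vectors."""
    rules = set(freq_a) | set(freq_b)
    dot = sum(freq_a.get(r, 0.0) * freq_b.get(r, 0.0) for r in rules)
    na = sqrt(sum(v * v for v in freq_a.values()))
    nb = sqrt(sum(v * v for v in freq_b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy pre-tagged sentences (DET/NOUN/VERB/... tags stand in for a real tagger's output).
text_a = [["DET", "NOUN", "VERB", "DET", "NOUN"], ["NOUN", "VERB", "ADV"]]
text_b = [["DET", "ADJ", "NOUN", "VERB", "NOUN"], ["PRON", "VERB", "DET", "NOUN"]]
print(style_similarity(rule_frequencies(text_a), rule_frequencies(text_b)))
```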
Item Constructive-Synthesizing Modelling of Ontological Document Management Support for the Railway Train Speed Restrictions (Український державний університет науки і технологій, Дніпро, 2022) Shynkarenko, Viktor I.; Zhuchyi, Larysa I.
ENG: Purpose. During the development of railway ontologies, it is necessary to take into account both the data of information systems and the regulatory support, in order to check their consistency. To do this, data integration is performed. The purpose of the work is to formalize the methods of integrating heterogeneous sources of information and forming the ontology. Methodology. Constructive-synthesizing modelling of ontology formation and its resources was developed. Findings. Ontology formation has been formalized, which expands the possibilities of automating the integration and coordination of data using ontologies. In the future, it is planned to extend the structural system for forming ontologies based on textual sources of railway regulatory documentation and information systems. Originality. The authors laid the foundations of using constructive-synthesizing modelling in the railway transport ontological domain to form the structure and data of the railway train speed restriction warning tables (database and csv formats), their transformation into a common tabular format, vocabulary, rules and ontology individuals, as well as ontology population. Ontology learning methods have been developed to integrate data from heterogeneous sources. Practical value. The developed methods make it possible to integrate heterogeneous data sources (the structure of the table of railway train management rules, the form and the application for issuing a warning), which are specific to the railway domain. This allows an ontology to be formed from its data sources (database and csv formats) down to the schema and individuals. Integration and consistency of information system data and regulatory documentation is one of the aspects of increasing the level of train traffic safety.

Item Constructive-Synthesizing Representation of Geometric Fractals (Springer, 2019) Shynkarenko, Viktor I.
ENG: A constructive-production approach, more general than other well-known approaches, is proposed for generating fractals. It is shown that a wide variety of attributes and initial elements can be used in forming fractals, and that fractals can be combined into multifractals. The possibilities of generating fractals are extended by eliminating constraints that are necessary in other approaches. The proposed approach made it possible to establish several previously unknown properties of the fractional dimension: it can change during the generation of a fractal, and the fractional dimension of the limit of the form during generation may not coincide with that of the limiting fractal. A simple definition of a deterministic geometric fractal is given that takes into account all the properties characterizing such a fractal.

Item Data Stochastic Preprocessing for Sorting Algorithms (CEUR Workshop Proceedings, 2022) Shynkarenko, Viktor I.; Doroshenko, Anatoliy Yu.; Yatsenko, Olena A.; Raznosilin, Valentyn V.; Halanin, Kostiantyn K.
ENG: The possibilities of improving sorting time through preprocessing by stochastic sorting were investigated. The hypothesis that preprocessing by stochastic sorting can significantly improve the time efficiency of classical sorting algorithms has been experimentally confirmed. Sorting algorithms of different computational complexity were taken as the classical ones: shaker sort with complexity O(n²), insertion sort O(n²), Shell sort O(n·(log n)²)…O(n^(3/2)), and quicksort with optimization of the ending sequences O(n·log n). The greatest effect is obtained when the number of comparisons performed by the stochastic sorting is about 160 percent of the array size. Indicators of the efficiency of exchanging two elements and of a series of comparisons with exchanges are proposed; they made it possible to establish that the preprocessing is most efficient when one element of each comparison is selected from the first part of the array and the other from the second. For algorithms with complexity O(n²), the improvement in time efficiency reached 70–80 percent. However, for Shell sort and quicksort, the stochastic presort has no positive effect and instead increases the total sorting time, apparently because of the initially high efficiency of these sorting methods. The hypothesis that the time efficiency of quicksort combined with insertion sort on the ending sections could be increased by preliminary stochastic processing of such sections was not confirmed. However, the experiments established the recommended array size at which the modified quicksort should switch to insertion sort: the optimal length of the ending sequences is between 60 and 80 elements. Given that time efficiency is affected by computer architecture, operating system, software development and execution environment, data types, data sizes and their values, time-efficiency indicators should be determined in each specific case.
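A minimal sketch of the presorting idea as the abstract describes it (random compare-and-exchange passes whose count is about 160 percent of the array size, with one element drawn from the first half and the other from the second), followed by a classical O(n²) algorithm; the exact procedure and parameters in the paper may differ.

```python
import random

def stochastic_presort(a, ratio=1.6):
    """Randomized preprocessing: compare/exchange pairs with one index drawn from each half.

    The number of comparisons (ratio * len(a)) and the half-and-half index choice follow
    the description in the abstract; the remaining details are assumptions of this sketch.
    """
    n = len(a)
    mid = n // 2
    for _ in range(int(ratio * n)):
        i = random.randrange(0, mid)
        j = random.randrange(mid, n)
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]

def insertion_sort(a):
    for k in range(1, len(a)):
        x = a[k]
        i = k - 1
        while i >= 0 and a[i] > x:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = x

data = [random.randint(0, 10**6) for _ in range(10_000)]
stochastic_presort(data)   # cheap randomized pass that reduces disorder
insertion_sort(data)       # the classical O(n^2) algorithm then runs on partially ordered data
assert data == sorted(data)
```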
Item Development of a Toolkit for Analyzing Software Debugging Processes Using the Constructive Approach (ПП Технологічний Центр, Харків, 2020) Shynkarenko, Viktor I.; Zhevago, Oleksandr O.
ENG: Constructive-synthesizing modeling and Process Mining methods were applied in a toolkit for monitoring and analyzing the software debugging process. Methods for monitoring the development and debugging processes are a basis for improving the practical training of students, reducing the time a student spends irrationally during software development, and helping the teacher monitor how tasks are performed. The software debugging process is viewed as a sequence of actions with the relevant tools. Using the methodology of constructive-synthesizing modeling, a constructor for forming a log of debugging actions was developed. Based on this constructive model, an extension to the Microsoft Visual Studio integrated development environment (IDE) was designed in which all debugging actions are recorded in an event log. During debugging in the IDE, event logs are collected, and a conformance check of these logs against a reference model is then performed using ProM (Eindhoven University of Technology, Netherlands), a platform for Process Mining methods. Conformance checking makes it possible to compare different debugging processes and to recognize behavioral similarities and differences. The main purpose of the developed toolkit is to collect debugging actions from the developer's IDE. By better understanding how students grasp and deal with errors, one can help novices learn to program. Knowing how programmers debug can encourage researchers to develop more practically oriented methods, enable teachers to improve their debugging curricula, and allow tool developers to adapt the debugger to the actual needs of users. The prepared tools are suggested for use in a software engineering course.
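The toolkit's actual log format is not given in the abstract; the sketch below shows a minimal event log of debugging actions (case identifier, activity, timestamp) of the general shape consumed by process-mining tools such as ProM. The column names and the activity vocabulary are hypothetical.

```python
import csv
from datetime import datetime, timedelta

# A minimal event log: one case per debugging session, one row per action.
# Logs in this shape can be converted to XES and analyzed with process-mining tools.
start = datetime(2020, 9, 1, 10, 0, 0)
events = [
    ("session-1", "SetBreakpoint", start),
    ("session-1", "StartDebugging", start + timedelta(seconds=5)),
    ("session-1", "StepOver", start + timedelta(seconds=9)),
    ("session-1", "InspectVariable", start + timedelta(seconds=12)),
    ("session-1", "StopDebugging", start + timedelta(seconds=40)),
]

with open("debug_event_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["case_id", "activity", "timestamp"])
    for case_id, activity, ts in events:
        writer.writerow([case_id, activity, ts.isoformat()])
```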
Item Development of Ontological Support of Constructive-Synthesizing Modeling of Information Systems (НВП ПП «Технологічний Центр», Харьків, 2017) Skalozub, Vladyslav V.; Ilman, Valeriy M.; Shynkarenko, Viktor I.
ENG: A methodology of ontological support for the processes of constructive-production modeling (CPM) of structurally complex information technologies (CCIT) is developed. Ontology models of subject areas are formed and presented on the basis of a constructive structure containing primary classes of ontology instances, active binding operators of actions and performers, and classes of comparative properties, subordination and evolutionary development. The universality of the ontological constructive structure model, and the possibility of developing and customizing it for subject areas, make it possible to improve the quality of the automated processes of creating such systems.

Item A Dual Approach to Establishing the Authority of Technical Natural Language Texts and Their Components (Ukrainian State University of Science and Technologies, Dnipro, 2023) Shynkarenko, Viktor I.; Demidovich, Inna; Kuropiatnyk, Olena S.
ENG: Purpose. The study is aimed at testing the hypothesis that plagiarism can be determined by methods of establishing the authorship of a text, without using a text bank and direct comparison. Methodology. Constructive and productive models of the processes of establishing the authorship of technical texts were developed for two methods. The first method is based on forming a text model as a set of formal substitution rules with probabilistic weights (as in stochastic formal grammars), which reflects the syntactic features and patterns of text formation by the author; the degree of similarity between the text under study and another text is determined by comparing their models. The second method is the classical approach to detecting borrowings (plagiarism): directly comparing the text under study with an existing text bank, highlighting repeated text fragments and determining the degree of originality. Experiments were conducted to establish the correlation between the results of these two methods. The experimental base consisted of 509 text sections from theses of students majoring in "Software Engineering". Findings. The experimental studies established a high correlation between the results of the two methods: correlation coefficients in the range 0.75...1.0, with an average value of 0.88, were obtained provided that borrowings are counted for text fragments at least five words long. Originality. For the first time, the authors identified the possibility of, and proposed methods for, indirect plagiarism detection without using a large text bank. The essence of the model is a formalized representation of the author's sentence syntax by a set of substitution rules with probabilistic weights. Practical value. Based on the results obtained, the possibilities for detecting borrowings have been expanded and the effectiveness of the corresponding methods increased. Recommendations on the parameters of classical borrowing-detection methods were obtained; in particular, it is recommended to count text fragments of at least five words as a rational parameter when using borrowing-detection systems. The possibilities of text authorship detection methods tested on fiction texts are extended to technical texts.

Item Methods and Software for Significant Indicators Determination of the Natural Language Texts Author Profile (Інститут програмних систем НАН України, Київ, 2023) Shynkarenko, Viktor I.; Demidovich, Inna M.
ENG: Methods for forming and optimizing author profiles are presented. The author profile is an image: a vector in a multidimensional space whose components are measurements of the author's texts by a number of methods based on 4-grams, stemming, recurrence analysis and formal stochastic grammar. The author's profile is a model of his or her language, including vocabulary and sentence syntax features. A comparative analysis of the effectiveness of each of the methods is carried out. A reduced author profile is formed by means of a genetic algorithm: insignificant indicators are excluded, which reduces their number by 20%. The reduced author profile contains the attributes that are significant for a given author and serves as an effective attribution of that author.
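Neither the concrete profile indicators nor the genetic algorithm's configuration are reproduced here; the sketch below only illustrates the attribution scheme shared by the two authorship papers above: a text as a vector of indicators, assigned to the author whose profile is closest in weighted Euclidean distance. The indicator values, weights (tuned by a genetic algorithm in the papers) and author names are placeholders.

```python
import numpy as np

# Hypothetical author profiles: each is a vector of text-measure indicators
# (e.g. 4-gram statistics, stemming-based and recurrence-based measures).
profiles = {
    "author_A": np.array([0.12, 3.4, 0.71, 15.2]),
    "author_B": np.array([0.09, 2.1, 0.88, 11.7]),
}

# Per-indicator weights; fixed placeholders standing in for GA-tuned values.
weights = np.array([1.0, 0.3, 2.0, 0.1])

def attribute(text_vector):
    """Assign the text to the author whose profile is closest in weighted Euclidean distance."""
    def dist(profile):
        return float(np.sqrt(np.sum(weights * (profile - text_vector) ** 2)))
    return min(profiles, key=lambda name: dist(profiles[name]))

unknown = np.array([0.11, 3.0, 0.75, 14.8])
print(attribute(unknown))   # -> "author_A" for these placeholder numbers
```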
Item Minimization of the Chemical Pollution Level at the Working Zones in Open Areas Using Screens (Дніпровський національний університет залізничного транспорту імені академіка В. Лазаряна, Дніпро, 2019) Biliaiev, Mykola M.; Rusakova, Tetiana I.; Shynkarenko, Viktor I.
ENG: Purpose. The work aims to develop a new method for assessing the level of chemical air pollution in working zones located in open areas near highways when screens of different heights are used. Methodology. The analytical method for calculating the airflow velocity field near protective screens is based on the mathematical apparatus of the theory of functions of a complex variable, which yields the velocity potential and the stream function and allows the velocity to be calculated at any point of the plane with a screen of any height. The obtained velocity field is used to calculate the carbon monoxide concentration level in the numerical solution of the two-dimensional mass transfer equation. Findings. The developed numerical calculation program allows computational experiments on the effectiveness of protective screens, taking into account changes in their geometry and the meteorological conditions. Based on the obtained concentration field, the method makes it possible to assess the risk of chronic intoxication for take-out trade employees who remain near the emission source (the highway) for a long time. Originality. The regularities of the change in carbon monoxide concentration are established as a function of the distance to the emission source at a height of 2 m above the ground, with and without a screen of a given height. A risk assessment of chronic carbon monoxide intoxication has been carried out for take-out trade workers near the highway. It is shown that the presence of the screen reduces the risk of chronic CO intoxication by 10% compared to its absence, and increasing the screen height to 1.8 m reduces the risk of chronic intoxication by 6% relative to a screen height of 1.2 m. Practical value. The developed numerical-analytical method for calculating the level of chemical pollution in working zones in open areas, and the program "Screen" created on its basis, allow a prompt forecast of the atmospheric air pollution level with carbon monoxide taking into account the effectiveness of the screens. The quantitative results are needed at the planning stage of trading places near highways and during the architectural and planning reorganization of adjacent developments.

Item Modeling of Lightning Flashes in Thunderstorm Front by Constructive Production of Fractal Time Series (Springer, Cham, 2020) Shynkarenko, Viktor I.; Lytvynenko, Kostiantyn; Chyhir, Robert R.; Nikitina, Iryna M.
ENG: Using the tools of structural-synthesizing modeling, a set of constructors was developed. Parametric multi-character constructors form fractal sequences of characters, and a constructor-converter turns the character string into fractal time series that determine the location, magnitude and decay rate of lightning discharges. Model video images of lightning in the thunderstorm front are formed in accordance with the implementation of the constructor-assembler. All constructors are built on the basis of the generalized constructor that was previously presented and repeatedly tested. The adequacy of the model is confirmed by comparing the model video image with an image obtained from a NASA satellite. This approach can serve as a basis for solving dynamic problems of lightning protection for engineering structures and civil objects, and for developing strategies of aircraft behavior to mitigate the risk of lightning strikes when moving through a thunderstorm front.
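The paper's constructors derive fractal time series from character strings; as a generic stand-in, the sketch below generates a self-affine (fractal) series by random midpoint displacement. This is not the authors' constructor, but it shows the kind of series that could encode discharge magnitude over time; all parameters are placeholders.

```python
import random

def midpoint_displacement(levels=8, roughness=0.6, seed=42):
    """Generate a self-affine (fractal) time series by random midpoint displacement.

    A generic fractal-series generator, not the paper's string-based constructor;
    'levels' and 'roughness' are placeholder parameters.
    """
    random.seed(seed)
    series = [0.0, 0.0]
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for left, right in zip(series, series[1:]):
            mid = (left + right) / 2 + random.uniform(-amplitude, amplitude)
            refined.extend([left, mid])
        refined.append(series[-1])
        series = refined
        amplitude *= roughness   # damping of the displacement at each refinement level
    return series

# Interpret the series as, e.g., lightning discharge magnitudes over time.
magnitudes = [abs(v) for v in midpoint_displacement()]
print(len(magnitudes), max(magnitudes))
```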
Item Ontological Harmonization of Railway Transport Information Systems (CEUR-WS Team, Aachen, Germany, 2021) Shynkarenko, Viktor I.; Zhuchyi, Larysa I.
ENG: The problem of the unification and intellectualization of Ukrainian railway transport information systems is investigated using ontological support. A base frame model of a modular ontology, which includes 12 components connected by logical definitions, has been developed. It provides ontological support for technological processes, taking into account the formalized normative and legal documentation. The possibilities of ontologies for coordinating railway transport models have been established. The application of the developed methods and tools makes it possible to achieve greater decentralization of information systems and a unified representation of railway technological processes. Further research involves extending the formalization of instructions and increasing the expressiveness of the ontologies by developing new constructs and linking them with ontologies at a higher level of abstraction.
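As an illustration of ontology population of the kind mentioned in the railway ontology items above, the sketch below builds a tiny schema and one individual with rdflib; the namespace, class and property names are invented for the example and do not come from the papers.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

# Hypothetical namespace and vocabulary; the papers' actual ontology schema is not reproduced here.
RAIL = Namespace("http://example.org/railway#")

g = Graph()
g.bind("rail", RAIL)

# A tiny schema: one class and one datatype property.
g.add((RAIL.SpeedRestriction, RDF.type, RDFS.Class))
g.add((RAIL.maxSpeedKmh, RDF.type, RDF.Property))

# Populate the ontology with one individual, e.g. a row from a warning table.
g.add((RAIL.restriction_001, RDF.type, RAIL.SpeedRestriction))
g.add((RAIL.restriction_001, RAIL.maxSpeedKmh, Literal(40, datatype=XSD.integer)))
g.add((RAIL.restriction_001, RDFS.label, Literal("Track section 12, km 104-106")))

print(g.serialize(format="turtle"))
```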