
Test on systems theory and system analysis. Test on the subject of modeling. Qualitative methods for describing systems use


Topic: "Systems and patterns of their functioning and development"

1. The set of all objects whose property changes affect the system, as well as the objects whose properties change as a result of the system's behavior, is:

a) environment;
b) subsystem;
c) components.

2. The simplest, indivisible part of the system, determined depending on the purpose of building and analyzing the system:

a) component;
b) an observer;
c) element;
d) an atom.

3. System component is:

a) a part of the system that has the properties of the system and has its own sub-goal;
b) the limit of the division of the system in terms of the aspect of consideration;
c) a means to an end;
d) a set of homogeneous elements of the system.

4. The restriction on the freedom of the system's elements is defined by the concept of

a) criterion;
b) purpose;
c) communication;
d) stratum.

5. The ability of a system in the absence of external influences to maintain its state for an arbitrarily long time is determined by the concept

a) sustainability;
b) development;
c) balance;
d) behavior.

6. Combining several system parameters into a single higher-level parameter is

a) synergy;
b) aggregation;
c) hierarchy.

7. The network structure is

a) decomposition of the system in time;
b) decomposition of the system in space;
c) relatively independent, interacting subsystems;
d) relationships of elements within a certain level;

8. The level of the hierarchical structure, in which the system is presented in the form of interacting subsystems, is called

a) a stratum;
b) level;
c) layer.

9. Which type of system structure does not exist

a) with arbitrary links;
b) horizontal;
c) mixed;
d) matrix.

I recently had an interview for a Middle QA position on a project that clearly exceeded my capabilities. I spent a lot of time on things I did not know at all and too little time going over basic theory, which was a mistake.

Below are the basics of the basics to review before an interview for Trainee and Junior positions: the definition of testing, quality, verification / validation, goals, stages, test plan, test plan items, test design, test design techniques, traceability matrix, test case, checklist, defect, error / defect / failure, bug report, severity vs priority, testing levels, kinds / types of testing, approaches to integration testing, testing principles, static and dynamic testing, exploratory / ad-hoc testing, requirements, bug life cycle, software development stages, decision table, QA / QC / test engineer, link diagram.

All comments, corrections and additions are very welcome.

Software testing is verification of the correspondence between the actual and expected behavior of a program, carried out on a finite set of tests selected in a certain way. In a broader sense, testing is one of the quality control techniques, which includes the activities of work planning (Test Management), test design (Test Design), test execution (Test Execution), and analysis of the results (Test Analysis).

Software Quality is a set of software characteristics related to its ability to satisfy stated and implied needs.

Verification is the process of evaluating a system or its components to determine whether the results of the current development stage satisfy the conditions formed at the beginning of that stage, i.e. whether the goals, deadlines, and development tasks defined at the beginning of the current phase are being met.
Validation is determining whether the developed software complies with the user's expectations and needs and with the system requirements.
You can also find another interpretation:
assessing whether a product conforms to explicit requirements (specifications) is verification, while assessing whether a product meets user expectations and requirements is validation. The following short formulations of these concepts are also common:
Validation - 'is this the right specification?'
Verification - 'is the system built correctly according to the specification?'

Test Goals
Increase the likelihood that an application intended for testing will work correctly under all circumstances.
Increase the likelihood that the application intended for testing will meet all the described requirements.
Provide up-to-date information about the current state of the product.

Testing steps:
1. Product analysis
2. Dealing with requirements
3. Development of a testing strategy
and planning of quality control procedures
4. Creation of test documentation
5. Prototype testing
6. Basic testing
7. Stabilization
8. Operation

A Test Plan is a document describing the entire scope of testing work, from a description of the object, the strategy, the schedule, and the criteria for starting and finishing testing, to the equipment and special knowledge required in the process, as well as a risk assessment with options for resolving the risks.
Answers the questions:
What should be tested?
What will you test?
How will you test?
When will you test?
Criteria for starting testing.
Criteria for the end of testing.

The main points of the test plan
The IEEE 829 standard lists the items that a test plan should consist of:
a) test plan identifier;
b) introduction;
c) test items;
d) features to be tested;
e) features not to be tested;
f) approach;
g) item pass/fail criteria;
h) suspension criteria and resumption requirements;
i) test deliverables;
j) testing tasks;
k) environmental needs;
l) responsibilities;
m) staffing and training needs;
n) schedule;
o) risks and contingencies;
p) approvals.

Test design is the stage of the software testing process at which test scenarios (test cases) are designed and created in accordance with previously defined quality criteria and testing goals.
Roles responsible for test design:
Test analyst - defines "WHAT to test?"
Test designer - defines "HOW to test?"

Design Test Techniques

Equivalence Partitioning (EP). As an example, if you have a range of valid values from 1 to 10, you should choose one correct value inside the interval, say 5, and one incorrect value outside the interval, say 0.

Boundary Value Analysis (BVA). Taking the example above, for positive testing we choose the minimum and maximum boundary values (1 and 10), and for negative testing the values just below and just above the boundaries (0 and 11). Boundary value analysis can be applied to fields, records, files, or any other kind of constrained entity.
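To make these two techniques concrete, here is a minimal Python sketch for the 1..10 example above; the validator accept_quantity() is a hypothetical function invented purely for the illustration.

```python
# Hypothetical function under test: assumed to accept integers from 1 to 10.
def accept_quantity(value: int) -> bool:
    return 1 <= value <= 10

# Equivalence partitioning: one representative value per partition.
assert accept_quantity(5) is True    # a value inside the valid interval
assert accept_quantity(0) is False   # a representative of the invalid partition

# Boundary value analysis: the boundaries and the values just outside them.
assert accept_quantity(1) is True    # lower boundary
assert accept_quantity(10) is True   # upper boundary
assert accept_quantity(0) is False   # just below the lower boundary
assert accept_quantity(11) is False  # just above the upper boundary
```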

Cause / Effect (CE). As a rule, this technique consists of entering combinations of conditions (causes) in order to receive a response from the system (effect). For example, you are testing the ability to add a customer on a particular screen. To do this, you need to fill in several fields, such as "Name", "Address", "Phone Number", and then click the "Add" button; that is the cause. After the "Add" button is pressed, the system adds the customer to the database and displays their number on the screen; that is the effect.

Error Guessing (EG). The tester uses their knowledge of the system and their ability to interpret the specification in order to "foresee" under what input conditions the system may produce an error. For example, the specification says: "the user must enter a code". The tester will think: "What if I don't enter a code?", "What if I enter the wrong code?", and so on. This is error guessing.

Exhaustive Testing (ET) is an extreme case. Within this technique you should test all possible combinations of input values, which in principle should find all problems. In practice, the use of this method is not possible due to the huge number of input values.

Pairwise Testing is a technique for generating sets of test data. Its essence can be formulated, for example, like this: form data sets in which every tested value of every tested parameter is combined at least once with every tested value of all the other tested parameters.

Suppose some value (a tax) for a person is calculated based on their gender, age, and whether they have children; we get three input parameters, for each of which we somehow select values for the tests. For example: gender - male or female; age - under 25, from 25 to 60, over 60; children - yes or no. To check the correctness of the calculations, you can, of course, enumerate all combinations of the values of all parameters:

#    Gender    Age        Children
1    male      under 25   no
2    female    under 25   no
3    male      25-60      no
4    female    25-60      no
5    male      over 60    no
6    female    over 60    no
7    male      under 25   yes
8    female    under 25   yes
9    male      25-60      yes
10   female    25-60      yes
11   male      over 60    yes
12   female    over 60    yes

Alternatively, we can decide that we do not need combinations of all values of all parameters with all others, and that we only want to make sure we check every unique pair of parameter values. For example, for the gender and age parameters we want to be sure we check a man under 25, a man between 25 and 60, a man over 60, a woman under 25, a woman between 25 and 60, and a woman over 60; and likewise for every other pair of parameters. This way we can get far fewer value sets (they contain all pairs of values, although some pairs appear twice):

#    Gender    Age        Children
1    male      under 25   no
2    female    under 25   yes
3    male      25-60      yes
4    female    25-60      no
5    male      over 60    no
6    female    over 60    yes

This is roughly the essence of the pairwise testing technique: we do not check all combinations of all values, but we do check all pairs of values.
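A rough illustration of how such a reduced set can be produced: the greedy sketch below (plain Python, nothing beyond the standard library) enumerates all combinations for the gender / age / children example and keeps picking the row that covers the most not-yet-covered pairs. Real projects usually rely on dedicated pairwise tools; this is only to show the idea.

```python
from itertools import combinations, product

# Parameters and values from the example above.
parameters = {
    "gender": ["male", "female"],
    "age": ["under 25", "25-60", "over 60"],
    "children": ["yes", "no"],
}
names = list(parameters)

def pairs_of(row):
    """All (parameter, value) pairs covered by one full combination."""
    return set(combinations(zip(names, row), 2))

# Every pair of values that must be covered at least once.
uncovered = set()
for row in product(*parameters.values()):
    uncovered |= pairs_of(row)

tests = []
while uncovered:
    # Greedily take the combination covering the most still-uncovered pairs.
    best = max(product(*parameters.values()),
               key=lambda row: len(pairs_of(row) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

for row in tests:
    print(row)  # typically around 6 rows instead of all 12 combinations
```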

Traceability matrix (requirements coverage matrix) is a two-dimensional table containing the correspondence between the functional requirements of the product and the prepared test scenarios (test cases). Requirements are placed in the column headings of the table, and test scenarios in the row headings. At each intersection, a mark indicates that the requirement of the current column is covered by the test case of the current row.
The traceability matrix is used by QA engineers to check test coverage of the product. It is an integral part of the test plan.
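As a small illustration (the requirement and test case identifiers below are made up), the same correspondence can be kept as a simple mapping and used to spot requirements that no test case covers:

```python
# Hypothetical requirements and test cases, purely for illustration.
requirements = ["REQ-1 login", "REQ-2 logout", "REQ-3 password reset"]

# Traceability: which requirements each test case covers.
coverage = {
    "TC-01 open login page": ["REQ-1 login"],
    "TC-02 login with valid credentials": ["REQ-1 login"],
    "TC-03 logout": ["REQ-2 logout"],
}

covered = {req for reqs in coverage.values() for req in reqs}
uncovered = [req for req in requirements if req not in covered]
print("Requirements without test coverage:", uncovered)
# -> ['REQ-3 password reset']
```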

Test Case is an artifact that describes a set of steps, specific conditions and parameters necessary to verify the implementation of the function under test or part of it.
Example:
Action | Expected Result | Test Result (passed / failed / blocked)
Open the "login" page | The login page is opened | Passed

Each test case should have 3 parts:
PreConditions - a list of actions that bring the system into a state suitable for the main check, or a list of conditions whose fulfillment indicates that the system is in a state suitable for the main test.
Test Case Description - a list of actions that transfer the system from one state to another in order to obtain a result on the basis of which one can conclude whether the implementation meets the requirements.
PostConditions - a list of actions that return the system to its initial state (the state before the test was performed).
Types of Test Scripts:
Test cases are divided according to the expected result into positive and negative:
A positive test case uses only valid data and verifies that the application correctly executed the called function.
A negative test case operates on both valid and invalid data (at least one invalid parameter) and aims to check for exceptional situations (that validators fire); it also checks that the action invoked by the application is not performed when a validator fires.
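A minimal sketch of one positive and one negative test case, written with the standard unittest module; the validator accept_age() and its 0..150 range are assumptions made up for the example.

```python
import unittest

def accept_age(value):
    """Hypothetical validator: accepts integers from 0 to 150, otherwise raises."""
    if not isinstance(value, int) or not 0 <= value <= 150:
        raise ValueError("invalid age")
    return value

class AgeFieldTests(unittest.TestCase):
    def test_positive_valid_age_is_accepted(self):
        # Positive case: only valid data, the function must work correctly.
        self.assertEqual(accept_age(30), 30)

    def test_negative_letters_are_rejected(self):
        # Negative case: invalid data, the validator must fire and the
        # main action must not be performed.
        with self.assertRaises(ValueError):
            accept_age("abc")

if __name__ == "__main__":
    unittest.main()
```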

A checklist is a document describing what should be tested. A checklist can have very different levels of detail; how detailed it is depends on the reporting requirements, the team's knowledge of the product, and the complexity of the product.
As a rule, a checklist contains only actions (steps), without the expected result. A checklist is less formal than a test script. It is appropriate to use when test scripts would be redundant. Checklists are also associated with agile approaches to testing.

Defect (aka bug) is a discrepancy between the actual result of program execution and the expected result. Defects are discovered during software testing, when the tester compares the program's results (for a component or design) with the expected result described in the requirements specification.

Error - a user error: the user tries to use the program in an unintended way.
Example: entering letters in fields where numbers are required (age, quantity of goods, etc.).
A well-made program anticipates such situations and displays an error message, marked with a red cross.
Bug (defect) - a mistake by a programmer (or a designer, or anyone else taking part in development): something in the program does not go as planned and the program gets out of control. For example, when user input is not validated at all, incorrect data causes crashes or other "joys" in the program. Or the program is built in a way that does not match what is expected of it in the first place.
Failure - a failure (not necessarily a hardware one) in the operation of a component, the whole program, or the system. There are defects that lead to failures (a defect caused the failure) and there are those that do not, UI defects for example. But a hardware failure that has nothing to do with the software is also a failure.

Bug Report is a document describing the situation or sequence of actions that led to the incorrect operation of the test object, indicating the reasons and the expected result.
Header
Short Description (Summary) A short description of the problem, explicitly indicating the cause and type of error situation.
Project Name of the project being tested
Application component (Component) The name of the part or function of the product under test
Version number (Version) The version on which the error was found
Severity The most common five-level system for grading the severity of a defect is:
S1 Blocker
S2 Critical
S3 Major
S4 Minor
S5 Trivial
Priority Defect priority:
P1 High
P2 Medium
P3 Low
Status The status of the bug. Depends on the procedure used and the bug workflow and life cycle

Author (Author) Creator of the bug report
Assigned To The name of the person assigned to resolve the issue
Environment
OS / Service Pack, etc. / Browser + version /… Information about the environment where the bug was found: operating system, service pack, for WEB testing - browser name and version, etc.

Description
Steps to Reproduce Steps by which you can easily reproduce the situation that caused the error.
Actual Result (Result) The result obtained after following the steps to reproduce
Expected Result Expected correct result
Attachments
Attachment A log file, screenshot, or any other document that can help clarify the cause of the error or indicate a way to solve the problem.

Severity vs Priority
Severity is an attribute that characterizes the impact of a defect on the performance of an application.
Priority is an attribute that indicates the order in which a task or defect must be completed. We can say that this is a tool for a work planning manager. The higher the priority, the faster the defect needs to be fixed.
Severity is set by the tester.
Priority is set by the manager, team lead, or customer.

Defect severity grading (Severity)

S1 Blocker
A blocking error that brings the application to a non-working state, as a result of which further work with the system under test or its key functions becomes impossible. Solving the problem is necessary for the further functioning of the system.

S2 Critical
A critical bug, a key business logic not working properly, a security hole, a problem that temporarily crashes the server, or renders some part of the system inoperable, with no way to resolve the problem using other entry points. Solving the problem is necessary for further work with the key functions of the system under test.

S3 Major
Significant bug, part of the main business logic does not work correctly. The error is not critical, or it is possible to work with the function under test using other entry points.

S4 Minor
A minor error that does not violate the business logic of the part of the application under test, an obvious user interface problem.

S5 Trivial
A trivial error that does not concern the business logic of the application, a poorly reproducible problem that is hardly noticeable through the user interface, a problem of third-party libraries or services, a problem that does not have any impact on the overall quality of the product.

Defect Priority Grading
P1 High
The error must be corrected as soon as possible, as its presence is critical for the project.
P2 Medium
The error must be fixed, its presence is not critical, but requires a mandatory solution.
P3 Low
The error must be fixed, its presence is not critical, and does not require an urgent solution.

Testing Levels

1. Unit Testing
Component (unit) testing checks the functionality and looks for defects in parts of the application that are available and can be tested separately (program modules, objects, classes, functions, etc.).

2. Integration Testing
The interaction between the system components is checked after component testing.

3. System Testing
The main task of system testing is to test both functional and non-functional requirements in the system as a whole. This detects defects, such as incorrect use of system resources, unintended combinations of user-level data, incompatibility with the environment, unintended use cases, missing or incorrect functionality, inconvenience of use, etc.

4. Operational testing (Release Testing).
Even if the system satisfies all requirements, it is important to ensure that it satisfies the needs of the user and fulfills its role in its operating environment, as defined in the business model of the system. It should be noted that the business model may contain errors, which is why it is so important to conduct operational testing as the final step of validation. In addition, testing in the operating environment makes it possible to identify non-functional problems, such as conflicts with other systems related to the business, software, or electronic environment, or insufficient performance of the system in the operating environment. Obviously, finding such problems at the implementation stage is critical and expensive, which is why it is so important to carry out not only verification but also validation from the very early stages of software development.

5. Acceptance Testing
A formal testing process that verifies that the system meets the requirements. It is conducted in order to:
determine whether the system satisfies the acceptance criteria;
allow the customer or another authorized person to decide whether the application is accepted or not.

Kinds / types of testing

Functional types of testing

Functional testing
User Interface Testing (GUI Testing)
Security and Access Control Testing
Interoperability Testing

Non-functional types of testing

All types of performance testing:
o Load testing (Performance and Load Testing)
o Stress Testing
o stability or reliability testing (Stability / Reliability Testing)
o Volume Testing
Installation testing
Usability Testing
Failover and Recovery Testing
Configuration Testing

Types of testing associated with changes

Smoke Testing
Regression Testing
Re-testing
Build Verification Test
Sanitary testing or consistency/health testing (Sanity Testing)

Functional testing considers pre-specified behavior and is based on an analysis of the specifications of the functionality of the component or the system as a whole.

User Interface Testing (GUI Testing)- functional check of the interface for compliance with the requirements - size, font, color, consistent behavior.

Security Testing is a testing strategy used to check the security of the system, as well as to analyze the risks associated with providing a holistic approach to protecting the application against hacker attacks, viruses, and unauthorized access to confidential data.

Interoperability Testing is functional testing that tests the ability of an application to interact with one or more components or systems and includes compatibility testing and integration testing

Load Testing (Performance and Load Testing) is automated testing that simulates the work of a certain number of business users on a common (shared) resource.

Stress Testing allows you to check how the application and the system as a whole are operable under stress and also evaluate the ability of the system to regenerate, i.e. to return to normal after the cessation of exposure to stress. Stress in this context can be an increase in the intensity of operations to very high values ​​or an emergency change in the server configuration. Also, one of the tasks in stress testing can be the assessment of performance degradation, so the goals of stress testing may overlap with the goals of performance testing.

Volume testing (Volume Testing). The goal of volume testing is to get a measure of performance as the amount of data in the application database grows.

Testing stability or reliability (Stability / Reliability Testing). The task of stability (reliability) testing is to check the performance of the application during long-term (many hours) testing with an average load level.

Installation testing is aimed at verifying the successful installation and configuration, as well as updating or uninstalling the software.

Usability testing is a testing method aimed at establishing the degree of usability, learnability, understandability, and attractiveness of the developed product for users under the given conditions. This also includes:
User eXperience (UX) is the feeling experienced by the user while using a digital product, while the User interface is a tool that allows interaction between the user and the web resource.

Failover and Recovery Testing validates the product under test for its ability to withstand and recover successfully from potential failures due to software bugs, hardware failures, or communication problems (such as network failure). The purpose of this type of testing is to check recovery systems (or duplicating the main functionality of systems), which, in the event of a failure, will ensure the safety and integrity of the data of the tested product.

Configuration Testing- a special type of testing aimed at checking the operation of software under various system configurations (declared platforms, supported drivers, various computer configurations, etc.)

Smoke testing is a short cycle of tests performed to confirm that, after a code build (new or fixed), the installed application starts and performs its main functions.

Regression Testing is a type of testing aimed at verifying changes made to the application or its environment (a defect fix, a code merge, migration to a different operating system, database, web server, or application server), in order to confirm that pre-existing functionality works as before. Regression tests can be both functional and non-functional.

Re-testing - testing during which the test scripts that detected errors during their last run are executed again to confirm that these errors have been fixed.
What is the difference between regression testing and re-testing?
Re-testing - bug fixes are checked
Regression testing - it is checked that bug fixes, as well as any changes in the application code, did not affect other software modules and did not cause new bugs.

Build Test or Build Verification Test - testing aimed at determining whether the released version meets the quality criteria for starting testing. In terms of its goals, it is analogous to Smoke Testing and is aimed at accepting a new version for further testing or operation. Depending on the quality requirements for the released version, it can go deeper.

Sanitary testing (Sanity Testing) is narrowly focused testing sufficient to show that a particular function works according to the requirements stated in the specification. It is a subset of regression testing. It is used to determine whether a particular part of the application is still working after changes have been made to it or to its environment. It is usually performed manually.

Integration Testing Approaches:
Bottom Up (Bottom Up Integration)
All low-level modules, procedures, or functions are put together and then tested. After that, the next level of modules is assembled for integration testing. This approach is considered useful if all or almost all modules of the developed level are ready. Also, this approach helps to determine the level of application readiness based on the results of testing.
Top Down Integration
First, all high-level modules are tested, and low-level modules are added gradually, one by one. All lower-level modules are simulated by stubs with similar functionality and are then replaced by real, working components as they become ready. So we test from top to bottom (a small stub sketch is given after this list of approaches).
Big Bang ("Big Bang" Integration)
All or almost all developed modules are assembled together as a complete system or its main part, and then integration testing is carried out. This approach is very good for saving time. However, if the test cases and their results are not recorded correctly, then the integration process itself will be greatly complicated, which will become an obstacle for the testing team in achieving the main goal of integration testing.
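A minimal sketch of the top-down idea described above, using unittest.mock to stand in for a lower-level module that is not ready yet; the function names are invented for the example.

```python
import unittest
from unittest import mock

def fetch_exchange_rate():
    # Low-level module: assume it is not implemented yet.
    raise NotImplementedError

def convert(amount_usd):
    # High-level module under test: depends on the low-level one.
    return amount_usd * fetch_exchange_rate()

class TopDownIntegrationTest(unittest.TestCase):
    def test_convert_uses_rate_from_lower_module(self):
        # The missing low-level module is replaced by a stub with a fixed value.
        with mock.patch(f"{__name__}.fetch_exchange_rate", return_value=2.0):
            self.assertEqual(convert(10), 20.0)

if __name__ == "__main__":
    unittest.main()
```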

Testing principles

Principle 1– Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that they are not. Testing reduces the likelihood of defects in software, but even if no defects are found, it does not prove its correctness.

Principle 2– Exhaustive testing is impossible
Complete testing using all combinations of inputs and preconditions is not physically feasible except in trivial cases. Instead of exhaustive testing, risk analysis and prioritization should be used to more accurately focus testing efforts.

Principle 3– Early testing
To find defects as early as possible, testing activities should start as early as possible in the software or system development life cycle, and should be focused on specific goals.

Principle 4– Defects clustering
Testing efforts should be concentrated in proportion to the expected, and later the actual density of defects per module. As a rule, most of the defects found during testing or that caused the main number of system failures are contained in a small number of modules.

Principle 5– Pesticide paradox
If the same tests are run many times, eventually this set of test cases will no longer find new defects. To overcome this "pesticide paradox", test cases must be regularly reviewed and adjusted, and new, diverse tests must be written to cover all components of the software or system and find as many defects as possible.

Principle 6– Testing is context dependent
Testing is done differently depending on the context. For example, security-critical software is tested differently than an e-commerce site.
Principle 7– Absence-of-errors fallacy
Finding and fixing defects will not help if the created system does not suit the user and does not meet his expectations and needs.

Static and dynamic testing
Static testing differs from dynamic testing in that it is performed without running the product code. Testing is carried out by analyzing the program code (code review) or compiled code. The analysis can be performed both manually and with the help of special tools. The purpose of the analysis is to identify errors and potential problems in the product early. Static testing also includes testing specifications and other documentation.

Exploratory / ad-hoc testing
The simplest definition of exploratory testing is designing and executing tests at the same time. This is the opposite of the scripted approach (with its predefined testing procedures, whether manual or automated). Exploratory tests, unlike scripted tests, are not predetermined and are not executed exactly according to plan.

The difference between ad hoc and exploratory testing is that, in theory, anyone can conduct ad hoc testing, while exploratory testing requires skill and mastery of certain techniques. Note: mastery of certain techniques, not just testing techniques.

Requirements are a specification (description) of what should be implemented.
Requirements describe what needs to be implemented, without detailing the technical side of the solution. What, not how.

Requirements for requirements:
Correctness
Unambiguity
Completeness of the requirement set
Consistency of the requirement set
Verifiability (testability)
Traceability
Understandability

Bug life cycle

Software development stages are the stages that software development teams go through before the program becomes available to a wide range of users. Software development begins with an initial development stage (the "pre-alpha" stage) and continues through stages in which the product is refined and improved. The final step in this process is the release of the final version of the software to the market (the "public release").

The software product goes through the following stages:
analysis of project requirements;
design;
implementation;
product testing;
implementation and support.

Each stage of software development is assigned a specific serial number. Also, each stage has its own name, which characterizes the readiness of the product at this stage.

Software Development Life Cycle:
pre-alpha
Alpha
Beta
Release Candidate
Release
post-release

A decision table is a great tool for organizing complex business requirements that need to be implemented in a product. Decision tables represent sets of conditions that, when met simultaneously, must lead to a specific action.
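For illustration, a decision table can be kept directly as data; the discount rules below are invented purely for the example, the point being that each row pairs a set of conditions with the action required when all of them are met.

```python
# Each rule: a set of conditions and the action required when all of them hold.
decision_table = [
    ({"registered": True,  "order_over_100": True},  "apply 10% discount"),
    ({"registered": True,  "order_over_100": False}, "apply 5% discount"),
    ({"registered": False, "order_over_100": True},  "offer registration"),
    ({"registered": False, "order_over_100": False}, "no action"),
]

def decide(facts):
    for conditions, action in decision_table:
        if all(facts.get(name) == value for name, value in conditions.items()):
            return action
    return "no matching rule"

print(decide({"registered": True, "order_over_100": False}))  # -> apply 5% discount
```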

3. TESTS FOR THE TRAINING COURSE "SYSTEM ANALYSIS"


1. What term denotes the composition of elements and the relationships between them?

  1. Structure

  2. Integrity

  3. Element

  4. Emergence
2. What is the set of values of the essential characteristics of the system at a given time?

1. Behavior

2. Development

3. State

4. Operation

3. What are terms?

1. Names and members of sentences, certain objects of study

2. Composition of system properties

3. That which links the elements in the system

4. Part of the object, which has a certain independence in relation to the entire object

4. Which of the following is not included in a dynamic description of a system?

1. Process

2. Functors

3. System

5. Which principle of the systems approach implies the need to study an object as a complex whole made up of its constituent elements?

1. Purpose principle

2. The principle of complexity

3. The principle of integrity

4. The principle of historicism

6. What is not a life cycle stage?

1. Operation

2. Creation

3. Development

4. Management

7. What is system analysis?

1. Methodology for problem solving

2. Transfer of management functions to technical means

3. General systems theory

4. A set of scientific methods and practical techniques for solving various problems based on a systematic approach

8. What is the scientific basis of automation?

1. Automata theory

2. Philosophy

3. Computer science

4. General systems theory

9. What are principles?

1. A system of knowledge about some area of ​​the real world

2. Set of properties of the system

3. Establishing correspondence between the requirements of objective laws and subjective activity

4. The composition of the elements of the system and the links between them

10. What is a system?

1. An integral set of related elements

2. Part of the object, which has a certain independence in relation to the entire object

3. Lots of objects

4. An integral set of related objects

11. What types of connections are there?

1. Essential and non-essential

2. With control, without control

3. Dynamic, static

4. Internal, external

12. What are abstract systems?

1. Systems with material elements

2. Systems consisting of abstract elements that have no analogues in the real world

3. Systems consisting of abstract elements and having analogues in the real world

4. Systems with biological elements

13. What groups are the systems divided into in relation to the environment?

1. Natural, artificial

2. Static, dynamic

3. Open, closed

4. Active, passive

14. What main stages of the system life cycle do you know?

1. Creation, growth, maturity, destruction

2. Creation, functioning, destruction

3. Creation, debugging, functioning, destruction

4. Creation, debugging, operation

15. Within the framework of which scientific discipline are well-structured problems solved?

1. Decision theory

2. System analysis

3. Operations research

4. Game theory

16. Within the framework of what scientific discipline are weakly structured problems solved?

1. Decision theory

2. System analysis

3. Operations research

4. Theory of efficiency

17. What is the attribute of the problem?

1. Place and time of the problem

2. Difficulty

3. Scale (dimensions of discrepancy)

4. Importance

18. In systems analysis, a system is built to:

1. Studying the composition of its constituent elements

2. Identification of the problem

3. Definitions of interaction with other systems

4. Allocation of subsystems of the system

19. The technological scheme of system research includes:

1. Determining the purpose of the system, building the system, analyzing the system

2. Determining the purpose of the study, identifying the problem, solving the problem

3. General analysis of the system under study, identification of the problem, identification of directions and ways to solve the problem

4. Identification of subsystems of the system, selection of the system, analysis of the system

20. Necessary components of system analysis are:

1. Reliability, problematic, solvable, integrity

2. Integrity, quality, structure, model

3. Purpose, alternatives, resources, criterion, model

4. Set of solutions, resources, model

21. Which of the axioms are the axioms of control theory?

1. The presence of observability and controllability of the control object

2. The presence of freedom of action of the governing body in the development of control actions

3. The presence of freedom to choose control actions from a set of acceptable alternatives and resources for implementation decisions taken

4. The presence of a goal and a criterion for the effectiveness of management

22. A system with control is:

1. Decision making system

2. The system in which control is implemented

3. Cybernetic system

4. System for generating control actions

23. The principles of management are:

1. Operational management, regulation, planning

2. Hierarchical management, current management, formal management

3. Centralized control, decentralized control, combined control

4. Planning, operational management, control

24. Management functions are:

1. Accounting, control, planning, operational management

2. Regulation, forecasting, organization, evaluation

3. Evaluation, forecasting, regulation, formalization

4. Planning, operational management, organization, forecasting, accounting, control

25. W. R. Ashby's principle of requisite variety is formulated as follows:

1. The variety of the control object must be greater than the variety of the control system

2. The variety of the control system must be greater than the variety of the control object

3. The variety of the control system must be no less than the variety of the control object

4. The variety of the control system must be less than the variety of the control object

26. The task of analysis is:

1. System optimization

2. Evaluation of the effectiveness of the system

3. Revealing the structure of the system and the principles of its functioning

4. Determination of the composition of parameters and elements of the system

27. The task of synthesis is:

1. Determination of the structure and parameters of the system, based on the specified requirements for performance indicators of its functioning

2. Revealing the principles of building a system

3. Determining the optimal values ​​of system parameters

4. Finding the optimal principles for building a system

28. Determine the purpose of measurement scales

1. Comparison of the values ​​of qualitative and quantitative characteristics of objects

2. Identification of alternatives

3. Measurement of the states of objects, processes, phenomena

4. Establishing preferences for the characteristics of compared objects

29. The concept of "measurement" is:

1. An operation that associates a given observable state of an object, process, phenomenon with a certain designation

2. A set of actions to collect initial data for evaluating objects

3. Obtaining initial data about the object using the device

4. A set of rules for collecting information about the states of objects

30. The essence of the task of pairwise comparison is:

1. Determination of the qualitative characteristics of the compared objects

2. Identifying an object with more utility

3. Revealing the best of the two compared objects

4. Determining the parameters of the compared objects

31. The task of ranking is to:

1. Ordering of system objects in descending (ascending) order of the value of some attribute

2. Assigning ranks to objects of the system

3. Arrangement of system objects according to the place and time of their occurrence

4. Sorting system objects by increasing the frequency of access to them

32. The essence of the classification task is:

1. Measuring system parameters using a classification scale

2. Assigning a given element of the system to one of the subsets

3. Organizing system objects

4. Assigning a certain quantitative attribute to the objects of the system

33. The essence of the problem of numerical evaluation is:

1. Comparison to the system of one or more numbers

2. Measurement of the qualitative characteristics of the objects of the system

3. Evaluation of the essential characteristics of the system

4. Optimization of system parameters according to the selected criterion

34. An assessment task is called an examination if it:

1 . Solved with the help of experts in the study area

2. Solved with the help of consultants

3. Decided by the decision maker

4. Solved with the help of experts

35. Which of the following stages are the stages of the examination?

1. Ordering the set of outcomes of the operation according to their preference

2. Determining the usefulness of each outcome

3. Checking the obtained estimates for consistency by comparing the estimates of the utility of the outcomes

4. Eliminate inconsistencies in estimates by adjusting any option for ordering outcomes, or utilities, or both

36. Which of the following methods are methods for qualitative evaluation of systems?

1. Morphological methods

2. Methods of vector optimization

3. Scenario Type Methods

4. Method of the "tree of goals" type

37. Which of the following rules must be observed when using a brainstorming method?

1. Do not allow criticism of any idea, do not declare it false and do not stop discussion

2. It is advisable not to express non-trivial ideas

3. Provide more freedom for brainstorming participants to think and come up with new ideas

4. Welcome any ideas, even if at first they seem dubious or absurd

38. The script type method allows you to:

1. Help the researcher get an idea of ​​the problem

2. Help the researcher solve the problem

3. Get the researcher meaningful reasoning about the problem

4. Study the problem by the researcher using a computer

39. What problems are solved using methods of expert assessments?

1. Problems for which there is sufficient provision of information

2. Problems for which there is not enough provision of information

3. Problems in respect of which knowledge is not enough for the certainty and validity of the indicated hypotheses

4. Problems in respect of which knowledge is sufficient for the certainty and validity of the indicated hypotheses

40. Which of the following stages are not examination stages?

1. Formation of the goal and development of the examination procedure

2. Forming a group of experts and conducting a survey

3 . Collection of statistical data by experts

4. Analysis and processing of information

41. Which of the following procedures are not expert measurement procedures?

1. Churchman-Akoff method

2. Von Neumann-Morgenstern method

3. Lagrange method

4. Thurstone method

42. Which of the following procedures are not procedures of the Delphi method?

1. Sequence of brainstorming cycles

2. Development of individual surveys of the "scenario" type

3. Introduction of coefficients of significance of expert opinions

4. Development of a program of consecutive individual surveys

43. Which of the following procedures are not components of the PATTERN method?

1. Expanding a goal tree with a number of criteria for each level

2. Determination by experts of weights of criteria and coefficients of significance of goals

3. Revealing the links between the levels of the goal tree

4. Determination of the coefficient of communication of goals

44. What is the essence of morphological methods of qualitative evaluation of systems?

1. Systematic finding of all conceivable options for solving the problem by combining the selected elements or their features

2. Systematic finding of all conceivable options for implementing the system by combining the selected elements or their features

3. Systematic finding of the most significant options for solving a problem or implementing a system by combining selected elements or their features

4. Systematic finding of all conceivable options for building a system by combining selected elements or their features

45. What is the subject of study of decision theory?

1. Patterns of construction of complex systems

2. Patterns of selection and decision-making

3. Patterns of processing command (control) information

4. Patterns of processing state information into command information

46. Define the main task of operations research

1. Quantitative and qualitative substantiation of decisions

2. Qualitative justification of decisions

3. Preliminary quantitative justification of decisions

4. Preliminary qualitative justification of decisions

47. An operation in decision theory is:

1. The process of performing a sequence of actions in the system

2. The stage of the system functioning, limited to the fulfillment of a certain goal

3. A set of rules for building a system

4. Stage of system operation

48. Unmanaged characteristics of the system are:

1. Part of the characteristics that the governing body can change with the help of the control object and must be taken into account when choosing decisions

2. Part of the characteristics that the governing body can change with the help of the control object

3. Part of the characteristics that the control object can change

4. Part of the characteristics that the governing body cannot change with the help of the control object, but must take into account when choosing solutions

49. Managed characteristics of the system are:

1. Characteristics of the system that can be changed by the governing body

2. Characteristics of the system that can be changed by the control object

3. Selectable characteristics

4. Specified characteristics

50. Making a decision is:

1. The act of setting the values ​​of controlled characteristics

2. Determination of the composition of the controlled and unmanaged characteristics of the system

3. Determination by the governing body of the quantity, quality, place and time of the use of resources to achieve the goal

4. The act of setting the values ​​of uncontrolled characteristics

51. Solutions are called admissible:

1. For which unmanaged characteristics are defined

2. Accepted by the governing body

3. Satisfying the imposed restrictions

4. For which controlled characteristics are defined

52. An optimal solution is one that:

1. More preferable than other solutions in the field of feasible solutions

2. More preferable than other solutions in terms of a certain trait

3. Is the best in terms of system resource usage

4. Has the best values ​​of unmanaged characteristics

53. Strategy in decision theory is called:

1. The set of unmanaged characteristics taken to perform the operation

2. The set of controlled characteristics taken to perform the operation

3. The set of decisions taken to perform the operation

4. Decision made to perform the operation

54. A satisfactory choice in decision theory is:

1. Choice of a set of solutions from the domain of admissible solutions

2. Choice of any solution from the domain of admissible solutions

3. Choosing the optimal solution

4. Choice of admissible solutions

55. The outcome of the operation is:

1. The result of achieving the goal of the operation

2. Implementation of a particular solution

3. The situation that has developed (forecasted) at the time of completion of the operation

4. The final stage of the implementation of the operation

56. The effectiveness of the decision is:

1. The property of the solution to correspond to the purpose of the operation

2. The property of the system to meet the goal set for the system

3. Property of the decision, which consists in achieving the goal of the operation

4. A set of actions to select the values ​​of controlled parameters

57. Which of the terms are synonymous with the term "efficiency"?

1. Efficiency

2. Optimality

3. Fitness

4. Effectiveness

58. The indicator of the effectiveness of the solution is:

1. Parameter whose value satisfies a feasible solution

2. Indicators of the outcomes of the operation, on the basis of which the criterion of effectiveness is formed

3. Functions of indicators of the outcomes of the operation, on the basis of which the criterion of effectiveness is formed

4. Criterion of solution efficiency

59. The usefulness of the outcome of the operation is:

1. Numeric bounded function

2. The real number attributed to the outcome of the operation and characterizing its preference over other indicators with respect to the goal

3. The indicator of the outcome of the operation, which serves to compare outcomes

4. The real number attributed to the outcome of the operation

60. The utility function is:

1. Linear function for determining the type of efficiency criterion

2. Numerical bounded function defined on the set of outcomes

3. Threshold function for determining the type of efficiency criterion

4. Bounded function used to evaluate the effectiveness of solutions

61. The procedure for determining the utility function includes the following steps:

1. Identification of indicators of outcomes of the operation

2. Determination of the set of acceptable outcomes of the operation

3. Determining the usefulness of the outcomes of the operation

4. Determining the usefulness of the system

62. Ways to determine the utility function are as follows:

1. Analysis of the impact of the outcomes of the operation under study on the operation of a higher level of the hierarchy

2. Expert assessments

3. Approximation

4. Interpretation

63. The criterion of effectiveness is:

1. That by which solutions are compared when choosing

2. Parameter by which solutions are compared when choosing

3. A measure that quantifies the effectiveness of each solution and serves as the basis for choosing one of them

4. A characteristic that quantifies the effectiveness of each solution and serves as the basis for choosing one of them

64. The objective function is:

1. Solution efficiency

2. Mathematical expression of the decision efficiency criterion

3. One of the ways to write the solution efficiency criterion

4. Results of evaluating the effectiveness of the solution

65. A deterministic operation is:

1. An operation in which for each solution there are many outcomes of the operation with known distribution laws

2. An operation in which for each decision there are many outcomes of the operation

3. An operation in which for each decision there is one well-defined outcome of the operation

4. An operation in which for each solution there is one outcome of the operation with a known distribution law

66. A probabilistic operation is:

1. An operation in which each decision is associated with a set of outcomes of the operation

2. An operation in which each decision is associated with a set of outcomes of the operation with known laws of probability distribution on the outcomes

3. An operation in which each decision is associated with a set of outcomes of the operation with unknown laws of probability distribution on the outcomes

4. Operation at risk

67. An indefinite operation is:

1. An operation in which each decision corresponds to a certain outcome with an unknown probability distribution law

2. An operation in which different outcomes can correspond to each decision

3. An operation in which each decision is associated with a set of outcomes of the operation with known laws of probability distribution on the outcomes

4. An operation in which each decision can correspond to different outcomes with unknown laws of the distribution of probabilities on the outcomes

68. Which of the following steps make up the decision making process?

1. Analysis of the conditions of the operation

2. Building a model of the functioning of the system during the operation

3. Choosing the optimal solution within the framework of the constructed model

4. Formation of the decision to be made

69. What is the essence of the commission method?

1. In organizing the work of the expert group through open discussion

2. In organizing the work of the expert group through a closed discussion

3. Brainstorming

4. In a comprehensive assessment of the phenomenon, event, or process under study

70. The main properties of experts during a group examination should be:

1. Decency

2. Competence

3. Creativity

System is a Greek word that literally means a whole made up of parts. In another sense - the order determined by the correct arrangement of parts and their relationships.

The system is a set of interconnected elements, which is considered as a whole.

A system is any object that has some properties that are in some predetermined relationship.

A system is a part of reality isolated by consciousness, the elements of which reveal their commonality in the process of interaction.

Structure - a relatively stable fixation of connections between the elements of the system.

System Integrity is its relative independence from the environment and other similar systems.

Emergence is the irreducibility (degree of irreducibility) of the properties of a system to the properties of the elements of the system.

Under the behavior (functioning) of the system we mean its action in time. The change in the structure of the system over time can be considered as the evolution of the system.

The goal of a system is its preferred state.

Purposeful Behavior - the desire to achieve the goal.

Feedback - the impact of the results of the functioning of the system on the nature of this functioning.

Cybernetics (from the ancient Greek kybernetike, "the art of steering") is a branch of knowledge whose essence was formulated by N. Wiener as the science of "control and communication in machines and living organisms" in his book "Cybernetics, or Control and Communication in the Animal and the Machine" (1948).

Cybernetics is the study of systems of any nature capable of receiving, storing and processing information and using it for control and regulation. At the same time, cybernetics makes extensive use of the mathematical method and seeks to obtain specific special results that make it possible both to analyze such systems (to restore their structure based on experience in handling them) and to synthesize them (calculate schemes of systems capable of performing specified actions).

Within the framework of Wiener cybernetics, further development of systemic representations took place, namely:

1) typification of system models;

2) revealing the value of feedbacks in the system;

3) emphasizing the principle of optimality in the management and synthesis of systems;

4) the concept of information as a universal property of matter, the realization of the possibility of its quantitative description;

5) development of the methodology of modeling in general and of the machine experiment in particular, i.e. mathematical experiments carried out with the help of a computer.


3. CLASSIFICATION OF SYSTEMS

In system analysis, classification occupies a special place; it takes into account many criteria that characterize a system's structure, purpose, functioning features, and so on. The criteria listed below are most often used when classifying systems.

By their substantive nature, systems are divided into three classes:

natural, which exist in objective reality (inanimate and living nature, society). Examples of systems are an atom, a molecule, a living cell, an organism, a population, a society;

conceptual or ideal systems that reflect reality, the objective world. These include scientific theories, literary works, i.e. systems that reflect objective reality with varying degrees of completeness;

artificial, which are created by man to achieve a specific goal (technical or organizational).

When using system analysis for the tasks of synthesis and analysis of complex control systems, systems are classified according to:

type of object - technical, biological, organizational, etc.;

scientific direction - mathematical, physical, chemical, etc.;

type of formalization - deterministic, stochastic;

type - open and closed;

complexity of structure and behavior - simple and complex;

degrees of organization - well organized, poorly organized (diffuse), with self-organization.

Well-organized systems are those for which it is possible to determine the individual elements, the links between them, and the rules for combining them into subsystems, and to evaluate the links between the components of the system and its goals. In this case, the problem situation can be described in the form of mathematical dependencies that link the goal and the means of achieving it, the so-called performance criteria or efficiency evaluations. Problems of analysis and synthesis in well-organized systems are solved by analytical methods. Examples: describing the operation of an electronic device with a system of equations that takes its operating features into account; analytical models of control objects, etc.

To represent the object under study as a well-organized system, the most significant factors are singled out and secondary ones are discarded. Well-organized systems use mostly quantitative information.

Poorly organized (diffuse) systems. For such systems it is typical to display and study not all components, but only certain sets of macro-parameters and patterns, using certain sampling rules. For example, statistical regularities obtained on a sample are extended to the behavior of the whole system with certain probability indicators. Typical for these systems is the use of multicriteria problems with numerous assumptions and restrictions. Examples: queuing systems, economic and organizational systems.

In poorly organized systems, mainly qualitative information is used, in particular, fuzzy sets.

Systems with self-organization. Such systems have signs of diffuse systems: stochastic behavior and nonstationarity of parameters. At the same time, they have a well-defined ability to adapt to changing working conditions. A special case of a system with self-organization for the control of technical objects are adaptive systems with reference models or an identifier, which are considered in the discipline "Theory of automatic control".

There are a number of approaches to classifying systems by complexity and scale. For example, for control systems it is convenient to use a classification based on the number of elements:

small (10-10³ elements);

complex (10⁴-10⁷ elements);

ultra-complex (10⁸-10³⁰ elements);

supersystems (10³⁰-10²⁰⁰ elements).

A large system is always a set of material and energy resources, means of receiving, transmitting and processing information, people who make decisions at different levels of the hierarchy.

Currently, the following definitions are used for the concepts of "complex system" and "large system":

a complex system is an ordered set of structurally interconnected and functionally interacting systems of different types, united into an integral object by functionally heterogeneous relationships in order to achieve specified goals under certain conditions;

a large system combines diverse complex systems.

The definition of a system can then be written as follows: a system is an ordered set of structurally interconnected and functionally interacting elements of the same type and of any nature, combined into an integral object whose composition and boundaries are determined by the goals of the systems study.

Characteristic features of large systems:

a significant number of elements;

relationship and interaction between elements;

hierarchy of the management structure;

the presence of a person in the control loop and the need for decision-making under conditions of uncertainty.

System Models and Modeling: Types and Classification of Models

A model is an object or a description of an object (a system) used to replace, under certain conditions, assumptions or hypotheses, one system (the original) with another system in order to study the original better or to reproduce some of its properties.

The model is the result of mapping one structure (studied) to another (little studied).

Types of models.

1) A cognitive model is a form of organizing and representing knowledge, a means of combining new knowledge with old. The cognitive model is usually adjusted to reality and is a theoretical model.

2) A pragmatic model is a means of organizing practical actions, a working representation of the goals of the system for its management. In pragmatic models, reality is adjusted to the model. These are, as a rule, applied models.

3) An instrumental model is a means of constructing, exploring and/or using pragmatic and/or cognitive models. Cognitive models reflect existing relations and connections, while pragmatic models reflect relations and connections that do not yet exist but are desired and possibly feasible.

According to the level, or "depth", of modeling, models are:

empirical - based on empirical facts and dependencies;

theoretical - based on mathematical descriptions;

mixed (semi-empirical) - based on both empirical relationships and mathematical descriptions.

Modeling is a universal method for obtaining a description and using knowledge.

The modeling problem consists of three tasks:

building a model (a task that is less formalizable and constructive, in the sense that there is no algorithm for building models);

studying the model (a more formalizable task: there are methods for studying various classes of models);

using the model (a constructive and concretized task).

Lecture 9: Classification of types of system modeling

The classification of types of modeling can be carried out for various reasons. One of the classification options is shown in the figure.

Fig. - An example of the classification of types of modeling

In accordance with the classification sign of completeness, modeling is divided into: complete, incomplete, approximate.

In complete modeling, the models are identical to the object in time and space.

In incomplete modeling, this identity is not preserved.

Approximate modeling is based on similarity in which some aspects of the real object are not modeled at all. Similarity theory states that absolute similarity is possible only when one object is replaced by another exactly identical one; therefore, in modeling, absolute similarity does not occur, and researchers strive to have the model reflect well only the aspect of the system under study. For example, to assess the noise immunity of discrete information transmission channels, the functional and information models of the system may not be developed at all: to achieve the goal of modeling, an event model described by the matrix of conditional probabilities of transition from the i-th symbol of the alphabet to the j-th is quite sufficient.
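As a small illustration of such an event model, the following sketch describes a noisy discrete channel purely by its matrix of conditional transition probabilities and estimates its symbol error rate by replaying the channel. The 3-symbol alphabet and the probability values are assumptions made for illustration, not taken from the text.

```python
import random

# Event model of a discrete channel: P[i][j] is the conditional probability
# that transmitted symbol i is received as symbol j (assumed values).
P = [
    [0.90, 0.05, 0.05],
    [0.04, 0.92, 0.04],
    [0.06, 0.06, 0.88],
]

def transmit(i):
    """Draw the received symbol j for a transmitted symbol i."""
    return random.choices(range(3), weights=P[i])[0]

# Estimate the symbol error probability of the channel by replaying it.
trials = 100_000
errors = 0
for _ in range(trials):
    i = random.randrange(3)        # symbol actually sent (uniform source, assumed)
    if transmit(i) != i:
        errors += 1
print("estimated symbol error rate:", errors / trials)
```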

Depending on the type of carrier and the model signature, the following types of modeling are distinguished: deterministic and stochastic, static and dynamic, discrete, continuous and discrete-continuous.

Deterministic modeling describes processes in which the absence of random influences is assumed.

Stochastic modeling takes probabilistic processes and events into account.

Static modeling serves to describe the state of an object at a fixed point in time, while dynamic modeling is used to study the object over time. In doing so, one operates with analog (continuous), discrete and mixed models.
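The difference between these types can be shown on a hypothetical first-order model (the coefficients below are assumptions made for illustration): the same dynamic law is simulated once without and once with random influences, while a static model only relates variables at a fixed moment.

```python
import random

# Assumed first-order model: x[k+1] = a*x[k] + u.
a, u, x0, steps = 0.8, 1.0, 0.0, 10

def deterministic():
    """Dynamic, deterministic variant: no random influences."""
    x = x0
    traj = []
    for _ in range(steps):
        x = a * x + u
        traj.append(x)
    return traj

def stochastic(sigma=0.2):
    """Dynamic, stochastic variant: the same law plus a random disturbance."""
    x = x0
    traj = []
    for _ in range(steps):
        x = a * x + u + random.gauss(0.0, sigma)
        traj.append(x)
    return traj

# A static model, by contrast, only relates variables at a fixed moment,
# e.g. the steady state x* = u / (1 - a) of the same system.
print(deterministic()[-1], stochastic()[-1], u / (1 - a))
```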

Depending on the form of implementation of the carrier and signature, modeling is classified into mental and real.

Mental modeling is used when models are not realizable in a given time interval or there are no conditions for their physical creation (for example, the situation of the microworld). Mental modeling of real systems is realized in the form of visual, symbolic and mathematical. A significant number of tools and methods have been developed to represent functional, informational and event models of this type of modeling.

In visual modeling, visual models are created on the basis of human ideas about real objects; these models display the phenomena and processes occurring in the object. Examples of such models are educational posters, drawings, charts and diagrams.

Hypothetical modeling is based on a hypothesis about the patterns of the process in the real object; the hypothesis reflects the researcher's level of knowledge about the object and relies on the cause-and-effect relationships between the input and output of the object under study. This type of modeling is used when the knowledge about the object is insufficient to build formal models. Analog modeling is based on applying analogies of various levels. For sufficiently simple objects, the highest level is complete analogy. As the system becomes more complex, analogies of subsequent levels are used, in which the analog model reflects several (or only one) aspects of the object's functioning.

Mock-up modeling is used when the processes occurring in the real object are not amenable to physical modeling, or it may precede other types of modeling. The construction of mental mock-ups is likewise based on analogies, usually resting on the cause-and-effect relationships between the phenomena and processes in the object.

Symbolic modeling is an artificial process of creating a logical object that replaces the real one and expresses its main properties using a certain system of signs and symbols.

The basis of language modeling is a certain thesaurus, which is formed from a set of concepts of the studied subject area, and this set must be fixed. A thesaurus is a dictionary that reflects the relationships between words or other elements of a given language, designed to search for words by their meaning.

A traditional thesaurus consists of two parts: a list of words and set phrases grouped according to semantic (thematic) headings, and an alphabetical dictionary of keywords that define classes of conditional equivalence, together with an index of relationships between keywords in which the corresponding headings are indicated for each word. Such a construction makes it possible to define semantic (meaning-based) relations of the hierarchical (genus/species) and non-hierarchical (synonymy, antonymy, associations) type.

There are fundamental differences between a thesaurus and a regular dictionary. A thesaurus is a dictionary that has been cleared of ambiguity: each word in it corresponds to only a single concept, whereas in an ordinary dictionary several concepts may correspond to one word.
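A possible, deliberately small way to represent such a thesaurus in code is a map from a keyword to its hierarchical and non-hierarchical relations; the terms used below are illustrative assumptions, not entries from any real thesaurus.

```python
# Each keyword maps to hierarchical (genus/species) and non-hierarchical
# (synonyms, associations) relations, as described above.
thesaurus = {
    "system": {
        "broader": ["object"],
        "narrower": ["technical system", "biological system"],
        "synonyms": ["complex"],
        "associations": ["structure", "element", "environment"],
    },
    "element": {
        "broader": ["component"],
        "narrower": [],
        "synonyms": [],
        "associations": ["system", "link"],
    },
}

def related(term):
    """Search 'by meaning': return every term connected to the given keyword."""
    entry = thesaurus.get(term, {})
    return sorted({t for terms in entry.values() for t in terms})

print(related("system"))
```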

If we introduce symbols, i.e. signs, for individual concepts, as well as certain operations between these signs, we can implement sign modeling and use the signs to represent a set of concepts, composing separate chains of words and sentences. Using the set-theoretic operations of union, intersection and complement, it is possible to describe some real object in separate symbols.
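A minimal sketch of this kind of sign modeling, with assumed concept sets: the signs stand for concepts, and the object is described through union, intersection and complement.

```python
# Signs (here, strings) denote concepts; the concept sets are illustrative assumptions.
technical = {"element", "link", "structure", "control"}
biological = {"element", "link", "structure", "adaptation"}
universe = technical | biological            # union of all concepts used

common = technical & biological              # intersection: concepts shared by both descriptions
only_technical = technical - biological      # relative complement: technical concepts absent from the biological set
print(common, only_technical, universe - technical)
```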

Mathematical modeling is the process of establishing a correspondence between a given real object and some mathematical object, called a mathematical model. In principle, to study the characteristics of any system by mathematical methods, including machine methods, this process must be formalized, i.e. a mathematical model must be built. The form of the mathematical model depends both on the nature of the real object and on the tasks of studying the object, as well as on the required reliability and accuracy of the problem solution. Any mathematical model, like any other model, describes a real object only with a certain degree of approximation.

Various notation forms can be used to represent mathematical models. The main ones are invariant, analytical, algorithmic and circuit (graphic).

An invariant form is a record of the model relations in a traditional mathematical language, independent of the method used to solve the model equations. In this case, the model can be represented as a set of inputs, outputs, state variables and global equations of the system. The analytical form is a record of the model obtained as a result of solving its initial equations; typically, models in analytical form are explicit expressions of the output parameters as functions of the inputs and state variables.

Analytical modeling is characterized by the fact that basically only the functional aspect of the system is modeled. In this case, the global equations of the system that describe the law (algorithm) of its functioning are written in the form of some analytical relations (algebraic, integro-differential, finite-difference, etc.) or logical conditions. The analytical model is studied by several methods:

analytical, when they strive to obtain explicit dependencies in a general form, connecting the desired characteristics with the initial conditions, parameters and state variables of the system;

numerical, when, not being able to solve equations in a general form, they strive to obtain numerical results with specific initial data (recall that such models are called digital);

qualitative, when, without having a solution in an explicit form, you can find some properties of the solution (for example, evaluate the stability of the solution).
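The three approaches above can be contrasted on an assumed toy model, dx/dt = -k*x (not an example from the manual): the analytical study gives the explicit solution, the numerical study integrates the same equation for specific data, and the qualitative study only establishes that the solution decays.

```python
import math

# Assumed model: dx/dt = -k*x with k > 0.
k, x0, T, h = 0.5, 1.0, 4.0, 0.01

# 1) Analytical study: an explicit general dependence is available, x(t) = x0*exp(-k*t).
x_exact = x0 * math.exp(-k * T)

# 2) Numerical study: the same equation integrated with the explicit Euler
#    scheme for specific initial data.
x = x0
t = 0.0
while t < T:
    x = x + h * (-k * x)
    t += h
x_numeric = x

# 3) Qualitative study: without solving at all, k > 0 implies the solution
#    decays, i.e. the equilibrium x = 0 is stable.
print(x_exact, x_numeric)
```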

At present, computer-based methods for studying the characteristics of the process of functioning of complex systems are widespread. To implement a mathematical model on a computer, an appropriate modeling algorithm must be built.

Algorithmic form - a record of the relationship between the model and the selected numerical solution method in the form of an algorithm. Among the algorithmic models, an important class is made up of simulation models designed to simulate physical or informational processes under various external influences. Actually, the imitation of these processes is called simulation modeling.

In simulation modeling, the algorithm of the system's functioning in time, i.e. its behavior, is reproduced: the elementary phenomena that make up the process are imitated while preserving their logical structure and sequence. This makes it possible to obtain, from the initial data, information about the states of the process at particular points in time and thus to evaluate the characteristics of the system. The main advantage of simulation modeling over analytical modeling is the ability to solve more complex problems. Simulation models make it fairly easy to take into account such factors as the presence of discrete and continuous elements, non-linear characteristics of the system's elements, numerous random effects and others that often create difficulties for analytical studies. At present, simulation modeling is the most effective method for studying systems, and often the only practically accessible way of obtaining information about the behavior of a system, especially at the design stage.
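A deliberately small sketch of simulation modeling, under assumed arrival and service probabilities: the functioning of a single-server queue is reproduced step by step in time, and its average queue length is obtained from the recorded states.

```python
import random

# Discrete-time imitation of a single-server queue.
# Arrival and service probabilities per time step are assumed for illustration.
random.seed(1)
p_arrival, p_service, steps = 0.3, 0.4, 10_000

queue_len = 0
total_len = 0
for _ in range(steps):
    if random.random() < p_arrival:                  # elementary event: a new request arrives
        queue_len += 1
    if queue_len and random.random() < p_service:    # elementary event: the server finishes a request
        queue_len -= 1
    total_len += queue_len                           # record the state at this time step

print("average queue length:", total_len / steps)
```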

In simulation, a distinction is made between the method of statistical tests (Monte Carlo) and the method of statistical modeling.

The Monte Carlo method is a numerical method used to simulate random variables and functions whose probabilistic characteristics coincide with the solutions of analytical problems. It consists in the repeated reproduction of processes that are realizations of random variables and functions, followed by processing of the resulting information by the methods of mathematical statistics.

If this technique is used for machine simulation in order to study the characteristics of the processes of functioning of systems subject to random influences, then this method is called the method of statistical modeling.
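A minimal sketch of the statistical test (Monte Carlo) idea, using an assumed textbook experiment: the probability that a random point of the unit square falls inside the quarter circle equals pi/4, so repeated reproduction of the experiment and statistical processing of the outcomes yield an estimate of pi.

```python
import random

# Repeatedly reproduce the random experiment and process the outcomes statistically.
random.seed(42)
n = 200_000
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))
print("estimate of pi:", 4 * hits / n)   # should be close to 3.14159...
```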

The simulation method is used to evaluate options for the system structure, the effectiveness of various system control algorithms, and the impact of changing various system parameters. Simulation modeling can be used as the basis for structural, algorithmic and parametric synthesis of systems, when it is required to create a system with specified characteristics under certain restrictions.

Combined (analytical-simulation) modeling unites the advantages of analytical and simulation modeling. When building combined models, the process of the object's functioning is first decomposed into constituent subprocesses; analytical models are used for those subprocesses where this is possible, and simulation models are built for the remaining ones. This approach makes it possible to cover qualitatively new classes of systems that cannot be studied using analytical or simulation modeling alone.

Informational (cybernetic) modeling is associated with studying models in which there is no direct similarity between the physical processes occurring in the model and the real processes. In this case one seeks to reflect only some function, treating the real object as a "black box" with a number of inputs and outputs, and to model certain connections between the outputs and the inputs. Thus, informational (cybernetic) models are based on reflecting certain information-management processes, which makes it possible to evaluate the behavior of the real object. To build a model in this case, it is necessary to single out the investigated function of the real object, to try to formalize this function in the form of some operators linking the input and the output, and to reproduce this function on a simulation model, in a completely different mathematical language and, of course, with a different physical implementation of the process. For example, expert systems are models of decision makers.
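As a small illustration of this "black box" view (the recorded observations below are hypothetical), the object is modeled only through a fitted relation between its inputs and outputs, with no assumptions about the physical processes inside.

```python
# The real object is observed only through input-output pairs; the model is an
# ordinary least-squares line linking them (hypothetical measurements).
inputs  = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [0.1, 2.1, 3.9, 6.2, 7.9]

n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(outputs) / n
sxx = sum((x - mean_x) ** 2 for x in inputs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

def black_box_model(x):
    """Predicted output for a new input, using only the fitted input-output link."""
    return slope * x + intercept

print(black_box_model(2.5))
```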

Structural modeling in system analysis is based on specific features of structures of a certain type, which are used as a means of studying systems or serve to develop, on their basis, specific approaches to modeling using other methods of formalized representation of systems (set-theoretic, linguistic, cybernetic, etc.). Object-oriented modeling is a development of structural modeling.

Structural modeling in system analysis includes:

network modeling methods;

combination of structuring methods with linguistic ones;

a structural approach in the direction of formalizing the construction and study of structures of various types (hierarchical, matrix, arbitrary graphs) based on set-theoretic representations and the concept of a nominal scale of measurement theory.

At the same time, the term "model structure" can be applied both to functions and to system elements. The corresponding structures are called functional and morphological. Object-oriented modeling combines structures of both types into a class hierarchy that includes both elements and functions.

Over the past decade, a new technology, CASE, has become widespread in structural modeling. The abbreviation CASE has a double interpretation, corresponding to two areas in which CASE systems are used. The first, Computer-Aided Software Engineering, translates as computer-aided software design; the corresponding CASE systems are often referred to as Rapid Application Development (RAD) tool environments. The second, Computer-Aided System Engineering, emphasizes the focus on supporting the conceptual modeling of complex, mostly semi-structured systems; such CASE systems are often referred to as BPR (Business Process Reengineering) systems. In general, CASE technology is a set of methodologies for analyzing, designing, developing and maintaining complex automated systems, supported by a set of interconnected automation tools. CASE is a toolkit for system analysts, developers and

MOSCOW STATE UNIVERSITY OF TECHNOLOGY AND MANAGEMENT

(formed in 1953)

Department of Physics and Higher Mathematics

A.R. Sadykova

THEORY OF DECISION MAKING.

SYSTEM THEORY AND SYSTEM ANALYSIS

Educational and practical guide

for students of specialty 2202

all forms of education

www.msta.ru

Moscow - 2004

© Sadykova A.R. Theory of decision making. Theory of systems and system analysis. Textbook for students of specialty 2202, all forms of education. – MGUTU, 2004

The manual contains a summary of the main theoretical information and specific methods of decision-making necessary for practical application in professional activities.

The issues under consideration correspond to state educational standards.

Specific questions and tests proposed in the manual will help students to independently study the sections "Decision Making Methods" and "Systems Theory and System Analysis".

The manual is intended for students studying in the specialty 2202.

Reviewers: Assoc. Prof., Cand. Tech. Sci. Latysheva E.I.; Assoc. Prof., Cand. Tech. Sci. Deniskin Yu.D.

Editor: Sveshnikova N.I.

© Moscow State University of Technology and Management, 2004

109004, Moscow, Zemlyanoy Val, 73

Goals and objectives of the discipline

1. Chapter I. Basic concepts and definitions
1.1 Decision making as a human activity
1.2 Mathematical models of decision making
Self-test questions for Chapter I
Chapter I test

2. Chapter II. Mathematical models of resource optimization and decision making
2.1 General case of the mathematical formulation of the optimization problem
2.2 Optimization methods and resource allocation based on linear programming problems
2.3 Methods of multivariable optimization in planning, management and decision-making processes
2.4 Linear programming problems in operational production control and decision making
Self-test questions for Chapter II
Chapter II test

3. Chapter III. Nonlinear programming problems in the process of optimizing decision-making resources
3.1 Analytical methods for solving unconstrained optimization problems
3.2 Constrained optimization problems and methods for their solution
Self-test questions for Chapter III
Chapter III test

4. Chapter IV. Game-theoretic models of decision making
4.1 Matrix games
4.2 Positional games
4.3 Bimatrix games
Self-test questions for Chapter IV
Chapter IV test

5. Chapter V. Operations research
5.1 Dynamic programming
5.2 Elements of inventory management theory
5.3 Queuing theory
Self-test questions for Chapter V
Chapter V test

6. Discipline test
7. Self-test questions
8. Glossary of basic concepts
9. Literature
10. Answers to tests

Goals and objectives of the discipline.

Decision theory.

Goals - to familiarize students with the content of the decision-making problem, its place and role in the management process. Along with mastering the basic concepts, students will study the basic, classical problems of decision-making theory and the methods for solving them, which form the foundation for the further development of decision-making methods and also serve as a practical tool for solving many applied management problems.

Objectives: to have an understanding of the following concepts: the decision-making function; the decision-making process; the general decision-making problem and its content; the methods of decision theory; the main problems and the methods for solving them.

Know - the basic concepts, methods and rules for solving decision-making problems. Acquire the skills of solving problems and evaluating the correctness of the results obtained.

Theory of systems and system analysis.

Goals - the study and mastery of the basic concepts and laws of systems theory and system analysis.

The student must know:

Basic principles for compiling mathematical models for making optimal decisions in a conflict;

Mathematical apparatus of systems theory and system analysis: methods for solving differential and integral equations; combinatorics; probability theory and mathematical statistics;

Types and provisions of game theory.

The student must be able to:

investigate the simplest problems of systems theory;

relate problems of system analysis to the methods and concepts of cybernetics and informatics;

reduce the simplest problems of game theory to linear programming problems.