---
compatible_cuba_versions: 7.1+
compatible_java_versions: 8+
project_id: cuba-petclinic-unit-testing
permalink: unit-testing
---
= Unit Testing in CUBA Applications
:showtitle:
:sectlinks:
:sectanchors:
:page-navtitle: Unit Testing in CUBA applications
:page-excerpt: In this guide you will learn how automated testing in a CUBA application works. In particular this guide deals with the question of how unit tests can be executed and when it makes sense to use them over integration testing.
:page-root: ../../../
:project_id: cuba-petclinic-unit-testing
:java_version: 8
:cuba_version: 7.1
:page-icone: https://www.cuba-platform.com/guides/images/unit-testing/guide_icone.svg
In this guide you will learn how automated testing in a CUBA application works. In particular, this guide deals with the question of how unit tests can be executed and when it makes sense to use them over integration testing.
== What we are going to build
This guide enhances the https://github.com/cuba-platform/cuba-petclinic[CUBA Petclinic] example to show how existing features can be tested in an automated fashion via unit testing:
* display amount of pets of a given pet type for a particular Owner
* calculation of next regular checkup date proposals for a pet
include::includes/guide_requirements.adoc[]
== Overview
In this guide you will learn how to use the techniques of unit testing without dependencies. This approach differs from the other discussed variant of middleware integration testing (see https://www.cuba-platform.com/guides/integration-testing-middleware[CUBA guide: middleware integration testing]), where a close-to-production test environment is created for the test to run.
In the case of unit testing, the test environment is very slim, because essentially no runtime of the CUBA framework is provided. In exchange for giving up this convenience, unit testing allows you to spin up the system under test more easily and bring it to the proper state for exercising the test case. It also cuts out a whole lot of problems regarding test data setup and is generally faster by orders of magnitude.
=== Unit Testing
Besides the integration test environment that you learned about in the middleware integration testing guide, it is also possible to create test cases that do not spin up the Spring application context or any CUBA Platform APIs at all.
The main benefits of writing a unit test in an isolated fashion without an integration test environment are:
* locality of the test scenario
* test execution speed
* easier test fixture setup
In this case, the class under test is instantiated directly via its constructor.
Since Spring is not involved in managing the instantiation of the class, dependency injection does not work in this scenario. Therefore, dependencies (objects that the class under test relies upon) must be instantiated manually and passed into the class under test directly.
In case those dependencies are CUBA APIs, they have to be mocked.
=== Mocking
Mocking / stubbing is a common technique in test automation that helps to emulate, control, and verify certain external parts that the system under test interacts with, in order to isolate the SUT from its outside world as much as possible.
image::unit-testing/testing-doc-replacement.png[align="center"]
As you can see in the diagram, there is one class that acts as the system to be tested (system under test: SUT). This class uses another class that fulfills some additional need. Such a class is normally referred to as a dependency.
Stubbing or Mocking is now doing the following:
To isolate the SUT from any other dependency and to gain test locality, the dependency is replaced with a Stub implementation. This object plays the role of the real dependency, but in the test case you control the behavior of the Stub. With that, you can directly influence the SUT's behavior, even if it interacts with another dependency that would otherwise be out of direct control.
==== Mocking Frameworks
JUnit itself does not contain mocking capabilities. Instead, dedicated mocking frameworks allow instantiating Stub / Mock objects and controlling their behavior in a JUnit test case. In this guide you will learn about Mockito, a very popular stubbing / mocking framework in the Java ecosystem.
==== An Example of Mocking
This example shows how to configure a Stub object. In this case it will replace the behavior of the `TimeSource` API from CUBA, which allows retrieving the current timestamp. The Stub object should return yesterday. The way to define the behavior of a Stub / Mock object with Mockito is to use the Mockito API as shown at (3) in the following listing:
.MockingTest.java
[source,java]
----
@ExtendWith(MockitoExtension.class) // <1>
class MockingTest {

    @Mock
    private TimeSource timeSource; // <2>

    @Test
    public void theBehaviorOfTheTimeSource_canBeDefined_inATestCase() {

        // given:
        ZonedDateTime now = ZonedDateTime.now();
        ZonedDateTime yesterday = now.minusDays(1);

        // and: the timeSource Mock is configured to return yesterday when now() is called
        Mockito
            .when(timeSource.now())
            .thenReturn(yesterday); // <3>

        // when: the current time is retrieved from the TimeSource Interface
        ZonedDateTime receivedTimestamp = timeSource.now(); // <4>

        // then: the received Timestamp is not now
        assertThat(receivedTimestamp)
            .isNotEqualTo(now);

        // but: the received Timestamp is yesterday
        assertThat(receivedTimestamp)
            .isEqualTo(yesterday);
    }
}
----
<1> the `MockitoExtension` activates the automatic instantiation of the `@Mock` annotated attributes
<2> the `TimeSource` instance is marked as a `@Mock` and will be instantiated via Mockito
<3> the Mock behavior is defined in the test case
<4> invoking the method `now` returns yesterday instead of the actual current time
The complete test case can be found in the example: https://github.com/cuba-guides/cuba-petclinic-unit-testing/blob/master/modules/core/test/com/haulmont/sample/petclinic/MockingTest.java[MockingTest.java]. With that behavior description in place, the one thing left is that the Stub object instance must be manually passed into the class / system under test.
=== Constructor Based Injection
To achieve this, one thing in the production code should be changed. CUBA and Studio use field injection by default. This means that if a Spring bean A has a dependency on another Spring bean B, the dependency is declared by creating a field of type B in class A like this:
[source,java]
----
@Component
class SpringBeanA {

    @Inject // <1>
    private SpringBeanB springBeanB; // <2>

    public double doSomething() {
        int result = springBeanB.doSomeHeavyLifting(); // <3>
        return result / 2.0;
    }
}
----
<1> `@Inject` tells Spring to instantiate a SpringBeanA instance with the correctly injected SpringBeanB instance
<2> the field declaration indicates a dependency
<3> the dependency can be used in the business logic of A
In the context of unit testing, this pattern does not work well, because the field cannot be easily set without a corresponding method to inject the dependency. One way to create this dedicated injection point is constructor-based injection. In this case, the class that needs a dependency defines a dedicated constructor, annotates it with `@Inject`, and declares the desired dependencies as constructor parameters.
[source,java]
----
@Component
class SpringBeanAWithConstructorInjection {

    private final SpringBeanB springBeanB;

    @Inject // <1>
    public SpringBeanAWithConstructorInjection(
        SpringBeanB springBeanB // <2>
    ) {
        this.springBeanB = springBeanB; // <3>
    }

    public double doSomething() {
        int result = springBeanB.doSomeHeavyLifting();
        return result / 2.0;
    }
}
----
<1> `@Inject` tells Spring to call the constructor and treat the parameters as dependencies
<2> the dependencies are defined as Constructor parameters
<3> the dependency instance is manually stored as a field
In production the code still works as before, because Spring supports both kinds of injection. In the test case, we can call this constructor directly and pass in the mocked instances configured through Mockito.
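To illustrate, here is a minimal sketch of such a test, using a hand-written stub instead of Mockito so that the example stays self-contained. The class names follow the listing above; the stub's return value of 10 is an arbitrary assumption for the example.

```java
// Hypothetical stand-in for the SpringBeanB dependency from the example above.
class SpringBeanB {
    public int doSomeHeavyLifting() {
        return 42; // imagine an expensive computation here
    }
}

class SpringBeanAWithConstructorInjection {

    private final SpringBeanB springBeanB;

    public SpringBeanAWithConstructorInjection(SpringBeanB springBeanB) {
        this.springBeanB = springBeanB;
    }

    public double doSomething() {
        int result = springBeanB.doSomeHeavyLifting();
        return result / 2.0;
    }
}

public class ConstructorInjectionTestSketch {

    public static void main(String[] args) {
        // given: a stub that overrides the heavy computation with a fixed value
        SpringBeanB stub = new SpringBeanB() {
            @Override
            public int doSomeHeavyLifting() {
                return 10;
            }
        };

        // when: the SUT is instantiated directly via its constructor
        SpringBeanAWithConstructorInjection sut =
            new SpringBeanAWithConstructorInjection(stub);

        // then: the SUT operates on the stubbed value
        System.out.println(sut.doSomething()); // prints 5.0
    }
}
```

The same shape applies when the stub is created via Mockito: the mock object is simply the constructor argument.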
== Tests for Petclinic Functionality
Next, you will see the different examples on how to use the unit test capabilities to automatically test the functionality of the Petclinic example project in a fast and isolated manner.
=== Pet Amount for a Particular Owner
The first test case deals with the calculation of the amount of pets that are associated with an Owner for a given pet type. This feature is used in the UI to display this number when the user selects one Owner in the corresponding browse screen.
image::unit-testing/pets-of-type.png[align="center"]
The following business rules describe the functionality:
* If the pet type of a pet matches the asked pet type, the pet should be counted, otherwise not.
==== Implementation
The implementation logic is located in the Owner Entity class itself:
.Owner.java
[source,java]
----
class Owner extends Person {

    // ...

    public long petsOfType(PetType petType) {
        return pets.stream()
            .filter(pet ->
                Objects.equals(petType.getName(), pet.getType().getName())
            )
            .count();
    }
}
----
It uses the Java 8 Streams API to iterate over the list of pets, keeping only the ones whose pet type name matches. Afterwards it counts the pets that passed the filter.
==== Test Cases for the Happy Path
We will start with the three test cases that verify the above described business rule.
.OwnerTest.java
[source,java]
----
class OwnerTest {

    PetclinicData data = new PetclinicData();

    private PetType electric;
    private Owner owner;
    private PetType fire;

    @BeforeEach
    public void initTestData() {
        electric = data.electricType();
        fire = data.fireType();
        owner = new Owner();
    }

    @Test
    public void aPetWithMatchingPetType_isCounted() {

        // given:
        owner.pets = Arrays.asList( // <1>
            data.petWithType(electric)
        );

        // expect:
        assertThat(owner.petsOfType(electric)) // <2>
            .isEqualTo(1);
    }

    @Test
    public void aPetWithNonMatchingPetType_isNotCounted() {

        // given:
        owner.pets = Arrays.asList(
            data.petWithType(electric)
        );

        // expect:
        assertThat(owner.petsOfType(fire))
            .isEqualTo(0);
    }

    @Test
    public void twoPetsMatch_andOneNot_twoIsReturned() {

        // given:
        owner.pets = Arrays.asList(
            data.petWithType(electric),
            data.petWithType(fire),
            data.petWithType(electric)
        );

        // expect:
        assertThat(owner.petsOfType(electric))
            .isEqualTo(2);
    }
}
----
<1> the pets are configured for the owner
<2> `petsOfType` is executed with a particular type and the outcome is verified
The first test case is a positive test case with the smallest possible test fixture setup. The second test case is of the same kind, but for a negative expectation. The third test case is a combination of both.
Oftentimes it is a good idea to make the test data as small and clear as possible, because it better shows the essence of the test case.
With those tests in place, we have covered all code paths that lie along the happy path.
TIP: Happy path is a term describing the code execution path that works with valid input values and no error scenarios.
But if we think a little bit outside of this happy path, we will find a couple of more test cases to be written.
==== Testing Edge Cases
When thinking of the edge cases of that method, there are multiple error scenarios to consider. Just imagine the following situation:
The `LookupField` in the UI does not require a pet type to be entered. If the user leaves it empty and clicks OK, the passed-in `PetType` will be `null`. What happens in the implementation in this case? It will produce a `NullPointerException`.
TIP: Thinking about edge cases in tests is very important because it reveals bugs in the implementation that otherwise can be found only at the application runtime.
Now, to deal with that behavior, we will create a test case that proves this hypothesis:
.OwnerTest.java
[source,java]
----
class OwnerTest {

    // ...

    @Test
    public void whenAskingForNull_zeroIsReturned() {

        // expect:
        assertThat(owner.petsOfType(null))
            .isEqualTo(0);
    }
}
----
By defining that edge case, we also defined the expected behavior of the API. In this case we defined that the return value should be zero. Running the test case shows the existence of the `NullPointerException` in this situation.
With this knowledge and a test case that will tell us when we have reached the success state, we can adjust the implementation to behave according to our specification:
.Owner.java
[source,java]
----
class Owner {

    public long petsOfType(PetType petType) {

        if (petType == null) {
            return 0L;
        }

        return pets.stream()
            .filter(pet -> Objects.equals(petType.getName(), pet.getType().getName()))
            .count();
    }
}
----
Running the test case once again, we will see that the application can now deal with "invalid" inputs and react to them accordingly.
The next, similar edge case is when one of the owner's pets has no type assigned. This is possible with the current implementation of the entity model: `Pet.type` is not required.
In this case, we would like to exclude those pets from counting. The following test case will show the desired behavior:
.OwnerTest.java
[source,java]
----
class OwnerTest {

    // ...

    @Test
    public void petsWithoutType_areNotConsideredInTheCounting() {

        // given:
        Pet petWithoutType = data.petWithType(null);

        // and:
        owner.pets = Arrays.asList(
            data.petWithType(electric),
            petWithoutType
        );

        // expect:
        assertThat(owner.petsOfType(electric))
            .isEqualTo(1);
    }
}
----
Executing the test shows another `NullPointerException` in the implementation. The following change will make the test case pass:
.Owner.java
[source,java]
----
class Owner {

    public long petsOfType(PetType petType) {

        if (petType == null) {
            return 0L;
        }

        return pets.stream()
            .filter(pet -> Objects.nonNull(pet.getType())) // <1>
            .filter(pet -> Objects.equals(petType.getName(), pet.getType().getName()))
            .count();
    }
}
----
<1> pets without a type will be excluded before doing the comparison
With that, the first example unit test is complete. As you have seen, thinking about the edge cases is just as necessary as testing the happy path. Both variants are required for a complete test coverage and a comprehensive test suite to rely on.
One additional thing to note here is that the test does not cover the end-to-end feature from the UI to the business logic (and potentially the database). This is the very nature of a unit test. Instead, the normal approach would be to create other unit tests for the other parts in isolation as well. Oftentimes a smaller set of integration tests would also be created to verify the correct interaction of the different components.
TIP: The workflow of writing the test case first to identify a bug / develop a feature / prove a particular scenario is called `Test Driven Development` - TDD. With this workflow you get very good test coverage as well as some other benefits. In the case of fixing a bug, it also gives you a clear end point where you know that the bug is fixed.
=== Next Regular Checkup Date Proposal
The next, more comprehensive example is the Petclinic functionality that allows the user to find out when the next regular checkup is scheduled for a selected pet. The next regular checkup date is calculated based on a couple of factors: the pet type, which determines the interval, and the date of the last regular checkup.
The calculation is triggered from the UI in a dedicated Visit Editor for creating Regular Checkups: https://github.com/cuba-guides/cuba-petclinic-unit-testing/blob/master/modules/web/src/com/haulmont/sample/petclinic/web/visit/visit/VisitCreateRegularCheckup.java[VisitCreateRegularCheckup]. As part of that logic triggered by the UI, the following service method is used, which contains the business logic of the calculation:
.RegularCheckupService.java
[source,java]
----
public interface RegularCheckupService {

    @Validated
    LocalDate calculateNextRegularCheckupDate(
        @RequiredView("pet-with-owner-and-type") Pet pet,
        List<Visit> visitHistory
    );
}
----
The implementation of the service declares two dependencies that it needs in order to perform the calculation. The dependencies are declared in the constructor so that the unit test can inject its own implementations.
.RegularCheckupServiceBean.java
[source,java]
----
@Service(RegularCheckupService.NAME)
public class RegularCheckupServiceBean implements RegularCheckupService {

    final protected TimeSource timeSource;
    final protected List<RegularCheckupDateCalculator> calculators;

    @Inject
    public RegularCheckupServiceBean(
        TimeSource timeSource,
        List<RegularCheckupDateCalculator> calculators
    ) {
        this.timeSource = timeSource;
        this.calculators = calculators;
    }

    @Override
    public LocalDate calculateNextRegularCheckupDate(
        Pet pet,
        List<Visit> visitHistory
    ) {
        // ...
    }
}
----
The first dependency is the `TimeSource` API from CUBA for retrieving the current date. The second dependency is a list of `RegularCheckupDateCalculator` instances.
The service implementation does not contain the calculation logic for the different pet types within the `calculateNextRegularCheckupDate` method itself. Instead, it knows about the possible calculator classes. It filters out the calculator that can calculate the next regular checkup date for the pet and delegates the calculation to it.
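A possible shape of this "first supporting calculator wins, otherwise default" orchestration is sketched below. This is a simplified, self-contained illustration rather than the project's actual code: the nested `Calculator` interface is an assumption that reduces a `Pet` to its type name and omits the visit history and `TimeSource` parameters.

```java
import java.time.LocalDate;
import java.util.List;

public class OrchestrationSketch {

    // Hypothetical, reduced stand-in for RegularCheckupDateCalculator:
    // the pet is represented by its type name only.
    interface Calculator {
        boolean supports(String petTypeName);
        LocalDate calculate(String petTypeName);
    }

    private final List<Calculator> calculators;

    public OrchestrationSketch(List<Calculator> calculators) {
        this.calculators = calculators;
    }

    public LocalDate calculateNextRegularCheckupDate(String petTypeName) {
        return calculators.stream()
            .filter(calculator -> calculator.supports(petTypeName)) // only supporting calculators
            .findFirst()                                            // the first supporting one wins
            .map(calculator -> calculator.calculate(petTypeName))   // delegate the calculation
            .orElse(LocalDate.now().plusMonths(1));                 // fallback: propose next month
    }
}
```

The `Optional` returned by `findFirst` makes the fallback explicit: if no calculator supports the pet, `orElse` supplies the default proposal.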
The API of the `RegularCheckupDateCalculator` looks like this:
.RegularCheckupDateCalculator.java
[source,java]
----
/**
 * API for calculators that calculate a proposal date for the next regular checkup
 */
public interface RegularCheckupDateCalculator {

    /**
     * defines if a calculator supports a pet instance
     *
     * @param pet the pet to calculate the checkup date for
     * @return true if the calculator supports this pet, otherwise false
     */
    boolean supports(Pet pet);

    /**
     * calculates the next regular checkup date for the pet
     *
     * @param pet the pet to calculate the checkup date for
     * @param visitHistory the visit history of that pet
     * @param timeSource the TimeSource CUBA API
     *
     * @return the calculated regular checkup date
     */
    LocalDate calculateRegularCheckupDate(
        Pet pet,
        List<Visit> visitHistory,
        TimeSource timeSource
    );
}
----
It consists of two methods: `supports` defines whether a calculator is capable of calculating the date proposal for the given pet. If it is, the second method `calculateRegularCheckupDate` is invoked to perform the calculation.
There are several implementations of this interface in the project. Most of them implement the specific date calculation for a particular pet type.
==== Testing the Implementation with Mocking
In order to instantiate the SUT, `RegularCheckupService`, we have to provide both of the dependencies declared in the constructor:
* `TimeSource`
* `List<RegularCheckupDateCalculator>`
We will provide a Stub implementation for the TimeSource in this test case via Mockito.
For the list of calculators, instead of using Mock instances, we'll use a special test implementation. This can also be seen as a Stub, but instead of defining it on the fly via Mockito, a dedicated class `ConfigurableTestCalculator` is defined in the test sources. It has two static configuration options: the answers to the two API methods `supports` and `calculateRegularCheckupDate`.
.ConfigurableTestCalculator.java
[source,java]
----
/**
 * test calculator implementation that allows to statically
 * define the calculation result
 */
public class ConfigurableTestCalculator
    implements RegularCheckupDateCalculator {

    private final boolean supports;
    private final LocalDate result;

    private ConfigurableTestCalculator(
        boolean supports,
        LocalDate result
    ) {
        this.supports = supports;
        this.result = result;
    }

    // ...

    /**
     * creates a Calculator that will answer true
     * to {@link RegularCheckupDateCalculator#supports(Pet)} for
     * test case purposes and returns the provided date as a result
     */
    static RegularCheckupDateCalculator supportingWithDate(LocalDate date) {
        return new ConfigurableTestCalculator(true, date);
    }

    @Override
    public boolean supports(Pet pet) {
        return supports;
    }

    @Override
    public LocalDate calculateRegularCheckupDate(
        Pet pet,
        List<Visit> visitHistory,
        TimeSource timeSource
    ) {
        return result;
    }
}
----
This test implementation is another way of exchanging a dependency in a test case. Using a mocking framework or providing static test classes is more or less the same from a 10,000-foot view and serves the same goal.
As the unit test for the `RegularCheckupService` should verify the behavior in an isolated fashion, the purpose of this test is not to test the different calculators. Instead the test case is going to verify the corresponding orchestration that the service embodies.
The implementation of the calculators will be tested in dedicated unit tests for those classes.
==== Testing the Orchestration in RegularCheckupService
Starting with the test case for the `RegularCheckupService`, the following test cases cover the majority of the orchestration functionality:
1. only calculators that support the pet instance will be asked to calculate the checkup date
2. in case multiple calculators support a pet, the first one is chosen
3. in case no calculator was found, next month will be used as the proposed regular checkup date.
You can find the implementation of the test cases in the listing below.
TIP: The JUnit Annotation `@DisplayName` is used to better describe the test cases in the test result report. Additionally it helps to link the test case implementation to the above given test case descriptions.
.RegularCheckupServiceTest.java
[source,java]
----
@ExtendWith(MockitoExtension.class)
class RegularCheckupServiceTest {

    //...

    @Mock
    private TimeSource timeSource;

    private PetclinicData data = new PetclinicData(); // <1>

    @BeforeEach
    void configureTimeSourceToReturnNowCorrectly() {
        Mockito.lenient()
            .when(timeSource.now())
            .thenReturn(ZonedDateTime.now());
    }

    @Test
    @DisplayName(
        "1. only calculators that support the pet instance " +
        "will be asked to calculate the checkup date"
    )
    public void one_supportingCalculator_thisOneIsChosen() {

        // given: first calculator does not support the pet
        RegularCheckupDateCalculator threeMonthCalculator =
            notSupporting(THREE_MONTHS_AGO); // <2>

        // and: second calculator supports the pet
        RegularCheckupDateCalculator lastYearCalculator =
            supportingWithDate(LAST_YEAR); // <3>

        // when:
        LocalDate nextRegularCheckup = calculate( // <4>
            calculators(threeMonthCalculator, lastYearCalculator)
        );

        // then: the result is the one from the supporting calculator
        assertThat(nextRegularCheckup)
            .isEqualTo(LAST_YEAR);
    }

    @Test
    @DisplayName(
        "2. in case multiple calculators support a pet," +
        " the first one is chosen to calculate"
    )
    public void multiple_supportingCalculators_theFirstOneIsChosen() {

        // given: two calculators are valid; the ONE_MONTH_AGO calculator is first
        List<RegularCheckupDateCalculator> calculators = calculators(
            supportingWithDate(ONE_MONTHS_AGO),
            notSupporting(THREE_MONTHS_AGO),
            supportingWithDate(TWO_MONTHS_AGO)
        );

        // when:
        LocalDate nextRegularCheckup = calculate(calculators);

        // then: the result is the one from the first calculator
        assertThat(nextRegularCheckup)
            .isEqualTo(ONE_MONTHS_AGO);
    }

    @Test
    @DisplayName(
        "3. in case no calculator was found, " +
        "next month as the proposed regular checkup date will be used"
    )
    public void no_supportingCalculators_nextMonthWillBeReturned() {

        // given: only not-supporting calculators are available for the pet
        List<RegularCheckupDateCalculator> onlyNotSupportingCalculators =
            calculators(
                notSupporting(ONE_MONTHS_AGO)
            );

        // when:
        LocalDate nextRegularCheckup = calculate(onlyNotSupportingCalculators);

        // then: the default implementation will return next month
        assertThat(nextRegularCheckup)
            .isEqualTo(NEXT_MONTH);
    }

    /*
     * instantiates the SUT with the provided calculators as dependencies
     * and executes the calculation
     */
    private LocalDate calculate(
        List<RegularCheckupDateCalculator> calculators
    ) {
        RegularCheckupService service = new RegularCheckupServiceBean(
            timeSource,
            calculators
        );

        return service.calculateNextRegularCheckupDate(
            data.petWithType(data.waterType()),
            Lists.emptyList()
        );
    }

    // ...
}
----
<1> `PetclinicData` acts as a Test Utility Method holder. As no DB is present in the unit test, it only creates transient objects
<2> `notSupporting` provides a Calculator that will not support the pet (returns `false` for `supports(Pet pet)`)
<3> `supportingWithDate` provides a Calculator that supports the pet (returns `true` for `supports(Pet pet)`) and returns the passed in Date as the result
<4> `calculate` is the method that contains the instantiation and execution of the service
The full source code including the helper methods and test implementation of the Calculator can be found in the example project: https://github.com/cuba-guides/cuba-petclinic-unit-testing/blob/master/modules/core/test/com/haulmont/sample/petclinic/service/visit/regular_checkup/RegularCheckupServiceTest.java[RegularCheckupServiceTest.java].
==== Testing the Calculators
As described above, the unit test for the `RegularCheckupService` does not cover the implementations of the different calculators themselves. This is done on purpose, to benefit from the isolation and the locality of the test scenarios. Otherwise, the `RegularCheckupServiceTest` would contain test cases for all the different calculators as well as for the orchestration logic.
Splitting the test cases for the orchestration and the calculator allows us to create a comparably easy test case setup for testing one of the calculators. In this example we will take a look at the `ElectricPetTypeCalculator` and the corresponding test case.
The calculator responsible for pets of type `Electric` embodies the following business rules:
1. it should only be used if the name of the pet type is `Electric`, otherwise not
2. the interval between two regular Checkups is one year for electric pets
3. Visits that are not regular checkups should not influence the calculation
4. in case the pet didn't have a regular checkup at the Petclinic before, next month should be proposed
5. if the last Regular Checkup was performed longer than one year ago, next month should be proposed
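The calculator implementation itself is not listed in this guide (it lives in the example project). A simplified, self-contained sketch of the date arithmetic behind rules 2, 4 and 5 might look like the following. Note the assumptions: the visit history is reduced to a plain list of past regular checkup dates instead of the project's `Visit` entities, and "today" is passed in explicitly instead of being read from CUBA's `TimeSource`.

```java
import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;

public class ElectricCheckupDateSketch {

    // Simplified assumption: 'regularCheckups' already contains only the
    // dates of past regular checkups (rule 3 is handled by that filtering).
    static LocalDate nextCheckup(List<LocalDate> regularCheckups, LocalDate today) {
        LocalDate nextMonth = today.plusMonths(1);
        return regularCheckups.stream()
            .max(Comparator.naturalOrder())                  // latest regular checkup
            .map(last -> last.plusYears(1))                  // rule 2: one-year interval
            .filter(proposal -> proposal.isAfter(nextMonth)) // rule 5: overdue -> fall through
            .orElse(nextMonth);                              // rules 4 & 5: propose next month
    }
}
```

With a checkup last month the sketch proposes that date plus one year; with no checkup, or one more than a year ago, it falls back to next month, matching the rules the tests below verify.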
==== ElectricPetTypeCalculator Test Cases
You can find the implementation of the different test cases that verify those business rules below. This test class also uses the `PetclinicData` helper for various test utility methods.
.ElectricPetTypeCalculatorTest.java
[source,java]
----
@ExtendWith(MockitoExtension.class)
class ElectricPetTypeCalculatorTest {

    private PetclinicData data;
    private RegularCheckupDateCalculator calculator;
    private Pet electricPet;

    @BeforeEach
    void createTestEnvironment() {
        data = new PetclinicData();
        calculator = new ElectricPetTypeCalculator();
    }

    @BeforeEach
    void createElectricPet() {
        electricPet = data.petWithType(data.electricType());
    }

    @Nested
    @DisplayName(
        "1. it should only be used if the name of the pet type " +
        " is 'Electric', otherwise not"
    )
    class Supports { // <1>

        @Test
        public void calculator_supportsPetsWithType_Electric() {

            // expect:
            assertThat(calculator.supports(electricPet))
                .isTrue();
        }

        @Test
        public void calculator_doesNotSupportPetsWithType_Water() {

            // given:
            Pet waterPet = data.petWithType(data.waterType());

            // expect:
            assertThat(calculator.supports(waterPet))
                .isFalse();
        }
    }

    @Nested
    class CalculateRegularCheckupDate {

        @Mock
        private TimeSource timeSource;

        private final LocalDate LAST_YEAR = now().minusYears(1); // <2>
        private final LocalDate LAST_MONTH = now().minusMonths(1);
        private final LocalDate SIX_MONTHS_AGO = now().minusMonths(6);
        private final LocalDate NEXT_MONTH = now().plusMonths(1);

        private List<Visit> visits = new ArrayList<>();

        @BeforeEach
        void configureTimeSourceMockBehavior() {
            Mockito.lenient()
                .when(timeSource.now())
                .thenReturn(ZonedDateTime.now());
        }

        @Test
        @DisplayName(
            "2. the interval between two regular Checkups " +
            " is one year for electric pets"
        )
        public void intervalIsOneYear_fromTheLatestRegularCheckup() {

            // given: there are two regular checkups in the visit history of this pet
            visits.add(data.regularCheckup(LAST_YEAR));
            visits.add(data.regularCheckup(LAST_MONTH));

            // when:
            LocalDate nextRegularCheckup =
                calculate(electricPet, visits); // <3>

            // then:
            assertThat(nextRegularCheckup)
                .isEqualTo(LAST_MONTH.plusYears(1));
        }

        @Test
        @DisplayName(
            "3. Visits that are not regular checkups " +
            " should not influence the calculation"
        )
        public void onlyRegularCheckupVisitsMatter_whenCalculatingNextRegularCheckup() {

            // given: one regular checkup and one surgery
            visits.add(data.regularCheckup(SIX_MONTHS_AGO));
            visits.add(data.surgery(LAST_MONTH));

            // when:
            LocalDate nextRegularCheckup =
                calculate(electricPet, visits);

            // then: the date of the last checkup is used
            assertThat(nextRegularCheckup)
                .isEqualTo(SIX_MONTHS_AGO.plusYears(1));
        }

        @Test
        @DisplayName(
            "4. in case the pet has not done a regular checkup " +
            " at the Petclinic before, next month should be proposed"
        )
        public void ifThePetDidNotHavePreviousCheckups_nextMonthIsProposed() {

            // given: there is no regular checkup, just a surgery
            visits.add(data.surgery(LAST_MONTH));

            // when:
            LocalDate nextRegularCheckup =
                calculate(electricPet, visits);

            // then:
            assertThat(nextRegularCheckup)
                .isEqualTo(NEXT_MONTH);
        }

        @Test
        @DisplayName(
            "5. if the last Regular Checkup was performed longer than " +
            " one year ago, next month should be proposed"
        )
        public void ifARegularCheckup_exceedsTheInterval_nextMonthIsProposed() {

            // given: one regular checkup thirteen months ago
            visits.add(data.regularCheckup(LAST_YEAR.minusMonths(1)));

            // when:
            LocalDate nextRegularCheckup =
                calculate(electricPet, visits);

            // then:
            assertThat(nextRegularCheckup)
                .isEqualTo(NEXT_MONTH);
        }

        private LocalDate calculate(Pet pet, List<Visit> visitHistory) {
            return calculator.calculateRegularCheckupDate(
                pet,
                visitHistory,
                timeSource
            );
        }
    }
}
----
<1> `Supports` groups the test cases that verify the behavior of the `supports()` method, using JUnit's `@Nested` annotation
<2> several points in time used as test fixtures and verification values are statically defined
<3> `calculate` is executing the corresponding method of the calculator (SUT)
The test class uses the JUnit annotation `@Nested`. This allows defining groups of test cases that share the same context. Here it is used to group them by the methods of the SUT they test: `@Nested class Supports {}` for verifying the `supports()` method, and `@Nested class CalculateRegularCheckupDate` for the `calculateRegularCheckupDate()` method of the API.
Furthermore, the context definition is used to create specific test data, as well as `@BeforeEach` setup methods, for all the test cases defined in that context. For example, only in the context `CalculateRegularCheckupDate` is it necessary to define the `timeSource` mock behavior.
== Limitations of Unit Testing
Unit testing in an isolated fashion has some great advantages over integration testing. It spins up a different, much smaller test environment that allows faster test runs. Also, by avoiding dependencies on other classes of the application and on the framework it is based upon, the test cases are much more robust and independent of a changing environment.
But this very same isolation is also the basis for the main limitation of unit tests. To achieve this level of isolation, we basically have to encode certain assumptions about the integration points. This is true for the test environment as well as for the mocked dependencies. As long as those assumptions stay true, everything is fine. But once the environment changes, there is a chance that the encoded assumptions no longer match reality.
Let's take an example of a very implicit assumption on the test environment:
In the `RegularCheckupService` example, the expressed dependency on `List<RegularCheckupDateCalculator>` only works as expected in production if every calculator is a Spring bean. This means each implementation needs a `@Component` annotation on its class. Otherwise it is not picked up, which silently changes the overall behavior of the business logic, because then, for example, the default value (next month) will be returned.
Furthermore, as the implementation of the service implies that the order of the injected list defines which calculator is asked first, all calculators additionally need an `@Order` annotation in place so that the order is well-defined.
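Taken together, a calculator that satisfies both requirements might look like the following sketch. The annotation placement and the order value are illustrative assumptions, not code taken from the article:

[source,java]
----
// Hypothetical sketch: the order value 10 is an arbitrary example.
@Component // registers the calculator as a Spring bean so it is injected
@Order(10) // lower values come first in the injected List, so they are asked first
public class ElectricPetTypeCalculator implements RegularCheckupDateCalculator {
    // ... behavior as verified by the unit tests above ...
}
----

Whether such wiring is correct cannot be observed from inside a unit test; it only manifests once Spring assembles the application context.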
Neither of those aspects can be verified by a unit test, because we explicitly do not want to test the interaction between those classes, nor start the comparatively slow test container for every unit test.
Unit tests should be the primary mechanism for testing business logic. But to also cover those interaction scenarios, it is crucial to implement integration tests. A good rule of thumb is to verify such scenarios only in an integration test. So, instead of recreating all the unit-level test cases and executing them in an integration test environment, only a small subset of them should be used.
In the above-mentioned example, a good candidate for an integration test would be the test case `multiple_supportingCalculators_theFirstOneIsChosen`, which verifies that the right components are injected and that the correct one is used to perform the calculation.
== Summary
In this guide we learned how to use unit tests to test functionality in an isolated fashion using JUnit. The introduction showed the differences between plain unit tests and middleware integration tests. In particular, no external dependencies such as databases are used, and the test environment does not leverage a test container; instead we instantiate the SUT manually.
In order to create isolated tests, sometimes you need to leverage a mocking framework, as we did in the `RegularCheckupServiceTest` example. Mocking is a way to control the environment and the dependencies the SUT interacts with. By exchanging particular dependencies for special test implementations, the test case is capable of setting up the behavior of each dependency (and potentially using it for verification purposes as well). In the Java ecosystem, Mockito allows configuring the dependencies and their desired behavior.
We also learned that exchanging field-based injection for constructor-based injection in production code makes it easy to manually inject dependencies into the SUT in testing scenarios.
The benefit of plain unit tests over integration tests is that the tests run orders of magnitude faster. This sounds like a nice-to-have feature, but in fact it opens the path to a much more aggressive use of automated testing. Test-driven development (TDD), as an example technique, benefits greatly from the fast feedback cycles such execution times provide.
Besides the benefits in the workflow of developing the test cases, we also learned about an architectural benefit: test scenario locality. Keeping the surface area of the SUT small helps to spot the root causes of test failures much faster.
Finally, the test fixture setup is also easier to create when the surface area is kept small.
Besides the benefits, we also looked at the limitations of unit testing. One example was that we are not able to verify the correct usage of the particular environment the code lives in. Although our unit tests perfectly show that the business rules might be correctly defined in the calculators, the question of whether they carry the correct Spring annotations to be picked up by the framework in the right way cannot easily be answered by unit tests.
Therefore, unit tests should only be used in combination with a set of integration tests that cover the remaining open questions. With this combination, we can be quite sure that our application actually works as we expect.
[[shutdown]]
= Shutdown (`shutdown`)
`shutdown` 端点被用来关闭应用程序.
[[shutdown-shutting-down]]
== 关闭应用程序
要关闭应用程序, 请向 `/actuator/shutdown` 发出 `POST` 请求, 如以下基于 curl 的示例所示:
include::snippets/shutdown/curl-request.adoc[]
产生类似于以下内容的响应:
include::snippets/shutdown/http-response.adoc[]
[[shutdown-shutting-down-response-structure]]
=== 响应结构
该响应包含关闭请求结果的详细信息. 下表描述了响应的结构:
[cols="3,1,3"]
include::snippets/shutdown/response-fields.adoc[]
| 15.535714 | 64 | 0.735632 |
1d1c0fb0b09b9febd9da9387b581b3f0ec785934 | 2,830 | asciidoc | AsciiDoc | vendor/github.com/elastic/beats/libbeat/docs/monitoring/monitoring-beats.asciidoc | defus/springboothystrixbeat | a85f8c33789669a2773e77126cf139d1ece4561a | [
"Apache-2.0"
] | null | null | null | vendor/github.com/elastic/beats/libbeat/docs/monitoring/monitoring-beats.asciidoc | defus/springboothystrixbeat | a85f8c33789669a2773e77126cf139d1ece4561a | [
"Apache-2.0"
] | null | null | null | vendor/github.com/elastic/beats/libbeat/docs/monitoring/monitoring-beats.asciidoc | defus/springboothystrixbeat | a85f8c33789669a2773e77126cf139d1ece4561a | [
"Apache-2.0"
] | null | null | null | //////////////////////////////////////////////////////////////////////////
//// This content is shared by all Elastic Beats. Make sure you keep the
//// descriptions here generic enough to work for all Beats that include
//// this file. When using cross references, make sure that the cross
//// references resolve correctly for any files that include this one.
//// Use the appropriate variables defined in the index.asciidoc file to
//// resolve Beat names: beatname_uc and beatname_lc.
//// Use the following include to pull this content into a doc file:
//// include::../../libbeat/docs/monitoring/configuring.asciidoc[]
//// Make sure this content appears below a level 2 heading.
//////////////////////////////////////////////////////////////////////////
[role="xpack"]
[[monitoring]]
= Monitoring {beatname_uc}
[partintro]
--
NOTE: {monitoring} for {beatname_uc} requires {es} {beat_monitoring_version} or later.
{monitoring} enables you to easily monitor {beatname_uc} from {kib}. For more
information, see
{xpack-ref}/xpack-monitoring.html[Monitoring the Elastic Stack] and
{kibana-ref}/beats-page.html[Beats Monitoring Metrics].
To configure {beatname_uc} to collect and send monitoring metrics:
. Create a user that has appropriate authority to send system-level monitoring
data to {es}. For example, you can use the built-in +{beat_monitoring_user}+ user or
assign the built-in +{beat_monitoring_user}+ role to another user. For more
information, see
{xpack-ref}/setting-up-authentication.html[Setting Up User Authentication] and
{xpack-ref}/built-in-roles.html[Built-in Roles].
. Add the `xpack.monitoring` settings in the {beatname_uc} configuration file. If you
configured {es} output, specify the following minimal configuration:
+
[source, yml]
----
xpack.monitoring.enabled: true
----
+
If you configured a different output, such as {ls}, you must specify additional
configuration options. For example:
+
["source","yml",subs="attributes"]
----
xpack.monitoring:
enabled: true
elasticsearch:
hosts: ["https://example.com:9200", "https://example2.com:9200"]
username: {beat_monitoring_user}
password: somepassword
----
+
NOTE: Currently you must send monitoring data to the same cluster as all other events.
If you configured {es} output, do not specify additional hosts in the monitoring
configuration.
. {kibana-ref}/monitoring-xpack-kibana.html[Configure monitoring in {kib}].
. To verify your monitoring configuration, point your web browser at your {kib}
host, and select Monitoring from the side navigation. Metrics reported from
{beatname_uc} should be visible in the Beats section. When {security} is enabled,
to view the monitoring dashboards you must log in to {kib} as a user who has the
`kibana_user` and `monitoring_user` roles.
--
include::shared-monitor-config.asciidoc[]
| 39.859155 | 86 | 0.720495 |
0048c2ff613267166db49ef84ec4136580d513e7 | 3,486 | asciidoc | AsciiDoc | netbeans.apache.org/src/content/wiki/DevFaqGetNameOfProjectGroup.asciidoc | Cenbe/netbeans-website | 1abe1572bf5ee9b699f1c8882921037b45b16135 | [
"Apache-2.0"
] | 158 | 2019-04-26T15:33:51.000Z | 2022-03-11T18:38:48.000Z | netbeans.apache.org/src/content/wiki/DevFaqGetNameOfProjectGroup.asciidoc | Cenbe/netbeans-website | 1abe1572bf5ee9b699f1c8882921037b45b16135 | [
"Apache-2.0"
] | 100 | 2019-05-04T12:26:54.000Z | 2022-03-23T14:09:41.000Z | netbeans.apache.org/src/content/wiki/DevFaqGetNameOfProjectGroup.asciidoc | Cenbe/netbeans-website | 1abe1572bf5ee9b699f1c8882921037b45b16135 | [
"Apache-2.0"
] | 141 | 2019-04-27T14:21:32.000Z | 2022-03-30T17:20:53.000Z | //
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//
= DevFaqGetNameOfProjectGroup
:jbake-type: wiki
:jbake-tags: wiki, devfaq, needsreview
:jbake-status: published
:keywords: Apache NetBeans wiki DevFaqGetNameOfProjectGroup
:description: Apache NetBeans wiki DevFaqGetNameOfProjectGroup
:toc: left
:toc-title:
:syntax: true
== How to get the name of the active project group
=== Variant I: "use OpenProjects API" (since NB7.3)
[source,java]
----
org.netbeans.api.project.ui.OpenProjects.getDefault().getActiveProjectGroup().getName()
----
This approach uses a public API which is known to be stable in future versions. It is available since NB 7.3.
See link:http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectuiapi/org/netbeans/api/project/ui/OpenProjects.html#getActiveProjectGroup()[OpenProjects.getActiveProjectGroup()].
=== Variant II: "direct access to properties"-hack
Note: this is rather a hack. There is no guarantee that it will keep working in newer NetBeans versions, but the approach is known to work at least with NB 6.9.1 to 7.3.
[source,java]
----
/**
*
* @return name of the current project group or null
*/
public String getActiveProjectGroup() {
Preferences groupNode = getPreferences("org/netbeans/modules/projectui/groups");
if (null != groupNode) {
final String groupId = groupNode.get("active", null);
if (null != groupId) {
final Preferences groupPref = getPreferences("org/netbeans/modules/projectui/groups/" + groupId);
if (null != groupPref) {
final String activeProjectGroup = groupPref.get("name", null);
return activeProjectGroup;
}
}
}
return null;
}
/**
* Get the preference for the given node path.
*
* @param path configuration path like "org/netbeans/modules/projectui"
* @return {@link Preferences} or null
*/
private Preferences getPreferences(String path) {
try {
if (NbPreferences.root().nodeExists(path)) {
return NbPreferences.root().node(path);
}
} catch (BackingStoreException ex) {
Exceptions.printStackTrace(ex);
}
return null;
}
----
== Apache Migration Information
The content in this page was kindly donated by Oracle Corp. to the
Apache Software Foundation.
This page was exported from link:http://wiki.netbeans.org/DevFaqGetNameOfProjectGroup[http://wiki.netbeans.org/DevFaqGetNameOfProjectGroup] ,
that was last modified by NetBeans user Markiewb
on 2013-01-12T17:40:12Z.
*NOTE:* This document was automatically converted to the AsciiDoc format on 2018-02-07, and needs to be reviewed.
| 34.86 | 290 | 0.724613 |
590b0e7d05b39df7b383cc398e0dbad74a7675a3 | 28,536 | adoc | AsciiDoc | documentation/content/de/articles/freebsd-update-server/_index.adoc | freebsd/docng | 40a69dbe761d60995f617309ac2cbc2ecd4d1ac2 | [
"BSD-2-Clause"
] | 1 | 2020-12-26T04:54:20.000Z | 2020-12-26T04:54:20.000Z | documentation/content/de/articles/freebsd-update-server/_index.adoc | freebsd/docng | 40a69dbe761d60995f617309ac2cbc2ecd4d1ac2 | [
"BSD-2-Clause"
] | null | null | null | documentation/content/de/articles/freebsd-update-server/_index.adoc | freebsd/docng | 40a69dbe761d60995f617309ac2cbc2ecd4d1ac2 | [
"BSD-2-Clause"
] | 4 | 2020-08-02T15:00:15.000Z | 2020-12-07T23:00:42.000Z | ---
title: Einen eigenen FreeBSD Update Server bauen
authors:
- author: Jason Helfman
email: jgh@FreeBSD.org
copyright: 2009-2011, 2013 Jason Helfma
releaseinfo: "$FreeBSD: head/de_DE.ISO8859-1/articles/freebsd-update-server/article.xml 51671 2018-05-20 11:07:55Z bhd $"
trademarks: ["freebsd", "amd", "intel", "general"]
---
= Einen eigenen FreeBSD Update Server bauen
:doctype: article
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:source-highlighter: rouge
:experimental:
:figure-caption: Figure
include::share/authors.adoc[]
[.abstract-title]
Abstract
This article describes building an internal FreeBSD update server. The https://svnweb.freebsd.org/base/user/cperciva/freebsd-update-build/[freebsd-update-server] software was written by `{cperciva}`, FreeBSD Security Officer Emeritus. For users who find it advantageous to update their systems against an official update server, a self-built FreeBSD update server extends this functionality, whether to distribute manually tweaked FreeBSD releases or to provide a local mirror that allows faster updates.
'''
toc::[]
_German translation by `{bhd}`_.
[[acknowledgments]]
[.title]
== Acknowledgments
This article was subsequently printed in http://bsdmag.org/magazine/1021-bsd-as-a-desktop[BSD Magazine].
[[introduction]]
[.title]
== Introduction
Experienced users or administrators are often responsible for several machines or environments. They understand the difficult demands and challenges of maintaining such an infrastructure. A FreeBSD update server makes it easier to deploy security and software patches to selected test machines before rolling them out to the production systems. It also means that a number of systems can be updated over the local network rather than over a slower Internet connection. This article outlines the steps involved in creating an internal FreeBSD update server.
[[prerequisites]]
[.title]
== Prerequisites
A few requirements should be met before building an internal FreeBSD update server.
* A running FreeBSD system.
+
[.note]
====
[.admontitle]*Note:* +
At a minimum, the target release of FreeBSD being distributed must be built on an equal or higher version of FreeBSD.
====
* A user account with at least 4 GB of available disk space. This allows the creation of updates for 7.1 and 7.2, but the exact space requirements may change from version to version.
* A man:ssh[1] account on a remote machine for uploading the updates that will be distributed.
* A web server, such as link:{handbook}#network-apache[Apache], with over half of the build space available to it. For instance, building 7.1 and 7.2 consumes a total of 4 GB, and the web server space needed to distribute these updates would be 2.6 GB.
* Basic knowledge of shell scripting with the Bourne shell, man:sh[1].
[[Configuration]]
[.title]
== Configuration: Installation & Setup
Download the https://svnweb.freebsd.org/base/user/cperciva/freebsd-update-build/[freebsd-update-server] software by installing package:devel/subversion[] and package:security/ca_root_nss[], and execute:
[source,bash]
....
% svn co https://svn.freebsd.org/base/user/cperciva/freebsd-update-build freebsd-update-server
....
Customize [.filename]#scripts/build.conf# to suit your needs. It is sourced during every build.
Here is the default [.filename]#build.conf#, which should be modified to suit your environment.
[.programlisting]
....
# Main configuration file for FreeBSD Update builds. The
# release-specific configuration data is lower down in
# the scripts tree.
# Location from which to fetch releases
export FTP=ftp://ftp2.freebsd.org/pub/FreeBSD/releases <.>
# Host platform
export HOSTPLATFORM=`uname -m`
# Host name to use inside jails
export BUILDHOSTNAME=${HOSTPLATFORM}-builder.daemonology.net <.>
# Location of SSH key
export SSHKEY=/root/.ssh/id_dsa <.>
# SSH account into which files are uploaded
MASTERACCT=builder@wadham.daemonology.net <.>
# Directory into which files are uploaded
MASTERDIR=update-master.freebsd.org <.>
....
Parameters to consider:
<.> This is the location from which the ISO images are downloaded (via the `fetchiso()` subroutine of [.filename]#scripts/build.subr#). The location is not limited to FTP URIs; any URI scheme supported by man:fetch[1] should work well here.
Customizations to the `fetchiso()` code can be made by copying the default script [.filename]#build.subr# to the release- and architecture-specific area at [.filename]#scripts/RELEASE/ARCHITECTURE/build.subr# and applying the local changes there.
<.> The name of the build host. On updated systems, this information can be displayed with:
+
[source,bash]
....
% uname -v
....
+
<.> The SSH key for uploading files to the update server. A key pair can be created by typing `ssh-keygen -t dsa`. This parameter is optional, however; standard password authentication is used as a fallback method when `SSHKEY` is not defined.
The man:ssh-keygen[1] manual page provides detailed information about SSH and the appropriate steps for creating and using keys.
<.> The user account for uploading files to the update server.
<.> The directory on the update server into which the files are uploaded.
The default [.filename]#build.conf# shipped with the `freebsd-update-server` sources is suitable for building i386 releases of FreeBSD. As an example of building an update server for other architectures, the following steps outline the configuration changes needed for amd64:
[.procedure]
. Create a build environment for amd64:
+
[source,bash]
....
% mkdir -p /usr/local/freebsd-update-server/scripts/7.2-RELEASE/amd64
....
. Install a [.filename]#build.conf# into the newly created directory. The build configuration options for FreeBSD 7.2-RELEASE on amd64 should be similar to the following:
+
[.programlisting]
....
# SHA256 hash of RELEASE disc1.iso image.
export RELH=1ea1f6f652d7c5f5eab7ef9f8edbed50cb664b08ed761850f95f48e86cc71ef5 <.>
# Components of the world, source, and kernels
export WORLDPARTS="base catpages dict doc games info manpages proflibs lib32"
export SOURCEPARTS="base bin contrib crypto etc games gnu include krb5 \
lib libexec release rescue sbin secure share sys tools \
ubin usbin cddl"
export KERNELPARTS="generic"
# EOL date
export EOL=1275289200 <.>
....
+
<.> The man:sha256[1] hash for the desired release is published within the respective link:https://www.FreeBSD.org/releases/[release announcement].
<.> To generate the "End of Life" number for [.filename]#build.conf#, refer to the "Estimated EOL" posted on the link:https://www.FreeBSD.org/security/security/[FreeBSD Security website]. The value of `EOL` can be derived from the date listed there, using the man:date[1] utility. For example:
+
[source,bash]
....
% date -j -f '%Y%m%d-%H%M%S' '20090401-000000' '+%s'
....
[[build]]
[.title]
== Building Update Code
The first step is to run [.filename]#scripts/make.sh#. This script builds some binaries, creates directories, and generates an RSA signing key used to approve builds. In this step you must also supply a passphrase for the creation of the signing key.
[source,bash]
....
# sh scripts/make.sh
cc -O2 -fno-strict-aliasing -pipe findstamps.c -o findstamps
findstamps.c: In function 'usage':
findstamps.c:45: warning: incompatible implicit declaration of built-in function 'exit'
cc -O2 -fno-strict-aliasing -pipe unstamp.c -o unstamp
install findstamps ../bin
install unstamp ../bin
rm -f findstamps unstamp
Generating RSA private key, 4096 bit long modulus
................................................................................++
...................++
e is 65537 (0x10001)
Public key fingerprint:
27ef53e48dc869eea6c3136091cc6ab8589f967559824779e855d58a2294de9e
Encrypting signing key for root
enter aes-256-cbc encryption password:
Verifying - enter aes-256-cbc encryption password:
....
[.note]
====
[.admontitle]*Note:* +
Keep note of the generated key fingerprint. This value is required in [.filename]#/etc/freebsd-update.conf# for binary updates.
====
At this point, we are ready to start the build process.
[source,bash]
....
# cd /usr/local/freebsd-update-server
# sh scripts/init.sh amd64 7.2-RELEASE
....
What follows is a sample of an _initial_ build run.
[source,bash]
....
# sh scripts/init.sh amd64 7.2-RELEASE
Mon Aug 24 16:04:36 PDT 2009 Starting fetch for FreeBSD/amd64 7.2-RELEASE
/usr/local/freebsd-update-server/work/7.2-RELE100% of 588 MB 359 kBps 00m00s
Mon Aug 24 16:32:38 PDT 2009 Verifying disc1 hash for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 16:32:44 PDT 2009 Extracting components for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 16:34:05 PDT 2009 Constructing world+src image for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 16:35:57 PDT 2009 Extracting world+src for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 23:36:24 UTC 2009 Building world for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:31:29 UTC 2009 Distributing world for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:32:36 UTC 2009 Building and distributing kernels for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:44:44 UTC 2009 Constructing world components for FreeBSD/amd64 7.2-RELEASE
Tue Aug 25 00:44:56 UTC 2009 Distributing source for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:46:18 PDT 2009 Moving components into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:46:33 PDT 2009 Identifying extra documentation for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:47:13 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:47:18 PDT 2009 Indexing release for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 17:50:44 PDT 2009 Indexing world0 for FreeBSD/amd64 7.2-RELEASE
Files built but not released:
Files released but not built:
Files which differ by more than contents:
Files which differ between release and build:
kernel|generic|/GENERIC/hptrr.ko
kernel|generic|/GENERIC/kernel
src|sys|/sys/conf/newvers.sh
world|base|/boot/loader
world|base|/boot/pxeboot
world|base|/etc/mail/freebsd.cf
world|base|/etc/mail/freebsd.submit.cf
world|base|/etc/mail/sendmail.cf
world|base|/etc/mail/submit.cf
world|base|/lib/libcrypto.so.5
world|base|/usr/bin/ntpq
world|base|/usr/lib/libalias.a
world|base|/usr/lib/libalias_cuseeme.a
world|base|/usr/lib/libalias_dummy.a
world|base|/usr/lib/libalias_ftp.a
...
....
Afterwards, the base system is built again with the corresponding patches. A more detailed explanation of this can be found in [.filename]#scripts/build.subr#.
[.warning]
====
[.admontitle]*Warning:* +
During this second build cycle, the Network Time Protocol daemon, man:ntpd[8], is turned off. Per `{cperciva}`, FreeBSD Security Officer Emeritus, "the https://svnweb.freebsd.org/base/user/cperciva/freebsd-update-build/[freebsd-update-server] code needs to identify timestamps that are stored in files so that it can determine which files need to be updated. This is done by creating two builds 400 days apart and comparing the results."
====
[source,bash]
....
Mon Aug 24 17:54:07 PDT 2009 Extracting world+src for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 00:54:34 UTC 2010 Building world for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 01:49:42 UTC 2010 Distributing world for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 01:50:50 UTC 2010 Building and distributing kernels for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 02:02:56 UTC 2010 Constructing world components for FreeBSD/amd64 7.2-RELEASE
Wed Sep 29 02:03:08 UTC 2010 Distributing source for FreeBSD/amd64 7.2-RELEASE
Tue Sep 28 19:04:31 PDT 2010 Moving components into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:04:46 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:04:51 PDT 2009 Indexing world1 for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:08:04 PDT 2009 Locating build stamps for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:10:19 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:10:19 PDT 2009 Preparing to copy files into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 19:10:20 PDT 2009 Copying data files into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 12:16:57 PDT 2009 Copying metadata files into staging area for FreeBSD/amd64 7.2-RELEASE
Mon Aug 24 12:16:59 PDT 2009 Constructing metadata index and tag for FreeBSD/amd64 7.2-RELEASE
Files found which include build stamps:
kernel|generic|/GENERIC/hptrr.ko
kernel|generic|/GENERIC/kernel
world|base|/boot/loader
world|base|/boot/pxeboot
world|base|/etc/mail/freebsd.cf
world|base|/etc/mail/freebsd.submit.cf
world|base|/etc/mail/sendmail.cf
world|base|/etc/mail/submit.cf
world|base|/lib/libcrypto.so.5
world|base|/usr/bin/ntpq
world|base|/usr/include/osreldate.h
world|base|/usr/lib/libalias.a
world|base|/usr/lib/libalias_cuseeme.a
world|base|/usr/lib/libalias_dummy.a
world|base|/usr/lib/libalias_ftp.a
...
....
Finally, the build process completes.
[source,bash]
....
Values of build stamps, excluding library archive headers:
v1.2 (Aug 25 2009 00:40:36)
v1.2 (Aug 25 2009 00:38:22)
@(#)FreeBSD 7.2-RELEASE #0: Tue Aug 25 00:38:29 UTC 2009
FreeBSD 7.2-RELEASE #0: Tue Aug 25 00:38:29 UTC 2009
root@server.myhost.com:/usr/obj/usr/src/sys/GENERIC
7.2-RELEASE
Mon Aug 24 23:55:25 UTC 2009
Mon Aug 24 23:55:25 UTC 2009
##### built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
##### built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
##### built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
##### built by root@server.myhost.com on Tue Aug 25 00:16:15 UTC 2009
Mon Aug 24 23:46:47 UTC 2009
ntpq 4.2.4p5-a Mon Aug 24 23:55:53 UTC 2009 (1)
* Copyright (c) 1992-2009 The FreeBSD Project.
Mon Aug 24 23:46:47 UTC 2009
Mon Aug 24 23:55:40 UTC 2009
Aug 25 2009
ntpd 4.2.4p5-a Mon Aug 24 23:55:52 UTC 2009 (1)
ntpdate 4.2.4p5-a Mon Aug 24 23:55:53 UTC 2009 (1)
ntpdc 4.2.4p5-a Mon Aug 24 23:55:53 UTC 2009 (1)
Tue Aug 25 00:21:21 UTC 2009
Tue Aug 25 00:21:21 UTC 2009
Tue Aug 25 00:21:21 UTC 2009
Mon Aug 24 23:46:47 UTC 2009
FreeBSD/amd64 7.2-RELEASE initialization build complete. Please
review the list of build stamps printed above to confirm that
they look sensible, then run
# sh -e approve.sh amd64 7.2-RELEASE
to sign the release.
....
Approve the build if everything is correct. More information on making this determination can be found in the source file named [.filename]#USAGE#. Execute [.filename]#scripts/approve.sh#, as directed. This step signs the release and moves its components into a staging area from which they can be uploaded.
[source,bash]
....
# cd /usr/local/freebsd-update-server
# sh scripts/mountkey.sh
....
[.code-example-separation]
[source,bash]
....
# sh -e scripts/approve.sh amd64 7.2-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Signing build for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to patch source directories for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to upload staging area for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Updating databases for FreeBSD/amd64 7.2-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.2-RELEASE
....
Once the approval process is complete, the upload can begin.
[source,bash]
....
# cd /usr/local/freebsd-update-server
# sh scripts/upload.sh amd64 7.2-RELEASE
....
[.note]
====
[.admontitle]*Note:* +
If the update code needs to be re-uploaded, this may be done by changing to the public distributions directory for the target release and updating the attributes of the _uploaded_ file.
[source,bash]
....
# cd /usr/local/freebsd-update-server/pub/7.2-RELEASE/amd64
# touch -t 200801010101.01 uploaded
....
====
To distribute the updates, the uploaded files must be in the document root of the web server. The exact configuration depends on the web server used. For the Apache web server, refer to the link:{handbook}#network-apache[Configuration of the Apache Server] chapter in the Handbook.
Update the client's `KeyPrint` and `ServerName` in [.filename]#/etc/freebsd-update.conf#, and perform the update as described in the link:{handbook}#updating-upgrading-freebsdupdate[FreeBSD Update] chapter of the Handbook.
[.important]
====
[.admontitle]*Important:* +
For FreeBSD Update Server to work properly, updates for both the _current_ release and the release _you want to upgrade to_ need to be built. This is necessary for determining the differences between the files of the two releases. For example, when upgrading a FreeBSD system from 7.1-RELEASE to 7.2-RELEASE, updates need to be built and uploaded to the web server for both versions.
====
For reference, the entire run of link:../../../source/articles/freebsd-update-server/init.txt[init.sh] is attached.
[[patch]]
[.title]
== Building a Patch
Every time a link:https://www.FreeBSD.org/security/advisories/[security advisory] or link:https://www.FreeBSD.org/security/notices/[errata notice] is announced, a patch update can be built.
For this example, 7.1-RELEASE is used.
A couple of assumptions are made for building a different release:

* Set up the correct directory structure for the initial build.
* Perform an initial build for 7.1-RELEASE.

Create the patch directory of the respective release under [.filename]#/usr/local/freebsd-update-server/patches/#.
[source,bash]
....
% mkdir -p /usr/local/freebsd-update-server/patches/7.1-RELEASE/
% cd /usr/local/freebsd-update-server/patches/7.1-RELEASE
....
As an example, take the patch for man:named[8]. Read the advisory, and grab the necessary file from link:https://www.FreeBSD.org/security/advisories/[FreeBSD Security Advisories]. More information on interpreting security advisories can be found in the link:{handbook}#security-advisories[FreeBSD Handbook].
In the https://security.freebsd.org/advisories/FreeBSD-SA-09:12.bind.asc[security brief], this advisory is called `SA-09:12.bind`. After downloading the file, it is required to rename it to an appropriate patch level. You are free to choose any name, but it is suggested to keep it consistent with the official FreeBSD patch levels. For this build we follow the current practice of FreeBSD and call it `p7`. Rename the file:
[source,bash]
....
% cd /usr/local/freebsd-update-server/patches/7.1-RELEASE/; mv bind.patch 7-SA-09:12.bind
....
[.note]
====
[.admontitle]*Note:* +
When building a patch level, it is assumed that the previous patches are already in place. When a build is run, all patches contained in the patch directory are built along with it.
Custom patches can also be added to the build. Use the number zero, or any other number.
====
[.warning]
====
[.admontitle]*Warning:* +
It is up to the administrator of the FreeBSD Update Server to take appropriate measures to verify the authenticity of every patch.
====
At this point, we are ready to build a _diff_. Before the diff build begins, the software first checks whether [.filename]#scripts/init.sh# has been run for the respective release.
[source,bash]
....
# cd /usr/local/freebsd-update-server
# sh scripts/diff.sh amd64 7.1-RELEASE 7
....
Here is an example of a _diff_ build run.
[source,bash]
....
# sh -e scripts/diff.sh amd64 7.1-RELEASE 7
Wed Aug 26 10:09:59 PDT 2009 Extracting world+src for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 17:10:25 UTC 2009 Building world for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:05:11 UTC 2009 Distributing world for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:06:16 UTC 2009 Building and distributing kernels for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:17:50 UTC 2009 Constructing world components for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 18:18:02 UTC 2009 Distributing source for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:19:23 PDT 2009 Moving components into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:19:37 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:19:42 PDT 2009 Indexing world0 for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 11:23:02 PDT 2009 Extracting world+src for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 18:23:29 UTC 2010 Building world for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:18:15 UTC 2010 Distributing world for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:19:18 UTC 2010 Building and distributing kernels for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:30:52 UTC 2010 Constructing world components for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 19:31:03 UTC 2010 Distributing source for FreeBSD/amd64 7.1-RELEASE-p7
Thu Sep 30 12:32:25 PDT 2010 Moving components into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:32:39 PDT 2009 Extracting extra docs for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:32:43 PDT 2009 Indexing world1 for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:35:54 PDT 2009 Locating build stamps for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:36:58 PDT 2009 Reverting changes due to build stamps for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:37:14 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:37:14 PDT 2009 Preparing to copy files into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:37:15 PDT 2009 Copying data files into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:43:23 PDT 2009 Copying metadata files into staging area for FreeBSD/amd64 7.1-RELEASE-p7
Wed Aug 26 12:43:25 PDT 2009 Constructing metadata index and tag for FreeBSD/amd64 7.1-RELEASE-p7
...
Files found which include build stamps:
kernel|generic|/GENERIC/hptrr.ko
kernel|generic|/GENERIC/kernel
world|base|/boot/loader
world|base|/boot/pxeboot
world|base|/etc/mail/freebsd.cf
world|base|/etc/mail/freebsd.submit.cf
world|base|/etc/mail/sendmail.cf
world|base|/etc/mail/submit.cf
world|base|/lib/libcrypto.so.5
world|base|/usr/bin/ntpq
world|base|/usr/include/osreldate.h
world|base|/usr/lib/libalias.a
world|base|/usr/lib/libalias_cuseeme.a
world|base|/usr/lib/libalias_dummy.a
world|base|/usr/lib/libalias_ftp.a
...
Values of build stamps, excluding library archive headers:
v1.2 (Aug 26 2009 18:13:46)
v1.2 (Aug 26 2009 18:11:44)
@(#)FreeBSD 7.1-RELEASE-p7 #0: Wed Aug 26 18:11:50 UTC 2009
FreeBSD 7.1-RELEASE-p7 #0: Wed Aug 26 18:11:50 UTC 2009
root@server.myhost.com:/usr/obj/usr/src/sys/GENERIC
7.1-RELEASE-p7
Wed Aug 26 17:29:15 UTC 2009
Wed Aug 26 17:29:15 UTC 2009
##### built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
##### built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
##### built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
##### built by root@server.myhost.com on Wed Aug 26 17:49:58 UTC 2009
Wed Aug 26 17:20:39 UTC 2009
ntpq 4.2.4p5-a Wed Aug 26 17:29:42 UTC 2009 (1)
* Copyright (c) 1992-2009 The FreeBSD Project.
Wed Aug 26 17:20:39 UTC 2009
Wed Aug 26 17:29:30 UTC 2009
Aug 26 2009
ntpd 4.2.4p5-a Wed Aug 26 17:29:41 UTC 2009 (1)
ntpdate 4.2.4p5-a Wed Aug 26 17:29:42 UTC 2009 (1)
ntpdc 4.2.4p5-a Wed Aug 26 17:29:42 UTC 2009 (1)
Wed Aug 26 17:55:02 UTC 2009
Wed Aug 26 17:55:02 UTC 2009
Wed Aug 26 17:55:02 UTC 2009
Wed Aug 26 17:20:39 UTC 2009
...
....
The updates are displayed and await approval.
[source,bash]
....
New updates:
kernel|generic|/GENERIC/kernel.symbols|f|0|0|0555|0|7c8dc176763f96ced0a57fc04e7c1b8d793f27e006dd13e0b499e1474ac47e10|
kernel|generic|/GENERIC/kernel|f|0|0|0555|0|33197e8cf15bbbac263d17f39c153c9d489348c2c534f7ca1120a1183dec67b1|
kernel|generic|/|d|0|0|0755|0||
src|base|/|d|0|0|0755|0||
src|bin|/|d|0|0|0755|0||
src|cddl|/|d|0|0|0755|0||
src|contrib|/contrib/bind9/bin/named/update.c|f|0|10000|0644|0|4d434abf0983df9bc47435670d307fa882ef4b348ed8ca90928d250f42ea0757|
src|contrib|/contrib/bind9/lib/dns/openssldsa_link.c|f|0|10000|0644|0|c6805c39f3da2a06dd3f163f26c314a4692d4cd9a2d929c0acc88d736324f550|
src|contrib|/contrib/bind9/lib/dns/opensslrsa_link.c|f|0|10000|0644|0|fa0f7417ee9da42cc8d0fd96ad24e7a34125e05b5ae075bd6e3238f1c022a712|
...
FreeBSD/amd64 7.1-RELEASE update build complete. Please review
the list of build stamps printed above and the list of updated
files to confirm that they look sensible, then run
# sh -e approve.sh amd64 7.1-RELEASE
to sign the build.
....
Follow the previously mentioned procedure for approving the build:
[source,bash]
....
# sh -e scripts/approve.sh amd64 7.1-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Signing build for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to patch source directories for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:06 PDT 2009 Copying files to upload staging area for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Updating databases for FreeBSD/amd64 7.1-RELEASE
Wed Aug 26 12:50:07 PDT 2009 Cleaning staging area for FreeBSD/amd64 7.1-RELEASE
The FreeBSD/amd64 7.1-RELEASE update build has been signed and is
ready to be uploaded. Remember to run
# sh -e umountkey.sh
to unmount the decrypted key once you have finished signing all
the new builds.
....
After you have approved the build, start the upload of the software:
[source,bash]
....
# cd /usr/local/freebsd-update-server
# sh scripts/upload.sh amd64 7.1-RELEASE
....
For reference, the entire run of link:../../../source/articles/freebsd-update-server/diff.txt[diff.sh] is attached.
[[tips]]
[.title]
== Tips
* If a custom release is built using the native `make release` link:{releng}#release-build[procedure], the `freebsd-update-server` code will support your release. As an example, you can build a release without ports or documentation by removing the relevant functionality in the `findextradocs ()` and `addextradocs ()` subroutines and by changing the download location in `fetchiso ()`, in [.filename]#scripts/build.subr#. As a last step, change the man:sha256[1] hash in [.filename]#build.conf# for your respective release and architecture, and you are ready to build your custom release.
+
[.programlisting]
....
# Compare ${WORKDIR}/release and ${WORKDIR}/$1, identify which parts
# of the world|doc subcomponent are missing from the latter, and
# build a tarball out of them.
findextradocs () {
}
# Add extra docs to ${WORKDIR}/$1
addextradocs () {
}
....
* Adding `-j _NUMBER_` to the `buildworld` and `obj` targets in [.filename]#scripts/build.subr# may speed up processing, depending on the hardware used. Using these options on other targets is not recommended, as they may render the build unreliable.
+
[.programlisting]
....
> # Build the world
log "Building world"
cd /usr/src &&
make -j 2 ${COMPATFLAGS} buildworld 2>&1
# Distribute the world
log "Distributing world"
cd /usr/src/release &&
make -j 2 obj &&
make ${COMPATFLAGS} release.1 release.2 2>&1
....
* Create an appropriate link:{handbook}#network-dns[DNS] SRV record for the update server, and add further servers with different weights. Using this facility you can provide update mirrors. This tip is not necessary unless you wish to provide a redundant service.
+
[.programlisting]
....
_http._tcp.update.myserver.com. IN SRV 0 2 80 host1.myserver.com.
SRV 0 1 80 host2.myserver.com.
SRV 0 0 80 host3.myserver.com.
....
// Source: diguage/leetcode, docs/0219-contains-duplicate-ii.adoc (Apache-2.0)

= 219. Contains Duplicate II
https://leetcode.com/problems/contains-duplicate-ii/[LeetCode - Contains Duplicate II]
Given an array of integers and an integer _k_, find out whether there are two distinct indices _i_ and _j_ in the array such that *nums[i] = nums[j]* and the *absolute* difference between _i_ and _j_ is at most _k_.
*Example 1:*
[subs="verbatim,quotes,macros"]
----
*Input:* nums = [1,2,3,1], k = 3
*Output:* true
----
*Example 2:*
[subs="verbatim,quotes,macros"]
----
*Input:* nums = [1,0,1,1], k = 1
*Output:* true
----
*Example 3:*
[subs="verbatim,quotes,macros"]
----
*Input:* nums = [1,2,3,1,2,3], k = 2
*Output:* false
----
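One common approach, sketched here as a standalone illustration (the method name `containsNearbyDuplicate` follows LeetCode's convention), maintains a sliding window of at most _k_ previous values in a `HashSet`; a failed `add` means a duplicate exists within distance _k_:

```java
import java.util.HashSet;
import java.util.Set;

public class ContainsDuplicateII {
    public static boolean containsNearbyDuplicate(int[] nums, int k) {
        Set<Integer> window = new HashSet<>();
        for (int i = 0; i < nums.length; i++) {
            if (i > k) {
                // Slide the window: drop the element that is now more than k away.
                window.remove(nums[i - k - 1]);
            }
            // add returns false if the value is already in the window.
            if (!window.add(nums[i])) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(containsNearbyDuplicate(new int[]{1, 2, 3, 1}, 3));       // true
        System.out.println(containsNearbyDuplicate(new int[]{1, 0, 1, 1}, 1));       // true
        System.out.println(containsNearbyDuplicate(new int[]{1, 2, 3, 1, 2, 3}, 2)); // false
    }
}
```

This runs in O(n) time and O(min(n, k)) space, since the set never holds more than k+1 elements.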
[[src-0219]]
[{java_src_attr}]
----
include::{sourcedir}/_0219_ContainsDuplicateII.java[]
----
// Source: michalszynkiewicz/fault-tolerant-client, docs/modules/ROOT/nav.adoc (Apache-2.0)

* xref:index.adoc[Quarkus - Fault Tolerant Rest Client Reactive]

// Source: Animasingh123/openshift-docs, modules/migration-state-migration-cli.adoc (Apache-2.0)

// Module included in the following assemblies:
//
// * migrating_from_ocp_3_to_4/about-mtc-3-4.adoc
// * migration_toolkit_for_containers/about-mtc.adoc
[id="migration-state-migration-cli_{context}"]
= Migrating an application's state
You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). During a state migration, {mtc-full} ({mtc-short}) copies persistent volume (PV) data to the target cluster. PV references are not moved. The application pods continue to run on the source cluster.
If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using {mtc-short}.
You can migrate PV data from the source cluster to PVCs that are already provisioned in the target cluster by mapping PVCs in the `MigPlan` CR. This ensures that the target PVCs of migrated applications are synchronized with the source PVCs.
You can perform a one-time migration of Kubernetes objects that store application state.
[id="excluding-pvcs_{context}"]
== Excluding persistent volume claims
You can exclude persistent volume claims (PVCs) by adding the `spec.persistentVolumes.pvc.selection.action` parameter to the `MigPlan` custom resource (CR) after the persistent volumes (PVs) have been discovered.
.Prerequisites
* `MigPlan` CR with discovered PVs.
.Procedure
* Add the `spec.persistentVolumes.pvc.selection.action` parameter to the `MigPlan` CR and set its value to `skip`:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
name: <migplan>
namespace: openshift-migration
spec:
...
persistentVolumes:
- capacity: 10Gi
name: <pv_name>
pvc:
...
selection:
action: skip <1>
----
<1> `skip` excludes the PVC from the migration plan.
[id="mapping-pvcs_{context}"]
== Mapping persistent volume claims
You can map persistent volume claims (PVCs) by updating the `spec.persistentVolumes.pvc.name` parameter in the `MigPlan` custom resource (CR) after the persistent volumes (PVs) have been discovered.
.Prerequisites
* `MigPlan` CR with discovered PVs.
.Procedure
* Update the `spec.persistentVolumes.pvc.name` parameter in the `MigPlan` CR:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
name: <migplan>
namespace: openshift-migration
spec:
...
persistentVolumes:
- capacity: 10Gi
name: <pv_name>
pvc:
name: <source_pvc>:<destination_pvc> <1>
----
<1> Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration.
[id="migrating-kubernetes-objects_{context}"]
== Migrating Kubernetes objects
You can perform a one-time migration of Kubernetes objects that store an application's state.
[NOTE]
====
After migration, the `closed` parameter of the `MigPlan` CR is set to `true`. You cannot create another `MigMigration` CR for this `MigPlan` CR.
====
You add Kubernetes objects to the `MigPlan` CR by using the following options:
* Adding the Kubernetes objects to the `includedResources` section.
* Using the `labelSelector` parameter to reference labeled Kubernetes objects.
If you set both parameters, the label is used to filter the included resources, for example, to migrate `Secret` and `ConfigMap` resources with the label `app: frontend`.
.Procedure
* Update the `MigPlan` CR:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
name: <migplan>
namespace: openshift-migration
spec:
includedResources: <1>
- kind: <Secret>
group: ""
- kind: <ConfigMap>
group: ""
...
labelSelector:
matchLabels:
<app: frontend> <2>
----
<1> Specify the `kind` and `group` of each resource.
<2> Specify the label of the resources to migrate.
// Source: viniciusccarvalho/spring-cloud-sockets, README.adoc (Apache-2.0)

= Spring Cloud Sockets [We need a better name] image:https://travis-ci.org/viniciusccarvalho/spring-cloud-sockets.svg?branch=master["Build Status", link="https://travis-ci.org/viniciusccarvalho/spring-cloud-sockets"] image:https://codecov.io/gh/viniciusccarvalho/spring-cloud-sockets/branch/master/graph/badge.svg["Coverage", link="https://codecov.io/gh/viniciusccarvalho/spring-cloud-sockets"]
This repo is a POC on how to expose Spring Services via http://rsocket.io[RSocket] protocol.
This is a very early draft, I've cut a lot of corners to get something that at least runs.
It's meant to attract interest and feedback on the programming model
If you are wondering why another remote protocol when we have things such as gRPC and REST already
please take a moment to read RSocket https://github.com/rsocket/rsocket/blob/master/Motivations.md[motivations] to understand
how this is different.
== Programming Model
RSocket is not a traditional RPC protocol where a method with several parameters can be mapped directly onto a remote endpoint.
Instead it embraces the reactive, asynchronous nature of message passing; methods are therefore always expected to receive one parameter and return zero or one result.
* Fire and Forget
* Request Response
* Request Stream
* Channel (bi-directional)
Following Spring conventions, this project aims to allow developers to annotate their classes
with a series of annotations that would expose the methods to be invoked over rsocket.
=== Common Reactive Annotations Properties
* path: The path where the service method is exposed
* mimeType: The mimeType used for encoding the payload (currently only `application/java-serialized-object` and `application/json` are supported)
=== About method handling
Reactive sockets is about messaging passing, so because of this any method annotated with any of the exchange modes explained bellow
must follow the convention:
* At least one argument is needed
* In case of a single argument, that is mapped to the Payload from RSocket
* In case of multiple arguments, *at most* one must be annotated with `@Payload`
* `@RequestManyMapping` and `@RequestStreamMapping` methods must return a type `Flux`
* `@RequestStreamMapping` methods must receive a type `Flux` as the payload argument (resolved as explained before)
=== @OneWayMapping
One-way methods map to fire-and-forget in RSocket, so any result returned by your
service will be ignored and not sent over the wire.
@OneWayMapping(path="/receive", mimeType="application/json")
public void receive(Foo foo){
}
```
=== @RequestOneMapping
This is very similar to a traditional RPC scenario
```java
@RequestOneMapping(path="/convert", mimeType="application/json")
public Bar convert(Foo foo){
}
```
=== @RequestManyMapping
RequestMany, or request/stream, is where reactive streams start to show their value in this protocol.
The server can open a channel and push data to the client as it arrives, while the client
controls backpressure using http://projectreactor.io[Reactor]'s backpressure support.
In the example below, a service emits temperature data over a live connection to the client.
```java
@RequestManyMapping(path="/temperature", mimeType="application/json")
public Flux<Temperature> temperature(Location location){
return temperatureService.stream(location);
}
```
=== @RequestStreamMapping
In this mode, client and server keep a full duplex communication.
An example could be a client sending a Stream of Strings that the server hashes and returns to the client
```java
@RequestStreamMapping(path="/hash", mimeType="application/json")
public Flux<Integer> hash(Flux<String> flux){
return flux.map(s -> s.hashCode());
}
```
== Configuring Server Side
On the server side, add the `@EnableReactiveSockets` annotation to a Spring configuration class:
```java
@SpringBootApplication
@EnableReactiveSockets
public class MyReactiveSocketsApplication {
}
```
Any class found via Boot's classpath scanning that contains any of the annotations above will be registered as a remote endpoint.
=== Configuring Transport and Host/Port
As with many Boot applications, if not configured otherwise, a default TCP transport is used, bound to `localhost` on port `5000`.
To override the default properties just pass them on an application.properties file:
```
reactive.socket.port=9000
reactive.socket.host=10.10.10.1
```
To use a different transport just provide a `ServerTransport` as a bean in your application, refer to https://github.com/rsocket/rsocket-java[rsocket-java] to see the available implementations.
== Configuring the Client
To use the client, just pass an interface of the service annotated with the same annotations.
```java
public interface MyService {
@RequestStreamMapping(value = "/hash", mimeType = "application/json")
public Flux<Integer> hash(Flux<String> flux);
}
ReactiveSocketClient client = new ReactiveSocketClient("localhost", 5000);
Flux<String> flux = Flux.just("A", "B");
MyService service = client.create(MyService.class);
service.hash(flux).subscribe(System.out::println);
```
== Short term goals
* Provide a functional model to both server and client and not only annotation style
* Create a Starter
* `@EnableReactiveSocketClient` to allow injection of a client into the application context, as well as scanning for services as is done in Feign
* Tests, Tests, Tests
* Improve a lot the boilerplate code, revisit serialization options
* Explore resume operations and backpressure
// Source: spring-operator/spring-cloud-release-tools, spring-cloud-release-tools-spring/src/test/resources/projects/spring-cloud-release/docs/src/main/asciidoc/README.adoc (Apache-2.0)

include::intro.adoc[]
== Contributing
include::https://raw.githubusercontent.com/spring-cloud/spring-cloud-build/master/docs/src/main/asciidoc/contributing.adoc[]
== Building and Deploying
Since there is no code to compile in the starters, they do not need to be compiled, but a compiler has to be available because they are built and deployed as JAR artifacts. To install locally:
----
$ mvn install -s .settings.xml
----
and to deploy snapshots to repo.spring.io:
----
$ mvn install -DaltSnapshotDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-snapshot-local
----
for a RELEASE build use
----
$ mvn install -DaltReleaseDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-release-local
----
and for Maven Central use
----
$ mvn install -P central -DaltReleaseDeploymentRepository=sonatype-nexus-staging::default::https://oss.sonatype.org/service/local/staging/deploy/maven2
----
(the "central" profile is available for all projects in Spring Cloud and it sets up the gpg jar signing, and the repository has to be specified separately for this project because it is a parent of the starter parent which users in turn have as their own parent).
// Source: Oblomov/OpenCL-Docs, api/opencl_assoc_spec.asciidoc (Apache-2.0 / CC-BY-4.0 / MIT)

// Copyright 2017-2020 The Khronos Group. This work is licensed under a
// Creative Commons Attribution 4.0 International License; see
// http://creativecommons.org/licenses/by/4.0/
= Associated OpenCL specification
[[spirv-il]]
== SPIR-V Intermediate Language
OpenCL 2.1 and 2.2 require support for the SPIR-V intermediate
language that allows offline compilation to a binary
format that may be consumed by the {clCreateProgramWithIL} interface.
The OpenCL specification includes a specification for the SPIR-V
intermediate language as a cross-platform input language.
In addition, platform vendors may support their own IL if this is
appropriate.
The OpenCL runtime will return a list of supported IL versions using the
{CL_DEVICE_IL_VERSION} or {CL_DEVICE_ILS_WITH_VERSION} parameter to
the {clGetDeviceInfo} query.
[[opencl-extensions]]
== Extensions to OpenCL
In addition to the specification of core features, OpenCL provides a number
of extensions to the API, kernel language or intermediate representation.
These features are defined in the OpenCL extension specification document.
Extensions defined against earlier versions of the OpenCL specifications,
whether the API or language specification, are defined in the matching
versions of the extension specification document.
[[opencl-c-kernel-language]]
== The OpenCL C Kernel Language
The OpenCL C kernel language is not defined in the OpenCL unified
specification.
The OpenCL C kernel languages are instead defined in the OpenCL 1.0,
OpenCL 1.1, OpenCL 1.2, OpenCL C 2.0 Kernel Language, and OpenCL C 3.0
Kernel Language specifications.
When OpenCL devices support one or more versions of the OpenCL C
kernel language (see {CL_DEVICE_OPENCL_C_VERSION} and
{CL_DEVICE_OPENCL_C_ALL_VERSIONS}),
OpenCL program objects may be created by passing OpenCL C source
strings to {clCreateProgramWithSource}.
// Source: lvitaly/framework, docs/migration/2.6-to-3.0-lift-screen.adoc (Apache-2.0)

:idprefix:
:idseparator: -
:toc: right
:toclevels: 2
= Migrating `LiftScreen` from Lift 2.6 to Lift 3.0
The 2.x series of Lift brought a lot of change and innovation. In particular,
while it started with an approach based on the `bind` helper that transformed
namespaced XHTML elements into the values that the developer wanted to display,
it progressed to an HTML5-based approach built around CSS selector transforms.
As of Lift 3.0, CSS selector transforms are the only supported transforms, so
as to keep the core of the framework relatively lean and encourage proper HTML
usage.
One of the Lift components that leveraged the `bind`-style transforms heavily
was `LiftScreen`. As of Lift 3.0, it's been replaced with what in Lift 2.6 was
named `CssBoundLiftScreen`, which is a version of `LiftScreen` that uses CSS
selector transforms instead of `bind`-style transforms. Following is a
breakdown of the things you need to do if you were using `LiftScreen` and want
to upgrade to Lift 3.0.
== `formName`
In Lift 3.0, you need to provide a `formName` to your screen. For the most
straightforward compatibility with the current form implementation, you should
be able to simply set it to "":
[.lift-30]
```scala
val formName = ""
```
// Something more about what formName is for would be good here.
== Bind Points
In the old `LiftScreen`, bind points were elements with the namespace `prefix`
and various names, e.g. `wizard:fields` for the container of all fields. Lift
3.0 instead looks for certain CSS classes in elements. Here's a mapping from
old `wizard:*` elements to CSS classes:
.Lift 2.6 wizard element to Lift 3.0 CSS class mapping
|=========================
| Lift 2.6 Element | Lift 3.0 CSS Class
| `wizard:screen_info` | `screenInfo`
| `wizard:screen_number` | `screenNumber`
| `wizard:total_screens` | `totalScreens`
| `wizard:wizard_top` | `wizardTop`
| `wizard:screen_top` | `screenTop`
| `wizard:errors` | `globalErrors`
| `wizard:item` (within `errors`) | `error`
| `wizard:fields` | `fields`
| `wizard:line` | `fieldContainer`
| `wizard:label` | `label`
| `wizard:for` | unneeded (label is automatically given a `for` attribute)
| `wizard:help` | `help`
| `wizard:field_errors` | `errors`
| `wizard:error` | `error`
| `wizard:form` | `value`
| `wizard:prev` | `prev`
| `wizard:cancel` | `cancel`
| `wizard:next` | `next`
| `wizard:wizard_bottom` | `wizardBottom`
| `wizard:screen_bottom` | `screenBottom`
| `wizard:bind` | unnecessary (contents are put into the elements with appropriate classes)
|=========================
Generally speaking, you can annotate the container element or the element that
will have a given value directly with the class of the content it should
contain, rather than needing an extra container with the class like the old
`wizard:*` elements. For example, where before you had:
[.lift-26]
.Lift 2.6 global error markup
```html
<wizard:errors>
<div>
<ul>
<wizard:item>
<li><wizard:bind></wizard:bind></li>
</wizard:item>
</ul>
</div>
</wizard:errors>
```
In Lift 3.0, you can remove all the `wizard:*` elements and instead put the
classes directly on the remaining elements:
[.lift-30]
.Lift 3.0 global error markup
```html
<div class="globalErrors">
<ul>
<li class="error">
placeholder text, will be replaced by the error message
</li>
</ul>
</div>
```
In fact, you can even eliminate the top-level `div`, if you'd like, by putting
the `globalErrors` class on the `ul`:
[.lift-30]
```html
<ul class="globalErrors">
<li class="error">
placeholder text, will be replaced by the error message
</li>
</ul>
```
If you don't like these class names, you can customize them by overriding the
`cssClassBinding` that you want to use in your `LiftScreen` subclass and
returning a new instance of `CssClassBinding` with the appropriate CSS classes
set up:
[.lift-30]
.Dasherize class names
```scala
protected override lazy val cssClassBinding = new CssClassBinding {
override val screenInfo = "screen-info"
override val screenNumber = "screen-number"
override val totalScreens = "total-screens"
override val wizardTop = "wizard-top"
override val screenTop = "screen-top"
override val wizardBottom = "wizard-bottom"
override val screenBottom = "screen-bottom"
override val globalErrors = "global-errors"
override val fieldContainer = "field-container"
}
```
Above, we create a new version of `CssClassBinding` that uses dashes instead of
camel-case between words.
== Further Help
That's it! If you run into any issues porting your screen over to Lift 3.0's
`LiftScreen`, please ask on the Lift mailing list and you should find willing
helpers.
| 29.940476 | 109 | 0.672167 |
bf93146c35c0fe2d399a460e8a110cfd3f849d31 | 747 | adoc | AsciiDoc | _posts/2019-05-09-Summer-19-Release-Developers-Perspective.adoc | arshakian/arshakian.github.io | 9905ea190ad07eb2a0279cac31ebd0f2f0ba6f1c | [
"MIT"
] | null | null | null | _posts/2019-05-09-Summer-19-Release-Developers-Perspective.adoc | arshakian/arshakian.github.io | 9905ea190ad07eb2a0279cac31ebd0f2f0ba6f1c | [
"MIT"
] | null | null | null | _posts/2019-05-09-Summer-19-Release-Developers-Perspective.adoc | arshakian/arshakian.github.io | 9905ea190ad07eb2a0279cac31ebd0f2f0ba6f1c | [
"MIT"
] | null | null | null | = Summer '19 Release - Developer's Perspective
:hp-image: https://secure.meetupstatic.com/photos/member/2/9/d/9/highres_273670713.jpeg
:hp-tags: Development, Consultancy, Thoughts
It's been a while since my last post - and lots of things have happened both in my life and in Salesforce - just another proof that I'm connected to the cloud (by any means) :)
Anyway, most posts about a new release just rehash the standard release notes with a few hype words - top 5, killer changes, and so on. Another issue with them is that they are *really* focused on development and not on overall platform changes - which are sometimes far more important than new and changed Chatter-related Apex classes.
Without further introduction - here we go:
== | 67.909091 | 343 | 0.776439 |
bd11c18caa227be76a7389f8ae49a6f1260cf109 | 86 | adoc | AsciiDoc | test/fixtures/basic.adoc | aerostitch/asciidoctor | 0810fd72d811bc954f958e3dec974ed5b6bd8f76 | [
"MIT"
] | 3,998 | 2015-01-03T11:01:47.000Z | 2022-03-31T01:47:39.000Z | test/fixtures/basic.adoc | aerostitch/asciidoctor | 0810fd72d811bc954f958e3dec974ed5b6bd8f76 | [
"MIT"
] | 2,512 | 2015-01-01T23:52:13.000Z | 2022-03-31T08:55:10.000Z | test/fixtures/basic.adoc | aerostitch/asciidoctor | 0810fd72d811bc954f958e3dec974ed5b6bd8f76 | [
"MIT"
] | 851 | 2015-01-04T19:26:45.000Z | 2022-03-30T14:13:29.000Z | = Document Title
Doc Writer <doc.writer@asciidoc.org>
v1.0, 2013-01-01
Body content.
| 14.333333 | 36 | 0.744186 |
cfcd0eeea04c5f6feb92cfad62209fbaf0a18e51 | 2,693 | asciidoc | AsciiDoc | docs/timelion/getting-started/timelion-math.asciidoc | elasticsearch-cn/kibana | c1c19dd2b2d658f35aeac5bcacaaf20c584c872d | [
"Apache-2.0"
] | 15 | 2017-06-06T13:53:56.000Z | 2022-03-31T06:51:39.000Z | docs/timelion/getting-started/timelion-math.asciidoc | elasticsearch-cn/kibana | c1c19dd2b2d658f35aeac5bcacaaf20c584c872d | [
"Apache-2.0"
] | 25 | 2017-08-08T01:50:51.000Z | 2018-05-12T13:41:32.000Z | docs/timelion/getting-started/timelion-math.asciidoc | elasticsearch-cn/kibana | c1c19dd2b2d658f35aeac5bcacaaf20c584c872d | [
"Apache-2.0"
] | 37 | 2017-06-02T14:44:59.000Z | 2020-12-18T23:04:33.000Z | [[timelion-math]]
=== Using mathematical functions
In the previous two sections, you learned how to create and customize Timelion visualizations. This section gives you a taste of Timelion's mathematical functions. We will continue to use the https://www.elastic.co/downloads/beats/metricbeat[Metricbeat] data to create a new Timelion visualization based on inbound and outbound network traffic. To start, add a new Timelion visualization.
At the top of the menu, click `Add` to add a second visualization. Once it is added to the current worksheet, you will notice that the query input box is replaced with the default `.es(*)` expression, because the query is associated with the visualization you currently have selected.
image::images/timelion-math01.png[]
{nbsp}
To track the inbound network traffic, your first expression will calculate the maximum value of `system.network.in.bytes`. Enter the following expression in the Timelion query box:
[source,text]
----------------------------------
.es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.in.bytes)
----------------------------------
image::images/timelion-math02.png[]
{nbsp}
Monitoring network traffic is far more valuable when you can assess the rate of change. The `derivative()` function does exactly that: it plots how a value changes over time. You can use it by simply appending the function to the end of the expression. Update your visualization with the following expression:
[source,text]
----------------------------------
.es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.in.bytes).derivative()
----------------------------------
image::images/timelion-math03.png[]
{nbsp}
For the outbound traffic, you need to add a similar metric for `system.network.out.bytes`. Since outbound data leaves your machine, it is common to represent this metric as a negative number. The `.multiply()` function multiplies a series by a number; the result is again a series (or a list of series). For example, you can use `.multiply(-1)` to convert the outbound traffic to negative values. Update your visualization with the following expression:
[source,text]
----------------------------------
.es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.in.bytes).derivative(), .es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.out.bytes).derivative().multiply(-1)
----------------------------------
image::images/timelion-math04.png[]
{nbsp}
To make the visualization easier to read, convert the units from bytes to megabytes (MB). Timelion provides a `.divide()` function for this. It accepts the same kind of argument as `.multiply()` and divides the whole series by the given divisor. Update your visualization with the following expression:
[source,text]
----------------------------------
.es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.in.bytes).derivative().divide(1048576), .es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.out.bytes).derivative().multiply(-1).divide(1048576)
----------------------------------
image::images/timelion-math05.png[]
{nbsp}
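For intuition only (this is not Kibana code), the per-series arithmetic behind these functions is simple; here is a small Java sketch of what `.derivative()` followed by `.divide(1048576)` computes, with the class and sample values being illustrative:

```java
import java.util.Arrays;

class TimelionMathSketch {

    // Like .derivative(): difference between consecutive samples
    static double[] derivative(double[] series) {
        double[] out = new double[series.length - 1];
        for (int i = 1; i < series.length; i++) {
            out[i - 1] = series[i] - series[i - 1];
        }
        return out;
    }

    // Like .divide(1048576): scale every point, e.g. bytes -> megabytes
    static double[] divide(double[] series, double divisor) {
        double[] out = new double[series.length];
        for (int i = 0; i < series.length; i++) {
            out[i] = series[i] / divisor;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] cumulativeBytes = {0, 2_097_152, 6_291_456};
        double[] mbPerInterval = divide(derivative(cumulativeBytes), 1_048_576);
        System.out.println(Arrays.toString(mbPerInterval)); // [2.0, 4.0]
    }
}
```

Timelion applies the same element-wise operations to each Elasticsearch-backed series in the expression.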
We can use the styling functions covered in the previous sections, `.title()`, `.label()`, `.color()`, `.lines()`, and `.legend()`, to make the visualization look much nicer. Update your visualization with the following expression:
[source,text]
----------------------------------
.es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.in.bytes).derivative().divide(1048576).lines(fill=2, width=1).color(green).label("Inbound traffic").title("Network traffic (MB/s)"), .es(index=metricbeat*, timefield=@timestamp, metric=max:system.network.out.bytes).derivative().multiply(-1).divide(1048576).lines(fill=2, width=1).color(blue).label("Outbound traffic").legend(columns=2, position=nw)
----------------------------------
image::images/timelion-math06.png[]
{nbsp}
Save all your changes and continue to the next section to learn about conditional logic and tracking trends.
| 43.435484 | 419 | 0.678426 |
4c7e57f7922a8f3296cebd34d0c8fc5664381a84 | 471 | adoc | AsciiDoc | doc/examples.adoc | stetre/lunasdl | 32e98fd9ebb030edd23659bb2bb12f0ad9f8a436 | [
"MIT"
] | 2 | 2015-05-03T06:06:07.000Z | 2020-07-03T04:48:05.000Z | doc/examples.adoc | stetre/lunasdl | 32e98fd9ebb030edd23659bb2bb12f0ad9f8a436 | [
"MIT"
] | null | null | null | doc/examples.adoc | stetre/lunasdl | 32e98fd9ebb030edd23659bb2bb12f0ad9f8a436 | [
"MIT"
] | null | null | null | == Examples
This section describes the examples that are contained in the `examples/` directory
of the official LunaSDL release.
Some of them are accompanied with SDL diagrams, which are for illustratory
purposes only.
include::example_helloworld.adoc[]
include::example_pingpong.adoc[]
include::example_udppingpong.adoc[]
include::example_webdownload.adoc[]
include::example_database.adoc[]
include::example_priority.adoc[]
include::example_procedure.adoc[]
| 18.84 | 83 | 0.798301 |
3b957e725d6ca75b344d598be0538801d47f0234 | 9,353 | asciidoc | AsciiDoc | documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module-How-to-make-virtual-asset.asciidoc | jambulud/devonfw-testing | 8a38fb5c2d0904bdf0ec71e4686b3b4746714b27 | [
"Apache-2.0"
] | null | null | null | documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module-How-to-make-virtual-asset.asciidoc | jambulud/devonfw-testing | 8a38fb5c2d0904bdf0ec71e4686b3b4746714b27 | [
"Apache-2.0"
] | null | null | null | documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module-How-to-make-virtual-asset.asciidoc | jambulud/devonfw-testing | 8a38fb5c2d0904bdf0ec71e4686b3b4746714b27 | [
"Apache-2.0"
] | null | null | null | = How to make virtual asset
This can be done in 4 ways:
* Record all traffic (Mappings and Responses) comes through proxy - by UI
* Record all traffic (Mappings and Responses) that comes through the proxy - by UI
* Record all traffic (Mappings and Responses) that comes through the proxy - by code
* Create Mappings and Responses manually using text files
* Create Mappings and Responses manually in code
Full article here http://wiremock.org/docs/record-playback/[Wiremock record-playback].
First, start an instance of http://wiremock.org/docs/running-standalone[WireMock running standalone]. Once that’s running visit the recorder UI page at http://localhost:8080/__admin/recorder (assuming you started WireMock on the default port of 8080).
image::images/image77.png[]
Enter the URL you wish to record from in the target URL field and click the Record button. You can use http://example.mocklab.io to try it out.
Now you need to make a request through WireMock to the target API so that it can be recorded. If you’re using the example URL, you can generate a request using curl:
$ curl http://localhost:8080/recordables/123
Now click stop. You should see a message indicating that one stub was captured.
You should also see that a file has been created called something like _recordables_123-40a93c4a-d378-4e07-8321-6158d5dbcb29.json_ under the mappings directory created when WireMock started up, and that a new mapping has appeared at http://localhost:8080/__admin/mappings.
Requesting the same URL again (possibly disabling your wifi first if you want firm proof) will now serve the recorded result:
----
$ curl http://localhost:8080/recordables/123
{
"message": "Congratulations on your first recording!"
}
----
== Record all traffic (Mappings and Responses) that comes through the proxy - by code
Here is an example of how such a recording can be achieved:
----
@Test
public void startRecording() {
SnapshotRecordResult recordedMappings;
DriverManager.getDriverVirtualService()
.start();
DriverManager.getDriverVirtualService()
.startRecording("http://example.mocklab.io");
recordedMappings = DriverManager.getDriverVirtualService()
.stopRecording();
BFLogger.logDebug("Recorded messages: " + recordedMappings.toString());
}
----
== Create Mappings and Responses manually using text files
WireMock standalone also loads stub mappings from JSON files placed in its `mappings` directory, so stubs can be maintained as plain text files without writing any code.
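As a minimal sketch of the file-based approach (this follows WireMock's standalone stub-mapping format; the file name `some-thing.json` and the response body are illustrative, mirroring the REST example below), a JSON file dropped into the server's `mappings` directory is loaded as a stub at startup:

```json
{
  "request": {
    "method": "GET",
    "url": "/some/thing"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "body": "{ \"FahrenheitToCelsiusResponse\": { \"FahrenheitToCelsiusResult\": 37.7777777777778 } }"
  }
}
```

Larger response bodies can instead be kept in the sibling `__files` directory and referenced from the mapping with `bodyFileName`.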
== Create Mappings and Responses manually in code
Link to full file structure: https://github.com/devonfw/devonfw-testing/blob/develop/mrchecker-framework-modules/mrchecker-webapi-module/src/test/java/com/capgemini/mrchecker/endpoint/rest/REST_FarenheitToCelsiusMethod_Test.java[REST_FarenheitToCelsiusMethod_Test.java]
=== Start up Virtual Server
----
public void startVirtualServer() {
// Start Virtual Server
WireMockServer driverVirtualService = DriverManager.getDriverVirtualService();
// Get Virtual Server running http and https ports
int httpPort = driverVirtualService.port();
int httpsPort = driverVirtualService.httpsPort();
// Print is Virtual server running
BFLogger.logDebug("Is Virtual server running: " + driverVirtualService.isRunning());
String baseURI = "http://localhost";
endpointBaseUri = baseURI + ":" + httpPort;
}
----
=== Plug in virtual asset
REST_FarenheitToCelsiusMethod_Test.java
----
public void activateVirtualAsset() {
/*
* ----------
* Mock response. Map request with virtual asset from file
* -----------
*/
BFLogger.logInfo("#1 Create Stub content message");
BFLogger.logInfo("#2 Add resource to virtual server");
String restResourceUrl = "/some/thing";
String restResponseBody = "{ \"FahrenheitToCelsiusResponse\":{\"FahrenheitToCelsiusResult\":37.7777777777778}}";
new StubREST_Builder //For active virtual server ...
.StubBuilder(restResourceUrl) //Activate mapping, for this Url AND
.setResponse(restResponseBody) //Send this response AND
.setStatusCode(200) // With status code 200 FINALLY
.build(); //Set and save mapping.
}
----
Link to full file structure: https://github.com/devonfw/devonfw-testing/blob/develop/mrchecker-framework-modules/mrchecker-webapi-module/src/main/java/com/capgemini/mrchecker/webapi/endpoint/stubs/StubREST_Builder.java[StubREST_Builder.java]
Source link to http://wiremock.org/docs/stubbing/[How to create Stub].
StubREST_Builder.java
----
public class StubREST_Builder {
// required parameters
private String endpointURI;
// optional parameters
private int statusCode;
public String getEndpointURI() {
return endpointURI;
}
public int getStatusCode() {
return statusCode;
}
private StubREST_Builder(StubBuilder builder) {
this.endpointURI = builder.endpointURI;
this.statusCode = builder.statusCode;
}
// Builder Class
public static class StubBuilder {
// required parameters
private String endpointURI;
// optional parameters
private int statusCode = 200;
private String response = "{ \"message\": \"Hello\" }";
public StubBuilder(String endpointURI) {
this.endpointURI = endpointURI;
}
public StubBuilder setStatusCode(int statusCode) {
this.statusCode = statusCode;
return this;
}
public StubBuilder setResponse(String response) {
this.response = response;
return this;
}
public StubREST_Builder build() {
// GET
DriverManager.getDriverVirtualService()
.givenThat(
// Given that request with ...
get(urlMatching(this.endpointURI))
.withHeader("Content-Type", equalTo(ContentType.JSON.toString()))
// Return given response ...
.willReturn(aResponse()
.withStatus(this.statusCode)
.withHeader("Content-Type", ContentType.JSON.toString())
.withBody(this.response)
.withTransformers("body-transformer")));
// POST
DriverManager.getDriverVirtualService()
.givenThat(
// Given that request with ...
post(urlMatching(this.endpointURI))
.withHeader("Content-Type", equalTo(ContentType.JSON.toString()))
// Return given response ...
.willReturn(aResponse()
.withStatus(this.statusCode)
.withHeader("Content-Type", ContentType.JSON.toString())
.withBody(this.response)
.withTransformers("body-transformer")));
// PUT
DriverManager.getDriverVirtualService()
.givenThat(
// Given that request with ...
put(urlMatching(this.endpointURI))
.withHeader("Content-Type", equalTo(ContentType.JSON.toString()))
// Return given response ...
.willReturn(aResponse()
.withStatus(this.statusCode)
.withHeader("Content-Type", ContentType.JSON.toString())
.withBody(this.response)
.withTransformers("body-transformer")));
// DELETE
DriverManager.getDriverVirtualService()
.givenThat(
// Given that request with ...
delete(urlMatching(this.endpointURI))
.withHeader("Content-Type", equalTo(ContentType.JSON.toString()))
// Return given response ...
.willReturn(aResponse()
.withStatus(this.statusCode)
.withHeader("Content-Type", ContentType.JSON.toString())
.withBody(this.response)
.withTransformers("body-transformer")));
// CATCH any other requests
DriverManager.getDriverVirtualService()
.givenThat(
any(anyUrl())
.atPriority(10)
.willReturn(aResponse()
.withStatus(404)
.withHeader("Content-Type", ContentType.JSON.toString())
.withBody("{\"status\":\"Error\",\"message\":\"Endpoint not found\"}")
.withTransformers("body-transformer")));
return new StubREST_Builder(this);
}
}
}
---- | 39.970085 | 272 | 0.582059 |
b93436b4f43a053b30c2e0b8e8dd1877c2b04bf6 | 1,278 | adoc | AsciiDoc | documentation/modules/proc-create-auth-service-custom-resource-olm-ui.adoc | rkpattnaik780/enmasse | 5d5166ea47cc05012d614187afb3a5b74b364b7c | [
"Apache-2.0"
] | 1 | 2019-08-11T02:41:17.000Z | 2019-08-11T02:41:17.000Z | documentation/modules/proc-create-auth-service-custom-resource-olm-ui.adoc | rkpattnaik780/enmasse | 5d5166ea47cc05012d614187afb3a5b74b364b7c | [
"Apache-2.0"
] | 1 | 2020-02-12T11:23:01.000Z | 2020-02-12T11:23:01.000Z | documentation/modules/proc-create-auth-service-custom-resource-olm-ui.adoc | rkpattnaik780/enmasse | 5d5166ea47cc05012d614187afb3a5b74b364b7c | [
"Apache-2.0"
] | null | null | null | // Module included in the following assemblies:
//
// assembly-configuring-olm.adoc
// rhassemblies/assembly-configuring-olm-rh.adoc
[id="proc-create-auth-service-custom-resource-olm-ui-{context}"]
= Creating an authentication service custom resource using the {KubePlatform} console
You must create a custom resource for an authentication service to use {ProductName}. This example uses the standard authentication service.
.Procedure
. In the top right, click the *Plus* icon (+). The Import YAML window opens.
. From the top left drop-down menu, select the `{ProductNamespace}` project.
. Copy the following code:
+
[source,yaml,options="nowrap",subs="attributes"]
----
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
name: standard-authservice
spec:
type: standard
----
. In the Import YAML window, paste the copied code and click *Create*. The AuthenticationService overview page is displayed.
. Click *Workloads > Pods*. In the *Readiness* column, the Pod status is `Ready` when the custom resource has been deployed.
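The same custom resource can also be created from the command line instead of the console. A sketch (the namespace `enmasse-infra` stands in for `{ProductNamespace}` and the file name is arbitrary; save the YAML above into the file first):

```shell
# Create the authentication service custom resource from a file
oc create -f standard-authservice.yaml -n enmasse-infra

# Watch the pods until the authentication service reports Ready
oc get pods -n enmasse-infra -w
```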
.Next steps
* link:{BookUrlBase}{BaseProductVersion}{BookNameUrl}#proc-create-auth-service-custom-resource-olm-ui-messaging[Create an infrastructure configuration custom resource using the {KubePlatform} console]
| 35.5 | 200 | 0.776995 |
f4bfe74f3379bca5e63d7ceb208fc1387668ddb0 | 4,775 | adoc | AsciiDoc | doc/release_notes/Release_notes_3.6.adoc | berezkin88/xs2a | 187e5f299f81f5058001d4695fb1bd275bcaf160 | [
"Apache-2.0"
] | 120 | 2018-04-19T13:15:59.000Z | 2022-03-27T21:50:38.000Z | doc/release_notes/Release_notes_3.6.adoc | berezkin88/xs2a | 187e5f299f81f5058001d4695fb1bd275bcaf160 | [
"Apache-2.0"
] | 72 | 2018-05-04T14:03:18.000Z | 2022-03-10T14:00:22.000Z | doc/release_notes/Release_notes_3.6.adoc | berezkin88/xs2a | 187e5f299f81f5058001d4695fb1bd275bcaf160 | [
"Apache-2.0"
] | 85 | 2018-05-07T09:56:46.000Z | 2022-03-09T10:10:29.000Z | = Release notes v. 3.6
== Table of Contents
* Update version of jackson-databind to 2.9.9
* Forced mode for starting authorisation
* Change the logic of SpiResponseStatus to MessageErrorCode mapping
* Bugfix: Wrong links in start authorisation response
* Bugfix: Balances link is not present in Read Transaction List with Balances
* Bugfix: Ignore multilevel flag for one-off Consent on Account List of Available Accounts when SCA is not needed
* Bugfix: Changed the allowed length for some HTTP headers
* Bugfix: Error on creating AIS consent with availableAccountsWithBalances attribute in the access property
* Change String type of fields in Links to HrefType
* Bugfix: Global consent applies on all PSD2 related account information services
* Bugfix: List of Available Accounts Consent should apply only on Account List
* Bugfix: Fixed a typo in `preceding`
== Update version of jackson-databind to 2.9.9
Fixed a Polymorphic Typing issue that was discovered in FasterXML jackson-databind 2.x before 2.9.9.
https://nvd.nist.gov/vuln/detail/CVE-2019-12086[Additional information about this issue]
== Forced mode for starting authorisation
From now on, new field is present in bank_profile configuration - `startAuthorisationMode`. This field has 3 possible values:
- `auto` - allows using default flow (based on `tppExplicitAuthorisationPreferred` header and `signingBasketSupported`
bank_profile configuration property);
- `explicit` - forces explicit mode;
- `implicit` - forces implicit mode.
Default value is `auto`.
== Change the logic of SpiResponseStatus to MessageErrorCode mapping
From now on, XS2A will propagate error from SPI to the TPP based on message error code inside the
`de.adorsys.psd2.xs2a.core.error.TppMessage` instead of `de.adorsys.psd2.xs2a.spi.domain.response.SpiResponseStatus`.
This means that if `SpiResponse` will contain any errors (populated via `SpiResponseBuilder#error`), these errors will be
returned to the TPP in the response with error codes provided by the SPI Developer.
Order of the messages matters, as the HTTP status code in response will be determined based on the first error.
== Bugfix: Wrong links in start authorisation response
From now on, all authorisation responses will contain `scaStatus` link instead of `self` and `status`.
== Bugfix: Balances link is not present in Read Transaction List with Balances
From now on, the endpoint for reading transaction list (POST `/v1/accounts/{{account_id}}/transactions?withBalance=true`) returns correct response with link `balances`.
== Bugfix: Ignore multilevel flag for one-off Consent on Account List of Available Accounts when SCA is not needed
When a TPP sends a `Create AIS Consent` request (`POST /v1/consents`) for a one-off Consent on the Account List of Available Accounts, and for this request the ASPSP
returns a SpiInitiateAisConsentResponse with the `multilevelScaRequired` parameter set to true, then if the ASPSP Profile parameter `scaByOneTimeAvailableAccountsConsentRequired`
is set to false, the `multilevelScaRequired` parameter will be ignored because SCA is not needed at all.
== Bugfix: Changed the allowed length for some HTTP headers
From now on, when sending HTTP requests to XS2A, the maximum allowed length of the `tpp-redirect-uri` and `tpp-nok-redirect-uri`
headers is extended to 255 characters. The `authorization` header is not validated for length.
== Bugfix: Error on creating AIS consent with availableAccountsWithBalances attribute in the access property
From now on, TPP is able to create AIS consent with `availableAccountsWithBalances` attribute in the access property.
As a result, creation of AIS Consent with `allAccountsWithBalances` value in `availableAccounts` field is no longer allowed.
== Change String type of fields in Links to HrefType
From now on, the object `de.adorsys.psd2.xs2a.domain.Links` uses the new `HrefType` for all its fields to simplify serialization to JSON.
== Bugfix: Global consent applies on all PSD2 related account information services
From now on, if consent is global - it will imply a consent on all available accounts of the PSU on all PSD2 related account information services.
== Bugfix: List of Available Accounts Consent should apply only on Account List
From now on, a Consent on the Account List of Available Accounts can only retrieve the list of accounts (GET `v1/accounts`).
Any other information about account details, balances or transactions is not permitted; in that case the TPP will receive a 401 response code with a `CONSENT_INVALID` message.
A consent with the `availableAccounts` attribute has access to accounts without balances, and a consent with the `availableAccountsWithBalances` attribute has access to accounts with balances.
== Bugfix: Fixed a typo in `preceding`
Fixed a typo in the word `preceding` for the whole project.
| 56.845238 | 179 | 0.799162 |
df7f7a04bed68c6e25693622560dae08f47752fa | 1,844 | adoc | AsciiDoc | src/main/docs/guide/kafkaListener/kafkaSendTo.adoc | giamo/micronaut-kafka | b3a5da1a95e3803e0673565905f65a012dfd4dee | [
"Apache-2.0"
] | null | null | null | src/main/docs/guide/kafkaListener/kafkaSendTo.adoc | giamo/micronaut-kafka | b3a5da1a95e3803e0673565905f65a012dfd4dee | [
"Apache-2.0"
] | 1 | 2021-11-05T13:39:08.000Z | 2021-11-05T13:39:08.000Z | src/main/docs/guide/kafkaListener/kafkaSendTo.adoc | Tesco/micronaut-kafka | 229bb83ad37e7e2d8b25f029df6178b4cd987863 | [
"Apache-2.0"
] | null | null | null | On any `@KafkaListener` method that returns a value, you can use the https://docs.micronaut.io/latest/api/io/micronaut/messaging/annotation/SendTo.html[@SendTo] annotation to forward the return value to the topic or topics specified by the `@SendTo` annotation.
The key of the original `ConsumerRecord` will be used as the key when forwarding the message.
.Committing offsets with the `KafkaConsumer` API
[source,java]
----
include::{testskafka}/consumer/sendto/ProductListener.java[tags=imports, indent=0]
include::{testskafka}/consumer/sendto/ProductListener.java[tags=method, indent=0]
----
<1> The topic subscribed to is `awesome-products`
<2> The topic to send the result to is `product-quantities`
<3> The return value is used to indicate the value to forward
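Since the included source is not rendered here, the following is a rough sketch of a listener of this shape. It assumes Micronaut Kafka's annotations and a hypothetical `Product` type with a `getQuantity()` accessor; it is illustrative, not the project's actual `ProductListener`:

```java
import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.Topic;
import io.micronaut.messaging.annotation.SendTo;

@KafkaListener
public class ProductListener {

    @Topic("awesome-products")    // topic subscribed to
    @SendTo("product-quantities") // the return value is forwarded here
    public int receive(@KafkaKey String brand, Product product) {
        // The returned quantity is sent to product-quantities,
        // keyed with the original record's key (the brand).
        return product.getQuantity();
    }
}
```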
You can also do the same using Reactive programming:
.Forwarding the return value reactively with `@SendTo`
[source,java]
----
include::{testskafka}/consumer/sendto/ProductListener.java[tags=reactive, indent=0]
----
<1> The topic subscribed to is `awesome-products`
<2> The topic to send the result to is `product-quantities`
<3> The return is mapped from the single to the value of the quantity
In the reactive case the `poll` loop will continue and will not wait for the record to be sent unless you specifically annotate the method with ann:core.annotation.Blocking[].
To enable transactional sending of the messages you need to define `producerTransactionalId` in `@KafkaListener`.
.Transactional consumer-producer
[source,java]
----
include::{testskafka}/consumer/sendto/WordCounter.java[tags=transactional, indent=0]
----
<1> The id of the producer to load additional config properties
<2> The transactional id that is required to enable transactional processing
<3> Enable offset strategy to commit the offsets to the transaction
<4> The consumer's isolation level for reading messages
51620e44bf1625d3e6ec982ad07ee7c2487d7cdb | 1,756 | adoc | AsciiDoc | docs/jbpm-docs/topics/Designer/Designer-chapter.adoc | sutaakar/kie-docs | 6592794fabf1556985d4fd9a6b944ba90afa863b | [
"Apache-2.0"
] | null | null | null | docs/jbpm-docs/topics/Designer/Designer-chapter.adoc | sutaakar/kie-docs | 6592794fabf1556985d4fd9a6b944ba90afa863b | [
"Apache-2.0"
] | null | null | null | docs/jbpm-docs/topics/Designer/Designer-chapter.adoc | sutaakar/kie-docs | 6592794fabf1556985d4fd9a6b944ba90afa863b | [
"Apache-2.0"
] | null | null | null | [[_chap_designer]]
== Designer
Designer is a graphical web-based BPMN2 editor.
It allows users to model and simulate executable BPMN2 processes.
The main goal of Designer is to provide intuitive means to both technical and non-technical users to quickly create their executable business processes.
This chapter intends to describe all features Designer currently offers.
.Designer
image::Designer/designer-overview1.png[]
Designer targets the following business process modelling scenarios:
* View and/or edit existing BPMN2 processes: Designer allows you to open existing BPMN2 processes (for example created using the BPMN2 Eclipse editor or any other tooling that exports BPMN2 XML).
* Create fully executable BPMN2 processes: A user can create a new BPMN2 process in the Designer and use the editing capabilities (drag and drop and filling in properties in the properties panel) to fill in the details. This for example allows business users to create complete business processes all inside a browser. The integration with Drools Guvnor allows for your business processes as well as other business assets such as business rules, process forms/images, etc. to be stored and versioned inside a content repository.
* View and/or edit Human Task forms during process modelling (using the in-line form editor or the Form Modeller).
* Simulate your business process models. Business Process Simulation is based on the BPSIM 1.0 specification.
Designer supports all BPMN2 elements that are also supported by jBPM as well as all jBPM-specific BPMN2 extension elements and attributes.
include::Designer/UIExplained-section.adoc[leveloffset=+1]
include::Designer/Shapes-section.adoc[leveloffset=+1]
include::Designer/Toolbar-section.adoc[leveloffset=+1]
| 67.538462 | 528 | 0.81549 |
3802f0e9c7d0d290a582ecf4df3d2ea81fa7fcc1 | 506 | adoc | AsciiDoc | src/main/asciidoc/en/appendix-examples.adoc | dukecon/dukecon | cd984bc03da313a000f82f53595bad3eb496dcdd | [
"MIT"
] | 7 | 2016-04-28T23:34:21.000Z | 2019-03-13T15:39:55.000Z | src/main/asciidoc/en/appendix-examples.adoc | dukecon/dukecon | cd984bc03da313a000f82f53595bad3eb496dcdd | [
"MIT"
] | 60 | 2015-09-18T05:45:46.000Z | 2020-11-17T19:41:18.000Z | src/main/asciidoc/en/appendix-examples.adoc | dukecon/dukecon | cd984bc03da313a000f82f53595bad3eb496dcdd | [
"MIT"
] | null | null | null | :filename: main/asciidoc/en/appendix-examples.adoc
:numbered!:
[appendix]
== Examples
include::config.adoc[]
{improve}
* http://aim42.github.io/htmlSanityCheck/hsc_arc42.html[HTML Sanity Checker]
* http://www.dokchess.de/dokchess/arc42/[DocChess] (german)
* http://www.embarc.de/arc42-starschnitt-gradle/[Gradle] (german)
* http://arc42.org:8090/display/arc42beispielmamacrm[MaMa CRM] (german)
* http://confluence.arc42.org/display/migrationEg/Financial+Data+Migration[Financial Data Migration] (german)
| 36.142857 | 109 | 0.772727 |
a66f17804bb6a515398c62972d543af4e33b7555 | 23 | adoc | AsciiDoc | spec/scenarios/html_element/empty-p.adoc | pomdtr/kramdown-asciidoc | 560afe65ce73dd467b7bb5248f5d309334f6817e | [
"MIT"
] | 119 | 2018-05-22T10:15:29.000Z | 2022-03-25T15:17:32.000Z | spec/scenarios/html_element/empty-p.adoc | pomdtr/kramdown-asciidoc | 560afe65ce73dd467b7bb5248f5d309334f6817e | [
"MIT"
] | 57 | 2018-05-30T05:12:50.000Z | 2021-12-06T19:54:55.000Z | spec/scenarios/html_element/empty-p.adoc | pomdtr/kramdown-asciidoc | 560afe65ce73dd467b7bb5248f5d309334f6817e | [
"MIT"
] | 12 | 2018-05-20T10:07:49.000Z | 2021-06-09T06:24:18.000Z | before
{blank}
after
| 3.833333 | 7 | 0.695652 |
59c467e55d3887851bc0ad8341dd6e0bdc473f3b | 4,265 | adoc | AsciiDoc | source/documentation/common/install/proc-Installing_Red_Hat_Virtualization_Hosts.adoc | shenitzky/ovirt-site | 59dacde7d4c9bb57a8e6cafbf36de5d3583f21a3 | [
"MIT"
] | null | null | null | source/documentation/common/install/proc-Installing_Red_Hat_Virtualization_Hosts.adoc | shenitzky/ovirt-site | 59dacde7d4c9bb57a8e6cafbf36de5d3583f21a3 | [
"MIT"
] | null | null | null | source/documentation/common/install/proc-Installing_Red_Hat_Virtualization_Hosts.adoc | shenitzky/ovirt-site | 59dacde7d4c9bb57a8e6cafbf36de5d3583f21a3 | [
"MIT"
] | null | null | null | [id='Installing_Red_Hat_Virtualization_Hosts_{context}']
= Installing {hypervisor-fullname}s
{hypervisor-fullname} ({hypervisor-shortname}) is a minimal operating system based on {enterprise-linux} that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a {virt-product-fullname} environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See link:http://cockpit-project.org/running.html[] for the minimum browser requirements.
{hypervisor-shortname} supports NIST 800-53 partitioning requirements to improve security. {hypervisor-shortname} uses a NIST 800-53 partition layout by default.
The host must meet the minimum link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/rhv_requirements#host-requirements[host requirements].
.Procedure
ifdef::rhv-doc[]
. Go to the link:https://access.redhat.com/products/red-hat-virtualization#getstarted[Get Started with Red Hat Virtualization] page on the Red Hat Customer Portal and log in.
. Click *Download Latest* to access the product download page.
. Choose the appropriate *Hypervisor Image for RHV* from the list and click *Download Now*.
endif::[]
ifdef::ovirt-doc[]
. Visit the link:/download/node.html[oVirt Node Download] page.
. Choose the version of *oVirt Node* to download and click its *Installation ISO* link.
. Write the {hypervisor-fullname} Installation ISO disk image to a USB, CD, or DVD.
endif::[]
. Start the machine on which you are installing {hypervisor-shortname}, booting from the prepared installation media.
. From the boot menu, select *Install {hypervisor-shortname} 4.4* and press `Enter`.
+
[NOTE]
====
You can also press the `Tab` key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the `Enter` key. Press the `Esc` key to clear any changes to the kernel parameters and return to the boot menu.
====
. Select a language, and click btn:[Continue].
. Select a keyboard layout from the *Keyboard Layout* screen and click btn:[Done].
. Select the device on which to install {hypervisor-shortname} from the *Installation Destination* screen. Optionally, enable encryption. Click btn:[Done].
+
[IMPORTANT]
====
Use the *Automatically configure partitioning* option.
====
. Select a time zone from the *Time & Date* screen and click btn:[Done].
+
. Select a network from the *Network & Host Name* screen and click *Configure...* to configure the connection details.
+
[NOTE]
====
To use the connection every time the system boots, select the *Connect automatically with priority* check box. For more information, see link:{URL_rhel_docs_latest}html/performing_a_standard_rhel_installation/graphical-installation_graphical-installation#network-hostname_configuring-system-settings[Configuring network and host name options] in the _{enterprise-linux} 8 Installation Guide_.
====
+
Enter a host name in the *Host Name* field, and click *Done*.
. Optionally configure *Language Support*, *Security Policy*, and *Kdump*. See link:{URL_rhel_docs_latest}html/performing_a_standard_rhel_installation/graphical-installation_graphical-installation[Customizing your RHEL installation using the GUI] in _Performing a standard RHEL installation_ for {enterprise-linux} 8 for more information on each of the sections in the *Installation Summary* screen.
. Click *Begin Installation*.
. Set a root password and, optionally, create an additional user while {hypervisor-shortname} installs.
+
[WARNING]
====
Do not create untrusted users on {hypervisor-shortname}, as this can lead to exploitation of local security vulnerabilities.
====
+
. Click *Reboot* to complete the installation.
+
[NOTE]
====
When {hypervisor-shortname} restarts, `nodectl check` performs a health check on the host and displays the result when you log in on the command line. The message `node status: OK` or `node status: DEGRADED` indicates the health status. Run `nodectl check` to get more information. The service is enabled by default.
====
| 67.698413 | 539 | 0.786401 |
249205dfbf387ae78d1a7034b787230dab342351 | 487 | asciidoc | AsciiDoc | docs/java-api/query-dsl/bool-query.asciidoc | LogInsight/elasticsearch | add272084b8e3e66b1f334f061f6a18789c9f39c | [
"Apache-2.0"
] | 1 | 2016-04-21T12:13:34.000Z | 2016-04-21T12:13:34.000Z | docs/java-api/query-dsl/bool-query.asciidoc | infusionsoft/elasticsearch | 3851093483aeab0908e84b1d81e696d7cdfe862b | [
"Apache-2.0"
] | 1 | 2020-04-23T10:06:21.000Z | 2020-04-23T10:11:23.000Z | docs/java-api/query-dsl/bool-query.asciidoc | infusionsoft/elasticsearch | 3851093483aeab0908e84b1d81e696d7cdfe862b | [
"Apache-2.0"
] | 2 | 2017-06-23T09:03:13.000Z | 2017-08-10T03:49:09.000Z | [[java-query-dsl-bool-query]]
==== Bool Query
See {ref}/query-dsl-bool-query.html[Bool Query]
[source,java]
--------------------------------------------------
QueryBuilder qb = boolQuery()
.must(termQuery("content", "test1")) <1>
.must(termQuery("content", "test4")) <1>
.mustNot(termQuery("content", "test2")) <2>
.should(termQuery("content", "test3")); <3>
--------------------------------------------------
<1> must query
<2> must not query
<3> should query
| 25.631579 | 50 | 0.498973 |
4463b77f3afd2c1b69577c0cb3af3242aa71a99b | 52 | adoc | AsciiDoc | truman/src/docs/asciidoc/dedication.adoc | diguage/spring-framework | 4b6c9224aff6067f5a19c1ae329d562a0db1fb6b | [
"Apache-2.0"
] | 7 | 2015-01-06T09:01:44.000Z | 2021-09-21T09:41:15.000Z | truman/src/docs/asciidoc/dedication.adoc | diguage/spring-framework | 4b6c9224aff6067f5a19c1ae329d562a0db1fb6b | [
"Apache-2.0"
] | 4 | 2021-01-25T05:00:24.000Z | 2022-03-16T12:17:09.000Z | truman/src/docs/asciidoc/dedication.adoc | diguage/spring-framework | 4b6c9224aff6067f5a19c1ae329d562a0db1fb6b | [
"Apache-2.0"
] | 5 | 2015-01-06T03:35:38.000Z | 2021-04-21T02:30:24.000Z | [dedication]
= Dedication
To my parents
Thank you for your tireless care in raising me.
| 7.428571 | 16 | 0.807692 |
959e5e4292196b6709ebfb966cdd39ed2defc3d5 | 3,832 | adoc | AsciiDoc | src/content/HandsOnML2_00.adoc | anew0m/blog | bf905e0cc0bb340a44f3d1a0090f93731c127a39 | [
"MIT"
] | null | null | null | src/content/HandsOnML2_00.adoc | anew0m/blog | bf905e0cc0bb340a44f3d1a0090f93731c127a39 | [
"MIT"
] | null | null | null | src/content/HandsOnML2_00.adoc | anew0m/blog | bf905e0cc0bb340a44f3d1a0090f93731c127a39 | [
"MIT"
] | null | null | null | = Introducing Hands-On Machine Learning
정민호
2020-09-13
:jbake-last_updated: 2020-09-13
:jbake-type: post
:jbake-status: published
:jbake-tags: data analysis, book introduction
:description: A brief introduction to the data analysis book `Hands-On Machine Learning, 2nd Edition` before I summarize and organize it.
:jbake-og: {"image": "img/jdk/duke.jpg"}
:idprefix:
https://book.naver.com/bookdb/book_detail.nhn?bid=16328592[Hands-On Machine Learning, 2nd Edition (Machine Learning and Deep Learning with Scikit-Learn, Keras, and TensorFlow 2)] -
https://book.naver.com/search/search.nhn?query=%EB%B0%95%ED%95%B4%EC%84%A0&frameFilterType=1&frameFilterValue=1154889[Haesun Park] (translator)
image::img/HandsOnML2/00/book_cover.jpg[book_cover]
https://book.naver.com/bookdb/book_detail.nhn?bid=16328592[Hands-On Machine Learning, 2nd Edition (Machine Learning and Deep Learning with Scikit-Learn, Keras, and TensorFlow 2)] was
written by Aurélien Géron and translated by https://book.naver.com/search/search.nhn?query=%EB%B0%95%ED%95%B4%EC%84%A0&frameFilterType=1&frameFilterValue=1154889[Haesun Park].
I only found this out after buying the book, but the translator has also translated several other books in this field.
Perhaps that is why many reviews say this book reads smoothly in translation.
The table of contents consists of 3 parts and 19 chapters; the book measures 183×235 mm, weighs 1,809 g, runs 952 pages, and its ISBN is 9791162242964.
(Source: https://www.aladin.co.kr/shop/wproduct.aspx?ISBN=K532639960&start=pnaver_02)
****
. *Machine Learning*
1. The Machine Learning Landscape
2. End-to-End Machine Learning Project
3. Classification
4. Training Models
5. Support Vector Machines
6. Decision Trees
7. Ensemble Learning and Random Forests
8. Dimensionality Reduction
9. Unsupervised Learning Techniques
. *Neural Networks and Deep Learning*
[start=10]
10. Introduction to Artificial Neural Networks with Keras
11. Training Deep Neural Networks
12. Custom Models and Training with TensorFlow
13. Loading and Preprocessing Data with TensorFlow
14. Deep Computer Vision Using Convolutional Neural Networks
15. Processing Sequences Using RNNs and CNNs
16. Natural Language Processing with RNNs and Attention
17. Representation Learning and Generative Learning Using Autoencoders and GANs
18. Reinforcement Learning
19. Training and Deploying TensorFlow Models at Scale
. *Appendices*
A. Exercise Solutions
B. Machine Learning Project Checklist
C. SVM Dual Problem
D. Autodiff
E. Other Popular ANN Architectures
F. Special Data Structures
G. TensorFlow Graphs
****
The translator has also posted lectures for this book on YouTube. The links are below; as of now (2020-09-13) lectures are available up to Chapter 8.
* https://www.youtube.com/playlist?list=PLJN246lAkhQjX3LOdLVnfdFaCbGouEBeb[Lectures - Hands-On Machine Learning, 2nd Edition]
* https://drive.google.com/drive/folders/18V9V7VADM6K86_BwL6XwjTXbUDdv9qK0[Slides - Hands-On Machine Learning, 2nd Edition]
* https://github.com/rickiepark/handson-ml2[GitHub - Hands-On Machine Learning, 2nd Edition]
* https://tensorflow.blog/handson-ml2/[Errata - Hands-On Machine Learning, 2nd Edition]
* https://www.inflearn.com/course/핸즈온-머신러닝[Inflearn - Hands-On Machine Learning, 2nd Edition]
I decided to summarize this book because I had previously studied data analysis here and there,
but my studying felt scattered, and since I never applied it in practice, little of it stuck.
Then my mentor recommended this book, and I resolved to summarize and organize it.
My goal is to finish the book in two months (2020-09-13 (Sun) ~ 2020-11-08 (Sun)).
On weekdays I will probably just read, since I get home late, and I expect weekends to be mostly hands-on practice.
By simple arithmetic, about 16 pages a day should be enough (952 p / 60 days).
Below is the detailed schedule. Since I have not yet received the book and do not know how difficult the content is, it is only a rough outline.
(Part 1: 3 weeks, Part 2: 5 weeks)
****
- *W1 ~ W8 - Hands-On Machine Learning (30p - 928p, 100.0%)*
. *W1 ~ W3 - Machine Learning (30p - 351p, 35.8%)*
1. W1 - The Machine Learning Landscape (30p - 67p, 4.1%)
2. W1 - End-to-End Machine Learning Project (68p - 126p, 6.5%)
3. W2 - Classification (127p - 157p, 3.3%)
4. W2 - Training Models (158p - 204p, 5.1%)
5. W2 - Support Vector Machines (205p - 228p, 2.6%)
6. W2 - Decision Trees (229p - 245p, 1.8%)
7. W2 - Ensemble Learning and Random Forests (246p - 273p, 3%)
8. W3 - Dimensionality Reduction (274p - 299p, 2.8%)
9. W3 - Unsupervised Learning Techniques (300p - 351p, 5.7%)
. *W4 ~ W8 - Neural Networks and Deep Learning (352p - 842p, 54.7%)*
[start=10]
10. W4 - Introduction to Artificial Neural Networks with Keras (352p - 411p, 6.6%)
11. W4 - Training Deep Neural Networks (412p - 461p, 5.5%)
12. W5 - Custom Models and Training with TensorFlow (462p - 503p, 4.6%)
13. W5 - Loading and Preprocessing Data with TensorFlow (504p - 541p, 4.1%)
14. W5 - Deep Computer Vision Using Convolutional Neural Networks (542p - 597p, 6.1%)
15. W6 - Processing Sequences Using RNNs and CNNs (598p - 627p, 3.2%)
16. W6 - Natural Language Processing with RNNs and Attention (628p - 673p, 5%)
17. W7 - Representation Learning and Generative Learning Using Autoencoders and GANs (674p - 719p, 5%)
18. W7 - Reinforcement Learning (720p - 783p, 7%)
19. W8 - Training and Deploying TensorFlow Models at Scale (784p - 842p, 6.5%)
. *W8 ~ W8 - Appendices (843p - 928p, 9.5%)*
A. W8 - Exercise Solutions (843p - 880p, 4.1%)
B. W8 - Machine Learning Project Checklist (881p - 886p, 0.6%)
C. W8 - SVM Dual Problem (887p - 890p, 0.3%)
D. W8 - Autodiff (891p - 898p, 0.8%)
E. W8 - Other Popular ANN Architectures (899p - 908p, 1%)
F. W8 - Special Data Structures (909p - 916p, 0.8%)
G. W8 - TensorFlow Graphs (917p - 928p, 1.2%)
****
I made up my mind to summarize this book, but following through seems to be another matter.
They say a good start is half the battle, and I have already started, so I will try not to fizzle out halfway. | 32.201681 | 166 | 0.664405 |
250b9cf7a37a541c4dcb9459189da258cd31a9a1 | 2,352 | adoc | AsciiDoc | docs/user-manual/modules/ROOT/pages/tokenize-language.adoc | connormcauliffe-toast/camel | a3a5fa1f93371f1b0dc7d20178e21bf84f05a44e | [
"Apache-2.0"
] | 4 | 2019-04-11T01:36:58.000Z | 2020-02-05T23:39:12.000Z | docs/user-manual/modules/ROOT/pages/tokenize-language.adoc | bobpaulin/camel | 7773d0d407445b4ebb28fef48d3706c559b1e499 | [
"Apache-2.0"
] | 23 | 2021-03-23T00:01:38.000Z | 2022-01-04T16:47:34.000Z | docs/user-manual/modules/ROOT/pages/tokenize-language.adoc | bobpaulin/camel | 7773d0d407445b4ebb28fef48d3706c559b1e499 | [
"Apache-2.0"
] | 3 | 2019-04-12T03:39:06.000Z | 2019-07-08T01:41:01.000Z | [[tokenize-language]]
= Tokenize Language
:page-source: core/camel-base/src/main/docs/tokenize-language.adoc
*Since Camel 2.0*
The tokenizer language is a built-in language in camel-core, which is
most often used only with the Splitter EIP to split
a message using a token-based strategy. +
The tokenizer language is intended to tokenize text documents using a
specified delimiter pattern. It can also be used to tokenize XML
documents with some limited capability. For a truly XML-aware
tokenization, the use of the XMLTokenizer
language is recommended as it offers a faster, more efficient
tokenization specifically for XML documents. For more details
see Splitter.
== Tokenize Options
// language options: START
The Tokenize language supports 11 options, which are listed below.
[width="100%",cols="2,1m,1m,6",options="header"]
|===
| Name | Default | Java Type | Description
| token | | String | The (start) token to use as tokenizer, for example you can use the new line token. You can use simple language as the token to support dynamic tokens.
| endToken | | String | The end token to use as tokenizer if using start/end token pairs. You can use simple language as the token to support dynamic tokens.
| inheritNamespaceTagName | | String | To inherit namespaces from a root/parent tag name when using XML You can use simple language as the tag name to support dynamic names.
| headerName | | String | Name of header to tokenize instead of using the message body.
| regex | false | Boolean | If the token is a regular expression pattern. The default value is false
| xml | false | Boolean | Whether the input is XML messages. This option must be set to true if working with XML payloads.
| includeTokens | false | Boolean | Whether to include the tokens in the parts when using pairs The default value is false
| group | | String | To group N parts together, for example to split big files into chunks of 1000 lines. You can use simple language as the group to support dynamic group sizes.
| groupDelimiter | | String | Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter.
| skipFirst | false | Boolean | To skip the very first element
| trim | true | Boolean | Whether to trim the value to remove leading and trailing whitespaces and line breaks
|===
// language options: END
| 57.365854 | 179 | 0.766156 |
77a308a3bb350ff0edc36a3b807da239dd110324 | 7,822 | asciidoc | AsciiDoc | documentation/components/components-fields.asciidoc | jforge/vaadin | 66df157ecfe9cea75c3c01e43ca6f37ed6ef0671 | [
"Apache-2.0"
] | null | null | null | documentation/components/components-fields.asciidoc | jforge/vaadin | 66df157ecfe9cea75c3c01e43ca6f37ed6ef0671 | [
"Apache-2.0"
] | null | null | null | documentation/components/components-fields.asciidoc | jforge/vaadin | 66df157ecfe9cea75c3c01e43ca6f37ed6ef0671 | [
"Apache-2.0"
] | null | null | null | ---
title: Field Components
order: 4
layout: page
---
[[components.fields]]
= Field Components
((("[classname]#Field#", id="term.components.fields", range="startofrange")))
_Fields_ are components that have a value that the user can change through the
user interface. <<figure.components.fields>> illustrates the inheritance relationships
and the important interfaces and base classes.
[[figure.components.fields]]
.Field Components
image::img/field-class-hierarchy.png[width=100%, scaledwidth=100%]
Field components are built upon the framework defined in the [interfacename]#HasValue#
interface.
[classname]#AbstractField# is the base class for all field components,
except those components that allow the user to select a value.
(see <<dummy/../../../framework/components/components-selection.asciidoc#components.selection,"Selection Components">>).
In addition to the component features inherited from
[classname]#AbstractComponent#, it implements the features defined in the
[interfacename]#HasValue# and [classname]#Component.Focusable# interfaces.
[[figure.components.fields.hasvalue]]
.Field components having values
image::img/field-interface-v8-hi.png[width=60%, scaledwidth=100%]
The [interfacename]#HasValue# interface and the field components extending [classname]#AbstractField# are described in the following sections.
[[components.fields.field]]
== The [interfacename]#HasValue# Interface
The [interfacename]#HasValue# interface marks a component that has a user editable value.
The type parameter in the interface is the type of the value that the component is editing.
You can set the value with the [methodname]#setValue()# method and read it with the
[methodname]#getValue()# method, both defined in the [interfacename]#HasValue# interface.
The [interfacename]#HasValue# interface defines a number of properties, which you can
access with the corresponding setters and getters.
[methodname]#readOnly#:: Set the component to be read-only, meaning that the value is not editable.
[methodname]#requiredIndicatorVisible#:: When enabled, a required indicator
(the asterisk * character) is displayed on the left, above, or right the field,
depending on the containing layout and whether the field has a caption.
When the component is used in a form (see <<dummy/../../../framework/datamodel/datamodel-forms.asciidoc#datamodel.forms.validation,"Validation">>),
it can be set to be required, which will automatically show the required indicator,
and validate that the value is not empty. Without validation, the required indicator
is merely a visual guide.
[methodname]#emptyValue#:: The initial empty value of the component.
[methodname]#clear#:: Clears the value to the empty value.
[[components.fields.valuechanges]]
== Handling Value Changes
[interfacename]#HasValue# provides the [methodname]#addValueChangeListener# method for listening to changes to the field value. This method returns a [classname]#Registration# object that can be used to
remove the added listener later if necessary.
[source, java]
----
TextField textField = new TextField();
Label echo = new Label();
textField.addValueChangeListener(event -> {
String origin = event.isUserOriginated()
? "user"
: "application";
String message = origin
+ " entered the following: "
+ event.getValue();
Notification.show(message);
});
----
[[components.fields.databinding]]
== Binding Fields to Data
Fields can be grouped into _forms_ and coupled with business data objects with
the [classname]#Binder# class. When a field is bound to a property using
[classname]#Binder#, it gets its default value from the property, and
is stored to the property either manually via the [methodname]#Binder.save# method,
or automatically every time the value changes.
[source, java]
----
class Person {
private String name;
public String getName() { /* ... */ }
public void setName(String) { /* ... */ }
}
TextField nameField = new TextField();
Binder<Person> binder = new Binder<>();
// Bind nameField to the Person.name property
// by specifying its getter and setter
binder.bind(nameField, Person::getName, Person::setName);
// Bind an actual concrete Person instance.
// After this, whenever the user changes the value
// of nameField, p.setName is automatically called.
Person p = new Person();
binder.setBean(p);
----
For more information on data binding, see <<dummy/../../../framework/datamodel/datamodel-forms.asciidoc#datamodel.forms,"Binding Data to Forms">>
== Validating Field Values
User input may be syntactically or semantically invalid.
[classname]#Binder# allows adding a chain of one or more __validators__ for
automatically checking the validity of the input before storing it to the data
object. You can add validators to fields by calling the [methodname]#withValidator#
method on the [interfacename]#Binding# object returned by [methodname]#Binder.forField#.
There are several built-in validators in the Framework, such as the [classname]#StringLengthValidator# used below.
[source, java]
----
binder.forField(nameField)
.withValidator(new StringLengthValidator(
"Name must be between 2 and 20 characters long",
2, 20))
.bind(Person::getName, Person::setName);
----
Failed validation is by default indicated with the error indicator of the field, described in
<<dummy/../../../framework/application/application-errors#application.errors.error-indicator,"Error
Indicator and Message">>. Hovering mouse on the field displays the error message
returned by the validator. If any value in a set of bound fields fails validation,
none of the field values are saved into the bound property until the validation
passes.
=== Implementing Custom Validators
Validators implement the [interfacename]#Validator# interface that simply
extends [interfacename]#java.util.function.Function#, returning a special type
called [interfacename]#Result#. This return type represents the validation outcome:
whether or not the given input was valid.
[source, java]
----
class MyValidator implements Validator<String> {
@Override
public ValidationResult apply(String value, ValueContext context) {
if(value.length() == 6) {
return ValidationResult.ok();
} else {
return ValidationResult.error(
"Must be exactly six characters long");
}
}
}
----
Since [interfacename]#Validator# is a functional
interface, you can often simply write a lambda expression instead of a full class
declaration. There is also an [methodname]#withValidator# overload that creates a
validator from a boolean function and an error message. If the application requires
more sophisticated validation diagnostics (e.g. locale-specific), there is a
method [methodname]#withValidator#, which uses a boolean function and an [classname]#ErrorMessageProvider#.
The [classname]#ErrorMessageProvider# can compose diagnostic messages based on the locale of the validation
and the source component value, which are provided with the [classname]#ValueContext#.
[source, java]
----
binder.forField(nameField)
.withValidator(name -> name.length() < 20,
"Name must be less than 20 characters long")
.bind(Person::getName, Person::setName);
----
== Converting Field Values
Field values are always of some particular type. For example,
[classname]#TextField# allows editing [classname]#String# values. When bound to
a data source, the type of the source property can be something different,
say an [classname]#Integer#. __Converters__ are used for converting the values
between the presentation and the model. Their usage is described in
<<dummy/../../../framework/datamodel/datamodel-forms.asciidoc#datamodel.forms.conversion,"Conversion">>.
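The converter idea can be sketched outside of Java. The following Python fragment is a hypothetical illustration, not Vaadin's actual [interfacename]#Converter# API: a converter is essentially a pair of functions between the presentation type and the model type, where the presentation-to-model direction may fail with an error message.

```python
def string_to_int_converter(error_message):
    """Sketch of a presentation<->model converter as a pair of functions."""
    def to_model(presentation):
        # presentation -> model can fail; report (value, error)
        try:
            return int(presentation.strip()), None
        except ValueError:
            return None, error_message
    def to_presentation(model):
        # model -> presentation is assumed infallible here
        return str(model)
    return to_model, to_presentation

to_model, to_presentation = string_to_int_converter("Must be a number")
print(to_model(" 42 "))    # → (42, None)
print(to_model("abc"))     # → (None, 'Must be a number')
print(to_presentation(7))  # → '7'
```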
(((range="endofrange", startref="term.components.fields")))
| 39.306533 | 204 | 0.759013 |
20a5b08f5a508f9b71209b17ff4f7bf25e884e65 | 10,758 | adoc | AsciiDoc | docs/index.adoc | AnupamaGupta01/kudu-1 | 79ee29db5ac1b458468b11f16f57f124601788e6 | [
"Apache-2.0"
] | 2 | 2016-09-12T06:53:49.000Z | 2016-09-12T15:47:46.000Z | docs/index.adoc | AnupamaGupta01/kudu-1 | 79ee29db5ac1b458468b11f16f57f124601788e6 | [
"Apache-2.0"
] | null | null | null | docs/index.adoc | AnupamaGupta01/kudu-1 | 79ee29db5ac1b458468b11f16f57f124601788e6 | [
"Apache-2.0"
] | 1 | 2018-09-04T01:45:03.000Z | 2018-09-04T01:45:03.000Z | // Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
[[introduction]]
= Introducing Apache Kudu (incubating)
:author: Kudu Team
:imagesdir: ./images
:icons: font
:toc: left
:toclevels: 3
:doctype: book
:backend: html5
:sectlinks:
:experimental:
Kudu is a columnar storage manager developed for the Hadoop platform. Kudu shares
the common technical properties of Hadoop ecosystem applications: it runs on commodity
hardware, is horizontally scalable, and supports highly available operation.
Kudu's design sets it apart. Some of Kudu's benefits include:
- Fast processing of OLAP workloads.
- Integration with MapReduce, Spark and other Hadoop ecosystem components.
- Tight integration with Cloudera Impala, making it a good, mutable alternative
to using HDFS with Parquet.
- Strong but flexible consistency model, allowing you to choose consistency
requirements on a per-request basis, including the option for strict-serializable consistency.
- Strong performance for running sequential and random workloads simultaneously.
- Easy to administer and manage with Cloudera Manager.
- High availability. Tablet Servers and Masters use the <<raft>>, which ensures that
as long as more than half the total number of replicas is available, the tablet is available for
reads and writes. For instance, if 2 out of 3 replicas or 3 out of 5 replicas are available, the tablet
is available.
+
Reads can be serviced by read-only follower tablets, even in the event of a
leader tablet failure.
- Structured data model.
By combining all of these properties, Kudu targets support for families of
applications that are difficult or impossible to implement on current generation
Hadoop storage technologies. A few examples of applications for which Kudu is a great
solution are:
* Reporting applications where newly-arrived data needs to be immediately available for end users
* Time-series applications that must simultaneously support:
- queries across large amounts of historic data
- granular queries about an individual entity that must return very quickly
* Applications that use predictive models to make real-time decisions with periodic
refreshes of the predictive model based on all historic data
For more information about these and other scenarios, see <<kudu_use_cases>>.
== Concepts and Terms
[[kudu_columnar_data_store]]
.Columnar Data Store
Kudu is a _columnar data store_. A columnar data store stores data in strongly-typed
columns. With a proper design, it is superior for analytical or data warehousing
workloads for several reasons.
Read Efficiency:: For analytical queries, you can read a single column, or a portion
of that column, while ignoring other columns. This means you can fulfill your query
while reading a minimal number of blocks on disk. With a row-based store, you need
to read the entire row, even if you only return values from a few columns.
Data Compression:: Because a given column contains only one type of data, pattern-based
compression can be orders of magnitude more efficient than compressing mixed data
types. Combined with the efficiencies of reading data from columns, compression allows
you to fulfill your query while reading even fewer blocks from disk. See
link:schema_design.html#encoding[Data Compression]
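A toy illustration of this point: run-length encoding a single-typed, low-cardinality column collapses it to a few (value, count) pairs, while the same values interleaved with other fields barely compress at all. This Python sketch is illustrative only; Kudu's real encodings (dictionary, bitshuffle, and so on) are described in the linked schema design page.

```python
from itertools import groupby

def run_length_encode(values):
    """Collapse consecutive equal values into (value, count) pairs."""
    return [(v, len(list(g))) for v, g in groupby(values)]

country = ["US"] * 4 + ["DE"] * 3 + ["JP"] * 3   # one strongly-typed column
print(run_length_encode(country))
# → [('US', 4), ('DE', 3), ('JP', 3)] — 10 values stored as 3 pairs

rows = [c + str(i) for i, c in enumerate(country)]  # mixed row-wise data
print(run_length_encode(rows))  # every run has length 1: no compression
```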
.Table
A _table_ is where your data is stored in Kudu. A table has a schema and
a totally ordered primary key. A table is split into segments called tablets.
.Tablet
A _tablet_ is a contiguous segment of a table. A given tablet is
replicated on multiple tablet servers, and one of these replicas is considered
the leader tablet. Any replica can service reads, and writes require consensus
among the set of tablet servers serving the tablet.
.Tablet Server
A _tablet server_ stores and serves tablets to clients. For a
given tablet, one tablet server serves the lead tablet, and the others serve
follower replicas of that tablet. Only leaders service write requests, while
leaders or followers each service read requests. Leaders are elected using
<<raft>>. One tablet server can serve multiple tablets, and one tablet can be served
by multiple tablet servers.
.Master
The _master_ keeps track of all the tablets, tablet servers, the
<<catalog_table>>, and other metadata related to the cluster. At a given point
in time, there can only be one acting master (the leader). If the current leader
disappears, a new master is elected using <<raft>>.
The master also coordinates metadata operations for clients. For example, when
creating a new table, the client internally sends an RPC to the master. The
master writes the metadata for the new table into the catalog table, and
coordinates the process of creating tablets on the tablet servers.
All the master's data is stored in a tablet, which can be replicated to all the
other candidate masters.
Tablet servers heartbeat to the master at a set interval (the default is once
per second).
[[raft]]
.Raft Consensus Algorithm
Kudu uses the link:https://raft.github.io/[Raft consensus algorithm] as
a means to guarantee fault-tolerance and consistency, both for regular tablets and for master
data. Through Raft, multiple replicas of a tablet elect a _leader_, which is responsible
for accepting and replicating writes to _follower_ replicas. Once a write is persisted
in a majority of replicas it is acknowledged to the client. A given group of `N` replicas
(usually 3 or 5) is able to accept writes with at most `(N - 1)/2` faulty replicas.
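The majority arithmetic can be checked directly. This short Python sketch (illustrative, not Kudu code) computes how many faulty replicas a Raft group of `N` tolerates:

```python
def max_faulty(n_replicas):
    """Replicas that may fail while a Raft majority still exists."""
    return (n_replicas - 1) // 2

for n in (1, 3, 5, 7):
    print(f"{n} replicas tolerate {max_faulty(n)} failure(s)")
# 3 replicas tolerate 1 failure, 5 replicas tolerate 2
```

This is why replication factors are odd: going from 3 to 4 replicas adds cost without adding fault tolerance.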
[[catalog_table]]
.Catalog Table
The _catalog table_ is the central location for
Kudu's metadata. It stores information about tables and tablets. The catalog
table is accessible to clients via the master, using the client API.
Tables:: table schemas, locations, and states
Tablets:: the list of existing tablets, which tablet servers have replicas of
each tablet, the tablet's current state, and start and end keys.
.Logical Replication
Kudu replicates operations, not on-disk data. This is referred to as _logical
replication_, as opposed to _physical replication_. Physical operations, such as
compaction, do not need to transmit the data over the network. This results in a
substantial reduction in network traffic for heavy write scenarios.
== Architectural Overview
The following diagram shows a Kudu cluster with three masters and multiple tablet
servers, each serving multiple tablets. It illustrates how Raft consensus is used
to allow for both leaders and followers for both the masters and tablet servers. In
addition, a tablet server can be a leader for some tablets, and a follower for others.
Leaders are shown in gold, while followers are shown in blue.
NOTE: Multiple masters are not supported during the Kudu beta period.
image::kudu-architecture-2.png[Kudu Architecture, 800]
[[kudu_use_cases]]
== Example Use Cases
.Streaming Input with Near Real Time Availability
A common challenge in data analysis is one where new data arrives rapidly and constantly,
and the same data needs to be available in near real time for reads, scans, and
updates. Kudu offers the powerful combination of fast inserts and updates with
efficient columnar scans to enable real-time analytics use cases on a single storage layer.
.Time-series application with widely varying access patterns
A time-series schema is one in which data points are organized and keyed according
to the time at which they occurred. This can be useful for investigating the
performance of metrics over time or attempting to predict future behavior based
on past data. For instance, time-series customer data might be used both to store
purchase click-stream history and to predict future purchases, or for use by a
customer support representative. While these different types of analysis are occurring,
inserts and mutations may also be occurring individually and in bulk, and become available
immediately to read workloads. Kudu can handle all of these access patterns
simultaneously in a scalable and efficient manner.
Kudu is a good fit for time-series workloads for several reasons. With Kudu's support for
hash-based partitioning, combined with its native support for compound row keys, it is
simple to set up a table spread across many servers without the risk of "hotspotting"
that is commonly observed when range partitioning is used. Kudu's columnar storage engine
is also beneficial in this context, because many time-series workloads read only a few columns,
as opposed to the whole row.
In the past, you might have needed to use multiple data stores to handle different
data access patterns. This practice adds complexity to your application and operations, and
duplicates storage. Kudu can handle all of these access patterns natively and efficiently,
without the need to off-load work to other data stores.
.Predictive Modeling
Data analysts often develop predictive learning models from large sets of data. The
model and the data may need to be updated or modified often as the learning takes
place or as the situation being modeled changes. In addition, the scientist may want
to change one or more factors in the model to see what happens over time. Updating
a large set of data stored in files in HDFS is resource-intensive, as each file needs
to be completely rewritten. In Kudu, updates happen in near real time. The scientist
can tweak the value, re-run the query, and refresh the graph in seconds or minutes,
rather than hours or days. In addition, batch or incremental algorithms can be run
across the data at any time, with near-real-time results.
.Combining Data In Kudu With Legacy Systems
Companies generate data from multiple sources and store it in a variety of systems
and formats. For instance, some of your data may be stored in Kudu, some in a traditional
RDBMS, and some in files in HDFS. You can access and query all of these sources and
formats using Impala, without the need to change your legacy systems.
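For example, a Kudu-backed table and an HDFS-resident table can be combined in a single Impala query; the table names below are invented for illustration:

[source,sql]
----
-- kudu_orders lives in Kudu, hdfs_customers in files on HDFS;
-- Impala queries both as ordinary tables.
SELECT o.order_id, o.total, c.region
FROM kudu_orders o
JOIN hdfs_customers c ON o.customer_id = c.customer_id;
----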
== Next Steps
- link:quickstart.html[Get Started With Kudu]
- link:installation.html[Installing Kudu]
| 48.678733 | 105 | 0.799219 |
44655ddeab209238bbea4935f4f7cf6fa19db26f | 1,474 | adoc | AsciiDoc | securing_apps/topics/overview/supported-platforms.adoc | CollinAlpert/keycloak-documentation | 74c7451d90d3716aeec5d2ca760c04a716ba047b | [
"Apache-2.0"
] | 1 | 2022-01-22T14:54:46.000Z | 2022-01-22T14:54:46.000Z | securing_apps/topics/overview/supported-platforms.adoc | CollinAlpert/keycloak-documentation | 74c7451d90d3716aeec5d2ca760c04a716ba047b | [
"Apache-2.0"
] | 1 | 2017-04-04T07:06:27.000Z | 2017-04-05T06:18:46.000Z | securing_apps/topics/overview/supported-platforms.adoc | CollinAlpert/keycloak-documentation | 74c7451d90d3716aeec5d2ca760c04a716ba047b | [
"Apache-2.0"
] | null | null | null |
=== Supported Platforms
{project_name} enables you to protect applications running on different platforms and using different technology stacks using OpenID Connect and SAML protocols.
==== OpenID Connect
===== Java
* <<_jboss_adapter,JBoss EAP>>
ifeval::[{project_community}==true]
* <<_jboss_adapter,WildFly>>
endif::[]
* <<_fuse_adapter,Fuse>>
ifeval::[{project_community}==true]
* <<_tomcat_adapter,Tomcat>>
* <<_jetty9_adapter,Jetty 9>>
endif::[]
* <<_servlet_filter_adapter,Servlet Filter>>
* <<_spring_boot_adapter,Spring Boot>>
ifeval::[{project_community}==true]
* <<_spring_security_adapter,Spring Security>>
endif::[]
===== JavaScript (client-side)
* <<_javascript_adapter,JavaScript>>
===== Node.js (server-side)
* <<_nodejs_adapter,Node.js>>
ifeval::[{project_community}==true]
===== C#
* https://github.com/dylanplecki/KeycloakOwinAuthentication[OWIN] (community)
===== Python
* https://pypi.org/project/oic/[oidc] (generic)
===== Android
* https://github.com/openid/AppAuth-Android[AppAuth] (generic)
===== iOS
* https://github.com/openid/AppAuth-iOS[AppAuth] (generic)
===== Apache HTTP Server
* https://github.com/zmartzone/mod_auth_openidc[mod_auth_openidc]
endif::[]
==== SAML
===== Java
* <<_saml_jboss_adapter,JBoss EAP>>
ifeval::[{project_community}==true]
* <<_saml_jboss_adapter,WildFly>>
* <<_tomcat_adapter,Tomcat>>
* <<_jetty_saml_adapter,Jetty>>
endif::[]
===== Apache HTTP Server
* <<_mod_auth_mellon,mod_auth_mellon>>
| 23.396825 | 160 | 0.715061 |
183ca505159747fe53799f6f67d40e5f147fb070 | 6,895 | adoc | AsciiDoc | docs/src/shared/modules/concepts/pages/glossary.adoc | baodingfengyun/cloudstate | d366b420dbf8ace287a06ca005846f6ab0a45315 | [
"Apache-2.0"
] | 715 | 2019-07-09T06:20:52.000Z | 2022-03-08T07:31:57.000Z | docs/src/shared/modules/concepts/pages/glossary.adoc | baodingfengyun/cloudstate | d366b420dbf8ace287a06ca005846f6ab0a45315 | [
"Apache-2.0"
] | 377 | 2019-07-09T02:14:04.000Z | 2022-03-11T00:52:22.000Z | docs/src/shared/modules/concepts/pages/glossary.adoc | baodingfengyun/cloudstate | d366b420dbf8ace287a06ca005846f6ab0a45315 | [
"Apache-2.0"
] | 104 | 2019-07-12T16:58:29.000Z | 2021-08-25T06:09:11.000Z |
= Glossary
ifdef::todo[TODO: The following need to be arranged so they can be digested more easily. Maybe alphabetically with a list with links above them?]
== Stateful service
A stateful service is a deployable unit. It is represented in Kubernetes as a `StatefulService` resource. It contains a <<User function>>, and may reference a <<Stateful store>>. The Cloudstate operator transforms a stateful service into a Kubernetes Deployment resource. The Cloudstate operator then defines a Kubernetes Service to expose the stateful service to the outside world. The deployment will be augmented by the injection of a Cloudstate <<Proxy>>.
== Stateful store
A stateful store is an abstraction over a datastore, typically a database. It is represented in Kubernetes as a `StatefulStore` resource. A single store may be used by multiple <<Stateful service>>s to store their data, but a stateful service may not necessarily have any store configured - if the stateful service does not have any durable state then it doesn't need a stateful store.
== User function
A user function is the code that the user writes, packaged as a Docker image, and deployed as a <<Stateful service>>. The user function exposes a gRPC interface that speaks the Cloudstate <<Protocol>>. The injected Cloudstate <<Proxy>> speaks to the user function using this protocol. The end-user developer, generally, doesn't need to provide the code implementation of this protocol along with the user function. This burden is delegated to a Cloudstate <<Support library>>, specific to the programming language used to write the user function.
== Protocol
The Cloudstate protocol is an open specification for Cloudstate state management proxies to speak to Cloudstate user functions. The Cloudstate project itself provides a <<Reference implementation>> of this protocol. The protocol is built upon gRPC, and provides support for multiple different <<Entity type>>s. Cloudstate provides a Technology Compatibility Kit (TCK) that can be used to verify any permutation of <<Proxy>> and <<Support library>> available. More technical details in xref:contribute:language-support.adoc#creating[Creating language support libraries].
== Proxy
The Cloudstate proxy is injected as a sidecar into the deployment of each <<Stateful service>>. It is responsible for state management, and exposing the <<Entity service>> implemented by the <<User function>> as both gRPC and REST services to the rest of the system, translating the incoming calls to <<Command>>s that are sent to the user function using the Cloudstate <<Protocol>>. The proxy will typically form a cluster with other nodes in the same stateful service, allowing advanced distributed state management features such as sharding, replication and addressed communication between nodes of a single stateful service.
== Reference implementation
The Cloudstate reference implementation implements the Cloudstate <<Protocol>>. It is implemented using https://akka.io[Akka], taking advantage of Akka's cluster features to provide scalable and resilient implementations of Cloudstate's stateful features.
== Support library
While a <<User function>> can be implemented simply by implementing the gRPC interfaces in the Cloudstate <<Protocol>>, this protocol is somewhat low level, and not particularly well suited for expressing the business logic that typically will reside in a user function. Instead, developers are encouraged to use a Cloudstate support library for their language of choice, if available.
== Command
A command is used to express the intention to alter the state of an <<Entity>>. A command is materialized by a message received by a user function. Commands may come from outside of the <<Stateful service>>, perhaps from other stateful services, other non Cloudstate services, or the outside world, or they may come from within the service, invoked as a side effect or to forward command handling from another command.
== Entity
A <<User function>> implements one or more entities. An entity is conceptually equivalent to a class, or a type of state. An entity will have multiple <<Entity instance>>s of it which can handle commands. For example, a user function may implement a chat room entity, encompassing the logic associated with chat rooms, and a particular chat room may be an instance of that entity, containing a list of the users currently in the room and a history of the messages sent to it. Each entity has a particular <<Entity type>>, which defines how the entity's state is persisted, shared, and what its capabilities are.
=== Entity instance
An instance of an <<Entity>>. Entity instances are identified by an <<Entity key>>, which is unique to a given entity. An entity holds state in the <<User function>>, and depending on the <<Entity type>> this state is held within the context of a gRPC stream. When a command for a particular entity instance is received, the <<Proxy>> will make a new streamed gRPC call for that entity instance to the <<User function>>. All subsequent commands received for that entity instance will be sent through that streamed call.
=== Entity service
An entity service is a gRPC service that allows interacting with an <<Entity>>. The <<Proxy>> makes this service available for other Kubernetes services and ingresses to consume, while the <<User function>> provides the implementation of the entity business logic. Note that the service is not implemented directly, by the user function like a normal gRPC service. Rather, it is implemented through the Cloudstate <<Protocol>>, which enriches the incoming and outgoing gRPC messages with state management capabilities, such as the ability to receive and update state.
=== Entity type
The type of state management that an <<Entity>> uses. Available types include <<Event sourced>> and <<Conflict-free replicated data type>>. Each type has its own sub protocol as part of the Cloudstate <<Protocol>> that it uses for state management, to convey state and updates specific to that type.
=== Entity key
A key used to identify instances of an <<Entity>>. All <<Command>>s must contain the entity key so that the command can be routed to the right instance of the entity that the command is for. The gRPC descriptor for the <<Entity service>> annotates the incoming message types for the entity to indicate which field(s) contain the entity key.
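As an illustration, the Cloudstate protocol marks entity key fields with a protobuf field option in the service's message definitions. The message and field names below are invented, but the `entity_key` option itself comes from the Cloudstate protocol:

[source,proto]
----
import "cloudstate/entity_key.proto";

message AddLineItem {
  // Marks user_id as the entity key; the proxy uses it to route
  // the command to the correct entity instance.
  string user_id = 1 [(.cloudstate.entity_key) = true];
  string product_id = 2;
  int32 quantity = 3;
}
----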
== Event sourced
A type of <<Entity>> that stores its state using a journal of events, and restores its state by replaying that journal. These are discussed in more detail in xref:eventsourced.adoc[Event sourcing].
=== Conflict-free replicated data type
A type of <<Entity>> that stores its state using a Conflict-free Replicated Data Type (CRDT), which is replicated across different nodes of the service. These are discussed in more detail in xref:crdts.adoc[Conflict-free Replicated Data Types].
| 107.734375 | 628 | 0.791008 |
6ab3dc9de268035d23bde63651db086339017e4c | 154 | adoc | AsciiDoc | modules/ref/partials/attributes.adoc | osfameron/docs-sdk-scala | 78c4e58aa0d293aacca091022d0fde1f0e54af12 | [
"Apache-2.0"
] | 1 | 2020-01-23T09:55:05.000Z | 2020-01-23T09:55:05.000Z | modules/ref/partials/attributes.adoc | osfameron/docs-sdk-scala | 78c4e58aa0d293aacca091022d0fde1f0e54af12 | [
"Apache-2.0"
] | null | null | null | modules/ref/partials/attributes.adoc | osfameron/docs-sdk-scala | 78c4e58aa0d293aacca091022d0fde1f0e54af12 | [
"Apache-2.0"
] | 1 | 2020-02-03T11:31:45.000Z | 2020-02-03T11:31:45.000Z |
:scala-api-link: http://docs.couchbase.com/sdk-api/couchbase-scala-client-1.0.10/files/couchbase.html
:scala-current-version: 1.0.10
:version-server: 6.5
| 38.5 | 101 | 0.766234 |
609359b2beb904dcc9e8aba4c42ec0039d85d26e | 217 | adoc | AsciiDoc | docs/en-gb/modules/business-decisions/pages/reports.adoc | plentymarkets/plenty-manual-docs | 65d179a8feb8fcf1b594ef45883e3437287d8e09 | [
"MIT"
] | null | null | null | docs/en-gb/modules/business-decisions/pages/reports.adoc | plentymarkets/plenty-manual-docs | 65d179a8feb8fcf1b594ef45883e3437287d8e09 | [
"MIT"
] | 2 | 2022-01-05T10:31:24.000Z | 2022-03-11T11:56:07.000Z | docs/en-gb/modules/business-decisions/pages/reports.adoc | plentymarkets/plenty-manual-docs | 65d179a8feb8fcf1b594ef45883e3437287d8e09 | [
"MIT"
] | 1 | 2021-03-01T09:12:18.000Z | 2021-03-01T09:12:18.000Z |
= Raw data
:lang: en
include::{includedir}/_header.adoc[]
:keywords: Report, raw data
:description: Learn how to export raw data.
:position: 65
:url: business-decisions/plenty-bi/reports
:id: AZ5LGXN
:author: team-bi
| 21.7 | 43 | 0.746544 |
bb3b5e9b298ffe8e737c030584a4221a1be96890 | 53 | adoc | AsciiDoc | doc/src/docs/asciidoc/deployment/manual.adoc | east301/gprdb-experimental | 0436dd6b4a5cb68bf3ad39b9ff58a0a2b46a6e4a | [
"Apache-2.0"
] | null | null | null | doc/src/docs/asciidoc/deployment/manual.adoc | east301/gprdb-experimental | 0436dd6b4a5cb68bf3ad39b9ff58a0a2b46a6e4a | [
"Apache-2.0"
] | null | null | null | doc/src/docs/asciidoc/deployment/manual.adoc | east301/gprdb-experimental | 0436dd6b4a5cb68bf3ad39b9ff58a0a2b46a6e4a | [
"Apache-2.0"
] | null | null | null |
Manual deployment
-----------------
(to be written)
| 10.6 | 17 | 0.509434 |
42480f7924df4a193b606e6870ca34be3c4232ca | 323 | adoc | AsciiDoc | release-notes/src/1.7.3/release-notes.adoc | fossabot/onCourseDocs | a84f2f19fb43fd897a0a5166c4534fd60d3331ad | [
"CC-BY-4.0"
] | null | null | null | release-notes/src/1.7.3/release-notes.adoc | fossabot/onCourseDocs | a84f2f19fb43fd897a0a5166c4534fd60d3331ad | [
"CC-BY-4.0"
] | 5 | 2020-09-01T04:07:00.000Z | 2021-08-19T13:20:35.000Z | release-notes/src/1.7.3/release-notes.adoc | fossabot/onCourseDocs | a84f2f19fb43fd897a0a5166c4534fd60d3331ad | [
"CC-BY-4.0"
] | 3 | 2020-10-14T08:12:51.000Z | 2021-08-19T00:44:22.000Z |
= Release 1.7.3
15 May 2009
== Fixes
* QE filtering displays current classes initially
* Fixed a bug so that the enrolment is properly filled in after adding an
additional contact in QE
* Trying to attach a file with no extension would throw an exception.
* Fixed a bug which prevented the Class Tutor list from printing
| 26.916667 | 71 | 0.780186 |
caee9c7be64a60e194ce0617429b733b1f6eaa08 | 1,972 | adoc | AsciiDoc | inception/inception-ui-annotation/src/main/resources/META-INF/asciidoc/user-guide/annotation_create-annotations_primitive-features.adoc | MC-JY/inception | 60e67af28a7a4f6228e27e1ede48affc1b4bd47a | [
"Apache-2.0"
] | null | null | null | inception/inception-ui-annotation/src/main/resources/META-INF/asciidoc/user-guide/annotation_create-annotations_primitive-features.adoc | MC-JY/inception | 60e67af28a7a4f6228e27e1ede48affc1b4bd47a | [
"Apache-2.0"
] | null | null | null | inception/inception-ui-annotation/src/main/resources/META-INF/asciidoc/user-guide/annotation_create-annotations_primitive-features.adoc | MC-JY/inception | 60e67af28a7a4f6228e27e1ede48affc1b4bd47a | [
"Apache-2.0"
] | null | null | null |
////
// Licensed to the Technische Universität Darmstadt under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The Technische Universität Darmstadt
// licenses this file to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License.
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
////
= Primitive Features
Supported primitive feature types are string, boolean, integer, and float.
String features without a tagset are displayed using a text field or a text area with multiple rows. If multiple rows are enabled, the area can either be sized dynamically or be given a fixed size to which it collapses and from which it expands. A multi-row, non-dynamic text area expands when it receives focus and collapses again when focus is lost.
If the string feature has a tagset, it instead appears as a radio group, a combobox, or an auto-complete field, depending on how many tags are in the tagset and whether a particular editor type has been chosen.
There is also the option to have multi-valued string features. These are displayed as a multi-value select field and can be used with or without an associated tagset. Keyboard shortcuts are not supported.
Boolean features are displayed as a checkbox that can either be marked or unmarked.
Integer and float features are displayed using a number field.
However, if an integer feature is bounded and the difference between its maximum and minimum is less than 12, it can instead be displayed as a radio button group.
| 59.757576 | 329 | 0.78499 |
1f1aea060b97cea3108ee1c0449bee298afd8523 | 6,705 | adoc | AsciiDoc | docs/components/modules/ROOT/pages/nsq-component.adoc | NiteshKoushik/camel | f6f7119f4e959af4b83c7c0cd8f549316bd7d18f | [
"Apache-2.0"
] | null | null | null | docs/components/modules/ROOT/pages/nsq-component.adoc | NiteshKoushik/camel | f6f7119f4e959af4b83c7c0cd8f549316bd7d18f | [
"Apache-2.0"
] | null | null | null | docs/components/modules/ROOT/pages/nsq-component.adoc | NiteshKoushik/camel | f6f7119f4e959af4b83c7c0cd8f549316bd7d18f | [
"Apache-2.0"
] | null | null | null |
[[nsq-component]]
= NSQ Component
:page-source: components/camel-nsq/src/main/docs/nsq-component.adoc
*Since Camel 2.23*
// HEADER START
*Both producer and consumer are supported*
// HEADER END
http://nsq.io/[NSQ] is a realtime distributed messaging platform.
Maven users will need to add the following dependency to
their `pom.xml` for this component.
[source,xml]
------------------------------------------------------------
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-nsq</artifactId>
<!-- use the same version as your Camel core version -->
<version>x.y.z</version>
</dependency>
------------------------------------------------------------
== URI format
[source,java]
----------------------
nsq:topic[?options]
----------------------
Where *topic* is the topic name
== Options
// component options: START
The NSQ component supports 5 options, which are listed below.
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *servers* (common) | The hostnames of one or more nsqlookupd servers (consumer) or nsqd servers (producer). | | String
| *bridgeErrorHandler* (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean
| *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
| *basicPropertyBinding* (advanced) | Whether the component should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
| *useGlobalSslContextParameters* (security) | Enable usage of global SSL context parameters. | false | boolean
|===
// component options: END
// endpoint options: START
The NSQ endpoint is configured using URI syntax:
----
nsq:topic
----
with the following path and query parameters:
=== Path Parameters (1 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *topic* | *Required* The NSQ topic | | String
|===
=== Query Parameters (18 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *servers* (common) | The hostnames of one or more nsqlookupd servers (consumer) or nsqd servers (producer). | | String
| *userAgent* (common) | A String to identify the kind of client | | String
| *autoFinish* (consumer) | Automatically finish the NSQ Message when it is retrieved from the queue and before the Exchange is processed | true | Boolean
| *bridgeErrorHandler* (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean
| *channel* (consumer) | The NSQ channel | | String
| *lookupInterval* (consumer) | The lookup interval | 5000 | long
| *lookupServerPort* (consumer) | The NSQ lookup server port | 4161 | int
| *messageTimeout* (consumer) | The NSQ consumer timeout period for messages retrieved from the queue. A value of -1 is the server default | -1 | long
| *poolSize* (consumer) | Consumer pool size | 10 | int
| *requeueInterval* (consumer) | The requeue interval in milliseconds. A value of -1 is the server default | -1 | long
| *exceptionHandler* (consumer) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | | ExceptionHandler
| *exchangePattern* (consumer) | Sets the exchange pattern when the consumer creates an exchange. The value can be one of: InOnly, InOut, InOptionalOut | | ExchangePattern
| *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
| *port* (producer) | The port of the nsqd server | 4150 | int
| *basicPropertyBinding* (advanced) | Whether the endpoint should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
| *synchronous* (advanced) | Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). | false | boolean
| *secure* (security) | Set secure option indicating TLS is required | false | boolean
| *sslContextParameters* (security) | To configure security using SSLContextParameters | | SSLContextParameters
|===
// endpoint options: END
== Examples
To send a message to a NSQ server
[source,java]
----
from("direct:start").to("nsq:myTopic?servers=myserver:4161");
----
And to receive messages from NSQ
[source,xml]
----
<route>
<from uri="nsq:myTopic?servers=myserver:4161"/>
<to uri="bean:doSomething"/>
</route>
----
The server can be configured on the component level, for example if using Spring Boot in the `application.properties` file:
[source,properties]
----
camel.component.nsq.servers=myserver1:4161,my-second-server:4161
----
Then you can omit the servers from the endpoint URI
[source,java]
----
from("direct:start").to("nsq:myTopic");
----
| 48.941606 | 612 | 0.734079 |
5626144abccbfaffc683310dadf9451ed67e9839 | 617 | adoc | AsciiDoc | samples/incompatible-module/README.adoc | chali/dependency-management-samples | 4aec1b3cff6c0d0f79dbd8493cf806154692a701 | [
"Apache-2.0"
] | null | null | null | samples/incompatible-module/README.adoc | chali/dependency-management-samples | 4aec1b3cff6c0d0f79dbd8493cf806154692a701 | [
"Apache-2.0"
] | null | null | null | samples/incompatible-module/README.adoc | chali/dependency-management-samples | 4aec1b3cff6c0d0f79dbd8493cf806154692a701 | [
"Apache-2.0"
] | null | null | null |
## Declaring incompatibility with a module
It is also possible to reject all versions of a module in a dependency constraint. Such dependency constraints can be
used to declare incompatibilities between modules. For example, if having two different modules on the classpath would
introduce a compile or runtime error, it is possible to declare that they are incompatible, which would lead to a
resolution error if they happen to be found in the same dependency graph:
```
dependencies {
constraints {
implementation('org.codehaus.groovy:groovy-all') {
version { rejectAll() }
}
}
}
``` | 38.5625 | 117 | 0.742301 |
9b011b3fba1aebbd69b51af673e30d6aae6cd06d | 999 | adoc | AsciiDoc | doc-content/enterprise-only/installation/controller-con.adoc | gsheldon/kie-docs | 0e03f273940ba2cbab3e536ce8e4345fbbad2569 | [
"Apache-2.0"
] | null | null | null | doc-content/enterprise-only/installation/controller-con.adoc | gsheldon/kie-docs | 0e03f273940ba2cbab3e536ce8e4345fbbad2569 | [
"Apache-2.0"
] | 15 | 2018-02-07T04:36:00.000Z | 2018-08-23T17:59:00.000Z | doc-content/enterprise-only/installation/controller-con.adoc | gsheldon/kie-docs | 0e03f273940ba2cbab3e536ce8e4345fbbad2569 | [
"Apache-2.0"
] | null | null | null |
[id='controller-con']
= Installing and running the headless {PRODUCT_SHORT} controller
You can configure {KIE_SERVER} to run in managed or unmanaged mode. If {KIE_SERVER} is unmanaged, you must manually create and maintain containers. If {KIE_SERVER} is managed, the
ifdef::PAM[]
Process Automation Manager controller
endif::[]
ifdef::DM[]
Decision Server controller
endif::[]
manages the {KIE_SERVER} configuration and you interact with the controller to create and maintain containers.
The
ifdef::PAM[]
Process Automation Manager controller
endif::[]
ifdef::DM[]
Decision Server controller
endif::[]
is integrated with {CENTRAL}. If you install {CENTRAL}, use the *Execution Server* page to create and maintain containers. However, if you do not install {CENTRAL}, you can install the headless
ifdef::PAM[]
Process Automation Manager controller
endif::[]
ifdef::DM[]
Decision Server controller
endif::[]
and use the REST API or the {KIE_SERVER} Java Client API to interact with it.
| 34.448276 | 195 | 0.768769 |
9a98f50cdb9a0d893092c76a467f66870482b68d | 1,036 | adoc | AsciiDoc | rest_api/extension_apis/extension-apis-index.adoc | alebedev87/openshift-docs | b7ed96ce84670e2b286f51b4303c144a01764e2b | [
"Apache-2.0"
] | 625 | 2015-01-07T02:53:02.000Z | 2022-03-29T06:07:57.000Z | rest_api/extension_apis/extension-apis-index.adoc | alebedev87/openshift-docs | b7ed96ce84670e2b286f51b4303c144a01764e2b | [
"Apache-2.0"
] | 21,851 | 2015-01-05T15:17:19.000Z | 2022-03-31T22:14:25.000Z | rest_api/extension_apis/extension-apis-index.adoc | alebedev87/openshift-docs | b7ed96ce84670e2b286f51b4303c144a01764e2b | [
"Apache-2.0"
] | 1,681 | 2015-01-06T21:10:24.000Z | 2022-03-28T06:44:50.000Z |
[id="extension-apis"]
= Extension APIs
ifdef::product-title[]
include::modules/common-attributes.adoc[]
endif::[]
toc::[]
== APIService [apiregistration.k8s.io/v1]
Description::
+
--
APIService represents a server for a particular GroupVersion. Name must be "version.group".
--
Type::
`object`
== CustomResourceDefinition [apiextensions.k8s.io/v1]
Description::
+
--
CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>.
--
Type::
`object`
== MutatingWebhookConfiguration [admissionregistration.k8s.io/v1]
Description::
+
--
MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object.
--
Type::
`object`
== ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1]
Description::
+
--
ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it.
--
Type::
`object`
| 19.54717 | 148 | 0.758687 |
30fbae6088f32fc5ef9c35ed08e2388ef18b42f0 | 955 | adoc | AsciiDoc | documentation/src/main/asciidoc/topics/proc_using_jgroups_inline.adoc | franz1981/infinispan | 54fccffd1b2493277c25eb27cf6f0c7b2b151ddc | [
"Apache-2.0"
] | 1 | 2020-06-18T17:55:05.000Z | 2020-06-18T17:55:05.000Z | documentation/src/main/asciidoc/topics/proc_using_jgroups_inline.adoc | franz1981/infinispan | 54fccffd1b2493277c25eb27cf6f0c7b2b151ddc | [
"Apache-2.0"
] | 6 | 2020-12-21T17:07:42.000Z | 2022-02-01T01:04:33.000Z | documentation/src/main/asciidoc/topics/proc_using_jgroups_inline.adoc | franz1981/infinispan | 54fccffd1b2493277c25eb27cf6f0c7b2b151ddc | [
"Apache-2.0"
] | 2 | 2020-11-05T15:11:32.000Z | 2022-02-28T11:16:29.000Z |
[id='jgroups_inline-{context}']
= Using Inline JGroups Stacks
Custom JGroups stacks can help you optimize network performance for {brandname}
clusters compared to using the default stacks.
.Procedure
* Embed your custom JGroups stack definitions in `infinispan.xml` as in the
following example:
+
[source,xml,options="nowrap",subs=attributes+]
----
include::config_examples/config_inline_jgroups.xml[]
----
<1> root element of `infinispan.xml`.
<2> contains JGroups stack definitions.
<3> defines a JGroups stack named "prod".
<4> configures a {brandname} Cache Manager and names the "replicatedCache" cache definition as the default.
<5> uses the "prod" JGroups stack for cluster transport.
[TIP]
====
Use inheritance with inline JGroups stacks to tune and customize specific
transport properties.
====
.Reference
* link:#jgroups_inheritance-configuring[Adjusting and Tuning JGroups Stacks]
* link:{configdocroot}[{brandname} Configuration Schema]
| 29.84375 | 107 | 0.776963 |
0430e785d2ffa7f939427d34d29a1599b5c364db | 2,003 | adoc | AsciiDoc | src/asciidoctor/scala-android.adoc | mariopce/scalaonandroid | 29df8e456a712d03ce31c9b7741695f14708cb1a | [
"Apache-2.0"
] | null | null | null | src/asciidoctor/scala-android.adoc | mariopce/scalaonandroid | 29df8e456a712d03ce31c9b7741695f14708cb1a | [
"Apache-2.0"
] | null | null | null | src/asciidoctor/scala-android.adoc | mariopce/scalaonandroid | 29df8e456a712d03ce31c9b7741695f14708cb1a | [
"Apache-2.0"
] | null | null | null |
==== Scala Android
Prepare env
http://www.47deg.com/blog/scala-on-android-preparing-the-environment
==== Scala in Android
include::../../build.sbt[source]
Sbt == activator
==== Basic commands
[source]
./activator tasks
clean - clean project
debug - run app in debug mode, android wait for debugger
run - run project on device
test - run test
==== Typed Resources
TR > R
==== Implicit
Implicit != Explicit
(Polish: niejawny = implicit, jawny = explicit)
===== Method to show context
It looks just like it would in Java.
Note the nicely scoped local import:
[source]
def showToast(msg: String, ctx: Context): Unit = {
import android.widget.Toast._
makeText(ctx, msg, LENGTH_SHORT).show();
}
We call the method, passing the context explicitly:
[source]
showToast("test3", getApplicationContext)
===== Modify to implicit
[source]
def showToast(msg: String)(implicit ctx: Context): Unit = {
import android.widget.Toast._
makeText(ctx, msg, LENGTH_SHORT).show();
}
[source]
implicit val context: Context = getApplicationContext
showToast("test3") (getApplicationContext)
===== Implicit context
[source]
trait ImplicitContext {
this: Activity =>
implicit val context = this
}
When an activity mixes in `ImplicitContext`, the implicit context resolved inside that activity is the activity itself.
===== Implicit conventions
[source]
showToast("test3") (getApplicationContext)
[source]
showToast("test3")
[source]
"test3".toast()
===== Toastable
Toastable type
[source]
----
include::../../src/main/scala/com/fortysevendeg/scala/android/Toastable.scala[tags=toastable]
----
toastable call
[source]
new Toastable("test3").toast()
===== Implicit Toastable
[source]
----
include::../../src/main/scala/com/fortysevendeg/scala/android/ScalaToasts.scala[tags=scalatoasts]
----
Direct call
[source]
asToastable("test3").toast
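Since the included source files are not shown here, the following is a minimal sketch of what `Toastable` and `ScalaToasts` presumably look like, reconstructed from the calls above; treat it as an assumption rather than the project's actual code:

[source]
----
import android.content.Context
import android.widget.Toast

// Hypothetical reconstruction of the included sources.
class Toastable(msg: String) {
  def toast()(implicit ctx: Context): Unit =
    Toast.makeText(ctx, msg, Toast.LENGTH_SHORT).show()
}

trait ScalaToasts {
  implicit def asToastable(msg: String): Toastable = new Toastable(msg)
}
----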
===== Implicit conversion
[source]
class SampleActivity extends Activity with TypedFindView with ImplicitContext with ScalaToasts
"test3".toast
The compiler searches for an applicable implicit conversion and expands the call to:
[source]
asToastable("test3").toast(SampleActivity.this)
| 17.570175 | 100 | 0.724413 |
7b4c176a27fd6dbcc0c04f361bfa1700455a5ebd | 2,372 | asciidoc | AsciiDoc | docs/reference/setup/install/check-running.asciidoc | diwasjoshi/elasticsearch | 58ce0f94a0bbdf2576e0a00a62abe1854ee7fe2f | [
"Apache-2.0"
] | null | null | null | docs/reference/setup/install/check-running.asciidoc | diwasjoshi/elasticsearch | 58ce0f94a0bbdf2576e0a00a62abe1854ee7fe2f | [
"Apache-2.0"
] | null | null | null | docs/reference/setup/install/check-running.asciidoc | diwasjoshi/elasticsearch | 58ce0f94a0bbdf2576e0a00a62abe1854ee7fe2f | [
"Apache-2.0"
] | null | null | null | ==== Check that Elasticsearch is running
You can test that your {es} node is running by sending an HTTPS request to port
`9200` on `localhost`:
["source","sh",subs="attributes"]
----
curl --cacert {os-dir}{slash}certs{slash}http_ca.crt -u elastic https://localhost:9200 <1>
----
// NOTCONSOLE
<1> Ensure that you use `https` in your call, or the request will fail.
+
`--cacert`::
Path to the generated `http_ca.crt` certificate for the HTTP layer.
Enter the password for the `elastic` user that was generated during
installation, which should return a response like this:
////
The following hidden request is required before the response. Otherwise, you'll
get an error because there's a response with no request preceding it.
[source,console]
----
GET /
----
////
["source","js",subs="attributes,callouts"]
--------------------------------------------
{
"name" : "Cp8oag6",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
"version" : {
"number" : "{version_qualified}",
"build_flavor" : "{build_flavor}",
"build_type" : "{build_type}",
"build_hash" : "f27399d",
"build_date" : "2016-03-30T09:51:41.449Z",
"build_snapshot" : false,
"lucene_version" : "{lucene_version}",
"minimum_wire_compatibility_version" : "1.2.3",
"minimum_index_compatibility_version" : "1.2.3"
},
"tagline" : "You Know, for Search"
}
--------------------------------------------
// TESTRESPONSE[s/"name" : "Cp8oag6",/"name" : "$body.name",/]
// TESTRESPONSE[s/"cluster_name" : "elasticsearch",/"cluster_name" : "$body.cluster_name",/]
// TESTRESPONSE[s/"cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",/"cluster_uuid" : "$body.cluster_uuid",/]
// TESTRESPONSE[s/"build_hash" : "f27399d",/"build_hash" : "$body.version.build_hash",/]
// TESTRESPONSE[s/"build_date" : "2016-03-30T09:51:41.449Z",/"build_date" : $body.version.build_date,/]
// TESTRESPONSE[s/"build_snapshot" : false,/"build_snapshot" : $body.version.build_snapshot,/]
// TESTRESPONSE[s/"minimum_wire_compatibility_version" : "1.2.3"/"minimum_wire_compatibility_version" : $body.version.minimum_wire_compatibility_version/]
// TESTRESPONSE[s/"minimum_index_compatibility_version" : "1.2.3"/"minimum_index_compatibility_version" : $body.version.minimum_index_compatibility_version/]
// So much s/// but at least we test that the layout is close to matching....
| 40.896552 | 157 | 0.681703 |
f54bf7f1bb75b53d775f8bd0b3ced91e75140ac5 | 1,073 | adoc | AsciiDoc | java_test_ssl/README.adoc | lathspell/java_test | a567a46f28ef99a023ccff8a399aaae5fa63987b | [
"CC0-1.0"
] | 3 | 2019-02-01T20:15:55.000Z | 2020-05-21T07:48:32.000Z | java_test_ssl/README.adoc | lathspell/java_test | a567a46f28ef99a023ccff8a399aaae5fa63987b | [
"CC0-1.0"
] | 15 | 2020-03-04T22:02:39.000Z | 2022-01-21T23:15:46.000Z | java_test_ssl/README.adoc | lathspell/java_test | a567a46f28ef99a023ccff8a399aaae5fa63987b | [
"CC0-1.0"
] | 5 | 2019-02-05T12:35:58.000Z | 2020-05-09T22:53:42.000Z | = Java & SSL =
== Frameworks und Libraries
=== SpringBoot 2
* benutzt im RestTemplate java.net, nicht mehr Apache HTTP Client!
=== SpringBoot 1
* benutzt im RestTemplate noch Apache HTTP Client!
=== java.net HTTP Client
; Java 8
java -Djavax.net.debug=ssl,handshake
; Java 14
https://docs.oracle.com/en/java/javase/14/security/troubleshooting-security.html
java -Djava.security.debug=all
== Keytool
=== PEM zu JKS Truststore
keytool -import -v -trustcacerts -file input.pem -keystore output.jks -alias my-root-ca
=== PKCS12 zu JKS
keytool -importkeystore \
-srckeystore input.p12 -srcstoretype pkcs12 -srcalias foo \
-destkeystore output.jks -deststoretype jks -destalias foo -deststorepass changeit
=== PEM zu PKCS12 Keystore
keytool -importkeystore \
-srckeystore input.jks -srcstoretype JKS \
-destkeystore output.p12 -deststoretype PKCS12 -deststorepass changeit
=== Keystore Passwort ändern
keytool -storepasswd -keystore server.keystore -storepass OLD_SECRET -new NEW_SECRET
| 23.844444 | 92 | 0.714818 |
f8d43f54bc21b68995d7436bfc832e646809aa31 | 31,425 | asciidoc | AsciiDoc | ext/cl_khr_external_semaphore.asciidoc | ijkmn/OpenCL-Docs | 27b32190b1a4da4830c5625329b81a85ae486f58 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | 254 | 2017-05-16T15:49:08.000Z | 2022-03-23T03:03:27.000Z | ext/cl_khr_external_semaphore.asciidoc | ijkmn/OpenCL-Docs | 27b32190b1a4da4830c5625329b81a85ae486f58 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | 546 | 2017-06-04T14:42:51.000Z | 2022-03-25T21:09:48.000Z | ext/cl_khr_external_semaphore.asciidoc | ijkmn/OpenCL-Docs | 27b32190b1a4da4830c5625329b81a85ae486f58 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | 75 | 2017-05-25T08:54:41.000Z | 2022-03-31T11:26:25.000Z | // Copyright 2021 The Khronos Group. This work is licensed under a
// Creative Commons Attribution 4.0 International License; see
// http://creativecommons.org/licenses/by/4.0/
[[cl_khr_external_semaphore]]
== External Semaphores (Provisional)
`cl_khr_semaphore` introduced semaphores as a new type along with a set of APIs for create, release, retain, wait and signal operations on it.
This extension defines APIs and mechanisms to share semaphores created in an external API by importing into and exporting from OpenCL.
This extension defines:
* New attributes that can be passed as part of {cl_semaphore_properties_khr_TYPE} for specifying properties of external semaphores to be imported or exported.
* New attributes that can be passed as part of {cl_semaphore_info_khr_TYPE} for specifying properties of external semaphores to be exported.
* An extension to {clCreateSemaphoreWithPropertiesKHR} to accept external semaphore properties allowing to import or export an external semaphore into or from OpenCL.
* Semaphore handle types required for importing and exporting semaphores.
* Modifications to Wait and Signal API behavior when dealing with external semaphores created from different handle types.
* An API to query exportable semaphore handles using a specified handle type.
Other related extensions define specific external semaphores that may be imported into or exported from OpenCL.
=== General Information
==== Name Strings
`cl_khr_external_semaphore` +
`cl_khr_external_semaphore_dx_fence` +
`cl_khr_external_semaphore_opaque_fd` +
`cl_khr_external_semaphore_sync_fd` +
`cl_khr_external_semaphore_win32`
==== Version History
[cols="1,1,3",options="header",]
|====
| *Date* | *Version* | *Description*
| 2021-09-10 | 0.9.0 | Initial version (provisional).
|====
NOTE: This is a preview of an OpenCL provisional extension specification that has been Ratified under the Khronos Intellectual Property Framework. It is being made publicly available prior to being uploaded to the Khronos registry to enable review and feedback from the community. If you have feedback please create an issue on https://github.com/KhronosGroup/OpenCL-Docs/
==== Dependencies
This extension is written against the OpenCL Specification Version 3.0.8.
This extension requires OpenCL 1.2.
The `cl_khr_semaphore` extension is required as it defines semaphore objects as well as for wait and signal operations on semaphores.
For OpenCL to be able to import external semaphores from other APIs using this extension, the other API is required to provide the following mechanisms:

* Ability to export semaphore handles
* Ability to query semaphore handles in the form of one of the handle types supported by OpenCL.

Other APIs that want to use semaphores exported by OpenCL using this extension are required to provide the following mechanism:
==== Contributors
// spell-checker: disable
Ajit Hakke-Patil, NVIDIA +
Amit Rao, NVIDIA +
Balaji Calidas, QUALCOMM +
Ben Ashbaugh, INTEL +
Carsten Rohde, NVIDIA +
Christoph Kubisch, NVIDIA +
Debalina Bhattacharjee, NVIDIA +
James Jones, NVIDIA +
Jason Ekstrand, INTEL +
Jeremy Kemp, IMAGINATION +
Joshua Kelly, QUALCOMM +
Karthik Raghavan Ravi, NVIDIA +
Kedar Patil, NVIDIA +
Kevin Petit, ARM +
Nikhil Joshi, NVIDIA +
Sharan Ashwathnarayan, NVIDIA +
Vivek Kini, NVIDIA +
// spell-checker: enable
=== New Types
[source]
----
typedef cl_uint cl_external_semaphore_handle_type_khr;
----
=== New API Functions
[source]
----
cl_int clGetSemaphoreHandleForTypeKHR(
cl_semaphore_khr sema_object,
cl_device_id device,
cl_external_semaphore_handle_type_khr handle_type,
size_t handle_size,
void *handle_ptr,
size_t *handle_size_ret);
----
=== New API Enums
Accepted value for the _param_name_ parameter to {clGetPlatformInfo} to query external semaphore handle types that may be imported or exported by all devices in an OpenCL platform:
[source]
----
CL_PLATFORM_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR 0x2037
CL_PLATFORM_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR 0x2038
----
Accepted value for the _param_name_ parameter to {clGetDeviceInfo} to query external semaphore handle types that may be imported or exported by an OpenCL device:
[source]
----
CL_DEVICE_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR 0x204D
CL_DEVICE_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR 0x204E
----
The following new attributes can be passed as part of {cl_semaphore_properties_khr_TYPE} and {cl_semaphore_info_khr_TYPE}:
[source]
----
CL_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR 0x203F
CL_SEMAPHORE_EXPORT_HANDLE_TYPES_LIST_END_KHR 0
----
External semaphore handle type added by `cl_khr_external_semaphore_dx_fence`:
[source]
----
CL_SEMAPHORE_HANDLE_D3D12_FENCE_KHR 0x2059
----
External semaphore handle type added by `cl_khr_external_semaphore_opaque_fd`:
[source]
----
CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR 0x2055
----
External semaphore handle type added by `cl_khr_external_semaphore_sync_fd`:
[source]
----
CL_SEMAPHORE_HANDLE_SYNC_FD_KHR 0x2058
----
External semaphore handle types added by `cl_khr_external_semaphore_win32`:
[source]
----
CL_SEMAPHORE_HANDLE_OPAQUE_WIN32_KHR 0x2056
CL_SEMAPHORE_HANDLE_OPAQUE_WIN32_KMT_KHR 0x2057
----
=== Modifications to existing APIs added by this spec
The following new enums are added to the list of supported _param_names_ for {clGetPlatformInfo}:
.List of supported param_names by clGetPlatformInfo
[width="100%",cols="<33%,<17%,<50%",options="header"]
|====
| Platform Info | Return Type | Description
| {CL_PLATFORM_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR}
| {cl_external_semaphore_handle_type_khr_TYPE}[]
| Returns the list of importable external semaphore handle types supported by all devices in _platform_.
The size of this query may be 0 if no importable external semaphore handle types are supported by all devices in _platform_.
| {CL_PLATFORM_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR}
| {cl_external_semaphore_handle_type_khr_TYPE}[]
| Returns the list of exportable external semaphore handle types supported by all devices in the platform.
The size of this query may be 0 if no exportable external semaphore handle types are supported by all devices in _platform_.
|====
{clGetPlatformInfo} when called with _param_name_ {CL_PLATFORM_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR} returns a common list of external semaphore handle types supported for importing by all devices in the platform.
{clGetPlatformInfo} when called with _param_name_ {CL_PLATFORM_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR} returns a common list of external semaphore handle types supported for exporting by all devices in the platform.
The following new enums are added to the list of supported _param_names_ for {clGetDeviceInfo}:
.List of supported param_names by clGetDeviceInfo
[width="100%",cols="<33%,<17%,<50%",options="header"]
|====
| Device Info | Return Type | Description
| {CL_DEVICE_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR}
| {cl_external_semaphore_handle_type_khr_TYPE}[]
| Returns the list of importable external semaphore handle types supported by _device_.
The size of this query may be 0, indicating that the device does not support importing semaphores.
| {CL_DEVICE_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR}
| {cl_external_semaphore_handle_type_khr_TYPE}[]
| Returns the list of exportable external semaphore handle types supported by _device_.
The size of this query may be 0, indicating that the device does not support exporting semaphores.
|====
{clGetDeviceInfo} when called with _param_name_ {CL_DEVICE_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR} returns a list of external semaphore handle types supported for importing.
{clGetDeviceInfo} when called with _param_name_ {CL_DEVICE_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR} returns a list of external semaphore handle types supported for exporting.
At least one of the two queries {CL_DEVICE_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR} and {CL_DEVICE_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR} must return a non-empty list, indicating support for at least one valid semaphore handle type for import, for export, or for both.
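For example, before attempting to create or import a semaphore, an application can enumerate the handle types a device supports for import. A sketch of such a query (assuming the OpenCL headers declaring this extension are available; error checking omitted):

```c
#include <stdlib.h>

/* Query the size of the importable handle type list for 'device'. */
size_t size = 0;
clGetDeviceInfo(device, CL_DEVICE_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR,
                0, NULL, &size);

/* The list may be empty if the device cannot import semaphores. */
size_t count = size / sizeof(cl_external_semaphore_handle_type_khr);
cl_external_semaphore_handle_type_khr *types = malloc(size);
clGetDeviceInfo(device, CL_DEVICE_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR,
                size, types, NULL);

for (size_t i = 0; i < count; ++i) {
    if (types[i] == CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR) {
        /* This device can import POSIX fd semaphore handles. */
    }
}
free(types);
```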
The following new properties are added to the list of supported properties for {clCreateSemaphoreWithPropertiesKHR}:
.List of supported semaphore creation properties by clCreateSemaphoreWithPropertiesKHR
[width="100%",cols="<33%,<17%,<50%",options="header"]
|====
| Semaphore Property | Property Value | Description
// This is already described in cl_khr_semaphore so we don't need to describe it again here.
//| {CL_DEVICE_HANDLE_LIST_KHR}
// | {cl_device_id_TYPE}[]
// | Specifies the list of OpenCL devices (terminated with
// {CL_DEVICE_HANDLE_LIST_END_KHR}) to associate with the semaphore.
// This is also already described in cl_khr_semaphore so we don't need to describe it again here.
//| {CL_SEMAPHORE_TYPE_KHR}
// | {cl_semaphore_type_khr_TYPE}
// | Specifies the type of semaphore to create.
| {CL_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR}
| {cl_external_semaphore_handle_type_khr_TYPE}[]
| Specifies the list of semaphore handle type properties terminated with
{CL_SEMAPHORE_EXPORT_HANDLE_TYPES_LIST_END_KHR} that can be used to export
the semaphore being created.
|====
Add to the list of error conditions for {clCreateSemaphoreWithPropertiesKHR}:
// This is in the base spec so we don't need to describe it here.
//* {CL_INVALID_DEVICE} if one or more devices identified by properties
//{CL_DEVICE_HANDLE_LIST_KHR} is/are not part of devices within _context_ to which
//{cl_semaphore_khr_TYPE} being created will belong to.
* {CL_INVALID_DEVICE} if one or more devices identified by properties {CL_DEVICE_HANDLE_LIST_KHR} can not import the requested external semaphore handle type.
{clCreateSemaphoreWithPropertiesKHR} may return a NULL value on some implementations if _sema_props_ does not contain an external semaphore handle type to import.
Such implementations are required to return a valid semaphore when a supported external semaphore handle type and a valid external semaphore handle are specified.
Add to the list of supported _param_names_ by {clGetSemaphoreInfoKHR}:
.List of supported param_names by clGetSemaphoreInfoKHR
[width="100%",cols="<33%,<17%,<50%",options="header"]
|====
| Semaphore Info | Return Type | Description
// These are already in the base cl_khr_semaphore so we don't need to include them again here.
//| *CL_SEMAPHORE_CONTEXT_KHR*
// | cl_context *
// | cl_context
// | Returns the cl_context associated with the {cl_semaphore_khr_TYPE}.
//| *CL_SEMAPHORE_REFERENCE_COUNT_KHR*
// | cl_uint *
// | cl_uint
// | Returns the reference count associated with the {cl_semaphore_khr_TYPE}.
//| *CL_SEMAPHORE_PROPERTIES_KHR*
// | cl_semaphore_properties_khr**
// | cl_semaphore_properties_khr*
// | Returns the array of properties associated with the {cl_semaphore_khr_TYPE}.
//| *CL_SEMAPHORE_TYPE_KHR*
// | cl_semaphore_type_khr *
// | cl_semaphore_type_khr
// | Returns the type of the {cl_semaphore_khr_TYPE}.
//| *CL_SEMAPHORE_PAYLOAD_KHR*
// | cl_semaphore_payload_khr *
// | cl_semaphore_payload_khr
// | Returns the payload value of the {cl_semaphore_khr_TYPE}. For semaphore of
// type CL_SEMAPHORE_TYPE_BINARY_KHR, payload value returned should be 0 if the
// semaphore is in unsignaled state and 1 if it is in signaled
// state.
//| *CL_DEVICE_HANDLE_LIST_KHR*
// | cl_device_id **
// | cl_device_id *
// | Returns a NULL terminated list of cl_device_id (terminated with
// CL_DEVICE_HANDLE_LIST_END_KHR) for OpenCL devices the semaphore is associated
// with.
| {CL_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR}
| {cl_external_semaphore_handle_type_khr_TYPE}[]
| Returns the list of external semaphore handle types that may be used for
exporting. The size of this query may be 0 indicating that this
semaphore does not support any handle types for exporting.
|====
=== Exporting semaphore external handles
To export an external handle from a semaphore, call the function
include::{generated}/api/protos/clGetSemaphoreHandleForTypeKHR.txt[]
_sema_object_ specifies a valid semaphore object with exportable properties.
_device_ specifies a valid device for which a semaphore handle is being requested.
_handle_type_ specifies the type of semaphore handle that should be returned for this exportable _sema_object_ and must be one of the values specified when _sema_object_ was created.
_handle_size_ specifies the size of memory pointed by _handle_ptr_.
_handle_ptr_ is a pointer to memory where the exported external handle is returned.
If _handle_ptr_ is `NULL`, it is ignored.
_handle_size_ret_ returns the actual size in bytes for the external handle.
If _handle_size_ret_ is `NULL`, it is ignored.
{clGetSemaphoreHandleForTypeKHR} returns {CL_SUCCESS} if the semaphore handle is queried successfully.
Otherwise, it returns one of the following errors:
* {CL_INVALID_SEMAPHORE_KHR}
** if _sema_object_ is not a valid semaphore
// This is redundant with the error below.
** if _sema_object_ is not exportable
* {CL_INVALID_DEVICE}
** if _device_ is not a valid device, or
** if _sema_object_ belongs to a context that is not associated with _device_, or
** if _sema_object_ can not be shared with _device_.
* {CL_INVALID_VALUE} if the requested external semaphore handle type was not specified when _sema_object_ was created.
* {CL_INVALID_VALUE} if _handle_size_ is less than the size needed to store the returned handle.
// I don't think this can happen. This would have been checked when the semaphore was created.
// ** if CL_SEMAPHORE_HANDLE_*_KHR is specified as one of the _sema_props_ and
// the property CL_SEMAPHORE_HANDLE_*_KHR does not identify a valid external
// memory handle property reported by
// CL_PLATFORM_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR or
// CL_DEVICE_SEMAPHORE_IMPORT_HANDLE_TYPES_KHR queries.
* {CL_OUT_OF_RESOURCES} if there is a failure to allocate resources required by the OpenCL implementation on the device.
* {CL_OUT_OF_HOST_MEMORY} if there is a failure to allocate resources required by the OpenCL implementation on the host.
=== Importing semaphore external handles
Applications can import a semaphore payload into an existing semaphore using an
external semaphore handle. The effects of the import operation will be either
temporary or permanent, as specified by the application. If the import is
temporary, the implementation must restore the semaphore to its prior permanent
state after submitting the next semaphore wait operation. Performing a
subsequent temporary import on a semaphore before performing a semaphore wait
has no effect on this requirement; the next wait submitted on the semaphore must
still restore its last permanent state. A permanent payload import behaves as if
the target semaphore was destroyed, and a new semaphore was created with the
same handle but the imported payload. Because importing a semaphore payload
temporarily or permanently detaches the existing payload from a semaphore,
similar usage restrictions to those applied to {clReleaseSemaphoreKHR} are
applied to any command that imports a semaphore payload. Which of these import
types is used is referred to as the import operation's permanence. Each handle
type supports either one or both types of permanence.
The implementation must perform the import operation by either referencing or
copying the payload referred to by the specified external semaphore handle,
depending on the handle's type. The import method used is referred to as the
handle type's transference. When using handle types with reference transference,
importing a payload to a semaphore adds the semaphore to the set of all
semaphores sharing that payload. This set includes the semaphore from which the
payload was exported. Semaphore signaling and waiting operations performed on
any semaphore in the set must behave as if the set were a single semaphore.
Importing a payload using handle types with copy transference creates a
duplicate copy of the payload at the time of import, but makes no further
reference to it. Semaphore signaling and waiting operations performed on the
target of copy imports must not affect any other semaphore or payload.
Export operations have the same transference as the specified handle type's
import operations. Additionally, exporting a semaphore payload to a handle with
copy transference has the same side effects on the source semaphore's payload as
executing a semaphore wait operation. If the semaphore was using a temporarily
imported payload, the semaphore's prior permanent payload will be restored.
Please refer to handle specific specifications for more details on transference and
permanence requirements specific to handle type.
=== Descriptions of External Semaphore Handle Types
This section describes the external semaphore handle types that are added by related extensions.
Applications can import the same semaphore payload into multiple OpenCL contexts, into the same context from which it was exported, and multiple times into a given OpenCL context.
In all cases, each import operation must create a distinct semaphore object.
==== File Descriptor Handle Types
The `cl_khr_external_semaphore_opaque_fd` extension extends {cl_external_semaphore_handle_type_khr_TYPE} to support the following new types of handles, and adds as a property that may be specified when creating a semaphore from an external handle:
--
* {CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR} specifies a POSIX file descriptor handle that has only limited valid usage outside of OpenCL and other compatible APIs. It must be compatible with the POSIX system calls dup, dup2, close, and the non-standard system call dup3. Additionally, it must be transportable over a socket using an SCM_RIGHTS control message. It owns a reference to the underlying synchronization primitive represented by its semaphore object.
--
Transference and permanence properties for handle types added by `cl_khr_external_semaphore_opaque_fd`:
.Transference and Permanence Properties for `cl_khr_external_semaphore_opaque_fd` handles
[width="100%",cols="60%,<20%,<20%",options="header"]
|====
| Handle Type | Transference | Permanence
| {CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR}
| Reference
| Temporary, Permanent
|====
The `cl_khr_external_semaphore_sync_fd` extension extends {cl_external_semaphore_handle_type_khr_TYPE} to support the following new types of handles, and adds as a property that may be specified when creating a semaphore from an external handle:
--
* *CL_SEMAPHORE_HANDLE_SYNC_FD_KHR* specifies a POSIX file descriptor handle to a Linux Sync File or Android Fence object. It can be used with any native API accepting a valid sync file or fence as input. It owns a reference to the underlying synchronization primitive associated with the file descriptor. Implementations which support importing this handle type must accept any type of sync or fence FD supported by the native system they are running on.
--
The special value -1 for fd is treated like a valid sync file descriptor referring to an object that has already signaled. The import operation will succeed and the semaphore will have a temporarily imported payload as if a valid file descriptor had been provided.
Note: This special behavior for importing an invalid sync file descriptor allows easier interoperability with other system APIs which use the convention that an invalid sync file descriptor represents work that has already completed and does not need to be waited for. It is consistent with the option for implementations to return a -1 file descriptor when exporting a {CL_SEMAPHORE_HANDLE_SYNC_FD_KHR} from a {cl_semaphore_khr_TYPE} which is signaled.
Transference and permanence properties for handle types added by `cl_khr_external_semaphore_sync_fd`:
.Transference and Permanence Properties for `cl_khr_external_semaphore_sync_fd` handles
[width="100%",cols="60%,<20%,<20%",options="header"]
|====
| Handle Type | Transference | Permanence
| {CL_SEMAPHORE_HANDLE_SYNC_FD_KHR}
| Copy
| Temporary
|====
For these extensions, importing a semaphore payload from a file descriptor transfers ownership of the file descriptor from the application to the OpenCL implementation. The application must not perform any operations on the file descriptor after a successful import.
==== NT Handle Types
The `cl_khr_external_semaphore_dx_fence` extension extends {cl_external_semaphore_handle_type_khr_TYPE} to support the following new types of handles, and adds as a property that may be specified when creating a semaphore from an external handle:
--
* {CL_SEMAPHORE_HANDLE_D3D12_FENCE_KHR} specifies an NT handle returned by ID3D12Device::CreateSharedHandle referring to a Direct3D 12 fence, or ID3D11Device5::CreateFence referring to a Direct3D 11 fence. It owns a reference to the underlying synchronization primitive associated with the Direct3D fence.
--
When waiting on semaphores using {clEnqueueWaitSemaphoresKHR} or signaling semaphores using {clEnqueueSignalSemaphoresKHR}, the semaphore payload must be provided for semaphores created from {CL_SEMAPHORE_HANDLE_D3D12_FENCE_KHR}.
* If the _sema_objects_ list contains a mix of semaphores created from {CL_SEMAPHORE_HANDLE_D3D12_FENCE_KHR} handles and semaphores of other handle types, then _sema_payload_list_ must point to a list of _num_sema_objects_ payload values, one for each semaphore in _sema_objects_.
However, the payload values corresponding to semaphores of type CL_SEMAPHORE_TYPE_BINARY_KHR may be set to 0, as they will be ignored.

{clEnqueueWaitSemaphoresKHR} and {clEnqueueSignalSemaphoresKHR} may return {CL_INVALID_VALUE} if the _sema_objects_ list has one or more semaphores created from {CL_SEMAPHORE_HANDLE_D3D12_FENCE_KHR} handles and _sema_payload_list_ is NULL.
Transference and permanence properties for handle types added by `cl_khr_external_semaphore_dx_fence`:
--
.Transference and Permanence Properties for `cl_khr_external_semaphore_dx_fence` handles
[width="100%",cols="60%,<20%,<20%",options="header"]
|====
| Handle Type | Transference | Permanence
| {CL_SEMAPHORE_HANDLE_D3D12_FENCE_KHR}
| Reference
| Temporary, Permanent
|====
--
The `cl_khr_external_semaphore_win32` extension extends {cl_external_semaphore_handle_type_khr_TYPE} to support the following new types of handles, and adds as a property that may be specified when creating a semaphore from an external handle:
--
* {CL_SEMAPHORE_HANDLE_OPAQUE_WIN32_KHR} specifies an NT handle that has only limited valid usage outside of OpenCL and other compatible APIs. It must be compatible with the functions DuplicateHandle, CloseHandle, CompareObjectHandles, GetHandleInformation, and SetHandleInformation. It owns a reference to the underlying synchronization primitive represented by its semaphore object.
* {CL_SEMAPHORE_HANDLE_OPAQUE_WIN32_KMT_KHR} specifies a global share handle that has only limited valid usage outside of OpenCL and other compatible APIs. It is not compatible with any native APIs. It does not own a reference to the underlying synchronization primitive represented by its semaphore object, and will therefore become invalid when all semaphore objects associated with it are destroyed.
--
Transference and permanence properties for handle types added by `cl_khr_external_semaphore_win32`:
.Transference and Permanence Properties for `cl_khr_external_semaphore_win32` handles
[width="100%",cols="60%,<20%,<20%",options="header"]
|====
| Handle Type | Transference | Permanence
| *CL_SEMAPHORE_HANDLE_OPAQUE_WIN32_KHR*
| Reference
| Temporary, Permanent
| *CL_SEMAPHORE_HANDLE_OPAQUE_WIN32_KMT_KHR*
| Reference
| Temporary, Permanent
|====
For these extensions, importing a semaphore payload from Windows handles does not transfer ownership of the handle to the OpenCL implementation. For handle types defined as NT handles, the application must release ownership using the CloseHandle system call when the handle is no longer needed.
[[cl_khr_external_semaphore-Sample-Code]]
=== Sample Code
. Example for importing a semaphore created by another API in OpenCL in a single-device context.
+
--
[source]
----
// Get cl_devices of the platform.
clGetDeviceIDs(..., &devices, &deviceCount);
// Create cl_context with just first device
clCreateContext(..., 1, devices, ...);
// Obtain fd/win32 or similar handle for external semaphore to be imported
// from the other API.
int fd = getFdForExternalSemaphore();
// Create clSema of type cl_semaphore_khr usable on the only available device
// assuming the semaphore was imported from the same device.
cl_semaphore_properties_khr sema_props[] =
{(cl_semaphore_properties_khr)CL_SEMAPHORE_TYPE_KHR,
(cl_semaphore_properties_khr)CL_SEMAPHORE_TYPE_BINARY_KHR,
(cl_semaphore_properties_khr)CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR,
(cl_semaphore_properties_khr)fd,
0};
int errcode_ret = 0;
cl_semaphore_khr clSema = clCreateSemaphoreWithPropertiesKHR(context,
sema_props,
&errcode_ret);
----
--
. Example for importing a semaphore created by another API in OpenCL in a multi-device context for single device usage.
+
--
[source]
----
// Get cl_devices of the platform.
clGetDeviceIDs(..., &devices, &deviceCount);
// Create cl_context with first two devices
clCreateContext(..., 2, devices, ...);
// Obtain fd/win32 or similar handle for external semaphore to be imported
// from the other API.
int fd = getFdForExternalSemaphore();
// Create clSema of type cl_semaphore_khr usable only on devices[1]
// assuming the semaphore was imported from the same device.
cl_semaphore_properties_khr sema_props[] =
{(cl_semaphore_properties_khr)CL_SEMAPHORE_TYPE_KHR,
(cl_semaphore_properties_khr)CL_SEMAPHORE_TYPE_BINARY_KHR,
(cl_semaphore_properties_khr)CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR,
(cl_semaphore_properties_khr)fd,
(cl_semaphore_properties_khr)CL_DEVICE_HANDLE_LIST_KHR,
(cl_semaphore_properties_khr)devices[1], CL_DEVICE_HANDLE_LIST_END_KHR,
0};
int errcode_ret = 0;
cl_semaphore_khr clSema = clCreateSemaphoreWithPropertiesKHR(context,
sema_props,
&errcode_ret);
----
--
. Example for synchronization using a semaphore created by another API and imported in OpenCL
+
--
[source]
----
// Create clSema using one of the above examples of external semaphore creation.
int errcode_ret = 0;
cl_semaphore_khr clSema = clCreateSemaphoreWithPropertiesKHR(context,
sema_props,
&errcode_ret);
// Start the main loop
while (true) {
// (not shown) Signal the semaphore from the other API
// Wait for the semaphore in OpenCL
clEnqueueWaitSemaphoresKHR(/*command_queue*/ command_queue,
/*num_sema_objects*/ 1,
/*sema_objects*/ &clSema,
/*num_events_in_wait_list*/ 0,
/*event_wait_list*/ NULL,
/*event*/ NULL);
// Launch kernel
clEnqueueNDRangeKernel(command_queue, ...);
// Signal the semaphore in OpenCL
clEnqueueSignalSemaphoresKHR(/*command_queue*/ command_queue,
/*num_sema_objects*/ 1,
/*sema_objects*/ &clSema,
/*num_events_in_wait_list*/ 0,
/*event_wait_list*/ NULL,
/*event*/ NULL);
// (not shown) Launch work in the other API that waits on 'clSema'
}
----
--
. Example for synchronization using semaphore exported by OpenCL
+
--
[source]
----
// Get cl_devices of the platform.
clGetDeviceIDs(..., &devices, &deviceCount);
// Create cl_context with first two devices
clCreateContext(..., 2, devices, ...);
// Create clSema of type cl_semaphore_khr usable only on device 1
cl_semaphore_properties_khr sema_props[] =
{(cl_semaphore_properties_khr)CL_SEMAPHORE_TYPE_KHR,
(cl_semaphore_properties_khr)CL_SEMAPHORE_TYPE_BINARY_KHR,
(cl_semaphore_properties_khr)CL_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR,
(cl_semaphore_properties_khr)CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR,
CL_SEMAPHORE_EXPORT_HANDLE_TYPES_LIST_END_KHR,
(cl_semaphore_properties_khr)CL_DEVICE_HANDLE_LIST_KHR,
(cl_semaphore_properties_khr)devices[1],
CL_DEVICE_HANDLE_LIST_END_KHR,
0};
cl_int errcode_ret = 0;
cl_semaphore_khr clSema = clCreateSemaphoreWithPropertiesKHR(context,
sema_props,
&errcode_ret);
// The application queries the handle type and the exportable handle associated with the semaphore.
clGetSemaphoreInfoKHR(clSema,
CL_SEMAPHORE_EXPORT_HANDLE_TYPES_KHR,
sizeof(cl_external_semaphore_handle_type_khr),
&handle_type,
&handle_type_size);
// The other API or process can use the exported semaphore handle
// to import it
int fd = -1;
if (handle_type == CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR) {
clGetSemaphoreHandleForTypeKHR(clSema,
device,
CL_SEMAPHORE_HANDLE_OPAQUE_FD_KHR,
sizeof(int),
&fd,
NULL);
}
// Start the main rendering loop
while (true) {
// (not shown) Signal the semaphore from the other API
// Wait for the semaphore in OpenCL
clEnqueueWaitSemaphoresKHR(/*command_queue*/ command_queue,
/*num_sema_objects*/ 1,
/*sema_objects*/ &clSema,
/*num_events_in_wait_list*/ 0,
/*event_wait_list*/ NULL,
/*event*/ NULL);
// Launch kernel
clEnqueueNDRangeKernel(command_queue, ...);
// Signal the semaphore in OpenCL
clEnqueueSignalSemaphoresKHR(/*command_queue*/ command_queue,
/*num_sema_objects*/ 1,
/*sema_objects*/ &clSema,
/*num_events_in_wait_list*/ 0,
/*event_wait_list*/ NULL,
/*event*/ NULL);
// (not shown) Launch work in the other API that waits on 'clSema'
}
----
--
---
---
== `Faker().backToTheFuture`
.Dictionary file
[%collapsible]
====
[source,yaml]
----
{% snippet 'back_to_the_future_provider_dict' %}
----
====
.Available Functions
[%collapsible]
====
[source,kotlin]
----
Faker().backToTheFuture.characters() // => Marty McFly
Faker().backToTheFuture.dates() // => November 5, 1955
Faker().backToTheFuture.quotes() // => Ah, Jesus Christ! Jesus Christ, Doc, you disintegrated Einstein!
----
====
// Module included in the following assemblies:
//
// * openshift_images/using-templates.adoc
[id="templates-waiting-for-readiness_{context}"]
= Waiting for template readiness
Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, {tsb-name}, or `TemplateInstance` API is considered complete.
To use this feature, mark one or more objects of kind `Build`, `BuildConfig`, `Deployment`, `DeploymentConfig`, `Job`, or `StatefulSet` in a template with the following annotation:
[source,text]
----
"template.alpha.openshift.io/wait-for-ready": "true"
----
Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails.
For the purposes of instantiation, readiness and failure of each object kind are defined as follows:
[cols="1a,2a,2a", options="header"]
|===
| Kind
| Readiness
| Failure
| `Build`
| Object reports phase complete.
| Object reports phase canceled, error, or failed.
| `BuildConfig`
| Latest associated build object reports phase complete.
| Latest associated build object reports phase canceled, error, or failed.
| `Deployment`
| Object reports new replica set and deployment available. This honors readiness probes defined on the object.
| Object reports progressing condition as false.
|`DeploymentConfig`
| Object reports new replication controller and deployment available. This honors readiness probes defined on the object.
| Object reports progressing condition as false.
| `Job`
| Object reports completion.
| Object reports that one or more failures have occurred.
| `StatefulSet`
| Object reports all replicas ready. This honors readiness probes defined on
the object.
| Not applicable.
|===
The following is an example template extract, which uses the `wait-for-ready` annotation. Further examples can be found in the {product-title} quick start templates.
[source,yaml]
----
kind: Template
apiVersion: template.openshift.io/v1
metadata:
name: my-template
objects:
- kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
name: ...
annotations:
# wait-for-ready used on BuildConfig ensures that template instantiation
# will fail immediately if build fails
template.alpha.openshift.io/wait-for-ready: "true"
spec:
...
- kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
name: ...
annotations:
template.alpha.openshift.io/wait-for-ready: "true"
spec:
...
- kind: Service
apiVersion: v1
metadata:
name: ...
spec:
...
----
.Additional recommendations
* Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly.
* Avoid referencing the `latest` tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag.
* A good template builds and deploys cleanly without requiring modifications after the template is deployed.
[[servlet-authorization-filtersecurityinterceptor]]
= Authorize `HttpServletRequest` with `FilterSecurityInterceptor`
:figures: {image-resource}/servlet/authorization
:icondir: {image-resource}/icons
This section builds on <<servlet-architecture,Servlet Architecture and Implementation>> by digging deeper into how <<servlet-authorization,authorization>> works in Servlet-based applications.
{security-api-url}org/springframework/security/web/access/intercept/FilterSecurityInterceptor.html[`FilterSecurityInterceptor`] provides <<servlet-authorization,authorization>> for `HttpServletRequests`. It is inserted into the <<servlet-filterchainproxy,FilterChainProxy>> as one of the <<servlet-security-filters>>.
.Authorize HttpServletRequest
image::{figures}/filtersecurityinterceptor.png[]
* image:{icondir}/number_1.png[] First, the `FilterSecurityInterceptor` obtains an <<servlet-authentication-authentication>> from the <<servlet-authentication-securitycontextholder,SecurityContextHolder>>.
* image:{icondir}/number_2.png[] Second, `FilterSecurityInterceptor` creates a {security-api-url}org/springframework/security/web/FilterInvocation.html[`FilterInvocation`] from the `HttpServletRequest`, `HttpServletResponse`, and `FilterChain` that are passed into the `FilterSecurityInterceptor`.
// FIXME: link to FilterInvocation
* image:{icondir}/number_3.png[] Next, it passes the `FilterInvocation` to the `SecurityMetadataSource` to get the `ConfigAttributes`.
* image:{icondir}/number_4.png[] Finally, it passes the `Authentication`, `FilterInvocation`, and `ConfigAttributes` to the `AccessDecisionManager`.
** image:{icondir}/number_5.png[] If authorization is denied, an `AccessDeniedException` is thrown. In this case the <<servlet-exceptiontranslationfilter,`ExceptionTranslationFilter`>> handles the `AccessDeniedException`.
** image:{icondir}/number_6.png[] If access is granted, `FilterSecurityInterceptor` continues with the <<servlet-filters-review,FilterChain>>, which allows the application to process normally.
// configuration (xml/java)
By default, Spring Security's authorization requires all requests to be authenticated. The explicit configuration looks like:
.Every Request Must be Authenticated
====
.Java
[source,java,role="primary"]
----
protected void configure(HttpSecurity http) throws Exception {
http
// ...
.authorizeRequests(authorize -> authorize
.anyRequest().authenticated()
);
}
----
.XML
[source,xml,role="secondary"]
----
<http>
<!-- ... -->
<intercept-url pattern="/**" access="authenticated"/>
</http>
----
.Kotlin
[source,kotlin,role="secondary"]
----
fun configure(http: HttpSecurity) {
http {
// ...
authorizeRequests {
authorize(anyRequest, authenticated)
}
}
}
----
====
We can configure Spring Security to have different rules by adding more rules in order of precedence.
.Authorize Requests
====
.Java
[source,java,role="primary"]
----
protected void configure(HttpSecurity http) throws Exception {
http
// ...
.authorizeRequests(authorize -> authorize // <1>
.mvcMatchers("/resources/**", "/signup", "/about").permitAll() // <2>
.mvcMatchers("/admin/**").hasRole("ADMIN") // <3>
.mvcMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')") // <4>
.anyRequest().denyAll() // <5>
);
}
----
.XML
[source,xml,role="secondary"]
----
<http> <!--1-->
<!-- ... -->
<!--2-->
<intercept-url pattern="/resources/**" access="permitAll"/>
<intercept-url pattern="/signup" access="permitAll"/>
<intercept-url pattern="/about" access="permitAll"/>
<intercept-url pattern="/admin/**" access="hasRole('ADMIN')"/> <!--3-->
<intercept-url pattern="/db/**" access="hasRole('ADMIN') and hasRole('DBA')"/> <!--4-->
<intercept-url pattern="/**" access="denyAll"/> <!--5-->
</http>
----
.Kotlin
[source,kotlin,role="secondary"]
----
fun configure(http: HttpSecurity) {
http {
authorizeRequests { // <1>
authorize("/resources/**", permitAll) // <2>
authorize("/signup", permitAll)
authorize("/about", permitAll)
authorize("/admin/**", hasRole("ADMIN")) // <3>
authorize("/db/**", "hasRole('ADMIN') and hasRole('DBA')") // <4>
authorize(anyRequest, denyAll) // <5>
}
}
}
----
====
<1> There are multiple authorization rules specified. Each rule is considered in the order it was declared.
<2> We specified multiple URL patterns that any user can access. Specifically, any user can access a request if the URL starts with "/resources/", equals "/signup", or equals "/about".
<3> Any URL that starts with "/admin/" will be restricted to users who have the role "ROLE_ADMIN". You will notice that since we are invoking the `hasRole` method, we do not need to specify the "ROLE_" prefix.
<4> Any URL that starts with "/db/" requires the user to have both "ROLE_ADMIN" and "ROLE_DBA". You will notice that since we are using the `hasRole` expression, we do not need to specify the "ROLE_" prefix.
<5> Any URL that has not already been matched is denied access. This is a good strategy if you do not want to accidentally forget to update your authorization rules.
= Layers Renderer
=== Description
A widget that renders site layers and components into a simple, colorful graphical grid.
=== Usage
1. Deploy the widget with the following command:
```bash
./gradlew deploy
```
2. Open the Enonic XP Content Studio;
3. Select any content;
4. Open the widgets panel on the right and select Layers Renderer.
=== Development
===== Step 1: Choose a starter
Create an application template using any starter present in the Enonic link:https://github.com/enonic?utf8=✓&q=starter[repo].
For widgets like this, the link:https://github.com/enonic/starter-vanilla[vanilla starter] should be enough.
Detailed instructions on how to initialize the project using a starter can be found in the README file of the repository of the starter you choose.
```bash
mkdir new-project
cd new-project
[$XP_INSTALL]/toolbox/toolbox.sh init-project -n com.edloidas.layersenderer -r starter-vanilla
```
===== Step 2: Create template
Since we are creating a widget, all files should be placed under `src/main/resources/admin/widgets/layersrenderer`. The last folder name is the name of our widget.
There we should place an XML descriptor with a name that matches the name of the widget (`layersrenderer.xml`):
```XML
<?xml version="1.0" encoding="UTF-8"?>
<widget>
<display-name>Layers Renderer</display-name>
<interfaces>
<interface>com.enonic.xp.content-manager.context-widget</interface>
</interfaces>
</widget>
```
We must also create a view (`layersrenderer.html`):
```html
<body>
<div>
<!-- Styles -->
<link rel="stylesheet" href="../../../assets/css/layersrenderer.css"
data-th-href="${portal.assetUrl({'_path=css/layersrenderer.css'})}"
type="text/css" media="all"/>
<style data-th-if="${css}" data-th-text="${css}"></style>
<div data-th-id="'layersrendererid_' + ${uid}" class="layersrenderer">
</div>
</div>
</body>
<script th:inline="javascript">
/*<![CDATA[*/
var uid = /*[[${uid}]]*/ 'value';
var data = /*[[${data}]]*/ 'value';
/*]]>*/
</script>
<script data-th-src="${portal.assetUrl({'_path=js/layersrenderer.js'})}"></script>
```
...and controller (`layersrenderer.js`) for our widget:
```js
var contentLib = require('/lib/xp/content');
var portalLib = require('/lib/xp/portal');
var thymeleaf = require('/lib/xp/thymeleaf');
var ioLib = require('/lib/xp/io');
var view = resolve('layersrenderer.html');
var styles = ioLib.getResource(('/assets/css/layersrenderer.css'));
var css = ioLib.readText(styles.getStream());
function handleGet(req) {
var uid = req.params.uid;
var key = req.params.contentId || portalLib.getContent()._id;
var data = contentLib.get({ key: key, branch: 'draft' });
var params = {
uid: uid,
data: data.page || null
};
return {
contentType: 'text/html',
body: thymeleaf.render(view, params)
};
}
exports.get = handleGet;
```
===== Step 3: Create assets
All sources should be placed under the `src/main/resources/assets` directory.
Let's create there a file with styles (`css/layersrenderer.css`) and a file with scripts (`js/layersrenderer.js`). Note that these names are not bound to any convention; you can give those files any name.
SAMARA BRANCH OF RADIO RESEARCH & DEVELOPMENT INSTITUTE
= Progressive Organization POM
== License
Apache License, Version 2.0
For the full text, see the LICENSE.txt file or http://www.apache.org/licenses/LICENSE-2.0
[role="xpack"]
[[xpack-logs]]
= Logs
[partintro]
--
Use the Logs UI to explore logs for common servers, containers, and services.
{kib} provides a compact, console-like display that you can customize.
[role="screenshot"]
image::logs/images/logs-console.png[Log Console in Kibana]
[float]
== Add data
Kibana provides step-by-step instructions to help you add log data. The
{infra-guide}[Infrastructure Monitoring Guide] is a good source for more
detailed information and instructions.
[float]
== Configure data sources
The `filebeat-*` index pattern is used to query data by default.
If your logs are located in a different set of indices, or use a different
timestamp field, you can adjust the source configuration via the user interface
or the {kib} configuration file.
NOTE: Logs and Infrastructure share a common data source definition in
each space. Changes in one of them can influence the data displayed in the
other.
[float]
=== Configure source
Configure source can be accessed via the corresponding
image:logs/images/logs-configure-source-gear-icon.png[Configure source icon]
button in the toolbar.
[role="screenshot"]
image::logs/images/logs-configure-source.png[Configure Logs UI source button in Kibana]
This opens the source configuration fly-out dialog, in which the following
configuration items can be inspected and adjusted:
* *Name*: The name of the source configuration.
* *Indices*: The patterns of the elasticsearch indices to read metrics and logs
from.
* *Fields*: The names of particular fields in the indices that need to be known
to the Infrastructure and Logs UIs in order to query and interpret the data
correctly.
[role="screenshot"]
image::logs/images/logs-configure-source-dialog.png[Configure logs UI source dialog in Kibana]
TIP: If <<xpack-spaces>> are enabled in your Kibana instance, any configuration
changes performed via Configure source are specific to that space. You can
therefore easily make different subsets of the data available by creating
multiple spaces with different data source configurations.
[float]
[[logs-read-only-access]]
=== Read only access
When you have insufficient privileges to change the source configuration, the following
indicator in Kibana will be displayed. The buttons to change the source configuration
won't be visible. For more information on granting access to
Kibana see <<xpack-security-authorization>>.
[role="screenshot"]
image::logs/images/read-only-badge.png[Example of Logs' read only access indicator in Kibana's header]
[float]
=== Configuration file
The settings in the configuration file are used as a fallback when no other
configuration for that space has been defined. They are located in the
configuration namespace `xpack.infra.sources.default`. See
<<logs-ui-settings-kb>> for a complete list of the possible entries.
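For example, a `kibana.yml` fragment that overrides the default log indices and
timestamp field might look like this. The key names shown here are assumptions
for illustration; verify them against <<logs-ui-settings-kb>> for your version:

[source,yaml]
----
xpack.infra.sources.default:
  logAlias: "my-logs-*"
  fields:
    timestamp: "@timestamp"
----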
--
include::logs-ui.asciidoc[]
# Bite Sized Angular: Update route without reload
== Initialization
[[gl.init]]
* *init*( ) +
[small]#Binding to http://glew.sourceforge.net/basic.html[glewInit]().
This function *must* be called as soon as a GL context is obtained and made current, and
*before calling any other MoonGL function*
(since it initializes OpenGL's function pointers, failing to do so would likely cause a
segmentation fault). +
See <<snippet_init, example>>.#
* _boolean_ = *is_supported*(_string_) +
[small]#Binding to http://glew.sourceforge.net/basic.html[glewIsSupported]() (accepts the same strings).#
[small]#See <<snippet_is_supported, example>>.#
* The *gl* table contains the following version fields for *version information*: +
[small]#pass:[-] *pass:[gl._VERSION]*: a string describing the MoonGL version (e.g. '_MoonGL 0.4_'), and +
pass:[-] *pass:[gl._GLEW_VERSION]*: a string describing the GLEW version (e.g. '_GLEW 1.13.0_').#
* To retrieve the *OpenGL version*, use the <<gl.get, get>>() function (this can be
done only after initialization).
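
The startup sequence described above, then, is: create a window and a GL context, make the context current, and only then call *init*( ). Here is a minimal sketch; the window and context are created with the companion MoonGLFW library, whose function names are an assumption here and not part of MoonGL itself:

[source,lua]
----
glfw = require("moonglfw") -- assumed companion library for windowing
gl = require("moongl")

window = glfw.create_window(640, 480, "Example")
glfw.make_context_current(window) -- a GL context is now current

gl.init() -- mandatory: initializes OpenGL's function pointers (glewInit)
-- any other MoonGL call may follow from here on
----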
== Command Execution
[[gl.get_graphics_reset_status]]
* _status_ = *get_graphics_reset_status*( ) +
[small]#_status_: '_no error_' for GL_NO_ERROR, '_guilty context reset_' for GL_GUILTY_CONTEXT_RESET, etc.# +
[small]#Rfr: https://www.opengl.org/sdk/docs/man/html/glGetGraphicsResetStatus.xhtml[glGetGraphicsResetStatus].#
* <<gl.get, get>>
[[gl.flush]]
* *flush*( ) +
*finish*( ) +
[small]#Rfr: https://www.khronos.org/opengl/wiki/GLAPI/glFlush[glFlush] -
https://www.khronos.org/opengl/wiki/GLAPI/glFinish[glFinish].#
NOTE: The *glGetError*( ) function is not exposed. It is used internally by MoonGL,
which checks for errors each time it executes an OpenGL command and raises
an error if the command did not succeed.
.xref:index.adoc[Azure Service Bus Management Connector]
* xref:index.adoc[About Azure Service Bus Management Connector]
* xref:azure-service-bus-management-connector-studio.adoc[Use Studio to Configure Azure Service Bus Management Connector]
* xref:azure-service-bus-management-connector-config-topics.adoc[Azure Service Bus Management Connector Additional Configuration]
* xref:azure-service-bus-management-connector-xml-maven.adoc[Azure Service Bus Management Connector XML and Maven Support]
* xref:azure-service-bus-management-connector-examples.adoc[Azure Service Bus Management Connector Examples]
* xref:azure-service-bus-management-connector-reference.adoc[Azure Service Bus Management Connector Reference]
The default strategy for defining labels is to use the class name; however, the strategy used to define labels for a given node is completely configurable. For example, you can use static mapping to define your labels:
[source,groovy]
----
class Person {
static mapping = {
labels "Person", "People"
}
}
----
You can also define labels dynamically. For example:
[source,groovy]
----
class Person {
static mapping = {
labels { GraphPersistentEntity pe -> "`${pe.javaClass.name}`" }
}
}
----
WARNING: Dynamic labels have a negative impact on write performance, as GORM is unable to batch operations with dynamic labels, so they should be used sparingly.
Or mix static and dynamic labels:
[source,groovy]
----
static mapping = {
labels "People", { GraphPersistentEntity pe -> "`${pe.javaClass.name}`" }
}
----
At the cost of read/write performance you can define dynamic labels based on an instance:
[source,groovy]
----
static mapping = {
labels { GraphPersistentEntity pe, instance -> // 2 arguments: instance dependent label
"`${instance.profession}`"
}
}
----
:plugin: dynatrace
:type: output
///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: %VERSION%
:release_date: %RELEASE_DATE%
:changelog_url: %CHANGELOG_URL%
:include_path: ../../../../logstash/docs/include
///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
[id="plugins-{type}s-{plugin}"]
=== Dynatrace output plugin
include::{include_path}/plugin_header.asciidoc[]
==== Description
A logstash output plugin for sending logs to the Dynatrace https://www.dynatrace.com/support/help/how-to-use-dynatrace/log-monitoring/log-monitoring-v2/post-log-ingest/[Generic log ingest API v2].
Please review the documentation for this API before using the plugin.
[id="plugins-{type}s-{plugin}-options"]
==== Example Output Configuration Options
This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
[cols="<,<,<",options="header",]
|=======================================================================
|Setting |Input type|Required
| <<plugins-{type}s-{plugin}-ingest_endpoint_url>> |{logstash-ref}/configuration-file-structure.html#string[string]|Yes
| <<plugins-{type}s-{plugin}-api_key>> |{logstash-ref}/configuration-file-structure.html#string[string]|Yes
| <<plugins-{type}s-{plugin}-ssl_verify_none>> |{logstash-ref}/configuration-file-structure.html#boolean[boolean]|No
|=======================================================================
Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all
output plugins.
[id="plugins-{type}s-{plugin}-ingest_endpoint_url"]
===== `ingest_endpoint_url`
* Value type is {logstash-ref}/configuration-file-structure.html#string[string]
This is the full URL of the https://www.dynatrace.com/support/help/how-to-use-dynatrace/log-monitoring/log-monitoring-v2/post-log-ingest/[Generic log ingest API v2] endpoint on your ActiveGate.
Example: `"ingest_endpoint_url" => "https://abc123456.live.dynatrace.com/api/v2/logs/ingest"`
[id="plugins-{type}s-{plugin}-api_key"]
===== `api_key`
* Value type is {logstash-ref}/configuration-file-structure.html#string[string]
This is the https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/[Dynatrace API token] which will be used to authenticate log ingest requests.
It requires the `logs.ingest` (Ingest Logs) scope to be set and it is recommended to limit scope to only this one.
Example: `"api_key" => "dt0c01.4XLO3..."`
[id="plugins-{type}s-{plugin}-ssl_verify_none"]
===== `ssl_verify_none`
* Value type is {logstash-ref}/configuration-file-structure.html#boolean[boolean]
* Default value is `false`
It is recommended to leave this optional configuration set to `false` unless absolutely required.
Setting `ssl_verify_none` to `true` causes the output plugin to skip certificate verification when sending log ingest requests to SSL and TLS protected HTTPS endpoints.
This option may be required if you are using a self-signed certificate, an expired certificate, or a certificate which was generated for a different domain than the one in use.
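Putting the options above together, a minimal pipeline output section might look like the following. The endpoint URL is a placeholder, and the API key is read from a `DT_API_KEY` environment variable (an assumed name) rather than hard-coded:

[source,ruby]
----
output {
  dynatrace {
    ingest_endpoint_url => "https://abc123456.live.dynatrace.com/api/v2/logs/ingest"
    api_key => "${DT_API_KEY}"
    # ssl_verify_none defaults to false; only set it to true if you must
    # skip certificate verification (for example, a self-signed certificate).
  }
}
----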
[id="plugins-{type}s-{plugin}-common-options"]
include::{include_path}/{type}.asciidoc[]
:imagesdir: doc
# clinar - A tool to clean up stale GitLab runners
image:https://img.shields.io/badge/License-Apache%202.0-blue.svg[link="http://www.apache.org/licenses/LICENSE-2.0"]
image:https://github.com/steffakasid/clinar/actions/workflows/codeql-analysis.yml/badge.svg[link:https://github.com/steffakasid/clinar/actions/workflows/codeql-analysis.yml]
image:https://github.com/steffakasid/clinar/actions/workflows/release.yml/badge.svg[link:https://github.com/steffakasid/clinar/actions/workflows/release.yml]
image:https://github.com/steffakasid/clinar/actions/workflows/go-test.yml/badge.svg[link:https://github.com/steffakasid/clinar/actions/workflows/go-test.yml]
image:coverage_badge.png[]
This tool basically gets all offline runners that a user can administer. If you don't provide the `--approve` flag, the tool just shows all runners which are offline with some additional information. Once you provide the `--approve` flag, all offline runners are deleted.
## Installation
On OSX or Linux just use brew to install:
.How to brew install
[source,sh]
----
brew install steffakasid/clinar/clinar
# or
brew tap steffakasid/clinar
#and then
brew install clinar
----
Check out `brew help` and `man brew` for details.
## Flags and Config Options
.Usage
clinar [flags]
.Environment Variables
GITLAB_HOST:: set the GitLab host to be able to run against self hosted GitLab instances [Default: https://gitlab.com]
GITLAB_TOKEN:: GitLab token to access the GitLab API. To view runners read_api should be sufficient. To cleanup stale runners you must have full API access.
.Flags
--approve, -a:: Boolean flag to toggle approve. If you provide this flag stale runners are deleted.
--exclude, -e:: String[] flag (can be provided multiple times). Defines projects/ groups, by name or id, which are excluded. This flag takes precedence over include. If one group/ project is excluded, the full runner is excluded from the cleanup list.
--include, -i:: String flag to define a regular expression for projects/ groups which should be included. If one group/ project is included, the runner is included in the cleanup list.
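
For example, combining the environment variables and flags above (the group and project names below are made up for illustration):

[source,sh]
----
export GITLAB_HOST="https://gitlab.example.com"
export GITLAB_TOKEN="<personal access token>"

# Dry run: list offline runners, skipping everything belonging to "infra"
# and only considering projects/ groups matching "team-a-.*"
clinar --exclude infra --include "team-a-.*"

# Same filter, but actually delete the stale runners found
clinar --exclude infra --include "team-a-.*" --approve
----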
## Using sops encrypted config file
You can now provide a link:https://github.com/mozilla/sops[sops] encrypted config file. To create one you need any supported encryption key e.g. gpg and encrypt your file like the following:
.Encrypting config
[source,sh]
----
❯ sops --pgp=<GPG Fingerprint> --input-type yaml --output-type yaml $HOME/.clinar
----
NOTE: You can also add the yaml extension to the filename: $HOME/.clinar.yaml in that case you don't need to specify `--input-type` and `--output-type`
GPG Fingerprint:: Your gpg fingerprint, which you can find out with `gpg --list-keys`
You can set any flag or env var within the config e.g.:
.Content of config file
[source,yaml]
----
GITLAB_TOKEN: <gitlab personal token>
GITLAB_HOST: <custom gitlab host>
----
gitlab personal token:: A GitLab personal access token, which can be created in your user profile in GitLab
custom gitlab host:: If necessary, you can set a custom GitLab host (e.g. a company-private one)
== Development
=== Generate Coverage Badge
The badge is generated using: https://github.com/jpoles1/gopherbadger
// * Unused. Can be removed by 4.9 if still unused. Request full peer review for the module if it’s used.
[id="using-images-source-to-image-php"]
= PHP
include::modules/common-attributes.adoc[]
:context: using-images-source-to-image-php
toc::[]
This topic includes information on the source-to-image (S2I) supported PHP images available for {product-title} users.
//Add link to Build -> S21 following updates
include::modules/images-using-images-s2i-php.adoc[leveloffset=+1]
include::modules/images-using-images-s2i-php-pulling-images.adoc[leveloffset=+1]
include::modules/images-s2i-build-process-overview.adoc[leveloffset=+1]
include::modules/images-using-images-s2i-php-configuration.adoc[leveloffset=+1]
include::modules/images-using-images-s2i-php-hot-deploying.adoc[leveloffset=+1]
// THIS FILE IS AUTOMATICALLY GENERATED: DO NOT EDIT
= image:kamelets/kafka-source.svg[] Kafka Source
*Provided by: "Red Hat"*
Receive data from Kafka topics.
== Configuration Options
The following table summarizes the configuration options available for the `kafka-source` Kamelet:
[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example
| *bootstrapServers {empty}* *| Brokers| Comma separated list of Kafka Broker URLs| string| |
| *password {empty}* *| Password| Password to authenticate to kafka| string| |
| *topic {empty}* *| Topic Names| Comma separated list of Kafka topic names| string| |
| *user {empty}* *| Username| Username to authenticate to Kafka| string| |
| allowManualCommit| Allow Manual Commit| Whether to allow doing manual commits| boolean| `false`|
| autoCommitEnable| Auto Commit Enable| If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer| boolean| `true`|
| autoOffsetReset| Auto Offset Reset| What to do when there is no initial offset. There are 3 enums and the value can be one of latest, earliest, none| string| `"latest"`|
| pollOnError| Poll On Error Behavior| What to do if kafka threw an exception while polling for new messages. There are 5 enums and the value can be one of DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP| string| `"ERROR_HANDLER"`|
| saslMechanism| SASL Mechanism| The Simple Authentication and Security Layer (SASL) Mechanism used.| string| `"PLAIN"`|
| securityProtocol| Security Protocol| Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported| string| `"SASL_SSL"`|
|===
NOTE: Fields marked with an asterisk ({empty}*) are mandatory.
== Dependencies
At runtime, the `kafka-source` Kamelet relies upon the presence of the following dependencies:
- camel:kafka
- camel:kamelet
== Usage
This section describes how you can use the `kafka-source`.
=== Knative Source
You can use the `kafka-source` Kamelet as a Knative source by binding it to a Knative object.
.kafka-source-binding.yaml
[source,yaml]
----
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
name: kafka-source-binding
spec:
source:
ref:
kind: Kamelet
apiVersion: camel.apache.org/v1alpha1
name: kafka-source
properties:
bootstrapServers: "The Brokers"
password: "The Password"
topic: "The Topic Names"
user: "The Username"
sink:
ref:
kind: Channel
apiVersion: messaging.knative.dev/v1
name: mychannel
----
==== *Prerequisite*
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.
==== *Procedure for using the cluster CLI*
. Save the `kafka-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.
. Run the source by using the following command:
+
[source,shell]
----
oc apply -f kafka-source-binding.yaml
----
==== *Procedure for using the Kamel CLI*
Configure and run the source by using the following command:
[source,shell]
----
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" channel:mychannel
----
This command creates the KameletBinding in the current namespace on the cluster.
=== Kafka Source
You can use the `kafka-source` Kamelet as a Kafka source by binding it to a Kafka topic.
.kafka-source-binding.yaml
[source,yaml]
----
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
name: kafka-source-binding
spec:
source:
ref:
kind: Kamelet
apiVersion: camel.apache.org/v1alpha1
name: kafka-source
properties:
bootstrapServers: "The Brokers"
password: "The Password"
topic: "The Topic Names"
user: "The Username"
sink:
ref:
kind: KafkaTopic
apiVersion: kafka.strimzi.io/v1beta1
name: my-topic
----
==== *Prerequisites*
Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.
==== *Procedure for using the cluster CLI*
. Save the `kafka-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.
. Run the source by using the following command:
+
[source,shell]
----
oc apply -f kafka-source-binding.yaml
----
==== *Procedure for using the Kamel CLI*
Configure and run the source by using the following command:
[source,shell]
----
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----
This command creates the KameletBinding in the current namespace on the cluster.
== Kamelet source file
https://github.com/openshift-integration/kamelet-catalog/blob/main/kafka-source.kamelet.yaml
//////////////////////////////////////////
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
//////////////////////////////////////////
ifndef::core-operators[]
:core-operators: core-operators.adoc
endif::[]
ifndef::core-semantics[]
:core-semantics: core-semantics.adoc
endif::[]
= Syntax
This chapter covers the syntax of the Groovy programming language.
The grammar of the language derives from the Java grammar,
but enhances it with specific constructs for Groovy, and allows certain simplifications.
== Comments
=== Single-line comment
Single-line comments start with `//` and can be found at any position in the line.
The characters following `//`, until the end of the line, are considered part of the comment.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=single_line_comment,indent=0]
----
=== Multiline comment
A multiline comment starts with `/\*` and can be found at any position in the line.
The characters following `/*` will be considered part of the comment, including new line characters,
up to the first `*/` closing the comment.
Multiline comments can thus be put at the end of a statement, or even inside a statement.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=multiline_comment,indent=0]
----
=== Groovydoc comment
Similarly to multiline comments, Groovydoc comments are multiline, but start with `/\**` and end with `*/`.
Lines following the first Groovydoc comment line can optionally start with a star `*`.
Those comments are associated with:
* type definitions (classes, interfaces, enums, annotations),
* fields and properties definitions
* methods definitions
Although the compiler will not complain about Groovydoc comments not being associated with the above language elements,
you should place the comment immediately before the construct it documents.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=groovydoc_comment,indent=0]
----
Groovydoc follows the same conventions as Java's own Javadoc.
So you'll be able to use the same tags as with Javadoc.
In addition, Groovy supports *Runtime Groovydoc* since 3.0.0, i.e. Groovydoc can be retained at runtime.
NOTE: Runtime Groovydoc is disabled by default. It can be enabled by adding JVM option `-Dgroovy.attach.runtime.groovydoc=true`
The Runtime Groovydoc starts with `/\**@` and ends with `*/`, for example:
[source,groovy]
----
/**@
* Some class groovydoc for Foo
*/
class Foo {
/**@
* Some method groovydoc for bar
*/
void bar() {
}
}
assert Foo.class.groovydoc.content.contains('Some class groovydoc for Foo') // <1>
assert Foo.class.getMethod('bar', new Class[0]).groovydoc.content.contains('Some method groovydoc for bar') // <2>
----
<1> Get the runtime groovydoc for class `Foo`
<2> Get the runtime groovydoc for method `bar`
=== Shebang line
Besides the single-line comment, there is a special line comment, often called the _shebang_ line, understood by UNIX systems,
which allows scripts to be run directly from the command-line, provided you have installed the Groovy distribution
and the `groovy` command is available on the `PATH`.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=shebang_comment_line,indent=0]
----
NOTE: The `#` character must be the first character of the file. Any indentation would yield a compilation error.
== Keywords
Groovy has the following reserved keywords:
[cols="1,1,1,1"]
.Reserved Keywords
|===
include::../test/SyntaxTest.groovy[tags=reserved_keywords,indent=0]
|===
Of these, `const`, `goto`, `strictfp`, and `threadsafe` are not currently in use.
The reserved keywords can't in general be used for variable, field and method names.
****
A trick allows methods to be defined having the same name as a keyword
by surrounding the name in quotes as shown in the following example:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=reserved_keywords_example,indent=0]
----
Using such names might be confusing and is often best avoided.
The trick is primarily intended to enable certain Java integration scenarios
and certain link:core-domain-specific-languages.html[DSL] scenarios where
having "verbs" and "nouns" with the same name as keywords may be desirable.
****
In addition, Groovy has the following contextual keywords:
[cols="1,1,1,1"]
.Contextual Keywords
|===
include::../test/SyntaxTest.groovy[tags=contextual_keywords,indent=0]
|===
These words are only keywords in certain contexts and can be more freely used in some places,
in particular for variables, fields and method names.
****
This extra lenience allows using method or variable names that were not keywords in earlier
versions of Groovy or are not keywords in Java. Examples are shown here:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=contextual_keywords_examples,indent=0]
----
Groovy programmers familiar with these contextual keywords may still wish to avoid
using those names unless there is a good reason to use such a name.
****
The restrictions on reserved keywords also apply for the
primitive types, the boolean literals and the null literal (all of which are discussed later):
[cols="1,1,1,1"]
.Other reserved words
|===
include::../test/SyntaxTest.groovy[tags=reserved_words,indent=0]
|===
****
While not recommended, the same trick as for reserved keywords can be used:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=reserved_words_example,indent=0]
----
Using such words as method names is potentially confusing and often best avoided; however,
it might be useful for certain kinds of link:core-domain-specific-languages.html[DSLs].
****
== Identifiers
=== Normal identifiers
Identifiers start with a letter, a dollar or an underscore.
They cannot start with a number.
A letter can be in the following ranges:
* 'a' to 'z' (lowercase ascii letter)
* 'A' to 'Z' (uppercase ascii letter)
* '\u00C0' to '\u00D6'
* '\u00D8' to '\u00F6'
* '\u00F8' to '\u00FF'
* '\u0100' to '\uFFFE'
Subsequent characters can contain letters and numbers.
Here are a few examples of valid identifiers (here, variable names):
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=valid_identifiers,indent=0]
----
But the following ones are invalid identifiers:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=invalid_identifiers,indent=0]
----
All keywords are also valid identifiers when following a dot:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=keywords_valid_id_after_dot,indent=0]
----
=== Quoted identifiers
Quoted identifiers appear after the dot of a dotted expression.
For instance, the `name` part of the `person.name` expression can be quoted with `person."name"` or `person.'name'`.
This is particularly interesting when certain identifiers contain illegal characters that are forbidden by the Java Language Specification,
but which are allowed by Groovy when quoted. For example, characters like a dash, a space, an exclamation mark, etc.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=quoted_id,indent=0]
----
As we shall see in the <<all-strings,following section on strings>>, Groovy provides different string literals.
All kind of strings are actually allowed after the dot:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=quoted_id_with_all_strings,indent=0]
----
There's a difference between plain character strings and Groovy's GStrings (interpolated strings):
in the latter case, the interpolated values are inserted into the final string when evaluating the whole identifier:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=quoted_id_with_gstring,indent=0]
----
[[all-strings]]
== Strings
Text literals are represented in the form of chains of characters called strings.
Groovy lets you instantiate `java.lang.String` objects, as well as GStrings (`groovy.lang.GString`)
which are also called _interpolated strings_ in other programming languages.
=== Single-quoted string
Single-quoted strings are a series of characters surrounded by single quotes:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=string_1,indent=0]
----
NOTE: Single-quoted strings are plain `java.lang.String` and don't support interpolation.
=== String concatenation
All the Groovy strings can be concatenated with the `+` operator:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=string_plus,indent=0]
----
=== Triple-single-quoted string
Triple-single-quoted strings are a series of characters surrounded by triplets of single quotes:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=triple_single_0,indent=0]
----
NOTE: Triple-single-quoted strings are plain `java.lang.String` and don't support interpolation.
Triple-single-quoted strings may span multiple lines.
The content of the string can cross line boundaries without the need to split the string in several pieces
and without concatenation or newline escape characters:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=triple_single_1,indent=0]
----
If your code is indented, for example in the body of the method of a class, your string will contain the whitespace of the indentation.
The Groovy Development Kit contains methods for stripping out the indentation with the `String#stripIndent()` method,
and with the `String#stripMargin()` method that takes a delimiter character to identify the text to remove from the beginning of a string.
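As a quick illustration of both methods (the exact whitespace handling is described in the Groovy Development Kit documentation):

```groovy
// stripIndent() removes the smallest common leading whitespace from every line
def indented = '''\
    first line
    second line'''.stripIndent()
assert indented == 'first line\nsecond line'

// stripMargin() removes everything up to and including the '|' delimiter on each line
def margined = '''one
                 |two'''.stripMargin()
assert margined == 'one\ntwo'
```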
When creating a string as follows:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=triple_single_2,indent=0]
----
You will notice that the resulting string contains a newline character as first character.
It is possible to strip that character by escaping the newline with a backslash:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=triple_single_3,indent=0]
----
==== Escaping special characters
You can escape single quotes with the backslash character to avoid terminating the string literal:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=string_2,indent=0]
----
And you can escape the escape character itself with a double backslash:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=string_3,indent=0]
----
Some special characters also use the backslash as escape character:
[cols="1,2" options="header"]
|====
|Escape sequence
|Character
|\b
|backspace
|\f
|formfeed
|\n
|newline
|\r
|carriage return
|\s
|single space
|\t
|tabulation
|\\
|backslash
|\'
|single quote within a single-quoted string (and optional for triple-single-quoted and double-quoted strings)
|\"
|double quote within a double-quoted string (and optional for triple-double-quoted and single-quoted strings)
|====
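A few of these escape sequences in action:

```groovy
assert 'col1\tcol2'.contains('\t')               // tab
assert 'line1\nline2'.split('\n').size() == 2    // newline
assert 'a single quote: \'' == "a single quote: '"
assert 'a backslash: \\'.endsWith('\\')          // escaped backslash
```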
We'll see some more escaping details when it comes to other types of strings discussed later.
==== Unicode escape sequence
For characters that are not present on your keyboard, you can use unicode escape sequences:
a backslash, followed by 'u', then 4 hexadecimal digits.
For example, the Euro currency symbol can be represented with:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=string_4,indent=0]
----
=== Double-quoted string
Double-quoted strings are a series of characters surrounded by double quotes:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=string_5,indent=0]
----
NOTE: Double-quoted strings are plain `java.lang.String` if there's no interpolated expression,
but are `groovy.lang.GString` instances if interpolation is present.
NOTE: To escape a double quote, you can use the backslash character: +"A double quote: \""+.
==== String interpolation
Any Groovy expression can be interpolated in all string literals, apart from single and triple-single-quoted strings.
Interpolation is the act of replacing a placeholder in the string with its value upon evaluation of the string.
The placeholder expressions are surrounded by `${}`. The curly braces may be omitted for unambiguous dotted expressions,
i.e. we can use just a $ prefix in those cases.
If the GString is ever passed to a method taking a String, the expression value inside the placeholder
is evaluated to its string representation (by calling `toString()` on that expression) and the resulting
String is passed to the method.
Here, we have a string with a placeholder referencing a local variable:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_1,indent=0]
----
Any Groovy expression is valid, as we can see in this example with an arithmetic expression:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_2,indent=0]
----
[NOTE]
Not only are expressions allowed in between the `${}` placeholder, but so are statements. However, a statement's value is just `null`.
So if several statements are inserted in that placeholder, the last one should somehow return a meaningful value to be inserted.
For instance, +"The sum of 1 and 2 is equal to ${def a = 1; def b = 2; a + b}"+ is supported and works as expected but a good practice is usually to stick to simple expressions inside GString placeholders.
In addition to `${}` placeholders, we can also use a lone `$` sign prefixing a dotted expression:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_3,indent=0]
----
But only dotted expressions of the form `a.b`, `a.b.c`, etc, are valid. Expressions containing parentheses like method calls,
curly braces for closures, dots which aren't part of a property expression or arithmetic operators would be invalid.
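For instance, nested property access works with the plain `$` prefix (the map below is just for illustration):

```groovy
def person = [name: 'Guillaume', address: [city: 'Paris']]
// a.b and a.b.c dotted expressions are valid without curly braces
assert "$person.name lives in $person.address.city" == 'Guillaume lives in Paris'
```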
Given the following variable definition of a number:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_4,indent=0]
----
The following statement will throw a `groovy.lang.MissingPropertyException` because Groovy believes you're trying to access the `toString` property of that number, which doesn't exist:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_5,indent=0]
----
NOTE: You can think of `"$number.toString()"` as being interpreted by the parser as `"${number.toString}()"`.
Similarly, if the expression is ambiguous, you need to keep the curly braces:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_3b,indent=0]
include::../test/SyntaxTest.groovy[tags=gstring_3b2,indent=0]
include::../test/SyntaxTest.groovy[tags=gstring_3b3,indent=0]
----
If you need to escape the `$` or `${}` placeholders in a GString so they appear as is without interpolation,
you just need to use a `\` backslash character to escape the dollar sign:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_6,indent=0]
----
==== Special case of interpolating closure expressions
So far, we've seen we could interpolate arbitrary expressions inside the `${}` placeholder, but there is a special case and notation for closure expressions. When the placeholder contains an arrow, `${->}`, the expression is actually a closure expression -- you can think of it as a closure with a dollar prepended in front of it:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=closure_in_gstring_1,indent=0]
----
<1> The closure is a parameterless closure which doesn't take arguments.
<2> Here, the closure takes a single `java.io.StringWriter` argument, to which you can append content with the `<<` leftShift operator.
In either case, both placeholders are embedded closures.
In appearance, it looks like a more verbose way of defining expressions to be interpolated,
but closures have an interesting advantage over mere expressions: lazy evaluation.
Let's consider the following sample:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=closure_in_gstring_2,indent=0]
----
<1> We define a `number` variable containing `1` that we then interpolate within two GStrings,
as an expression in `eagerGString` and as a closure in `lazyGString`.
<2> We expect the resulting string to contain the same string value of 1 for `eagerGString`.
<3> Similarly for `lazyGString`
<4> Then we change the value of the variable to a new number
<5> With a plain interpolated expression, the value was actually bound at the time of creation of the GString.
<6> But with a closure expression, the closure is called upon each coercion of the GString into String,
resulting in an updated string containing the new number value.
[NOTE]
An embedded closure expression taking more than one parameter will generate an exception at runtime.
Only closures with zero or one parameters are allowed.
==== Interoperability with Java
When a method (whether implemented in Java or Groovy) expects a `java.lang.String`,
but we pass a `groovy.lang.GString` instance,
the `toString()` method of the GString is automatically and transparently called.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=java_gstring_interop_1,indent=0]
----
<1> We create a GString variable
<2> We double check it's an instance of the GString
<3> We then pass that GString to a method taking a String as parameter
<4> The signature of the `takeString()` method explicitly says its sole parameter is a String
<5> We also verify that the parameter is indeed a String and not a GString.
==== GString and String hashCodes
Although interpolated strings can be used in lieu of plain Java strings,
they differ from strings in a particular way: their hashCodes are different.
Plain Java strings are immutable, whereas the resulting String representation of a GString can vary,
depending on its interpolated values.
Even for the same resulting string, GStrings and Strings don't have the same hashCode.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_hashcode_1,indent=0]
----
Since GStrings and Strings have different hashCode values, using GStrings as Map keys should be avoided,
especially if we try to retrieve an associated value with a String instead of a GString.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=gstring_hashcode_2,indent=0]
----
<1> The map is created with an initial pair whose key is a GString
<2> When we try to fetch the value with a String key, we will not find it, as Strings and GString have different hashCode values
=== Triple-double-quoted string
Triple-double-quoted strings behave like double-quoted strings, with the addition that they are multiline, like the triple-single-quoted strings.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=triple_double_1,indent=0]
----
NOTE: Neither double quotes nor single quotes need be escaped in triple-double-quoted strings.
=== Slashy string
Beyond the usual quoted strings, Groovy offers slashy strings, which use `/` as the opening and closing delimiter.
Slashy strings are particularly useful for defining regular expressions and patterns,
as there is no need to escape backslashes.
Example of a slashy string:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=slashy_1,indent=0]
----
Only forward slashes need to be escaped with a backslash:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=slashy_2,indent=0]
----
Slashy strings are multiline:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=slashy_3,indent=0]
----
Slashy strings can be thought of as just another way to define a GString but with different escaping rules. They hence support interpolation:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=slashy_4,indent=0]
----
==== Special cases
An empty slashy string cannot be represented with a double forward slash, as it's understood by the Groovy parser as a line comment.
That's why the following assert would actually not compile as it would look like a non-terminated statement:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=slashy_5,indent=0]
----
Slashy strings were mostly designed to make regexps easier, so a few things that
are errors in GStrings, like `$()` or `$5`, will work with slashy strings.
Remember that escaping backslashes is not required. An alternative way of thinking of this is
that in fact escaping is not supported. The slashy string `/\t/` won't contain a tab but instead
a backslash followed by the character 't'. Escaping is only allowed for the slash character, i.e. `/\/folder/`
will be a slashy string containing `'/folder'`. A consequence of slash escaping is that a slashy string
can't end with a backslash. Otherwise that will escape the slashy string terminator.
You can instead use a special trick, `/ends with slash ${'\'}/`. But best just avoid using a slashy string in such a case.
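The escaping rules above can be verified with a few assertions:

```groovy
// backslash-t in a slashy string is two characters, not a tab
assert /no\tescape/ == 'no\\tescape'
// only the forward slash itself can be escaped
assert /the way \/ out/ == 'the way / out'
```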
=== Dollar slashy string
Dollar slashy strings are multiline GStrings delimited with an opening `$/` and a closing `/$`.
The escaping character is the dollar sign, and it can escape another dollar, or a forward slash.
Escaping for the dollar and forward slash characters is only needed where conflicts arise with
the special use of those characters. The characters `$foo` would normally indicate a GString
placeholder, so those four characters can be entered into a dollar slashy string by escaping the dollar, i.e. `$$foo`.
Similarly, you will need to escape a dollar slashy closing delimiter if you want it to appear in your string.
Here are a few examples:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=dollar_slashy_1,indent=0]
----
It was created to overcome some of the limitations of the slashy string escaping rules.
Use it when its escaping rules suit your string contents (typically if it has some slashes you don't want to escape).
=== String summary table
[cols="5*",options="header"]
|====
|String name
|String syntax
|Interpolated
|Multiline
|Escape character
|Single-quoted
|`'...'`
|icon:check-empty[]
|icon:check-empty[]
|`\`
|Triple-single-quoted
|`'''...'''`
|icon:check-empty[]
|icon:check[]
|`\`
|Double-quoted
|`"..."`
|icon:check[]
|icon:check-empty[]
|`\`
|Triple-double-quoted
|`"""..."""`
|icon:check[]
|icon:check[]
|`\`
|Slashy
|`/.../`
|icon:check[]
|icon:check[]
|`\`
|Dollar slashy
|`$/.../$`
|icon:check[]
|icon:check[]
|`$`
|====
=== Characters
Unlike Java, Groovy doesn't have an explicit character literal.
However, you can be explicit about making a Groovy string an actual character, by three different means:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=char,indent=0]
----
<1> by being explicit when declaring a variable holding the character by specifying the `char` type
<2> by using type coercion with the `as` operator
<3> by using a cast to char operation
NOTE: The first option [conum,data-value=1]_1_ is interesting when the character is held in a variable,
while the other two ([conum,data-value=2]_2_ and [conum,data-value=3]_3_) are more interesting when a char value must be passed as argument of a method call.
include::_working-with-numbers.adoc[leveloffset=+1]
== Booleans
Boolean is a special data type that is used to represent truth values: `true` and `false`.
Use this data type for simple flags that track true/false <<{core-operators}#_conditional_operators,conditions>>.
Boolean values can be stored in variables, assigned into fields, just like any other data type:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=variable_store_boolean_value,indent=0]
----
`true` and `false` are the only two primitive boolean values.
But more complex boolean expressions can be represented using <<{core-operators}#_logical_operators,logical operators>>.
In addition, Groovy has <<{core-semantics}#the-groovy-truth,special rules>> (often referred to as _Groovy Truth_)
for coercing non-boolean objects to a boolean value.
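For example, under the Groovy Truth rules:

```groovy
assert 'abc'      // a non-empty string coerces to true
assert !''        // an empty string coerces to false
assert 42 && ![]  // non-zero numbers are true, empty lists are false
assert !0         // zero coerces to false
```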
== Lists
Groovy uses a comma-separated list of values, surrounded by square brackets, to denote lists.
Groovy lists are plain JDK `java.util.List`, as Groovy doesn't define its own collection classes.
The concrete list implementation used when defining list literals is `java.util.ArrayList` by default,
unless you decide to specify otherwise, as we shall see later on.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=list_1,indent=0]
----
<1> We define a list of numbers delimited by commas and surrounded by square brackets, and we assign that list to a variable
<2> The list is an instance of Java's `java.util.List` interface
<3> The size of the list can be queried with the `size()` method, and shows our list contains 3 elements
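Illustratively, the included snippet boils down to the following sketch (variable names are illustrative):

[source,groovy]
----
def numbers = [1, 2, 3]

assert numbers instanceof List
assert numbers.size() == 3
----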
In the above example, we used a homogeneous list, but you can also create lists containing values of heterogeneous types:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=list_2,indent=0]
----
<1> Our list here contains a number, a string and a boolean value
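A heterogeneous list sketch (illustrative):

[source,groovy]
----
def heterogeneous = [1, "a", true]
assert heterogeneous.size() == 3
----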
We mentioned that by default, list literals are actually instances of `java.util.ArrayList`,
but it is possible to use a different backing type for our lists,
thanks to using type coercion with the `as` operator, or with explicit type declaration for your variables:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=coercion_of_list,indent=0]
----
<1> We use coercion with the `as` operator to explicitly request a `java.util.LinkedList` implementation
<2> We can say that the variable holding the list literal is of type `java.util.LinkedList`
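A sketch of such coercion and type declaration (illustrative, not the exact included source):

[source,groovy]
----
def linkedList = [2, 3, 4] as LinkedList          // coercion with the as operator
assert linkedList instanceof java.util.LinkedList

LinkedList otherLinked = [3, 4, 5]                // explicit type declaration
assert otherLinked instanceof java.util.LinkedList
----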
You can access elements of the list with the `[]` subscript operator (both for reading and setting values)
with positive indices or negative indices to access elements from the end of the list, as well as with ranges,
and use the `<<` leftShift operator to append elements to a list:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=subscript_and_leftshift,indent=0]
----
<1> Access the first element of the list (zero-based counting)
<2> Access the last element of the list with a negative index: -1 is the first element from the end of the list
<3> Use an assignment to set a new value for the third element of the list
<4> Use the `<<` leftShift operator to append an element at the end of the list
<5> Access two elements at once, returning a new list containing those two elements
<6> Use a range to access a range of values from the list, from a start to an end element position
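A sketch covering those operations (illustrative):

[source,groovy]
----
def letters = ['a', 'b', 'c', 'd']

assert letters[0] == 'a'                 // first element, zero-based
assert letters[-1] == 'd'                // negative index counts from the end
letters[2] = 'C'                         // set the third element
letters << 'e'                           // append with the leftShift operator
assert letters[1, 3] == ['b', 'd']       // access two elements at once
assert letters[2..4] == ['C', 'd', 'e']  // range access
----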
As lists can be heterogeneous in nature, lists can also contain other lists to create multi-dimensional lists:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=multi_dim_list,indent=0]
----
<1> Define a list of list of numbers
<2> Access the second element of the top-most list, and the first element of the inner list
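For example (illustrative):

[source,groovy]
----
def multi = [[0, 1], [2, 3]]
assert multi[1][0] == 2   // second top-level list, first inner element
----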
== Arrays
Groovy reuses the list notation for arrays, but to make such literals arrays,
you need to explicitly define the type of the array through coercion or type declaration.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=array_1,indent=0]
----
<1> Define an array of strings using explicit variable type declaration
<2> Assert that we created an array of strings
<3> Create an array of ints with the `as` operator
<4> Assert that we created an array of primitive ints
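A sketch of both forms (illustrative):

[source,groovy]
----
String[] arrStr = ['Ananas', 'Banana', 'Kiwi']   // explicit variable type declaration
assert arrStr instanceof String[]

def numArr = [1, 2, 3] as int[]                  // coercion with the as operator
assert numArr instanceof int[]
assert numArr.size() == 3
----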
You can also create multi-dimensional arrays:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=array_2,indent=0]
----
<1> You can define the bounds of a new array
<2> Or declare an array without specifying its bounds
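For example (illustrative):

[source,groovy]
----
def matrix = new Integer[3][3]   // bounds defined up front
assert matrix.size() == 3

Integer[][] matrix2              // bounds left unspecified
matrix2 = [[1, 2], [3, 4]]
assert matrix2 instanceof Integer[][]
----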
Access to elements of an array follows the same notation as for lists:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=array_3,indent=0]
----
<1> Retrieve the first element of the array
<2> Set the value of the third element of the array to a new value
=== Java-style array initialization
Groovy has always supported literal list/array definitions using square brackets
and has avoided Java-style curly braces so as not to conflict with closure definitions.
In the case where the curly braces come immediately after an array type declaration however,
there is no ambiguity with closure definitions,
so Groovy 3 and above support that variant of the Java array initialization expression.
Examples:
[source,groovy]
--------------------------------------
def primes = new int[] {2, 3, 5, 7, 11}
assert primes.size() == 5 && primes.sum() == 28
assert primes.class.name == '[I'
def pets = new String[] {'cat', 'dog'}
assert pets.size() == 2 && pets.sum() == 'catdog'
assert pets.class.name == '[Ljava.lang.String;'
// traditional Groovy alternative still supported
String[] groovyBooks = [ 'Groovy in Action', 'Making Java Groovy' ]
assert groovyBooks.every{ it.contains('Groovy') }
--------------------------------------
== Maps
Groovy features maps, sometimes called dictionaries or associative arrays in other languages.
Maps associate keys with values. Keys and values are separated by colons, key/value pairs are separated by commas,
and the whole set of keys and values is surrounded by square brackets.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=map_def_access,indent=0]
----
<1> We define a map of string color names, associated with their hexadecimal-coded html colors
<2> We use the subscript notation to check the content associated with the `red` key
<3> We can also use the property notation to assert the color green's hexadecimal representation
<4> Similarly, we can use the subscript notation to add a new key/value pair
<5> Or the property notation, to add the `yellow` color
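A sketch of those operations (illustrative, not the exact included source):

[source,groovy]
----
def colors = [red: '#FF0000', green: '#00FF00', blue: '#0000FF']

assert colors['red'] == '#FF0000'   // subscript notation
assert colors.green == '#00FF00'    // property notation

colors['pink'] = '#FF00FF'          // add a pair with the subscript notation
colors.yellow = '#FFFF00'           // add a pair with the property notation

assert colors.pink == '#FF00FF'
assert colors['yellow'] == '#FFFF00'
----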
[NOTE]
When using names for the keys, we actually define string keys in the map.
[NOTE]
Groovy creates maps that are actually instances of `java.util.LinkedHashMap`.
If you try to access a key which is not present in the map:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=unknown_key,indent=0]
----
You will retrieve a `null` result.
In the examples above, we used string keys, but you can also use values of other types as keys:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=number_key,indent=0]
----
Here, we used numbers as keys, as numbers can unambiguously be recognized as numbers,
so Groovy will not create a string key like in our previous examples.
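For example (illustrative):

[source,groovy]
----
def numbers = [1: 'one', 2: 'two']
assert numbers[1] == 'one'   // the key is the number 1, not the string "1"
----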
But consider the case you want to pass a variable in lieu of the key, to have the value of that variable become the key:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=variable_key_1,indent=0]
----
<1> The `key` associated with the `'Guillaume'` name will actually be the `"key"` string, not the value associated with the `key` variable
<2> The map doesn't contain the `'name'` key
<3> Instead, the map contains a `'key'` key
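A sketch of this pitfall (illustrative):

[source,groovy]
----
def key = 'name'
def person = [key: 'Guillaume']      // the literal String "key" is used, not the variable

assert !person.containsKey('name')
assert person.containsKey('key')
----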
[NOTE]
You can also use quoted strings as keys: +["name": "Guillaume"]+.
This is mandatory if your key string isn't a valid identifier,
for example if you wanted to create a string key containing a dash like in: +["street-name": "Main street"]+.
When you need to pass variable values as keys in your map definitions, you must surround the variable or expression with parentheses:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=variable_key_2,indent=0]
----
<1> This time, we surround the `key` variable with parentheses, to instruct the parser we are passing a variable rather than defining a string key
<2> The map does contain the `name` key
<3> But the map doesn't contain the `key` key as before
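A sketch of the parenthesized form (illustrative):

[source,groovy]
----
def key = 'name'
def person = [(key): 'Guillaume']    // parentheses force evaluation of the variable

assert person.containsKey('name')
assert !person.containsKey('key')
----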
// Source: docs/src/main/asciidoc/spring-cloud-circuitbreaker-resilience4j.adoc (spring-cloud-circuitbreaker, Apache-2.0)
=== Configuring Resilience4J Circuit Breakers
==== Starters
There are two starters for the Resilience4J implementations, one for reactive applications and one for non-reactive applications.
* `org.springframework.cloud:spring-cloud-starter-circuitbreaker-resilience4j` - non-reactive applications
* `org.springframework.cloud:spring-cloud-starter-circuitbreaker-reactor-resilience4j` - reactive applications
==== Auto-Configuration
You can disable the Resilience4J auto-configuration by setting
`spring.cloud.circuitbreaker.resilience4j.enabled` to `false`.
==== Default Configuration
To provide a default configuration for all of your circuit breakers, create a `Customizer` bean that is passed a
`Resilience4JCircuitBreakerFactory` or `ReactiveResilience4JCircuitBreakerFactory`.
The `configureDefault` method can be used to provide a default configuration.
====
[source,java]
----
@Bean
public Customizer<Resilience4JCircuitBreakerFactory> defaultCustomizer() {
return factory -> factory.configureDefault(id -> new Resilience4JConfigBuilder(id)
.timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(4)).build())
.circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
.build());
}
----
====
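Once a factory customizer like the one above is in place, application code obtains circuit breakers from the injected factory. The following is an illustrative sketch only — the service, method names, and fallback value are hypothetical and not part of this documentation:

[source,java]
----
@Service
public class AlbumService {

    private final CircuitBreakerFactory circuitBreakerFactory;

    public AlbumService(CircuitBreakerFactory circuitBreakerFactory) {
        this.circuitBreakerFactory = circuitBreakerFactory;
    }

    public String readingList() {
        // "slow" falls back to the default configuration unless configured separately
        return circuitBreakerFactory.create("slow")
                .run(() -> remoteCall(), throwable -> "fallback value");
    }

    private String remoteCall() {
        // call a downstream service here
        return "real value";
    }
}
----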
===== Reactive Example
====
[source,java]
----
@Bean
public Customizer<ReactiveResilience4JCircuitBreakerFactory> defaultCustomizer() {
return factory -> factory.configureDefault(id -> new Resilience4JConfigBuilder(id)
.circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
.timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(4)).build()).build());
}
----
====
==== Specific Circuit Breaker Configuration
Similarly to providing a default configuration, you can create a `Customizer` bean that is passed a
`Resilience4JCircuitBreakerFactory` or `ReactiveResilience4JCircuitBreakerFactory`.
====
[source,java]
----
@Bean
public Customizer<Resilience4JCircuitBreakerFactory> slowCustomizer() {
return factory -> factory.configure(builder -> builder.circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
.timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(2)).build()), "slow");
}
----
====
In addition to configuring the circuit breaker that is created you can also customize the circuit breaker after it has been created but before it is returned to the caller.
To do this you can use the `addCircuitBreakerCustomizer`
method.
This can be useful for adding event handlers to Resilience4J circuit breakers.
====
[source,java]
----
@Bean
public Customizer<Resilience4JCircuitBreakerFactory> slowCustomizer() {
return factory -> factory.addCircuitBreakerCustomizer(circuitBreaker -> circuitBreaker.getEventPublisher()
.onError(normalFluxErrorConsumer).onSuccess(normalFluxSuccessConsumer), "normalflux");
}
----
====
===== Reactive Example
====
[source,java]
----
@Bean
public Customizer<ReactiveResilience4JCircuitBreakerFactory> slowCustomizer() {
return factory -> {
factory.configure(builder -> builder
.timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(2)).build())
.circuitBreakerConfig(CircuitBreakerConfig.ofDefaults()), "slow", "slowflux");
factory.addCircuitBreakerCustomizer(circuitBreaker -> circuitBreaker.getEventPublisher()
.onError(normalFluxErrorConsumer).onSuccess(normalFluxSuccessConsumer), "normalflux");
};
}
----
====
==== Circuit Breaker Properties Configuration
You can configure `CircuitBreaker` and `TimeLimiter` instances in your application's configuration properties file.
Property configuration has higher priority than Java `Customizer` configuration.
====
[source]
----
resilience4j.circuitbreaker:
instances:
backendA:
registerHealthIndicator: true
slidingWindowSize: 100
backendB:
registerHealthIndicator: true
slidingWindowSize: 10
permittedNumberOfCallsInHalfOpenState: 3
slidingWindowType: TIME_BASED
recordFailurePredicate: io.github.robwin.exception.RecordFailurePredicate
resilience4j.timelimiter:
instances:
backendA:
timeoutDuration: 2s
cancelRunningFuture: true
backendB:
timeoutDuration: 1s
cancelRunningFuture: false
----
====
For more information on Resilience4j property configuration, see https://resilience4j.readme.io/docs/getting-started-3#configuration[Resilience4J Spring Boot 2 Configuration].
==== Bulkhead pattern support
If `resilience4j-bulkhead` is on the classpath, Spring Cloud CircuitBreaker will wrap all methods with a Resilience4j Bulkhead.
You can disable the Resilience4j Bulkhead by setting `spring.cloud.circuitbreaker.bulkhead.resilience4j.enabled` to `false`.
Spring Cloud CircuitBreaker Resilience4j provides two implementations of the bulkhead pattern:
* a `SemaphoreBulkhead` which uses Semaphores
* a `FixedThreadPoolBulkhead` which uses a bounded queue and a fixed thread pool.
By default, Spring Cloud CircuitBreaker Resilience4j uses `FixedThreadPoolBulkhead`. For more information on implementation
of Bulkhead patterns see the https://resilience4j.readme.io/docs/bulkhead[Resilience4j Bulkhead].
The `Customizer<Resilience4jBulkheadProvider>` can be used to provide a default `Bulkhead` and `ThreadPoolBulkhead` configuration.
====
[source,java]
----
@Bean
public Customizer<Resilience4jBulkheadProvider> defaultBulkheadCustomizer() {
return provider -> provider.configureDefault(id -> new Resilience4jBulkheadConfigurationBuilder()
.bulkheadConfig(BulkheadConfig.custom().maxConcurrentCalls(4).build())
.threadPoolBulkheadConfig(ThreadPoolBulkheadConfig.custom().coreThreadPoolSize(1).maxThreadPoolSize(1).build())
.build()
);
}
----
====
==== Specific Bulkhead Configuration
Similarly to providing a default `Bulkhead` or `ThreadPoolBulkhead` configuration, you can create a `Customizer` bean that
is passed a `Resilience4jBulkheadProvider`.
====
[source,java]
----
@Bean
public Customizer<Resilience4jBulkheadProvider> slowBulkheadProviderCustomizer() {
return provider -> provider.configure(builder -> builder
.bulkheadConfig(BulkheadConfig.custom().maxConcurrentCalls(1).build())
.threadPoolBulkheadConfig(ThreadPoolBulkheadConfig.ofDefaults()), "slowBulkhead");
}
----
====
In addition to configuring the Bulkhead that is created you can also customize the bulkhead and thread pool bulkhead after they
have been created but before they are returned to the caller. To do this you can use the `addBulkheadCustomizer` and `addThreadPoolBulkheadCustomizer`
methods.
===== Bulkhead Example
====
[source,java]
----
@Bean
public Customizer<Resilience4jBulkheadProvider> customizer() {
return provider -> provider.addBulkheadCustomizer(bulkhead -> bulkhead.getEventPublisher()
.onCallRejected(slowRejectedConsumer)
.onCallFinished(slowFinishedConsumer), "slowBulkhead");
}
----
====
===== Thread Pool Bulkhead Example
====
[source,java]
----
@Bean
public Customizer<Resilience4jBulkheadProvider> slowThreadPoolBulkheadCustomizer() {
return provider -> provider.addThreadPoolBulkheadCustomizer(threadPoolBulkhead -> threadPoolBulkhead.getEventPublisher()
.onCallRejected(slowThreadPoolRejectedConsumer)
.onCallFinished(slowThreadPoolFinishedConsumer), "slowThreadPoolBulkhead");
}
----
====
==== Bulkhead Properties Configuration
You can configure ThreadPoolBulkhead and SemaphoreBulkhead instances in your application's configuration properties file.
Property configuration has higher priority than Java `Customizer` configuration.
====
[source]
----
resilience4j.thread-pool-bulkhead:
instances:
backendA:
maxThreadPoolSize: 1
coreThreadPoolSize: 1
resilience4j.bulkhead:
instances:
backendB:
maxConcurrentCalls: 10
----
====
For more information on the Resilience4j property configuration, see https://resilience4j.readme.io/docs/getting-started-3#configuration[Resilience4J Spring Boot 2 Configuration].
==== Collecting Metrics
Spring Cloud Circuit Breaker Resilience4j includes auto-configuration to set up metrics collection as long as the right
dependencies are on the classpath. To enable metric collection you must include `org.springframework.boot:spring-boot-starter-actuator`, and `io.github.resilience4j:resilience4j-micrometer`. For more information on the metrics that
get produced when these dependencies are present, see the https://resilience4j.readme.io/docs/micrometer[Resilience4j documentation].
NOTE: You don't have to include `micrometer-core` directly as it is brought in by `spring-boot-starter-actuator`
// Source: docs/de-de/modules/changelog/pages/2020-10-28.adoc (plenty-manual-docs, MIT), translated from German
= Changelog 28 October 2020
:lang: en
:author: kevin-stederoth
:sectnums!:
:position: 10790
:id:
:startWeekDate: 22 October 2020
:endWeekDate: 28 October 2020
Find out what happened at plentymarkets in the week from {startWeekDate} to {endWeekDate}. Below you will find all changelog entries of the past weeks for stable and early systems.
If you want to learn more about the individual versions or switch to a different version, see the manual page <<business-entscheidungen/systemadministration/versionszyklus#, Version cycle>>. To receive the information collected on this page in real time, subscribe to the link:https://forum.plentymarkets.com/c/changelog[Changelog category in our forum^]{nbsp}icon:external-link[].
Choose which changelog you want to see.
[.tabs]
====
stable::
+
--
[discrete]
== New
The following new features were released on *stable* in the last 7 days.
[discrete]
=== Catalogues
* As of now, the names of deleted catalogues are also displayed.
* As of now, Otto catalogues can also be downloaded in XML format.
* As of now, Otto catalogues can also be downloaded in CSV format.
[discrete]
=== Import
* As of now, you can enter a value in the *Source* field directly or, as before, select it from the drop-down list.
[discrete]
=== MyView
* You can now assign and manage user rights for a MyView. This means that admins can assign different views to their employees via roles. Only admins have access to the edit mode of the MyView and are thus authorised to create or delete views, as well as to create new roles and assign rights. Initially, the default view is always preselected. If another view exists, the default view can be deactivated for a role and rights for a different view can be assigned. Note that at least one view must always be selected. You can find the rights management in the menu under *Setup » Settings » User » Rights » Roles » Select role » Tab: Views*. All role-defined views are displayed there.
* Since MyView has offered the option to undo actions for a while now, we are also providing the matching counterpart! Clicking *Restore* now restores changes that were previously undone. However, this is only possible as long as the changes have not been saved.
'''
[discrete]
== Changed
The following changes were released on *stable* in the last 7 days.
'''
[discrete]
== Fixed
The following issues were fixed on *stable* in the last 7 days.
[discrete]
=== Orders
* When the new order creation was used for the first time, the default table headers were not displayed. This has now been fixed.
* When the shipping costs were changed in the last step of the new order creation, the totals were not recalculated. This has now been fixed.
* When the shipping costs were manually set to `0` in the last step of the new order creation, the order could not be created. This has been fixed; such orders can now be created.
* If a coupon was redeemed during order creation, the order totals were not always calculated correctly. This error only occurred in beta.
[discrete]
=== eBay
* Images that were only activated for a specific eBay platform could previously not be displayed in the listing layout. This behaviour has been fixed.
[discrete]
=== Import
* When importing orders in which no name was set for the order item (`item name`), an error could occur if no name was stored on the item itself either. This has been fixed.
[discrete]
=== Catalogues
* If you added a special field, it was not possible to delete it. We have fixed this error.
--
early::
+
--
[discrete]
== New
The following new features were released on *early* in the last 7 days.
[discrete]
=== Orders
* The option to add a new delivery address was added to the new order creation (beta). The new delivery address can be added via the option *Add new delivery address* in the *Delivery address* drop-down list.
[discrete]
=== Online store
* It is now possible to embed the online store via an iFrame. To do so, enter the domain of the website that is to embed the store under *Setup » Client » _Client name_ » SEO » iFrame policies*. Domains that have not been entered cannot embed the online store via an iFrame.
'''
[discrete]
== Fixed
The following issues were fixed on *early* in the last 7 days.
[discrete]
=== Orders
* When saving a reorder, the error `422 Unprocessable Entity` could occur in connection with the owner (Händlerzeichen).
[discrete]
=== OTTO Market
* Due to an error, authentication requests to the OTTO Market API were made too often. This behaviour has been fixed.
--
Plugin updates::
+
--
The following plugins were published in a new version on plentyMarketplace in the last 7 days:
.Plugin updates
[cols="2, 1, 2"]
|===
|Plugin name
|Version
|To-do
|link:https://marketplace.plentymarkets.com/vatidcheck_6023[VAT ID Check^]
|2.2.2
|-
|===
If you would like to have a look at further new or updated plugins, you can find an link:https://marketplace.plentymarkets.com/plugins?sorting=variation.createdAt_desc&page=1&items=50[overview directly on plentyMarketplace^]{nbsp}icon:external-link[].
--
====
// Source: docs/partner_editable/licenses.adoc (quickstart-databricks-unified-data-analytics-platform, Apache-2.0)
For Databricks cost estimates, see the Databricks https://databricks.com/product/aws-pricing[pricing^] page for product tiers and features.
To launch the Quick Start, you need the following:
* An AWS account.
* A Databricks account ID. Please https://databricks.com/company/contact[contact] your Databricks representative to sign up, if necessary.
* A Databricks user name and password.
Determine if your workspace will enable the following features, which require that your account be on the https://docs.databricks.com/getting-started/overview.html#e2-architecture-1[E2 version of the platform]. If you have questions about availability, contact your Databricks representative:
* https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html[Customer-managed VPC^]. Provide your own Amazon VPC.
* https://docs.databricks.com/security/secure-cluster-connectivity.html[Secure-cluster connectivity^]. Network architecture with no VPC open ports and no Databricks runtime worker public IP addresses. In some APIs, this is referred to as No Public IP or NPIP.
* https://docs.databricks.com/security/keys/customer-managed-keys-notebook-aws.html[Customer-managed keys for notebooks^] (private preview). Provide AWS Key Management Service (AWS KMS) keys to encrypt notebooks in the Databricks managed control plane.
// Source: docs/guide/features/context-deployments.adoc (wildfly-camel, Apache-2.0)
[discrete]
### Camel Context Deployments
Camel contexts can be deployed to WildFly with a **-camel-context.xml** suffix.
1. As a standalone XML file
2. As part of another supported deployment
A deployment may contain multiple **-camel-context.xml** files.
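A minimal standalone descriptor might look like the following sketch (the context id and route are illustrative, not values from this guide):

[source,xml]
----
<camelContext id="system-context-1" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:start"/>
    <transform>
      <simple>Hello ${body}</simple>
    </transform>
  </route>
</camelContext>
----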
A deployed Camel context is CDI injectable like this:
[source,java,options="nowrap"]
@Resource(name = "java:jboss/camel/context/mycontext")
CamelContext camelContext;
// Source: docs/modules/ROOT/pages/reference/extensions/azure-storage-blob.adoc (camel-quarkus, Apache-2.0)
// Do not edit directly!
// This file was generated by camel-quarkus-maven-plugin:update-extension-doc-page
= Azure Storage Blob Service
:cq-artifact-id: camel-quarkus-azure-storage-blob
:cq-native-supported: false
:cq-status: Preview
:cq-description: Store and retrieve blobs from Azure Storage Blob Service using SDK v12.
:cq-deprecated: false
:cq-jvm-since: 1.1.0
:cq-native-since: n/a
[.badges]
[.badge-key]##JVM since##[.badge-supported]##1.1.0## [.badge-key]##Native##[.badge-unsupported]##unsupported##
Store and retrieve blobs from Azure Storage Blob Service using SDK v12.
== What's inside
* https://camel.apache.org/components/latest/azure-storage-blob-component.html[Azure Storage Blob Service component], URI syntax: `azure-storage-blob:containerName`
Please refer to the above link for usage and configuration details.
== Maven coordinates
[source,xml]
----
<dependency>
<groupId>org.apache.camel.quarkus</groupId>
<artifactId>camel-quarkus-azure-storage-blob</artifactId>
</dependency>
----
Check the xref:user-guide/index.adoc[User guide] for more information about writing Camel Quarkus applications.
// Source: x-pack/docs/en/security/authentication/configuring-kerberos-realm.asciidoc (elasticsearch, Apache-2.0)
[role="xpack"]
[[configuring-kerberos-realm]]
=== Configuring a Kerberos realm
Kerberos is used to protect services and uses a ticket-based authentication
protocol to authenticate users.
You can configure {es} to use the Kerberos V5 authentication protocol, which is
an industry standard protocol, to authenticate users.
In this scenario, clients must present Kerberos tickets for authentication.
In Kerberos, users authenticate with an authentication service and later
with a ticket granting service to generate a TGT (ticket-granting ticket).
This ticket is then presented to the service for authentication.
Refer to your Kerberos installation documentation for more information about
obtaining a TGT. {es} clients must first obtain a TGT and then initiate the process of
authenticating with {es}.
For a summary of Kerberos terminology, see {stack-ov}/kerberos-realm.html[Kerberos authentication].
==== Before you begin
. Deploy Kerberos.
+
--
You must have the Kerberos infrastructure set up in your environment.
NOTE: Kerberos requires a lot of external services to function properly, such as
time synchronization between all machines and working forward and reverse DNS
mappings in your domain. Refer to your Kerberos documentation for more details.
These instructions do not cover setting up and configuring your Kerberos
deployment. Where examples are provided, they pertain to an MIT Kerberos V5
deployment. For more information, see
http://web.mit.edu/kerberos/www/index.html[MIT Kerberos documentation].
--
. Configure Java GSS.
+
--
{es} uses Java GSS framework support for Kerberos authentication.
To support Kerberos authentication, {es} needs the following files:
* `krb5.conf`, a Kerberos configuration file
* A `keytab` file that contains credentials for the {es} service principal
The configuration requirements depend on your Kerberos setup. Refer to your
Kerberos documentation to configure the `krb5.conf` file.
For more information on Java GSS, see
https://docs.oracle.com/javase/10/security/kerberos-requirements1.htm[Java GSS Kerberos requirements]
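A minimal `krb5.conf` might look like the following sketch (the realm and KDC host names are placeholders for illustration):

[source,ini]
----
[libdefaults]
  default_realm = ES.DOMAIN.LOCAL

[realms]
  ES.DOMAIN.LOCAL = {
    kdc = kdc.domain.local
  }

[domain_realm]
  .domain.local = ES.DOMAIN.LOCAL
----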
--
==== Create a Kerberos realm
To configure a Kerberos realm in {es}:
. Configure the JVM to find the Kerberos configuration file.
+
--
{es} uses Java GSS and the JAAS Krb5LoginModule to support Kerberos authentication
using the Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO).
The Kerberos configuration file (`krb5.conf`) provides information such as the
default realm, the Key Distribution Center (KDC), and other configuration details
required for Kerberos authentication. When the JVM needs some configuration
properties, it tries to find those values by locating and loading this file. The
JVM system property to configure the file path is `java.security.krb5.conf`. To
configure JVM system properties see {ref}/jvm-options.html[configuring jvm options].
If this system property is not specified, Java tries to locate the file based on
the conventions.
TIP: It is recommended that this system property be configured for {es}.
The method for setting this property depends on your Kerberos infrastructure.
Refer to your Kerberos documentation for more details.
For more information, see http://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html[krb5.conf]
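For example, the system property can be passed as a JVM option (the file path is a placeholder):

[source,sh]
----
-Djava.security.krb5.conf=/etc/krb5.conf
----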
--
. Create a keytab for the {es} node.
+
--
A keytab is a file that stores pairs of principals and encryption keys. {es}
uses the keys from the keytab to decrypt the tickets presented by the user. You
must create a keytab for {es} by using the tools provided by your Kerberos
implementation. For example, some tools that create keytabs are `ktpass.exe` on
Windows and `kadmin` for MIT Kerberos.
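With MIT Kerberos, for example, a keytab for the HTTP service principal can be created along these lines (the principal and file names are illustrative):

[source,sh]
----
# inside a kadmin session
addprinc -randkey HTTP/es.domain.local@ES.DOMAIN.LOCAL
ktadd -k es.keytab HTTP/es.domain.local@ES.DOMAIN.LOCAL
----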
--
. Put the keytab file in the {es} configuration directory.
+
--
Make sure that this keytab file has read permissions. This file contains
credentials, therefore you must take appropriate measures to protect it.
IMPORTANT: {es} uses Kerberos on the HTTP network layer, therefore there must be
a keytab file for the HTTP service principal on every {es} node. The service
principal name must have the format `HTTP/es.domain.local@ES.DOMAIN.LOCAL`.
The keytab files are unique for each node since they include the hostname.
An {es} node can act as any principal a client requests as long as that
principal and its credentials are found in the configured keytab.
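For example, on Linux the keytab could be restricted to its owner and the permissions verified (the file name is an example):

[source, shell]
------------------------------------------------------------
chmod 400 es.keytab          # readable only by the user running {es}
stat -c '%a' es.keytab       # prints 400
------------------------------------------------------------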
--
. Create a Kerberos realm.
+
--
To enable Kerberos authentication in {es}, you must add a Kerberos realm in the
realm chain.
NOTE: You can configure only one Kerberos realm on {es} nodes.
To configure a Kerberos realm, there are a few mandatory realm settings and
other optional settings that you need to configure in the `elasticsearch.yml`
configuration file. Add a realm configuration under the
`xpack.security.authc.realms.kerberos` namespace.
The most common configuration for a Kerberos realm is as follows:
[source, yaml]
------------------------------------------------------------
xpack.security.authc.realms.kerberos.kerb1:
  order: 3
  keytab.path: es.keytab
  remove_realm_name: false
------------------------------------------------------------
The `username` is extracted from the ticket presented by the user and usually has
the format `username@REALM`. This `username` is used for mapping roles to the
user. If the realm setting `remove_realm_name` is set to `true`, the realm part
(`@REALM`) is removed, and the resulting `username` is used for role mapping.
For detailed information of available realm settings,
see {ref}/security-settings.html#ref-kerberos-settings[Kerberos realm settings].
--
. Restart {es}.
. Map Kerberos users to roles.
+
--
The `kerberos` realm enables you to map Kerberos users to roles. You can
configure these role mappings by using the
{ref}/security-api-role-mapping.html[role-mapping API]. You identify
users by their `username` field.
The following example uses the role mapping API to map `user@REALM` to the
`monitoring_user` role:
[source,js]
--------------------------------------------------
POST _xpack/security/role_mapping/kerbrolemapping
{
  "roles" : [ "monitoring_user" ],
  "enabled": true,
  "rules" : {
    "field" : { "username" : "user@REALM" }
  }
}
--------------------------------------------------
// CONSOLE
To support Kerberos cross-realm authentication, you may need to map roles based
on the Kerberos realm name. For such scenarios, the following additional user
metadata is available for role mapping:

- `kerberos_realm` is set to the Kerberos realm name.
- `kerberos_user_principal_name` is set to the user principal name from the Kerberos ticket.
For more information, see {stack-ov}/mapping-roles.html[Mapping users and groups to roles].
NOTE: The Kerberos realm supports
{stack-ov}/realm-chains.html#authorization_realms[authorization realms] as an
alternative to role mapping.
--
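Once the realm and role mappings are in place, authentication can be smoke-tested from a host holding a valid Kerberos ticket; the principal and host name below are examples:

[source, shell]
------------------------------------------------------------
kinit user@REALM
curl --negotiate -u : "http://es.domain.local:9200/_xpack/security/_authenticate?pretty"
------------------------------------------------------------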
== HN Jobs
Provides an importer, processor, server and client to allow full-text
searches of Hacker News job postings from the monthly "Who is hiring"
thread.
=== Development
HNJobs can be developed and fully tested on Minikube before merging
pull requests. Skaffold is required as a build/deploy dependency. Use
the following set of commands to build and deploy the HNJobs project
locally:
[source, bash]
----
./minikube-start.sh
skaffold dev
----
The local deployment can be verified using the following command:
[source, bash]
----
grpcurl --insecure -d '{"id":1234}' grpc.example.com:443 hnjobs.Import/AddWhoIsHiring
----
| 24.423077 | 85 | 0.75748 |
a6f13f7ab130ccdcb1941d466fde66746322e9ab | 35,091 | adoc | AsciiDoc | modules/spark-on-yarn/pages/spark-yarn-applicationmaster.adoc | xukaixuan33/apache-spark-internals | 4ca1a2b69ebbb948cba45c846bc682c39c669d72 | [
"Apache-2.0"
] | 931 | 2015-09-24T03:37:06.000Z | 2019-12-26T16:02:53.000Z | modules/spark-on-yarn/pages/spark-yarn-applicationmaster.adoc | xukaixuan33/apache-spark-internals | 4ca1a2b69ebbb948cba45c846bc682c39c669d72 | [
"Apache-2.0"
] | 14 | 2015-09-23T22:34:15.000Z | 2019-09-09T18:07:54.000Z | modules/spark-on-yarn/pages/spark-yarn-applicationmaster.adoc | xukaixuan33/apache-spark-internals | 4ca1a2b69ebbb948cba45c846bc682c39c669d72 | [
"Apache-2.0"
] | 374 | 2015-09-23T22:14:27.000Z | 2019-12-03T10:26:07.000Z | == [[ApplicationMaster]] ApplicationMaster (aka ExecutorLauncher)
`ApplicationMaster` is the link:spark-yarn-introduction.md#ApplicationMaster[YARN ApplicationMaster] for a Spark application submitted to a YARN cluster (which is commonly called link:README.md[Spark on YARN]).
.ApplicationMaster's Dependencies
image::spark-yarn-ApplicationMaster.png[align="center"]
`ApplicationMaster` is a <<main, standalone application>> that link:spark-yarn-introduction.md#NodeManager[YARN NodeManager] runs in a YARN container to manage a Spark application running in a YARN cluster.
[NOTE]
====
From the official documentation of http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html[Apache Hadoop YARN] (with some minor changes of mine):
> The per-application ApplicationMaster is actually a framework-specific library and is tasked with negotiating cluster resources from the YARN ResourceManager and working with the YARN NodeManager(s) to execute and monitor the tasks.
====
`ApplicationMaster` (and `ExecutorLauncher`) is launched as a result of link:spark-yarn-client.md#createContainerLaunchContext[`Client` creating a `ContainerLaunchContext`] to launch a Spark application on YARN.
.Launching ApplicationMaster
image::spark-yarn-ApplicationMaster-main.png[align="center"]
NOTE: https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.html[ContainerLaunchContext] represents all of the information needed by the YARN NodeManager to launch a container.
[NOTE]
====
<<ExecutorLauncher, ExecutorLauncher>> is a custom `ApplicationMaster` for link:../spark-deploy-mode.md#client[client deploy mode] only for the purpose of easily distinguishing client and cluster deploy modes when using `ps` or `jps`.
[options="wrap"]
----
$ jps -lm
71253 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 192.168.99.1:50188 --properties-file /tmp/hadoop-jacek/nm-local-dir/usercache/jacek/appcache/.../__spark_conf__/__spark_conf__.properties
----
====
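Which of the two classes gets launched follows directly from the deploy mode given at submission time. A sketch (the application class, jar, and cluster are placeholders):

[source, shell]
----
# cluster deploy mode: the driver runs inside ApplicationMaster in a YARN container
spark-submit --master yarn --deploy-mode cluster --class my.App my-app.jar

# client deploy mode: the driver runs locally and YARN launches ExecutorLauncher
spark-submit --master yarn --deploy-mode client --class my.App my-app.jar
----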
When <<creating-instance, created>> `ApplicationMaster` takes a <<client, YarnRMClient>> (to handle communication with link:spark-yarn-introduction.md#ResourceManager[YARN ResourceManager] for YARN containers for `ApplicationMaster` and executors).
`ApplicationMaster` uses <<allocator, YarnAllocator>> to manage YARN containers with executors.
[[internal-properties]]
.ApplicationMaster's Internal Properties
[cols="1,1,2",options="header",width="100%"]
|===
| Name
| Initial Value
| Description
| [[amEndpoint]] `amEndpoint`
| (uninitialized)
| xref:rpc:RpcEndpointRef.md[RpcEndpointRef] to the *YarnAM* RPC endpoint initialized when `ApplicationMaster` <<runAMEndpoint, runAMEndpoint>>.
CAUTION: FIXME When, in a Spark application's lifecycle, does `runAMEndpoint` really happen?
Used exclusively when `ApplicationMaster` <<addAmIpFilter, registers the web UI security filters>> (in <<isClusterMode, `client` deploy mode>> when the driver runs outside `ApplicationMaster`).
| [[sparkConf]] `sparkConf`
| New link:../SparkConf.md[SparkConf]
| FIXME
| [[finished]] `finished`
| `false`
| Flag to...FIXME
| [[yarnConf]] `yarnConf`
| Hadoop's `YarnConfiguration`
| Flag to...FIXME
Created using link:../spark-SparkHadoopUtil.md#newConfiguration[SparkHadoopUtil.newConfiguration]
| [[exitCode]] `exitCode`
| `0`
| FIXME
| [[userClassThread]] `userClassThread`
| (uninitialized)
| FIXME
| [[sparkContextPromise]] `sparkContextPromise`
| `SparkContext` Scala's link:++http://www.scala-lang.org/api/current/scala/concurrent/Promise$.html++[Promise]
| Used only in `cluster` deploy mode (when the driver and `ApplicationMaster` run together in a YARN container) as a communication bus between `ApplicationMaster` and the separate `Driver` thread that <<startUserApplication, runs a Spark application>>.
Used to inform `ApplicationMaster` when a Spark application's `SparkContext` has been initialized successfully or failed.
Non-``null`` value <<runDriver, allows `ApplicationMaster` to access the driver's `RpcEnv`>> (available as <<rpcEnv, rpcEnv>>).
NOTE: A successful initialization of a Spark application's `SparkContext` is when link:spark-yarn-yarnclusterscheduler.md#postStartHook[YARN-specific `TaskScheduler`, i.e. `YarnClusterScheduler`, gets informed that the Spark application has started]. _What a clever solution!_
| [[rpcEnv]] `rpcEnv`
| (uninitialized)
a| xref:rpc:index.md[RpcEnv] which is:
* `sparkYarnAM` RPC environment from <<runExecutorLauncher-sparkYarnAM, a Spark application submitted to YARN in `client` deploy mode>>.
* `sparkDriver` RPC environment from the <<runDriver-rpcEnv, Spark application submitted to YARN in `cluster` deploy mode>>.
| <<isClusterMode, isClusterMode>>
| `true` (when <<command-line-parameters, `--class` was specified>>)
| Flag...FIXME
| <<maxNumExecutorFailures, maxNumExecutorFailures>>
| FIXME
|
|===
=== [[maxNumExecutorFailures]] `maxNumExecutorFailures` Property
CAUTION: FIXME
Computed using the optional link:spark-yarn-settings.md#spark.yarn.max.executor.failures[spark.yarn.max.executor.failures] if set. Otherwise, it is twice xref:executor:Executor.md#spark.executor.instances[spark.executor.instances] or [spark.dynamicAllocation.maxExecutors](../dynamic-allocation/index.md#spark.dynamicAllocation.maxExecutors) (with dynamic allocation enabled), with a minimum of `3`.
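The default rule can be sketched with plain shell arithmetic (a back-of-the-envelope check of the `max(2 * numExecutors, 3)` formula, not the actual Spark code):

[source, shell]
----
num_executors=2
echo $(( num_executors * 2 > 3 ? num_executors * 2 : 3 ))   # prints 4

num_executors=1
echo $(( num_executors * 2 > 3 ? num_executors * 2 : 3 ))   # prints 3 (the minimum)
----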
=== [[creating-instance]] Creating ApplicationMaster Instance
`ApplicationMaster` takes the following when created:
* [[args]] <<ApplicationMasterArguments, ApplicationMasterArguments>>
* [[client]] link:spark-yarn-yarnrmclient.md[YarnRMClient]
`ApplicationMaster` initializes the <<internal-registries, internal registries and counters>>.
CAUTION: FIXME Review the initialization again
=== [[reporterThread]] `reporterThread` Method
CAUTION: FIXME
=== [[launchReporterThread]] Launching Progress Reporter Thread -- `launchReporterThread` Method
CAUTION: FIXME
=== [[sparkContextInitialized]] Setting Internal SparkContext Reference -- `sparkContextInitialized` Method
[source, scala]
----
sparkContextInitialized(sc: SparkContext): Unit
----
`sparkContextInitialized` passes the call on to `ApplicationMaster.sparkContextInitialized` that sets the internal `sparkContextRef` reference (to be `sc`).
=== [[sparkContextStopped]] Clearing Internal SparkContext Reference -- `sparkContextStopped` Method
[source, scala]
----
sparkContextStopped(sc: SparkContext): Boolean
----
`sparkContextStopped` passes the call on to `ApplicationMaster.sparkContextStopped` that clears the internal `sparkContextRef` reference (i.e. sets it to `null`).
=== [[addAmIpFilter]] Registering web UI Security Filters -- `addAmIpFilter` Method
[source, scala]
----
addAmIpFilter(): Unit
----
`addAmIpFilter` is a helper method that ...???
It starts by reading Hadoop's environment variable https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/ApplicationConstants.html#APPLICATION_WEB_PROXY_BASE_ENV[ApplicationConstants.APPLICATION_WEB_PROXY_BASE_ENV] that it passes to link:spark-yarn-yarnrmclient.md#getAmIpFilterParams[`YarnRMClient` to compute the configuration for the `AmIpFilter` for web UI].
In cluster deploy mode (when `ApplicationMaster` runs with web UI), it sets `spark.ui.filters` system property as `org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter`. It also sets system properties from the key-value configuration of `AmIpFilter` (computed earlier) as `spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.[key]` being `[value]`.
In client deploy mode (when `ApplicationMaster` runs on another JVM or even host than web UI), it simply sends a `AddWebUIFilter` to `ApplicationMaster` (namely to link:spark-yarn-AMEndpoint.md[AMEndpoint RPC Endpoint]).
=== [[finish]] `finish` Method
CAUTION: FIXME
=== [[allocator]] allocator Internal Reference to YarnAllocator
`allocator` is the internal reference to link:spark-yarn-YarnAllocator.md[YarnAllocator] that `ApplicationMaster` uses to request new or release outstanding containers for executors.
`allocator` is link:spark-yarn-yarnrmclient.md#register[created] when <<registerAM, `ApplicationMaster` is registered>> (using the internal <<client, YarnRMClient reference>>).
=== [[main]] Launching ApplicationMaster Standalone Application -- `main` Method
`ApplicationMaster` is started as a standalone application inside a YARN container on a node.
NOTE: `ApplicationMaster` standalone application is launched as a result of link:spark-yarn-client.md#createContainerLaunchContext[sending a `ContainerLaunchContext` request] to launch `ApplicationMaster` for a Spark application to YARN ResourceManager.
.Submitting ApplicationMaster to YARN NodeManager
image::spark-yarn-ApplicationMaster-client-submitApplication.png[align="center"]
When executed, `main` first parses <<command-line-parameters, command-line parameters>> and then uses link:../spark-SparkHadoopUtil.md#runAsSparkUser[SparkHadoopUtil.runAsSparkUser] to run the main code with a Hadoop `UserGroupInformation` as a thread local variable (distributed to child threads) for authenticating HDFS and YARN calls.
[TIP]
====
Enable `DEBUG` logging level for `org.apache.spark.deploy.SparkHadoopUtil` logger to see what happens inside.
Add the following line to `conf/log4j.properties`:
```
log4j.logger.org.apache.spark.deploy.SparkHadoopUtil=DEBUG
```
Refer to link:../spark-logging.md[Logging].
====
You should see the following message in the logs:
```
DEBUG running as user: [user]
```
link:../spark-SparkHadoopUtil.md#runAsSparkUser[SparkHadoopUtil.runAsSparkUser] function executes a block that <<creating-instance, creates a `ApplicationMaster`>> (passing the <<ApplicationMasterArguments, ApplicationMasterArguments>> instance and a new link:spark-yarn-yarnrmclient.md[YarnRMClient]) and then <<run, runs>> it.
=== [[run]] Running ApplicationMaster -- `run` Method
[source, scala]
----
run(): Int
----
`run` reads the <<getAttemptId, application attempt id>>.
(only <<isClusterMode, in `cluster` deploy mode>>) `run` sets <<cluster-mode-settings, `cluster` deploy mode-specific settings>> and sets the application attempt id (from YARN).
`run` sets a `CallerContext` for `APPMASTER`.
CAUTION: FIXME Why is `CallerContext` required? It's only executed when `hadoop.caller.context.enabled` is enabled and `org.apache.hadoop.ipc.CallerContext` class is on CLASSPATH.
You should see the following INFO message in the logs:
```
INFO ApplicationAttemptId: [appAttemptId]
```
`run` creates a Hadoop https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html[FileSystem] (using the internal <<yarnConf, YarnConfiguration>>).
`run` registers the <<shutdown-hook, cleanup shutdown hook>>.
`run` creates a link:../spark-security.md#SecurityManager[SecurityManager].
(only when link:spark-yarn-settings.md#spark.yarn.credentials.file[spark.yarn.credentials.file] is defined) `run` link:spark-yarn-ConfigurableCredentialManager.md#creating-instance[creates a `ConfigurableCredentialManager`] to link:spark-yarn-ConfigurableCredentialManager.md#credentialRenewer[get a `AMCredentialRenewer`] and schedules login from keytab.
CAUTION: FIXME Security stuff begs for more details.
In the end, `run` registers `ApplicationMaster` (with YARN ResourceManager) for the Spark application -- either calling <<runDriver, runDriver>> (in <<isClusterMode, `cluster` deploy mode>>) or <<runExecutorLauncher, runExecutorLauncher>> (for `client` deploy mode).
`run` exits with <<exitCode, `0` exit code>>.
In case of an exception, you should see the following ERROR message in the logs and `run` <<finish, finishes>> with `FAILED` final application status.
```
ERROR Uncaught exception: [exception]
```
NOTE: `run` is used exclusively when `ApplicationMaster` is <<main, launched as a standalone application>> (inside a YARN container on a YARN cluster).
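The INFO and ERROR messages quoted in this section end up in the YARN container logs of the `ApplicationMaster` and can be fetched after the application finishes with the YARN CLI (the application id below is a placeholder):

[source, shell]
----
yarn logs -applicationId application_1475754364294_0001 | grep ApplicationMaster
----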
=== [[runExecutorLauncher]] Creating sparkYarnAM RPC Environment and Registering ApplicationMaster with YARN ResourceManager (Client Deploy Mode)
[source, scala]
----
runExecutorLauncher(
securityMgr: SecurityManager): Unit
----
[[runExecutorLauncher-sparkYarnAM]]
`runExecutorLauncher` link:../index.md#create[creates `sparkYarnAM` RPC environment] (on link:spark-yarn-settings.md#spark.yarn.am.port[spark.yarn.am.port] port, the internal <<sparkConf, SparkConf>> and `clientMode` enabled).
[TIP]
====
Read the note in link:../index.md#create[Creating RpcEnv] to learn the meaning of `clientMode` input argument.
`clientMode` is enabled for so-called a client-mode `ApplicationMaster` which is when a Spark application is submitted to YARN in link:../spark-deploy-mode.md#client[`client` deploy mode].
====
`runExecutorLauncher` then <<waitForSparkDriver, waits until the driver accepts connections and creates `RpcEndpointRef` to communicate>>.
`runExecutorLauncher` <<addAmIpFilter, registers web UI security filters>>.
CAUTION: FIXME Why is this needed? `addAmIpFilter`
In the end, `runExecutorLauncher` <<registerAM, registers `ApplicationMaster` with YARN ResourceManager and requests resources>> and then pauses until <<reporterThread, reporterThread>> finishes.
NOTE: `runExecutorLauncher` is used exclusively when <<run, `ApplicationMaster` is started>> in <<isClusterMode, `client` deploy mode>>.
=== [[runDriver]] Running Spark Application's Driver and Registering ApplicationMaster with YARN ResourceManager (Cluster Deploy Mode)
[source, scala]
----
runDriver(
securityMgr: SecurityManager): Unit
----
runDriver starts a Spark application on a <<userClassThread, separate thread>>, registers `YarnAM` endpoint in the application's `RpcEnv` followed by registering `ApplicationMaster` with YARN ResourceManager. In the end, runDriver waits for the Spark application to finish.
Internally, runDriver <<addAmIpFilter, registers web UI security filters>> and <<startUserApplication, starts a Spark application>> (on a <<userClassThread, separate Thread>>).
You should see the following INFO message in the logs:
```
Waiting for spark context initialization...
```
[[runDriver-rpcEnv]]
runDriver waits link:spark-yarn-settings.md#spark.yarn.am.waitTime[spark.yarn.am.waitTime] time till the Spark application's xref:ROOT:SparkContext.md[] is available and accesses the link:../index.md[current `RpcEnv`] (and saves it as the internal <<rpcEnv, rpcEnv>>).
NOTE: runDriver uses xref:core:SparkEnv.md#rpcEnv[`SparkEnv` to access the current `RpcEnv`] that the xref:ROOT:SparkContext.md#env[Spark application's `SparkContext` manages].
runDriver <<runAMEndpoint, creates `RpcEndpointRef` to the driver's `YarnScheduler` endpoint and registers `YarnAM` endpoint>> (using link:../spark-driver.md#spark_driver_host[spark.driver.host] and link:../spark-driver.md#spark_driver_port[spark.driver.port] properties for the driver's host and port and `isClusterMode` enabled).
runDriver <<registerAM, registers `ApplicationMaster` with YARN ResourceManager and requests cluster resources>> (using the Spark application's <<rpcEnv, RpcEnv>>, the driver's RPC endpoint reference, `webUrl` if web UI is enabled and the input `securityMgr`).
runDriver pauses until the Spark application finishes.
NOTE: runDriver uses Java's link:https://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#join--[Thread.join] on the internal <<userClassThread, Thread>> reference to the Spark application running on it.
If the Spark application has not started in link:spark-yarn-settings.md#spark.yarn.am.waitTime[spark.yarn.am.waitTime] time, runDriver reports an `IllegalStateException`:
```
SparkContext is null but app is still running!
```
If `TimeoutException` is reported while waiting for the Spark application to start, you should see the following ERROR message in the logs and runDriver <<finish, finishes>> with `FAILED` final application status and the error code `13`.
```
SparkContext did not initialize after waiting for [spark.yarn.am.waitTime] ms. Please check earlier log output for errors. Failing the application.
```
runDriver is used when ApplicationMaster is <<run, started>> (in <<isClusterMode, cluster deploy mode>>).
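If the driver legitimately needs more time before creating `SparkContext` (e.g. heavy initialization in `main`), the wait can be extended at submission time; the value, class, and jar below are examples only:

[source, shell]
----
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.am.waitTime=200s \
  --class my.App my-app.jar
----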
=== [[startUserApplication]] Starting Spark Application (in Separate Driver Thread) -- `startUserApplication` Method
[source, scala]
----
startUserApplication(): Thread
----
`startUserApplication` starts a Spark application as a separate `Driver` thread.
Internally, when `startUserApplication` is executed, you should see the following INFO message in the logs:
```
INFO Starting the user application in a separate Thread
```
`startUserApplication` takes the link:spark-yarn-client.md#getUserClasspath[user-specified jars] and maps them to use the `file:` protocol.
`startUserApplication` then creates a class loader to load the main class of the Spark application given the link:spark-yarn-client.md#isUserClassPathFirst[precedence of the Spark system jars and the user-specified jars].
`startUserApplication` works on custom configurations for Python and R applications (which I don't bother including here).
`startUserApplication` loads the main class (using the custom class loader created above with the user-specified jars) and creates a reference to the `main` method.
NOTE: The main class is specified as `userClass` in <<ApplicationMasterArguments, ApplicationMasterArguments>> when <<creating-instance, `ApplicationMaster` was created>>.
`startUserApplication` starts a Java https://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html[Thread] (with the name *Driver*) that invokes the `main` method (with the application arguments from `userArgs` from <<ApplicationMasterArguments, ApplicationMasterArguments>>). The `Driver` thread uses the internal <<sparkContextPromise, sparkContextPromise>> to <<runDriver, notify `ApplicationMaster`>> about the execution status of the `main` method (success or failure).
When the main method (of the Spark application) finishes successfully, the `Driver` thread will <<finish, finish>> with `SUCCEEDED` final application status and code status `0` and you should see the following DEBUG message in the logs:
```
DEBUG Done running users class
```
Any exceptions in the `Driver` thread are reported with a corresponding ERROR message in the logs, `FAILED` final application status, and an appropriate exit code.
```
// SparkUserAppException
ERROR User application exited with status [exitCode]
// non-SparkUserAppException
ERROR User class threw exception: [cause]
```
NOTE: A Spark application's exit codes are passed directly to <<finish, finish `ApplicationMaster`>> and recorded as <<exitCode, exitCode>> for future reference.
NOTE: `startUserApplication` is used exclusively when `ApplicationMaster` <<runDriver, runs a Spark application's driver and registers itself with YARN ResourceManager>> for `cluster` deploy mode.
=== [[registerAM]] Registering ApplicationMaster with YARN ResourceManager and Requesting YARN Cluster Resources -- `registerAM` Internal Method
[source, scala]
----
registerAM(
_sparkConf: SparkConf,
_rpcEnv: RpcEnv,
driverRef: RpcEndpointRef,
uiAddress: String,
securityMgr: SecurityManager): Unit
----
.Registering ApplicationMaster with YARN ResourceManager
image::spark-yarn-ApplicationMaster-registerAM.png[align="center"]
Internally, `registerAM` first takes the application and attempt ids, and creates the URL of xref:spark-history-server:index.md[Spark History Server] for the Spark application, i.e. `[address]/history/[appId]/[attemptId]`, by link:../spark-SparkHadoopUtil.md#substituteHadoopVariables[substituting Hadoop variables] (using the internal <<yarnConf, YarnConfiguration>>) in the optional link:spark-yarn-settings.md#spark.yarn.historyServer.address[spark.yarn.historyServer.address] setting.
`registerAM` then creates a link:../index.md#RpcEndpointAddress[RpcEndpointAddress] for the driver's xref:scheduler:CoarseGrainedSchedulerBackend.md#CoarseGrainedScheduler[CoarseGrainedScheduler RPC endpoint] available at link:../spark-driver.md#spark.driver.host[spark.driver.host] and link:../spark-driver.md#spark.driver.port[spark.driver.port].
`registerAM` link:spark-yarn-ExecutorRunnable.md#launchContextDebugInfo[prints YARN launch context diagnostic information (with command, environment and resources) for executors] (with xref:executor:Executor.md#spark.executor.memory[spark.executor.memory], xref:executor:Executor.md#spark.executor.cores[spark.executor.cores] and dummy `<executorId>` and `<hostname>`).
`registerAM` requests link:spark-yarn-yarnrmclient.md#register[`YarnRMClient` to register `ApplicationMaster`] (with YARN ResourceManager) and the internal <<allocator, YarnAllocator>> to link:spark-yarn-YarnAllocator.md#allocateResources[allocate required cluster resources] (given placement hints about where to allocate resource containers for executors to be as close to the data as possible).
NOTE: `registerAM` uses `YarnRMClient` that was given when <<creating-instance, `ApplicationManager` was created>>.
In the end, `registerAM` <<launchReporterThread, launches reporter thread>>.
NOTE: `registerAM` is used when `ApplicationMaster` runs a Spark application in <<runDriver, `cluster` deploy mode>> and <<runExecutorLauncher, `client` deploy mode>>.
=== [[command-line-parameters]][[ApplicationMasterArguments]] Command-Line Parameters -- `ApplicationMasterArguments` class
`ApplicationMaster` uses `ApplicationMasterArguments` class to handle command-line parameters.
`ApplicationMasterArguments` is created right after the <<main, main>> method has been executed, for the `args` command-line parameters.
It accepts the following command-line parameters:
* `--jar JAR_PATH` -- the path to the Spark application's JAR file
* `--class CLASS_NAME` -- the name of the Spark application's main class
* `--arg ARG` -- an argument to be passed to the Spark application's main class. There can be multiple `--arg` arguments that are passed in order.
* `--properties-file FILE` -- the path to a custom Spark properties file.
* `--primary-py-file FILE` -- the main Python file to run.
* `--primary-r-file FILE` -- the main R file to run.
When an unsupported parameter is found, the following message is printed out to standard error and `ApplicationMaster` exits with the exit code `1`.
```
Unknown/unsupported param [unknownParam]
Usage: org.apache.spark.deploy.yarn.ApplicationMaster [options]
Options:
--jar JAR_PATH Path to your application's JAR file
--class CLASS_NAME Name of your application's main class
--primary-py-file A main Python file
--primary-r-file A main R file
--arg ARG Argument to be passed to your application's main class.
Multiple invocations are possible, each will be passed in order.
--properties-file FILE Path to a custom Spark properties file.
```
=== [[localResources]] `localResources` Property
When <<creating-instance, `ApplicationMaster` is instantiated>>, it computes internal `localResources` collection of YARN's https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/LocalResource.html[LocalResource] by name based on the internal `spark.yarn.cache.*` configuration settings.
[source, scala]
----
localResources: Map[String, LocalResource]
----
You should see the following INFO message in the logs:
```
INFO ApplicationMaster: Preparing Local resources
```
It starts by reading the internal Spark configuration settings (that were earlier set when link:spark-yarn-client.md#prepareLocalResources[`Client` prepared local resources to distribute]):
* link:spark-yarn-settings.md#spark.yarn.cache.filenames[spark.yarn.cache.filenames]
* link:spark-yarn-settings.md#spark.yarn.cache.sizes[spark.yarn.cache.sizes]
* link:spark-yarn-settings.md#spark.yarn.cache.timestamps[spark.yarn.cache.timestamps]
* link:spark-yarn-settings.md#spark.yarn.cache.visibilities[spark.yarn.cache.visibilities]
* link:spark-yarn-settings.md#spark.yarn.cache.types[spark.yarn.cache.types]
For each file name in link:spark-yarn-settings.md#spark.yarn.cache.filenames[spark.yarn.cache.filenames] it maps link:spark-yarn-settings.md#spark.yarn.cache.types[spark.yarn.cache.types] to an appropriate YARN's https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/LocalResourceType.html[LocalResourceType] and creates a new YARN https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/LocalResource.html[LocalResource].
NOTE: https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/LocalResource.html[LocalResource] represents a local resource required to run a container.
If link:spark-yarn-settings.md#spark.yarn.cache.confArchive[spark.yarn.cache.confArchive] is set, it is added to `localResources` as https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/LocalResourceType.html#ARCHIVE[ARCHIVE] resource type and https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/LocalResourceVisibility.html#PRIVATE[PRIVATE] visibility.
NOTE: link:spark-yarn-settings.md#spark.yarn.cache.confArchive[spark.yarn.cache.confArchive] is set when link:spark-yarn-client.md#prepareLocalResources[`Client` prepares local resources].
NOTE: `ARCHIVE` is an archive file that is automatically unarchived by the NodeManager.
NOTE: `PRIVATE` visibility means to share a resource among all applications of the same user on the node.
Ultimately, it removes the cache-related settings from the link:../SparkConf.md[Spark configuration] and system properties.
You should see the following INFO message in the logs:
```
INFO ApplicationMaster: Prepared Local resources [resources]
```
=== [[cluster-mode-settings]] Cluster Mode Settings
When in <<isClusterMode, `cluster` deploy mode>>, `ApplicationMaster` sets the following system properties (in <<run, run>>):
* link:../spark-webui-properties.md#spark.ui.port[spark.ui.port] to `0`
* link:../configuration-properties.md#spark.master[spark.master] as `yarn`
* link:../spark-deploy-mode.md#spark.submit.deployMode[spark.submit.deployMode] as `cluster`
* link:spark-yarn-settings.md#spark.yarn.app.id[spark.yarn.app.id] as YARN-specific application id
CAUTION: FIXME Why are the system properties required? Who's expecting them?
=== [[cluster-mode]][[isClusterMode]] `isClusterMode` Internal Flag
CAUTION: FIXME link:spark-yarn-client.md#isClusterMode[Since `org.apache.spark.deploy.yarn.ExecutorLauncher` is used for client deploy mode], the `isClusterMode` flag could be set there (not depending on `--class` which is correct yet not very obvious).
`isClusterMode` is an internal flag that is enabled (i.e. `true`) for link:../spark-deploy-mode.md#cluster[cluster mode].
Specifically, it says whether the main class of the Spark application (through <<command-line-parameters, `--class` command-line argument>>) was specified or not. That is how the developers decided to inform `ApplicationMaster` about being run in link:../spark-deploy-mode.md#cluster[cluster mode] when link:spark-yarn-client.md#createContainerLaunchContext[`Client` creates YARN's `ContainerLaunchContext`] (to launch the `ApplicationMaster` for a Spark application).
`isClusterMode` is used to set <<cluster-mode-settings, additional system properties>> in <<run, run>>, and to choose between <<runDriver, runDriver>> (when the flag is enabled) and <<runExecutorLauncher, runExecutorLauncher>> (when disabled).
In addition, `isClusterMode` controls whether the <<getDefaultFinalStatus, default final status of a Spark application>> is `FinalApplicationStatus.FAILED` (when the flag is enabled) or `FinalApplicationStatus.UNDEFINED`.
`isClusterMode` also controls whether to set system properties in <<addAmIpFilter, addAmIpFilter>> (when the flag is enabled) or to <<addAmIpFilter, send an `AddWebUIFilter` instead>>.
=== [[unregister]] Unregistering ApplicationMaster from YARN ResourceManager -- `unregister` Method
`unregister` unregisters the `ApplicationMaster` for the Spark application from the link:spark-yarn-introduction.md#ResourceManager[YARN ResourceManager].
[source, scala]
----
unregister(status: FinalApplicationStatus, diagnostics: String = null): Unit
----
NOTE: It is called from the <<shutdown-hook, cleanup shutdown hook>> (that was registered in `ApplicationMaster` when it <<run, started running>>) and only when the application's final result is successful or it was the last attempt to run the application.
It first checks that the `ApplicationMaster` has not already been unregistered (using the internal `unregistered` flag). If it has not, you should see the following INFO message in the logs:
```
INFO ApplicationMaster: Unregistering ApplicationMaster with [status]
```
There can also be an optional diagnostic message in the logs:
```
(diag message: [msg])
```
The internal `unregistered` flag is then enabled (i.e. set to `true`).
It then requests link:spark-yarn-yarnrmclient.md#unregister[`YarnRMClient` to unregister].
=== [[shutdown-hook]] Cleanup Shutdown Hook
When <<run, `ApplicationMaster` starts running>>, it registers a shutdown hook that <<unregister, unregisters the Spark application from the YARN ResourceManager>> and <<cleanupStagingDir, cleans up the staging directory>>.
Internally, it checks the internal `finished` flag, and if it is disabled, it <<finish, marks the Spark application as failed with `EXIT_EARLY`>>.
If the internal `unregistered` flag is disabled, it <<unregister, unregisters the Spark application>> and <<cleanupStagingDir, cleans up the staging directory>> afterwards only when the final status of the ApplicationMaster's registration is `FinalApplicationStatus.SUCCEEDED` or the link:README.md#multiple-application-attempts[number of application attempts is more than allowed].
The shutdown hook runs after the SparkContext is shut down, i.e. the shutdown priority is one less than SparkContext's.
The shutdown hook is registered using Spark's own `ShutdownHookManager.addShutdownHook`.
=== [[ExecutorLauncher]] ExecutorLauncher
`ExecutorLauncher` comes with no extra functionality when compared to `ApplicationMaster`. It serves as a helper class to run `ApplicationMaster` under another class name in link:spark-deploy-mode.md#client[client deploy mode].
With the two different class names (pointing at the same `ApplicationMaster` class) you can more easily distinguish between `ExecutorLauncher` (which is really an `ApplicationMaster`) in link:spark-deploy-mode.md#client[client deploy mode] and the `ApplicationMaster` in link:spark-deploy-mode.md#cluster[cluster deploy mode] using tools like `ps` or `jps`.
NOTE: Consider `ExecutorLauncher` an `ApplicationMaster` for client deploy mode.
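For example, you could tell the two deploy modes apart from `jps -l` output with a filter like the following (the listing and PIDs are hypothetical, for illustration only):

```shell
# Sample 'jps -l' output on a YARN node (PIDs are hypothetical)
sample_jps_output='12345 org.apache.spark.deploy.yarn.ExecutorLauncher
23456 org.apache.spark.deploy.yarn.ApplicationMaster
34567 sun.tools.jps.Jps'

# ExecutorLauncher => client deploy mode, ApplicationMaster => cluster deploy mode
printf '%s\n' "$sample_jps_output" | grep -E '(ExecutorLauncher|ApplicationMaster)$'
```

On a real node you would pipe the actual `jps -l` output through the same `grep`.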
=== [[getAttemptId]] Obtain Application Attempt Id -- `getAttemptId` Method
[source, scala]
----
getAttemptId(): ApplicationAttemptId
----
`getAttemptId` returns YARN's `ApplicationAttemptId` (of the Spark application to which the container was assigned).
Internally, it queries YARN by means of link:spark-yarn-yarnrmclient.md#getAttemptId[YarnRMClient].
=== [[waitForSparkDriver]] Waiting Until Driver is Network-Accessible and Creating RpcEndpointRef to Communicate -- `waitForSparkDriver` Internal Method
[source, scala]
----
waitForSparkDriver(): RpcEndpointRef
----
`waitForSparkDriver` waits until the driver is network-accessible, i.e. accepts connections on a given host and port, and returns a `RpcEndpointRef` to the driver.
When executed, you should see the following INFO message in the logs:
```
INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
```
`waitForSparkDriver` takes the driver's host and port (using <<ApplicationMasterArguments, ApplicationMasterArguments>> passed in when <<creating-instance, `ApplicationMaster` was created>>).
CAUTION: FIXME `waitForSparkDriver` expects the driver's host and port as the 0-th element in `ApplicationMasterArguments.userArgs`. Why?
`waitForSparkDriver` tries to connect to the driver's host and port until the driver accepts the connection, but no longer than the link:spark-yarn-settings.md#spark.yarn.am.waitTime[spark.yarn.am.waitTime] setting allows or until the <<finished, finished>> internal flag is enabled.
You should see the following INFO message in the logs:
```
INFO yarn.ApplicationMaster: Driver now available: [driverHost]:[driverPort]
```
While `waitForSparkDriver` tries to connect (while the socket is down), you can see the following ERROR message, after which `waitForSparkDriver` pauses for 100 ms and tries to connect again (until the `waitTime` elapses).
```
ERROR Failed to connect to driver at [driverHost]:[driverPort], retrying ...
```
Once `waitForSparkDriver` could connect to the driver, `waitForSparkDriver` sets link:../spark-driver.md#spark.driver.host[spark.driver.host] and link:../spark-driver.md#spark.driver.port[spark.driver.port] properties to `driverHost` and `driverPort`, respectively (using the internal <<sparkConf, SparkConf>>).
In the end, `waitForSparkDriver` <<runAMEndpoint, runAMEndpoint>>.
If `waitForSparkDriver` did not manage to connect (before `waitTime` elapsed or the <<finished, finished>> internal flag was enabled), `waitForSparkDriver` throws a `SparkException` with the message:
```
Failed to connect to driver!
```
NOTE: `waitForSparkDriver` is used exclusively when client-mode `ApplicationMaster` <<runExecutorLauncher, creates the `sparkYarnAM` RPC environment and registers itself with YARN ResourceManager>>.
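The connect-retry-timeout behaviour described above is a common pattern. A minimal illustration in Python (not Spark's actual Scala code; the host, port, and timings below are hypothetical, and Spark's additional `finished`-flag check is omitted for brevity):

```python
import socket
import time

def wait_for_driver(host: str, port: int, wait_time: float = 5.0,
                    pause: float = 0.1) -> bool:
    """Keep trying to connect until the driver accepts or wait_time elapses."""
    deadline = time.monotonic() + wait_time
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # driver is now reachable
        except OSError:
            time.sleep(pause)  # pause briefly and retry, like ApplicationMaster's 100 ms
    return False
```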
=== [[runAMEndpoint]] Creating RpcEndpointRef to Driver's YarnScheduler Endpoint and Registering YarnAM Endpoint -- `runAMEndpoint` Internal Method
[source, scala]
----
runAMEndpoint(host: String, port: String, isClusterMode: Boolean): RpcEndpointRef
----
`runAMEndpoint` sets up a xref:rpc:RpcEndpointRef.md[RpcEndpointRef] to the driver's `YarnScheduler` endpoint and registers *YarnAM* endpoint.
NOTE: The RPC environment is `sparkDriver` when the driver lives in the YARN cluster (i.e. in `cluster` deploy mode).
.Registering YarnAM Endpoint
image::spark-yarn-ApplicationMaster-runAMEndpoint.png[align="center"]
Internally, `runAMEndpoint` link:../index.md#setupEndpointRefByURI[gets a `RpcEndpointRef`] to the driver's `YarnScheduler` endpoint (available on the `host` and `port`).
NOTE: `YarnScheduler` RPC endpoint is registered when the link:spark-yarn-yarnschedulerbackend.md#creating-instance[Spark coarse-grained scheduler backends for YARN are created].
`runAMEndpoint` then link:../index.md#setupEndpoint[registers the RPC endpoint] as *YarnAM* (and link:spark-yarn-AMEndpoint.md[AMEndpoint] implementation with ``ApplicationMaster``'s <<rpcEnv, RpcEnv>>, `YarnScheduler` endpoint reference, and `isClusterMode` flag).
NOTE: `runAMEndpoint` is used when `ApplicationMaster` <<waitForSparkDriver, waits for the driver>> (in client deploy mode) and <<runDriver, runs the driver>> (in cluster deploy mode).
=== [[createAllocator]] createAllocator Method
[source,scala]
----
createAllocator(
driverRef: RpcEndpointRef,
_sparkConf: SparkConf): Unit
----
createAllocator...FIXME
createAllocator is used when...FIXME
| 55.436019 | 488 | 0.783107 |
61331e4d38a186165a4240ee1ac05903d3c484ae | 1,213 | asciidoc | AsciiDoc | documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module.asciidoc | jambulud/devonfw-testing | 8a38fb5c2d0904bdf0ec71e4686b3b4746714b27 | [
"Apache-2.0"
] | null | null | null | documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module.asciidoc | jambulud/devonfw-testing | 8a38fb5c2d0904bdf0ec71e4686b3b4746714b27 | [
"Apache-2.0"
] | null | null | null | documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module.asciidoc | jambulud/devonfw-testing | 8a38fb5c2d0904bdf0ec71e4686b3b4746714b27 | [
"Apache-2.0"
] | null | null | null | :toc: macro
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
toc::[]
:idprefix:
:idseparator: -
:reproducible:
:source-highlighter: rouge
:listing-caption: Listing

= Web API Test Module
== Service Virtualization
* https://github.com/devonfw/devonfw-testing/blob/develop/documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module-What-is-service-virtualization.asciidoc[What is service virtualization]
* https://github.com/devonfw/devonfw-testing/blob/develop/documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module-How-plug-in-service-virtualization-into-Application-Under-Test.asciidoc[How plug in service virtualization into Application Under Test]
* https://github.com/devonfw/devonfw-testing/blob/develop/documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module-How-to-make-virtual-asset.asciidoc[How to make virtual asset]
* https://github.com/devonfw/devonfw-testing/blob/develop/documentation/Who-Is-MrChecker/Test-Framework-Modules/Web-API-Test-Module-Smoke-Tests-virtualization.asciidoc[Smoke Tests virtualization]
| 50.541667 | 267 | 0.808739 |
04db88544ddd4bd38bf9745222faeb5c16b560b1 | 983 | adoc | AsciiDoc | karuta-webapp/src/main/asciidoc/groups.adoc | tlanh/karuta-backend | 53f7fd68d77b05b8e684c86584994ddfb7203fdd | [
"ECL-1.0",
"ECL-2.0"
] | 11 | 2015-07-28T00:48:42.000Z | 2020-08-10T10:45:11.000Z | karuta-webapp/src/main/asciidoc/groups.adoc | tlanh/karuta-backend | 53f7fd68d77b05b8e684c86584994ddfb7203fdd | [
"ECL-1.0",
"ECL-2.0"
] | 23 | 2015-05-22T21:52:28.000Z | 2020-02-18T09:13:21.000Z | karuta-webapp/src/main/asciidoc/groups.adoc | tlanh/karuta-backend | 53f7fd68d77b05b8e684c86584994ddfb7203fdd | [
"ECL-1.0",
"ECL-2.0"
] | 9 | 2015-06-03T19:39:32.000Z | 2020-03-04T14:58:31.000Z | == Groups
=== GET /
This endpoint returns all the groups/roles in which the current user is
present.
include::{snippets}/groups-for-user/response-fields.adoc[]
[source,role="primary"]
.Curl
include::{snippets}/groups-for-user/curl-request.adoc[]
[source,role="secondary"]
.Response
include::{snippets}/groups-for-user/response-body.adoc[]
[source,role="secondary"]
.Full response
include::{snippets}/groups-for-user/http-response.adoc[]
=== GET /{id}
This endpoint returns all the groups/roles in which the current user
is present for a given portfolio based on its ID.
[NOTE]
The user that triggers this query must have the right to read this
portfolio in order to get a response.
[source,role="primary"]
.Curl
include::{snippets}/groups-by-portfolio/curl-request.adoc[]
[source,role="secondary"]
.Response
include::{snippets}/groups-by-portfolio/response-body.adoc[]
[source,role="secondary"]
.Full response
include::{snippets}/groups-by-portfolio/http-response.adoc[]
| 23.404762 | 71 | 0.754832 |
d599f4364d4665c3d99661cca78ce3e351cd972d | 1,172 | adoc | AsciiDoc | README.adoc | nullbyte91/PythonSpeed | aec6b88e251814cc5b3bee5dc747f365b33c30b4 | [
"MIT"
] | null | null | null | README.adoc | nullbyte91/PythonSpeed | aec6b88e251814cc5b3bee5dc747f365b33c30b4 | [
"MIT"
] | null | null | null | README.adoc | nullbyte91/PythonSpeed | aec6b88e251814cc5b3bee5dc747f365b33c30b4 | [
"MIT"
] | null | null | null | = PythonSpeed
:idprefix:
:idseparator: -
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc: macro
:toclevels: 6
:toc-title: Table of Contents
toc::[]
== Introduction
List of Python Performance Tuning Tips
== Tips
=== List Comprehensions
```python
from time import perf_counter

# Square the first ten numbers with an explicit loop
square = []
start = perf_counter()
for i in range(10):
    square.append(i * i)
print("Time:{} seconds".format(perf_counter() - start))
# Time:5.261999831418507e-06 seconds
```
```python
from time import perf_counter

# The same computation as a list comprehension
start = perf_counter()
square = [i * i for i in range(10)]
print("Time:{} seconds".format(perf_counter() - start))
# Time:3.247000222472707e-06 seconds
```
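A single `perf_counter` measurement is noisy; the `timeit` module repeats the statement many times and is the more reliable way to compare the two styles. A hedged sketch (iteration counts and timings vary by machine):

```python
import timeit

loop_stmt = """
square = []
for i in range(10):
    square.append(i * i)
"""
comp_stmt = "square = [i * i for i in range(10)]"

# Each statement is executed 100,000 times; the totals are comparable.
loop_time = timeit.timeit(loop_stmt, number=100_000)
comp_time = timeit.timeit(comp_stmt, number=100_000)
print(f"loop: {loop_time:.4f}s  comprehension: {comp_time:.4f}s")
```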
=== Vector operations over loops
```python
from time import perf_counter

# Find the intersection of two lists with an explicit loop
a_list = [10, 20, 30, 40, 50, 60, 70, 80]
b_list = [20, 90, 50]

intersect_val = []
start = perf_counter()
for a_item in a_list:
    if a_item in b_list:
        intersect_val.append(a_item)
print("Time:{} seconds".format(perf_counter() - start))
# Time:2.196000423282385e-06 seconds
```
```python
import numpy as np
from time import perf_counter

# The same intersection, vectorized with NumPy
a_list = [10, 20, 30, 40, 50, 60, 70, 80]
b_list = [20, 90, 50]

start = perf_counter()
intersect_val = np.intersect1d(a_list, b_list)
print("Time:{} seconds".format(perf_counter() - start))
# Time:0.024299857999722008 seconds
```
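Note that in the measurement above the plain loop is actually faster: NumPy's per-call overhead dominates for such tiny lists. Vectorization pays off as the data grows. A hedged sketch with larger random arrays (sizes and timings are illustrative, and exact numbers vary by machine):

```python
import numpy as np
from time import perf_counter

rng = np.random.default_rng(0)
a_big = rng.integers(0, 1_000_000, size=200_000)
b_big = rng.integers(0, 1_000_000, size=200_000)

# Loop-style intersection (a set keeps membership tests fast and results unique)
b_set = set(b_big.tolist())
start = perf_counter()
loop_result = sorted({x for x in a_big.tolist() if x in b_set})
loop_time = perf_counter() - start

# Vectorized intersection (returns the sorted, unique common values)
start = perf_counter()
np_result = np.intersect1d(a_big, b_big)
np_time = perf_counter() - start

print(f"loop: {loop_time:.4f}s  numpy: {np_time:.4f}s")
```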
| 18.030769 | 55 | 0.698805 |
4734fa3b34a030bf6fe1eb65954831d285dea7ee | 730 | adoc | AsciiDoc | docs/0530-minimum-absolute-difference-in-bst.adoc | diguage/leetcode | e72299539a319c94b435ff26cf077371a353b00c | [
"Apache-2.0"
] | 4 | 2020-02-05T10:08:43.000Z | 2021-06-10T02:15:20.000Z | docs/0530-minimum-absolute-difference-in-bst.adoc | diguage/leetcode | e72299539a319c94b435ff26cf077371a353b00c | [
"Apache-2.0"
] | 1,062 | 2018-10-04T16:04:06.000Z | 2020-06-17T02:11:44.000Z | docs/0530-minimum-absolute-difference-in-bst.adoc | diguage/leetcode | e72299539a319c94b435ff26cf077371a353b00c | [
"Apache-2.0"
] | 3 | 2019-10-03T01:42:58.000Z | 2020-03-02T13:53:02.000Z | = 530. Minimum Absolute Difference in BST
https://leetcode.com/problems/minimum-absolute-difference-in-bst/[LeetCode - Minimum Absolute Difference in BST]
Given a binary search tree with non-negative values, find the minimum https://en.wikipedia.org/wiki/Absolute_difference[absolute difference] between values of any two nodes.
*Example:*
[subs="verbatim,quotes,macros"]
----
*Input:*
1
\
3
/
2
*Output:*
1
*Explanation:*
The minimum absolute difference is 1, which is the difference between 2 and 1 (or between 2 and 3).
----
*Note:* There are at least two nodes in this BST.
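The standard approach exploits the BST property: an in-order traversal visits the values in sorted order, so the minimum absolute difference is the smallest gap between consecutive visited values. An illustrative Python sketch of that idea (the repository's actual solution is the Java file included below):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def get_minimum_difference(root: TreeNode) -> int:
    best, prev = float("inf"), None

    def inorder(node):
        nonlocal best, prev
        if node is None:
            return
        inorder(node.left)
        if prev is not None:
            best = min(best, node.val - prev)  # values arrive in sorted order
        prev = node.val
        inorder(node.right)

    inorder(root)
    return best

# The tree from the example: 1 with right child 3, whose left child is 2
root = TreeNode(1, None, TreeNode(3, TreeNode(2)))
print(get_minimum_difference(root))  # 1
```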
[[src-0530]]
[{java_src_attr}]
----
include::{sourcedir}/_0530_MinimumAbsoluteDifferenceInBST.java[]
----
| 19.72973 | 186 | 0.712329 |
c897f0f1f6892972195f65e9f363b19b79050597 | 94 | adoc | AsciiDoc | mR/target/snippets/GET2/http-request.adoc | 2tim/myR-java | 25c169a248c2e025244b841887fa420bc749c278 | [
"Apache-2.0"
] | null | null | null | mR/target/snippets/GET2/http-request.adoc | 2tim/myR-java | 25c169a248c2e025244b841887fa420bc749c278 | [
"Apache-2.0"
] | null | null | null | mR/target/snippets/GET2/http-request.adoc | 2tim/myR-java | 25c169a248c2e025244b841887fa420bc749c278 | [
"Apache-2.0"
] | null | null | null | [source,http,options="nowrap"]
----
GET /products/13860429 HTTP/1.1
Host: localhost:8080
---- | 15.666667 | 31 | 0.691489 |
ccf0602beb2cb339181d25968447cb793e41e825 | 4,016 | adoc | AsciiDoc | modules/ROOT/pages/Kubernetes_Module/Lab4.adoc | VadimDez/learning-cloudnative-101 | f77fedbd6c8350e811cd11487bdce3544165e6bf | [
"Apache-2.0"
] | null | null | null | modules/ROOT/pages/Kubernetes_Module/Lab4.adoc | VadimDez/learning-cloudnative-101 | f77fedbd6c8350e811cd11487bdce3544165e6bf | [
"Apache-2.0"
] | null | null | null | modules/ROOT/pages/Kubernetes_Module/Lab4.adoc | VadimDez/learning-cloudnative-101 | f77fedbd6c8350e811cd11487bdce3544165e6bf | [
"Apache-2.0"
] | null | null | null | = Lab 4 - Rollout Updates for Kubernetes Application
Bryan Kribbs <bakribbs@us.ibm.com>
v1.0, 2019-05-28
:toc:
:imagesdir: ../../assets/images
== Introduction
In this lab, you will learn how to scale an application and roll out updates on Kubernetes using OpenShift with no downtime.
== Prerequisites
- You will need an IBM Account for https://cloud.ibm.com/[IBM Cloud]
- Downloaded and installed the IBM Cloud CLI at https://cloud.ibm.com/docs/cli?topic=cloud-cli-getting-started#step1-install-idt[IBM Cloud CLI Installation]. Follow the instructions for your computer's operating system.
- Downloaded and installed the OpenShift CLI at https://OpenShift.io/docs/tasks/tools/install-kubectl/[OpenShift CLI Installation]. Follow the instructions for your computer's operating system.
- Downloaded and installed the Kubernetes CLI at https://kubernetes.io/docs/tasks/tools/install-kubectl/[Kubernetes CLI Installation]. Follow the instructions for your computer's operating system.
- Installing `watch` command - For Mac run `brew install watch`. No install for Windows
== Scaling out your app
Now that we have successfully deployed an application on Kubernetes, the next step is to scale up the application to properly handle more traffic and have some redundancy to prevent any downtime.
Let's first check our current deployment running on OpenShift. (If there is no deployment refer to xref:Kubernetes_Module/Lab3.adoc[Lab 3:Deploy Kubernetes Application])
.Check Deployment
[source, bash]
----
oc get deployment greetings-deployment
----
image::current-deployment.png[]
We can see that only 1 replica of the application is currently desired and running. Now let's raise the desired count to 3 replicas and see how Kubernetes automatically deploys more replicas to meet the desired state, using the `oc scale` command.
.More Replicas
[source, bash]
----
oc scale deployment greetings-deployment --replicas 3
----
The command above will scale the sample application to 3 replicas and you can verify the replicas by running one of the following:
.Verify Replicas
[source, bash]
----
oc get deployment greetings-deployment
----
image::scaled-deploy.png[]
.Verify Pods
[source, bash]
----
oc get pods
----
image::scaled-pods.png[]
Replication is easy with Kubernetes and can be changed with one simple command at any time. Next, let's look at updating the replicas with a new version of the code.
== Update your app
Updating applications with little to no downtime is a huge advantage of using Kubernetes. In this next section we are going to walk through how easy it is.
The first step is changing the docker image to the new version (v2) that we have pushed to Docker Hub and begin the rolling update:
.Updating the Deployment
[source, bash]
----
oc set image deployment/greetings-deployment greeting=ibmcase/greeting:v2
----
Next, we want to monitor the rollout and watch as containers are created and deployed.
.Monitor the Rollout
[source, bash]
----
oc rollout status deployment/greetings-deployment
----
Finally, once the rollout is complete we want to recheck our application and see the new changes that were pushed.
First we need to get the `route` in order to curl the application
.Get Route
[source, bash]
----
oc get route
----
Next, copy the `PATH`; it will look similar to `greeting-service-deploy-sample.gse-cloud-native-fa9ee67c9ab6a7791435450358e564cc-0001.us-east.containers.appdomain.cloud`.
Finally we want to run a `curl` against the copied path with `/greeting` appended on the end to see the response.
.Curl
[source, bash]
----
curl PATH/greeting
----
You should get a response similar to: `Welcome to Version 2 of the Cloud Native Bootcamp Application !!!`
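The copy-and-append step can also be scripted. A hedged sketch (the `greeting_url` helper, the route name, and the host value below are assumptions for illustration, not part of the lab):

```shell
# Helper that appends the endpoint to a route host
greeting_url() {
  printf 'http://%s/greeting' "$1"
}

# In a real session the host would come from the route, e.g.:
#   HOST=$(oc get route greeting-service -o jsonpath='{.spec.host}')
# The value below is a hypothetical stand-in.
HOST="greeting-service-deploy-sample.example.containers.appdomain.cloud"
echo "curl $(greeting_url "$HOST")"
```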
== Conclusion
You have successfully completed this lab! Let's take a look at what you learned and did today:
- Scaled a deployed application to 3 replicas.
- Accessed the application through a CURL command.
- Updated a deployed application to a newer image version. | 38.247619 | 271 | 0.772161 |
81ea8ceb6d988c2c39a09933352fcd93e8037809 | 452 | adoc | AsciiDoc | modules/serverless-rn-template-module.adoc | derekwaynecarr/openshift-docs | c7044364f8310387ae736cfca308681cdd7c00c9 | [
"Apache-2.0"
] | 1 | 2021-07-01T06:39:52.000Z | 2021-07-01T06:39:52.000Z | modules/serverless-rn-template-module.adoc | derekwaynecarr/openshift-docs | c7044364f8310387ae736cfca308681cdd7c00c9 | [
"Apache-2.0"
] | 1 | 2019-11-15T02:09:31.000Z | 2019-11-15T02:09:31.000Z | modules/serverless-rn-template-module.adoc | derekwaynecarr/openshift-docs | c7044364f8310387ae736cfca308681cdd7c00c9 | [
"Apache-2.0"
] | null | null | null | // Module included in the following assemblies:
//
// * serverless/release-notes.adoc
[id="serverless-rn-<version>_{context}"]
//update the <version> to match the filename
= Release Notes for Red Hat {ServerlessProductName} <VERSION>
// add a version, e.g. Technology Preview 1.0.0
[id="new-features_<VERSION>{context}"]
== New features
[id="fixed-issues_<VERSION>{context}"]
== Fixed issues
[id="known-issues_<VERSION>{context}"]
== Known issues
| 23.789474 | 61 | 0.721239 |
55dc9d665570214e2a2bcbc1d9538d5d66a849d3 | 135 | adoc | AsciiDoc | kafka-streams-internals-KTableMaterializedValueGetterSupplier.adoc | jaceklaskowski/mastering-kafka-streams-book | 894cce24a9db95fc197d1fda092fa7bb048aea0a | [
"Apache-2.0"
] | 70 | 2018-01-02T00:17:13.000Z | 2021-09-27T19:51:02.000Z | kafka-streams-internals-KTableMaterializedValueGetterSupplier.adoc | jaceklaskowski/mastering-kafka-streams-book | 894cce24a9db95fc197d1fda092fa7bb048aea0a | [
"Apache-2.0"
] | null | null | null | kafka-streams-internals-KTableMaterializedValueGetterSupplier.adoc | jaceklaskowski/mastering-kafka-streams-book | 894cce24a9db95fc197d1fda092fa7bb048aea0a | [
"Apache-2.0"
] | 31 | 2018-01-02T13:26:07.000Z | 2022-01-11T16:56:03.000Z | == [[KTableMaterializedValueGetterSupplier]] KTableMaterializedValueGetterSupplier
`KTableMaterializedValueGetterSupplier` is...FIXME
| 33.75 | 82 | 0.874074 |
929a7628c93f1856ad0373570b07816312aaa1ce | 1,598 | adoc | AsciiDoc | src/main/docs/guide/ioc/springBeans.adoc | dancegit/micronaut-core | 762cfe087d6ab68e4c7fa52d770fefeddb30f8cd | [
"Apache-2.0"
] | 1 | 2020-06-20T02:01:35.000Z | 2020-06-20T02:01:35.000Z | src/main/docs/guide/ioc/springBeans.adoc | dancegit/micronaut-core | 762cfe087d6ab68e4c7fa52d770fefeddb30f8cd | [
"Apache-2.0"
] | 969 | 2020-09-18T02:03:38.000Z | 2022-03-31T02:04:27.000Z | src/main/docs/guide/ioc/springBeans.adoc | dancegit/micronaut-core | 762cfe087d6ab68e4c7fa52d770fefeddb30f8cd | [
"Apache-2.0"
] | 1 | 2021-01-29T10:12:49.000Z | 2021-01-29T10:12:49.000Z | // Doesn't seem appropriate to multi-lang these samples TODO check
The link:{api}/io/micronaut/spring/beans/MicronautBeanProcessor.html[MicronautBeanProcessor]
class is a `BeanFactoryPostProcessor` which will add Micronaut beans to a
Spring Application Context. An instance of `MicronautBeanProcessor` should
be added to the Spring Application Context. `MicronautBeanProcessor` requires
a constructor parameter which represents a list of the types of
Micronaut beans which should be added to the Spring Application Context. The
processor may be used in any Spring application. As an example, a Grails 3
application could take advantage of `MicronautBeanProcessor` to add all of the
Micronaut HTTP Client beans to the Spring Application Context with something
like the folowing:
```groovy
// grails-app/conf/spring/resources.groovy
import io.micronaut.spring.beans.MicronautBeanProcessor
import io.micronaut.http.client.annotation.Client
beans = {
httpClientBeanProcessor MicronautBeanProcessor, Client
}
```
Multiple types may be specified:
```groovy
// grails-app/conf/spring/resources.groovy
import io.micronaut.spring.beans.MicronautBeanProcessor
import io.micronaut.http.client.annotation.Client
import com.sample.Widget
beans = {
httpClientBeanProcessor MicronautBeanProcessor, [Client, Widget]
}
```
In a non-Grails application something similar may be specified using
any of Spring's bean definition styles:
[source, groovy]
----
include::spring/src/test/groovy/io/micronaut/spring/beans/MicronautBeanProcessorByAnnotationTypeSpec.groovy[tags=springconfig, indent=0]
----
| 37.162791 | 136 | 0.813517 |
1e35ccf7e36c051e136a27d99cdcc9c59d406c4b | 8,075 | adoc | AsciiDoc | docs/team/buddhavineeth.adoc | buddhavineeth/main | 6b7b2f11d28cb4ce4e837219072d1e4274fb439f | [
"MIT"
] | 1 | 2020-03-18T10:13:47.000Z | 2020-03-18T10:13:47.000Z | docs/team/buddhavineeth.adoc | buddhavineeth/main | 6b7b2f11d28cb4ce4e837219072d1e4274fb439f | [
"MIT"
] | 140 | 2020-02-19T10:58:10.000Z | 2020-04-17T17:37:55.000Z | docs/team/buddhavineeth.adoc | buddhavineeth/main | 6b7b2f11d28cb4ce4e837219072d1e4274fb439f | [
"MIT"
] | 9 | 2020-02-18T09:02:52.000Z | 2020-11-09T14:08:23.000Z | = Vineeth Buddha - Project Portfolio
:site-section: AboutUs
:imagesDir: ../images
:stylesDir: ../stylesheets
== PROJECT: Calgo
=== Overview
This portfolio page highlights some of my contributions to Calgo - a Software Engineering project developed in my second year of undergraduate studies in the National University of Singapore.
=== About the Team
We are 5 Year 2 Computer Science undergraduates reading CS2103T: Software Engineering.
==== About the Project
Calgo is an all-in-one personal meal tracking assistant which seeks to encourage a healthy lifestyle among its users. It allows users to not only have a convenient nutritional record of all their favourite food entries, but also track, monitor, and plan their food consumption. Moreover, the team has come up with a plethora of user-centric features to make Calgo well-suited to provide users with both convenience and utility.
My team was tasked with morphing an existing https://github.com/nus-cs2103-AY1920S1/addressbook-level3[Address Book Level 3 (AB3) project] into a new product via Brownfield software development.
We were therefore required to use the existing AB3 project as Calgo's project foundation, to create a desktop application supporting the Command Line Interface. This was to target users who prefer typing but also enjoy the benefits of a Graphical User Interface.
With all of us being food lovers and realising a greater societal need for healthy eating, Calgo was born.
== Summary of contributions
* *Major enhancement*: I implemented the *generation of useful statistics and key insights* via the `report` command.
** What it does: The feature provides the user with a statistical summary of the food he/she has consumed on a given day. It also generates personalised insights on how the user
can improve his/her eating habits and whether his/her favourite food item should continue to be part of the diet.
** Justification: The feature improves Calgo significantly because a user can now go beyond just tracking his/her daily meal consumption. The user can now obtain insights on
how to improve their eating habits and find out the food that contributes the most to their daily calorie count. They also no longer have to spend lots of time calculating the nutritional content they consumed in a day because
the statistics section does that for them instantly. This makes Calgo much more than a meal tracker. It helps the user build a healthy lifestyle through eating.
** Highlights: This enhancement requires an in-depth understanding of the Logic and Model components' architecture and a good understanding of String formatting. It also makes use of a sophisticated sorting mechanism
to decide what the favourite food of the user is in the past week.
* *Major enhancement*: I also implemented the `goal` command.
** What it does: The feature helps the user to set a daily calorie goal. This goal is also reflected in Calgo's GUI, so that the user is always reminded of how many calories he/she is left with whenever they consume food.
** Justification: Our target user is health-conscious and wants to build a healthy lifestyle. As that is a vague goal, it often is hard to achieve. That is why the `goal` command
is created, to help the user set clear objectives for each day and chunk their big long-term goal of eating healthily into smaller daily goals. This allows them to see noticeable progress too and is motivating. The goal is also
used to generate personalised insights in the abovementioned `report` command.
** Highlights: For this enhancement, I worked on the front-end, back-end logic, storage of the goal and unit testing as well. This required a deep understanding of all aspects of the project.
* *Minor enhancement*: I helped in the redesigning of the GUI by adding the Goal Displays, Remaining Calorie Count Display, creating labels and helping Janice with
the graph feature.
* *Code contributed*: You can view my functional code and test code contributions to Calgo https://nus-cs2103-ay1920s2.github.io/tp-dashboard/#search=&sort=groupTitle&sortWithin=title&since=2020-02-14&timeframe=commit&mergegroup=false&groupSelect=groupByRepos&breakdown=false&tabOpen=true&tabType=authorship&tabAuthor=buddhavineeth&tabRepo=AY1920S2-CS2103T-F11-1%2Fmain%5Bmaster%5D[here].
<<<
* *Other contributions*:
** Project management:
*** As the in-charge of Deadlines and Deliverables, I ensured the team was on task and was putting in consistent effort. I also managed all releases `v1.1` - `v1.4` (4 releases) on GitHub. https://github.com/AY1920S2-CS2103T-F11-1/main/releases[1].
*** I maintained the team's GitHub issue tracker and set up project dashboards and ensured everybody was assigned at least one user story to work on. Furthermore, the user stories were split into multiple milestones to ensure we worked incrementally. https://github.com/AY1920S2-CS2103T-F11-1/main/milestones?state=closed[2].
*** Contributed to product ideation, brainstorming key features and ensuring that everyone has equal responsibilities.
** Team Documentation:
*** Wrote the sections for `clear`, `report` and `goal` commands in Calgo's User Guide.
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/161[#161],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/169[#169],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/171[#171],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/288[#288].
*** Wrote sections for Generating statistics and insights, setting daily calorie goals for Developer Guide. https://github.com/AY1920S2-CS2103T-F11-1/main/pull/161[#161],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/292[#292],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/298[#298].
*** Vetted through User Guide and Developer Guide. https://github.com/AY1920S2-CS2103T-F11-1/main/pull/307[#307].
*** Refined Calgo's team pages to be more user-centric (especially README.adoc). https://github.com/AY1920S2-CS2103T-F11-1/main/pull/133[#133],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/161[#161],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/242[#242],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/277[#277],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/281[#281],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/283[#283],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/292[#292],
https://github.com/AY1920S2-CS2103T-F11-1/main/pull/298[#298],
** Beyond the team:
*** Peer testing and bug reporting:
https://github.com/buddhavineeth/ped/issues/1[#1],
https://github.com/buddhavineeth/ped/issues/2[#2],
https://github.com/buddhavineeth/ped/issues/3[#3],
https://github.com/buddhavineeth/ped/issues/4[#4],
https://github.com/buddhavineeth/ped/issues/5[#5],
https://github.com/buddhavineeth/ped/issues/6[#6],
https://github.com/buddhavineeth/ped/issues/7[#7],
https://github.com/buddhavineeth/ped/issues/8[#8],
https://github.com/buddhavineeth/ped/issues/9[#9],
https://github.com/buddhavineeth/ped/issues/10[#10],
https://github.com/buddhavineeth/ped/issues/11[#11],
https://github.com/buddhavineeth/ped/issues/12[#12],
https://github.com/buddhavineeth/ped/issues/13[#13],
https://github.com/buddhavineeth/ped/issues/14[#14],
https://github.com/buddhavineeth/ped/issues/15[#15].
*** Providing consistent feedback to peer projects on how to enhace their features. For instance, providing advice to
Team `F11-3` during tutorials and after the practice demo round.
== Contributions to the User Guide
|===
|_Given below are sections I contributed to the User Guide. They showcase my ability to write documentation targeting end-users._
|===
include::../UserGuide.adoc[tag=clearCommandUG]
include::../UserGuide.adoc[tag=reportCommandUG]
include::../UserGuide.adoc[tag=goalCommandUG]
== Contributions to the Developer Guide
|===
|_Given below are sections I contributed to the Developer Guide. They showcase my ability to write technical documentation and the technical depth of my contributions to the project._
|===
include::../DeveloperGuide.adoc[tag=reportCommandDG]
include::../DeveloperGuide.adoc[tag=goalCommandDG]
[[mainUserProfile]]
= User Profile
Refer to User Profile building block documentation - https://eoepca.github.io/um-user-profile/master
== Description
The User Profile building block serves to encapsulate profile actions (such as edit or removal) into a web interface, while at the same time providing the infrastructure upon which to implement other building blocks, such as Billing and Licensing.
== Context
The following context diagram identifies the major components interfacing with the User Profile: +
([red]#RED# ~ consumers of User Profile provided interfaces, [blue]#BLUE# ~ providers of interfaces consumed by the User Profile)
[.text-center]
[#img_user-profileContext,reftext='{figure-caption} {counter:figure-num}']
.User Profile context diagram
[plantuml, user-profile-context, png]
....
include::include/user-profile-context.wsd[]
....
In order to support the Billing Service, the User Profile building block identifies users (keeping a reference to their home IdP), assigns them Billing Identities, Service API keys and License keys, and records Terms and Conditions acceptance. It is a persistence service with interfaces that are queried by other building blocks (License Manager, Billing Service, Policy Decision Point) and modified by both the License Manager and the Login Service (during creation of a new user profile or assignment of new Licenses).
== Provided Interfaces
[.text-center]
[#img_user-profileProvidedInterfaces,reftext='{figure-caption} {counter:figure-num}']
.User Profile provided interfaces
[plantuml, user-profile-provided-interfaces, png]
....
include::include/user-profile-provided-interfaces.wsd[]
....
=== Profile Management Web Interface
A web service is made available for users to perform actions related to the building block, such as account removal.
==== Applicable Standards
No applicable standards besides Hypertext Transfer Protocol (HTTP) as the means to serve the information to the End-User.
==== Endpoints
URL: <um-login-service>/web_ui
== Required Interfaces
[.text-center]
[#img_user-profileRequiredInterfaces,reftext='{figure-caption} {counter:figure-num}']
.User Profile required interfaces
[plantuml, user-profile-required-interfaces, png]
....
include::include/user-profile-required-interfaces.wsd[]
....
=== OIDC Authentication
The User Profile utilizes an OIDC Client implementation to consume OIDC endpoints, authenticate itself as a trusted platform component, generate JWT documents containing End-User information, and secure its Platform Resource API endpoints.
==== Applicable Standards
* IETF - OpenID Connect 1.0
==== Remote Endpoints
URL: <um-login-service>/.well-known/openid-configuration
The Login Service exposes the standard discovery document, a JSON showcasing all the necessary metadata and endpoint enumeration for a client application to dynamically load all the needed endpoints.
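As an illustration, an abridged discovery document might look like the following sketch (the endpoint paths shown here are assumptions; the actual paths depend on the Login Service implementation):

[source,json]
----
{
  "issuer": "https://<um-login-service>",
  "authorization_endpoint": "https://<um-login-service>/authorize",
  "token_endpoint": "https://<um-login-service>/token",
  "userinfo_endpoint": "https://<um-login-service>/userinfo",
  "jwks_uri": "https://<um-login-service>/jwks",
  "response_types_supported": ["code", "id_token"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
----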
=== SCIM Identity Management
The User Profile utilizes a SCIM Client implementation to consume SCIM endpoints, which are designed to make managing user identities in cloud-based applications and services easier thanks to their well-defined schemas and endpoints.
==== Applicable Standards
* IETF RFC 7644 - System for Cross-domain Identity Management
==== Remote Endpoints
URL: /.well-known/scim-configuration
This well-known endpoint allows clients to discover all relevant SCIM operations. Although this strategy is not enforced, serving a well-known endpoint makes client integration more familiar to developers who are used to OIDC Client integration.
=== Email Service
The User Profile building block also implements an SMTP Client that sends email notifications; these notifications serve as a confirmation step in an account removal scenario.
==== Applicable Standards
* IETF RFC 2822 - Internet Message Format
==== Remote Endpoints
URL: <um-login-service>/confirmation_mail
== Example Scenarios
=== Attribute Edition
[.text-center]
[#img_userProfileEdit,reftext='{figure-caption} {counter:figure-num}']
.User Profile attribute editing
[plantuml, user-profile-edit, png]
....
include::include/user-profile-edit.wsd[]
....
=== Account Deletion
[.text-center]
[#img_userProfileRemoval,reftext='{figure-caption} {counter:figure-num}']
.User Profile account removal
[plantuml, user-profile-removal, png]
....
include::include/user-profile-removal.wsd[]
....
= Sink: Consume Records from Kafka
[abstract]
This chapter describes the different ingest strategies, when to use them, and when to avoid them.
== Important Information About Batch Imports Into Neo4j
Because writing relationships requires taking a lock on both incident nodes, when doing high-performance batch inserts into Neo4j we generally want a single-threaded batch process and not parallelized load processes. If you parallelize the loads without careful design, you will mostly get some combination of thread waiting and lock exceptions/errors.
As a result, the general approach should be to serialize batches of records into a stream of "one-after-the-other" transactions to maximize throughput.
== Overview
image::unwind-consume.png[align="center"]
This shows the Cypher ingest strategy. This is a high-level view though - we take messages from Kafka and write them to Neo4j, but it isn't the whole story.
== General Principles
Before you get fancy with import methods!
image::transformer-architecture.png[align="center"]
* **Use Kafka! Don't Use Neo4j for Everything**: In general, it is better to use Kafka and things like KSQL to re-shape, transform, and manipulate a stream *before* it gets to Neo4j. At this stage, you have more flexible options and overall better performance. Look back at that "Graph ETL" bit. Try to make neo4j-streams only do the transform and load parts, as they are inevitable for graph.
* **Don't Do Complex ETL in Cypher**: Seek to avoid situations where you are doing complex transformations of inbound messages to Neo4j. It is possible to do with pure Cypher, but basically you will be pushing a "Kafka message cleanup" task and delegating that to Neo4j to perform, which is not a good strategy if you can avoid it.
* **KISS Principle**: The simpler the input format, the better / cleaner more evolvable the architecture, and the less work Neo4j has to do.
* **Keep Topics Consistent**: Topics which contain a variety of differently formatted messages should be avoided whenever possible. They will be very difficult to deal with, and you shouldn't attempt it unless you absolutely have to.
This can get challenging in constrained customer environments where you have no control over the topics or their contents, so this is just presented as a set of principles, not hard rules.
== Change Data Capture - CDC
* Requires a specific debezium JSON format that specifies what changed in a transaction from another system
* Extremely important: for CDC usages, message ordering matters, and so a single Kafka topic partition is advised -- but because of this, throughput can be impacted because parallelization isn't an option.
* Contains two sub-strategies: SourceID and Schema strategies, which apply to particular CDC cases, and allow you to merge data in particular ways. These sub-strategies are only interesting if you have Debezium format data.
* Use When:
** Trying to mirror changes from another database that knows how to publish in Debezium format
* Avoid When:
** Custom message formats over topics, or messages coming from other applications and not databases.
* Other considerations & constraints:
** SourceId may be vulnerable to internal ID reuse, which can result from deletes or backup restores. Use caution when using this approach unless you have hard constraints on ID uniqueness.
** The Schema strategy relies on unicity constraints, which can't be used on relationship properties. That means relationship keys are effectively the source node + the destination node + the relationship type, which in turn means that the model may not have multiple edges of the same type between 2 nodes.
== The Pattern Strategy
* This contains two sub-strategies: The Node and the Relationship strategy.
* The pattern strategy always assumes that the inbound message represents 1 and only 1 node or relationship.
* All of the specifics of the sub-strategies (Node & Relationship) deal with which parts to import, and to what node label or relationship type.
* Use When:
** Best for high volume, very granular import, where you want to extract particular attributes out of the message.
* Avoid When:
** You have big complicated JSON documents that you don't control that represent a mixture of nodes & edges.
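As an illustrative configuration sketch (the property names follow the neo4j-streams pattern strategy; the topic names, labels and field names are assumptions):

[source,properties]
----
# one node per message on the "customers" topic; customerId is the key field
streams.sink.topic.pattern.node.customers=(:Customer{!customerId,name,email})
# one relationship per message on the "orders" topic
streams.sink.topic.pattern.relationship.orders=(:Customer{!customerId})-[:BOUGHT{price}]->(:Product{!productId})
----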
== The CUD Strategy
* This is similar to the Debezium CDC format. CUD stands for "Create, Update, Delete". It is a method of "sending commands" to Neo4j through Kafka to create, update, or delete a node or a relationship
* Use When:
** This is a good option when you can take a complex upstream message and reform it (within Kafka) to a bunch of simple commands. It's a good technique for "taking something complicated and making it simple"
** You need to stream back changes from a read replica to a core. For example, if you run a GDS algo on a read replica and then want to update a property to set a community identifier, you can't do that on a read replica. But you can publish CUD messages from the read replica from Kafka, and then have the cores set to ingest that on a particular topic. In other words -- this is a method for inter-Neo4j cluster communication.
* Avoid When:
** This strategy requires a tightly constrained data format, so it is not appropriate when you don't control the topic.
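A CUD message is a small JSON command. For example, a sketch of a message that merges a single node (the label, key and property names are illustrative):

[source,json]
----
{
  "op": "merge",
  "type": "node",
  "labels": ["Person"],
  "ids": {"userId": 42},
  "properties": {"communityId": 7}
}
----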
== The Cypher Template Strategy
* This is the generic catch-all strategy for processing any inbound message with any cypher statement.
* It is the most powerful and also the hardest to configure because it requires that you code cypher in a configuration file.
* Use When:
** It is best when you can't control the input topic, and when you need to transform the input data as opposed to just loading it into a structure.
* Avoid When:
** Cypher queries grow to be long and complex, or need to change frequently. This is a sign that too much extraction & transformation work is being pushed to Neo4j, and a stream transformer is needed.
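As a sketch, a Cypher template is bound to a topic in the configuration, with each consumed record available as `event` (the topic name and field names here are assumptions):

[source,properties]
----
streams.sink.topic.cypher.my-topic=MERGE (p:Person {id: event.id}) SET p.name = event.name
----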
== Parallelism
This is a topic to be approached carefully because of Neo4j's lock strategies. When writing a relationship, Neo4j takes a lock on both incident nodes. This means that when loading in parallel, an attempt to set properties on a node in one transaction while at the same time adding a relationship in another transaction can result in deadlocks and failed transactions. For this reason, if any parallelism strategy is adopted, it has to be carefully designed to avoid these deadlocks.
A parallelization setting is available in the Kafka Connect worker only. When run as a plugin, **the code always operates in sequential batch transactions, connected to individual polls of the Kafka client**.
An individual kafka client thread proceeds like this:
* Grab records from Kafka
* Formulate a batch of "events"
* Write those to Neo4j, generally of the form `UNWIND events AS event`
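For a batch of simple `{id, name}` events (the field names are assumed for illustration), such a statement might look like:

[source,cypher]
----
UNWIND events AS event
MERGE (p:Person {id: event.id})
SET p.name = event.name
----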
See information in the Kafka documentation on how the `poll()` operation works for full details.
When the Kafka Connect client is configured to run in parallel, effectively there are multiple java threads each doing the same thing. As a result, there can be more than one transaction in flight at a given time. This raises the potential for higher throughput, but also the potential for deadlocks.
== Loading Many Kinds of Data At Scale
If we want to load 8 types of nodes & relationships in a performant manner, how do we do that?
**General advice**
* Break the data you're loading into multiple different topics of 1 partition each, rather than having 1 big topic of mixed / large content
* OPTIONAL:
** Consider configuring Neo4j-streams to run in parallel if in Kafka Connect
** If you do, take care to configure your topics to talk about disjoint parts of the graph, so that you don't run into concurrent locking issues with Neo4j writes (if possible)
* Use stream transformers and KSQL techniques to craft messages into a format where you can use one of the other ingest strategies other than Cypher templates, to simplify the code you need to write, and to avoid needing to cycle the cluster due to cypher query changes.
* Experiment with batch sizing & memory sizing until you get good throughput
# Concepts
:index-group: Unrevised
:jbake-date: 2018-12-05
:jbake-type: page
:jbake-status: published
OpenEJB was founded on the idea that it would be embedded into
third-party environments whom would likely already have three things:
* their one "server" platform with existing clients and protocols
* their own way to configure their platform
* existing services like TransactionManager, Security, and Connector
Thus the focus of OpenEJB was to create an EJB implementation that would
be easily embeddible, configurable, and customizable.
Part of achieving that is a drive to be as simple as possible as to not
over-define and therefore restrict the ability to be embeddible,
configurable and customizable. Smaller third-party environments could
easily 'downscale' OpenEJB in their integrations by replacing standard
components with lighter implementations or removing them all together
and larger environments could 'upscale' OpenEJB by replacing and adding
heavier implementations of those standard components likely tailored to
their systems and infrastructure.
Container and Server are mentioned in the EJB spec as being separate
things but are never defined formally. In our world Containers, which
implement the basic component contract and lifecycle of a bean are not
coupled to any particular Server, which has the job of providing a
naming service and providing a way for it's clients to reference and
invoke components (beans) hosted in Containers. Because Containers have
no dependence at all on any Server, you can run OpenEJB without any Server
at all - in an embedded environment, for example - without any work or any
extra overhead. Similarly you can add as many new Server components as
you want without ever having to modify any Containers.
There is a very strong pluggability focus in OpenEJB as it was always
intended to be embedded and customized in other environments. As a
result all Containers are pluggable, isolated from each other, and no
one Container is bound to another Container and therefore removing or
adding a Container has no repercussions on the other Containers in the
system. TransactionManager, SecurityService and Connector also pluggable
and are services exposed to Containers. A Container may not be dependent
on specific implementations of those services. Service Providers define
what services they are offering (Container, Connector, Security,
Transaction, etc.) in a file they place in their jar called
service-jar.xml.
The service-jar.xml should be placed not in the META-INF but somewhere
in your package hierarchy (ours is in
/org/apache/openejb/service-jar.xml) which allows the services in your
service-jar.xml to be referenced by name (such as
DefaultStatefulContainer) or more specifically by package and id (such
as org.apache.openejb#DefaultStatefulContainer).
The same implementation of a service can be declared several times in a
service-jar.xml with different ids. This allows for you to setup several
several different profiles or pre-configured versions of the services
you provide each with a different name and different set of default
values for its properties.
In your openejb.conf file when you declare Containers and Connectors, we
are actually hooking you up with Service Providers automatically. You
get what is in the org/apache/openejb/service-jar.xml by default, but
you are able to point specifically to a specific Service Provider by the
'provider' attribute on the Container, Connector, TransactionManager,
SecurityService, etc. elements of the openejb.conf file. When you
declare a service (Container, Connector, etc.) in your openejb.conf file
the properties you supply override the properties supplied by the
Service Provider, thus you only need to specify the properties you'd
like to change and can have your openejb.conf file as large or as small
as you would like it. The act of doing this can be thought of as
essentially instantiating the Service Provider and configuring that
instance for inclusion in the runtime system.
For example Container(id=NoTimeoutStatefulContainer,
provider=DefaultStatefulContainer) could be declared with it's Timeout
property set to 0 for never, and a
Container(id=ShortTimeoutStatefulContainer,
provider=DefaultStatefulContainer) could be declared with it's Timeout
property set to 15 minutes. Both would be instances of the
DefaultStatefulContainer Service Provider which is a service of type
Container.
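Declared in openejb.conf, that example might look like the following sketch (the element syntax follows the openejb.conf convention of supplying properties in the element body; the exact timeout property name depends on the container implementation):

[source,xml]
----
<Container id="NoTimeoutStatefulContainer" provider="DefaultStatefulContainer">
  Timeout = 0
</Container>

<Container id="ShortTimeoutStatefulContainer" provider="DefaultStatefulContainer">
  Timeout = 15 minutes
</Container>
----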
* Make a new project folder
+
[,js]
----
const fs = require('fs')
fs.mkdirSync('project')
----
* Initialize a new repository
+
[,js]
----
const git = require('isomorphic-git')
const repo = { fs, dir: 'project' }
await git.init(repo)
----
+
This is equivalent to the following command:
$ git init project
* Create boilerplate files
+
|===
| Filename | Purpose
| README.adoc
| Introduces the project
| .gitignore
| Ignores non-versioned files
|===
* Create source files
+
We can't help you here.
This part you'll have to do on your own.
+
____
Best of luck!
____
= Ryan Chew - Project Portfolio
:site-section: AboutUs
:imagesDir: ../images
:stylesDir: ../stylesheets
== PROJECT: CaloFit
---
== Overview
My team of 4 software engineering students and I were tasked with enhancing a basic desktop addressbook application(AddressBook - Level 3) for our Software Engineering project. We chose to morph
it into a calorie tracker cum food database system called CaloFit. This enhanced application allows health-conscious people or those who are aiming for a diet to set their calorie budget for the day; manage the meals that they take; find dishes based on keywords or their remaining calorie budget; and get data about their calorie intake progress through a report.
CaloFit is a desktop application for tracking the calories that the user has taken from his or her meals over the course of using the application. +
The user interacts with CaloFit using a Command Line Interface(CLI) that is represented by a box near the top of the application screen. This is where the user can type in their commands and press "Enter" on their keyboards to execute them. +
It has a Graphical User Interface(GUI) created with JavaFX. The GUI is the main display that the user sees upon starting up CaloFit.
This is what our project looks like as shown in Figure 1 below:
image::PPPMain.png[]
Figure 1: The GUI for CaloFit.
My roles were to handle the backend refactoring to morph the application, and to do the budget-related features.
In the following sections, I will cover those changes and features, together with the relevant updates to
the user and developer guides.
Note the following symbols and formatting used in this document:
[NOTE]
This symbol indicates important information.
`report` A grey highlight(called a mark-up) indicates that this is a command that can be inputted into the command line and executed by the application.
== Summary of contributions
This section shows a summary of my coding, documentation, and other helpful contributions to the
team project.
* *Major enhancement*: added a *budget bar display*.
** What it does:
Visually displays the user's meals eaten today, as a progress-like bar. Each segment corresponds to one meal, and is sized accordingly.
** Justification:
This feature gives immediate visual feedback to users, of the meals they have eaten today, and how much more they can eat.
** Highlights:
The bar display is the most obvious user-facing result of the model refactoring process.
** Credits:
The https://github.com/controlsfx/controlsfx[ControlsFX library] was used to provide the display of the segmented progress bar.
* *Minor enhancement*: added a *`set` command to control total budget*.
** What it does: This command allows the user to customize and set their (expected) daily calorie intake.
** Justification: This allows each user to tailor the app for their own diet plans.
* *Minor enhancement*: added *saving of the user's historical calorie budget data to a JSON (Javascript Object Notation) file*.
** What it does: The user's historical calorie budget data into CaloFit will be saved into a JSON file, allowing the user's calorie budget history to be loaded upon starting up CaloFit.
** Justification: This feature is necessary to preserve the user's past and current calorie budget/limit, both to reference in the budget bar, and be tracked in the statistics.
** Credits: The methods and structure of converting a Class into a JSON file was taken from the original AddressBook-Level 3 and refactored for the purpose mentioned above.
* *Code contributed*: [https://nus-cs2103-ay1920s1.github.io/tp-dashboard/#search=iltep64&sort=groupTitle&sortWithin=title&since=2019-09-06&timeframe=commit&mergegroup=false&groupSelect=groupByRepos&breakdown=false&tabOpen=true&tabType=authorship&tabAuthor=FelixNWJ&tabRepo=AY1920S1-CS2103T-W11-4%2Fmain%5Bmaster%5D[Functional code]]
* *Other contributions*:
** Project management:
*** In charge of overall software architecture
*** Was responsible for refactoring the whole model from the original AddressBook 3 model,
into a model suitable for CaloFit.
**** This involved many bulk and manual rename operations upon the classes, and continuous re-testing to ensure integrity.
**** I also added many more reactive properties leveraging JavaBean's/JavaFx's Observable mechanism.
***** This allows the UI to be able to track model updates, without incurring a direct dependency from the Model to the UI component.
**** I also wrote utility code in the form of *_ObservableListUtils_* and *_ObservableUtils_* in order to ease implementation
of chained reactive properties, especially with those involving manipulations of *_ObservableList_* s.
** Community:
*** Reported bugs and suggestions for other teams in the class (https://github.com/iltep64/ped/issues[List of bugs reported])
** Tools:
*** Introduced https://site.mockito.org/[Mockito], a mocking library to help test implementation.
== Contributions to the User Guide
With the new purpose of the app, we had to change the AB3 user guide,
with customized instructions for our application.
The following are excerpts from our *CaloFit User Guide*.
They showcase my ability to write documentation targeting end-users.
|===
|_Given below are sections I contributed to the User Guide, regarding the budget bar:_
|===
include::../UserGuide.adoc[tag=ui]
|===
|_Given below are sections I contributed to the User Guide, for the `set` command:_
|===
include::../UserGuide.adoc[tag=setcmd]
|===
|_Given below are sections I contributed to the User Guide, about valid inputs._
|===
include::../UserGuide.adoc[tag=syntax]
== Contributions to the Developer Guide
|===
|_Given below are sections I contributed to the Developer Guide for the budget bar feature._
|===
include::../DeveloperGuide.adoc[tag=model]
|===
|_Given below are sections I contributed to the Developer Guide for the budget bar feature._
|_They showcase my ability to write technical documentation and the technical depth of my contributions to the project._
|===
include::../DeveloperGuide.adoc[tag=budgetbar]
= Release Guide
This guide covers how to create and announce a Camel release.
[[ReleaseGuide-Prequisites]]
== Prequisites
To prepare or perform a release you *must be* at least an Apache Camel committer.
* The artifacts for each and every release must be *signed*.
* Your public key must be added to the KEYS file.
* Your public key should also be cross-signed by other Apache committers (this can be done at key signing parties at
ApacheCon for instance).
* Make sure you have the correct maven configuration in `~/.m2/settings.xml`.
* https://github.com/takari/maven-wrapper[Maven Wrapper] is used and bundled with Camel 2.21 onwards and should be used
for building the release.
* You may want to get familiar with the release settings in the parent Apache POM.
* Make sure you are using Java 1.8 for Apache Camel 2.18.0 and later.
[[ReleaseGuide-MavenSetup]]
== Maven Setup
Before you deploy anything to the https://repository.apache.org[Apache Nexus repository] using Maven, you should
configure your `~/.m2/settings.xml` file so that the file permissions of the deployed artifacts are group writable.
If you do not do this, other developers will not able to overwrite your SNAPSHOT releases with newer versions.
The settings follow the guidelines used by the Maven project. Please pay particular attention to the
http://maven.apache.org/guides/mini/guide-encryption.html[password encryption recommendations].
----
<settings>
...
<servers>
<!-- Per http://maven.apache.org/developers/committer-settings.html -->
<!-- To publish a snapshot of some part of Maven -->
<server>
<id>apache.snapshots.https</id>
<username> <!-- YOUR APACHE LDAP USERNAME --> </username>
<password> <!-- YOUR APACHE LDAP PASSWORD --> </password>
</server>
<!-- To publish a website of some part of Maven -->
<server>
<id>apache.website</id>
<username> <!-- YOUR APACHE LDAP USERNAME --> </username>
<filePermissions>664</filePermissions>
<directoryPermissions>775</directoryPermissions>
</server>
<!-- To stage a release of some part of Maven -->
<server>
<id>apache.releases.https</id>
<username> <!-- YOUR APACHE LDAP USERNAME --> </username>
<password> <!-- YOUR APACHE LDAP PASSWORD --> </password>
</server>
<!-- To stage a website of some part of Maven -->
<server>
<id>stagingSite</id> <!-- must match hard-coded repository identifier in site:stage-deploy -->
<username> <!-- YOUR APACHE LDAP USERNAME --> </username>
<filePermissions>664</filePermissions>
<directoryPermissions>775</directoryPermissions>
</server>
</servers>
...
<profiles>
<profile>
<id>apache-release</id>
<properties>
<gpg.useagent>false</gpg.useagent>
<gpg.passphrase><!-- YOUR GPG PASSPHRASE --></gpg.passphrase>
<test>false</test>
</properties>
</profile>
</profiles>
...
</settings>
----
[[ReleaseGuide-CreatingTheRelease-Camel]]
== Creating the Release
Complete the following steps to create a new Camel release:
. Grab the latest source from Git, checkout the target branch (`BRANCH_NAME`) to build from, and create a release branch off of that branch:
$ git clone https://git-wip-us.apache.org/repos/asf/camel.git
$ cd camel
$ git checkout BRANCH_NAME
$ git checkout -b release/NEW-VERSION
. Perform a license check with http://creadur.apache.org/rat/apache-rat-plugin[Apache Rat]:
./mvnw -e org.apache.rat:apache-rat-plugin:check
grep -e ' !?????' target/rat.txt
* The latter command will provide a list of all files without valid license headers.
Ideally this list is empty, otherwise fix the issues by adding valid license headers and rerun the above commands before
proceeding with the next step.
. Do a release dry run to check for problems:
./mvnw release:prepare -DdryRun -Prelease
* The release plugin will prompt for a release version, an SCM tag and next release version.
* Use a three digit release version of the form: `MAJOR.MINOR.PATCH`, e.g. `3.0.0`.
* For the tag use a string of the form: `camel-MAJOR.MINOR.PATCH`, e.g. `camel-3.0.0`.
* For the next version increase the patch version and append `-SNAPSHOT`, e.g. `3.0.1-SNAPSHOT`.
* Make sure to check the generated signature files:
$ gpg camel-core/target/camel-core-3.0.0-SNAPSHOT.jar.asc
 gpg: assuming signed data in `camel-core/target/camel-core-3.0.0-SNAPSHOT.jar'
gpg: Signature made Sat 06 Apr 2019 03:58:01 AM PDT using RSA key ID 5942C049
gpg: Good signature from "Gregor Zurowski <gzurowski@apache.org>"
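* The prompts above can also be answered up front to make the dry run non-interactive. These are standard `maven-release-plugin` flags; the version numbers are examples:

 $ ./mvnw release:prepare -DdryRun -Prelease -DreleaseVersion=3.0.0 -Dtag=camel-3.0.0 -DdevelopmentVersion=3.0.1-SNAPSHOT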
. Prepare the release:
* First clean up the dry run results:
$ ./mvnw release:clean -Prelease
* Next prepare the release:
$ ./mvnw release:prepare -Prelease
* This command will create the tag and update all pom files with the given version number.
. Perform the release and publish to the Apache staging repository:
$ ./mvnw release:perform -Prelease
. Close the Apache staging repository:
* Login to https://repository.apache.org using your Apache LDAP credentials.
Click on "Staging Repositories". Then select "org.apache.camel-xxx" in the list of repositories, where xxx represents
your username and IP.
Click "Close" on the tool bar above.
This will close the repository from future deployments and make it available for others to view.
If you are staging multiple releases together, skip this step until you have staged everything.
Enter the name and version of the artifact being released in the "Description" field and then click "Close".
This will make it easier to identify it later.
. Verify staged artifacts:
* If you click on your repository, a tree view will appear below.
You can then browse the contents to ensure the artifacts are as you expect them.
Pay particular attention to the existence of *.asc (signature) files.
If you don't like the content of the repository, right click your repository and choose "Drop".
You can then rollback your release and repeat the process.
Note the repository URL, you will need this in your vote email.
[[ReleaseGuide-CreatingTheRelease-Camel-spring-boot]]
== Creating the Release for camel-spring-boot
Complete the following steps to create a new Camel-spring-boot release:
. Grab the latest source from Git and checkout the target branch (`BRANCH_NAME`) to build from:
$ git clone https://git-wip-us.apache.org/repos/asf/camel-spring-boot.git
$ cd camel-spring-boot
$ git checkout BRANCH_NAME
. From Camel 3.3.0 onwards, the camel-spring-boot project uses camel-dependencies as its parent.
As a first step, set the version in https://github.com/apache/camel-spring-boot/blob/master/pom.xml#L26
to the version released from the main Camel repository.
. Perform a license check with http://creadur.apache.org/rat/apache-rat-plugin[Apache Rat]:
./mvnw -e org.apache.rat:apache-rat-plugin:check
grep -e ' !?????' target/rat.txt
* The latter command will provide a list of all files without valid license headers.
Ideally this list is empty, otherwise fix the issues by adding valid license headers and rerun the above commands before
proceeding with the next step.
. You already have built the main camel repo for releasing, so you already have a final version in your local repository.
Change the camel-version property in https://github.com/apache/camel-spring-boot/blob/master/pom.xml accordingly and commit.
. Do a release dry run to check for problems:
./mvnw release:prepare -DdryRun -Prelease
* The release plugin will prompt for a release version, an SCM tag and next release version.
* Use a three digit release version of the form: `MAJOR.MINOR.PATCH`, e.g. `3.0.0`.
* For the tag use a string of the form: `camel-MAJOR.MINOR.PATCH`, e.g. `camel-3.0.0`.
* For the next version increase the patch version and append `-SNAPSHOT`, e.g. `3.0.1-SNAPSHOT`.
* Make sure to check the generated signature files:
$ gpg core/camel-spring-boot/target/camel-spring-boot-3.0.0-SNAPSHOT.jar.asc
gpg: assuming signed data in `core/camel-spring-boot/target/camel-spring-boot-3.0.0-SNAPSHOT.jar'
gpg: Signature made Sat 06 Apr 2019 03:58:01 AM PDT using RSA key ID 5942C049
gpg: Good signature from "Gregor Zurowski <gzurowski@apache.org>"
. Prepare the release:
* First clean up the dry run results:
$ ./mvnw release:clean -Prelease
* Next prepare the release:
$ ./mvnw release:prepare -Prelease
* This command will create the tag and update all pom files with the given version number.
. Perform the release and publish to the Apache staging repository:
$ ./mvnw release:perform -Prelease
. Close the Apache staging repository:
* Login to https://repository.apache.org using your Apache LDAP credentials.
Click on "Staging Repositories". Then select "org.apache.camel-xxx" in the list of repositories, where xxx represents
your username and IP.
Click "Close" on the tool bar above.
This will close the repository from future deployments and make it available for others to view.
If you are staging multiple releases together, skip this step until you have staged everything.
Enter the name and version of the artifact being released in the "Description" field and then click "Close".
This will make it easier to identify it later.
. Verify staged artifacts:
* If you click on your repository, a tree view will appear below.
You can then browse the contents to ensure the artifacts are as you expect them.
Pay particular attention to the existence of *.asc (signature) files.
If you don't like the content of the repository, right-click your repository and choose "Drop".
You can then rollback your release and repeat the process.
Note the repository URL, you will need this in your vote email.
. Once the release vote has passed:
* Login to https://repository.apache.org using your Apache LDAP credentials.
Click on "Staging Repositories". Then select "org.apache.camel-xxx" in the list of repositories, where xxx represents
your username and IP.
Click "Release" on the tool bar above.
This will release the artifacts.
[[ReleaseGuide-CreatingTheRelease-Camel-karaf]]
== Creating the Release for camel-karaf
Complete the following steps to create a new Camel-karaf release:
. Grab the latest source from Git and checkout the target branch (`BRANCH_NAME`) to build from:
$ git clone https://git-wip-us.apache.org/repos/asf/camel-karaf.git
$ cd camel-karaf
$ git checkout BRANCH_NAME
. From Camel 3.3.0 onwards, the camel-karaf project uses camel-dependencies as its parent.
As a first step, set the version in https://github.com/apache/camel-karaf/blob/main/pom.xml#L26
to the version released from the main Camel repository.
. Perform a license check with http://creadur.apache.org/rat/apache-rat-plugin[Apache Rat]:
./mvnw -e org.apache.rat:apache-rat-plugin:check
grep -e ' !?????' target/rat.txt
* The latter command will provide a list of all files without valid license headers.
Ideally this list is empty, otherwise fix the issues by adding valid license headers and rerun the above commands before
proceeding with the next step.
. You already have built the main camel repo for releasing, so you already have a final version in your local repository.
Change the camel-version property in https://github.com/apache/camel-karaf/blob/main/pom.xml accordingly and commit.
. Do a release dry run to check for problems:
./mvnw release:prepare -DdryRun -Prelease
* The release plugin will prompt for a release version, an SCM tag and next release version.
* Use a three digit release version of the form: `MAJOR.MINOR.PATCH`, e.g. `3.0.0`.
* For the tag use a string of the form: `camel-MAJOR.MINOR.PATCH`, e.g. `camel-3.0.0`.
* For the next version increase the patch version and append `-SNAPSHOT`, e.g. `3.0.1-SNAPSHOT`.
* Make sure to check the generated signature files:
$ gpg core/camel-core-osgi/target/camel-core-osgi-3.0.0-SNAPSHOT.jar.asc
gpg: assuming signed data in `core/camel-core-osgi/target/camel-core-osgi-3.0.0-SNAPSHOT.jar'
gpg: Signature made Sat 06 Apr 2019 03:58:01 AM PDT using RSA key ID 5942C049
gpg: Good signature from "Gregor Zurowski <gzurowski@apache.org>"
. Prepare the release:
* First clean up the dry run results:
$ ./mvnw release:clean -Prelease
* Next prepare the release:
$ ./mvnw release:prepare -Prelease
* This command will create the tag and update all pom files with the given version number.
. Perform the release and publish to the Apache staging repository:
$ ./mvnw release:perform -Prelease
. Close the Apache staging repository:
* Login to https://repository.apache.org using your Apache LDAP credentials.
Click on "Staging Repositories". Then select "org.apache.camel-xxx" in the list of repositories, where xxx represents
your username and IP.
Click "Close" on the tool bar above.
This will close the repository from future deployments and make it available for others to view.
If you are staging multiple releases together, skip this step until you have staged everything.
Enter the name and version of the artifact being released in the "Description" field and then click "Close".
This will make it easier to identify it later.
. Verify staged artifacts:
* If you click on your repository, a tree view will appear below.
You can then browse the contents to ensure the artifacts are as you expect them.
Pay particular attention to the existence of *.asc (signature) files.
If you don't like the content of the repository, right-click your repository and choose "Drop".
You can then rollback your release and repeat the process.
Note the repository URL, you will need this in your vote email.
. Once the release vote has passed:
* Login to https://repository.apache.org using your Apache LDAP credentials.
Click on "Staging Repositories". Then select "org.apache.camel-xxx" in the list of repositories, where xxx represents
your username and IP.
Click "Release" on the tool bar above.
This will release the artifacts.
[[ReleaseGuide-PublishingTheRelease-Camel]]
== Publishing the Release
. Once the release vote has passed:
* Login to https://repository.apache.org using your Apache LDAP credentials.
Click on "Staging Repositories". Then select "org.apache.camel-xxx" in the list of repositories, where xxx represents
your username and IP.
Click "Release" on the tool bar above.
This will release the artifacts.
. Perform a release in JIRA:
* Release the version in JIRA: https://issues.apache.org/jira/plugins/servlet/project-config/CAMEL/versions
. Copy distribution to Apache website:
cd ${CAMEL_ROOT_DIR}/etc/scripts
./release-distro.sh <Camel version>
. Remove the old distribution version from the Apache website:
svn rm https://dist.apache.org/repos/dist/release/camel/apache-camel/OLD_CAMEL_VERSION -m "Removed the old release"
. Upload the new schema files (and the manual):
cd ${CAMEL_ROOT_DIR}/etc/scripts
./release-website.sh <Camel version>
. Merge the release branch back into the corresponding base branch (e.g. merge `release/3.2.0` into `camel-3.2.x`)
git checkout BASE_BRANCH
git pull
git merge --no-ff release/VERSION
git push
. Delete the local and remote release branch:
git branch -D release/VERSION
git push origin --delete release/VERSION
[[Publish-xsd-schemas]]
== Publish xsd schemas
* The xsd files related to blueprint, cxf, spring-security and spring must be pushed to
https://github.com/apache/camel-website/tree/master/static/schema to make them available to end users.
* The blueprint xsd comes from the camel-karaf release.
[[Tagging-examples]]
== Tagging examples
These steps are optional and can also be done later.
Once the release train (camel, camel-karaf and camel-spring-boot) has been voted and published, there are some additional steps needed for the camel examples.
. Camel-examples
* In the examples/pom.xml file of https://github.com/apache/camel-examples, perform the following steps:
* Update the camel-dependencies version to the version coming from the release-train
* Update the camel.version properties to the version coming from the release-train
* To be sure everything is fine run
$ ./mvnw clean install
* Commit
$ git commit -a
$ git push origin master (or the branch related to the release, e.g. camel-3.4.x)
$ git tag -a camel-examples-$version -m "$version"
$ git push origin camel-examples-$version
* Now that the tag is pushed, we need to advance the version of the examples
* Update the camel-dependencies version to the next version
* Update the camel.version properties to the next version
* Run the following command to advance the version in the examples
$ find . -type f -exec sed -i "s/$oldVersion/$newVersion/g" {} +
* To be sure everything is fine run
$ ./mvnw clean install
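Putting the version-advance commands above together, here is a minimal, self-contained sketch; the versions, the temporary directory, and the pom content are hypothetical stand-ins so it is safe to run outside a real checkout:

```shell
# Hypothetical versions coming from the release train; adjust to the real ones.
oldVersion=3.4.0
newVersion=3.4.1

# Stand-in for the examples checkout, so the sketch is safe to run anywhere.
mkdir -p /tmp/bump-demo
echo "<camel.version>${oldVersion}</camel.version>" > /tmp/bump-demo/pom.xml

# Double quotes are required so the shell expands the variables before sed
# sees them; note that 'sed -i' without a suffix argument is GNU sed syntax.
find /tmp/bump-demo -type f -exec sed -i "s/$oldVersion/$newVersion/g" {} +

cat /tmp/bump-demo/pom.xml   # prints <camel.version>3.4.1</camel.version>
```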
. Camel-spring-boot-examples
* In the examples/pom.xml file of https://github.com/apache/camel-spring-boot-examples, perform the following steps:
* Update the camel-dependencies version to the version coming from the release-train
* Update the camel.version properties to the version coming from the release-train
* To be sure everything is fine run
$ ./mvnw clean install
* Commit
$ git commit -a
$ git push origin master (or the branch related to the release, e.g. camel-3.4.x)
$ git tag -a camel-spring-boot-examples-$version -m "$version"
$ git push origin camel-spring-boot-examples-$version
* Now that the tag is pushed, we need to advance the version of the examples
* Update the camel-dependencies version to the next version
* Update the camel.version properties to the next version
* Run the following command to advance the version in the examples
$ find . -type f -exec sed -i "s/$oldVersion/$newVersion/g" {} +
* To be sure everything is fine run
$ ./mvnw clean install
. Camel-karaf-examples
* In the examples/pom.xml file of https://github.com/apache/camel-karaf-examples, perform the following steps:
* Update the camel-dependencies version to the version coming from the release-train
* Update the camel.version properties to the version coming from the release-train
* Update the camel.karaf.version properties to the version coming from the release-train
* To be sure everything is fine run
$ ./mvnw clean install
* Commit
$ git commit -a
$ git push origin master (or the branch related to the release, e.g. camel-3.4.x)
$ git tag -a camel-karaf-examples-$version -m "$version"
$ git push origin camel-karaf-examples-$version
* Now that the tag is pushed, we need to advance the version of the examples
* Update the camel-dependencies version to the next version
* Update the camel.version properties to the next version
* Update the camel.karaf.version properties to the next version
* Run the following command to advance the version in the examples
$ find . -type f -exec sed -i "s/$oldVersion/$newVersion/g" {} +
* To be sure everything is fine run
$ ./mvnw clean install
# Presentations

The purpose of this page is to list a bunch of presentations that I made (or will make).
[Back home](README.md)
[[view-config-annotation-link]]
= Link
Link is a hyperlink component of displayed text, used for navigation or user interaction.
.Allowed Parent Components
<<view-config-annotation-card-detail>>,
<<view-config-annotation-grid>>,
<<view-config-annotation-link-menu>>,
<<view-config-annotation-menu>>,
<<view-config-annotation-section>>
.Allowed Children Components
None. `@Link` should decorate a field having a simple type.
[source,java,indent=0]
[subs="verbatim,attributes"]
.Sample Configuration
----
// UI Navigation to page vpHome under domain petclinic
@Link(url = "/h/petclinic/vpHome")
private String linkToHome;
// Executes a request against _subscribe, and sets the image
@Link(url = "/notifications/_subscribe:{id}/_process", b="$executeAnd$configAnd$nav", method = "POST")
private String subscribeToEmailNotifications;
// Creates an external link, as in a link navigating outside of the context of the Nimbus framework.
@Link(url = "https://www.mywebsite.com", value = Link.Type.EXTERNAL, target = "_new", rel = "nofollow")
private String myWebsiteLink;
----
[[HowdoIrunCamelusingJavaWebStart-HowdoIrunCamelusingJavaWebStart]]
=== How do I run Camel using Java WebStart?
Camel 1.5 added support for starting Camel using Java WebStart.
[WARNING]
====
**Be Careful**
There is, however, one restriction: do *not* use the version attribute for the
*camel* jars.
====
What you need to have in mind is that Camel will scan for resources in
.jar files on-the-fly and therefore the .jars is loaded using http.
Therefore the http URLs in the .jnlp file must be complete in the *href*
tag.
[[HowdoIrunCamelusingJavaWebStart-Mavenconfiguration]]
==== Maven configuration
If you use Maven to generate your *.jnlp* file then check out these
links: http://mojo.codehaus.org/webstart/webstart-maven-plugin/
You need to configure your maven to not output the jar version attribute
(`outputJarVersion=false`) as started in this snippet:
[source,xml]
----
<plugin>
<groupId>org.codehaus.mojo.webstart</groupId>
<artifactId>webstart-maven-plugin</artifactId>
.....
<configuration>
.....
<jnlpFiles>
<jnlpFile>
<templateFilename>jnlpTemplate.vm</templateFilename>
<outputFilename>appl.jnlp</outputFilename>
<jarResources>
<jarResource>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>${camel-version}</version>
<!-- set the outputJarVersion to false appends the version to the jar filename in the href -->
<outputJarVersion>false</outputJarVersion>
</jarResource>
<jarResource>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-core</artifactId>
<version>${activemq-version}</version>
<outputJarVersion>false</outputJarVersion>
</jarResource>
.....
</jarResources>
</jnlpFile>
</jnlpFiles>
.....
</configuration>
</plugin>
----
And a sample of the generated *appl.jnlp* file:
[source,xml]
----
<jnlp .....>
.......
<resources>
.......
<jar href="camel-core-1.4.jar"/>
<jar href="activemq-core-5.1.jar"/>
.......
<jar href="spring-core.jar" version="2.5.5"/>
</resources>
</jnlp>
----
*What to notice:*
To let Camel run using Java WebStart, the `<jar href>` tag must *not* use
the version attribute for the *camel-xxx* jars. See the difference
between the Camel jars and the Spring jar.
// Module included in the following assemblies:
//
// assembly-kafka-bridge-configuration.adoc
[id='ref-kafka-bridge-http-configuration-{context}']
= Kafka Bridge HTTP configuration
Kafka Bridge HTTP configuration is set using the properties in `KafkaBridge.spec.http`.
As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS).
CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin.
To configure CORS, you define a list of allowed resource origins and HTTP methods to access them.
Additional HTTP headers in requests link:{external-cors-link}[describe the origins that are permitted access to the Kafka cluster^].
.Example Kafka Bridge HTTP configuration
[source,yaml,subs="attributes+"]
----
apiVersion: {KafkaApiVersionPrev}
kind: KafkaBridge
metadata:
name: my-bridge
spec:
# ...
http:
port: 8080 <1>
cors:
allowedOrigins: "https://strimzi.io" <2>
allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" <3>
# ...
----
<1> The default HTTP configuration for the Kafka Bridge to listen on port 8080.
<2> Comma-separated list of allowed CORS origins. You can use a URL or a Java regular expression.
<3> Comma-separated list of allowed HTTP methods for CORS.
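An `allowedOrigins` entry can be a literal URL or a Java regular expression. As an illustration of how a regex origin widens access, here is a sketch using Python's `re` module (its syntax overlaps Java's for a simple pattern like this one; the pattern itself is hypothetical and not taken from the example above):

```python
import re

# Hypothetical allowedOrigins entry written as a regex instead of a literal URL.
allowed_origin_pattern = r"https://([a-z]+\.)?strimzi\.io"

def origin_allowed(origin: str) -> bool:
    # Anchored full match: the whole origin must match the pattern.
    return re.fullmatch(allowed_origin_pattern, origin) is not None

print(origin_allowed("https://strimzi.io"))        # True
print(origin_allowed("https://docs.strimzi.io"))   # True
print(origin_allowed("https://evil.example.com"))  # False
```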
= 어떡하집 (HowHome) Project - Status Update
정민호
2020-09-13
:jbake-last_updated: 2020-09-13
:jbake-type: post
:jbake-status: published
:jbake-tags: real estate, personal project
:description: An update on the system development work that has been on hold for a long time.
:jbake-og: {"image": "img/jdk/duke.jpg"}
:idprefix:
:toc:
:sectnums:

== Current Status as of 2020-09-13

The 어떡하집 (HowHome) project finished building everything up to the test system around April.
While building the system, fetching apartment transaction records from the https://www.data.go.kr/[public data portal] had to be suspended because of a problem with the server providing the API.
Around May or June I tried to fetch the apartment transaction records again, but the problem had not been resolved, so the work is still on hold.
I do not know why, but as of today (2020-09-13) it has still not been fixed. I will have to inquire again when I resume the work in earnest.

The initial design used Logstash, but in the actual build I replaced Logstash with NiFi.

The screenshots below show the current state of development.

image::img/HowHome/Dev/02/server.png[Server configuration using VB]

image::img/HowHome/Dev/02/use_nifi.png[NiFi status]
[source,http,options="nowrap"]
----
GET /accounting/v1/ledgers/qq4iM1m2/ HTTP/1.1
Accept: */*
Content-Type: application/json
Host: localhost:8080
Content-Length: 64
[org.apache.fineract.cn.accounting.api.v1.domain.Ledger@6d8d78a]
----
[[search-profile]]
=== Profile API
WARNING: The Profile API is a debugging tool and adds significant overhead to search execution.
Provides detailed timing information about the execution of individual
components in a search request.
[[search-profile-api-desc]]
==== {api-description-title}
The Profile API gives the user insight into how search requests are executed at
a low level so that the user can understand why certain requests are slow, and
take steps to improve them. Note that the Profile API,
<<profile-limitations, amongst other things>>, doesn't measure network latency,
time spent in the search fetch phase, time spent while the requests spends in
queues or while merging shard responses on the coordinating node.
The output from the Profile API is *very* verbose, especially for complicated
requests executed across many shards. Pretty-printing the response is
recommended to help understand the output.
[[search-profile-api-example]]
==== {api-examples-title}
Any `_search` request can be profiled by adding a top-level `profile` parameter:
[source,console]
--------------------------------------------------
GET /my-index-000001/_search
{
"profile": true,<1>
"query" : {
"match" : { "message" : "GET /search" }
}
}
--------------------------------------------------
// TEST[setup:my_index]
<1> Setting the top-level `profile` parameter to `true` will enable profiling
for the search.
The API returns the following result:
[source,console-result]
--------------------------------------------------
{
"took": 25,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 5,
"relation": "eq"
},
"max_score": 0.17402273,
"hits": [...] <1>
},
"profile": {
"shards": [
{
"id": "[2aE02wS1R8q_QFnYu6vDVQ][my-index-000001][0]",
"searches": [
{
"query": [
{
"type": "BooleanQuery",
"description": "message:get message:search",
"time_in_nanos": 11972972,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 5,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 39022,
"match": 4456,
"next_doc_count": 5,
"score_count": 5,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 84525,
"advance_count": 1,
"score": 37779,
"build_scorer_count": 2,
"create_weight": 4694895,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 7112295
},
"children": [
{
"type": "TermQuery",
"description": "message:get",
"time_in_nanos": 3801935,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 3,
"set_min_competitive_score": 0,
"next_doc": 0,
"match": 0,
"next_doc_count": 0,
"score_count": 5,
"compute_max_score_count": 3,
"compute_max_score": 32487,
"advance": 5749,
"advance_count": 6,
"score": 16219,
"build_scorer_count": 3,
"create_weight": 2382719,
"shallow_advance": 9754,
"create_weight_count": 1,
"build_scorer": 1355007
}
},
{
"type": "TermQuery",
"description": "message:search",
"time_in_nanos": 205654,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 3,
"set_min_competitive_score": 0,
"next_doc": 0,
"match": 0,
"next_doc_count": 0,
"score_count": 5,
"compute_max_score_count": 3,
"compute_max_score": 6678,
"advance": 12733,
"advance_count": 6,
"score": 6627,
"build_scorer_count": 3,
"create_weight": 130951,
"shallow_advance": 2512,
"create_weight_count": 1,
"build_scorer": 46153
}
}
]
}
],
"rewrite_time": 451233,
"collector": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 775274
}
]
}
],
"aggregations": []
}
]
}
}
--------------------------------------------------
// TESTRESPONSE[s/"took": 25/"took": $body.took/]
// TESTRESPONSE[s/"hits": \[...\]/"hits": $body.$_path/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/\[2aE02wS1R8q_QFnYu6vDVQ\]\[my-index-000001\]\[0\]/$body.$_path/]
<1> Search results are returned, but were omitted here for brevity.
Even for a simple query, the response is relatively complicated. Let's break it
down piece-by-piece before moving to more complex examples.
The overall structure of the profile response is as follows:
[source,console-result]
--------------------------------------------------
{
"profile": {
"shards": [
{
"id": "[2aE02wS1R8q_QFnYu6vDVQ][my-index-000001][0]", <1>
"searches": [
{
"query": [...], <2>
"rewrite_time": 51443, <3>
"collector": [...] <4>
}
],
"aggregations": [...] <5>
}
]
}
}
--------------------------------------------------
// TESTRESPONSE[s/"profile": /"took": $body.took, "timed_out": $body.timed_out, "_shards": $body._shards, "hits": $body.hits, "profile": /]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/\[2aE02wS1R8q_QFnYu6vDVQ\]\[my-index-000001\]\[0\]/$body.$_path/]
// TESTRESPONSE[s/"query": \[...\]/"query": $body.$_path/]
// TESTRESPONSE[s/"collector": \[...\]/"collector": $body.$_path/]
// TESTRESPONSE[s/"aggregations": \[...\]/"aggregations": []/]
<1> A profile is returned for each shard that participated in the response, and
is identified by a unique ID.
<2> Each profile contains a section which holds details about the query
execution.
<3> Each profile has a single time representing the cumulative rewrite time.
<4> Each profile also contains a section about the Lucene Collectors which run
the search.
<5> Each profile contains a section which holds the details about the
aggregation execution.
Because a search request may be executed against one or more shards in an index,
and a search may cover one or more indices, the top level element in the profile
response is an array of `shard` objects. Each shard object lists its `id` which
uniquely identifies the shard. The ID's format is
`[nodeID][indexName][shardID]`.
The profile itself may consist of one or more "searches", where a search is a
query executed against the underlying Lucene index. Most search requests
submitted by the user will only execute a single `search` against the Lucene
index. But occasionally multiple searches will be executed, such as including a
global aggregation (which needs to execute a secondary "match_all" query for the
global context).
Inside each `search` object there will be two arrays of profiled information:
a `query` array and a `collector` array. Alongside the `search` object is an
`aggregations` object that contains the profile information for the
aggregations. In the future, more sections may be added, such as `suggest`,
`highlight`, etc.
There will also be a `rewrite` metric showing the total time spent rewriting the
query (in nanoseconds).
NOTE: As with other statistics APIs, the Profile API supports human readable outputs. This can be turned on by adding
`?human=true` to the query string. In this case, the output contains the additional `time` field containing rounded,
human readable timing information (e.g. `"time": "391.9ms"`, `"time": "123.3micros"`).
[[profiling-queries]]
==== Profiling Queries
[NOTE]
=======================================
The details provided by the Profile API directly expose Lucene class names and concepts, which means
that complete interpretation of the results require fairly advanced knowledge of Lucene. This
page attempts to give a crash-course in how Lucene executes queries so that you can use the Profile API to successfully
diagnose and debug queries, but it is only an overview. For complete understanding, please refer
to Lucene's documentation and, in places, the code.
With that said, a complete understanding is often not required to fix a slow query. It is usually
sufficient to see that a particular component of a query is slow, and not necessarily understand why
the `advance` phase of that query is the cause, for example.
=======================================
[[query-section]]
===== `query` Section
The `query` section contains detailed timing of the query tree executed by
Lucene on a particular shard. The overall structure of this query tree will
resemble your original Elasticsearch query, but may be slightly (or sometimes
very) different. It will also use similar but not always identical naming.
Using our previous `match` query example, let's analyze the `query` section:
[source,console-result]
--------------------------------------------------
"query": [
{
"type": "BooleanQuery",
"description": "message:get message:search",
"time_in_nanos": 11972972,
"breakdown": {...}, <1>
"children": [
{
"type": "TermQuery",
"description": "message:get",
"time_in_nanos": 3801935,
"breakdown": {...}
},
{
"type": "TermQuery",
"description": "message:search",
"time_in_nanos": 205654,
"breakdown": {...}
}
]
}
]
--------------------------------------------------
// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n/]
// TESTRESPONSE[s/]$/],"rewrite_time": $body.$_path, "collector": $body.$_path}], "aggregations": []}]}}/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/"breakdown": \{...\}/"breakdown": $body.$_path/]
<1> The breakdown timings are omitted for simplicity.
Based on the profile structure, we can see that our `match` query was rewritten
by Lucene into a BooleanQuery with two clauses (both holding a TermQuery). The
`type` field displays the Lucene class name, and often aligns with the
equivalent name in Elasticsearch. The `description` field displays the Lucene
explanation text for the query, and is made available to help differentiate
between parts of your query (e.g., both `message:get` and `message:search` are
TermQuerys and would otherwise appear identical).
The `time_in_nanos` field shows that this query took ~11.9ms for the entire
BooleanQuery to execute. The recorded time is inclusive of all children.
The `breakdown` field will give detailed stats about how the time was spent;
we'll look at that in a moment. Finally, the `children` array lists any
sub-queries that may be present. Because we searched for two values ("get
search"), our BooleanQuery holds two child TermQueries. They have identical
information (type, time, breakdown, etc.). Children are allowed to have their
own children.
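Because `time_in_nanos` is inclusive of all children, the time spent in a node itself has to be derived by subtracting its children. A minimal sketch, assuming profile nodes shaped like the response above:

```python
def self_time(node: dict) -> int:
    """Time spent in this query node itself, excluding its children."""
    children = node.get("children", [])
    return node["time_in_nanos"] - sum(c["time_in_nanos"] for c in children)

bool_query = {
    "type": "BooleanQuery",
    "time_in_nanos": 11972972,
    "children": [
        {"type": "TermQuery", "time_in_nanos": 3801935},
        {"type": "TermQuery", "time_in_nanos": 205654},
    ],
}
print(self_time(bool_query))  # -> 7965383
```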
===== Timing Breakdown
The `breakdown` component lists detailed timing statistics about low-level
Lucene execution:
[source,console-result]
--------------------------------------------------
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 5,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 39022,
"match": 4456,
"next_doc_count": 5,
"score_count": 5,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 84525,
"advance_count": 1,
"score": 37779,
"build_scorer_count": 2,
"create_weight": 4694895,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 7112295
}
--------------------------------------------------
// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": [{\n"type": "BooleanQuery",\n"description": "message:get message:search",\n"time_in_nanos": $body.$_path,/]
// TESTRESPONSE[s/}$/},\n"children": $body.$_path}],\n"rewrite_time": $body.$_path, "collector": $body.$_path}], "aggregations": []}]}}/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
Timings are listed in wall-clock nanoseconds and are not normalized at all. All
caveats about the overall `time_in_nanos` apply here. The intention of the
breakdown is to give you a feel for A) what machinery in Lucene is actually
eating time, and B) the magnitude of differences in times between the various
components. Like the overall time, the breakdown is inclusive of all children
times.
The meaning of the stats are as follows:
[discrete]
===== All parameters:
[horizontal]
`create_weight`::
A Query in Lucene must be capable of reuse across multiple IndexSearchers (think of it as the engine that
executes a search against a specific Lucene Index). This puts Lucene in a tricky spot, since many queries
need to accumulate temporary state/statistics associated with the index it is being used against, but the
Query contract mandates that it must be immutable.
{empty} +
{empty} +
To get around this, Lucene asks each query to generate a Weight object which acts as a temporary context
object to hold state associated with this particular (IndexSearcher, Query) tuple. The `create_weight` metric
shows how long this process takes.
`build_scorer`::
This parameter shows how long it takes to build a Scorer for the query. A Scorer is the mechanism that
iterates over matching documents and generates a score per-document (e.g. how well does "foo" match the document?).
Note, this records the time required to generate the Scorer object, not actually score the documents. Some
queries have faster or slower initialization of the Scorer, depending on optimizations, complexity, etc.
{empty} +
{empty} +
This may also show timing associated with caching, if enabled and/or applicable for the query.
`next_doc`::
The Lucene method `next_doc` returns the doc ID of the next document matching the query. This statistic shows
the time it takes to determine which document is the next match, a process that varies considerably depending
on the nature of the query. `next_doc` is a specialized form of `advance()` that is more convenient for many
queries in Lucene. It is equivalent to `advance(docId() + 1)`.
`advance`::
`advance` is the "lower level" version of `next_doc`: it serves the same purpose of finding the next matching
doc, but requires the calling query to perform extra tasks such as identifying and moving past skips, etc.
However, not all queries can use `next_doc`, so `advance` is also timed for those queries.
{empty} +
{empty} +
Conjunctions (e.g. `must` clauses in a boolean) are typical consumers of `advance`.
`match`::
Some queries, such as phrase queries, match documents using a "two-phase" process. First, the document is
"approximately" matched, and if it matches approximately, it is checked a second time with a more rigorous
(and expensive) process. The second phase verification is what the `match` statistic measures.
{empty} +
{empty} +
For example, a phrase query first checks a document approximately by ensuring all terms in the phrase are
present in the doc. If all the terms are present, it then executes the second phase verification to ensure
the terms are in-order to form the phrase, which is relatively more expensive than just checking for presence
of the terms.
{empty} +
{empty} +
Because this two-phase process is only used by a handful of queries, the `match` statistic is often zero.
`score`::
This records the time taken to score a particular document via its Scorer.
`*_count`::
Records the number of invocations of the particular method. For example, `"next_doc_count": 2,`
means the `nextDoc()` method was called on two different documents. This can be used to help judge
how selective queries are, by comparing counts between different query components.
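One practical use of the counts is computing the average cost per invocation of each method. A small sketch, assuming a `breakdown` dict like the one shown above:

```python
def per_call(breakdown: dict, stat: str) -> float:
    """Average nanoseconds per invocation of a timed method, or 0.0 if never called."""
    count = breakdown.get(f"{stat}_count", 0)
    return breakdown[stat] / count if count else 0.0

breakdown = {"next_doc": 39022, "next_doc_count": 5, "advance": 84525, "advance_count": 1}
print(per_call(breakdown, "next_doc"))  # -> 7804.4
```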
[[collectors-section]]
===== `collectors` Section
The Collectors portion of the response shows high-level execution details.
Lucene works by defining a "Collector" which is responsible for coordinating the
traversal, scoring, and collection of matching documents. Collectors are also
how a single query can record aggregation results, execute unscoped "global"
queries, execute post-query filters, etc.
Looking at the previous example:
[source,console-result]
--------------------------------------------------
"collector": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 775274
}
]
--------------------------------------------------
// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": $body.$_path,\n"rewrite_time": $body.$_path,/]
// TESTRESPONSE[s/]$/]}], "aggregations": []}]}}/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
We see a single collector named `SimpleTopScoreDocCollector` wrapped into
`CancellableCollector`. `SimpleTopScoreDocCollector` is the default "scoring and
sorting" `Collector` used by {es}. The `reason` field attempts to give a plain
English description of the class name. The `time_in_nanos` is similar to the
time in the Query tree: a wall-clock time inclusive of all children. Similarly,
`children` lists all sub-collectors. The `CancellableCollector` that wraps
`SimpleTopScoreDocCollector` is used by {es} to detect if the current search was
cancelled, and to stop collecting documents as soon as cancellation occurs.
It should be noted that Collector times are **independent** from the Query
times. They are calculated, combined, and normalized independently! Due to the
nature of Lucene's execution, it is impossible to "merge" the times from the
Collectors into the Query section, so they are displayed in separate portions.
For reference, the various collector reasons are:
[horizontal]
`search_sorted`::
A collector that scores and sorts documents. This is the most common collector and will be seen in most
simple searches.
`search_count`::
A collector that only counts the number of documents that match the query, but does not fetch the source.
This is seen when `size: 0` is specified.
`search_terminate_after_count`::
A collector that terminates search execution after `n` matching documents have been found. This is seen
when the `terminate_after_count` query parameter has been specified.
`search_min_score`::
A collector that only returns matching documents that have a score greater than `n`. This is seen when
the top-level parameter `min_score` has been specified.
`search_multi`::
A collector that wraps several other collectors. This is seen when combinations of search, aggregations,
global aggs, and post_filters are combined in a single search.
`search_timeout`::
A collector that halts execution after a specified period of time. This is seen when a `timeout` top-level
parameter has been specified.
`aggregation`::
A collector that Elasticsearch uses to run aggregations against the query scope. A single `aggregation`
collector is used to collect documents for *all* aggregations, so you will see a list of aggregations
in the name rather than a single aggregation.
`global_aggregation`::
A collector that executes an aggregation against the global query scope, rather than the specified query.
Because the global scope is necessarily different from the executed query, it must execute its own
match_all query (which you will see added to the Query section) to collect your entire dataset.
[[rewrite-section]]
===== `rewrite` Section
All queries in Lucene undergo a "rewriting" process. A query (and its
sub-queries) may be rewritten one or more times, and the process continues until
the query stops changing. This process allows Lucene to perform optimizations,
such as removing redundant clauses, replacing one query with a more efficient
one, etc. For example, a Boolean -> Boolean -> TermQuery can be
rewritten to a TermQuery, because all the Booleans are unnecessary in this case.
The rewriting process is complex and difficult to display, since queries can
change drastically. Rather than showing the intermediate results, the total
rewrite time is simply displayed as a value (in nanoseconds). This value is
cumulative and contains the total time for all queries being rewritten.
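The Boolean -> Boolean -> TermQuery collapse can be sketched as a toy rewrite loop that repeats until the query stops changing (purely illustrative; this is not Lucene's actual API):

```python
def rewrite(query: dict) -> dict:
    """Collapse single-clause boolean wrappers, repeating until the query
    stops changing -- a toy model of Lucene's rewrite loop."""
    while query["type"] == "bool" and len(query["clauses"]) == 1:
        query = query["clauses"][0]
    return query

q = {"type": "bool", "clauses": [
    {"type": "bool", "clauses": [
        {"type": "term", "value": "foo"},
    ]},
]}
print(rewrite(q)["type"])  # -> term
```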
===== A more complex example
To demonstrate a slightly more complex query and the associated results, we can
profile the following query:
[source,console]
--------------------------------------------------
GET /my-index-000001/_search
{
"profile": true,
"query": {
"term": {
"user.id": {
"value": "elkbee"
}
}
},
"aggs": {
"my_scoped_agg": {
"terms": {
"field": "http.response.status_code"
}
},
"my_global_agg": {
"global": {},
"aggs": {
"my_level_agg": {
"terms": {
"field": "http.response.status_code"
}
}
}
}
},
"post_filter": {
"match": {
"message": "search"
}
}
}
--------------------------------------------------
// TEST[setup:my_index]
// TEST[s/_search/_search\?filter_path=profile.shards.id,profile.shards.searches,profile.shards.aggregations/]
This example has:
- A query
- A scoped aggregation
- A global aggregation
- A post_filter
The API returns the following result:
[source,console-result]
--------------------------------------------------
{
...
"profile": {
"shards": [
{
"id": "[P6-vulHtQRWuD4YnubWb7A][my-index-000001][0]",
"searches": [
{
"query": [
{
"type": "TermQuery",
"description": "message:search",
"time_in_nanos": 141618,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 0,
"match": 0,
"next_doc_count": 0,
"score_count": 0,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 3942,
"advance_count": 4,
"score": 0,
"build_scorer_count": 2,
"create_weight": 38380,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 99296
}
},
{
"type": "TermQuery",
"description": "user.id:elkbee",
"time_in_nanos": 163081,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 2447,
"match": 0,
"next_doc_count": 4,
"score_count": 4,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 3552,
"advance_count": 1,
"score": 5027,
"build_scorer_count": 2,
"create_weight": 107840,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 44215
}
}
],
"rewrite_time": 4769,
"collector": [
{
"name": "MultiCollector",
"reason": "search_multi",
"time_in_nanos": 1945072,
"children": [
{
"name": "FilteredCollector",
"reason": "search_post_filter",
"time_in_nanos": 500850,
"children": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 22577
}
]
},
{
"name": "MultiBucketCollector: [[my_scoped_agg, my_global_agg]]",
"reason": "aggregation",
"time_in_nanos": 867617
}
]
}
]
}
],
"aggregations": [...] <1>
}
]
}
}
--------------------------------------------------
// TESTRESPONSE[s/"aggregations": \[\.\.\.\]/"aggregations": $body.$_path/]
// TESTRESPONSE[s/\.\.\.//]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[my-index-000001\]\[0\]"/"id": $body.profile.shards.0.id/]
<1> The `"aggregations"` portion has been omitted because it will be covered in
the next section.
As you can see, the output is significantly more verbose than before. All the
major portions of the query are represented:
1. The `TermQuery` for `user.id:elkbee` represents the main `term` query.
2. The `TermQuery` for `message:search` represents the `post_filter` query.
The Collector tree is fairly straightforward, showing how a single
CancellableCollector wraps a MultiCollector, which in turn wraps both a
FilteredCollector to execute the post_filter (itself wrapping the normal
scoring SimpleTopScoreDocCollector) and a MultiBucketCollector to run all
scoped aggregations.
===== Understanding MultiTermQuery output
A special note needs to be made about the `MultiTermQuery` class of queries.
This includes wildcards, regex, and fuzzy queries. These queries emit very
verbose responses, and are not overly structured.
Essentially, these queries rewrite themselves on a per-segment basis. If you
imagine the wildcard query `b*`, it technically can match any token that begins
with the letter "b". It would be impossible to enumerate all possible
combinations, so Lucene rewrites the query in context of the segment being
evaluated, e.g., one segment may contain the tokens `[bar, baz]`, so the query
rewrites to a BooleanQuery combination of "bar" and "baz". Another segment may
only have the token `[bakery]`, so the query rewrites to a single TermQuery for
"bakery".
Due to this dynamic, per-segment rewriting, the clean tree structure becomes
distorted and no longer follows a clean "lineage" showing how one query rewrites
into the next. At present, all we can do is apologize and suggest you
collapse the details for that query's children if it is too confusing. Luckily,
all the timing statistics are correct, just not the physical layout in the
response, so it is sufficient to just analyze the top-level MultiTermQuery and
ignore its children if you find the details too tricky to interpret.
Hopefully this will be fixed in future iterations, but it is a tricky problem to
solve and still in-progress. :)
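Following the advice above, a small helper can summarize only the top-level query nodes and ignore their (possibly distorted) children. A sketch, assuming the `query` array shape shown earlier:

```python
def top_level_times(query_profile: list) -> list:
    """Summarize only top-level query nodes, slowest first, ignoring their
    (possibly distorted) children."""
    return sorted(
        ((q["type"], q["description"], q["time_in_nanos"]) for q in query_profile),
        key=lambda row: row[2],
        reverse=True,
    )

profile = [
    {
        "type": "BooleanQuery",
        "description": "message:get message:search",
        "time_in_nanos": 11972972,
        "children": [
            {"type": "TermQuery", "description": "message:get", "time_in_nanos": 3801935},
        ],
    },
    {"type": "TermQuery", "description": "user.id:elkbee", "time_in_nanos": 163081},
]
for row in top_level_times(profile):
    print(row)
```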
[[profiling-aggregations]]
===== Profiling Aggregations
[[agg-section]]
====== `aggregations` Section
The `aggregations` section contains detailed timing of the aggregation tree
executed by a particular shard. The overall structure of this aggregation tree
will resemble your original {es} request. Let's execute the previous query again
and look at the aggregation profile this time:
[source,console]
--------------------------------------------------
GET /my-index-000001/_search
{
"profile": true,
"query": {
"term": {
"user.id": {
"value": "elkbee"
}
}
},
"aggs": {
"my_scoped_agg": {
"terms": {
"field": "http.response.status_code"
}
},
"my_global_agg": {
"global": {},
"aggs": {
"my_level_agg": {
"terms": {
"field": "http.response.status_code"
}
}
}
}
},
"post_filter": {
"match": {
"message": "search"
}
}
}
--------------------------------------------------
// TEST[s/_search/_search\?filter_path=profile.shards.aggregations/]
// TEST[continued]
This yields the following aggregation profile output:
[source,console-result]
--------------------------------------------------
{
"profile": {
"shards": [
{
"aggregations": [
{
"type": "NumericTermsAggregator",
"description": "my_scoped_agg",
"time_in_nanos": 79294,
"breakdown": {
"reduce": 0,
"build_aggregation": 30885,
"build_aggregation_count": 1,
"initialize": 2623,
"initialize_count": 1,
"reduce_count": 0,
"collect": 45786,
"collect_count": 4,
"build_leaf_collector": 18211,
"build_leaf_collector_count": 1,
"post_collection": 929,
"post_collection_count": 1
},
"debug": {
"total_buckets": 1,
"result_strategy": "long_terms"
}
},
{
"type": "GlobalAggregator",
"description": "my_global_agg",
"time_in_nanos": 104325,
"breakdown": {
"reduce": 0,
"build_aggregation": 22470,
"build_aggregation_count": 1,
"initialize": 12454,
"initialize_count": 1,
"reduce_count": 0,
"collect": 69401,
"collect_count": 4,
"build_leaf_collector": 8150,
"build_leaf_collector_count": 1,
"post_collection": 1584,
"post_collection_count": 1
},
"children": [
{
"type": "NumericTermsAggregator",
"description": "my_level_agg",
"time_in_nanos": 76876,
"breakdown": {
"reduce": 0,
"build_aggregation": 13824,
"build_aggregation_count": 1,
"initialize": 1441,
"initialize_count": 1,
"reduce_count": 0,
"collect": 61611,
"collect_count": 4,
"build_leaf_collector": 5564,
"build_leaf_collector_count": 1,
"post_collection": 471,
"post_collection_count": 1
},
"debug": {
"total_buckets": 1,
"result_strategy": "long_terms"
}
}
]
}
]
}
]
}
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\.//]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[my-index-000001\]\[0\]"/"id": $body.profile.shards.0.id/]
From the profile structure we can see that the `my_scoped_agg` is internally
being run as a `NumericTermsAggregator` (because the field it is aggregating,
`http.response.status_code`, is a numeric field). At the same level, we see a `GlobalAggregator`
which comes from `my_global_agg`. That aggregation then has a child
`NumericTermsAggregator`, which comes from the second `terms` aggregation on `http.response.status_code`.
The `time_in_nanos` field shows the time executed by each aggregation, and is
inclusive of all children. While the overall time is useful, the `breakdown`
field will give detailed stats about how the time was spent.
Some aggregations may return expert `debug` information that describes features
of the underlying execution of the aggregation that are useful for folks who
hack on aggregations, but that we don't expect to be otherwise useful. They can
vary wildly between versions, aggregations, and aggregation execution
strategies.
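To eyeball an aggregation profile, it can help to flatten the tree into rows. A minimal sketch, assuming nodes shaped like the response above:

```python
def flatten(aggs: list, depth: int = 0) -> list:
    """Flatten an aggregation profile tree into
    (depth, description, type, time_in_nanos) rows for quick inspection."""
    rows = []
    for agg in aggs:
        rows.append((depth, agg["description"], agg["type"], agg["time_in_nanos"]))
        rows.extend(flatten(agg.get("children", []), depth + 1))
    return rows

aggs = [
    {"type": "NumericTermsAggregator", "description": "my_scoped_agg", "time_in_nanos": 79294},
    {"type": "GlobalAggregator", "description": "my_global_agg", "time_in_nanos": 104325,
     "children": [
         {"type": "NumericTermsAggregator", "description": "my_level_agg", "time_in_nanos": 76876},
     ]},
]
for row in flatten(aggs):
    print(row)
```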
===== Timing Breakdown
The `breakdown` component lists detailed statistics about low-level execution:
[source,js]
--------------------------------------------------
"breakdown": {
"reduce": 0,
"build_aggregation": 30885,
"build_aggregation_count": 1,
"initialize": 2623,
"initialize_count": 1,
"reduce_count": 0,
"collect": 45786,
"collect_count": 4
}
--------------------------------------------------
// NOTCONSOLE
Timings are listed in wall-clock nanoseconds and are not normalized at all. All
caveats about the overall `time` apply here. The intention of the breakdown is
to give you a feel for A) what machinery in {es} is actually eating time, and B)
the magnitude of differences in times between the various components. Like the
overall time, the breakdown is inclusive of all children times.
The meaning of the stats are as follows:
[discrete]
===== All parameters:
[horizontal]
`initialize`::
This times how long it takes to create and initialize the aggregation before starting to collect documents.
`collect`::
This represents the cumulative time spent in the collect phase of the aggregation. This is where matching documents are passed to the aggregation and the state of the aggregator is updated based on the information contained in the documents.
`build_aggregation`::
This represents the time spent creating the shard level results of the aggregation ready to pass back to the reducing node after the collection of documents is finished.
`reduce`::
This is not currently used and will always report `0`. Currently aggregation profiling only times the shard level parts of the aggregation execution. Timing of the reduce phase will be added later.
`*_count`::
Records the number of invocations of the particular method. For example, `"collect_count": 2,`
means the `collect()` method was called on two different documents.
[[profiling-considerations]]
===== Profiling Considerations
Like any profiler, the Profile API introduces a non-negligible overhead to
search execution. The act of instrumenting low-level method calls such as
`collect`, `advance`, and `next_doc` can be fairly expensive, since these
methods are called in tight loops. Therefore, profiling should not be enabled
in production settings by default, and should not be compared against
non-profiled query times. Profiling is just a diagnostic tool.
There are also cases where special Lucene optimizations are disabled, since they
are not amenable to profiling. This could cause some queries to report larger
relative times than their non-profiled counterparts, but in general should not
have a drastic effect compared to other components in the profiled query.
[[profile-limitations]]
===== Limitations
- Profiling currently does not measure the search fetch phase nor the network
overhead.
- Profiling also does not account for time spent in the queue, merging shard
responses on the coordinating node, or additional work such as building global
ordinals (an internal data structure used to speed up search).
- Profiling statistics are currently not available for suggestions,
highlighting, or `dfs_query_then_fetch`.
- Profiling of the reduce phase of aggregation is currently not available.
- The Profiler is still highly experimental. The Profiler is instrumenting parts
of Lucene that were never designed to be exposed in this manner, and so all
results should be viewed as a best effort to provide detailed diagnostics. We
hope to improve this over time. If you find obviously wrong numbers, strange
query structures, or other bugs, please report them!
= ZNO Applicant Database: Stakeholder Requirements
Yelyzaveta Dolhova, Dmytro Rekechynskyi, Illia Tsasiuk
Version 1.0.2, 20 October 2020
:toc: macro
:toc-title: Contents
:sectnums:
:chapter-label:

<<<

[preface]
.Change history
|===
|Date |Version |Description |Authors

|21.09.2020
|1.0
|Document created
|Yelyzaveta Dolhova, Dmytro Rekechynskyi, Illia Tsasiuk

|12.10.2020
|1.0.1
|Document revisions
|Dmytro Rekechynskyi

|20.10.2020
|1.0.2
|Added user information and diagrams for clarity
|Yelyzaveta Dolhova, Dmytro Rekechynskyi, Illia Tsasiuk
|===

<<<

== 1. Introduction

This document describes the stakeholder requirements for the ZNO applicant
database. The stakeholders are the future users of the application and its
administrators.

=== 1.1. Purpose

The purpose of this document is to define the main requirements for the
functionality of the application, as well as the rules and constraints that
apply to it.

=== 1.2. Context

The list of requirements described in this document forms the basis of the
technical specification for the development of the ZNO applicant database.

=== 1.3. Abbreviations

DB - database

ZNO - external independent evaluation (the Ukrainian standardized university
entrance examination)

CS - competitive score

== 2. Product overview

The applicant DB will be a platform for submitting university admission
applications. Applicants will be able to enter their ZNO results as well as
the universities they wish to apply to. Users will access the product through
a web interface.

== 3. Business process characteristics

=== 3.1 System purpose

The applicant DB is intended to provide centralized collection and processing
of applications from applicants.

=== 3.2 User types

This document considers three types of users:

* Viewer
* Student
* Admin

==== 3.2.1 Viewer

This type of user may only view submitted applications, but can register and
authenticate as a Student, or authenticate as an Administrator.

==== 3.2.2 Student

This type inherits from Viewer and has additional rights, namely:

* Logging out of the system
* Entering ZNO results
* Submitting applications
* Deleting applications

At the same time, there are requirements to ensure data reliability:

* Being registered in the system
* Providing accurate information about oneself and one's ZNO results

This type represents a ZNO applicant.

==== 3.2.3 Administrator

This type of user is responsible for the uninterrupted operation of the DB, as
well as for data correctness. Administrator inherits from Viewer.
In addition, it is granted the following rights:

* Logging out of the system
* Editing Student users' data
* Deleting Student users' applications
* Deleting Student user accounts

At the same time, an Administrator cannot add applications, in order to
prevent fraud.

=== 3.3 User interaction

Interaction with users will take place through the web application.

=== 3.4 Business process management

The system will be managed through a dedicated section of the web application
and by a dedicated administration team consisting of administrators.

== 4. Functionality

=== 4.1. Registration function

Users will be able to register a personal account to enter data and track
changes.

=== 4.2. ZNO data entry function

To register, applicants will be able to enter their ZNO results and receive an
automatic calculation of their competitive score (CS).

=== 4.3. Application submission function

Registered users can submit applications to the programs of their choice.

=== 4.4. Application viewing function

Every user can view submitted applications.

=== 4.5. Application management function

Registered users can delete their own submitted applications, but cannot add
new ones after deletion.

=== 4.6. Moderation function

Administrators will be able to monitor the correct operation of the database
and watch over its security.

== 5. Accessibility

=== 5.1. Localization

The web application interface must be localized into Ukrainian.

=== 5.2. Software platforms

The web application must render correctly in the latest versions of all web
browsers.

=== 5.3. Interface

The web application interface must be adapted for people with various
disabilities.

== 6. Fault tolerance

The system must have a high level of fault tolerance, ensured by data backups
and by duplicating databases, servers, and so on.

== 7. Security

System users' data must be reliably protected from third parties through
encryption and authenticated access.
= Integrations
include::_attributes.adoc[]
:profile: acs
NOTE: In RHACS the Scanner component only scans those images that are not already scanned by other integrated vulnerability scanners. It means that if you have integrated Red Hat Advanced Cluster Security for Kubernetes with other vulnerability scanners, Scanner checks and uses the scanning results from the integrated scanner if available.
[#integrate_with_internal_openshift_registry]
== Integrate RHACS with the Internal Openshift Registry
Idea originally posted by Mark Roberts in https://cloud.redhat.com/blog/using-red-hat-advanced-cluster-security-with-the-openshift-registry[this great blog post], and adapted here for internal use.
Red Hat Advanced Cluster Security can be used to scan images held within OpenShift image streams (the OpenShift registry).
This can be helpful within continuous integration processes, to enable organizations to scan images for policy violations and vulnerabilities prior to pushing the image to an external container registry.
In this way, the quality of container images that get to the external registry improves, and triggered activities that result from a new image appearing in the registry only happen for a good reason.
Create a namespace and extract the default service account's token:
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
export NSINTEGRATION="integration-internal-registry"
oc new-project $NSINTEGRATION
SECRET_TOKEN_NAME=$(oc get sa -n $NSINTEGRATION default -o jsonpath='{.secrets[*]}' | jq -r .name | grep token)
PIPELINE_TOKEN=$(oc get secret -n $NSINTEGRATION $SECRET_TOKEN_NAME -o jsonpath='{.data.token}' | base64 -d)
echo $PIPELINE_TOKEN
oc policy add-role-to-user admin system:serviceaccount:$NSINTEGRATION:pipeline -n $NSINTEGRATION
----
NOTE: TODO change the permissions to this - https://github.com/redhat-cop/gitops-catalog/blob/main/advanced-cluster-security-operator/instance/overlays/internal-registry-integration/stackrox-image-puller-sa.yaml
[#integrate_with_internal_openshift_registry_config_acs]
=== Configure RHACS integration for Internal Openshift Registry
To allow the roxctl command line interface to scan the images within the OpenShift registry, add an integration of type “Generic Docker Registry'', from the Platform Configuration - Integrations menu.
Fill in the fields as shown below, giving the integration a unique name that should include the cluster name for practicality. Paste in the username and token, and select Disable TLS certificate validation if you need insecure HTTPS communication to a test cluster, for example.

Press the Test button to validate the connection, and press Save when the test is successful.
TODO: Finish the integration!!
[#integrate_acs_slack]
== Integrate RHACS Notifications with Slack
If you are using Slack, you can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Slack.
Create a new Slack app, enable incoming webhooks, and get a webhook URL.
To do this, follow the https://docs.openshift.com/acs/integration/integrate-with-slack.html#configure-slack_integrate-with-slack[Slack configuration documentation guide] to create a Slack channel and generate the webhook URL in your Slack workspace.
On the RHACS portal, navigate to Platform Configuration -> Integrations.
image::integrations/04-integration-slack.png[ACS Integrations Slack 1, 800]
Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the webhook URL.
image::integrations/05-integrations-slack.png[ACS Integrations Slack 2, 600]
NOTE: the webhook URL has the format "https://hooks.slack.com/services/ZZZ/YYY/XXX"
Click Test and check the Slack Channel:
image::integrations/07-integrations-slack.png[ACS Integrations Slack 3, 500]
NOTE: by default, notifications are disabled in all system policies. If you have not configured any other integrations, you will see "No notifiers configured!".
To enable policy notifications, select a system policy, click Actions, and then Enable Notification:
image::integrations/08-integrations-slack.png[ACS Integrations Slack 4, 700]
The selected system policy then shows the Slack notifier as enabled under its Notifications.
image::integrations/06-integrations-slack.png[ACS Integrations Slack 5, 600]
When a system policy is violated and the violation appears in Violations, a notification is sent to Slack through the Slack integration notifier, showing information about the violated system policy and further details:
image::integrations/09-integrations-slack.png[ACS Integrations Slack 6, 700]
[#integrate_acs_oauth]
== Integrate ACS with OpenShift OAuth
Red Hat Advanced Cluster Security (RHACS) Central is installed with one administrator user by default. Typically, customers request an integration with existing Identity Provider(s) (IDP).
RHACS offers different options for such an integration. This section covers the integration with OpenShift OAuth.
NOTE: It is assumed that RHACS is already installed and login to the Central UI is available.
. Login to your RHACS and select “Platform Configuration” > “Access Control”
+
image::integrations/01-oauth.png[OAuth 1, 400]
. From the Add auth provider drop-down menu, select OpenShift Auth
+
image::integrations/02-oauth.png[OAuth 2, 200]
. Enter a name for your provider and select a default role, which is assigned to any user who can authenticate.
It is recommended to select the role None, so that new accounts have no privileges in RHACS.
With rules you can assign roles to specific users based on their user ID, name, email address, or groups.
For example, the user named admin gets the Admin role assigned, while user1 gets the Continuous Integration role.
+
image::integrations/03-oauth.png[OAuth 3, 800]
. After saving, the integration appears in the list of auth providers
+
image::integrations/04-oauth.png[OAuth 4, 800]
. In a private browser window, log in to the RHACS portal and check the OpenShift OAuth auth provider that you set up
+
image::integrations/05-oauth.png[OAuth 5, 400]
. Log in first as the admin user
+
image::integrations/06-oauth.png[OAuth 6, 400]
. The admin user has the Admin role in RHACS and therefore full privileges in the ACS console. Check that you have a full view of Violations, Compliance, and the other sections.
+
image::integrations/07-oauth.png[OAuth 7, 700]
. Log out and log back in, this time as the user user1
+
image::integrations/08-oauth.png[OAuth 8, 500]
. This user has very limited privileges and, for example, can see neither Violations nor Compliance for the cluster
+
image::integrations/09-oauth.png[OAuth 9, 800]
. This user only has the Continuous Integration role, so it only has access to CVE analysis under Vulnerability Management (among others), but not to other areas such as Violations, Policies, etc.
+
image::integrations/10-oauth.png[OAuth 10, 800]
[#integrate_acs_sso]
== Integrate ACS with Red Hat Single Sign On
The following steps create some basic example objects in an existing RHSSO or Keycloak instance to test authentication against RHACS.
. Create the namespace for the single-sign-on
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
oc new-project single-sign-on
----
. Create the OperatorGroup used to install the RHSSO operator
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
annotations:
olm.providedAPIs: Keycloak.v1alpha1.keycloak.org,KeycloakBackup.v1alpha1.keycloak.org,KeycloakClient.v1alpha1.keycloak.org,KeycloakRealm.v1alpha1.keycloak.org,KeycloakUser.v1alpha1.keycloak.org
generateName: single-sign-on
name: single-sign-on
spec:
targetNamespaces:
- single-sign-on
----
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
oc apply -f sso-og.yaml
----
. Create the Subscription for the RHSSO operator
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: rhsso-operator
spec:
channel: alpha
installPlanApproval: Manual
name: rhsso-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
----
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
oc apply -f sso-subs.yaml
----
. Create an instance of Keycloak
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
name: rhacs-keycloak
namespace: single-sign-on
spec:
externalAccess:
enabled: true
instances: 1
----
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
oc apply -f sso-instance.yaml
----
. Create the Realm for the installation called **Basic**
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
name: rhacs-keycloakrealm
namespace: single-sign-on
spec:
instanceSelector:
matchLabels:
app: sso
realm:
displayName: Basic Realm
enabled: true
id: basic
realm: basic
----
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
oc apply -f sso-realm.yaml
----
. Log in to Red Hat SSO. Get the route to your RHSSO instance and log in to the administration interface.
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
oc get route keycloak -n single-sign-on --template='{{ .spec.host }}'
----
. Extract the admin password for Keycloak. The secret name is built as `credential-<keycloak-instance-name>`
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
----
oc extract secret/credential-rhacs-keycloak -n single-sign-on --to=-
----
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
# ADMIN_PASSWORD
<your password>
# ADMIN_USERNAME
admin
----
. Be sure to select your realm (Basic in our case), go to Clients, select a Client ID, and enable the Implicit Flow option
+
image::integrations/01-sso.png[SSO 1, 800]
. Get the issuer URL of your realm. It typically looks like this:
+
image::integrations/02-sso.png[SSO 2, 800]
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
https://<KEYCLOAK_URL>/auth/realms/<REALM_NAME>
----
. It's time to create test users to log in with!
In RHSSO, create two user accounts to test authentication later. Go to Users and create the users:
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
User: acsadmin
First Name: acsadmin
----
+
image::integrations/04-sso.png[SSO 3, 800]
+
[.console-output]
[source,bash,subs="attributes+,+macros"]
----
User: user1
First Name: user 1
----
+
You can set any other values for these users. However, be sure to set a password for both, after they have been created.
. To configure RHACS authentication with RHSSO, log in to your RHACS and select “Platform Configuration” > “Access Control”
+
image::integrations/03-sso.png[SSO 3, 800]
* Enter a “Name” for your provider, e.g. “Single Sign On”
* Leave the “Callback Mode” at the “Auto-Select” setting
* Enter your issuer URL
* As Client ID, enter account (or the client ID you would like to use)
* Leave the Client Secret empty and select the checkbox Do not use Client Secret, which is good enough for our tests.
* Note the two callback URLs shown in the blue box. They must be configured in Keycloak.
* Select a default role, which is assigned to any user who can authenticate.
* It is recommended to select the role None, so that new accounts have no privileges in RHACS.
. With rules you can assign roles to specific users based on their user ID, name, email address, or groups.
+
image::integrations/05-sso.png[SSO 5, 800]
+
For example, the user named acsadmin (created previously in our RHSSO) gets the Admin role assigned.
. What remains is the configuration of the redirect URLs. These URLs are shown in the ACS authentication provider configuration (see the blue field in the image above)
+
image::integrations/06-sso.png[SSO 6, 800]
. Log back into RHSSO and select “Clients” > “account”
. Under Valid Redirect URIs, enter the two URLs you saved from the blue box in the RHACS configuration.
+
image::integrations/07-sso.png[SSO 7, 800]
. Verify authentication with Single Sign On.
. Log out of the Central UI and reload the browser.
. Select Single Sign On from the drop-down menu
+
image::integrations/08-sso.png[SSO 8, 800]
. Try to log in with a valid SSO user.
Depending on the rules defined in the previous steps, the appropriate permissions should be assigned.
+
For example: if you log in as the user acsadmin, the Admin role is assigned.
// docs/topics/con_the-secured-project-structure.adoc (huntley/launcher-documentation, CC-BY-4.0)
= The {name-mission-secured} project structure
The SSO booster project contains:
* the sources for the Greeting service, which is the one we are going to secure
* a template file (`service.sso.yaml`) to deploy the SSO server
* the Keycloak adapter configuration to secure the service
// website/content/ru/security/_index.adoc (EngrRezCab/freebsd-doc, BSD-2-Clause)
---
title: "FreeBSD Security Information"
sidenav: support
---
include::shared/authors.adoc[]
include::shared/releases.adoc[]
= FreeBSD Security Information

== Introduction

This web page is intended to help both novice and experienced users with FreeBSD security. The FreeBSD Project takes security very seriously and works constantly to make the operating system as secure as possible.
== Contents

* <<how,How and where to report a FreeBSD security issue>>
* <<sec,Information about the FreeBSD Security Officer>>
* <<pol,Information handling policy>>
* <<sup,Supported FreeBSD releases>>
* <<unsup,Unsupported FreeBSD releases>>

== Additional security resources

* link:https://www.FreeBSD.org/security/charter[Charter of the Security Officer and the Security Team]
* link:advisories[List of FreeBSD security advisories]
* link:notices[List of FreeBSD errata notices]
* link:{handbook}#security-advisories[Reading FreeBSD security advisories]
[[how]]
== How and where to report a FreeBSD security issue
All FreeBSD security issues should be reported to the mailto:secteam@FreeBSD.org[FreeBSD Security Team] or, if a higher level of confidentiality is required, PGP-encrypted to the mailto:security-officer@FreeBSD.org[Security Officer Team] using the link:&enbase;/security/so_public_key.asc[Security Officer PGP key]. At a minimum, every report should contain:

* A description of the vulnerability.
* Which versions of FreeBSD are believed to be affected, where possible.
* Any viable workarounds.
* Example code, where possible.

After this information has been submitted, the Security Officer or a member of the Security Team will contact you.

=== Spam filters
Because of the high volume of spam, the main security contact addresses are spam-filtered. If you cannot reach the FreeBSD Security Officers or the Security Team because of spam filters (or suspect that your address has been filtered), please send mail to `security-officer-XXXX@FreeBSD.org`, replacing _XXXX_ with `3432`, instead of using the usual addresses. Note that this address changes periodically, so check here for the latest one. Messages to this address are forwarded to the FreeBSD Security Officer Team.
[[sec]]
== The FreeBSD Security Officer and the FreeBSD Security Team
So that the FreeBSD Project can respond quickly to vulnerability reports, the Security Officer mail alias maps to three people: the Security Officer, the Deputy Security Officer, and one member of the Core Team. Messages sent to the mailto:security-officer@FreeBSD.org[<security-officer@FreeBSD.org>] alias are therefore delivered to the following people:

[.tblbasic]
[width="100%",cols="50%,50%",]
|===
|{simon}|Security Officer
|{cperciva}|Security Officer Emeritus
|{rwatson}|Release Engineering liaison, +
TrustedBSD Project liaison, security architecture expert
|{delphij}|Deputy Security Officer
|===

The Security Officer is supported by the link:https://www.FreeBSD.org/administration/#t-secteam[FreeBSD Security Team] mailto:secteam@FreeBSD.org[<secteam@FreeBSD.org>], a small group of committers chosen by the Security Officer.
[[pol]]
== Information handling policy
As a general rule, the FreeBSD Security Officer favors full disclosure of vulnerability information after a reasonable delay that allows thorough analysis and correction of the vulnerability, as well as appropriate testing of the fix and coordination with other affected parties.

The Security Officer _will_ notify one or more of the cluster administrators of vulnerabilities that put FreeBSD Project resources at immediate risk.

The Security Officer may bring additional FreeBSD developers or outside developers into the discussion of a submitted vulnerability if their expertise is required to fully understand or fix the problem. Appropriate discretion will be exercised to minimize unnecessary distribution of information about the submitted vulnerability, and any invited experts will act in accordance with the Security Officer's instructions. In the past, experts have been brought in with extensive experience in highly complex components of the operating system, including FFS, the VM subsystem, and the network protocol stack.

If a FreeBSD release process is underway, the release engineer may also be notified of the existence of a vulnerability and its severity, so that informed decisions can be made about the release cycle and about any serious bugs in software shipping with the upcoming release. If necessary, the Security Officer will withhold the details of the vulnerability from the release engineer, limiting the disclosure to its existence and severity.

The FreeBSD Security Officer maintains close working relationships with many other organizations, including third parties that share code with FreeBSD (the OpenBSD, NetBSD, and DragonFlyBSD projects; Apple and other vendors whose software is based on FreeBSD; as well as Linux developers) and organizations that track vulnerabilities and security incidents, such as CERT. Vulnerabilities frequently extend beyond the scope of the FreeBSD implementation and (perhaps less frequently) may have broad implications for the global network community. Under such circumstances, the Security Officer may disclose vulnerability information to these third parties: if you do not want the Security Officer to do so, please state this explicitly in your submission.

Submitters should carefully and explicitly state any requirements regarding the handling of the submitted information.

If the submitter of a vulnerability wants to coordinate the disclosure process with them and/or other parties, this should be stated explicitly in any submission. In the absence of explicit requirements, the FreeBSD Security Officer will choose a disclosure schedule that reflects both the need for timely disclosure and the testing of any solutions. Submitters should be aware that if the vulnerability is being actively discussed in public forums (such as bugtraq) and actively exploited, the Security Officer may choose not to follow the proposed disclosure plan in order to give the user community the most effective protection.

Submissions may be protected using PGP. If desired, responses will also be protected using PGP.
[[sup]]
== Supported FreeBSD releases
The FreeBSD Security Officer issues security advisories for several development branches of FreeBSD: the _-STABLE branches_ and the _Security branches_. (Advisories are not issued for the _-CURRENT branch_.)

* The -STABLE branch tags have names like `RELENG_7`. The corresponding builds have names like `FreeBSD 7.0-STABLE`.
* Each FreeBSD release has an associated Security Branch. The Security Branch tags have names like `RELENG_7_0`. The corresponding builds have names like `FreeBSD 7.0-RELEASE-p1`.

Issues affecting the FreeBSD Ports Collection are covered in the http://vuxml.FreeBSD.org/[FreeBSD VuXML document].

Each branch is supported by the Security Officer for a limited time only. The branches are divided into types that determine their lifetime, as shown below.

Early Adopter::
Releases made from the -CURRENT branch will be supported by the Security Officer for a minimum of 6 months after the release.

Normal::
Releases made from a -STABLE branch will be supported by the Security Officer for a minimum of 12 months after the release, and for additional time (if required) to ensure that the next release is available for at least 3 months before support for the previous Normal release expires.

Extended::
Selected releases (normally every second release plus the last release from each -STABLE branch) will be supported by the Security Officer for a minimum of 24 months after the release, and for additional time (if required) to ensure that the next Extended release is available for at least 3 months before support for the previous Extended release expires.
[[supported-branches]]
The current designation and estimated lifetime of each currently supported branch are given below. The _Estimated end-of-life_ column gives the earliest date on which support for the branch may be dropped. Please note that these dates may be pushed further into the future, but only exceptional circumstances would lead to a branch's support being dropped before the stated date.

[.tblbasic]
[cols=",,,,",options="header",]
|===
|Branch |Release |Type |Release date |Estimated end-of-life
|RELENG_8 |n/a |n/a |n/a |last release + 2 years
|RELENG_8_3 |8.3-RELEASE |Extended |April 18, 2012 |April 30, 2014
|RELENG_9 |n/a |n/a |n/a |last release + 2 years
|RELENG_9_1 |9.1-RELEASE |Extended |December 30, 2012 |December 31, 2014
|===

Older releases are not maintained, and their users are strongly encouraged to upgrade to one of the supported versions listed above.
Advisories are sent to the following FreeBSD mailing lists:
* FreeBSD-security-notifications@FreeBSD.org
* FreeBSD-security@FreeBSD.org
* FreeBSD-announce@FreeBSD.org
The list of published advisories can be found on the link:advisories[FreeBSD Security Advisories] page.

Advisories are always signed with the Security Officer's link:https://www.FreeBSD.org/security/so_public_key.asc[PGP key] and are archived, together with their associated patches, on the http://security.FreeBSD.org/ web server in the http://security.FreeBSD.org/advisories/[advisories] and http://security.FreeBSD.org/patches/[patches] subdirectories.
[[unsup]]
== Unsupported FreeBSD releases
The following releases are no longer supported and are listed here for reference only.

[.tblbasic]
[cols=",,,,",options="header",]
|===
|Branch |Release |Type |Release date |End-of-life
|RELENG_4 |n/a |n/a |n/a |January 31, 2007
|RELENG_4_11 |4.11-RELEASE |Extended |January 25, 2005 |January 31, 2007
|RELENG_5 |n/a |n/a |n/a |May 31, 2008
|RELENG_5_3 |5.3-RELEASE |Extended |November 6, 2004 |October 31, 2006
|RELENG_5_4 |5.4-RELEASE |Normal |May 9, 2005 |October 31, 2006
|RELENG_5_5 |5.5-RELEASE |Extended |May 25, 2006 |May 31, 2008
|RELENG_6 |n/a |n/a |n/a |November 30, 2010
|RELENG_6_0 |6.0-RELEASE |Normal |November 4, 2005 |January 31, 2007
|RELENG_6_1 |6.1-RELEASE |Extended |May 9, 2006 |May 31, 2008
|RELENG_6_2 |6.2-RELEASE |Normal |January 15, 2007 |May 31, 2008
|RELENG_6_3 |6.3-RELEASE |Extended |January 18, 2008 |January 31, 2010
|RELENG_6_4 |6.4-RELEASE |Extended |November 28, 2008 |November 30, 2010
|RELENG_7 |n/a |n/a |n/a |February 28, 2013
|RELENG_7_0 |7.0-RELEASE |Normal |February 27, 2008 |April 30, 2009
|RELENG_7_1 |7.1-RELEASE |Extended |January 4, 2009 |February 28, 2011
|RELENG_7_2 |7.2-RELEASE |Normal |May 4, 2009 |June 30, 2010
|RELENG_7_3 |7.3-RELEASE |Extended |March 23, 2010 |March 31, 2012
|RELENG_7_4 |7.4-RELEASE |Extended |February 24, 2011 |February 28, 2013
|RELENG_8_0 |8.0-RELEASE |Normal |November 25, 2009 |November 30, 2010
|RELENG_8_1 |8.1-RELEASE |Extended |July 23, 2010 |July 31, 2012
|RELENG_8_2 |8.2-RELEASE |Normal |February 24, 2011 |July 31, 2012
|RELENG_9_0 |9.0-RELEASE |Normal |January 10, 2012 |March 31, 2013
|===
// content/post/learn-code-tips-from-spring-pull-requests.adoc (diguage/www.diguage.com, Apache-2.0)
---
] | 1 | 2021-09-12T15:05:44.000Z | 2021-09-12T15:05:44.000Z | ---
title: "从 Spring PR 中学习代码技巧"
date: 2021-06-27T18:20:28+08:00
draft: false
keywords: ["Java","Spring"]
tags: ["Java","Design","Architecture"]
categories: ["Java","Programming"]
thumbnail: "images/spring-framework/spring-logo.jpg"
weight: 1
---
:source-highlighter: pygments
:pygments-style: monokai
:pygments-linenums-mode: table
I often follow Spring's pull requests and issues. Among the many contributors, apart from the Spring team members themselves, https://github.com/stsypanov[stsypanov (Сергей Цыпанов)^] has left a deep impression on me. He has submitted a great many PRs to Spring (see the list at https://github.com/spring-projects/spring-framework/pulls?page=1&q=author%3Astsypanov+is%3Aclosed[Pull requests · spring-projects/spring-framework^]), and his PRs are quite distinctive: the vast majority are performance improvements, and he usually includes JMH benchmark results. Careful and rigorous work.

This week, on a whim, I went through his PRs hoping to pick up some coding tips. Here are some brief notes for future review.
== Improving the performance of `Map` iteration

See: https://github.com/spring-projects/spring-framework/pull/1891/files[SPR-17074 Replace iteration over Map::keySet with Map::entrySet by stsypanov · Pull Request #1891^]

One example from the PR:
[source%nowrap,java,indent=0,highlight=32;34]
----
// --before update------------------------------------------------------
for (String attributeName : attributes.keySet()) {
Object value = attributes.get(attributeName);
// --after update-------------------------------------------------------
for (Map.Entry<String, Object> attributeEntry : attributes.entrySet()) {
String attributeName = attributeEntry.getKey();
Object value = attributeEntry.getValue();
----
The change is small, but the performance improvement is noticeable. Looking back at my own projects, quite a lot of code still uses the pre-change style.

On this point, I also sent Spring a PR of my own: https://github.com/spring-projects/spring-framework/pull/27100[Improve performance of iteration in GroovyBeanDefinitionReader by diguage · Pull Request #27100^]. I expect it to be merged into `main` before long.

So contributing PRs to Spring and other open-source projects is really not that hard; as long as you take the time to study the code, the opportunities are there. One self-criticism, though: this PR of mine feels a bit like blind imitation, a bit like chasing KPIs. I should study the code more thoroughly and submit more constructive PRs.
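The keySet-versus-entrySet difference above can be sketched in a few lines. This is my own minimal example (the class and method names are mine, not Spring's): iterating over `entrySet()` reads each key/value pair in a single pass instead of paying an extra `get(key)` hash lookup per key.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {

    // Renders "key=value;" pairs in one pass over entrySet(),
    // avoiding the per-key map.get(key) lookup of the keySet() style.
    public static String render(Map<String, Object> attributes) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> entry : attributes.entrySet()) {
            sb.append(entry.getKey()).append('=').append(entry.getValue()).append(';');
        }
        return sb.toString();
    }

    public static String demo() {
        Map<String, Object> attrs = new LinkedHashMap<>();  // predictable iteration order
        attrs.put("a", 1);
        attrs.put("b", 2);
        return render(attrs);
    }

    public static void main(String[] args) {
        System.out.println(demo());  // a=1;b=2;
    }
}
```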
== `StringJoiner`
It was only from this PR, https://github.com/spring-projects/spring-framework/pull/22430/files[Use StringJoiner where possible to simplify String joining by stsypanov · Pull Request #22430^], that I learned Java 8 ships a built-in `StringJoiner`. Reading the `StringJoiner` source, you will find that besides the delimiter it can also take a prefix and a suffix. A handy utility class to use going forward.

A similar PR: https://github.com/spring-projects/spring-framework/pull/22539/files[Use StringJoiner where possible to simplify String joining by stsypanov · Pull Request #22539^].
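A small illustrative sketch of my own (not taken from the PRs) of the delimiter/prefix/suffix constructor mentioned above:

```java
import java.util.StringJoiner;

public class JoinerDemo {

    // Joins parts with ", " and wraps the result in "[" and "]",
    // using StringJoiner's (delimiter, prefix, suffix) constructor.
    public static String bracketed(String... parts) {
        StringJoiner joiner = new StringJoiner(", ", "[", "]");
        for (String part : parts) {
            joiner.add(part);
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(bracketed("a", "b", "c"));  // [a, b, c]
        System.out.println(bracketed());               // [] (prefix + suffix only)
    }
}
```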
== `ArrayList` initialization

See: https://github.com/spring-projects/spring-framework/pull/22418/files[Some very simple improvements regarding usage of ArrayList by stsypanov · Pull Request #22418^].

Two snippets from the PR:
[source%nowrap,java,indent=0]
----
// --before update-------------------------------------------
List<String> result = new ArrayList<>();
result.addAll(Arrays.asList(array1));
// --after update--------------------------------------------
List<String> result = new ArrayList<>(Arrays.asList(array1));
----
[source%nowrap,java,indent=0]
----
// --before update----------------------------------------
List<String> matchingHeaderNames = new ArrayList<>();
if (headers != null) {
for (String key : headers.keySet()) {
if (PatternMatchUtils.simpleMatch(pattern, key)) {
matchingHeaderNames.add(key);
}
// --after update-----------------------------------------
if (headers == null) {
return Collections.emptyList();
}
List<String> matchingHeaderNames = new ArrayList<>();
for (String key : headers.keySet()) {
if (PatternMatchUtils.simpleMatch(pattern, key)) {
matchingHeaderNames.add(key);
}
----
`new ArrayList<>(Arrays.asList(array1))` and `Collections.emptyList()` are both small code tricks worth noting. Also, in the second snippet, doing the null check up front (a guard clause) reduces the nesting depth of the code that follows.
== String concatenation

The PR https://github.com/spring-projects/spring-framework/pull/22466[Simplify String concatenation by stsypanov · Pull Request #22466^] is a tiny change, and the code itself is unremarkable. However, in the PR description the contributor explains his reasoning and links to a reference, https://alblue.bandlem.com/2016/04/jmh-stringbuffer-stringbuilder.html[StringBuffer and StringBuilder performance with JMH^], which compares string-concatenation performance in different situations in detail and is well worth reading. Its conclusions, quoted directly:
* `StringBuilder` is better than `StringBuffer`
* `StringBuilder.append(a).append(b)` is better than `StringBuilder.append(a+b)`
* `StringBuilder.append(a).append(b)` is better than `StringBuilder.append(a); StringBuilder.append(b);`
* `StringBuilder.append()` and `+` are only equivalent _provided_ that they are not nested and you don’t need to pre-sizing the builder
* Pre-sizing the `StringBuilder` is like pre-sizing an `ArrayList`; if you know the approximate size you can reduce the garbage by specifying a capacity up-front
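To make the last two points above concrete, here is a small sketch of my own (the names are mine): pre-sizing the builder and chaining `append(a).append(b)` rather than writing `append(a + b)`, which would build an intermediate `String` first.

```java
public class BuilderDemo {

    // Pre-sizes the builder (like pre-sizing an ArrayList) and chains
    // append calls instead of concatenating inside a single append.
    public static String label(String name, int id) {
        return new StringBuilder(name.length() + 12)
                .append(name)
                .append('#')
                .append(id)
                .toString();
    }

    public static void main(String[] args) {
        System.out.println(label("order", 42));  // order#42
    }
}
```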
== Array filling

The PR https://github.com/spring-projects/spring-framework/pull/22595/files[Use Arrays::fill instead of hand-written loop by stsypanov · Pull Request #22595^] is also worth a look.
[source%nowrap,java,indent=0]
----
// --before update----------------------
for (int i = 0; i < bytes.length; i++) {
bytes[i] = 'h';
}
// --after update-----------------------
Arrays.fill(bytes, (byte) 'h');
----
Why not replace three lines of code with one? Besides, many people probably do not know the `Arrays.fill(array, value)` API at all.
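A runnable mini-example of my own wrapping the same idea:

```java
import java.util.Arrays;

public class FillDemo {

    // One Arrays.fill call replaces the hand-written for loop.
    public static byte[] filled(int length) {
        byte[] bytes = new byte[length];
        Arrays.fill(bytes, (byte) 'h');
        return bytes;
    }

    public static void main(String[] args) {
        System.out.println(new String(filled(4)));  // hhhh
    }
}
```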
== `Comparator`

See: https://github.com/spring-projects/spring-framework/pull/23098/files[Simplify comparison of primitives by stsypanov · Pull Request #23098^]
[source%nowrap,java,indent=0]
----
// --before update------------------------------------
Arrays.sort(ctors, (c1, c2) -> {
int c1pl = c1.getParameterCount();
int c2pl = c2.getParameterCount();
return (c1pl < c2pl ? -1 : (c1pl > c2pl ? 1 : 0));
});
// --after update-------------------------------------
Arrays.sort(ctors, (c1, c2) -> {
int c1pl = c1.getParameterCount();
int c2pl = c2.getParameterCount();
return Integer.compare(c1pl, c2pl);
});
----
The contributor uses `Integer.compare(int, int)` to simplify the comparison code, so integers can be compared with `Integer.compare(int, int)` from now on.

In fact, it can be taken one step further:
[source%nowrap,java,indent=0]
----
// --before update----------------------------------------------------------
Arrays.sort(ctors, (c1, c2) -> {
int c1pl = c1.getParameterCount();
int c2pl = c2.getParameterCount();
return Integer.compare(c1pl, c2pl);
});
// --after update-----------------------------------------------------------
Arrays.sort(ctors, Comparator.comparingInt(Constructor::getParameterCount));
----
So I submitted a PR for that: https://github.com/spring-projects/spring-framework/pull/27102[Simplify Comparator using method references. Improve #23098 by diguage · Pull Request #27102^].
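The key-extractor comparator shape used above can be demonstrated outside Spring too. This example of mine sorts strings by length, the same pattern as `Comparator.comparingInt(Constructor::getParameterCount)`:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ComparatorDemo {

    // Sorts strings by length using a key-extractor comparator instead of
    // a hand-written (c1, c2) -> Integer.compare(...) lambda.
    public static String[] sortByLength(String... values) {
        String[] copy = values.clone();
        Arrays.sort(copy, Comparator.comparingInt(String::length));
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sortByLength("ccc", "a", "bb")));  // [a, bb, ccc]
    }
}
```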
== Array cloning

See: https://github.com/spring-projects/spring-framework/pull/23986/files[Use array.clone() instead of manual array creation by stsypanov · Pull Request #23986^].
[source%nowrap,java,indent=0]
----
// --before update--------------------------------
String[] copy = new String[state.length];
System.arraycopy(state, 0, copy, 0, state.length);
return copy;
// --after update---------------------------------
return state.clone();
----
To copy an array, I used to know only that `System.arraycopy` gets the job done efficiently; from now on, `array.clone()` can be used as well.
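A quick sketch of my own showing that `clone()` yields an independent (shallow) copy of the array:

```java
public class CloneDemo {

    // state.clone() replaces manual new String[...] + System.arraycopy.
    public static String[] copyOf(String[] state) {
        return state.clone();
    }

    public static void main(String[] args) {
        String[] state = {"NEW", "RUNNING"};
        String[] copy = copyOf(state);
        copy[0] = "DONE";              // mutating the copy...
        System.out.println(state[0]);  // NEW (...leaves the original intact)
    }
}
```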
== References

. https://alblue.bandlem.com/2016/04/jmh-stringbuffer-stringbuilder.html[StringBuffer and StringBuilder performance with JMH^]
// README.adoc (fifilyu/rmgt, MIT)
= rmgt
Linux and Windows Remote Management, a fast and convenient remote server connection tool

== Platforms

Supports remote connections to Linux (via SSH) and Windows (via RDP).

.Linux
* Supports certificate (public-key) login
* Supports password login

.Windows
* Supports password login only
== Building and installing

=== Install Google Test
ArchLinux:: sudo pacman -S cmake gcc gtest sshpass openssl
Ubuntu:: sudo apt-get install cmake g++ libgtest-dev libssl-dev sshpass
Ubuntu 18.04:: sudo apt-get install cmake g++ googletest libssl-dev sshpass
CentOS:: sudo yum install epel-release && sudo yum install cmake gcc-c++ gtest-devel sshpass openssl-devel
.On Ubuntu, the unit-test dependency must be resolved manually
----
mkdir /tmp/gtest_build
cd /tmp/gtest_build
cmake /usr/src/gtest  # or: cmake /usr/src/googletest/googletest/
make
sudo cp libgtest.a libgtest_main.a /usr/local/lib/
cd ~
rm -rf /tmp/gtest_build
----
.Build rmgt
----
$ git clone https://github.com/fifilyu/rmgt.git
$ mkdir rmgt_build
$ cd rmgt_build
$ cmake ../rmgt
$ make
$ make test
$ cp bin/rmgt <destination>  # e.g. cp bin/rmgt /home/fifilyu/bin/rmgt
----
[NOTE]
It is recommended to add the directory containing rmgt (e.g. /home/fifilyu/bin) to the `$PATH` environment variable, or simply to copy the binary to `/usr/local/bin`, so that the `rmgt` command can be found from anywhere.

== Configuration File

Host information is saved in the current user's home directory in a file named `.rmgt.conf`, e.g. /home/fifilyu/.rmgt.conf.
=== Security

rmgt stores host information, including *passwords*, in plain text.

[WARNING]
For now, if you are at all sensitive about security issues, use rmgt *with caution*.
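Until that changes, one simple mitigation (a generic Unix measure, not something rmgt does for you) is to make sure the config file is readable only by its owner:

----
# Create ~/.rmgt.conf if it does not exist yet, then restrict it to the owner
touch ~/.rmgt.conf
chmod 600 ~/.rmgt.conf
----

This does not remove the plain-text risk, but it at least keeps other local users from reading your passwords.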
== Screen Resolution When Connecting to Windows Hosts

The default resolution is 800 x 600. To change it, simply edit the `"-g800x600 "` string in main.cxx.
== Usage

=== Install the Required Packages

.Package descriptions
openssh:: SSH protocol tool suite
rdesktop:: Remote Desktop Protocol (RDP) client for Windows
sshpass:: non-interactive SSH password tool

.Install the packages
ArchLinux:: sudo pacman -S openssh rdesktop sshpass
Ubuntu:: sudo apt-get install openssh-client rdesktop sshpass
CentOS:: sudo yum install openssh-clients rdesktop sshpass (requires the EPEL repository)
=== Adding a Host

==== Linux

*Key-based login*

`rmgt -n usa241 -o linux -i 142.4.114.xxx -p 22 -u root -d "US proxy line"`
or
`rmgt -n usa241 -o linux -i 142.4.114.xxx`
*Password login*

`rmgt -n usa241 -o linux -i 142.4.114.xxx -p 22 -u root -w password -d "US proxy line"`
or
`rmgt -n usa241 -o linux -i 142.4.114.xxx -w password`
==== Windows

*Password login*

`rmgt -n ali44 -o windows -i 121.41.45.xxx -p 3389 -u administrator -w password -d "Aliyun"`
or
`rmgt -n ali44 -o windows -i 121.41.45.xxx`
=== Connecting to a Host

Linux: must be run in a terminal: `rmgt -c usa241`

Windows: run `rmgt -c ali44` either in a terminal or from an X desktop
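For the curious, here is a hedged sketch of the kind of command a password login presumably expands to under the hood. sshpass and ssh are the documented dependencies, but the exact flags are my assumption (not taken from the source), and 192.0.2.10 is a placeholder address:

----
# Hypothetical sketch of how rmgt might assemble an SSH password login;
# the function name and invocation are illustrative, not actual rmgt code.
build_ssh_cmd() {
    local user=$1 host=$2 port=$3 password=$4
    echo "sshpass -p ${password} ssh -p ${port} ${user}@${host}"
}

build_ssh_cmd root 192.0.2.10 22 secret
# → sshpass -p secret ssh -p 22 root@192.0.2.10
----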
=== Removing a Host

`rmgt -r usa241`

`rmgt -r ali44`

== Detailed Usage

Run `rmgt -h` for the full help text:
----
rmgt(remote management) v2.0.1 - a quick and convenient remote server connection tool

Usage:
    rmgt -V
    rmgt -c <hostname> [-v]
    rmgt -l
    rmgt -s <hostname>
    rmgt -r <hostname>
    rmgt -n <hostname> -o <os> -i <ip-address> -p [port[22|3389]] -u [user[root|administrator]] -w [password] -d [description]

Options:
    -c <hostname>      host to connect to
    -l                 list all hosts
    -s <hostname>      show information for the given host
    -r <hostname>      remove the host from the config file
    -n <hostname>      when adding a host, set its name
    -o <os>            when adding a host, set the OS; allowed values: linux windows
    -i <ip-address>    when adding a host, set the IP address
    -p [port]          when adding a host, set the remote port; linux default: 22, windows default: 3389
    -u [user]          when adding a host, set the login user; linux default: root, windows default: administrator
    -w [password]      when adding a host, set the password; default: empty
    -d [description]   when adding a host, set the description; default: empty
    -h                 show this help message
    -v                 show connection details
    -V                 show version information
----
// Source: build.gradle_tasks.adoc from the crontabpy/documentation repository (CC-BY-4.0 license)

:sourcesdir: ../../../../source
[[build.gradle_tasks]]
==== Build Tasks

The executable units in Gradle are _tasks_. They are defined both inside plugins and in the build script itself. This section covers the CUBA-specific tasks whose parameters can be configured in `build.gradle`.
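For readers new to Gradle, the following generic `build.gradle` fragment illustrates the two cases mentioned above: defining a task in the build script and configuring an already existing one. It is illustrative only; the task name and properties are made up and are not part of the CUBA plugin API.

[source,groovy]
----
// build.gradle -- generic illustration, not CUBA-specific

// Defining a new task directly in the build script
task printInfo {
    description = 'Prints a short info message'
    doLast {
        println "Project: ${project.name}"
    }
}

// Configuring a task that already exists (e.g. one contributed by a plugin)
printInfo {
    group = 'help'
}
----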
include::build.gradle_tasks/build.gradle_buildInfo.adoc[]
include::build.gradle_tasks/build.gradle_buildUberJar.adoc[]
include::build.gradle_tasks/build.gradle_buildWar.adoc[]
include::build.gradle_tasks/build.gradle_buildWidgetSet.adoc[]
include::build.gradle_tasks/build.gradle_createDb.adoc[]
include::build.gradle_tasks/build.gradle_debugWidgetSet.adoc[]
include::build.gradle_tasks/build.gradle_deploy.adoc[]
include::build.gradle_tasks/build.gradle_deployThemes.adoc[]
include::build.gradle_tasks/build.gradle_deployWar.adoc[]
include::build.gradle_tasks/build.gradle_enhance.adoc[]
include::build.gradle_tasks/build.gradle_restart.adoc[]
include::build.gradle_tasks/build.gradle_setupTomcat.adoc[]
include::build.gradle_tasks/build.gradle_start.adoc[]
include::build.gradle_tasks/build.gradle_startDb.adoc[]
include::build.gradle_tasks/build.gradle_stop.adoc[]
include::build.gradle_tasks/build.gradle_stopDb.adoc[]
include::build.gradle_tasks/build.gradle_tomcat.adoc[]
include::build.gradle_tasks/build.gradle_updateDb.adoc[]
include::build.gradle_tasks/build.gradle_zipProject.adoc[]