The Raid on Port Dover was an episode during the War of 1812. American troops crossed Lake Erie to capture or destroy stocks of grain and destroy mills at Port Dover, Ontario, which were used to provide flour for British troops stationed on the Niagara Peninsula. At the instigation of Lieutenant Colonel John B. Campbell and without sanction from his superiors or the government of the United States, the Americans also destroyed private houses and other property, prompting British commanders to demand reprisals in other theatres of the war. To some degree, the burning of Washington by the British later in the year was influenced by the American actions at Port Dover.
Background
In the spring of 1814, the Americans were preparing to make an attack across the Niagara River. As the Americans held undisputed control of Lake Erie, the troops at Presque Isle on the lake's southern shore were no longer needed to protect the improvised shipyard there, and were ordered to join the main American army at Buffalo, New York.
The idea of raiding the Canadian settlements near Long Point and destroying the mills there en route to Buffalo occurred both to Captain Arthur Sinclair, commanding the armed vessels of the United States Navy on Lake Erie, and Lieutenant Colonel John B. Campbell, commanding the troops at Presque Isle.
It took some days to assemble the expedition, in particular to obtain volunteers from the Pennsylvania Militia, and Sinclair later considered that the delay and publicity prevented the raid from achieving surprise. On 13 May 750 troops, composed of detachments of regulars (including artillery) and Pennsylvania militia, were embarked aboard Sinclair's ships. The expedition was accompanied by several renegade Canadian guides, including Abraham Markle.
Raid
In the late afternoon of 14 May, the Americans landed near Port Dover. There was a minor skirmish between American militiamen and some Canadian militiamen who were trying to remove goods from a storehouse.
The Americans remained where they had disembarked during the night of 14 May. The next day, they marched to the village of Dover, where they drew up in formal line of battle, although there was no opposition. On Campbell's orders they then set fire to every building in the settlement: twenty houses, three flour mills, three sawmills, three distilleries, twelve barns and some other buildings. All livestock was shot and the carcasses left to rot. Some of Sinclair's sailors took the hind ends of the slaughtered hogs, but other than these opportune thefts, there was no plundering. Although the local women and children were allowed to remove their personal possessions from their houses before they were set on fire, they were able to remove only small items.
Much of the destroyed property belonged to Robert Nichol, who was noted for his support of the British authorities. It was targeted at the instigation of Markle, whom Nichol had had expelled from the local Legislative Assembly.
The Americans then re-embarked, but landed again the next day to burn another mill and a sawmill. They then returned to Presque Isle. During the entire raid, the only opposition had been some scattered Canadian militia, and a troop of the 19th Light Dragoons. The British had either received word of the impending raid, or had taken precautions against the possibility, and almost all the flour in the settlement (several hundred barrels) had already been removed to safety.
Aftermath
Sinclair and several other American officers (particularly among the militia) were enraged by Campbell's actions. Campbell insisted, both at the time and subsequently in a note to the British Major General Phineas Riall, commanding the division on the Niagara Peninsula, that he personally ordered the destruction without any sanction from his superiors or the United States government, in retaliation for the burning of the American settlements of Havre de Grace (on Chesapeake Bay), Lewiston and Buffalo the previous year.
The official notes of protest from Riall and complaints by Sinclair and other Americans prompted the United States Army to hold a Court of Enquiry, presided over by Brigadier General Winfield Scott, on 20 June. The court concluded that Campbell was justified in burning the mills and distilleries which might have been used to supply flour and spirits to the British forces, and that some adjacent buildings were unavoidably involved. However, Campbell was found to have made an error of judgement in destroying private houses and other buildings. No further disciplinary action was taken at the time, and Campbell was mortally wounded at the Battle of Chippawa on 5 July.
British response
Lieutenant General Sir George Prevost, the Governor General of the Canadas and commander in chief of the forces there, wrote on 2 June to Vice Admiral Sir Alexander Cochrane, commander of the North American Station of the Royal Navy, without noting that Campbell had not acted under orders:
...in consequence of the late disgraceful conduct of the American troops in the wanton destruction of private property on the north shores of Lake Erie, in order that if the war with the United States continues you may, should you judge it advisable, assist in inflicting that measure of retaliation which shall deter the enemy from a repetition of similar outrages.
Cochrane in turn wrote from his station in Bermuda on 18 June to John Wilson Croker, the Secretary to the Admiralty:
I am most decidedly of opinion that the readiest way to attain this object is to bring home to the supporters of the Government which authorizes this unnatural system of warfare a full share of its dreadful calamities and to this end, I have issued to the commanding officer of H.M. blockading squadron an order, accompanied by a secret memorandum...
ORDER FOR RETALIATION
No. 1
By the Honorable Alexander Cochrane, K.B. &c, &c, &c.
Whereas... it appears that the American troops in Upper Canada have committed the most wanton and unjustifiable outrages on the unoffending inhabitants by burning their mills and houses, and by a general devastation of private property...
You are hereby required and directed to destroy and lay waste such towns and districts as you may find assailable. You will hold strictly in view the conduct of the American army towards His Majesty's unoffending Canadian subjects and you will spare merely the lives of the unarmed inhabitants of the United States.
In the appended secret memorandum, Cochrane modified these severe orders by instructing his commanders to spare places which furnished supplies to British ships or troops, or to levy contributions in return for forbearance, in proportion to the value of goods and buildings spared. This code of conduct was followed by the British during the Raid on Alexandria.
References
Footnotes
Citations
Sources
Zaslow, Morris (ed.), The Defended Border, Macmillan of Canada, 1964.
External links
Conflicts in 1814
Battles of the War of 1812 in Ontario
Battles on the Niagara Frontier
1814 in Upper Canada
Military raids
History of Norfolk County, Ontario
May 1814 events
|
```csharp
#if !WINDOWS_UWP
// When the .NET scripting backend is enabled and C# projects are built,
// the assembly that this file is part of is still built for the player,
// even though the assembly itself is marked as a test assembly (this is not
// expected, because test assemblies should not be included in player builds).
// Because the .NET backend was deprecated in 2018 and removed in 2019, and this
// issue will likely persist for 2018, it is worked around by wrapping all
// play mode tests in this check.
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Input.UnityInput;
using Microsoft.MixedReality.Toolkit.Teleport;
using Microsoft.MixedReality.Toolkit.UI;
using Microsoft.MixedReality.Toolkit.Utilities;
using NUnit.Framework;
using System.Collections;
using System.Linq;
using UnityEditor;
using UnityEngine;
using UnityEngine.TestTools;
namespace Microsoft.MixedReality.Toolkit.Tests
{
/// <summary>
/// Tests to verify pointer state and pointer direction
/// </summary>
public class PointerTests : BasePlayModeTests
{
// SDK/Features/UX/Prefabs/Pointers/DefaultControllerPointer.prefab
private const string LinePointerGuid = "d5b94136462644c9873bb3347169ae7e";
private static readonly string LinePointerPrefab = AssetDatabase.GUIDToAssetPath(LinePointerGuid);
// SDK/Features/UX/Prefabs/Pointers/ParabolicPointer.prefab
private const string CurvePointerGuid = "c4fd3c6fc7ff484eb434775066e7f327";
private static readonly string CurvePointerPrefab = AssetDatabase.GUIDToAssetPath(CurvePointerGuid);
/// <summary>
/// Set initial state before each test.
/// </summary>
/// <returns>enumerator</returns>
/// <remarks>
/// Note that, in order to catch incorrect reliances on identity camera transforms early on,
/// this Setup() sets the playspace transform to an arbitrary pose. This can be overridden where
/// appropriate for an individual test by starting off with, e.g., <see cref="TestUtilities.PlayspaceToOriginLookingForward"/>.
/// However, it is preferable to retain the arbitrary pose, and use the helpers within TestUtilities
/// to align test objects with the camera.
/// For example, to place an object 8 meters in front of the camera, set its global position to:
/// TestUtilities.PositionRelativeToPlayspace(0.0f, 0.0f, 8.0f);
/// See usage of these helpers throughout the tests within this file, e.g. <see cref="TestSpherePointerInsideGrabbable"/>.
/// See also comments at <see cref="TestUtilities.PlayspaceToArbitraryPose"/>.
/// </remarks>
public override IEnumerator Setup()
{
yield return base.Setup();
TestUtilities.PlayspaceToArbitraryPose();
yield return null;
}
#region Tests
/// <summary>
/// Tests that line pointers and curve pointers work as expected by using the default prefab implementations.
/// A LinePointer should cast a straight ray, while curve pointers should collide along the curve via ray-marching.
/// </summary>
[UnityTest]
public IEnumerator TestLinePointers()
{
BaseEventSystem.enableDanglingHandlerDiagnostics = false;
var linePointer = CreatePointerPrefab<LinePointer>(LinePointerPrefab,
out IMixedRealityInputSource lineInputSource, out IMixedRealityController lineController);
var curvePointer = CreatePointerPrefab<TeleportPointer>(CurvePointerPrefab,
out IMixedRealityInputSource curveInputSource, out IMixedRealityController curveController);
Assert.IsNotNull(linePointer);
Assert.IsNotNull(curvePointer);
// Simulate pushing "up" on joystick axis to activate teleport pointer lines
CoreServices.InputSystem?.RaisePositionInputChanged(curveInputSource,
curveController.ControllerHandedness,
curvePointer.TeleportInputAction,
new Vector2(0.0f, 1.0f));
var hitObject = GameObject.CreatePrimitive(PrimitiveType.Cube);
hitObject.transform.position = Vector3.forward * 3.0f;
hitObject.transform.localScale = Vector3.one * 0.1f;
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
// Confirm the line pointer is colliding with the cube which is straight in front
Assert.IsTrue(hitObject == linePointer.Result.CurrentPointerTarget);
Assert.IsNull(curvePointer.Result.CurrentPointerTarget);
hitObject.transform.position = new Vector3(0.0f, -0.8f, 2.0f);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
// Confirm the teleport pointer is colliding with the cube which is in front but down
Assert.IsTrue(hitObject == curvePointer.Result.CurrentPointerTarget);
Assert.IsNull(linePointer.Result.CurrentPointerTarget);
// Clean up our dummy controllers and objects from the input & teleport system
CoreServices.InputSystem.RaiseSourceLost(lineInputSource, lineController);
CoreServices.InputSystem.RaiseSourceLost(curveInputSource, curveController);
CoreServices.TeleportSystem.RaiseTeleportCanceled(curvePointer, null);
GameObjectExtensions.DestroyGameObject(linePointer.gameObject);
GameObjectExtensions.DestroyGameObject(curvePointer.gameObject);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
BaseEventSystem.enableDanglingHandlerDiagnostics = true;
}
/// <summary>
/// Test pointers are correctly enabled when interacting with colliders that are visible, but whose
/// bounds are outside the camera FOV.
/// </summary>
[UnityTest]
public IEnumerator TestPointerFOVLargeCollider()
{
var rightHand = new TestHand(Handedness.Right);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
cube.AddComponent<NearInteractionGrabbable>();
cube.AddComponent<NearInteractionTouchableVolume>();
yield return rightHand.Show(TestUtilities.PositionRelativeToPlayspace(Vector3.zero));
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
var spherePointer = PointerUtils.GetPointer<SpherePointer>(Handedness.Right);
var pokePointer = PointerUtils.GetPointer<PokePointer>(Handedness.Right);
yield return TestPointerFOVLargeColliderHelper(spherePointer, cube, rightHand);
yield return TestPointerFOVLargeColliderHelper(pokePointer, cube, rightHand);
rightHand.Hide();
GameObject.Destroy(cube);
}
/// <summary>
/// Tests that pointers behave correctly when interacting with objects inside and outside
/// its field of view
/// </summary>
[UnityTest]
public IEnumerator TestPointerFOV()
{
var rightHand = new TestHand(Handedness.Right);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
cube.AddComponent<NearInteractionGrabbable>();
cube.AddComponent<NearInteractionTouchableVolume>();
yield return rightHand.Show(TestUtilities.PositionRelativeToPlayspace(Vector3.zero));
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
var spherePointer = PointerUtils.GetPointer<SpherePointer>(Handedness.Right);
var pokePointer = PointerUtils.GetPointer<PokePointer>(Handedness.Right);
yield return TestPointerFOVHelper(spherePointer, cube, rightHand);
yield return TestPointerFOVHelper(pokePointer, cube, rightHand);
rightHand.Hide();
GameObject.Destroy(cube);
}
/// <summary>
/// Tests that sphere pointer grabs object when hand is inside a giant grabbable
/// </summary>
[UnityTest]
public IEnumerator TestSpherePointerInsideGrabbable()
{
var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
TestUtilities.PlaceRelativeToPlayspace(cube.transform);
cube.AddComponent<NearInteractionGrabbable>();
var rightHand = new TestHand(Handedness.Right);
yield return rightHand.Show(TestUtilities.PositionRelativeToPlayspace(Vector3.zero));
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
var spherePointer = PointerUtils.GetPointer<SpherePointer>(Handedness.Right);
Assert.IsNotNull(spherePointer, "Right hand does not have a sphere pointer");
Assert.IsTrue(spherePointer.IsInteractionEnabled, "Sphere pointer should be enabled because it is near grabbable cube and visible, even if inside a giant cube.");
GameObject.Destroy(cube);
}
/// <summary>
/// Tests that right after hand being instantiated, the pointer's direction
/// is in the same general direction as the forward direction of the camera
/// </summary>
[UnityTest]
public IEnumerator TestHandPointerDirectionToCameraDirection()
{
var inputSystem = PlayModeTestUtilities.GetInputSystem();
// Raise the hand
var rightHand = new TestHand(Handedness.Right);
// Set initial position and show hand
Vector3 initialPos = TestUtilities.PositionRelativeToPlayspace(new Vector3(0.01f, 0.1f, 0.5f));
yield return rightHand.Show(initialPos);
// Return first hand controller that is right and source type hand
var handController = inputSystem.DetectedControllers.First(x => x.ControllerHandedness == Handedness.Right && x.InputSource.SourceType == InputSourceType.Hand);
Assert.IsNotNull(handController);
// Get the line pointer from the hand controller
var linePointer = handController.InputSource.Pointers.OfType<LinePointer>().First();
Assert.IsNotNull(linePointer);
Vector3 linePointerOrigin = linePointer.Position;
// Check that the line pointer origin is within half a centimeter of the initial position of the hand
var distance = Vector3.Distance(initialPos, linePointerOrigin);
Assert.LessOrEqual(distance, 0.005f);
// Check that the angle between the line pointer ray and camera forward does not exceed 40 degrees
float angle = Vector3.Angle(linePointer.Rays[0].Direction, CameraCache.Main.transform.forward);
Assert.LessOrEqual(angle, 40.0f);
}
/// <summary>
/// Tests that right after motion controller being instantiated, the pointer's direction
/// is in the same general direction as the forward direction of the camera
/// </summary>
[UnityTest]
public IEnumerator TestMotionControllerPointerDirectionToCameraDirection()
{
var inputSystem = PlayModeTestUtilities.GetInputSystem();
// Switch to motion controller
var iss = PlayModeTestUtilities.GetInputSimulationService();
var oldSimMode = iss.ControllerSimulationMode;
iss.ControllerSimulationMode = ControllerSimulationMode.MotionController;
// Raise the motion controller
var rightMotionController = new TestMotionController(Handedness.Right);
// Set initial position and show motion controller
Vector3 initialPos = TestUtilities.PositionRelativeToPlayspace(new Vector3(0.01f, 0.1f, 0.5f));
yield return rightMotionController.Show(initialPos);
// Return first motion controller that is right and source type controller
var motionController = inputSystem.DetectedControllers.First(x => x.ControllerHandedness == Handedness.Right && x.InputSource.SourceType == InputSourceType.Controller);
Assert.IsNotNull(motionController);
// Get the line pointer from the motion controller
var linePointer = motionController.InputSource.Pointers.OfType<ShellHandRayPointer>().First();
Assert.IsNotNull(linePointer);
Vector3 linePointerOrigin = linePointer.Position;
// Check that the line pointer origin is within half a centimeter of the initial position of the motion controller
var distance = Vector3.Distance(initialPos, linePointerOrigin);
Assert.LessOrEqual(distance, 0.005f);
// Check that the angle between the line pointer ray and camera forward does not exceed 40 degrees
float angle = Vector3.Angle(linePointer.Rays[0].Direction, CameraCache.Main.transform.forward);
Assert.LessOrEqual(angle, 40.0f);
// Restore the input simulation profile
iss.ControllerSimulationMode = oldSimMode;
yield return null;
}
/// <summary>
/// Test that the same PokePointer
/// 1) is not destroyed,
/// 2) is retrieved and re-used from the pointer cache, and
/// 3) still clicks buttons and provides input after re-use.
/// </summary>
[UnityTest]
public IEnumerator TestPointerCaching()
{
TestButtonUtilities.InstantiateDefaultButton(TestButtonUtilities.DefaultButtonType.DefaultPushButton,
out Interactable interactable,
out Transform translateTargetObject);
Vector3 targetStartPosition = translateTargetObject.localPosition;
// Subscribe to interactable's on click so we know the click went through
bool wasClicked = false;
interactable.OnClick.AddListener(() => { wasClicked = true; });
var rightHand = new TestHand(Handedness.Right);
yield return rightHand.Show(TestUtilities.PositionRelativeToPlayspace(Vector3.right));
var rightPokePointer = PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right);
Assert.IsNotNull(rightPokePointer);
Assert.IsFalse(rightPokePointer.DestroyOnSourceLost);
yield return TestButtonUtilities.TestClickPushButton(interactable.transform, targetStartPosition, translateTargetObject);
Assert.IsTrue(wasClicked);
Assert.IsNotNull(rightPokePointer);
Assert.IsNull(PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right));
wasClicked = false;
yield return rightHand.Show(TestUtilities.PositionRelativeToPlayspace(Vector3.right));
// Confirm that we are re-using the same pointer gameobject that was stored in the cache
Assert.AreEqual(rightPokePointer, PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right));
yield return TestButtonUtilities.TestClickPushButton(interactable.transform, targetStartPosition, translateTargetObject);
Assert.IsTrue(wasClicked);
Assert.IsNotNull(rightPokePointer);
Assert.IsNull(PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right));
}
/// <summary>
/// As GameObjects, pointers can be destroyed at any time.
/// Utilize BaseControllerPointer.DestroyOnSourceLost property to test pointer cache does not break with null references (aka auto-destroyed pointers).
/// </summary>
[UnityTest]
public IEnumerator TestDestroyOnSourceLostPointer()
{
TestButtonUtilities.InstantiateDefaultButton(TestButtonUtilities.DefaultButtonType.DefaultPushButton,
out Interactable interactable,
out Transform translateTargetObject);
Vector3 targetStartPosition = translateTargetObject.localPosition;
// Subscribe to interactable's on click so we know the click went through
bool wasClicked = false;
interactable.OnClick.AddListener(() => { wasClicked = true; });
var rightHand = new TestHand(Handedness.Right);
yield return rightHand.Show(TestUtilities.PositionRelativeToPlayspace(Vector3.right));
var rightPokePointer = PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right);
rightPokePointer.DestroyOnSourceLost = true;
yield return TestButtonUtilities.TestClickPushButton(interactable.transform, targetStartPosition, translateTargetObject);
Assert.IsTrue(wasClicked);
Assert.IsTrue(rightPokePointer == null);
Assert.IsNull(PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right));
wasClicked = false;
yield return TestButtonUtilities.TestClickPushButton(interactable.transform, targetStartPosition, translateTargetObject);
Assert.IsTrue(wasClicked);
}
/// <summary>
/// Test that buttons still work when pointer cache is disabled.
/// Pointers that do not auto-destroy themselves on source lost should be destroyed by the input device manager creating the pointers
/// </summary>
[UnityTest]
public IEnumerator TestDisabledPointerCache()
{
TestButtonUtilities.InstantiateDefaultButton(TestButtonUtilities.DefaultButtonType.DefaultPushButton,
out Interactable interactable,
out Transform translateTargetObject);
Vector3 targetStartPosition = translateTargetObject.localPosition;
// Subscribe to interactable's on click so we know the click went through
bool wasClicked = false;
interactable.OnClick.AddListener(() => { wasClicked = true; });
PlayModeTestUtilities.GetInputSimulationService().EnablePointerCache = false;
var rightHand = new TestHand(Handedness.Right);
yield return rightHand.Show(TestUtilities.PositionRelativeToPlayspace(Vector3.right));
var rightPokePointer = PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right);
Assert.IsNotNull(rightPokePointer);
Assert.IsFalse(rightPokePointer.DestroyOnSourceLost);
yield return TestButtonUtilities.TestClickPushButton(interactable.transform, targetStartPosition, translateTargetObject);
Assert.IsTrue(wasClicked);
Assert.IsTrue(rightPokePointer == null);
Assert.IsNull(PlayModeTestUtilities.GetPointer<PokePointer>(Handedness.Right));
wasClicked = false;
yield return TestButtonUtilities.TestClickPushButton(interactable.transform, targetStartPosition, translateTargetObject);
Assert.IsTrue(wasClicked);
}
#endregion
#region Helpers
private static T CreatePointerPrefab<T>(string prefabPath,
out IMixedRealityInputSource inputSource,
out IMixedRealityController controller)
where T : MonoBehaviour, IMixedRealityPointer
{
var pointerPrefab = AssetDatabase.LoadAssetAtPath<Object>(prefabPath);
var result = PrefabUtility.InstantiatePrefab(pointerPrefab) as GameObject;
T pointer = result.GetComponent<T>();
inputSource = CoreServices.InputSystem.RequestNewGenericInputSource(
pointer.PointerName,
new IMixedRealityPointer[] { pointer });
// use MouseController as dummy wrapper controller
controller = new MouseController(TrackingState.Tracked, Handedness.Right, inputSource);
if (inputSource != null)
{
for (int i = 0; i < inputSource.Pointers.Length; i++)
{
inputSource.Pointers[i].Controller = controller;
}
}
CoreServices.InputSystem.RaiseSourceDetected(inputSource, controller);
CoreServices.InputSystem?.RaiseSourceTrackingStateChanged(inputSource, controller, TrackingState.Tracked);
return pointer;
}
private IEnumerator TestPointerFOVHelper(IMixedRealityPointer myPointer, GameObject cube, TestHand testHand)
{
// Cube in front of camera
cube.transform.SetPositionAndRotation(Vector3.forward * 1f, Quaternion.identity);
TestUtilities.PlaceRelativeToPlayspace(cube.transform);
cube.transform.localScale = Vector3.one * 0.1f;
yield return testHand.MoveTo(cube.transform.position);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
Assert.IsTrue(myPointer.IsInteractionEnabled, $"Pointer {myPointer.PointerName} should be enabled, cube in front camera. Cube size {cube.transform.localScale} location {cube.transform.position}.");
Vector3 playspaceUp = TestUtilities.DirectionRelativeToPlayspace(Vector3.up);
// Cube above camera
cube.transform.Translate(playspaceUp * 10);
yield return testHand.MoveTo(cube.transform.position);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
Assert.IsFalse(myPointer.IsInteractionEnabled, $"Pointer {myPointer.PointerName} should NOT be enabled, cube behind camera. Cube size {cube.transform.localScale} location {cube.transform.position}.");
// For sphere and poke pointers, test that setting IgnoreCollidersNotInFOV works
if (myPointer is SpherePointer spherePointer)
{
spherePointer.IgnoreCollidersNotInFOV = false;
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
Assert.IsTrue(myPointer.IsInteractionEnabled, $"Pointer {myPointer.PointerName} should be enabled because IgnoreCollidersNotInFOV is false.");
spherePointer.IgnoreCollidersNotInFOV = true;
}
else if (myPointer is PokePointer pokePointer)
{
pokePointer.IgnoreCollidersNotInFOV = false;
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
Assert.IsTrue(myPointer.IsInteractionEnabled, $"Pointer {myPointer.PointerName} should be enabled because IgnoreCollidersNotInFOV is false.");
pokePointer.IgnoreCollidersNotInFOV = true;
}
// Move it back to be visible again
cube.transform.Translate(playspaceUp * -10f);
yield return testHand.MoveTo(cube.transform.position);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
Assert.IsTrue(myPointer.IsInteractionEnabled, $"Pointer {myPointer.PointerName} should be enabled because it is near object inside of FOV. Cube size {cube.transform.localScale} location {cube.transform.position}.");
}
private IEnumerator TestPointerFOVLargeColliderHelper(IMixedRealityPointer myPointer, GameObject cube, TestHand testHand)
{
Pose cubePose = new Pose(cube.transform.position, cube.transform.rotation);
Pose worldPose = TestUtilities.PlaceRelativeToPlayspace(cubePose.position, cubePose.rotation);
cube.transform.SetPositionAndRotation(worldPose.position, worldPose.rotation);
cube.transform.localScale = new Vector3(3, 3, 0.05f);
float[] yOffsets = new float[] { -1f, 0f, 1f };
float[] xOffsets = new float[] { -1f, 0f, 1f };
float[] zOffsets = new float[] { 1f, -1f };
var collider = cube.GetComponent<BoxCollider>();
foreach (var zOffset in zOffsets)
{
foreach (var yOffset in yOffsets)
{
foreach (var xOffset in xOffsets)
{
Vector3 worldOffset = TestUtilities.DirectionRelativeToPlayspace(new Vector3(xOffset, yOffset, zOffset));
var cameraPos = CameraCache.Main.transform.position;
var pos = cameraPos + worldOffset;
cube.transform.position = pos;
yield return testHand.MoveTo(cube.transform.position);
yield return PlayModeTestUtilities.WaitForInputSystemUpdate();
bool isInFov = CameraCache.Main.IsInFOVCached(collider);
Assert.IsTrue(zOffset == 1f ? myPointer.IsInteractionEnabled : !myPointer.IsInteractionEnabled,
$"Pointer {myPointer.PointerName} in incorrect state. IsInFOV {isInFov} Cube size {cube.transform.localScale} offset {new Vector3(xOffset, yOffset, zOffset)} location {cube.transform.position}.");
}
}
}
cube.transform.SetPositionAndRotation(cubePose.position, cubePose.rotation);
}
#endregion
}
}
#endif
```
|
The Kehoe Cup is an annual hurling competition organised by the Leinster Council of the Gaelic Athletic Association (GAA) since 1977 for second- and third-tier inter-county teams in the province of Leinster in Ireland. Nowadays, teams from the provinces of Ulster and Connacht are eligible to compete; formerly, teams from third-level institutions within the three provinces also did. The competition runs each January. While sponsored by Bord na Móna, it was known as the "Bord na Móna Kehoe Cup".
The Kehoe Cup is part of a series of GAA tournaments known as the Leinster GAA Series, along with the Walsh Cup and the O'Byrne Cup. The original purpose of these competitions was to raise funds to supplement an injury scheme for the players. Nowadays, the funds generated are used to alleviate hardship among players, mentors and families who are in financial difficulty. The funds are administered throughout the twelve counties of Leinster. Apart from this, the competitions provide an opportunity for the county teams to select their panel for the year and prepare for the National Hurling League (NHL).
Since the inception of the Kehoe Cup in 1977, a total of 14 teams have won the tournament. Westmeath is the most successful team with 9 titles.
History
In 1954, the Leinster Council established a new inter-county tournament in an effort to raise funds to supplement the medical bills of players who were in financial difficulty. This scheme, known as the Players' Injury Fund, was the first of its kind to be offered by a provincial GAA council. Originally known as the Leinster Accident Fund Tournament, it started as a knockout competition for the 12 counties in Leinster. During the fifties and sixties, the hurling tournament, which became known as the Walsh Cup, was dominated by the stronger hurling counties of Kilkenny and Wexford. As a result of this, the Walsh Cup was not contested for much of the seventies. In 1977, a second cup was presented to the Leinster Council for an alternative hurling competition. The cup was dedicated to former GAA President, Michael Kehoe (Wexford), who died on 8 January 1977. The tournament thus became known as the Kehoe Cup. The Leinster Council decided to alternate it with the Walsh Cup between the stronger and developing counties for the Players' Injury Fund. In its inaugural year, it was contested by the stronger hurling counties and was won by Wexford who beat Kilkenny in the final by 2–13 to 1–15 on 21 August 1977 in Enniscorthy, County Wexford. There was a break in the Walsh Cup from 1983 to 1986. When it recommenced in 1987, it was decided by the Leinster Council that the Walsh Cup would be used exclusively for the stronger hurling counties and the Kehoe Cup for the developing counties.
Format
The Kehoe Cup was a straight knockout tournament, with each match played as a single leg. The pairings are drawn at random without seeding, and the draw usually takes place in November or December of the previous year. Sixteen teams are drawn to compete in the first round. The eight winning teams from the first round progress to the quarter-finals, while the losing teams are drawn against each other to compete for the Kehoe Cup Shield. If a match ends in a draw, it is settled in extra time. However, if the score remains level at the end of extra time, a replay takes place, and so on until a winner is found.
The format of the competition remained virtually unchanged since its inception in 1977 until 2015. The most significant change to the tournament was the entry of teams from outside of Leinster. Many of the second- and third-tier inter-county teams in Connacht and Ulster now compete in the Kehoe Cup. Another change to the competition was the entry of teams from third-level institutions. Colleges situated within any of the three provinces were eligible to compete in the Kehoe Cup.
From 2015 to 2018 the tournament was restricted to Leinster county and college teams, and was run on a group system, with the group winners playing in the final.
Since 2019, only county teams from Leinster and Ulster compete, with no third-level sides. The format and number of teams vary each year.
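The traditional knockout stage described above (a random, unseeded draw, with first-round winners advancing to the Cup quarter-finals and losers dropping into the Shield) can be sketched in a few lines of Python. This is only an illustration of the draw procedure; the function names are hypothetical and not part of any GAA system.

```python
import random

def draw_first_round(teams):
    """Pair the entrants at random, without seeding, as in the Kehoe Cup draw."""
    teams = list(teams)
    random.shuffle(teams)
    # Adjacent shuffled teams form the first-round pairings.
    return [(teams[i], teams[i + 1]) for i in range(0, len(teams), 2)]

def advance(pairings, results):
    """Split first-round pairings into Cup qualifiers and Shield entrants.

    `results` maps each pairing tuple to the team that won it (after any
    extra time and replays have produced a winner).
    """
    winners = [results[p] for p in pairings]
    losers = [a if results[(a, b)] == b else b for (a, b) in pairings]
    return winners, losers
```

With sixteen entrants this yields eight pairings, whose eight winners go to the quarter-finals and eight losers into the Shield draw.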
Sponsorship
In December 2011, the Leinster Council announced a new partnership with Bord na Móna which would provide the competition with a sponsor for the first time in its then 34-year history. This three-year sponsorship deal began in January 2012 and helps fund what is now known as the Bord na Móna Leinster GAA Series, which includes the Kehoe Cup and Shield, Walsh Cup and Shield, and the O'Byrne Cup and Shield. The sponsorship also helps to finance the Leinster GAA's hardship fund, which is the only one of its kind offered by a provincial GAA council and has been in existence since 1954. In the past, this fund has helped local communities, families and players to finance medical bills, rebuild homes lost through tragic circumstances and made financial payments to assist disabled players.
Records and statistics
Roll of honour
No competition in 1979, 1984, 1985 or 2021. Kilkenny and Wexford qualified for the 1980 final but it was never played.
Finals
Kehoe Cup Shield
The Kehoe Cup Shield was a competition between the teams that lost in the first round of the Kehoe Cup. The competition was first held in 2009, when Kildare beat Louth in the final by 4–16 to 1–02.
The 2011 tournament was won by TCD who beat Louth in the final by 1–21 to 2–14.
It was revived in 2019 as a separate tournament.
Records and statistics
Roll of honour
Finals
AET: Abandoned in extra time.
References
External links
The Leinster GAA archive
The official Leinster GAA website
The official GAA website
Hurling competitions in Leinster
Hurling cup competitions
Inter-county hurling competitions
|
```objective-c
#import <Foundation/Foundation.h>
@interface PodsDummy_SubtleVolume : NSObject
@end
@implementation PodsDummy_SubtleVolume
@end
```
|
```scss
html.rtl {
#loadingmodal .modal-card .modal-card-icon, #ajaxerr .modal-card .modal-card-icon {
float: right;
}
#loadingmodal .modal-card .modal-card-content, #ajaxerr .modal-card .modal-card-content {
margin-right: 160px;
margin-left: 0;
text-align: right;
}
.alert-success, .alert-danger, .alert-info, .alert-warning, .alert-legal {
padding-right: 65px;
padding-left: 15px;
}
.alert {
text-align: right;
}
.alert-success::before, .alert-danger::before, .alert-info::before, .alert-warning::before, .alert-legal::before {
left: inherit;
right: 0;
}
}
@media(min-width: $screen-sm-min) {
html.rtl .nameparts-form-group {
input, select {
border-radius: $border-radius-base;
}
input:not(:first-child), select:not(:first-child) {
border-bottom-right-radius: 0;
border-top-right-radius: 0;
}
input:not(:last-child), select:not(:last-child) {
border-bottom-left-radius: 0;
border-top-left-radius: 0;
}
}
}
@media (max-width: 700px) {
html.rtl {
#loadingmodal .modal-card .modal-card-content, #ajaxerr .modal-card .modal-card-content {
margin-right: 0;
}
}
}
```
|
```go
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package rpcmetrics
import "sync"
// normalizedEndpoints is a cache for endpointName -> safeName mappings.
type normalizedEndpoints struct {
names map[string]string
maxSize int
defaultName string
normalizer NameNormalizer
mux sync.RWMutex
}
func newNormalizedEndpoints(maxSize int, normalizer NameNormalizer) *normalizedEndpoints {
return &normalizedEndpoints{
maxSize: maxSize,
normalizer: normalizer,
names: make(map[string]string, maxSize),
}
}
// normalize looks up the name in the cache, if not found it uses normalizer
// to convert the name to a safe name. If called with more than maxSize unique
// names it returns "" for all other names beyond those already cached.
func (n *normalizedEndpoints) normalize(name string) string {
n.mux.RLock()
norm, ok := n.names[name]
l := len(n.names)
n.mux.RUnlock()
if ok {
return norm
}
if l >= n.maxSize {
return ""
}
return n.normalizeWithLock(name)
}
func (n *normalizedEndpoints) normalizeWithLock(name string) string {
norm := n.normalizer.Normalize(name)
n.mux.Lock()
defer n.mux.Unlock()
// cache may have grown while we were not holding the lock
if len(n.names) >= n.maxSize {
return ""
}
n.names[name] = norm
return norm
}
```
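A usage sketch of the capped-cache pattern above. `NameNormalizer` and `dashNormalizer` here are illustrative stand-ins defined locally so the example is self-contained; the real interface lives alongside `normalizedEndpoints` in the rpcmetrics package:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// NameNormalizer maps an arbitrary endpoint name to a metric-safe name.
type NameNormalizer interface {
	Normalize(name string) string
}

// dashNormalizer is a toy normalizer replacing slashes with dashes.
type dashNormalizer struct{}

func (dashNormalizer) Normalize(name string) string {
	return strings.ReplaceAll(name, "/", "-")
}

// cache mirrors normalizedEndpoints: at most maxSize distinct names are
// remembered; anything beyond that normalizes to "".
type cache struct {
	names   map[string]string
	maxSize int
	norm    NameNormalizer
	mux     sync.RWMutex
}

func (c *cache) normalize(name string) string {
	c.mux.RLock()
	v, ok := c.names[name]
	n := len(c.names)
	c.mux.RUnlock()
	if ok {
		return v
	}
	if n >= c.maxSize {
		return ""
	}
	c.mux.Lock()
	defer c.mux.Unlock()
	// the cache may have grown while the lock was released
	if len(c.names) >= c.maxSize {
		return ""
	}
	v = c.norm.Normalize(name)
	c.names[name] = v
	return v
}

func main() {
	c := &cache{names: map[string]string{}, maxSize: 2, norm: dashNormalizer{}}
	fmt.Println(c.normalize("get/user"))       // get-user
	fmt.Println(c.normalize("get/order"))      // get-order
	fmt.Println(c.normalize("get/cart") == "") // true: cache is full
}
```

Returning `""` past the cap trades completeness for bounded memory, which suits long-running metrics collection.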
|
```ruby
class Direnv < Formula
desc "Load/unload environment variables based on $PWD"
homepage "path_to_url"
url "path_to_url"
sha256 your_sha256_hash
license "MIT"
head "path_to_url", branch: "master"
bottle do
sha256 arm64_sonoma: your_sha256_hash
sha256 arm64_ventura: your_sha256_hash
sha256 arm64_monterey: your_sha256_hash
sha256 sonoma: your_sha256_hash
sha256 ventura: your_sha256_hash
sha256 monterey: your_sha256_hash
sha256 x86_64_linux: your_sha256_hash
end
depends_on "go" => :build
depends_on "bash"
def install
system "make", "install", "PREFIX=#{prefix}", "BASH_PATH=#{Formula["bash"].opt_bin}/bash"
end
test do
system bin/"direnv", "status"
end
end
```
|
```typescript
// Convert router.asPath to a URLSearchParams object
// example: /dynamic/[slug]?foo=bar -> { foo: 'bar' }
export function asPathToSearchParams(asPath: string): URLSearchParams {
  return new URL(asPath, 'path_to_url').searchParams;
}
```
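A standalone sketch of the helper above. The base URL (`"http://localhost"` here) is an assumption: it only anchors relative parsing, and any valid origin works since just the query string of `asPath` is consulted:

```typescript
// Self-contained version with an explicit base origin.
function asPathToSearchParams(asPath: string): URLSearchParams {
  return new URL(asPath, "http://localhost").searchParams;
}

const params = asPathToSearchParams("/dynamic/[slug]?foo=bar&page=2");
console.log(params.get("foo"));  // "bar"
console.log(params.get("page")); // "2"
```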
|
Events from the year 1713 in Canada.
Incumbents
French Monarch: Louis XIV
British and Irish Monarch: Anne
Governors
Governor General of New France: Philippe de Rigaud Vaudreuil
Colonial Governor of Louisiana: Jean-Baptiste Le Moyne de Bienville then Antoine de la Mothe Cadillac
Governor of Nova Scotia: Francis Nicholson
Governor of Plaisance: Philippe Pastour de Costebelle
Events
The Treaty of Utrecht. The French cede Newfoundland and the Hudson Bay region. They retain Cape Breton Island and Île Saint-Jean (Prince Edward Island).
Treaty of Utrecht cedes French Acadia, Newfoundland, Hudson Bay and the "country of the Iroquois" to England.
The Treaty of Utrecht ends Queen Anne's War, confirming British possession of Hudson Bay, Newfoundland and Acadia (except Île-Royale Cape Breton Island). France starts building Fortress Louisbourg near the eastern tip of Île-Royale.
Births
Jean Baptiste de La Vérendrye born September 3, the eldest son of Pierre Gaultier de Varennes, sieur de La Vérendrye (died 1736).
Michel Bénard, councillor of the conseil souverain.
Deaths
There were no relevant deaths during this year in Canada.
References
1710s in Canada
1713 in New France
Canada
|
```python
# yellowbrick.classifier.classification_report
# Visual classification report for classifier scoring.
#
# Author: Rebecca Bilbro
# Author: Benjamin Bengfort
# Author: Neal Humphrey
# Author: Allyssa Riley
# Author: Larry Gray
# Created: Wed May 3 18:15:42 2017 -0400
#
# For license information, see LICENSE.txt
#
# ID: classification_report.py [5388065] neal@nhumphrey.com $
"""
Visual classification report for classifier scoring.
"""
##########################################################################
## Imports
##########################################################################
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_fscore_support
from yellowbrick.style import find_text_color
from yellowbrick.style.palettes import color_sequence
from yellowbrick.exceptions import YellowbrickValueError
from yellowbrick.classifier.base import ClassificationScoreVisualizer
##########################################################################
## Classification Report
##########################################################################
PERCENT = "percent"
CMAP_UNDERCOLOR = "w"
CMAP_OVERCOLOR = "#2a7d4f"
SCORES_KEYS = ("precision", "recall", "f1", "support")
class ClassificationReport(ClassificationScoreVisualizer):
"""
Classification report that shows the precision, recall, F1, and support scores
for the model. Integrates numerical scores as well as a color-coded heatmap.
Parameters
----------
estimator : estimator
A scikit-learn estimator that should be a classifier. If the model is
not a classifier, an exception is raised. If the internal model is not
fitted, it is fit when the visualizer is fitted, unless otherwise specified
by ``is_fitted``.
ax : matplotlib Axes, default: None
The axes to plot the figure on. If not specified the current axes will be
used (or generated if required).
classes : list of str, default: None
The class labels to use for the legend ordered by the index of the sorted
classes discovered in the ``fit()`` method. Specifying classes in this
manner is used to change the class names to a more specific format or
to label encoded integer classes. Some visualizers may also use this
field to filter the visualization for specific classes. For more advanced
usage specify an encoder rather than class labels.
cmap : string, default: ``'YlOrRd'``
Specify a colormap to define the heatmap of the predicted class
against the actual class in the classification report.
support: {True, False, None, 'percent', 'count'}, default: None
Specify if support will be displayed. It can be further defined by
whether support should be reported as a raw count or percentage.
encoder : dict or LabelEncoder, default: None
A mapping of classes to human readable labels. Often there is a mismatch
between desired class labels and those contained in the target variable
passed to ``fit()`` or ``score()``. The encoder disambiguates this mismatch
ensuring that classes are labeled correctly in the visualization.
is_fitted : bool or str, default="auto"
Specify if the wrapped estimator is already fitted. If False, the estimator
will be fit when the visualizer is fit, otherwise, the estimator will not be
modified. If "auto" (default), a helper method will check if the estimator
is fitted before fitting it again.
force_model : bool, default: False
Do not check to ensure that the underlying estimator is a classifier. This
will prevent an exception when the visualizer is initialized but may result
in unexpected or unintended behavior.
colorbar : bool, default: True
Specify if the color bar should be present
fontsize : int or None, default: None
Specify the font size of the x and y labels
kwargs : dict
Keyword arguments passed to the visualizer base classes.
Examples
--------
>>> from yellowbrick.classifier import ClassificationReport
>>> from sklearn.linear_model import LogisticRegression
>>> viz = ClassificationReport(LogisticRegression())
>>> viz.fit(X_train, y_train)
>>> viz.score(X_test, y_test)
>>> viz.show()
Attributes
----------
classes_ : ndarray of shape (n_classes,)
The class labels observed while fitting.
class_count_ : ndarray of shape (n_classes,)
Number of samples encountered for each class during fitting.
score_ : float
An evaluation metric of the classifier on test data produced when
``score()`` is called. This metric is between 0 and 1 -- higher scores are
generally better. For classifiers, this score is usually accuracy, but
ensure you check the underlying model for more details about the score.
scores_ : dict of dicts
Outer dictionary composed of precision, recall, f1, and support scores with
inner dictionaries specifying the values for each class listed.
"""
def __init__(
self,
estimator,
ax=None,
classes=None,
cmap="YlOrRd",
support=None,
encoder=None,
is_fitted="auto",
force_model=False,
colorbar=True,
fontsize=None,
**kwargs
):
super(ClassificationReport, self).__init__(
estimator,
ax=ax,
classes=classes,
encoder=encoder,
is_fitted=is_fitted,
force_model=force_model,
**kwargs
)
self.colorbar = colorbar
self.support = support
self.cmap = color_sequence(cmap)
self.cmap.set_over(color=CMAP_OVERCOLOR)
self.cmap.set_under(color=CMAP_UNDERCOLOR)
self._displayed_scores = [key for key in SCORES_KEYS]
self.fontsize = fontsize
if support not in {None, True, False, "percent", "count"}:
raise YellowbrickValueError(
"'{}' is an invalid argument for support, use None, True, "
"False, 'percent', or 'count'".format(support)
)
if not support:
self._displayed_scores.remove("support")
def score(self, X, y):
"""
Generates the Scikit-Learn classification report.
Parameters
----------
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features
y : ndarray or Series of length n
An array or series of target or class values
Returns
-------
score_ : float
Global accuracy score
"""
# Call super to check if fitted and to compute self.score_
super(ClassificationReport, self).score(X, y)
# Labels must be a data type that works with np.isnan
labels = range(len(self.classes_))
y_pred = self.predict(X)
scores = precision_recall_fscore_support(y, y_pred, labels=labels)
# Calculate the percentage for the support metric
# and store the percent in place of raw support counts
self.support_score_ = scores[-1]
scores = list(scores)
scores[-1] = scores[-1] / scores[-1].sum()
# Create a mapping composed of precision, recall, F1, and support
# to their respective values
scores = map(lambda s: dict(zip(self.classes_, s)), scores)
self.scores_ = dict(zip(SCORES_KEYS, scores))
# Remove support scores if not required
if not self.support:
self.scores_.pop("support")
self.draw()
return self.score_
def draw(self):
"""
Renders the classification report across each axis.
"""
# Create display grid
cr_display = np.zeros((len(self.classes_), len(self._displayed_scores)))
# For each class row, append columns for precision, recall, f1, and support
for idx, cls in enumerate(self.classes_):
for jdx, metric in enumerate(self._displayed_scores):
cr_display[idx, jdx] = self.scores_[metric][cls]
# Set up the dimensions of the pcolormesh
# NOTE: pcolormesh accepts grids that are (N+1,M+1)
X, Y = (
np.arange(len(self.classes_) + 1),
np.arange(len(self._displayed_scores) + 1),
)
self.ax.set_ylim(bottom=0, top=cr_display.shape[0])
self.ax.set_xlim(left=0, right=cr_display.shape[1])
# Get the human readable labels
labels = self._labels()
if labels is None:
labels = self.classes_
# Fetch the grid labels from the classes in correct order; set ticks.
xticklabels = self._displayed_scores
yticklabels = labels[::-1]
yticks = np.arange(len(labels)) + 0.5
xticks = np.arange(len(self._displayed_scores)) + 0.5
self.ax.set(yticks=yticks, xticks=xticks)
self.ax.set_xticklabels(
xticklabels, rotation=45, fontsize=self.fontsize
)
self.ax.set_yticklabels(yticklabels, fontsize=self.fontsize)
# Set data labels in the grid, enumerating over class, metric pairs
# NOTE: X and Y are one element longer than the classification report
# so skip the last element to label the grid correctly.
for x in X[:-1]:
for y in Y[:-1]:
# Extract the value and the text label
value = cr_display[x, y]
svalue = "{:0.3f}".format(value)
# change the svalue for support (when y == 3) because we want
# to label it as the actual support value, not the percentage
if y == 3:
if self.support != PERCENT:
svalue = self.support_score_[x]
# Determine the grid and text colors
base_color = self.cmap(value)
text_color = find_text_color(base_color)
# Add the label to the middle of the grid
cx, cy = x + 0.5, y + 0.5
self.ax.text(cy, cx, svalue, va="center", ha="center", color=text_color)
# Draw the heatmap with colors bounded by the min and max of the grid
# NOTE: I do not understand why this is Y, X instead of X, Y it works
# in this order but raises an exception with the other order.
g = self.ax.pcolormesh(
Y, X, cr_display, vmin=0, vmax=1, cmap=self.cmap, edgecolor="w"
)
# Add the color bar
if self.colorbar:
plt.colorbar(g, ax=self.ax) # TODO: Could use self.fig now
# Return the axes being drawn on
return self.ax
def finalize(self, **kwargs):
"""
Adds a title and sets the axis labels correctly. Also calls tight layout
to ensure that no parts of the figure are cut off in the final visualization.
Parameters
----------
kwargs: generic keyword arguments.
Notes
-----
Generally this method is called from show and not directly by the user.
"""
# Set the title of the classification report
self.set_title("{} Classification Report".format(self.name))
# Set the tick marks appropriately
self.ax.set_xticks(np.arange(len(self._displayed_scores)) + 0.5)
self.ax.set_yticks(np.arange(len(self.classes_)) + 0.5)
self.ax.set_xticklabels(self._displayed_scores, rotation=45)
self.ax.set_yticklabels(self.classes_)
self.fig.tight_layout()
def classification_report(
estimator,
X_train,
y_train,
X_test=None,
y_test=None,
ax=None,
classes=None,
cmap="YlOrRd",
support=None,
encoder=None,
is_fitted="auto",
force_model=False,
show=True,
colorbar=True,
fontsize=None,
**kwargs
):
"""Classification Report
Displays precision, recall, F1, and support scores for the model.
Integrates numerical scores as well as color-coded heatmap.
Parameters
----------
estimator : estimator
A scikit-learn estimator that should be a classifier. If the model is
not a classifier, an exception is raised. If the internal model is not
fitted, it is fit when the visualizer is fitted, unless otherwise specified
by ``is_fitted``.
X_train : ndarray or DataFrame of shape n x m
A feature array of n instances with m features the model is trained on.
Used to fit the visualizer and also to score the visualizer if test splits are
not directly specified.
y_train : ndarray or Series of length n
An array or series of target or class values. Used to fit the visualizer and
also to score the visualizer if test splits are not specified.
X_test : ndarray or DataFrame of shape n x m, default: None
An optional feature array of n instances with m features that the model
is scored on if specified, using X_train as the training data.
y_test : ndarray or Series of length n, default: None
An optional array or series of target or class values that serve as actual
labels for X_test for scoring purposes.
ax : matplotlib Axes, default: None
The axes to plot the figure on. If not specified the current axes will be
used (or generated if required).
classes : list of str, default: None
The class labels to use for the legend ordered by the index of the sorted
classes discovered in the ``fit()`` method. Specifying classes in this
manner is used to change the class names to a more specific format or
to label encoded integer classes. Some visualizers may also use this
field to filter the visualization for specific classes. For more advanced
usage specify an encoder rather than class labels.
cmap : string, default: ``'YlOrRd'``
Specify a colormap to define the heatmap of the predicted class
against the actual class in the classification report.
support: {True, False, None, 'percent', 'count'}, default: None
Specify if support will be displayed. It can be further defined by
whether support should be reported as a raw count or percentage.
encoder : dict or LabelEncoder, default: None
A mapping of classes to human readable labels. Often there is a mismatch
between desired class labels and those contained in the target variable
passed to ``fit()`` or ``score()``. The encoder disambiguates this mismatch
ensuring that classes are labeled correctly in the visualization.
is_fitted : bool or str, default='auto'
Specify if the wrapped estimator is already fitted. If False, the estimator
will be fit when the visualizer is fit, otherwise, the estimator will not be
modified. If 'auto' (default), a helper method will check if the estimator
is fitted before fitting it again.
force_model : bool, default: False
Do not check to ensure that the underlying estimator is a classifier. This
will prevent an exception when the visualizer is initialized but may result
in unexpected or unintended behavior.
show: bool, default: True
If True, calls ``show()``, which in turn calls ``plt.show()`` however you cannot
call ``plt.savefig`` from this signature, nor ``clear_figure``. If False, simply
calls ``finalize()``
colorbar : bool, default: True
Specify if the color bar should be present
fontsize : int or None, default: None
Specify the font size of the x and y labels
kwargs : dict
Keyword arguments passed to the visualizer base classes.
Returns
-------
viz : ClassificationReport
Returns the fitted, finalized visualizer
"""
# Instantiate the visualizer
visualizer = ClassificationReport(
estimator=estimator,
ax=ax,
classes=classes,
cmap=cmap,
support=support,
encoder=encoder,
is_fitted=is_fitted,
force_model=force_model,
colorbar=colorbar,
fontsize=fontsize,
**kwargs
)
# Fit and transform the visualizer (calls draw)
visualizer.fit(X_train, y_train)
# Score the visualizer
if X_test is not None and y_test is not None:
visualizer.score(X_test, y_test)
elif X_test is not None or y_test is not None:
raise YellowbrickValueError(
"both X_test and y_test are required if one is specified"
)
else:
visualizer.score(X_train, y_train)
# Draw the final visualization
if show:
visualizer.show()
else:
visualizer.finalize()
# Return the visualizer
return visualizer
```
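The nested `scores_` dict built in `score()` above pairs metric names with per-class values via two `dict(zip(...))` passes. The same packing can be seen in isolation, with hard-coded stand-ins for the arrays that sklearn's `precision_recall_fscore_support` would return:

```python
SCORES_KEYS = ("precision", "recall", "f1", "support")
classes = ["cat", "dog"]

# stand-ins for precision_recall_fscore_support output: one array per metric,
# one value per class (these numbers are illustrative, not from a real model)
raw = [[0.9, 0.8], [0.7, 0.6], [0.79, 0.69], [10, 20]]

# inner zip: map each metric's array onto the class labels
per_class = [dict(zip(classes, s)) for s in raw]
# outer zip: map the metric names onto the per-class dicts
scores_ = dict(zip(SCORES_KEYS, per_class))

print(scores_["precision"]["cat"])  # 0.9
print(scores_["support"]["dog"])    # 20
```

This is why `draw()` can look up any cell with `self.scores_[metric][cls]`.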
|
The Court of Tax Appeals is a special court of limited jurisdiction, at the same level as the Court of Appeals. The court consists of eight Associate Justices and one Presiding Justice. The Court of Tax Appeals is located on Senator Miriam P. Defensor-Santiago Avenue (formerly Agham Road), Diliman, Quezon City in Metro Manila.
History
The Court of Tax Appeals was originally created by virtue of Republic Act No. 1125 which was enacted on June 16, 1954, composed of three (3) Judges with Mariano B. Nable as the first Presiding Judge. With the passage of Republic Act Number 9282 (R.A. 9282) on April 23, 2004, the CTA became an appellate Court, equal in rank to the Court of Appeals. Under Section 1 of the new law, the Court is headed by a Presiding Justice and assisted by five (5) Associate Justices. They shall have the same qualifications, rank, category, salary, emoluments and other privileges, be subject to the same inhibitions and disqualifications and enjoy the same retirement and other benefits as those provided for under existing laws for the Presiding Justice and Associate Justices of the Court of Appeals. A decision of a division of the CTA may be appealed to the CTA en banc, and the latter's decision may further be appealed by verified petition for certiorari to the Supreme Court.
On June 16, 2019, the Court celebrated its 65th Founding Anniversary.
Expanded jurisdiction
On June 12, 2008, Republic Act Number 9503 (R.A. 9503) was enacted and took effect on July 5, 2008. This enlarged the organizational structure of the CTA by creating a Third Division and providing for three additional justices. Hence, the CTA is now composed of one Presiding Justice and eight Associate Justices. The CTA may sit en banc or in three divisions with each division consisting of three justices. The CTA, as one of the courts comprising the Philippine Judiciary, is under the supervision of the Supreme Court of the Philippines.
Previously, only decisions, judgments, rulings or inaction of the Commissioner of Internal Revenue, the Commissioner of Customs, the Secretary of Finance, the Secretary of Trade and Industry, or the Secretary of Agriculture, involving the National Internal Revenue Code and the Tariff and Customs Code on civil matters, were appealable to the Court of Tax Appeals. The expanded jurisdiction transferred to the CTA the jurisdiction of the Regional Trial Courts and the Court of Appeals over matters involving criminal violation and collection of revenues under the National Internal Revenue Code and Tariff and Customs Code. It also acquired jurisdiction over cases involving local and real property taxes, which used to be with the Regional Trial Court and the Court of Appeals.
2008 organizational expansion
Gloria Macapagal Arroyo on June 12, 2008, signed into law Republic Act 9503 (An Act Enlarging the Organizational Structure of the Court of Tax Appeals, Amending for the Purpose Certain Sections of the Law Creating the Court of Tax Appeals, and for Other Purposes), which added three more members (and one more division) to the court. The new law was enacted "to expedite disposition of tax-evasion cases and increase revenues for government to fund social services, food, oil and education subsidies and infrastructure."
Incumbent justices
The Court of Tax Appeals consists of a Presiding Justice and eight Associate Justices. Among the current members of the Court, Erlinda Piñera-Uy is the longest-serving justice; the most recent justice to enter the court is Lanee S. Cui-David, whose tenure began on November 28, 2021.
Presiding Justice
Associate Justice
Appointed by President Benigno Aquino III
Appointed by President Rodrigo Duterte
Appointed by President Ferdinand Marcos, Jr.
Divisions
Court demographics
By law school
By appointing President
By gender
By tenure
Court of Tax Appeals Justices since June 11, 1954
The rule of seniority
The Associate Justices of the Court are usually ordered according to the date of their appointment. There are no official ramifications as to this ranking, although the order determines the seating arrangement on the bench and is duly considered in all matters of protocol. Within the discretion of the Court, the ranking may also factor into the composition of the divisions of the Court.
The incumbent Justice with the earliest date of appointment is deemed the Senior Associate Justice. The Senior Associate Justice has no constitutional or statutory duties, but usually acts as Acting Presiding Justice during the absence of the Presiding Justice. The Senior Associate Justice is also usually designated as the chairperson of the second division of the Court.
The following became Senior Associate Justices in their tenure in the Court of Tax Appeals:
See also
Supreme Court of the Philippines
Court of Appeals of the Philippines
Sandiganbayan
Philippines
Political history of the Philippines
Constitution of the Philippines
References
The Official Website of The Court of Tax Appeals
The Organizational Structure of The Court of Tax Appeals
Republic Act 1125, An Act Creating the Court of Tax Appeals (CTA)
Republic Act 9282, An Act Expanding the Jurisdiction Of the Court of Tax Appeals (CTA)
Republic Act 9503, An Act Enlarging The Organizational Structure of the Court of Tax Appeals (CTA)
Notes
External links
Philippines: Gov.Ph: About the Philippines – Justice category
The Philippines Court of Tax Appeals – Official website
List of CTA Justices – List of Justices of the CTA
Appellate courts
Courts in the Philippines
Taxation in the Philippines
Tax courts
1954 establishments in the Philippines
Courts and tribunals established in 1954
|
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "path_to_url">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
<title>Global candelas</title>
<link rel="stylesheet" href="../../../../../doc/src/boostbook.css" type="text/css">
<meta name="generator" content="DocBook XSL Stylesheets V1.79.1">
<link rel="home" href="../../../index.html" title="The Boost C++ Libraries BoostBook Documentation Subset">
<link rel="up" href="../../../boost_units/Reference.html#header.boost.units.systems.si.luminous_intensity_hpp" title="Header <boost/units/systems/si/luminous_intensity.hpp>">
<link rel="prev" href="candela.html" title="Global candela">
<link rel="next" href="weber.html" title="Global weber">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
<td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../../boost.png"></td>
<td align="center"><a href="../../../../../index.html">Home</a></td>
<td align="center"><a href="../../../../../libs/libraries.htm">Libraries</a></td>
<td align="center"><a href="path_to_url">People</a></td>
<td align="center"><a href="path_to_url">FAQ</a></td>
<td align="center"><a href="../../../../../more/index.htm">More</a></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="candela.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../boost_units/Reference.html#header.boost.units.systems.si.luminous_intensity_hpp"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="weber.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a>
</div>
<div class="refentry">
<a name="boost.units.si.candelas"></a><div class="titlepage"></div>
<div class="refnamediv">
<h2><span class="refentrytitle">Global candelas</span></h2>
<p>boost::units::si::candelas</p>
</div>
<h2 xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv-title">Synopsis</h2>
<div xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv"><pre class="synopsis"><span class="comment">// In header: <<a class="link" href="../../../boost_units/Reference.html#header.boost.units.systems.si.luminous_intensity_hpp" title="Header <boost/units/systems/si/luminous_intensity.hpp>">boost/units/systems/si/luminous_intensity.hpp</a>>
</span><span class="keyword">static</span> <span class="keyword">const</span> <span class="identifier">luminous_intensity</span> candelas<span class="special">;</span></pre></div>
</div>
<table xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<td align="right"><div class="copyright-footer">Copyright © Matthias Christian Schabel, Steven Watanabe<p>
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at <a href="path_to_url" target="_top">path_to_url</a>)
</p>
</div></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="candela.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../boost_units/Reference.html#header.boost.units.systems.si.luminous_intensity_hpp"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="weber.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a>
</div>
</body>
</html>
```
|
If You Tickle Us is a UK Jewish blog and Twitter account run by a Haredi blogger, which mainly covers issues concerning the Haredi communities of Stamford Hill and Golders Green.
Coverage of sexual abuse scandal
The blog's popularity flourished following allegations of a planned cover-up of sexual abuse committed by a prominent rabbi in the Union of Orthodox Hebrew Congregations' Beth Din. The blog provided regular updates on the developing scandal and was used as a source by media outlets such as the Jewish Chronicle and the Times of Israel.
If You Tickle Us made headlines again after the High Court decided that Google must provide Rabbi Aaron Halpern with identifying details, such as the IP address, of the blogger behind it.
References
External links
If You Tickle Us
British news websites
Jewish bloggers
|
```cpp
// This file is part of Eigen, a lightweight C++ template library
// for linear algebra.
//
// This Source Code Form is subject to the terms of the Mozilla
// Public License, v. 2.0. If a copy of the MPL was not distributed
// with this file, You can obtain one at the mozilla.org home page
#ifndef EIGEN_SPARSEMATRIX_H
#define EIGEN_SPARSEMATRIX_H
namespace Eigen {
/** \ingroup SparseCore_Module
*
* \class SparseMatrix
*
* \brief A versatile sparse matrix representation
*
* This class implements a more versatile variant of the common \em compressed row/column storage format.
* Each column's (resp. row's) non-zeros are stored as a pair of a value with an associated row (resp. column) index.
* All the non-zeros are stored in a single large buffer. Unlike the \em compressed format, there might be extra
* space between the non-zeros of two successive columns (resp. rows) such that insertion of a new non-zero
* can be done with limited memory reallocation and copies.
*
* A call to the function makeCompressed() turns the matrix into the standard \em compressed format
* compatible with many libraries.
*
* More details on this storage scheme are given in the \ref TutorialSparse "manual pages".
*
* \tparam _Scalar the scalar type, i.e. the type of the coefficients
* \tparam _Options Union of bit flags controlling the storage scheme. Currently the only possibility
* is ColMajor or RowMajor. The default is 0 which means column-major.
* \tparam _StorageIndex the type of the indices. It has to be a \b signed type (e.g., short, int, std::ptrdiff_t). Default is \c int.
*
* \warning In %Eigen 3.2, the undocumented type \c SparseMatrix::Index was improperly defined as the storage index type (e.g., int),
* whereas it is now (starting from %Eigen 3.3) deprecated and always defined as Eigen::Index.
* Codes making use of \c SparseMatrix::Index, might thus likely have to be changed to use \c SparseMatrix::StorageIndex instead.
*
* This class can be extended with the help of the plugin mechanism described on the page
* \ref TopicCustomizing_Plugins by defining the preprocessor symbol \c EIGEN_SPARSEMATRIX_PLUGIN.
*/
namespace internal {
template<typename _Scalar, int _Options, typename _StorageIndex>
struct traits<SparseMatrix<_Scalar, _Options, _StorageIndex> >
{
typedef _Scalar Scalar;
typedef _StorageIndex StorageIndex;
typedef Sparse StorageKind;
typedef MatrixXpr XprKind;
enum {
RowsAtCompileTime = Dynamic,
ColsAtCompileTime = Dynamic,
MaxRowsAtCompileTime = Dynamic,
MaxColsAtCompileTime = Dynamic,
Flags = _Options | NestByRefBit | LvalueBit | CompressedAccessBit,
SupportedAccessPatterns = InnerRandomAccessPattern
};
};
template<typename _Scalar, int _Options, typename _StorageIndex, int DiagIndex>
struct traits<Diagonal<SparseMatrix<_Scalar, _Options, _StorageIndex>, DiagIndex> >
{
typedef SparseMatrix<_Scalar, _Options, _StorageIndex> MatrixType;
typedef typename ref_selector<MatrixType>::type MatrixTypeNested;
typedef typename remove_reference<MatrixTypeNested>::type _MatrixTypeNested;
typedef _Scalar Scalar;
typedef Dense StorageKind;
typedef _StorageIndex StorageIndex;
typedef MatrixXpr XprKind;
enum {
RowsAtCompileTime = Dynamic,
ColsAtCompileTime = 1,
MaxRowsAtCompileTime = Dynamic,
MaxColsAtCompileTime = 1,
Flags = LvalueBit
};
};
template<typename _Scalar, int _Options, typename _StorageIndex, int DiagIndex>
struct traits<Diagonal<const SparseMatrix<_Scalar, _Options, _StorageIndex>, DiagIndex> >
: public traits<Diagonal<SparseMatrix<_Scalar, _Options, _StorageIndex>, DiagIndex> >
{
enum {
Flags = 0
};
};
} // end namespace internal
template<typename _Scalar, int _Options, typename _StorageIndex>
class SparseMatrix
: public SparseCompressedBase<SparseMatrix<_Scalar, _Options, _StorageIndex> >
{
typedef SparseCompressedBase<SparseMatrix> Base;
using Base::convert_index;
friend class SparseVector<_Scalar,0,_StorageIndex>;
public:
using Base::isCompressed;
using Base::nonZeros;
EIGEN_SPARSE_PUBLIC_INTERFACE(SparseMatrix)
using Base::operator+=;
using Base::operator-=;
typedef MappedSparseMatrix<Scalar,Flags> Map;
typedef Diagonal<SparseMatrix> DiagonalReturnType;
typedef Diagonal<const SparseMatrix> ConstDiagonalReturnType;
typedef typename Base::InnerIterator InnerIterator;
typedef typename Base::ReverseInnerIterator ReverseInnerIterator;
using Base::IsRowMajor;
typedef internal::CompressedStorage<Scalar,StorageIndex> Storage;
enum {
Options = _Options
};
typedef typename Base::IndexVector IndexVector;
typedef typename Base::ScalarVector ScalarVector;
protected:
typedef SparseMatrix<Scalar,(Flags&~RowMajorBit)|(IsRowMajor?RowMajorBit:0)> TransposedSparseMatrix;
Index m_outerSize;
Index m_innerSize;
StorageIndex* m_outerIndex;
StorageIndex* m_innerNonZeros; // optional, if null then the data is compressed
Storage m_data;
public:
/** \returns the number of rows of the matrix */
inline Index rows() const { return IsRowMajor ? m_outerSize : m_innerSize; }
/** \returns the number of columns of the matrix */
inline Index cols() const { return IsRowMajor ? m_innerSize : m_outerSize; }
/** \returns the number of rows (resp. columns) of the matrix if the storage order is column major (resp. row major) */
inline Index innerSize() const { return m_innerSize; }
/** \returns the number of columns (resp. rows) of the matrix if the storage order is column major (resp. row major) */
inline Index outerSize() const { return m_outerSize; }
/** \returns a const pointer to the array of values.
* This function is aimed at interoperability with other libraries.
* \sa innerIndexPtr(), outerIndexPtr() */
inline const Scalar* valuePtr() const { return m_data.valuePtr(); }
/** \returns a non-const pointer to the array of values.
* This function is aimed at interoperability with other libraries.
* \sa innerIndexPtr(), outerIndexPtr() */
inline Scalar* valuePtr() { return m_data.valuePtr(); }
/** \returns a const pointer to the array of inner indices.
* This function is aimed at interoperability with other libraries.
* \sa valuePtr(), outerIndexPtr() */
inline const StorageIndex* innerIndexPtr() const { return m_data.indexPtr(); }
/** \returns a non-const pointer to the array of inner indices.
* This function is aimed at interoperability with other libraries.
* \sa valuePtr(), outerIndexPtr() */
inline StorageIndex* innerIndexPtr() { return m_data.indexPtr(); }
/** \returns a const pointer to the array of the starting positions of the inner vectors.
* This function is aimed at interoperability with other libraries.
* \sa valuePtr(), innerIndexPtr() */
inline const StorageIndex* outerIndexPtr() const { return m_outerIndex; }
/** \returns a non-const pointer to the array of the starting positions of the inner vectors.
* This function is aimed at interoperability with other libraries.
* \sa valuePtr(), innerIndexPtr() */
inline StorageIndex* outerIndexPtr() { return m_outerIndex; }
/** \returns a const pointer to the array of the number of non zeros of the inner vectors.
* This function is aimed at interoperability with other libraries.
* \warning it returns the null pointer 0 in compressed mode */
inline const StorageIndex* innerNonZeroPtr() const { return m_innerNonZeros; }
/** \returns a non-const pointer to the array of the number of non zeros of the inner vectors.
* This function is aimed at interoperability with other libraries.
* \warning it returns the null pointer 0 in compressed mode */
inline StorageIndex* innerNonZeroPtr() { return m_innerNonZeros; }
/** \internal */
inline Storage& data() { return m_data; }
/** \internal */
inline const Storage& data() const { return m_data; }
/** \returns the value of the matrix at position \a i, \a j
* This function returns Scalar(0) if the element is an explicit \em zero */
inline Scalar coeff(Index row, Index col) const
{
eigen_assert(row>=0 && row<rows() && col>=0 && col<cols());
const Index outer = IsRowMajor ? row : col;
const Index inner = IsRowMajor ? col : row;
Index end = m_innerNonZeros ? m_outerIndex[outer] + m_innerNonZeros[outer] : m_outerIndex[outer+1];
return m_data.atInRange(m_outerIndex[outer], end, StorageIndex(inner));
}
/** \returns a non-const reference to the value of the matrix at position \a i, \a j
*
* If the element does not exist then it is inserted via the insert(Index,Index) function
* which itself turns the matrix into a non compressed form if that was not the case.
*
* This is a O(log(nnz_j)) operation (binary search) plus the cost of insert(Index,Index)
* function if the element does not already exist.
*/
inline Scalar& coeffRef(Index row, Index col)
{
eigen_assert(row>=0 && row<rows() && col>=0 && col<cols());
const Index outer = IsRowMajor ? row : col;
const Index inner = IsRowMajor ? col : row;
Index start = m_outerIndex[outer];
Index end = m_innerNonZeros ? m_outerIndex[outer] + m_innerNonZeros[outer] : m_outerIndex[outer+1];
eigen_assert(end>=start && "you probably called coeffRef on a non finalized matrix");
if(end<=start)
return insert(row,col);
const Index p = m_data.searchLowerIndex(start,end-1,StorageIndex(inner));
if((p<end) && (m_data.index(p)==inner))
return m_data.value(p);
else
return insert(row,col);
}
/** \returns a reference to a novel non zero coefficient with coordinates \a row x \a col.
* The non zero coefficient must \b not already exist.
*
* If the matrix \c *this is in compressed mode, then \c *this is turned into uncompressed
* mode while reserving room for 2 x this->innerSize() non zeros if reserve(Index) has not been called earlier.
* In this case, the insertion procedure is optimized for a \e sequential insertion mode where elements are assumed to be
* inserted by increasing outer-indices.
*
* If that's not the case, then it is strongly recommended to either use a triplet-list to assemble the matrix, or to first
* call reserve(const SizesType &) to reserve the appropriate number of non-zero elements per inner vector.
*
* Assuming memory has been appropriately reserved, this function performs a sorted insertion in O(1)
* if the elements of each inner vector are inserted in increasing inner index order, and in O(nnz_j) for a random insertion.
*
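* Illustrative sketch (not part of the original documentation), assuming a
* column-major double matrix filled mostly in increasing column order:
* \code
* Eigen::SparseMatrix<double> A(1000,1000);
* A.reserve(6000);        // room for ~6 non-zeros per column on average
* A.insert(2,1) = 0.5;    // the entry (2,1) must not already exist
* A.makeCompressed();     // pack back into the standard compressed format
* \endcode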
*/
Scalar& insert(Index row, Index col);
public:
/** Removes all non zeros but keeps the allocated memory.
*
* This function does not free the currently allocated memory. To release as much memory as possible,
* call \code mat.data().squeeze(); \endcode after resizing it.
*
* \sa resize(Index,Index), data()
*/
inline void setZero()
{
m_data.clear();
memset(m_outerIndex, 0, (m_outerSize+1)*sizeof(StorageIndex));
if(m_innerNonZeros)
memset(m_innerNonZeros, 0, (m_outerSize)*sizeof(StorageIndex));
}
/** Preallocates \a reserveSize non zeros.
*
* Precondition: the matrix must be in compressed mode. */
inline void reserve(Index reserveSize)
{
eigen_assert(isCompressed() && "This function does not make sense in non compressed mode.");
m_data.reserve(reserveSize);
}
#ifdef EIGEN_PARSED_BY_DOXYGEN
/** Preallocates \a reserveSize[\c j] non zeros for each column (resp. row) \c j.
*
* This function turns the matrix into non-compressed mode.
*
* The type \c SizesType must expose the following interface:
\code
typedef value_type;
const value_type& operator[](i) const;
\endcode
* for \c i in the [0,this->outerSize()[ range.
* Typical choices include std::vector<int>, Eigen::VectorXi, Eigen::VectorXi::Constant, etc.
*/
template<class SizesType>
inline void reserve(const SizesType& reserveSizes);
#else
template<class SizesType>
inline void reserve(const SizesType& reserveSizes, const typename SizesType::value_type& enableif =
#if (!EIGEN_COMP_MSVC) || (EIGEN_COMP_MSVC>=1500) // MSVC 2005 fails to compile with this typename
typename
#endif
SizesType::value_type())
{
EIGEN_UNUSED_VARIABLE(enableif);
reserveInnerVectors(reserveSizes);
}
#endif // EIGEN_PARSED_BY_DOXYGEN
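// Illustrative sketch (not part of the original header): per-inner-vector
// reservation, passing an Eigen integer vector as the SizesType argument:
//   Eigen::SparseMatrix<double> A(100,100);
//   A.reserve(Eigen::VectorXi::Constant(100, 4)); // ~4 non-zeros per column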
protected:
template<class SizesType>
inline void reserveInnerVectors(const SizesType& reserveSizes)
{
if(isCompressed())
{
Index totalReserveSize = 0;
// turn the matrix into non-compressed mode
m_innerNonZeros = static_cast<StorageIndex*>(std::malloc(m_outerSize * sizeof(StorageIndex)));
if (!m_innerNonZeros) internal::throw_std_bad_alloc();
// temporarily use m_innerNonZeros to hold the new starting points.
StorageIndex* newOuterIndex = m_innerNonZeros;
StorageIndex count = 0;
for(Index j=0; j<m_outerSize; ++j)
{
newOuterIndex[j] = count;
count += reserveSizes[j] + (m_outerIndex[j+1]-m_outerIndex[j]);
totalReserveSize += reserveSizes[j];
}
m_data.reserve(totalReserveSize);
StorageIndex previousOuterIndex = m_outerIndex[m_outerSize];
for(Index j=m_outerSize-1; j>=0; --j)
{
StorageIndex innerNNZ = previousOuterIndex - m_outerIndex[j];
for(Index i=innerNNZ-1; i>=0; --i)
{
m_data.index(newOuterIndex[j]+i) = m_data.index(m_outerIndex[j]+i);
m_data.value(newOuterIndex[j]+i) = m_data.value(m_outerIndex[j]+i);
}
previousOuterIndex = m_outerIndex[j];
m_outerIndex[j] = newOuterIndex[j];
m_innerNonZeros[j] = innerNNZ;
}
if(m_outerSize>0)
m_outerIndex[m_outerSize] = m_outerIndex[m_outerSize-1] + m_innerNonZeros[m_outerSize-1] + reserveSizes[m_outerSize-1];
m_data.resize(m_outerIndex[m_outerSize]);
}
else
{
StorageIndex* newOuterIndex = static_cast<StorageIndex*>(std::malloc((m_outerSize+1)*sizeof(StorageIndex)));
if (!newOuterIndex) internal::throw_std_bad_alloc();
StorageIndex count = 0;
for(Index j=0; j<m_outerSize; ++j)
{
newOuterIndex[j] = count;
StorageIndex alreadyReserved = (m_outerIndex[j+1]-m_outerIndex[j]) - m_innerNonZeros[j];
StorageIndex toReserve = std::max<StorageIndex>(reserveSizes[j], alreadyReserved);
count += toReserve + m_innerNonZeros[j];
}
newOuterIndex[m_outerSize] = count;
m_data.resize(count);
for(Index j=m_outerSize-1; j>=0; --j)
{
Index offset = newOuterIndex[j] - m_outerIndex[j];
if(offset>0)
{
StorageIndex innerNNZ = m_innerNonZeros[j];
for(Index i=innerNNZ-1; i>=0; --i)
{
m_data.index(newOuterIndex[j]+i) = m_data.index(m_outerIndex[j]+i);
m_data.value(newOuterIndex[j]+i) = m_data.value(m_outerIndex[j]+i);
}
}
}
std::swap(m_outerIndex, newOuterIndex);
std::free(newOuterIndex);
}
}
public:
//--- low level purely coherent filling ---
/** \internal
* \returns a reference to the non zero coefficient at position \a row, \a col assuming that:
* - the nonzero does not already exist
* - the new coefficient is the last one according to the storage order
*
* Before filling a given inner vector you must call the startVec(Index) function.
*
* After an insertion session, you should call the finalize() function.
*
* \sa insert, insertBackByOuterInner, startVec */
inline Scalar& insertBack(Index row, Index col)
{
return insertBackByOuterInner(IsRowMajor?row:col, IsRowMajor?col:row);
}
/** \internal
* \sa insertBack, startVec */
inline Scalar& insertBackByOuterInner(Index outer, Index inner)
{
eigen_assert(Index(m_outerIndex[outer+1]) == m_data.size() && "Invalid ordered insertion (invalid outer index)");
eigen_assert( (m_outerIndex[outer+1]-m_outerIndex[outer]==0 || m_data.index(m_data.size()-1)<inner) && "Invalid ordered insertion (invalid inner index)");
Index p = m_outerIndex[outer+1];
++m_outerIndex[outer+1];
m_data.append(Scalar(0), inner);
return m_data.value(p);
}
/** \internal
* \warning use it only if you know what you are doing */
inline Scalar& insertBackByOuterInnerUnordered(Index outer, Index inner)
{
Index p = m_outerIndex[outer+1];
++m_outerIndex[outer+1];
m_data.append(Scalar(0), inner);
return m_data.value(p);
}
/** \internal
* \sa insertBack, insertBackByOuterInner */
inline void startVec(Index outer)
{
eigen_assert(m_outerIndex[outer]==Index(m_data.size()) && "You must call startVec for each inner vector sequentially");
eigen_assert(m_outerIndex[outer+1]==0 && "You must call startVec for each inner vector sequentially");
m_outerIndex[outer+1] = m_outerIndex[outer];
}
/** \internal
* Must be called after inserting a set of non zero entries using the low level compressed API.
*/
inline void finalize()
{
if(isCompressed())
{
StorageIndex size = internal::convert_index<StorageIndex>(m_data.size());
Index i = m_outerSize;
// find the last filled column
while (i>=0 && m_outerIndex[i]==0)
--i;
++i;
while (i<=m_outerSize)
{
m_outerIndex[i] = size;
++i;
}
}
}
//---
template<typename InputIterators>
void setFromTriplets(const InputIterators& begin, const InputIterators& end);
template<typename InputIterators,typename DupFunctor>
void setFromTriplets(const InputIterators& begin, const InputIterators& end, DupFunctor dup_func);
void sumupDuplicates() { collapseDuplicates(internal::scalar_sum_op<Scalar,Scalar>()); }
template<typename DupFunctor>
void collapseDuplicates(DupFunctor dup_func = DupFunctor());
//---
/** \internal
* same as insert(Index,Index) except that the indices are given relative to the storage order */
Scalar& insertByOuterInner(Index j, Index i)
{
return insert(IsRowMajor ? j : i, IsRowMajor ? i : j);
}
/** Turns the matrix into the \em compressed format.
*/
void makeCompressed()
{
if(isCompressed())
return;
eigen_internal_assert(m_outerIndex!=0 && m_outerSize>0);
Index oldStart = m_outerIndex[1];
m_outerIndex[1] = m_innerNonZeros[0];
for(Index j=1; j<m_outerSize; ++j)
{
Index nextOldStart = m_outerIndex[j+1];
Index offset = oldStart - m_outerIndex[j];
if(offset>0)
{
for(Index k=0; k<m_innerNonZeros[j]; ++k)
{
m_data.index(m_outerIndex[j]+k) = m_data.index(oldStart+k);
m_data.value(m_outerIndex[j]+k) = m_data.value(oldStart+k);
}
}
m_outerIndex[j+1] = m_outerIndex[j] + m_innerNonZeros[j];
oldStart = nextOldStart;
}
std::free(m_innerNonZeros);
m_innerNonZeros = 0;
m_data.resize(m_outerIndex[m_outerSize]);
m_data.squeeze();
}
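// Illustrative sketch (not part of the original header): after makeCompressed(),
// the three raw arrays form a standard CSC/CSR layout usable by other libraries:
//   A.makeCompressed();
//   const double* vals  = A.valuePtr();      // nonZeros() values
//   const int*    inner = A.innerIndexPtr(); // nonZeros() inner indices
//   const int*    outer = A.outerIndexPtr(); // outerSize()+1 start offsets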
/** Turns the matrix into the uncompressed mode */
void uncompress()
{
if(m_innerNonZeros != 0)
return;
m_innerNonZeros = static_cast<StorageIndex*>(std::malloc(m_outerSize * sizeof(StorageIndex)));
for (Index i = 0; i < m_outerSize; i++)
{
m_innerNonZeros[i] = m_outerIndex[i+1] - m_outerIndex[i];
}
}
/** Suppresses all nonzeros which are \b much \b smaller \b than \a reference under the tolerance \a epsilon */
void prune(const Scalar& reference, const RealScalar& epsilon = NumTraits<RealScalar>::dummy_precision())
{
prune(default_prunning_func(reference,epsilon));
}
/** Turns the matrix into compressed format, and suppresses all nonzeros which do not satisfy the predicate \a keep.
* The functor type \a KeepFunc must implement the following function:
* \code
* bool operator() (const Index& row, const Index& col, const Scalar& value) const;
* \endcode
* \sa prune(Scalar,RealScalar)
*/
template<typename KeepFunc>
void prune(const KeepFunc& keep = KeepFunc())
{
// TODO optimize the uncompressed mode to avoid moving and allocating the data twice
makeCompressed();
StorageIndex k = 0;
for(Index j=0; j<m_outerSize; ++j)
{
Index previousStart = m_outerIndex[j];
m_outerIndex[j] = k;
Index end = m_outerIndex[j+1];
for(Index i=previousStart; i<end; ++i)
{
if(keep(IsRowMajor?j:m_data.index(i), IsRowMajor?m_data.index(i):j, m_data.value(i)))
{
m_data.value(k) = m_data.value(i);
m_data.index(k) = m_data.index(i);
++k;
}
}
}
m_outerIndex[m_outerSize] = k;
m_data.resize(k,0);
}
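// Illustrative sketch (not part of the original header): dropping entries below
// a threshold via a custom keep-functor, here a C++11 lambda:
//   A.prune([](const Eigen::Index&, const Eigen::Index&, const double& v)
//           { return std::abs(v) > 1e-12; }); // keep only |v| above 1e-12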
/** Resizes the matrix to a \a rows x \a cols matrix leaving old values untouched.
*
* If the sizes of the matrix are decreased, then the matrix is turned to \b uncompressed-mode
* and the storage of the out of bounds coefficients is kept and reserved.
* Call makeCompressed() to pack the entries and squeeze extra memory.
*
* \sa reserve(), setZero(), makeCompressed()
*/
void conservativeResize(Index rows, Index cols)
{
// No change
if (this->rows() == rows && this->cols() == cols) return;
// If one dimension is null, then there is nothing to be preserved
if(rows==0 || cols==0) return resize(rows,cols);
Index innerChange = IsRowMajor ? cols - this->cols() : rows - this->rows();
Index outerChange = IsRowMajor ? rows - this->rows() : cols - this->cols();
StorageIndex newInnerSize = convert_index(IsRowMajor ? cols : rows);
// Deals with inner non zeros
if (m_innerNonZeros)
{
// Resize m_innerNonZeros
StorageIndex *newInnerNonZeros = static_cast<StorageIndex*>(std::realloc(m_innerNonZeros, (m_outerSize + outerChange) * sizeof(StorageIndex)));
if (!newInnerNonZeros) internal::throw_std_bad_alloc();
m_innerNonZeros = newInnerNonZeros;
for(Index i=m_outerSize; i<m_outerSize+outerChange; i++)
m_innerNonZeros[i] = 0;
}
else if (innerChange < 0)
{
// Inner size decreased: allocate a new m_innerNonZeros
m_innerNonZeros = static_cast<StorageIndex*>(std::malloc((m_outerSize+outerChange+1) * sizeof(StorageIndex)));
if (!m_innerNonZeros) internal::throw_std_bad_alloc();
for(Index i = 0; i < m_outerSize; i++)
m_innerNonZeros[i] = m_outerIndex[i+1] - m_outerIndex[i];
}
// Change the m_innerNonZeros in case of a decrease of inner size
if (m_innerNonZeros && innerChange < 0)
{
for(Index i = 0; i < m_outerSize + (std::min)(outerChange, Index(0)); i++)
{
StorageIndex &n = m_innerNonZeros[i];
StorageIndex start = m_outerIndex[i];
while (n > 0 && m_data.index(start+n-1) >= newInnerSize) --n;
}
}
m_innerSize = newInnerSize;
// Re-allocate outer index structure if necessary
if (outerChange == 0)
return;
StorageIndex *newOuterIndex = static_cast<StorageIndex*>(std::realloc(m_outerIndex, (m_outerSize + outerChange + 1) * sizeof(StorageIndex)));
if (!newOuterIndex) internal::throw_std_bad_alloc();
m_outerIndex = newOuterIndex;
if (outerChange > 0)
{
StorageIndex last = m_outerSize == 0 ? 0 : m_outerIndex[m_outerSize];
for(Index i=m_outerSize; i<m_outerSize+outerChange+1; i++)
m_outerIndex[i] = last;
}
m_outerSize += outerChange;
}
/** Resizes the matrix to a \a rows x \a cols matrix and initializes it to zero.
*
* This function does not free the currently allocated memory. To release as much memory as possible,
* call \code mat.data().squeeze(); \endcode after resizing it.
*
* \sa reserve(), setZero()
*/
void resize(Index rows, Index cols)
{
const Index outerSize = IsRowMajor ? rows : cols;
m_innerSize = IsRowMajor ? cols : rows;
m_data.clear();
if (m_outerSize != outerSize || m_outerSize==0)
{
std::free(m_outerIndex);
m_outerIndex = static_cast<StorageIndex*>(std::malloc((outerSize + 1) * sizeof(StorageIndex)));
if (!m_outerIndex) internal::throw_std_bad_alloc();
m_outerSize = outerSize;
}
if(m_innerNonZeros)
{
std::free(m_innerNonZeros);
m_innerNonZeros = 0;
}
memset(m_outerIndex, 0, (m_outerSize+1)*sizeof(StorageIndex));
}
/** \internal
* Resize the nonzero vector to \a size */
void resizeNonZeros(Index size)
{
m_data.resize(size);
}
/** \returns a const expression of the diagonal coefficients. */
const ConstDiagonalReturnType diagonal() const { return ConstDiagonalReturnType(*this); }
/** \returns a read-write expression of the diagonal coefficients.
* \warning If the diagonal entries are written, then all diagonal
* entries \b must already exist, otherwise an assertion will be raised.
*/
DiagonalReturnType diagonal() { return DiagonalReturnType(*this); }
/** Default constructor yielding an empty \c 0 \c x \c 0 matrix */
inline SparseMatrix()
: m_outerSize(-1), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0)
{
check_template_parameters();
resize(0, 0);
}
/** Constructs a \a rows \c x \a cols empty matrix */
inline SparseMatrix(Index rows, Index cols)
: m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0)
{
check_template_parameters();
resize(rows, cols);
}
/** Constructs a sparse matrix from the sparse expression \a other */
template<typename OtherDerived>
inline SparseMatrix(const SparseMatrixBase<OtherDerived>& other)
: m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0)
{
EIGEN_STATIC_ASSERT((internal::is_same<Scalar, typename OtherDerived::Scalar>::value),
YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY)
check_template_parameters();
const bool needToTranspose = (Flags & RowMajorBit) != (internal::evaluator<OtherDerived>::Flags & RowMajorBit);
if (needToTranspose)
*this = other.derived();
else
{
#ifdef EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN
EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN
#endif
internal::call_assignment_no_alias(*this, other.derived());
}
}
/** Constructs a sparse matrix from the sparse selfadjoint view \a other */
template<typename OtherDerived, unsigned int UpLo>
inline SparseMatrix(const SparseSelfAdjointView<OtherDerived, UpLo>& other)
: m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0)
{
check_template_parameters();
Base::operator=(other);
}
/** Copy constructor (it performs a deep copy) */
inline SparseMatrix(const SparseMatrix& other)
: Base(), m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0)
{
check_template_parameters();
*this = other.derived();
}
/** \brief Copy constructor with in-place evaluation */
template<typename OtherDerived>
SparseMatrix(const ReturnByValue<OtherDerived>& other)
: Base(), m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0)
{
check_template_parameters();
initAssignment(other);
other.evalTo(*this);
}
/** \brief Copy constructor with in-place evaluation */
template<typename OtherDerived>
explicit SparseMatrix(const DiagonalBase<OtherDerived>& other)
: Base(), m_outerSize(0), m_innerSize(0), m_outerIndex(0), m_innerNonZeros(0)
{
check_template_parameters();
*this = other.derived();
}
/** Swaps the content of two sparse matrices of the same type.
* This is a fast operation that simply swaps the underlying pointers and parameters. */
inline void swap(SparseMatrix& other)
{
//EIGEN_DBG_SPARSE(std::cout << "SparseMatrix:: swap\n");
std::swap(m_outerIndex, other.m_outerIndex);
std::swap(m_innerSize, other.m_innerSize);
std::swap(m_outerSize, other.m_outerSize);
std::swap(m_innerNonZeros, other.m_innerNonZeros);
m_data.swap(other.m_data);
}
/** Sets *this to the identity matrix.
* This function also turns the matrix into compressed mode, and drop any reserved memory. */
inline void setIdentity()
{
eigen_assert(rows() == cols() && "ONLY FOR SQUARED MATRICES");
this->m_data.resize(rows());
Eigen::Map<IndexVector>(this->m_data.indexPtr(), rows()).setLinSpaced(0, StorageIndex(rows()-1));
Eigen::Map<ScalarVector>(this->m_data.valuePtr(), rows()).setOnes();
Eigen::Map<IndexVector>(this->m_outerIndex, rows()+1).setLinSpaced(0, StorageIndex(rows()));
std::free(m_innerNonZeros);
m_innerNonZeros = 0;
}
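// Illustrative sketch (not part of the original header):
//   Eigen::SparseMatrix<double> I(5,5);
//   I.setIdentity();  // 5 unit diagonal entries, stored in compressed mode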
inline SparseMatrix& operator=(const SparseMatrix& other)
{
if (other.isRValue())
{
swap(other.const_cast_derived());
}
else if(this!=&other)
{
#ifdef EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN
EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN
#endif
initAssignment(other);
if(other.isCompressed())
{
internal::smart_copy(other.m_outerIndex, other.m_outerIndex + m_outerSize + 1, m_outerIndex);
m_data = other.m_data;
}
else
{
Base::operator=(other);
}
}
return *this;
}
#ifndef EIGEN_PARSED_BY_DOXYGEN
template<typename OtherDerived>
inline SparseMatrix& operator=(const EigenBase<OtherDerived>& other)
{ return Base::operator=(other.derived()); }
#endif // EIGEN_PARSED_BY_DOXYGEN
template<typename OtherDerived>
EIGEN_DONT_INLINE SparseMatrix& operator=(const SparseMatrixBase<OtherDerived>& other);
friend std::ostream & operator << (std::ostream & s, const SparseMatrix& m)
{
EIGEN_DBG_SPARSE(
s << "Nonzero entries:\n";
if(m.isCompressed())
{
for (Index i=0; i<m.nonZeros(); ++i)
s << "(" << m.m_data.value(i) << "," << m.m_data.index(i) << ") ";
}
else
{
for (Index i=0; i<m.outerSize(); ++i)
{
Index p = m.m_outerIndex[i];
Index pe = m.m_outerIndex[i]+m.m_innerNonZeros[i];
Index k=p;
for (; k<pe; ++k) {
s << "(" << m.m_data.value(k) << "," << m.m_data.index(k) << ") ";
}
for (; k<m.m_outerIndex[i+1]; ++k) {
s << "(_,_) ";
}
}
}
s << std::endl;
s << std::endl;
s << "Outer pointers:\n";
for (Index i=0; i<m.outerSize(); ++i) {
s << m.m_outerIndex[i] << " ";
}
s << " $" << std::endl;
if(!m.isCompressed())
{
s << "Inner non zeros:\n";
for (Index i=0; i<m.outerSize(); ++i) {
s << m.m_innerNonZeros[i] << " ";
}
s << " $" << std::endl;
}
s << std::endl;
);
s << static_cast<const SparseMatrixBase<SparseMatrix>&>(m);
return s;
}
/** Destructor */
inline ~SparseMatrix()
{
std::free(m_outerIndex);
std::free(m_innerNonZeros);
}
/** Overloaded for performance */
Scalar sum() const;
# ifdef EIGEN_SPARSEMATRIX_PLUGIN
# include EIGEN_SPARSEMATRIX_PLUGIN
# endif
protected:
template<typename Other>
void initAssignment(const Other& other)
{
resize(other.rows(), other.cols());
if(m_innerNonZeros)
{
std::free(m_innerNonZeros);
m_innerNonZeros = 0;
}
}
/** \internal
* \sa insert(Index,Index) */
EIGEN_DONT_INLINE Scalar& insertCompressed(Index row, Index col);
/** \internal
* A vector object that is equal to 0 everywhere but v at the position i */
class SingletonVector
{
StorageIndex m_index;
StorageIndex m_value;
public:
typedef StorageIndex value_type;
SingletonVector(Index i, Index v)
: m_index(convert_index(i)), m_value(convert_index(v))
{}
StorageIndex operator[](Index i) const { return i==m_index ? m_value : 0; }
};
/** \internal
* \sa insert(Index,Index) */
EIGEN_DONT_INLINE Scalar& insertUncompressed(Index row, Index col);
public:
/** \internal
* \sa insert(Index,Index) */
EIGEN_STRONG_INLINE Scalar& insertBackUncompressed(Index row, Index col)
{
const Index outer = IsRowMajor ? row : col;
const Index inner = IsRowMajor ? col : row;
eigen_assert(!isCompressed());
eigen_assert(m_innerNonZeros[outer]<=(m_outerIndex[outer+1] - m_outerIndex[outer]));
Index p = m_outerIndex[outer] + m_innerNonZeros[outer]++;
m_data.index(p) = convert_index(inner);
return (m_data.value(p) = Scalar(0));
}
private:
static void check_template_parameters()
{
EIGEN_STATIC_ASSERT(NumTraits<StorageIndex>::IsSigned,THE_INDEX_TYPE_MUST_BE_A_SIGNED_TYPE);
EIGEN_STATIC_ASSERT((Options&(ColMajor|RowMajor))==Options,INVALID_MATRIX_TEMPLATE_PARAMETERS);
}
struct default_prunning_func {
default_prunning_func(const Scalar& ref, const RealScalar& eps) : reference(ref), epsilon(eps) {}
inline bool operator() (const Index&, const Index&, const Scalar& value) const
{
return !internal::isMuchSmallerThan(value, reference, epsilon);
}
Scalar reference;
RealScalar epsilon;
};
};
namespace internal {
template<typename InputIterator, typename SparseMatrixType, typename DupFunctor>
void set_from_triplets(const InputIterator& begin, const InputIterator& end, SparseMatrixType& mat, DupFunctor dup_func)
{
enum { IsRowMajor = SparseMatrixType::IsRowMajor };
typedef typename SparseMatrixType::Scalar Scalar;
typedef typename SparseMatrixType::StorageIndex StorageIndex;
SparseMatrix<Scalar,IsRowMajor?ColMajor:RowMajor,StorageIndex> trMat(mat.rows(),mat.cols());
if(begin!=end)
{
// pass 1: count the nnz per inner-vector
typename SparseMatrixType::IndexVector wi(trMat.outerSize());
wi.setZero();
for(InputIterator it(begin); it!=end; ++it)
{
eigen_assert(it->row()>=0 && it->row()<mat.rows() && it->col()>=0 && it->col()<mat.cols());
wi(IsRowMajor ? it->col() : it->row())++;
}
// pass 2: insert all the elements into trMat
trMat.reserve(wi);
for(InputIterator it(begin); it!=end; ++it)
trMat.insertBackUncompressed(it->row(),it->col()) = it->value();
// pass 3:
trMat.collapseDuplicates(dup_func);
}
// pass 4: transposed copy -> implicit sorting
mat = trMat;
}
}
/** Fill the matrix \c *this with the list of \em triplets defined by the iterator range \a begin - \a end.
*
* A \em triplet is a tuple (i,j,value) defining a non-zero element.
* The input list of triplets does not have to be sorted, and can contain duplicated elements.
* In any case, the result is a \b sorted and \b compressed sparse matrix where the duplicates have been summed up.
* This is a \em O(n) operation, with \em n the number of triplet elements.
* The initial contents of \c *this is destroyed.
* The matrix \c *this must be properly resized beforehand using the SparseMatrix(Index,Index) constructor,
* or the resize(Index,Index) method. The sizes are not extracted from the triplet list.
*
* The \a InputIterators value_type must provide the following interface:
* \code
* Scalar value() const; // the value
* Scalar row() const; // the row index i
* Scalar col() const; // the column index j
* \endcode
* See for instance the Eigen::Triplet template class.
*
* Here is a typical usage example:
* \code
typedef Triplet<double> T;
std::vector<T> tripletList;
tripletList.reserve(estimation_of_entries);
for(...)
{
// ...
tripletList.push_back(T(i,j,v_ij));
}
SparseMatrixType m(rows,cols);
m.setFromTriplets(tripletList.begin(), tripletList.end());
// m is ready to go!
* \endcode
*
* \warning The list of triplets is read multiple times (at least twice). Therefore, it is not recommended to define
* an abstract iterator over a complex data-structure that would be expensive to evaluate. The triplets should rather
* be explicitly stored into a std::vector for instance.
*/
template<typename Scalar, int _Options, typename _StorageIndex>
template<typename InputIterators>
void SparseMatrix<Scalar,_Options,_StorageIndex>::setFromTriplets(const InputIterators& begin, const InputIterators& end)
{
internal::set_from_triplets<InputIterators, SparseMatrix<Scalar,_Options,_StorageIndex> >(begin, end, *this, internal::scalar_sum_op<Scalar,Scalar>());
}
/** The same as setFromTriplets but when duplicates are met the functor \a dup_func is applied:
* \code
* value = dup_func(OldValue, NewValue)
* \endcode
* Here is a C++11 example keeping the latest entry only:
* \code
* mat.setFromTriplets(triplets.begin(), triplets.end(), [] (const Scalar&,const Scalar &b) { return b; });
* \endcode
*/
template<typename Scalar, int _Options, typename _StorageIndex>
template<typename InputIterators,typename DupFunctor>
void SparseMatrix<Scalar,_Options,_StorageIndex>::setFromTriplets(const InputIterators& begin, const InputIterators& end, DupFunctor dup_func)
{
internal::set_from_triplets<InputIterators, SparseMatrix<Scalar,_Options,_StorageIndex>, DupFunctor>(begin, end, *this, dup_func);
}
/** \internal */
template<typename Scalar, int _Options, typename _StorageIndex>
template<typename DupFunctor>
void SparseMatrix<Scalar,_Options,_StorageIndex>::collapseDuplicates(DupFunctor dup_func)
{
eigen_assert(!isCompressed());
// TODO, in practice we should be able to use m_innerNonZeros for that task
IndexVector wi(innerSize());
wi.fill(-1);
StorageIndex count = 0;
// for each inner-vector, wi[inner_index] will hold the position of first element into the index/value buffers
for(Index j=0; j<outerSize(); ++j)
{
StorageIndex start = count;
Index oldEnd = m_outerIndex[j]+m_innerNonZeros[j];
for(Index k=m_outerIndex[j]; k<oldEnd; ++k)
{
Index i = m_data.index(k);
if(wi(i)>=start)
{
// we already met this entry => accumulate it
m_data.value(wi(i)) = dup_func(m_data.value(wi(i)), m_data.value(k));
}
else
{
m_data.value(count) = m_data.value(k);
m_data.index(count) = m_data.index(k);
wi(i) = count;
++count;
}
}
m_outerIndex[j] = start;
}
m_outerIndex[m_outerSize] = count;
// turn the matrix into compressed form
std::free(m_innerNonZeros);
m_innerNonZeros = 0;
m_data.resize(m_outerIndex[m_outerSize]);
}
template<typename Scalar, int _Options, typename _StorageIndex>
template<typename OtherDerived>
EIGEN_DONT_INLINE SparseMatrix<Scalar,_Options,_StorageIndex>& SparseMatrix<Scalar,_Options,_StorageIndex>::operator=(const SparseMatrixBase<OtherDerived>& other)
{
EIGEN_STATIC_ASSERT((internal::is_same<Scalar, typename OtherDerived::Scalar>::value),
YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY)
#ifdef EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN
EIGEN_SPARSE_CREATE_TEMPORARY_PLUGIN
#endif
const bool needToTranspose = (Flags & RowMajorBit) != (internal::evaluator<OtherDerived>::Flags & RowMajorBit);
if (needToTranspose)
{
#ifdef EIGEN_SPARSE_TRANSPOSED_COPY_PLUGIN
EIGEN_SPARSE_TRANSPOSED_COPY_PLUGIN
#endif
// two-pass algorithm:
// 1 - compute the number of coeffs per dest inner vector
// 2 - do the actual copy/eval
// Since each coeff of the rhs has to be evaluated twice, let's evaluate it if needed
typedef typename internal::nested_eval<OtherDerived,2,typename internal::plain_matrix_type<OtherDerived>::type >::type OtherCopy;
typedef typename internal::remove_all<OtherCopy>::type _OtherCopy;
typedef internal::evaluator<_OtherCopy> OtherCopyEval;
OtherCopy otherCopy(other.derived());
OtherCopyEval otherCopyEval(otherCopy);
SparseMatrix dest(other.rows(),other.cols());
Eigen::Map<IndexVector> (dest.m_outerIndex,dest.outerSize()).setZero();
// pass 1
// FIXME the above copy could be merged with that pass
for (Index j=0; j<otherCopy.outerSize(); ++j)
for (typename OtherCopyEval::InnerIterator it(otherCopyEval, j); it; ++it)
++dest.m_outerIndex[it.index()];
// prefix sum
StorageIndex count = 0;
IndexVector positions(dest.outerSize());
for (Index j=0; j<dest.outerSize(); ++j)
{
StorageIndex tmp = dest.m_outerIndex[j];
dest.m_outerIndex[j] = count;
positions[j] = count;
count += tmp;
}
dest.m_outerIndex[dest.outerSize()] = count;
// alloc
dest.m_data.resize(count);
// pass 2
for (StorageIndex j=0; j<otherCopy.outerSize(); ++j)
{
for (typename OtherCopyEval::InnerIterator it(otherCopyEval, j); it; ++it)
{
Index pos = positions[it.index()]++;
dest.m_data.index(pos) = j;
dest.m_data.value(pos) = it.value();
}
}
this->swap(dest);
return *this;
}
else
{
if(other.isRValue())
{
initAssignment(other.derived());
}
// there is no special optimization
return Base::operator=(other.derived());
}
}
template<typename _Scalar, int _Options, typename _StorageIndex>
typename SparseMatrix<_Scalar,_Options,_StorageIndex>::Scalar& SparseMatrix<_Scalar,_Options,_StorageIndex>::insert(Index row, Index col)
{
eigen_assert(row>=0 && row<rows() && col>=0 && col<cols());
const Index outer = IsRowMajor ? row : col;
const Index inner = IsRowMajor ? col : row;
if(isCompressed())
{
if(nonZeros()==0)
{
// reserve space if not already done
if(m_data.allocatedSize()==0)
m_data.reserve(2*m_innerSize);
// turn the matrix into non-compressed mode
m_innerNonZeros = static_cast<StorageIndex*>(std::malloc(m_outerSize * sizeof(StorageIndex)));
if(!m_innerNonZeros) internal::throw_std_bad_alloc();
memset(m_innerNonZeros, 0, (m_outerSize)*sizeof(StorageIndex));
// pack all inner-vectors to the end of the pre-allocated space
// and allocate the entire free-space to the first inner-vector
StorageIndex end = convert_index(m_data.allocatedSize());
for(Index j=1; j<=m_outerSize; ++j)
m_outerIndex[j] = end;
}
else
{
// turn the matrix into non-compressed mode
m_innerNonZeros = static_cast<StorageIndex*>(std::malloc(m_outerSize * sizeof(StorageIndex)));
if(!m_innerNonZeros) internal::throw_std_bad_alloc();
for(Index j=0; j<m_outerSize; ++j)
m_innerNonZeros[j] = m_outerIndex[j+1]-m_outerIndex[j];
}
}
// check whether we can do a fast "push back" insertion
Index data_end = m_data.allocatedSize();
// First case: we are filling a new inner vector which is packed at the end.
// We assume that all remaining inner-vectors are also empty and packed to the end.
if(m_outerIndex[outer]==data_end)
{
eigen_internal_assert(m_innerNonZeros[outer]==0);
// pack previous empty inner-vectors to end of the used-space
// and allocate the entire free-space to the current inner-vector.
StorageIndex p = convert_index(m_data.size());
Index j = outer;
while(j>=0 && m_innerNonZeros[j]==0)
m_outerIndex[j--] = p;
// push back the new element
++m_innerNonZeros[outer];
m_data.append(Scalar(0), inner);
// check for reallocation
if(data_end != m_data.allocatedSize())
{
// m_data has been reallocated
// -> move remaining inner-vectors back to the end of the free-space
// so that the entire free-space is allocated to the current inner-vector.
eigen_internal_assert(data_end < m_data.allocatedSize());
StorageIndex new_end = convert_index(m_data.allocatedSize());
for(Index k=outer+1; k<=m_outerSize; ++k)
if(m_outerIndex[k]==data_end)
m_outerIndex[k] = new_end;
}
return m_data.value(p);
}
// Second case: the next inner-vector is packed to the end
// and the end of the current inner-vector matches the used-space.
if(m_outerIndex[outer+1]==data_end && m_outerIndex[outer]+m_innerNonZeros[outer]==m_data.size())
{
eigen_internal_assert(outer+1==m_outerSize || m_innerNonZeros[outer+1]==0);
// add space for the new element
++m_innerNonZeros[outer];
m_data.resize(m_data.size()+1);
// check for reallocation
if(data_end != m_data.allocatedSize())
{
// m_data has been reallocated
// -> move remaining inner-vectors back to the end of the free-space
// so that the entire free-space is allocated to the current inner-vector.
eigen_internal_assert(data_end < m_data.allocatedSize());
StorageIndex new_end = convert_index(m_data.allocatedSize());
for(Index k=outer+1; k<=m_outerSize; ++k)
if(m_outerIndex[k]==data_end)
m_outerIndex[k] = new_end;
}
// and insert it at the right position (sorted insertion)
Index startId = m_outerIndex[outer];
Index p = m_outerIndex[outer]+m_innerNonZeros[outer]-1;
while ( (p > startId) && (m_data.index(p-1) > inner) )
{
m_data.index(p) = m_data.index(p-1);
m_data.value(p) = m_data.value(p-1);
--p;
}
m_data.index(p) = convert_index(inner);
return (m_data.value(p) = 0);
}
if(m_data.size() != m_data.allocatedSize())
{
// make sure the matrix is compatible to random un-compressed insertion:
m_data.resize(m_data.allocatedSize());
this->reserveInnerVectors(Array<StorageIndex,Dynamic,1>::Constant(m_outerSize, 2));
}
return insertUncompressed(row,col);
}
template<typename _Scalar, int _Options, typename _StorageIndex>
EIGEN_DONT_INLINE typename SparseMatrix<_Scalar,_Options,_StorageIndex>::Scalar& SparseMatrix<_Scalar,_Options,_StorageIndex>::insertUncompressed(Index row, Index col)
{
eigen_assert(!isCompressed());
const Index outer = IsRowMajor ? row : col;
const StorageIndex inner = convert_index(IsRowMajor ? col : row);
Index room = m_outerIndex[outer+1] - m_outerIndex[outer];
StorageIndex innerNNZ = m_innerNonZeros[outer];
if(innerNNZ>=room)
{
// this inner vector is full, we need to reallocate the whole buffer :(
reserve(SingletonVector(outer,std::max<StorageIndex>(2,innerNNZ)));
}
Index startId = m_outerIndex[outer];
Index p = startId + m_innerNonZeros[outer];
while ( (p > startId) && (m_data.index(p-1) > inner) )
{
m_data.index(p) = m_data.index(p-1);
m_data.value(p) = m_data.value(p-1);
--p;
}
eigen_assert((p<=startId || m_data.index(p-1)!=inner) && "you cannot insert an element that already exists, you must call coeffRef to this end");
m_innerNonZeros[outer]++;
m_data.index(p) = inner;
return (m_data.value(p) = Scalar(0));
}
template<typename _Scalar, int _Options, typename _StorageIndex>
EIGEN_DONT_INLINE typename SparseMatrix<_Scalar,_Options,_StorageIndex>::Scalar& SparseMatrix<_Scalar,_Options,_StorageIndex>::insertCompressed(Index row, Index col)
{
eigen_assert(isCompressed());
const Index outer = IsRowMajor ? row : col;
const Index inner = IsRowMajor ? col : row;
Index previousOuter = outer;
if (m_outerIndex[outer+1]==0)
{
// we start a new inner vector
while (previousOuter>=0 && m_outerIndex[previousOuter]==0)
{
m_outerIndex[previousOuter] = convert_index(m_data.size());
--previousOuter;
}
m_outerIndex[outer+1] = m_outerIndex[outer];
}
// here we have to handle the tricky case where the outerIndex array
// starts with: [ 0 0 0 0 0 1 ...] and we are inserting in, e.g.,
// the 2nd inner vector...
bool isLastVec = (!(previousOuter==-1 && m_data.size()!=0))
&& (std::size_t(m_outerIndex[outer+1]) == m_data.size());
std::size_t startId = m_outerIndex[outer];
// FIXME let's make sure sizeof(long int) == sizeof(std::size_t)
std::size_t p = m_outerIndex[outer+1];
++m_outerIndex[outer+1];
double reallocRatio = 1;
if (m_data.allocatedSize()<=m_data.size())
{
// if there is no preallocated memory, let's reserve a minimum of 32 elements
if (m_data.size()==0)
{
m_data.reserve(32);
}
else
{
// we need to reallocate the data, to reduce multiple reallocations
// we use a smart resize algorithm based on the current filling ratio
// in addition, we use double to avoid integers overflows
double nnzEstimate = double(m_outerIndex[outer])*double(m_outerSize)/double(outer+1);
reallocRatio = (nnzEstimate-double(m_data.size()))/double(m_data.size());
// furthermore we bound the realloc ratio to:
// 1) reduce multiple minor realloc when the matrix is almost filled
// 2) avoid to allocate too much memory when the matrix is almost empty
reallocRatio = (std::min)((std::max)(reallocRatio,1.5),8.);
}
}
m_data.resize(m_data.size()+1,reallocRatio);
if (!isLastVec)
{
if (previousOuter==-1)
{
// oops wrong guess.
// let's correct the outer offsets
for (Index k=0; k<=(outer+1); ++k)
m_outerIndex[k] = 0;
Index k=outer+1;
while(m_outerIndex[k]==0)
m_outerIndex[k++] = 1;
while (k<=m_outerSize && m_outerIndex[k]!=0)
m_outerIndex[k++]++;
p = 0;
--k;
k = m_outerIndex[k]-1;
while (k>0)
{
m_data.index(k) = m_data.index(k-1);
m_data.value(k) = m_data.value(k-1);
k--;
}
}
else
{
// we are not inserting into the last inner vec
// update outer indices:
Index j = outer+2;
while (j<=m_outerSize && m_outerIndex[j]!=0)
m_outerIndex[j++]++;
--j;
// shift data of last vecs:
Index k = m_outerIndex[j]-1;
while (k>=Index(p))
{
m_data.index(k) = m_data.index(k-1);
m_data.value(k) = m_data.value(k-1);
k--;
}
}
}
while ( (p > startId) && (m_data.index(p-1) > inner) )
{
m_data.index(p) = m_data.index(p-1);
m_data.value(p) = m_data.value(p-1);
--p;
}
m_data.index(p) = inner;
return (m_data.value(p) = Scalar(0));
}
namespace internal {
template<typename _Scalar, int _Options, typename _StorageIndex>
struct evaluator<SparseMatrix<_Scalar,_Options,_StorageIndex> >
: evaluator<SparseCompressedBase<SparseMatrix<_Scalar,_Options,_StorageIndex> > >
{
typedef evaluator<SparseCompressedBase<SparseMatrix<_Scalar,_Options,_StorageIndex> > > Base;
typedef SparseMatrix<_Scalar,_Options,_StorageIndex> SparseMatrixType;
evaluator() : Base() {}
explicit evaluator(const SparseMatrixType &mat) : Base(mat) {}
};
}
} // end namespace Eigen
#endif // EIGEN_SPARSEMATRIX_H
```
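A minimal Python sketch of the duplicate-collapsing contract documented above for `setFromTriplets` — duplicates are merged via `value = dup_func(old, new)`, with summation as the default (mirroring `internal::scalar_sum_op`). This is an illustration of the semantics only, not the Eigen implementation:

```python
def set_from_triplets(triplets, dup_func=lambda old, new: old + new):
    """Collapse (row, col, value) triplets; duplicates merged via dup_func."""
    entries = {}
    for i, j, v in triplets:
        if (i, j) in entries:
            # duplicate entry met => merge, as in collapseDuplicates
            entries[(i, j)] = dup_func(entries[(i, j)], v)
        else:
            entries[(i, j)] = v
    return entries

# Default behavior sums duplicates:
m = set_from_triplets([(0, 0, 1.0), (0, 0, 2.0), (1, 2, 5.0)])
# Keeping only the latest entry, like the C++11 lambda in the doc-comment:
latest = set_from_triplets([(0, 0, 1.0), (0, 0, 2.0)], dup_func=lambda a, b: b)
```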
|
```typescript
import { graphql } from "react-relay";
import { Environment } from "relay-runtime";
import { CoralContext } from "coral-framework/lib/bootstrap";
import {
commitMutationPromiseNormalized,
createMutation,
lookup,
MutationInput,
} from "coral-framework/lib/relay";
import { GQLComment, GQLCOMMENT_STATUS } from "coral-framework/schema";
import { RejectCommentEvent } from "coral-stream/events";
import { RejectCommentMutation as MutationTypes } from "coral-stream/__generated__/RejectCommentMutation.graphql";
let clientMutationId = 0;
const RejectCommentMutation = createMutation(
"rejectComment",
async (
environment: Environment,
input: MutationInput<MutationTypes> & {
storyID: string;
noEmit?: boolean;
},
{ eventEmitter }: CoralContext
) => {
let rejectCommentEvent: ReturnType<typeof RejectCommentEvent.begin> | null =
null;
if (!input.noEmit) {
rejectCommentEvent = RejectCommentEvent.begin(eventEmitter, {
commentID: input.commentID,
});
}
try {
const result = await commitMutationPromiseNormalized<MutationTypes>(
environment,
{
mutation: graphql`
mutation RejectCommentMutation($input: RejectCommentInput!) {
rejectComment(input: $input) {
comment {
id
status
tags {
code
}
story {
id
commentCounts {
tags {
FEATURED
UNANSWERED
QUESTION
}
}
}
lastViewerAction
}
clientMutationId
}
}
`,
variables: {
input: {
commentID: input.commentID,
commentRevisionID: input.commentRevisionID,
clientMutationId: (clientMutationId++).toString(),
reason: input.reason,
},
},
optimisticResponse: {
rejectComment: {
comment: {
id: input.commentID,
status: GQLCOMMENT_STATUS.REJECTED,
tags: lookup<GQLComment>(
environment,
input.commentID
)!.tags.map((t) => ({
code: t.code,
})),
story: {
id: input.storyID,
commentCounts: {
tags: {
FEATURED: 0,
UNANSWERED: 0,
QUESTION: 0,
},
},
},
lastViewerAction: "REJECT",
},
clientMutationId: clientMutationId.toString(),
},
},
updater: (store) => {
const comment = store
.getRootField("rejectComment")!
.getLinkedRecord("comment")!;
comment.setValue("REJECT", "lastViewerAction");
},
}
);
if (rejectCommentEvent) {
rejectCommentEvent.success();
}
return result;
} catch (error) {
if (rejectCommentEvent) {
rejectCommentEvent.error({ message: error.message, code: error.code });
}
throw error;
}
}
);
export default RejectCommentMutation;
```
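The event lifecycle wrapped around the mutation above (begin before the async commit, then success or error depending on the outcome, skipped entirely when `noEmit` is set) can be sketched language-agnostically; the names here are hypothetical stand-ins, not part of the Coral API:

```python
class MutationEvent:
    """Hypothetical stand-in for RejectCommentEvent.begin(eventEmitter, ...)."""
    def __init__(self):
        self.state = "begun"

    def success(self):
        self.state = "success"

    def error(self, payload):
        self.state = ("error", payload)

def run_mutation(commit, no_emit=False):
    """begin -> commit -> success/error, mirroring the try/catch above."""
    event = None if no_emit else MutationEvent()
    try:
        result = commit()
    except Exception as exc:
        if event:
            event.error({"message": str(exc)})
        raise
    if event:
        event.success()
    return result, event

result, event = run_mutation(lambda: {"status": "REJECTED"})
```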
|
```objective-c
//
// XHLaunchAdDownloaderManager.m
// XHLaunchAdExample
//
// Created by zhuxiaohui on 16/12/1.
// :path_to_url
#import "XHLaunchAdDownloader.h"
#import "XHLaunchAdCache.h"
#import "XHLaunchAdConst.h"
#if __has_include(<FLAnimatedImage/FLAnimatedImage.h>)
#import <FLAnimatedImage/FLAnimatedImage.h>
#else
#import "FLAnimatedImage.h"
#endif
#pragma mark - XHLaunchAdDownload
@interface XHLaunchAdDownload()
@property (strong, nonatomic) NSURLSession *session;
@property(strong,nonatomic)NSURLSessionDownloadTask *downloadTask;
@property (nonatomic, assign) unsigned long long totalLength;
@property (nonatomic, assign) unsigned long long currentLength;
@property (nonatomic, copy) XHLaunchAdDownloadProgressBlock progressBlock;
@property (strong, nonatomic) NSURL *url;
@end
@implementation XHLaunchAdDownload
@end
#pragma mark - XHLaunchAdImageDownload
@interface XHLaunchAdImageDownload()<NSURLSessionDownloadDelegate,NSURLSessionTaskDelegate>
@property (nonatomic, copy ) XHLaunchAdDownloadImageCompletedBlock completedBlock;
@end
@implementation XHLaunchAdImageDownload
-(nonnull instancetype)initWithURL:(nonnull NSURL *)url delegateQueue:(nonnull NSOperationQueue *)queue progress:(nullable XHLaunchAdDownloadProgressBlock)progressBlock completed:(nullable XHLaunchAdDownloadImageCompletedBlock)completedBlock{
self = [super init];
if (self) {
self.url = url;
self.progressBlock = progressBlock;
self.completedBlock = completedBlock;
NSURLSessionConfiguration * sessionConfiguration = [NSURLSessionConfiguration defaultSessionConfiguration];
sessionConfiguration.timeoutIntervalForRequest = 15.0;
self.session = [NSURLSession sessionWithConfiguration:sessionConfiguration
delegate:self
delegateQueue:queue];
self.downloadTask = [self.session downloadTaskWithRequest:[NSURLRequest requestWithURL:url]];
[self.downloadTask resume];
}
return self;
}
#pragma mark - NSURLSessionDownloadDelegate
- (void)URLSession:(NSURLSession *)session
downloadTask:(NSURLSessionDownloadTask *)downloadTask
didFinishDownloadingToURL:(NSURL *)location {
NSData *data = [NSData dataWithContentsOfURL:location];
UIImage *image = [UIImage imageWithData:data];
dispatch_async(dispatch_get_main_queue(), ^{
if (self.completedBlock) {
self.completedBlock(image,data, nil);
//
self.completedBlock = nil;
}
//
if ([self.delegate respondsToSelector:@selector(downloadFinishWithURL:)]) {
[self.delegate downloadFinishWithURL:self.url];
}
});
//
[self.session invalidateAndCancel];
self.session = nil;
}
- (void)URLSession:(NSURLSession *)session downloadTask:(NSURLSessionDownloadTask *)downloadTask didWriteData:(int64_t)bytesWritten totalBytesWritten:(int64_t)totalBytesWritten totalBytesExpectedToWrite:(int64_t)totalBytesExpectedToWrite {
self.currentLength = totalBytesWritten;
self.totalLength = totalBytesExpectedToWrite;
if (self.progressBlock) {
self.progressBlock(self.totalLength, self.currentLength);
}
}
- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error {
if (error){
XHLaunchAdLog(@"error = %@",error);
dispatch_async(dispatch_get_main_queue(), ^{
if (self.completedBlock) {
self.completedBlock(nil,nil, error);
}
self.completedBlock = nil;
});
}
}
//HTTPS
- (void)URLSession:(NSURLSession *)session didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition, NSURLCredential *))completionHandler{
NSURLProtectionSpace *protectionSpace = challenge.protectionSpace;
if ([protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]) {
SecTrustRef serverTrust = protectionSpace.serverTrust;
completionHandler(NSURLSessionAuthChallengeUseCredential, [NSURLCredential credentialForTrust:serverTrust]);
} else {
completionHandler(NSURLSessionAuthChallengePerformDefaultHandling, nil);
}
}
@end
#pragma mark - XHLaunchAdVideoDownload
@interface XHLaunchAdVideoDownload()<NSURLSessionDownloadDelegate,NSURLSessionTaskDelegate>
@property (nonatomic, copy ) XHLaunchAdDownloadVideoCompletedBlock completedBlock;
@end
@implementation XHLaunchAdVideoDownload
-(nonnull instancetype)initWithURL:(nonnull NSURL *)url delegateQueue:(nonnull NSOperationQueue *)queue progress:(nullable XHLaunchAdDownloadProgressBlock)progressBlock completed:(nullable XHLaunchAdDownloadVideoCompletedBlock)completedBlock{
self = [super init];
if (self) {
self.url = url;
self.progressBlock = progressBlock;
_completedBlock = completedBlock;
NSURLSessionConfiguration * sessionConfiguration = [NSURLSessionConfiguration defaultSessionConfiguration];
sessionConfiguration.timeoutIntervalForRequest = 15.0;
self.session = [NSURLSession sessionWithConfiguration:sessionConfiguration
delegate:self
delegateQueue:queue];
self.downloadTask = [self.session downloadTaskWithRequest:[NSURLRequest requestWithURL:url]];
[self.downloadTask resume];
}
return self;
}
#pragma mark - NSURLSessionDownloadDelegate
- (void)URLSession:(NSURLSession *)session
downloadTask:(NSURLSessionDownloadTask *)downloadTask
didFinishDownloadingToURL:(NSURL *)location {
NSError *error=nil;
NSURL *toURL = [NSURL fileURLWithPath:[XHLaunchAdCache videoPathWithURL:self.url]];
[[NSFileManager defaultManager] copyItemAtURL:location toURL:toURL error:&error];//
if(error) XHLaunchAdLog(@"error = %@",error);
dispatch_async(dispatch_get_main_queue(), ^{
if (self.completedBlock) {
if(!error){
self.completedBlock(toURL,nil);
}else{
self.completedBlock(nil,error);
}
//
self.completedBlock = nil;
}
//
if ([self.delegate respondsToSelector:@selector(downloadFinishWithURL:)]) {
[self.delegate downloadFinishWithURL:self.url];
}
});
[self.session invalidateAndCancel];
self.session = nil;
}
- (void)URLSession:(NSURLSession *)session downloadTask:(NSURLSessionDownloadTask *)downloadTask didWriteData:(int64_t)bytesWritten totalBytesWritten:(int64_t)totalBytesWritten totalBytesExpectedToWrite:(int64_t)totalBytesExpectedToWrite {
self.currentLength = totalBytesWritten;
self.totalLength = totalBytesExpectedToWrite;
if (self.progressBlock) {
self.progressBlock(self.totalLength, self.currentLength);
}
}
- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error {
if (error){
XHLaunchAdLog(@"error = %@",error);
dispatch_async(dispatch_get_main_queue(), ^{
if (self.completedBlock) {
self.completedBlock(nil, error);
}
self.completedBlock = nil;
});
}
}
@end
#pragma mark - XHLaunchAdDownloader
@interface XHLaunchAdDownloader()<XHLaunchAdDownloadDelegate>
@property (strong, nonatomic, nonnull) NSOperationQueue *downloadImageQueue;
@property (strong, nonatomic, nonnull) NSOperationQueue *downloadVideoQueue;
@property (strong, nonatomic) NSMutableDictionary *allDownloadDict;
@end
@implementation XHLaunchAdDownloader
+(nonnull instancetype )sharedDownloader{
static XHLaunchAdDownloader *instance = nil;
static dispatch_once_t oneToken;
dispatch_once(&oneToken,^{
instance = [[XHLaunchAdDownloader alloc] init];
});
return instance;
}
- (instancetype)init{
self = [super init];
if (self) {
_downloadImageQueue = [NSOperationQueue new];
_downloadImageQueue.maxConcurrentOperationCount = 6;
_downloadImageQueue.name = @"com.it7090.XHLaunchAdDownloadImageQueue";
_downloadVideoQueue = [NSOperationQueue new];
_downloadVideoQueue.maxConcurrentOperationCount = 3;
_downloadVideoQueue.name = @"com.it7090.XHLaunchAdDownloadVideoQueue";
XHLaunchAdLog(@"XHLaunchAdCachePath:%@",[XHLaunchAdCache xhLaunchAdCachePath]);
}
return self;
}
- (void)downloadImageWithURL:(nonnull NSURL *)url progress:(nullable XHLaunchAdDownloadProgressBlock)progressBlock completed:(nullable XHLaunchAdDownloadImageCompletedBlock)completedBlock{
NSString *key = [self keyWithURL:url];
if(self.allDownloadDict[key]) return;
XHLaunchAdImageDownload * download = [[XHLaunchAdImageDownload alloc] initWithURL:url delegateQueue:_downloadImageQueue progress:progressBlock completed:completedBlock];
download.delegate = self;
[self.allDownloadDict setObject:download forKey:key];
}
- (void)downloadImageAndCacheWithURL:(nonnull NSURL *)url completed:(void(^)(BOOL result))completedBlock{
if(url == nil){
if(completedBlock) completedBlock(NO);
return;
}
[self downloadImageWithURL:url progress:nil completed:^(UIImage * _Nullable image, NSData * _Nullable data, NSError * _Nullable error) {
if(error){
if(completedBlock) completedBlock(NO);
}else{
[XHLaunchAdCache async_saveImageData:data imageURL:url completed:^(BOOL result, NSURL * _Nonnull URL) {
if(completedBlock) completedBlock(result);
}];
}
}];
}
-(void)downLoadImageAndCacheWithURLArray:(NSArray<NSURL *> *)urlArray{
[self downLoadImageAndCacheWithURLArray:urlArray completed:nil];
}
- (void)downLoadImageAndCacheWithURLArray:(nonnull NSArray <NSURL *> * )urlArray completed:(nullable XHLaunchAdBatchDownLoadAndCacheCompletedBlock)completedBlock{
if(urlArray.count==0) return;
__block NSMutableArray * resultArray = [[NSMutableArray alloc] init];
dispatch_group_t downLoadGroup = dispatch_group_create();
[urlArray enumerateObjectsUsingBlock:^(NSURL *url, NSUInteger idx, BOOL *stop) {
if(![XHLaunchAdCache checkImageInCacheWithURL:url]){
dispatch_group_enter(downLoadGroup);
[self downloadImageAndCacheWithURL:url completed:^(BOOL result) {
dispatch_group_leave(downLoadGroup);
[resultArray addObject:@{@"url":url.absoluteString,@"result":@(result)}];
}];
}else{
[resultArray addObject:@{@"url":url.absoluteString,@"result":@(YES)}];
}
}];
dispatch_group_notify(downLoadGroup, dispatch_get_main_queue(), ^{
if(completedBlock) completedBlock(resultArray);
});
}
- (void)downloadVideoWithURL:(nonnull NSURL *)url progress:(nullable XHLaunchAdDownloadProgressBlock)progressBlock completed:(nullable XHLaunchAdDownloadVideoCompletedBlock)completedBlock{
NSString *key = [self keyWithURL:url];
if(self.allDownloadDict[key]) return;
XHLaunchAdVideoDownload * download = [[XHLaunchAdVideoDownload alloc] initWithURL:url delegateQueue:_downloadVideoQueue progress:progressBlock completed:completedBlock];
download.delegate = self;
[self.allDownloadDict setObject:download forKey:key];
}
- (void)downloadVideoAndCacheWithURL:(nonnull NSURL *)url completed:(void(^)(BOOL result))completedBlock{
if(url == nil){
if(completedBlock) completedBlock(NO);
return;
}
[self downloadVideoWithURL:url progress:nil completed:^(NSURL * _Nullable location, NSError * _Nullable error) {
if(error){
if(completedBlock) completedBlock(NO);
}else{
if(completedBlock) completedBlock(YES);
}
}];
}
- (void)downLoadVideoAndCacheWithURLArray:(nonnull NSArray <NSURL *> * )urlArray{
[self downLoadVideoAndCacheWithURLArray:urlArray completed:nil];
}
- (void)downLoadVideoAndCacheWithURLArray:(nonnull NSArray <NSURL *> * )urlArray completed:(nullable XHLaunchAdBatchDownLoadAndCacheCompletedBlock)completedBlock{
if(urlArray.count==0) return;
__block NSMutableArray * resultArray = [[NSMutableArray alloc] init];
dispatch_group_t downLoadGroup = dispatch_group_create();
[urlArray enumerateObjectsUsingBlock:^(NSURL *url, NSUInteger idx, BOOL *stop) {
if(![XHLaunchAdCache checkVideoInCacheWithURL:url]){
dispatch_group_enter(downLoadGroup);
[self downloadVideoAndCacheWithURL:url completed:^(BOOL result) {
dispatch_group_leave(downLoadGroup);
[resultArray addObject:@{@"url":url.absoluteString,@"result":@(result)}];
}];
}else{
[resultArray addObject:@{@"url":url.absoluteString,@"result":@(YES)}];
}
}];
dispatch_group_notify(downLoadGroup, dispatch_get_main_queue(), ^{
if(completedBlock) completedBlock(resultArray);
});
}
- (NSMutableDictionary *)allDownloadDict {
if (!_allDownloadDict) {
_allDownloadDict = [[NSMutableDictionary alloc] init];
}
return _allDownloadDict;
}
- (void)downloadFinishWithURL:(NSURL *)url{
[self.allDownloadDict removeObjectForKey:[self keyWithURL:url]];
}
-(NSString *)keyWithURL:(NSURL *)url{
return [XHLaunchAdCache md5String:url.absoluteString];
}
@end
```
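The batch-download methods above use the classic dispatch_group pattern: enter before each asynchronous task, leave in its completion block, report cached URLs immediately, and deliver the collected results once every task has left the group. A rough Python analogue with `concurrent.futures` (the `download` function and URLs are hypothetical stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

def download(url):
    # Hypothetical stand-in for downloadImageAndCacheWithURL:completed:
    return not url.endswith(".bad")

def batch_download(urls, cached=()):
    """Mirror of downLoadImageAndCacheWithURLArray:completed: — cached URLs
    are reported immediately, the rest run concurrently, and the result
    list is delivered only after all tasks finish (the notify step)."""
    results = []
    with ThreadPoolExecutor(max_workers=6) as pool:  # maxConcurrentOperationCount = 6
        futures = {}
        for url in urls:
            if url in cached:
                results.append({"url": url, "result": True})
            else:
                futures[pool.submit(download, url)] = url
        for fut, url in futures.items():
            results.append({"url": url, "result": fut.result()})
    return results

out = batch_download(["a.png", "b.bad", "c.png"], cached=("c.png",))
```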
|
```sqlpl
DROP TABLE IF EXISTS view_no_nulls;
DROP TABLE IF EXISTS view_no_nulls_set;
DROP TABLE IF EXISTS view_nulls_set;
DROP TABLE IF EXISTS view_nulls;
SET join_use_nulls = 0;
CREATE OR REPLACE VIEW view_no_nulls AS
SELECT * FROM ( SELECT number + 1 AS a, number + 11 AS b FROM numbers(2) ) AS t1
FULL JOIN ( SELECT number + 2 AS a, number + 22 AS c FROM numbers(2) ) AS t2
USING a ORDER BY a;
CREATE OR REPLACE VIEW view_nulls_set AS
SELECT * FROM ( SELECT number + 1 AS a, number + 11 AS b FROM numbers(2) ) AS t1
FULL JOIN ( SELECT number + 2 AS a, number + 22 AS c FROM numbers(2) ) AS t2
USING a ORDER BY a
SETTINGS join_use_nulls = 1;
SET join_use_nulls = 1;
CREATE OR REPLACE VIEW view_nulls AS
SELECT * FROM ( SELECT number + 1 AS a, number + 11 AS b FROM numbers(2) ) AS t1
FULL JOIN ( SELECT number + 2 AS a, number + 22 AS c FROM numbers(2) ) AS t2
USING a ORDER BY a;
CREATE OR REPLACE VIEW view_no_nulls_set AS
SELECT * FROM ( SELECT number + 1 AS a, number + 11 AS b FROM numbers(2) ) AS t1
FULL JOIN ( SELECT number + 2 AS a, number + 22 AS c FROM numbers(2) ) AS t2
USING a ORDER BY a
SETTINGS join_use_nulls = 0;
SET join_use_nulls = 1;
SELECT 'join_use_nulls = 1';
SELECT '-';
SELECT * FROM view_no_nulls; -- { serverError INCORRECT_QUERY }
SELECT '-';
SELECT * FROM view_no_nulls_set;
SELECT '-';
SELECT * FROM view_nulls_set;
SELECT '-';
SELECT * FROM view_nulls;
SET join_use_nulls = 0;
SELECT 'join_use_nulls = 0';
SELECT '-';
SELECT * FROM view_no_nulls;
SELECT '-';
SELECT * FROM view_no_nulls_set;
SELECT '-';
SELECT * FROM view_nulls_set;
SELECT '-';
SELECT * FROM view_nulls;
DETACH TABLE view_no_nulls;
DETACH TABLE view_no_nulls_set;
DETACH TABLE view_nulls_set;
DETACH TABLE view_nulls;
ATTACH TABLE view_no_nulls;
ATTACH TABLE view_no_nulls_set;
ATTACH TABLE view_nulls_set;
ATTACH TABLE view_nulls;
SET join_use_nulls = 1;
SELECT 'join_use_nulls = 1';
SELECT '-';
SELECT * FROM view_no_nulls; -- { serverError INCORRECT_QUERY }
SELECT '-';
SELECT * FROM view_no_nulls_set;
SELECT '-';
SELECT * FROM view_nulls_set;
SELECT '-';
SELECT * FROM view_nulls;
SET join_use_nulls = 0;
SELECT 'join_use_nulls = 0';
SELECT '-';
SELECT * FROM view_no_nulls;
SELECT '-';
SELECT * FROM view_no_nulls_set;
SELECT '-';
SELECT * FROM view_nulls_set;
SELECT '-';
SELECT * FROM view_nulls;
DROP TABLE IF EXISTS view_no_nulls;
DROP TABLE IF EXISTS view_no_nulls_set;
DROP TABLE IF EXISTS view_nulls_set;
DROP TABLE IF EXISTS view_nulls;
```
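What `join_use_nulls` toggles in the test above can be sketched outside ClickHouse: a `FULL JOIN ... USING a` fills the non-matched side either with the column type's default (`0`, the setting's `0` behavior) or with NULL (`None` here, the setting's `1` behavior). The two dicts mirror the `numbers(2)` subqueries; this is an illustration of the semantics, not ClickHouse's join machinery:

```python
def full_join_using_a(t1, t2, use_nulls):
    """t1: {a: b}, t2: {a: c}; returns rows (a, b, c) ordered by a.
    Non-matched side is None when use_nulls is set, else the default 0."""
    fill = None if use_nulls else 0
    keys = sorted(set(t1) | set(t2))
    return [(a, t1.get(a, fill), t2.get(a, fill)) for a in keys]

t1 = {1: 11, 2: 12}   # SELECT number + 1 AS a, number + 11 AS b FROM numbers(2)
t2 = {2: 22, 3: 23}   # SELECT number + 2 AS a, number + 22 AS c FROM numbers(2)

no_nulls = full_join_using_a(t1, t2, use_nulls=False)
with_nulls = full_join_using_a(t1, t2, use_nulls=True)
```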
|
```markdown
TSG036 - Controller logs
========================
Get the last ‘n’ hours of controller logs.
Steps
-----
### Parameters```
```python
import re
from datetime import datetime
tail_lines = 500
# The controller log files are kept in a yyyy-mm-dd folder structure
#
d = datetime.utcnow()
date = "{0}-{1:02d}-{2:02d}".format(d.year, d.month, d.day)
folder = f"/var/log/controller/{date}"
pod = None # All
container = 'controller'
log_files = [ f'{folder}/controller.log', f'{folder}/kube.log', f'{folder}/controller.out', f'{folder}/access.log' ]
expressions_to_analyze = [
re.compile(".{26} WARN "),
re.compile(".{26} ERROR ")
]
print("Log files to get:")
print(log_files)
```
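The `.{26}` prefix in the expressions above assumes a 26-character timestamp precedes the level token in each controller log line. A quick self-contained check against hypothetical log lines (the line format is an assumption, not taken from actual controller output):

```python
import re

expressions_to_analyze = [
    re.compile(".{26} WARN "),
    re.compile(".{26} ERROR "),
]

# Hypothetical log lines; "2020-01-01 12:00:00.000000" is exactly 26 chars.
lines = [
    "2020-01-01 12:00:00.000000 ERROR something failed",
    "2020-01-01 12:00:01.000000 INFO all good",
    "2020-01-01 12:00:02.000000 WARN disk nearly full",
]

# Keep only lines whose level matches one of the expressions.
flagged = [line for line in lines
           if any(e.match(line) for e in expressions_to_analyze)]
```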
```markdown
### Instantiate Kubernetes client```
```python
# Instantiate the Python Kubernetes client into 'api' variable
import os
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
```
```markdown
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.```
```python
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
```markdown
### Get tail for log```
```python
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
if pod is None or p.metadata.name == pod:
for c in p.spec.containers:
if container is None or c.name == container:
for log_file in log_files:
print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
try:
output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
except Exception:
print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
else:
                        for line in output.split('\n'):
for expression in expressions_to_analyze:
if expression.match(line):
entries_for_analysis.append(line)
print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
```markdown
### Analyze log entries and suggest relevant Troubleshooting Guides```
```python
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
import os
import json
import requests
import ipykernel
import datetime
from urllib.parse import urljoin
from notebook import notebookapp
def get_notebook_name():
"""Return the full path of the jupyter notebook. Some runtimes (e.g. ADS)
have the kernel_id in the filename of the connection file. If so, the
notebook name at runtime can be determined using `list_running_servers`.
Other runtimes (e.g. azdata) do not have the kernel_id in the filename of
the connection file, therefore we are unable to establish the filename
"""
connection_file = os.path.basename(ipykernel.get_connection_file())
# If the runtime has the kernel_id in the connection filename, use it to
# get the real notebook name at runtime, otherwise, use the notebook
# filename from build time.
try:
kernel_id = connection_file.split('-', 1)[1].split('.')[0]
except:
pass
else:
for servers in list(notebookapp.list_running_servers()):
try:
response = requests.get(urljoin(servers['url'], 'api/sessions'), params={'token': servers.get('token', '')}, timeout=.01)
except:
pass
else:
for nn in json.loads(response.text):
if nn['kernel']['id'] == kernel_id:
return nn['path']
def load_json(filename):
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def get_notebook_rules():
"""Load the notebook rules from the metadata of this notebook (in the .ipynb file)"""
file_name = get_notebook_name()
    if file_name is None:
return None
else:
j = load_json(file_name)
if "azdata" not in j["metadata"] or \
"expert" not in j["metadata"]["azdata"] or \
"log_analyzer_rules" not in j["metadata"]["azdata"]["expert"]:
return []
else:
return j["metadata"]["azdata"]["expert"]["log_analyzer_rules"]
rules = get_notebook_rules()
if rules is None:
print("")
print(f"Log Analysis only available when run in Azure Data Studio. Not available when run in azdata.")
else:
print(f"Applying the following {len(rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.")
print(rules)
hints = 0
if len(rules) > 0:
for entry in entries_for_analysis:
for rule in rules:
if entry.find(rule[0]) != -1:
print (entry)
display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(rules)} rules). {hints} further troubleshooting hints made inline.")
```
```python
print('Notebook execution complete.')
```
```markdown
Related
-------
- [TSG027 - Observe cluster
deployment](../diagnose/tsg027-observe-bdc-create.ipynb)```
|
```javascript
/**
* This file is spawned in the background and checks npm for the latest version
* of the CLI, then writes the version to the cache file.
*
* NOTE: Since this file runs asynchronously in the background, it's possible
* for multiple instances of this file to be running at the same time leading
* to a race condition where the most recent instance will overwrite the
* previous cache file resetting the `notified` flag and cause the update
 * notification to appear for multiple consecutive commands. Not the end of
* the world, but something to be aware of.
*
* IMPORTANT! This file must NOT depend on any 3rd party dependencies. This
* file is NOT bundled by `esbuild` and thus any 3rd party dependencies will
* never be available.
*/
const https = require('https');
const { mkdirSync, writeFileSync } = require('fs');
const { access, mkdir, readFile, unlink, writeFile } = require('fs/promises');
const path = require('path');
const { format, inspect } = require('util');
/**
 * A simple output helper which accumulates error and debug log messages in
* memory for potential persistence to disk while immediately outputting errors
* and debug messages, when the `--debug` flag is set, to `stderr`.
*/
class WorkerOutput {
debugLog = [];
logFile = null;
constructor({ debug = true }) {
this.debugOutputEnabled = debug;
}
debug(...args) {
this.print('debug', args);
}
error(...args) {
this.print('error', args);
}
print(type, args) {
// note: `args` may contain an `Error` that will be toString()'d and thus
// no stack trace
const str = format(
...args.map(s => (typeof s === 'string' ? s : inspect(s)))
);
this.debugLog.push(`[${new Date().toISOString()}] [${type}] ${str}`);
if (type === 'debug' && this.debugOutputEnabled) {
// eslint-disable-next-line no-console
      console.error(`> [debug] [${new Date().toISOString()}] ${str}`);
} else if (type === 'error') {
// eslint-disable-next-line no-console
console.error(`Error: ${str}`);
}
}
setLogFile(file) {
// wire up the exit handler the first time the log file is set
if (this.logFile === null) {
process.on('exit', () => {
if (this.debugLog.length) {
mkdirSync(path.dirname(this.logFile), { recursive: true });
writeFileSync(this.logFile, this.debugLog.join('\n'));
}
});
}
this.logFile = file;
}
}
const output = new WorkerOutput({
  // enable debug logging if the `--debug` flag is set or if this worker script
// was directly executed
debug: process.argv.includes('--debug') || !process.connected,
});
process.on('unhandledRejection', err => {
output.error('Exiting worker due to unhandled rejection:', err);
process.exit(1);
});
// this timer will prevent this worker process from running longer than 10s
const timer = setTimeout(() => {
output.error('Worker timed out after 10 seconds');
process.exit(1);
}, 10000);
// wait for the parent to give us the work payload
process.once('message', async msg => {
output.debug('Received message from parent:', msg);
output.debug('Disconnecting from parent');
process.disconnect();
const { cacheFile, distTag, name, updateCheckInterval } = msg;
const cacheFileParsed = path.parse(cacheFile);
await mkdir(cacheFileParsed.dir, { recursive: true });
output.setLogFile(
path.join(cacheFileParsed.dir, `${cacheFileParsed.name}.log`)
);
const lockFile = path.join(
cacheFileParsed.dir,
`${cacheFileParsed.name}.lock`
);
try {
// check for a lock file and either bail if running or write our pid and continue
output.debug(`Checking lock file: ${lockFile}`);
if (await isRunning(lockFile)) {
output.debug('Worker already running, exiting');
process.exit(1);
}
output.debug(`Initializing lock file with pid ${process.pid}`);
await writeFile(lockFile, String(process.pid), 'utf-8');
const tags = await fetchDistTags(name);
const version = tags[distTag];
const expireAt = Date.now() + updateCheckInterval;
const notifyAt = await getNotifyAt(cacheFile, version);
if (version) {
output.debug(`Found dist tag "${distTag}" with version "${version}"`);
} else {
output.error(`Dist tag "${distTag}" not found`);
output.debug('Available dist tags:', Object.keys(tags));
}
output.debug(`Writing cache file: ${cacheFile}`);
await writeFile(
cacheFile,
JSON.stringify({
expireAt,
notifyAt,
version,
})
);
} catch (err) {
output.error(`Failed to get package info:`, err);
} finally {
clearTimeout(timer);
if (await fileExists(lockFile)) {
output.debug(`Releasing lock file: ${lockFile}`);
await unlink(lockFile);
}
output.debug(`Worker finished successfully!`);
// force the worker to exit
process.exit(0);
}
});
// signal the parent process we're ready
if (process.connected) {
output.debug("Notifying parent we're ready");
process.send({ type: 'ready' });
} else {
// eslint-disable-next-line no-console
console.error('No IPC bridge detected, exiting');
process.exit(1);
}
async function fileExists(file) {
return access(file)
.then(() => true)
.catch(() => false);
}
async function isRunning(lockFile) {
try {
const pid = parseInt(await readFile(lockFile, 'utf-8'));
output.debug(`Found lock file with pid: ${pid}`);
// checks for existence of a process; throws if not found
process.kill(pid, 0);
// process is still running
return true;
} catch (err) {
if (await fileExists(lockFile)) {
      // process is not running, so the pid in the lock file is stale
output.debug(`Resetting lock file: ${err.toString()}`);
await unlink(lockFile);
}
return false;
}
}
/**
* Attempts to load and return the previous `notifyAt` value.
*
* If the latest version is newer than the previous latest version, then
* return `undefined` to invalidate `notifyAt` which forces the notification
* to be displayed, otherwise keep the existing `notifyAt`.
*
* @param {string} cacheFile The path to the cache file
* @param {string} version The latest version
* @returns {number | undefined} The previous notifyAt
*/
async function getNotifyAt(cacheFile, version) {
try {
const old = JSON.parse(await readFile(cacheFile, 'utf-8'));
if (old?.version && old.version === version) {
return old.notifyAt;
}
} catch (err) {
// cache does not exist or malformed
if (err.code !== 'ENOENT') {
output.debug(`Error reading latest package cache file: ${err}`);
}
}
}
/**
* Fetches the dist tags from npm for a given package.
*
* @param {string} name The package name
* @returns A map of dist tags to versions
*/
async function fetchDistTags(name) {
// fetch the latest version from npm
const agent = new https.Agent({
keepAlive: true,
maxSockets: 15, // See: `npm config get maxsockets`
});
const headers = {
accept:
'application/vnd.npm.install-v1+json; q=1.0, application/json; q=0.8, */*',
};
  const url = `path_to_url${name}/dist-tags`;
output.debug(`Fetching ${url}`);
return new Promise((resolve, reject) => {
const req = https.get(
url,
{
agent,
headers,
},
res => {
let buf = '';
res.on('data', chunk => {
buf += chunk;
});
res.on('end', () => {
try {
if (res.statusCode && res.statusCode >= 400) {
return reject(
new Error(
`Fetch dist-tags failed ${res.statusCode} ${res.statusMessage}`
)
);
}
resolve(JSON.parse(buf));
} catch (err) {
reject(err);
}
});
}
);
req.on('error', reject);
req.end();
});
}
```
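The lock-file check in `isRunning` hinges on `process.kill(pid, 0)`, which sends no signal at all: it only performs the kernel's existence and permission check, throwing if the pid does not exist. A minimal standalone sketch of that probe (the function name here is illustrative, not part of the worker):

```javascript
// Probe whether a process with the given pid exists, without signaling it.
// `process.kill(pid, 0)` performs only the existence/permission check.
function isPidRunning(pid) {
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    // ESRCH: no such process. EPERM means it exists but belongs to
    // another user, so it still counts as running.
    return err.code === 'EPERM';
  }
}

console.log(isPidRunning(process.pid)); // the current process is running
```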
|
Peter Edvin Lindgren (13 December 1915 – 30 May 1981) was a Swedish actor. His many roles include the part of Lena's father in I Am Curious (Yellow) (1967) and I Am Curious (Blue) (1968). He won the award for best actor at the 16th Guldbagge Awards for his role in I Am Maria.
He was the father of actress Monica Nielsen.
Selected filmography
Med folket för fosterlandet (1938) - Striker (uncredited)
Du gamla du fria! (1938) - Lundström (uncredited)
Frun tillhanda (1939) - Young Man (uncredited)
With Open Arms (1940) - Student (uncredited)
Snapphanar (1941) - Guerilla soldier (uncredited)
Meeting in the Night (1946) - Filarn
Evening at the Djurgarden (1946) - Nicke
Iris and the Lieutenant (1946) - Svante (engineer)
A Ship Bound for India (1947) - Foreign sailor (uncredited)
Det kom en gäst... (1947) - Pastorn
Bill Bergson, Master Detective (1947) - Tjommen
The People of Simlang Valley (1947) - Tattar-Jan
Hammarforsens brus (1948) - Anders
The Street (1949) - Bertil 'Berra' Wiring
Big Lasse of Delsbo (1949) - Klas Hägglund
The Realm of the Rye (1950) - Markus
To mistenkelige personer (1950) - Ekstrøm
Stronger Than the Law (1951) - Manuel
Encounter with Life (1952) - Gun's Friend
Flottare med färg (1952) - Ivar Persson
Ursula - Flickan i Finnskogarna (1953) - Arne
Barabbas (1953) - Soldier which Assaulted Gang (uncredited)
Our Father and the Gypsy (1954) - Mickel
The Vicious Breed (1954) - Inmate
Voyage in the Night (1955) - Berra
Savnet siden mandag (1955)
The Summer Wind Blows (1955) - Gustav-Adolf Hållman
Mord, lilla vän (1955) - Valter Smitt
Night Child (1956) - Bruno (uncredited)
Swing it, fröken (1956) - Bi
A Dreamer's Journey (1957) - Anders Kolare
Blondin i fara (1957) - Night Club Customer
Never in Your Life (1957) - Ärtan
Do You Believe in Angels? (1961) - Poker player (uncredited)
Hide and Seek (1963) - Intellektuelle Johansson (uncredited)
Träfracken (1966) - Grevén
I Am Curious (Yellow) (1967) - Rune Nyman
I Am Curious (Blue) (1968) - Lena's Father
Vindingevals (1968) - Söder
We Are All Demons (1969) - First Mate
The New Land (1972) - Samuel Nöjd
Maria Marusjka (1973) - Ewert, fyrvokter
Ebon Lundin (1973) - Fyllo
Gangsterfilmen (1974) - Hans Nilsson
Lejonet och jungfrun (1975) - Blomberg
Garaget (1975) - Adolphson
City of My Dreams (1976) - Storsäcken
Drömmen om Amerika (1976) - Per-Olov
Kejsaren (1979) - Sjökapten
Linus eller Tegelhusets hemlighet (1979) - Medlem av stråkkvartett
Lucie (1979) - Rapist
I Am Maria (1979) - Jon
Blomstrande tider (1980) - Joel
Lyckliga vi... (1980) - The Old Swedish Man
Sverige åt svenskarna (1980) - Swedish soldier
Arme, syndige menneske (1980) - Swedish sailor
References
External links
1915 births
1981 deaths
Burials at Skogskyrkogården
People from Lidingö Municipality
Best Actor Guldbagge Award winners
20th-century Swedish male actors
|
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/objects.h>
#include <openssl/comp.h>
#include <openssl/err.h>
COMP_METHOD *COMP_zlib(void);
static COMP_METHOD zlib_method_nozlib = {
NID_undef,
"(undef)",
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
};
#ifndef ZLIB
# undef ZLIB_SHARED
#else
# include <zlib.h>
static int zlib_stateful_init(COMP_CTX *ctx);
static void zlib_stateful_finish(COMP_CTX *ctx);
static int zlib_stateful_compress_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen);
static int zlib_stateful_expand_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen);
/* Memory allocation functions for zlib initialization */
static void *zlib_zalloc(void *opaque, unsigned int no, unsigned int size)
{
void *p;
p = OPENSSL_malloc(no * size);
if (p)
memset(p, 0, no * size);
return p;
}
static void zlib_zfree(void *opaque, void *address)
{
OPENSSL_free(address);
}
# if 0
static int zlib_compress_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen);
static int zlib_expand_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen);
static int zz_uncompress(Bytef *dest, uLongf * destLen, const Bytef *source,
uLong sourceLen);
static COMP_METHOD zlib_stateless_method = {
NID_zlib_compression,
LN_zlib_compression,
NULL,
NULL,
zlib_compress_block,
zlib_expand_block,
NULL,
NULL,
};
# endif
static COMP_METHOD zlib_stateful_method = {
NID_zlib_compression,
LN_zlib_compression,
zlib_stateful_init,
zlib_stateful_finish,
zlib_stateful_compress_block,
zlib_stateful_expand_block,
NULL,
NULL,
};
/*
* When OpenSSL is built on Windows, we do not want to require that
* the ZLIB.DLL be available in order for the OpenSSL DLLs to
* work. Therefore, all ZLIB routines are loaded at run time
* and we do not link to a .LIB file when ZLIB_SHARED is set.
*/
# if defined(OPENSSL_SYS_WINDOWS) || defined(OPENSSL_SYS_WIN32)
# include <windows.h>
# endif /* !(OPENSSL_SYS_WINDOWS ||
* OPENSSL_SYS_WIN32) */
# ifdef ZLIB_SHARED
# include <openssl/dso.h>
/* Function pointers */
typedef int (*compress_ft) (Bytef *dest, uLongf * destLen,
const Bytef *source, uLong sourceLen);
typedef int (*inflateEnd_ft) (z_streamp strm);
typedef int (*inflate_ft) (z_streamp strm, int flush);
typedef int (*inflateInit__ft) (z_streamp strm,
const char *version, int stream_size);
typedef int (*deflateEnd_ft) (z_streamp strm);
typedef int (*deflate_ft) (z_streamp strm, int flush);
typedef int (*deflateInit__ft) (z_streamp strm, int level,
const char *version, int stream_size);
typedef const char *(*zError__ft) (int err);
static compress_ft p_compress = NULL;
static inflateEnd_ft p_inflateEnd = NULL;
static inflate_ft p_inflate = NULL;
static inflateInit__ft p_inflateInit_ = NULL;
static deflateEnd_ft p_deflateEnd = NULL;
static deflate_ft p_deflate = NULL;
static deflateInit__ft p_deflateInit_ = NULL;
static zError__ft p_zError = NULL;
static int zlib_loaded = 0; /* only attempt to init func pts once */
static DSO *zlib_dso = NULL;
# define compress p_compress
# define inflateEnd p_inflateEnd
# define inflate p_inflate
# define inflateInit_ p_inflateInit_
# define deflateEnd p_deflateEnd
# define deflate p_deflate
# define deflateInit_ p_deflateInit_
# define zError p_zError
# endif /* ZLIB_SHARED */
struct zlib_state {
z_stream istream;
z_stream ostream;
};
static int zlib_stateful_ex_idx = -1;
static int zlib_stateful_init(COMP_CTX *ctx)
{
int err;
struct zlib_state *state =
(struct zlib_state *)OPENSSL_malloc(sizeof(struct zlib_state));
if (state == NULL)
goto err;
state->istream.zalloc = zlib_zalloc;
state->istream.zfree = zlib_zfree;
state->istream.opaque = Z_NULL;
state->istream.next_in = Z_NULL;
state->istream.next_out = Z_NULL;
state->istream.avail_in = 0;
state->istream.avail_out = 0;
err = inflateInit_(&state->istream, ZLIB_VERSION, sizeof(z_stream));
if (err != Z_OK)
goto err;
state->ostream.zalloc = zlib_zalloc;
state->ostream.zfree = zlib_zfree;
state->ostream.opaque = Z_NULL;
state->ostream.next_in = Z_NULL;
state->ostream.next_out = Z_NULL;
state->ostream.avail_in = 0;
state->ostream.avail_out = 0;
err = deflateInit_(&state->ostream, Z_DEFAULT_COMPRESSION,
ZLIB_VERSION, sizeof(z_stream));
if (err != Z_OK)
goto err;
CRYPTO_new_ex_data(CRYPTO_EX_INDEX_COMP, ctx, &ctx->ex_data);
CRYPTO_set_ex_data(&ctx->ex_data, zlib_stateful_ex_idx, state);
return 1;
err:
if (state)
OPENSSL_free(state);
return 0;
}
static void zlib_stateful_finish(COMP_CTX *ctx)
{
struct zlib_state *state =
(struct zlib_state *)CRYPTO_get_ex_data(&ctx->ex_data,
zlib_stateful_ex_idx);
inflateEnd(&state->istream);
deflateEnd(&state->ostream);
OPENSSL_free(state);
CRYPTO_free_ex_data(CRYPTO_EX_INDEX_COMP, ctx, &ctx->ex_data);
}
static int zlib_stateful_compress_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen)
{
int err = Z_OK;
struct zlib_state *state =
(struct zlib_state *)CRYPTO_get_ex_data(&ctx->ex_data,
zlib_stateful_ex_idx);
if (state == NULL)
return -1;
state->ostream.next_in = in;
state->ostream.avail_in = ilen;
state->ostream.next_out = out;
state->ostream.avail_out = olen;
if (ilen > 0)
err = deflate(&state->ostream, Z_SYNC_FLUSH);
if (err != Z_OK)
return -1;
# ifdef DEBUG_ZLIB
fprintf(stderr, "compress(%4d)->%4d %s\n",
ilen, olen - state->ostream.avail_out,
(ilen != olen - state->ostream.avail_out) ? "zlib" : "clear");
# endif
return olen - state->ostream.avail_out;
}
static int zlib_stateful_expand_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen)
{
int err = Z_OK;
struct zlib_state *state =
(struct zlib_state *)CRYPTO_get_ex_data(&ctx->ex_data,
zlib_stateful_ex_idx);
if (state == NULL)
return 0;
state->istream.next_in = in;
state->istream.avail_in = ilen;
state->istream.next_out = out;
state->istream.avail_out = olen;
if (ilen > 0)
err = inflate(&state->istream, Z_SYNC_FLUSH);
if (err != Z_OK)
return -1;
# ifdef DEBUG_ZLIB
fprintf(stderr, "expand(%4d)->%4d %s\n",
ilen, olen - state->istream.avail_out,
(ilen != olen - state->istream.avail_out) ? "zlib" : "clear");
# endif
return olen - state->istream.avail_out;
}
# if 0
static int zlib_compress_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen)
{
unsigned long l;
int i;
int clear = 1;
if (ilen > 128) {
out[0] = 1;
l = olen - 1;
i = compress(&(out[1]), &l, in, (unsigned long)ilen);
if (i != Z_OK)
return (-1);
if (ilen > l) {
clear = 0;
l++;
}
}
if (clear) {
out[0] = 0;
memcpy(&(out[1]), in, ilen);
l = ilen + 1;
}
# ifdef DEBUG_ZLIB
fprintf(stderr, "compress(%4d)->%4d %s\n",
ilen, (int)l, (clear) ? "clear" : "zlib");
# endif
return ((int)l);
}
static int zlib_expand_block(COMP_CTX *ctx, unsigned char *out,
unsigned int olen, unsigned char *in,
unsigned int ilen)
{
unsigned long l;
int i;
if (in[0]) {
l = olen;
i = zz_uncompress(out, &l, &(in[1]), (unsigned long)ilen - 1);
if (i != Z_OK)
return (-1);
} else {
memcpy(out, &(in[1]), ilen - 1);
l = ilen - 1;
}
# ifdef DEBUG_ZLIB
fprintf(stderr, "expand (%4d)->%4d %s\n",
ilen, (int)l, in[0] ? "zlib" : "clear");
# endif
return ((int)l);
}
static int zz_uncompress(Bytef *dest, uLongf * destLen, const Bytef *source,
uLong sourceLen)
{
z_stream stream;
int err;
stream.next_in = (Bytef *)source;
stream.avail_in = (uInt) sourceLen;
/* Check for source > 64K on 16-bit machine: */
if ((uLong) stream.avail_in != sourceLen)
return Z_BUF_ERROR;
stream.next_out = dest;
stream.avail_out = (uInt) * destLen;
if ((uLong) stream.avail_out != *destLen)
return Z_BUF_ERROR;
stream.zalloc = (alloc_func) 0;
stream.zfree = (free_func) 0;
err = inflateInit_(&stream, ZLIB_VERSION, sizeof(z_stream));
if (err != Z_OK)
return err;
err = inflate(&stream, Z_FINISH);
if (err != Z_STREAM_END) {
inflateEnd(&stream);
return err;
}
*destLen = stream.total_out;
err = inflateEnd(&stream);
return err;
}
# endif
#endif
COMP_METHOD *COMP_zlib(void)
{
COMP_METHOD *meth = &zlib_method_nozlib;
#ifdef ZLIB_SHARED
if (!zlib_loaded) {
# if defined(OPENSSL_SYS_WINDOWS) || defined(OPENSSL_SYS_WIN32)
zlib_dso = DSO_load(NULL, "ZLIB1", NULL, 0);
# else
zlib_dso = DSO_load(NULL, "z", NULL, 0);
# endif
if (zlib_dso != NULL) {
p_compress = (compress_ft) DSO_bind_func(zlib_dso, "compress");
p_inflateEnd
= (inflateEnd_ft) DSO_bind_func(zlib_dso, "inflateEnd");
p_inflate = (inflate_ft) DSO_bind_func(zlib_dso, "inflate");
p_inflateInit_
= (inflateInit__ft) DSO_bind_func(zlib_dso, "inflateInit_");
p_deflateEnd
= (deflateEnd_ft) DSO_bind_func(zlib_dso, "deflateEnd");
p_deflate = (deflate_ft) DSO_bind_func(zlib_dso, "deflate");
p_deflateInit_
= (deflateInit__ft) DSO_bind_func(zlib_dso, "deflateInit_");
p_zError = (zError__ft) DSO_bind_func(zlib_dso, "zError");
if (p_compress && p_inflateEnd && p_inflate
&& p_inflateInit_ && p_deflateEnd
&& p_deflate && p_deflateInit_ && p_zError)
zlib_loaded++;
}
}
#endif
#ifdef ZLIB_SHARED
if (zlib_loaded)
#endif
#if defined(ZLIB) || defined(ZLIB_SHARED)
{
/*
* init zlib_stateful_ex_idx here so that in a multi-process
     * application it's enough to initialize openssl before forking (idx
* will be inherited in all the children)
*/
if (zlib_stateful_ex_idx == -1) {
CRYPTO_w_lock(CRYPTO_LOCK_COMP);
if (zlib_stateful_ex_idx == -1)
zlib_stateful_ex_idx =
CRYPTO_get_ex_new_index(CRYPTO_EX_INDEX_COMP,
0, NULL, NULL, NULL, NULL);
CRYPTO_w_unlock(CRYPTO_LOCK_COMP);
if (zlib_stateful_ex_idx == -1)
goto err;
}
meth = &zlib_stateful_method;
}
err:
#endif
return (meth);
}
void COMP_zlib_cleanup(void)
{
#ifdef ZLIB_SHARED
if (zlib_dso != NULL)
DSO_free(zlib_dso);
zlib_dso = NULL;
#endif
}
#ifdef ZLIB
/* Zlib based compression/decompression filter BIO */
typedef struct {
unsigned char *ibuf; /* Input buffer */
int ibufsize; /* Buffer size */
z_stream zin; /* Input decompress context */
unsigned char *obuf; /* Output buffer */
int obufsize; /* Output buffer size */
unsigned char *optr; /* Position in output buffer */
int ocount; /* Amount of data in output buffer */
int odone; /* deflate EOF */
int comp_level; /* Compression level to use */
z_stream zout; /* Output compression context */
} BIO_ZLIB_CTX;
# define ZLIB_DEFAULT_BUFSIZE 1024
static int bio_zlib_new(BIO *bi);
static int bio_zlib_free(BIO *bi);
static int bio_zlib_read(BIO *b, char *out, int outl);
static int bio_zlib_write(BIO *b, const char *in, int inl);
static long bio_zlib_ctrl(BIO *b, int cmd, long num, void *ptr);
static long bio_zlib_callback_ctrl(BIO *b, int cmd, bio_info_cb *fp);
static BIO_METHOD bio_meth_zlib = {
BIO_TYPE_COMP,
"zlib",
bio_zlib_write,
bio_zlib_read,
NULL,
NULL,
bio_zlib_ctrl,
bio_zlib_new,
bio_zlib_free,
bio_zlib_callback_ctrl
};
BIO_METHOD *BIO_f_zlib(void)
{
return &bio_meth_zlib;
}
static int bio_zlib_new(BIO *bi)
{
BIO_ZLIB_CTX *ctx;
# ifdef ZLIB_SHARED
(void)COMP_zlib();
if (!zlib_loaded) {
COMPerr(COMP_F_BIO_ZLIB_NEW, COMP_R_ZLIB_NOT_SUPPORTED);
return 0;
}
# endif
ctx = OPENSSL_malloc(sizeof(BIO_ZLIB_CTX));
if (!ctx) {
COMPerr(COMP_F_BIO_ZLIB_NEW, ERR_R_MALLOC_FAILURE);
return 0;
}
ctx->ibuf = NULL;
ctx->obuf = NULL;
ctx->ibufsize = ZLIB_DEFAULT_BUFSIZE;
ctx->obufsize = ZLIB_DEFAULT_BUFSIZE;
ctx->zin.zalloc = Z_NULL;
ctx->zin.zfree = Z_NULL;
ctx->zin.next_in = NULL;
ctx->zin.avail_in = 0;
ctx->zin.next_out = NULL;
ctx->zin.avail_out = 0;
ctx->zout.zalloc = Z_NULL;
ctx->zout.zfree = Z_NULL;
ctx->zout.next_in = NULL;
ctx->zout.avail_in = 0;
ctx->zout.next_out = NULL;
ctx->zout.avail_out = 0;
ctx->odone = 0;
ctx->comp_level = Z_DEFAULT_COMPRESSION;
bi->init = 1;
bi->ptr = (char *)ctx;
bi->flags = 0;
return 1;
}
static int bio_zlib_free(BIO *bi)
{
BIO_ZLIB_CTX *ctx;
if (!bi)
return 0;
ctx = (BIO_ZLIB_CTX *) bi->ptr;
if (ctx->ibuf) {
/* Destroy decompress context */
inflateEnd(&ctx->zin);
OPENSSL_free(ctx->ibuf);
}
if (ctx->obuf) {
/* Destroy compress context */
deflateEnd(&ctx->zout);
OPENSSL_free(ctx->obuf);
}
OPENSSL_free(ctx);
bi->ptr = NULL;
bi->init = 0;
bi->flags = 0;
return 1;
}
static int bio_zlib_read(BIO *b, char *out, int outl)
{
BIO_ZLIB_CTX *ctx;
int ret;
z_stream *zin;
if (!out || !outl)
return 0;
ctx = (BIO_ZLIB_CTX *) b->ptr;
zin = &ctx->zin;
BIO_clear_retry_flags(b);
if (!ctx->ibuf) {
ctx->ibuf = OPENSSL_malloc(ctx->ibufsize);
if (!ctx->ibuf) {
COMPerr(COMP_F_BIO_ZLIB_READ, ERR_R_MALLOC_FAILURE);
return 0;
}
inflateInit(zin);
zin->next_in = ctx->ibuf;
zin->avail_in = 0;
}
/* Copy output data directly to supplied buffer */
zin->next_out = (unsigned char *)out;
zin->avail_out = (unsigned int)outl;
for (;;) {
/* Decompress while data available */
while (zin->avail_in) {
ret = inflate(zin, 0);
if ((ret != Z_OK) && (ret != Z_STREAM_END)) {
COMPerr(COMP_F_BIO_ZLIB_READ, COMP_R_ZLIB_INFLATE_ERROR);
ERR_add_error_data(2, "zlib error:", zError(ret));
return 0;
}
/* If EOF or we've read everything then return */
if ((ret == Z_STREAM_END) || !zin->avail_out)
return outl - zin->avail_out;
}
/*
* No data in input buffer try to read some in, if an error then
* return the total data read.
*/
ret = BIO_read(b->next_bio, ctx->ibuf, ctx->ibufsize);
if (ret <= 0) {
/* Total data read */
int tot = outl - zin->avail_out;
BIO_copy_next_retry(b);
if (ret < 0)
return (tot > 0) ? tot : ret;
return tot;
}
zin->avail_in = ret;
zin->next_in = ctx->ibuf;
}
}
static int bio_zlib_write(BIO *b, const char *in, int inl)
{
BIO_ZLIB_CTX *ctx;
int ret;
z_stream *zout;
if (!in || !inl)
return 0;
ctx = (BIO_ZLIB_CTX *) b->ptr;
if (ctx->odone)
return 0;
zout = &ctx->zout;
BIO_clear_retry_flags(b);
if (!ctx->obuf) {
ctx->obuf = OPENSSL_malloc(ctx->obufsize);
/* Need error here */
if (!ctx->obuf) {
COMPerr(COMP_F_BIO_ZLIB_WRITE, ERR_R_MALLOC_FAILURE);
return 0;
}
ctx->optr = ctx->obuf;
ctx->ocount = 0;
deflateInit(zout, ctx->comp_level);
zout->next_out = ctx->obuf;
zout->avail_out = ctx->obufsize;
}
/* Obtain input data directly from supplied buffer */
zout->next_in = (void *)in;
zout->avail_in = inl;
for (;;) {
/* If data in output buffer write it first */
while (ctx->ocount) {
ret = BIO_write(b->next_bio, ctx->optr, ctx->ocount);
if (ret <= 0) {
/* Total data written */
int tot = inl - zout->avail_in;
BIO_copy_next_retry(b);
if (ret < 0)
return (tot > 0) ? tot : ret;
return tot;
}
ctx->optr += ret;
ctx->ocount -= ret;
}
/* Have we consumed all supplied data? */
if (!zout->avail_in)
return inl;
/* Compress some more */
/* Reset buffer */
ctx->optr = ctx->obuf;
zout->next_out = ctx->obuf;
zout->avail_out = ctx->obufsize;
/* Compress some more */
ret = deflate(zout, 0);
if (ret != Z_OK) {
COMPerr(COMP_F_BIO_ZLIB_WRITE, COMP_R_ZLIB_DEFLATE_ERROR);
ERR_add_error_data(2, "zlib error:", zError(ret));
return 0;
}
ctx->ocount = ctx->obufsize - zout->avail_out;
}
}
static int bio_zlib_flush(BIO *b)
{
BIO_ZLIB_CTX *ctx;
int ret;
z_stream *zout;
ctx = (BIO_ZLIB_CTX *) b->ptr;
    /* If no data written or already flushed, show success */
if (!ctx->obuf || (ctx->odone && !ctx->ocount))
return 1;
zout = &ctx->zout;
BIO_clear_retry_flags(b);
/* No more input data */
zout->next_in = NULL;
zout->avail_in = 0;
for (;;) {
/* If data in output buffer write it first */
while (ctx->ocount) {
ret = BIO_write(b->next_bio, ctx->optr, ctx->ocount);
if (ret <= 0) {
BIO_copy_next_retry(b);
return ret;
}
ctx->optr += ret;
ctx->ocount -= ret;
}
if (ctx->odone)
return 1;
/* Compress some more */
/* Reset buffer */
ctx->optr = ctx->obuf;
zout->next_out = ctx->obuf;
zout->avail_out = ctx->obufsize;
/* Compress some more */
ret = deflate(zout, Z_FINISH);
if (ret == Z_STREAM_END)
ctx->odone = 1;
else if (ret != Z_OK) {
COMPerr(COMP_F_BIO_ZLIB_FLUSH, COMP_R_ZLIB_DEFLATE_ERROR);
ERR_add_error_data(2, "zlib error:", zError(ret));
return 0;
}
ctx->ocount = ctx->obufsize - zout->avail_out;
}
}
static long bio_zlib_ctrl(BIO *b, int cmd, long num, void *ptr)
{
BIO_ZLIB_CTX *ctx;
int ret, *ip;
int ibs, obs;
if (!b->next_bio)
return 0;
ctx = (BIO_ZLIB_CTX *) b->ptr;
switch (cmd) {
case BIO_CTRL_RESET:
ctx->ocount = 0;
ctx->odone = 0;
ret = 1;
break;
case BIO_CTRL_FLUSH:
ret = bio_zlib_flush(b);
if (ret > 0)
ret = BIO_flush(b->next_bio);
break;
case BIO_C_SET_BUFF_SIZE:
ibs = -1;
obs = -1;
if (ptr != NULL) {
ip = ptr;
if (*ip == 0)
ibs = (int)num;
else
obs = (int)num;
} else {
ibs = (int)num;
obs = ibs;
}
if (ibs != -1) {
if (ctx->ibuf) {
OPENSSL_free(ctx->ibuf);
ctx->ibuf = NULL;
}
ctx->ibufsize = ibs;
}
if (obs != -1) {
if (ctx->obuf) {
OPENSSL_free(ctx->obuf);
ctx->obuf = NULL;
}
ctx->obufsize = obs;
}
ret = 1;
break;
case BIO_C_DO_STATE_MACHINE:
BIO_clear_retry_flags(b);
ret = BIO_ctrl(b->next_bio, cmd, num, ptr);
BIO_copy_next_retry(b);
break;
default:
ret = BIO_ctrl(b->next_bio, cmd, num, ptr);
break;
}
return ret;
}
static long bio_zlib_callback_ctrl(BIO *b, int cmd, bio_info_cb *fp)
{
if (!b->next_bio)
return 0;
return BIO_callback_ctrl(b->next_bio, cmd, fp);
}
#endif
```
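The `# if 0` stateless path above wraps each block with a one-byte header: `1` means the payload is deflate-compressed, `0` means it is stored verbatim because compression did not shrink it. A minimal sketch of just that framing, covering only the stored path (no zlib calls; function names are hypothetical, not OpenSSL API):

```c
#include <assert.h>
#include <string.h>

/* Frame a block as "stored": header byte 0 followed by a verbatim copy.
 * Mirrors the clear-path fallback in zlib_compress_block() above. */
static int frame_stored(unsigned char *out, unsigned int olen,
                        const unsigned char *in, unsigned int ilen)
{
    if (olen < ilen + 1)
        return -1;              /* output buffer too small */
    out[0] = 0;                 /* 0 = not compressed */
    memcpy(out + 1, in, ilen);
    return (int)(ilen + 1);
}

/* Undo the framing: the header byte selects the decode path. */
static int unframe(unsigned char *out, unsigned int olen,
                   const unsigned char *in, unsigned int ilen)
{
    if (ilen < 1 || olen < ilen - 1)
        return -1;
    if (in[0] != 0)
        return -1;              /* compressed path would call inflate here */
    memcpy(out, in + 1, ilen - 1);
    return (int)(ilen - 1);
}
```

Note that the stored path costs one byte of expansion per block, which is why `zlib_compress_block` only keeps the compressed form when it is strictly shorter than the input.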
|
Magdelaine Chapelain (1651 – June 1724; also spelled Madeleine Chappelain) was a French fortune teller and poisoner. She was a defendant in the famous Affair of the Poisons.
Chapelain was a very successful fortune teller who had amassed a fortune from her work. Her spouse was a former usher who held a post as a bureaucrat, and she also owned several buildings. She was implicated in the Affair of the Poisons because she had formerly employed Françoise Filastre as a maid. In December 1679, Filastre was arrested upon her return from a trip to Auvergne (province) that Chapelain had paid for. Chapelain was also connected to Louis de Vanens, to whom she had rented a house.
Adam Lesage claimed that Chapelain had made her fortune by manufacturing poisons and performing black magic in collaboration with a man named Boucher. Filastre claimed that she had on occasion supplied Chapelain with poison to sell, that Chapelain had commissioned her help in forming a pact with Satan, and that Chapelain also performed curses and other magical services for her clients. Filastre further claimed that it was Chapelain who had been commissioned by Madame de Montespan to place an assassin (Filastre herself) in the household of Angelique de Fontanges.
Like many other involved with the Poison Affair, Magdelaine Chapelain never had a trial. She was imprisoned in perpetuity by a Lettre de cachet at Belle-Île-en-Mer. According to Frantz Funck-Brentano, she was imprisoned in Villefranche-de-Conflent, in Fort Libéria. The exact date of death is not known for many of the accused, but of the prisoners whose year of death is known, Magdelaine Chapelain's death in June 1724 was the last recorded.
References
1651 births
1724 deaths
1679 crimes
French occultists
Poisoners
French people who died in prison custody
17th-century occultists
People imprisoned by lettre de cachet
17th-century French businesspeople
Affair of the Poisons
|
```javascript
/*
* JQuery zTree exHideNodes v3.5.23
* path_to_url
*
*
* path_to_url
*
* email: hunter.z@263.net
* Date: 2016-04-01
*/
(function(i){i.extend(!0,i.fn.zTree._z,{view:{clearOldFirstNode:function(c,a){for(var b=a.getNextNode();b;){if(b.isFirstNode){b.isFirstNode=!1;d.setNodeLineIcos(c,b);break}if(b.isLastNode)break;b=b.getNextNode()}},clearOldLastNode:function(c,a,b){for(a=a.getPreNode();a;){if(a.isLastNode){a.isLastNode=!1;b&&d.setNodeLineIcos(c,a);break}if(a.isFirstNode)break;a=a.getPreNode()}},makeDOMNodeMainBefore:function(c,a,b){c.push("<li ",b.isHidden?"style='display:none;' ":"","id='",b.tId,"' class='",l.className.LEVEL,
b.level,"' tabindex='0' hidefocus='true' treenode>")},showNode:function(c,a){a.isHidden=!1;f.initShowForExCheck(c,a);j(a,c).show()},showNodes:function(c,a,b){if(a&&a.length!=0){var e={},g,k;for(g=0,k=a.length;g<k;g++){var h=a[g];if(!e[h.parentTId]){var i=h.getParentNode();e[h.parentTId]=i===null?f.getRoot(c):h.getParentNode()}d.showNode(c,h,b)}for(var j in e)a=e[j][c.data.key.children],d.setFirstNodeForShow(c,a),d.setLastNodeForShow(c,a)}},hideNode:function(c,a){a.isHidden=!0;a.isFirstNode=!1;a.isLastNode=
!1;f.initHideForExCheck(c,a);d.cancelPreSelectedNode(c,a);j(a,c).hide()},hideNodes:function(c,a,b){if(a&&a.length!=0){var e={},g,k;for(g=0,k=a.length;g<k;g++){var h=a[g];if((h.isFirstNode||h.isLastNode)&&!e[h.parentTId]){var i=h.getParentNode();e[h.parentTId]=i===null?f.getRoot(c):h.getParentNode()}d.hideNode(c,h,b)}for(var j in e)a=e[j][c.data.key.children],d.setFirstNodeForHide(c,a),d.setLastNodeForHide(c,a)}},setFirstNode:function(c,a){var b=c.data.key.children,e=a[b].length;e>0&&!a[b][0].isHidden?
a[b][0].isFirstNode=!0:e>0&&d.setFirstNodeForHide(c,a[b])},setLastNode:function(c,a){var b=c.data.key.children,e=a[b].length;e>0&&!a[b][0].isHidden?a[b][e-1].isLastNode=!0:e>0&&d.setLastNodeForHide(c,a[b])},setFirstNodeForHide:function(c,a){var b,e,g;for(e=0,g=a.length;e<g;e++){b=a[e];if(b.isFirstNode)break;if(!b.isHidden&&!b.isFirstNode){b.isFirstNode=!0;d.setNodeLineIcos(c,b);break}else b=null}return b},setFirstNodeForShow:function(c,a){var b,e,g,f,h;for(e=0,g=a.length;e<g;e++)if(b=a[e],!f&&!b.isHidden&&
b.isFirstNode){f=b;break}else if(!f&&!b.isHidden&&!b.isFirstNode)b.isFirstNode=!0,f=b,d.setNodeLineIcos(c,b);else if(f&&b.isFirstNode){b.isFirstNode=!1;h=b;d.setNodeLineIcos(c,b);break}return{"new":f,old:h}},setLastNodeForHide:function(c,a){var b,e;for(e=a.length-1;e>=0;e--){b=a[e];if(b.isLastNode)break;if(!b.isHidden&&!b.isLastNode){b.isLastNode=!0;d.setNodeLineIcos(c,b);break}else b=null}return b},setLastNodeForShow:function(c,a){var b,e,g,f;for(e=a.length-1;e>=0;e--)if(b=a[e],!g&&!b.isHidden&&
b.isLastNode){g=b;break}else if(!g&&!b.isHidden&&!b.isLastNode)b.isLastNode=!0,g=b,d.setNodeLineIcos(c,b);else if(g&&b.isLastNode){b.isLastNode=!1;f=b;d.setNodeLineIcos(c,b);break}return{"new":g,old:f}}},data:{initHideForExCheck:function(c,a){if(a.isHidden&&c.check&&c.check.enable){if(typeof a._nocheck=="undefined")a._nocheck=!!a.nocheck,a.nocheck=!0;a.check_Child_State=-1;d.repairParentChkClassWithSelf&&d.repairParentChkClassWithSelf(c,a)}},initShowForExCheck:function(c,a){if(!a.isHidden&&c.check&&
c.check.enable){if(typeof a._nocheck!="undefined")a.nocheck=a._nocheck,delete a._nocheck;if(d.setChkClass){var b=j(a,l.id.CHECK,c);d.setChkClass(c,b,a)}d.repairParentChkClassWithSelf&&d.repairParentChkClassWithSelf(c,a)}}}});var i=i.fn.zTree,m=i._z.tools,l=i.consts,d=i._z.view,f=i._z.data,j=m.$;f.addInitNode(function(c,a,b){if(typeof b.isHidden=="string")b.isHidden=m.eqs(b.isHidden,"true");b.isHidden=!!b.isHidden;f.initHideForExCheck(c,b)});f.addBeforeA(function(){});f.addZTreeTools(function(c,a){a.showNodes=
function(a,b){d.showNodes(c,a,b)};a.showNode=function(a,b){a&&d.showNodes(c,[a],b)};a.hideNodes=function(a,b){d.hideNodes(c,a,b)};a.hideNode=function(a,b){a&&d.hideNodes(c,[a],b)};var b=a.checkNode;if(b)a.checkNode=function(c,d,f,h){(!c||!c.isHidden)&&b.apply(a,arguments)}});var n=f.initNode;f.initNode=function(c,a,b,e,g,i,h){var j=(e?e:f.getRoot(c))[c.data.key.children];f.tmpHideFirstNode=d.setFirstNodeForHide(c,j);f.tmpHideLastNode=d.setLastNodeForHide(c,j);h&&(d.setNodeLineIcos(c,f.tmpHideFirstNode),
d.setNodeLineIcos(c,f.tmpHideLastNode));g=f.tmpHideFirstNode===b;i=f.tmpHideLastNode===b;n&&n.apply(f,arguments);h&&i&&d.clearOldLastNode(c,b,h)};var o=f.makeChkFlag;if(o)f.makeChkFlag=function(c,a){(!a||!a.isHidden)&&o.apply(f,arguments)};var p=f.getTreeCheckedNodes;if(p)f.getTreeCheckedNodes=function(c,a,b,e){if(a&&a.length>0){var d=a[0].getParentNode();if(d&&d.isHidden)return[]}return p.apply(f,arguments)};var q=f.getTreeChangeCheckedNodes;if(q)f.getTreeChangeCheckedNodes=function(c,a,b){if(a&&
a.length>0){var d=a[0].getParentNode();if(d&&d.isHidden)return[]}return q.apply(f,arguments)};var r=d.expandCollapseSonNode;if(r)d.expandCollapseSonNode=function(c,a,b,e,f){(!a||!a.isHidden)&&r.apply(d,arguments)};var s=d.setSonNodeCheckBox;if(s)d.setSonNodeCheckBox=function(c,a,b,e){(!a||!a.isHidden)&&s.apply(d,arguments)};var t=d.repairParentChkClassWithSelf;if(t)d.repairParentChkClassWithSelf=function(c,a){(!a||!a.isHidden)&&t.apply(d,arguments)}})(jQuery);
```
|
```go
// Code generated by smithy-go-codegen DO NOT EDIT.
package types
import (
"fmt"
smithy "github.com/aws/smithy-go"
)
// Error returned if an attempt is made to register a patch group with a patch
// baseline that is already registered with a different patch baseline.
type AlreadyExistsException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AlreadyExistsException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AlreadyExistsException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AlreadyExistsException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AlreadyExistsException"
}
return *e.ErrorCodeOverride
}
func (e *AlreadyExistsException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
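// Editor's note (an illustrative sketch, not part of the generated file):
// callers typically detect a specific API error with errors.As and branch on
// the typed value. The nil-safe ErrorCode method above falls back to the
// static code when the service returned no ErrorCodeOverride, so the
// formatted Error() string is always well-defined:
//
//	var aee *AlreadyExistsException
//	if errors.As(err, &aee) {
//		// e.g. "AlreadyExistsException: <service message>"
//		log.Printf("patch group conflict: %s", aee.Error())
//	}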
// You must disassociate a document from all managed nodes before you can delete
// it.
type AssociatedInstances struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AssociatedInstances) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AssociatedInstances) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AssociatedInstances) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AssociatedInstances"
}
return *e.ErrorCodeOverride
}
func (e *AssociatedInstances) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified association already exists.
type AssociationAlreadyExists struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AssociationAlreadyExists) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AssociationAlreadyExists) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AssociationAlreadyExists) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AssociationAlreadyExists"
}
return *e.ErrorCodeOverride
}
func (e *AssociationAlreadyExists) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified association doesn't exist.
type AssociationDoesNotExist struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AssociationDoesNotExist) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AssociationDoesNotExist) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AssociationDoesNotExist) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AssociationDoesNotExist"
}
return *e.ErrorCodeOverride
}
func (e *AssociationDoesNotExist) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified execution ID doesn't exist. Verify the ID number and try again.
type AssociationExecutionDoesNotExist struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AssociationExecutionDoesNotExist) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AssociationExecutionDoesNotExist) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AssociationExecutionDoesNotExist) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AssociationExecutionDoesNotExist"
}
return *e.ErrorCodeOverride
}
func (e *AssociationExecutionDoesNotExist) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You can have at most 2,000 active associations.
type AssociationLimitExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AssociationLimitExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AssociationLimitExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AssociationLimitExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AssociationLimitExceeded"
}
return *e.ErrorCodeOverride
}
func (e *AssociationLimitExceeded) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You have reached the maximum number of versions allowed for an association. Each
// association has a limit of 1,000 versions.
type AssociationVersionLimitExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AssociationVersionLimitExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AssociationVersionLimitExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AssociationVersionLimitExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AssociationVersionLimitExceeded"
}
return *e.ErrorCodeOverride
}
func (e *AssociationVersionLimitExceeded) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// Indicates that the Change Manager change template used in the change request
// was rejected or is still in a pending state.
type AutomationDefinitionNotApprovedException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AutomationDefinitionNotApprovedException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AutomationDefinitionNotApprovedException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AutomationDefinitionNotApprovedException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AutomationDefinitionNotApprovedException"
}
return *e.ErrorCodeOverride
}
func (e *AutomationDefinitionNotApprovedException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// An Automation runbook with the specified name couldn't be found.
type AutomationDefinitionNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AutomationDefinitionNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AutomationDefinitionNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AutomationDefinitionNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AutomationDefinitionNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *AutomationDefinitionNotFoundException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// An Automation runbook with the specified name and version couldn't be found.
type AutomationDefinitionVersionNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AutomationDefinitionVersionNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AutomationDefinitionVersionNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AutomationDefinitionVersionNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AutomationDefinitionVersionNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *AutomationDefinitionVersionNotFoundException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The number of simultaneously running Automation executions exceeded the
// allowable limit.
type AutomationExecutionLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AutomationExecutionLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AutomationExecutionLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AutomationExecutionLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AutomationExecutionLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *AutomationExecutionLimitExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// There is no automation execution information for the requested automation
// execution ID.
type AutomationExecutionNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AutomationExecutionNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AutomationExecutionNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AutomationExecutionNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AutomationExecutionNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *AutomationExecutionNotFoundException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified step name and execution ID don't exist. Verify the information
// and try again.
type AutomationStepNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *AutomationStepNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *AutomationStepNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *AutomationStepNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "AutomationStepNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *AutomationStepNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You specified too many custom compliance types. You can specify a maximum of 10
// different types.
type ComplianceTypeCountLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ComplianceTypeCountLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ComplianceTypeCountLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ComplianceTypeCountLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ComplianceTypeCountLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *ComplianceTypeCountLimitExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// You have exceeded the limit for custom schemas. Delete one or more custom
// schemas and try again.
type CustomSchemaCountLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *CustomSchemaCountLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *CustomSchemaCountLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *CustomSchemaCountLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "CustomSchemaCountLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *CustomSchemaCountLimitExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified document already exists.
type DocumentAlreadyExists struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DocumentAlreadyExists) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DocumentAlreadyExists) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DocumentAlreadyExists) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DocumentAlreadyExists"
}
return *e.ErrorCodeOverride
}
func (e *DocumentAlreadyExists) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You can have at most 500 active SSM documents.
type DocumentLimitExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DocumentLimitExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DocumentLimitExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DocumentLimitExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DocumentLimitExceeded"
}
return *e.ErrorCodeOverride
}
func (e *DocumentLimitExceeded) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The document can't be shared with more Amazon Web Services accounts. You can
// specify a maximum of 20 accounts per API operation to share a private document.
//
// By default, you can share a private document with a maximum of 1,000 accounts
// and publicly share up to five documents.
//
// If you need to increase the quota for privately or publicly shared Systems
// Manager documents, contact Amazon Web Services Support.
type DocumentPermissionLimit struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DocumentPermissionLimit) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DocumentPermissionLimit) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DocumentPermissionLimit) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DocumentPermissionLimit"
}
return *e.ErrorCodeOverride
}
func (e *DocumentPermissionLimit) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The document has too many versions. Delete one or more document versions and
// try again.
type DocumentVersionLimitExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DocumentVersionLimitExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DocumentVersionLimitExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DocumentVersionLimitExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DocumentVersionLimitExceeded"
}
return *e.ErrorCodeOverride
}
func (e *DocumentVersionLimitExceeded) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// Error returned when the ID specified for a resource, such as a maintenance
// window or patch baseline, doesn't exist.
//
// For information about resource quotas in Amazon Web Services Systems Manager,
// see [Systems Manager service quotas] in the Amazon Web Services General Reference.
//
// [Systems Manager service quotas]: path_to_url#limits_ssm
type DoesNotExistException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DoesNotExistException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DoesNotExistException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DoesNotExistException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DoesNotExistException"
}
return *e.ErrorCodeOverride
}
func (e *DoesNotExistException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The content of the association document matches another document. Change the
// content of the document and try again.
type DuplicateDocumentContent struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DuplicateDocumentContent) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DuplicateDocumentContent) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DuplicateDocumentContent) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DuplicateDocumentContent"
}
return *e.ErrorCodeOverride
}
func (e *DuplicateDocumentContent) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The version name has already been used in this document. Specify a different
// version name, and then try again.
type DuplicateDocumentVersionName struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DuplicateDocumentVersionName) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DuplicateDocumentVersionName) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DuplicateDocumentVersionName) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DuplicateDocumentVersionName"
}
return *e.ErrorCodeOverride
}
func (e *DuplicateDocumentVersionName) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You can't specify a managed node ID in more than one association.
type DuplicateInstanceId struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *DuplicateInstanceId) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *DuplicateInstanceId) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *DuplicateInstanceId) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "DuplicateInstanceId"
}
return *e.ErrorCodeOverride
}
func (e *DuplicateInstanceId) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You attempted to register a LAMBDA or STEP_FUNCTIONS task in a region where the
// corresponding service isn't available.
type FeatureNotAvailableException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *FeatureNotAvailableException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *FeatureNotAvailableException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *FeatureNotAvailableException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "FeatureNotAvailableException"
}
return *e.ErrorCodeOverride
}
func (e *FeatureNotAvailableException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// A hierarchy can have a maximum of 15 levels. For more information, see [Requirements and constraints for parameter names] in the
// Amazon Web Services Systems Manager User Guide.
//
// [Requirements and constraints for parameter names]: path_to_url
type HierarchyLevelLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *HierarchyLevelLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *HierarchyLevelLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *HierarchyLevelLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "HierarchyLevelLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *HierarchyLevelLimitExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// Parameter Store doesn't support changing a parameter type in a hierarchy. For
// example, you can't change a parameter from a String type to a SecureString
// type. You must create a new, unique parameter.
type HierarchyTypeMismatchException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *HierarchyTypeMismatchException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *HierarchyTypeMismatchException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *HierarchyTypeMismatchException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "HierarchyTypeMismatchException"
}
return *e.ErrorCodeOverride
}
func (e *HierarchyTypeMismatchException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// Error returned when an idempotent operation is retried and the parameters don't
// match the original call to the API with the same idempotency token.
type IdempotentParameterMismatch struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *IdempotentParameterMismatch) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *IdempotentParameterMismatch) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *IdempotentParameterMismatch) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "IdempotentParameterMismatch"
}
return *e.ErrorCodeOverride
}
func (e *IdempotentParameterMismatch) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// There is a conflict in the policies specified for this parameter. You can't,
// for example, specify two Expiration policies for a parameter. Review your
// policies, and try again.
type IncompatiblePolicyException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *IncompatiblePolicyException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *IncompatiblePolicyException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *IncompatiblePolicyException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "IncompatiblePolicyException"
}
return *e.ErrorCodeOverride
}
func (e *IncompatiblePolicyException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// An error occurred on the server side.
type InternalServerError struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InternalServerError) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InternalServerError) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InternalServerError) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InternalServerError"
}
return *e.ErrorCodeOverride
}
func (e *InternalServerError) ErrorFault() smithy.ErrorFault { return smithy.FaultServer }
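// Editor's note (an illustrative sketch, not part of the generated file):
// ErrorFault records which side is responsible for the failure. Among the
// types in this file only InternalServerError reports smithy.FaultServer;
// server faults are generally safe to retry with backoff, while client
// faults require a corrected request. Callers can test this generically via
// the smithy.APIError interface, which includes ErrorFault():
//
//	var apiErr smithy.APIError
//	if errors.As(err, &apiErr) && apiErr.ErrorFault() == smithy.FaultServer {
//		// server-side failure: a retry with backoff may succeed
//	}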
// The activation isn't valid. The activation might have been deleted, or the
// ActivationId and the ActivationCode don't match.
type InvalidActivation struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidActivation) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidActivation) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidActivation) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidActivation"
}
return *e.ErrorCodeOverride
}
func (e *InvalidActivation) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The activation ID isn't valid. Verify that you entered the correct ActivationId
// or ActivationCode and try again.
type InvalidActivationId struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidActivationId) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidActivationId) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidActivationId) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidActivationId"
}
return *e.ErrorCodeOverride
}
func (e *InvalidActivationId) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified aggregator isn't valid for inventory groups. Verify that the
// aggregator uses a valid inventory type such as AWS:Application or
// AWS:InstanceInformation .
type InvalidAggregatorException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidAggregatorException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidAggregatorException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidAggregatorException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidAggregatorException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidAggregatorException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The request doesn't meet the regular expression requirement.
type InvalidAllowedPatternException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidAllowedPatternException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidAllowedPatternException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidAllowedPatternException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidAllowedPatternException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidAllowedPatternException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The association isn't valid or doesn't exist.
type InvalidAssociation struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidAssociation) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidAssociation) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidAssociation) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidAssociation"
}
return *e.ErrorCodeOverride
}
func (e *InvalidAssociation) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The version you specified isn't valid. Use ListAssociationVersions to view all
// versions of an association according to the association ID. Or, use the $LATEST
// parameter to view the latest version of the association.
type InvalidAssociationVersion struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidAssociationVersion) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidAssociationVersion) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidAssociationVersion) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidAssociationVersion"
}
return *e.ErrorCodeOverride
}
func (e *InvalidAssociationVersion) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The supplied parameters for invoking the specified Automation runbook are
// incorrect. For example, they may not match the set of parameters permitted for
// the specified Automation document.
type InvalidAutomationExecutionParametersException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidAutomationExecutionParametersException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidAutomationExecutionParametersException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidAutomationExecutionParametersException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidAutomationExecutionParametersException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidAutomationExecutionParametersException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The signal isn't valid for the current Automation execution.
type InvalidAutomationSignalException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidAutomationSignalException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidAutomationSignalException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidAutomationSignalException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidAutomationSignalException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidAutomationSignalException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified update status operation isn't valid.
type InvalidAutomationStatusUpdateException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidAutomationStatusUpdateException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidAutomationStatusUpdateException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidAutomationStatusUpdateException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidAutomationStatusUpdateException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidAutomationStatusUpdateException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified command ID isn't valid. Verify the ID and try again.
type InvalidCommandId struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidCommandId) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidCommandId) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidCommandId) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidCommandId"
}
return *e.ErrorCodeOverride
}
func (e *InvalidCommandId) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// One or more of the parameters specified for the delete operation isn't valid.
// Verify all parameters and try again.
type InvalidDeleteInventoryParametersException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDeleteInventoryParametersException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDeleteInventoryParametersException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDeleteInventoryParametersException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDeleteInventoryParametersException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDeleteInventoryParametersException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The ID specified for the delete operation doesn't exist or isn't valid. Verify
// the ID and try again.
type InvalidDeletionIdException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDeletionIdException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDeletionIdException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDeletionIdException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDeletionIdException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDeletionIdException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified SSM document doesn't exist.
type InvalidDocument struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDocument) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDocument) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDocument) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDocument"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDocument) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The content for the document isn't valid.
type InvalidDocumentContent struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDocumentContent) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDocumentContent) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDocumentContent) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDocumentContent"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDocumentContent) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You attempted to delete a document while it is still shared. You must stop
// sharing the document before you can delete it.
type InvalidDocumentOperation struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDocumentOperation) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDocumentOperation) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDocumentOperation) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDocumentOperation"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDocumentOperation) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The version of the document schema isn't supported.
type InvalidDocumentSchemaVersion struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDocumentSchemaVersion) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDocumentSchemaVersion) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDocumentSchemaVersion) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDocumentSchemaVersion"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDocumentSchemaVersion) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The SSM document type isn't valid. Valid document types are described in the
// DocumentType property.
type InvalidDocumentType struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDocumentType) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDocumentType) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDocumentType) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDocumentType"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDocumentType) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The document version isn't valid or doesn't exist.
type InvalidDocumentVersion struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidDocumentVersion) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidDocumentVersion) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidDocumentVersion) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidDocumentVersion"
}
return *e.ErrorCodeOverride
}
func (e *InvalidDocumentVersion) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The filter name isn't valid. Verify that you entered the correct name and
// try again.
type InvalidFilter struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidFilter) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidFilter) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidFilter) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidFilter"
}
return *e.ErrorCodeOverride
}
func (e *InvalidFilter) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified key isn't valid.
type InvalidFilterKey struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidFilterKey) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidFilterKey) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidFilterKey) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidFilterKey"
}
return *e.ErrorCodeOverride
}
func (e *InvalidFilterKey) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified filter option isn't valid. Valid options are Equals and
// BeginsWith. For Path filter, valid options are Recursive and OneLevel.
type InvalidFilterOption struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidFilterOption) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidFilterOption) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidFilterOption) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidFilterOption"
}
return *e.ErrorCodeOverride
}
func (e *InvalidFilterOption) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The filter value isn't valid. Verify the value and try again.
type InvalidFilterValue struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidFilterValue) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidFilterValue) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidFilterValue) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidFilterValue"
}
return *e.ErrorCodeOverride
}
func (e *InvalidFilterValue) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The following problems can cause this exception:
//
// - You don't have permission to access the managed node.
//
// - Amazon Web Services Systems Manager Agent (SSM Agent) isn't running. Verify
// that SSM Agent is running.
//
// - SSM Agent isn't registered with the SSM endpoint. Try reinstalling SSM
// Agent.
//
// - The managed node isn't in a valid state. Valid states are: Running , Pending
// , Stopped , and Stopping . Invalid states are: Shutting-down and Terminated .
type InvalidInstanceId struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidInstanceId) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidInstanceId) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidInstanceId) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidInstanceId"
}
return *e.ErrorCodeOverride
}
func (e *InvalidInstanceId) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified filter value isn't valid.
type InvalidInstanceInformationFilterValue struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidInstanceInformationFilterValue) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidInstanceInformationFilterValue) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidInstanceInformationFilterValue) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidInstanceInformationFilterValue"
}
return *e.ErrorCodeOverride
}
func (e *InvalidInstanceInformationFilterValue) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified filter value isn't valid.
type InvalidInstancePropertyFilterValue struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidInstancePropertyFilterValue) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidInstancePropertyFilterValue) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidInstancePropertyFilterValue) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidInstancePropertyFilterValue"
}
return *e.ErrorCodeOverride
}
func (e *InvalidInstancePropertyFilterValue) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified inventory group isn't valid.
type InvalidInventoryGroupException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidInventoryGroupException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidInventoryGroupException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidInventoryGroupException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidInventoryGroupException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidInventoryGroupException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You specified invalid keys or values in the Context attribute for InventoryItem
// . Verify the keys and values, and try again.
type InvalidInventoryItemContextException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidInventoryItemContextException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidInventoryItemContextException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidInventoryItemContextException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidInventoryItemContextException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidInventoryItemContextException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The request isn't valid.
type InvalidInventoryRequestException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidInventoryRequestException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidInventoryRequestException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidInventoryRequestException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidInventoryRequestException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidInventoryRequestException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// One or more content items aren't valid.
type InvalidItemContentException struct {
Message *string
ErrorCodeOverride *string
TypeName *string
noSmithyDocumentSerde
}
func (e *InvalidItemContentException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidItemContentException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidItemContentException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidItemContentException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidItemContentException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The query key ID isn't valid.
type InvalidKeyId struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidKeyId) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidKeyId) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidKeyId) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidKeyId"
}
return *e.ErrorCodeOverride
}
func (e *InvalidKeyId) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified token isn't valid.
type InvalidNextToken struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidNextToken) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidNextToken) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidNextToken) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidNextToken"
}
return *e.ErrorCodeOverride
}
func (e *InvalidNextToken) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// One or more configuration items aren't valid. Verify that a valid Amazon
// Resource Name (ARN) was provided for an Amazon Simple Notification Service
// topic.
type InvalidNotificationConfig struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidNotificationConfig) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidNotificationConfig) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidNotificationConfig) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidNotificationConfig"
}
return *e.ErrorCodeOverride
}
func (e *InvalidNotificationConfig) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The delete inventory option specified isn't valid. Verify the option and try
// again.
type InvalidOptionException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidOptionException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidOptionException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidOptionException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidOptionException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidOptionException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The S3 bucket doesn't exist.
type InvalidOutputFolder struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidOutputFolder) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidOutputFolder) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidOutputFolder) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidOutputFolder"
}
return *e.ErrorCodeOverride
}
func (e *InvalidOutputFolder) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The output location isn't valid or doesn't exist.
type InvalidOutputLocation struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidOutputLocation) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidOutputLocation) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidOutputLocation) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidOutputLocation"
}
return *e.ErrorCodeOverride
}
func (e *InvalidOutputLocation) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You must specify values for all required parameters in the Amazon Web Services
// Systems Manager document (SSM document). You can only supply values to
// parameters defined in the SSM document.
type InvalidParameters struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidParameters) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidParameters) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidParameters) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidParameters"
}
return *e.ErrorCodeOverride
}
func (e *InvalidParameters) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The permission type isn't supported. Share is the only supported permission
// type.
type InvalidPermissionType struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidPermissionType) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidPermissionType) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidPermissionType) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidPermissionType"
}
return *e.ErrorCodeOverride
}
func (e *InvalidPermissionType) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The plugin name isn't valid.
type InvalidPluginName struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidPluginName) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidPluginName) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidPluginName) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidPluginName"
}
return *e.ErrorCodeOverride
}
func (e *InvalidPluginName) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// A policy attribute or its value is invalid.
type InvalidPolicyAttributeException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidPolicyAttributeException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidPolicyAttributeException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidPolicyAttributeException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidPolicyAttributeException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidPolicyAttributeException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The policy type isn't supported. Parameter Store supports the following policy
// types: Expiration, ExpirationNotification, and NoChangeNotification.
type InvalidPolicyTypeException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidPolicyTypeException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidPolicyTypeException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidPolicyTypeException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidPolicyTypeException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidPolicyTypeException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The resource ID isn't valid. Verify that you entered the correct ID and try
// again.
type InvalidResourceId struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidResourceId) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidResourceId) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidResourceId) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidResourceId"
}
return *e.ErrorCodeOverride
}
func (e *InvalidResourceId) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The resource type isn't valid. For example, if you are attempting to tag an EC2
// instance, the instance must be a registered managed node.
type InvalidResourceType struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidResourceType) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidResourceType) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidResourceType) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidResourceType"
}
return *e.ErrorCodeOverride
}
func (e *InvalidResourceType) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified inventory item result attribute isn't valid.
type InvalidResultAttributeException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidResultAttributeException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidResultAttributeException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidResultAttributeException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidResultAttributeException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidResultAttributeException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The role name can't contain invalid characters. Also verify that you specified
// an IAM role for notifications that includes the required trust policy. For
// information about configuring the IAM role for Run Command notifications, see [Monitoring Systems Manager status changes using Amazon SNS notifications]
// in the Amazon Web Services Systems Manager User Guide.
//
// [Monitoring Systems Manager status changes using Amazon SNS notifications]: path_to_url
type InvalidRole struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidRole) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidRole) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidRole) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidRole"
}
return *e.ErrorCodeOverride
}
func (e *InvalidRole) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The schedule is invalid. Verify your cron or rate expression and try again.
type InvalidSchedule struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidSchedule) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidSchedule) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidSchedule) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidSchedule"
}
return *e.ErrorCodeOverride
}
func (e *InvalidSchedule) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified tag key or value isn't valid.
type InvalidTag struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidTag) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidTag) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidTag) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidTag"
}
return *e.ErrorCodeOverride
}
func (e *InvalidTag) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The target isn't valid or doesn't exist. It might not be configured for Systems
// Manager or you might not have permission to perform the operation.
type InvalidTarget struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidTarget) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidTarget) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidTarget) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidTarget"
}
return *e.ErrorCodeOverride
}
func (e *InvalidTarget) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The TargetMap parameter isn't valid.
type InvalidTargetMaps struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidTargetMaps) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidTargetMaps) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidTargetMaps) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidTargetMaps"
}
return *e.ErrorCodeOverride
}
func (e *InvalidTargetMaps) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The parameter type name isn't valid.
type InvalidTypeNameException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidTypeNameException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidTypeNameException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidTypeNameException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidTypeNameException"
}
return *e.ErrorCodeOverride
}
func (e *InvalidTypeNameException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The update isn't valid.
type InvalidUpdate struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvalidUpdate) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvalidUpdate) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvalidUpdate) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvalidUpdate"
}
return *e.ErrorCodeOverride
}
func (e *InvalidUpdate) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The command ID and managed node ID you specified didn't match any invocations.
// Verify the command ID and the managed node ID and try again.
type InvocationDoesNotExist struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *InvocationDoesNotExist) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *InvocationDoesNotExist) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *InvocationDoesNotExist) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "InvocationDoesNotExist"
}
return *e.ErrorCodeOverride
}
func (e *InvocationDoesNotExist) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The inventory item has invalid content.
type ItemContentMismatchException struct {
Message *string
ErrorCodeOverride *string
TypeName *string
noSmithyDocumentSerde
}
func (e *ItemContentMismatchException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ItemContentMismatchException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ItemContentMismatchException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ItemContentMismatchException"
}
return *e.ErrorCodeOverride
}
func (e *ItemContentMismatchException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The inventory item size has exceeded the size limit.
type ItemSizeLimitExceededException struct {
Message *string
ErrorCodeOverride *string
TypeName *string
noSmithyDocumentSerde
}
func (e *ItemSizeLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ItemSizeLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ItemSizeLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ItemSizeLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *ItemSizeLimitExceededException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified policy document is malformed or invalid, or excessive
// PutResourcePolicy or DeleteResourcePolicy calls have been made.
type MalformedResourcePolicyDocumentException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *MalformedResourcePolicyDocumentException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *MalformedResourcePolicyDocumentException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *MalformedResourcePolicyDocumentException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "MalformedResourcePolicyDocumentException"
}
return *e.ErrorCodeOverride
}
func (e *MalformedResourcePolicyDocumentException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The size limit of a document is 64 KB.
type MaxDocumentSizeExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *MaxDocumentSizeExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *MaxDocumentSizeExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *MaxDocumentSizeExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "MaxDocumentSizeExceeded"
}
return *e.ErrorCodeOverride
}
func (e *MaxDocumentSizeExceeded) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You don't have permission to view OpsItems in the specified account. Verify
// that your account is configured as a Systems Manager delegated administrator
// or that you are logged in to the Organizations management account.
type OpsItemAccessDeniedException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsItemAccessDeniedException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemAccessDeniedException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemAccessDeniedException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemAccessDeniedException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemAccessDeniedException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The OpsItem already exists.
type OpsItemAlreadyExistsException struct {
Message *string
ErrorCodeOverride *string
OpsItemId *string
noSmithyDocumentSerde
}
func (e *OpsItemAlreadyExistsException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemAlreadyExistsException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemAlreadyExistsException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemAlreadyExistsException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemAlreadyExistsException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified OpsItem is in the process of being deleted.
type OpsItemConflictException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsItemConflictException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemConflictException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemConflictException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemConflictException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemConflictException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// A specified parameter argument isn't valid. Verify the available arguments and
// try again.
type OpsItemInvalidParameterException struct {
Message *string
ErrorCodeOverride *string
ParameterNames []string
noSmithyDocumentSerde
}
func (e *OpsItemInvalidParameterException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemInvalidParameterException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemInvalidParameterException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemInvalidParameterException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemInvalidParameterException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The request caused OpsItems to exceed one or more quotas.
type OpsItemLimitExceededException struct {
Message *string
ErrorCodeOverride *string
ResourceTypes []string
Limit int32
LimitType *string
noSmithyDocumentSerde
}
func (e *OpsItemLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemLimitExceededException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
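// Example (illustrative, not generated code): this error carries Limit and
// LimitType fields, which a caller might surface when handling the error.
// The import aliases errors, log, aws, and types are assumed:
//
//	var le *types.OpsItemLimitExceededException
//	if errors.As(err, &le) {
//		log.Printf("OpsItem quota %q exceeded (limit %d)", aws.ToString(le.LimitType), le.Limit)
//	}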
// The specified OpsItem ID doesn't exist. Verify the ID and try again.
type OpsItemNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsItemNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The Amazon Resource Name (ARN) is already associated with the OpsItem.
type OpsItemRelatedItemAlreadyExistsException struct {
Message *string
ErrorCodeOverride *string
ResourceUri *string
OpsItemId *string
noSmithyDocumentSerde
}
func (e *OpsItemRelatedItemAlreadyExistsException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemRelatedItemAlreadyExistsException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemRelatedItemAlreadyExistsException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemRelatedItemAlreadyExistsException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemRelatedItemAlreadyExistsException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The association wasn't found using the parameters you specified in the call.
// Verify the information and try again.
type OpsItemRelatedItemAssociationNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsItemRelatedItemAssociationNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsItemRelatedItemAssociationNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsItemRelatedItemAssociationNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsItemRelatedItemAssociationNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *OpsItemRelatedItemAssociationNotFoundException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// An OpsMetadata object already exists for the selected resource.
type OpsMetadataAlreadyExistsException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsMetadataAlreadyExistsException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsMetadataAlreadyExistsException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsMetadataAlreadyExistsException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsMetadataAlreadyExistsException"
}
return *e.ErrorCodeOverride
}
func (e *OpsMetadataAlreadyExistsException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// One of the arguments passed is invalid.
type OpsMetadataInvalidArgumentException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsMetadataInvalidArgumentException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsMetadataInvalidArgumentException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsMetadataInvalidArgumentException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsMetadataInvalidArgumentException"
}
return *e.ErrorCodeOverride
}
func (e *OpsMetadataInvalidArgumentException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The OpsMetadata object exceeds the maximum number of OpsMetadata keys that you
// can assign to an application in Application Manager.
type OpsMetadataKeyLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsMetadataKeyLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsMetadataKeyLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsMetadataKeyLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsMetadataKeyLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *OpsMetadataKeyLimitExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// Your account reached the maximum number of OpsMetadata objects allowed by
// Application Manager. The maximum is 200 OpsMetadata objects. Delete one or more
// OpsMetadata objects and try again.
type OpsMetadataLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsMetadataLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsMetadataLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsMetadataLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsMetadataLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *OpsMetadataLimitExceededException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The OpsMetadata object doesn't exist.
type OpsMetadataNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsMetadataNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsMetadataNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsMetadataNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsMetadataNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *OpsMetadataNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The system is processing too many concurrent updates. Wait a few moments and
// try again.
type OpsMetadataTooManyUpdatesException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *OpsMetadataTooManyUpdatesException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *OpsMetadataTooManyUpdatesException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *OpsMetadataTooManyUpdatesException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "OpsMetadataTooManyUpdatesException"
}
return *e.ErrorCodeOverride
}
func (e *OpsMetadataTooManyUpdatesException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The parameter already exists. You can't create duplicate parameters.
type ParameterAlreadyExists struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ParameterAlreadyExists) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ParameterAlreadyExists) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ParameterAlreadyExists) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ParameterAlreadyExists"
}
return *e.ErrorCodeOverride
}
func (e *ParameterAlreadyExists) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You have exceeded the number of parameters for this Amazon Web Services
// account. Delete one or more parameters and try again.
type ParameterLimitExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ParameterLimitExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ParameterLimitExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ParameterLimitExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ParameterLimitExceeded"
}
return *e.ErrorCodeOverride
}
func (e *ParameterLimitExceeded) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// Parameter Store retains the 100 most recently created versions of a parameter.
// After this number of versions has been created, Parameter Store deletes the
// oldest version when a new one is created. However, if the oldest version has a
// label attached to it, Parameter Store won't delete the version and instead
// presents this error message:
//
// An error occurred (ParameterMaxVersionLimitExceeded) when calling the
// PutParameter operation: You attempted to create a new version of parameter-name
// by calling the PutParameter API with the overwrite flag. Version version-number,
// the oldest version, can't be deleted because it has a label associated with it.
// Move the label to another version of the parameter, and try again.
//
// This safeguard is to prevent parameter versions with mission-critical labels
// assigned to them from being deleted. To continue creating new parameters, first
// move the label from the oldest version of the parameter to a newer one for use
// in your operations. For information about moving parameter labels, see [Move a parameter label (console)] or [Move a parameter label (CLI)] in
// the Amazon Web Services Systems Manager User Guide.
//
// [Move a parameter label (CLI)]: path_to_url#sysman-paramstore-labels-cli-move
// [Move a parameter label (console)]: path_to_url#sysman-paramstore-labels-console-move
type ParameterMaxVersionLimitExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ParameterMaxVersionLimitExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ParameterMaxVersionLimitExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ParameterMaxVersionLimitExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ParameterMaxVersionLimitExceeded"
}
return *e.ErrorCodeOverride
}
func (e *ParameterMaxVersionLimitExceeded) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The parameter couldn't be found. Verify the name and try again.
type ParameterNotFound struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ParameterNotFound) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ParameterNotFound) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ParameterNotFound) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ParameterNotFound"
}
return *e.ErrorCodeOverride
}
func (e *ParameterNotFound) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
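// Example (illustrative, not generated code): callers typically detect this
// error with errors.As after a GetParameter call. The aliases ssm, types,
// aws, and errors are assumed import names:
//
//	out, err := client.GetParameter(ctx, &ssm.GetParameterInput{
//		Name: aws.String("/app/db-url"), // hypothetical parameter name
//	})
//	var pnf *types.ParameterNotFound
//	if errors.As(err, &pnf) {
//		// fall back to a default value or create the parameter
//	}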
// The parameter name isn't valid.
type ParameterPatternMismatchException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ParameterPatternMismatchException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ParameterPatternMismatchException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ParameterPatternMismatchException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ParameterPatternMismatchException"
}
return *e.ErrorCodeOverride
}
func (e *ParameterPatternMismatchException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// A parameter version can have a maximum of ten labels.
type ParameterVersionLabelLimitExceeded struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ParameterVersionLabelLimitExceeded) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ParameterVersionLabelLimitExceeded) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ParameterVersionLabelLimitExceeded) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ParameterVersionLabelLimitExceeded"
}
return *e.ErrorCodeOverride
}
func (e *ParameterVersionLabelLimitExceeded) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified parameter version wasn't found. Verify the parameter name and
// version, and try again.
type ParameterVersionNotFound struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ParameterVersionNotFound) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ParameterVersionNotFound) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ParameterVersionNotFound) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ParameterVersionNotFound"
}
return *e.ErrorCodeOverride
}
func (e *ParameterVersionNotFound) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You specified more than the maximum number of allowed policies for the
// parameter. The maximum is 10.
type PoliciesLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *PoliciesLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *PoliciesLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *PoliciesLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "PoliciesLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *PoliciesLimitExceededException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// A sync configuration with the same name already exists.
type ResourceDataSyncAlreadyExistsException struct {
Message *string
ErrorCodeOverride *string
SyncName *string
noSmithyDocumentSerde
}
func (e *ResourceDataSyncAlreadyExistsException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceDataSyncAlreadyExistsException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceDataSyncAlreadyExistsException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceDataSyncAlreadyExistsException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceDataSyncAlreadyExistsException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// Another UpdateResourceDataSync request is being processed. Wait a few minutes
// and try again.
type ResourceDataSyncConflictException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourceDataSyncConflictException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceDataSyncConflictException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceDataSyncConflictException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceDataSyncConflictException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceDataSyncConflictException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// You have exceeded the allowed maximum sync configurations.
type ResourceDataSyncCountExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourceDataSyncCountExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceDataSyncCountExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceDataSyncCountExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceDataSyncCountExceededException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceDataSyncCountExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified sync configuration is invalid.
type ResourceDataSyncInvalidConfigurationException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourceDataSyncInvalidConfigurationException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceDataSyncInvalidConfigurationException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceDataSyncInvalidConfigurationException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceDataSyncInvalidConfigurationException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceDataSyncInvalidConfigurationException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The specified sync name wasn't found.
type ResourceDataSyncNotFoundException struct {
Message *string
ErrorCodeOverride *string
SyncName *string
SyncType *string
noSmithyDocumentSerde
}
func (e *ResourceDataSyncNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceDataSyncNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceDataSyncNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceDataSyncNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceDataSyncNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// Error returned if an attempt is made to delete a patch baseline that is
// registered for a patch group.
type ResourceInUseException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourceInUseException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceInUseException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceInUseException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceInUseException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceInUseException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// Error returned when the caller has exceeded the default resource quotas. For
// example, too many maintenance windows or patch baselines have been created.
//
// For information about resource quotas in Systems Manager, see [Systems Manager service quotas] in the Amazon
// Web Services General Reference.
//
// [Systems Manager service quotas]: path_to_url#limits_ssm
type ResourceLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourceLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceLimitExceededException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified parameter to be shared could not be found.
type ResourceNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourceNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourceNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourceNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourceNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *ResourceNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The hash provided in the call doesn't match the stored hash. This exception is
// thrown when trying to update an obsolete policy version or when multiple
// requests to update a policy are sent.
type ResourcePolicyConflictException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourcePolicyConflictException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourcePolicyConflictException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourcePolicyConflictException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourcePolicyConflictException"
}
return *e.ErrorCodeOverride
}
func (e *ResourcePolicyConflictException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// One or more parameters specified for the call aren't valid. Verify the
// parameters and their values and try again.
type ResourcePolicyInvalidParameterException struct {
Message *string
ErrorCodeOverride *string
ParameterNames []string
noSmithyDocumentSerde
}
func (e *ResourcePolicyInvalidParameterException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourcePolicyInvalidParameterException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourcePolicyInvalidParameterException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourcePolicyInvalidParameterException"
}
return *e.ErrorCodeOverride
}
func (e *ResourcePolicyInvalidParameterException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The PutResourcePolicy API action enforces two limits: a policy can't be
// greater than 1024 bytes in size, and only one policy can be attached to an
// OpsItemGroup. Verify these limits and try again.
type ResourcePolicyLimitExceededException struct {
Message *string
ErrorCodeOverride *string
Limit int32
LimitType *string
noSmithyDocumentSerde
}
func (e *ResourcePolicyLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourcePolicyLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourcePolicyLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourcePolicyLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *ResourcePolicyLimitExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// No policies with the specified policy ID and hash could be found.
type ResourcePolicyNotFoundException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ResourcePolicyNotFoundException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ResourcePolicyNotFoundException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ResourcePolicyNotFoundException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ResourcePolicyNotFoundException"
}
return *e.ErrorCodeOverride
}
func (e *ResourcePolicyNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified service setting wasn't found. Either the service name or the
// setting hasn't been provisioned by the Amazon Web Services service team.
type ServiceSettingNotFound struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *ServiceSettingNotFound) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *ServiceSettingNotFound) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *ServiceSettingNotFound) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "ServiceSettingNotFound"
}
return *e.ErrorCodeOverride
}
func (e *ServiceSettingNotFound) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The updated status is the same as the current status.
type StatusUnchanged struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *StatusUnchanged) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *StatusUnchanged) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *StatusUnchanged) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "StatusUnchanged"
}
return *e.ErrorCodeOverride
}
func (e *StatusUnchanged) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The sub-type count exceeded the limit for the inventory type.
type SubTypeCountLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *SubTypeCountLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *SubTypeCountLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *SubTypeCountLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "SubTypeCountLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *SubTypeCountLimitExceededException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// You specified the Safe option for the DeregisterTargetFromMaintenanceWindow
// operation, but the target is still referenced in a task.
type TargetInUseException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *TargetInUseException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *TargetInUseException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *TargetInUseException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "TargetInUseException"
}
return *e.ErrorCodeOverride
}
func (e *TargetInUseException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The specified target managed node for the session isn't fully configured for
// use with Session Manager. For more information, see [Getting started with Session Manager] in the Amazon Web Services
// Systems Manager User Guide. This error is also returned if you attempt to start
// a session on a managed node that is located in a different account or Region.
//
// [Getting started with Session Manager]: path_to_url
type TargetNotConnected struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *TargetNotConnected) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *TargetNotConnected) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *TargetNotConnected) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "TargetNotConnected"
}
return *e.ErrorCodeOverride
}
func (e *TargetNotConnected) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The Targets parameter includes too many tags. Remove one or more tags and try
// the command again.
type TooManyTagsError struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *TooManyTagsError) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *TooManyTagsError) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *TooManyTagsError) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "TooManyTagsError"
}
return *e.ErrorCodeOverride
}
func (e *TooManyTagsError) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// There are concurrent updates for a resource that supports one update at a time.
type TooManyUpdates struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *TooManyUpdates) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *TooManyUpdates) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *TooManyUpdates) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "TooManyUpdates"
}
return *e.ErrorCodeOverride
}
func (e *TooManyUpdates) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The size of inventory data has exceeded the total size limit for the resource.
type TotalSizeLimitExceededException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *TotalSizeLimitExceededException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *TotalSizeLimitExceededException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *TotalSizeLimitExceededException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "TotalSizeLimitExceededException"
}
return *e.ErrorCodeOverride
}
func (e *TotalSizeLimitExceededException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The calendar entry contained in the specified SSM document isn't supported.
type UnsupportedCalendarException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *UnsupportedCalendarException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *UnsupportedCalendarException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *UnsupportedCalendarException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "UnsupportedCalendarException"
}
return *e.ErrorCodeOverride
}
func (e *UnsupportedCalendarException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// Patching for applications released by Microsoft is only available on EC2
// instances and advanced instances. To patch applications released by Microsoft on
// on-premises servers and VMs, you must enable advanced instances. For more
// information, see [Turning on the advanced-instances tier] in the Amazon Web Services Systems Manager User Guide.
//
// [Turning on the advanced-instances tier]: path_to_url
type UnsupportedFeatureRequiredException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *UnsupportedFeatureRequiredException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *UnsupportedFeatureRequiredException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *UnsupportedFeatureRequiredException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "UnsupportedFeatureRequiredException"
}
return *e.ErrorCodeOverride
}
func (e *UnsupportedFeatureRequiredException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The Context attribute that you specified for the InventoryItem isn't allowed
// for this inventory type. You can only use the Context attribute with inventory
// types like AWS:ComplianceItem.
type UnsupportedInventoryItemContextException struct {
Message *string
ErrorCodeOverride *string
TypeName *string
noSmithyDocumentSerde
}
func (e *UnsupportedInventoryItemContextException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *UnsupportedInventoryItemContextException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *UnsupportedInventoryItemContextException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "UnsupportedInventoryItemContextException"
}
return *e.ErrorCodeOverride
}
func (e *UnsupportedInventoryItemContextException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The inventory item type schema version must match a version supported by the
// service. Check the output of GetInventorySchema to see the available schema
// versions for each type.
type UnsupportedInventorySchemaVersionException struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *UnsupportedInventorySchemaVersionException) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *UnsupportedInventorySchemaVersionException) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *UnsupportedInventorySchemaVersionException) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "UnsupportedInventorySchemaVersionException"
}
return *e.ErrorCodeOverride
}
func (e *UnsupportedInventorySchemaVersionException) ErrorFault() smithy.ErrorFault {
return smithy.FaultClient
}
// The operating system you specified isn't supported, or the operation isn't
// supported for that operating system.
type UnsupportedOperatingSystem struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *UnsupportedOperatingSystem) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *UnsupportedOperatingSystem) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *UnsupportedOperatingSystem) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "UnsupportedOperatingSystem"
}
return *e.ErrorCodeOverride
}
func (e *UnsupportedOperatingSystem) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The parameter type isn't supported.
type UnsupportedParameterType struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *UnsupportedParameterType) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *UnsupportedParameterType) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *UnsupportedParameterType) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "UnsupportedParameterType"
}
return *e.ErrorCodeOverride
}
func (e *UnsupportedParameterType) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
// The document doesn't support the platform type of the given managed node IDs.
// For example, you sent a document for a Windows managed node to a Linux node.
type UnsupportedPlatformType struct {
Message *string
ErrorCodeOverride *string
noSmithyDocumentSerde
}
func (e *UnsupportedPlatformType) Error() string {
return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
}
func (e *UnsupportedPlatformType) ErrorMessage() string {
if e.Message == nil {
return ""
}
return *e.Message
}
func (e *UnsupportedPlatformType) ErrorCode() string {
if e == nil || e.ErrorCodeOverride == nil {
return "UnsupportedPlatformType"
}
return *e.ErrorCodeOverride
}
func (e *UnsupportedPlatformType) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
```
|
```java
/**
 * IK Analyzer release 5.0
 *
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 *
 * path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *
 * Source code provided by Linliangyi (linliangyi2005@gmail.com)
 * and copyright 2012 by Oolong studio
 */
package org.wltea.analyzer.sample;
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.wltea.analyzer.lucene.IKAnalyzer;
/**
 * IKAnalyzer + Lucene indexing and search demo
 * 2012-3-2
 *
 * Based on the Lucene 4.0 API
 */
public class LuceneIndexAndSearchDemo {
/**
 * Builds an in-memory index with IKAnalyzer and runs a search against it.
 *
 * @param args unused
 */
public static void main(String[] args){
//name of the Lucene Document field to index and search
String fieldName = "text";
//sample content to index
String text = "IK Analyzer";
//instantiate IKAnalyzer (true enables smart segmentation mode)
Analyzer analyzer = new IKAnalyzer(true);
Directory directory = null;
IndexWriter iwriter = null;
IndexReader ireader = null;
IndexSearcher isearcher = null;
try {
//use an in-memory directory for this demo
directory = new RAMDirectory();
//configure the IndexWriter with the IK analyzer
IndexWriterConfig iwConfig = new IndexWriterConfig(Version.LUCENE_40 , analyzer);
iwConfig.setOpenMode(OpenMode.CREATE_OR_APPEND);
iwriter = new IndexWriter(directory , iwConfig);
//index a single document
Document doc = new Document();
doc.add(new StringField("ID", "10000", Field.Store.YES));
doc.add(new TextField(fieldName, text, Field.Store.YES));
iwriter.addDocument(doc);
iwriter.close();
//**********************************
//search the index
ireader = DirectoryReader.open(directory);
isearcher = new IndexSearcher(ireader);
//search keyword (the original keyword string was lost; any term from the indexed text works)
String keyword = "Analyzer";
//build the Query with QueryParser
QueryParser qp = new QueryParser(Version.LUCENE_40, fieldName, analyzer);
qp.setDefaultOperator(QueryParser.AND_OPERATOR);
Query query = qp.parse(keyword);
System.out.println("Query = " + query);
//retrieve the top 5 hits
TopDocs topDocs = isearcher.search(query , 5);
System.out.println("totalHits: " + topDocs.totalHits);
//print the matching documents
ScoreDoc[] scoreDocs = topDocs.scoreDocs;
for (int i = 0; i < topDocs.totalHits; i++){
Document targetDoc = isearcher.doc(scoreDocs[i].doc);
System.out.println("document: " + targetDoc.toString());
}
}
} catch (CorruptIndexException e) {
e.printStackTrace();
} catch (LockObtainFailedException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (ParseException e) {
e.printStackTrace();
} finally{
if(ireader != null){
try {
ireader.close();
} catch (IOException e) {
e.printStackTrace();
}
}
if(directory != null){
try {
directory.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
```
|
```css
Position elements with `position: sticky`
Vertical percentages are relative to container width, not height
Vertically center text
Use `float` to allow an element to be placed to the left or right of the container
Vertically-center anything
```
|
```objective-c
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google LLC nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef your_sha256_hash__
#define your_sha256_hash__
#include "resource.h"
#endif // your_sha256_hash__
```
|
Franziska Busch (born 20 October 1985) is a retired German ice hockey forward and former alternate captain of the German national ice hockey team. She currently serves as the head coach of the German women's national under-18 ice hockey team.
International career
Busch was selected to represent Germany at the Winter Olympic Games in 2006 and 2014. At the women's ice hockey tournament in 2006, she did not record a point across five games. At the women's ice hockey tournament in 2014, she led the team in scoring with three goals and five points.
She played in the 2006, 2010 and 2014 Olympic qualification tournaments.
Busch also played with Germany at eight IIHF Women's World Championships. Her first appearance came in 2004.
Career statistics
International career
References
External links
1985 births
Living people
German women's ice hockey forwards
German ice hockey coaches
Ice hockey players at the 2006 Winter Olympics
Ice hockey players at the 2014 Winter Olympics
Olympic ice hockey players for Germany
People from Seesen
Sportspeople from Lower Saxony
|
```c++
// tuple_basic.hpp -----------------------------------------------------
//
// accompanying file LICENSE_1_0.txt or copy at
// path_to_url
// For more information, see path_to_url
// Outside help:
// This and that, Gary Powell.
// Fixed return types for get_head/get_tail
// ( and other bugs ) per suggestion of Jens Maurer
// simplified element type accessors + bug fix (Jeremy Siek)
// Several changes/additions according to suggestions by Douglas Gregor,
// William Kempf, Vesa Karvonen, John Max Skaller, Ed Brey, Beman Dawes,
// David Abrahams.
// Revision history:
// 2002 05 01 Hugo Duncan: Fix for Borland after Jaakko's previous changes
// 2002 04 18 Jaakko: tuple element types can be void or plain function
// types, as long as no object is created.
// Tuple objects can now hold even noncopyable types
// such as arrays.
// 2001 10 22 John Maddock
// Fixes for Borland C++
// 2001 08 30 David Abrahams
// Added default constructor for cons<>.
// your_sha256_hash-
#ifndef BOOST_TUPLE_BASIC_HPP
#define BOOST_TUPLE_BASIC_HPP
#include <utility> // needed for the assignment from pair to tuple
#include <boost/type_traits/cv_traits.hpp>
#include <boost/type_traits/function_traits.hpp>
#include <boost/type_traits/integral_constant.hpp>
#include <boost/utility/swap.hpp>
#include <boost/detail/workaround.hpp> // needed for BOOST_WORKAROUND
#if defined(BOOST_GCC) && (BOOST_GCC >= 40700)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-local-typedefs"
#endif
namespace boost {
namespace tuples {
// -- null_type --------------------------------------------------------
struct null_type {};
// a helper function to provide a const null_type type temporary
namespace detail {
inline const null_type cnull() { return null_type(); }
// -- if construct ------------------------------------------------
// Proposed by Krzysztof Czarnecki and Ulrich Eisenecker
template <bool If, class Then, class Else> struct IF { typedef Then RET; };
template <class Then, class Else> struct IF<false, Then, Else> {
typedef Else RET;
};
} // end detail
// - cons forward declaration -----------------------------------------------
template <class HT, class TT> struct cons;
// - tuple forward declaration -----------------------------------------------
template <
class T0 = null_type, class T1 = null_type, class T2 = null_type,
class T3 = null_type, class T4 = null_type, class T5 = null_type,
class T6 = null_type, class T7 = null_type, class T8 = null_type,
class T9 = null_type>
class tuple;
// tuple_length forward declaration
template<class T> struct length;
namespace detail {
// -- generate error template, referencing to non-existing members of this
// template is used to produce compilation errors intentionally
template<class T>
class generate_error;
template<int N>
struct drop_front {
template<class Tuple>
struct apply {
typedef BOOST_DEDUCED_TYPENAME drop_front<N-1>::BOOST_NESTED_TEMPLATE
apply<Tuple> next;
typedef BOOST_DEDUCED_TYPENAME next::type::tail_type type;
static const type& call(const Tuple& tup) {
return next::call(tup).tail;
}
};
};
template<>
struct drop_front<0> {
template<class Tuple>
struct apply {
typedef Tuple type;
static const type& call(const Tuple& tup) {
return tup;
}
};
};
} // end of namespace detail
// -cons type accessors ----------------------------------------
// typename tuples::element<N,T>::type gets the type of the
// Nth element of T, first element is at index 0
// -------------------------------------------------------
#ifndef BOOST_NO_CV_SPECIALIZATIONS
template<int N, class T>
struct element
{
typedef BOOST_DEDUCED_TYPENAME detail::drop_front<N>::BOOST_NESTED_TEMPLATE
apply<T>::type::head_type type;
};
template<int N, class T>
struct element<N, const T>
{
private:
typedef BOOST_DEDUCED_TYPENAME detail::drop_front<N>::BOOST_NESTED_TEMPLATE
apply<T>::type::head_type unqualified_type;
public:
#if BOOST_WORKAROUND(BOOST_BORLANDC,<0x600)
typedef const unqualified_type type;
#else
typedef BOOST_DEDUCED_TYPENAME boost::add_const<unqualified_type>::type type;
#endif
};
#else // def BOOST_NO_CV_SPECIALIZATIONS
namespace detail {
template<int N, class T, bool IsConst>
struct element_impl
{
typedef BOOST_DEDUCED_TYPENAME detail::drop_front<N>::BOOST_NESTED_TEMPLATE
apply<T>::type::head_type type;
};
template<int N, class T>
struct element_impl<N, T, true /* IsConst */>
{
typedef BOOST_DEDUCED_TYPENAME detail::drop_front<N>::BOOST_NESTED_TEMPLATE
apply<T>::type::head_type unqualified_type;
typedef const unqualified_type type;
};
} // end of namespace detail
template<int N, class T>
struct element:
public detail::element_impl<N, T, ::boost::is_const<T>::value>
{
};
#endif
// -get function templates -----------------------------------------------
// Usage: get<N>(aTuple)
// -- some traits classes for get functions
// access traits lifted from detail namespace to be part of the interface,
// (Joel de Guzman's suggestion). Rationale: get functions are part of the
// interface, so should the way to express their return types be.
template <class T> struct access_traits {
typedef const T& const_type;
typedef T& non_const_type;
typedef const typename boost::remove_cv<T>::type& parameter_type;
// used as the tuple constructors parameter types
// Rationale: non-reference tuple element types can be cv-qualified.
// It should be possible to initialize such types with temporaries,
// and when binding temporaries to references, the reference must
// be non-volatile and const. 8.5.3. (5)
};
template <class T> struct access_traits<T&> {
typedef T& const_type;
typedef T& non_const_type;
typedef T& parameter_type;
};
// get function for non-const cons-lists, returns a reference to the element
template<int N, class HT, class TT>
inline typename access_traits<
typename element<N, cons<HT, TT> >::type
>::non_const_type
get(cons<HT, TT>& c) {
typedef BOOST_DEDUCED_TYPENAME detail::drop_front<N>::BOOST_NESTED_TEMPLATE
apply<cons<HT, TT> > impl;
typedef BOOST_DEDUCED_TYPENAME impl::type cons_element;
return const_cast<cons_element&>(impl::call(c)).head;
}
// get function for const cons-lists, returns a const reference to
// the element. If the element is a reference, returns the reference
// as such (that is, can return a non-const reference)
template<int N, class HT, class TT>
inline typename access_traits<
typename element<N, cons<HT, TT> >::type
>::const_type
get(const cons<HT, TT>& c) {
typedef BOOST_DEDUCED_TYPENAME detail::drop_front<N>::BOOST_NESTED_TEMPLATE
apply<cons<HT, TT> > impl;
return impl::call(c).head;
}
// -- the cons template --------------------------------------------------
namespace detail {
// These helper templates wrap void types and plain function types.
// The rationale is to allow one to write tuple types with those types
// as elements, even though it is not possible to instantiate such objects.
// E.g: typedef tuple<void> some_type; // ok
// but: some_type x; // fails
template <class T> class non_storeable_type {
non_storeable_type();
};
template <class T> struct wrap_non_storeable_type {
typedef typename IF<
::boost::is_function<T>::value, non_storeable_type<T>, T
>::RET type;
};
template <> struct wrap_non_storeable_type<void> {
typedef non_storeable_type<void> type;
};
} // detail
template <class HT, class TT>
struct cons {
typedef HT head_type;
typedef TT tail_type;
typedef typename
detail::wrap_non_storeable_type<head_type>::type stored_head_type;
stored_head_type head;
tail_type tail;
typename access_traits<stored_head_type>::non_const_type
get_head() { return head; }
typename access_traits<tail_type>::non_const_type
get_tail() { return tail; }
typename access_traits<stored_head_type>::const_type
get_head() const { return head; }
typename access_traits<tail_type>::const_type
get_tail() const { return tail; }
cons() : head(), tail() {}
// cons() : head(detail::default_arg<HT>::f()), tail() {}
// the argument for head is not strictly needed, but it prevents
// array type elements. This is good, since array type elements
// cannot be supported properly in any case (no assignment,
// copy works only if the tails are exactly the same type, ...)
cons(typename access_traits<stored_head_type>::parameter_type h,
const tail_type& t)
: head (h), tail(t) {}
template <class T1, class T2, class T3, class T4, class T5,
class T6, class T7, class T8, class T9, class T10>
cons( T1& t1, T2& t2, T3& t3, T4& t4, T5& t5,
T6& t6, T7& t7, T8& t8, T9& t9, T10& t10 )
: head (t1),
tail (t2, t3, t4, t5, t6, t7, t8, t9, t10, detail::cnull())
{}
template <class T2, class T3, class T4, class T5,
class T6, class T7, class T8, class T9, class T10>
cons( const null_type& /*t1*/, T2& t2, T3& t3, T4& t4, T5& t5,
T6& t6, T7& t7, T8& t8, T9& t9, T10& t10 )
: head (),
tail (t2, t3, t4, t5, t6, t7, t8, t9, t10, detail::cnull())
{}
cons( const cons& u ) : head(u.head), tail(u.tail) {}
template <class HT2, class TT2>
cons( const cons<HT2, TT2>& u ) : head(u.head), tail(u.tail) {}
template <class HT2, class TT2>
cons& operator=( const cons<HT2, TT2>& u ) {
head=u.head; tail=u.tail; return *this;
}
// must define assignment operator explicitly; the implicit version is
// ill-formed if HT is a reference (12.8. (12))
cons& operator=(const cons& u) {
head = u.head; tail = u.tail; return *this;
}
template <class T1, class T2>
cons& operator=( const std::pair<T1, T2>& u ) {
BOOST_STATIC_ASSERT(length<cons>::value == 2); // check length = 2
head = u.first; tail.head = u.second; return *this;
}
// get member functions (non-const and const)
template <int N>
typename access_traits<
typename element<N, cons<HT, TT> >::type
>::non_const_type
get() {
return boost::tuples::get<N>(*this); // delegate to non-member get
}
template <int N>
typename access_traits<
typename element<N, cons<HT, TT> >::type
>::const_type
get() const {
return boost::tuples::get<N>(*this); // delegate to non-member get
}
};
template <class HT>
struct cons<HT, null_type> {
typedef HT head_type;
typedef null_type tail_type;
typedef cons<HT, null_type> self_type;
typedef typename
detail::wrap_non_storeable_type<head_type>::type stored_head_type;
stored_head_type head;
typename access_traits<stored_head_type>::non_const_type
get_head() { return head; }
null_type get_tail() { return null_type(); }
typename access_traits<stored_head_type>::const_type
get_head() const { return head; }
const null_type get_tail() const { return null_type(); }
// cons() : head(detail::default_arg<HT>::f()) {}
cons() : head() {}
cons(typename access_traits<stored_head_type>::parameter_type h,
const null_type& = null_type())
: head (h) {}
template<class T1>
cons(T1& t1, const null_type&, const null_type&, const null_type&,
const null_type&, const null_type&, const null_type&,
const null_type&, const null_type&, const null_type&)
: head (t1) {}
cons(const null_type&,
const null_type&, const null_type&, const null_type&,
const null_type&, const null_type&, const null_type&,
const null_type&, const null_type&, const null_type&)
: head () {}
cons( const cons& u ) : head(u.head) {}
template <class HT2>
cons( const cons<HT2, null_type>& u ) : head(u.head) {}
template <class HT2>
cons& operator=(const cons<HT2, null_type>& u )
{ head = u.head; return *this; }
// must define assignment operator explicitly; the implicit version
// is ill-formed if HT is a reference
cons& operator=(const cons& u) { head = u.head; return *this; }
template <int N>
typename access_traits<
typename element<N, self_type>::type
>::non_const_type
get() {
return boost::tuples::get<N>(*this);
}
template <int N>
typename access_traits<
typename element<N, self_type>::type
>::const_type
get() const {
return boost::tuples::get<N>(*this);
}
};
// templates for finding out the length of the tuple -------------------
template<class T>
struct length: boost::integral_constant<int, 1 + length<typename T::tail_type>::value>
{
};
template<>
struct length<tuple<> >: boost::integral_constant<int, 0>
{
};
template<>
struct length<tuple<> const>: boost::integral_constant<int, 0>
{
};
template<>
struct length<null_type>: boost::integral_constant<int, 0>
{
};
template<>
struct length<null_type const>: boost::integral_constant<int, 0>
{
};
namespace detail {
// Tuple to cons mapper --------------------------------------------------
template <class T0, class T1, class T2, class T3, class T4,
class T5, class T6, class T7, class T8, class T9>
struct map_tuple_to_cons
{
typedef cons<T0,
typename map_tuple_to_cons<T1, T2, T3, T4, T5,
T6, T7, T8, T9, null_type>::type
> type;
};
// The empty tuple is a null_type
template <>
struct map_tuple_to_cons<null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type>
{
typedef null_type type;
};
} // end detail
// your_sha256_hash---
// -- tuple ------------------------------------------------------
template <class T0, class T1, class T2, class T3, class T4,
class T5, class T6, class T7, class T8, class T9>
class tuple :
public detail::map_tuple_to_cons<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::type
{
public:
typedef typename
detail::map_tuple_to_cons<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::type inherited;
typedef typename inherited::head_type head_type;
typedef typename inherited::tail_type tail_type;
// access_traits<T>::parameter_type takes non-reference types as const T&
tuple() {}
explicit tuple(typename access_traits<T0>::parameter_type t0)
: inherited(t0, detail::cnull(), detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull(), detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1)
: inherited(t0, t1, detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull(), detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2)
: inherited(t0, t1, t2, detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2,
typename access_traits<T3>::parameter_type t3)
: inherited(t0, t1, t2, t3, detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull(), detail::cnull(),
detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2,
typename access_traits<T3>::parameter_type t3,
typename access_traits<T4>::parameter_type t4)
: inherited(t0, t1, t2, t3, t4, detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull(), detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2,
typename access_traits<T3>::parameter_type t3,
typename access_traits<T4>::parameter_type t4,
typename access_traits<T5>::parameter_type t5)
: inherited(t0, t1, t2, t3, t4, t5, detail::cnull(), detail::cnull(),
detail::cnull(), detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2,
typename access_traits<T3>::parameter_type t3,
typename access_traits<T4>::parameter_type t4,
typename access_traits<T5>::parameter_type t5,
typename access_traits<T6>::parameter_type t6)
: inherited(t0, t1, t2, t3, t4, t5, t6, detail::cnull(),
detail::cnull(), detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2,
typename access_traits<T3>::parameter_type t3,
typename access_traits<T4>::parameter_type t4,
typename access_traits<T5>::parameter_type t5,
typename access_traits<T6>::parameter_type t6,
typename access_traits<T7>::parameter_type t7)
: inherited(t0, t1, t2, t3, t4, t5, t6, t7, detail::cnull(),
detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2,
typename access_traits<T3>::parameter_type t3,
typename access_traits<T4>::parameter_type t4,
typename access_traits<T5>::parameter_type t5,
typename access_traits<T6>::parameter_type t6,
typename access_traits<T7>::parameter_type t7,
typename access_traits<T8>::parameter_type t8)
: inherited(t0, t1, t2, t3, t4, t5, t6, t7, t8, detail::cnull()) {}
tuple(typename access_traits<T0>::parameter_type t0,
typename access_traits<T1>::parameter_type t1,
typename access_traits<T2>::parameter_type t2,
typename access_traits<T3>::parameter_type t3,
typename access_traits<T4>::parameter_type t4,
typename access_traits<T5>::parameter_type t5,
typename access_traits<T6>::parameter_type t6,
typename access_traits<T7>::parameter_type t7,
typename access_traits<T8>::parameter_type t8,
typename access_traits<T9>::parameter_type t9)
: inherited(t0, t1, t2, t3, t4, t5, t6, t7, t8, t9) {}
template<class U1, class U2>
tuple(const cons<U1, U2>& p) : inherited(p) {}
template <class U1, class U2>
tuple& operator=(const cons<U1, U2>& k) {
inherited::operator=(k);
return *this;
}
template <class U1, class U2>
tuple& operator=(const std::pair<U1, U2>& k) {
BOOST_STATIC_ASSERT(length<tuple>::value == 2);// check_length = 2
this->head = k.first;
this->tail.head = k.second;
return *this;
}
};
// The empty tuple
template <>
class tuple<null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type> :
public null_type
{
public:
typedef null_type inherited;
};
// Swallows any assignment (by Doug Gregor)
namespace detail {
struct swallow_assign;
typedef void (detail::swallow_assign::*ignore_t)();
struct swallow_assign {
swallow_assign(ignore_t(*)(ignore_t)) {}
template<typename T>
swallow_assign const& operator=(const T&) const {
return *this;
}
};
} // namespace detail
// "ignore" allows tuple positions to be ignored when using "tie".
inline detail::ignore_t ignore(detail::ignore_t) { return 0; }
// your_sha256_hash-----------
// The call_traits for make_tuple
// Honours the reference_wrapper class.
// Must be instantiated with plain or const plain types (not with references)
// from template<class T> foo(const T& t) : make_tuple_traits<const T>::type
// from template<class T> foo(T& t) : make_tuple_traits<T>::type
// Conversions:
// T -> T,
// references -> compile_time_error
// reference_wrapper<T> -> T&
// const reference_wrapper<T> -> T&
// array -> const ref array
template<class T>
struct make_tuple_traits {
typedef T type;
// commented away, see below (JJ)
// typedef typename IF<
// boost::is_function<T>::value,
// T&,
// T>::RET type;
};
// The is_function test was there originally for plain function types,
// which can't be stored as such (we must either store them as references or
// pointers). Such a type could be formed if make_tuple was called with a
// reference to a function.
// But this would mean that a const qualified function type was formed in
// the make_tuple function and hence make_tuple can't take a function
// reference as a parameter, and thus T can't be a function type.
// So is_function test was removed.
// (14.8.3. says that type deduction fails if a cv-qualified function type
// is created. (It only applies for the case of explicitly specifying template
// args, though?)) (JJ)
template<class T>
struct make_tuple_traits<T&> {
typedef typename
detail::generate_error<T&>::
do_not_use_with_reference_type error;
};
// Arrays can't be stored as plain types; convert them to references.
// All arrays are converted to const. This is because make_tuple takes its
// parameters as const T& and thus the knowledge of the potential
// non-constness of actual argument is lost.
template<class T, int n> struct make_tuple_traits <T[n]> {
typedef const T (&type)[n];
};
template<class T, int n>
struct make_tuple_traits<const T[n]> {
typedef const T (&type)[n];
};
template<class T, int n> struct make_tuple_traits<volatile T[n]> {
typedef const volatile T (&type)[n];
};
template<class T, int n>
struct make_tuple_traits<const volatile T[n]> {
typedef const volatile T (&type)[n];
};
template<class T>
struct make_tuple_traits<reference_wrapper<T> >{
typedef T& type;
};
template<class T>
struct make_tuple_traits<const reference_wrapper<T> >{
typedef T& type;
};
template<>
struct make_tuple_traits<detail::ignore_t(detail::ignore_t)> {
typedef detail::swallow_assign type;
};
namespace detail {
// a helper traits to make the make_tuple functions shorter (Vesa Karvonen's
// suggestion)
template <
class T0 = null_type, class T1 = null_type, class T2 = null_type,
class T3 = null_type, class T4 = null_type, class T5 = null_type,
class T6 = null_type, class T7 = null_type, class T8 = null_type,
class T9 = null_type
>
struct make_tuple_mapper {
typedef
tuple<typename make_tuple_traits<T0>::type,
typename make_tuple_traits<T1>::type,
typename make_tuple_traits<T2>::type,
typename make_tuple_traits<T3>::type,
typename make_tuple_traits<T4>::type,
typename make_tuple_traits<T5>::type,
typename make_tuple_traits<T6>::type,
typename make_tuple_traits<T7>::type,
typename make_tuple_traits<T8>::type,
typename make_tuple_traits<T9>::type> type;
};
} // end detail
// -make_tuple function templates -----------------------------------
inline tuple<> make_tuple() {
return tuple<>();
}
template<class T0>
inline typename detail::make_tuple_mapper<T0>::type
make_tuple(const T0& t0) {
typedef typename detail::make_tuple_mapper<T0>::type t;
return t(t0);
}
template<class T0, class T1>
inline typename detail::make_tuple_mapper<T0, T1>::type
make_tuple(const T0& t0, const T1& t1) {
typedef typename detail::make_tuple_mapper<T0, T1>::type t;
return t(t0, t1);
}
template<class T0, class T1, class T2>
inline typename detail::make_tuple_mapper<T0, T1, T2>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2) {
typedef typename detail::make_tuple_mapper<T0, T1, T2>::type t;
return t(t0, t1, t2);
}
template<class T0, class T1, class T2, class T3>
inline typename detail::make_tuple_mapper<T0, T1, T2, T3>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3) {
typedef typename detail::make_tuple_mapper<T0, T1, T2, T3>::type t;
return t(t0, t1, t2, t3);
}
template<class T0, class T1, class T2, class T3, class T4>
inline typename detail::make_tuple_mapper<T0, T1, T2, T3, T4>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3,
const T4& t4) {
typedef typename detail::make_tuple_mapper<T0, T1, T2, T3, T4>::type t;
return t(t0, t1, t2, t3, t4);
}
template<class T0, class T1, class T2, class T3, class T4, class T5>
inline typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3,
const T4& t4, const T5& t5) {
typedef typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5>::type t;
return t(t0, t1, t2, t3, t4, t5);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6>
inline typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5, T6>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3,
const T4& t4, const T5& t5, const T6& t6) {
typedef typename detail::make_tuple_mapper
<T0, T1, T2, T3, T4, T5, T6>::type t;
return t(t0, t1, t2, t3, t4, t5, t6);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6,
class T7>
inline typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5, T6, T7>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3,
const T4& t4, const T5& t5, const T6& t6, const T7& t7) {
typedef typename detail::make_tuple_mapper
<T0, T1, T2, T3, T4, T5, T6, T7>::type t;
return t(t0, t1, t2, t3, t4, t5, t6, t7);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6,
class T7, class T8>
inline typename detail::make_tuple_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3,
const T4& t4, const T5& t5, const T6& t6, const T7& t7,
const T8& t8) {
typedef typename detail::make_tuple_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8>::type t;
return t(t0, t1, t2, t3, t4, t5, t6, t7, t8);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6,
class T7, class T8, class T9>
inline typename detail::make_tuple_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::type
make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3,
const T4& t4, const T5& t5, const T6& t6, const T7& t7,
const T8& t8, const T9& t9) {
typedef typename detail::make_tuple_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::type t;
return t(t0, t1, t2, t3, t4, t5, t6, t7, t8, t9);
}
namespace detail {
template<class T>
struct tie_traits {
typedef T& type;
};
template<>
struct tie_traits<ignore_t(ignore_t)> {
typedef swallow_assign type;
};
template<>
struct tie_traits<void> {
typedef null_type type;
};
template <
class T0 = void, class T1 = void, class T2 = void,
class T3 = void, class T4 = void, class T5 = void,
class T6 = void, class T7 = void, class T8 = void,
class T9 = void
>
struct tie_mapper {
typedef
tuple<typename tie_traits<T0>::type,
typename tie_traits<T1>::type,
typename tie_traits<T2>::type,
typename tie_traits<T3>::type,
typename tie_traits<T4>::type,
typename tie_traits<T5>::type,
typename tie_traits<T6>::type,
typename tie_traits<T7>::type,
typename tie_traits<T8>::type,
typename tie_traits<T9>::type> type;
};
}
// Tie function templates -------------------------------------------------
template<class T0>
inline typename detail::tie_mapper<T0>::type
tie(T0& t0) {
typedef typename detail::tie_mapper<T0>::type t;
return t(t0);
}
template<class T0, class T1>
inline typename detail::tie_mapper<T0, T1>::type
tie(T0& t0, T1& t1) {
typedef typename detail::tie_mapper<T0, T1>::type t;
return t(t0, t1);
}
template<class T0, class T1, class T2>
inline typename detail::tie_mapper<T0, T1, T2>::type
tie(T0& t0, T1& t1, T2& t2) {
typedef typename detail::tie_mapper<T0, T1, T2>::type t;
return t(t0, t1, t2);
}
template<class T0, class T1, class T2, class T3>
inline typename detail::tie_mapper<T0, T1, T2, T3>::type
tie(T0& t0, T1& t1, T2& t2, T3& t3) {
typedef typename detail::tie_mapper<T0, T1, T2, T3>::type t;
return t(t0, t1, t2, t3);
}
template<class T0, class T1, class T2, class T3, class T4>
inline typename detail::tie_mapper<T0, T1, T2, T3, T4>::type
tie(T0& t0, T1& t1, T2& t2, T3& t3,
T4& t4) {
typedef typename detail::tie_mapper<T0, T1, T2, T3, T4>::type t;
return t(t0, t1, t2, t3, t4);
}
template<class T0, class T1, class T2, class T3, class T4, class T5>
inline typename detail::tie_mapper<T0, T1, T2, T3, T4, T5>::type
tie(T0& t0, T1& t1, T2& t2, T3& t3,
T4& t4, T5& t5) {
typedef typename detail::tie_mapper<T0, T1, T2, T3, T4, T5>::type t;
return t(t0, t1, t2, t3, t4, t5);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6>
inline typename detail::tie_mapper<T0, T1, T2, T3, T4, T5, T6>::type
tie(T0& t0, T1& t1, T2& t2, T3& t3,
T4& t4, T5& t5, T6& t6) {
typedef typename detail::tie_mapper
<T0, T1, T2, T3, T4, T5, T6>::type t;
return t(t0, t1, t2, t3, t4, t5, t6);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6,
class T7>
inline typename detail::tie_mapper<T0, T1, T2, T3, T4, T5, T6, T7>::type
tie(T0& t0, T1& t1, T2& t2, T3& t3,
T4& t4, T5& t5, T6& t6, T7& t7) {
typedef typename detail::tie_mapper
<T0, T1, T2, T3, T4, T5, T6, T7>::type t;
return t(t0, t1, t2, t3, t4, t5, t6, t7);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6,
class T7, class T8>
inline typename detail::tie_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8>::type
tie(T0& t0, T1& t1, T2& t2, T3& t3,
T4& t4, T5& t5, T6& t6, T7& t7,
T8& t8) {
typedef typename detail::tie_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8>::type t;
return t(t0, t1, t2, t3, t4, t5, t6, t7, t8);
}
template<class T0, class T1, class T2, class T3, class T4, class T5, class T6,
class T7, class T8, class T9>
inline typename detail::tie_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::type
tie(T0& t0, T1& t1, T2& t2, T3& t3,
T4& t4, T5& t5, T6& t6, T7& t7,
T8& t8, T9& t9) {
typedef typename detail::tie_mapper
<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::type t;
return t(t0, t1, t2, t3, t4, t5, t6, t7, t8, t9);
}
template <class T0, class T1, class T2, class T3, class T4,
class T5, class T6, class T7, class T8, class T9>
void swap(tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>& lhs,
tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>& rhs);
inline void swap(null_type&, null_type&) {}
template<class HH>
inline void swap(cons<HH, null_type>& lhs, cons<HH, null_type>& rhs) {
::boost::swap(lhs.head, rhs.head);
}
template<class HH, class TT>
inline void swap(cons<HH, TT>& lhs, cons<HH, TT>& rhs) {
::boost::swap(lhs.head, rhs.head);
::boost::tuples::swap(lhs.tail, rhs.tail);
}
template <class T0, class T1, class T2, class T3, class T4,
class T5, class T6, class T7, class T8, class T9>
inline void swap(tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>& lhs,
tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>& rhs) {
typedef tuple<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9> tuple_type;
typedef typename tuple_type::inherited base;
::boost::tuples::swap(static_cast<base&>(lhs), static_cast<base&>(rhs));
}
} // end of namespace tuples
} // end of namespace boost
#if defined(BOOST_GCC) && (BOOST_GCC >= 40700)
#pragma GCC diagnostic pop
#endif
#endif // BOOST_TUPLE_BASIC_HPP
```
|
```ruby
require_relative '../../../spec_helper'
describe :numeric_rect, shared: true do
before :each do
@numbers = [
20, # Integer
398.72, # Float
Rational(3, 4), # Rational
99999999**99, # Bignum
infinity_value,
nan_value
]
end
it "returns an Array" do
@numbers.each do |number|
number.send(@method).should be_an_instance_of(Array)
end
end
it "returns a two-element Array" do
@numbers.each do |number|
number.send(@method).size.should == 2
end
end
it "returns self as the first element" do
@numbers.each do |number|
if Float === number and number.nan?
number.send(@method).first.nan?.should be_true
else
number.send(@method).first.should == number
end
end
end
it "returns 0 as the last element" do
@numbers.each do |number|
number.send(@method).last.should == 0
end
end
it "raises an ArgumentError if given any arguments" do
@numbers.each do |number|
-> { number.send(@method, number) }.should raise_error(ArgumentError)
end
end
end
```
|
ex:el is the third studio album by 808 State, released on 4 March 1991 by ZTT Records. In contrast to the band's previous work, the album features more catchy melodies and heavier acid techno beats and percussion, "embracing earlier flirtations with hip-hop and industrial music".
The album also features the guest vocals of Bernard Sumner of Joy Division and New Order, who sings on "Spanish Heart". In addition, Björk sings on "Qmart" and "Ooops", and is credited with co-writing both; this album marked the start of a long-running working relationship between Björk and Graham Massey.
It is considered the first major release to feature the sample of the phrase "we are the music makers" from the film Willy Wonka & the Chocolate Factory, which became one of the most common vocal samples in electronic music. It is also the first electronic album to feature guest vocals by prominent alternative rock artists on selected tracks, a practice that later became commonplace on pop-oriented electronic albums.
Phil Sutcliffe in Q magazine called the album "irresistibly full of fun".
The album is the last to feature founding member Martin Price, who left the group in October 1991 to pursue solo production work, eventually forming his own label, Sun Text.
UK track listing
US track listing
2008 deluxe edition
In September 2008, ex:el was re-released as a 'deluxe edition'. The original album was remastered by Graham Massey, and Ian Peel and Graham Massey compiled a bonus disc of remixes and unreleased tracks which included:
"In Yer Face" (Facially Yours Remix) – 4:17
"Olympic" (Euro Bass Mix) – 5:44
"Lift" (Heavy Mix) – 4:42
"Cubik" (State to Pan Am Mix) – 4:29
"Open Your Mind" (Sound Garden Mix) – 4:28
"Lambrusco Cowboy" (Alt Mix) – 4:17
"Ski Family" – 5:14
"Ooops" (Mellow Birds Mix) – 4:04
"In Yer Face" (Cheadle Royal Mix) – 3:26
"Olympic" (Unreleased Mix) – 4:55
Charts
References
1991 albums
808 State albums
Tommy Boy Records albums
ZTT Records albums
|
The Canopus Foundation is a registered private charitable institution under German jurisdiction founded in 1997 by Wolfgang Heller and Dr. Peter W. Heller.
Organization
Management
The executive director of the foundation is Dr. Peter W. Heller. The other members of the Board of Trustees are Dipl.-Vw. Micaela Heller, Julia T. Rahmani, M.Sc. (LSE), and Jakob Heller, B.Sc. (Tübingen).
Staff
The foundation employs two permanent staff and eight freelancers (as of 2019). Annual operating expenses are 300,000 euros.
Office
The foundation is legally registered in Dreieich, Germany. The office is located in Freiburg im Breisgau, Germany.
Profile
Environmental protection, poverty alleviation, entrepreneurial commitment, education, and science
The Canopus Foundation is a private family foundation registered as a charitable organization under German law. It has been committed to rural electrification through renewable energy in developing countries for more than 20 years. Since 2014 Canopus has also been active in the field of education and science for a sustainable economy. Its work is based on the model of "venture philanthropy".
Energy Access for All
1.1 billion people worldwide live without access to energy. Especially in developing and emerging countries the shortage of electrical power supply keeps large parts of the population from overcoming their poverty. Particularly in rural areas many qualified skilled workers and small-sized businesses have to forgo the use of electrical machines. They are limited in the goods they can produce, and denied economic development. To this day, much of the urgently needed food spoils in the heat because of the lack of refrigeration and conservation. Without energy for the cold storage of medication and vaccines, even basic health care is lacking. Without access to a power supply to light their huts and houses, charge batteries, or run electrical devices such as phones, radios, TVs, or fans, rural low income communities have no access to the simplest requirements for connecting to the “world”, taking up employment, or acquiring knowledge. Access to sustainable energy for all people: the Canopus Foundation has made it its mission to work on this challenge and provide people who are still forced to live without electricity with access to power from renewable energy sources, and thus give them new prospects for the future.
From 2000 to 2020 the Canopus Foundation supported 51 social enterprises and non-profit organizations working in the energy access sector, and organized two international "Solar for All" contests to promote best-practice solutions.
Education and Science for a Sustainable Economy
Since 2008 the economic sciences have faced a blatant crisis of legitimacy. Only a limited number of economists predicted the imminent crash of international financial markets, and retrospective analyses of the causes were in part contradictory. Ever since, the image of major economic research institutes has noticeably suffered. This loss of influence in the area of political advice and the additional loss of their legitimacy in the media and civil society has led to a defensive mood on the part of economists, leaving little space for critical self-reflection. Critics of the orthodox school are finding increasing sympathy in the public eye. They focus on the following points of criticism: the abstract model-type worlds of neoclassical equilibrium economics are applied to reality without taking the social environment into account, thus rendering sustainable economics inconceivable. Knowledge in the fields of social science and the humanities including economic history, scientific theory, and ethics turns into a marginal phenomenon of academia. This has led to a growing dysfunctionality where comprehensive interaction and an exchange of opinions would be urgently needed.
Since 2014 the Canopus Foundation has been a shareholder of the Humboldt-Viadrina Governance Platform (HVGP) gGmbH in Berlin; since 2015 it has been supporting the Cusanus University in Bernkastel-Kues, Germany. Both institutions are committed to supporting plural economics in their respective fields, in the organization of events, and in research and teaching.
Modus operandi
As a family foundation, Canopus pursues the venture philanthropy model, which transfers the methods and tools of venture capital to the social sector in response to its growing demand for advanced financial engineering. The foundation promotes the long-term allocation of philanthropic risk capital to early-stage social enterprises, laying the ground for financial self-sufficiency and the expansion of their social and ecological impact. Towards this end the Canopus Foundation provides additional technical expertise, management skills and market intelligence where needed. Since 2019 the Canopus Foundation has been a member of the Association of German Foundations.
Cooperating Partners (selected)
Ashoka (non-profit organization), Elea Foundation for Ethics in Globalization, Humboldt-Viadrina Governance Platform (HVGP) gGmbH, Cusanus University.
See also
Renewable energy in developing countries
Renewable energy in Africa
Renewable energy in China
Solar power in South Asia
Solar powered refrigerator
SolarAid
References
External links
Official webpage
EVPA
ASHOKA
Social finance
Foundations based in Germany
Energy and the environment
|
```java
package tech.tablesaw.io;
import java.util.*;
import java.util.concurrent.CopyOnWriteArrayList;
import tech.tablesaw.api.ColumnType;
import tech.tablesaw.columns.AbstractColumnParser;
public class ColumnTypeDetector {
/** Consider using TextColumn instead of StringColumn for string data after this many rows */
private static final int STRING_COLUMN_ROW_COUNT_CUTOFF = 50_000;
/**
* Use a TextColumn if at least this proportion of values are found to be unique in the type
* detection sample
*
* <p>Note: This number is based on an assumption that as more records are considered, a smaller
* proportion of these new records will be found to be unique
*
* <p>Sample calculation; 10 character string = 2 bytes * 10 + 38 extra bytes = 58; rounded up to
* 64 so it's a multiple of 8
*
* <p>With dictionary encoding, we have 2*64 + 2*4 = 136 byte per unique value plus 4 bytes for
* each value For text columns we have 64 bytes per string
*
* <p>So, if every value is unique, using dictionary encoding wastes about 70 bytes per value. If
* there are only two unique values, dictionary encoding saves about 62 bytes per value.
*
* <p>Of course, it all depends on the lengths of the strings.
*/
private static final double STRING_COLUMN_CUTOFF = 0.50;
private final List<ColumnType> typeArray;
/**
* @param typeArray Types to choose from. When more than one would work, we pick the first of the
* options. The order these appear in is critical. The broadest must go last, so String must
* be at the end of the list. Any String read from the input will match string. If it were
* first on the list, you would get nothing but strings in your table. As another example, an
* integer type, should go before double. Otherwise double would match integers so the integer
* test would never be evaluated and all the ints would be read as doubles.
*/
public ColumnTypeDetector(List<ColumnType> typeArray) {
this.typeArray = typeArray;
}
/**
* Estimates and returns the type for each column in the input text
*
* <p>The type is determined by checking a sample of the data. Because only a sample of the data
* is checked, the types may be incorrect. If that is the case a Parse Exception will be thrown.
*
* <p>The method {@code printColumnTypes()} can be used to print a list of the detected columns
* that can be corrected and used to explicitly specify the correct column types.
*/
public ColumnType[] detectColumnTypes(Iterator<String[]> rows, ReadOptions options) {
boolean useSampling = options.sample();
// to hold the results
List<ColumnType> columnTypes = new ArrayList<>();
// to hold the data read from the file
List<List<String>> columnData = new ArrayList<>();
int rowCount = 0; // make sure we don't go over maxRows
int nextRow = 0;
String[] nextLine = new String[0];
while (rows.hasNext()) {
try {
nextLine = rows.next();
// initialize the arrays to hold the strings; we don't know how many
// we need until we read the first row
if (rowCount == 0) {
for (int i = 0; i < nextLine.length; i++) {
columnData.add(new ArrayList<>());
}
}
int columnNumber = 0;
if (rowCount == nextRow) {
for (String field : nextLine) {
columnData.get(columnNumber).add(field);
columnNumber++;
}
if (useSampling) {
nextRow = nextRow(nextRow);
} else {
nextRow = nextRowWithoutSampling(nextRow);
}
}
rowCount++;
} catch (IndexOutOfBoundsException e) {
throw new ColumnIndexOutOfBoundsException(e, nextRow, nextLine);
}
}
// now detect
for (List<String> valuesList : columnData) {
ColumnType detectedType = detectType(valuesList, options);
/*
if (detectedType.equals(STRING) && rowCount > STRING_COLUMN_ROW_COUNT_CUTOFF
&& options.columnTypesToDetect().contains(TEXT)
) {
HashSet<String> unique = new HashSet<>(valuesList);
double uniquePct = unique.size() / (valuesList.size() * 1.0);
if (uniquePct > STRING_COLUMN_CUTOFF) {
detectedType = TEXT;
}
}
*/
columnTypes.add(detectedType);
}
return columnTypes.toArray(new ColumnType[0]);
}
private int nextRowWithoutSampling(int nextRow) {
return nextRow + 1;
}
private int nextRow(int nextRow) {
if (nextRow < 10_000) {
return nextRow + 1;
}
if (nextRow < 100_000) {
return nextRow + 1000;
}
if (nextRow < 1_000_000) {
return nextRow + 10_000;
}
if (nextRow < 10_000_000) {
return nextRow + 100_000;
}
if (nextRow < 100_000_000) {
return nextRow + 1_000_000;
}
return nextRow + 10_000_000;
}
/**
* Returns a predicted ColumnType derived by analyzing the given list of undifferentiated strings
* read from a column in the file and applying the given Locale and options
*/
private ColumnType detectType(List<String> valuesList, ReadOptions options) {
CopyOnWriteArrayList<AbstractColumnParser<?>> parsers =
new CopyOnWriteArrayList<>(getParserList(typeArray, options));
CopyOnWriteArrayList<ColumnType> typeCandidates = new CopyOnWriteArrayList<>(typeArray);
boolean hasNonMissingValues = false;
for (String s : valuesList) {
for (AbstractColumnParser<?> parser : parsers) {
if (!parser.isMissing(s)) {
hasNonMissingValues = true;
if (!parser.canParse(s)) { // we can skip this test if we know the value is missing
typeCandidates.remove(parser.columnType());
parsers.remove(parser);
}
}
}
}
if (hasNonMissingValues) {
return selectType(typeCandidates);
} else {
// the last type in the typeArray is the default
return typeArray.get(typeArray.size() - 1);
}
}
/**
* Returns the selected candidate for a column of data, by picking the first value in the given
* list
*
* @param typeCandidates a possibly empty list of candidates. This list should be sorted in order
* of preference
*/
private ColumnType selectType(List<ColumnType> typeCandidates) {
return typeCandidates.get(0);
}
/**
* Returns the list of parsers to use for type detection
*
* @param typeArray Array of column types. The order specifies the order the types are applied
* @param options CsvReadOptions to use to modify the default parsers for each type
* @return A list of parsers in the order they should be used for type detection
*/
private List<AbstractColumnParser<?>> getParserList(
List<ColumnType> typeArray, ReadOptions options) {
// Types to choose from. When more than one would work, we pick the first of the options
List<AbstractColumnParser<?>> parsers = new ArrayList<>();
for (ColumnType type : typeArray) {
parsers.add(type.customParser(options));
}
return parsers;
}
}
```
|
The Kress Building, also known as the Kress Wholesale Company Store and the Mehornay Furniture Store, is a historic commercial building in downtown Columbia, Missouri. It was built in 1910 for S. H. Kress & Co. and remodeled in 1946 when it became Mehornay Furniture. It is a tall two-story brick building with an open storefront of large plate glass windows topped by horizontal metal banding.
It operated as a furniture store from 1946 until 1979–80. It was renovated again in 2005; after the renovation, the ground floor became home to the Penguin Piano Bar in 2005, and Roxy's opened on the second floor in 2015. Both went out of business in 2020 due to closures during the COVID-19 pandemic.
It was listed on the National Register of Historic Places in 2005.
References
S. H. Kress & Co.
Commercial buildings on the National Register of Historic Places in Missouri
Commercial buildings completed in 1910
Buildings and structures in Columbia, Missouri
National Register of Historic Places in Boone County, Missouri
|
Sostegno is a comune (municipality) in the Province of Biella in the Italian region of Piedmont, located northeast of Turin and northeast of Biella.
Sostegno borders the following municipalities: Crevacuore, Curino, Lozzolo, Roasio, Serravalle Sesia, and Villa del Bosco. The local economy is based on the production of apples and wine.
References
Cities and towns in Piedmont
|
Olaf Sørensen (3 August 1892 – 1 August 1962) was a Norwegian politician for the Labour Party.
He was elected to the Norwegian Parliament from the Market towns of Buskerud county in 1945, and was re-elected on two occasions.
Sørensen was born in Kongsberg and held various positions in Kongsberg city council between 1922 and 1959, except for a period between 1940 and 1945 during the German occupation of Norway. He served briefly as mayor in 1945.
References
1892 births
1962 deaths
Labour Party (Norway) politicians
Members of the Storting
20th-century Norwegian politicians
People from Kongsberg
|
George Georgiou (born 1961) is a freelance British photographer and photojournalist best known for his work in eastern Europe, particularly Turkey.
Career in photography
Born in London to Greek Cypriot parents, Georgiou graduated in photography from the Polytechnic of Central London.
Georgiou's work has focussed on communities split between different cultures. After working for six years in Serbia, Greece and eastern Europe, he was recently based for four years in Istanbul. His work in Turkey led to a series of photographs titled Fault Lines/Turkey/East/West, which has led to several exhibitions and a book. Georgiou has also taught photography at Barnet College in London and a number of workshops in Europe.
When he arrives somewhere new, Georgiou first unburdens himself of preexisting images of the place and tries to see through superficial differences with places he knows; he then looks for commonalities and actual differences. He starts by himself, and only when well underway does he hope to attract commissions and make sales.
Georgiou's early work was in black-and-white but for Fault Lines and subsequent work he moved to colour, using a compact camera with an articulated LCD that may be viewed from above, like the ground glass screen of a twin-lens reflex camera; this is because he believes it less intimidating for the people photographed than a camera held to the eye.
Georgiou belongs to Panos Pictures. His noncommercial approach has presented challenges; speaking in 2009, he described himself as having large debts but remaining optimistic.
Turkey
Georgiou had long been curious about Turkey, and when his visit to Istanbul in 2003 coincided with bombings he determined to learn more about the issues involved. The eventual theme of his work in Turkey gradually emerged as he observed bleak new collective housing springing up for an incongruous urbanisation of the rugged Anatolian plateau. The resulting work, Fault Lines/Turkey/East/West, explores the notion of an East/West division and the additional and complex fault lines – religious/secular, tradition/modernity, and more – that cross the Turkey of today.
Georgiou started the work in monochrome but soon moved to colour. Photographing in spring and autumn helped in subduing the light and avoiding the blue skies familiar from National Geographic and the like.
In a review of Georgiou's exhibition Fault Lines at Side Gallery (Newcastle), Katie Lin found that his photographs evoked sadness rather than sympathy resulting from "the desolation and emptiness that features in so many of his shots." In some cases, this desolation was exaggerated by the "disproportional space awarded to the sky" or by the look of the "faces of passersby who just happened to get caught in the frame." But overall, she found the photographs were "thought-provoking and beautiful in content, composition and colour, a fantastic display of the everyday life experience of Turkish people".
Adam Stoltman wrote for the New York Times that in Fault Lines:
Through a series of haunting architectural and landscape scenes of Turkey's rush toward modernization – and the resulting tension between the secular and the modern – George Georgiou has visually put his finger on a kind of listless alienation which at times can seem to pervade globalized society.
Georgia and Ukraine
By late 2010 Georgiou had been working for five years on In the Shadow of the Bear, a project that looks at the aftermath of the peaceful "Rose" and "Orange" revolutions that took place in Georgia and Ukraine against the backdrop of Russia's resurgence as a major international power and its continuous involvement in the two nations' affairs. The project looks at signs in the domestic and public spheres that, taken together, build up a representation of how the people of Georgia and Ukraine negotiate the space they find themselves in: individual aspects of the two very different countries, and aspects common to them through their shared history in the Soviet Union. Georgiou hopes to present this work in either one volume or two.
Awards
World Press Photo: Award for "The Serbs" (2002)
Pictures of the Year International, prize for "Bombing Victim" (2003)
World Press Photo: Award for "Flour War" (2004)
Project Assistance Award from Nikon and the British Journal of Photography (2010)
Bibliography
By Georgiou
George Georgiou. Fault Lines/Turkey/East/West. Amsterdam: Schilt, 2010. 128 pp. .
Fault Lines/Turquie/Est/Ouest. Trézélan: Filigranes, 2010. .
Turkey / Τουρκία : Στη ρωγμή του χρόνου (Turkey / Tourkia: stē rōgmē tou chronou). Athens: Apeiron Photos, 2010. .
Fault Lines/Turchia/Est/Ovest. Rome: Postcart, 2010. .
Last Stop. Self-published, 2015. Edition of 950 copies.
Americans Parade. Self-published, 2019. With an introduction by David Campany and a short story by Vanessa Winship.
With contributions by Georgiou
Street Photography Now. London: Thames & Hudson, 2010. (hardback). London: Thames & Hudson, 2011. (paperback). Edited by Sophie Howarth and Stephen McLaren.
Unseen London. London: Hoxton Mini Press, 2017. With photographs by and interviews with various photographers, and text by Rachel Segal Hamilton.
Exhibitions (with others)
2011/2012: New Photography 2011, Museum of Modern Art, New York. With Moyra Davey, Deana Lawson, Doug Rickard, Viviane Sassen and Zhang Dali.
Last Stop, Le château d’eau, pôle photographique de Toulouse, Toulouse, France, January–March 2015. Exhibited alongside Voyage Mélancolique by Vanessa Winship.
References
External links
Georgiou's profile at Panos Pictures
"Fault Lines: Turkey East to West " at Moving Walls 14.
Whitney Johnson. "Off the shelf; George Georgiou's Turkey". New Yorker, 23 September 2010.
1961 births
Date of birth missing (living people)
Living people
Photographers from London
British people of Greek Cypriot descent
British photojournalists
Photography in Turkey
Alumni of the University of Westminster
Street photographers
|
Sacred Heart Medical Center may refer to:
In the United States:
Sacred Heart Medical Center at RiverBend, Springfield, Oregon
Sacred Heart Medical Center University District, Eugene, Oregon
Providence Sacred Heart Medical Center and Children's Hospital, Spokane, Washington
See also
Sacred Heart Hospital (disambiguation)
|
```c++
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at path_to_url)
#ifndef BOOST_TYPE_INDEX_RUNTIME_CAST_BOOST_SHARED_PTR_CAST_HPP
#define BOOST_TYPE_INDEX_RUNTIME_CAST_BOOST_SHARED_PTR_CAST_HPP
/// \file boost_shared_ptr_cast.hpp
/// \brief Contains the overload of boost::typeindex::runtime_pointer_cast for
/// boost::shared_ptr types.
#include <boost/type_index/runtime_cast/detail/runtime_cast_impl.hpp>
#include <boost/type_traits/is_base_and_derived.hpp>
#include <boost/smart_ptr/shared_ptr.hpp>
#ifdef BOOST_HAS_PRAGMA_ONCE
# pragma once
#endif
namespace boost { namespace typeindex {
/// \brief Creates a new instance of std::shared_ptr whose stored pointer is obtained from u's
/// stored pointer using a runtime_cast.
///
/// The new shared_ptr will share ownership with u, except that it is empty if the runtime_cast
/// performed by runtime_pointer_cast returns a null pointer.
/// \tparam T The desired target type to return a pointer of.
/// \tparam U A complete class type of the source instance pointed to from u.
/// \return If there exists a valid conversion from U* to T*, returns a boost::shared_ptr<T>
/// that points to an address suitably offset from u.
/// If no such conversion exists, returns boost::shared_ptr<T>();
template<typename T, typename U>
boost::shared_ptr<T> runtime_pointer_cast(boost::shared_ptr<U> const& u) {
T* value = detail::runtime_cast_impl<T>(u.get(), boost::is_base_and_derived<T, U>());
if(value)
return boost::shared_ptr<T>(u, value);
return boost::shared_ptr<T>();
}
}} // namespace boost::typeindex
#endif // BOOST_TYPE_INDEX_RUNTIME_CAST_BOOST_SHARED_PTR_CAST_HPP
```
|
Castle Brewery is one of the oldest commercial breweries in South Africa. As company-endorsed legend would have it, the company was founded by Charles Glass in Johannesburg in 1894. UCT history professor Anne Kelk Mager has argued that the official SAB story overemphasized the role of Charles and that it was his wife Lisa Glass who was primarily responsible for the creation of Castle. It later merged with other breweries to form South African Breweries, which still later merged with Miller of the United States to form SABMiller.
On 10 October 2016, Anheuser-Busch InBev acquired SABMiller for £69 billion (approximately US$107 billion). The arrangement had been approved by shareholders of both companies on 28 September 2016. The acquisition, subsequently referred to as a merger in the news media, ended the corporate use of the name SABMiller. The new company is called Anheuser-Busch InBev SA/NV (AB InBev) and trades as ABI on the Brussels Stock Exchange, as BUD on the New York Stock Exchange and as ANH on the Johannesburg market.
SABMiller ceased trading on global stock markets and became a business division of Anheuser-Busch InBev SA/NV. On 11 October 2016, Anheuser-Busch InBev divested itself of its interests in the MillerCoors beer company to Molson Coors.
Since SABMiller no longer exists as an entity, South African Breweries is now a subsidiary of Anheuser-Busch InBev SA/NV (abbreviated as AB InBev).
History
Prior to Castle Brewery's incorporation in 1895, brewing in Cape Town had served the steady expansion of the settler community since the mid-17th century. The demand for beer prompted the first Dutch governor, Jan van Riebeeck, to establish a brewery at the Fort (later replaced by the Castle in central Cape Town) as early as 1658, beating the first wine production by six months. In the same year, Pieter Visagie brewed the first beer from the waters of the Liesbeeck River. Over the next 200 years, brewing made its mark in the Cape and beyond. Noted brewers of the time included Cloete at the Newlands Brewery; Ohlsson at the Anneberg Brewery; Jacob Letterstedt at Mariendahl Brewery, also in Newlands; Hiddingh at Cannon Brewery; Martienssen at the Salt River Brewery; and a second Cloete in Kloof Street.
One of the key figures in the story of Newlands, and in the annals of South African beer manufacturing history, was Swede Anders Ohlsson, who sailed for Africa, aged 23, in 1864. Initially, he imported Swedish goods and timbers, and developed an extensive trade network and a solid business empire. Then he turned to brewing, basing himself at Newlands, where he produced Lion Lager. In 1956, Castle Brewery bought out Ohlssons and Chandlers Union Breweries and the company, for the first time, became known as the South African Breweries.
Brands
The main brand is Castle Lager, first brewed in 1895. Castle Lager has won many awards, from gold medals to the "World's Best Bottled Lager" award at the 2000 International Brewing Industry Awards. The lager has 5% ABV with a unique light hops taste, advertised as "somewhat dry, somewhat bitter, never sweet" and as "the beer that stood the test of time".
South African Breweries is a major supporter of South African sport, and Castle Lager is the official sponsor of the South African cricket and soccer teams. Until 2004, it was also the primary sponsor of the national rugby team, the Springboks, but that position has since been taken by Standard Bank Group Limited.
References
External links
Official website
https://web.archive.org/web/20070211115433/http://www.springbokradio.com/ADCASTLE.html (1972 Radio Commercial)
http://www.castlelager.com/
Beer in South Africa
Food and drink companies based in Cape Town
Manufacturing companies based in Cape Town
Manufacturing companies based in Johannesburg
Breweries of South Africa
South African brands
SABMiller
|
Sharples School is a co-educational secondary school located in the Sharples area of Bolton in the English county of Greater Manchester.
Established in 1974, the School celebrated its 40th anniversary in 2014.
Previously a community school administered by Bolton Metropolitan Borough Council, in June 2016 Sharples School converted to academy status. The school continues to coordinate with Bolton Metropolitan Borough Council for admissions.
Sharples School offers GCSEs, BTECs and the CiDA as programmes of study for pupils. The school also has a specialism in STEM (Science, Engineering, Technology and Maths).
References
External links
Sharples School official website
Secondary schools in the Metropolitan Borough of Bolton
Educational institutions established in 1974
1974 establishments in England
Academies in the Metropolitan Borough of Bolton
|
```c
#include <errno.h>
#include <string.h>
#include <netlink/genl/genl.h>
#include <netlink/msg.h>
#include <netlink/attr.h>
#include "nl80211.h"
#include "iw.h"
static int set_power_save(struct nl80211_state *state,
struct nl_cb *cb,
struct nl_msg *msg,
int argc, char **argv,
enum id_input id)
{
enum nl80211_ps_state ps_state;
if (argc != 1) {
printf("Invalid parameters!\n");
return 2;
}
if (strcmp(argv[0], "on") == 0)
ps_state = NL80211_PS_ENABLED;
else if (strcmp(argv[0], "off") == 0)
ps_state = NL80211_PS_DISABLED;
else {
printf("Invalid parameter: %s\n", argv[0]);
return 2;
}
NLA_PUT_U32(msg, NL80211_ATTR_PS_STATE, ps_state);
return 0;
nla_put_failure:
return -ENOBUFS;
}
COMMAND(set, power_save, "<on|off>",
NL80211_CMD_SET_POWER_SAVE, 0, CIB_NETDEV, set_power_save,
"Set power save state to on or off.");
static int print_power_save_handler(struct nl_msg *msg, void *arg)
{
struct nlattr *attrs[NL80211_ATTR_MAX + 1];
struct genlmsghdr *gnlh = nlmsg_data(nlmsg_hdr(msg));
const char *s;
nla_parse(attrs, NL80211_ATTR_MAX, genlmsg_attrdata(gnlh, 0),
genlmsg_attrlen(gnlh, 0), NULL);
if (!attrs[NL80211_ATTR_PS_STATE])
return NL_SKIP;
switch (nla_get_u32(attrs[NL80211_ATTR_PS_STATE])) {
case NL80211_PS_ENABLED:
s = "on";
break;
case NL80211_PS_DISABLED:
default:
s = "off";
break;
}
printf("Power save: %s\n", s);
return NL_SKIP;
}
static int get_power_save(struct nl80211_state *state,
struct nl_cb *cb,
struct nl_msg *msg,
int argc, char **argv,
enum id_input id)
{
nl_cb_set(cb, NL_CB_VALID, NL_CB_CUSTOM,
print_power_save_handler, NULL);
return 0;
}
COMMAND(get, power_save, "<param>",
NL80211_CMD_GET_POWER_SAVE, 0, CIB_NETDEV, get_power_save,
"Retrieve power save state.");
```
|
```go
package handlers
import (
"encoding/json"
"log"
"net/http"
"github.com/gorilla/mux"
"github.com/play-with-docker/play-with-docker/storage"
)
type PublicUserInfo struct {
Id string `json:"id"`
Avatar string `json:"avatar"`
Name string `json:"name"`
}
// GetUser writes the public profile of the requested user as JSON.
// `core` is a package-level service handle initialized elsewhere in
// the handlers package.
func GetUser(rw http.ResponseWriter, req *http.Request) {
vars := mux.Vars(req)
userId := vars["userId"]
u, err := core.UserGet(userId)
if err != nil {
if storage.NotFound(err) {
log.Printf("User with id %s was not found\n", userId)
rw.WriteHeader(http.StatusNotFound)
return
}
log.Println(err)
rw.WriteHeader(http.StatusInternalServerError)
return
}
pui := PublicUserInfo{Id: u.Id, Avatar: u.Avatar, Name: u.Name}
json.NewEncoder(rw).Encode(pui)
}
```
|
```shell
#!/usr/bin/env bash
# <xbar.title>Jenkins Agent Status</xbar.title>
# <xbar.version>v1.0</xbar.version>
# <xbar.author.github>avidit</xbar.author.github>
# <xbar.desc>Monitor status of jenkins agents</xbar.desc>
# <xbar.image>path_to_url
# <xbar.dependencies>jq</xbar.dependencies>
# Variables:
# <xbar.var>string(JENKINS_URL="path_to_url"): Jenkins URL</xbar.var>
# <xbar.var>string(JENKINS_AGENTS="AGENT_01,AGENT_02,AGENT_03"): Jenkins Agent(s)</xbar.var>
# <xbar.var>string(JENKINS_USER_ID=""): Jenkins user id</xbar.var>
# <xbar.var>string(JENKINS_API_TOKEN=""): Jenkins API Token</xbar.var>
# Dependencies:
# jq (path_to_url
# Installation:
# 1. Copy this script to xbar plugin folder ~/Library/Application Support/xbar/plugins
# 2. Ensure the plugin file is executable by running chmod +x jenkins-agent-status.5m.sh
echo ""
echo "---"
[ -n "$JENKINS_URL" ] || { echo " JENKINS_URL not set"; exit; }
[ -n "$JENKINS_AGENTS" ] || { echo " JENKINS_AGENTS not set"; exit; }
[ -n "$JENKINS_USER_ID" ] || { echo " JENKINS_USER_ID not set"; exit; }
[ -n "$JENKINS_API_TOKEN" ] || { echo " JENKINS_API_TOKEN not set"; exit; }
function check_status() {
AGENT=$1
STATUS_URL="$JENKINS_URL/computer/$AGENT/api/json"
RESPONSE=$(curl --silent --user "$JENKINS_USER_ID:$JENKINS_API_TOKEN" "$STATUS_URL")
OFFLINE=$(echo "$RESPONSE" | /usr/local/bin/jq -r '.offline')
REASON=$(echo "$RESPONSE" | /usr/local/bin/jq -r '.offlineCauseReason')
if [[ "$OFFLINE" == "false" ]];
then
echo " $AGENT: Online | href=${JENKINS_URL}/computer/$AGENT/"
elif [[ "$OFFLINE" == "true" ]];
then
echo " $AGENT: Offline | href=${JENKINS_URL}/computer/$AGENT/"
echo "-- ${REASON//$'\n'*/ }"
else
echo " $AGENT: Unknown | href=${JENKINS_URL}/computer/$AGENT/"
fi
}
IFS=', ' read -r -a AGENTS <<< "$JENKINS_AGENTS"
for AGENT in "${AGENTS[@]}"
do
check_status "$AGENT"
done
```
|
```javascript
import React from 'react';
import withNavigationContext from './withNavigationContext';
const getCleanPath = path => {
  // Strip a leading and a trailing slash, if present. (The trailing-slash
  // replace needs an explicit '' replacement argument.)
  return path.replace(/^\//, '').replace(/\/$/, '');
};
export default Component => {
return withNavigationContext(
({
fullpage,
onTransitionReject,
onTransitionStart,
onTransitionEnd,
...extra
}) => {
const { navigation, navigate } = fullpage;
const handleTransitionStart = element => {
const cleanPath = getCleanPath(window.location.pathname);
if (
typeof window !== 'undefined' &&
cleanPath !== element.nextMedia.slug
) {
if (navigation.pop === false) {
window.history.pushState({}, '', `/${element.nextMedia.slug}`);
} else {
navigate({
...navigation,
pop: false,
goto: cleanPath,
});
return;
}
}
navigate({
...navigation,
slug: navigation.goto,
navigating: true,
});
if (onTransitionStart) {
onTransitionStart(element);
}
};
const handleTransitionEnd = element => {
const state = {
...navigation,
navigating: false,
pop: false,
};
if (element.currentMedia.slug !== state.goto) {
state.slug = element.currentMedia.slug;
state.goto = element.currentMedia.slug;
}
navigate(state);
if (onTransitionEnd) {
onTransitionEnd(element);
}
const cleanPath = getCleanPath(window.location.pathname);
if (cleanPath !== element.currentMedia.slug) {
navigate({
...state,
goto: cleanPath,
});
}
};
const handleTransitionReject = element => {
if (navigation.navigating === true) {
return;
}
navigate({
slug: element.currentMedia.slug,
goto: element.currentMedia.slug,
navigating: false,
});
if (onTransitionReject) {
onTransitionReject(element);
}
};
return (
<Component
buttons
fillParent
bullets={false}
infinite={false}
          onFirstMount={() => {
            // Receive the event as a parameter instead of relying on the
            // implicit global `window.event`.
            window.addEventListener('popstate', event => {
              event.stopPropagation();
              event.preventDefault();
              // Note: event.path is non-standard (Chromium-only);
              // event.composedPath() is the portable equivalent.
              if (event.path && event.path[0]) {
                navigate({
                  ...navigation,
                  pop: true,
                  goto: getCleanPath(event.path[0].location.pathname),
                });
              }
            });
          }}
selected={navigation.goto}
onTransitionReject={handleTransitionReject}
onTransitionStart={handleTransitionStart}
onTransitionEnd={handleTransitionEnd}
{...extra}
/>
);
}
);
};
```
|
Basinów is a village in the administrative district of Gmina Zabrodzie, within Wyszków County, Masovian Voivodeship, in east-central Poland.
References
Villages in Wyszków County
|
Bayfair Center (orig. Bay-Fair, later Bay Fair, Bayfair Mall) is a regional shopping mall and power center in San Leandro, California. It was among the first malls in the East Bay of the San Francisco Bay Area. Anchor stores are Macy's, Target, Kohl's, Staples, Old Navy, PetSmart, Bed Bath & Beyond, Cinemark, and 24 Hour Fitness.
History
Launch (1950s)
Announced in April 1953, the shopping center was built on the 48-acre site of the former Oakland Speedway automobile racing stadium. It cost $25 million to build, plus an additional $6 million for the anchor department store, a three-story Macy's. Construction of the mall did not begin until 1956.
The architect for the Macy's store, including its interior, was John Savage Bolles, who had designed Candlestick Park. Bolles also designed interiors for Macy's stores at Hilltop Mall in Richmond, Hillsdale Shopping Center in San Mateo, and Valley Fair in San Jose, as well as interiors for the renovation of the Macy's Union Square flagship store in San Francisco.
Macy's was the first unit to open, on August 8, 1957, with mall shops opening in the months following. On November 8, 1957, 19 new stores (besides Macy's), including a supermarket, celebrated their grand opening.
The mall shop area (outside Macy's) was open-air and in an L-shape, split-level (i.e. on two levels, but not two stories one on top of another). It claimed to be the first shopping center in the Western United States to be built across two stories.
Expansion (1960s–1990s)
The mall continued to expand, and a new department store anchor, Montgomery Ward, opened a two-story store and auto center on August 4, 1971. With the addition of Ward's, the mall had grown to 62 stores.
In 1972, Bay Area Rapid Transit opened Bay Fair station adjacent to the mall to the south, providing access via rapid rail transit.
In 1977, owner Macy's announced a major renovation of the mall: it was enclosed, and escalators, air conditioning and carpeting were added. On the ground level, retail space was added, along with a further atrium and a "specialty court" for boutiques and restaurants; more retail space was added on a new second level. In a second phase, retail space was added adjacent to Macy's and elsewhere. In total, space was added for about 40 additional shops, bringing the mall to about 100 shops.
A T.J. Maxx anchor opened April 28, 1994.
Hybrid power center (2000s–present)
In 2001, Montgomery Ward went bankrupt and closed its stores nationwide. The abandoned Ward's store was demolished and in October 2002, a Target Greatland opened on the site.
Also in late 2002, the mall was acquired by Chicago-based M & J Wilkow Ltd. Bayfair's owner planned to remodel the ailing center into an open-air power center, renamed "Bayfair/580," which would have several big-box tenants and upscale "lifestyle-oriented" stores. The plan never came to fruition, however, and the mall was sold to Madison Marquette in late 2003.
The Macy's continues to operate and the mall is enclosed, but by 2012 the other anchors were more typical of a power center: big-box stores Kohl's, Staples, Old Navy, PetSmart, Bed Bath & Beyond, and 24 Hour Fitness, along with a Cinemark multiplex cinema. According to a 2016 study by the city of San Leandro, Bayfair has been successful in transforming itself to a tenant mix that meets current needs.
Plans for transit-oriented village
In 2018, the city of San Leandro adopted a plan to transform the Bay Fair neighborhood, including the mall and areas around it, into a transit-oriented "village", a high-density, mixed-use neighborhood with a street grid of small blocks to encourage walking and cycling, and including small parks and space for community events.
References
External links
"Market Analysis, Bay Fair BART TOD Specific Plan", City of San Leandro, 2016
Shopping malls in the San Francisco Bay Area
Shopping malls in Alameda County, California
San Leandro, California
|
Captain Walker may refer to :
Captain Frederic John Walker (1896–1944), a Royal Navy officer during World War II
Captain Walker, a character in The Who's Tommy
Captain Walker, a character in Mad Max Beyond Thunderdome
Captain Martin Walker, main character of Spec Ops: The Line
John Walker (Marvel Cinematic Universe), a character in the Marvel Cinematic Universe
See also
Joseph R. Walker (1798—1876), an American mountain man and scout
|
```php
<div class="list-item-thumbs-container">
<ul class="list-item-thumbs">
<li data-flag="%ALBUM_IMAGES_SLICE_1_FLAG%">%1<a href="%ALBUM_URL%" style="background-image: url(%ALBUM_IMAGES_SLICE_1_THUMB_URL%)"></a>%1</li>
<li data-flag="%ALBUM_IMAGES_SLICE_2_FLAG%">%2<a href="%ALBUM_URL%" style="background-image: url(%ALBUM_IMAGES_SLICE_2_THUMB_URL%)"></a>%2</li>
<li data-flag="%ALBUM_IMAGES_SLICE_3_FLAG%">%3<a href="%ALBUM_URL%" style="background-image: url(%ALBUM_IMAGES_SLICE_3_THUMB_URL%)"></a>%3</li>
<li data-flag="%ALBUM_IMAGES_SLICE_4_FLAG%">%4<a href="%ALBUM_URL%" style="background-image: url(%ALBUM_IMAGES_SLICE_4_THUMB_URL%)"></a>%4</li>
</ul>
</div>
```
|
The Beijing–Hankou or Jinghan railway (), also Peking–Hankow railway, was the former name of the railway in China from Beijing to Hankou, on the northern bank of the Yangtze River. The railway was built between 1897 and 1906 by a Belgian company backed by French financing. At Hankou, railway carriages were ferried across the Yangtze River to Wuchang on the southern bank, where they would connect to the Guangdong–Hankou railway. The completion of the Wuhan Yangtze River Bridge in 1957 linked the two railways into a single contiguous railway known as the Beijing–Guangzhou railway.
From 1928 to 1945, when Beijing was known as Beiping, the Beijing–Hankou railway was known as the Beiping–Hankou or Pinghan railway. During the Second Sino-Japanese War, the Japanese advance into central China was known as the Beiping–Hankou Railway Operation.
History
In 1896, the Imperial Chinese Railway Administration was established to oversee railway construction in China. Sheng Xuanhuai attempted to balance the foreign powers by awarding concessions to different countries. In 1897, a Belgian consortium agreed to lend £4.5 million sterling for the construction of a railway between Beijing and Hankou. The connecting Guangdong–Hankou railway was awarded to the American China Development Company in 1898.
Starting in March 1899, the work progressed from both ends. By the end of 1899, embankments had been completed along a long stretch and track had been laid in the south; in the north, embankments and track had also advanced. The Boxer Rebellion halted construction for several months in 1900, and all the railway officials were given arms to protect themselves. In the northern stretch from Lugouqiao to Fengtai, all the workshops, warehouses and wagons were destroyed and the sleepers were taken. Work continued in the south, where the viceroys ensured protection for the Europeans.
In 1901 the line was extended through the section between Xinyang and Hankou, in the hilly land between the Yellow and Yangtze rivers; only one tunnel was needed. In January 1902 the Imperial Court travelled along a completed section of the line on their way back to Beijing. In June 1905 the bridge over the Yellow River was opened to traffic, and the line, with 125 stations, was opened on 14 November 1905. It was recognized as a major (and profitable) achievement, and the responsible engineer, Jean Jadot, gained great credit.
The Beijing–Hankou railway was completed in 1906. In the meantime, the Belgians had purchased a controlling stake in the American company that held the concession for the Guangdong–Hankou railway. Most of the shares in the Belgian company were owned by Édouard Empain, and this move threatened to place the entire route between Beijing and Guangzhou under foreign control. Opposition to this state of affairs was especially strong in Hunan.
In 1907, Liang Shiyi proposed the formation of a Bank of Communications to redeem the Beijing–Hankou railway from its Belgian owners. The Bank of Communications was formed in 1908 and provided more than half of the financing needed to buy the railway, the remaining coming from the Imperial Bank of China and the Ministry of Finance. The railway was placed under Chinese control on January 1, 1909, and the successful redemption enhanced the prestige of Liang's Communications Clique.
Railway workers' strike
The Beijing–Hankou railway workers' strike of 1923, also known as the February 7th strike, was an important event involving this railway. By the end of 1922, 16 workers' unions had been established on the Jing-Han Railway. A ceremony to establish the Federation of Workers' Unions of the Beijing–Hankou Railway was held on February 1, 1923. However, warlord Wu Peifu sent his military police to sabotage the meeting. The Federation protested, and decided on a major strike on February 4, 1923, and relocated its office to Jiang'an, in the city of Hankou. The strike took place on February 7. Wu Peifu sent his troops to besiege the Workers' Union of Jiang'an. The chief of the Jiang'an Workers' Union (Lin Xiangqian) was arrested, and subsequently executed. Workers' movements in Changxindian, Zhengzhou, Baoding, and Gaobeidian were also put down. Union members wore badges at the strike – these were inscribed 江岸京漢鐵路工會會員證勞工神聖 (Member's badge of the Jiang'an Jing-Han Railway Union. Labour is sacred).
References
See also
Rail transport in the People's Republic of China
List of railways in China
Beiping–Hankou Railway Operation (battle along railway line)
Railway lines in China
Rail transport in Beijing
Rail transport in Hebei
Rail transport in Hubei
Railway lines opened in 1905
1905 establishments in China
Belgium–China relations
|
"The Architects of Fear" is an episode of the original The Outer Limits television show. It first aired on 30 September 1963, during the first season.
Introduction
Certain that the Cold War will lead to mankind's destruction, a cabal of scientists decide that they must act to save the world. Before the plot begins, a film of a nuclear missile attack, with people running for shelter, is shown.
Opening narration
Plot
The world has entered a Cold War-like setting in which nuclear holocaust appears imminent. In the hope of staving off an apocalyptic military confrontation between nations, an idealistic group of scientists working at United Labs plans to stage a fake alien invasion of Earth in an effort to unite all humanity against a perceived common enemy. The scientists have managed to study the planetary conditions on the planet Theta. They draw lots, and physicist Dr. Allen Leighton is chosen to undergo radical surgical procedures that will transform him into an inhabitant from the planet Theta. Leighton's death is faked, and the bizarre series of transplants and modifications to his body proceed. His wife, Yvette, persists in not believing he is dead; she even feels sympathetic pain as Allen suffers on the operating table. Complications arise when the effects of Leighton's transformation extend beyond his physical appearance and begin to affect his mind, a situation compounded by the scientist's strong emotional connection with his now-pregnant wife.
The scientists' plan is for Dr. Leighton, as the Thetan creature equipped with an energy weapon and spaceship, to land at the United Nations in an effort to create initial panic. This panic, in theory, will be resolved as the world unites to fight the invader. Leighton, now a perfect simulation of an inhabitant of the planet Theta, is launched into orbit as a weather satellite, but the mission goes awry when the spaceship comes down off course and lands in a wooded area near the United Labs facility. After disintegrating their station wagon with his laser pistol, Allen is severely wounded by three armed hunters as he emerges from the underbrush. With nowhere else to go, Allen stumbles back to the lab. Yvette, sensing trouble, hurries to the lab looking for her husband. She arrives as Allen, now hideously transformed, enters and collapses to the floor. Before dying of his wounds, Allen makes a sign in the air with his hand, a sign familiar to his wife, and she then realizes the horrifying truth that the alien is, in fact, her husband.
Closing narration
Censorship
The "bear" in this episode, the monstrously altered Allen Leighton, was judged by some of ABC's local affiliate stations to be so frightening that they broadcast a black screen during the Thetan's appearances, effectively censoring most of the show's last act. In other parts of the United States, the Thetan footage was tape-delayed until after the 11pm/10c news; in others, it was not shown at all. Until the mid-1980s, film series were broadcast live from the film print via telecine, rather than being transferred to videotape for transmission as is done today.
The sequence involving the Thetan's encounter with the duck hunters was shot at the Metro-Goldwyn-Mayer Backlot #3.
Precursors
Theodore Sturgeon's short story "Unite and Conquer" (1948), published in Astounding Science Fiction, turns on a similar device, humans uniting against a fake alien threat. Sturgeon used the idea again in "Occam's Scalpel" (1971), published in If (magazine).
The Jan/Feb 1951 issue of Weird Science (#5), features the comic story, "The Last War on Earth" by Harvey Kurtzman, wherein a scientist creates a fake threat from another world — in this instance a "Martian" bomb is dropped on an American suburb — eventually uniting Earth against Mars. The story has a twist ending typical of many Weird Science stories.
In Kurt Vonnegut's novel The Sirens of Titan (1959), a fake invasion is carried out to unite Earth and eventually leads to world peace.
The plot also bears a resemblance to the short story "The Delegate from Venus" by Henry Slesar.
Legacy
This episode is similar to the ending of Alan Moore and Dave Gibbons' comic book mini-series, Watchmen (1986–87). According to Moore, while he was writing issue 10, he came across a guide to cult television that featured this episode and was surprised by its similarity to his already planned ending. However, editor Len Wein said that "it simply stole the ending to an episode of The Outer Limits, which Alan fully admitted!" Wein found reusing the episode's ending to be unacceptable, and quit the series when Moore refused to change it. A promotional spot for "The Architects of Fear" is overheard on a television in the comic's penultimate scene. When writing the prequel series Before Watchmen: Ozymandias (2012), Wein specifically referred to this episode as the in-universe source of the idea. While the film adaptation of Watchmen (2009) omits the "space squid", the opening titles of The Outer Limits are shown on a television screen towards the end of the film. In the fifth episode of HBO's Watchmen, a direct reference is made when Adrian Veidt claims that the only weapon that can stave off mankind's extinction is fear, and subsequently claims to be its architect.
The Showtime series The Outer Limits revisited this episode with "Afterlife" (1996), using a more alien approach to the main character, played this time by Clancy Brown. The ending in this case has the aliens coming to retrieve their new "brother".
Filmmaker Kevin Smith has stated that, before offering him the chance to write Superman Lives in 1996, Warner Bros. offered him two projects: A remake of "The Architects of Fear" and Beetlejuice Goes Hawaiian.
In 2011, Nobel prize-winning economist Paul Krugman mentioned the episode when he said that building a defense against a fictional alien invasion could speed recovery from the late-2000s recession; however, he misattributed the episode to The Twilight Zone.
Cast
References
External links
Remembering Janos Prohaska — MovieMorlocks.com: Movie Blog
www.davidjschow.com/The Outer Limits — David J. Schow site (archived)
wearecontrollingtransmission.blogspot.com — We Are Controlling Transmission: Architects of Fear
The Outer Limits (1963 TV series season 1) episodes
1963 American television episodes
|
Jan Frederik Glastra van Loon (16 March 1920 – 22 October 2001) was a Dutch politician of the Democrats 66 (D66) party.
Decorations
References
External links
Official
Mr.Dr. J.F. (Jan) Glastra van Loon Parlement & Politiek
Mr.Dr. J.F. Glastra van Loon (D66) Eerste Kamer der Staten-Generaal
1920 births
2001 deaths
Commanders of the Order of Orange-Nassau
Chairmen of the Democrats 66
Democrats 66 politicians
Knights of the Order of the Netherlands Lion
Dutch academic administrators
Dutch humanists
Dutch nonprofit directors
Dutch nonprofit executives
20th-century Dutch judges
Dutch legal scholars
Dutch resistance members
Dutch people of Indonesian descent
Dutch people of World War II
Dutch political philosophers
Dutch political writers
Leiden University alumni
Academic staff of Leiden University
Indo people
Jurisprudence academics
Members of the Senate (Netherlands)
People from Batavia, Dutch East Indies
Politicians from The Hague
Philosophers of law
State Secretaries for Justice of the Netherlands
Academic staff of the University of Amsterdam
Writers from The Hague
20th-century Dutch educators
20th-century Dutch male writers
20th-century Dutch politicians
|
The Communist Party of Uzbekistan (, ), initially known as Communist Party (Bolshevik) of Uzbekistan, was the ruling communist party of the Uzbek SSR, and a part of the Communist Party of the Soviet Union (CPSU). On 14 September 1991, the party announced its withdrawal from the CPSU.
First Secretaries
References
1925 establishments in Uzbekistan
1991 disestablishments in Uzbekistan
Uzbekistan
Communism in Uzbekistan
Communist parties in the Soviet Union
Defunct communist parties
Defunct political parties in Uzbekistan
Defunct socialist parties in Asia
Formerly ruling communist parties
Political parties disestablished in 1991
Political parties established in 1925
Uzbek Soviet Socialist Republic
|
Dayton E. Phillips (1910 – 1980) was an American politician and a member of the United States House of Representatives for the 1st congressional district of Tennessee.
Biography
Born Dayton Edward Phillips on March 29, 1910, at Shell Creek in Carter County, Tennessee, he grew up on a farm, attended the country school, and went to Cloudland High School in Roan Mountain, Tennessee. From 1929 to 1931, he attended Milligan College in Tennessee. He attended the University of Tennessee at Knoxville and graduated with a Bachelor of Laws degree in 1934. He taught school in Carter County, Tennessee in 1931 and 1932.
Career
Phillips was admitted to the bar in 1935 and commenced practice in Elizabethton, Tennessee, and graduated from National University Law School in Washington, D.C., with a J.D. in 1936. He was the attorney for Carter County from 1938 to 1942. He was district attorney general of the first judicial circuit of Tennessee from 1942 to 1947. During World War II, he served as an enlisted man in the United States Army, with overseas service in the European Theater of Operations, from 1943 to 1945.
Elected as a Republican to the Eightieth and Eighty-first Congresses, Phillips served from January 3, 1947, to January 3, 1951, but was unsuccessful in his bid for renomination in 1950. He resumed the practice of law and was the chancellor of the First Chancery Court of Tennessee. He resided in Elizabethton, Tennessee.
Death
Phillips died on October 23, 1980, in Kingsport, Tennessee. He is interred at Happy Valley Memorial Park, Elizabethton, Tennessee.
References
External links
1910 births
1980 deaths
People from Carter County, Tennessee
Republican Party members of the United States House of Representatives from Tennessee
20th-century American politicians
People from Elizabethton, Tennessee
National University School of Law alumni
University of Tennessee alumni
|
Peter J. Hamill (c. 1885 – January 13, 1930) was an American politician who served in the New York State Assembly from 1917 to his death. A native of Lower Manhattan, he was affiliated with Tammany Hall from an early age and became a Tammany Hall leader in his Assembly district. In late 1929 he was chosen as the Minority Leader of the Assembly to replace Maurice Bloch, who had died of complications from an appendectomy. Hamill would himself be stricken with appendicitis a week later and die from complications of the surgery a week after that.
Life
He attended the public schools. He entered politics as a Democrat, and was an Inspector of the New York City Bureau of Weights and Measures from 1910 to 1915. He married Matilda Van Axen, and they had two children, Mary and Peter Joseph.
Hamill was a member of the New York State Assembly in 1916, 1917, 1918, 1919, 1920, 1921, 1922, 1923, 1924, 1925, 1926, 1927, 1928, 1929 and 1930.
Rise in Tammany Hall
Hamill was forced out of his house at 585 Broome Street in 1923 when it was demolished to make way for an approach to the Holland Tunnel. He and his family moved into 34 Dominick Street, a Federal-style rowhouse that had been constructed in 1826 and modified in 1866. After Thomas "Big Tom" Foley's death in 1925, he was chosen as Tammany Hall leader of the 1st assembly district, beating out such candidates as alderman Martin F. Tanahey and Patrick Whelan, chief clerk of the first district municipal court. Tanahey and Whelan ultimately moved and seconded his leadership, respectively, and Hamill was elected leader on April 29. Tammany Hall subsequently divided the district at Broadway; Hamill continued as leader of the part east of Broadway, eventually sharing this role with the wife of justice Thomas J. Nolan.
He was chosen Minority Leader at the opening of the session on January 1, 1930.
Death
On January 6, he underwent an emergency operation for appendicitis, but remained ill in Stuyvesant Polyclinic Hospital in Manhattan for another week, dying there about 20 minutes past midnight on January 13. He was buried at the Holy Cross Cemetery in Brooklyn.
On January 23, 1930, his widow Matilda Van Axen Hamill was appointed as Supervisor of Investigators for the new Crime Prevention Bureau of the New York City Police Department at a salary of $4,500 a year.
Matilda would retain the title to 34 Dominick Street until 1963. It was designated a New York City Landmark in 2011 over the opposition of its owners.
Sources
GUIDE FOR VOTERS BY CITIZENS UNION in NYT on October 28, 1917 ["Assemblyman 1916–7 with poor showing."]
NOMINEES ANALYZED BY CITIZENS UNION in NYT on October 27, 1918 ["The three years' service of Peter J. Hamill has been without public benefit."]
CITIZEN UNION GIVES LINE ON CANDIDATES in NYT on October 26, 1921 ["...an experienced and active member with a considerably improved record of votes over previous years, but the character of his legislation continues poor."]
GOVERNOR IN THRONG AT HAMILL FUNERAL in NYT on January 17, 1930 (subscription required)
Works cited
1880s births
1930 deaths
Politicians from Manhattan
Democratic Party members of the New York State Assembly
Burials at Holy Cross Cemetery, Brooklyn
Deaths from appendicitis
20th-century American politicians
|
Brian Keeble is a British author and editor. He is the founder of Golgonooza Press and a co-founder of the Temenos and Temenos Academy.
Biography
Keeble is the founder of Golgonooza Press, where he worked as editor, designer and publisher from 1974 to 2004.
He was a co-founder and is a Fellow of the Temenos Academy, whose patron is Charles, Prince of Wales. The Academy is a teaching organization dedicated to the same central idea that had inspired the earlier Temenos, of which he was also a co-founder and editor (1980–1991). Both focus on a devotion to the ‘Arts of the Imagination’ and feature the lectures and works of scholars and teachers committed to the Perennial Philosophy.
The Golgonooza Press Archive, covering the years 1962 to 2012, is in the British Library (Add MS 89131).
He wrote a poem, Mother and Child, which was set to music by John Tavener for choir, organ and temple gong.
Bibliography
Art: For Whom and For What?, (Golgonooza Press, 1998)
Vernon Watkins: Inspiration as Poetry, Poetry as Inspiration, (Temenos Academy, 2002)
Twenty-four Poems, (Golgonooza Press, 2002)
Conversing with Paradise, (Golgonooza Press, 2005)
Shapes of Light, Poems, (Golgonooza Press, 2005)
Every Man an Artist: Readings in the Traditional Philosophy of Art, (World Wisdom, 2005)
Kathleen Raine: Poetic Imagination and the Vision of Reality, (Temenos Academy, 2008)
In His Name and other poems, (Golgonooza Press, 2008)
God & Work: Aspects of Art and Tradition, (World Wisdom, 2009)
Cecil Collins: The Artist as Writer and Image Maker, (Golgonooza Press, 2009)
From a Handful of Dust, Poems, (Golgonooza Press, 2011)
Far from the Dawn, Poems, (Golgonooza Press, 2014)
Daily Bread: Art and Work in the Reign of Quantity, (Selected essays), Edited and introduced by Andrew Frisardi, (Angelico Press, 2015)
Mask After Mask, Poems, (Golgonooza Press, 2018)
These Bright Shadows: The Poetry of Kathleen Raine, (Angelico Press, 2020)
Words to the Wind, Poems, (Golgonooza Press, 2021)
Works edited by Brian Keeble
The Inner Journey of the Poet, and other papers, by Kathleen Raine, (Allen & Unwin, 1982)
A Holy Tradition of Working. An Anthology of the Writings of Eric Gill, (Golgonooza Press, 1983)
What is Civilisation? and other Essays, by Ananda K. Coomaraswamy, (Golgonooza Press, 1989)
Standing on Earth. Selected Essays of Wendell Berry, (Golgonooza Press, 1991)
Meditations, Poems, Pages from a Sketch Book, by Cecil Collins, (Golgonooza Press, 1997)
The Music of Silence, a Composer's Testament, by Sir John Tavener, (Faber & Faber, 1999)
The Vision of the Fool and other Writings, by Cecil Collins, enlarged edition, (Golgonooza Press, 2002)
Temenos Academy Review 7, Kathleen Raine Memorial Issue, (Temenos Academy, 2004)
The Underlying Order and other Essays, by Kathleen Raine, (Temenos Academy, 2008)
That Wondrous Pattern, Essays on Poetry and Poets, by Kathleen Raine, (Counterpoint Press, 2017)
A Holy Tradition of Working, Passages from the writings of Eric Gill, selected with an Introduction by Brian Keeble, new edition with foreword by Wendell Berry, (Angelico Press, 2021)
See also
Temenos Academy Review
Traditionalism
Kathleen Raine
References
Living people
British editors
British poets
British publishers (people)
British spiritual writers
Philosophers of art
British male poets
Year of birth missing (living people)
Traditionalist School
|
Douglas Leslie Ringrose (4 August 1900 – 28 December 1953) was an Australian rules footballer who played for and coached Fitzroy in the Victorian Football League (VFL) during the 1920s.
Ringrose was also an exceptional soccer player when he was a teenager, living in Tasmania.
Ringrose played with West Melbourne Football Club in 1920, before moving to Brighton in 1921.
In 1922, Ringrose was captain-coach of Benalla in the Ovens & Murray Football League and proved a great acquisition for the club, leading them to fourth position before they lost the first semi-final to Wangaratta.
Ringrose won Brighton's Most Consistent Player award in 1927, when they finished runners-up in the VFA Grand Final.
Ringrose, who came from Brighton in 1928, was a handy player for Fitzroy in his two seasons, averaging almost a goal a game. He spent the majority of the 1929 season as playing coach of Fitzroy, with the club managing just two wins.
In 1930, Ringrose coached East Albury in the Ovens & Murray Football League to the Preliminary Final, losing to Wangaratta and breaking his collarbone.
Ringrose trained with Brighton in early 1931 and was also listed as an official Victorian Football League umpire in 1931.
Ringrose was captain-coach of the Yarram Football Club in the Gippsland Football League in 1932 and 1933. Ringrose kicked 31 goals in 1933. Ringrose did not coach Yarram in 1934, but continued to play.
References
External links
Holmesby, Russell and Main, Jim (2007). The Encyclopedia of AFL Footballers. 7th ed. Melbourne: Bas Publishing.
1930 - East Albury FC team photo
1934 - Yarram FC team photo
1900 births
Australian rules footballers from Hobart
Fitzroy Football Club players
Fitzroy Football Club coaches
Brighton Football Club players
1953 deaths
|
Queeristan is a book written by Parmesh Shahani. It was published on 17 August 2020 by Westland Books.
Reception
A Moneycontrol review wrote of the book: "The author covers various aspects of framing diversity and inclusion policies, finding talent from the LGBTQ community, creating a recruitment process that is LGBTQ friendly, creating an LGBTQ-friendly work culture at the workplace, and how to be an ally, whether you identify as LGBTQ or not."
SheThePeopleTv said "Shahani alludes to these practices of tokenism in the book, about how these gestures are empty if changes are not brought about in institutional policies."
References
2020 non-fiction books
LGBT literature in India
2020 LGBT-related literary works
Westland Books books
|
```javascript
class Foo {
#x;
  constructor() {
    // Private fields cannot be deleted; `delete this.#x` is an early SyntaxError.
    // Clear the value instead:
    this.#x = undefined;
  }
}
```
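As a runnable sketch of why the snippet above needs care: `delete` removes ordinary properties, but private fields are part of a class instance's fixed shape, so attempting `delete` on one is rejected at parse time. The `Counter` class below is an illustrative stand-in, not code from any particular library.

```javascript
// Ordinary properties can be added and deleted at runtime; private
// fields (#count) cannot be deleted -- `delete c.#count` would be an
// early SyntaxError, so the field always exists on every instance.
class Counter {
  #count = 0; // private field: fixed part of the instance's shape

  increment() {
    return ++this.#count;
  }
}

const c = new Counter();
c.label = "demo";  // ordinary property: can be added...
delete c.label;    // ...and deleted again
console.log(c.increment()); // 1
console.log("label" in c);  // false
```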
|
Things We Do for Love is a Ghanaian television series. It was telecast on GTV in 2003 to educate youth about sexuality.
Plot
Things We Do for Love is about youths and their lives at school and at home, with their parents getting in their way. Pusher (Adjetey Anang) is in a relationship with Dede but has other girlfriends, and he is the cause of nearly all the trouble in the neighbourhood. His friend BB seeks ideas from Pusher on how to attract Marcia. However, Marcia's brother has romantic feelings for Dede. Shaker (Majid Michel) plays a Lebanese man working in a hotel, a womanizer who could make a lady "melt" with his sweet words; his personality leads him into embarrassing incidents. His attempts to use those charms on Enyonam (Jackie Appiah) do not work either, as she is in love with a calm guy in the neighbourhood, although the two are unable to live in this dreamland called love because she lives in fear of her father.
Episode 1
Pusher teaches BB his walking style. Marcia almost runs into them and apologizes, but Pusher is furious with her because she was listening to music. BB tells Pusher he should exercise patience, but Pusher refuses. Marcia runs to her house, and Pusher asks BB why he loves her.
Max is watching TV at high volume when Marcia runs home to tell her mother and brother what happened. Max becomes furious with the area boys, who do not like him. He asks his sister about the car, and Marcia admits she had to leave the key in it. Max loses his temper and goes to retrieve the car.
As they approach the car, Pusher sees that Marcia has left the key in it and tells BB to have a look. BB says he is taking the car back to her house. He first drives it to meet the area boys under a tree, telling them he will return shortly, and is praised by them. He then delivers the car to Marcia, making her happy. Max goes to the spot where Marcia left the car, meets Pusher, and angrily asks him what happened; Pusher responds angrily but does not reveal the car's location. Max goes home and finds the car parked there, and Marcia tells him that BB brought it.
BB returns to Pusher and tells him that everything is sorted out between him and Marcia. Max complains about BB and all the area boys, and Marcia asks why he is still furious with BB when he has just been given a chance. Max says that if anything happens to her, he will blame BB.
Characters
Pusher is a character called Clotey who obtained the nickname 'Pusher' when his class teacher asked him what he wanted to be in the future: he said he wanted to be a truck pusher, and that is what his classmates called him. He lives with his grandfather and brother; it is implied that his father lives abroad, while his mother is not mentioned. He is arrogant and likes to hang out with the neighbourhood boys who usually give the community problems. He is a womanizer.
Dede is a teenage girl who has a lot of boys in her life. She is Pusher's girlfriend, but breaks up with him because of her relationship with Max.
Marcia is a young lady who is now skeptical about relationships with guys as a result of losing her virginity to her former boyfriend, who coerced her into having sex. She is falling for BB.
BB is best friends with Pusher. BB is articulate and decent, but puts on a front to maintain his reputation as a neighbourhood tough guy like Pusher. His archenemy is Max, because Max stands in his way of falling in love with Marcia, Max's younger sister.
Max is the older, overly protective brother of Marcia. He is in a relationship with Ofeibea but happens to be one of the many guys interested in Dede as well.
Ofeibea is a beautiful young lady who is officially in a relationship with Max. She resists Max's attempts to get intimate. She is naive about sex, but is bent on abstinence.
Julia is usually seen with older, wealthier men. She is Pusher's girlfriend, but their relationship is uncertain because Pusher is using her.
Aluta is close to Pusher and BB, and is one of the neighbourhood area boys who likes to aggravate the community.
AKiller is a drunk who disturbs the community a lot.
Octopussy is a young man who is familiar with boys such as Pusher, BB and Aluta. His real name is Tsatsu. He believes in catching the ladies young and keeping them forever. He does not believe in pursuing older ladies.
High Priest is a young man who loves to rap.
Cambodia is an old soldier who is usually depicted as aggravated by boys such as Pusher, Killer, BB, Aluta and Octopussy.
Lois, who was known for her American accent, was not an easy-going character.
Cast
Adjetey Anang as Pusher
Jackie Appiah as Enyonam
Majid Michel as Shaker
Alice Schreyer as Dede
Sena Tsika as Marcia Mensah
David Bossman as BB
Zimran Clottey as Aluta
Vincent McCauley as Max
High Priest
Abeiku Nana Acquah as AKiller
Octopusy
Adjoa Pieterson
Akwasi Boadi (Akrobeto) as Police Officer
Nat Banini as Cambodia
Julia
Offeibea
Marleen Hutchful as Lois
References
2010 Ghanaian television series debuts
2000s Ghanaian television series
English-language television shows
Ghana Broadcasting Corporation original programming
|
Omiodes albicinctalis is a moth in the family Crambidae. It was described by George Hampson in 1904. It is found in the Bahamas.
References
Moths described in 1904
albicinctalis
|
```scss
.video-react .video-react-poster {
display: inline-block;
vertical-align: middle;
background-repeat: no-repeat;
background-position: 50% 50%;
background-size: contain;
background-color: #000000;
cursor: pointer;
margin: 0;
padding: 0;
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
height: 100%;
img {
display: block;
vertical-align: middle;
margin: 0 auto;
max-height: 100%;
padding: 0;
width: 100%;
}
}
```
|
```python
from django.shortcuts import render, redirect
from django.http import HttpResponse, HttpResponseBadRequest
from django.contrib.auth.decorators import login_required
from bootcamp.decorators import ajax_required
from django.contrib.auth.models import User
import json
from bootcamp.messenger.models import Message
@login_required
def inbox(request):
conversations = Message.get_conversations(user=request.user)
active_conversation = None
messages = None
if conversations:
conversation = conversations[0]
active_conversation = conversation['user'].username
messages = Message.objects.filter(user=request.user,
conversation=conversation['user'])
messages.update(is_read=True)
for conversation in conversations:
if conversation['user'].username == active_conversation:
conversation['unread'] = 0
return render(request, 'messenger/inbox.html', {
'messages': messages,
'conversations': conversations,
'active': active_conversation
})
@login_required
def messages(request, username):
conversations = Message.get_conversations(user=request.user)
active_conversation = username
messages = Message.objects.filter(user=request.user,
conversation__username=username)
messages.update(is_read=True)
for conversation in conversations:
if conversation['user'].username == username:
conversation['unread'] = 0
return render(request, 'messenger/inbox.html', {
'messages': messages,
'conversations': conversations,
'active': active_conversation
})
@login_required
def new(request):
if request.method == 'POST':
from_user = request.user
to_user_username = request.POST.get('to')
try:
to_user = User.objects.get(username=to_user_username)
        except User.DoesNotExist:
try:
to_user_username = to_user_username[
to_user_username.rfind('(')+1:len(to_user_username)-1]
to_user = User.objects.get(username=to_user_username)
            except User.DoesNotExist:
return redirect('/messages/new/')
message = request.POST.get('message')
if len(message.strip()) == 0:
return redirect('/messages/new/')
if from_user != to_user:
Message.send_message(from_user, to_user, message)
return redirect(u'/messages/{0}/'.format(to_user_username))
else:
conversations = Message.get_conversations(user=request.user)
return render(request, 'messenger/new.html',
{'conversations': conversations})
@login_required
@ajax_required
def delete(request):
return HttpResponse()
@login_required
@ajax_required
def send(request):
if request.method == 'POST':
from_user = request.user
to_user_username = request.POST.get('to')
to_user = User.objects.get(username=to_user_username)
message = request.POST.get('message')
if len(message.strip()) == 0:
return HttpResponse()
if from_user != to_user:
msg = Message.send_message(from_user, to_user, message)
return render(request, 'messenger/includes/partial_message.html',
{'message': msg})
return HttpResponse()
else:
return HttpResponseBadRequest()
@login_required
@ajax_required
def users(request):
users = User.objects.filter(is_active=True)
dump = []
template = u'{0} ({1})'
for user in users:
if user.profile.get_screen_name() != user.username:
dump.append(template.format(user.profile.get_screen_name(), user.username))
else:
dump.append(user.username)
data = json.dumps(dump)
return HttpResponse(data, content_type='application/json')
@login_required
@ajax_required
def check(request):
count = Message.objects.filter(user=request.user, is_read=False).count()
return HttpResponse(count)
```
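The views above lean on two model helpers, `Message.get_conversations` and `Message.send_message`, whose implementations are not shown. A framework-free sketch of what `get_conversations` plausibly does, grouping a user's messages by the other participant and counting unread ones, might look like the following; the dict-based data shape is an assumption standing in for the real Django queryset, not the app's actual model.

```python
from collections import OrderedDict

def get_conversations(messages, me):
    """Group a user's messages by the other participant, newest first,
    and count how many messages in each conversation are unread.

    `messages` is an iterable of dicts with keys 'user', 'conversation',
    'date' and 'is_read' -- a hypothetical stand-in for the Message model.
    """
    conversations = OrderedDict()
    # Walk newest-first so the most recently active conversation comes first.
    for msg in sorted(messages, key=lambda m: m['date'], reverse=True):
        if msg['user'] != me:
            continue
        other = msg['conversation']
        entry = conversations.setdefault(other, {'user': other, 'unread': 0})
        if not msg['is_read']:
            entry['unread'] += 1
    return list(conversations.values())

msgs = [
    {'user': 'alice', 'conversation': 'bob', 'date': 1, 'is_read': True},
    {'user': 'alice', 'conversation': 'bob', 'date': 3, 'is_read': False},
    {'user': 'alice', 'conversation': 'carol', 'date': 2, 'is_read': False},
]
convs = get_conversations(msgs, 'alice')
print([c['user'] for c in convs])  # ['bob', 'carol']
print(convs[0]['unread'])          # 1
```

This mirrors how `inbox` treats `conversations[0]` as the active conversation and how both views zero out the `'unread'` counter after marking messages read.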
|
Al-Jama'a al-Islamiyya (), "Islamic Group", may refer to:
Al-Jama'a al-Islamiyya, the Egyptian Sunni Islamist movement
Al-Jamā'ah al-Islāmiyyah al-Aḥmadiyyah, alternative name for the Ahmadiyya movement
Jemaah Islamiyah, a Southeast Asian organization
al-Jama'ah al-Islamiyah al-Musallaha, Islamist insurgent group in Algeria
Al-Jama'a al-Islamiyya (Lebanon), Sunni Islamist political party in Lebanon
Al-Jama’a al-Islamiyyah al-Muqatilah bi-Libya, the armed Islamist group in Libya
Al-Jama'a al-Islamiyya al-Kurdistaniya
See also
Jamaat-e-Islami (disambiguation)
|
Lombardia may refer to:
Lombardia, Italian name for Lombardy region in Italy
Lombardia Svizzera, alternative name of Italian Switzerland
Lombardia Siciliana, ethno-linguistic minority living in Sicily, southern Italy
Lombardia (wine), wine produced in the Lombardy region of north central Italy
Ascelin of Lombardia, a 13th-century Dominican friar
Romano di Lombardia, a municipality in the Province of Bergamo in the Italian region of Lombardy
Palazzo Lombardia, a skyscraper in Milan, Italy
Giro di Lombardia, a cycling race, in Lombardy, Italy
Castello di Lombardia, castle in Enna, Sicily
People with the surname
Pedro Lombardía, Spanish canonist
See also
Lombardi (disambiguation)
Lombardo
|
The Swiss Union of Jewish Students (SUJS) is the umbrella organization of Jewish student and young adult unions in Switzerland. SUJS represents young Jews aged 18 to 35.
SUJS is a member union of the European Union of Jewish Students (EUJS) and of the World Union of Jewish Students (WUJS).
SUJS maintains strong contacts with the Swiss Federation of Jewish Communities.
History
The Swiss Union of Jewish Students was founded in 1948 and has since been the main organization for Jewish young adults in Switzerland.
Main activities
In 2009–2010, SUJS organized the following activities:
Winter International Gathering (WING): WING is an event organized in collaboration between the Union of Young Adults in Italy, the JDC and SUJS. For the past four years, it has brought together about 270 young Jewish people aged 18 to 35.
March of the Living (MOL): MOL is a moving and important event that brings people, mainly young adults, from all around the world to Poland to commemorate the victims of the Shoah. SUJS sent 15 of its members to the 2010 edition.
Demonstration against the show of an antisemitic comedian in Geneva: SUJS and the CICAD held a demonstration to protest against a performance by Dieudonne, an antisemitic comedian.
Member unions
ADEIG
ADEIG stands for Association Des Etudiants Israelites de Geneve. They are active in the Geneva region and often work on programs at the United Nations in Geneva.
Their website is www.adeig.com
ALEJ
ALEJ stands for Association Lausannoise des Etudiants Juifs; they are active in Lausanne and the surrounding region (Canton de Vaud).
This union is in charge of the Jewish students of three main schools, the EPFL, the UNIL and the EHL, a total of 100 to 130 Jewish students.
Their website is www.alej.ch
Jewpoint
Jewpoint is an organization that organizes activities for the students and young adults of Basel and the region.
VJSZ
- VJSZ
Umbrella organizations
- European Union of Jewish Students
- World Union of Jewish Students
Website
SUJS
Jewish youth organizations
Zionists
Zionist organizations
Religious organisations based in Switzerland
Student organisations in Switzerland
Student religious organizations
|
```shell
#!/bin/bash
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
source ${CCPROOT}/examples/common.sh
echo_info "Cleaning up.."
cleanup "${CCP_NAMESPACE?}-sync"
$CCPROOT/examples/waitforterm.sh primarysync ${CCP_CLI?}
$CCPROOT/examples/waitforterm.sh replicasync ${CCP_CLI?}
```
|
```typescript
import { GraphQLProject } from "./base";
import { LoadingHandler } from "../loadingHandler";
import { FileSet } from "../fileSet";
import { ServiceConfig } from "../config";
import { ClientIdentity } from "../engine";
import URI from "vscode-uri";
export function isServiceProject(
project: GraphQLProject
): project is GraphQLServiceProject {
return project instanceof GraphQLServiceProject;
}
export interface GraphQLServiceProjectConfig {
clientIdentity?: ClientIdentity;
config: ServiceConfig;
rootURI: URI;
loadingHandler: LoadingHandler;
}
export class GraphQLServiceProject extends GraphQLProject {
constructor({
clientIdentity,
config,
rootURI,
loadingHandler,
}: GraphQLServiceProjectConfig) {
const fileSet = new FileSet({
rootURI: config.configDirURI || rootURI,
includes: [
...config.service.includes,
".env",
"apollo.config.js",
"apollo.config.cjs",
],
excludes: config.service.excludes,
configURI: config.configURI,
});
super({ config, fileSet, loadingHandler, clientIdentity });
this.config = config;
}
get displayName() {
return this.config.graph || "Unnamed Project";
}
initialize() {
return [];
}
validate() {}
getProjectStats() {
return { loaded: true, type: "service" };
}
resolveFederationInfo() {
return this.schemaProvider.resolveFederatedServiceSDL();
}
}
```
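The `isServiceProject` helper above is a standard `instanceof` type guard. A self-contained sketch of the pattern, using hypothetical stand-in classes rather than the real project types, shows how the `project is GraphQLServiceProject`-style return annotation lets the compiler narrow the argument inside the guarded branch:

```typescript
class BaseProject {
  displayName = "base";
}

class ServiceProject extends BaseProject {
  // Only service projects carry SDL, so access to it must be guarded.
  sdl = "type Query { ok: Boolean }";
}

// The `p is ServiceProject` annotation makes this a type guard:
// in any branch where it returns true, `p` is narrowed to ServiceProject.
function isServiceProject(p: BaseProject): p is ServiceProject {
  return p instanceof ServiceProject;
}

const p: BaseProject = new ServiceProject();
if (isServiceProject(p)) {
  // Narrowed: `p.sdl` is accessible here without a cast.
  console.log(p.sdl.startsWith("type Query")); // true
}
```

The design choice is that the runtime check (`instanceof`) and the compile-time narrowing live in one function, so callers never repeat the cast.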
|
Baseball was one of the many sports held at the 2002 Asian Games in Busan, South Korea, beginning on October 2, 2002. Five East and Southeast Asian nations participated in the tournament. The competition took place at Sajik Baseball Stadium.
Schedule
Medalists
Squads
Results
All times are Korea Standard Time (UTC+09:00)
Preliminary
Final round
Semifinals
3rd–4th
Final
Final standing
References
Japan Baseball
External links
Official website
2002 Asian Games events
2002
Asian Games
2002 Asian Games
|
```kotlin
package kotlinx.coroutines
import kotlinx.coroutines.testing.*
import org.junit.*
class RunBlockingJvmTest : TestBase() {
@Test
fun testContract() {
val rb: Int
runBlocking {
rb = 42
}
rb.hashCode() // unused
}
}
```
|
The Rajshahi Kings are a franchise cricket team based in Rajshahi, Bangladesh, which plays in the Bangladesh Premier League (BPL). They are one of the seven teams competing in the 2016 Bangladesh Premier League. The team is captained by Darren Sammy.
Player draft
The 2016 BPL draft was held on 30 September. Prior to the draft, the seven clubs signed 38 foreign players to contracts, and each existing franchise was able to retain two home-grown players from the 2015 season. A total of 301 players participated in the draft, including 133 local and 168 foreign players; 85 players were selected.
Player transfers
Prior to the 2016 draft, a number of high-profile players moved teams. These included transfers between competing teams, as well as moves resulting from the suspension of the Sylhet Super Stars and the introduction of two new teams, the Khulna Titans and Rajshahi Kings.
Standings
The top four teams will qualify for the playoffs.
advanced to the Qualifier
advanced to the Eliminator
Current squad
CEO – Tahmid Azizul Haque
Head coach – Sarwar Imran
References
Bangladesh Premier League
|
```html
<!DOCTYPE html>
<!--
path_to_url
Unless required by applicable law or agreed to in writing, software
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-->
<html>
<head>
<title>Quite Interesting Quiz</title>
<meta name='viewport' content='width=device-width, initial-scale=1'>
<link rel='stylesheet' href='//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css'>
</head>
<body ng-app='qiqApp'>
<nav class='navbar navbar-default'>
<div class='container'>
<div class='navbar-header'>
<button type='button' class='navbar-toggle collapsed' ng-click='isNavCollapsed = !isNavCollapsed' aria-expanded='false'>
<span class='sr-only'>Toggle navigation</span>
<span class='icon-bar'></span>
<span class='icon-bar'></span>
<span class='icon-bar'></span>
</button>
<a class='navbar-brand' href='/'>Quite Interesting Quiz</a>
</div>
<div class='collapse navbar-collapse' uib-collapse='isNavCollapsed' id='bs-example-navbar-collapse-1'>
<ul class='nav navbar-nav'>
<li><a href='#!/quiz/gcp'>GCP</a></li>
<li><a href='#!/quiz/places'>Places</a></li>
<li><a href='#!/quiz/people'>People</a></li>
</ul>
<qiq-login></qiq-login>
</div>
</div>
</nav>
<div class='container'>
<h1>Quite Interesting Quiz</h1>
<div ng-view></div>
</div>
<script src='//ajax.googleapis.com/ajax/libs/angularjs/1.7.8/angular.js'></script>
<script src='//ajax.googleapis.com/ajax/libs/angularjs/1.7.8/angular-route.js'></script>
<script src='//cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/2.5.0/ui-bootstrap-tpls.js'></script>
<script src='app/app.js'></script>
<script src='app/quizzes/quiz-module.js'></script>
<script src='app/quizzes/quiz-factory.js'></script>
<script src='app/quizzes/quiz-controller.js'></script>
<script src='app/auth/auth-module.js'></script>
<script src='app/auth/auth-factory.js'></script>
<script src='app/auth/auth-controller.js'></script>
<script src='app/auth/qiq-login.js'></script>
</body>
</html>
```
|
```c
/* ====================================================================
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (path_to_url"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* openssl-core@openssl.org.
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (path_to_url"
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ==================================================================== */
#include <assert.h>
#include <string.h>
#include "internal.h"
#ifndef STRICT_ALIGNMENT
# define STRICT_ALIGNMENT 0
#endif
void CRYPTO_cbc128_encrypt(const uint8_t *in, uint8_t *out, size_t len,
const void *key, uint8_t ivec[16],
block128_f block) {
size_t n;
const uint8_t *iv = ivec;
assert(key != NULL && ivec != NULL);
assert(len == 0 || (in != NULL && out != NULL));
if (STRICT_ALIGNMENT &&
((size_t)in | (size_t)out | (size_t)ivec) % sizeof(size_t) != 0) {
while (len >= 16) {
for (n = 0; n < 16; ++n) {
out[n] = in[n] ^ iv[n];
}
(*block)(out, out, key);
iv = out;
len -= 16;
in += 16;
out += 16;
}
} else {
while (len >= 16) {
for (n = 0; n < 16; n += sizeof(size_t)) {
*(size_t *)(out + n) = *(size_t *)(in + n) ^ *(size_t *)(iv + n);
}
(*block)(out, out, key);
iv = out;
len -= 16;
in += 16;
out += 16;
}
}
while (len) {
for (n = 0; n < 16 && n < len; ++n) {
out[n] = in[n] ^ iv[n];
}
for (; n < 16; ++n) {
out[n] = iv[n];
}
(*block)(out, out, key);
iv = out;
if (len <= 16) {
break;
}
len -= 16;
in += 16;
out += 16;
}
memcpy(ivec, iv, 16);
}
void CRYPTO_cbc128_decrypt(const uint8_t *in, uint8_t *out, size_t len,
const void *key, uint8_t ivec[16],
block128_f block) {
size_t n;
union {
size_t t[16 / sizeof(size_t)];
uint8_t c[16];
} tmp;
assert(key != NULL && ivec != NULL);
assert(len == 0 || (in != NULL && out != NULL));
const uintptr_t inptr = (uintptr_t) in;
const uintptr_t outptr = (uintptr_t) out;
/* If |in| and |out| alias, |in| must be ahead. */
assert(inptr >= outptr || inptr + len <= outptr);
if ((inptr >= 32 && outptr <= inptr - 32) || inptr < outptr) {
/* If |out| is at least two blocks behind |in| or completely disjoint, there
* is no need to decrypt to a temporary block. */
const uint8_t *iv = ivec;
if (STRICT_ALIGNMENT &&
((size_t)in | (size_t)out | (size_t)ivec) % sizeof(size_t) != 0) {
while (len >= 16) {
(*block)(in, out, key);
for (n = 0; n < 16; ++n) {
out[n] ^= iv[n];
}
iv = in;
len -= 16;
in += 16;
out += 16;
}
} else if (16 % sizeof(size_t) == 0) { /* always true */
while (len >= 16) {
size_t *out_t = (size_t *)out, *iv_t = (size_t *)iv;
(*block)(in, out, key);
for (n = 0; n < 16 / sizeof(size_t); n++) {
out_t[n] ^= iv_t[n];
}
iv = in;
len -= 16;
in += 16;
out += 16;
}
}
memcpy(ivec, iv, 16);
} else {
/* |out| is less than two blocks behind |in|. Decrypting an input block
* directly to |out| would overwrite a ciphertext block before it is used as
* the next block's IV. Decrypt to a temporary block instead. */
if (STRICT_ALIGNMENT &&
((size_t)in | (size_t)out | (size_t)ivec) % sizeof(size_t) != 0) {
uint8_t c;
while (len >= 16) {
(*block)(in, tmp.c, key);
for (n = 0; n < 16; ++n) {
c = in[n];
out[n] = tmp.c[n] ^ ivec[n];
ivec[n] = c;
}
len -= 16;
in += 16;
out += 16;
}
} else if (16 % sizeof(size_t) == 0) { /* always true */
while (len >= 16) {
size_t c, *out_t = (size_t *)out, *ivec_t = (size_t *)ivec;
const size_t *in_t = (const size_t *)in;
(*block)(in, tmp.c, key);
for (n = 0; n < 16 / sizeof(size_t); n++) {
c = in_t[n];
out_t[n] = tmp.t[n] ^ ivec_t[n];
ivec_t[n] = c;
}
len -= 16;
in += 16;
out += 16;
}
}
}
while (len) {
uint8_t c;
(*block)(in, tmp.c, key);
for (n = 0; n < 16 && n < len; ++n) {
c = in[n];
out[n] = tmp.c[n] ^ ivec[n];
ivec[n] = c;
}
if (len <= 16) {
for (; n < 16; ++n) {
ivec[n] = in[n];
}
break;
}
len -= 16;
in += 16;
out += 16;
}
}
```
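The chaining logic above can be exercised end-to-end with a toy, self-inverse "block cipher" (a fixed-key XOR standing in for a real block function such as AES — an assumption for illustration only). This standalone sketch mirrors the byte-wise `STRICT_ALIGNMENT` paths of `CRYPTO_cbc128_encrypt` and `CRYPTO_cbc128_decrypt`, including the save-before-decrypt step that makes in-place decryption safe:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef void (*block128_f)(const uint8_t in[16], uint8_t out[16],
                           const void *key);

/* Toy single-block "cipher": XOR with a fixed 16-byte key. It is its own
 * inverse, so one function serves for both directions. Illustration only. */
static void toy_block(const uint8_t in[16], uint8_t out[16], const void *key) {
  const uint8_t *k = (const uint8_t *)key;
  for (int i = 0; i < 16; i++) {
    out[i] = in[i] ^ k[i];
  }
}

/* Byte-wise CBC encryption: each plaintext block is XORed with the previous
 * ciphertext block (initially the IV) before the block function is applied. */
static void cbc_encrypt(const uint8_t *in, uint8_t *out, size_t len,
                        const void *key, uint8_t ivec[16], block128_f block) {
  const uint8_t *iv = ivec;
  while (len >= 16) {
    for (int n = 0; n < 16; n++) {
      out[n] = in[n] ^ iv[n];
    }
    block(out, out, key);
    iv = out; /* chain: this ciphertext block is the next block's IV */
    len -= 16;
    in += 16;
    out += 16;
  }
  memcpy(ivec, iv, 16); /* hand the running IV back to the caller */
}

/* Byte-wise CBC decryption. The ciphertext block is copied aside before the
 * block function runs, so in-place operation (in == out) is safe. */
static void cbc_decrypt(const uint8_t *in, uint8_t *out, size_t len,
                        const void *key, uint8_t ivec[16], block128_f block) {
  while (len >= 16) {
    uint8_t prev[16];
    memcpy(prev, in, 16); /* keep the ciphertext: it is the next IV */
    block(in, out, key);
    for (int n = 0; n < 16; n++) {
      out[n] ^= ivec[n];
    }
    memcpy(ivec, prev, 16);
    len -= 16;
    in += 16;
    out += 16;
  }
}
```

Encrypting with one IV buffer and decrypting the result with an identical starting IV recovers the plaintext, and the final IV written back to `ivec` lets a caller continue the chain across multiple calls — the same contract the functions above provide.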
|
```yaml
version: "3.1"
intents:
- affirm
- deny
- greet
- thankyou
- goodbye
- search_concerts
- search_venues
- compare_reviews
- bot_challenge
- nlu_fallback
- how_to_get_started
entities:
- name
slots:
concerts:
type: list
influence_conversation: false
mappings:
- type: custom
venues:
type: list
influence_conversation: false
mappings:
- type: custom
likes_music:
type: bool
influence_conversation: true
mappings:
- type: custom
responses:
utter_greet:
- text: "Hey there!"
utter_goodbye:
- text: "Goodbye :("
utter_default:
- text: "Sorry, I didn't get that, can you rephrase?"
utter_youarewelcome:
- text: "You're very welcome."
utter_iamabot:
- text: "I am a bot, powered by Rasa."
utter_get_started:
- text: "I can help you find concerts and venues. Do you like music?"
utter_awesome:
- text: "Awesome! You can ask me things like \"Find me some concerts\" or \"What's a good venue\""
actions:
- action_search_concerts
- action_search_venues
- action_show_concert_reviews
- action_show_venue_reviews
- action_set_music_preference
session_config:
session_expiration_time: 60 # value in minutes
carry_over_slots_to_new_session: true
```
|
```shell
#!/bin/sh
# @cmd:
pyomo solve scont2.py --transform gdp.bigm --solver=glpk
# @:cmd
python verify_scont.py results.yml
rm results.yml
```
|
Turret Peak () is a prominent rock peak, 2,790 m, standing 7 miles (11 km) northwest of Crosscut Peak in Millen Range. The peak is topped with a 10 m vertical spire, or tower, which is an excellent landmark. Turret Ridge extends northeast from the peak. Named for its distinctive appearance by the Southern party of NZFMCAE, 1962–63.
Mountains of Victoria Land
Pennell Coast
|
Miami Beach, Barbados, near the town of Oistins, is a popular sandy beach on the south coast of the island, with usually calm waters and brilliant sunset views. On its north side is Enterprise Beach, a much more sheltered bay popular with families. Miami Beach is popular with both locals and tourists; each morning local seniors swim in the sea and exercise on the beach, and it is also a good break for bodysurfing and bodyboarding.
A large yellow lifeguard station stands at the junction between Miami Beach and Enterprise Beach.
It is also a popular anchorage for catamaran cruises. Miami Beach has a beach shopping complex and a snack bar which serves a range of local food and rum punch. There are also gardens close by. Miami Beach has been voted one of the Top 10 beaches in Barbados.
In 2004 the beach began to suffer natural erosion by the sea, and it is narrower than it was 20 years ago. Action by the local authorities, the Barbados Coastal Zone Management Unit (CZMU) and the National Conservation Commission (NCC), has stopped the erosion and started a programme that is allowing the beach to heal naturally.
Miami Beach is considered to be a "local hangout place" by those who live near it.
References
Christ Church, Barbados
Beaches of Barbados
|
```csharp
using AspNetCoreSpa.Domain.Entities;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
namespace AspNetCoreSpa.Infrastructure.Persistence.Configurations
{
public class EmployeeTerritoryConfiguration : IEntityTypeConfiguration<EmployeeTerritory>
{
public void Configure(EntityTypeBuilder<EmployeeTerritory> builder)
{
builder.HasKey(e => new { e.EmployeeId, e.TerritoryId })
.IsClustered(false);
builder.Property(e => e.EmployeeId).HasColumnName("EmployeeID");
builder.Property(e => e.TerritoryId)
.HasColumnName("TerritoryID")
.HasMaxLength(20);
builder.HasOne(d => d.Employee)
.WithMany(p => p.EmployeeTerritories)
.HasForeignKey(d => d.EmployeeId)
.OnDelete(DeleteBehavior.ClientSetNull)
.HasConstraintName("FK_EmployeeTerritories_Employees");
builder.HasOne(d => d.Territory)
.WithMany(p => p.EmployeeTerritories)
.HasForeignKey(d => d.TerritoryId)
.OnDelete(DeleteBehavior.ClientSetNull)
.HasConstraintName("FK_EmployeeTerritories_Territories");
}
}
}
```
|
```java
package org.hswebframework.web.datasource.switcher;
import lombok.extern.slf4j.Slf4j;
import org.hswebframework.web.context.ContextKey;
import org.hswebframework.web.context.ContextUtils;
import java.util.Deque;
import java.util.LinkedList;
import java.util.Optional;
@Slf4j
public class DefaultSwitcher implements Switcher {
private String name;
private String defaultId;
private String type;
public DefaultSwitcher(String name, String type) {
this.name = "DefaultSwitcher.".concat(name);
this.defaultId = name.concat(".").concat("_default");
this.type = type;
}
protected Deque<String> getUsedHistoryQueue() {
// history of used ids, stored in the current (thread-bound) context
return ContextUtils.currentContext()
.<Deque<String>>getOrDefault(ContextKey.of(name), LinkedList::new);
}
@Override
public void useLast() {
// nothing to roll back
if (getUsedHistoryQueue().isEmpty()) {
return;
}
// drop the most recently used id; the previous one becomes active again
getUsedHistoryQueue().removeLast();
if (log.isDebugEnabled()) {
String current = current().orElse(null);
if (null != current) {
log.debug("try use last {} : {}", type, current);
} else {
log.debug("try use last default {}", type);
}
}
}
@Override
public void use(String id) {
// push this id as the currently active one
getUsedHistoryQueue().addLast(id);
if (log.isDebugEnabled()) {
log.debug("try use {} : {}", type, id);
}
}
@Override
public void useDefault() {
getUsedHistoryQueue().addLast(defaultId);
if (log.isDebugEnabled()) {
log.debug("try use default {}", type);
}
}
@Override
public Optional<String> current() {
if (getUsedHistoryQueue().isEmpty()) {
return Optional.empty();
}
String activeId = getUsedHistoryQueue().getLast();
if (defaultId.equals(activeId)) {
return Optional.empty();
}
return Optional.of(activeId);
}
@Override
public void reset() {
getUsedHistoryQueue().clear();
if (log.isDebugEnabled()) {
log.debug("reset {} history", type);
}
}
}
```
|
```yaml
### YamlMime:Landing
title: Azure Edge Hardware Center documentation # < 60 chars
summary: Use Azure Edge Hardware Center service to order from a variety of Azure Stack Edge devices as per your business need. # < 160 chars
metadata:
title: Azure Edge Hardware Center documentation
description: Azure Edge Hardware Center Documentation.
ms.service: databox
ms.subservice: edge
ms.topic: landing-page
author: alkohli
ms.author: alkohli
ms.date: 11/30/2021
# linkListType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | whats-new
landingContent:
# Cards and links should be based on top customer tasks or top subjects
# Start card title with a verb
# Card
- title: About Azure Edge Hardware Center
linkLists:
- linkListType: overview
links:
- text: What is Azure Edge Hardware Center?
url: azure-edge-hardware-center-overview.md
- linkListType: concept
links:
- text: FAQ
url: azure-edge-hardware-center-faq.yml
# Card
- title: "Create, manage orders"
linkLists:
- linkListType: deploy
links:
- text: Create an order
url: azure-edge-hardware-center-create-order.md
- linkListType: how-to-guide
links:
- text: Cancel your order
url: azure-edge-hardware-center-manage-order.md#cancel-order
- text: Track your order
url: azure-edge-hardware-center-manage-order.md#track-order
- text: Return hardware
url: azure-edge-hardware-center-manage-order.md#return-hardware
- text: Move order resource
url: your_sha256_hashroup.md
# Card
- title: Troubleshoot orders
linkLists:
- linkListType: how-to-guide
links:
- text: Azure Edge Hardware Center ordering issues
url: azure-edge-hardware-center-troubleshoot-order.md
- text: Open Support ticket
url: azure-edge-hardware-center-contact-microsoft-support.md
```
|
```typescript
/*
* one or more contributor license agreements. See the NOTICE file distributed
* with this work for additional information regarding copyright ownership.
*/
import {makeObservable, override, action, observable} from 'mobx';
import {logger} from 'modules/logger';
import {
fetchDecisionInstance,
DecisionInstanceDto,
} from 'modules/api/decisionInstances/fetchDecisionInstance';
import {NetworkReconnectionHandler} from './networkReconnectionHandler';
type State = {
decisionInstance: DecisionInstanceDto | null;
decisionInstanceId: string | null;
status: 'initial' | 'fetched' | 'error' | 'forbidden';
};
const DEFAULT_STATE: State = {
decisionInstance: null,
decisionInstanceId: null,
status: 'initial',
};
class DecisionInstanceDetails extends NetworkReconnectionHandler {
state: State = {...DEFAULT_STATE};
constructor() {
super();
makeObservable(this, {
state: observable,
handleFetchSuccess: action,
handleFetchFailure: action,
reset: override,
});
}
fetchDecisionInstance = this.retryOnConnectionLost(
async (decisionInstanceId: string) => {
const response = await fetchDecisionInstance(decisionInstanceId);
if (response.isSuccess) {
this.handleFetchSuccess(response.data, decisionInstanceId);
} else {
this.handleFetchFailure(response.statusCode);
}
},
);
handleFetchSuccess = (
decisionInstance: DecisionInstanceDto,
decisionInstanceId: string,
) => {
this.state.decisionInstance = decisionInstance;
this.state.decisionInstanceId = decisionInstanceId;
this.state.status = 'fetched';
};
handleFetchFailure = (statusCode: number) => {
logger.error('Failed to fetch decision instance');
if (statusCode === 403) {
this.state.status = 'forbidden';
return;
}
this.state.status = 'error';
};
reset() {
super.reset();
this.state = {...DEFAULT_STATE};
}
}
export const decisionInstanceDetailsStore = new DecisionInstanceDetails();
```
|
```go
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd netbsd
package route
import (
"runtime"
"syscall"
)
func (m *RouteMessage) marshal() ([]byte, error) {
w, ok := wireFormats[m.Type]
if !ok {
return nil, errUnsupportedMessage
}
l := w.bodyOff + addrsSpace(m.Addrs)
if runtime.GOOS == "darwin" {
// Fix stray pointer writes on macOS.
// See golang.org/issue/22456.
l += 1024
}
b := make([]byte, l)
nativeEndian.PutUint16(b[:2], uint16(l))
if m.Version == 0 {
b[2] = sysRTM_VERSION
} else {
b[2] = byte(m.Version)
}
b[3] = byte(m.Type)
nativeEndian.PutUint32(b[8:12], uint32(m.Flags))
nativeEndian.PutUint16(b[4:6], uint16(m.Index))
nativeEndian.PutUint32(b[16:20], uint32(m.ID))
nativeEndian.PutUint32(b[20:24], uint32(m.Seq))
attrs, err := marshalAddrs(b[w.bodyOff:], m.Addrs)
if err != nil {
return nil, err
}
if attrs > 0 {
nativeEndian.PutUint32(b[12:16], uint32(attrs))
}
return b, nil
}
func (w *wireFormat) parseRouteMessage(typ RIBType, b []byte) (Message, error) {
if len(b) < w.bodyOff {
return nil, errMessageTooShort
}
l := int(nativeEndian.Uint16(b[:2]))
if len(b) < l {
return nil, errInvalidMessage
}
m := &RouteMessage{
Version: int(b[2]),
Type: int(b[3]),
Flags: int(nativeEndian.Uint32(b[8:12])),
Index: int(nativeEndian.Uint16(b[4:6])),
ID: uintptr(nativeEndian.Uint32(b[16:20])),
Seq: int(nativeEndian.Uint32(b[20:24])),
extOff: w.extOff,
raw: b[:l],
}
errno := syscall.Errno(nativeEndian.Uint32(b[28:32]))
if errno != 0 {
m.Err = errno
}
var err error
m.Addrs, err = parseAddrs(uint(nativeEndian.Uint32(b[12:16])), parseKernelInetAddr, b[w.bodyOff:])
if err != nil {
return nil, err
}
return m, nil
}
```
|
```c
/*
* This file is part of FFmpeg.
*
 * FFmpeg is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * FFmpeg is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with FFmpeg; if not, write to the Free Software
*/
#include <math.h>
#include "libavutil/log.h"
#include "libavutil/timer.h"
#include "libavutil/lfg.h"
int main(void)
{
int x = 0;
int i, j;
AVLFG state;
av_lfg_init(&state, 0xdeadbeef);
for (j = 0; j < 10000; j++) {
START_TIMER
for (i = 0; i < 624; i++) {
//av_log(NULL, AV_LOG_ERROR, "%X\n", av_lfg_get(&state));
x += av_lfg_get(&state);
}
STOP_TIMER("624 calls of av_lfg_get");
}
av_log(NULL, AV_LOG_ERROR, "final value:%X\n", x);
/* BMG usage example */
{
double mean = 1000;
double stddev = 53;
double samp_mean = 0.0, samp_stddev = 0.0;
double samp0, samp1;
av_lfg_init(&state, 42);
for (i = 0; i < 1000; i += 2) {
double bmg_out[2];
av_bmg_get(&state, bmg_out);
samp0 = bmg_out[0] * stddev + mean;
samp1 = bmg_out[1] * stddev + mean;
samp_mean += samp0 + samp1;
samp_stddev += samp0 * samp0 + samp1 * samp1;
av_log(NULL, AV_LOG_INFO,
"%f\n%f\n",
samp0,
samp1);
}
/* TODO: add proper normality test */
samp_mean /= 1000;
samp_stddev /= 999;
samp_stddev -= (1000.0/999.0)*samp_mean*samp_mean;
samp_stddev = sqrt(samp_stddev);
av_log(NULL, AV_LOG_INFO, "sample mean : %f\n"
"true mean : %f\n"
"sample stddev: %f\n"
"true stddev : %f\n",
samp_mean, mean, samp_stddev, stddev);
}
return 0;
}
```
|
The Beaune Altarpiece (or The Last Judgement) is a large polyptych altarpiece painted between about 1445 and 1450 by the Early Netherlandish artist Rogier van der Weyden, in oil on oak panels with parts later transferred to canvas. It consists of fifteen paintings on nine panels, of which six are painted on both sides. Unusually for the period, it retains some of its original frames.
Six of the outer panels (or shutters) have hinges for folding; when closed the exterior view of saints and donors is visible. The inner panels contain scenes from the Last Judgement arranged across two registers. The large central panel spans both registers and shows Christ seated on a rainbow in judgement, while below him, the Archangel Michael holds scales to weigh souls. The lower register panels form a continuous landscape, with the panel on the far proper right showing the gates of Heaven, while the entrance to Hell is on the far proper left. Between these, the dead rise from their graves, and are depicted moving from the central panel to their final destinations after receiving judgement.
The altarpiece was commissioned in 1443 for the Hospices de Beaune in eastern France, by Nicolas Rolin, Chancellor of the Duchy of Burgundy, and his wife Guigone de Salins, who is buried in front of the altarpiece's original location. It is in poor condition; it was moved in the 20th century both to shield it against sunlight and protect it from the almost 300,000 visitors the hospice receives annually. It has suffered from extensive paint loss, the wearing and darkening of its colours, and an accumulation of dirt. In addition, a heavy layer of over-paint was applied during restoration. The two painted sides of the outer panels have been separated to be displayed; traditionally, the shutters would have been opened only on selected Sundays or church holidays.
Commission and hospice
Nicolas Rolin was appointed Chancellor of Burgundy by Philip the Good in 1422, a position he held for the next 33 years. His tenure with the duke made him a wealthy man, and he donated a large portion of his fortune for the foundation of the Hôtel-Dieu in Beaune. It is not known why he decided to build in Beaune rather than in his birthplace of Autun. He may have chosen Beaune because it lacked a hospital and an outbreak of the plague had decimated the population between 1438 and 1440. Furthermore, in 1435, when the Treaty of Arras failed to end the longstanding hostility between Burgundy and France, Beaune suffered first the ravages of marauding bands who roamed the countryside scavenging in the late 1430s and early 1440s, and then an ensuing famine. The hospice was built after Rolin gained permission from Pope Eugene IV in 1441, and was consecrated on 31 December 1452. At the same time, Rolin established a religious order to staff the hospice. He dedicated the hospice to Anthony the Great, who was commonly associated with sickness and healing during the Middle Ages.
Rolin declared in the hospice's founding charter, signed in August 1443, that "in the interest of my salvation ... in gratitude for the goods which the Lord, source of all wealth, has heaped upon me, from now on and for always, I found a hospital." In the late 1450s, only a few years before he died, he added a provision to the hospital charter stipulating that the Mass for the Dead be offered twice daily. Rolin's wife, Guigone de Salins, played a primary role in the foundation, as probably did his nephew Jan Rolin. De Salins lived and served at the hospice until her own death in 1470.
Documents relating to the altarpiece's commission survive, with the artist, patron, date of completion and place of installation all known – unusual for a Netherlandish altarpiece. It was intended as the centrepiece for the chapel, and Rolin approached Rogier van der Weyden around 1443, when the hospital was founded. The altarpiece was ready by 1451, the year the chapel was consecrated. Painted in van der Weyden's Brussels workshop – most likely with the aid of apprentices – the panels were transported to the hospice once completed. The altarpiece is first mentioned in a 1501 inventory, at which time it was positioned on the high altar.
The polyptych was intended to provide both comfort and warning to the dying; acting as a reminder of their faith and directing their last thoughts towards the divine. This is evident in its positioning within view of the patients' beds. Medical care was expensive and primitive in the 15th century; the spiritual care of patients was as important as the treatment of physical ailments. For those too ill to walk, Rolin specified that 30 beds be placed within sight of the altarpiece which was visible through a pierced screen. There were usually only two patients per bed, a luxury at a time when six to fifteen in a large bed was more common.
St Sebastian and St Anthony represent healing. Both were associated with bubonic plague and their inclusion is intended to reassure the dying that they will act as intercessors with the divine. St Michael developed a cult following in 15th-century France, and he was seen as a guardian of the dead, a crucial role given the prevalence of plague in the region. There was another severe outbreak in 1441–1442, just before Rolin founded the hospital. According to the art historian Barbara Lane, patients were unlikely to survive their stay at Beaune, yet the representation of St Michael offered consolation as they could "gaze on his figure immediately above the altar of the chapel every time the altarpiece was opened. Like Saints Anthony and Sebastian on the exterior of the polyptych, the archangel offered ... hope that they would overcome their physical ills."
Description
The altarpiece comprises fifteen separate paintings across nine panels, six of which are painted on both sides. When the shutters are opened, the viewer is exposed to the expansive "Last Judgement" interior panels. These document the possible spiritual fates of the viewers: that they might reach Heaven or Hell, salvation or damnation; stark alternatives appropriate for a hospice. When the outer wings (or shutters) are folded, the exterior paintings (across two upper and four lower panels) are visible. The exterior panels serve as a funerary monument for the donors. Art historian Lynn Jacobs believes that the "dual function of the work accounts for the choice of the theme of the Last Judgement on its interior".
When the shutters are closed the polyptych resembles the upper portion of a cross. The elevated central panel allowed additional space for a narrative scene depicting a heavenly vista, a single large figure, or a crucifixion with space for the cross to extend above the other panels. Van der Weyden conveys the heavenly sphere in the tall vertical panel, whereas the earthly is relegated to the lower-register panels and the exterior view. Moreover, the T-shape echoes typical configurations of Gothic churches, where the naves often extended past the aisles into the apse or choir. The imagery of the outer panels is set in the earthly realm with the donors and the saints painted in grisaille to imitate sculpture. Hence, the work clearly distinguishes between figures of the divine, earthly and hellish realms.
Inner panels
As with van der Weyden's Braque Triptych, the background landscape and arrangements of figures extend across individual panels of the lower register to the extent that the separations between panels are ignored. There are instances of figures painted across two adjoining panels, whereas Christ and St Michael are enclosed within the single central panel, giving emphasis to the iconography. The celestial sphere, towards which the saved move, is dramatically presented with a "radiant gold background, spanning almost the entire width of the altarpiece".
The lower register presents Earth and contains the gates to Heaven and Hell. The imposing figure of Christ indicates the "reign of heaven is about to begin." The distinction between the earthly and heavenly realms creates a sense of order, and Christ "exudes calm and control", and a sense of balance and movement throughout the panels.
The presentation of the resurrected dead across the five lower panels is reminiscent of a Gothic tympanum, specifically that at Autun Cathedral. Rolin would have been familiar with the Autun Cathedral entrances, which may have influenced his commissioning of a Last Judgement for the hospice. Additionally, Rolin was aware of the liturgy associated with the Mass for the Dead, and would have known Last Judgement scenes associated with the Mass from 15th-century illuminated manuscripts, such as the full-page Last Judgement in the Hours of Catherine of Cleves, which shows Christ in a similar position, seated above the dead as they rise from their graves.
Upper register
Christ sits in judgement in the upper centre panel. He holds a lily in his right hand and a sword in his left, and sits on a rainbow extending across two panels, his feet resting on a sphere. His right hand is raised in the act of benediction, and his left hand is lowered. These positions indicate the act of judgement; he is deciding if souls are to be sent to Heaven or Hell, his gestures echoing the direction and positioning of the scales held by the Archangel Michael beneath him. His palms are open, revealing the wounds sustained when his hands were nailed to the cross, while his cope gapes in places, making visible the wound from the lance, from which deep-red blood pours.
Christ's face is identical to the representation in the Braque Triptych, completed just a few years later in 1452. Christ, placed so high in the pictorial space and spanning both registers, orchestrates the entirety of the inner panels. Whereas earlier Last Judgements might have seemed chaotic, here he brings a sense of order.
The Archangel Michael, as the embodiment and conduit of divine justice, is positioned directly below Christ, the only figure to reach both Heaven and Earth. He wears a dispassionate expression as he holds a set of scales to weigh souls. Unusually for Christian art, the damned outweigh the blessed; Michael's scales have only one soul in each pan, yet the left pan tips below the right. Michael is given unusual prominence in a "Last Judgement" for the period, and his powerful presence emphasises the work's function in a hospice and its preoccupation with the liturgy of death. His feet are positioned as if he is stepping forward, about to move out of the canvas, and he looks directly at the observer, giving the illusion of judging not only the souls in the painting but also the viewer.
Michael, like Sebastian and Anthony, was a plague saint and his image would have been visible to patients through the openings of the pierced screen as they lay in their beds. He is portrayed with iconographic elements associated with the Last Judgement, and, dressed in a red cope with woven golden fabrics over a shining white alb, is by far the most colourful figure in the lower panels, "hypnotically attracting the viewer's glance" according to Lane. He is surrounded by four cherubs playing trumpets to call the dead to their final destination. Michael's role in the Last Judgement is emphasised through van der Weyden's use of colour: Michael's gleaming white alb contrasts with the cherubs' red vestments, set against a blue sky directly below heaven's golden clouds.
Both of the upper register wings contain a pair of angels holding instruments of the Passion. These include a lance, a crown of thorns and a stick with a sponge soaked in vinegar. The angels are dressed in white liturgical vestments, including an alb and an amice.
Beneath Michael, souls scurry left and right. The saved walk towards the gates of Heaven where they are greeted by a saint; the damned arrive at the mouth of Hell and fall en masse into damnation. The souls balanced in the scales are naked. The blessed look towards Christ, the banished look downwards. Both groups are tilted in the direction of Christ's hands. Reinforcing this, inscriptions around the groupings read VIRTUTES (Virtues) and PECCATA (sins).
Lower register
The Virgin Mary, John the Baptist, the twelve Apostles and an assortment of dignitaries are positioned in a Deësis, at either side of Michael. The apostles are seated in a semicircle; St Peter is dressed in red on the far left, and St Paul, dressed in green, is on the far right. The seven haloed dignitaries, dressed in contemporary clothing, are unidentified but include a king, a pope, a bishop, a monk, and three women. Rather than general representative types, they are portraits of specific unidentified individuals, according to Shirley Blum.
The dead rise from their graves around Michael's feet; some emerge to walk towards Heaven, others towards Hell. They are on a dramatically reduced scale compared to the saints. Lorne Campbell notes that the panels indicate a deeply pessimistic view of humanity, with the damned far outnumbering the saved, especially compared to Stefan Lochner's Cologne panel, where the saved crowd around the gate to Heaven.
The souls undergo a gradual transformation as they move from panel to panel. Those rising from their graves at Michael's feet show little expression, but become more animated as they move to either side; horror and desperation become especially visible on the faces of the damned as they move towards Hell.
On the left, the saved have, according to Jacobs, "the same beatific expressions", but their postures gradually change from facing Christ and Michael to looking towards Heaven's gate, most notably with the couple below Mary where the man turns the woman's gaze away from Michael, and towards Heaven. This contrasts with another couple on the opposite panel who face Hell; the woman is hunched over as the man raises his hand in vain to beseech God for mercy.
Heaven is represented by an entrance to the Heavenly City, which is in a contemporary Gothic style illuminated by long, thin rays of light. The saved approach clasping their hands in prayer and are greeted at the entrance by an angel. Only a few souls pass through the heavenly gates at a time. The imagery of a church as an earthly representation of Heaven was popularised in the 13th century by theologians such as Durandus; the gate to Heaven in this work resembles the entrance to the Beaune hospice. The way to Heaven is shown clearly as a gilded church – the saved ascend a set of steps, turn right, and disappear from sight. It is fully enclosed in a single panel, whereas Hell extends onto the adjoining panel, perhaps hinting that sin contaminates all around it.
Van der Weyden depicts Hell as a gloomy, crowded place of both close and distant fires, and steep rock faces. The damned tumble helplessly into it, screaming and crying. The sinners enter Hell with heads mostly bowed, dragging each other along as they go. Traditionally, a Last Judgement painting would depict the damned tormented by malevolent spirits; yet here the souls are left alone, the only evidence of their torment in their expressions.
The hellscape is painted so as to instil terror, but without devils. Erwin Panofsky was the first to mention this absence, and proposed that van der Weyden had opted to convey torment in an inward manner, rather than through elaborate descriptions of devils and fiends. He wrote, "The fate of each human being ... inevitably follows from his own past, and the absence of any outside instigator of evil makes us realize that the chief torture of the Damned is not so much physical pain as a perpetual and intolerably sharpened consciousness of their state". According to Bernhard Ridderbos, van der Weyden accentuated the theme by "restricting the number of the dead and treating them almost as individuals. As the damned approach the abyss of hell they become more and more compressed."
Exterior panels
The six exterior panels consist of two donor wings, two containing saints, and two panels with Gabriel presenting himself to Mary. The donors are on the outer wings, kneeling in front of their prayer books. Four imitation statues in grisaille make up the inner panels. The lower two depict St Sebastian and St Anthony. Sebastian was the saint of plagues and an intercessor against epidemics, Anthony the patron saint of skin diseases and ergotism, then known as St Anthony's Fire. The two saints had close associations with the Burgundian court: Philip the Good was born on St Anthony's day, he had an illegitimate son named Anthony, and two of Rolin's sons were named Anthony. St Sebastian was the patron saint of Philip the Good's chivalric Order of the Golden Fleece.
The two small upper register panels show a conventional Annunciation scene, with the usual dove representing the Holy Spirit. The two sets of panels, unlike those on the interior, are compositionally very different. The figures occupy distinctly separate niches and the colour schemes of the grisaille saints and the donors contrast sharply.
Like many mid-15th century polyptychs, the exterior panels borrow heavily from the Ghent Altarpiece, completed in 1432. The use of grisaille is borrowed from that work, as is the treatment of the Annunciation. Van der Weyden uses iconography in the Beaune exterior that is not found in his other works, suggesting that Rolin may have asked that the altarpiece follow van Eyck's example. Van der Weyden was not inclined merely to imitate though, and arranged the panels and figures in a concentrated and compact format. Jacobs writes that "the exterior presents the most consistent pictorial rendering of trompe l'oeil sculpture to date". Gabriel's scroll and Mary's lily appear to be made of stone; the figures cast shadows against the back of their niches, creating a sense of depth which adds to the illusion.
The exterior panels are drab, according to Blum, who writes that on Rolin's panel the most colourful figure is the red angel, which, with its gold helmet and keys, "emerges like an apparition". Rolin and de Salins can be identified by the coats-of-arms held by the angels; husband and wife kneel at cloth-covered prie-dieux (prayer desks) displaying their emblems. Although de Salins was reputedly pious and charitable, and perhaps even the impetus for the building of the hospice, she is placed on the exterior right, traditionally thought of as an inferior position corresponding to Hell, linking her to Eve, original sin and the Fall of man.
Van Eyck had earlier portrayed Rolin in the 1435 Madonna of Chancellor Rolin, and the patron is recognizable from that work; both portraits show similar lips, a large chin and somewhat pointed ears. In van Eyck's portrait, Rolin is presented as perhaps pompous and arrogant; here – ten years later – he appears more thoughtful and concerned with humility. Campbell notes wryly that van der Weyden may have been able to disguise the sitter's ugliness and age, and that the unusual shape of his mouth may have been downplayed. He writes that while "van Eyck impassively recorded, van der Weyden imposed a stylised and highly personal vision of the subject". Van Eyck's depiction was most likely the more accurate; van der Weyden embellished, mainly by lengthening the nose, enlarging the eyes and raising the eyebrows.
Inscriptions
The panels contain quotations in Latin from several biblical texts. They appear either as lettering seemingly sewn into the edges of the figures' clothes (mostly hidden in the folds), or directly on the surface of the central inner panel. The latter occur in four instances; two pairs of text float on either side of Christ, two around Michael. Beneath the lily, in white paint are the words of Christ: VENITE BENEDICTI PATRIS MEI POSSIDETE PARATUM VOBIS REGNUM A CONSTITUTIONE MUNDI ("Come ye blessed of my father, inherit the kingdom prepared for you from the foundations of the world"). The text beneath the sword reads: DISCEDITE A ME MALEDICTI IN IGNEM ÆTERNUM QUI PARATUS EST DIABOLO ET ANGELIS EJUS ("Depart from me ye cursed, into everlasting fire, prepared for the devil and his angels").
The inscriptions follow the 14th-century convention of showing figures, imagery and motifs associated with the saved to Christ's right, and those of the damned to his left. The words beneath the lily (the benedicti) read upwards towards Heaven, their curves leaning in towards Christ. The text to the left (the maledicti) flows in the opposite direction; from the highest point downwards. The inscriptions to Christ's right are decorated in light colours, to the extent that they are usually difficult to discern in reproduction. The lettering opposite faces downwards, and is applied with black paint.
Condition
A number of the panels are in poor condition, owing variously to darkening of the colours, accumulated dirt and poor decisions during early restorations. The altarpiece stayed in the chapel from the time of its installation until the French Revolution, during which it was hidden in an attic for decades. When it was brought out, the nude souls – thought to be offensive – were painted over with clothing and flames; it was moved to a different room, hung close to the ground, and portions were whitewashed. In 1836, the Commission of Antiquities retrieved it and began plans to have it restored. Four decades later, between 1875 and 1878, it underwent major restoration, when many of these additions were removed, though not without significant damage to the original paintwork, such as the loss of pigment to the wall-hangings in the donor panels, which were originally red and gold. In general, the central inside panels are better preserved than the interior and exterior wings. De Salins' panel is damaged and its colours have darkened with age; originally the niche was a light blue (today it is light green) and the shield held by the angel was painted in blue.
The panels were laterally divided so both sides could be displayed simultaneously, and a number have been transferred to canvas.
Sources and influences
Since before 1000, complex depictions of the Last Judgement had been developing as a subject in art, and from the 11th century became common as wall-painting in churches, typically placed over the main door in the west wall, where it would be seen by worshippers as they left the building. Iconographical elements were gradually built up, with St Michael weighing the souls first seen in 12th-century Italy. Since this scene has no biblical basis, it is often thought to draw from pre-Christian parallels such as depictions of Anubis performing a similar role in Ancient Egyptian art. In medieval English, a wall-painting of the Last Judgement was called a doom.
Van der Weyden may have drawn influence from Stefan Lochner's 1435 Last Judgement, and a similar 1420 painting now in the Hotel de Ville, Diest, Belgium. Points of reference include Christ raised over a Great Deësis of saints, apostles and clergy above depictions of the entrance to Heaven, and the gates of Hell. In both of the earlier works, Christ perches on a rainbow; in the Deësis panel he is also above a globe. While the two earlier works are filled with dread and chaos, van der Weyden's panels display the sorrowful, self-controlled dignity typical of his best work. This is most evident in the manner in which the oversized and dispassionate Christ orchestrates the scene from Heaven.
The work's moralising tone is apparent from some of its more overtly dark iconography, its choice of saints, and how the scales tilt far lower beneath the weight of the damned than the saved. The damned to Christ's left are more numerous and less detailed than the saved to his right. In these ways it can be compared to Matthias Grünewald's Isenheim Altarpiece, which served much the same purpose, having been commissioned for the Monastery of St Anthony in Isenheim, which cared for the dying.
The similarities between the altarpiece and the late-1460s Last Judgement by van der Weyden's apprentice Hans Memling have led art historians to suggest a common tie with the Florentine banker Angelo Tani, who gave commissions to van der Weyden before his death in 1464. Because Memling's apprenticeship post-dated the completion and installation of the altarpiece, art historians speculate that Tani or Memling saw it in situ, or that Memling came into possession of a workshop copy.
In Memling's work the Deësis and Christ's placement, above St Michael with his scales, are almost identical to the Beaune Altarpiece. Despite the marked similarities, the crowded scenes in Memling's Last Judgement contrast sharply with "the hushed serenity of Rogier's composition", according to Lane, and in a mirror image of van der Weyden's altarpiece, Memling shows the saved outweighing the damned in St Michael's scales.
Notes
References
Citations
Sources
Acres, Alfred. "Rogier van der Weyden's Painted Texts". Artibus et Historiae, Volume 21, No. 41, 2000.
Blum, Shirley Neilsen. Early Netherlandish Triptychs: A Study in Patronage. Berkeley: California Studies in the History of Art, 1969.
Campbell, Lorne. Van der Weyden. London: Chaucer Press, 2004.
Campbell, Lorne. Van der Weyden. New York: Harper & Row, 1980.
Campbell, Lorne. "Early Netherlandish Triptychs: A Study in Patronage by Shirley Neilsen Blum" (review). Speculum, Volume 47, No. 2, 1972.
Drees, Clayton. The Late Medieval Age of Crisis and Renewal, 1300–1500. Westport: Greenwood, 2000.
Hall, James. A History of Ideas and Images in Italian Art. London: John Murray, 1983.
Hayum, Andrée. "The Meaning and Function of the Isenheim Altarpiece: The Hospital Context Revisited". Art Bulletin, Volume 59, No. 4, 1977.
Jacobs, Lynn. Opening Doors: The Early Netherlandish Triptych Reinterpreted. University Park: Pennsylvania State University Press, 2011.
Jacobs, Lynn. "The Inverted 'T'-Shape in Early Netherlandish Altarpieces: Studies in the Relation between Painting and Sculpture". Zeitschrift für Kunstgeschichte, Volume 54, No. 1, 1991.
Lane, Barbara. "Requiem aeternam dona eis: The Beaune Last Judgment and the Mass of the Dead". Simiolus: Netherlands Quarterly for the History of Art, Volume 19, No. 3, 1989.
Lane, Barbara. "The Patron and the Pirate: The Mystery of Memling's Gdańsk Last Judgment". The Art Bulletin, Volume 73, No. 4, 1991.
McNamee, Maurice. Vested Angels: Eucharistic Allusions in Early Netherlandish paintings. Leuven: Peeters Publishers, 1998.
Panofsky, Erwin. Early Netherlandish Painting: Its Origins and Character. New York: Harper & Row, 1953.
Ridderbos, Bernhard; Van Buren, Anne; Van Veen, Henk. Early Netherlandish Paintings: Rediscovery, Reception and Research. Amsterdam: Amsterdam University Press, 2005.
Smith, Jeffrey Chipps. The Northern Renaissance. London: Phaidon Press, 2004.
Smith, Molly Teasdale. "On the Donor of Jan van Eyck's Rolin Madonna". Gesta, Volume 20, No. 1, 1981.
Upton, Joel Morgan. Petrus Christus: his place in Fifteenth-Century Flemish painting. University Park: Pennsylvania State University Press, 1989.
Vaughan, Richard. Philip the Good. Martlesham: Boydell and Brewer, 2012.
External links
1445 paintings
1446 paintings
1447 paintings
1448 paintings
1449 paintings
1450 paintings
Paintings based on the Book of Revelation
Angels in art
Polyptychs
Paintings by Rogier van der Weyden
Beaune
Paintings depicting Jesus
Paintings of the Virgin Mary
Paintings depicting John the Baptist
Paintings depicting Michael (archangel)
Paintings of apostles
The Last Judgement in art
|
```go
/*
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 */
// Package proto defines the protobuf codec. Importing this package will
// register the codec.
package proto
import (
"fmt"
"github.com/golang/protobuf/proto"
"google.golang.org/grpc/encoding"
)
// Name is the name registered for the proto compressor.
const Name = "proto"
func init() {
encoding.RegisterCodec(codec{})
}
// codec is a Codec implementation with protobuf. It is the default codec for gRPC.
type codec struct{}
func (codec) Marshal(v interface{}) ([]byte, error) {
vv, ok := v.(proto.Message)
if !ok {
return nil, fmt.Errorf("failed to marshal, message is %T, want proto.Message", v)
}
return proto.Marshal(vv)
}
func (codec) Unmarshal(data []byte, v interface{}) error {
vv, ok := v.(proto.Message)
if !ok {
return fmt.Errorf("failed to unmarshal, message is %T, want proto.Message", v)
}
return proto.Unmarshal(data, vv)
}
func (codec) Name() string {
return Name
}
```
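The codec above pairs an empty struct with gRPC's three-method `Codec` contract (`Marshal`, `Unmarshal`, `Name`). The same shape can be sketched without the gRPC or protobuf dependencies; the `Codec` interface and `jsonCodec` below are hypothetical stand-ins that swap protobuf for the standard library's `encoding/json`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Codec mirrors the three-method shape of grpc's encoding.Codec interface.
type Codec interface {
	Marshal(v interface{}) ([]byte, error)
	Unmarshal(data []byte, v interface{}) error
	Name() string
}

// jsonCodec is a hypothetical codec that uses encoding/json instead of protobuf.
type jsonCodec struct{}

func (jsonCodec) Marshal(v interface{}) ([]byte, error) { return json.Marshal(v) }

func (jsonCodec) Unmarshal(data []byte, v interface{}) error { return json.Unmarshal(data, v) }

func (jsonCodec) Name() string { return "json" }

func main() {
	var c Codec = jsonCodec{}
	b, err := c.Marshal(map[string]int{"a": 1})
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Name(), string(b)) // prints: json {"a":1}
}
```

Registering such a codec with `encoding.RegisterCodec` in an `init` function, as the proto codec does, makes it selectable by name on a per-call or per-connection basis.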
|
```ocaml
(* Unison file synchronizer: src/fileutil.mli *)
(* Convert backslashes in a string to forward slashes. Useful in Windows. *)
val backslashes2forwardslashes : string -> string
val removeTrailingSlashes : string -> string
```
|
Maurice Huisman (1912 – 23 July 1993) was a Belgian Opera director.
Life
Born in Brussels, a chemist by training, he and his brother Jacques were involved in the founding of the Comédiens routiers, the precursors of the Théâtre national de Belgique established in 1945.
When in 1959 he succeeded Joseph Rogatchewsky at the head of La Monnaie, he developed a policy of international exchange and public outreach, with a particular focus on youth. That same year, he brought in the choreographer Maurice Béjart, whose dancers joined the ballet troupe of La Monnaie to create a highly successful Rite of Spring. Following this, Huisman and Béjart founded the Ballet of the 20th Century in 1960. Wishing to broaden the company's audience, they took dance out of the Théâtre de la Monnaie to present Les Quatre Fils Aymon at the Cirque Royal in 1961, the choreographer's first major popular success. This ballet was followed by many others, which made La Monnaie one of the foremost choreographic stages in Europe.
But Huisman did not abandon lyric productions. He engaged renowned directors such as Franco Zeffirelli (Rigoletto and Falstaff), Jean-Pierre Ponnelle (several operas by Rossini), and Wieland Wagner (Tristan und Isolde, whose premiere took place in Brussels before Bayreuth).
Although he preferred to give young artists and original productions a chance, Huisman nevertheless invited many stars, such as Victoria de los Ángeles, Elisabeth Schwarzkopf, Mario Del Monaco, José Carreras and José van Dam. In 1968, he hired Jacques Brel for the first French version of Man of La Mancha, an adaptation of Dale Wasserman's musical, which had played on Broadway in 1965. The premiere took place at La Monnaie on October 4, 1968, and the production then moved to Paris in December.
Huisman also paid attention to the revival of Baroque music, staging works by Rameau, Cavalli and Monteverdi, and to 20th-century works (Janáček, Alban Berg, Dario Fo, Philip Glass, Bob Wilson).
He died in a car crash.
References
External links
1912 births
1993 deaths
Musicians from Brussels
Opera managers
Directors of La Monnaie
Road incident deaths in Belgium
|
Sergio Di Zio is a Canadian actor. He starred in the television series Flashpoint as Michelangelo "Spike" Scarlatti until the show concluded on December 13, 2012. His other works include the films The Lookout, Cinderella Man and Senior Trip; the television series This is Wonderland and Northern Town; and voice roles in the animated series Stoked and Babar and the Adventures of Badou. He also appeared in the stage debut of Léo, written by Rosa Labordé, for which he received a Dora Award nomination in 2006. Di Zio was part of the animated show Grojband until it concluded in May 2015. More recently he has transitioned to short films, though he continues to take on a variety of projects.
Career
Di Zio is best known for playing Michelangelo Scarlatti, nicknamed "Spike", on the CTV police drama Flashpoint. However, prior to landing his breakthrough role, he had appeared in over 30 movies and TV series. After making his debut in the 1995 film Senior Trip, Di Zio appeared in a string of telepics including The Wall, Major Crime, Freak City, Rembrandt: Fathers & Sons, and RFK. Additionally, he guest starred on other Canadian series, such as Murdoch Mysteries, Republic of Doyle, and even played "Ripper" on Stoked for 11 episodes.
Film appearances include Ron Howard's Cinderella Man, Boondock Saints, Flash of Genius, and the Independent Spirit Awards winning The Lookout, playing Deputy Ted. Di Zio has starred in Just Buried, 19 Months and the Peter Wellington film, Luck, winner of the South by Southwest Film Festival.
Sergio's TV movie appearances include Robert Ludlum's Covert One: The Hades Factor, John Stamos' The Wedding Wars and the Fox biopic RFK, where he played Robert F. Kennedy’s adviser and speechwriter Adam Walinsky.
In July 2012, he made a brief appearance in the show The Listener, reprising his Flashpoint character Spike, in the episode "Now You See Him".
In 2015, Di Zio starred in the film The Walk as officer Genco.
Filmography
References
External links
Northern Stars
CTV.ca biography
Living people
Male actors from Toronto
Canadian male film actors
Canadian people of Italian descent
Canadian male stage actors
Canadian male television actors
Canadian male voice actors
20th-century Canadian male actors
21st-century Canadian male actors
Best Supporting Actor in a Drama Series Canadian Screen Award winners
Year of birth missing (living people)
|
James J. Stukel (born March 30, 1937) is an American former educator who served as the 15th president of the University of Illinois system.
Early life
James Stukel was born on March 30, 1937, in Joliet, Illinois, to Philip and Julia Stukel. James and his sole sibling, a sister 13 years older than he was, had a modest upbringing. His father, a pulp mill worker, and his mother, a homemaker, maintained a small, clapboard house. While neither of his parents had more than an eighth grade education, Stukel would say of them, "my father had a real gift for numbers. He could do things in his head that were remarkable, and my mother was extremely sharp until the day she died...they were...bright." His parents began to save for his college education soon after his birth.
Stukel's parents instilled in him a strong work ethic. He would later say, "they were pretty stern regarding my grades and homework...and I always worked." In third grade, he joined the school band and would practice his saxophone three to five hours each day. Of his band experience, he said, "nothing was given; it was earned." He credited band with forcing him to set goals; according to Stukel, his "whole life...is based around competition and goal setting."
In junior high, Stukel started a paper route to earn income. In high school, he entered into student politics and was elected junior class president. His opponent would later remark that he was a "class act" and an "outstanding student and quiet leader."
College
A high school chemistry teacher, recognizing Stukel's potential in engineering, drove him to visit Purdue University. He would later joke, "the University of Illinois wasn't in his vocabulary, but he took over the decision-making process." Stukel enrolled at Purdue and joined the Phi Gamma Delta fraternity. To help pay for school, he played saxophone with his dance band, The Spotlighters. The band played music from Woody Herman, Stan Getz, and other jazz artists. During the summer, Stukel would play at resorts.
It was at Purdue that Stukel met his wife Joan Helpling, a majorette with the Purdue marching band. The two toured Europe as members of a variety band. They would marry during their senior years. Stukel would later comment on his wife, "I [have] been...lucky in that I have a very supportive wife who...influenced the way I developed...in...positive ways." Stukel graduated with a Bachelor of Science degree in engineering from Purdue shortly after his marriage. The couple then moved to Virginia.
Stukel earned his M.S. and Ph.D. from the University of Illinois at Urbana-Champaign.
Career
After the completion of his Ph.D., Stukel joined the faculty of the Engineering College. He rose to the level of Associate Dean before transferring to the University of Illinois at Chicago. While there, he served in a variety of administrative capacities, assuming the roles of the Vice-Chancellor for Research, the Vice-Chancellor for Academic Affairs, and finally, Chancellor of the campus.
After his four-year tenure as chancellor, Stukel was selected as president of the University of Illinois system by the UI Board of Trustees. He served in this capacity for approximately 10 years (1995–2005) and was succeeded by B. Joseph White.
A residence hall at the University of Illinois at Chicago, the James Stukel Towers, was named after the former president.
References
External links
Daily Illini, "University remembers, recounts Stukel's tenure"
1937 births
Living people
Leaders of the University of Illinois
People from Joliet, Illinois
Purdue University College of Engineering alumni
University of Illinois Urbana-Champaign alumni
|
The Sejm of the Republic of Poland (; Polish: Sejm Rzeczypospolitej Polskiej) is the lower house of the Polish parliament. Its name comes from what was once a generic Polish word for a political gathering. It is also used to refer to historical diets or assemblies.
Pre-partition sejms
Sejm of the Kingdom of Poland, 15th–16th centuries
Sejm of the Polish–Lithuanian Commonwealth, 1569–1793
Sejm of Four Lands, or Council of Four Lands (Sejm Czterech Ziem, Va'ad Arba' Aratzot), central Jewish authority in Poland, 1580–1764
Types of sessions
Confederated sejm (sejm skonfederowany), a form of sejm where decisions were made by the majority of deputy votes cast
Convocation sejm (sejm konwokacyjny), part of the process of royal elections in which candidates were put forward and rules of election established
Coronation sejm (sejm koronacyjny), the first sejm convened by a newly crowned king
Election sejm (sejm elekcyjny), the election of the king by the nobility
Pacification sejm (Sejm pacyfikacyjny), held after a period of conflict, usually a disputed royal election, to bring peace and unity to the country
Specific sessions
In chronological order:
Election Sejm of 1632
Silent Sejm (Sejm Niemy), 1717
Convocation Sejm (1764)
Repnin Sejm (Sejm Repninowski), 1767–1768
Partition Sejm (Sejm Rozbiorowy), 1773–1775
Great Sejm (Sejm Wielki), 1788–1792
Grodno Sejm (Sejm Grodzieński), 1793
Post-partition sejms
In chronological order:
Galician Sejm (Sejm Galicyjski, Sejm Krajowy), or Diet of Galicia and Lodomeria, 1861–1918
Silesian Sejm, or Silesian Parliament (Sejm Śląski), legislature of the Autonomous Silesian Voivodeship, 1920–1939
Sejm of the Republic of Central Lithuania (Sejm Litwy Środkowej), 1922
Contract Sejm (sejm kontraktowy), 1989–1991
See also
Saeima, the parliament of Latvia
Seimas, the parliament of Lithuania
Seym River, a river in Russia and Ukraine spelled "Sejm" in Polish
|
Jean-Louis Mandengue (born September 15, 1971, in Paris) is a retired male boxer from France. At the 1996 Summer Olympics in Atlanta, Georgia, he fought in the men's light-heavyweight division (– 81 kg) and lost to Brazil's Daniel Bispo in the second round of the tournament.
External links
sports-reference
1971 births
Living people
Light-heavyweight boxers
Boxers at the 1996 Summer Olympics
Olympic boxers for France
Boxers from Paris
French male boxers
|
```typescript
import * as path from 'path';
import { runTests } from '@vscode/test-electron';
import { EXTENSION_ROOT_DIR_FOR_TESTS } from './constants';
import { getChannel } from './utils/vscode';
const workspacePath = path.join(__dirname, '..', '..', 'src', 'testMultiRootWkspc', 'multi.code-workspace');
process.env.IS_CI_SERVER_TEST_DEBUGGER = '1';
process.env.VSC_PYTHON_CI_TEST = '1';
function start() {
console.log('*'.repeat(100));
console.log('Start Debugger tests');
runTests({
extensionDevelopmentPath: EXTENSION_ROOT_DIR_FOR_TESTS,
extensionTestsPath: path.join(EXTENSION_ROOT_DIR_FOR_TESTS, 'out', 'test', 'index'),
launchArgs: [workspacePath],
version: getChannel(),
extensionTestsEnv: { ...process.env, UITEST_DISABLE_INSIDERS: '1' },
}).catch((ex) => {
console.error('End Debugger tests (with errors)', ex);
process.exit(1);
});
}
start();
```
|
```php
<?php
namespace Modules\Team\Http\Controllers;
use App\ApiConfig\ApiConfig;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Validator;
use Modules\Team\Http\Requests\ModalTeamRequest;
use Illuminate\Routing\Controller;
use Modules\Team\Http\Requests\CreatTeamRequest;
use Illuminate\Support\Facades\Session;
use Modules\User\Helper;
use Exception;
class TeamController extends Controller
{
protected $helper;
public function __construct()
{
$this->helper = Helper::getInstance();
}
public function viewTeams()
{
$apiUrl = ApiConfig::get('/team/get-details');
try {
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
if ($response['code'] === 200) {
$responseData = $this->helper->responseHandler($response['data']);
return view('team::view_teams')->with(["accounts" => $responseData]);
} else {
return view('team::view_teams')->with(["ErrorMessage" => 'Can not complete the process, please reload page']);
}
} catch (Exception $e) {
$this->helper->logException($e->getLine(), $e->getCode(), $e->getMessage(), 'viewTeams() {TeamController}');
}
}
public function teamView($id)
{
$teamName = '';
$teamLogo = '';
try {
$apiUrl = ApiConfig::get('/team/get-team-details?teamId=' . $id);
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
if ($response['data']->code === 200) {
$teamName = $response['data']->data->teamSocialAccountDetails[0]->team_name;
$teamLogo = $response['data']->data->teamSocialAccountDetails[0]->team_logo;
$data = array('teamName' => $teamName, 'teamLogo' => $teamLogo);
$result['code'] = 200;
$result['message'] = 'Success';
$result['data'] = $data;
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = 'failed';
} else {
$result['code'] = 500;
$result['message'] = 'failed';
}
return view('team::view_particular_team')->with(['team_id' => $id, 'data' => $result]);
} catch (\Exception $e) {
$this->helper->logException($e->getLine(), $e->getCode(), $e->getMessage(), 'teamView() {TeamController}');
return view('team::view_particular_team')->with('team_id', $id);
}
}
public function viewTeam()
{
return view('team::index');
}
public function createTeam(Request $request)
{
if ($request->isMethod('get')) {
return view('team::create_team');
} elseif ($request->isMethod('post')) {
$result = [];
if ($request->team_name == "") {
$result['code'] = 204;
$result['message'] = "Team name is required";
return $result;
}
try {
if (isset($request->profile_avatar)) {
$file = $request->profile_avatar;
$team = Session::get('team');
$pathToStorage = public_path('media/uploads');
if (!file_exists($pathToStorage))
mkdir($pathToStorage, 0777, true);
$publishimage = $file->getClientOriginalName();
$data['media'] = $pathToStorage . "/" . $publishimage;
file_put_contents($data['media'], file_get_contents($file->path()));
$filedata = array("name" => "media",
"file" => $data['media']);
$apiUrl = env('API_URL_PUBLISH') . env('API_VERSION') . '/upload/media?title=' . $team['teamName'] . '&teamId=' . $team["teamid"] . '&privacy=3';
$response = $this->helper->postApiCallWithAuth('post', $apiUrl, $filedata, true);
$responseData = $this->helper->responseHandler($response['data']);
if ($responseData['code'] == 200) {
$str = substr(env('APP_URL'), 0, 30);
$mediaUrl = $str . "media/uploads/" . $publishimage;
$data['TeamInfo'] = array(
"name" => $request->team_name,
"description" => "Short note about the team activity",
"logoUrl" => $mediaUrl
);
} else {
$data['TeamInfo'] = array(
"name" => $request->team_name,
"description" => "Short note about the team activity",
"logoUrl" => null
);
}
} else {
$data['TeamInfo'] = array(
"name" => $request->team_name,
"description" => "Short note about the team activity",
"logoUrl" => 'path_to_url'
);
}
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'createTeam() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please check default team is set or not');
}
$apiUrl = ApiConfig::get('/team/create');
try {
$response = $this->helper->postApiCallWithAuth('post', $apiUrl, $data);
$responseData = $this->helper->responseHandler($response['data']);
if ($responseData['code'] == 200) {
$team = array(
'team_name' => $responseData['data']->team_name,
'team_id' => $responseData['data']->team_id,
);
$user = Session::get('user');
$responseData['admin'] = $user['userDetails']['first_name']." ". $user['userDetails']['last_name'];
$responseData['admin_profile'] = $user['userDetails']['profile_picture'];
$responseData['admin_id'] = $user['userDetails']['user_id'];
$responseData['email'] = $user['userDetails']['email'];
return $responseData;
} else {
return $responseData;
}
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'createTeam() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
}
public function getParticularTeamDetails(Request $request)
{
try {
$adminIds = [];
$adminData = [];
$socialAccounts = [];
$teamMembersAcceptedIDs = [];
$teamMembersPendingIds = [];
$leftFromTeamIds = [];
$teamMembersAcceptedDatas = [];
$teamMembersPendingDatas = [];
$leftFromTeamDatas = [];
$availableSocialAccounts = [];
$availableSocialAccountsDatas = [];
$teamDetails = [];
$teamId = $request->teamid;
$apiUrl = ApiConfig::get('/team/get-team-details?teamId=' . $teamId);
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
if ($response['data']->code === 200) {
$socialAccounts = $response['data']->data->teamSocialAccountDetails[0]->SocialAccount;
$availableSocialAccounts = $this->getAvailableSocialAccounts();
if ($availableSocialAccounts['code'] === 200) {
$availableSocialAccounts = $availableSocialAccounts['data'];
if (count($socialAccounts) > 0) {
for ($i = 0; $i < count($availableSocialAccounts); $i++) {
$count = 0;
for ($j = 0; $j < count($socialAccounts); $j++) {
if ($availableSocialAccounts[$i]->account_id !== $socialAccounts[$j]->account_id) {
$count++;
if ($count === count($socialAccounts)) {
array_push($availableSocialAccountsDatas, $availableSocialAccounts[$i]);
}
}
}
}
} else {
$availableSocialAccountsDatas = $availableSocialAccounts;
}
} else {
$availableSocialAccountsDatas = [];
}
foreach ($response['data']->data->teamMembers as $data) {
if ($data->permission === 2 && $data->left_from_team === 0 && $data->invitation_accepted === 1 ) {
array_push($adminIds, $data->user_id);
}
if ($data->invitation_accepted === 1 && $data->left_from_team === 0) {
array_push($teamMembersAcceptedIDs, (object)array('teamMembersAcceptedIDs' => $data->user_id, 'permissions' => $data->permission));
}
if ($data->invitation_accepted === 0 ) {
array_push($teamMembersPendingIds, (object)array('teamMembersPendingIds' => $data->user_id, 'permissions' => $data->permission));
}
if ($data->left_from_team === 1) {
array_push($leftFromTeamIds, $data->user_id);
}
}
foreach ($response['data']->data->memberProfileDetails as $data2) {
for ($i = 0; $i < count($adminIds); $i++) {
if ($adminIds[$i] === $data2->user_id) {
array_push($adminData, $data2);
}
}
for ($i = 0; $i < count($leftFromTeamIds); $i++) {
if ($leftFromTeamIds[$i] === $data2->user_id) {
array_push($leftFromTeamDatas, $data2);
}
}
for ($i = 0; $i < count($teamMembersAcceptedIDs); $i++) {
if ($teamMembersAcceptedIDs[$i]->teamMembersAcceptedIDs === $data2->user_id) {
if ($teamMembersAcceptedIDs[$i]->permissions === 1) {
array_push($teamMembersAcceptedDatas, array('label' => 'Full permissions', 'user' => $data2));
} else {
array_push($teamMembersAcceptedDatas, array('label' => 'Admin', 'user' => $data2));
}
}
}
for ($i = 0; $i < count($teamMembersPendingIds); $i++) {
if ($teamMembersPendingIds[$i]->teamMembersPendingIds === $data2->user_id) {
array_push($teamMembersPendingDatas, $data2);
}
}
}
$teamDetails['code'] = 200;
$teamDetails['teamMembersAcceptedDatas'] = $teamMembersAcceptedDatas;
$teamDetails['teamMembersPendingDatas'] = $teamMembersPendingDatas;
$teamDetails['adminData'] = $adminData;
$teamDetails['teamSocialAccounts'] = $socialAccounts;
$teamDetails['availableSocialAccounts'] = $availableSocialAccountsDatas;
$teamDetails['leftFromTeamDatas'] = $leftFromTeamDatas;
return $teamDetails;
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
return $result;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred while fetching data Please reload it...';
return $result;
}
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'getParticularTeamDetails(){TeamController}');
}
}
public function getAvailableSocialAccounts()
{
try {
$apiUrl = ApiConfig::get('/team/get-available-social-accounts');
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
$socialAccounts = [];
if ($response['data']->code === 200) {
$socialAccounts = $response['data']->data;
$result['code'] = 200;
$result['data'] = $socialAccounts;
return $result;
}
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'getAvailableSocialAccounts(){TeamController}');
}
}
public function dragDopTeamOperation(Request $request)
{
$sourcevalue = $request->sourceValue;
$targetValue = $request->targetValue;
$accid = (int)$request->id;
$teamid = (int)$request->teamid;
$currentUserid = session::get('user')['userDetails']['user_id'];
$usertype = 'Member';
try {
$apiUrl = ApiConfig::get('/team/get-team-details?teamId=' . $teamid);
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
if ($response['data']->code === 200) {
foreach ($response['data']->data->teamMembers as $data) {
if ($data->permission === 2) {
if ($data->user_id === $currentUserid) {
$usertype = 'Admin';
break;
}
}
}
}
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'dragDopTeamOperation(){TeamController}');
}
if ($sourcevalue === '_allSocialAccounts' && $targetValue === '_teamSocialAccounts') {
$apiUrl = ApiConfig::get('/team/add-other-team-account?accountId=' . $accid . '&teamId=' . $teamid);
$response = $this->helper->postApiCallWithAuth('post', $apiUrl);
return $this->helper->responseHandler($response['data']);
} else if ($sourcevalue === '_teamSocialAccounts' && $targetValue === '_allSocialAccounts') {
$apiUrl = ApiConfig::get('/team/delete-team-social-profile?accountId=' . $accid . '&teamId=' . $teamid);
$response = $this->helper->postApiCallWithAuth('delete', $apiUrl);
if ($response['data']->code === 200) {
$result['code'] = 200;
$result['message'] = 'Account Removed from Team';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
} else if ($sourcevalue === '_teamSocialAccounts' && $targetValue === '_teamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not Move Social Accounts to Team members';
return $result;
} else if ($sourcevalue === '_allSocialAccounts' && $targetValue === '_teamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not Move Social Accounts to Team members';
return $result;
} else if ($sourcevalue === '_allSocialAccounts' && $targetValue === '_pendingTeamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not Move Social Accounts to Pending Team members';
return $result;
} else if ($sourcevalue === '_teamSocialAccounts' && $targetValue === '_pendingTeamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not Move Social Accounts to Pending Team members';
return $result;
} else if ($sourcevalue === '_teamSocialAccounts' && $targetValue === '_leftTeamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not Move Social Accounts to Left Team members';
return $result;
} else if ($sourcevalue === '_admin') {
if (($targetValue === '_allSocialAccounts' || $targetValue === '_teamSocialAccounts')) {
$result['code'] = 501;
$result['message'] = 'We can not move the Admin to Accounts';
return $result;
} else {
if ($targetValue === '_teamMembers') {
if ($accid === $currentUserid) {
$result['code'] = 501;
$result['message'] = 'We can not Move Main admin';
return $result;
} else {
try {
$apiUrl = ApiConfig::get('/team/edit-member-permission?teamId=' . $teamid . '&memberId=' . $accid . '&Permission=1');
$response = $this->helper->postApiCallWithAuth('post', $apiUrl);
if ($response['data']->code === 200) {
$result['code'] = 200;
$result['message'] = 'Added to Team members';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'dragDopTeamOperation(){TeamController}');
}
}
} else if ($targetValue === '_pendingTeamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not perform operations from Pending Team members';
return $result;
} else {
if ($accid === $currentUserid) {
$result['code'] = 501;
$result['message'] = 'We can not Move Main admin';
return $result;
} else {
try {
$apiUrl = ApiConfig::get('/team/removeTeamMember?teamId=' . $teamid . '&memberId=' . $accid);
$response = $this->helper->postApiCallWithAuth('delete', $apiUrl);
if ($response['data']->code === 200) {
$result['code'] = 200;
$result['message'] = 'You have left from Team';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'dragDopTeamOperation(){TeamController}');
}
}
}
}
} else if ($sourcevalue === '_leftTeamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not move the Left/removed Members';
return $result;
} else if ($sourcevalue === '_teamMembers') {
if (($targetValue === '_allSocialAccounts' || $targetValue === '_teamSocialAccounts')) {
$result['code'] = 501;
$result['message'] = 'We can not add Team members to Social accounts';
return $result;
} else if ($targetValue === '_admin') {
try {
$apiUrl = ApiConfig::get('/team/edit-member-permission?teamId=' . $teamid . '&memberId=' . $accid . '&Permission=2');
$response = $this->helper->postApiCallWithAuth('post', $apiUrl);
if ($response['data']->code === 200) {
$result['code'] = 200;
$result['message'] = 'Added to Admin';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'dragDopTeamOperation(){TeamController}');
}
} else if ($targetValue === '_pendingTeamMembers') {
$result['code'] = 501;
$result['message'] = 'We can not add Team members to Pending Members';
return $result;
} else {
if ($accid === $currentUserid) {
$result['code'] = 501;
$result['message'] = 'We can not remove Main admin';
return $result;
} else {
try {
$apiUrl = ApiConfig::get('/team/remove-teamMember?teamId=' . $teamid . '&memberId=' . $accid);
$response = $this->helper->postApiCallWithAuth('delete', $apiUrl);
if ($response['data']->code === 200) {
$result['code'] = 200;
$result['message'] = 'You have left from Team';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'dragDopTeamOperation(){TeamController}');
}
}
}
} else if ($sourcevalue === '_pendingTeamMembers') {
$result['code'] = 501;
$result['message'] = 'We cannot perform operations from Pending Team members';
return $result;
} else if ($sourcevalue === '_teamSocialAccounts' || $sourcevalue === '_allSocialAccounts') {
if ($targetValue === '_admin') {
$result['code'] = 501;
$result['message'] = 'We can not add Social Accounts to the Admin';
return $result;
}
} else {
if (($targetValue === '_allSocialAccounts' || $targetValue === '_teamSocialAccounts')) {
$result['code'] = 501;
$result['message'] = 'We can not add Team Members to Social accounts';
return $result;
} else if ($targetValue === '_leftTeamMembers') {
if ($accid === $currentUserid) {
$result['code'] = 501;
$result['message'] = 'We can not Move Main admin';
return $result;
} else {
try {
$apiUrl = ApiConfig::get('/team/remove-teamMember?teamId=' . $teamid . '&memberId=' . $accid);
$response = $this->helper->postApiCallWithAuth('delete', $apiUrl);
if ($response['data']->code === 200) {
$result['code'] = 200;
$result['message'] = 'You have left from Team';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'dragDopTeamOperation(){TeamController}');
}
}
}
}
}
// public function teamModal(ModalTeamRequest $request)
// {
// $html = null;
// if ($request['modal'] === 'create')
// {
// $html = view('team::createModal')->render();
// }
// if ($request['modal'] === 'invite')
// {
// $html = view('team::inviteModal')->render();
// }
// return response()->json([
// 'html' => $html,
// 'status' => true,
// 'modal' => $request['modal']
// ]);
// }
public function getAvailableMembers()
{
// $apiUrl = $this->API_URL . env('API_VERSION') . '/team/getAvailableTeamMembers';
$apiUrl = ApiConfig::get('/team/get-available-team-members');
try {
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
return $this->helper->responseHandler($response['data']);
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->errorHandler($e, 'getAvailableMembers(){TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
public function getInvitedMembers()
{
$apiUrl = ApiConfig::get('/team/get-available-invited-members');
try {
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
return $this->helper->responseHandler($response['data']);
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->errorHandler($e, 'getInvitedMembers(){TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
public function deleteTeams($id)
{
$team = Session::get('team');
if ($team['teamid'] == $id) {
$response['code'] = 500;
$response['error'] = 'You cannot delete the current team; switch to another team to delete this one';
return $response;
} else {
$apiUrl = ApiConfig::get('/team/delete?teamId=');
$apiUrl = $apiUrl . $id;
try {
$response = $this->helper->postApiCallWithAuth('delete', $apiUrl);
return $this->helper->responseHandler($response['data']);
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'deleteTeams() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
}
public function updateTeams(Request $request)
{
$apiUrlupdate = ApiConfig::get('/team/edit?teamId=');
$apiUrlupdate = $apiUrlupdate . $request->id;
$logo = '';
$team = Session::get('team');
if (isset($request->profile_avatar)) {
$file = $request->profile_avatar;
$pathToStorage = public_path('media/uploads');
if (!file_exists($pathToStorage))
mkdir($pathToStorage, 0777, true);
$publishimage = $file->getClientOriginalName();
$data['media'] = $pathToStorage . "/" . $publishimage;
file_put_contents($data['media'], file_get_contents($file->path()));
$filedata = array("name" => "media",
"file" => $data['media']);
$apiUrl = env('API_URL_PUBLISH') . env('API_VERSION') . '/upload/media?title=' . $team['teamName'] . '&teamId=' . $team["teamid"] . '&privacy=3';
$response = $this->helper->postApiCallWithAuth('post', $apiUrl, $filedata, true);
$responseData = $this->helper->responseHandler($response['data']);
if ($responseData['code'] == 200) {
$mediaUrl = 'path_to_url' . $responseData['data'][0]->media_url;
$details['TeamInfo'] = array(
"name" => $request->team_name,
"logoUrl" => $mediaUrl
);
$logo = $mediaUrl;
} else {
$details['TeamInfo'] = array(
"name" => $request->team_name,
"logoUrl" => null
);
}
} else {
$details['TeamInfo'] = array(
"name" => $request->team_name,
"logoUrl" => $request->old_pic
);
$logo = $request->old_pic;
}
try {
$response = $this->helper->postApiCallWithAuth('post', $apiUrlupdate, $details);
$respons = $this->helper->responseHandler($response['data']);
if ($respons['code'] === 200) {
if ($team['teamid'] === (int)$request->id) {
$team['teamLogo'] = $logo;
session()->put('team', $team);
}
}
return $respons;
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'updateTeams() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
public function holdTeams($id)
{
$apiUrl = ApiConfig::get('/team/lock-team');
try {
$parameters = [$id];
$response = $this->helper->postApiCallWithAuth('put', $apiUrl, $parameters);
return $this->helper->responseHandler($response['data']);
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'holdTeams() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
public function unholdTeams($id)
{
$apiUrl = ApiConfig::get('/team/unlock-team');
try {
$parameters = [$id];
$response = $this->helper->postApiCallWithAuth('put', $apiUrl, $parameters);
return $this->helper->responseHandler($response['data']);
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'unholdTeams() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
public function dragToInviteMembers(Request $request)
{
if (str_contains($request['targetValue'], '_team')) {
// trim() strips a character mask, not a prefix; remove the '_team' prefix explicitly.
$id = substr($request['targetValue'], strlen('_team'));
$apiUrl = ApiConfig::get('/team/invite?teamId=');
$apiUrl = $apiUrl . $id . '&Permission=1&Email=' . $request['email'];
try {
$response = $this->helper->postApiCallWithAuth('post', $apiUrl);
return $this->helper->responseHandler($response['data']);
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'dragToInviteMembers() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
} else {
$result = array("code" => "500", "message" => "You can't make non-members admins");
return $result;
}
}
public function inviteMembers(Request $request)
{
if ($request['member_email'] == "") {
$result['code'] = 204;
$result['message'] = "Email is required";
return $result;
}
$name = $request->member_name !== null ? $request->member_name : "";
$apiUrl = ApiConfig::get('/team/invite?teamId=');
$apiUrl = $apiUrl . $request['team_id'] . '&Permission=' . $request['permission'] . '&Email=' . $request['member_email'].'&name='.$name;
try {
$response = $this->helper->postApiCallWithAuth('post', $apiUrl);
return $this->helper->responseHandler($response['data']);
} catch (\GuzzleHttp\Exception\RequestException $e) {
$this->helper->logException($e, 'inviteMembers() {TeamController}');
return redirect()->back()->with("ErrorMessage", 'Can not complete the process, please reload page');
}
}
public function changeTeamSession(Request $request)
{
try {
$teamID = (int)$request->teamid;
$apiUrl = ApiConfig::get('/team/get-team-details?teamId=' . $teamID);
$response = $this->helper->postApiCallWithAuth('get', $apiUrl);
if ($response['data']->code === 200) {
$teamid = $response['data']->data->teamSocialAccountDetails[0]->team_id;
$teamname = $response['data']->data->teamSocialAccountDetails[0]->team_name;
$teamlogo = $response['data']->data->teamSocialAccountDetails[0]->team_logo;
$teamDetails['teamid'] = $teamid;
$teamDetails['teamName'] = $teamname;
$teamDetails['teamLogo'] = $teamlogo;
session()->put('team', $teamDetails);
Session::save();
$result['code'] = 200;
$result['message'] = 'Successfully Switched Team';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
} catch (\Exception $e) {
return $this->helper->errorHandler($e->getLine(), $e->getCode(), $e->getMessage(), 'changeTeamSession(){TeamController}');
}
}
public function searchTeam(Request $request)
{
$validator = Validator::make($request->all(), [
'team_name' => 'required',
]);
if ($validator->fails()) {
return redirect()->back()
->withErrors($validator)
->withInput();
}
$apiUrl = ApiConfig::get('/team/search-team?teamName=' . $request->team_name);
try {
$response = $this->helper->postApiCallWithAuth('post', $apiUrl);
if ($response['code'] === 200) {
$responseData = $this->helper->responseHandler($response['data']);
return view('team::view_teams')->with(["accounts" => $responseData]);
} else {
return view('team::view_teams')->with(["ErrorMessage" => 'Can not complete the process, please reload page']);
}
} catch (\Exception $e) {
$this->helper->logException($e->getLine(), $e->getCode(), $e->getMessage(), 'searchTeam() {TeamController}');
}
}
function withDrawInvitation(Request $request)
{
try {
$teamid=(integer)$request->teamId;
$email=$request->email;
$apiUrl = ApiConfig::get('/team/withdraw-invitation?teamId=' . $teamid.'&Email='.$email);
$response = $this->helper->postApiCallWithAuth('delete', $apiUrl);
if ($response['data']->code === 200) {
$result['code'] = 200;
$result['message'] = 'Invitation Withdrawn';
} else if ($response['data']->code === 400) {
$result['code'] = 400;
$result['message'] = $response['data']->error;
} else {
$result['code'] = 500;
$result['message'] = 'Some error occurred';
}
return $result;
}
catch (\Exception $e) {
$this->helper->logException($e->getLine(), $e->getCode(), $e->getMessage(), 'withDrawInvitation() {TeamController}');
}
}
}
```
|
```c++
/**
 * (C) 1999-2003 Lars Knoll (knoll@kde.org)
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Library General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Library General Public License for more details.
 *
 * You should have received a copy of the GNU Library General Public License
 * along with this library; see the file COPYING.LIB. If not, write to
 * the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
 * Boston, MA 02110-1301, USA.
 */
#include "config.h"
#include "core/css/StyleSheetList.h"
#include "core/HTMLNames.h"
#include "core/dom/Document.h"
#include "core/dom/StyleEngine.h"
#include "core/frame/UseCounter.h"
#include "core/html/HTMLStyleElement.h"
#include "wtf/text/WTFString.h"
namespace blink {
using namespace HTMLNames;
StyleSheetList::StyleSheetList(TreeScope* treeScope)
: m_treeScope(treeScope)
{
}
DEFINE_EMPTY_DESTRUCTOR_WILL_BE_REMOVED(StyleSheetList);
inline const WillBeHeapVector<RefPtrWillBeMember<StyleSheet>>& StyleSheetList::styleSheets()
{
#if !ENABLE(OILPAN)
if (!m_treeScope)
return m_detachedStyleSheets;
#endif
return document()->styleEngine().styleSheetsForStyleSheetList(*m_treeScope);
}
#if !ENABLE(OILPAN)
void StyleSheetList::detachFromDocument()
{
m_detachedStyleSheets = document()->styleEngine().styleSheetsForStyleSheetList(*m_treeScope);
m_treeScope = nullptr;
}
#endif
unsigned StyleSheetList::length()
{
return styleSheets().size();
}
StyleSheet* StyleSheetList::item(unsigned index)
{
const WillBeHeapVector<RefPtrWillBeMember<StyleSheet>>& sheets = styleSheets();
return index < sheets.size() ? sheets[index].get() : 0;
}
HTMLStyleElement* StyleSheetList::getNamedItem(const AtomicString& name) const
{
#if !ENABLE(OILPAN)
if (!m_treeScope)
return 0;
#endif
// IE also supports retrieving a stylesheet by name, using the name/id of the <style> tag
// (this is consistent with all the other collections)
// ### Bad implementation because returns a single element (are IDs always unique?)
// and doesn't look for name attribute.
// But unicity of stylesheet ids is good practice anyway ;)
// FIXME: We should figure out if we should change this or fix the spec.
Element* element = m_treeScope->getElementById(name);
return isHTMLStyleElement(element) ? toHTMLStyleElement(element) : 0;
}
CSSStyleSheet* StyleSheetList::anonymousNamedGetter(const AtomicString& name)
{
if (document())
UseCounter::count(*document(), UseCounter::StyleSheetListAnonymousNamedGetter);
HTMLStyleElement* item = getNamedItem(name);
if (!item)
return 0;
return item->sheet();
}
DEFINE_TRACE(StyleSheetList)
{
visitor->trace(m_treeScope);
}
} // namespace blink
```
|
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.shardingsphere.infra.binder.context.statement.ddl;
import org.apache.shardingsphere.infra.binder.context.statement.CommonSQLStatementContext;
import org.apache.shardingsphere.infra.database.core.DefaultDatabase;
import org.apache.shardingsphere.sql.parser.statement.core.segment.ddl.index.IndexNameSegment;
import org.apache.shardingsphere.sql.parser.statement.core.segment.ddl.index.IndexSegment;
import org.apache.shardingsphere.sql.parser.statement.core.statement.ddl.DropIndexStatement;
import org.apache.shardingsphere.sql.parser.statement.core.value.identifier.IdentifierValue;
import org.apache.shardingsphere.sql.parser.statement.mysql.ddl.MySQLDropIndexStatement;
import org.apache.shardingsphere.sql.parser.statement.oracle.ddl.OracleDropIndexStatement;
import org.apache.shardingsphere.sql.parser.statement.postgresql.ddl.PostgreSQLDropIndexStatement;
import org.apache.shardingsphere.sql.parser.statement.sqlserver.ddl.SQLServerDropIndexStatement;
import org.junit.jupiter.api.Test;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedList;
import java.util.stream.Collectors;
import static org.hamcrest.CoreMatchers.instanceOf;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
class DropIndexStatementContextTest {
@Test
void assertMySQLNewInstance() {
assertNewInstance(mock(MySQLDropIndexStatement.class));
}
@Test
void assertPostgreSQLNewInstance() {
assertNewInstance(mock(PostgreSQLDropIndexStatement.class));
}
@Test
void assertOracleNewInstance() {
assertNewInstance(mock(OracleDropIndexStatement.class));
}
@Test
void assertSQLServerNewInstance() {
assertNewInstance(mock(SQLServerDropIndexStatement.class));
}
private void assertNewInstance(final DropIndexStatement dropIndexStatement) {
Collection<IndexSegment> indexes = new LinkedList<>();
IndexSegment index1 = new IndexSegment(0, 0, new IndexNameSegment(0, 0, new IdentifierValue("idx_1")));
IndexSegment index2 = new IndexSegment(0, 0, new IndexNameSegment(0, 0, new IdentifierValue("idx_2")));
indexes.add(index1);
indexes.add(index2);
when(dropIndexStatement.getIndexes()).thenReturn(indexes);
DropIndexStatementContext actual = new DropIndexStatementContext(dropIndexStatement, DefaultDatabase.LOGIC_NAME);
assertThat(actual, instanceOf(CommonSQLStatementContext.class));
assertThat(actual.getSqlStatement(), is(dropIndexStatement));
assertThat(actual.getTablesContext().getSimpleTables(), is(Collections.emptyList()));
assertThat(actual.getIndexes().stream().map(each -> each.getIndexName().getIdentifier().getValue()).collect(Collectors.toList()), is(Arrays.asList("idx_1", "idx_2")));
}
}
```
|
The South American U-15 Women's Softball Championship is the main championship tournament between women's national softball teams in South America, governed by the Pan American Softball Federation.
Results
Medal table
Participating nations
External links
Brazilian Baseball Softball Federation
Softball competitions
|
Takahashi was a Japanese film actor. He appeared in more than twenty films from 1950 to 1959. Takahashi died in a traffic accident.
Career
Born in Tokyo, Takahashi graduated from the Japanese Film School (Nihon Eiga Gakkō) and joined the Shochiku studio in 1945. He became one of the company's top young male stars, alongside Keiji Sada and Kōji Tsuruta.
Selected filmography
References
External links
1926 births
1959 deaths
Male actors from Tokyo
Japanese male film actors
20th-century Japanese male actors
|
```python
# mypy: allow-untyped-defs
from typing import Any, Callable, cast, Tuple
import torch
import torch.distributed as dist
__all__ = [
"allreduce_hook",
"fp16_compress_hook",
"bf16_compress_hook",
"fp16_compress_wrapper",
"bf16_compress_wrapper",
]
def _allreduce_fut(
process_group: dist.ProcessGroup, tensor: torch.Tensor
) -> torch.futures.Future[torch.Tensor]:
"""Average the input gradient tensor by allreduce and returns a future."""
group_to_use = process_group if process_group is not None else dist.group.WORLD
# Apply the division first to avoid overflow, especially for FP16.
tensor.div_(group_to_use.size())
return (
dist.all_reduce(tensor, group=group_to_use, async_op=True)
.get_future()
.then(lambda fut: fut.value()[0])
)
def allreduce_hook(
process_group: dist.ProcessGroup, bucket: dist.GradBucket
) -> torch.futures.Future[torch.Tensor]:
"""
Call ``allreduce`` using ``GradBucket`` tensors.
Once gradient tensors are aggregated across all workers, its ``then``
callback takes the mean and returns the result.
If a user registers this DDP communication hook,
the DDP results are expected to be the same as if no hook were registered.
Hence, this does not change the behavior of DDP, and users can use it as a
reference or modify it to log useful information or for other purposes,
without affecting DDP behavior.
Example::
>>> # xdoctest: +SKIP
>>> ddp_model.register_comm_hook(process_group, allreduce_hook)
"""
return _allreduce_fut(process_group, bucket.buffer())
def fp16_compress_hook(
process_group: dist.ProcessGroup,
bucket: dist.GradBucket,
) -> torch.futures.Future[torch.Tensor]:
"""
Compress by casting ``GradBucket`` to ``torch.float16`` divided by process group size.
This DDP communication hook implements a simple gradient compression
approach that casts ``GradBucket`` tensor to half-precision floating-point format (``torch.float16``)
and then divides it by the process group size.
It allreduces those ``float16`` gradient tensors. Once compressed gradient
tensors are allreduced, the chained callback ``decompress`` casts it back to the input data type (such as ``float32``).
Example::
>>> # xdoctest: +SKIP
>>> ddp_model.register_comm_hook(process_group, fp16_compress_hook)
"""
group_to_use = process_group if process_group is not None else dist.group.WORLD
world_size = group_to_use.size()
buffer = (
cast(Tuple[torch.Tensor, ...], bucket)[0]
if isinstance(bucket, tuple)
else bucket.buffer()
)
compressed_tensor = buffer.to(torch.float16).div_(world_size)
def decompress(fut):
decompressed_tensor = buffer
# Decompress in place to reduce the peak memory.
# See: path_to_url
value = fut if isinstance(fut, torch.Tensor) else fut.value()[0]
decompressed_tensor.copy_(value)
return decompressed_tensor
if torch._utils.is_compiling():
grad = dist._functional_collectives.all_reduce(
compressed_tensor, "sum", group_to_use
)
return decompress(grad)
else:
fut = dist.all_reduce(
compressed_tensor, group=group_to_use, async_op=True
).get_future()
return fut.then(decompress)
# TODO: create an internal helper function and extract the duplicate code in FP16_compress and BF16_compress.
def bf16_compress_hook(
process_group: dist.ProcessGroup,
bucket: dist.GradBucket,
) -> torch.futures.Future[torch.Tensor]:
"""
Warning: This API is experimental, and it requires NCCL version later than 2.9.6.
This DDP communication hook implements a simple gradient compression
approach that casts ``GradBucket`` tensor to half-precision
`Brain floating point format <path_to_url>`_ (``torch.bfloat16``)
and then divides it by the process group size.
It allreduces those ``bfloat16`` gradient tensors. Once compressed gradient
tensors are allreduced, the chained callback ``decompress`` casts it back to the input data type (such as ``float32``).
Example::
>>> # xdoctest: +SKIP
>>> ddp_model.register_comm_hook(process_group, bf16_compress_hook)
"""
group_to_use = process_group if process_group is not None else dist.group.WORLD
world_size = group_to_use.size()
buffer = (
cast(Tuple[torch.Tensor, ...], bucket)[0]
if isinstance(bucket, tuple)
else bucket.buffer()
)
compressed_tensor = buffer.to(torch.bfloat16).div_(world_size)
def decompress(fut):
decompressed_tensor = buffer
# Decompress in place to reduce the peak memory.
# See: path_to_url
value = fut if isinstance(fut, torch.Tensor) else fut.value()[0]
decompressed_tensor.copy_(value)
return decompressed_tensor
if torch._utils.is_compiling():
grad = dist._functional_collectives.all_reduce(
compressed_tensor, "sum", group_to_use
)
return decompress(grad)
else:
fut = dist.all_reduce(
compressed_tensor, group=group_to_use, async_op=True
).get_future()
return fut.then(decompress)
def fp16_compress_wrapper(
hook: Callable[[Any, dist.GradBucket], torch.futures.Future[torch.Tensor]]
) -> Callable[[Any, dist.GradBucket], torch.futures.Future[torch.Tensor]]:
"""
Cast input tensor to ``torch.float16``, cast result of hook back to input dtype.
This wrapper casts the input gradient tensor of a given DDP communication hook to half-precision
floating point format (``torch.float16``), and casts the resulting tensor of the given hook back to
the input data type, such as ``float32``.
Therefore, ``fp16_compress_hook`` is equivalent to ``fp16_compress_wrapper(allreduce_hook)``.
Example::
>>> # xdoctest: +SKIP
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)
>>> ddp_model.register_comm_hook(state, fp16_compress_wrapper(powerSGD_hook))
"""
def fp16_compress_wrapper_hook(
hook_state, bucket: dist.GradBucket
) -> torch.futures.Future[torch.Tensor]:
# Cast bucket tensor to FP16.
bucket.set_buffer(bucket.buffer().to(torch.float16))
fut = hook(hook_state, bucket)
def decompress(fut):
decompressed_tensor = bucket.buffer()
# Decompress in place to reduce the peak memory.
# See: path_to_url
decompressed_tensor.copy_(fut.value())
return decompressed_tensor
# Decompress after hook has run.
return fut.then(decompress)
return fp16_compress_wrapper_hook
def bf16_compress_wrapper(
hook: Callable[[Any, dist.GradBucket], torch.futures.Future[torch.Tensor]]
) -> Callable[[Any, dist.GradBucket], torch.futures.Future[torch.Tensor]]:
"""
Warning: This API is experimental, and it requires NCCL version later than 2.9.6.
This wrapper casts the input gradient tensor of a given DDP communication hook to half-precision
`Brain floating point format <path_to_url>`_ (``torch.bfloat16``),
and casts the resulting tensor of the given hook back to the input data type, such as ``float32``.
Therefore, ``bf16_compress_hook`` is equivalent to ``bf16_compress_wrapper(allreduce_hook)``.
Example::
>>> # xdoctest: +SKIP
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)
>>> ddp_model.register_comm_hook(state, bf16_compress_wrapper(powerSGD_hook))
"""
def bf16_compress_wrapper_hook(
hook_state, bucket: dist.GradBucket
) -> torch.futures.Future[torch.Tensor]:
# Cast bucket tensor to BF16.
bucket.set_buffer(bucket.buffer().to(torch.bfloat16))
fut = hook(hook_state, bucket)
def decompress(fut):
decompressed_tensor = bucket.buffer()
# Decompress in place to reduce the peak memory.
# See: path_to_url
decompressed_tensor.copy_(fut.value())
return decompressed_tensor
# Decompress after hook has run.
return fut.then(decompress)
return bf16_compress_wrapper_hook
```
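As a standalone sketch (no process group or NCCL required), the compress, pre-divide, and decompress round trip that ``fp16_compress_hook`` performs can be simulated on a single rank. Here ``world_size`` is a stand-in for ``group_to_use.size()``, and the single-rank "allreduce" is a no-op sum; this is an illustration of the hook's data flow, not a distributed implementation:

```python
import torch

def simulate_fp16_roundtrip(grad: torch.Tensor, world_size: int) -> torch.Tensor:
    """Single-rank simulation of the fp16 compress hook's compress/decompress path."""
    # Compression step: cast to float16 and pre-divide by the group size
    # (dividing before the allreduce reduces the risk of fp16 overflow).
    compressed = grad.to(torch.float16).div_(world_size)
    # On a real cluster, dist.all_reduce would sum `compressed` across ranks here;
    # with a single rank the sum is the tensor itself.
    # Decompression step: copy the reduced fp16 values back into the fp32
    # buffer in place, mirroring the hook's `decompress` callback.
    grad.copy_(compressed)
    return grad

grad = torch.full((4,), 3.0)  # pretend this is one bucket's fp32 gradient
out = simulate_fp16_roundtrip(grad, world_size=3)
# out keeps the fp32 dtype; each element is 3.0 / 3 = 1.0
```

The in-place ``copy_`` into the original buffer is the same trick the hooks above use to avoid allocating a second full-precision tensor at decompression time.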
|
Hemant Batra (born 15 August 1967) is an Indian senior practising lawyer in business, public policy, corporate and commercial law, as well as a newspaper columnist, author, TV anchor/host, public speaker, commentator on law and public policy, mentor, and legal counsel. He was the co-founder and counsel at Kaden Boriss Global, an alliance and forum of independent and autonomous law firms and legal enterprises. Hemant holds the position of Secretary General of SAARCLAW (South Asian Association for Regional Co-operation in Law), a regional apex body of SAARC (South Asian Association for Regional Cooperation). He has also authored several books on law and policy. He was engaged by the Lok Sabha as a guest anchor and guide for a new Sansad TV series, '75 Years: Laws That Shaped India'; the other guest anchors engaged by Parliament for Sansad TV were Bibek Debroy, Dr Karan Singh, Amitabh Kant, Shashi Tharoor, Maroof Raza and Sanjeev Sanyal. The global TV channel Sansad TV was launched by the Prime Minister of India Narendra Modi, Vice President of India Venkaiah Naidu and Speaker of the Lok Sabha Om Birla on 15 September 2021.
Hemant Batra withdrew from all active roles in the Kaden Boriss law firm in 2015 and formally quit the firm in 2016. He now independently practises law as an arguing counsel in the Supreme Court of India and the Delhi High Court, and also works as an arbitrator, mediator, conciliator, and tutor of law and public policy. He is authoring books for Eastern Book Company. He has recently merged his practice with Shardul Amarchand Mangaldas & Co (SAM), one of India's leading law firms, where he heads the new vertical of New Ventures & Growth and is responsible for SAM's geographical, territorial and professional expansion. According to industry sources, he works closely in this role with the founders Shardul S. Shroff and Pallavi S. Shroff.
Kaden Boriss partnered with I-Inspire 2013, a national conference for women leaders intended as a confluence of ideas, a convergence of thoughts and a celebration of the spirit of diversity and entrepreneurship. Batra is also the chairperson of the Organizing Council of the International Infrastructure & Construction Law Arbitration Moot (IICLAM) and a member of the advisory board of the Organisation for International Cooperation (OIC), based in New Jersey, US. IICLAM is a joint initiative of Kaden Boriss, the National University of Singapore (NUS) and the Singapore International Arbitration Centre (SIAC). He is a recipient of the Mahatma Gandhi Seva Medal, and took over as chairman of the panel of jury for the ACES Awards for Corporate Excellence and Sustainability in Asia.
"We are committed to social justice and can play a key role in encouraging legal reforms and disseminating information on progressive decisions, related to human rights, across the legal community in South Asia," says Hemant Batra on the UNDP website.
Batra joined the advisory board of the Pivotals, the first-ever stakeholder-engagement platform. In his role as a policy specialist and global business lawyer, he was invited to join the leadership team of a well-known not-for-profit organization.
Batra was appointed in July 2017 as an active member of the Union of International Associations (UIA), the only appointment from India that year.
He established a music label called Urf Hekbat to pursue his passion for playing, composing and arranging lounge and new-age music, a fact he disclosed in an interview with a well-known lawyers' monthly magazine. He is also a sound recording artist.
Law and public policy facet
Batra is associated with National Law University, Delhi. On 20 February 2004, he presented a paper on issues connected with the WTO at the 10th SAARCLAW Conference, held in Karachi on the theme "Leap Forward – Next Generation Laws". The first annual Asia Entrepreneurship Forum (AEF), organised by Enterprise Asia, kicked off on 19 September 2012, gathering five hundred of the region's leading business leaders and industry thought leaders in Macau, one of Asia's fastest-growing cities, to discuss the future of entrepreneurship in Asia; Batra, as Secretary General of SAARCLAW, spoke there on legal and regulatory issues in mergers and acquisitions deals.
Although not acting in his capacity as Secretary-General of SAARCLAW, he drew on his standing in South Asia, including Pakistan, to take up the issue of a grant of mercy for the Indian prisoner Sarabjit Singh (now deceased). Batra has also done substantial work to secure the legal rights of people living with HIV and of key populations at higher risk of exposure to HIV.
Considered one of the prominent authorities on cross-border issues connected with the legal profession, he was invited by Harvard Law School to share his thoughts with senior students on the Future Market for Corporate Legal Services in India.
He has consistently advocated the opening up of the legal profession in India, be it allowing entry to foreign lawyers or permitting lawyers to showcase their services. In his view, medicine and law were once on the same footing as "noble professions", and since corporatization of the medical profession has been permitted, the remaining restrictions on lawyers are unjustified. He expressed these views to Live Mint and the Wall Street Journal.
Batra vociferously defended former IPL chairman Lalit Modi in both the UK and India; his interviews on the matter were carried by three leading publications: India Today, the Daily Mail (UK) and Tehelka.
He recently mooted the idea of a large-scale repeal of laws, supporting and complimenting the landmark initiative of the Narendra Modi government in India in repealing obsolete statutes; in his view, the repeal of laws is as significant as the passage of new ones. He was among the first legal experts to cover the legal aspects of COVID-19.
As Chairman of the Jury, Batra has propagated a policy change in the way businesses are run and managed. According to him, a successful business goes beyond money-making. Societal responsibilities are key to successful businesses.
He highlighted and favoured the policy of the Narendra Modi BJP government in India of enhancing traffic-violation fines. As a public policy expert and commentator, Batra regards the Insolvency and Bankruptcy Code, 2016 as a phenomenal economic-law and public-policy reform of the last decade: it shifted control of non-performing assets during resolution or liquidation from the defaulting debtor, who under the earlier law controlled the assets until resolution or liquidation, to the creditors. He further notes that a new route to M&A has emerged under the Code, with distressed assets becoming the new targets for bigger companies planning to grow through acquisitions.
Expressing his public policy opinion on the Indian cryptocurrency market to a well-known online publication, he said that the "cryptocurrency market has now become very big with involvement of billions of dollars in the market hence, it is now unattainable and irreconcilable for the government to completely ban all sorts of cryptocurrency and its trading and investment". He mooted regulating the cryptocurrency market rather than completely banning it. He favoured following IMF and FATF guidelines in this regard.
Origin and background
Hemant was born in Hisar, in the Indian state of Haryana, into a Punjabi family, to Veena Batra and G.L. Batra, who was Additional Secretary in the Indian Parliament and Chairman of the Haryana State Public Service Commission. He graduated with a bachelor's degree in humanities in 1988 and received his Bachelor of Laws degree from the law faculty of Panjab University, Chandigarh, in 1991. Hemant Batra and his wife Preeti Wahi Batra, also a prominent commercial lawyer, were interviewed by the leading international business magazine Outlook Money about changing trends in investing for a child's future. They have two children. Hemant is an enthusiast of electronic gadgets and changes them every six months.
Legal and socio-economic pathways
Hemant started his law practice in 1991 with the solicitors Amarchand & Mangaldas & Suresh A. Shroff & Co. Four years later he joined Kesar Dass B. & Associates, and by 2000 had become managing partner of its corporate office. In 2003, he founded an independent international law firm named Kaden Boriss Legal, which grew into an alliance and network of independent, autonomous member law firms located in India, Australia and the UAE, with no financial relationship between member firms and offices. A prominent Australian law firm, LBR Legal, merged its practice with Kaden Boriss in 2010, with LBR's managing partner Sunil Lal remarking that "if you can't beat Indians then the best way is to join them".
Hemant has worked with many multinational firms, including Bayer, Suzuki, LG, Philip Morris (JV), Coca-Cola, Accor, Findel, AMEX, Western Union, ABB and Knight Frank, and has advised various former Chief Justices of India, senior counsels and members of the Indian Parliament. On the social front, he has been associated with the United Nations Development Programme (UNDP), UNAIDS and the World Bank, venturing into the sensitive area of human rights challenges faced by key populations at higher risk of HIV. He has consistently propagated the idea of parents respecting the sexual preferences of their children. He also argues that, although people are often too busy to keep more than vague, undocumented succession plans, everyone, irrespective of age, should keep their succession planning ready, as death always comes unannounced. Recently, an eminent lawyer from Cornell described Hemant as one of the best brains in corporate transactional legal work.
He is a Goodwill Ambassador of World NGO Day, headquartered in London, an international calendar day held on 27 February every year to promote a stronger, better and more effective civil society worldwide through NGOs, NPOs and CSOs. The WND Secretariat and SAARCLAW entered into an MoU on 24 February 2012 to promote the objectives of their respective organizations and to expand NGO horizons in India and the SAARC region.
He was re-elected as the Secretary General of SAARCLAW on 12 April 2014 in Kathmandu when India and Pakistan came together to propose his name and Bangladesh/Bhutan seconded it.
As chairman of the prestigious panel of jury for the ACES Awards for Corporate Excellence and Sustainability in Asia, Hemant Batra has promoted the concepts of excellence and sustainability in the corporate world, terming them an integral part of good governance. He propagates the view that corporate governance is not merely about the management of companies but also about excellence and sustainability, and he heads the prominent jury in Asia that identifies global business leaders who have shown signs of good governance.
Books and Publications
Hemant Batra's book on Due Diligence published by a well-known law and public policy publisher EBC (Eastern Book Company) has been reviewed as a must-read book for any professional looking to be a part of India's vibrant M&A industry. The book has been highly recommended as an outstanding handbook or operating manual for those engaging in due diligence relating to business or asset acquisitions.
Batra's other book, which received global acclaim and appreciation, is titled Mediation – Legitimacy & Practice. Written as part of a layman series, the book carries a foreword by Justice R.C. Lahoti, former Chief Justice of India, and was reviewed by K.K. Venugopal, Attorney General of India.
Criticism
It has been noticed that Hemant can be unnecessarily critical of newer legislation and even of verdicts passed by courts of law. As Secretary General of SAARCLAW, he made a controversial public statement that parliamentarians in the South Asian region had not been able to meet the expectations of their people.
He made a controversial statement in the leading English newspaper The Tribune, expressing concern over the alleged ill-treatment of Mr Justice Rana Bhagwandas, the first Hindu to be appointed Acting Chief Justice of Pakistan, and his family members by immigration authorities at the Wagah border on 29 March 2006. He also released a controversial report, in collaboration with UNDP, seeking stronger legal protection for women in health-care settings; the report identifies many gaps in South Asian laws but is heavily women-specific and gender-based, creating dissension between genders, and it has come under heavy criticism for being lop-sided.
Critics contend that Batra merely wants to be in the news by speaking on contentious and controversial issues. In 2007, he made a sweeping allegation against residents of big cities, stating that they held large amounts of unaccounted cash which they were investing in far-off agricultural lands in remote states, with the motive of eventually blending their black money with agricultural income, and he branded such people as money launderers. Batra was also quoted in The Pulitzer Center on Crisis Reporting as having said that "India is today the most isolated country within its region", a strong comment against India's foreign policy.
During the 10th SAARC Chief Justices Conference, Hemant Batra made a sweeping statement describing SAARCLAW's US$1 million grant from the Asian Development Bank as a landmark achievement; some delegates from Bangladesh were critical of the grant, arguing that it placed SAARCLAW in a subordinate position to the ADB.
He recently supported the move by the Narendra Modi BJP government in India to massively enhance fines and penalties for traffic violations. While many in India have opposed such enhancement as arbitrary, arguing that India is a developing country and that the measure will lead to more corruption on Indian streets, Batra supports the government on the ground that even neighbouring Sri Lanka has enhanced its penalties almost tenfold.
Honorary positions
Elected Vice President of SAARCLAW (2016–2018)
Re-elected Secretary General of SAARCLAW (2014–2016)
Elected Secretary-General of SAARCLAW (2011–2013)
Founder of AKNOI
Chairperson of IICLAM
Expert-Observer for a hearing of the United Nation's Global Commission on HIV and the Law
Member of the advisory board of Organization for International Cooperation (OIC)
Member of the leadership programme committee of ICAAP11
Chairman of the panel of Jury of prestigious Asia Corporate Excellence & Sustainability Awards (ACES)
Active Elected Member of Union of International Associations (UIA)
References
1967 births
20th-century Indian lawyers
Living people
Panjab University alumni
Punjabi people
Indian lawyers
Law firm founders
Secretaries General of the South Asian Association for Regional Cooperation
|