text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
How to reset selected file with input tag file type in Angular 9?
To reset the selected file with input tag file type, we can implement a method which on invocation will clear the selected file.
Approach:
- Set the id of your input using ‘#’ to make it readable inside the component.
- Now in the component we can use ViewChild directive to read the value of the input from the HTML view.
- For that, import ViewChild from @angular/core.
import { ViewChild } from '@angular/core';
- ViewChild allows you to set a reference variable to your input, using that you can clear the value of input.
- After clearing the value of input using the reference variable, the selected file will be reset.
Example:
We will create a similar scenario, there will be an input field and the task will be to create a button which on click will reset the selected file.
Javascript
Output:
After selecting the file:
After pressing clear button:
My Personal Notes arrow_drop_up | https://www.geeksforgeeks.org/how-to-reset-selected-file-with-input-tag-file-type-in-angular-9/?ref=lbp | CC-MAIN-2022-40 | refinedweb | 163 | 61.77 |
Somewhat delayed from the rest of the Defold Engine tutorial series, I realized there was a major gap in subjects we covered... GUI programming. Defold actually has some really solid UI functionality built in, and in this tutorial we are going to look at the basics of handling a UI in Defold. Well very conventional in it’s approach, if you’ve gotten accustomed to the Defold way of doing things, you will find Defold’s approach to handling UI remarkably consistent to the way you do just about everything else using the Defold game engine.
As always, there is an HD video version of this tutorial available here.
In this tutorial we are going to implement the simplest of UI, but all of the concepts of a much more complicated interface can easily be extrapolated from what we cover here. We are going to create a UI that pops up when the user hits the ESC key, dimming the existing screen, showing the UI, handling input, then un-dimming the UI. There is a ton more UI functionality available in Defold, but it should be fairly easy to pick it up once you’ve got the basics down. So without further ado, lets jump right in.
A UI in Defold consists of two different file types, a .gui and a .gui_script file. A gui_script file is just a traditional lua script file, but has access to the gui namespace. Let’s take a step by step look at creating a UI that appears when you hit the ESC key and handles a simple button being clicked.
First we need a font, drag any applicable TTF file to your project. I personally created a folder called MyFont and dragged it there. Next I created a Font file in the same folder.
Next open the newly created Font file and select the ttf file you just imported. Please note that the filter is broken and you should manually type *.ttf to locate your font.
After selecting the font I also increased the size to 30pts. This is aesthetic and purely optional.
Now that we have a font selected, we need to define an Input map. I fully describe the process here for more information on how Input Maps work in Defold. This are the bindings I created:
Next we create our gui file. Simply select Gui File:
We can use the Gui File to arrange the controls that make up our UI. Here are the options:
In this simple UI we are simply going to create a button by creating a box then adding a text label to it. We will also create a label telling the user to click the button. First we need to set up our font that we created earlier. In the GUI editor, right click Fonts and select Add Font:
When prompted, select the font you just created:
Now right click Nodes and add a Box. Once created it can be positioned by hitting W then moving manually.
Next right click the newly created box node and create a child Text field:
Child nodes automatically inherit the position of their parents. With the Text node selected, lets set it’s font and text, like so:
I also created another Text field telling the user to click the button. This one in the root of the GUI hierarchy and not parented to the box. You’re outline should look something like:
While your Gui layout should look something like:
Now that we have a gui file, let’s right a script that will display it. In our main.collection I simply create a new script and attach it to the default logo objecct.
Now of course we need to add the UI to our game object. Create a new root level GameObject named UI, then add component from file and select our gui file:
So now you main.collection should look something like:
Now we enter the following code for main.script:
function init(self)
-- we want focus and to hide our UI until needed
msg.post(".", "acquire_input_focus")
msg.post("UI","disable");
end
function on_message(self, message_id, message, sender)
-- Wait for a message from the UI layer that the UI has been dismissed
-- un-dim our sprite
if(message_id == hash("DONE_UI")) then
go.set("#sprite","tint.w",1.0)
end
end
function on_input(self, action_id, action)
-- If the user hits the ESC show the UI and dim our sprite
if(action_id == hash("ESC") and action.released) then
-- UI needed now, reenable
msg.post("UI","enable")
go.set("#sprite","tint.w",0.2)
end
end
This code does a couple things, first on input it tells Defold we want to get input messages. We also start out by disabling the UI, by sending it the built-in message disable. When the user actually hits the escape key, we send a message to re-enable the UI layer. We also dim the logo sprite so it’s not visually in focus when the UI is active. Also note we wait for the DONE_UI message to undim our sprite. This is sent by the UI script, which we will create now.
If you select your .gui file, in the properties you will notice there is a setting for Script.
There is a special kind of script, .gui_script, that is used to control gui objects, the Gui Script File. Let’s create one in the same folder as our .gui file:
This is a standard lua script, but it has access to the gui namespace. Once you’ve created your gui_script, set it as the script for your gui file. Now enter the following script:
function init(self)
-- We want input control. AKA, pump input to the UI
msg.post(".", "acquire_input_focus")
end
function on_message(self, message_id, message, sender)
-- Expect to be enabled by main when needed. Acquire focus and set text back to click me
if(message_id == hash("enable")) then
msg.post(".", "acquire_input_focus")
gui.set_text(gui.get_node("text"),"Click Me")
end
end
function on_input(self, action_id, action)
-- handle left clicks. On left click, see if click was within box
-- if so change our text (this wont actually be seen), disable ourself and pass a message back
-- to logo that we are done so it can undim itself
if(action_id == hash("LEFT_CLICK") and action.released == true) then
local box = gui.get_node("box")
if(gui.pick_node(box,action.x,action.y)) then
local text = gui.get_node("text")
gui.set_text(text,"Clicked")
msg.post(".","disable")
msg.post(".","release_input_focus")
msg.post("/logo",hash("DONE_UI"))
end
end
end
This code waits for the enable message then sets input focus so the gui can receive input messages from Defold. It also illustrates how you could change the text of a component within the gui. The most important logic is in the on_input event handler. We wait for the LEFT_CLICK input. We then check to see if the click was within our box, if so we set the text of our control ( which is somewhat pointless as it’s about to be hidden! ) to “Clicked”, disable our self, release input focus then send the message DONE_UI back to main.script. Now if you run the code:
Of course we only scratched the surface of what you can do in a Defold gui, but that should certainly get you started!
Programming
Tutorial Defold Lua
Welcomex512 pixel image composed of 64x:
Design Programming
Lua Defold | http://www.gamefromscratch.com/?tag=/Lua | CC-MAIN-2018-43 | refinedweb | 1,235 | 71.95 |
Intermediate Tutorial 2
From Ogre Wiki
Intermediate Tutorial 2: RaySceneQueries and Basic Mouse Usage
Any problems you encounter while working with this tutorial should be posted to the Help Forum.
Introduction
In this tutorial we will create the beginnings of a basic Scene Editor. During this process, we will cover:
- How to use RaySceneQueries to keep the camera from falling through the terrain
- How to use the MouseListener and MouseMotionListener interfaces
- Using the mouse to select x and y coordinates on the terrain
You can find the code for this tutorial here. As you go through the tutorial you should be slowly adding code to your own project and watching the results as we build it.
Prerequisites
This tutorial will assume that you already know how to set up an Ogre project and make it compile successfully. Knowledge of basic Ogre objects (SceneNodes, Entities, etc) is assumed. You should also be familiar with basic STL iterators, as this tutorial uses them. (Ogre also uses a lot of STL, if you are not familiar with it, you should take the time to learn it.)
Getting Started
First, you need to create a new project for the demo. Add a file called "MouseQuery.cpp" to the project, and add this to it:
#include <CEGUI/CEGUISystem.h> #include <CEGUI/CEGUISchemeManager.h> #include <OgreCEGUIRenderer.h> #include "ExampleApplication.h" class MouseQueryListener : public ExampleFrameListener, public OIS::MouseListener { public: MouseQueryListener(RenderWindow* win, Camera* cam, SceneManager *sceneManager, CEGUI::Renderer *renderer) : ExampleFrameListener(win, cam, false, true), mGUIRenderer(renderer) { } // MouseQueryListener ~MouseQueryListener() { } bool frameStarted(const FrameEvent &evt) { return ExampleFrameListener::frameStarted(evt); } /* MouseListener callbacks. */ bool mouseMoved(const OIS::MouseEvent &arg) { return true; } bool mousePressed(const OIS::MouseEvent &arg, OIS::MouseButtonID id) { return true; } bool mouseReleased(const OIS::MouseEvent &arg, OIS::MouseButtonID id) { return true; } protected: }; class MouseQueryApplication : public ExampleApplication { protected: CEGUI::OgreCEGUIRenderer *mGUIRenderer; CEGUI::System *mGUISystem; // cegui system public: MouseQueryApplication() { } ~MouseQueryApplication() { } protected: void chooseSceneManager(void) { // Use the terrain scene manager. mSceneMgr = mRoot->createSceneManager(ST_EXTERIOR_CLOSE); } void createScene(void) { } void createFrameListener(void) { mFrameListener = new MouseQueryListener(mWindow, mCamera, mSceneMgr, mGUIRenderer); mFrameListener->showDebugOverlay(true); mRoot->addFrameListener(mFrameListener); } }; #if OGRE_PLATFORM == PLATFORM_WIN32 || OGRE_PLATFORM == OGRE_PLATFORM_WIN32 #define WIN32_LEAN_AND_MEAN #include "windows.h" INT WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR strCmdLine, INT) #else int main(int argc, char **argv) #endif { // Create application object MouseQueryApplication app; try { app.go(); } catch(Exception& e) { #if OGRE_PLATFORM == OGRE_PLATFORM_WIN32 MessageBox(NULL, e.getFullDescription().c_str(), "An exception has occurred!", MB_OK | MB_ICONERROR | MB_TASKMODAL); #else fprintf(stderr, "An exception has occurred: %s\n", e.getFullDescription().c_str()); #endif } return 0; }
Be sure this code compiles before continuing.
Setting up the Scene
Go to the MouseQueryApplication::createScene method. The following code should all be familiar. If you do not know what something does, please consult the Ogre API reference before continuing. Add this to createScene:
// Set ambient light mSceneMgr->setAmbientLight(ColourValue(0.5, 0.5, 0.5)); mSceneMgr->setSkyDome(true, "Examples/CloudySky", 5, 8); // World geometry mSceneMgr->setWorldGeometry("terrain.cfg"); // Set camera look point mCamera->setPosition(40, 100, 580); mCamera->pitch(Degree(-30)); mCamera->yaw(Degree(-45));
Now that we have the basic world geometry set up, we need to turn on the cursor. We do this using some CEGUI function calls. Before we can do that, however, we need to start up CEGUI. We first create an OgreCEGUIRenderer, then we create the System object and give it the Renderer we just created. I will leave the specifics of setting up CEGUI for a later tutorial, just know that you always have to tell CEGUI which SceneManager you are using with the last paramater to mGUIRenderer.
// CEGUI setup mGUIRenderer = new CEGUI::OgreCEGUIRenderer(mWindow, Ogre::RENDER_QUEUE_OVERLAY, false, 3000, mSceneMgr); mGUISystem = new CEGUI::System(mGUIRenderer);
Now we need to actually show the cursor. Again, I'm not going to explain most of this code. We will revisit it in a later tutorial.
// Mouse CEGUI::SchemeManager::getSingleton().loadScheme((CEGUI::utf8*)"TaharezLookSkin.scheme"); CEGUI::MouseCursor::getSingleton().setImage("TaharezLook", "MouseArrow");
If you compile and run the code, you will see a cursor at the center of the screen, but it will not move (yet).
Introducing the FrameListener
That was all that needed to be done for the application. The FrameListener is the complicated portion of the code, so I will spend some time outlining what we are trying to accomplish with the application so you have an idea before we start implementing it.
- First, we want to bind the right mouse button to a "mouse look" mode. It's fairly annoying not being able to use the mouse to look around, so our first priority will be making adding mouse control back to the program (though only when we hold the right mouse button down).
- Second, we want to make it so that the camera does not pass through the Terrain. This will make it closer to how we would expect program like this to work.
- Third, we want to add entities to the scene anywhere on the terrain we left click.
- Finally, we want to be able to "drag" entities around. That is by left clicking and holding the button down we want to see the entity, and move him to where we want to place him. Letting go of the button will actually lock him into place.
To do this we are going to use several protected variables (these are already added to the class):
The mRaySceneQuery variable holds a copy of the RaySceneQuery we will be using to find the coordinates on the terrain. The mLMouseDown and mRMouseDown variables will track whether we have the mouse held down (IE mLMouseDown is true when the user holds down the left mouse button, false otherwise). mCount counts the number of entities we have on screen. mCurrentObject holds a pointer to the most recently created SceneNode that we have created (we will be using this to "drag" the entity around). Finally, mGUIRenderer holds a pointer to the CEGUI Renderer, which we will be using to update CEGUI with.
Also note that there are many functions related to Mouse listeners. We will not be using all of them in this demo, but they must be there or the compiler will complain that you did not define them.
Setting up the FrameListener
Go to the MouseQueryListener constructor, and add the following inialization code. Note that we are also reducing the movement speed and rotation speed since the Terrain is fairly small.
// Setup default variables mCount = 0; mCurrentObject = NULL; mLMouseDown = false; mRMouseDown = false; mSceneMgr = sceneManager; // Reduce move speed mMoveSpeed = 50; mRotateSpeed /= 500;
In order for the MouseQueryListener to receive mouse events, we must register it as a MouseListener. If any of this is unfamiliar, please consult Basic Tutorial 5
// Register this so that we get mouse events. mMouse->setEventCallback(this);
Finally, in the constructor we need to create the RaySceneQuery object. This is done with a call to the SceneManager:
// Create RaySceneQuery mRaySceneQuery = mSceneMgr->createRayQuery(Ray());
This is all we need for the constructor, but if we create a RaySceneQuery, we must later destroy it. Go to the MouseQueryListener destructor (~MouseQueryListener) and add the following line:
// We created the query, and we are also responsible for deleting it. mSceneMgr->destroyQuery(mRaySceneQuery);
Be sure you can compile your code before moving on to the next section.
Adding Mouse Look
We are going to bind the mouse look mode to the right mouse button. To do this, we are going to:
- update CEGUI when the mouse is moved (so that the cursor is also moved)
- set mRMouseButton to be true when the mouse is pressed
- set mRMouseButton to be false when it is released
- change the view when the mouse is "dragged" (that is held down and moved)
- hide the mouse cursor when the mouse is dragging
Find the MouseQueryListener::mouseMoved method. We will be adding code to move the mouse cursor every time the mouse has been moved. Add this code to the function:
// Update CEGUI with the mouse motion CEGUI::System::getSingleton().injectMouseMove(arg.state.X.rel, arg.state.Y.rel);
Now find the MouseQueryListener::mousePressed method. This chunk of code hides the cursor when the right mouse button goes down, and sets the mRMouseDown variable to true.
// Left mouse button down if (id == OIS::MB_Left) { mLMouseDown = true; } // if // Right mouse button down else if (id == OIS::MB_Right) { CEGUI::MouseCursor::getSingleton().hide(); mRMouseDown = true; } // else if
Next we need to show the mouse cursor again and toggle mRMouseDown when the right button is let up. Find the mouseReleased function, and add this code:
// Left mouse button up if (id == OIS::MB_Left) { mLMouseDown = false; } // if // Right mouse button up else if (id == OIS::MB_Right) { CEGUI::MouseCursor::getSingleton().show(); mRMouseDown = false; } // else if
Now we have all of the prerequisite code written, we want to change the view when the mouse is moved while holding the right button down. What we are going to do is read the distance it has moved since the last time the method was called. This is done in the same way that we rotated the camera in Basic Tutorial 5. Find the MouseQueryListener::mouseMoved function and add the following code just before the return statement:
// If we are dragging the left mouse button. if (mLMouseDown) { } // if // If we are dragging the right mouse button. else if (mRMouseDown) { mCamera->yaw(Degree(-arg.state.X.rel * mRotateSpeed)); mCamera->pitch(Degree(-arg.state.Y.rel * mRotateSpeed)); } // else if
Now if you compile and run this code you will be able to control where the camera looks by holding the right mouse button down.
Terrain Collision Detection
We are now going to make it so that when we move towards the terrain, we cannot pass through it. Since the BaseFrameListener already handles moving the camera, we are not going to touch that code. Instead, after the BaseFrameListener moves the camera we are going to make sure the camera is 10 units above the terrain. If it is not, we are going to move it there. Please follow this code closely. We will use the RaySceneQuery to do several other things by the time this tutorial is finished, and I will not go into as much detail after this section.
Go to the MouseQueryListener::frameStarted method and remove all of the code from the method. The first thing we are going to do is call the ExampleFrameListener::frameStarted method to do all of its normal functions. If it returns false, we will return false as well.
// Process the base frame listener code. Since we are going to be // manipulating the translate vector, we need this to happen first. if (!ExampleFrameListener::frameStarted(evt)) return false;
We do this at the top of our frameStarted function because the ExampleFrameListener's frameStarted member function moves the camera, and we need to perform the rest our actions in this function after this happens. Our goal is to find the camera's current position, and fire a Ray straight down it into the terrain. This is called a RaySceneQuery, and it will tell us the height of the Terrain below us. After getting the camera's current position, we need to create a Ray. A Ray takes in an origin (where the ray starts), and a direction. In this case our direction will be NEGATIVE_UNIT_Y, since we are pointing the ray straight down. Once we have created the ray, we tell the RaySceneQuery object to use it.
// Setup the scene query Vector3 camPos = mCamera->getPosition(); Ray cameraRay(Vector3(camPos.x, 5000.0f, camPos.z), Vector3::NEGATIVE_UNIT_Y); mRaySceneQuery->setRay(cameraRay);
Note that we have used a height of 5000.0f instead of the camera's actual position. If we used the camera's Y position instead of this height we would miss the terrain entirely if the camera is under the terrain. Now we need to execute the query and get the results. The results of the query comes in the form of an std::iterator, which I will briefly describe.
// Perform the scene query RaySceneQueryResult &result = mRaySceneQuery->execute(); RaySceneQueryResult::iterator itr = result.begin();
The result of the query is basically (oversimplification here) a list of worldFragments (in this case the Terrain) and a list of movables (we will cover movables in a later tutorial). If you are not familiar with STL iterators, just know that to get the first element of the iterator, call the begin method. If the result.begin() == result.end(), then there were no results to return. In the next demo we will have to deal with multiple return values for SceneQuerys. For now, we'll just do some hand waving and move through it. The following line of code ensures that the query returned at least one result ( itr != result.end() ), and that the result is the terrain (itr->worldFragment).
// Get the results, set the camera height if (itr != result.end() && itr->worldFragment) {
The worldFragment struct contains the location where the Ray hit the terrain in the singleIntersection variable (which is a Vector3). We are going to get the height of the terrain by assigning the y value of this vector to a local variable. Once we have the height, we are going to see if the camera is below the height, and if so we are going to move the camera up to that height. Note that we actually move the camera up by 10 units. This ensures that we can't see through the Terrain by being too close to it.
Real terrainHeight = itr->worldFragment->singleIntersection.y; if ((terrainHeight + 10.0f) > camPos.y) mCamera->setPosition( camPos.x, terrainHeight + 10.0f, camPos.z ); } return true;
Lastly, we return true to continue rendering. At this point you should compile and test your program.
Terrain Selection
In this section we will be creating and adding objects to the screen every time you click the left mouse button. Every time you click and hold the left mouse button, an object will be created and "held" on your cursor. You can move the object around until you let go of the button, at which point it will lock into place. To do this we are going to need to change the mousePressed function to do something different when you click the left mouse button. Find the following code in the MouseQueryListener::mousePressed function. We will be adding code inside this if statement.
// Left mouse button down if (id == OIS::MB_Left) { mLMouseDown = true; } // if
The first piece of code will look very familiar. We will be creating a Ray to use with the mRaySceneQuery object, and setting the Ray. Ogre provides us with Camera::getCameraToViewportRay; a nice function that translates a click on the screen (x and y coordinates) into a Ray that can be used with a RaySceneQuery object.
// Left mouse button down if (id == OIS::MB_Left) { // Setup the ray scene query, use CEGUI's mouse position);
Next we will execute the query and make sure it returned a result.
// Execute query RaySceneQueryResult &result = mRaySceneQuery->execute(); RaySceneQueryResult::iterator itr = result.begin( ); // Get results, create a node/entity on the position if (itr != result.end() && itr->worldFragment) {
Now that we have the worldFragment (and therefore the position that was clicked on), we are going to create the object and place it on that position. Our first difficulty is that each Entity and SceneNode in ogre needs a unique name. To accomplish this we are going to name each Entity "Robot1", "Robot2", "Robot3"... and each SceneNode "Robot1Node", "Robot2Node", "Robot3Node"... and so on. First we create the name (consult a reference on C for more information on sprintf).
char name[16]; sprintf( name, "Robot%d", mCount++ );
Next we create the Entity and SceneNode. Note that we use itr->worldFragment->singleIntersection for our default position of the Robot. We also scale him down to 1/10th size because of how small the terrain is. Be sure to take note that we are assigning this newly created object to the member variable mCurrentObject. We will be using that in the next section.
Entity *ent = mSceneMgr->createEntity(name, "robot.mesh"); mCurrentObject = mSceneMgr->getRootSceneNode()->createChildSceneNode(String(name) + "Node", itr->worldFragment->singleIntersection); mCurrentObject->attachObject(ent); mCurrentObject->setScale(0.1f, 0.1f, 0.1f); } // if mLMouseDown = true; } // if
Now compile and run the demo. You can now place Robots on the scene by clicking anywhere on the Terrain. We have almost completed our program, but we need to implement object dragging before we are finished. We will be adding code inside this if statement:
// If we are dragging the left mouse button. if (mLMouseDown) { } // if
This next chunk of code should be self explanatory now. We create a Ray based on the mouse's current location, we then execute a RaySceneQuery and move the object to the new position. Note that we don't have to check mCurrentObject to see if it is valid or not, because mLMouseDown would not be true if mCurrentObject had not been set by mousePressed.
if (mLMouseDown) {); RaySceneQueryResult &result = mRaySceneQuery->execute(); RaySceneQueryResult::iterator itr = result.begin(); if (itr != result.end() && itr->worldFragment) mCurrentObject->setPosition(itr->worldFragment->singleIntersection); } // if
Compile and run the program. We are now finished! Your result should look something like this, after some strategic clicking:
Note: You (= the Ray's origin) must be over the Terrain for the RaySceneQuery to report the intersection when using the TerrainSceneManager.
Note: If you are using your own framework, make sure your scene query has access to the frame listener, e.g. in your frameStarted() method. Otherwise, if you use it in an init() function you may get no results.
Exercises for Further Study
Easy Exercises
- To keep the camera from looking through the terrain, we chose 10 units above the Terrain. This selection was arbitrary. Could we improve on this number and get closer to the Terrain without going through it? If so, make this variable a static class member and assign it there.
- We sometimes do want to pass through the terrain, especially in a SceneEditor. Create a flag which turns toggles collision detection on and off, and bind this to a key on the keyboard. Be sure you do not make a SceneQuery in frameStarted if collision detection is turned off.
Intermediate Exercises
- We are currently doing the SceneQuery every frame, regardless of whether or not the camera has actually moved. Fix this problem and only do a SceneQuery if the camera has moved. (Hint: Find the translation vector in ExampleFrameListener, after the function is called test it against Vector3::ZERO.)
Advanced Exercises
- Notice that there is a lot of code duplication every time we make a scene query call. Wrap all of the SceneQuery related functionality into a protected function. Be sure to handle the case where the Terrain is not intersected at all.
Exercises for Further Study
- In this tutorial we used RaySceneQueries to place objects on the Terrain. We could have used it for many other purposes. Take the code from Tutorial 1 and complete Difficult Question 1 and Expert Question 1. Then merge that code with this one so that the Robot now walks on the terrain instead of empty space.
- Add code so that every time you click on a point on the scene, the robot moves to that location.
- Proceed to Intermediate Tutorial 3 Mouse Picking (3D Object Selection) and SceneQuery Masks | http://www.ogre3d.org/wiki/index.php/Intermediate_Tutorial_2 | crawl-002 | refinedweb | 3,191 | 54.83 |
optional 0.7.1
An optional/maybe type with safe dispatchingrange semantics
To use this package, put the following dependency into your project's dependencies section:
Optional type for D with safe dispatching and NotNull type
Full API docs available here
- Summary
- What about std.typecons.Nullable std.range.only?
- Motivation for Optional
- Scala we have a Swift comparison
- Examples
- Example Optional!T usage
- Example dispatch usage
- Example NotNull!T usage
Summary
The purpose of this library is two fold, to provide types that:
- Eliminate null dereferences - Aka the Billion Dollar Mistake.
- Show an explicit intent of the absence of a value
- Safe (non crashing) array access
This is done with the following:
Optional!T: Represents an optional data type that may or may not contain a value that acts like a range.
NotNull!T: Represents a type that can never be null.
dispatch: A null-safe dispatching utility that allows you to call methods on possibly null values (including optionals, and
std.typecons.Nullable)
An
Optional!T signifies the intent of your code, works as a range and is therefor useable with Phobos algorithms, and allows you to call methods and operators on your types even if they are null references - i.e. safe dispatching.
You can use this library:
- When you need a type that may have a value or may not (
Optional!Type)
- When you want to safely dispatch on types (
possibleNullClass.dispatch.someFuncion // safe)
- When you want a guaranteed non null object (
NotNull!Type)
- When you want to not crash with array access (
some([1, 2])[7] == none // no out of bounds exception)
What about
std.typecons.Nullable and
std.range.only?
It is NOT like the
Nullable type in Phobos.
Nullable is basically a pointer and applies pointer semantics to value types. It does not give you any safety guarantees and says nothing about the intent of "I might not return a value". Whereas
Optional signifies intent on both reference and value types, and is safe to use without the need to check
isNull before every usage.
It is also NOT like
std.range.only. D's
only cannot be used to signify intent of a value being present or not, nor can be used for safe dispatching, nor the result of
only(value) be passed around. It's only (heh) usage is to create a range out of a value so that values can act as ranges and be used seamlessly with
std.algorithms. This
Optional has a type constructor -
some - that can be used for this purpose as well.
Motivation for Optional
Lets = john.dispatch").orEl } }
Also like in Swift, you can unwrap an optional to get at it's value:
D
auto str = "123"; if (auto number = toInt(str).unwrap) { writeln(*number); } else { writeln("could not convert string ", str); }
Swift
let string = "123" if let number = Int(str) { print(number) // was successfully converted } else { print("could not convert string \(string)") }
Examples
The following section has example usage of the various types
Example Optional!T usage
import optional; // Create empty optional auto a = no!int; assert(a == none); ++a; // none; a - 1; // none; // Assign and try doing the same stuff a = 9; assert(a == some(9)); ++a; // some(10); a - 1; // some(9); // Acts like a range as well import std.algorithm: map; import std.conv: to; auto b = some(10); auto c = no!int; b.map!(to!double); // [10.0] c.map!(to!double); // empty auto r = b.match!( (int a) => "yes", () => "no", ); assert(r == "yes");
Example NotNull!T usage
static class C { void f() {} } static struct S { void f() {} } void f0(NotNull!C c) { c.f(); } void f1(NotNull!(S*) sp) { sp.f(); } auto c = notNull!C; auto sp = notNull!(S*); f0(c); f1(sp); static assert(!__traits(compiles, { c = null; })); static assert(!__traits(compiles, { sp = null; })); static assert(!__traits(compiles, { c = new C; }));
Example dispatch usage
// Safely dispatch to whatever inner type is struct A { struct Inner { int g() { return 7; } } Inner inner() { return Inner(); } int f() { return 4; } } auto d = some(A()); // Dispatch to one of its methods d.dispatch.f(); // calls a.f, returns some(4) d.dispatch.inner.g(); // calls a.inner.g, returns some(7) // Use on a pointer or reference type as well A* e = null; // If there's no value in the reference type, dispatching works, and produces an optional assert(e.dispatch.f() == none); assert(e.dispatch.inner.g() == none);
- Registered by ali akhtarzada
- 0.7.1 released 24 days ago
- aliak00/optional
- MIT
- Authors:
-
- Dependencies:
- bolts
- Versions:
- Show all 14 versions
- Download Stats:
10 downloads today
60 downloads this week
203 downloads this month
605 downloads total
- Score:
- 2.8
- Short URL:
- optional.dub.pm | http://code.dlang.org/packages/optional | CC-MAIN-2018-43 | refinedweb | 783 | 57.67 |
As title says, my random essays about microcontrollers, all in one package
This isn't probably the most exciting post in here, but since I'm author of firmware for #Badge for Hackaday Conference 2018 in Belgrade , I'm often asked how to update firmware in the badge, so I decided to write it down to single place to have reference point.
Omitting the most obvious ways (buying PicKit3 or PicKit4 and using MPLABX IDE or IPE tool for this task), there are other ways how to achieve the goal - from the two I have on my mind, both revolving around great piece of software, called pic32prog. You can buy PicKit2 "clone" (PicKit2 was open-source design made and released by Microchip, so those clones are more like derivative works) and use it with pic32prog, or alternatively you can build bitbanged loader using arduino. I used cheap atmega328 nano board from usual vendors, costing me something like 2USD.
Hook 3,3kOhm pull-up to D2 and D3 pins, then connect
D4 - 1kOhm resistor - MCLR
D3 - 1kOhm resistor - B0
D2 - 1kOhm resistor - B1
GND - GND
as you can see on picture
Clone git or download zip from github, extract and if you don't want to compile it, look for precompiled binary for your OS.
Now turn the arduino into PIC32 programmer by running
pic32prog -d:ascii:SERIAL -b3
where SERIAL is your serial port. For windows it's COMx, for linux it's /dev/ttyUSBx. This should load firmware into the arduino.
From this point on, this setup should be able to flash new firmware into badge, or any supported PIC32. Run pic32prog -d:ascii:/SERIAL file.hex
pic32prog -d:ascii:SERIAL file.hex
pic32prog should find the programmer and if wired properly, also the target. Don't forget to have fresh batteries installed in badge.
The rest should look like this
Notice the full FLASH loading is really slow - that's what you pay for trivial programmer from parts you have in your drawer - but acceptable for occasional firmware update.
As I was playing with my LLG project, I spent a few moments with exploring XC16 compiler.
Three facts are known about this one
Honestly, I'm OK with all three points for hobby projects, though I try to use 8- or 32-bitters, where open source compilers are available. Less known fact is that source codes of XC16 are available and free to download, probably mostly to satisfy GNU license requirements. Better than -O1 compiler options are fine for squeezing last bits of optimization efforts, though - that's why the paid version exists. Though the sources are available, in professional circles not everybody will spent their expensive time building the compiler (that is far from being trivial exercise) with nobody to ask questions, so they buy directly the full version plus support from Microchip...
...or something. In fact, I'm able to use optimizations higher than -O1 on free version. Compiler complains I have no valid license, but the code builds and runs just fine, with apparent results of compiler optimization efforts. That's what i did on LLG, where I built the code with -O3 and code execution is indeed faster than with -O1. That is where story could end, but I went further.
I downloaded soruces for XC16 1.33 from here. The archive contains almost 10.000 files in 5200 directories, so I unziped it on temporary location. In directory \v1.33.src\src\XC_GCC\gcc\gcc\config are all targets, including the ones for XC16 /PIC30/ - because originally the compiler was meant for dsPIC30 DSPs, PIC24 and dsPIC33 being derivatives of dsPIC30) - as well as PIC32 /PIC32/.
in PIC30 directory there are files pic30.opt and pic30.c being of interest. At line 3707 of file pic30.c, there is block of code
#elif defined(LICENSE_MANAGER_XCLM) if (pic30_license_valid < MCHP_XCLM_VALID_PRO_LICENSE) { invalid = (char*) "restricted"; if (pic30_license_valid < MCHP_XCLM_VALID_STANDARD_LICENSE) { nullify_O2 = 1; } nullify_Os = 1; nullify_O3 = 1; } #endif #define NULLIFY(X) \ if ((X) && (message_displayed++ == 0)) \ fnotice(stderr,"Options have been disabled due to %s license\n" \ "Visit to purchase a new key.\n", \ invalid); \
The variable 'pic30_license_valid' is being set on results of xclm, license checker. So this is where optimizations warning are being emitted. Never mind, lets look further.
#ifdef LICENSE_MANAGER_XCLM if (mchp_mafrlcsj) { pic30_license_valid = MCHP_XCLM_VALID_PRO_LICENSE; } else if (mchp_skip_license_check) { pic30_license_valid = -1; } else { pic30_license_valid = get_license(); }
By setting proper value into 'mchp_mafrlcsj' we can omit the license check. The option is entered via command line entry, being described in pic30.opt file, line 228:
mafrlcsj Target RejectNegative Var(mchp_mafrlcsj) Undocumented
So, entering -mafrlcsj option into command line should be equal to having proper license. When compiling from command line using make or similar tool, it should be straightforward, within MPLABX IDE it works like this:
I created file cmd.txt containing single line
*cc1:+ -mafrlcsj
and in project settings I opted to use this file
and hit compile
Notice the resulting binary is indeed a bit smaller, though at -O1 optimization (as if the optimization beyond -01 would be prohibited) is the binary even bigger - not sure about this one.
I took my LLG sources and performed tests on them (code size and execution time of geolocation algorithm), using all levels of optimizations with and without additional options as desribed here.
It's apparent that with options the compiler tries a bit harder. At Os (where code size is main factor) it gets 20B lower, at O3 (where speed is at premium, code size is secondary) it indeed runs a bit faster.
So, what is described here is option to get full optimization level of XC16 compiler without need to recompile...Read more »
Offering from Infineon piqued my interest lately, as the price for Cortex M0 with 32k FLASH and 16k RAM in small package for under 1EUR from EU stores isn't bad. I bought one XMC1301T016F0032ABXUMA1 (oh my) and tiny devboard XMC2Go and Infineon is kind enough to provide some design resources. +1 point from me.
They also provide IDE named DAVE based on Eclipse and GCC, though being only offered for Windows. Meh, I don't mind it much, as I'm going to switch to native Linux tools as soon as possible, vendor IDE is usually good enough to get basic grasp and move on to something useful later. But anyway, I would be happier if they provided also non-windows variant of the tool. It's late 2017, dominance of "Wintel" is not as strong as used to be.
So, IDE installed, let's run it and create new project. Nope.
Clicking to Next, but nothing happens. It just froze here. So again, Cancel, new project, Next... Next, Next, Next, Next, Next, Next, NEXT NEEEEXT... Screw you. Closing IDE, trying the same... no avail. No error messages, just non-responsive program. Restarting windows (because that's what always helps), then running IDE, the same result. I just can't start a project on freshly installed IDE.
Googling time... it reveals I'm not the only one with this issue. So, after I install the complete dedicated IDE from Infineon solely for Infineon MCUs, I need to install some more libraries for this dedicated Infineon IDE to actually support the Infineon MCUs. Go figure.
Since I don't want to spend any more time restarting everything and figuring out what everything is needed, I just install everything available. After one or two restarts of the IDE and half an hour spent dicking around I'm finally able to start a empty project. Hallelujah, I thought - at the time I didn't know the worst is still to come.
Hey Infineon, why on Earth the IDE dedicated to Infineon MCUs isn't able handling of Infineon MCUs? Why the plugins or whatever can't be installed from main install?
With the XMC2Go kit in USB port (being enumerated as J-Link debugger) I built the empty project and flashed the FLASH of target MCU on kit, everything went smoothly, so I considered this as done. But devkits are always just training wheels and you don't own the MCU unless you can buy virgin part, stick it to PCB and blink a LED with it. I took out my XMC1301T016F0032ABXUMA1 (once you start diving into ARM world you notice the weird part numbers), soldered it onto one of my breakout boards and tested it for continuity and shorts.
For flashing the board, I elected for J-link EDU I had on hand, just to connect it. Pins 5 and 6 are GND/Vss, now I need SWD pins. And they are not listed in datasheet. Hey Infineon, you always list such as important pins in datasheet, no matter what. Datasheet is go-to resource, the first document one takes look at. I opened reference manual, being mere 1337 words long (oh 1337!) and sine nobody in his/her right mind would start reading it page by page I hit SWDCLK into full-text search. Notice how easy and fast I progressed here, since I knew what to search for, beginner would spend hours looking for what to look for.
I found the SWD pins are shared with other pins
depending on boot mode, but SWD are top ones, so probably are the default for given pins, what would be logical choice, as every other manufacturer has programming/debugging pins enabled by default. Isn't it logical? Yes it is. Is this the case of Infineon MCUs? No, it isn't. So I connected SWDCLK pin of J-link EDU to P0.15 and SWDIO to P0.14, asked J-link software (DAVE was off at the time) to find the target on SWD interface and... it didn't found anything (<insert Spaceballs reference here>). Never mind, check connections, try again, reboot, reinstall drivers, reinstall software, try in DAVE, install Keil uVision, reboot few more times, take out SMT32 Discovery, google, install J-link firmware on it, try...Read more »
Before the STM8S001J3 MCUs hit the streets, I received a few samples directly from ST Microeletronics to evaluate. ST finally jumped on a train ran by companies like Microchip or NXP for something like 20 years. It is coming from STM8S family, now being almost classic and well proven chip family.
The Good
STM8S001J3 is very similar to STM8S003F3 - so much similar it's probably the same silicon chip, actually. There is nothing inherently wrong about it, as it makes development cheaper - that is easier for manufacturer and cheaper for end customer - and allows to jump on existing development tools, again enabling customers to work with the MCUs easier and faster.
The Bad
On the other hand, dark side if this decision is easily recognizable. Fitting 20-pin chip into 8-pin frame forces designers into some compromises - which pins to expose, which ones to leave unconnected? Another option is to merge multiple bonding pads to single package pin - and that is route which was chosen in ST.
It allows multiple peripherals to be shared on single pin, which could be advantageous at times, also uncomfortable, as seen later.
From 8 pins of SOIC package, two are for power supply pins - there is not way doing it otherwise. One is sacrificed for VCap pin. As this one is based on STM8S, with supply voltage range up to 5V and relatively modern core manufactured on "dense" manufacturing process, innards of MCU are supplied by lower voltage, requiring internal LDO to do this job. VCap is pin to connect capacitor to keep this LDO stable - and this pin can't be used for anything else.
I wish ST would design also STM8L device in SOIC package. Apart from Vdd being maximum 3,6V (nothing unusual in last 10-15 years), this would free up one pin (now sacrificed to VCap) and sleep current consumption would be as low as usual for STM8L devices. STM8S (including STM8S001J3) do have sleep consumption of about 5-10uA, what is one order of magnitude higher than sleep current of MCU devices designed 15 years ago!
The Ugly
Now, with 5 pins, guys at ST had to decide what to do with the other ones. Higher pin count STM micros do have dedicated NRST (Not ReSeT - low active reset) pin, but from obvious reasons they decided to omit it on 8-pin STM8. And SWIM pin is shared with three other pins; being it not exactly ideal solution, as confirmed by datasheet too:
So, setup your data direction register into wrong state and there you have it - OTP device. We have seen really bad programming interfaces in the past - just like AVR. With more than one way of programming of FLASH, you can easily lock one or another access, having fall-back in 1980's parallel programming. That being said, once the fuses were setup correctly, one didn't have to touch it anymore and any further access was safe. On this device it's different. DDR registers have zero protection agains fuck-ups, from programmer side or even from runaway program - that is particularly dangerous, IMO. With STM8S001J3 you should always have a few spares and hot-air soldering gun for the case you do something goofy in your program. One solution of how to escape this problem is to set-up and use some kind of bootloader to load the FLASH.
While absolutely vital SWIM pin is shared with three other pins, PB4 is just alone. I wish it would be other way around. And alternative pin functions list don't mention MISO signal of SPI interface. Either MISO is forgotten in documentation, or it's forgotten to be bonded out. On STM8S003F3, MISO is on PC7, which is explicitly being listed as NC
I hope it's alternative function to some other pin, otherwise the SPI would be seriously crippled without MISO.
Resumé
All in all, while STM8 family seems to be one of the better 8-bit MCU design, this 8-pin variant does seem to suffer a bit from compromises during design stage. I really hope ST will release new 8-pin device, as there seems to be space for further optimizations. | https://hackaday.io/project/27250-mcu-how-tos-reviews-rants | CC-MAIN-2020-29 | refinedweb | 2,360 | 61.26 |
Provided by: libdpm-dev_1.13.0-1_amd64
NAME
dpns_getreplica - get the replica entries associated with a DPNS file in the name server
SYNOPSIS
#include <sys/types.h> #include "dpns_api.h" int dpns_getreplica (const char *path, const char *guid, const char *se, int *nbentries, struct dpns_filereplica **rep_entries)
DESCRIPTION
dpns_getreplica gets the replica entries associated with a DPNS file in the name server. The file can be specified by path name or by guid. If both are given, they must point at the same file. path specifies the logical pathname relative to the current DPNS directory or the full DPNS pathname. guid specifies the Grid Unique IDentifier. se allows to restrict the replica entries to a given SE. nbentries will be set to the number of entries in the array of replicas. rep_entries */), dpns_chdir(3)
AUTHOR
LCG Grid Deployment Team | http://manpages.ubuntu.com/manpages/eoan/man3/dpns_getreplica.3.html | CC-MAIN-2020-29 | refinedweb | 138 | 51.04 |
reposurgeon − surgical operations on repositories
reposurgeon [command...]
The purpose of reposurgeon is to enable risky operations that VCSes (version−control systems) don't want to let you do, such as (a) editing past comments and metadata, (b) excising commits, (c) coalescing and splitting commits, (d) removing files and subtrees from repo history, (e) merging or grafting two or more repos, and (f) cutting a repo in two by cutting a parent−child link, preserving the branch structure of both child repos.
A major use of reposurgeon is to assist a human operator to perform higher−quality conversions among version control systems than can be achieved with fully automated converters.
The original motivation for reposurgeon was to clean up artifacts created by repository conversions. It was foreseen that the tool would also have applications when code needs to be removed from repositories for legal or policy reasons.
To keep reposurgeon simple and flexible, it normally does not do its own repository reading and writing. Instead, it relies on being able to parse and emit the command streams created by git−fast−export and read by git−fast−import. This means that it can be used on any version−control system that has both fast−export and fast−import utilities. The git−import stream format also implicitly defines a common language of primitive operations for reposurgeon to speak.
Fully supported systems (those for which reposurgeon can both read and write repositories) include git, hg, bzr, svn, darcs, RCS, and SRC. For a complete list, with dependencies and technical notes, type prefer to the reposurgeon prompt.
Writing to the file−oriented systems RCS and SRC is done via rcs-fast-import(1) and has some serious limitations because those systems cannot represent all the metadata in a git−fast−export stream. Consult that tool's documentation for details and partial workarounds.
Writing Subversion repositories also has some significant limitations, discussed in the section on Working With Subversion.
Fossil repository files can be read in using the −−format=fossil option of the read command and written out with the −−format=fossil option of the write. Ignore patterns are not translated in either direction.
CVS is supported for read only, not write. For CVS, reposurgeon must be run from within a repository directory (one with a CVSROOT subdirectory). Each module becomes a subdirectory in the the reposurgeon representation of the change history.
In order to deal with version−control systems that do not have fast−export equivalents, reposurgeon can also host extractor code that reads repositories directly. For each version−control system supported through an extractor, reposurgeon uses a small amount of knowledge about the system's command−line tools to (in effect) replay repository history into an input stream internally. Repositories under systems supported through extractors can be read by reposurgeon, but not modified by it. In particular, reposurgeon can be used to move a repository history from any VCS supported by an extractor to any VCS supported by a normal importer/exporter pair.
Mercurial repository reading is implemented with an extractor class; writing is handled with the stock "hg fastimport" command. A test extractor exists for git, but is normally disabled in favor of the regular exporter.
For guidance on the pragmatics of repository conversion, see the DVCS Migration HOWTO [1] .
reposurgeon is a sharp enough tool to cut you. It takes care not to ever write a repository in an actually inconsistent state, and will terminate with an error message rather than proceed when its internal data structures are confused. However, there are lots of things you can do with it − like altering stored commit timestamps to they no longer match the commit sequence − that are likely to cause havoc after you're done. Proceed with caution and check your work.
Also note that, if your DVCS does the usual thing of making commit IDs a cryptographic hash of content and parent links, editing a publicly−accessible repository with this tool would be a bad idea. All of the surgical operations in reposurgeon will modify the hash chains.
Please also see the notes on system−specific issues under the section called “LIMITATIONS AND GUARANTEES”.
The program can be run in one of two modes, either as an interactive command interpreter or in batch mode to execute commands given as arguments on the reposurgeon invocation line. The only differences between these modes are (1) the interactive one begins by turning on the 'verbose 1' option, (2) in batch mode all errors (including normally recoverable errors in selection−set syntax) are fatal, and (3) each command−line argument beginning with “−−” has that stripped off (which, in particular means that −−help and −−version will work as expected). Also, in interactive mode, Ctrl−P and Ctrl−N will be available to scroll through your command history and tab completion of both command keywords and name arguments (wherever that makes semantic sense) is available.
A git−fast−import stream consists of a sequence of commands which must be executed in the specified sequence to build the repo; to avoid confusion with reposurgeon commands we will refer to the stream commands as events in this documentation. These events are implicitly numbered from 1 upwards. Most commands require specifying a selection of event sequence numbers so reposurgeon will know which events to modify or delete.
For all the details of event types and semantics, see the git-fast-import(1) manual page; the rest of this paragraph is a quick start for the impatient. Most events in a stream are commits describing revision states of the repository; these group together under a single change comment one or more fileops (file operations), which usually point to blobs that are revision states of individual files. A fileop may also be a delete operation indicating that a specified previously−existing file was deleted as part of the version commit; there are a couple of other special fileop types of lesser importance.
Commands to reposurgeon consist of a command keyword, sometimes preceded by a selection set, sometimes followed by whitespace−separated arguments. It is often possible to omit the selection−set argument and have it default to something reasonable.
Here are some motivating examples. The commands will be explained in more detail after the description of selection syntax.
:15 edit ;; edit the object associated with mark :15
edit ;; edit all editable objects
29..71 list ;; list summary index of events 29..71
236..$ list ;; List events from 236 to the last
<#523> inspect ;; Look for commit #523; they are numbered
;; 1−origin from the beginning of the repository.
<2317> inspect ;; Look for a tag with the name 2317, a tip commit
;; of a branch named 2317, or a commit with legacy ID
;; 2317. Inspect what is found. A plain number is
;; probably a legacy ID inherited from a Subversion
;; revision number.
/regression/ list ;; list all commits and tags with comments or
;; committer headers or author headers containing
;; the string "regression"
1..:97 & =T delete ;; delete tags from event 1 to mark 97
[Makefile] inspect ;; Inspect all commits with a file op touching Makefile
;; and all blobs referred to in a fileop
;; touching Makefile.
:46 tip ;; Display the branch tip that owns commit :46.
@dsc(:55) list ;; Display all commits with ancestry tracing to :55
@min([.gitignore]) remove .gitignore delete
;; Remove the first .gitignore fileop in the repo.
SELECTION SYNTAX
The selection−set specification syntax is an expression−oriented minilanguage. The most basic term in this language is a location. The following sorts of primitive locations are supported:
event numbers
A plain numeric literal is interpreted as a 1−origin event−sequence number.
marks
A numeric literal preceded by a colon is interpreted as a mark; see the import stream format documentation for explanation of the semantics of marks.
tag and branch names
The basename of a branch (including branches in the refs/tags namespace) refers to its tip commit. The name of a tag is equivalent to its mark (that of the tag itself, not the commit it refers to). Tag and branch locations are bracketed with < > (angle brackets) to distinguish them from command keywords.
legacy IDs
If the contents of name brackets (< >) does not match a tag or branch name, the interpreter next searches legacy IDs of commits. This is especially useful when you have imported a Subversion dump; it means that commits made from it can be referred to by their corresponding Subversion revision numbers.
commit numbers
Anumeric literal within name brackets (< >) preceded by # is interpreted as a 1−origin commit−sequence number.
$
Refers to the last event.
These may be grouped into sets in the following ways:
ranges
A range is two locations separated by "..", and is the set of events beginning at the left−hand location and ending at the right−hand location (inclusive).
lists
Comma−separated lists of locations and ranges are accepted, with the obvious meaning.
There are some other ways to construct event sets:
visibility sets
A visibility set is an expression specifying a set of event types. It will consist of a leading equal sign, followed by type letters. These are the type letters:
references
A reference name (bracketed by angle brackets) resolves to a single object, either a commit or tag.
Note that if an annotated tag and a branch have the same name foo, <foo> will resolve to the tag rather than the branch tip commit.
dates and action stamps
A date or action stamp in angle brackets resolves to a selection set of all matching commits.
To refine the match to a single commit, use a 1−origin index suffix separated by '#'. Thus "<2000−02−06T09:35:10Z>" can match multiple commits, but "<2000−02−06T09:35:10Z#2>" matches only the second in the set.
text search
A text search expression is a Python regular expression surrounded by forward slashes (to embed a forward slash in it, use a Python string escape such as \x2f).
A text search normally matches against the comment fields of commits and annotated tags, or against their author/committer names, or against the names of tags; also the text of passthrough objects.
The scope of a text search can be changed with qualifier letters after the trailing slash. These are as follows:
Multiple qualifier letters can add more search scopes.
(The “b” qualifier replaces the branchset syntax in earlier versions of reposurgeon.)
paths
A "path expression" enclosed in square brackets resolves to the set of all commits and blobs related to a path matching the given expression. The path expression itself is either a path literal or a regular expression surrounded by slashes. Immediately after the trailing / of a path regexp you can put any number of the following characters which act as flags: 'a', 'c', 'D', "M', 'R', 'C', 'N'.
By default, a path is related to a commit if the latter has a fileop that touches that file path − modifies that change it, deletes that remove it, renames and copies that have it as a source or target. When the 'c' flag is in use the meaning changes: the paths related to a commit become all paths that would be present in a checkout for that commit.
A path literal matches a commit if and only if the path literal is exactly one of the paths related to the commit (no prefix or suffix operation is done). In particular a path literal won't match if it corresponds to a directory in the chosen repository.
A regular expression matches a commit if it matches any path related to the commit anywhere in the path. You can use '^' or '$' if you want the expression to only match at the beginning or end of paths. When the 'a' flag is in use, the path expression selects commits whose every path matches the regular expression. This is not always a subset of commits selected without the 'a' flag because it also selects commits with no related paths (e.g. empty commits, deletealls and commits with empty trees). If you want to avoid those, you can use e.g. '[/regex/] & [/regex/a]'.
The flags 'D', "M', 'R', 'C', 'N' restrict match checking to the corresponding fileop types. Note that this means an 'a' match is easier (not harder) to achieve. These are no−iops when used with 'c'.
A path or literal matches a blob if it matches any path that appeared in a modification fileop that referred to that blob. To select purely matching blobs or matching commits, compose a path expression with =B or =C.
If you need to embed '[^/]' into your regular expression (e.g. to express "all characters but a slash") you can use a Python string escape such as \x2f.
function calls
The expression language has named special functions. The sequence for a named function is “@” followed by a function name, followed by an argument in parentheses. Presently the following functions are defined:
Set expressions may be combined with the operators | and &; these are, respectively, set union and intersection. The | has lower precedence than intersection, but you may use parentheses '(' and ')' to group expressions in case there is ambiguity (this replaces the curly brackets used in older versions of the syntax).
Any set operation may be followed by '?' to add the set members' neighbors and referents. This extends the set to include the parents and children of all commits in the set, and the referents of any tags and resets in the set. Each blob reference in the set is replaced by all commit events that refer to it. The '?' can be repeated to extend the neighborhood depth.
Do set negation with prefix ~; it has higher precedence than & and | but lower than ?
IMPORT AND EXPORT
reposurgeon can hold multiple repository states in core. Each has a name. At any given time, one may be selected for editing. Commands in this group import repositories, export them, and manipulate the in−core list and the selection.
read [−−format=fossil] [directory|−|<infile]
With a directory−name argument, this command attempts to read in the contents of a repository in any supported version−control system under that directory; read with no arguments does this in the current directory. If output is redirected to a plain file, it will be read in as a fast−import stream or Subversion dumpfile. With an argument of “−”, this command reads a fast−import stream or Subversion dumpfile from standard input (this will be useful in filters constructed with command−line arguments).
If the contents is a fast−import stream, any "cvs−revision" property on a commit is taken to be a newline−separated list of CVS revision cookies pointing to the commit, and used for reference lifting.
If the contents is a fast−import stream, any "legacy−id" property on a commit is taken to be a legacy ID token pointing to the commit, and used for reference−lifting.
If the read location is a git repository and contains a .git/cvsauthors file (such as is left in place by git cvsimport −A) that file will be read in as if it had been given to the authors read command.
If the read location is a directory, and its repository subdirectory has a file named legacy−map, that file will be read as though passed to a legacy read command.
If the read location is a file and the −−format=fossil is used, the file is interpreted as a Fossil repository.
The just−read−in repo is added to the list of loaded repositories and becomes the current one, selected for surgery. If it was read from a plain file and the file name ends with one of the extensions .fi or .svn, that extension is removed from the load list name.
Note: this command does not take a selection set.
write [−−legacy] [−−format=fossil] [−−noincremental] [−−callout] [>outfile|−]
Dump selected events as a fast−import stream representing the edited repository; the default selection set is all events. Where to dump to is standard output if there is no argument or the argument is '−', or the target of an output redirect.
Alternatively, if there is no redirect and the argument names a directory, the repository is rebuilt into that directory, with any selection set being ignored; if that target directory is nonempty its contents are backed up to a save directory.
If the write location is a file and the −−format=fossil is used, the file is written in Fossil repository format.
With the −−legacy option, the Legacy−ID of each commit is appended to its commit comment at write time. This option is mainly useful for debugging conversion edge cases.
If you specify a partial selection set such that some commits are included but their parents are not, the output will include incremental dump cookies for each branch with an origin outside the selection set, just before the first reference to that branch in a commit. An incremental dump cookie looks like "refs/heads/foo^0" and is a clue to export−stream loaders that the branch should be glued to the tip of a pre−existing branch of the same name. The −−noincremental option suppresses this behavior.
When you specify a partial selection set, including a commit object forces the inclusion of every blob to which it refers and every tag that refers to it.
Specifying a partial selection may cause a situation in which some parent marks in merges don't correspond to commits present in the dump. When this happens and −−callout option was specified, the write code replaces the merge mark with a callout, the action stamp of the parent commit; otherwise the parent mark is omitted. Importers will fail when reading a stream dump with callouts; it is intended to be used by the graft command.
Specifying a write selection set with gaps in it is allowed but unlikely to lead to good results if it is loaded by an importer.
Property extensions will be be omitted from the output if the importer for the preferred repository type cannot digest them.
Note: to examine small groups of commits without the progress meter, use inspect.
choose [reponame]
Choose a named repo on which to operate. The name of a repo is normally the basename of the directory or file it was loaded from, but repos loaded from standard input are "unnamed". reposurgeon will add a disambiguating suffix if there have been multiple reads from the same source.
With no argument, lists the names of the currently stored repositories and their load times. The second column is '*' for the currently selected repository, '−' for others.
drop [reponame]
Drop a repo named by the argument from reposurgeon's list, freeing the memory used for its metadata and deleting on−disk blobs. With no argument, drops the currently chosen repo.
rename reponame
Rename the currently chosen repo; requires an argument. Won't do it if there is already one by the new name.
REBUILDS IN PLACE
reposurgeon can rebuild an altered repository in place. Untracked files are normally saved and restored when the contents of the new repository is checked out (but see the documentation of the “preserve” command for a caveat).
rebuild [directory]
Rebuild a repository from the state held by reposurgeon. This command does not take a selection set.
The single argument, if present, specifies the target directory in which to do the rebuild; if the repository read was from a repo directory (and not a git−import stream), it defaults to that directory. If the target directory is nonempty its contents are backed up to a save directory. Files and directories on the repository's preserve list are copied back from the backup directory after repo rebuild. The default preserve list depends on the repository type, and can be displayed with the stats command.
If reposurgeon has a nonempty legacy map, it will be written to a file named legacy−map in the repository subdirectory as though by a legacy write command. (This will normally be the case for Subversion and CVS conversions.)
preserve [file...]
Add (presumably untracked) files or directories to the repo's list of paths to be restored from the backup directory after a rebuild. Each argument, if any, is interpreted as a pathname. The current preserve list is displayed afterwards.
It is only necessary to use this feature if your version−control system lacks a command to list files under version control. Under systems with such a command (which include git and hg), all files that are neither beneath the repository dot directory nor under reposurgeon temporary directories are preserved automatically.
unpreserve [file...]
Remove (presumably untracked) files or directories to the repo's list of paths to be restored from the backup directory after a rebuild. Each argument, if any, is interpreted as a pathname. The current preserve list is displayed afterwards.
INFORMATION AND REPORTS
Commands in this group report information about the selected repository.
The output of these commands can individually be redirected to a named output file. Where indicated in the syntax, you can prefix the output filename with “>” and give it as a following argument. If you use “>>” the file is opened for append rather than write.
list [>outfile]
This is the main command for identifying the events you want to modify. It lists commits in the selection set by event sequence number with summary information. The first column is raw event numbers, the second a timestamp in local time. If the repository has legacy IDs, they will be displayed in the third column. The leading portion of the comment follows.
stamp [>outfile]
Alternative form of listing that displays full action stamps, usable as references in selections. Supports > redirection.
tip [>outfile]
Display the branch tip names associated with commits in the selection set. These will not necessarily be the same as their branch fields (which will often be tag names if the repo contains either annotated or lightweight tags).
If a commit is at a branch tip, its tip is its branch name. If it has only one child, its tip is the child's tip. If it has multiple children, then if there is a child with a matching branch name its tip is the child's tip. Otherwise this function throws a recoverable error.
tags [>outfile]
Display tags and resets: three fields, an event number and a type and a name. Branch tip commits associated with tags are also displayed with the type field 'commit'. Supports > redirection.
stats [repo−name...] [>outfile]
Report size statistics and import/export method information about named repositories, or with no argument the currently chosen repository.
count [>outfile]
Report a count of items in the selection set. Default set is everything in the currently−selected repo. Supports > redirection.
inspect [>outfile]
Dump a fast−import stream representing selected events to standard output. Just like a write, except (1) the progress meter is disabled, and (2) there is an identifying header before each event dump.
graph [>outfile]
Emit a visualization of the commit graph in the DOT markup language used by the graphviz tool suite. This can be fed as input to the main graphviz rendering program dot(1), which will yield a viewable image. Supports > redirection.
You may find a script like this useful:
graph $1 >/tmp/foo$$
shell dot </tmp/foo$$ −Tpng | display −; rm /tmp/foo$$
You can substitute in your own preferred image viewer, of course.
sizes [>outfile]
Print a report on data volume per branch; takes a selection set, defaulting to all events. The numbers tally the size of uncompressed blobs, commit and tag comments, and other metadata strings (a blob is counted each time a commit points at it).
The numbers are not an exact measure of storage size: they are intended mainly as a way to get information on how to efficiently partition a repository that has become large enough to be unwieldy.
Supports > redirection.
lint [>outfile]
Look for DAG and metadata configurations that may indicate a problem. Presently checks for: (1) Mid−branch deletes, (2) disconnected commits, (3) parentless commits, (4) the existence of multiple roots, (5) committer and author IDs that don't look well−formed as DVCS IDs, (6) multiple child links with identical branch labels descending from the same commit, (7) time and action−stamp collisions.
Options to issue only partial reports are supported; "lint −−options" or "lint −?" lists them.
The options and output format of this command are unstable; they may change without notice as more sanity checks are added.
when >timespec
Interconvert between git timestamps (integer Unix time plus TZ) and RFC3339 format. Takes one argument, autodetects the format. Useful when eyeballing export streams. Also accepts any other supported date format and converts to RFC3339.
SURGICAL OPERATIONS
These are the operations the rest of reposurgeon is designed to support.
squash [policy...]
Combine or delete commits in a selection set of events. The default selection set for this command is empty. Has no effect on events other than commits unless the −−delete policy is selected; see the 'delete' command for discussion.
Normally, when a commit is squashed, its file operation list (and any associated blob references) gets either prepended to the beginning of the operation list of each of the commit's children or appended to the operation list of each of the commit's parents. Then children of a deleted commit get it removed from their parent set and its parents added to their parent set.
The default is to squash forward, modifying children; but see the list of policy modifiers below for how to change this.
Warning
It is easy to get the bounds of a squash command wrong, with confusing and destructive results. Beware thinking you can squash on a selection set to merge all commits except the last one into the last one; what you will actually do is to merge all of them to the first commit after the selected set.
Normally, any tag pointing to a combined commit will also be pushed forward. But see the list of policy modifiers below for how to change this.
Following all operation moves, every one of the altered file operation lists is reduced to a shortest normalized form. The normalized form detects various combinations of modification, deletion, and renaming and simplifies the operation sequence as much as it can without losing any information.
After canonicalization, a file op list may still end up containing multiple M operations on the same file. Normally the tool utters a warning when this occurs but does not try to resolve it.
The following modifiers change these policies:
−−delete
Simply discards all file ops and tags associated with deleted commit(s).
−−coalesce
Discard all M operations (and associated blobs) except the last.
−−pushback
Append fileops to parents, rather than prepending to children.
−−pushforward
Prepend fileops to children. This is the default; it can be specified in a lift script for explicitness about intentions.
−−tagforward
With the "tagforward" modifier, any tag on the deleted commit is pushed forward to the first child rather than being deleted. This is the default; it can be specified for explicitness.
−−tagback
With the "−−tagback" modifier, any tag on the deleted commit is pushed backward to the first parent rather than being deleted.
−−quiet
Suppresses warning messages about deletion of commits with non−delete fileops.
−−complain
The opposite of quiet. Can be specified for explicitness.
Under any of these policies except “−−delete”, deleting a commit that has children does not back out the changes made by that commit, as they will still be present in the blobs attached to versions past the end of the deletion set. All a delete does when the commit has children is lose the metadata information about when and by who those changes were actually made; after the delete any such changes will be attributed to the first undeleted children of the deleted commits. It is expected that this command will be useful mainly for removing commits mechanically generated by repository converters such as cvs2svn.
delete [policy...]
Delete a selection set of events. The default selection set for this command is empty. On a set of commits, this is equivalent to a squash with the −−delete flag. It unconditionally deletes tags, resets, and passthroughs; blobs can be removed only as a side effect of deleting every commit that points at them.
divide parent [child]
Attempt to partition a repo by cutting the parent−child link between two specified commits (they must be adjacent). Does not take a general selection set. It is only necessary to specify the parent commit, unless it has multiple children in which case the child commit must follow (separate it with a comma).
If the repo was named 'foo', you will normally end up with two repos named 'foo−early' and 'foo−late' (option and feature events at the beginning of the early segment will be duplicated onto the beginning of the late one.). But if the commit graph would remain connected through another path after the cut, the behavior changes. In this case, if the parent and child were on the same branch 'qux', the branch segments are renamed 'qux−early' and 'qux−late' but the repo is not divided.
expunge [path | /regexp/]...
Expunge files from the selected portion of the repo history; the default is the entire history. The arguments to this command may be paths or Python regular expressions matching paths (regexps must be marked by being surrounded with //).
All filemodify (M) operations and delete (D) operations involving a matched file in the selected set of events are disconnected from the repo and put in a removal set. Renames are followed as the tool walks forward in the selection set; each triggers a warning message. If a selected file is a copy (C) target, the copy will be deleted and a warning message issued. If a selected file is a copy source, the copy target will be added to the list of paths to be deleted and a warning issued.
After file expunges have been performed, any commits with no remaining file operations will be removed, and any tags pointing to them. Commits with deleted fileops pointing both in and outside the path set are not deleted, but are cloned into the removal set.
The removal set is not discarded. It is assembled into a new repository named after the old one with the suffix "−expunges" added. Thus, this command can be used to carve a repository into sections by file path matches.
tagify [−−canonicalize] [−−tipdeletes] [−−tagify−merges]
Search for empty commits and turn them into tags. Takes an optional selection set argument defaulting to all commits. For each commit in the selection set, turn it into a tag with the same message and author information if it has no fileops. By default merge commits are not considered, even if they have no fileops (thus no tree differences with their first parent). To change that, use the −−tagify−merges option.
The name of the generated tag will be 'emptycommit−ident', where ident is generated from the legacy ID of the deleted commit, or from its mark, or from its index in the repository, with a disambiguation suffix if needed.
With the −−canonicalize, tagify tries harder to detect trivial commits by first ensuring that all fileops of selected commits will have an actual effect when processed by fast−import.
With the −−tipdeletes, tagify also considers branch tips with only deleteall fileops to be candidates for tagification. The corresponding tags get names of the form 'tipdelete−branchname' rather than the default 'emptycommit−ident'.
With the −−tagify−merges, tagify also tagifies merge commits that have no fileops. When this is done the merge link is move to the yagified commit's parent.
coalesce [−−debug}|−−changelog] [timefuzz]
Scan the selection set for runs of commits with identical comments close to each other in time (this is a common form of scar tissues in repository up−conversions from older file−oriented version−control systems). Merge these cliques by deleting all but the last commit, in order; fileops from the deleted commits are pushed forward to that last one
The optional second argument, if present, is a maximum time separation in seconds; the default is 90 seconds.
The default selection set for this command is =C, all commits. Occasionally you may want to restrict it, for example to avoid coalescing unrelated cliques of "*** empty log message ***" commits from CVS lifts.
With the −−debug option, show messages about mismatches.
With the −−changelog option, any commit with a comment containing the string 'empty log message' (such as is generated by CVS) and containing exactly one file operation modifying a path ending in ChangeLog is treated specially. Such ChangeLog commits are considered to match any commit before them by content, and will coalesce with it if the committer matches and the commit separation is small enough. This option handles a convention used by Free Software Foundation projects.
split {at|by} item
The first argument is required to be a commit location; the second is a preposition which indicates which splitting method to use. If the preposition is 'at', then the third argument must be an integer 1−origin index of a file operation within the commit. If it is 'by', then the third argument must be a pathname to be prefix−matched, pathname match is done first).
The commit is copied and inserted into a new position in the event sequence, immediately following itself; the duplicate becomes the child of the original, and replaces it as parent of the original's children. Commit metadata is duplicated; the new commit then gets a new mark.
Finally, some file operations − starting at the one matched or indexed by the split argument − are moved forward from the original commit into the new one. Legal indices are 2−n, where n is the number of file operations in the original commit.
add {D path | M perm mark path | R source target C source target}
To a specified commit, add a specified fileop.
For a D operation to be valid there must be an M operation for the path in the commit's ancestry. For an M operation to be valid, the 'perm' part must be a token ending with 755 or 644 and the 'mark' must refer to a blob that precedes the commit location. For an R or C operation to be valid, there must be an M operation for the source in the commit's ancestry.
remove [index | path | deletes] [to commit]
From a specified commit, remove a specified fileop. The op must be one of (a) the keyword “deletes”, (b) a file path, (c) a file path preceded by an op type set (some subset of the letters DMRCN), or (d) a 1−origin numeric index. The “deletes” keyword selects all D fileops in the commit; the others select one each.
If the “to” clause is present, the removed op is appended to the commit specified by the following singleton selection set. This option cannot be combined with “deletes”.
Note that this command does not attempt to scavenge blobs even if the deleted fileop might be the only reference to them. This behavior may change in a future release.
blob
Create a blob at mark :1 after renumbering other marks starting from :2. Data is taken from stdin, which may be a here−doc. This can be used with the add command to patch synthetic data into a repository.
renumber
Renumber the marks in a repository, from :1 up to :<n> where <n> is the count of the last mark. Just in case an importer ever cares about mark ordering or gaps in the sequence.
A side effect of this comment is to clean up stray "done" passthroughs that may have entered the repository via graft operations. After a renumber, the repository will have at most one "done" and it will be at the end of the events.
mailbox_out [>outfile]
Emit a mailbox file of messages in RFC822 format representing the contents of repository metadata. Takes a selection set; members of the set other than commits, annotated tags, and passthroughs are ignored (that is, presently, blobs and resets).
The output from this command can optionally be redirected to a named output file. Prefix the filename with “>” and give it as a following argument.
May have an option −−filter, followed by = and a /−enclosed regular expression. If this is given, only headers with names matching it are emitted. In this context the name of the header includes its trailing colon.
mailbox_in [<infile] [−−changed >outfile]
Accept a mailbox file of messages in RFC822 format representing the contents of the metadata in selected commits and annotated tags. Takes no selection set. If there is an argument it will be taken as the name of a mailbox file to read from; no argument, or one of '−'; reads from standard input.
Users should be aware that modifying an Event−Number or Event−Mark field will change which event the update from that message is applied to. This is unlikely to have good results.
If the Event−Number and Event−Mark fields are absent, the mailbox_in logic will attempt to match the commit or tag first by Legacy−ID, then by a unique committer ID and timestamp pair.
If output is redirected and the modifier “−−changed” appears, a minimal set of modifications actually made is written to the output file in a form that can be fed back in.
setfield attribute value
In the selected objects (defaulting to none) set every instance of a named field to a string value. The string may be quoted to include whitespace, and use backslash escapes interpreted by the Python string−escape codec, such as \n and \t.
Attempts to set nonexistent attributes are ignored. Valid values for the attribute are internal Python field names; in particular, for commits, “comment” and “branch” are legal. Consult the source code for other interesting values.
append [−−rstrip] [>text]
Append text to the comments of commits and tags in the specified selection set. The text is the first token of the command and may be a quoted string. C−style escape sequences in the string are interpreted using Python's string_decode codec.
If the option −−rstrip is given, the comment is right−stripped before the new text is appended.
filter [−−shell|−−regex|−−replace|−−dedos]
Run blobs, commit comments, or tag comments in the selection set through the filter specified on the command line.
In any mode other than −−dedos,.
When filtering blobs, if the command line contains the magic cookie '%PATHS%' it is replaced with a space−separated list of all paths that reference the blob.
With −−shell, the remainder of the line specifies a filter as a shell command. Each blob or comment is presented to the filter on standard input; the content is replaced with whatever the filter emits to standard output. At present −−shell is required. Other filtering modes will be supported in the future.
With −−regex, the remainder of the line is expected to be a Python regular expression substitution written as /from/to/ with from and to being passed as arguments to the standard re.sub() function and it applied to modify the content. Actually, any non−space character will work as a delimiter in place of the /; this makes it easier to use / in patterns. Ordinarily only the first such substitution is performed; putting 'g' after the slash replaces globally, and a numeric literal gives the maximum number of substitutions to perform. Other flags available restrict substitution scope − 'c' for comment text only, 'C' for committer name only, 'a' for author names only.
With −−replace, the behavior is like −−regexp but the expressions are not interpreted as regular expressions. (This is slightly faster).
With −−dedos, DOS/Windows−style \r\n line terminators are replaced with \n.
transcode codec
Transcode blobs, commit comments and committer/author names, or tag comments and tag committer names in the selection set to UTF−8 from the character encoding specified on the command line..
The encoding argument must name one of the codecs known to the Python standard codecs library. In particular, 'latin−1' is a valid codec name.
Errors in this command are fatal, because an error may leave repository objects in a damaged state.
The theory behind the design of this command is that the repository might contain a mixture of encodings used to enter commit metadata by different people at different times. After using =I to identify metadata containing non−Unicode high bytes in text, a human must use context to identify which particular encodings were used in particular event spans and compose appropriate transcode commands to fix them up.
edit
Report the selection set of events to a tempfile as mailbox_out does, call an editor on it, and update from the result as mailbox_in does. If you do not specify an editor name as second argument, it will be taken from the $EDITOR variable in your environment.
Normally this command ignores blobs because mailbox_out does. However, if you specify a selection set consisting of a single blob, your editor will be called directly on the blob file.
timeoffset offset [timezone]
Apply a time offset to all time/date stamps in the selected set. An offset argument is required; it may be in the form [+−]ss, [+−]mm:ss or [+−]hh:mm:ss. The leading sign is required to distinguish it from a selection expression.
Optionally you may also specify another argument in the form [+−]hhmm, a timezone literal to apply. To apply a timezone without an offset, use an offset literal of +0 or −0.
unite [−−prune] reponame...
Unite repositories. Name any number of loaded repositories; they will be united into one union repo and removed from the load list. The union repo will be selected.
The root of each repo (other than the oldest repo) will be grafted as a child to the last commit in the dump with a preceding commit date. Running last to first, duplicate names will be disambiguated using the source repository name (thus, recent duplicates will get priority over older ones). After all grafts, marks will be renumbered.
The name of the new repo will be the names of all parts concatenated, separated by '+'. It will have no source directory or preferred system type.
With the option −−prune, at each join D operations for every ancestral file existing will be prepended to the root commit, then it will be canonicalized using the rules for squashing the effect will be that only files with properly matching M, R, and C operations in the root survive.
graft [−−prune] reponame
For when unite doesn't give you enough control. This command may have either of two forms, selected by the size of the selection set. The first argument is always required to be the name of a loaded repo.
If the selection set is of size 1, it must identify a single commit in the currently chosen repo; in this case the name repo's root will become a child of the specified commit. If the selection set is empty, the named repo must contain one or more callouts matching a commits in the currently chosen repo.
Labels and branches in the named repo are prefixed with its name; then it is grafted to the selected one. Any other callouts in the named repo are also resolved in the context of the currently chosen one. Finally, the named repo is removed from the load list.
With the option −−prune, prepend a deleteall operation into the root of the grafted repository.
path [source] rename [−−force}] [target]
Rename a path in every fileop of every selected commit. The default selection set is all commits. The first argument is interpreted as a Python regular expression to match against paths; the second may contain back−reference syntax.
Ordinarily, if the target path already exists in the fileops, or is visible in the ancestry of the commit, this command throws an error. With the −−force option, these checks are skipped.
paths [{sub|sup}] [dirname] [>outfile]
Takes a selection set. Without a modifier, list all paths touched by fileops in the selection set (which defaults to the entire repo). This reporting variant does >−redirection.
With the 'sub' modifier, take a second argument that is a directory name and prepend it to every path. With the 'sup' modifier, strip the first directory component from every path.
merge
Create a merge link. Takes a selection set argument, ignoring all but the lowest (source) and highest (target) members. Creates a merge link from the highest member (child) to the lowest (parent).
unmerge
Linearize a commit. Takes a selection set argument, which must resolve to a single commit, and removes all its parents except for the first.
It is equivalent to reparentfirst_parent,commitrebase, where commit is the same selection set as used with unmerge and first_parent is a set resolving commit's first parent (see the reparent command below
The main interest of the unmerge is that you don't have to find and specify the first parent yourself, saving time and avoiding errors when nearby surgery would make a manual first parent argument stale.
reparent [options...] [policy]
Changes the parent list of a commit. Takes a selection set, zero or more option arguments, and an optional policy argument.
Selection set:
The selection set must resolve to one or more commits. The selected commit with the highest event number (not necessarily the last one selected) is the commit to modify. The remainder of the selected commits, if any, become its parents: the selected commit with the lowest event number (which is not necessarily the first one selected) becomes the first parent, the selected commit with second lowest event number becomes the second parent, and so on. All original parent links are removed. Examples:
# this makes 17 the parent of 33
17,33 reparent
# this also makes 17 the parent of 33
33,17 reparent
# this makes 33 a root (parentless) commit
33 reparent
# this makes 33 an octopus merge commit. its first parent
# is commit 15, second parent is 17, and third parent is 22
22,33,15,17 reparent
Options:
−−use−order
Use the selection order to determine which selected commit is the commit to modify and which are the parents (and if there are multiple parents, their order). The last selected commit (not necessarily the one with the highest event number) is the commit to modify, the first selected commit (not necessarily the one with the lowest event number) becomes the first parent, the second selected commit becomes the second parent, and so on. Examples:
# this makes 33 the parent of 17
33|17 reparent −−use−order
# this makes 17 an octopus merge commit. its first parent
# is commit 22, second parent is 33, and third parent is 15
22,33,15|17 reparent −−use−order
Because ancestor commit events must appear before their descendants, giving a commit with a low event number a parent with a high event number triggers a re−sort of the events. A re−sort assigns different event numbers to some or all of the events. Re−sorting only works if the reparenting does not introduce any cycles. To swap the order of two commits that have an ancestor–descendant relationship without introducing a cycle during the process, you must reparent the descendant commit first.
Policy:
By default, the manifest of the reparented commit is computed before modifying it; a deleteall and some fileops are prepended so that the manifest stays unchanged even when the first parent has been changed. This behavior can be changed by specifying a policy:
rebase
Inhibits the default behavior—no deleteall is issued and the tree contents of all descendents can be modified as a result.
branch branchname... {rename|delete} [arg]
Rename or delete a branch (and any associated resets). First argument must be an existing branch name; second argument must one of the verbs 'rename' or 'delete'.
For a 'rename', the third argument may be any token that is a syntactically valid branch name (but not the name of an existing branch). For a 'delete', no third argument is required.
For either name, if it does not contain a '/' the prefix 'refs/heads' is prepended.
tag tagname... {move|rename|delete} [arg].
Move, rename, or delete a tag. First argument must be an existing tag name; second argument must be one of the verbs “move”, “rename”, or “delete”.
For a “move”, a third argument must be a singleton selection set. For a “rename”, the third argument may be any token that is a syntactically valid tag name (but not the name of an existing tag). For a “delete”, no third argument is required.
The behavior of this command is complex because features which present as tags may be any of three things: (1) True tag objects, (2) lightweight tags, actually sequences of commits with a common branchname beginning with “refs/tags” − in this case the tag is considered to point to the last commit in the sequence, (3) Reset objects. These may occur in combination; in fact, stream exporters from systems with annotation tags commonly express each of these as a true tag object (1) pointing at the tip commit of a sequence (2) in which the basename of the common branch field is identical to the tag name. An exporter that generates lightweight−tagged commit sequences (2) may or may not generate resets pointing at their tip commits.
This command tries to handle all combinations in a natural way by doing up to three operations on any true tag, commit sequence, and reset matching the source name. In a rename, all are renamed together. In a delete, any matching tag or reset is deleted; then matching branch fields are changed to match the branch of the unique descendent of the tagged commit, if there is one. When a tag is moved, no branch fields are changed and a warning is issued.
Attempts to delete a lightweight tag may fail with the message “couldn't determine a unique successor”. When this happens, the tag is on a commit with multiple children that have different branch labels. There is a hole in the specification of git fast−import streams that leaves it uncertain how branch labels can be safely reassigned in this case; rather than do something risky, reposurgeon throws a recoverable error.
reset resetname... {move|rename|delete} [arg].
Move, rename, or delete a reset. First argument must match an existing reset name; second argument must be one of the verbs “move”, “rename”, or “delete”.
For a “move”, a third argument must be a singleton selection set. For a “rename”, the third argument may be any token token that matches a syntactically valid reset name (but not the name of an existing reset). For a “delete”, no third argument is required.
For either name, if it does not contain a “/” the prefix “heads/” is prepended. If it does not begin with “refs/”, “refs/” is prepended.
An argument matches a reset's name if it is either the entire reference (refs/heads/FOO or refs/tags/FOO for some some value of FOO) or the basename (e.g. FOO), or a suffix of the form heads/FOO or tags/FOO. An unqualified basename is assumed to refer to a head.
When a reset is renamed, commit branch fields matching the tag are renamed with it to match. When a reset is deleted, matching branch fields are changed to match the branch of the unique descendent of the tip commit of the associated branch, if there is one. When a reset is moved, no branch fields are changed.
debranch source−branch... [target−branch].
Takes one or two arguments which must be the names of source and target branches; if the second (target) argument is omitted it defaults to refs/heads/master. Any trailing segment of a branch name is accepted as a synonym for it; thus master is the same as refs/heads/master. Does not take a selection set.
The history of the source branch is merged into the history of the target branch, becoming the history of a subdirectory with the name of the source branch. Any resets of the source branch are removed.
strip [blobs|reduce].
Reduce the selected repository to make it a more tractable test case. Use this when reporting bugs.
With the modifier 'blobs', replace each blob in the repository with a small, self−identifying stub, leaving all metadata and DAG topology intact. This is useful when you are reporting a bug, for reducing large repositories to test cases of manageable size.
A selection set is effective only with the 'blobs' option, defaulting to all blobs. The 'reduce' mode always acts on the entire repository.
With the modifier 'reduce', perform a topological reduction that throws out uninteresting commits. If a commit has all file modifications (no deletions or copies or renames) and has exactly one ancestor and one descendant, then it may be boring. To be fully boring, it must also not be referred to by any tag or reset. Interesting commits are not boring, or have a non−boring parent or non−boring child.
With no modifiers, this command strips blobs.
ignores [rename]. [translate]. [defaults].
Intelligent handling of ignore−pattern files. This command fails if no repository has been selected or no preferred write type has been set for the repository. It does not take a selection set.
If the rename modifier is present, this command attempts to rename all ignore−pattern files to whatever is appropriate for the preferred type − e.g. .gitignore for git, .hgignore for hg, etc. This option does not cause any translation of the ignore files it renames.
If the translate modifier is present, syntax translation of each ignore file is attempted. At present, the only transformation the code knows is to prepend a 'syntax: glob' header if the preferred type is hg.
If the defaults modifier is present, the command attempts to prepend these default patterns to all ignore files. If no ignore file is created by the first commit, it will be modified to create one containing the defaults. This command will error out on prefer types that have no default ignore patterns (git and hg, in particular). It will also error out when it knows the import tool has already set default patterns.
REFERENCE LIFTING
This group of commands is meant for fixing up references in commits that are in the format of older version control systems. The general workflow is this: first, go over the comment history and change all old−fashioned commit references into machine−parseable cookies. Then, automatically turn the machine−parseable cookie into action stamps. The point of dividing the process this way is that the first part is hard for a machine to get right, while the second part is prone to errors when a human does it.
A Subversion cookie is a comment substring of the form [[SVN:ddddd]] (example: [[SVN:2355]] with the revision read directly via the Subversion exporter, deduced from git−svn metadata, or matching a $Revision$ header embedded in blob data for the filename.
A CVS cookie is a comment substring of the form [[CVS:filename:revision]] (example: [[CVS:src/README:1.23]] with the revision matching a CVS $Id$ or $Revision$ header embedded in blob data for the filename.
A mark cookie is of the form [[:dddd]] and is simply a reference to the specified mark. You may want to hand−patch this in when one of previous forms is inconvenient.
An action stamp is an RFC3339 timestamp, followed by a '!', followed by an author email address (author rather than committer because that timestamp is not changed when a patch is replayed on to a branch). It attempts to refer to a commit without being VCS−specific. Thus, instead of "commit 304a53c2" or "r2355", "2011−10−25T15:11:09Z!fred@foonly.com".
The following git aliases allow git to work directly with action stamps. Append it to your ~/.gitconfig; if you already have an [alias] section, leave off the first line.
[alias]
# git stamp <commit−ish> − print a reposurgeon−style action stamp
stamp = show −s −−format='%cI!%ce'
# git scommit <stamp> <rev−list−args> − list most recent commit that matches <stamp>.
# Must also specify a branch to search or −−all, after these arguments.
scommit = "!f(){ d=${1%%!*}; a=${1##*!}; arg=\"−−until=$d −1\"; if [ $a != $1 ]; then arg=\"$arg −−committer=$a\"; fi; shift; git rev−list $arg ${1:+\"$@\"}; }; f"
# git scommits <stamp> <rev−list−args> − as above, but list all matching commits.
scommits = "!f(){ d=${1%%!*}; a=${1##*!}; arg=\"−−until=$d −−after $d\"; if [ $a != $1 ]; then arg=\"$arg −−committer=$a\"; fi; shift; git rev−list $arg ${1:+\"$@\"}; }; f"
# git smaster <stamp> − list most recent commit on master that matches <stamp>.
smaster = "!f(){ git scommit \"$1\" master −−first−parent; }; f"
smasters = "!f(){ git scommits \"$1\" master −−first−parent; }; f"
# git shs <stamp> − show the commits on master that match <stamp>.
shs = "!f(){ stamp=$(git smasters $1); shift; git show ${stamp:?not found} $*; }; f"
# git slog <stamp> <log−args> − start git log at <stamp> on master
slog = "!f(){ stamp=$(git smaster $1); shift; git log ${stamp:?not found} $*; }; f"
# git sco <stamp> − check out most recent commit on master that matches <stamp>.
sco = "!f(){ stamp=$(git smaster $1); shift; git checkout ${stamp:?not found} $*; }; f"
There is a rare case in which an action stamp will not refer uniquely to one commit. It is theoretically possible that the same author might check in revisions on different branches within the one−second resolution of the timestamps in a fast−import stream. There is nothing to be done about this; tools using action stamps need to be aware of the possibility and throw a warning when it occurs.
In order to support reference lifting, reposurgeon internally builds a legacy−reference map that associates revision identifiers in older version−control systems with commits. The contents of this map comes from three places: (1) cvs2svn:rev properties if the repository was read from a Subversion dump stream, (2) $Id$ and $Revision$ headers in repository files, and (3) the .git/cvs−revisions created by git cvsimport.
The detailed sequence for lifting possible references is this: first, find possible CVS and Subversion references with the references or =N visibility set; then replace them with equivalent cookies; then run references lift to turn the cookies into action stamps (using the information in the legacy−reference map) without having to do the lookup by hand.
references [list|edit|lift] [>outfile]
With the modifier 'list', list commit and tag comments for strings that might be CVS− or Subversion−style revision identifiers. This will be useful when you want to replace them with equivalent cookies that can automatically be translated into VCS−independent action stamps. This reporting command supports >−redirection. It is equivalent to '=N list'.
With the modifier 'edit', edit the set where revision IDs are found. This is equivalent to '=N edit'.
With the modifier "lift", attempt to resolve Subversion and CVS cookies in comments into action stamps using the legacy map. An action stamp is a timestamp/email/sequence−number combination uniquely identifying the commit associated with that blob, as described in the section called “TRANSLATION STYLE”.
It is not guaranteed that every such reference will be resolved, or even that any at all will be. Normally all references in history from a Subversion repository will resolve, but CVS references are less likely to be resolvable.
VARIABLES, MACROS AND EXTENSIONS
Occasionally you will need to issue a large number of complex surgical commands of very similar form, and it's convenient to be able to package that form so you don't need to do a lot of error−prone typing. For those occasions, reposurgeon supports simple forms of named variables and macro expansion.
assign [name]
Compute a leading selection set and assign it to a symbolic name. It is an error to assign to a name that is already assigned, or to any existing branch name. Assignments may be cleared by sequence mutations (though not ordinary deletions); you will see a warning when this occurs.
With no selection set and no name, list all assignments.>
Use this to optimize out location and selection computations that would otherwise be performed repeatedly, e.g. in macro calls.
unassign [name]
Unassign a symbolic name. Throws an error if the name is not assigned.
names [>outfile]
List the names of all known branches and tags. Tells you what things are legal within angle brackets and parentheses.
define name body
Define a macro. The first whitespace−separated token is the name; the remainder of the line is the body, unless it is “{”, which begins a multi−line macro terminated by a line beginning with “}”.
A later “do” call can invoke this macro.
The command “define” by itself without a name or body produces a macro list.
do name arguments...
Expand and perform a macro. The first whitespace−separated token is the name of the macro to be called; remaining tokens replace {0}, {1}... in the macro definition (the conventions used are those of the Python format method). Tokens may contain whitespace if they are string−quoted; string quotes are stripped. Macros can call macros.
If the macro expansion does not itself begin with a selection set, whatever set was specified before the "do" keyword is available to the command generated by the expansion.
undefine name]
Undefine the named macro.
Here's an example to illustrate how you might use this. In CVS repositories of projects that use the GNU ChangeLog convention, a very common pre−conversion artifact is a commit with the comment "***empty log message***" that modifies only a ChangeLog entry explaining the commit immediately previous to it. The following
define changelog <{0}> & /empty log message/ squash −−pushback
do changelog 2012−08−14T21:51:35Z
do changelog 2012−08−08T22:52:14Z
do changelog 2012−08−07T04:48:26Z
do changelog 2012−08−08T07:19:09Z
do changelog 2012−07−28T18:40:10Z
is equivalent to the more verbose
<2012−08−14T21:51:35Z> & /empty log message/ squash −−pushback
<2012−08−08T22:52:14Z> & /empty log message/ squash −−pushback
<2012−08−07T04:48:26Z> & /empty log message/ squash −−pushback
<2012−08−08T07:19:09Z> & /empty log message/ squash −−pushback
<2012−07−28T18:40:10Z> & /empty log message/ squash −−pushback
but you are less likely to make difficult−to−notice errors typing the first version.
(Also note how the text regexp acts as a failsafe against the possibility of typing a wrong date that doesn't refer to a commit with an empty comment. This was a real−world example from the CVS−to−git conversion of groff.)
When even a macro is not enough, you can write and call custom Python extensions.
exec name
Execute custom code from standard input (normally a file via < redirection). Use this to set up custom extension functions for later eval calls. The code has full access to all internal data structures. Functions defined are accessible to later eval calls.
This can be called in a script with the extension code in a here−doc.
eval function−name
Evaluate a line of code in the current interpreter context. Typically this will be a call to a function defined by a previous exec. The variables _repository and _selection will have the obvious values. Note that _selection will be a list of integers, not objects.
script filename [arg...]
Takes a filename and optional following arguments. Reads each line from the file and executes it as a command.
During execution of the script, the script name replaces the string $0 and the optional following arguments (if any) replace the strings $1, $2 ... $n in the script text. This is done before tokenization, so the $1 in a string like “foo$1bar” will be expanded. Additionally, $$ is expanded to the current process ID (which may be useful for scripts that use tempfiles).
Within scripts (and only within scripts) reposurgeon accepts a slightly extended syntax: First, a backslash ending a line signals that the command continues on the next line. Any number of consecutive lines thus escaped are concatenated, without the ending backslashes, prior to evaluation. Second, a command that takes an input filename argument can instead take literal following data in the syntax of a shell here−document. That is: if the filename is replaced by "<<EOF", all following lines in the script up to a terminating line consisting only of "EOF" will be read, placed in a temporary file, and that file fed to the command and afterwards deleted. EOF may be replaced by any string. Backslashes have no special meaning while reading a here−document.
Scripts may have comments. Any line beginning with a '#' is ignored. If a line has a trailing position that begins with one or more whitespace characters followed by '#', that trailing portion is ignored.
ARTIFACT REMOVAL
Some commands automate fixing various kinds of artifacts associated with repository conversions from older systems.
authors [read|write] [<filename] [>filename]
Apply or dump author−map information for the specified selection set, defaulting to all events.
Lifts from CVS and Subversion may have only usernames local to the repository host in committer and author IDs. DVCSes want email addresses (net−wide identifiers) and complete names. To supply the map from one to the other, an authors file is expected to consist of lines each beginning with a local user ID, followed by a '=' (possibly surrounded by whitespace) followed by a full name and email address, optionally followed by a timezone offset field. Thus:
ferd = Ferd J. Foonly <foonly@foo.com> −0500
An authors file may have comment lines beginning with '#'; these are ignored.
When an authors file is applied, email addresses in committer and author metadata for which the local ID matches between < and @ are replaced according to the mapping (this handles git−svn lifts). Alternatively, if the local ID is the entire address, this is also considered a match (this handles what git−cvsimport and cvs2git do)
With the 'read' modifier, or no modifier, apply author mapping data (from standard input or a <−redirected file). May be useful if you are editing a repo or dump created by cvs2git or by git−svn invoked without −A.
With the 'write' modifier, write a mapping file that could be interpreted by authors read, with entries for each unique committer, author, and tagger (to standard output or a <−redirected mapping file). This may be helpful as a start on building an authors file, though each part to the right of an equals sign will need editing.
branchify [path−set]
Specify the list of directories to be treated as potential branches (to become tags if there are no modifications after the creation copies) when analyzing a Subversion repo. This list is ignored when the −−nobranch read option is used. It defaults to the 'standard layout' set of directories, plus any unrecognized directories in the repository root.
With no arguments, displays the current branchification set.
An asterisk at the end of a path in the set means 'all immediate subdirectories of this path, unless they are part of another (longer) path in the branchify set'.
Note that the branchify set is a property of the reposurgeon interpreter, not of any individual repository, and will persist across Subversion dumpfile reads. This may lead to unexpected results if you forget to re−set it.
branchify_map [/regex/branch/...]
Specify the list of regular expressions used for mapping the svn branches that are detected by branchify. If none of the expressions match the default behaviour applies, which maps a branch to the name of the last directory, except for trunk and “*” which are mapped to master and root.
With no arguments the current regex replacement pairs are shown. Passing 'reset' will clear the reset mapping.
Will match each branch name against regex1 and if it matches rewrite its branch name to branch1. If not it will try regex2 and so forth until it either found a matching regex or there are no regexs left. The regular expressions should be in Python's [2] . format. The branch name can use backreferences (see the sub function in the Python documentation).
Note that the regular expressions are appended to 'refs/' without either the needed 'heads/' or 'tags/'. This allows for choosing the right kind of branch type.
While the syntax template above uses slashes, any first character will be used as a delimeter (and you will need to use a different one in the common case that the paths contain slashes).
Note that the branchify_map set is a property of the reposurgeon interpreter, not of any individual repository, and will persist across Subversion dumpfile reads. This may lead to unexpected results if you forget to re−set it.
EXAMINING TREE STATES
manifest [regular expression] [>outfile]
Takes an optional selection set argument defaulting to all commits, and an optional Python regular expression. For each commit in the selection set, print the mapping of all paths in that commit tree to the corresponding blob marks, mirroring what files would be created in a checkout of the commit. If a regular expression is given, only print "path −> mark" lines for paths matching it. This command supports > redirection.
Takes a selection set which must resolve to a single commit, and a second argument. The second argument is interpreted as a directory name. The state of the code tree at that commit is materialized beneath the directory.
diff [>outfile]
Display the difference between commits. Takes a selection−set argument which must resolve to exactly two commits. Supports output redirection.
HOUSEKEEPING
These are backed up by the following housekeeping commands, none of which take a selection set:
Get help on the interpreter commands. Optionally follow with whitespace and a command name; with no argument, lists all commands. '?' also invokes this.
shell
Execute the shell command given in the remainder of the line. '!' also invokes this.
prefer [repotype]
With no arguments, describe capabilities of all supported systems. With an argument (which must be the name of a supported system) this has two effects:
First, if there are multiple repositories in a directory you do a read on, reposurgeon will read the preferred one (otherwise it will complain that it can't choose among them).
Secondly, this will change reposurgeon's preferred type for output. This means that you do a write to a directory, it will build a repo of the preferred type rather than its original type (if it had one).
If no preferred type has been explicitly selected, reading in a repository (but not a fast−import stream) will implicitly set the preferred type to the type of that repository.
In older versions of reposurgeon this command changed the type of the selected repository, if there is one. That behavior interacted badly with attempts to interpret legacy IDs and has been removed.
sourcetype [repotype]
Report (with no arguments) or select (with one argument) the current repository's source type. This type is normally set at repository−read time, but may remain unset if the source was a stream file.
The source type affects the interpretation of legacy IDs (for purposes of the =N visibility set and the 'references' command) by controlling the regular expressions used to recognize them. If no preferred output type has been set, it may also change the output format of stream files made from the repository.
The source type is reliably set whenever a live repository is read, or when a Subversion stream or Fossil dump is interpreted but not necessarily by other stream files. Streams generated by cvs-fast-export(1) using the −−reposurgeon are detected as CVS. In some other cases, the source system is detected from the presence of magic $−headers in contents blobs.
INSTRUMENTATION
A few commands have been implemented primarily for debugging and regression−testing purposes, but may be useful in unusual circumstances.
The output of most of these commands can individually be redirected to a named output file. Where indicated in the syntax, you can prefix the output filename with “>” and give it as a following argument.
index [>outfile]
Display four columns of info on objects in the selection set: their number, their type, the associate mark (or '−' if no mark) and a summary field varying by type. For a branch or tag it's the reference; for a commit it's the commit branch; for a blob it's the repository path of the file in the blob.
The default selection set for this command is =CTRU, all objects except blobs.
resolve [label−text...]
Does nothing but resolve a selection−set expression and echo the resulting event−number set to standard output. The remainder of the line after the command is used as a label for the output.
Implemented mainly for regression testing, but may be useful for exploring the selection−set language.
verbose [n]
'verbose 1' enables the progress meter and messages, 'verbose 0' disables them. Higher levels of verbosity are available but intended for developers only.
quiet [on | off]
Without an argument, this command requests a report of the quiet boolean; with the argument 'on' or 'off' it is changed. When quiet is on, time−varying report fields which would otherwise cause spurious failures in regression testing are suppressed.
print output−text...
Does nothing but ship its argument line to standard output. Useful in regression tests.
echo [number]
'echo 1' causes each reposurgeon command to be echoed to standard output just before its output. This can be useful in constructing regression tests that are easily checked by eyeball.
version [version...]
With no argument, display the program version and the list of VCSes directly supported. With argument, declare the major version (single digit) or full version (major.minor) under which the enclosing script was developed. The program will error out if the major version has changed (which means the surgical language is not backwards compatible).
It is good practice to start your lift script with a version requirement, especially if you are going to archive it for later reference.
prompt [format...]
Set the command prompt format to the value of the command line; with an empty command line, display it. The prompt format is evaluated in Python after each command with the following dictionary substitutions:
chosen
The name of the selected repository, or None if none is currently selected.
Thus, one useful format might be 'rs[%(chosen)s]%% '.
More format items may be added in the future. The default prompt corresponds to the format 'reposurgeon%% '. The format line is evaluated with shell quotng of tokens, so that spaces can be included.
history
List the commands you have entered this session.
legacy [read|write] [<filename] [>filename]
Apply or list legacy−reference information. Does not take a selection set. The 'read' variant reads from standard input or a <−redirected filename; the 'write' variant writes to standard output or a >−redirected filename.
A legacy−reference file maps reference cookies to (committer, commit−date, sequence−number) pairs; these in turn (should) uniquely identify a commit. The format is two whitespace−separated fields: the cookie followed by an action stamp identifying the commit.
It should not normally be necessary to use this command. The legacy map is automatically preserved through repository reads and rebuilds, being stored in the file legacy−map under the repository subdirectory..
set [option]
Turn on an option flag. With no arguments, list all options
Most options are described in conjunction with the specific operations that the modify. One of general interest is “compressblobs”; this enables compression on the blob files in the internal representation reposurgeon uses for editing repositories. With this option, reading and writing of repositories is slower, but editing a repository requires less (sometimes much less) disk space.
clear [option]
Turn off an option flag. With no arguments, list all options
profile
Enable profiling. Profile statistics are dumped to the path given as argument. Must be one of the initial command−line arguments, and gathers statistics only on code executed via '−'.
timing
Display statistics on phase timing in repository analysis. Mainly of interest to developers trying to speed up the program.
exit
Exit, reporting the time. Included here because, while EOT will also cleanly exit the interpreter, this command reports elapsed time since start.
reposurgeon can read Subversion dumpfiles or edit a Subversion repository (and you must point it at a repository, not a checkout directory). The reposurgeon distribution includes a script named “repotool” that you can use to make and then incrementally update a local mirror of a remote repository for editing or conversion purposes.
READING SUBVERSION REPOSITORIES
Certain optional modifiers on the read command change its behavior when reading Subversion repositories:
−−nobranch
Suppress branch analysis.
−−ignore−properties
Suppress read−time warnings about discarded property settings.
−−user−ignores
Don't generate .gitignore files from svn:ignore properties. Instead, just pass through .gitignore files found in the history.
−−use−uuid
If the −−use−uuid read option is set, the repository's UUID will be used as the hostname when faking up email addresses, a la git−svn. Otherwise, addresses will be generated the way git cvs−import does it, simply ciopying the username into the address field.
These modifiers can go anywhere in any order on the read command line after the read verb. They must be whitespace−separated.
Here are the rules used for mapping subdirectories in a Subversion repository to branches:
1. At any given time there is a set of eligible paths and path wildcards which declare potential branches. See the documentation of the branchify for how to alter this set, which initially consists of {trunk, tags/*, branches/*, and '*'}.
2. A repository is considered "flat" if it has no directory that matches a path or path wildcard in the branchify set. All commits in a flat repository are assigned to branch master, and what would have been branch structure becomes directory structure. In this case, we're done; all the other rules apply to non−flat repos.
If you give the option −−nobranch when reading a Subversion repository, branch analysis is skipped and the repository is treated as though flat (left as a linear sequence of commits on refs/heads/master). This may be useful if your repository configuration is highly unusual and you need to do your own branch surgery. Note that this option will disable partitioning of mixed commits.
3. If "trunk" is eligible, it always becomes the master branch.
4. If an element of the branchify set ends with *, each immediate subdirectory of it is considered a potential branch. If '*' is in the branchify set (which is true by default) all top−level directories other than /trunk, /tags, and /branches are also considered potential branches.
5. Each potential branch is checked to see if it has commits on it after the initial creation or copy. If there are such commits, it becomes a branch. If not, it becomes a tag in order to preserve the commit metadata. (In all cases, the name of the tag or branch is the basename of the directory.)
6. Files in the top−level directory are assigned to a synthetic branch named 'root'.
Each commit that only creates or deletes directories (in particular, copy commits for tags and branches, and commits that only change properties) will be transformed into a tag named after the branch, containing the date/author/comment metadata from the commit. While this produces a desirable result for tags, non−tag branches (including trunk) will also get root tags this way. This apparent misfeature has been accepted so that reposurgeon will never destroy human−generated metadata that might have value; it is left up to the user to manually remove unwanted tags.
Subversion branch deletions are turned into deletealls, clearing the fileset of the import−stream branch. When a branch finishes with a deleteall at its tip, the deleteall is transformed into a tag. This rule cleans up after aborted branch renames.
Occasionally (and usually by mistake) a branchy Subversion repository will contain revisions that touch multiple branches. These are handled by partitioning them into multiple import−stream commits, one on each affected branch. The Legacy−ID of such a split commit will have a pseudo−decimal part − for example, if Subversion revision 2317 touches three branches, the three generated commits will have IDs 2317.1, 2317.2, and 2317.3.
The svn:executable and svn:special properties are translated into permission settings in the input stream; svn:executable becomes 100755 and svn:special becomes 120000 (indicating a symlink; the blob contents will be the path to which the symlink should resolve).
Any cvs2svn:rev properties generated by cvs2svn are incorporated into the internal map used for reference−lifting, then discarded.
Normally, per−directory svn:ignore properties become .gitignore files. Actual .gitignore files in a Subversion directory are presumed to have been created by git−svn users separately from native Subversion ignore properties and discarded with a warning. It is up to the user to merge the content of such files into the target repository by hand. But this behavior is inverted by the −−user−ignores option; if that is on, .gitignore files are passed through and Subversion svn:ignore properties are discarded.
(Regardless of the setting of the −−user−ignores option, .cvsignore files found in Subversion repositories always become .gitignores in the translation. The assumption is that these date from before a CVS−to−SVN lift and should be preserved to affect behavior when browsing that section of the repository.)
svn:mergeinfo properties are interpreted. Any svn:mergeinfo property on a revision A with a merge source range ending in revision B produces a merge link such that B becomes a parent of A.
All other Subversion properties are discarded. (This may change in a future release.) The property for which this is most likely to cause semantic problems is svn:eol−style. However, since property−change−only commits get turned into annotated tags, the translated tags will retain information about setting changes.
The sub−second resolution on Subversion commit dates is discarded; Git wants integer timestamps only.
Because fast−import format cannot represent an empty directory, empty directories in Subversion repositories will be lost in translation.
Normally, Subversion local usernames are mapped in the style of git cvs−import; thus user "foo" becomes "foo <foo>", which is sufficient to pacify git and other systems that require email addresses. With the option "svn_use_uuid", usernames are mapped in the git−svn style, with the repository's UUID used as a fake domain in the email address. Both forms can be remapped to real address using the authors read command.
Reading a Subversion stream enables writing of the legacy map as 'legacy' passthroughs when the repo is written to a stream file.
reposurgeon tries hard to silently do the right thing, but there are Subversion edge cases in which it emits warnings because a human may need to intervene and perform fixups by hand. Here are the less obvious messages it may emit:
user−generated .gitignore
This message means means reposurgeon has found a .gitignore file in the Subversion repository it is analyzing. This probably happened because somebody was using git−svn as a live gateway, and created ignores which may or may not be congruent with those in the generated .gitignore files that the Subversion ignore properties will be translated into. You'll need to make a policy decision about which set of ignores to use in the conversion, and possibly set the −−user−ignores option on read to pass through user−created .gitignore files; in that case this warning will not be emitted.
can't connect nonempty branch XXXX to origin
This is a serious error. reposurgeon has been unable to find a link from a specified branch to the trunk (master) branch. The commit graph will not be fully connected and will need manual repair.
permission information may be lost
A Subversion node change on a file sets or clears properties, but no ancestor can be found for this file. Executable or symlink position may be set wrongly on later revisions of this file. Subversion user−defined properties may also be scrambled or lost. Usually this error can be ignored.
properties set
reposurgeon has detected a setting of a user−defined property, or the Subversion properties svn:externals. These properties cannot be expressed in an import stream; the user is notified in case this is a showstopper for the conversion or some corrective action is required, but normally this error can be ignored. This warning is suppressed by the −−ignore−properties option.
branch links detected by file ops only
Branch links are normally deduced by examining Subversion directory copy operations. A common user error (making a branch with a non−Subversion directory copy and then doing an svn add on the contends) can defeat this. While reposurgeon should detect and cope with most such copies correctly, you should examine the commit graph to check that the branch is rooted at the correct place.
could not tagify root commit
The earliest commit in your Subversion repository has file operations, rather than being a pure directory creation. This probably means your Subversion dump file is malformed, or you may have attempted to lift from an incremental dump. Proceed with caution.
deleting parentless tip delete
This message may be triggered by a Subversion branch move followed by a re−creation under the source name. Check near the indicated revision to make sure the renamed branch is connected to master.
mid−branch deleteall
A deleteall operation has been found in the middle of a branch history. This usually indicates that a Subversion tag or branch was created by mistake, and someone later tried to undo the error by deleting the tag/branch directory before recreating it with a copy operation. Examine the topology near the deleteall closely, it may need hand−hacking. It is fairly likely that both (a) the reposurgeon translation will be different from what other translators (such as git−svn) produce, and (b) it will not be immediately obvious which is right.
couldn't find a branch root for the copy
Branch analysis failed, probably due to a set of file copies that reposurgeon thought it should interpret as a botched branch creation but couldn't deduce a history for. Use the −−nobranch option.
inconsistently empty from set
This message means means reposurgeon has failed an internal sanity check; the directory structure implied by its internally−built filemaps is not consistent with what's in the parsed Subversion nodes. This should never happen; if you see it, report a bug in reposurgeon.
WRITING SUBVERSION REPOSITORIES
reposurgeon has support for writing Subversion repositories. Due to mismatches between the ontology of Subversion and that of git import streams, this support has some significant limitations and bugs.
In summary, Subversion repository histories do not round−trip through reposurgeon editing. File content changes are preserved but some metadata is unavoidably lost. Furthermore, writing out a DVCS history in Subversion also loses significant portions of its metadata. Details follow..
Import−stream timestamps have 1−second granularity. The sub−second parts of Subversion commit timestamps will be lost on their way through reposurgeon.
Empty directories aren't represented in import streams. Consequently, reading and writing Subversion repositories preserves file content, but not empty directories. It is also not guaranteed that after editing a Subversion repository that the sequence of directory creations and deletions relative to other operations will be identical; the only guarantee is that enclosing directories will be created before any files in them are.
When reading a Subversion repository, reposurgeon discards the special directory−copy nodes associated with branch creations. These can't be recreated if and when the repository is written back out to Subversion; rather, each branch copy node from the original translates into a branch creation plus the first set of file modifications on the branch.
When reading a Subversion repository, reposurgeon also automatically breaks apart mixed−branch commits. These are not re−united if the repository is written back out.
When writing to a Subversion repository, all lightweight tags become Subversion tag copies with empty log comments, named for the tag basename. The committer name and timestamp are copied from the commit the tag points to. The distinction between heads and tags is lost.
Because of the preceding two points, it is not guaranteed that even revision numbers will be stable when a Subversion repository is read in and then written out!
Subversion repositories are always written with a standard (trunk/tags/branches) layout. Thus, a repository with a nonstandard shape that has been analyzed by reposurgeon won't be written out with the same shape.
When writing a Subversion repository, branch merges are translated into svn:mergeinfo properties in the simplest possible way − as an svn:mergeinfo property of the translated merge commit listing the merge source revisions..
reposurgeon recognizes how supported VCSes represent file ignores (CVS .cvsignore files lurking untranslated in older Subversion repositories, Subversion ignore properties, .gitignore/.hgignore/.bzrignore file in other systems) and moves ignore declarations among these containers on repo input and output. This will be sufficient if the ignore patterns are exact filenames.
Translation may not, however, be perfect when the ignore patterns are Unix glob patterns or regular expressions. This compatibility table describes which patterns will translate; “plain” indicates a plain filename with no glob or regexp syntax or negation.
RCS has no ignore files or patterns and is therefore not included in the table.
The hg rows and columns of the table describes compatibility to hg's glob syntax rather than its default regular−expression syntax. When writing to an hg repository from any other kind, reposurgeon prepends to the output .hgignore a "syntax: glob" line.
After converting a CVS or SVN repository, check for and remove $−cookies in the head revision(s) of the files. The full Subversion set is $Date:, $Revision:, $Author:, $HeadURL and $Id:. CVS uses $Author:, $Date:, $Header:, $Id:, $Log:, $Revision:, also (rarely) $Locker:, $Name:, $RCSfile:, $Source:, and $State:.
When you need to specify a commit, use the action−stamp format that references lift generates when it can resolve an SVN or CVS reference in a comment. It is best that you not vary from this format, even in trivial ways like omitting the 'Z' or changing the 'T' or '!' or ':'. Making action stamps uniform and machine−parseable will have good consequences for future repository−browsing tools.
Sometimes, in converting a repository, you may need to insert an explanatory comment − for example, if metadata has been garbled or missing and you need to point to that fact. It's helpful for repository−browsing tools if there is a uniform syntax for this that is highly unlikely to show up in repository comments. We recommend enclosing translation notes in [[ ]]. This has the advantage of being visually similar to the [ ] traditionally used for editorial comments in text.
It is good practice to include, in the comment for the root commit of the repository, a note dating and attributing the conversion work and explaining these conventions. Example:
[[This repository was converted from Subversion to git on 2011−10−24 by Eric S. Raymond <esr@thyrsus.com>. Here and elsewhere, conversion notes are enclosed in double square brackets. Junk commits generated by cvs2svn have been removed, commit references have been mapped into a uniform VCS−independent syntax, and some comments edited into summary−plus−continuation form.]]
It is also good practice to include a generated tag at the point of conversion. E.g
mailbox_in −−create <<EOF
Tag−Name: git−conversion
Marks the spot at which this repository was converted from Subversion to git.
EOF
define lastchange {
@max(=B & [/ChangeLog/] & /{0}/B)? list
}
List the last commit that refers to a ChangeLog file containing a specified string. (The trick here is that ? extends the singleton set consisting of the last eligible ChangeLog blob to its set of referring commits, and listonly notices the commits.)
The event−stream parser in “reposurgeon” supports some extended syntax. Exporters designed to work with “reposurgeon” may have a −−reposurgeon option that enables emission of extended syntax; notably, this is true of cvs-fast-export(1). The remainder of this section describes these syntax extensions. The properties they set are (usually) preserved and re−output when the stream file is written.
The token “#reposurgeon” at the start of a comment line in a fast−import stream signals reposurgeon that the remainder is an extension command to be interpreted by “reposurgeon”.
One such extension command is implemented: #sourcetype, which behaves identically to the reposurgeonsourcetype command. An exporter for a version−control system named “frobozz” could, for example, say
#reposurgeon sourcetype frobozz
Within a commit, a magic comment of the form “#legacy−id” declares a legacy ID from the stream file's source version−control system.
Also accepted is the bzr syntax for setting per−commit properties. While parsing commit syntax, a line beginning with the token “property” must contibue with a whitespace−separated property−name token. If it is then followed by a newline it is taken to set that boolean−valued property to true. Otherwise it must be followed by a numeric token specifying a data length, a space, following data (which may contain newlines) and a terminating newline. For example:
commit refs/heads/master
mark :1
committer Eric S. Raymond <esr@thyrsus.com> 1289147634 −0500
data 16
Example commit.
property legacy−id 2 r1
M 644 inline README
Unlike other extensions, bzr properties are only preserved on stream output if the preferred type is bzr, because any importer other than bzr's will choke on them.
In versions before 3.23, “prefer” changed the repository type as well as the preferred output format.
In versions before 3.0, the general command syntax put the command verb first, then the selection set (if any) then modifiers (VSO). It has changed to optional selection set first, then command verb, then modifiers (SVO). The change made parsing simpler, allowed abolishing some noise keywords, and recapitulates a successful design pattern in some other Unix tools − notably sed(1).
In versions before 3.0, path expressions only matched commits, not commits and the associated blobs as well. The names of the “a” and “c” flags were different.
In reposurgeon versions before 3.0, the delete command had the semantics of squash; also, the policy flags did not require a “−−” prefix. The “−−delete” flag was named “obliterate”.
In reposurgeon versions before 3.0, read and write optionally took file arguments rather than requiring redirects (and the write command never wrote into directories). This was changed in order to allow these commands to have modifiers. These modifiers replaced several global options that no longer exist.
In reposurgeon versions before 3.0, the earliest factor in a unite command always kept its tag and branch names unaltered. The new rule for resolving name conflicts, giving priority to the latest factor, produces more natural behavior when uniting two repositories end to end; the master branch of the second (later) one keeps its name.
In reposurgeon versions before 3.0, the tagify command expected policies as trailing arguments to alter its behaviour. The new syntax uses similarly named options with leading dashes, that can appear anywhere after the tagify command
In versions before 2.9. the syntax of "authors", "legacy", "list", and "mailbox_{in|out}" was different (and "legacy" was "fossils"). They took plain filename arguments rather that using redirect < and >.
Guarantee: In DVCses that use commit hashes, editing with reposurgeon never changes the hash of a commit object unless (a) you edit the commit, or (b) it is a descendant of an edited commit in a VCS that includes parent hashes in the input of a child object's hash (git and hg both do this).
Guarantee: reposurgeon only requires main memory proportional to the size of a repository's metadata history, not its entire content history. (Exception: the data from inline content is held in memory.)
Guarantee: In the worst case, reposurgeon makes its own copy of every content blob in the repository's history and thus uses intermediate disk space approximately equal to the size of a repository's content history. However, when the repository to be edited is presented as a stream file, reposurgeon requires no or only very little extra disk space to represent it; the internal representation of content blobs is a (seek−offset, length) pair pointing into the stream file.
Guarantee: reposurgeon never modifies the contents of a repository it reads, nor deletes any repository. The results of surgery are always expressed in a new repository.
Guarantee: Any line in a fast−import stream that is not a part of a command reposurgeon parses and understands will be passed through unaltered. At present the set of potential passthroughs is known to include the progress, the options, and checkpoint commands as well as comments led by #.
Guarantee: All reposurgeon operations either preserve all repository state they are not explicitly told to modify or warn you when they cannot do so.
Guarantee: reposurgeon handles the bzr commit−properties extension, correctly passing through property items including those with embedded newlines. (Such properties are also editable in the mailbox format.)
Limitation: Because reposurgeon relies on other programs to generate and interpret the fast−import command stream, it is subject to bugs in those programs.
Limitation: bzr suffers from deep confusion over whether its unit of work is a repository or a floating branch that might have been cloned from a repo or created from scratch, and might or might not be destined to be merged to a repo one day. Its exporter only works on branches, but its importer creates repos. Thus, a rebuild operation will produce a subdirectory structure that differs from what you expect. Look for your content under the subdirectory 'trunk'.
Limitation: under git, signed tags are imported verbatim. However, any operation that modifies any commit upstream of the target of the tag will invalidate it.
Limitation: Stock git (at least as of version 1.7.3.2) will choke on property extension commands. Accordingly, reposurgeon omits them when rebuilding a repo with git type.
Limitation: Converting an hg repo that uses bookmarks (not branches) to git can lose information; the branch ref that git assigns to each commit may not be the same as the hg bookmark that was active when the commit was originally made under hg. Unfortunately, this is a real ontological mismatch, not a problem that can be fixed by cleverness in reposurgeon.
Limitation: While the Subversion read−side support is in good shape, the write−side support is more of a sketch or proof−of−concept than a robust implementation; it only works on very simple cases and does not round−trip. It may improve in future releases.
Limitation: reposurgeon may misbehave under a filesystem which smashes case in filenames, or which nominally preserves case but maps names differing only by case to the same filesystem node (Mac OS X behaves like this by default). Problems will arise if any two paths in a repo differ by case only. To avoid the problem on a Mac, do all your surgery on an HFS+ file system formatted with case sensitivity specifically enabled.
Guarantee: As version−control systems add support for the fast−import format, their repositories will become editable by reposurgeon.
reposurgeon relies on importers and exporters associated with the VCSes it supports.
git
Core git supports both export and import.
bzr
Requires bzr plus the bzr−fast−import plugin.
hg
Requires core hg, the hg−fastimport plugin, and the third−party hg−fast−export.py script.
svn
Stock Subversion commands support export and import.
darcs
Stock darcs commands support export and import.
CVS
Requires cvs−fast−export. Note that the quality of CVS lifts may be poor, with individual lifts requiring serious hand−hacking. This is due to inherent problems with CVS's file−oriented model.
RCS
Requires cvs−fast−export (yes, that's not a typo; cvs−fast−export handles RCS collections as well). The caveat for CVS applies.
It is expected that reposurgeon will be extended with more deletion policies. Policy authors may need to know more about how a commit's file operation sequence is reduced to normal form after operations from deleted commits are prepended to it.
Recall that each commit has a list of file operations, each a M (modify), D (delete), R (rename), C (copy), or 'deleteall' (delete all files). Only M operations have associated blobs. Normally there is only one M operation per individual file in a commit's operation list.
To understand how the reduction process works, it's enough to understand the case where all the operation in the list are working on the same file. Sublists of operations referring to different files don't affect each other and reducing them can be thought of as separate operations. Also, a "deleteall" acts as a D for everything and cancels all operations before it in the list.
The reduction process walks through the list from the beginning looking for adjacent pairs of operations it can compose. The following table describes all possible cases and all but one of the reductions.
This section will become relevant only if reposurgeon or something underneath it in the software and hardware stack crashes while in the middle of writing out a repository, in particular if the target directory of the rebuild is your current directory.
The tool has two conflicting objectives. On the one hand, we never want to risk clobbering a pre−existing repo. On the other hand, we want to be able to run this tool in a directory with a repo and modify it in place.
We resolve this dilemma by playing a game of three−directory monte.
1. First, we build the repo in a freshly−created staging directory. If your target directory is named /path/to/foo, the staging directory will be a peer named /path/to/foo−stageNNNN, where NNNN is a cookie derived from reposurgeon's process ID.
2. We then make an empty backup directory. This directory will be named /path/to/foo.~N~, where N is incremented so as not to conflict with any existing backup directories. reposurgeon never, under any circumstances, ever deletes a backup directory.
So far, all operations are safe; the worst that can happen up to this point if the process gets interrupted is that the staging and backup directories get left behind.
3. The critical region begins. We first move everything in the target directory to the backup directory.
4. Then we move everything in the staging directory to the target.
5. We finish off by restoring untracked files in the target directory from the backup directory. That ends the critical region.
During the critical region, all signals that can be ignored are ignored.
Returns 1 on fatal error, 0 otherwise. In batch mode all errors are fatal.
bzr(1), cvs(1), darcs(1), git(1), hg(1), rcs(1), svn(1).
Eric S. Raymond <esr@thyrsus.com>; project page at.
1.
DVCS Migration HOWTO
2.
Python’s | http://man.sourcentral.org/f23/1+reposurgeon | CC-MAIN-2018-39 | refinedweb | 17,168 | 53.61 |
<target name="deployments" depends="...">
<assert isref="deployments.d"/>
<foreach i="deployment.d" dirs="${deployments.d}" mode="local">
<overlay file="${deployment.d}/deploy.properties">
...
</overlay>
</foreach>
</target>
3. Value URIs
<target name="workspace" depends="...">
<echo message="Started: ${$time:}" level="info"/>
<do unless="os.disabled">
<property file="${conf.d}/build-${$os:|$lowercase:}.properties"/>
</do>
...
</target>
Hope this helps,
The Wabbit
Matt Benson wrote:
>
> --- Dominique Devienne <ddevienne@gmail.com> wrote:
>
>> On 6/15/07, Matt Benson <gudnabrsam@yahoo.com>
>> wrote:
>> > I am actively working on this as we speak,
>> actually,
>> > and I'm pleased so far with my results.
>>
>> FTR Matt, I still haven't read anything to convince
>> me that write
>> access via <property> is desirable, needed, and
>> good. I'm not trying
>> to put a damper on your efforts, but so far the use
>> cases I've seen
>> for "write" are better handled by custom tasks.
>
> Okay, first to be more clear: I determined that the
> natural extension points for properties handling would
> be, reading them, expanding them from a string (the
> use case that kicked off this discussion), and setting
> them. I did and do recognize that changing how
> properties are set was weird, and as such have still
> not even written the interface for how that would
> happen. Even if the final group consensus is to allow
> for them, I am putting them last, and who knows? I
> might not even be the one to implement them in the
> event we do go forth with them. :)
>
>>
>> What about the <*ant> tasks? These "things" which
>> are not string
>> properties, how do they percolate to sub-Projects?
>> We have clear
>> semantic for properties and references passing, so
>> it would be much
>> clearer and "The Ant Way"(tm) to have them as
>> references, manipulated
>> using custom tasks, and passed using reference
>> semantic, and which
>> unlike properties are not fully compartmented
>> between Projects, which
>> the parent and child project share the same
>> referenced-object.
>>
>
> Here you've simplified "pluggable property setting" to
> "supporting non-String properties" and I suppose
> that's fair enough from a buildfile-only standpoint. But the current
> design of PropertyHelper allows for a
> given property to be set as an arbitrary object via
> Ant's API. I think, even if we don't recognize that
> we should allow a hook for setting properties, that
> this is harmless enough, despite your well-founded
> arguments regarding references. That said, it's no
> concern of mine if we reduce properties to Strings--it
> would simplify some things, certainly--but the user
> community might feel otherwise. Then again, (1) if
> we're already giving them breaking changes we can
> certainly go whole-hog with those if we so choose, and
> (2) I sent Wascally Wabbit from AntXtras (who seem to
> be the greatest consumer of PropertyHelpers from the
> list Peter sent out) a personal message inviting him
> to this conversation and we still haven't seen him. I'll follow up with
> a similar message to the other
> admin on the project in case he's on vacation or
> something. Meanwhile I'll try restricting properties
> to strings and see if we break anything internal.
>
>> Would installed PH instanced percolate to
>> sub-Project automatically?
>
> I'm not sure. I think we need more discussion of
> this:
>
>> Because if they do, Peter's argument that the
>> explicit declaration of
>> the PH ensures BC falls flat if one uses "external"
>> reusable build
>> files which would happen to use the same syntax as
>> the PH prefix
>> installed in another build file. That would be bad
>> encapsulation.
>>
>> So the more I think about this, the more I feel it's
>> wrong at several level.
>>
>
> I don't necessarily agree that the PropertyHelper
> should be externally configurable (via, I assume,
> magic properties). I think we'll be in better shape,
> personally, to simply provide a reasonable set of
> tasks to replace PropertyHelpers and add handling
> delegates to the currently installed PH, all from
> within the buildfile. It's a similar argument to why
> externally declared namespace prefixes are wrong, and
> so I am confident you and I will (for once) be in
> agreement on this point.
>
>> Let's stick with read access. As toString:
>> demonstrates already,
>> what's to the right of the PH scheme doesn't have to
>> reference a
>> property name, so it's flexible enough. --DD
>>
>
> Final thought wrt not allowing for setter delegates: Because we plan to
> continue to allow a user to install
> an arbitrary subclass of PropertyHelper we would have
> to make setXXX final operations to stop a determined
> user from doing I-can't-foresee-what kind of things
> with property setting. Are we prepared to do this?
>
> -Matt
>>
[snip]
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org | http://mail-archives.apache.org/mod_mbox/ant-dev/200707.mbox/%3C4687D447.3020605@earthling.net%3E | CC-MAIN-2016-26 | refinedweb | 798 | 53.21 |
example code to access some C++ classes in a DLL
Discussion in 'Python' started by Torsten Moh,149
- Ham
- Oct 28, 2004
Base Classes in .exe, derived classes in .dllColin Goudie, Jan 21, 2004, in forum: C++
- Replies:
- 6
- Views:
- 541
- Victor Bazarov
- Jan 26, 2004
ctypes: Problems using Windows-DLL from VB example codeNoralf Trønnes, May 9, 2007, in forum: Python
- Replies:
- 4
- Views:
- 314
- Noralf Trønnes
- May 9, 2007
I can't access some classes, but can access others in the same namespace, why?ThunderMusic, Feb 21, 2007, in forum: ASP .Net Web Services
- Replies:
- 1
- Views:
- 171
- ThunderMusic
- Feb 22, 2007
Why does Ruby use both tcl83.dll and tk83.dll (instead of just tk83.dll)?H. Simpson, Aug 2, 2004, in forum: Ruby
- Replies:
- 4
- Views:
- 381
- H. Simpson
- Aug 3, 2004 | http://www.thecodingforums.com/threads/example-code-to-access-some-c-classes-in-a-dll.328961/ | CC-MAIN-2015-11 | refinedweb | 136 | 73.98 |
What?
Sometimes we need to “compress” our data to speed up algorithms or to visualize data. One way is to use dimensionality reduction which is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. We can think of 2 approaches:
- Feature selection: find a subset of the input variables.
- Feature projection (also Feature extraction): transforms the data in the high-dimensional space to a space of fewer dimensions. PCA is one of the methods following this approach.
Figure 1. An idea of using PCA from 2D to 1D.
Figure 2. An idea of using PCA from 5D to 2D.
❓ Questions: How can we choose the green arrows like in Figure 1 and 2 (their directions and their magnitudes)?
From a data points, there are many ways of projections, for examples,
Figure 3. We will project the points to the green line or the violet line? Which one is the best choice?
Intuitively, the green line is better with more separated points. But how can we choose it “mathematically” (precisely)? We need to know about:
- Mean: find the most balanced point in the data.
- Variance: measure the spread of data from the mean. However, variance is not enough. There are many different ways in that we get the same variance.
- Covariance: indicate the direction in that data are spreading.
An example of the same mean and variance but different covariance.
Figure 4. Different data but the same mean and variance. That’s why we need covariance!
Algorithm
- Subtract the mean to move to the original axes.
- From the original data (a lot of features ), we construct a covariance matrix .
- Find the eigenvalues and correspondent eigenvectors of that matrix (we call them eigenstuffs). Choose couples and (the highest eigenvalues) and we get a reduced matrix .
Projection original data points to the -dimensional plane created based on these new eigenstuffs. This step creates new data points on a new dimensional space ().
- Now, instead of solving the original problem ( features), we only need to solve a new problem with features ().
Figure 5. A big picture of the idea of PCA algorithm.[ref]
Code
from sklearn.decomposition import PCA s = np.array([...]) pca = PCA(n_components=150, whiten=True, random_state=42) # pca.fit(s) s1 = pca.fit_transform(s) print (pca.components_) # eigenvectors print (pca.explained_variance_) # eigenvalues
Some notable components (see full):
pca.fit(X): only fit
X(and then we can use
pcafor other operations).
pca.fit_transform(X): Fit the model with
Xand apply the dimensionality reduction on
X(from
(n_samples, n_features)to
(n_samples, n_components)).
pca.inverse_transform(s1): transform
s1back to original data space (2D) - not back to
s!!!
pca1.mean_: mean point of the data.
pca.components_: eigenvectors (
n_componentsvectors).
pca.explained_variance_: eigenvalues. It’s also the amount of retained variance which is corresponding to each components.
pca.explained_variance_ratio_: the percentage in that variance is retained if we consider on each component.
Some notable parameters:
n_components=0.80: means it will return the Eigenvectors that have the 80% of the variation in the dataset.
When choosing the number of principal components (), we choose to be the smallest value so that for example, of variance, is retained.[ref]
In Scikit-learn, we can use
pca.explained_variance_ratio_.cumsum(). For example,
n_components = 5 and we have,
[0.32047581 0.59549787 0.80178824 0.932976 1.]
then we know that with , we would retain of the variance.
Whitening
Whitening makes the features:
- less correlated with each other,
- all features have the same variance (or, unit component-wise variances).
PCA / Whitening. Left: Original toy, 2-dimensional input data. Middle: After performing PCA. The data is centered at zero and then rotated into the eigenbasis of the data covariance matrix. This decorrelates the data (the covariance matrix becomes diagonal). Right: Each dimension is additionally scaled by the eigenvalues, transforming the data covariance matrix into the identity matrix. Geometrically, this corresponds to stretching and squeezing the data into an isotropic gaussian blob.
If this section doesn’t satisfy you, read this and this (section PCA and Whitening).
PCA in action
Example to understand the idea of PCA:
–
- Plot points with 2 lines which are corresponding to 2 eigenvectors.
- Plot & choose Principal Components.
- An example of choosing
n_components.
- Visualization hand-written digits (the case of all digits and the case of only 2 digits – 1 & 8).
- Using SVM to classifier data in the case of 1 & 8 and visualize the decision boundaries.
-:
–
References
- Luis Serrano – [Video] Principal Component Analysis (PCA). It’s very intuitive!
- Stats.StackExchange – Making sense of principal component analysis, eigenvectors & eigenvalues.
- Scikit-learn – PCA official doc.
- Tiep Vu – Principal Component Analysis: Bài 27 and Bài 28.
- Jake VanderPlas – In Depth: Principal Component Analysis.
- Tutorial 4 Yang – Principal Components Analysis.
- Andrew NG. – My raw note of the course “Machine Learning” on Coursera.
- Shankar Muthuswamy – Facial Image Compression and Reconstruction with PCA.
- UFLDL - Stanford – PCA Whitening. | https://dinhanhthi.com/principal-component-analysis | CC-MAIN-2020-29 | refinedweb | 812 | 52.26 |
Spawning and Controlling Vehicles in CARLA
This is a short tutorial on using agents and traffic tools in CARLA. This wiki contains details about:
- Spawning Vehicles in CARLA
- Controlling these spawned Vehicles using CARLA’s PID controllers.
Pre-requisites
We assume that you have installed CARLA according to instructions on the website. This tutorial used CARLA 0.9.8.
Let’s begin! First, start the CARLA server:
./CarlaUE4.sh
This should open up the CARLA server and you will be greeted with a camera feed:
Spawning a vehicle in CARLA
Now that we have the CARLA server running, we need to connect a client to it. Create a python file, and add the following lines to it:
import carla client = carla.Client('localhost', 2000) client.set_timeout(2.0)
We now have a client connected to CARLA!
Try exploring the city using the mouse and arrow keys. Try moving to a bird’s eye view of the city and add the following lines to your code:
def draw_waypoints(waypoints, road_id=None, life_time=50.0): for waypoint in waypoints: if(waypoint.road_id == road_id): self.world.debug.draw_string(waypoint.transform.location, 'O', draw_shadow=False, color=carla.Color(r=0, g=255, b=0), life_time=life_time, persistent_lines=True) waypoints = client.get_world().get_map().generate_waypoints(distance=1.0) draw_waypoints(waypoints, road_id=10, life_time=20)
All roads in CARLA have an associated road_id. The code above will query the CARLA server for all the waypoints in the map, and the light up the waypoints that are present on road with road_id 10. You should see something like this:
This visualization helps us in finding out a good spawn location for a vehicle. Let’s spawn a car somewhere on road 10 now.
We first need to query for the car’s blueprint.
vehicle_blueprint = client.get_world().get_blueprint_library().filter('model3')[0]
This blueprint will be used by CARLA to spawn a Tesla Model 3.
We now need to obtain a spawn location.
filtered_waypoints = [] for waypoint in waypoints: if(waypoint.road_id == 10): filtered_waypoints.append(waypoint)
This gives us a list of all waypoints on road 10. Let’s choose a random waypoint from this list as the spawn point. This information, together with the blueprint, can be used to spawn vehicles.
spawn_point = filtered_waypoints[42].transform spawn_point.location.z += 2 vehicle = client.get_world().spawn_actor(vehicle_blueprint, spawn_point)
The reason for increasing the ‘z’ coordinate of the spawn point it to avoid any collisions with the road. CARLA does not internally handle these collisions during spawn and not having a ‘z’ offset can lead to issues.
We should now have a car on road 10.
Controlling the spawned car
We will be using CARLA’s built-in PID controllers for controlling our spawned model 3.
Let’s initialize the controller:
from agents.navigation.controller import VehiclePIDController custom_controller = VehiclePIDController(vehicle, args_lateral = {'K_P': 1, 'K_D': 0.0, 'K_I': 0}, args_longitudinal = {'K_P': 1, 'K_D': 0.0, 'K_I': 0.0})
This creates a controller that used PID for both lateral and longitudinal control. Lateral control is used to generate steering signals while latitudinal control tracks desired speed. You are free to play around with the Kp, Kd and Ki gains and see how the motion of the car is affected!
Let’s choose a waypoint to track. This is a waypoint on the same lane as the spawned car.
target_waypoint = filtered_waypoints[50] client.get_world().debug.draw_string(target_waypoint.transform.location, 'O', draw_shadow=False, color=carla.Color(r=255, g=0, b=0), life_time=20, persistent_lines=True)
The tracked waypoint should now be red in color.
Now, track!
ticks_to_track = 20 for i in range(ticks_to_track): control_signal = custom_controller.run_step(1, target_waypoint) vehicle.apply_control(control_signal)
You should see something like the GIF below:
Summary
That’s it! You can now spawn and control vehicles in CARLA.
See Also:
- Follow our work at for more CARLA related demos.
References | https://roboticsknowledgebase.com/wiki/simulation/Spawning-and-Controlling-Vehicles-in-CARLA/ | CC-MAIN-2021-10 | refinedweb | 637 | 51.75 |
In today’s Programming Praxis our task is to neatly display Pascal’s triangle. Let’s get started, shall we?
A quick import to making printing slightly more convenient:
import Text.Printf
Calculating Pascal’s triangle is trivial:
pascal :: [[Integer]] pascal = iterate (\prev -> 1 : zipWith (+) prev (tail prev) ++ [1]) [1]
To display the triangle correctly, we need to prepend the appropriate amount of spacing to each line based on the longest (i.e. last) line.
prettyPascal :: Int -> IO () prettyPascal n = mapM_ (\r -> printf "%*s\n" (div (longest + length r) 2) r) rows where rows = map (unwords . map show) $ take (n + 1) pascal longest = length $ last rows
An that’s all there is to it. A quick test to see if everything is working proprely:
main :: IO () main = prettyPascal 10
Advertisements
Tags: bonsai, code, Haskell, kata, pascal, praxis, programming, triangle
December 6, 2011 at 12:19 pm |
Nice to see you back. Learned so much from your answers. Thanks. | https://bonsaicode.wordpress.com/2011/12/06/programming-praxis-pascals-triangle/ | CC-MAIN-2017-30 | refinedweb | 158 | 61.06 |
I am trying to write a program that keeps track of social security numbers (SSN) of customers of a bank. I have to assume that SSN is a four-digit number (e.g., 1234. 0123 is not allowed.). When the program runs,it has to prompt that the user should enter a 4-digit SSN number to record, or 0 (zero) to exit.
The program also needs to be able to give an error message if a certain number is entered more than once.
After all the numbers have been saved and you finally end it by pressing 0, it needs to be able to output all the entered SSN's.
Here is my code so far, I am kinda lost from what to do from here can anybody help explain? not sure how to approach the rest can somebody please help me finish this code?
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
bool isExist(int ssn, int records[], int numberOfRecords);
void listRecords(int records[], int numberOfRecords);
int main()
{
int records[32];
cout << "Please enter number or 0";
system("pause");
return 0;
}
bool isExist(int ssn, int records[], int numberOfRecords)
{
}
void listRecords(int records[], int numberOfRecords)
{
}
First, please use code tags when posting code. Without code tags, then code is unformatted and hard to read.
Here is my code so far, I am kinda lost from what to do from here can anybody help explain?
The code you posted does nothing really.
You are to read the number, and then search the array to see if the number appears in the array. If it doesn't then you store the number in the array at some spot (usually at the spot in the array after the last element that was added). The assignment states plainly what you're supposed to do.
can somebody please help me finish this code?
Again, you really didn't do any coding except write some function header. You have no search loop in the isExist() function, you have no code to store the value in the array, you basically have nothing.
But why were you able to at least write the empty function body, not knowing what to do next? Ok, so finish the internals of each of the functions that you wrote. If not, what specifically is stopping you from completing the isExist() and listRecords() functions? You are passed arguments, and regardless of the main() program, you should be able to finish those functions. Then you get main() to call these functions at the appropriate time.
Since you need to store the number, I can see where it may get tricky since you need to keep track of where the last element was added. But in no way does searching an array for a number be that difficult (the isExist() function), and printing the values of an array (the listRecords) functions be such a stumbling block. So again, what is the specific reason why you cannot complete these rather simple functions? (given that you wrote the function header, complete with parameters).
Regards,
Paul McKenzie
Last edited by Paul McKenzie; June 26th, 2013 at 09:49 PM.
So again, what is the specific reason why you cannot complete these rather simple functions? (given that you wrote the function header, complete with parameters).
Skeleton provided in the assigment?
All advice is offered in good faith only. You are ultimately responsible for the effects of your programs and the integrity of the machines they run on. Anything I post, code snippets, advice, etc is licensed as Public Domain
C, C++ Compiler: Microsoft VS2017
Forum Rules | http://forums.codeguru.com/showthread.php?538091-please-help-with-c-program-question-over-arrays!&mode=hybrid | CC-MAIN-2017-17 | refinedweb | 600 | 70.94 |
NAME
BN_generate_prime_ex2, BN_generate_prime_ex, BN_is_prime_ex, BN_check_prime, - generate primes and test for primality
SYNOPSIS
#include <openssl/bn.h> int BN_generate_prime_ex2, since OpenSSL 0.9.8,);
Deprecated since OpenSSL 3.
If ret is not NULL, it will be used to store the number.
If cb is not NULL, it is used as follows:
BN_GENCB_call(cb, 0, i) is called after generating the i-th potential prime number.
While the number is being tested for primality, BN_GENCB_call(cb, 1, j) is called as described below.
When a prime has been found, BN_GENCB_call(cb, 2, i) is called. OPENSSL_CTX associated with ctx will be used.
BN_generate_prime_ex() is the same as BN_generate_prime_ex2() except that no ctx parameter is passed. In this case the random number generator associated with the default OPENSSL-allocated, ...).. BN_is_prime() and BN_is_prime_fasttest() can similarly be compared to BN_is_prime_ex() and BN_is_prime_fasttest_ex(), respectively.
RETURN VALUES
BN_generate_prime_ex() return 1 on success or 0 on error.
BN_is_prime_ex(), BN_is_prime_fasttest_ex(), BN_is_prime(), BN_is_prime_fasttest() and BN_check_prime return 0 if the number is composite, 1 if it is prime with an error probability of less than 0.25^nchecks, and -1 on error.
BN_generate_prime() returns the prime number on success, NULL otherwise.. | https://www.openssl.org/docs/manmaster/man3/BN_generate_prime.html | CC-MAIN-2019-47 | refinedweb | 190 | 59.4 |
Here is the winners archive.
Some of photos aren't too crazy or offer a landmark that is recognizable if you'd seen it before. But most of them offer very little in terms of knowing where to start unless you've got a huge body of contextual knowledge you can draw on.
A couple ones that I had absolutely no idea where to start with:...
I imagine the CIA/NSA has a crack team of a couple dozen people doing this exact job.
It's a travesty communities and discussions devolve so quickly on the internet (though I of course know from PGs eternal struggle how hard it is to prevent). Whoever can solve this problem (nice try disqus etc) will certainly claim fame.
Up and down votes won't cut it. It will require a serious inquiry into psychology, sociology and behavioural studies I believe.
from land artifacts. I supposed it is a bit more difficult as google maps didnt make it to Waziristan yet.
It sort of reminds me of this article from a while back...
What are the next steps?
However, it is not for any command line program. It works only for python programs that use argument parser. Though, one can write a python argument parser around a cli to make it work.
The description file should help automatic GUI creating or shell completion.
Encoder (-e). OpenCV FOURCC Type. {Empty text box}.
Just take all possible options and ... shove them right in the user's face. Is that user friendly? Whom is it helping, who doesn't want a CLI but also knows how the CLI program works, but also can't script it?
and lose the ability to pipe, check exit status, run from cron, run through ssh or a different machine, call from other scripts...
The last couple of months as I know about more and more the command line, and can configure my shell and tools the way I like, I feel I'm more productive with the terminal than any other GUI application.
1. Conditions and restarts: As far as error handling in programs go this is the most rock-solid system I've encountered. You can tell the system which bits of code, called restarts, are able to handle a given error condition in your code. The nice thing about that is you can choose the appropriate restart based on what you know at a higher-level in the program and continue that computation without losing state and restarting from the beginning. This plays well with well structured programs because the rest of your system can continue running. Watching for conditions and signalling errors to invoke restarts... it's really much better than just returning an integer.
As a CL programmer using SLIME or any suitable IDE, this error system can throw up a list of appropriate restarts to handle an error it encounters. I can just choose one... or I can zoom through the backtrace, inspect objects, change values in instance slots, recompile code to fix the bug, and choose the "continue" restart... voila the computation continues, my system never stopped doing all of the other tasks it was in the middle of doing, and my original error was fixed and I didn't lose anything. That is really one of my favorite features.
2. CLOS -- it's CL's OO system. Completely optional. But it's very, very powerful. The notion of "class" is very different than the C++ sense of struct-with-vtable-to-function-pointers-with-implicit-reference-to-this. Specifically I enjoy parametric dispatch to generic functions. C++ has this but only to the implicit first argument, this. Whereas CLOS allows me to dispatch based on the types of all of the arguments. As a benign example:
Will print "..." and "Bark!" But the trivial example doesn't show that I can dispatch based on all of the arguments to a method:Will print "..." and "Bark!" But the trivial example doesn't show that I can dispatch based on all of the arguments to a method:
(defclass animal () ()) (defclass dog (animal) ()) (defgeneric make-sound (animal)) (defmethod make-sound ((animal animal)) (format t "...")) (defmethod make-sound ((dog dog)) (format t "Bark!")) (make-sound (make-instance 'animal)) (make-sound (make-instance 'dog))
Conversely...Conversely...
(defclass entity () ()) ;; some high-level data about entities in a video game (defclass ship (entity) ()) ;; some ship-specific stuff... you get the idea. (defclass bullet (entity) ()) ;; ... more code (defmethod collide ((player ship) (bullet bullet))) ;; some collision-handling code for those types of entities... (defmethod collide ((player ship) (enemy ship))) ;;; and so on...
Where collide is a virtual function of the Entity class requiring all sub-classes to implement it. In the CLOS system a method is free from the association to a class and is only implemented for anyone who cares about colliding with other things.Where collide is a virtual function of the Entity class requiring all sub-classes to implement it. In the CLOS system a method is free from the association to a class and is only implemented for anyone who cares about colliding with other things.
Ship::collide(const Bullet& bullet) {} Ship::collide(const Ship& ship) {}
The super-powerful thing about this though is that... I can redefine the class while the program is running. I can compile a new definition and all of the live instances in my running program will be updated. I don't have to stop my game. If I encounter an error in my collision code I can inspect the objects in the stack trace, recompile the new method, and continue without stopping.
3. Macros are awesome. They're like little mini-compilers and their usefulness is difficult to appreciate but beautiful to behold. For a good example look at [0] where baggers has implemented a Lisp-like language that actually compiles to an OpenGL shader program. Or read Let Over Lambda.
One of the most common complaint I hear about macros (and programmable programming languages in general) is that it opens the gate for every developer to build their own personal fiefdom and isolate themselves from other developers: ie -- create their own language that nobody else understands.
Examples like baggers' shader language demonstrate that it's not about creating a cambrian explosion of incompatible DSLs... it's about taming complexity; taking complex ideas and turning them into smaller, embedded programs. A CL programmer isn't satisfied writing their game in one language and then writing their shaders in another language. And then having to learn a third language for hooking them all up and running them. They embody those things using CL itself and leverage the powerful compiler under the floorboards that's right at their finger tips.
Need to read an alternate syntax from a language that died out decades ago but left no open source compilers about? Write a reader-macro that transforms it into lisp. Write a runtime in lisp to execute it. I've done it for little toy assemblers. It's lots of fun.
... this has turned into a long post. Sorry. I just miss some of the awesome features CL has when I work in other languages which is most of the time.
[0]...
"""Thats asking too much. If Lisp languages are so great, then it should be possible to summarize their benefits in concise, practical terms. If Lisp advocates refuse to do this, then we shouldnt be surprised when these languages remain stuck near the bottom of the charts."""
What?? Why? The problem is not and has never been communication of Lisp features. No one made a concise list of why C and Java are so great that people rushed to use them. Instead, they were pervasively used and taught in universities, and they are pervasively used in the development of most applications for e.g. Windows and Linux, and they are relatively simple languages (in theory) whose semantics most people "get". No wacko high order crap, no weight curried things, no arrows or morphisms or monads or macros.
Programmers of such languages don't owe the rest of the world anything. Everyone has a choice about what to use, and it's each individual programmer's responsibility to choose them wisely. There is plenty of material about Lisp and Scheme out there. Unfortunately, we are in this TL;DR culture where no one has the time to spend a few hours every week to learn something new, since somehow that's too big a risk on their precious time.
Now, for some comments:
1. Everything is an expression.
He says this is a boon, but it's also confusing for "expressions" which are side effectful. Too bad he did not talk about that, nor did he talk about how the expression-oriented way of thinking is really best for purely functional languages that allow for substitution semantics.
2. Every expression is either a single value or a list.
This is wrong, unless we devolve "single value" into the 1950's idea of an "atom". What about vectors or other literal composite representations of things? What about read-time things that aren't really lists or values?
3. Functional programming.
Functional programming is indeed great, but why don't we talk about how in Lisps, we don't get efficient functional programming? Lisp has tended to prefer non-functional ways of doing things because Lisp will allocate so much memory during functional programming tasks that for many objectives, FP is far to inefficient. Haskell solves this to some extent with things like efficient persistent structures and compilation algorithms such as loop fusion. Lisp doesn't really have any of this, and the data structures that do exist, many people don't know about or use.
4 and 5 don't really have to do with Lisp but particular implementations. That's fine I guess.
6. X-pressions.
What the hell is an X-pression?
7. Racket documentation tools.
Okay.
8. Syntax transformations.
He made the same mistake as he so baroquely outlined at the start. What in the world are these "macros" and "syntax transformations" good for? You're just telling me they're more powerful C preprocessor macros that can call arbitrary functions. But I was taught that fancy CPP code is a Bad Idea, so boosting them is a Worse Idea.
9. New languages.
Same problem as 8. You say it's useful but you don't say why. Just that it's "easier".
10. Opportunities to participate.
Nothing to do with Lisp again.
* * *
Instead of all this glorifying of Lisp and etc, why don't we spend time increasing that library count from 2727 to 2728? Or do we need to go through an entire exercise about whether that time spent is worth it or not?
"""Rather, you arebecause a Lisp language offers you the chance to discover your potential as a programmer and a thinker, and thereby raise your expectations for what you can accomplish."""
You're repeating everyone else. Notice how difficult it is to convey such things without being hugely abstract and unhelpful? Why don't other programmers see this huge productivity benefit from these Lisp wizards in their day-to-day life? Where are the huge, useful applications? They all seem to be written in C or C++.
"""Its mind-bendingly great, and accessible to anyone with a mild curiosity about software. """
It is accessible to those who are intently curious about theoretical aspects of software development, especially abstraction, and who can take exercises which require mathematical reasoning. A "mild curiosity" in my experience with others will not suffice.
* * *
This post may sound somewhat cynical and negative, but Lisp enlightenment articles are almost as bad as Haskell monad tutorials. They're everywhere and by the end, still no one gets it. And I don't like the attitude that because a group G doesn't understand it, and group H does, that H owes it to G to spoonfeed the information. That's not the case.
I decided to use racket for my little sides projects, asreplacement for scala and clojure.
I choose it because it was clear for me i can't stand limitationsother language impose to me in way of style and boilerplate, theracket macro (aka syntax transformer) system is the most advancedi know to reduce the boilerplate to a minimum and so just writewhat i want to express. In facts i rarely write macro becausewriting a good macro demand you take care of errors syntax, i amlazy in the bad meaning of the term.
I choose it because it's dynamic typed and i get more convincedthat type go on your way most of the time (expect complexalgorithms)(i write little projects, so refactoring argument isout). It enable me to write code and eval it on the fly withgeiser (using enter! on the back), after eval the new function itest it in the repl, hack until the function meet the requirementcopy paste from repl and boom i get a unit test. Because it aseval and it will become handy one time in your programmer lifefor sure.
I choose it because of it's sexpr syntax, as a heavy user ofemacs i know that other syntax is a pity.
Also because it has (and i use):
1. llar parser (implemented through a macro).
2. A pattern matching nothing to envy scala or clojure deconstructs.
3. An optional type system.
4. A contract system.
What i find hard as a new comer (to racket, not as a programmeralready now scala, clojure, half of c++ :), php) is
1. The broadness of the features the language offer, whichfeature to use e.g.: class or generics.
2. The documentation is rich but lack of examples for the commoncases, so you need to read the doc of the function (sometime it'shuge).
3. Understanding how racket module works is quite hard and youhave the documentation, if you don't plan to play with the macroexpander (the stuff that run your macro) and some dynamicfeatures you don't really need to.
4. You need to 'register' the errortrace library if you want astacktrace, quite a surprising behaviours for me.
My opinionated conclusion:
Racket is the best language design i ever see, it's hard to learnbut make you feel learning an other language will just become tolearn a new sub-optimal syntax. Sadly the ecosystem is lackinglibraries and peoples and i am not helping in this way.
e.g.:
As for the idea of Lisps, well, it sure seems neat. But I've literally never run across a situation where I needed my code to edit itself. I've never run across a situation where the lack of an everything-is-an-expression-is-a-list feature prevented me from doing what I wanted to do.
So I just don't really feel the need to get repetitive strain injuries in my pinky from reaching for the parentheses all the time.
I think this is a reason for much of the smugness of Lisp programmers. Whatever features you think are new or cool or advanced about your programming language, Lisp probably got there first.
This article is fairly misguided. I find it painful that everybody who writes about a Lisp offshoot (Scheme, Clojure, ...) ends up misrepresenting Common Lisp.
To sum up "Why Lisp?" from a CL perspective: CL has pretty much every feature of every programming language around, only that its better designed, implemented and generally more powerful. It's just a poweruser language. Its not just macros, sexps and lambdas. Its also number types, arrays, OOP, symbols, strings, structs, dynamic/lexical variables, lambda lists, multiple return values, on-line disassemble, exceptions, restarts, MOP, metacircular definition, great implementations, great libraries...... the list goes on and on... I surely forgot a ton of great stuff. TL;DR: CL got everything. And this "everything" is designed so well that its extensible and no CL programmer ever needs to doubt that any new feature can be implemented easily in CL.
To correct a few of the wrong statements of OP:
> WaitI love state and data mutation. Why would you take them away? Because theyre false friends.
CL is NOT particularily functional. Just because we know how to write good side-effect free code, doesn't mean its a functional language. (We jave SETF after all, failed to mention that aove).
> a syntax transformation in Racket can be far more sophisticated than the usual Common Lisp macro.
Outright wrong. The only reason Scheme has weird macro systems is because its a Lisp 1. CL is designed well (thus being a Lisp 2), and thats why its simple but ultimately more powerful macro system can work.
> A macro in Common Lisp is a function that runs at compile-time, accepting symbols as input and injecting them into a template to produce new code.
This is so wrong I had to write this comment. A macro in Common Lisp is a COMPILER, it accepts arguments and returns an SEXP. It is infinitely powerful, it can do EVERYTHING.
--
Maybe I'm doing it wrong, but a problem I've had with racket is as you begin to build larger projects, when something breaks it can be quite difficult to find out exactly where the break happened. When you compile Java or run Python, it's almost always immediately obvious what broke.
The way I got around this was to use a methodical TDD approach. Would be a shame if that turns out to be as good as it gets for lisp.
Something I haven't done yet but am interested to get to is attaching a repl console to a running process.
From my perspective Lisp is a powerful language because of its genesis in research. The question wasn't "How do we make a tool to make this hardware do what want?" but rather for a research goal.
If you want to read the actual original Lisp paper look up: Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I John McCarthy April 1960
Paul Graham covers it nicely in this essay, especially the "What made Lisp different" list about 1/3 in
Lisp has had expressiveness we're only recently seeing in popular mainstream languages now. It has to do with the design, the simplicity, and how Lisp expresses problems. I've often heard it described as "the language gets out of your way." That's why Lisp.
Why Racket? From an ignorant outsider's perspective, all Lisps seem to be more or less interchangeable when it comes to the language. They only differ in the details, and each seems to be about as difficult to learn as the other. Although this article does make somewhat of a case for specifically Racket, it seems to be a rather weak one - tools are nice and some language details are nice. But the same general arguments can be made for other Lisps, most notably Clojure. It seems to me that Clojure is a lot more practical: it has many good libraries in both Clojure and Java, it has some great tools, there's a lot of momentum, and it can be deployed everywhere (including the browser).
So, being an ignorant outsider, is there any reason the Lisp I should learn isn't Clojure?
I hope to see some of you there!
His list is concise but man did he take a while to get to it!
Seriously though. The introduction was super relevant as I have wondered the exact same question about Lisp myself. What features make it so praise-worthy? Maybe X-expressions isn't a core feature for everyone to appreciate, but the fact that everything is an S-expression is an understated value. People complain about its syntax, but alternate versions (so many reincarnations of parentheses-less Lisps) have never caught on.
The thing is, Lisp is no longer unique in its feature set, and languages with more standard forms of syntax have incorporated some of its features. But it is uncommon to find all of these listed features in one language. In the domain of data analysis where I do most of my work, it still makes me sad that XLISP-STAT has been supplanted by other languages which leave the user wanting.
I find it is very hard to define functional programming for many people but this is what I have come to explain to people:
Functional programming means thinking in terms of mathematical functions in the f(x) sense. Once you get that basic promise, that for any given input you have a single correct output, then it transforms the whole way you think about and designing your software.
The better I get with lisp, the more everything else changes. I may have to try Racket.
I believe the author is the one responsible for the facelift of Racket's documentation. He may belittle his own lack of formal programming education but I am thankful for his design chops.
By the way, Scribble (item 7) is an implementation of Literate Programming - a feature is some other languages too. In Haskell, for example, you can write programs in Latex with embedded code.
The distinct feature of Lisps and good lispers is clarity of thought and conscience of writing.
You do not have to know any Lisp to do basic things in TXR, like extracting data (in fairly complicated ways) and reformatting it, but the power is there to tap into.
In TXR's embedded Lisp dialect, ("TXR Lisp"), you can express yourself in ways that can resemble Common Lisp, Racket or Clojure.
You can see a glimpse of this in this Rosetta Code task, which is solved in three ways that are almost expression-for-expression translations of the CL, Racket and Clojure solutions:
Or here, with syntax coloring:...
If you closely compare the original solutions, you will see that certain things are done more glibly in TXR Lisp.
I spent most of my life being disgusted by the frivolity of most people's desires and qualms, and for this reason, I feel I deeply understand why Chris Knight did what he did. No reason, no justification, no particular aim, just life.
While I still catch myself wishing for such a life, I realized I could not blame or reject what I do not actively participate in. Furthermore, I came to the conclusion, possibly wrongly, that a life worth living is a life worth sharing, that society will always be able to offer you more than you can offer it.
I now believe that the solution is not to reject society, nor be tied by its requirements or norms, but rather behave as a free agent, with independence, compassion and mental fortitude.
Law, Economy, Politics, Religion, Science, Technology, ... are, in my opinion, mere relics and artifacts of thousands of years of civilization, localized attempts at guiding the seemingly mis-guided, while becoming eventually meaningless in the grand scheme of things.
These civilized relics are not necessarily bad, but as with anything else attachment becomes the issue. While becoming a hermit is possibly the quickest way of severing those ties, attachment is the burden of the mind, not of society at large. Isolation diminishes, or even wipes attachment issues altogether, but it does not resolve them.
This might come across as preachy, though it certainly isn't my intention, I simply wanted to share my view with anybody who, like I used to, wishes for isolation as a remedy.
This part resonated a lot to me. I consider self-awareness one of my qualities. But I too feel like the more I try and understand myself, the more distanced of the world I am. If I micro-analyze every reaction I have, I miss the point to connect to another person. I take myself out of society.
I found out that being defined by another person is a good thing for me. Particurlaly by people I love. I want to naturally be the person that made people I love love me.
Interestingly, the journalist has an upcoming movie where he's played by Jonah Hill. A fugitive murderer had used his name as an alias and through that, he'd developed a relationship with him and interviewed him after the person was convicted.
This resonates very strongly with me personally. So much so, I traveled to Alaska and hiked into "The Magic Bus" of Chris McCandless/Into The Wild fame [1]. From there, I spent 2 years driving to Argentina, sleeping out in my tent as often as possible. I'd often go a week without seeing or talking to another person, two weeks when I found somewhere remote enough.
Since then I've moved to the Yukon, where I've met some very interesting characters. One guy, in Dawson City, lives in a cave across the Yukon River from town. He has a second cave full of chickens, and he sells the eggs in town to make enough money to pay for food/beer. He boats across the river in summer and walks across the river for 7 months of the year.
I once again feel the pull, and I'm heavily planning my next trip - 2 years around Africa, hopefully getting as remote as possible. With luck, that will lead into a 2 year Europe->SE Asia trip, once again camping and hiking as much as possible.
[1]).
The moon was the minute hand, the seasons the hour hand.
Or write a book, invest, live on interest in the woods.
He wouldn't really like having to write, but he admires good writing, and if it would grant contentment...
See also, coding in the woods......
Granted, they weren't alone (it was a family), but they truly lived a hermit's existence, even when they were discovered by geologists.
The author of this has written a number of other spellbinding articles:
I also just finished listening to the latest "Hardcore History" podcast regarding WWI. Holy shit, what crazy things humans have done/experienced.
An Island to Oneself...
More info on Tom Neale:-...
And this...
Fugue? Small scale stroke? Or just a need to quieten the brain? Has this man had a neurological examination of any kind?
So for example, you might have a feature branch that includes some schema changes and some value modifications, and a content branch that includes a bunch of inserts into a few content tables that happen to include foreign key references to each other (so you need to maintain referential integrity when replaying those updates/inserts).
I don't see anything in the description that indicates this tool address those problems. For me, those are really the only problems that a DB version control system ought to be focused on. Speed of snapshotting is not all that important in a development environment as you typically work on a cut-down dataset anyway. A minute or so to take a snapshot a few times a day isn't a huge deal, whereas taking more frequent snapshots doesn't seem like something that adds any value, if it doesn't address any of the other problems.
Excerpt: "Irmin is a
Irmin is not, strictly speaking, a full database engine. It is, as are all other components of Mirage OS, a collection of libraries designed to solve different flavours of the challenges raised by the CAP theorem. Each application can select the right combination of libraries to solve its particular distributed problem."
[1]
[2]
I can't imagine this would be kind to a production database (lots of cleanup from copied & deleted tables), and would consume a lot more space than a gripped logical backup of the tables in question.
It takes snapshots and computes diffs between snapshots or the live database. It lets me drop and re-import some of my app's tables, then compute the minimum set of changes between the previous import and the new import. I wouldn't call it "git for ActiveRecord models" but it appears to be similar to this project.
Imagine a world where daily time-series data can be stored efficiently:This is a lesser known use case, but it works like this: I'm a financial company and I want to store 1000 metrics about a potential customer. Maybe the number of transactions in the past year, the number of defaults, the number of credit cards, etc.
Normally I would have to duplicate this row in the database every day/week/month/year for every potential customer. With some kind of git-like storing of diffs between the row today and the row yesterday, I could easily have access to time series information without duplicating unchanged information. This would accomplish MASSIVE storage savings.
FWIW efficiently storing time series data is big problem at my company. No off the shelf solution makes this easy for us right now, and we would rather throw cheap hard disk at the problem rather than expensive engineers.
Seems like it's for table schema snapshotting in a database without any external storage.
Browsing through the code, I see that it's highly table centric using SQLAlchemy.
This made me think of a table calledThis made me think of a table called
CREATE TABLE %s.%s LIKE %s.%s
... which works in mysql.... which works in mysql.
create table `a; drop table users;` (col int);
I don't know if the stellar code will trip over something like this. But mysql (SQL) shouldn't even allow names like that.
It's not too mature yet, the readme is mediocre at best, and it has some issues that will popup when working with a team, but it's pretty damn useful.
Looks like a good project, I definitely want an easy way to manage development databases....
I'm just wondering if this project offers anything special/better than the method I described.
As this hasn't been happening (despite how easy it would be for a terrorist to do), the only logical conclusion I draw is that the entire terrorist threat is so unbelievably overblown it doesn't warrant even thinking about when it comes to evaluating personal safety. I mean, how can it not be, given how easy it would be for a terrorist to just stroll in to an airport departures hall with a jacket bomb and detonate himself yet the closest we've seen to this is one lunatic failing to ignite some powder in his shoes and another idiot burning his crotch.
I think the real answer is that you can probably count the number of truly dangerous terrorists in the world on two hands. The rest of the current crop are nothing more than brainwashed amateurs who spend their time wreaking havoc and misery in isolated parts of the world that no normal person would ever have occasion to set foot in. This article from the FT makes a similar point:...
If I was a black man in the US I'd be much more afraid of looking at a cop the wrong way than being caught up in a terrorist outrage.
If you travel a lot and you're a smoker you'd know that the worst part is getting off the flight, crawling through the airport at exit and then not having a lighter because they took it off you at departure[1].
No penalty for being caught with a lighter, so I kept leaving lighters in my bag in different places deliberately to figure out how I could get one through.
Solution turned out to be simple, and I hit it almost accidentally. I removed the metal shield and then dropped the lighter into an inside pocket of my bag that contains pens and loose coins.
Worked 100% of the time thereafter.
The scanners being blacklisting like a virus scanner means they have the same problem, they can only identify known threats. Change the form of the threat and you're through until they update and train their scanners again (both human and machine).
The illusion of safety. I've since quit both smoking and flying frequently.
[1] I gave the TSA the idea of handing out lighters they have confiscated from departing passengers to arriving passengers but they didn't buy into it.
One thing that jumped out at me right away is that the explosives sniffer is also configured to detect narcotics, amphetamine and marijuana. Is this standard procedure at American domestic airports?
The diagram-heavy slides could certainly use some context.
This to me is the most troublesome aspect of this an entire ordeal. Any security pen-testing firm with their wits about them could have discovered these backdoors in a few simple audits.
The fact that he was able to find all of these is very worrisome to me. I can only imagine what other bugs/backdoors are built in to these systems.
Does any of this security matter with the fact that you can build weapons using airport giftshop items?
So, I took the case carry through x-ray in Phoenix, then, during a layover in Dallas I went outside the airport with the case, came back in through security, re-boarded the airplane and proceeded to Corpus Christi where I passed my vacation. After vacation, on the return flight to Phoenix, they found the knives and tools at the small airport in Corpus Christi as I attempted to board. I gave them to security and nothing came of it but I didn't feel it wise to tell them that I had already been through two checkpoints with the contraband. I realize things have probably tightened further since then but still... I was a bit shocked. And I'm still thinking a lot of the "security" at airports is for show.
My YouTube video gives a nice overview of the benefits of a virtual whiteboard:
What does Hacker News think?
1. Be able to type in text (any font will do)
2. Drag-and-drop an image to be be able to annotate it over the top (would be excellent design task, e.g. using screenshots of some work-in-progress).
3. Movable objects?
I'm the author of a similar tool, , which is bitmap-based, ie the eraser works as it'd on a physical whiteboard, and there's no zoom (or undo/redo).
Great start, looking forward to seeing the future progress!
[1]:
Tried this years ago over dialup with some MS app. The technology was flakey and kept getting in the way. This works great.
I love using a Wacom tablet for drawing diagrams, and wish there was a good shared whiteboard tool that supported pen pressure.
I didn't know if this is possible at first, but a quick search revealed, which supports pen pressure using a plugin.
Any chance you might add that kind of flair to whiteboardfox?
It think you're on the right path positioning this for use in schools. That's our main use case too, replacing overhead projectors in the classroom, and ruining snow days for an entire generation of kids.
Good luck!
[1]
There's one thing stopping me from using it: the URL
I use jotwithme right now, and I can set a session name, then tell people to go there. Here, I have to get the url from my ipad, sent it to myself somehow, and send it to the student.
That's not exactly hard, but it's annoying enough compared to jotwithme that I'd keep using that. But if you had that feature, I'd switch. Jot's erase feature isn't as good.
If teachersare going to use this, many of them will already have a powerpoint to use.
Adding to this, an easy way to navegate betweens different whiteboards of the same author (so it is easy to go to the next slide).
Just my opinion about what might work.
Which doesn't say much to most, I guess... It looks as if the scan lines are mis-aligned, i.e. as if some pixels in each line is missing from each, causing the resulting image to be slanted and distorted....
Except all you need to follow along is a browser....
With DocumentDB, not having a local version severely limits what I'd consider this for. Losing that flexibility is a big deal. Maybe this is just a limited preview and they haven't build the management side for local installs.
> Want to edit or suggest changes to this content? You can edit and submit changes to this article using GitHub.
Pretty remarkable given Microsoft's approach to open source in the 1990s that they're now using a service built around Linus's bespoke open source version control system to allow people to suggest changes to their documentation.
Javascript execution within database. Stored procedures, triggers and functions can be written with Javascript. "All JavaScript logic is executed within an ambient ACID transaction with snapshot isolation. During the course of its execution, if the JavaScript throws an exception, then the entire transaction is aborted."
Pricing is based on "capacity units". Starts with $22.50 per month (this includes 50% preview period discount). One capacity unit (CU) gives 10GB of storage and can perform 2000 reads per second, 500 insert/replace/delete, 1000 simple queries returning one doc.
In order to see pricing details, change the region to "US West":
Very interesting addition to Microsoft offering. I was actually just yesterday wondering if they have any plans for this kind of service. Table Storage is quite primitive and Azure SQL on the other hand gets expensive when you have lots of data.
One potential "problem" with this is the bundling of storage capacity and processing power. If I understand this correctly, I would need to buy 10 CUs per month to store 100GB of data even if I'm not very actively using that data.
How does that work? Isn't that going to incur a major performance hit? If not, why don't other databases get rid of indexes?
Also, if anyone from MS is reading,... links to... which is a 404 error.
I pray Microsoft is looking for Python developers:
Have I missed something, or have MS delivered a novel and valuable feature? I'm not aware of support for transactions across documents in other NoSQL platforms. I'd be grateful if someone has any experience or better information in that regard, thanks.
What's the max. duration of database query, max size of query result.
What kind of performance can be expected, does it decrease as the size of database increases or it remains constant?
I'm going to wait a few days until hype settles.
[1]
Programmers tend to fall into (at least) two camps: the skeptics and the pragmatists.
Sometimes when I report a finding, programmers accuse me in one way or another of messing something up because that cant possibly be failing. Those are the skeptics, using incredulousness almost like a shield to protect their worldview. They tend to have an up-close/whats right in front of them approach to programming, are prolific and usually take a positive stance on programming.
At other times, reporting a finding is met with resignation, almost like please work around it because we just dont need this right now. Those are the pragmatists, taking the long view/forest for the trees approach, knowing that programming is more than the some of its parts but also that its a miracle it even works at all. They are the daydreamers and sometimes are perceived as negative or defeatist.
I was a pragmatist for as long as I could remember, but had a change of heart working with others in professional settings. I saw that certain things like databases or unix filesystems could largely be relied upon to work in a deterministic manner, like they created a scaffolding that helps one stay grounded in reality. They help one command a very mathematical/deliberate demeanor, and overcome setbacks by treating bugs as something to be expected but still tractable.
But here is one of those bugs, where the floor seemed to fall out from under our feet. One day I mentioned that SSL isnt working and about half the office flipped out on me and the other half rolled their eyes and had me track it down:
The gist of it is that OpenSSL was failing when the MTU was 1496 instead of 1500, because of path MTU discovery failing and SSL thinking it was a MITM attack and closing (at least, that is how I remember it, I am probably futzing some details).
That was odd behavior to me, because I see SSL as something that should be a layer above TCP and not bother with the implementation details of underlying layers. It should operate under the assumption that there is always a man in the middle. If you can open a stream to a server, you should be able to send secure data over it.
Anyway, we fixed the router setting and got back to work. All told I probably lost a day or two of productivity, because the office network had been running just fine for years and so I discounted it for a long time until I ruled out every other possibility. Ive hit crazy bugs like this at least once a year, and regret not documenting them I suppose. Usually by the time they are fixed, people have forgotten the initial shock, but they still remember that you are the one weirdo who always seems to break everything.
1. This is one reason it's a good idea to use signed ints for lengths even though they can never be negative. Signed 64-bit ints have plenty of range for any array you're actually going to encounter. It may also be evidence that it's a good idea for mixed signed/unsigned arithmetic to produce signed results rather than unsigned: signed tends to be value-correct for "small" values (less than 2^63), including negative results; unsigned sacrifices value-correctness on all negative values to be correct for very large values, which is a less common case here it will never happen since there just aren't strings that large.
2. If you're going to use a fancy algorithm like two-way search, you really ought to have a lot of test cases, especially ones that exercise corner cases of the algorithm. 100% coverage of all non-error code paths would be ideal.
I see it not as a question whether lengths should be signed or unsigned but whether subtraction, assignment etc should be polymorphic w.r.t signed and unsigned. I think the issue here is the polymorphic variants of these binary operators are inherently risky.
Casting gets a little tedious, but languages that do not have operator overloading should disallow subtraction from unsigned and subtraction that return unsigned. You either cast it up, or if possible reorder the expression/comparison so that the problem goes away. Even assignment can be a problem. Ocaml can get irritating because it takes this view but I think it is safer this way.It is very hard to be always vigilant about the unsigned signed issue, but hopefully a compiler error will mitigate risks, not completely, but it is better than nothing.
That leaves languages that allow operator overloading, in those cases if you are overloading an operator you better know what you are doing, and watch out for cases where the operator is not closed.
I don't think the author is going to make any strides towards improving/changing the UI
[1]...
From my relatively light explorations of Calibre to date (v. 1.25 on Debian jessie/sid):
The UI is clunky. Especially when trying to edit / capture bibliographic information I've found it beyond frustrating.
The built-in readers are severely brain-damaged and I've found no way to change them. The PDF reader is complete and total fail, the eBook reader isn't much better, and I seem to recall that accessing HTML docs is similarly frustrating.
By contrast, I've been impressed by the Moon+Reader Android eBook reader, generally like the Readability online (Web) reader and Android app, and had found a Debian eBook reader that was fairly decent client -- fbreader. Its main disadvantage is in not having the ability to set a maximum content width. I find that 40-45 em is my preferred width in general. Among fbreader's frustrations: I cannot define a stylesheet, though I can apply a selected set of styles (defining margin widths, e.g., but not the _text_ width, which is frustrating). The book I've presently got loaded is either right or center justified -- the left margin is ragged, again, frustrating. And text doesn't advance on a <space>, like virtually any other Linux pager.
If calibre readily supported alternative clients, I'd be a lot happier with it.
The ability to include / reference / convert Web content would be somewhere north of awesome. There's still a large amount of information online that I reference, but would prefer to archive or cache locally, and/or convert to more useful formats (usually ePub or PDF).
Optimizing viewing experiences for wide-format, vertically-challenged screens would be hugely useful. 16:9 display ratios mean vertical space is at an absolute premium. Most PDF viewers are utterly brain-dead in this regard (evince, for example, requires four manual repositionings to view a typical 2-up document). The Internet Archive's BookReader does an excellent job of consider positioning content and paging through it as two separate functions. I strongly recommend taking some UI notes from it.
Alternatively, the old 'gv' ghostscript Postscript and PDF reader will page through documents in a highly sensible fashion: top-bottom, left-right. Why this was achieved in 1992 while PDF readers of the subsequent 22 years have utterly blundered in this regard escapes me.
That said, I'm looking forward to this showing up in Debian's repos (I've got v1.25 presently).
________________________________
Notes:
1....
For everyone complaining about the UI and management functionality, realize that you are not the target audience. Head over to, look at the Calibre forum and the praise Kovid gets, and you'll see that he's largely catering directly to what his core users want.
It is interesting that Calibre and mobileread are still around, and relatively little changed. I lost interest and moved on once pretty much every commercially-available e-book became available in EPUB format. What's left is a very, very specialized core of enthusiasts.
My use-case: I download material in various formats from online, mostly in PDF, ePub, or some markup format (LaTeX, Markdown, HTML, etc.) I've got a large set of downloads, which I then try to import into Calibre. This is in support of a large research project.
1. It's difficult to tell what I've imported and what I haven't.
2. The import process itself is slow. Enough so that I'll fire it up, get caught up in other stuff, and ... well, tend not to get back to it.
3. The corpus is fairly large: around 1000 books and papers, plus another 5,000 others pulled from web archives.
4. Tracking this by metadata is crucial. Title, author, publication date, and tags. Managing _that_ is a headache on its own, especially adding metadata to works / confirming automatically extracted content is accurate.
5. Once I've got the information organized, reading, referencing, annotating, and other tasks should be supported.
Again: calibre is about the only tool out there I'm familiar with, but it's a pain. Zotero and various LaTeX bibliographic tools are also of some use.
A quick glance at the documentation says yes.
However, the user interface of Calibre is one of the worst I've ever encountered. It looks and feels like a teenagers first attempt at creating a desktop software prototype back in 1995. (Having to go to the website to download and install every single new minor release also feels like something from a bygone era.)
I donate to Calibre because I need it to continue existing, but I have no love for it.
one recently released such device is Boyue T62 (...) Here is an overview (the review is for the same device, just rebranded and with previos generation specs)-...
You also get much better pdf reading capabilities with these devices.
Until the next generation displays for reading come into play, these look much better overall than kindle, nook, etc.
Now I'm going to have to set up the system again, and I don't know whether this is going to happen again. The SD card that got corrupted was a Class 4 Kingston.
Maybe I'll look into a Sandisk (possibly Class 10?) next time. But I am worried that it's not the SD card's fault, but rather a combination of a journaling filesystem, an SD card and a sudden power outage.
Edited: Apologies, I realized now that the red button cuts power to the network switch, not to each individual Pi. But my concerns about the Pi and power cuts still remain though.
Also, any reason for not making the big red button randomly select a "datacenter" to take offline?
Idea: transition this into a 3 or 4 datacenter cluster.
A single binary with zero dependancies is so awesome. With a ruby on rails app I need to worry about ruby version, ruby implementation, gem versions, compilation of C based gems, etc etc. With Go i simply copy up the binary and run it..
- -
I favoured gogs because of the ease of deployment. What killed it was the fact that forking public repositories and creating pull requests is not implemented yet.
Since the ease of deployment for gitlab was drastically reduced lately, we settled for gitlab.
Installing Gitlab was definitely one of the painpoints, having an executable that just works is amazing
What are people's thoughts on open software projects like this eating into github licensing money? I feel guilty sometimes pushing gitlab since I really like github as a company (and want them to thrive)
How does this compare feature-wise to GitLab? How does this compare to GitLab regarding updates, i.e. how easy and seamless will update be and how often are they available?
Would be nice if someone could post some first-hand experience :)
1. The blue-on-red contrast is a bit hard on the eyes. Try a softer color palette?
2. Repository languages doesn't include Javascript?
3. Issue sorting and filtering might be important
Once again, kudos on a job well done!
Looks great! Good job!
Highly verbal kids, and that is generally kids who read a lot, will be told they are smart whether you do it or not. And if you're child's teachers are telling you how smart they are, and they ask you "Dad, my teacher said I'm really smart, do you think I'm really smart?" You'll have to decide what the narrative is.
That said, it's great to reward struggle rather than success and to emphasize that it is through failure that we value succeeding. Everyone I know who shielded their children from failure has struggled later with teaching them how to cope with failure. That isn't scientific of course, just parents swapping horror stories, but it has been highly correlated in my experience. Putting those struggles into the proper light is very important.
A less obvious but also challenging aspect of this though is that you must teach your children that natural skillsets don't determine their worth. You are good at maths but lousy at sports? Makes you no better or worse than someone with the opposite levels of skill. That is much harder as kids are always looking for ways to evaluate themselves relative to their peers. If you endorse that you can find yourself inculcating in them an unhealthy externally generated view of self worth.
I'm sympathetic to Khan's overall POV here, but "research says there are basically two kinds of people..." always tickles my skepticism antennae.
Claims like this are so often overstated by researchers to punch up an abstract, and then so often simplified further in uncritical 3rd party reports that I wouldn't bet a sandwich on the truth of any such claim without seeing the data for myself. C.f. the widely believed and largely unsupported claims about learning styles.
Would be nice of Khan to link to the publications so we could decide for ourselves.
Giving negative motivation to a kid, saying "you're stupid," is recognized to sometimes be a self-fulfilling prophecy. There's no reason that "you're smart" can't work in the same way. I would not be surprised if a lot of this phenomenon of children being negatively motivated from positive feedback ends up having a different explanation than the one posited here.
As the article notes, I was only praised when I got a correct answer, or used a big word without stumbling. In one particular memory, I am afraid of taking a new mathematics placement test in school -- not because of the difficulty, but precisely because I had gotten a perfect score on the last one. There was no room to grow, if I didn't get them all right again, would that make me not "smart?"
Very simple changes in the language we use with young children could possibly avoid that kind of anxiety in bright youth.
I only know of Japan and the US, but as someone who went to one of the most prestigious secondary schools in Japan and universities in the US, I have seen well-educated, smart people with "growth mindsets" struggle later in their lives.
1. Regardless of what we say, in many corners of adult life, results are valued over processes. While a superior process has a higher likelihood of yielding a superior result, this is often not the case, and in a perversely Murphy's law-esque manner, it turns out to be false at critical junctures of one's life. And the deeper the growth mindset is ingrained into you, the more disappointed/despaired you find the situation and feel incapacitated and betrayed. Of course, a singular emphasis on results with no consideration for process is equally bad. Most people find their own local optimum between the two extrema, and I don't see how a campaign towards one end of the spectrum is all that meaningful or worthy.
2. This probably sounds terrible, but not everyone is "smart" as measured by academic performance. Certainly effort is a huge part of the equation, but some minds are better wired for academics than others. And the longer you work at it and hence surround yourself with qualified peers, the more apparent it becomes that not everyone is working equally hard. This realization usually does't mesh well with the emphasis on process from one's formative education, and many people become jaded/hopeless. (And of course, even within academic subjects, there are individual variances). While it is important to try, it is also the responsibility of educators (and adults) to see if the child's potential lies somewhere else, or to borrow Mr. Khan's words, to see if the child can be tenacious and gritty about something other than academics.
But I also don't want that to be the equivalent of "participation awards" in Little League. For it to be of any real value, it has to go with teaching her how to actually think.
(Her mum is an excellent role model in this, 'cos she's basically competent in a dizzying array of small skills. "If you want to be good at everything like Mummy is, this is how you learn it!")
Basically the hard part is capturing her interest. Anything she's interested in, she will absolutely kill. Anything she's not interested in, she won't bother with. That bit she gets from me ...
It also reminds us to set a good example: learn things and do them. Because it doesn't matter what you say, it's the example you present.
That said, I was most calmed by the many, many studies that show that, as long as you don't actually neglect the kid, they'll probably turn out how they were going to anyway. So helicopter parenting really is completely futile.
We've caught her at midnight reading books more than once, so I'll call that "huge success" ;-)
Exactly what I feel. Days are becoming too short for such amount of interesting things to do and to learn (Hacker News, Quora, Designer News, Coursera, Khan Academy, TED, Project Guttenberg... the list is long, and it's growing...)
> "Researchers have known for some time that the brain is like a muscle; that the more you use it, the more it grows. Theyve."
I wonder to which extent this is correct. Sure, it would be nice if it was the case. It's a nice myth that anybody can achieve anything with the proper amount of work. I see it all the time in fields such as maths or music. Some people are naturally so much better than others than even a lifetime wouldn't be enough to catch up.
What makes me sad is the idea that not telling a child she smart is justified so that the child will meet the parent's expectations. Telling a smart child they are smart is honest and kind and humane. I believe that in the long run the attitudes toward honesty and humility and empathy are the most important things I instill as a parent.
Some things are easy for smart people and not acknowledging that as a factor in my child's successes would be dishonest when discussing those successes. It is akin to not acknowledging that a pitcher of cold Kool-Aid is not the product of economic circumstance.
Some success is comes from pure good fortune, some comes from just showing up, and some comes from hard work. Talking honestly about when and how each plays a role is my job as a parent. I hope my child develops the ability to distinguish challenge from a checklist of busy work.
It's not either or. A child can understand that some successes come because the task is easy for them. Others will come from hard work. The can tell the difference between watching an addition video and earning an orange belt.
That said, my standard for good parenting is forgiving. Just trying to do a better job than one's own parents is hard enough. My parenting advice, for what it's worth, is to treat children as antonymous moral agents, fully capable of making intelligent decisions and able to learn from mistakes. Talk with them honestly as such and avoid deceit even when they are small.
Because that is when the foundation for their life as a teenager and adult is laid.
Even a kid's brain will not "grow" more or less depending on what kind of stimulus he is exposed to. But it doesn't mean it's bad to reward and compliment your kid for struggling and working hard instead of just being naturally good at something. It helps the child to build a character and face problems instead of giving up. The article is right about that.
There are also many ways to get a better access to the full capacity of your brain. It's not like the movie "Lucy", but many conditions may prevent you for using it to its full potential: Age, injury or illness, sleep deprivation, stress and exhaustion, lack of nutrients, drug abuse and chemical unbalances, etc. Some of those factors present problems that can be treated or even prevented, and you will (most of the time) function at the same cognitive level as a careless smarter person.
Also, the fact that there is no way you can alter your intelligence without altering your DNA doesn't mean you can't use it to discover and apply better problem-solving patterns for a particular discipline, making yourself effectively smarter.
To me, the possibility that anyone can move from fixed to growth is astounding [1] ... that fact itself positively brims with the possibilities it opens up, if only a person can realize they're not stuck and they can expand their horizons.
Khan's description of "interventions" is interesting.
[1] I also suspect the converse is equally possible, given the right circumstances ... which is worth keeping in mind, I 'spose.
[1]...
[2]...
I have even seen non-educated mothers state this fact even while playing poker, "yeah I never tell my son he's smart, I congratulate his hard work instead because it changes his mindset".
When I was a child, I was told that I was very smart (which I was) and pressured to fulfill my potential. Other children may be pressured to be hard working and studious. I would rather celebrate people who are naturally gifted, and also people who choose to work hard. What is important is that people's actions arise naturally from their own desires, not from external pressure or manipulation.
Can intelligence be gained though? I agree that skill can only be gained through effort/practice/etc. But intelligence ... isn't intelligence more like a natural talent than something you can gain?
Much like you can't just train yourself to have a beautiful singing voice or big boobs or absolute pitch hearing, I don't think you can train yourself to be more intelligent. Smarter, yes, intelligenter, not really. It's a talent, not a skill.
USB Condom: ~$10, available. Tends towards either a bare board with USB connectors or that board with plastic shrink tubing on it. () () (probably an earlier version but the same person:)
UmbrellaUSB: ~$12, available soon? More polished/finished looking than the USBCondom, got their information on voltages from the USBCondom folks (see comments in the Krebs article above). Working on fulfillment of their Kickstarter (funded July 3). ()
ChargeDefense: ~$??, a "coming soon" page, a picture of a prototype, and maybe more in September. ()
LockedUSB: ~$20, available. More technical details available, more expensive and very blocky looking - expect it to block any adjacent ports. Technical information indicates that the single unit should work with both Apple and non-Apple devices (...)
Practical Meter: ~$20, available. Protects ONLY when used with their optimized 3-in-1 charging cables otherwise passes data through. Provides a 5-bar indicator of current. () more details in their kickstarter (...)
PortPilot: ~$60, not yet available. Much more expensive, MUCH more informative, switchable between data/no data. Includes a display showing possible and actual power draw, etc. Almost a development/diagnostic device. ()
At least 3 listed below via Amazon (2 in UK): PortaPow $7 (2 versions,, looks like a "beat you to market" device), and Pisen ~$1.70 ( and).
I think the board is because some power sources might go "hey I'm leaking, there is no device but I draw power!" and cut it off, but I only ever heard about it and never encountered it. My USB ports nicely power fans without ever having a data connection to anything.
...so that no drivers or userspace programs are allowed to communicate with any newly connected devices....so that no drivers or userspace programs are allowed to communicate with any newly connected devices.
# cd /sys/bus/usb/devices # for n in usb* ; do echo 0 >$n/authorized_default ; done...
Of course this only prevents the USB host, you'd have to disable all USB-gadget daemons on your android phone to not have the charger tinker with the phones's data.
NOTE/added: I just realized that the main purpose this is marketed is to protect the phone's data. I'd me more worried about the computer if someone asks me to lend some juice...
There should be an option to enable data transfer, currently you have to physically remove it.
I would love to have something like this, if it enabled my devices to be read only; some usb flash drives have a physical button to enable that.
>
I personally just carry a three way AC power splitter cube while traveling, which gives me enough ports for laptop+phone+whomever I ask to share with.
One of the ports has the data lines connected, the other port doesn't, so it could be used as a USB condom.
."
Way too many words on that page before just getting to the damned point.
Never mind....
Well, the data is very noisy. The main problem is that this data doesn't have a before/after comparison. Is the 850nm light visible now or it was always visible???
It's also very difficult to make a fair comparison. The room must be the same, the light sources must be the same (a new coffeepot with a small led can ruin the experiment, removing a coffeepot because it has recently broken can ruin the experiment).
For a preliminary experiment, the before-after comparison is enough. For a serious experiment you need many voluntaries, compare the before-after signals of them all at the same time in the same experimental conditions, and double blind testing.
There is a small possibility that they are measuring "excitement" instead of light. The subject hears that they are now going to test with very near infrared light. He got exited. They measure that. Perhaps the flash makes a slight sound, perhaps the light operator makes a slight sound. (Perhaps the 850nm flash makes a sound that the other flashes don't make?)
1.
There are similar stories with sound etc. I think some people can see near infrared, it is just question of finding them.
I know that this isn't written to be read critically, but I don't know what the take-away is.
Related and probably equally silly idea: I've always wanted a pair of sunglasses that could tune in to different EM spectra. How far are we from that? Night vision goggles are bulky because they need external power to do the frequency shifting, right?
[1]
[1]
Pretty friendly guy, helped me out via email with some questions I had when I was playing around with facetracker.
Anyways, fantastic execution! Great visualization. My only super-minor complaint is the fade in/fade out could be a little less abrupt when the songs change :)
Or you can click the background to listen to it on play.spotify.com.
I remember back when I delivered pizzas, it was not uncommon for most of us drivers to all be humming the same song as we are getting stuff inside, since we all listened to the same stations.
If it was a really good song ending as I got back to the store, it was not uncommon to find that I waited it out in the parking lot along with at least two other drivers. :)
One book that's not part of the collection but that I would recommend to the people here on HN is "James Nasmyth, Engineer: An Autobiography":
Here's a bit from a "coding interview" that went well for him:
."
Anyway, a pretty fun, educational book for someone with that mindset.
Just take a second to look up whether there are any modern translations that might be up your alley, or whether you prefer accuracy over readability, or what have you.
I think he means zenith, not nadir. 1909 was the high point of human civilisation, before barbarism and ugliness took hold.
Also, not covering Freud, Nietzsche & Marx was no mistake: this is a collection of lessons to learn, not lessons to learn from.
Very glad to see these freely available though.
[1]
P.S.: This is my first Ruby script. I'm still learning it.... will lead to The Cambridge History of ___ (geography or topic, e.g. Literature, India) and The Cambridge ___ History (time or topic, e.g. Ancient, Medieval, Natural). Each of these titles are several volumes, 500-1000 pages per volume, covering centuries of events from a British perspective.
German Classics,...
Eastern Classics,...-......
I was glad to see that some like that page. I was actually the one who grabbed that list of contents from Wikipedia, requested access to edit Project Gutenberg's "Bookshelves" wiki, and added the links there to the Project Gutenberg versions of many of the selections. It was fun and not hard.
For instance, Adam Smith argued that barter was an inefficient way to make transactions because it required a dual coincidence of wants by both parties. Nevermind that communities simply didn't function this way, instead giving what they had now in a system of credit rather than debt. This is one of many examples undermining Smith's ideas, so be careful if you decide to read such books. Unless your degree concerns historiography, your time would be much better spent elsewhere. (Graeber)
Smith is easy to debunk, but ideas contained within many classical novels provide popular justification for cultural imperialism. They're not so easy to address. (Said)
Not here to be cynical/negative -- they might be of great value, this is not my expertise. Can someone explain why deep learning articles are receiving attention rather than, say, Support Vector Machines / kernel-based methods of pattern analysis? Or other nonlinear analysis? Are they related?
It's been quite a while and even Ng has demonstrated that a billion parameter setup could be built for $20k using commodity hardware ().
I wonder what's happening at Google labs as of August 2014.
I imagine, though, that anyone not well versed in college mathematics may have issues with the explanations. If you want a good introductory resource, but either haven't covered or have forgotten some of the math in this book, I would recommend one of two resources:
[1] MetaCademy:[2] Neural Networks and Deep Learning (In Progress):
The first will take you through all the math first through some online courses and textbooks, and the second is a good general purpose introduction that I recommend to anybody interested in neural nets.
When I first encountered the idea that we do not get fat from eating too much and that calories weren't responsible, I thought it ludicrousthe body can't disobey the laws of physics! Thermodynamics! But after seriously thinking about the idea, I realized Taubes was providing a far more complete understanding of metabolism. The human body doesn't run on calories, it runs on food. Yes, we can easily learn the caloric content of food, but that's largely irrelevant. What's important is how food affects the body, not its raw energy content. I see this misconception time and time again, especially among smart people who like to reduce the human body to merely a physical machine, often ignoring the whole biology thing.
I think the hormone theory of obesity is correct and I think these studies will prove it. But even if they show otherwise, this type of research is long overdue and we all stand to benefit from the results.
[1]:
[2]:
I was the medical director of an obesity treatment clinic for 10 years, working with thousands of obese patients.
The most important lesson is that obesity is a disease, and each obese person has a different disease. Each case requires a unique treatment approach. "Cookie-cutter" methods won't cut it.
I'm convinced that obesity is the most complex disease the art and science of medicine has ever faced. I can't even begin to describe the mind-boggling complexity of the situation.
A minimalist outline: factor in participation of the endocrine system (insulin resistance, role of cortisol, thyroid, reproductive hormones), the immune system products promoting obesity, as well as adverse inflammatory effects of adiposity contributing to metabolic disarray, and the brain's functional role in metabolism involving highly intertwined connections of neuronal circuits regulating metabolism and sleep/circadian rhythms. And so I could go on for gigabytes on these subjects, even before citing the enormous list of references.
Short answer: all of these body systems (neural, endocrine, immune) are interactive. Think many:many relationship with "many"==trillions. Therein are the solutions to obesity. Small needles, huge haystack.
Short answer: all of these body systems (neural, endocrine, immune) are interactive. Think many:many relationship with "many"==trillions. Therein are the solutions to obesity. Small needle, huge haystack.
A few years ago it was mentioned at a conference that at the time over 250 human genes (and their peptide products) had been identified to play a role in obesity. Considering the multitude of known and potential gene/environment interactions, what simple "cause and effect" paradigm could we glean?
So yes, many obese patients respond favorably to low CHO, high N diets. Altering PUFA intake to approximate a 1:1 intake of N3 and N6 EFA in adequate amounts is warranted. Elimination of physiologically incompatible trans-fatty acids in the diet is absolutely necessary. Mono-unsaturated or saturated fats within calorie constraints are not usually an issue. Behavioral approaches are always indicated.
Just remember, each of us is different, our systems are inherently quirky, and tremendous variation is common. The above general rules are fine to start with, but be prepared, understand the "reality paradox": exceptions are the rule and not the exception.
There's a table at the bottom of the article that contains the tl;dr about the scientific studies referenced. All are still underway, there are no published results yet.
[1]:...
Not all weight is fat
Metabolic efficiency varies, including by calorie type
Much of the chemical energy output in the body is involved in actually repairing or replacing, not only in expanding the volume of fat reserves or even muscle.
It's all a thermodynamically-limited bunch of processes but thermodynamics is a limit rather than a driver of energy transformations.
Calorie REDUCTIONS don't guarantee weight loss because obviously the body can choose to expend less energy. And if the term CALORIE DEFICIT is used, it is not justifiably used because science currently can't determine the necessary level of granularity since energy, weight, and measurable metabolic output/activity all change in response to factors other than the ones which are thermodynamically relevant, and this makes thermodynamic equations/measurement of human dieing problematic. Essentially the system is kind of a 'black box' and some of the relevant inputs and outputs in the thermodynamic equation are 'inside' that mathematical 'black-box.'
Edits for spelling
Oh and a slightly less vague explanation can be expounded onto the concept of energy transformation to explain why it wouldn't always correlate with a weight change...combining or dividing molecules.
What if your body doesn't have enough energy to go through the processes of burning a fuel source (or the necessarily mistake or vitamins, or other nutrients...)
In my experience what gets me fat is meat, and what makes me lose weight is eating less meat and more pasta and rice. But I suppose it varies per person.
For example...
A British group of volunteers were locked in a zoo and were allowed to eat up to 5 kilos of raw fruit and vegetables per day - but only raw fruit and vegetables.
"Nine volunteers, aged 36 to 49, took on the 12-day Evo Diet, consuming up to five kilos of raw fruit and veg a day."
"The prescribed menu was:
- safe to eat raw; - met adult human daily nutritional requirements; and - provided 2,300 calories - between the 2,000 recommended for women and 2,500 for men,"
."
This is very difficult to do, as almost all human beings eat when they feel like eating WHAT they feel like eating. Earlier human experiments on effects of diet in the 1970s actually required the experiment subjects to live in the laboratory long-term, and to have every gram of everything they ate during the experiment measured exactly by experiment team assistants. Even at that, those experiments came up with few clear conclusions, perhaps because the experiments weren't lengthy enough or didn't include enough subjects for strong inferences. Now the experimenting begins again. Whether the currently hotly debated hypotheses about human diet win or lose, it's important to put the hypotheses to the test of a rigorous experimental study to advance human knowledge.
[1]
Also, have a look at this, originally written/published, it seems, in 1958:
Makes for a fascinating read, and it amazes me how close it gets to what's currently being put forward now (high fat low carb == good).
What if it also depends on the subject's microbiota, which would be impacted by a number of things including the (unwanted) consumption of residual antibiotics in meats.
Seems like the more we find out, the more questions there are.
TL;DR Calorie counting makes for mindful eating and changes habits, without suffering.
* Science in general and nutritional science specifically may or may not be sketchy (And this is news?),
* There are at least three ongoing, very interesting, apparently well-designed studies exploring the topic, with an emphasis on ongoing, and
* These three studies are the children of a researcher who lost weight when he changed his diet, an Enron billionaire, and Gary Taubes, a science journalist with a history of being very, very partisan. (No, really, go read Bad Science and then track down Polywater by Felix Franks---different scientific episodes, but with roughly similar hoo-ha involved; I'm talking about the style of the two discussions.
NuSI's approach to test long-standing food science assumptions.
From the practical standpoint of actually trying to lose weight/get people to lose weight, the challenges in nutrition are almost entirely around compliance (how to ensure someone sticks with the program) rather than substance (what people put in their bodies). Most people know, within reasonable terms, how to eat healthily. It may not be the most optimal way possible (perhaps keto or some other diet is), but if we spent more time studying how to teach compliance I think we'd be making a lot more progress towards stopping obesity.
Also, the warning on my iPhone was in Japanese and it was impossible to copy and paste it into a translator it so it was useless.
That's another way of earthquake advance warning - taking advantage of the latency between the epicenter and the surrounding area.
That sounds like a somewhat misleading simplification or a complete misunderstanding.
Presumably "the speed of sound in the earth" and "the speed at which earthquakes travel" is by definition the same - earthquakes just being "sound vibrations" in the earth with macro level amplitudes. I'd be very surprised if that was particularly close to what people think of as "the speed of sound" (which I'd assume means "about 350m/s").
Yes scientifically this is interesting. But it also means that we are willingly allowing ourselves to be tracked to great detail. You know they have internal reports or queries to show who has sex and when. Not that this is a big deal - we're human, and humans have sex. But it also can show who is having sex with whom, in some cases.
How long until Jawbone starts receiving court requests for this? (probably already happens)....
1) Do you have to give your calendar login to a 3rd party? 2) What happens I add something to my calendar on my computer, is there some alert sent to someone that they need to add a lego to the board?3) What I schedule something on the calendar online, but the lego doesn't get added to the board. When someone takes a picture and syncs it, what will happen to my appointment? Will it think it's gone and erase it? Notify of the descrepency, etc....
I'm envisioning in my head some arduino powered lego calendar that automatically puts the blocks in place as appointments are added/moved/deleted from the cloud.
Also, another cool level of granularity (if needed) could be using 1x2 or 1x1 lego blocks to add more information that's easily seen in the photo. Not only do you have different colors of 1x2 and 1x1 blocks, you can also place them in different positions (left/right vertically, top/bottom horizontally).
All in all, great idea. I'd like to set one of these up myself in the future.
[0]:
[1]
[2]
[3]...
Normally, vulnerabilities would be considered a bad thing. Heartbleed is a great example of that. But in cases like these, it's a very good thing. This is why I always like to remind those whose goal is to build more secure systems to consider the implications of their work, lest our devices become even more secure against us. They usually have in mind a world where everyone has full control of their devices which are then highly secure against attacks by others, and that's a good thing; but I think it's far more likely to turn into one where corporations have all the control and devices are secure against their owners, especially as typical users continue to choose security over freedom.
What are Chromecasts used for? Should I buy one?
The device is ideal for hotels since you usually get a nice HD TV in the room. But half the time I can't stream from Chromecast because of the wifi login.
A rooted Chromecast would essentially let me log in to the hotel wifi like I would on my laptop or phone. Then I can stream away.
This week we are freezing all feature merges and focusing on refactoring, code cleanup and generally repaying as much technical debt as possible.
We are also considering a gradual slowdown of the release cadence (we currently cut a release every month), to give more time for QA. Even though we work hard to keep master releasable at all times and run every merge through the full test suite, in practice there can never be enough real-world testing before a release. An 8-week cycle (which is roughly what Linux does) would allow us to freeze the release 1-2 weeks in advance and do more aggressive QA.
Trying to get CoreOS installed on VPS providers is a huge pain[0], and fleet and etcd are technically not labelled as production-ready (only CoreOS used as a base OS is)[1], so I'm really glad I can go back to vanilla Docker.
[0]:
[1]:
While great, I worry that this is a part-solution that will delay the implementation of a proper one.
I haven't found a satisfactory solution to having communicating containers across multiple hosts. There seems to be quite a few solutions in the making (libswarm, geard, etc). How are other people solving this (in production, beyond two or three hosts)?
So now docker is taking on the work of what systemd and other daemon-managers are supposed to solve? Looking forward to docker run --restart on-failure ubuntu /bin/bash exit -1
When you include a --restart "feature" you know for sure you have don goofed.
But anyway, the rest of the stuff looks like pure candy. Great job!
is this info up-to-date?...
Specifically: every click/interaction that loads content in your custom web view sends the webView:shouldStartLoadWithRequest:navigationType: message to your web view delegate. Without implementing that method, clicking a tel: link will prompt first. However, many apps throw some logic in there to detect any URI schemes that don't match the standard HTTP/HTTPS schemes used in normal websites, and trying to do something "nice" for the user, they handle requests for those URIs by calling:
[[UIApplication sharedApplication] openURL:request.URL];
This is a reasonable thing to do (outside the context of tel: links) because it allows the app to spawn an external app for custom URIs.
Therein lies the problem: not that UIWebView opens tel: links without prompting (it doesn't), but that many app developers are just trying to improve the inter-app experience, and unknowingly open tel: links directly with that openURL: method.
EDIT: Just my opinion, but I think it's actually pretty cool that Apple gives developers the ability to dial phone numbers without an extra prompt. It makes third-party contact/phone apps much more useful (imagine having to confirm every phone number to dial after tapping the contact in the built-in phone app). In a way, this is the kind of trust / freedom that iOS developers rarely enjoy without a fight. It's just unfortunate that in this instance, it also happens to be very easy to overlook this pitfall when implementing web view logic that handles non-http links.
On the other hand, with this system, every single app that ever uses a web view has to somehow magically divine that this could be an issue. The UIWebView docs certainly don't warn you about this. So what, is the expected behavior is that if you ever use a WebView in your app you should read every RFC on the planet in case there's some weird edge case like this? Maybe instead of creating systems that require careful developers we could try creating systems that work well by default and need you to explicitly turn on dangerous features like this.
I'd blame Apple just as much as the devs. They made a choice to be insecure by default in a situation when a majority of developers are going to assume it functions like the rest of the OS does. Web views in any app ought to behave like Safari by default.
FB, Google, and others all pay for bugs such as these, so even monetarily, it doesn't make sense to just release it to the public immediately. Again, this is assuming these bugs were not disclosed previously to companies affected.
edit: clarification
I just did a talk during BSidesLV on the subject of URL Schemes and dangerous implementations.
For those who want all the details:
For those who want to skip explanations on how they work and see the bad examples, auto skipping about 10min:
One example that I have in there is Yo. Yo will automatically Yo someone on your behalf. So if an inline frame has yo://gepeto42 (basically), and you have Yo installed, I have just "de-anonymized" your Yo account as you browsed my website (or any page where I could inject that iframe). A good tip on where to find out about those is to buy Launch Center Pro and to extract the plist it has. This has info about hundreds of iOS apps and how their URL Schemes work.
Happy hunting.
Can you please provide some evidence that this is practically possible? Last time I used Facetime it took quite some time before the connection was established.Can you please provide some evidence that this is practically possible? Last time I used Facetime it took quite some time before the connection was established.
Facetime calls are instant. Imagine you clicking a link, your phone calls my (attacker) account, I instantly pick it up and (yes) save all the frames. Now I know how your face looks like and maybe where you are. Hello pretty! Yes, it works. I tried.
"In practice, the closest to 'malicious' use I've seen the redict-to-app-store-from-ad case"...
Especially if you don't live in the Bay Area, since they'll want to know the falloff curves for it.
(You can fill out the form even if you didn't feel it. They need that data too.)
There is published work indicating correlations between rainfall and seismicity (1) and rainfall and volcanic activity (2). There's other work relating seismicity to fracking, filling the Oroville reservoir, etc.
A study (3) this week indicated a median land uplift of 4mm ranging up to 15mm uplift in some California mountains due to a mass deficit of 240Gt of missing rainfall since 2013.
I wonder if the drought-related uplift could alter underground strain patterns enough to influence earthquake frequencies or magnitudes? Any geophysicists wanna weigh in?
(1)...
Growing up in Wellington NZ we were always taught to expect and prepare for the 'big one', it was just part of life. Sometimes they completely come out of nowhere though, hitting places that are unprepared and often unaware that they are susceptible to seismic risk. This happened in Christchurch NZ in 2011 [1]. I'm not sure about the Valley, but there is nothing scarier than a new fault opening up.
[1]
Put it in the middle of the Australian outback, nothing's happened there for a billion years, nearly literally.
To the people down voting me: please explain why you think it's a good idea to put our digital infrastructure in a place which might well be destroyed in an afternoon when it can be put quite literally anywhere.
They are expecting a huge earthquake in my city and a recent analysis I read said if it happens (and happens in the magnitudes they are expecting) millions would die (not from the quake itself, but from not being able to get help, cold etc.). I believe in a case like that entire economy (and everything, really) of the country would fall irrecoverably for at least a 50 years.
Well, if it happens in the night, at least I won't be alive to see the effects of it. (My apartment is old and I live very near to the sea, so there is also the risk of tsunamis. YAY!)
The moral of the story is, if you are going to live in earthquake country, live in a well constructed wooden house on top of a mountain, or at least solid rock.
I actually got up and walked around this time. Last time I felt an earth quake was 3 years ago in NJ and that was super weird.
Feeling a little spoiled with these Earthquakes! Only moved here a few weeks ago.
I have similar feelings about how cheap politicians are to bribe....
This story shows that some things should not be privatized. Some people (Rand Paul) believe the Iraq war would not have happened had the US government not relied on for-profit corporations (Halliburton) for war-related contracts. See:...
(disclaimer: I am a volunteer/part time judge)
Edit:
I'd like to add that I will not comment in public whether I think the sentence is appropriate or not, as I don't ever want anybody in a trial to accuse me of a biased opinion, although I highly doubt that anyone of "my" indicted people will ever read HN.
I admire China for their swift execution of the criminals who were involved with the bombing. The west would still be hand wringing.
There's a sad state of affairs on the internet, where a couple of paragraphs of slanted commentary are called an 'article'.
BTW, I recently learned the Gutenberg was not his name and is really a significant historical inaccuracy. His name was Hannes Gensfleisch. "Gutenberg" was just one of the places his family resided.
[1]
What often happens is that editors have one translation of a book, say Les Misrable, and keep reprinting the same translation independently of the quality. So I was thinking that a github like platform to foster translation would be a great idea. Looks like gitenberg might by the project just for that.
But maybe it should pick a clone (gitlab ?), self host and fork/extend that tool to ease the use so that non-developer could use the site without git knowledge. Then again, tailorisation for translation might not be needed.
GitHub should really put some work into improving their feed algorithm so one project can't just clog it all.
Thanks.
Helps a lot from my experience.Helps a lot from my experience.
body { color: black; font-weight: normal; font-family: verdana; }
I found a few months ago... It isn't focused on building a digital library yet but what I like of this project is the good execution. It would be nice to merge them together!
Besides translations, what can people besides the author contribute? Doesn't it, on some level, ruin the character of these books? If you look at a non-fiction book from 80 years ago, is it worth bothering to correct the information when you can probably find it at your fingertips on wikipedia?
Using Git for just about anything other than what it was built for is a terrible idea. I mean the underlying system is incredibly powerful and could be useful in various projects, but the interface is horrific. I swear its like someone tried to make Git as difficult as possible to use. Programmers have a hard time understanding and using Git, non-programmers will just laugh and walk away. Every time a programmer has an issue with Git, whoever helps them has to sit down and explain the underlying system for 20 minutes and draw a bunch of sticks and bubbles. Non-programmers will never put up with this.
> Python is a language that suffers from not having a language specification ... There are so many quirks and odd little behaviors that the only thing a language specification would ever produce, is a textual description of the CPython interpreter... Keeping a language lean and well defined seems to be very much worth the troubles. Future language designers definitely should not make the mistake that PHP, Python and Ruby did, where the language's behavior ends up being "whatever the interpreter does".
This is an incredibly important point. The rise of PyPy is just one compelling illustration of how Python is at a point where a language specification is needed, and these crazy CPython-specific bugs need to be purged.
> I think for Python this is very unlikely to ever change at this point, because the time and work required to clean up language and interpreter outweighs the benefits.
I would disagree - I think it's possible for Python to change this. Any such bizarre behaviors need to be treated as a bug, and eliminated in the next release.
If you feel like the type system fights against you, chances are that you are doing something wrong. When I program, the type system definitely fights for me. It gives me a lot of guarantees, plus it's a really convenient way of self-documentation. I mean, I'm not even talking about Haskell or something really sophisticated: I see the advantages even in old plain Java or C++ (so much that, in fact, most of my gripes about Java are about its type system not being complex enough).
Also, being "unrestricted" is far from being an universally good thing, since it also means many more opportunities for mistakes. I know for a fact that I make more mistakes in languages without static typing, but well, I guess it's just me.
elliptic.nim:...
elliptic.py:...
Nimrod looks and feels like python, but it compiles to C. It's like C except with Pythonic syntax and with Boehm GC optional. In addition, Nimrod has a burgeoning NPM-like module ecosystem developing, albeit in the early stages.
import rdstdin, strutils let time24 = readLineFromStdin("Enter a 24-hour time: ").split(':').map(parseInt) hours24 = time24[0] minutes24 = time24[1] flights: array[8, tuple[since: int, depart: string, arrive: string]] = [(480, "8:00 a.m.", "10:16 a.m."), (583, "9:43 a.m.", "11:52 a.m."), (679, "11:19 a.m.", "1:31 p.m."), (767, "12:47 p.m.", "3:00 p.m."), (840, "2:00 p.m.", "4:08 p.m."), (945, "3:45 p.m.", "5:55 p.m."), (1140, "7:00 p.m.", "9:20 p.m."), (1305, "9:45 p.m.", "11:58 p.m.")] proc minutesSinceMidnight(hours: int = hours24, minutes: int = minutes24): int = hours * 60 + minutes proc cmpFlights(m = minutesSinceMidnight()): seq[int] = result = newSeq[int](flights.len) for i in 0 .. <flights.len: result[i] = abs(m - flights[i].since) proc getClosest(): int = for k,v in cmpFlights(): if v == cmpFlights().min: return k echo "Closest departure time is ", flights[getClosest()].depart, ", arriving at ", flights[getClosest()].arrive
Another argument in favor of types is that it will enable Python to optimize code better. But since Python isn't built for static typing, the CPython bytecode interpreter has no facilities for exploiting the extra information. And even if it had, the V8 Javascript VM proves that you dont need static types to generate optimized code.
However, they did not set out to just design the next version of Perl, but the last version, reasoning that if you have proper extension mechanism in place, you won't have to do a reboot ever again.
This resulted in gradual typing (which sometimes needs to fall-back to runtime checks), a pluggable syntax with extensible grammar, a flexible meta-object protocol, default numeric types that are objects (rationals or bigints), lazy list, reified language construct (variables as container objects) and other stuff that makes a straight-forward implementation horribly slow.
Ronacher is referring to the fact that Guido van Rossum, the Python language creator and BDFL, recently said he wanted to make mypy's type annotation standard into a standard by making use of Python 3 function annotations.Ronacher is referring to the fact that Guido van Rossum, the Python language creator and BDFL, recently said he wanted to make mypy's type annotation standard into a standard by making use of Python 3 function annotations.
So not long ago someone apparently convinced someone else at a conference that static typing is awesome and should be a language feature. I'm not exactly sure how that discussion went but the end result was that mypy's type module in combination with Python 3's annotation syntax were declared to be the gold standard of typing in Python.
The original function annotations standard is PEP-3107[1], GvR's proposal is on the python-ideas list[2], and information on mypy can be found at the project's site[3].
I agree with Ronacher's conclusion; I don't think static types -- even if only used at runtime -- are a good fit for the language. As for function annotation syntax, I think we just need to admit that isn't really good for anything.
Great article!
[1]:
[2]:...
[3]:
There is,of course, no such a language but future languages wether they are dynamic or static ,strong or weak (type wise) will certainly not make the same mistake as their ancestors.
Personally I want a scripting language,with type inference but real strong static typing, that can be easily interfaced with C++, that handles concurrency the right way(ie not like javascript callbacks),that is trully multipurpose(like python) elegant(a bit like ruby), 00 with composition and strong encapsulation in mind and access modifiers, with immutable structures and variables by default but not limited to it,with some functional features without looking like Haskell,resonably fast for GUI dev,scientif computation and non trivial web apps,easy to deploy on a server, with sane error handling,IO streams everywhere, a clear syntax(no unreachable characters on non english keyboards everywhere),with a good package manager, an interactive repl(like IPython or the tool for swift,I forgot the name) and with battery included.
So we are definetly living in an exciting period.
Then I want the compiler to check that I'm not mixing up my units. It seems like this would be really useful, but I've never seen it before.Then I want the compiler to check that I'm not mixing up my units. It seems like this would be really useful, but I've never seen it before.
float<km> drop(float<m> x0, float<s> duration) { float<m> x = x0; float<s> t = 0; float<m/s> v = 0; float<m/s^2> g = -10; float<s> dt = 0.01; while (t < duration) { v += g*dt; x += v*dt; t += dt; } return x * (1<km>/1000<m>); // abbrev for a cast: (float<km/m>).001 }
The more I read Armin's posts, the more I believe he should switch to Lua. It has all the core features he wants:
- simple design
- fast
- consistent behavior
In addition to what's mentioned above, LuaJIT is a marvelously designed JIT (please donate to the project [1]. Let's keep allowing Mike Pall to have a livelihood)
[1]
*...
*
*
Is this kind of flawed type system worth it? Hell yes. I've maintained large programs in both Python and (closure-compiled) Javascript, and with the former I've wished I had the help of the limited type checking available in the latter.
By adding static types, the focus of the language would move to a different niche, which would probably be already occupied by some competitor language(s) which do(es) types much better.
If you want sort-of Python-ish syntax with elaborate types, just use Nimrod and leave Python alone. Get the right tool for the job, don't mutilate a perfectly good existing tool.
I, for one, have been working on an app implemented mostly with PyObjC, the bridge between Python and Objective-C. I had all but written off PyObjC as a bizarre yet unuseful language mule... but lately I had the occasion to read through the PyObjC source code base, in service of my project. Did you know that when you subclass a wrapped Objective-C type in Python, an all-new Objective-C class is created and wrapped as the descendant class, behind the scenes? That blew my mind.
That happens transparently, in concordance with Python's type heiarchy and runtime ABI. As it turns out, PyObjC predates Mac OS X, and the authors have put a lot of work into mitigating things like the GIL and system events.
I am also a fan of Django's field type system, as the author mentioned and I am curious about what he thinks about descriptors (which he mentioned one but did not address) I think descriptors are an amazing addition to the duck-typing regime in Pythonville.
I would summarize my view of the type annotation proposal as follows: Statically typed languages can introduce inference heuristics that minimize the amount of type declarations. They can "jump" into the dynamically typed world more easily. The other way around is a lot harder. Not only are all those type annotations lacking in the standard library and the tools around, but there is also a lack of function design by types.
Throwing the baby out with the bath water. Null types are extremely useful.
I don't get why the author claims the null type in C# is a form of "damage". It's just said that it's bad, not why. The problem with None in python was that you can't tell what it's supposed to be. In C# you know what type the null is meant to be.
Julia 0.3 is a solid release and many Julia users have been using the master Github version for the past few months for production use, academic work, and just messing around quite happily. There are relatively few breaking changes from 0.2 to 0.3, with much better package support - so upgrade as soon as you can.
Julia 0.4 is likely to be a more "unstable" release in the tick-tock style that Linux used to have. Already been some great new things merged in, many more to come. I'd recommend not using the master branch of Julia at this time, and instead stay on release-0.3 if you like to build from source.
It allowed me to incorporate Julia into my academic work without breaking a sweat.
Startup time has been massively improved. It used to take 5-10sec to start the REPL or run a test, now it's about 0.2sec.
> However, we want to be clear that this edition is only free to read online, and this posting does not transfer any right to download all or any portion of The Feynman Lectures on Physics for any purpose.
I know it doesn't actually mean anything in practice, but still, I'm shaking my head in disbelief that there's still people out there clinging to this mentality. Aside from the fact that it's fundamentally technically impossible to read something online without downloading it first.
I guess I could buy them and then download the "pirate" versions from somewhere.
Instead, I'll stick with my hardcopy edition.
classics.. to be sure....
edit: i almost feel like these shouldn't be something that gets digitized.....this knowledge and its presentation belongs in a tactile medium...
Absolutely great books however!
I've learned a lot already from those books.
Also, the "For the Practical Man" (algebra, geometry, trig, arithemtic) series of books on mathematics that Feynman started his career with. They are hard to get hold of and expensive but the calculus book is wonderful if incredibly dense and written in an early 1900's style!
Those, a cheap Casio calculator, a box of pencils and some school exercise books have taught me more than a university degree and years of industry experience.
Edit: found a legitimate PDF of "Calculus for the practical man"...
"Now if we multiply Eq. (41.19) by [math], [math]. We want the time average of [math], so let us take the average of the whole equation, and study the three terms. Now what about [math] times the force?"
Soo... am I going to need math skills to understand this stuff? | http://hackerbra.in/best/1408979461 | CC-MAIN-2018-26 | refinedweb | 17,713 | 62.48 |
]
Sonia Bhadouria Vishvkarma(9)
Mahesh Chand(8)
Sagar Pardeshi(3)
Mahak Gupta(3)
Arun Choudhary(3)
Delpin Susai Raj(2)
Sachin Bhardwaj(2)
Rizwan Ali(2)
Vipin Kumar(2)
Gaurav Gupta(2)
Dipendra Singh Shekhawat(1)
Sumit Deshmukh(1)
Vaikesh K P(1)
Abhishek Kumar(1)
Veena Sarda(1)
Emiliano Musso(1)
Devesh Omar(1)
Sunny Sharma(1)
Ashwani Tyagi(1)
Hirendra Sisodiya(1)
Vijai Anand Ramalingam(1)
Destin joy(1)
Tanmay Sarkar(1)
Diptimaya Patra(1)
Leung Yat Chun(1)
Najuma Mahamuth(1)
Shubham Sharma(1)
S.Ravi Kumar(1)
Arun Goyal(1)
James Willock(1)
Manpreet Singh(1)
Rahul Singh(1)
Ramakrishna Pathuri(1)
Vinod Kumar(1)
Dhananjay Kumar (1)
Sanjoli Gupta(1)
Shubham Srivastava(1)
Pradip Pandey(1)
Chandra Shekher(1)
Jaydip Trivedi(1)
Ankit Bansal(1)
Nipesh Janghel(1)
Michal Habalcik(1)
Gyanender Sharma(1)
Chintan Rathod(1)
Prabhakar Maurya(1)
Krishna Garad(1)
Amit Maheshwari(1)
Dea Saddler(1)
Scott Lysle(1)
Adam Smith(1)
Robert Pohl(1)
Matt Watson(1)
Mike Gold(1)
Edwin Lima(1)
K Niranjan Kumar(1)
Resources
No resource found
Create An Alert Dialog With Icon In Xamarin Android App Using Visual Studio
Jan 12, 2017.
In this article, you will learn how to create an Alert dialog with the icon in Xamarin Android app, using Visual Studio 2015.
How To Change The App Icon In Android App Using Visual Studio 2015 Update 3
Sep 10, 2016.
In this article, you will learn how to change the app icon in Android app, using Visual Studio 2015 Update 3..
Resolve "Sync Icon Overlay Missing" Issue in Git On Windows
Mar 01, 2016.
In this article you will learn how to resolve the "Sync icon overlay missing" issue in Git on Windows.
Change Site Icon In SharePoint App Using NAPA Development Tool
Sep 11, 2015.
In this article we will learn how to change site icon in sharepoint app using napa development tool.
Remove Android Action Bar Icon in Xamarin.Forms
Aug 12, 2015.
In this article we will how to remove an Android action bar icon in Xamarian Forms.
Set the Icon of a Folder Shortcut in Windows 10
Jul 23, 2015.
This article will mainly focus on how to change the icon of a shortcut in Windows 10.
How to Change Icon of Custom Case Origin in Dynamics CRM 2015
Apr 17, 2015.
In this article we will see how to change an icon of custom case origin in Dynamics CRM 2015.
Genetic Algorithm For Icon Generation in Visual Basic
Apr 16, 2015.
This article provides some of the basics of genetic algorithms, including what they are, what they're good for, why we call them "genetic", and so on. This provides both theory and sample implementations in Visual Basic .NET.
Custom Ribbon Action and Set the Customize Icon to Ribbon Button in a SharePoint Hosted App
Dec 30, 2014.
In this article you will learn how to set the Customize Icon to Ribbon Button in a SharePoint Hosted App.
Notification Area Icon in C# Windows Forms
Jul 16, 2014.
This article explains how to place an icon in the Windows Taskbar Notification area using C# Windows Forms as per requirements..
Adding an Icon in LightSwitch Visual Studio 2012
Apr 22, 2013.
In this article we will learn about how to add an icon to your application. In this the Application Designer takes a (.png) file which I have created by using Paint.
Change the Icon Of a Windows Phone 7 Application
Apr 06, 2013.
In this article, we will discuss how to easily change the icon of a Windows Phone Application.
App Icon in IPhone
Nov 19, 2012.
This article provides a walkthrough of setting an image for an app icon in iPhone.
Use Notify icon in VB.NET
Nov 09, 2012.
In this article w can discuss about notify icon control in vb.net
Design Social Media Icon in Expression Blend 4
Oct 01, 2012.
We are going to design some Social Networking Icons.
Share Icon in Expression Blend 4
Sep 18, 2012.
This article shows the design of a Share Icon.
RSS Icon in Expression Blend 4
Sep 17, 2012.
Today we are going to see the various types and use of ellipse shapes and arc shapes.
Alert Icon in Expression Blend 4
Sep 17, 2012.
Let's design an alert icon.
Design Search Icon in Expression Blend 4
Aug 09, 2012.
I'm going to help you design a Search Icon.
Hide Volume Control Icon in Windows 8
May 20, 2012.
This article explains how to hide the Volume Control Icon in the Taskbar in Windows 8.
Remove Recycle Bin Icon From Desktop in Windows 8
May 02, 2012.
In this article I describe how to remove the Recycle Bin icon from the Desktop in Windows 8.
New Icon Indicator in SharePoint 2010
Jun 23, 2011.
In SharePoint, when a new item is created in the list or library a new icon will appear. In this article we will be seeing how to change the duration of the new icon and how to delete the new icon for the list or library.
How to Change New Icon time period in SharePoint
Jun 20, 2011.
In this article I am describing about how to change the time period that a new Icon is displayed while you upload or add new item in SharePoint list.
Notify Icon in C#
Aug 26, 2010.
In this article, I will discuss how to add a system tray icon for an application using the NotifyIcon in a Windows Forms application using Visual Studio 2010.
How you make a dll file which contains the icon set like "SHELL32.dll"
Aug 23, 2010.
Here I describe how you make a dll file which contains the icon set like "SHELL32.dll".
Get Icon From FileName in WPF
Jan 17, 2010.
In this article we will see how we can convert a filename to Icon.
Windows Icon in WPF
Jan 14, 2010.
An Icon is a bitmap image (.ico file) that is displayed in the top left corner of a Window. This article discusses how to create and use Icons in WPF applications.
Filename To Icon Converter
Jan 01, 2009.
This article describes FileToIconConverter, which is a MultiBinding Converter that can retrieve an Icon from system based on a filename(exist or not) and size.
Creating Splash Screen And Other Tile Icons For UWP Apps - Part Two
Oct 14, 2016.
In this article, you will learn how to create a splash screen and other tile icons for UWP apps.
How To Change The Size Of Desktop Icons And Taskbar Icons In Windows 10
Sep 23, 2016.
In this article you will learn how to change the size of desktop and taskbar icons in Windows 10.
Configure App Icons In Xamarin Forms App
Jun 14, 2016.
In this article, you will learn how to configure app icons in the Xamarin Forms app.
How To Design The Perfect Mobile App Icon
May 13, 2016.
In this article you will learn how to design the perfect Mobile App Icon..
Hide Like And Comment Icons From A Blog Site In SharePoint 2013 And Office 365
Sep 07, 2015.
in this article you will learn how to hide, like and comment icons from a blog site in SharePoint 2013 and Office 365.
How to Turn Off and On Notification Area Icons in Windows 10
Jun 18, 2015.
This article explains the Notification Area ("System Tray") icons in Windows 10 and how to turn them off and turn on depending on our needs.
Implementing Favicon in Web Applications
Nov 24, 2014.
This article explains how to implement a Favorite icon in a web application.
TreeView Control With Custom Icons in ASP.Net Using SiteMap
Mar 07, 2014.
This article describes customization of a TreeView Control with custom icons in ASP.Net using a SiteMap.
How to Create Custom Icons Using Font Awesome
Dec 07, 2013.
This article explains how to use the awesome font icon; how to convert a PNG file to a SVG file.
Creating Icons in PHP
Apr 15, 2013.
In this article I explain creation of an icon in PHP using the Twitter bootstrap file.
Learning Bootstrap Part 3: Working With Image and Icons
Apr 02, 2013.
Working with images are very essential of web application development and Boot strap also realize this.
Draw "Comment Icon" in Expression Blend 4
Sep 13, 2012.
Today we are going to Design "Comment-Icon".
Prevent User From Changing Desktop Icons in Windows 8
Jul 01, 2012.
This article describes how to prevent a user from changing Desktop Icons in Windows 8.
Create Shortcut Icons in Windows 8
Jun 28, 2012.
In this article we are going to learn how to create an icon in Windows 8 to restart your system.
How to Install Custom Icons in Windows 8
Apr 27, 2012.
In this article I will explain how to install custom icons in Windows 8
Turn Off System Icons in Windows 8
Apr 16, 2012.
Here we describe the procedure to disable the system icons from the notification area.
Working with Icons in GDI+
Feb 18, 2010.
In this article I will explain about working with Icons in GDI+.
Drawing Icons in GDI+
Nov 30, 2009.
In this article I will explain how to draw a Drawing Icons in GDI+ using C#.
Mar 13, 2001.
In .NET framework, the Icon class represents a Windows icon, which is a small bitmap image used to represent an object. The icon class is defined in System.Drawing namespace.
Utilizing The Action Bar In Android Applications
Jan 16, 2017.
In this article, you will learn about action bar used in Android applications. The action bar displays the application icon together with the activity title. On the right side of the action bar are action items.
Call Images In Different Platforms Using Xamarin Forms
May 11, 2016.
In this article we'll learn how to display images as an icon using Xamarin forms in different platforms..
ASP.NET Autocomplete Textbox Using jQuery, JSON and AJAX
Sep 06, 2015.
In this article, we will learn how to show icons in the textbox suggestions using jQueryUI Autocomplete widget and ASP.NET Web Service.
Windows Phone Development For Beginner - Part 3
Jul 23, 2015.
This article shows how to set the start page, apps icon and apps title for your Windows Phone apps.
Clear the Clipboard Memory in Windows 10 by Shortcut
May 14, 2015.
This article explains the clipboard in Windows 10 and also how to clear the clipboard memory in Windows 10 using a shortcut..
Getting Started With Bootstrap: Part 2
Sep 02, 2014.
In this article you will learn styling and formatting of text content, table, form, list, image and icons in Bootstrap.
Creating Some Impressive Buttons Using CSS
Sep 12, 2013.
In this article, you'll learn creating five beautiful CSS based buttons which you can use in your website.
Build First Application Using Android Studio
May 25, 2013.
Android Studio is a new Android development environment based on IntelliJ IDEA. Similar to Eclipse with the ADT Plugin, Android Studio provides integrated Android developer tools for development and debugging.
Splash in iPhone
Dec 17, 2012.
In this article I will explain how to use splash in iPhone.
Design Header in Expression Blend 4
Oct 22, 2012.
Today we are going to design a Header in Expression Blend 4.
Design Calendar in Expression Blend 4
Sep 20, 2012.
Today we are going to design a Calendar Icon using Controls.
Create All Applications Shortcut in Windows 8
Sep 20, 2012.
In this article we are explaining how to create a shortcut for "All applications" in Windows 8.
Windows Phone Application in Expression Blend 4
Sep 06, 2012.
Today we are going to learn about Windows Phone Applications..
Add Shutdown Shortcut to Start Screen in Windows 8
Jun 30, 2012.
In this article we are going to learn how to create your own shortcut shutdown icon on the start screen.
Understanding a Window Properties in WPF
May 11, 2012.
In this article, we will discuss some important properties of the window object in the WPF. Here we set the properties with the help of example.
Pushpin in Bing Maps in Windows Phone 7
Apr 29, 2012.
In this article, we will discuss how to add a Pushpin in our Bing Maps and how we can change the image of the Pushpin Icon.
Animated Cursor-Custom Control
Oct 12, 2011.
Animated cursor custom control shows the change of the mouse cursor icon when we move it to the windows application.
How to Implement ToolBar in WPF using F#
Aug 29, 2011.
This article is a demonstration regarding how you can craft a Toolbar with icons in WPF using F#. Take a quick review to learn.#.
Cursors in C#
Jun 15, 2010.
A cursor in Windows is an icon that is displayed when you move a mouse, a pen, or a trackball. This code shows how to apply and manage cursors in your Windows applications.
Working with Fonts in GDI+
Dec 29, 2009.
In this article I will explain about working with Fonts in GDI+.
Image Conversion Utility in C#
Sep 14, 2006.
This article describes a very easy approach to building an image conversion utility that will permit the user to open a supported image type and convert it to another supported image type.
The Grouper
Jan 25, 2006.
The Grouper is a special groupbox control that is rounded and fully customizable. The control can paint borders, drop shadows, gradient and solid backgrounds, custom text and custom icons.).
Mail Checker 1.0
Jan 16, 2003.
In this article, author shows how to create a program to check your IMAP mail.
Directory Picker Dialog
Oct 30, 2001.
This Directory Picker in this article is also a bit different because it uses the "Large Icon" view of the ListView to traverse through directories.
Tray Bar Application
Oct 30, 2001.
This is a very simple C# application which implements those very familiar Windows applications with a tray Icon..
About Icon
NA
File APIs for .NET
Aspose are the market leader of .NET APIs for file business formats – natively work with DOCX, XLSX, PPT, PDF, MSG, MPP, images formats and many more! | http://www.c-sharpcorner.com/tags/Icon | CC-MAIN-2017-17 | refinedweb | 2,378 | 74.49 |
! 296: \G Line comment: if @code{BLK} contains 0, parse and discard the remainder ! 297: \G of the parse area. Otherwise, parse and discard all subsequent characters in the ! 298: \G parse area corresponding to the current line. ! 299: immediate ! 300: ! 301: ' ( alias ( ( compilation 'ccc<close-paren>' -- ; run-time -- ) \ core,file paren ! 302: \G Comment: parse and discard all subsequent characters in the parse ! 303: \G area until ")" is encountered. During interactive input, an end-of-line ! 304: \G also acts as a comment terminator. For file input, it does not; if the ! 305: \G end-of-file is encountered whilst parsing for the ")" delimiter, Gforth ! 306: \G will generate a warning. ! 307: immediate 1.1 anton 308: 309: forth definitions 310: 311: \ the following gymnastics are for declaring locals without type specifier. 312: \ we exploit a feature of our dictionary: every wordlist 313: \ has it's own methods for finding words etc. 314: \ So we create a vocabulary new-locals, that creates a 'w:' local named x 315: \ when it is asked if it contains x. 316: 317: also locals-types 318: 319: : new-locals-find ( caddr u w -- nfa ) 320: \ this is the find method of the new-locals vocabulary 321: \ make a new local with name caddr u; w is ignored 322: \ the returned nfa denotes a word that produces what W: produces 323: \ !! do the whole thing without nextname 1.3 anton 324: drop nextname 325: ['] W: >name ; 1.1 anton 326: 327: previous 328: 329: : new-locals-reveal ( -- ) 330: true abort" this should not happen: new-locals-reveal" ; 331: 1.22 anton 332: create new-locals-map ( -- wordlist-map ) 1.29 anton 333: ' new-locals-find A, 334: ' new-locals-reveal A, 335: ' drop A, \ rehash method 1.34 jwilke 336: ' drop A, 1.1 anton 337: 1.27 pazsan 338: slowvoc @ 339: slowvoc on 1.1 anton 340: vocabulary new-locals 1.27 pazsan 341: slowvoc ! 1.36 pazsan 342: new-locals-map ' new-locals >body wordlist-map A! \ !! use special access words 1.1 anton 343: 344: variable old-dpp 345: 346: \ and now, finally, the user interface words 1.32 anton 347: : { ( -- lastxt wid 0 ) \ gforth open-brace 1.1 anton 348: dp old-dpp ! 349: locals-dp dpp ! 1.32 anton 350: lastxt get-current 1.1 anton 351: also new-locals 1.32 anton 352: also locals definitions locals-types 1.1 anton 353: 0 TO locals-wordlist 354: 0 postpone [ ; immediate 355: 356: locals-types definitions 357: 1.32 anton 358: : } ( lastxt wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace 1.1 anton 359: \ ends locals definitions 360: ] old-dpp @ dpp ! 361: begin 362: dup 363: while 364: execute 365: repeat 366: drop 367: locals-size @ alignlp-f locals-size ! \ the strictest alignment 368: previous previous 1.32 anton 369: set-current lastcfa ! 1.37 pazsan 370: locals-list 0 wordlist-id - TO locals-wordlist ; 1.1 anton 371: 1.14 anton 372: : -- ( addr wid 0 ... -- ) \ gforth dash-dash 1.1 anton 373: } 1.9 anton 374: [char] } parse 2drop ; 1.1 anton 375: 376: forth definitions 377: 378: \ A few thoughts on automatic scopes for locals and how they can be 379: \ implemented: 380: 381: \ We have to combine locals with the control structures. My basic idea 382: \ was to start the life of a local at the declaration point. The life 383: \ would end at any control flow join (THEN, BEGIN etc.) where the local 384: \ is lot live on both input flows (note that the local can still live in 385: \ other, later parts of the control flow). This would make a local live 386: \ as long as you expected and sometimes longer (e.g. a local declared in 387: \ a BEGIN..UNTIL loop would still live after the UNTIL). 388: 389: \ The following example illustrates the problems of this approach: 390: 391: \ { z } 392: \ if 393: \ { x } 394: \ begin 395: \ { y } 396: \ [ 1 cs-roll ] then 397: \ ... 398: \ until 399: 400: \ x lives only until the BEGIN, but the compiler does not know this 401: \ until it compiles the UNTIL (it can deduce it at the THEN, because at 402: \ that point x lives in no thread, but that does not help much). This is 403: \ solved by optimistically assuming at the BEGIN that x lives, but 404: \ warning at the UNTIL that it does not. The user is then responsible 405: \ for checking that x is only used where it lives. 406: 407: \ The produced code might look like this (leaving out alignment code): 408: 409: \ >l ( z ) 410: \ ?branch <then> 411: \ >l ( x ) 412: \ <begin>: 413: \ >l ( y ) 414: \ lp+!# 8 ( RIP: x,y ) 415: \ <then>: 416: \ ... 417: \ lp+!# -4 ( adjust lp to <begin> state ) 418: \ ?branch <begin> 419: \ lp+!# 4 ( undo adjust ) 420: 421: \ The BEGIN problem also has another incarnation: 422: 423: \ AHEAD 424: \ BEGIN 425: \ x 426: \ [ 1 CS-ROLL ] THEN 427: \ { x } 428: \ ... 429: \ UNTIL 430: 431: \ should be legal: The BEGIN is not a control flow join in this case, 432: \ since it cannot be entered from the top; therefore the definition of x 433: \ dominates the use. But the compiler processes the use first, and since 434: \ it does not look ahead to notice the definition, it will complain 435: \ about it. Here's another variation of this problem: 436: 437: \ IF 438: \ { x } 439: \ ELSE 440: \ ... 441: \ AHEAD 442: \ BEGIN 443: \ x 444: \ [ 2 CS-ROLL ] THEN 445: \ ... 446: \ UNTIL 447: 448: \ In this case x is defined before the use, and the definition dominates 449: \ the use, but the compiler does not know this until it processes the 450: \ UNTIL. So what should the compiler assume does live at the BEGIN, if 451: \ the BEGIN is not a control flow join? The safest assumption would be 452: \ the intersection of all locals lists on the control flow 453: \ stack. However, our compiler assumes that the same variables are live 454: \ as on the top of the control flow stack. This covers the following case: 455: 456: \ { x } 457: \ AHEAD 458: \ BEGIN 459: \ x 460: \ [ 1 CS-ROLL ] THEN 461: \ ... 462: \ UNTIL 463: 464: \ If this assumption is too optimistic, the compiler will warn the user. 465: 1.28 anton 466: \ Implementation: 1.1 anton 467: 1.3 anton 468: \ explicit scoping 1.1 anton 469: 1.14 anton 470: : scope ( compilation -- scope ; run-time -- ) \ gforth 1.36 pazsan 471: cs-push-part scopestart ; immediate 472: 473: : adjust-locals-list ( wid -- ) 1.37 pazsan 474: locals-list @ common-list 1.36 pazsan 475: dup list-size adjust-locals-size 1.37 pazsan 476: locals-list ! ; 1.3 anton 477: 1.14 anton 478: : endscope ( compilation scope -- ; run-time -- ) \ gforth 1.36 pazsan 479: scope? 480: drop adjust-locals-list ; immediate 1.1 anton 481: 1.3 anton 482: \ adapt the hooks 1.1 anton 483: 1.3 anton 484: : locals-:-hook ( sys -- sys addr xt n ) 485: \ addr is the nfa of the defined word, xt its xt 1.1 anton 486: DEFERS :-hook 487: last @ lastcfa @ 488: clear-leave-stack 489: 0 locals-size ! 490: locals-buffer locals-dp ! 1.37 pazsan 491: 0 locals-list ! 1.3 anton 492: dead-code off 493: defstart ; 1.1 anton 494: 1.3 anton 495: : locals-;-hook ( sys addr xt sys -- sys ) 496: def? 1.1 anton 497: 0 TO locals-wordlist 1.3 anton 498: 0 adjust-locals-size ( not every def ends with an exit ) 1.1 anton 499: lastcfa ! last ! 500: DEFERS ;-hook ; 501: 1.28 anton 502: \ THEN (another control flow from before joins the current one): 503: \ The new locals-list is the intersection of the current locals-list and 504: \ the orig-local-list. The new locals-size is the (alignment-adjusted) 505: \ size of the new locals-list. The following code is generated: 506: \ lp+!# (current-locals-size - orig-locals-size) 507: \ <then>: 508: \ lp+!# (orig-locals-size - new-locals-size) 509: 510: \ Of course "lp+!# 0" is not generated. Still this is admittedly a bit 511: \ inefficient, e.g. if there is a locals declaration between IF and 512: \ ELSE. However, if ELSE generates an appropriate "lp+!#" before the 513: \ branch, there will be none after the target <then>. 514: 1.30 anton 515: : (then-like) ( orig -- ) 516: dead-orig = 1.27 pazsan 517: if 1.30 anton 518: >resolve drop 1.27 pazsan 519: else 520: dead-code @ 521: if 1.30 anton 522: >resolve set-locals-size-list dead-code off 1.27 pazsan 523: else \ both live 1.30 anton 524: over list-size adjust-locals-size 525: >resolve 1.36 pazsan 526: adjust-locals-list 1.27 pazsan 527: then 528: then ; 529: 530: : (begin-like) ( -- ) 531: dead-code @ if 532: \ set up an assumption of the locals visible here. if the 533: \ users want something to be visible, they have to declare 534: \ that using ASSUME-LIVE 535: backedge-locals @ set-locals-size-list 536: then 537: dead-code off ; 538: 539: \ AGAIN (the current control flow joins another, earlier one): 540: \ If the dest-locals-list is not a subset of the current locals-list, 541: \ issue a warning (see below). The following code is generated: 542: \ lp+!# (current-local-size - dest-locals-size) 543: \ branch <begin> 544: 545: : (again-like) ( dest -- addr ) 546: over list-size adjust-locals-size 547: swap check-begin POSTPONE unreachable ; 548: 549: \ UNTIL (the current control flow may join an earlier one or continue): 550: \ Similar to AGAIN. The new locals-list and locals-size are the current 551: \ ones. The following code is generated: 552: \ ?branch-lp+!# <begin> (current-local-size - dest-locals-size) 553: 554: : (until-like) ( list addr xt1 xt2 -- ) 555: \ list and addr are a fragment of a cs-item 556: \ xt1 is the conditional branch without lp adjustment, xt2 is with 557: >r >r 558: locals-size @ 2 pick list-size - dup if ( list dest-addr adjustment ) 559: r> drop r> compile, 560: swap <resolve ( list adjustment ) , 561: else ( list dest-addr adjustment ) 562: drop 563: r> compile, <resolve 564: r> drop 565: then ( list ) 566: check-begin ; 567: 568: : (exit-like) ( -- ) 569: 0 adjust-locals-size ; 570: 1.1 anton 571: ' locals-:-hook IS :-hook 572: ' locals-;-hook IS ;-hook 1.27 pazsan 573: 574: ' (then-like) IS then-like 575: ' (begin-like) IS begin-like 576: ' (again-like) IS again-like 577: ' (until-like) IS until-like 578: ' (exit-like) IS exit-like 1.1 anton 579: 580: \ The words in the locals dictionary space are not deleted until the end 581: \ of the current word. This is a bit too conservative, but very simple. 582: 583: \ There are a few cases to consider: (see above) 584: 585: \ after AGAIN, AHEAD, EXIT (the current control flow is dead): 586: \ We have to special-case the above cases against that. In this case the 587: \ things above are not control flow joins. Everything should be taken 588: \ over from the live flow. No lp+!# is generated. 589: 590: \ About warning against uses of dead locals. There are several options: 591: 592: \ 1) Do not complain (After all, this is Forth;-) 593: 594: \ 2) Additional restrictions can be imposed so that the situation cannot 595: \ arise; the programmer would have to introduce explicit scoping 596: \ declarations in cases like the above one. I.e., complain if there are 597: \ locals that are live before the BEGIN but not before the corresponding 598: \ AGAIN (replace DO etc. for BEGIN and UNTIL etc. for AGAIN). 599: 600: \ 3) The real thing: i.e. complain, iff a local lives at a BEGIN, is 601: \ used on a path starting at the BEGIN, and does not live at the 602: \ corresponding AGAIN. This is somewhat hard to implement. a) How does 603: \ the compiler know when it is working on a path starting at a BEGIN 604: \ (consider "{ x } if begin [ 1 cs-roll ] else x endif again")? b) How 605: \ is the usage info stored? 606: 607: \ For now I'll resort to alternative 2. When it produces warnings they 608: \ will often be spurious, but warnings should be rare. And better 609: \ spurious warnings now and then than days of bug-searching. 610: 611: \ Explicit scoping of locals is implemented by cs-pushing the current 612: \ locals-list and -size (and an unused cell, to make the size equal to 613: \ the other entries) at the start of the scope, and restoring them at 614: \ the end of the scope to the intersection, like THEN does. 615: 616: 617: \ And here's finally the ANS standard stuff 618: 1.14 anton 619: : (local) ( addr u -- ) \ local paren-local-paren 1.3 anton 620: \ a little space-inefficient, but well deserved ;-) 621: \ In exchange, there are no restrictions whatsoever on using (local) 1.4 anton 622: \ as long as you use it in a definition 1.3 anton 623: dup 624: if 625: nextname POSTPONE { [ also locals-types ] W: } [ previous ] 626: else 627: 2drop 628: endif ; 1.1 anton 629: 1.4 anton 630: : >definer ( xt -- definer ) 631: \ this gives a unique identifier for the way the xt was defined 632: \ words defined with different does>-codes have different definers 633: \ the definer can be used for comparison and in definer! 1.30 anton 634: dup >does-code 635: ?dup-if 636: nip 1 or 1.4 anton 637: else 638: >code-address 639: then ; 640: 641: : definer! ( definer xt -- ) 642: \ gives the word represented by xt the behaviour associated with definer 643: over 1 and if 1.13 anton 644: swap [ 1 invert ] literal and does-code! 1.4 anton 645: else 646: code-address! 647: then ; 648: 1.23 pazsan 649: :noname 1.31 anton 650: ' dup >definer [ ' locals-wordlist ] literal >definer = 1.23 pazsan 651: if 652: >body ! 653: else 654: -&32 throw 655: endif ; 656: :noname 1.21 anton 657: 0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal } 1.28 anton 658: comp' drop dup >definer 1.21 anton 659: case 1.30 anton 660: [ ' locals-wordlist ] literal >definer \ value 1.21 anton 661: OF >body POSTPONE Aliteral POSTPONE ! ENDOF 1.35 anton 662: \ !! dependent on c: etc. being does>-defining words 663: \ this works, because >definer uses >does-code in this case, 664: \ which produces a relocatable address 665: [ comp' clocal drop >definer ] literal 1.21 anton 666: OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF 1.35 anton 667: [ comp' wlocal drop >definer ] literal 1.21 anton 668: OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF 1.35 anton 669: [ comp' dlocal drop >definer ] literal 1.21 anton 670: OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF 1.35 anton 671: [ comp' flocal drop >definer ] literal 1.21 anton 672: OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF 673: -&32 throw 1.23 pazsan 674: endcase ; 1.24 anton 675: interpret/compile: TO ( c|w|d|r "name" -- ) \ core-ext,local 1.1 anton 676: 1.6 pazsan 677: : locals| 1.14 anton 678: \ don't use 'locals|'! use '{'! A portable and free '{' 1.21 anton 679: \ implementation is compat/anslocals.fs 1.8 anton 680: BEGIN 681: name 2dup s" |" compare 0<> 682: WHILE 683: (local) 684: REPEAT 1.14 anton 685: drop 0 (local) ; immediate restrict | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?annotate=1.39;sortby=rev;f=h;only_with_tag=MAIN;ln=1 | CC-MAIN-2021-25 | refinedweb | 2,567 | 67.96 |
wifi_get_scan_results()
Get the Wi-Fi scan results.
Synopsis:
#include <wifi/wifi_service.h>
WIFI_API wifi_result_t wifi_get_scan_results(wifi_scan_results_t **scan_results, wifi_scan_report_t *report_type, int *num_scan_entries)
Since:
BlackBerry 10.2.0
Arguments:
- scan_results
Pointer that will be set to the scan result list.
- report_type
The report type for these results.
- num_scan_entries
The number of list entries.
Library:libwifi (For the qcc command, use the -l wifi option to link against this library)
Description:
This function queries the latest available scan results list as well as the size of the list. It should be called after a scan result event notification is received to retrieve the scan result list. The wifi_free_scan_results() function must be called to free the scan results when scan results processing is complete.
The scan result entries can be decoded by looping from 1 to num_scan_entries and calling one of the functions prefixed with wifi_get_scan_result_ to extract the details of each scan result entry.
Returns:
A return code from wifi_result_t.
Last modified: 2014-05-14
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.wifi_service.lib_ref/topic/wifi_get_scan_results.html | CC-MAIN-2019-35 | refinedweb | 178 | 59.3 |
Please login
Prepinsta Prime
Video courses for company/skill based Preparation
Prepinsta Prime
Purchase mock tests for company/skill building
Cognizant GenC Elevate Sample Coding Question 5
Question 5
In this article, we will discuss about Coding Question with their solution in java. In this Question , we have to find maximum number of bulbs jasleen can collect , starting from any machine and collecting from every consecutive machine until shre reaches the last machine she wants to collect from.
Question 5
Jasleen has bought a new bulb factory. The factory has a single file of machines, numbered from 1 to N. Each machine has a certain number of fully prepared bulbs.
Jasleen has a rule she wants to follow. She wants to collect an equal amount of bulb from
each machine from which she collects bulbs.
Jasleen can start collecting bulb from any machine, but once she starts collecting, she collects
from every consecutive machine until she reaches the last machine she wants to collect from. Find the maximum number of bulbs she can collect.
Input Specification:
Input1: N, the number of machines
Input2: An array of N elements (a1,a2 a3……aN], denoting the number of fully prepared bulbs in each machine of the factory.
Output Specification:
An integer output denoting the maximum number of bulbs that Allie can collect.
Example 1:
input1: 3
Input2: [1,2,3]
Output: 3
Example 2:
input1: 4
Input2: [5,8,9,10]
Output: 20
import java.util.*;
public class Main
{
public static void main (String[]args)
{
Scanner sc = new Scanner (System.in);
int n = sc.nextInt ();
int a[] = new int[n];
for (int i = 0; i < n; i++)
a[i] = sc.nextInt ();
Arrays.sort (a);
ArrayList < Integer > a1 = new ArrayList <> ();
for (int i = a.length - 1; i >= 0; i--)
{
int x = 0;
x = a[i] * (a.length - i);
a1.add (x);
}
for (int i = 0; i < a1.size (); i++)
{
if (a1.get (a1.size () - 1 - i) >= a1.get (i))
{
System.out.println (a1.get (a1.size () - 1 - i));
break;
}
}
}
} | https://prepinsta.com/cognizant-genc-elevate/coding-questions/question-5/ | CC-MAIN-2021-39 | refinedweb | 336 | 66.44 |
In addition to delivering content to Looker’s built-in destinations, you can use actions — also called integrations — to deliver content to third-party services integrated with Looker through an action hub server.
Actions served through an action hub server differ from data actions, which are defined by the
actionLookML parameter.
This page will walk you through your options for building custom actions that you can request to add to the Looker Action Hub or add to your own private action hub server. This page also describes how to spin up a local action hub server to test your custom actions or run a private action hub server.
The graphic below illustrates your available workflow options for integrating actions through either a privately hosted action hub or the Looker-hosted Action Hub:
- Use Looker’s existing actions available from the Looker Action Hub.
- Build and publish a custom action to the Looker Action Hub for public use.
- Build and publish a custom action to a private action hub server for private use.
Once the action is added to the action hub, a Looker admin can enable it for use in delivering Looker content to those services.
You can also set up multiple action hubs if you would like to use Looker’s integrations through the Looker Action Hub and also host your own private or custom actions. The actions for each action hub would appear on the Actions page of the Admin panel.
The Looker Action Hub
Looker hosts and provides the Looker Action Hub, a stateless server that implements Looker’s Action API and exposes popular actions. Any data your users send using an action will be processed temporarily on the Looker Action Hub server rather than in your Looker instance.
Looker is already integrated with several services. See the Admin settings - Actions documentation page to learn how to enable these existing services.
Looker Action Hub requirements.
The Looker Action Hub must be able to send and receive API requests in the following ways:
- From the Looker instance to the Looker Action Hub network
- From the Looker user’s browser to the Looker Action Hub network
- From the Looker Action Hub network to the Looker instance
If your Looker deployment cannot accommodate these requests or if the IP Allowlist feature is enabled on your Looker instance, consider setting up a local action hub server to serve private Looker integrations or custom actions. Admins of customer-hosted instances can also deploy a local action server specifically for OAuth and streaming actions.
Requests from the Looker instance to the Looker Action Hub network
Requests to
actions.looker.com resolve to a dynamic IP address. Outgoing requests from the Looker instance must be able to reach these endpoints:
actions.looker.com/ actions.looker.com/actions/<name>/execute actions.looker.com/actions/<name>/form
where
name is the programmatic name of the action.
Requests from the Looker user’s browser to the Looker Action Hub network
The Looker user’s browser must be able to make requests to these Looker Action Hub endpoints (for OAuth):
actions.looker.com/actions/<name>/oauth
where
name is the programmatic name of the action.
Requests from the Looker Action Hub network to the Looker instance
The Looker Action Hub must make requests to the Looker instance for actions that support streamed results or that use OAuth.
A streaming action enables the action to consume queries that deliver All Results. OAuth-enabled actions use per-user authentication through OAuth 2.0 flows. OAuth actions must store user credentials in their source Looker instance because the Looker Action Hub is stateless and multi-tenant, and it will not store user-specific credentials of any kind.
The requests from the Looker Action Hub to a Looker instance take the following forms:
GET <host_looker_url>/downloads/<random_40_char_token> POST <host_looker_url>/action_hub_state/<random_40_char_token>
These URLs are generated on the spot in the Looker instance before being sent to the Looker Action Hub. For this reason, the Looker Action Hub needs to be able to both resolve the
<host_looker_url> to an IP address and make requests into the network in which your Looker instance resides.
The Looker Action Hub has static egress IP addresses that the requests will always come from:
35.153.89.114,
104.196.138.163, and
35.169.42.87. Admins of Looker-hosted instances who have enabled the IP allowlist must add these IP addresses to use any actions that support streamed results or that use OAuth.
Considerations for customer-hosted instances
To use Looker integrations, the Looker Action Hub must be able to communicate with the Looker instance and fulfill these requirements. This is not always possible with customer-hosted Looker instances, for various reasons. If bi-directional communication between the Looker Action Hub and the Looker instance is not possible, the Looker Action Hub may exhibit unexpected or undesirable behavior, such as hanging queries or unavailable actions.
To address the potential issue of the Looker Action Hub not being able to communicate with the Looker instance, Looker admins can implement one of the solutions posed below. The appropriate solution or combination of solutions will depend on the architecture of the Looker instance:
- If the customer-hosted instance is not resolvable by the Looker Action Hub — that is, the Looker Action Hub cannot receive requests from the Looker instance — Looker admins can consult their Looker account manager to enable the
public_host_urllicense feature. That license feature reveals the
--public-host-urlstartup option, which lets admins specify a resolvable
<public_host_url>hostname that is different from the instance
<host_looker_url>. The
public_host_urloverrides the hostname for some specific Looker Action Hub callback URLs and routes those callback URLs through a reverse proxy that has the
public_host_urlas a publicly resolvable name. This reverse proxy accepts requests only from the static egress IP addresses for the Looker Action Hub; Looker admins who use this method must add to the allowlist the egress IP addresses from which the Looker Action Hub makes requests to the Looker instance:
35.153.89.114,
104.196.138.163, and
35.169.42.87.
- If the customer-hosted instance URL is resolvable by the Looker instance but the Looker Action Hub cannot send requests to the Looker instance, users may be unable to configure or use actions that support streamed results or that use OAuth. To solve this, Looker admins must add to the allowlist the egress IP addresses from which the Looker Action Hub makes requests to the Looker instance:
35.153.89.114,
104.196.138.163, and
35.169.42.87.
If neither of the aforementioned solutions is appropriate for the Looker instance architecture, Looker admins can deploy a customer-hosted action hub for all actions or just for actions that support streamed results or that use OAuth.
To deploy a customer-hosted action hub, you must ensure that the JAR file is hosted on a public server so that the Looker Action Hub can communicate with it. Looker does not recommend this solution, however.
Another reason the OAuth and streaming actions might not be usable on a customer-hosted Looker instance is if the instance uses an SSL certificate issued by a Certificate Authority (CA) not on this list.
Building a custom action
This section describes the steps to follow to write and test a custom action using the Looker Action Hub source code. To see functional code examples, check the existing actions in the
looker-open-source/actions repo in GitHub.
You can create a custom action by:
- Setting up a development repo
- Writing your action
- Testing your action
- Publishing and enabling your action, either in the Looker Action Hub or on your own private action hub server
As with any action, you may need to configure your LookML models with specific parameters before you can use the action to deliver your data.
Setting up a development repo
The Looker Action Hub is a Node.js server written in TypeScript, a small layer on top of modern JavaScript that adds type information to help catch programming errors. If you’re familiar with JavaScript, most of the TypeScript language should be familiar to you.
Running the Looker Action Hub requires the following software:
- Node.js
- Node Version Manager (NVM — to select the proper Node.js version)
- Yarn (to manage dependencies)
Once you’ve installed the required software, you’re ready to set up your development environment. Our example below uses Git.
- Clone the
looker-open-source/actionsrepo locally:
git clone git@github.com:looker-open-source/actions.git
- Create a directory with the name of your action in the
actions/src/actionsdirectory. For example:
mkdir actions/src/actions/my_action
- Start populating your directory with the files you’ll need to execute your action. See the actions GitHub repo for an example file structure.
Looker recommends that you also add:
- A README to explain the purpose and means of authentication for your action
- PNG icon to display in the Looker Action Hub (or private action hub on your Looker instance) and in the Looker data delivery windows
- Any files for tests you want to run on your action code — this is different from testing your action
Writing an action
A design requirement for the Looker Action Hub server is that it remain completely stateless, so storing any information in the action application or service is not allowed. Any information needed to fulfill the action must be provided within the action file’s request calls.
The exact contents of the action file will vary depending on the service, the type or level at which the action operates, and what data or visualization formats need to be specified. The action can also be configured for Google OAuth authorization.
Action files are based on the
/execute API method. Looker API requests are passed a
DataActionRequest each time a user executes the action within Looker. The
DataActionRequest contains all the data and metadata needed to execute your action. There is also a
/form method that can be used to collect additional information from the user before they execute the action. The fields you specify in the
/form will appear in the Send or Schedule pop-up when users select the action as a destination for their data delivery.
If using the Looker Action API, the format of these parameters may appear different.
When writing your action file, include at least the following parameters marked Required in your action definition:
Examples from the Looker Action Hub actions are on GitHub for reference.
Supported action types
Looker supports three types of actions, as specified in the
supportedActionTypes parameter of your action: query, cell, and dashboard.
- A query-level action: This is an action that sends an entire query. The Segment action, for example, is a query-level action.
- A cell-level action: A cell-level action sends the value of a single, specific cell in a data table. This action type is different from data actions, which can be defined for dimensions or measures using the
actionparameter. To send information from a specific cell within a table, Looker uses tags to map actions to the corresponding cells. Actions need to specify which tags they support in
requiredFields. To map actions and fields, fields in LookML need to specify which tags they are mapped to with the LookML
tagsparameter. For example, the Twilio Message action uses a
phonetag so that LookML developers can control on which phone number fields the Twilio action will appear.
- A dashboard-level action: A dashboard-level action supports sending an image of a dashboard. For example, the SendGrid action sends dashboard images through email.
Adding user attributes to custom actions
For custom actions, you can add user attributes in the
params parameter of your action file. A user must have a value for this attribute defined in their user account or for a user group they belong to, in addition to the
send_to_integration permission, to see the action as a destination option when sending or scheduling content.
To add a user attribute to your action:
- A Looker admin may need to create the user attribute corresponding to the
user_attribute_paramif it does not already exist.
- Define a valid value for the user attribute for the users or user groups that need to deliver content to your action destination. (These users must also have
send_to_integrationpermissions.)
- The
paramsparameter represents the form fields that a Looker admin must configure on the action’s enablement page from the Actions list in the Admin panel. In the
paramsparameter of your action file, include:
where
user_attribute_name is the user attribute defined in the Name field on the User Attributes page in the Users section of the Admin panel,
required: true means that a user must have a non-null and valid value defined for that user attribute to see the action when delivering data, and
sensitive: true means that user attribute is encrypted and never displayed in the Looker UI once entered. You can specify multiple user attribute subparameters.
- Deploy your updates to the action hub server.
- If you are adding a new action, a Looker admin will need to enable the action by clicking the Enable button next to the action on the Actions page in the Admin panel.
- If you are updating an existing action, refresh your list of actions by clicking the Refresh button. Next, click the Settings button.
- On the action settings/enablement page, a Looker admin must configure the action’s form fields to pull information from the user attribute by clicking the user attribute icon to the right of the appropriate field and selecting the desired user attribute.
requiredField parameters in cell-level actions
For cell-level actions, you can configure your model’s LookML fields to deliver data to that action destination by specifying which tags your action supports in the
requiredFields parameter of your action file.
Supported data formats
The
DataActionRequest class defines what data delivery format is available for the action to work with. For query-level actions, the request will contain an attachment that can be in several formats. The action can either specify one or more
supportedFormats or let the user choose the format by specifying all possible formats. For cell-level actions, the value of the cell will be present on
DataActionRequest.
Configuring an action for OAuth
OAuth-enabled actions cannot be configured from the Looker Action Hub for Looker instances that have the IP Allowlist feature enabled or that cannot accommodate the Looker Action Hub requirements. See the Setting up a local action hub for actions that use OAuth or streaming article in the Looker Help Center for more information about configuring an action for OAuth.
You can configure your action so that users can authenticate into the action with OAuth. Even though the Looker Action Hub must remain stateless, you can enforce a state through a form request from the Looker Action API.
Looker action OAuth flow
For actions in the Looker Action Hub, you can extend an
OAuthAction instead of a
Hub.Action to set a Boolean that indicates which OAuth methods are needed to authenticate a user into an action. For every OAuth-enabled or state-enabled action, Looker stores a per-user, per-action state, so that each action and user combination has an independent OAuth event.
The flow for creating actions typically involves a
/form request followed by a
/execute request. For OAuth, the
/form request should have a method to determine if the user is authenticated within the target service. If the user is already authenticated, the action should return a normal
/form in accordance with whatever the
/execute request requires. If the user is not authenticated, the action returns a link that will initialize an OAuth flow.
Saving state with the OAuth URL
Looker will send an HTTP POST request with an empty body to the
ActionList endpoint. If the action returns
uses_oauth: true in its definition, then the action will be sent a one-time-use
state_url in every
/form request from Looker. The
state_url is a special one-time-use URL that sets a user’s state for a given action.
If the user is not authenticated with the endpoint, the
/form returned should contain a
form_field of type
oauth_link that goes to the
/oauth endpoint of an action. The
state_url should be encrypted and saved as a
state param in the
oauth_url that is returned. For example:
{ "name": "login", "type": "oauth_link", "label": "Log in", "description": "OAuth Link", "oauth_url": "ACTIONHUB_URL/actions/my_action/oauth?state=encrypted_state_url" }
In this example, the
/oauth endpoint redirects the user to the authentication server. The
/oauth endpoint constructs the redirect in the
oauthUrl(...) method on an OAuth action, as shown in the Dropbox OauthUrl.
The
state param containing that encrypted
state_url should be passed to the Looker Action Hub.
Saving state with the action hub redirect URI
In the
/oauth endpoint, a
redirect_uri for the action hub is also created and passed to the action’s
oauthUrl(...) method. This
redirect_uri is of the form
/actions/src/actions/my_maction/oauth_redirect and is the endpoint used if the authentication returns a result.
This endpoint will call the
oauthFetchInfo(...) method, which should be implemented by the
OauthAction method to extract the necessary information and attempt to receive or save any state or
auth received from the authentication server.
The
state decrypts the encrypted
state_url and uses it to POST
state back to Looker. The next time that a user makes a request to that action, the newly saved state will be sent to the Looker Action Hub.
Adding your action files to the Looker Action Hub repo
Once your action file is written, in the Looker Action Hub repo:
- Add the action file (for example,
my_action.ts) to
actions/src/actions/index.ts.
import “./my_action/my_action.ts”
- Add any Node.js package requirements that you utilized in the writing of your action. For example:
yarn add aws-sdk yarn add express
- Install the Node.js dependencies of the Looker Action Hub server.
yarn install
- Run any tests you wrote.
yarn test
Testing an action
For complete testing, you can try your action against your Looker instance by hosting a private action hub server. This server needs to be on the public internet with a valid SSL certificate and must be able to initiate and receive connections or HTTPS requests to and from Looker. For this, you can use a cloud-based platform, like Heroku, as shown in the following example, or you can use any platform that satisfies the aforementioned requirements.
Setting up a local action hub server
In this example, we will take the action we developed in the
looker-open-source/actions/src/actions GitHub repo and will be committing the code to a new Git branch. We recommend that you work on features using branches so that you can easily track your code and, if desired, easily create a PR with Looker.
To get started, create your branch and then stage and commit your work. For example:
git checkout -b my-branch-name git add file-names git commit -m commit-message
For this example, to push a branch to Heroku, configure your Git repo with Heroku as a remote option in your command line:
heroku login heroku create git push heroku
Heroku will return the public URL now hosting the action hub for your use. Visit the URL or run
heroku logsto confirm that the action hub is running. If you forget the public URL, you can run the following in your command line:
heroku info -s | grep web_url
Heroku will return your public URL. For example:
In your command line, set your action hub base URL:
heroku config:set ACTION_HUB_BASE_URL="
Set your action hub label:
heroku config:set ACTION_HUB_LABEL="Your Action Hub"
Looker uses an authorization token to connect to the action hub. Generate the token in your command line:
heroku run yarn generate-api-key
If you are not using Heroku, as we are in this example, instead use:
yarn generate-api-key
Heroku will return your authorization token. For example:
Authorization: Token token="abcdefg123456789"
Set your action hub secret using the secret key:
heroku config:set ACTION_HUB_SECRET="abcdefg123456789"
Customer-hosted deployments may require configuration of additional environment variables not documented here.
Add your action on your local Looker instance by going to Admin > Actions.
- At the bottom of the list of actions, click Add Action Hub.
- Enter the Action Hub URL and, optionally, a Secret Key.
- Find your action in the Actions list within Looker’s Admin menu.
- Click Enable.
If your action requires that specific kinds of data be passed from Looker, be sure to configure any models to include the appropriate
tags parameter.
Now you’re ready to test your action!
Testing dashboard-level and query-level actions
In your Looker instance, configure your LookML model with tags, if necessary. Create and save a Look. On the saved Look, click the upper-right menu and select Send with your action as the destination. If you have a form for delivery, Looker will render it in the Sent window.
Click Send Test to deliver the data. The status of the action will appear in the Scheduler History in the Admin panel. If your action encounters an error, it will be shown in the Admin panel and Looker will send an email with the error message to the user who sent the action.
Testing cell-level actions
Set up a LookML field with the proper tags for your action. In your Looker instance, run a query that includes that field. Find the field in the data table. Click the … in the cell and select Send from the drop-down menu. If you receive errors, you’ll need to do a full refresh on the data table after addressing those errors.
- If your action is delivered without any errors, you’re ready to publish your action!
- If you want to keep hosting your action privately, you can publish to your private action hub.
- If you want to publish your action for use by all Looker customers, see the section on Publishing to the Looker Action Hub.
Publishing and enabling a custom action
There are two publication options for custom actions:
- Publishing to the Looker Action Hub: This makes your action available to anyone who uses Looker.
- Publishing to a private action hub server: This makes your action available on your Looker instance only.
Once your action is published, you can enable it from the Actions page in the Admin panel.
Publishing to the Looker Action Hub
This approach is the easiest and works for any action that you’d like to make available to anyone who uses Looker.
After your action has been tested, you can submit a PR to the
looker-open-source/actions repo in GitHub.
- Enter the following command:
git push <your fork> <your development branch>
Create your pull request with the
looker-open-source/actionsrepo as your target.
Fill out the Looker Marketplace & Action Hub Submission Form. For more information about the form requirements, see Submitting content to the Looker Marketplace.
Looker will review your action code. We reserve the right to decline your PR but can help you with any issues that you have and offer suggestions for improvement. We then merge the code into the
looker-open-source/actionsrepo and deploy it to
actions.looker.com. Once deployed, the code will become available to all Looker customers.
Enable the action in your Looker instance, so that it will appear as an option for data delivery.
Publishing to a private action hub server
If you have custom actions that are private to your company or use case, you should not add your action to the
looker-open-source/actions repo. Instead, create a private action hub using the same Node.js framework you used to test your action.
You can set up your internal action hub server on your own infrastructure or using a cloud-based application platform (our example used Heroku). Don’t forget to fork the Looker Action Hub to your private action hub server before deployment.
Configuring a LookML model for use with an action
For both custom actions and actions available from the Looker Action Hub, you must identify the relevant data fields using the
tags parameter in your LookML model.
The Actions page in the Admin panel will provide information about the tags that are required for the service, if any. For example:
The Zapier integration states that it works with any query. There is no requirement to add the
tags parameter to a field in your LookML model.
The Twilio Send Message service, however, sends a message to a list of phone numbers. It requires a query that includes a phone number field and uses the
tags parameter to identify which field in the query contains phone numbers. You identify a phone number field in LookML by specifying
tags: ["phone"] for that field. Your LookML for a phone number field might look like this:
Be sure to identify any required fields in your LookML model with the
tags parameter so that your users can use the service to send data.
<a name=deliver_data_with_an_action” class=”anchor”>
Delivering data with an action
Your data can be delivered in several ways, depending on the level on which the action operates. Actions work on the field, query, or dashboard level and can operate on one or more levels. Each action listed on the Actions page of the Admin panel has a description of how it is used. You can:
- Deliver a cell of data
- Deliver an entire dashboard or query (from a Look or an Explore)
Delivering cell data
Field-level actions are denoted on the Actions page of the Admin panel by a description that includes “Action can be used with fields” or with a Yes in the Can Use from Fields column in the list of integrated services.
Field-level actions are designed to deliver a cell of data to the specified service. They work similarly to data actions except that they are served through the Looker Action API. Instead of defining an
action LookML parameter for a dimension or measure, you must configure your LookML model by tagging the relevant fields with the information provided in the Tags for This Action column in the list of integrated services.
After enabling the service and tagging fields in the LookML model, you can:
View the data you want to deliver in a Look, a dashboard, or an Explore. If the service specifies “Action can be used with queries that have a field tagged…”, then your query or one of the dashboard’s tiles must include one or more fields with any required tags.
The tagged field in each cell in the Look, dashboard tile, or Explore will contain a drop-down list, indicated by an ellipsis (…). Click on the ellipsis to see the actions available for that link:
In the ACTIONS section, click the service that you want to receive the row data.
Delivering dashboard or query data
Query-level actions are denoted on the Actions page of the Admin panel by a description that includes “Action can be used with queries that have a field tagged…” or “Action can be used with any query.” According to the Can Send or Schedule column in list of integrated services, you can deliver Every row (in a Look or an Explore). Query-level actions are designed to deliver the entire query results from an Explore or Look to the specified service.
Dashboard-level actions are denoted on the Actions page of the Admin panel by a description that includes “Action can be used with any dashboard.” According to the Can Send or Schedule column in the list of integrated services, you can deliver A dashboard. Dashboard-level actions are designed to deliver a dashboard to the specified service.
Enable the service and, if necessary, tag fields in the LookML model.
To deliver a Look or an Explore, see the Delivering Looks and Explores documentation page.
To deliver dashboards, see the Delivering legacy dashboards and Scheduling and sending dashboards documentation pages. | https://docs.looker.com/sharing-and-publishing/action-hub | CC-MAIN-2022-21 | refinedweb | 4,688 | 50.36 |
By Robert L. Bogue
For most developers, XML is a storage mechanism for data. You use the Web.config file to store configuration information about your Web-based applications. Other XML files are used to persist data that an application needs. The same application creates and consumes the XML file.
However, as Web services grow and XML files become the mechanisms by which organizations interoperate, the importance of validating will grow substantially. Not only will the XML need to be well formed so that it can be read by an XML parser, but it must also be valid so you can be assured that you're getting the data that you're expecting.
Schema is the contract
In essence, an XML schema is the contract for the XML file being exchanged. The schema defines what can exist and what must exist in the file and where. It's important to be able to use a schema to validate an XML file so that code will work predictably.
Using XML schemas to confirm that an XML file conforms to the format that your program expects substantially reduces the need to put in error handling code and can substantially reduce testing. XML Validation can cover the typical present or missing types of verification as well as ensure that a variety of other characteristics, such as the length of a node, are adhered to.
The amount of error handling code is directly related to trapping for unexpected conditions. If the first step in a process is to verify that everything in the XML file that is being brought in is exactly as expected, it eliminates the need to check that input file as each element is located and used. The only error checking will be how the information in the file relates to information already in the organization.
Schema uniform resource identifiers. One of the challenges is that although schemas are referred to by their namespace, which is typically a URL, they don't have to be URLs, and they don't have to be valid for fetching the schema file.
Schema files need a namespace so that they can be differentiated from other schemas for other files. In other words, namespaces allow the unique identification of a schema so it can be determined whether two or more XML files conform to the same schema. Most schemas just refer to a URL on the server of the company that publishes the schema. However, many of the URLs used don't actually correspond to the location where the XML schema can be resolved from; instead, they are just placeholders to make the schema unique.
In .NET, if you're using a validating XML reader, it will automatically try to fetch the XML schema from the URL specified in the schemaLocation attribute of the root XML node. It will try to resolve the URL and fetch the schema definition and will fail and promptly ignore the schema definition, expecting that you'll provide the schema yourself.
This is the same thing that it will do if the referenced schema namespace isn't a URL, but is, instead, a Uniform Resource Identifier (URI). Universal Resource Names (URNs) are another form of URI. A URN is nothing more than a name. For instance, urn:mySchema is a valid URN and, therefore, a valid URI.
Whether the schema namespace is a URN or a URL where the schema isn't located, it will be up to you to manually specify the schema you want to validate an XML file against. Unfortunately, this is the case for a large number of schemas.
Included and imported schemas
DTD documents are the old way for specifying the valid format of an XML file. One of the changes that happened when you migrated from DTD documents to XML Schemas was that XML schemas can be modular. With the new XML schema definition (XSD), each XSD can reference another XSD, and so on. This allows the XML schema designer to follow the same modular design principles as programmers of object-oriented languages.
The XML schema element <include> allows the inclusion of another XML schema file into the existing XML schema file. This is the basic element which allows a single schema to be broken up into multiple files. The schemaLocation attribute specifies where to resolve the new schema. This is typically a relative path to the current schema document.
.NET bug
There's a bug in the .NET framework (up to V 1.1) that doesn't allow URIs to be created with a single / as their first character. This causes an error when trying to process schemas that refer to included schemas from the root of a URL. This issue is expected to be fixed in the next major release of the .NET framework.
The XML schema element <import> is used to include schema definitions from another namespace. This is helpful when another organization defines useful basic schema parts that you can reuse. For instance, if you're organization (called Foo) had a schema namespace that was:
And you wanted to use an address element defined by the Address standard organization with the namespace of:
You would import—not include—the schema that defined the address element. The <import> tag is similar to the <include> tag, except that it refers to the namespace that the imported schema will use in addition to its location.
As stated above, both the <import> and <include> elements are very helpful in separating out one potentially huge schema file into more manageable bits. However, there is such a thing as too much of a good thing. In a recent project, it took 25 separate schema files to validate an XML document. Each imported or included schema seemed to import or include another.
Although having multiple files is a great management technique for schema files, the trade-off is that each individual file is located and processed individually, which means more overhead for each file. Luckily, there are techniques in .NET that allow you to cache the schemas and reduce the impact of reloading the schema every time.
Caching with XML resolver
In .NET, all XML readers have a property that points to an XML resolver object. This XML resolver object is used when the XML reader encounters URIs that it wants to resolve. For the most part, this is used only with validating XML readers that are trying to validate the schema and, therefore, need to locate the schema and any imported or included schemas.
By subclassing this XML resolver into your own class, you can override where the validating XML reader goes to get the XML schema files that it's looking for. This is useful when the schema publisher doesn't publish a copy of the schema in the URL that it uses as the schema namespace and when you want to cache the schema files locally to improve performance and reliability of applications using schema validation.
There is only one method call to override when creating your subclassed URL resolver, ResolveUri(). This is the function that is called when the validating XML reader encounters a URI through the schemaLocation tag in the root node or through the <include> and <import> tags in a schema. By overriding this function, you can tell the validating XML reader to read the schemas from the location where you want to have it resolved.
Listing A shows an XMLResolver class that allows you to specify a local cache directory.
Validating the XML with a validating reader
Next, you need to process the XML file itself with an XML validating reader. If the validating reader doesn't throw an XmlSchemaException when used as a parameter of the load method of an XmlDocument class, the schema is valid. Listing B shows a console application that reads the schema location from the XML file passed and uses a set of cache parameters to replace a URI with the directory in which you've cached the schemas.
The only tricky part is forming the URL for the location where the local files are stored. A complete set of parameters for the program might look like the code shown in Listing C.
� | http://www.techrepublic.com/article/validate-xml-files-efficiently-via-cached-schemas-in-net/ | CC-MAIN-2017-22 | refinedweb | 1,368 | 60.14 |
Timeline
04/05/08:
- 23:03 Changeset [35776] by
- yelp: * Remove the ugly workaround: the shared libraries of the latest …
- 21:52 Ticket #14934 (Patch to eclipse-ecj to give the caller more flexibility) created by
- The current wrapper script tries to figure out what java runtime to use. …
- 20:16 Ticket #14933 (ssh (port:openssh) is not Kerberized) created by
- I have not investigated this, but the openssh installed by the port …
- 17:55 Ticket #10572 (BUG: libcryptopp-5.1 fails when building on darwinports 1.320) closed by
- worksforme: The port is now at version 5.5.2 and builds fine for me.
- 14:19 Changeset [35775] by
- math/fftw-3-single: build shared libraries
- 13:42 Ticket #14932 (Smultron needs update to 3.4) created by
- version 3.4 of Smultron is available now
- 12:47 Changeset [35774] by
- Total number of ports parsed: 4638 Ports successfully parsed: 4638 …
- 12:00 Changeset [35773] by
- ruby.setup now takes type "fetch" for just fetch/extract. Thanks to …
- 11:39 Changeset [35772] by
- Get rid of all usage of _cd and [exec find ...] in ruby-1.0.tcl
- 11:27 Ticket #14931 (wxWidgets26 does not build on Leopard - conflicting function declarations) created by
- FSVolumeMount has conflicting declarations, see below (complete build log …
- 11:10 Ticket #12432 (BUG: zenity build fails) closed by
- invalid: OBE: The GNOME stack has been entirely migrated to python 2.5
- 09:10 Ticket #12370 (BUG: gtk2 fails to compile; gtk-update-icon-cache crashes) closed by
- invalid: OBE
- 09:06 Changeset [35771] by
- gedit-plugins: update to 2.22.0
- 09:03 Ticket #13240 (libglade2 fails to build on Mac OS X 10.5 Leopard (/usr/X11/lib/libSM.la: ...) closed by
- worksforme: I can not reproduce this issue on an Intel iMac or a G4 Powerbook …
- 07:51 Changeset [35770] by
- gnome-applets: update to 2.22.0
- 07:09 Changeset [35769] by
- gnome-python-desktop: update to 2.22.0
- 06:07 Changeset [35768] by
- bug-buddy: version bump to 2.22.0
- 05:22 Ticket #14930 (RFE php5: move all graphic libraries into separate variant?) created by
- Hi, Just reviewing the tickets for php5, -- looks like you has been busy! …
- 05:10 Ticket #12235 (BUG: libgnomeui looks for incorrect version of Gtk+) closed by
- invalid: OBE
- 05:00 Ticket #14929 (RFE: py25-psycopg needs postgresql82, postgresql83 variants like ...) created by
- Hi... In […] py-psycopg was updated to have postgresql82 and 83 …
- 04:53 Changeset [35767] by
- gnome-media: update to 2.22.0, remove unneeded flags, correct dependencies
- 04:47 Ticket #14928 ('port install BAD' should return non-0 error code) created by
- Hi, Is this the correct behavior for macports 1.6.0? I would have …
- 04:38 Ticket #11653 (vte python module is broken) closed by
- invalid: This report is OBE, marking invalid.
- 04:35 Changeset [35766] by
- Add long description Change description Silence lint
- 04:28 Ticket #14751 (Cannot configure yelp -required by gnucash-docs) closed by
- worksforme: Please sync the ports tree and try again. Works for me. Please reopen if …
- 04:28 Ticket #10880 (BUG: Repair bad dependency entries in GNOME ports) closed by
- wontfix: Gui_dos has the GNOME ports well in hand, without this task, so I'm …
- 04:22 Ticket #14924 (ffmpeg fails to patch) closed by
- fixed: Committed in r35765. Thanks for your patch.
- 04:21 Changeset [35765] by
- ffmpeg: Add missing $worksrcpath in post-patch
- 04:21 Ticket #14133 (freeciv-x11: Mouse problem after some time of playing) closed by
- invalid: Freeciv-x11 upgraded to version 2.1.3 in r35764 Please open new ticket …
- 04:17 Changeset [35764] by
- Upgrade to version 2.1.3
- 03:46 Changeset [35763] by
- x11/homebank: Updated to version 3.8.
- 03:41 Ticket #13502 (freeciv-2.1.1 does not correctly fetch patch-configure.diff) closed by
- fixed: Patch removed in r35762
- 03:38 Changeset [35762] by
- Upgrade to version 2.1.3 Remove patch that no longer works
- 03:32 Changeset [35761] by
- gucharmap: * Update to 2.22.0 * Remove unneeded patches and flags
- 03:08 Ticket #12236 (libgnomekbd fails to build) closed by
- invalid: If this remains an issue with the current version of port libgnomekbd, …
- 03:06 Ticket #12202 (BUG: libgnomekbd does not compile) closed by
- duplicate: Duplicates #12236
- 02:35 Ticket #14927 (p5-net-dns update) created by
- Upsteam update from 0.59 to 0.63
- 01:47 Ticket #14402 (libglade fails to install) closed by
- wontfix: Closing: wontfix I'm not sure we shoudl be keeping GNOME 1.x stuff in the …
- 01:34 Changeset [35760] by
- transmission-x11: * Update to 1.11 * Correct dependency (gtk -> gtk2) * …
- 01:30 Ticket #6125 (BUG: Building gnotify didn't find /opt/local/share/automake-1.7/install-sh) closed by
- invalid: If this is still an issue with the current version of gnotify, please …
- 01:24 Ticket #13426 (header docs for MacPorts.Framework) closed by
- fixed: These have been in the MacPorts Framework for some time now, so I'm …
- 01:16 Ticket #14926 (For gunucash-docs: the dependentcy on scrollkeeper needs to be removed and ...) closed by
- fixed: Committed in changeset:35759
- 01:15 Changeset [35759] by
- Change dependency on scrollkeeper to a dependency on rarian Fixes …
- 01:00 Ticket #14926 (For gunucash-docs: the dependentcy on scrollkeeper needs to be removed and ...) created by
- For gunucash-docs: the dependentcy on scrollkeeper needs to be removed and …
- 00:53 Changeset [35758] by
- winetricks: update to 20080402 (adds an option to install .NET 2.0)
- 00:47 Changeset [35757] by
- Total number of ports parsed: 4638 Ports successfully parsed: 4638 …
- 00:45 Changeset [35756] by
- gdl: update to 0.7.11, correct dependecies
04/04/08:
- 23:30 Ticket #14908 (BUG: firefox-x11 2.0.0.13 can't compile on Xcode 3.1) closed by
- invalid: I tried to compile the 2.0.0.11 version and it fails the same way. It used …
- 19:23 Changeset [35755] by
- propset
- 19:19 Changeset [35754] by
- Fixed lint warnings
- 19:09 Ticket #14914 (RFE: build fftw-3 shared libraries) closed by
- fixed: Thanks!
- 19:09 Changeset [35753] by
- fftw-3: builds shared libraries
- 18:00 Changeset [35752] by
- Upgraded sqlalchemy-migrate to 0.4.4
- 17:51 Ticket #14925 (UPDATE: openssh-5.0p1) created by
- Please upgrade openssh to 5.0p1
- 14:41 Changeset [35751] by
- gvfs: disable features not provided by dependencies. Fixes building when …
- 14:16 Ticket #14924 (ffmpeg fails to patch) created by
- Wrong path set in Portfile for reinplace. Patch attached. DEBUG: …
- 13:20 Changeset [35750] by
- ChangeLog: fetch prefers mirrors with lower ping times.
- 13:09 Changeset [35749] by
- Add full modeline to portfetch.tcl.
- 13:02 Ticket #14891 (PATCH: use the fastest mirror in fetch phase) closed by
- fixed: Committed in r35748. Thanks for the help!
- 12:58 Changeset [35748] by
- Try mirrors in ascending order of ping time in fetch.
- 12:47 Changeset [35747] by
- Total number of ports parsed: 4638 Ports successfully parsed: 4638 …
- 12:28 Ticket #14461 (New port: CPAN pmtools - A suite of small programs to help manage Perl ...) closed by
- fixed: Added in r35746.
- 12:20 Changeset [35746] by
- New port: p5-pmtools. Closes #14461.
- 11:48 Changeset [35745] by
- PortGroup and name change pertaining to py25 copy.
- 11:36 Ticket #14923 (python25 +universal fails to compile Mac OS 10.4) created by
- I previously installed MacPorts python2.5 and am trying to switch to …
- 11:31 Changeset [35744] by
- system-tools-backends: version bump to 2.6.0
- 11:11 Changeset [35743] by
- Adding Python 2.5 port of py-icalendar.
- 11:04 Changeset [35742] by
- libgtop: version bump to stable 2.22.0
- 11:01 Changeset [35741] by
- Comply with lint and keyword fix.
- 10:36 Changeset [35740] by
- Fix new maintainer address and lint warnings in preparation for copy to …
- 10:29 Ticket #14871 (evince-2.21.1 install fails due to compilation errors) closed by
- fixed: Fixed in r35738 and r35739.
- 10:25 Changeset [35739] by
- evince: 2.22.0, compile against poppler-0.8.0, fix #14871, have patches …
- 10:15 Changeset [35738] by
- evince: 2.22.0, compile against poppler-0.8.0, fix #14871, have patches …
- 09:54 Changeset [35737] by
- version bump to 0.9.59
- 09:31 Changeset [35736] by
- version bump to 5.2.6RC4, split PostgreSQL support for 8.2 in variant …
- 09:17 Changeset [35735] by
- Added modeline and untabified
- 09:09 Changeset [35734] by
- Upgraded py-sqlalchemy to 0.4.5
- 08:15 Changeset [35733] by
- Python 2.4 version of sclapp, by request (untested)
- 08:14 Changeset [35732] by
- devel/gvfs: a first port of glib's new integrated virtual file system used …
- 08:09 Changeset [35731] by
- Python 2.4 version of mutagen, by request (untested)
- 07:47 Ticket #14922 (evolution data server 2.22.0 fails to build) created by
- On an Intel based MacBook Pro under Tiger 10.4.11 evolution data server …
- 07:07 Changeset [35730] by
- Upgraded py25-sqlalchemy to 0.4.5
- 07:04 Changeset [35729] by
- Mutagen depends on zlib
- 04:05 Ticket #14921 (TeXShop uses cd command) created by
- […]
- 03:13 Changeset [35728] by
- gedit: add dependency on py25-gnome required by the Snippets plugin
- 02:02 Changeset [35727] by
- gnome-audio: update to 2.22.1
- 01:24 Changeset [35726] by
- py25-gnome: update to 2.22.0
- 00:47 Changeset [35725] by
- Total number of ports parsed: 4633 Ports successfully parsed: 4633 …
- 00:37 Changeset [35724] by
- gnome-desktop: add dependency on py25-gnome required by gnome-about
04/03/08:
- 23:56 Changeset [35723] by
- rsync: Update to 3.0.1
- 23:30 Ticket #14634 (darcs 2.0.0pre3 new portfile) closed by
- fixed: Distname for darcs-devel fixed in r35722.
- 23:29 Changeset [35722] by
- darcs-devel: use correct distname.
- 23:19 Changeset [35721] by
- gnome-control-center, gnome-session: update to 2.22.0
- 22:43 Ticket #14920 (openal port build failure) created by
- Building in Leopard fails because CADebugMacros.cp does not exist. …
- 21:43 Changeset [35720] by
- Very first port of gnome-settings-daemon, required separately by …
- 21:16 Changeset [35719] by
- gnome-keyring: update to 2.22.0
- 19:14 Ticket #14919 (mzscheme-371 Build failure.) created by
- While attempting to build MzScheme I am getting this error: […]
- 19:11 Ticket #14918 (sphinx-0.9.8-rc2 Update to latest available version of port) created by
- 0.9.8-rc2 is the latest The current ruby API for sphinx is only supported …
- 17:40 Changeset [35718] by
- Added dependencies on py25-zlib and py25-hashlib. Thanks, Greg Onufer.
- 17:22 Ticket #13804 (osxvnc: allow non-universal build) closed by
- fixed: I'm happy to change osxvnc so that universal is not the default. I just …
- 17:20 Changeset [35717] by
- osxvnc: build non-universal by default; closes #13804
- 16:23 Changeset [35716] by
- osxvnc: whitespace changes only (tabs to spaces, etc.)
- 14:18 Changeset [35715] by
- follow GSL upgrade, actually use python2.5
- 14:14 Changeset [35714] by
- fix typo
- 13:44 Ticket #14917 (py-pyrex ver 0.9.6.4 tries to fetch source ver 0.9.5.1) created by
- Here is the command line with default options: sudo port install py-pyrex …
- 12:47 Changeset [35713] by
- Total number of ports parsed: 4632 Ports successfully parsed: 4632 …
- 12:05 Ticket #13412 (libao: Use AudioUnits instead of base CoreAudo to support different ...) closed by
- fixed: Marking fixed as per comment:7.
- 11:57 Ticket #13844 (mercurial 0.9.5 - Update to use Python 2.5) closed by
- fixed: Update is in r35711.
- 11:57 Ticket #14808 (mercurial update to 1.0) closed by
- fixed: Update is in r35711.
- 11:51 Ticket #14896 (ldns upstream update, leopard support and enhancement) closed by
- fixed: Committed in r35712 . Thanks
- 11:50 Changeset [35712] by
- Version bump, fixes #14896
- 11:49 Ticket #14916 (upgrade to recent version) closed by
- duplicate: Duplicate of #14915.
- 11:47 Changeset [35711] by
- Update to 1.0. Closes #13844 and #14808 Release notes: …
- 10:45 Changeset [35710] by
- libmms: * Update to 0.4 * Remove patch landed upstream: …
- 10:42 Ticket #14916 (upgrade to recent version) created by
- yaws 1.68 is from february 2007, since then there were 6 new releases.
- 10:42 Ticket #14915 (upgrade yaws to recent version) created by
- yaws 1.68 is from february 2007, since then there were 6 new releases.
- 10:30 Changeset [35709] by
- lcms: version bump to 1.17
- 09:28 Changeset [35708] by
- gmime: version bump to 2.2.18
- 07:46 Ticket #14914 (RFE: build fftw-3 shared libraries) created by
- As noted in the mailing list, fftw-3 does not build shared …
- 07:37 Changeset [35707] by
- gnome-panel: update to 2.22.0
- 07:35 Ticket #14913 (RFE: add fortran variant to hdf5) created by
- In the recent update to hdf5, r35665, a variant was added which builds the …
- 06:36 Changeset [35706] by
- libgtkhtml3: update to 3.18.0
- 06:32 Ticket #14912 (py25-setuptools should include site.py) created by
- Setuptools 0.6c8_0 (package py25-setuptools) is currently broken, because …
- 05:55 Changeset [35705] by
- totem: avoid PyGTK duplicated symbols
- 04:24 Changeset [35704] by
- totem: * Add dependency on eel * Detect MacPorts' own python2.5
- 04:03 Ticket #14911 (Trac ticket type: task) created by
- Please bring back the Trac ticket type of "task" so that activities that …
- 04:02 Changeset [35703] by
- totem-pl-parser: update to 2.22.1
- 03:56 Changeset [35702] by
- libpng: update to 1.2.26 All 1 tests passed
- 03:51 Changeset [35701] by
- libpng: whitespace changes / rearrangement only (tabs to spaces, etc.)
- 03:43 Ticket #13198 (tango-icon-theme 0.8.1 - NEW Port) closed by
- fixed: Landed in r35700. Thanks!
- 03:40 Changeset [35700] by
- New ports (commits #13198): x11/tango-icon-theme …
- 02:56 Changeset [35699] by
- net/libgweather: new port (required by gnome 2.22.0 panel/applets)
- 02:39 Ticket #14899 (pdf2svg-0.2.1 Port Submission) closed by
- fixed: Committed in r35698. Thanks!
- 02:37 Changeset [35698] by
- graphics/pdf2svg: new port (commits #14899)
- 02:11 Ticket #14910 (New component: administration) created by
- Having a ticket component for administrative actions in the project would …
- 02:09 Ticket #14909 (Roadmap does not provide direction) created by
- The Trac Roadmap is currently either misused or underused or both. While …
- 01:51 Ticket #14907 (gnu-classpathx-comm needs an updated cvs path) closed by
- fixed: Fixed in r35697.
- 01:51 Changeset [35697] by
- gnu-classpathx-comm: update cvs.root, fixes #14907.
- 01:20 Changeset [35696] by
- ImageMagick: update to 6.4.0-3 All 696 tests behaved as expected (33 …
- 01:15 Ticket #14900 (BUG: gnome-panel fails in destroot stage) closed by
- fixed: Fixed in r35695, thanks.
- 01:14 Changeset [35695] by
- gnome-panel: post-destroot unneeded after switch to rarian (closes #14900)
- 01:07 Ticket #14904 (BUG: evolution-data-server fails to build) closed by
- fixed: Committed in r35693, thanks.
- 01:02 Ticket #14746 (ffmpeg broken with variants) closed by
- fixed: Looks like fixed. Please reopen when needed.
- 01:01 Ticket #14694 (ffmpeg won't build with avfilter variant) closed by
- fixed: Fixed in r35694.
- 01:00 Changeset [35694] by
- ffmpeg: libswscale compiles on intel leopard now. Fixes #14694.
- 00:59 Changeset [35693] by
- evolution-data-server: update to 2.22.0 (closes #14904)
- 00:55 Ticket #14903 (dtach 0.8) closed by
- fixed: Committed in r35692. Thanks for taking maintainership!
- 00:55 Changeset [35692] by
- dtach: bump version to 0.8, add maintainer. Closes #14903.
- 00:45 Ticket #14902 (esniper upgrade to 2.18.0) closed by
- fixed: Thanks, committed in r35691.
- 00:44 Changeset [35691] by
- esniper: bump version to 2.18.0. Closes #14902.
- 00:43 Changeset [35690] by
- Total number of ports parsed: 4627 Ports successfully parsed: 4627 …
04/02/08:
- 23:57 Ticket #14908 (BUG: firefox-x11 2.0.0.13 can't compile on Xcode 3.1) created by
- With the latest firefox-x11 2.0.0.13, compilation can't start. cd …
- 23:10 Changeset [35689] by
- glib2: update to 2.16.2
- 22:17 Ticket #14907 (gnu-classpathx-comm needs an updated cvs path) created by
- The class path for the comm portfile needs to be changed to: cvs.root …
- 19:16 Ticket #14906 (port requires too many flags for simple operations) created by
- Ideally, this would be filed against a 2.0 release not the trunk or the …
- 19:07 Ticket #14905 (metacity fails to build (core/compositor.c issues)) created by
- […]
- 18:52 Ticket #14904 (BUG: evolution-data-server fails to build) created by
- After the new recent upgrade to libsoup, r35552, evolution-data-server …
- 17:08 Ticket #14903 (dtach 0.8) created by
- Bumped up version on port, from 0.7 -> 0.8
- 15:56 Ticket #14902 (esniper upgrade to 2.18.0) created by
- Attached please find a portfile for the latest version of the net/esniper …
- 15:20 Ticket #14901 (firefox-x11 does not provide file-selection browsers) created by
- The X11 port of firefox (2.0.0.13) does not include the code to allow the …
- 15:16 Ticket #14900 (BUG: gnome-panel fails in destroot stage) created by
- In the recent switch from scrollkeeper to rarian, r35580, it seems that …
- 15:06 Ticket #12086 (NEW: py-mutagen) closed by
- fixed: Resolved by r35303
- 14:57 Ticket #14899 (pdf2svg-0.2.1 Port Submission) created by
-
- 13:29 Ticket #14898 (gnome-python-extras gives configure failure) created by
- The configure step for gnome-python-extras fails. […]
- 12:44 Changeset [35688] by
- Total number of ports parsed: 4627 Ports successfully parsed: 4627 …
- 12:18 Ticket #11965 (NEW: py-reportlab2) closed by
- fixed: Py-reportlab was updated to 2.1 in r30474, and py25-reportlab 2.1 was …
- 09:50 Changeset [35687] by
- math/{tilp2,libticables2,libticalcs2,libticonv,libtifiles2}: Set …
- 09:29 Ticket #14897 (x11/php5-gtk Version 2.0 final) created by
- version 2 (final) of PHP-GTK has been released, and the Portfile needs to …
- 07:02 Changeset [35686] by
- Updated to 1.27.
- 07:00 Changeset [35685] by
- Updated to 1.24.
- 06:53 Changeset [35684] by
- Updated to 1.18.
- 06:51 Changeset [35683] by
- Updated to 2.15.
- 05:41 Ticket #14896 (ldns upstream update, leopard support and enhancement) created by
- Update ldns and drill to 1.2.2 and added darwin 9 support. Added ldns …
- 04:49 Ticket #14895 (boost-1.34.1 New "docs" variant to install docs) created by
- Subject line says it all really. I find it useful to keep the boost docs …
- 04:35 Changeset [35682] by
- version 2008-03-31
- 04:26 Changeset [35681] by
- glib2, glib2-devel: whitespace changes only (tabs to spaces)
- 03:37 Ticket #14894 (libtool: update to 2.2) created by
- I updated the libtool port from 1.5.24 to 1.5.26 because that fixed a …
- 03:24 Ticket #14884 (pike 7.6.112 updated portfile to offer more variants (additional pike ...) closed by
- fixed: Fixed, r35680. Don't forget to update port revision on changes.
- 03:23 Changeset [35680] by
- update from port maintainer (#14884)
- 00:44 Changeset [35679] by
- Total number of ports parsed: 4627 Ports successfully parsed: 4627 …
- 00:44 Ticket #14893 (gnucash (as of version 2.2.3) now works with goffice 0.6) created by
- As of version 2.2.3, gnucash now works with goffice 0.6 I've tried it and …
04/01/08:
- 23:39 wms edited by
- (diff)
- 22:50 easieste edited by
- (diff)
- 22:49 easieste created by
- Mark Evenson maintainer description
- 22:44 MacPortsDevelopers edited by
- Added new maintainer easieste (Mark Evenson) (diff)
- 20:49 Ticket #14892 (Add doc variant to gtk2) created by
- By default gtk2 depends on gtk-doc, which installs a bunch of unneeded …
- 20:42 Ticket #14891 (PATCH: use the fastest mirror in fetch phase) created by
- Here's a patch which makes the fetch phase ping each of the candidate …
- 20:26 Ticket #14889 (Not installing fullly?) closed by
- fixed: This is a known issue with the installer (it's meant to add /opt/local/bin …
- 19:17 Ticket #14890 (request for new port: kdevelop) created by
- Can someone make a Portfile for the "kdevelop" IDE. …
- 19:12 Ticket #14889 (Not installing fullly?) created by
- I have just installed XCode 2.4.1 and the Tiger version of MacPorts. I …
- 19:08 Ticket #14888 (Problems with otcl installation with MacPorts) created by
- Dear all, I found no helping site for this problem with otcl installation …
- 18:31 Ticket #14887 (NEW: tesseract-2.01) created by
- Portfile to build the Tesseract OCR engine from google code is attached. …
- 16:50 Changeset [35678] by
- math/{tilp2,libticables2,libticalcs2,libticonv,libtifiles2}: Linted.
- 16:45 Ticket #13227 (TiLP) closed by
- fixed: Added correct dependencies and committed in r35676 and r35677. You didn't …
- 16:44 Changeset [35677] by
- math/tilp2: New port, see #13227
- 16:44 Changeset [35676] by
- math/{libticables2,libticalcs2,libticonv,libtifiles2}: New ports as …
- 16:27 Changeset [35675] by
- port1.0/portutil.tcl: tracemode: always allow gzip in destroot phase, as …
- 16:24 Changeset [35674] by
- Change of ownership from brett to nick, and fixed incorrect patching of …
- 16:01 Changeset [35673] by
- Update to 4.6.21 and include the 4.6.21.1 patch.
- 16:00 Changeset [35672] by
- Fix a lint warning.
- 15:38 Changeset [35671] by
- Remove empty line.
- 15:38 Changeset [35670] by
- Remove pumpkingod as a maintainer since he isn't a committer per #12557 …
- 15:37 Changeset [35669] by
- Trim trailing whitespace.
- 15:05 Ticket #14750 (Crash in freetype) closed by
- wontfix: --with-old-mac-fonts was specifically added to the portfile in r31552 …
- 14:55 Ticket #14878 (UPDATE: Wireshark 1.0) closed by
- fixed: wireshark 1.0.0 committed in r35668
- 14:54 Changeset [35668] by
- Upgraded to 1.0.0 Closes ticket #14878
- 14:19 Changeset [35667] by
- lint
- 14:16 Changeset [35666] by
- fix destroot handling
- 14:02 Ticket #14886 (pandoc fails to build) created by
- The port pandoc fails with: […]
- 14:01 Changeset [35665] by
- follow upstream upgrade to 1.6.7, provide gcc43 variant (however, this …
- 13:09 Ticket #14389 (wireshark don't launch. "dyld: Library not loaded: ...) closed by
- invalid
- 12:55 Changeset [35664] by
- New version
- 12:55 Ticket #13182 (Apache2 failure to start due to mod_ssl loading problem under Mac OS X ...) closed by
- fixed: libtool has been updated to 1.5.26 in r35661 (see #14503) which based on …
- 12:44 Changeset [35663] by
- Total number of ports parsed: 4622 Ports successfully parsed: 4622 …
- 12:38 Ticket #11163 (BUG: lyx-1.4.0pre3 fails to download + compile issue on intel) closed by
- fixed: The main problem was that LyX now requires Qt 4. I've committed a working …
- 12:38 Changeset [35662] by
- LyX: update to 1.5.4. Closes #11163.
- 12:37 Ticket #14503 (libtool: upgrade to 1.5.26, add patch to avoid -flat_namespace) closed by
- fixed: Replying to ricci@macports.org: > I believe that always …
- 12:35 Changeset [35661] by
- libtool: update to 1.5.26 and avoid the use of -flat_namespace when …
- 11:37 Changeset [35660] by
- dbus: fix Vim typo in r35659
- 11:15 Changeset [35659] by
- dbus: create also /etc/dbus-1/session.d during destroot
- 11:03 Changeset [35658] by
- deluge: update to 0.5.8.7
- 09:33 Ticket #14885 (UPDATE: math/glpk 4.28) closed by
- fixed: Committed in r35657, thanks.
- 09:32 Changeset [35657] by
- glpk: update to 4.28 (commits #14885)
- 09:26 Ticket #14779 (RFE: include examples and documentation with glpk) closed by
- fixed: Outdated by #14885.
- 09:23 Ticket #14546 (UPDATE: libsexy 0.1.11) closed by
- fixed: Committed in r35656.
- 09:21 Changeset [35656] by
- libsexy: version bump to 0.1.11 (closes #14546, maintainer timout)
- 09:14 Ticket #14400 (UPDATE:rb-cocoa to 0.13.2) closed by
- fixed: Committed in r35655, thanks!
- 09:12 Changeset [35655] by
- rb-cocoa: update to 0.13.2 (commits #14400)
- 09:08 Ticket #14885 (UPDATE: math/glpk 4.28) created by
- Update math/glpk to 4.28 This patch includes #14779 and adds the doc and …
- 09:05 Changeset [35654] by
- Add some new mirror sites.
- 08:56 Changeset [35653] by
- libwnck: version bump to 2.22.0
- 08:51 Ticket #14882 (UPDATE: glade3 maintainer update) closed by
- fixed: Committed in r35652 , thanks!
- 08:50 Changeset [35652] by
- glade3: cleanup by maintainer (commits #14882)
- 07:03 Changeset [35651] by
- add 'ossp-uuid' new include dir to CFLAGS; no need to increment revision …
- 07:01 Changeset [35650] by
- follow update to 1.11
- 06:52 Changeset [35649] by
- ChangeLog: Fetching of daily snapshot tarballs of the ports tree as an …
- 06:48 Changeset [35648] by
- port/port.tcl, macports/macports.tcl: * Pass optional $optionslist to …
- 06:09 Changeset [35647] by
- macports1.0/macports.tcl: Implement fetching of daily snapshot tarballs as …
- 06:05 Ticket #14203 (apr fails to build if ossp-uuid is activated (dependency of postgresql83)) closed by
- fixed: Replying to bulk@modp.com: > Oh you are going to like this! …
- 06:03 Changeset [35646] by
- Move ossp-uuid's include file to a subdirectory (fixes #14203)
- 04:56 Changeset [35645] by
- science/wview: Fix installation into /Library (was outside destroot)
- 04:51 Changeset [35644] by
- List master download sites last.
- 04:05 Changeset [35643] by
- Fix or remove broken mirrors.
- 03:20 MacPortsDevelopers edited by
- Adding myself (nick) into the list (diff)
- 03:18 Ticket #13954 (python25 not building, OpenGL problem) reopened by
- Looks like I'm getting the same problem, again on 10.4.11. I've got OpenGL …
- 02:55 Ticket #13413 (BUG: Error installing ettercap-ng) closed by
- duplicate: Dupe, #13399.
- 02:54 Ticket #13399 (ettercap-ng fix) closed by
- fixed: Fixed, r35642.
- 02:53 Changeset [35642] by
- fix leopard build (#13399, maintainer timeout: 4 months)
- 02:52 Changeset [35641] by
- libxklavier: add dependency on glib2
- 02:05 Ticket #12969 (inkscape 0.45.1 crashing on startup -boehmgc 7.0 breaking it?) closed by
- fixed: This seems to have been fixed by r35638.
- 01:54 Ticket #14881 (GIMP plug-in crashes on pdf open) closed by
- fixed: OK, I bumped the revision in r35640 so everyone will rebuild against the …
- 01:53 Changeset [35640] by
- gimp2: bump revision to force rebuilding against new poppler version. …
- 01:02 Changeset [35639] by
- Minor cleanups. Depend on the boehmgc port, rather than a regex search.
- 01:01 Changeset [35638] by
- Remove --enable-parallel-mark. The port does not pass its tests with this …
- 00:57 Changeset [35637] by
- libxklavier: version bumped to 3.5 and lint'ed
- 00:50 Changeset [35636] by
- Change a build dependency from a port: dependency to a bin: dependency so …
- 00:44 Changeset [35635] by
- Total number of ports parsed: 4622 Ports successfully parsed: 4622 …
Note: See TracTimeline for information about the timeline view. | https://trac.macports.org/timeline?from=2008-04-05T05%3A22%3A55-0700&precision=second | CC-MAIN-2016-18 | refinedweb | 4,500 | 65.12 |
Making Time-Dependent Features Testable (c#)
>>IMAGE here is the straight-forward test code:
var token = new Token();
Assert.IsTrue(token.IsValid());
Thread.Sleep(21 * 60 * 1000); // wait 21 minutes
Assert.IsFalse(token.IsValid());
Making test run hang for 20 minutes just literally wasting time? That’s hardly feasible.
We can move the constant 20 (token lifetime in minutes) to config file (which we should do anyway), and then try to set much lower value for testing, but this does not eliminate the problem completely.
In manual testing for an entire application, QA engineers sometimes actually wait for this long, for a lack of a better way to test it. Another method sometimes used — changing machine time for a short interval; but it is tricky, and it is a race against the network time service that will reset the time back.
So how we can make it easier? The idea is to create a wrapper for the DateTime.UtcNow that allows 'shifting' the current time for an application — and using the wrapped time throughout the code.
Here is the static time wrapper:
public static class AppTime {
private static TimeSpan _offset = TimeSpan.Zero;
public static DateTime UtcNow {
get {
var now = DateTime.UtcNow;
return (_offset == TimeSpan.Zero) ? now : now.Add(_offset);
}
}
public static void SetOffset(TimeSpan offset) {
_offset = offset;
}
public static void ClearOffset() {
_offset = TimeSpan.Zero;
}
}
This class works as a time machine — you can shift time forward or back, provided all access to current time in the application is done through the AppTime.UtcNow static property.
The new Token code:
public class Token {
DateTime _created;
public Token() {
_created = AppTime.UtcNow;
}
public bool IsValid() {
return AppTime.UtcNow < _created.AddMinutes(20);
}
}
Testing the expiration now becomes trivial:
var token = new Token();
Assert.IsTrue(token.IsValid());
// shift app time forward
AppTime.SetOffset(TimeSpan.FromMinutes(21));
Assert.IsFalse(token.IsValid());
AppTime.ClearOffset();
It’s not just for expiring things — I suggest that you use this AppTime.UtcNow everywhere, just for consistency. Just establish the rule — no DateTime.UtcNow, ever, only AppTime.
Using this technique in manual testing
It might seem this technique is only usable in automated code-based testing. But you can use it in manual testing process as well. Let’s take a Web application as an example. We can provide a special ‘hidden’ back-door page in our app, enabled only in test environment, that allows the tester/user to set the time offset in the AppTime class on the server. It does not have to be a page, just a REST endpoint that you can hit from the browser and send offset value in a query parameter.
Let’s see how a QA person can test that login/session expires in 20 minutes due to inactivity.
- Login to the app.
- Click a few things, everything works
- Open another tab in browser, hit URL like “" (1260 -> 21 minutes in seconds)
- Back to the app, click something that invokes the server. The app should show message “session expired” and redirect to login page.
I have been using this class for years now, in all applications I write. I hope you find this technique useful too.
This article was originally published on codeproject.com. | https://rivantsov.medium.com/making-time-dependent-features-testable-c-d5feff606ece?source=post_internal_links---------0---------------------------- | CC-MAIN-2021-31 | refinedweb | 530 | 57.77 |
We’ll learn how to use the 2SLS technique to estimate linear models containing Instrumental Variables
In this article, we’ll learn about two different ways to estimate a linear model using the Instrumental Variables technique.
In the previous article, we learnt about Instrumental Variables, what they are, and when and how to use them. Let’s recap what we learnt:
Consider the following linear model:
In the above equation, y, 1, x_2, x_3, and ϵ are column vectors of size [n x 1]. From subsequent equations, we’ll drop the 1 (which is a vector of 1s) for brevity.
If one or more regression variables, say x_3, is endogenous, i.e., it is correlated with the error term ϵ, the Ordinary Least Squares (OLS) estimator is not consistent. The coefficient estimates it generates are biased away from the true values, putting into question the usefulness of the experiment.
One way to rescue the situation is to devise a way to effectively “break” x_3 into two parts:
- A chunk that is uncorrelated with ϵ which we will add back into the model in place of x_3. This is the part of x_3 that is in fact exogenous.
- A second chunk that is correlated with ϵ which we will cut out of the model. This is the part that is endogenous.
And one way to accomplish this goal is to identify a variable z_3, “an instrument for x_3”, with the following properties:
- It is correlated with x_3. That (to some extent) satisfies the first of the above two requirements, and
- It is uncorrelated with the error term which takes care of the second requirement.
Replacing x_3 with z_3 yields the following model:
All variables on the R.H.S of Eq (1a) are exogenous. This model can be consistently estimated using least-squares.
The above estimation technique can be easily extended to multiple endogenous variables and their corresponding instruments as long as each endogenous variable is paired one-on-one with a single unique instrumental variable.
The above example suggests a general framework for IV estimation which we present below.
A linear regression of y on X takes the following matrix form:
Assuming a data set of size n, in Eq (2):
- y is a vector of size [n x 1].
- X is the matrix of regression variables of size [n x (k+1)], i.e. it has n rows and (k+1) columns of which the first column is a column of 1s and it acts as the placeholder for the intercept.
- β is a column vector of regression coefficients of size [(k+1) x 1] where the first element β_1 is the intercept of regression.
- ϵ is a column vector of regression errors of size [n x1]. ϵ effectively holds the balance amount of variance in y that the model Xβ wasn’t able to explain.
Here’s how the above equation would look in matrix format:
Without loss of generality, and not counting the intercept, let’s assume that the first p regression variables in X are exogenous and the next q variables are endogenous such that 1 + p + q = k:
Suppose we are able to identify q instrumental variables which would be the instruments for the corresponding q regression variables in X namely x_(p+1) thru x_k that are suspected to be endogenous.
Let’s construct a matrix Z as follows:
- The first column of Z will be a column of 1s.
- The next p columns of Z namely z_2 thru z_p will be identical to the p exogenous variables x_2 thru x_p in X.
- The final set of q columns in Z namely z_(p+1) thru z_k will hold the data for the q variables that would be the instruments for the corresponding q endogenous variables in X namely x_(p+1) thru x_k.
Thus, the size of Z is also [n x (k+1)] i.e. the same as that of X.
Next, we’ll take the transpose Z which interchanges the rows and columns. The transpose operation essentially turns Z on its side. The transpose of Z denoted as Z’ is of size [(k+1) x n].
Now, let’s pre-multiply Eq (2) by Z’:
Eq (3) is dimensionally correct. On the L.H.S., Z’ is of size [(k+1) x n] and y is of size [n x 1]. Hence Z’y is of size [(k+1) x 1].
On the R.H.S., X is of size [n x (k+1)] and β is of size [(k+1) x 1]. Working left to right, Z’X is a square matrix of size [(k+1) x (k+1)] and (Z’X)β is of size [(k+1) x 1].
Similarly, ϵ is of size [n x 1]. So Z’ϵ is also of size [(k+1) x 1].
Now, let’s apply the expectation operator E(.) on both sides of Eq. (3):
E(Z’y) and E(Z’Xβ) resolve respectively to Z’y and Z’Xβ.
Recollect that Z contains only exogenous variables. Therefore, Z and ϵ are not correlated and hence the mean value of (Z’ϵ) is a column vector of zeros, and Eq (3a) resolves to the following:
Next, we’ll isolate the coefficients vector β on the R.H.S. of (4) by multiplying both sides of Eq (4) with the inverse of the square matrix (Z’X).
The inverse of a matrix is conceptually the multi-dimensional equivalent of the inverse of a scalar number N (assuming N is non-zero). The inverse of a matrix is calculated using a complex formula which we’ll skip getting into.
It is possible to show that (Z’X) is invertible (again something we won’t get into here). Pre-multiplying both sides of Eq. (4) by the inverse of (Z’X) namely (Z’X)^-1, gets us the following:
The yellow and green bits on the R.H.S. cancel each other out and yield an identity matrix in the same way as N*(1/N) equals 1, leaving us with the following equation for estimating the coefficients vector β of the instrumented model:
Notice that Z, X and y are all observable quantities and so all regression coefficients can be estimated in one shot using Eq (6) provided there is a one-to-one correspondence between the endogenous variables in X and the chosen instruments in Z.
There is one final point that must be mentioned about Eq (6). Eq (6) is strictly speaking estimable only asymptotically, i.e. when the number of data samples n → ∞. But in practice, and for a set of mathematical reasons that probably deserve their own article, we can use it to calculate the coefficient estimates of a model estimated via IV on finite sized samples, in other words, on a real world data set.
Thus, the finite sample IV estimator β_cap_IV of β can be stated as follows:
Now, let’s look at the case where there is more than one Instrumental Variable defined for an endogenous variable.
Consider the following regression model of wages:
In the above model, we regress the natural log of wage instead of the raw wage as wage data is often right-skewed and logging it can reduce the skew. Education is measured in terms of years of schooling. College and city are boolean variables indicating whether the person went to college and whether they live in a city. Unemp contains the percentage unemployment rate in the county of residence.
Our X matrix is [1, age, experience, college, city, unemp, education], where the each variable is a column vector of size [n x 1] and the size of X is [n x 7].
We’ll argue that education is endogenous. As such, years of schooling captures only what is taught in school or college. And it also leaves out aspects such as how well the person has grasped the material, their knowledge of topics outside of the curriculum and so on, all of which are left unobserved and therefore captured in the error term ϵ.
We’ll propose two variables, mother’s number of years of schooling (meducation) and father’s number of years of schooling (feducation) as the IVs for the person’s education.
The relevance and exogeneity conditions
Our chosen IVs need to pass the relevance condition. If a regression of education on the rest of the variables in X plus meducation and feducation reveals (via an F-test) that meducation and feducation are jointly significant, the two IVs pass the relevance condition.
The error term ϵ is inherently unobservable. So the exogeneity condition for the IVs cannot be directly tested. Instead, we take it upon faith that parents’ number of years of schooling is unlikely to be correlated with factors such as the child’s grasp of material, i.e. the factors that are hiding in the error term and which are making education be endogenous. But we could be wrong about this. We’ll soon find out.
The regression model containing IVs
Our regression model with IVs is as follows:
Our Z matrix is [1, age, experience, college, city, unemp, meducation, feducation], where the each variable is a column vector of size [n x 1] and the size of Z is [n x 8]. Notice how we have replaced education with its two IVs.
And the coefficient vector to be estimated is:
β_cap_IV=[β*_1_cap, β*_2_cap, β*_3_cap, β*_4_cap, β*_5_cap, β*_6_cap, β*_7_cap, β*_8_cap]
Where the caps indicated estimated values.
With X and Z defined, can we use Eq (6a) to perform a single-shot calculation of β_cap_IV?
Unfortunately, the answer is , no.
Recollect that the size of Z is [n x 8]. So, the size of Z’ is [8 x n]. The size of X is [n x 7]. Hence Z’X has size [8 x 7] which is not a square matrix and therefore not invertible. Thus, Eq. (6a) cannot be used when multiple instrumental variables such as meducation and feducation are used to represent a single endogenous variable such as education.
This difficulty suggests that we explore a different approach for estimating β_cap_IV. This different approach is a two-stage OLS estimator.
We begin by developing the first stage of this estimator.
The First Stage
In this stage, we’ll regress education on age, experience, college, city, unemp, meducation, and feducation.
Let’s suppose that we have determined via the F-test that education is indeed correlated with the IVs meducation and feducation.
We will now regress education not only on meducation and feducation but also the other variables which allows us to account for the effect of possible correlations between the non-IV variables and the IV variables. See my earlier article on Instrumental Variables for a detailed explanation of this effect.
ν is the error term. The above model can be consistently estimated using OLS as all regression variables are exogenous. The estimated model has the following form:
In the above equation, education_cap is the estimated (a.k.a. predicted) value of education. The caps on the coefficients similarly indicate estimated values.
The above OLS based regression represents the first stage of a two-stage OLS (2SLS) estimation that we are about to do.
The second stage
The key insight to be had about the first stage is that education_cap contains only the portion of variance of education that is exogenous, i.e. not correlated with the error term.
Therefore, we can replace education in the original model of ln(wage) with education_cap to form a model that contains only exogenous regression variables, as follows:
Since the above model contains only exogenous regression variables, it can be consistently estimated using OLS. This estimation forms the second stage of the 2-stage OLS estimator.
We’ll use the following cross-sectional data from a 1976 Panel Study of Income Dynamics of married women based on data for the previous year, 1975.
Each row contains hourly wage data and other variables about a married female participant. The data set contains several variables. The ones of interest to us are as follows:
wage: Average hourly wage in 1975 dollars
education: years of schooling of participant
meducation: years of schooling of mother of participant
feducation: years of schooling of father of participant
participation: Did the individual participate in the labor force in 1975? (1/0). We consider only those individuals who participated in 1975.
Our goal is to estimate the effect of education as approximated by number of years of schooling on the hourly wage, specifically log of hourly wage, of married female respondents in 1975.
As we saw earlier, education is endogenous, hence a straight-up estimation using OLS will yield biased estimates of all coefficients. Specifically, an OLS estimation of β_1 and β_2 will likely overestimate their values i.e. it will overestimate the effect of education on hourly wages.
We’ll try to remediate this situation by using meducation and feducation as instruments for education.
We’ll use Python, Pandas and Statsmodels to load the data set and build and train the model. Let’s start by importing the required packages:
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
from statsmodels.api import add_constant
from statsmodels.sandbox.regression.gmm import IV2SLS
Let’s load the data set into a Pandas
Dataframe:
df = pd.read_csv('PSID1976.csv', header=0)
Next, we’ll use a subset of the data set where participation=yes.
df_1975 = df.query('participation == 'yes'')
We’ll need to verify that the instruments meducation and feducation satisfy the relevance condition. For that, we’ll regress education on meducation and feducation, and verify using the F-test that the coefficients of meducation and feducation in this regression are jointly significant.
reg_expr = 'education ~ meducation + feducation'olsr_model = smf.ols(formula=reg_expr, data=df_1975)olsr_model_results = olsr_model.fit()print(olsr_model_results.summary())
We see the following output:
The coefficients of meducation and feducation are individually significant at a p of < 0.001 as indicated by their p-values which are basically zero. The coeffcients are also jointly significant at a p of 2.96e-22 i.e. < .001. meducation and feducation clearly meet the relevance condition for IVs of education.
We’ll now build a linear model for the wage equation and using statsmodels, we’ll train the model using the 2SLS estimator.
We’ll start by building the design matrices. The dependent variable is ln(wage):
ln_wage = np.log(df_1975['wage'])
Statsmodel’s IV2SLS estimator is defined as follows:
statsmodels.sandbox.regression.gmm.IV2SLS(endog, exog, instrument=None)
Statsmodels needs the
endog,
exog and
instrument matrices to be constructed in a specific way as follows:
endog is an [n x 1] matrix containing the dependent variable. In our example, it is the ln_wage variable.
exog is an [n x (k+1)] size matrix that must contain all the endogenous and exogenous variables, plus the constant. In our example, apart from the constant, we do not have any exogenous variables defined in our wage equation. So it will look like this:
instrument is a matrix that contains the instrumental variables. Additionally, the Statsmodels’
IV2SLS estimator requires
instrument to also contain all variables from the
exog matrix that are not being instrumented. In our example, the instrumental variables are meducation and feducation. The variables in
exog that are not being instrumented is just the placeholder column for the intercept. Hence, our instrument matrix will look like this:
Let’s build out the three matrices:
df_1975['ln_wage'] = np.log(df_1975['wage'])exog = df_1975[['education']]
exog = add_constant(exog)instruments = df_1975[['meducation', 'feducation']]
instruments = add_constant(instruments)
Now let’s build and train the
IV2SLS model:
iv2sls_model = IV2SLS(endog=df_1975['ln_wage'], exog=exog, instrument=instruments)iv2sls_model_results = iv2sls_model.fit()
And let’s print the training summary:
print(iv2sls_model_results.summary())
Interpretation of results of the 2SLS model
Since our primary interest is in estimating the effect of education on hourly wages, we’ll focus our attention on the coefficient estimate of the education variable.
We see that the 2SLS model has estimated the coefficient of education as 0.0505 with a standard error of 0.032 and a 95% confidence interval of -0.013 to 0.114. The p value of 0.117 suggests a significance at (1–0.117)100%=88.3%. Overall, and as expected for a 2-SLS model, the model lacks precision.
Note that dependent variable is log(wage). To calculate the rate of change of hourly wages for each unit change (i.e. one year) of education, we must exponentiate the coefficient of education.
e^(0.0505)=1.05179 implying that a unit increase in number of years of education is estimated to yield an increase of $1.05179 in hourly wages, and vice-versa.
Comparison of the IV estimator with an OLS estimator
Let’s compare the performance of the 2SLS model with a straight-up OLS model that regresses log(wage) on education.
reg_expr = 'ln_wage ~ education'olsr_model = smf.ols(formula=reg_expr, data=df_1975)olsr_model_results = olsr_model.fit()print(olsr_model_results.summary())
We’ll focus our attention on the estimated value of the coefficient of education. At 0.1086, it is double the estimate reported by the 2SLS model.
e^(0.1086)=1.11472, implying a unit increase (decrease) in the number of years of education is estimated to translate into a $1.11472 increase (decrease) in hourly wages.
The higher estimate from OLS is expected due to the suspected endogeniety of education. In practice, depending on the situation we are modeling, we may want to accept the more conservative estimate of 0.0505 reported by the 2SLS model. However, (and against the 2SLS model), the coefficient estimate from the OLS model is highly significant with a p-value that is essentially zero. Recollect that the estimate from the 2SLS model was significant at only a 88% confidence level.
Also, (and again as expected from the OLS model), the coefficient estimate of education reported by the OLS model has a much smaller standard error (0.014) as compared to that from the 2SLS model (0.032). And therefore, the corresponding 95% CIs from the OLS model are much tighter than those estimated by the 2SLS model.
For comparison, here are the coefficient estimates of education and corresponding 95% CIs from the two models:
With the IV estimator, one trades precision of estimates for the removal of endogeneity and the consequent bias in the estimates.
And here’s a comparison of the main effect of education estimated by the two models on hourly wages:
Introduction to Two-Stage Least Squares Estimation Republished from Source via | https://bizbuildermike.com/introduction-to-two-stage-least-squares-estimation/ | CC-MAIN-2022-40 | refinedweb | 3,112 | 54.52 |
Supports creating and interacting with email messages, recipients, and attachments.
Represents an email attachment.
Represents an email conversation.
Represents a group of EmailConversation objects for batch processing.
Reads a batch of email conversations.
Represents an email folder.
Represents email information rights management (IRM) info.
Represents a template that can be used to create new EmailIrmInfo objects.
Represents the counts for various email message attributes such as flagged, important, unread, and so on.
Represents an email mailbox located on a remote email server.
Provides data about a change that occurred to a mailbox.
Represents an auto-reply message set on a mailbox.
Represents the settings for the automatic reply functionality of an email account.
Represents the capabilities associated with an email mailbox.
The functionality described in this topic is not available to all Windows and Windows Phone apps. For your code to call these APIs, Microsoft must approve your use of them and provision your developer account. Otherwise the calls will fail at runtime.
For more information about the Windows.ApplicationModel.Email namespace, please work with your Microsoft Account Team representative.
Represents a deferred process that will halt a thread until the deferral is complete.
Represents the deferral process.
Represents the result of a TryCreateFolderAsync operation.
Represents the encryption and signing policies associates with an email mailbox.
Allows an application to launch the email application with a new message displayed. Use this to allow users to send email from your application.
Represents a service that source apps can call to access email data for a specific user.
Represents the information associated with a meeting.
Represents an email message.
Represents a collection of email messages.
Gets a batch of email messages.
Represents the options selected for an email mailbox query.
Represents a text search query in an email mailbox.
Represents an email recipient.
Represents the result of an attempt to resolve an email recipient.
Defines the states of an email attachment download.
Defines the states of an email batch operation.
Describes the result of an attempt to validate a certificate.
Defines the flag state of an email message.
Defines the importance of an email message.
Defines the kind of action to be taken.
Defines the type of negotiation on encryption algorithms permitted by the server.
Defines the encoding schema used for automatic replies.
Defines the type of change made to the mailbox item.
Indicates the result of a call to TryCreateFolderAsync.
Indicates the result of a call to TryDeleteFolderAsync.
Indicates the result of a call to TryEmptyFolderAsync.
Defines whether an application can read from a mailbox.
Defines whether an application can write to a mailbox.
Defines the encryption algorithm used for an email.
Defines the algorithm used to sign an email.
Defines the sync status of the mailbox.
Defines the type of response to a meeting request.
Defines the format of an email message.
Defines the download status of an email message.
Defines the type of response to an email message.
Defines the S/MIME type of an email message.
Defines the type of email query.
Defines the field(s) by which to search a collection of email messages. Use the OR operator to combine these values together into a single bit field.
Defines the scope of a query.
Defines the order in which to sort query results.
The property by which to sort. Currently limited to date.
Describes the state of an attempt to resolve an email recipient.
Defines the email special folders.
Defines the scope for store access.
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our new feedback system is built on GitHub Issues. Read about this change in our blog post. | https://docs.microsoft.com/en-us/uwp/api/Windows.ApplicationModel.Email | CC-MAIN-2018-22 | refinedweb | 610 | 62.64 |
I'm fortunate enough to work with some pretty sophisticated developers.
We have a .NET team with a diverse background; some, like myself, come
from the VB6.0 / Windows DNA school and have been going through the OOP
.NET learning curve and have become proficient in C# as their preferred
.NET language of choice. Others came from the JAVA space - they already
were well - schooled in OOP principles (sometimes I think too much, since
in .NET, it's possible to flatten out the object model considerably and
still achieve OOP-oriented objectives). There is a pretty good interchange
of ideas, and developers help each other. A highly productive environment
in which to learn, work and grow.
One of the things I've noticed, especially from the former JAVA guys,
is that they come from an environment (not usually on the Win32 platform)
where they are used to having the kind of application servers that scale
with multiple instances of components acting as a monolithic instance.
This is done both for failover reasons as well as to have more computing
power handling large amounts of traffic. Naturally, their first look at
Windows Component Load Balancing tells them it's analogous to this stuff
they are accustomed to from the JAVA space, and that's where the problems
begin.
Application Center Server 2000 (ACS) is designed to handle component
load balancing (CLB), monitoring, and automated application deployment
(not just CLB). There are probably cheaper ways to do load balancing,
but when you add in the management features, some companies find that
they save money and time by using ACS.. ACS, as opposed to Windows 2000
DataCenter Edition, which requires a minimum of 8 processors, works just
fine with commodity single - and dual-proc servers. It also allows you
to configure NLB on Windows 2000 Server. For some companies, this could
save some costs over the licensing costs of Windows 2000 Advanced Server,
depending on your server environment configuration.
I've seen at least one situation where the ex - JAVA guys believed (understandably)
that if they made their .NET components Enterprise Services components,
installed them in the GAC and had them run in COM+ Services, that they
would be able to use this environment in the same way as they did with
their Beans and other JAVA stuff through IBM WebSphere or whatever - like
Component Load Balancing. I've railed continuously about the overhead
of COM+ and how by using Network Load Balancing (NLB), which comes with
Windows Advanced Server and higher (or can be configured on Windows 2000
Server using ACS as mentioned above), you can have multiple copies
of the application running on a web farm and achieve similar
scalability results, and you don't need COM+, as long as your calls are
being made over the web, or via TCP, which these days is more often than
not the case. And, let's not forget that TCP can be remoting framework
calls over the TCP Channel into the NLB "Farm" that have nothing
to do with the web at all.
Throughput
Throughput performance declines when making any kind of call across
a network. Using CLB always incurs this cost and this needs to be accounted
for in making cluster architecture decisions. To give some perspective
on this, the following data (from one of the CLB Whitepapers) shows the
number of calls per second on a single threaded Visual Basic 6 COM component
that returns “Hello, world” as a string property. The client
is early bound and doesn’t release references between calls to retrieve
the property.
It’s clear that calls over the network yield slower throughput
than calls to software installed on the same computer. For this reason,
CLB does not make a great solution where throughput is the most important
consideration. In this case it is better to install the COM+ components
locally on the Web-tier cluster members, avoiding cross-network calls.
CLB support is lost but load balancing is still available through NLB.
And as we will see in a moment, in the case of a .NET - based solution,
careful consideration should be given to whether the components should
be installed in COM+ at all, since NLB could care less whether your components
are in COM+. With NLB, we can have cookieless sessions via ASP.NET and
we can also use the SQL Server Session State service with a simple web.config
entry and a database script. In addition, it is possible to pass credentials
data to our Remoting Framework components over NLB.
Network Load Balancing (NLB) looks like a single server. As enterprise traffic
increases, network administrators can simply plug another server into
the cluster. Provided that DTC - type transactionality across components
is not absolutely necessary and the components are written to be stateless,
great economies of scale can be obtained at a much lower cost using relatively
inexpensive 2 or 4 proc boxes for the farm.
In my article about
Implementing .NET Role-based security without COM+ , I showed how
it is possible to eliminate the overhead of COM+ in order to achieve role-based
security with pure .NET code by using the LogonUser API in combination
with attribute-based or imperative security checks. And, contrary to what
some may believe, it is possible to make such calls via the Remoting Framework
over the TCP Channel by employing the CallContext Class, deriving from
ILogicalThreadAffinative, to pass out-of-band credentials parameters such
that the LogonUser API can even be called on the remote server without
messing up interface or method signatures. I had read enough articles
by some of the pros on MSDN to convince me that my suspicions about COM+
overhead were correct, and so I stand by my position:
Unless your .NET application needs intra-component
transactions, or has components that are very expensive to instantiate,
you don't need COM+ services. If all you need is role-based security,
consider using the LogonUser API.
In fact, most of what you would want to achieve by calling LogonUser()
could be accomplished, much more simply and with less overhead, in the
<identity> section of web.config. Just use Windows authentication
and impersonate, or - now you can even configure the identity under which
aspnet_wp will run using Microsoft's new security enhancement tool (more
on this one soon!).
NOTE: If you are using the System.Data.SqlClient namespace,
be aware that this class already uses System.EnterpriseServices.ResourcePool
internally for connection pooling, so unless your component is very complex
or needs transaction services, installing it in COM+ may be, well - kind
of "overkill".
The Common Language Runtime can instantiate 10 million objects a second
on a typical machine. The jury may still be out, but I doubt you need
"Just in Time Activation", and you certainly don't need "Object
Pooling" with fast, lightweight .NET assemblies. If your components
use transactions, you can control these off the Connection Object in ADO.NET,
or handle it directly in Sql Server with BEGIN TRAN / COMMIT TRAN, etc.
Enough said; you are free to disagree, this is just my opinion. My point
is: do your homework, get the facts, figures and arithmetic, and make
informed architectural decisions as a developer. Do some tests, and evaluate.
Make sure you're comparing apples to apples. Above all, don't accept marketing
"hype" as gospel.
With that in mind, I decided to write a small DB Component that did nothing
but run the CustOrderHist stored procedure in Sql Server against the ever-popular
Northwind sample database, and return a SqlDataReader. I created a Winforms
Test app that calls into the DB component 10,000 times and records the
elapsed time for the run. I compiled a version of the component deriving
from Enterprise Services with AutoDual and Autocomplete, and installed
it in the GAC and COM+ as a Library Application. I compiled a separate
version of the component with no COM+ attributes, just pure C# .NET code.
Aside from these differences, the components are identical. I thought
readers might be interested in my results. As you can see in the chart
below, I did three runs on each test component, and averaged the results.
While this cannot be considered to be an authoritative test scenario,
the difference is so dramatic that it's sufficient to confirm my own belief
that COM+ Services is something most developers really don't need. I might
add that the above represents the best times I have obtained for the COM+
installation; subsequent tests with various settings such as Object Pooling
and Just In Time Activation gave even worse results for COM+.
Those readers who are still using VB 6.0 might be interested to know
that an equivalent component and Forms-based test app I wrote in VB 6.0
with the component being a COM DLL, averaged 38.5 seconds for the above
test run. Comments and criticism are always welcome here at our Eggheadcafe.com
forums. In particular, I would be interested in hearing from other developers
who have actually conducted their own test scenarios, since aside from
an excellent piece by Shannon Pahl of MS, "Understanding
Enterprise Services in .NET", and the articles I quote in my
previous article referenced above, all of which more or less confirm the
information I present here, there is literally NO documentation about
this in any of the literature that I've been able to find.
As is my custom, I have provided a downloadable ZIP file containing the
full solutions for both the NETDBComponent and the COMPlusDBComponent,
each with its own Winforms test harness that kicks off the 10,000 call
test and records the elapsed time in seconds, using the System.DateTime.Now.Tick
which records time in 100 nanosecond resolution. There are two batch files
in the bin/Release folder for the COM+ solution "Regme.bat"
and "UnRegme.bat" which will do (or undo) your Gacutil.exe and
your Regsvcs.exe installations for convenience.
Summary:
COM+ Services is a sophisticated COM - based infrastructure whose primary
value to the developer lies in multi-component transaction control with
object instantiation and pooling services. There is ample support for
these COM+ services in the .NET Framework through the System.EnterpriseServices
namespace as well as attributed programming features. Component Load Balancing
(CLB), available on Application Center Server, enables you to run COM+
Services components on multiple machines with failover and have the components
both appear and function as a single "machine". However, because
of the overhead of COM+ and the fact that expensive cross network calls
are necessary with CLB, it is probably best advised to be used for Database
and other back-end applications. If your objective is speed and scalability,
and you do not need transactions, it would be wise to compare the throughput
using a pure .NET solution, even if you need role - based security in
your application. Such solutions can be hosted through Network Load Balancing
for cost - efficiency and the best possible throughput and scalablity,
and COM+ is neither necessary nor particularly desirable in such configurations.
Download
the code that accompanies this article | http://www.nullskull.com/articles/20021025.asp | CC-MAIN-2014-49 | refinedweb | 1,852 | 51.89 |
15 April 2008 00:38 [Source: ICIS news]
ORLANDO, Florida (ICIS news)--Alternative feedstocks such as recycled cooking oil, animal fats and recovered corn oil from the manufacture of ethanol can be costly for biodiesel production because of their higher levels of contaminants, an executive with engineering and technology firm Desmet Ballestra said on Monday.
“There is no such thing as free lunch with these cheaper feedstock,” said Desmet biodiesel product manager at the the SODEOPEC (Soap, Detergent, Oleochemicals and Personal Care) conference in ?xml:namespace>
Mitchell noted the biodiesel industry’s growing interest in cheaper feedstocks because of surging vegetable oils prices.
Among other alternative feedstock that are being looked into, oil from algae is said to be the most probable bright spot for the biodiesel industry.
“Algal oil is very interesting as it does not compete for acreage and it can grow virtually anywhere,” Mitchell said. “The use of algae will completely change the fundamental economics of the industry.”
Feedstock availability is currently the greatest impediment to biodiesel’s growth, said Mitchell. Feedstock comprises almost 85% of total production costs by most plant economics.
Feedstock costs of rapeseed oil and palm oil as of 9 April were even higher than their corresponding biodiesel B100 pricing, Mitchell said. Crude rapeseed oil was quoted around $1,496/tonne (€957/tonne) while rapeseed oil-based B100 pricing was around $1,440-1,470/tonne.
Refined, bleached and deodorised (RBD) palm oil price was at $1,266/tonne while its corresponding B99 to B100 biodiesel prices were quoted around $1,150 to $1,190/tonne.
Crude soy oil price was quoted at $1,259/tonne while soy-based B99 to B100 biodiesel was said to be around $1,320 to $1,370/tonne, said Mitchell.
“This gives a very sobering indication of what the economics are like for the current biodiesel industry,” he said.
The three-day conference, which is hosted by the American Oil Chemists Society (AOCS), ends on Wednesday.
($1 = €0.64) | http://www.icis.com/Articles/2008/04/15/9116011/waste-fats-based-biodiesel-no-cure-all-desmet.html | CC-MAIN-2015-11 | refinedweb | 332 | 57.71 |
Problem with signal and slot
I'm totally new to QT but I have studied lots of its' documents. However, when I come to code, lots of problem show up.
But I just can't figure it out what happened to these code:
@
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "model.h"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent)
{
ui->setupUi(this);
model *m= new model;
connect(ui->horizontalSlider,SIGNAL(this->valueChanged(int)),m,SLOT(m.setTemp(double) ) );
}
MainWindow::~MainWindow()
{
delete ui;
}
@
I don't know why the compiler is always complaining about the @connect() method@
QObject is an inaccessible base of 'model'
I'll appreciate you if any examples code with explanations are provided. (signal and slots)
Thanks!
Hi and welcome to devnet,
You have several errors:
The signals and slots signature must match when using thie version of connect
You must not give the object in SIGNAL nor in SLOT, just the method.
What is model ? a private QObject ?
I beleive you declare your class like this:
@
class model : QObject
@
But in that case you forgot the public keyword, otherwise it default to private inheritence which is not what you want. So it should be:
@
class model : public QObject
@
Then, like SGaist said, in the SIGNAL and SLOT, you have to put the exact signature, and the argument must match
@
connect(ui->horizontalSlider,SIGNAL(valueChanged(int)),m,SLOT(setTemp(int) ) );
@
Which means you need to change setTemp to take an int and not a double. However, if you are using Qt5, i recommand the other syntax which allow automatic conversion of the argument from int to double:
@
connect(ui->horizontalSlider,&QSlider::valueChanged,m,&model::setTemp );
@
SGaist, Olivier Goffart:
Thank you so much. I really made the mistake which I make a private inheritance. Btw: I found the new sytnax documentation:
Thanks!!
Well, I haven't finished my design yet. My plan is to let user drag the slide to set the tempertaure variable. And the the mehtod setTemp(int) in the class model will call another signal method
changeColor() to set a QWidget (I don't know what widget can I use, a label?) to show the color.
Just like:
@
#ifndef MODEL_H
#define MODEL_H
#include <QObject>
class model:public QObject
{
public:
model();
void setTemp(int temparature);
private:
double temparature;
signals:
void changeColor();
};
#endif // MODEL_H
But I have several quesions here:
@
The function arguments in the SIGNAL() and SLOT() should be equal but I don't have an argument for the method changeColor();
I want to use the method changeColor() to decide the color to be represented with some if else judgement. But I think it's a little redundant. I ask for a good design. Any good suggestion?
Should I write the connect function in the MainWindow class or where?
You can e.g. add a QColor parameter to changeColor so you have only once place that handles that.
Yes, a QLabel is fine for that.
Where will your QLabel be ?
I will put my QLabel in MainWindow class. But I how can resolve the signal and slot problem? They don't have correspoding arguments.
Then add a slot to your MainWindow that takes a QColor parameter and update the QLabel content in there.
Ok, should I define the slot function in MainWindow or can I define the slot function in another class like A and then inheriting A?
I don't want to put all the code together in one class~
If you are thinking about inheriting both from QMainWindow and from A then no, you can't you can only inherit from one QObject and it also must be the first class to be inherited from.
All right. Thank you. | https://forum.qt.io/topic/38922/problem-with-signal-and-slot | CC-MAIN-2018-39 | refinedweb | 613 | 55.13 |
This tutorial series is now also available as an online video course. You can watch the first hour on YouTube or get the complete course on Udemy. Or you just keep on reading. Enjoy! :)
Authentication (continued)
Token Authentication with JSON Web Tokens
The idea behind token authentication is simple.
At this stage of our application, the user can log in with her username and password. We verify the credentials and tell the user that the password is correct.
But if the user wants to call a function, where she needs to be authenticated, she would have to send the credentials again. That’s because the web service is stateless.
This means, we never know who sent a request. Unless we get some credentials with the request.
Instead of entering the credentials every single time with every request, we could store the username and password on the local or session storage of the browser and grab the information from there. But that’s highly insecure because everybody who has access to your computer could have a glance at your password.
That’s where tokens come in.
A token is nothing more than a long string that stores information - or claims - of the user. These claims do not consist of the password, but it could tell the server who the user is and what kind of rights the user might have.
The token is generated with a private key only the server knows. So it’s hard to fake such a token. And we can give that token an expiration date. So, even if someone would be able to steal your token, chances are that the token is invalid as soon as this certain someone tries to use it.
Since this token doesn’t consist of critical information in plain text, we can store it in the browser and automatically send it to the web service with every request.
The service then knows who the user is and may even send a new token back for your next request.
On the website jwt.io, we’re able to have a look at a JSON web token and even how the information is stored in it.
You see the header with the used algorithm and the payload with claims like the name of the user, for instance.
Okay, let’s use JSON web tokens now for our Web API.
JSON Web Tokens (JWT) preparations
There are some things we have to do first before we write the actual code.
We start with the
appsettings.json. Here we enter the security key for the JWT authentication. Below the
ConnectionStrings section, we can create a new section
AppSettings and just enter a new key called
Token and set any kind of string as value. For instance,
my top secret key would be totally sufficient here. Just make sure that it has at least 16 characters.
"AppSettings": { "Token": "my top secret key" },
After that, we have to add some package reference to our application.
In Visual Studio Code you can do that by pressing
Ctrl + Shift + P, enter
Add Package to get the entry
NuGet Package Manager: Add Package and then you can enter the desired package.
We need a total of three package references.
The first one is
Microsoft.IdentityModel.Tokens. This package includes types that provide support for security tokens and cryptographic operations like signing and verifying signatures. You can always choose the latest version.
After adding the package reference, Visual Studio Code wants to execute the restore command to resolve the new dependencies. Of course, we can do that.
By the way, instead of using the NuGet Package Manager, you could have entered
dotnet add package Microsoft.IdentityModel.Tokens into the terminal to add the latest version. Both ways work. It’s up to you what you prefer.
The second package is
System.IdentityModel.Tokens.Jwt. This one provides support for creating, serializing and validating JSON web tokens. Exactly what we need.
And the last one is
Microsoft.AspNetCore.Authentication.JwtBearer. This is a .NET Core middleware that enables our application to receive bearer tokens.
A bearer token is just a general name for the token we have already discussed. Some servers use short strings as tokens, we will utilize structured JSON web tokens. Both can be called bearer tokens.
Alright. After adding these packages, your
.csproj file should be a bit bigger now and include these new references.
<PackageReference Include="Microsoft.IdentityModel.Tokens" Version="5.6.0"/> <PackageReference Include="System.IdentityModel.Tokens.Jwt" Version="5.6.0"/> <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="3.1.2"/>
Now we’re ready to write some code.
JSON Web Tokens (JWT) implementations
In the
AuthRepository we start by adding a new
private method called
CreateToken() that returns a
string and takes a
User object as an argument.
private string CreateToken(User user) { return string.Empty; //token; }
Let’s return an empty
string for now, so that we can already call this method in the
Login() method and set the
response.Data accordingly if the user has entered the correct password.
public async Task<ServiceResponse<string>> Login(string username, string password) { ServiceResponse<string> response = new ServiceResponse<string>(); User user = await _context.Users.FirstOrDefaultAsync(x => x.Username.ToLower().Equals(username.ToLower())); if (user == null) { response.Success = false; response.Message = "User not found."; } else if (!VerifyPasswordHash(password, user.PasswordHash, user.PasswordSalt)) { response.Success = false; response.Message = "Wrong password."; } else { response.Data = CreateToken(user); } return response; }
Back to the
CreateToken() method, we declare a
List of
Claims.
We will have to add some using directives while implementing this method.
The first claim type we add is the
ClaimTypes.NameIdentifier. This will be the
Id of the given
user. The second one is
ClaimTypes.Name, which will simply be the
Username.
List<Claim> claims = new List<Claim> { new Claim(ClaimTypes.NameIdentifier, user.Id.ToString()), new Claim(ClaimTypes.Name, user.Username) };
Next, we need a
SymmetricSecurityKey. This is the secret key from our
appsettings.json file. We need to be able to access the file, so we jump to the constructor of the
AuthRepository and inject the
IConfiguration. Make sure to add the
Microsoft.Extensions.Configuration using directive.
private readonly IConfiguration _configuration; public AuthRepository(DataContext context, IConfiguration configuration) { _configuration = configuration; _context = context; }
Back to the
CreateToken() method, we create an instance of the
SymmetricSecurityKey class and give it a
byte array. We do that with
Encoding.UTF8.GetBytes() and then access the
_configuration to get the proper section with our secret key as value.
SymmetricSecurityKey key = new SymmetricSecurityKey(Encoding.UTF8 .GetBytes(_configuration.GetSection("AppSettings:Token").Value));
With that key, we create new
HmacSha512Signature algorithm for that.
Next is the
SecurityTokenDescriptor. This object gets the information used to create the final token. We’ll give it the claims and an expiration date, for instance.
To set the claims, we set the
Subject with a
new ClaimsIdentity and give it the
claims we created before.
Expires can be set to any date. How about the next day? So
DateTime.Now.AddDays(1).
And finally, the
creds object.
SecurityTokenDescriptor tokenDescriptor = new SecurityTokenDescriptor { Subject = new ClaimsIdentity(claims), Expires = DateTime.Now.AddDays(1), SigningCredentials = creds };
We are almost done.
Now we need a new
JwtSecurityTokenHandler and use this
tokenHandler and the
tokenDescriptor to create the
SecurityToken.
JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler(); SecurityToken token = tokenHandler.CreateToken(tokenDescriptor);
And finally, with
tokenHandler.WriteToken(token); we return the JSON web token as a
string.
private string CreateToken(User user) { List<Claim> claims = new List<Claim> { new Claim(ClaimTypes.NameIdentifier, user.Id.ToString()), new Claim(ClaimTypes.Name, user.Username) }; SymmetricSecurityKey key = new SymmetricSecurityKey(Encoding.UTF8 .GetBytes(_configuration.GetSection("AppSettings:Token").Value)); SigningCredentials creds = new SigningCredentials(key, SecurityAlgorithms.HmacSha512Signature); SecurityTokenDescriptor tokenDescriptor = new SecurityTokenDescriptor { Subject = new ClaimsIdentity(claims), Expires = DateTime.Now.AddDays(1), SigningCredentials = creds }; JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler(); SecurityToken token = tokenHandler.CreateToken(tokenDescriptor); return tokenHandler.WriteToken(token); }
Alright, take a deep breath and let’s test this in Postman. HTTP Method is
POST, we use the login URL, the correct credentials and hit ‘Send’.
{ "data": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJuYW1laWQiOiIxIiwidW5pcXVlX25hbWUiOiJwYXRyaWNrIiwibmJmIjoxNTgyMTA1MDEyLCJleHAiOjE1ODIxOTE0MTIsImlhdCI6MTU4MjEwNTAxMn0.FaxvO8pqLLkBjplEW815-DzekgBwW94gBx--_3n4X5UAs6kRuM2zflpwQ0H2PnCgIuupJKq7EED5c_mC_DI8FQ", "success": true, "message": null }
There is our token!
We can grab this token now and have a deeper look at the JWT debugger on jwt.io.
When you paste the token, you’re able to see the claims we have entered in the code.
And if you enter the correct key, the signature can be verified.
Great! So that’s how we get our JSON Web Token.
Authorize Attribute
To secure web service calls or even a complete controller, we can use the
[Authorize] attribute on top of the controller class or on top of any method you want to secure. But before we can use this attribute, we have to add an authentication scheme to the web service. We do that in the
Startup class.
In the
ConfigureServices() method, we use
AddAuthentication() with the
JwtBearerDefaults.AuthenticationScheme and add some configuration options with
AddJwtBearer().
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddJwtBearer(options => { });
Regarding the
options we initialize a new instance of
TokenValidationParameters and set these parameters.
We want to validate the signing key, so we set
ValidateIssuerSigningKey to
true.
The
IssuerSigningKey is again the one from our
appsettings.json file. So a new
SymmetricSecurityKey that gets the encoded
AppSettings:Token value.
We don’t need to validate the issuer or the audience, hence we set
ValidateIssuer and
ValidateAudience to
false.
So far the authentication scheme.
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddJwtBearer(options => { options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuerSigningKey = true, IssuerSigningKey = new SymmetricSecurityKey(Encoding.ASCII .GetBytes(Configuration.GetSection("AppSettings:Token").Value)), ValidateIssuer = false, ValidateAudience = false }; });
Additionally, we have to add the .NET Core
AuthenticationMiddleware to the
IApplicationBuilder to enable authentication capabilities.
To do that, we add the line
app.UseAuthentication(); above
app.UseAuthorization();. It’s important to add the line there.(); }); }
Alright. With that in place, we can make use of the authentication. To do that we add the
[Authorize] attribute on top of the
CharacterController class.
[Authorize] [ApiController] [Route("[controller]")] public class CharacterController : ControllerBase
If we now want to get all RPG characters with Postman using
GET as HTTP method and the URL
we get a 401 Unauthorized back.
That’s exactly what we want! So now we have to add a token to the header of our request. Let’s login first to get the JSON web token.
Just copy the token and then go back to the
GetAll call.
Here we have to add a header. We add the key
Authorization and add the token as value. The token itself is not enough. We have to add the term
Bearer with a space in front of the token.
Now we’re finally able to get all RPG characters again.
There’s one little thing I’d like to add here.
To call any method of the
CharacterController, we have to be authenticated. But we can make exceptions to the rule. For instance, if we add the attribute
[AllowAnonymous] on top of the
GetAll() method, we can call this method again without being authenticated.
[AllowAnonymous] [HttpGet("GetAll")] public async Task<IActionResult> Get()
In Postman you can see that we can remove the
Authorization key in the header and still get the RPG characters from the database.
Let’s remove the attribute again, because we only want authenticated users to get characters.
Actually, we want users to get their created characters. At the moment they can see all RPG characters.
So we have to make use of the relation between users and characters and we also have to get the claims from the authenticated user.
Read Claims & Get the User’s RPG Characters
One of the beauties of the
ControllerBase class is that it provides a
User object of type
ClaimsPrincipal.
// Summary: // Gets the System.Security.Claims.ClaimsPrincipal for user associated with the // executing action. public ClaimsPrincipal User { get; }
This
User object provides all the claims we added to the JSON web token.
Let’s get the
NameIdentifier which was the
Id of our authenticated user from the database.
In the
Get() method of the
CharacterController we define a new
int variable.
With
User.Claims we access the claims and then we iterate through them or find the one we want with
FirstOrDefault() followed by a lambda expression where the
Type of the claim equals
ClaimTypes.NameIdentifier.
From that result, we grab the
Value and parse it to an
int.
int id = int.Parse(User.Claims.FirstOrDefault(c => c.Type == ClaimTypes.NameIdentifier).Value);
Now we have to pass this
id to the
CharacterService. So we make a little change to the
GetAllCharacters() method and add the
id as an argument in the corresponding interface as well as the service and maybe call this argument
userId to avoid any confusion.
Task<ServiceResponse<List<GetCharacterDto>>> GetAllCharacters(int userId);
Then we can call the method with the proper
userId we got from the claims and make a slight modification to the service method.
public async Task<IActionResult> Get() { int userId = int.Parse(User.Claims.FirstOrDefault(c => c.Type == ClaimTypes.NameIdentifier).Value); return Ok(await _characterService.GetAllCharacters(userId)); }
The only thing we have to change in the
GetAllCharacters() method is how we access the
Characters from the
_context. Instead of returning all of the RPG characters, we make use of the
Where() function and only return the characters that are related to the given user with the help of the
userId.
List<Character> dbCharacters = await _context.Characters.Where(c => c.User.Id == userId).ToListAsync();
Thanks to Entity Framework we can access the related
User object and its
Id and actually get the RPG characters that have the proper
UserId set in the database table.
When we test that with Postman, we get no characters back. That’s absolutely correct because we haven’t set any relations, yet.
But we can fix that real quick.
In the
Characters table, we simply set the
UserId of Frodo to
1. Run the test again, and now we’re getting Frodo! Setting the
UserId for Sam as well, we’re getting both our heroes back.
Great! This works. Of course, we want to add the relations with our code and not manually in the database every single time. Before we do that, let’s sum up what you have learned in this section.
Summary
Congratulations! You successfully implemented JSON Web Token Authentication in your Web API.
We started by creating the User model and adding this new User as a relation to our RPG characters. This is a one-to-many relation, which means that one user can have several characters.
Then we were already diving into the theory of authentication. You learned how authentication works in general with password hash values and password salts and why you should use hashed values and salts. It’s all about security.
We built the user registration and the user login and used a specific cryptography algorithm to hash and verify the entered passwords.
After that, we covered token authentication with JSON web tokens.
You now know what a bearer token is, what you find in that token - like the used algorithm and the payload - and how you can add any claims you want to add to that token.
So you learned how you can create a JSON web token and how to secure your Web API with the
[Authorize] attribute. While doing that, you also learned how to read the claims from the JSON web token and use them to return the correct data to the user.
Great! Now it’s time to add some more relations to our application. What about some skills to use during fights, for instance? And, of course, let’s add the correct relation between users and characters as soon as a character has been generated.
We do all that in the next section.
That's it for the 8th part of this tutorial series. I hope it was useful for you. To get notified for the next part, simply follow me here on dev.to or subscribe to my newsletter. You'll be the first to know.
See you next time!
Take care.
Next up: Advanced Relationships with Entity Framework Core.
Discussion (16)
Hey Patrick, thanks for taking the time to make this awesome series, its one of the best I've found to date! The next sections can't come quick enough!
Any chance you could cover seeding data to a DB so that we can seed the setup data each time a new instance of the application is run? ie. the data like rpg classes which we wouldn't want to recreate each time, but rather just seed the standard 5 or 6 classes that one can choose?
Another idea might be to cover role based calls so that if I am an "admin" user, a call might return different/more data than if I was a "standard" user?
Thanks again and keep the content coming!
PS. In your video series maybe you could also include setup on a Mac? SQL is not available for Mac users so I wasn't able to easily connect to a local DB. It took a little research but I eventually got my SQL DB up and running on a docker image.
Hi,
Thank you very much! :)
These are good ideas. I will remember them and probably add them as "bonus" chapters - at least regarding seeding the database and role based authentication.
I'm afraid I don't have a Mac. But maybe I'll add a chapter with SQLite. You can use SQLite on MacOS, hence you wouldn't have to use SQL Server with Docker. The code should stay the same, just the connection string would be different, I guess.
Let me think about this. ;)
Take care,
Patrick
Hi Patrick,
This is a great series, thanks for the effort you are putting into this.
In this episode, CreateToken is first shown as being declared 'static' but I don't think it should be?
Many thanks!
Hello again and thank you very much again. :)
And you are correct again, I removed the
statickeyword.
Stay healthy!
Patrick
This is the most lucid, clear explanation I could find of how to do this. I'm a long-time .NET developer, but sometimes walking into a new framework there's just so much any abstractions that it's hard to know where to start. Thank you!
Thank you very much, Nicholas! :)
I have learnt a ton from this series, thank you for selflessly sharing Patrick!
Would you mind showing us how we can go about resetting a users password? The username in my test application is a email address.
I have implemented the following mailkit solution and it works pretty well apart from minor bugs that was fixed in the version I am using - Version="2.10.1
github.com/ffimnsr-ext/article-dem...
Thank you Patrick for this awesome series..!!
I like the way you have devided the content step by step.
I am still newbie in asp . net core world and learn a lot from this series regarding web api and repository& service pattern.
God bless you...!!
Regards,
Vinod
Thank you so much! Means a lot! Really glad the course resonates with you.
And please stay tuned! New updates and a complete new series on Blazor WebAssembly are coming soon. :)
Take care & stay safe,
Patrick
Hi Patrick! i trying contact you from Twitter DM and you have disabled sending message, i fowared in graphql-authentication and i need how to show graphql login response with only token and id_user. Help me this problem i pay for it, thanks a lot coleague.
Hey,
Thanks for the wait. DMs work now. I do not quite understand your question. Feel free to elaborate, also on Twitter now if you like.
Take care,
Patrick
When you used AddJwtBearer in Startup.cs, the AppSettings token was converted by the GetBytes method of Encoding.ASCII instead of Encoding.UTF8 like in AuthController. But then I tried running it and it still worked. Could you explain to me why?
Patrick how much, to learn programming .NET CORE 3.1 GRAPHQL, stage login with JWT via private stream??
Happy to help with that. Please send me a DM on Twitter. :)
Should we manually manage tokens in this scenario? Is there an article explaining an example of combining middleware with UserStore to work?
Please share the GIT location and can you implement role based authentication in this sample. | https://dev.to/_patrickgod/authentication-with-json-web-tokens-in-net-core-3-1-29bd | CC-MAIN-2022-21 | refinedweb | 3,398 | 59.09 |
Leaderboard
Popular Content
Showing content with the highest reputation since 09/14/2019 in Posts
- 2 pointsIt depends on the collation setting for the column.
- 1 point'.
- 1 pointUser doesn't care. They don't look at URLs when they're just browsing around, and if they want to share the page they'll either use a share button or copy/paste what's up there. In fact that copying and pasting is a huge reason why ideas like putting session IDs into the URL (PHP's session.use_cookies/use_only_cookies) are strongly discouraged. That said, try to keep it simple. example.com/product.php?id=123 (or /products/123) is fine. Attempting to obfuscate it because you're scared, like example.com/product.php?product_id=uw433hyg5kishev6nyliser6nbyioq2gv49n68of325ob8nq534tb8, is not fine. People don't like things they can't understand: "123" is a number and people are okay with numbers, "B00005N5PF" is some sort of cryptic ID but it's okay too because it's short and easy to understand, but "uw433hyg5kishev6nyliser6nbyioq2gv49n68of325ob8nq534tb8" is a code and codes are for hackers. CoDeS aRe FoR hAcKeRs Probably, yeah. Lots of stuff on the internet already works like that. People are used to it.
- 1 pointI don't like having side conversations not specific to the thread topic,. But since this appears to be more instructive, I thought I'd respond to this question. There are a multitude of uses for hashes aside from passwords. It all depends on the developer identifying a need and implementing it. Basically any time you need to compare complex data. Here are a couple examples: 1. File comparison. For example, let's say you have an application that picks up a file every hour for processing. The file gets written regularly from some process outside of your application. BUT, even though it gets written regularly, it may not have any new data. I might store a hash when I process the file. Then, every hour I will run a hash on the current file contents. If the hash is the same, then I don't process it. There are many use cases where file comparison is needed and where hashing will fill that need. 2. Creating a unique key. In a mp3/music app I worked on, I needed to quickly look for duplicates based on a combination of multiple meta data fields before I inserted new records into the database. Since I was dealing with raw "text" values from the files being process I had not yet determine the unique IDs for some of that meta data. So, I could not use a unique constrain on a single table and it would require a query with multiple JOINs in order to check for a duplicate - on every MP3 file. The processing was executing against hundreds/thousands of files, so I wanted an efficient process. In order to simplify this process, I just created a unique key using a hash on the multiple values and could just check that value against a single table in the DB.
- 1 pointI wasn't trying to sound sarcastic, but I still don't follow what you are trying to accomplish in real life. Why do you want a product name, color and every SKU associated with it on one row? What happens if you have 25 SKUs of Blue Pliers? If this is a report, I think there is a better way to communicate things. Follow me?
- 1 pointWhat does a sample of your data look like before the query and what should it look like after? I showed you mine, you show me yours.
- 1 pointThis begs the question, "Why the phuk are you boring us to death here, on what is basically a PHP site, with all this Python stuff when you could be doing it to the members of "python-forum.io?"
- 1 pointIt's been over 10 years since I messed with Authorize.net API. I first used it to allow people to make single secure payments from a web page of mine to a bank account. That's it. Back then, they didn't have all of this fancy new stuff. Therefore, i really don't have any valuable comparisons to give. Also, I do not consider myself to be anything close to an expert (or even intermediate) level of creating secure systems. I ran a mail server about 6 years ago and that was a total nightmare. Literally, as soon as the server went live, it was plagued with bots and whatever else that started using my SMTP server as an open relay and my IP became blacklisted pretty quickly for spam. I google ad nauseum for how to secure this and how to secure that and what the best practices were, but I was in way over my head with absolutely no budget for anything to help me out. After 4 years of trying to maintain a mail server that successfully sent and received mail with no issues (though, there were still issues), I finally was able to convince my boss to switch to a Google Business account and let them handle all of that guff. Mail servers are an absolute nightmare that I wouldn't wish upon my worst enemy. I mean, installing SSL certificates is easier than maintaining a mail server. Anyway, this topic has nothing to do with mail servers. You know, I've never tried to even perform a breach in my life. I've never even tried to breach myself. It's an exhausting realm of web development that I avoid like the plague. Really, what is secure? Unless you're a Fortune 1000 company or something, I doubt you're going to have a hoard of people trying to hack your site; don't flatter yourself. I was a web developer and ran probably the least secure site, in my opinion, but the audience for that site was so minuscule compared to that of large corporations. It's about the same concept as viruses. Most people running a *nix system do not really need to worry about viruses because *nix systems do not take up much of the market share for personal computers. However, Windows is always being probed and poked and molested because it has a gigantic user-base. In any case, Authorize.net seems to have improved pretty much everything they had when I messed with it over a decade ago. Most, if not everything, of what I utilized is gone or deprecated. I mean, I would trust it. At the end of the day, though, the most secure you'll ever be able to make your system is if you cut it off from the net. If it's not on the internet, you really have nothing to worry about. If you're not connected to the internet, you're not going to get any viruses anytime soon. I know that's not an answer, but it's a hard truth to accept. Online banking is really awesome in my opinion, but I know that at any particular time, something could go awry and cause my life hell.
- 1 pointIf you are using Authorize.net, then you can setup Customer Payment Profiles, using their API. You can then store (or relegate) the customer payment profile id to your users table in your database. Then, you don't have to worry about storing credit cards info anywhere. Maintaining reconciliation with Authorize.net customer profiles and your own database/table of users can allow you to do what you're attempting to do. Using the API, you can send a request for the current users list of payment profiles. If there are more than two profiles, then you can write in whatever logic you want in your PHP script, for instance, aborting the chance of a transaction from the user, showing them an error message. Everything you need and more is available in their API.
- 1 pointAccording to your first post you have an array of paths/filenames EG $arr = [ 'xxx/yyy/aaa-bbb-xxx.txt', 'xxx/yyy/aaa-vcf.txt', 'xxx/yyy/aaa-bbb-vbn.txt', 'xxx/yyy/aaa-bbb-vvv.txt', 'xxx/yyy/aaa-bbb-vcf.txt', 'xxx/yyy/aaa-bbb-xcv.txt' ]; If that is the case, I think your preg_split line needs to add a "." so the file extension is excluded. I.E. if(preg_split("/[-.]+/", $userBase)[2] == $keyword) ^ then echo array_search_partial($arr, 'vcf'); //--> 4 Also, your function should return something (false ?) if no match is found.
- 1 pointAlternatively you can use the "@@" prefix for system variables E.G. mysql> select user(), @@hostname, @@port; +----------------+-----------------+--------+ | user() | @@hostname | @@port | +----------------+-----------------+--------+ | root@localhost | DESKTOP-DCGAC4S | 3306 | +----------------+-----------------+--------+
- 1 pointAnother way is to simply: ALTER TABLE table_name SET AUTO_INCREMENT=0; Hope that helps.
- 1 pointYou can use this regex to match internationally, even Japanese. /([\w -'\p{L}]+)/
- 1 pointIANAL. Check Articles 12-22 for the most significant parts. No, there does not have to be a means to contact the site owner, but there does have to be a way for the user to request their information, and/or that the information be destroyed. Which means some means of contact. If you don't already have a contact page then you can put the information in your privacy policy.
- 1 point
- 1 pointUse glob() function which returns an array of the files. E.G. $folder = 'C:/Users/... /chartSamples/' ; foreach (glob("{$folder}*.png") as $fn) { echo basename($fn) . '<br>'; } giving column.png doughnut.png line.png radar.png rosechart.png stacked.png
- 1 pointThe $freqs array contains the counts for P1, P2 , P3 for each digit... $freqs = Array ( [0] => Array # digit "0" ( [0] => 4 # P1 [1] => 7 # P2 [2] => 1 # P3 ) [1] => Array ( [0] => 3 [1] => 2 [2] => 6 ) [2] => Array ( [0] => 4 [1] => 4 [2] => 6 ) which, coincidentally, is the same structure as the output table. You now loop through the array and for each digit (row) loop through its array (positions columns) and build the table. // // create frequncy table and calc digit totals // $totals = array_fill_keys(range(0,9), []); $tdata = ''; foreach ($freqs as $n => $occs) { $tdata .= "<tr><td><b>$n</b></td>"; foreach ($occs as $o) { $tdata .= "<td>$o</td>"; } $total = array_sum($occs); $totals[$n] = [$n,$total]; $tdata .= "<td>=</td><td><b>$total</b></td></tr>\n"; } My complete solution...
- 1 pointBinding is useful when you want to process records in a loop. Bind the variables first then, in the loop, update the values and execute. EG $data = [ [ 1, 'Curly'], [ 2, 'Larry'], [ 3, 'Mo'] ]; $stmt = $db->prepare("INSERT INTO testuser (id, username) VALUES (:id, :user)"); $stmt->bindParam(':id', $id, PDO::PARAM_INT); $stmt->bindParam(':user', $username, PDO::PARAM_STR); foreach ($data as $user) { list($id, $username) = $user; $stmt->execute(); } EDIT: But, with PDO, there is the alternative that I used before EG $data = [ [ 1, 'Curly'], [ 2, 'Larry'], [ 3, 'Mo'] ]; $stmt = $db->prepare("INSERT INTO testuser (id, username) VALUES (?, ?)"); foreach ($data as $user) { $stmt->execute($user); } where the values are passed as an array when executing.
- 1 pointThe answer is "normalize". Don't store comma-separated lists (especially when the list items are ids). The role_access table should be CREATE TABLE `role_access` ( `id` int(10) NOT NULL PRIMARY KEY, `page` int NOT NULL, `role` int(7) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1; INSERT INTO `role_access` (`id`, `page`, `role`) VALUES (1,1,1), (2,2,1), (3,3,1), (4,4,1), (5,5,1), (6,2,2), (7,4,2), (8,5,2); Now you can join to the page table to get the page name
- 1 pointIn that case you need to specify the banner you are looking for in the LEFT JOIN's ON clause EG (looking for banner #2) SELECT DISTINCT f.id as frameId , f.title as frameTitle , bf.banner_id FROM frames f LEFT JOIN banner_frame bf ON bf.frame_id = f.id AND bf.banner_id = 2 ORDER BY f.id; +---------+------------+-----------+ | frameId | frameTitle | banner_id | +---------+------------+-----------+ | 1 | Frame 1 | 2 | | 2 | Frame 2 | NULL | | 3 | Frame 3 | NULL | | 4 | Frame 4 | NULL | | 5 | Frame 5 | 2 | +---------+------------+-----------+
- 1 pointJust about all of your code is misplaced. The PHP code should be first. (except for output which should be in the html/body section Your <form> should be in the html/body section. Your <options>..</options>s should be between the <select>..</select> tags plus your course material appears to be many years out of date.
- 1 pointYour ?> is misplaced. It needs to be at the end of the PHP code and before the HTML code.
- 1 pointPHP and ASP are two very different languages and programming styles. Don't try to find PHP versions of the ASP things you know and instead learn the PHP way of doing it. Whatever editor you want. There is no best one.
- 1 point... or you could have used <?=$tdata?> as I did. FYI, my PDO connection code is... $dsn = "mysql:dbname=$database; host=$host; charset=utf8"; $db = new pdo($dsn, $username, $password, [ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, PDO::ATTR_EMULATE_PREPARES => false, PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC ]); so that any errors are reported
- 1 pointtry WHERE wa.nurse=? or you can go with: WHERE wa.nurse=:nid $stmt->bindParam(":nid", $nid); $stmt->execute(); @Barand already showed you this and he showed you how to make your query beter readable:
- 1 pointI find the easiest way for this type of report is to store the data in a structured array as you process the query results. The array structure should reflect the report structure. EG Array ( [Week 38 Thursday 19/09/2019] => Array ( [shift] => 1 [ward] => ICU [patients] => Array ( [0] => Array ( [bed] => 1 [id] => HSSC014 [name] => Patient E ) [1] => Array ( [bed] => 3 [id] => HSSC019 [name] => Patient B ) [2] => Array ( [bed] => 6 [id] => 3bb2dc [name] => Patient J ) ) ) It is then just a matter of looping through the arrays to produce the desired output. HINT: You want to show patients on each day where that date is between the patients admission date and discharge date (ie.. the patient is there). It makes the logic much simpler, therefore, if unknown discharge dates (sometime in the future) are set to the "infinity date" (9999-12-31) +-----+------------+------+------+------------+----------------+-------+ | aid | patient_id | ward | bed | from_date | discharge_date | notes | +-----+------------+------+------+------------+----------------+-------+ | 8 | 3bb2dc | 7 | 6 | 2019-09-19 | 2019-09-22 | NULL | | 9 | HSSC018 | 5 | 1 | 2019-09-19 | 9999-12-31 | NULL | <-- discharge date not yet known +-----+------------+------+------+------------+----------------+-------+ This code builds the array from the query Now you just have to loop through the array with a couple of nested foreach() loops to output, like this ... $$dt</th><th class='day' colspan='5'> </th></tr> <tr><td> </td><td class='ca'>{$ddata['shift']}</td><td>{$ddata['ward']}</td><td colspan='3'> </td></tr>\n"; foreach ($ddata['patients'] as $p) { $tdata .= "<tr><td colspan='3'> </td><td class='ca'>{$p['bed']}</td><td>{$p['id']}</td><td>{$p['name']}</td></tr>\n"; } } ?> <!DOCTYPE html> <html> <head> <meta http- <title>Sample</title> <style type="text/css"> body { font-family: verdana,sans-serif; font-size: 12pt; padding: 20px 50px; } th { padding: 16px; text-align: left; background-color: #396; color: #FFF; } th.day { background-color: #EEE; color: black; } td { padding: 8px 16px; } .ca { text-align: center; } </style> </head> <body> <table> <tr><th>Date</th><th>Shift</th><th>Ward</th><th>Bed</th><th colspan="2">Patient</th></tr> <?=$tdata?> </table> </body> </html> Results
- 1 pointUse a DatePeriod $dt1 = new DateTime('next sunday'); $diwk = new DateInterval('P7D'); $di6 = new DateInterval('P6D'); $num_weeks = 8; $period = new DatePeriod($dt1, $diwk, $num_weeks-1); foreach ($period as $d) { echo $d->format('M d') . ' – ' ; $end = $d->add($di6)->format('M d'); echo "$end<br>"; }
- 1 point.. therefore the query being executed is SELECT * FROM UserList WHERE UserID=E0000001 1 ) String variables in a SQL statement need to be in single quotes otherwise they are thought to be a column name. 2 ) The variable shouldn't be there at all - you should be using a prepared statement and passing the id as a parameter $query = mysqli_prepare($con, "SELECT * FROM UserList WHERE UserID = ? "); $query->bind_param('s', $id); $query->execute();
- 1 pointOK, I need to correct myself I was mixing up some of the techniques. I went back and reviewed a training session about account management on Pluralsight (great training material). Troy Hunt (the author) recommends the following approach to prevent account enumeration: Upon submitting a form to register an account provide the user a common message along the lines of "Your registration has been processed, an email has been sent to complete your account". They would get this message in the case of a successful registration or a duplicate username/email. 1. If the registration was successful, the user receives an email to confirm the account 2. If there was a duplicate, send an email to the user that the account was already registered. Of course, this requires a more complicated process of user registration.
- 1 pointAgreed, you should let the user keep trying to register until eventually, in desperation, they try a different user name. At that point, when it works, they realize that the problem was a duplicate username. But at least, you didn't tell them. Just to add some clarity here. @benanamen is correct in that you don't want to create a system that allows a malicious user to easily ascertain usernames from your system - specifically in mass. And, @Barand is correct that it makes no sense on a registration page to NOT tell a user you could not create their account because they chose a user ID that is already in use. The problem to solve is to prevent a malicious user from farming the system to create an inventory of all your users through automation. The malicious users could then iterate through all the users trying different common passwords until they get a match. If this is important, there are various solutions that can be employed: 1. CAPTCHA or some other means that requires human interaction 2. Slow them down. Introduce a delay of a few seconds or more in the registration process which would make the time to get a full list lengthy even with automation. Easy to implement and would not be noticed by users. (as long as it is not excessive) 3. Keep a log of requests by IP, session, or some other means. If those attempts exceed a threshold you set then either prevent new requests or introduce an even longer delay. More difficult to implement. There are other ways (such as using analytics) to programatically detect malicious submissions. But, you need to determine the risks to your application and the costs associated with any potential data breach in order to weight how much effort to invest. EDIT: This is a registration page where a user is creating an account - not an authentication page. You should never tell a user the reason you could not authenticate them (i.e. username not found or password wrong). But, that is not what this was about
- 1 pointI didn't realize this was a challenge question. You're all being lazy relying on the date function 😁 function isFridayThirteenth($year, $month, $day) { $m = (($month+9)%12)+1; $C = floor($year/100); $Y = $year%100-(($m<11)?0:1); $W = ($day + floor(2.6*$m - 0.2) - (2*$C) + $Y + floor($Y/4) + floor($C/4)) % 7; return ($W==5 && $day==13); }
- 1 pointPerfect! I see it now. I ended up following your advice and created an fgetcsv PHP script (which only took me an hour, not the 2 days I anticipated :-) Now instead of the hassle of opening the file in excel, copy-and-pasting into text editor, creating a mySQL lookup, formatting the data to paste back into Excel, etc., all I have to do is open up SSH and type "php my_new_script.php" and voila.
- 1 pointCreate a view for yourself that shows threads and the initial posts. It'll make life easier. Though I'm really skeptical that XenForo doesn't have a way to get that information sort of finding the first post for a given thread ID - after all, since there is an ID in the first place, surely there is some source generating that ID, right? Once you have the view the query to find users is trivial.
- 1 point.
- 1 pointTruncate the table. It will also delete all the data. TRUNCATE TABLE table_name; TRUNCATE TABLE table_name;
- 1 point...
- 1 point.
- 1 pointLol first impression was "who the hell is starting yet another thread in caps". But I think many people here agree on what you just wrote. I don't even bother reading vanilla js doing ajax stuff.
- 0 points
This leaderboard is set to New York/GMT-04:00 | https://forums.phpfreaks.com/leaderboard/?in=forums-pid | CC-MAIN-2019-43 | refinedweb | 3,440 | 71.95 |
How to Check file size in Python
In this article, we will learn to check the size of a file in Python. We will use some built-in functions and some custom codes as well. Let's first have a quick look over why we need file size and how can we calculate the file size in Python.
Check the File Size in Python
It is important to get file size in Python in case of ordering files according to file size or in many use case scenarios. The output of file size is always in bytes. The value could be passed as multiples of the file system block size so that further computation is easy.
We will learn four ways to check file size using the path and os module.
path.stat() function
os.stat() function
os.path.getsize() function
seek() and tell() function
Check File Size using Path.stat() function in Python
Python language has
os module which helps python programs to interact with the operating system and provide the user with functionality. Here
stat() is a function of the os module. For this, here we are using
pathlib library. In the below example, we have used
st_size() function to find the size of any given file.
Syntax
Path(filename).stat().st_size()
Example
It returns an object which contains so many headers including file created time and last modified time etc. among them st_size gives the exact size of the file.
from pathlib import Path var1 = Path('filename.txt').stat() var2 = Path('filename.txt').stat()
Explanation: The first path is imported from pathlib library which is an easy way to perform file-related operations. The filename is passed with
stat() function to get details of file and then st_size() is used to return file size in bytes.
Check File Size using os.stat() function in Python
Comparing with the above example, instead of using pathlib, we have used os module. Thereby performing
os.stat() function. st_size() property of the object is returned by os.stat() function.
Example
import os var1 = os.stat('filename.txt') var2 = os.stat('filename.txt')
Check File Size using os.path.stat() function in Python
The third way of finding the size of the file is by using
os.path.getsize(). It also involves the os module. Implementation of os.path.getsize() is simple and easy to process as compared to os.stat(file).st_size(). It raises
os.error if the file does not exist or is inaccessible.
Syntax
os.path.getsize("file path/file name")
Example
In this, we have to provide the exact file path(absolute path), not a relative path.
import os var1 = os.path.getsize('filename.txt') print("File size- ", var1)
File size- 93
Check File Size using seek() and tell() function in Python
The above-given methods work for real files, but if we need something that works for "file-like objects", the solution is using seek/tell file handling functions. It works for real files and StringIO's.
In this, seek() will take the cursor from beginning to end, and then tell() will return the size of the file.
seek()- This function is used to change the cursor position of the file to a given specific position. The cursor defines where the data has to be read or written in the file.
tell()- This function returns the current file position in a file stream.
Let us look at the below example and see how the seek() and tell() gives file size.
import os with open('filename.txt') as f: f.seek(0, os.SEEK_END) size = f.tell() print("File size- ", size)
File size- 93
Explanation-
In the above example, f is a file type object made while opening the file. f is used to perform the seek function. As we can see 0 and os.SEEK.END is used in the parameters. First, the pointer is placed at the beginning of the file i.e. 0, and then
SEEK_END() will place the pointer at the end of the file. Further, in the next line, f.tell() is used to tell the current position which is equivalent to the number of bytes the cursor has moved. This will store the size into the size variable starting from 0 to end.
The difference between seek/tell and os.stat() is that you can stat() a file even if you don't have permission to read it. So, the seek/tell approach won't work unless you have read permission.
Conclusion
In this article, we learned how to check the file size by using several built-in functions such as
seek(),
tell(),
st_size(), and
os.path.getsize(). We used some custom codes and file handling concepts as well. For example, we used open() function to open the file and then used functions to check file size. | https://www.studytonight.com/python-howtos/how-to-check-file-size-in-python | CC-MAIN-2022-21 | refinedweb | 805 | 75.91 |
SPACE
rasterio
Overview
rasterio is a Python package which aims to provide a friendlier API to GDAL than GDAL’s own Python API (which feels very C-like). It is an
open source project on GitHub that is created and maintained by
mapbox.
Most of the code examples below assume you have imported
rasterio into the current module with:
import rasterio
rasterio’s API documentation can be found at. Be warned that it is very incomplete (as of November 2019) — there is missing documentation for many
rasterio features.
Reading A GeoTIFF
There are two common ways to do this, with or without a context manager.
With a context manager:
with rasterio.open('example.tif') as dataset: pixels = dataset.read() # This will read all bands # Dataset (the file) is closed automatically once you leave the context
Without a context manager:
ds = rasterio.open('example.tif') pixels = ds.read() # You have to remember to close the dataset yourself ds.close()
The
read() function as used above will read all bands of data from the
.tif file. You can read a specific band by providing a band index to
read(). The band indexes start from
1, just as they do in GDAL. The following example just reads the first band:
pixels = ds.read(1)
You can also open a raster by passing in a
Path object to
open() (Python v3.4 or higher only):
from pathlib import Path file_path = Path('example.tif') ds = rasterio.open(file_path)
Getting Projection Info
The projection information in obtained through the
Dataset.crs property:
ds = rasterio.open('example.tif') ds.crs # EPSG:4326
You can get also get the “Well Known Text” (WKT) syntax:
ds.crs","9122"]],AXIS["Latitude",NORTH],AXIS["Longitude",EAST],AUTHORITY["EPSG","4326"]]
To get the Affine transformation:
ds.transform # | 0.00, 0.00, 26.04| # | 0.00,-0.00,-15.29| # | 0.00, 0.00, 1.00|
Converting Coordinates To Pixels
rasterio provides the
index() function in the
Dataset class to convert coordinates from the projection space (e.g.
(latitude, longitude) if in WGS 84) of a dataset to
(x, y) pixel coordinates in the image.
lat = [ 10.0, 20.0 ] lng = [ -120.0, -110.0 ] x, y = ds.index(lat, lng)
Reprojection
reproject() does not create the destination array for you, you have to create the array yourself and pass it into the function.
rasterio.reproject( src_array, dst_array, src_transform, src_crs, dst_transform, dst_crs, resampling)
Masking
More on masking can be found at.
Common Errors
rasterio._err.CPLE_AppDefinedError: Too many points (10201 out of 10201) failed to transform, unable to compute output bounds.
This error usually occurs if you are trying to reproject an image into a projection space that does not contain the image (e.g. images are in completely different UTM zones).
External Info
The documentation for the latest version of
rasterio can be found at. | https://blog.mbedded.ninja/space/rasterio/ | CC-MAIN-2021-17 | refinedweb | 473 | 58.38 |
Add <http:serverpost/> widget for extensions/xmlextras
RESOLVED WONTFIX
Status
()
People
(Reporter: WeirdAl, Assigned: WeirdAl)
Tracking
Firefox Tracking Flags
(Not tracked)
Attachments
(1 obsolete attachment)
I've created a demo widget for an XUL application to post information to a server. The demo is located at . I propose adding an <xul:serverpost/> widget similar to this, in the extensions/xmlextras area. I'm willing to do all the work necessary to support this widget, including documentation. The XBL binding, I suggest, would be located in the chrome at chrome://xmlextras/content/serverpost.xml. I prefer this over adding it to chrome://global/content/bindings as it depends on the XMLHttpRequest() object. Appropriate CSS, DTD localizations, and /extensions/xmlextras/jar.mn will be included in my patch. This serverpost element, unlike <html:form/>, does not replace the content document when the HTTP response is received. Instead, the server may respond with any type of message it desires (text/plain or application/xml is recommended), and the serverpost element includes an event handler (onHTTPResponseReceived) to handle the server's response. It also includes an event handler (onHTTPResponseError) in case an error happens in the XMLHttpRequest response. The idea is that the XUL application will receive the response as a data string or XML document, and process the response appropriately -- perhaps going to another panel in an <xul:deck/> if desired. Reloading or loading a new XUL document should be unnecessary. Opinions and feedback strongly requested.
If it relies on xmlextras, it shouldn't be part of XUL probably. Perhaps implement it using regular necko nsIFoo interfaces rather than xmlhttprequest? Also, web servers already can do this if you have a web service, and xmlhttprequest can do get/post. What are the advantages of your method vs doing a get/post using xmlhttprequest?
doron: I created the widget as an extension to the current XUL implementation, so that XUL applications can come from the server, run on the client, and interact with the server. Perhaps Necko would be better, but I don't know jack about Necko. This widget uses XMLHttpRequest, so on first glance directly using XMLHttpRequest may be better indeed. However, the widget takes the time to preformat the message it submits as a multipart/form-data, which CGI scripts (a well-established technology) can process them. PHP can also receive multipart/form-data. Really, this widget can be seen as a connection between the XUL application running on the client and the XMLHttpRequest which handles the true connection. XUL app <=> <serverpost/> <=> XMLHttpRequest <=> server-side programming I deliberately do not plan on adding it to the mainstream XUL widgets; at this point I'm not even considering modifying xul.css. Instead, I would suggest the XUL app using this include an extra processing instruction to reference a stylesheet specific to the serverpost widget. With the widget implemented and enabled, and an extra attribute on each XUL control you wish to submit, it's just automating the process for the XUL app developer.
bsmedberg has informed me that method="post" does not require enctype="multipart/form-data"; enctype="application/x-www-form-urlencoded" works just fine if files are not included. So I'll make that the default instead.
Now who can I get to review this patch, and who to test it besides myself?
I wrote this patch so that people can in the future add other XML extras extensions. Usage: (1) Add to XUL or other application: <?xml-stylesheet type="text/css" href="chrome://xmlextras/skin/"?> (2) Add namespace declaration to document root element: xmlns:http="" (3) Elements wishing to be controls submitted to the server need an http:serverpostnames attribute with a space-separated list of http:serverpost element names. (4) The type attribute of the http:serverpost element reflects the content-type submitted. Default is application/x-www-form-urlencoded, with a multipart/form-data as an alternate.
Summary: Add <xul:serverpost/> widget for extensions/xmlextras → Add <http:serverpost/> widget for extensions/xmlextras
Wouldn't the stuff in my Web Forms proposal be better than this proposal? If not, why not?
"It is not ready yet! At all!" Hixie, I actually respect your proposal a great deal. To be honest, I do not care how we get XUL controls' values submitted to a server and the response processed. I simply care that we do it, according to technologies popularly supported. I cannot explicitly state a strong argument for using my proposal over yours at this time, except that this is a single element and your proposal talks about several elements which are currently implemented through XHTML (and some which aren't implemented yet). Nonetheless, I'm going to try, as the devil's advocate :) XUL works, for the most part (I have some fun with it in DOM bugs, but that's irrelevant here). Your spec would require using XHTML-namespaced elements, and possibly that might not have the same meanings (<checkbox/> vs <input type="checkbox"/>) or may not even work quite the same (doesn't XUL support three-state controls of one sort or another?). If XUL were to obsolete its controls and move to your web forms proposal, then yes, I would agree wholeheartedly that implementing your proposal would be a better solution. If you needed XBL bindings for a few of them (repetitions come to mind), I'd be happy to provide them. In any case, this is an RFE; if you think Web Forms 2.0 is a better approach and there's a bug on file to implement that, feel free to WONTFIX this one. As I said before, I think we should have some mechanism to automate the process for submissions. (Though to be honest, until such an implementation was actually added to Mozilla, I'd probably rebel and use my binding anyway from a http URI. If I have something that works now, and I need something working to make my project go forward, why shouldn't I use it, at least temporarily?)
I guess I'm confused. What is <serverpost> for exactly? Could you provide a specification for it? I'm confused. My understanding was that the form controls weren't really relevant as far as it went.
Cool idea. I only scanned the code so maybe I'm wrong, but it looks as if there is only a limited set of XUL input widgets being supported. I would suggest to support any widget which implements the value property, no matter what is its type. This way new XBL bindings can be supported.
re comment 8: See for the specification I've drawn up. (At this time, it is a very rough draft, in plain text.) re comment 9: Not every element is appropriate for doing that to. In the case of menulists which offer multiple selections, the value property is inadequate and possibly misleading. (I haven't looked.) For radiogroups, the radiogroup may have a value, and so do the radio buttons. It's a little difficult to do everything properly. Also, I should note this is planned as an extension, and the limited support of XUL widgets by this patch is simply because I haven't been able to spend the time researching to see which elements to support and which not to. It's a fairly limited set right now because the three I found were the most obvious candidates. The fact remains adding support for these XUL widgets is really a hack. A better solution would be to add support to the XUL widgets themselves, but I don't feel making changes to, say, textbox.xml to support the serverpost widget specifically is a wise decision.
Two notes: a. Menulists aren't multiselect. Although we have multiselect lists we don't attempt to track a value for them, the nearest we have are checkbox listboxes for example in scripts preferences which works like a number of checkboxes. b. radios are to radiogroups as menuitems are to menulists - they are the part of the ui that lets the user give the radiogroup/menulist a value.
Ok, I understand now. Doing this makes sense, and would be in parallel with the Web Forms stuff, specifically for XUL.
Comment on attachment 140376 [details] [diff] [review] <http:serverpost/> patch This patch already differs from the spec I wrote. New patch will be forthcoming to better handle errors and conform to the spec better. Note to self: Include filea for mozilla.org as documentation. That is should be a requirement before checkin.
Attachment #140376 - Attachment is obsolete: true
Target Milestone: --- → mozilla1.7beta
In coding terms, what I meant is to set a default case for xul elements in getSuccessfulControls to be response[response.length] = {name: aNode.getAttribute("name"), value: aNode.value}; This way anyone which wish to use the serverpost widget for his own XBL widgets, will know that all he has to do is to implement the value property. IMHO this will make serverpost to be more future proof.
Actually it's not just the value property, but you could check the implementation - if the element supports nsIDOMXULSelectControlElement then it's a radio or menulist type element, which uses a value; if it supports nsIDOMXULTextboxElement then it also uses a value; if it supports nsIDOMXULCheckboxElement the it uses the checked property. This probably needs <listitem type="checkbox"> to be fixed to advertise itself as a checkbox element and <radiogroup> to be fixed so that setting the value selects the appropriate radio button.
(In reply to comment #14) > This way anyone which wish to use the serverpost widget for his own XBL widgets, > will know that all he has to do is to implement the value property. IMHO this > will make serverpost to be more future proof. I'm thinking about it... my instinct is to say, "no, this is not a good idea for XUL elements." But if I can't come up with a specifically strong reason not to do so, I'll probably make it happen. You need two other attributes on the XUL element anyway for serverpost to pick up on it...
Per discussions on IRC, this bug is almost a WONTFIX. The widget itself is good; pretty solid, actually. It just doesn't have a home. doron says it shouldn't go in extensions/xmlextras. It would need approval from hyatt to become part of the XUL toolkit. I can't think of any other ideas for good places within the Mozilla source tree for it to go, and it doesn't justify creating a whole new extensions/serverpost directory for. mozdev has been proposed as a possible good home, particularly in JSLib. I've been meaning to give them an assert() function anyway (which would WONTFIX another bug I've filed). Anyone in particular want to come to my rescue here? petejc? hyatt?
I'm not saying it shouldn't, I just think that adding a new xul/xbl element only if you build xmlhttprequest might cause some issues.
heikki: do you want this code added to extensions/xmlextras, or should I try somewhere else? (I apologize if the question sounds abrupt, but I should probably have asked you this a few weeks ago before working myself into a fever.) doron: what sort of code changes to a Makefile (God help me) would I need to make to force the file to check for XMLHttpRequest first?
I think the path of least resistance would be to make it a mozdev project. Then see how popular it gets and what people think about it. If it gets good reception, then do another proposal to get it included in the main Mozilla tree.
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → WONTFIX | https://bugzilla.mozilla.org/show_bug.cgi?id=231833 | CC-MAIN-2019-26 | refinedweb | 1,955 | 63.49 |
In this article you will learn how to isolate yourself from change by taking advantage of the Provider Model.
Designing your applications using the Provider Model will allow you to swap components out at runtime, thus allowing you to upgrade them easily.
Developers face the problem of constantly changing technology. When Microsoft releases a new version of a data provider, or a customer decides to switch databases from Oracle to SQL Server, this can cause you to have to rework a lot in the code you’ve already written. You can avoid much of this rework if you take the time to plan and code for such changes. One recommended way to do this is to develop components that take advantage of the Provider Model.
Microsoft provides a set of Provider Model Templates that you can download from their Web site. The difference between their model and the one that I will explain in this article is that Microsoft’s are really designed for Web applications. The method I show is UI agnostic.
A provider is a class or a component that provides specific functionality to an application. However, the Provider class used will not be known until runtime. In this article, you will learn how to create a data provider that will allow you to change from SQL Server to an OLE DB provider with no code changes! You will just have to change a setting in a configuration file.
Microsoft provides a set of Provider Model Templates that you can download from their Web site at. The difference between their model and the one that I will explain in this article is that Microsoft’s are really designed for Web applications. The method I’ll show is UI agnostic. This means that you can use the same technique in Windows Forms, ASP.NET, Windows services, Web services, etc.
Creating a Provider
To build a provider you need to take advantage of a few technologies available in .NET. Essentially you’ll perform these four steps:
Before you learn how to implement a data provider, you need to look at three of the items that help you create a provider.
The Configuration Manager Class
The ConfigurationManager class, located in the System.Configuration.dll, is used to retrieve application settings from a configuration file. This configuration file can be a Windows Forms configuration file or a Web.config file in an ASP.NET Web application. ConfigurationManager replaces the old ConfigurationSettings class from .NET 1.1.
The ConfigurationManager class contains two properties that are designed for specifically retrieving values from two built-in sections in .NET 2.0 configuration files; namely AppConfig and ConnectionStrings. So given the following entry in a configuration file:
<appSettings> <add key="StateCode" value="CA" /> </appSettings>
You can use the following code to retrieve the StateCode value:
In C#
ConfigurationManager.AppSettings["StateCode"];
In Visual Basic
ConfigurationManager.AppSettings("StateCode")
If you have the following entry in the configuration file:
<connectionStrings> <add name="Northwind" connectionString= "Server=Localhost;Database=Northwind; Integrated Security=True"/> </connectionStrings>
You can use the following code to retrieve the Northwind connection string.
In C#
ConfigurationManager. ConnectionStrings["Northwind"].ConnectString;
In Visual Basic
ConfigurationManager. _ ConnectionStrings("Northwind").ConnectString
Abstract Base Class or Interface
You use an abstract base class when you have a class that can implement some or most of the functionality of the classes that will be inheriting from it, but the inheriting class must provide the actual implementation. In other words, the class that inherits from the abstract base class will do some of the work and the abstract base class will do some of the work.
You use an Interface when there is no common code that could be put into a base class. In this case, you use an Interface so each class has a list of standard methods and properties that whatever consumes that class can rely on being there and being implemented.
System.Activator Class
Sometimes in an application you do not know what class to load until run time. This is normally due to a data-driven scenario where the name of the class is placed into a database table or in a configuration file as a string. Your application then needs to use this at run time to create an actual instance of a class. To do this, you can use the System.Activator class to build an object from a string. The example below shows how to dynamically create an instance of an object at run time.
In C#
IDataClass cust; Type typ; typ = Type.GetType("Customer"); x = (IDataClass)Activator.CreateInstance(typ); MessageBox.Show(cust.GetData());
In Visual Basic
Dim cust As IDataClass Dim typ As Type typ = Type.GetType("Customer") cust = CType(Activator.CreateInstance(typ), _ IDataClass) MessageBox.Show(cust.GetData())
In the code above you create an instance of a Customer class. This code assumes that the Customer class either inherits from an abstract base class or implements an Interface named IDataClass.
Building a Data Provider
To illustrate the points outlined so far in this article you can create a data provider to use SQL Server, OLE DB or the Oracle native providers based on settings in a configuration file. The advantage of this approach is your User Interface layer will only ever call the DataLayer class for all DataSets, DataReaders, commands, etc. The DataLayer class will ensure that the appropriate provider is used based on settings in the Configuration file (Figure 1).
Sample Application
To test out this model you can create a sample Windows Form application with a GridView control on a form that will load the Customers table from the Northwind database (Figure 2).
In the Form Load event procedure you will call a method named GridLoad. This method will be responsible for calling the GetDataSet method in the DataLayer class.
You can use the System.Activator Class to dynamically create an instance of a class at run time from a string variable.
The GridLoad method must read the appropriate connection string from the configuration file for the application. For that purpose there is an AppConfig class that you will create to return the appropriate connection string. The code in the UI layer is very generic and you do not know which specific data provider is used to retrieve the data.
Configuration Settings
In the next code snippet you can see the configuration settings that you will need to create to provide not only the connection string, but the provider class to use for retrieving data. In the <appSettings> element you will need a key called ProviderName. The value for the ProviderName will correspond to another key in the <appSettings> element that has the fully qualified Namespace and Class name for the data provider class. In addition, the ProviderName value will also be the same as the name key in the <connectionStrings> element where the appropriate connection string for the data provider is stored.
<configuration> <appSettings> <add key="ProviderName" value="OleDbDataProvider"/> <add key="SqlDataProvider" value="DataCommon.SqlDataProvider"/> <add key="OleDbDataProvider" value="DataCommon.OleDbDataProvider"/> </appSettings> <connectionStrings> <add name="SqlDataProvider" connectionString="Server=Localhost; Database=Northwind;uid=sa; pwd=sa;Persist Security Info=False"/> <add name="OleDbDataProvider" connectionString="Provider=SQLOLEDB.1; Password=sa; Persist Security Info=False;User ID=sa; Initial Catalog=Northwind; Data Source=(local)"/> </connectionStrings> </configuration>
AppConfig Class
To retrieve the appropriate connection string from the configuration file you will need to create the following static/Shared property in the AppConfig class. Notice that you have to read from the configuration file twice: once to get the ProviderName value, the second time to retrieve the connection string from the <connectionString> element.
In C#
public class AppConfig { public static string ConnectString { get { string ProviderName; // Get Provider Name ProviderName = ConfigurationManager. AppSettings["ProviderName"]; // Get Connect String return ConfigurationManager. ConnectionStrings[ProviderName]. ConnectionString; } } }
In Visual Basic
Public Class AppConfig Public Shared ReadOnly Property _ ConnectString() As String Get Dim ProviderName As String ' Get Provider Name ProviderName = _ ConfigurationManager. _ AppSettings("ProviderName") ' Get Connect String Return ConfigurationManager. _ ConnectionStrings(ProviderName). _ ConnectionString End Get End Property End Class
Note: To keep the code simple, the ProviderName value is read each time. In a real application you would want to cache the connection string after reading it the first time.
IDataProvider Interface
As mentioned earlier when you use the Provider Model you will need to create either an abstract base class or an interface that each provider class must inherit or implement. In this example you will use an interface called IDataProvider. Since each data provider class you write will vary widely in their implementation, an interface is the logical choice. There is no common code between the different data providers, so an abstract base class cannot be used in this particular case. You can see the interface class in the code below.
In C#
interface IDataProvider { IDbConnection CreateConnection(); IDbCommand CreateCommand(); IDbDataAdapter CreateDataAdapter(); }
In Visual Basic
Public Interface IDataProvider Function CreateConnection() As IDbConnection Function CreateCommand() As IDbCommand Function CreateDataAdapter() As IDbDataAdapter End Interface
DataLayer.GetDataSet Method
If you look back at the sample Windows Form (Figure 2) and you look at the code (reiterated below) you will see a call to the DataLayer.GetDataSet method. This method is called by passing in an SQL statement and a connection string. This method has fairly standard ADO.NET code in that it creates an instance of a DataSet class and uses a DataAdapter to fill that DataSet. The filled DataSet is then returned from this method and given to the DataSource property of the grid control.
In the code on the form you cannot tell what data provider is used to retrieve the code. It could be SQL Server, Oracle or some OLE DB data provider. The UI code does not care. This works because the DataLayer class abstracts the specific code away from the UI layer. Let's take a look at the GetDataSet method in the DataLayer and see how it does its job.
The GetDataSet method itself does not use any specific provider like SqlDataAdapter or OleDbDataAdapter. Instead you use the interface IDbDataAdapter. The IDbDataAdapter is a .NET interface that anyone who writes a .NET native provider must implement when creating a DataAdapter class. You will find interface classes for each of the specific ADO.NET provider classes such as IDbConnection and IDbCommand.
In C#
public static DataSet GetDataSet( string SQL, string ConnectString) { DataSet ds = new DataSet(); IDbDataAdapter da; da = CreateDataAdapter(SQL, ConnectString); da.Fill(ds); return ds; }
In Visual Basic
Public Shared Function GetDataSet( _ ByVal SQL As String, _ ByVal ConnectString As String) As DataSet Dim ds As New DataSet Dim da As IDbDataAdapter ' Create Data Adapter da = CreateDataAdapter(SQL, ConnectString) da.Fill(ds) Return ds End Function
Instead of writing code in this method to create a specific instance of a data adapter, a method called CreateDataAdapter is called to perform this function. This method, also contained within the DataLayer, will load the appropriate data provider class that you are going to create.
DataLayer.CreateDataAdapter Method
In the CreateDataAdapter method you will have to do a couple of things to create an instance of a specific data adapter. First you will need to initialize the appropriate provider based on the information in the configuration file. The InitProvider method is responsible for this and will be shown in the next section. After the appropriate DataProvider class is loaded the CreateDataAdapter method on that specific provider will be called. This is where the SqlDataAdapter or the OleDbDataAdapter or the OracleDataAdapter is created.
In C#
public static IDbDataAdapter CreateDataAdapter( string SQL, string ConnectString) { IDbDataAdapter da; // Make sure provider is created InitProvider(); da = DataProvider.CreateDataAdapter(); da.SelectCommand = CreateCommand(SQL, ConnectString, false); return da; }
In Visual Basic
Public Shared Function CreateDataAdapter( _ ByVal SQL As String, _ ByVal ConnectString As String) As IDbDataAdapter Dim da As IDbDataAdapter ' Make sure provider is created InitProvider() da = DataProvider.CreateDataAdapter() da.SelectCommand = CreateCommand(SQL, _ ConnectString, False) Return da End Function
DataLayer.InitProvider Method
The InitProvider method is responsible for creating the actual provider object that will be used. To do this you first need a field/member variable to hold that data provider. You will create a variable named DataProvider that is of the type IDataProvider. Remember that the IDataProvider is the interface that each of the specific DataProviders that you create will need to implement.
The first time the InitProvider method is called the Provider name will be loaded by reading the value from the configuration file, then you will use the System.Activator class to create a new instance of this provider. The DLL with the appropriate provider class must already be referenced by your project for this to work.
In C#
private static IDataProvider DataProvider = null; private static void InitProvider() { string TypeName; string ProviderName; if(DataProvider == null) { // Get provider name ProviderName = ConfigurationManager. AppSettings["ProviderName"]; // Get type to create TypeName = ConfigurationManager. AppSettings[ProviderName]; // Create new DataProvider DataProvider = (IDataProvider) Activator.CreateInstance( Type.GetType(TypeName)); } }
In Visual Basic
Private Shared DataProvider As IDataProvider = _ Nothing Private Shared Sub InitProvider() Dim TypeName As String Dim ProviderName As String If DataProvider Is Nothing Then ' Get Provider Name ProviderName = _ ConfigurationManager. _ AppSettings("ProviderName") ' Get Type to Create TypeName = ConfigurationManager. _ AppSettings(ProviderName) ' Create new DataProvider DataProvider = _ CType(Activator.CreateInstance( _ Type.GetType(TypeName)), _ IDataProvider) End If End Sub
DataProvider.CreateDataAdapter Method
Now you can finally look at the DataProvider class and its specific implementation of the CreateDataAdapter method. Look at the snippet below to see the class that uses the SqlClient.SqlDataAdapter.
In C#
class SqlDataProvider : IDataProvider { public IDbDataAdapter CreateDataAdapter() { SqlDataAdapter da = new SqlDataAdapter(); return da; } }
In Visual Basic
Public Class SqlDataProvider Implements IDataProvider Public Function CreateDataAdapter() _ As IDbDataAdapter _ Implements IDataProvider.CreateDataAdapter Dim da As New SqlDataAdapter Return da End Function End Class
While this is a very simple provider method to write, it is necessary to implement it this way to provide the maximum flexibility and reusability. This becomes more apparent when you look at the other Provider class that uses the OLE DB namespace to create instances of OleDbDataAdapters.
OLEDB DataProvider.CreateDataAdapter Method
Below is another DataProvider class that uses the OleDb native provider. Notice that this code is almost exactly the same as the SqlClient-just the provider used differs.
In C#
class OleDbDataProvider : IDataProvider { public IDbDataAdapter CreateDataAdapter() { OleDbDataAdapter da = new OleDbDataAdapter(); return da; } }
In Visual Basic
Public Class OleDbDataProvider Implements IDataProvider Public Function CreateDataAdapter() _ As IDbDataAdapter _ Implements IDataProvider.CreateDataAdapter Dim da As New OleDbDataAdapter Return da End Function End Class
Try it Out
In the sample application that you can download for this article, try using each of the different providers provided to see how each one is called just by changing the value in the configuration file from OleDbDataAdapter to SqlDataAdapter. Step through the code to see where it creates an instance of the OleDb or SqlClient DataAdapters. As an exercise you could create additional providers that implement the OracleClient or any other native provider you are using.
Conclusion
Using a Provider Model will make the code you write much more generic, easier to maintain, and easier to upgrade as Microsoft (and other companies) introduce new technology. Other areas where you should use the Provider Model include Exception Management to determine where to publish exceptions. You could also use the Provider Model to determine where to read configuration settings from. You could have providers that read configuration settings from an XML file, the registry, a database table, or even a Web service. With a little imagination you can apply the concepts presented in this article to many areas of your application development process. | https://www.codemag.com/Article/0711081/The-Provider-Model | CC-MAIN-2020-10 | refinedweb | 2,596 | 53.81 |
C# allows using pointer variables in a function of code block when it is marked by the unsafe modifier. The unsafe code or the unmanaged code is a code block that uses a pointer variable.
Pointers
A pointer is a variable whose value is the address of another variable i.e., the direct address of the memory location. similar to any variable or constant, you must declare a pointer before you can use it to store any variable address.
The general form of a pointer declaration is:
type *var-name;
Following are valid pointer declarations:
int *ip; /* pointer to an integer */ double *dp; /* pointer to a double */ float *fp; /* pointer to a float */ char *ch /* pointer to a character */
The following example illustrates use of pointers in C#, using the unsafe modifier:
using System; namespace UnsafeCodeApplication { class Program { static unsafe void Main(string[] args) { int var = 20; int* p = &var; Console.WriteLine("Data is: {0} ", var); Console.WriteLine("Address is: {0}", (int)p); Console.ReadKey(); } } }
When the above code wass compiled and executed, it produces the following result:
Data is: 20 Address is: 99215364
Instead of declaring an entire method as unsafe, you can also declare a part of the code as unsafe. The example in the following section shows this.
Retrieving the Data Value Using a Pointer
You can retrieve the data stored at the located referenced by the pointer variable, using the ToString() method. The following example demonstrates this:
using System; namespace UnsafeCodeApplication { class Program { public static void Main() { unsafe { int var = 20; int* p = &var; Console.WriteLine("Data is: {0} " , var); Console.WriteLine("Data is: {0} " , p->ToString()); Console.WriteLine("Address is: {0} " , (int)p); } Console.ReadKey(); } } }
When the above code was compiled and executed, it produces the following result:
Data is: 20 Data is: 20 Address is: 77128984
Passing Pointers as Parameters to Methods
You can pass a pointer variable to a method as parameter. The following example illustrates this:
using System; namespace UnsafeCodeApplication { class TestPointer { public unsafe void swap(int* p, int *q) { int temp = *p; *p = *q; *q = temp; } public unsafe static void Main() { TestPointer p = new TestPointer(); int var1 = 10; int var2 = 20; int* x = &var1; int* y = &var2; Console.WriteLine("Before Swap: var1:{0}, var2: {1}", var1, var2); p.swap(x, y); Console.WriteLine("After Swap: var1:{0}, var2: {1}", var1, var2); Console.ReadKey(); } } }
When the above code is compiled and executed, it produces the following result:
Before Swap: var1: 10, var2: 20 After Swap: var1: 20, var2: 10
Accessing Array Elements Using a Pointer
In C#, an array name and a pointer to a data type same as the array data, are not the same variable type. For example, int *p and int[] p, are not same type. You can increment the pointer variable p because it is not fixed in memory but an array address is fixed in memory, and you can't increment that.
Therefore, if you need to access an array data using a pointer variable, as we traditionally do in C, or C++ ( please check: C Pointers), you need to fix the pointer using the fixed keyword.
The following example demonstrates this:
using System; namespace UnsafeCodeApplication { class TestPointer { public unsafe static void Main() { int[] list = {10, 100, 200}; fixed(int *ptr = list) /* let us have array address in pointer */ for ( int i = 0; i < 3; i++) { Console.WriteLine("Address of list[{0}]={1}",i,(int)(ptr + i)); Console.WriteLine("Value of list[{0}]={1}", i, *(ptr + i)); } Console.ReadKey(); } } }
When the above code was compiled and executed, it produces the following result:
Address of list[0] = 31627168 Value of list[0] = 10 Address of list[1] = 31627172 Value of list[1] = 100 Address of list[2] = 31627176 Value of list[2] = 200
Compiling Unsafe Code
For:
- Open project properties by double clicking the properties node in the Solution Explorer.
- Click on the Build tab.
- Select the option "Allow unsafe code". | http://blogs.binarytitans.com/2017/04/c-unsafe-codes.html | CC-MAIN-2018-13 | refinedweb | 653 | 50.97 |
I had the same problem which cminus. Problem was with this two lines:
RUBYLIBVER ?= $(shell $(RUBY) -e 'print RUBY_VERSION.split(".")[0..1].join(".")')
RUBYINC ?= $(shell $(PKG_CONFIG) --exists ruby-$(RUBYLIBVER) && $(PKG_CONFIG) --cflags ruby-$(RUBYLIBVER) || $(PKG_CONFIG) --cflags ruby)
First one return wrong version:
[netrunner@nightcity] libselinux $ ruby -e 'print RUBY_VERSION.split(".")[0..1].join(".")'
2.3
Where I have installed 2.4:
[netrunner@nightcity] libselinux $ pacman -Qi ruby
Name : ruby
Version : 2.4.1-3
pkg-config confirm that:
[netrunner@nightcity] libselinux $ pkg-config --exists ruby-2.3 ; echo $?
1
[netrunner@nightcity] libselinux $ pkg-config --exists ruby-2.4 ; echo $?
0
So variable $RUBYINC is empty and my cc looks like that (without /usr/include/ruby-2.4) -fPIC -DSHARED -c -o selinuxswig_ruby_wrap.lo selinuxswig_ruby_wrap.c
This happen because I also have rvm and I had set default ruby version to 2.3:
[netrunner@nightcity] libselinux $ ruby -v
ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-linux]
[netrunner@nightcity] libselinux $ rvm list
rvm rubies
=* ruby-2.3.3 [ x86_64 ]
# => - current
# =* - current && default
# * - default
As you see, I don't had 2.4. After installing and setting as default version 2.4:
[netrunner@nightcity] libselinux $ /bin/bash --login # needed for installing ruby with rvm
[netrunner@nightcity] libselinux $ rvm install ruby-2.4.1
(...)
[netrunner@nightcity] libselinux $ rvm list
rvm rubies
* ruby-2.3.3 [ x86_64 ]
=> ruby-2.4.1 [ x86_64 ]
# => - current
# =* - current && default
# * - default
[netrunner@nightcity] libselinux $ rvm --default use 2.4.1
[netrunner@nightcity] libselinux $ rvm list
rvm rubies
ruby-2.3.3 [ x86_64 ]
=* ruby-2.4.1 [ x86_64 ]
# => - current
# =* - current && default
# * - default
[netrunner@nightcity] libselinux $ makepkg # still in login shell
libselinux build successfully.
Maybe use something like that:
pacman -Q ruby | cut -d ' ' -f 2 | cut -d '.' -f 1,2
to get ruby version?
Search Criteria
Package Details: libselinux 2.8-1
Dependencies (10)
- pcre (pcre-svn)
- libsepol>=2.8
- python (make)
- python2 (pypy19, stackless-python2) (make)
- ruby (ruby1.8, rvm, rubinius-ruby) (make)
- swig (swig-git) (make)
- xz (xz-git) (make)
- python (optional) – python bindings
- python2 (pypy19, stackless-python2) (optional) – python2 bindings
- ruby (ruby1.8, rvm, rubinius-ruby) (optional) – ruby bindings
Required by (37)
- aide-selinux
- casync
- coreutils-selinux
- cronie-selinux
- dbus-docs-selinux (make)
- dbus-selinux (make)
- f2fs-tools-git
- findutils-selinux
- iproute2-selinux
- iris-temperature
- libsemanage
- libsystemd-selinux (make)
- libsystemd-selinux
- libutil-linux-selinux
- libutil-linux-selinux (make)
- libvirt-sandbox
- logrotate-selinux
- lxc-selinux
- mathematica (optional)
- matlab-support
- mcstrans
- openssh-selinux
- pam-selinux
- psmisc-selinux
- python-blivet
- python-blivet-git
- python2-blivet
- python2-pyblock
- restorecond
- sepolgen
- setools
- setools3-libs
- sudo-selinux
- systemd-selinux (make)
- systemd-sysvcompat-selinux (make)
- util-linux-selinux (make)
- xperia-flashtool
Sources (2)
Latest Comments
IooNag commented on 2017-06-02 11:48
netrunn3r commented on 2017-06-02 08:52
I had the same problem which cminus. Problem was with this two lines:
cminus commented on 2017-02-19 22:53
@IooNag..
I am not sure what happened, but it worked and got installed..
PS: I have removed old rvm and let the process continue.. I am not sure how this relates to the system-installed ruby..
Kinda strange
Here there are the current outputs of the lines you requested
Arch% echo $RUBY
# Blank line
Arch% ${RUBY:-ruby} -e 'print RUBY_VERSION.split(".")[0..1].join(".")'
2.4%
Arch% pkg-config --cflags ruby-2.4
-I/usr/include/ruby-2.4.0/x86_64-linux -I/usr/include/ruby-2.4.0
Arch% ls /usr/include/ruby-2.4.0/
ruby ruby.h x86_64-linux
IooNag commented on 2017-02-19 22:05
cminus: what is the compiler line right before your error? Is it -I/usr/include/ruby-2.4.0/x86_64-linux -I/usr/include/ruby-2.4.0 -fPIC -DSHARED -c -o selinuxswig_ruby_wrap.lo selinuxswig_ruby_wrap.c"?
Also I am interested in the result of these commands on your system (the second and the third ones are used by libselinux Makefile to find where Ruby header files are installed):
* echo $RUBY
* ${RUBY:-ruby} -e 'print RUBY_VERSION.split(".")[0..1].join(".")'
* pkg-config --cflags ruby-2.4
* ls /usr/include/ruby-2.4.0/
cminus commented on 2017-02-19 21:50
I have this error and I don't know how to resolve..
Any help will be appreciated.
selinuxswig_ruby_wrap.c:855:18: fatal error: ruby.h: No such file or directory
#include <ruby.h>
^
IooNag commented on 2016-10-20 07:16
chrisbdaemon: pkg-config is in base-devel, which has to be installed before building any AUR package (cf.). Hence I won't add it to makedepends.
chrisbdaemon commented on 2016-10-19 19:24
Can pkg-config be added to the list of makedepends as well?
v1rous commented on 2016-02-07 17:50
Not sure if this is helpful or not, but I was able to build and install this package and its dependency libsepol on armv7h (RPi 2).
IooNag commented on 2015-07-23 12:02
Thanks for your comment, but actually it is not libselinux which requires flex to be built but libsepol and libsemanage. I've just added "flex" to their makedepends.
nebulon commented on 2015-07-20 12:02
The package dependencies are missing 'flex' currently.
netrunn3r: if I correctly understand, the issue you report comes from the fact that pkg-config does not use the same Ruby version as the one from your shell environment. However I fail to see the point of building the package with a non-system ruby command. Such a setup would build and install Ruby packages for a Ruby version which is different from the system one (eg. it would put files in the "wrong" system directory, like /usr/lib/ruby/2.3.0/ instead of /usr/lib/ruby/2.4.0/)...
Actually it makes sense to define RUBY=/usr/bin/ruby (and use the full paths for PYTHON definitions too) in the PKGBUILD in order to ensure the package is built with the packaged (system-wide) Ruby. Could you please add "RUBY=/usr/bin/ruby" to these lines:
* make rubywrap
* make DESTDIR="${pkgdir}" USRBINDIR="${pkgdir}"/usr/bin LIBDIR="${pkgdir}"/usr/lib SHLIBDIR="${pkgdir}"/usr/lib install-rubywrap"
... and tell whether this fixed your issue? | https://aur.archlinux.org/packages/libselinux/ | CC-MAIN-2018-26 | refinedweb | 1,026 | 50.33 |
This is a translation of my article 抓取網頁的最佳語言 : Python written in chinese
At first
At first, I used C/C++ to write programs for grabbing data from websites. I tried to write a library for these tasks, but I realized that it’s not easy to implement a HTTP client library. Then, I used cUrl library for downloading pages, but even with cUrl, it’s not productive. I had to to modify program frequently, the compiling time is costly. There was also no regular expression for C/C++. I also had to deal with many annoying details like memory management, string handling.
Then
After that, I was wondering, C/C++ is not a nice choice to grab data from websites. Why do I have to handle so many details? Why don’t I just use script language or other language? At first I was worrying about the performance, and then I realized that the performance of language is not the bottleneck. What’s more? I can get much more benefits if I use script language, it is easier to develop and debug. So I decided to find another solution for grabbing data from websites.
How about Perl?
Long time ago, I used Perl to write CGI programs, like guest-book, website managing system and so on. That said, Perl is a “write-once” language. Lots of Perl programs are full filled with short syntax and symbols. It is really difficult to read. And it is not easy to modularize Perl programs. It doesn’t support OO well. And there is no more new version of Perl. Even the new Perl is under construction, but it takes too long time, I still think it is almost dead. For these reasons and personal feeling, I don’t like Perl.
PHP
As a popular programming language designed for websites, I don’t think it is suitable to use in other situations. And although it is popular, it is really a bad designed language. It is also not an easy job to modularize PHP programs, it doesn’t support OO well, too. The name-space is also a big problem, there are so many function looks like mysql_xxxx, mysql_oooo. But even such a bad language got its advantage. That is: popular, popular and popular. Some one said that:
PHP is the BASIC of the 21st century
Well, what ever, PHP is out.
Lua
Lua is a light weight script language, almost everything about design of Lua is for performance. I wanted to warp C/C++ library for Lua, but there is also lots of weakness of Lua. It is not easy to modularize, too. And almost everything in Lua is designed for performance, its syntax is not so friendly. What’s more, there are little resources for Lua, I might have to build everything I need. So Lua is not on the list.
Java
Java is a language grows with Internet, it is absolutely qualified. But, I don’t like it because it is too verbose. And what’s more, it is too fat! I want to throw my laptop that has only 256MB RAM out the window when I am running Eclipse on it. I’m sorry, I don’t like Java. The guy I mentioned in PHP, also said that:
Java is the COBOL of the 21st century
Python
Finally, I postdd questions on PTT, then one recommend Python. Well, Python? WTF? I have never heard that before. And I searched it and ask some questions. Then I found that it is exactly what I want! It can be extended easily. If I need performance, I can write module in C for Python. And there are so many resources to use. You can find almost any Python libraries that you can imagine. Also, those libraries are easy to install, you can type “easy_install” to install almost everything you want. Most of script languages are not suitable for big program, but Python is not the one among them, it is easy to modularize, and it supports OO well. What else, it is really easy to read and write. There are also lots of big guy use Python, like Google, YouTube and so on. When I decide to learn Python, I buy a Learning Python and start my journey with Python.
Fall in love with Python
It did’t let me feel disappointed. It is very productive to develop with Python. I wrote almost everything that I did in C/C++ before. But for grabbing data from websites, there is still lots of work to do.
Twisted
It is really a piece of cake for Python to get a web page. There are standard modules, urllib and urllib2. But they are not good enough. Then, I find Twisted.
Twisted is an event-driven networking engine written in Python and licensed under the MIT license.
It is very powerful. It has beautiful callback design for handling async operations named deferred. You can write one line to grab a page:
getPage("").addCallback(printPage)
You can also use its deferred to handle data
d = getPage("") d.addCallback(parseHtml) d.addCallback(extractData) d.addCallback(saveResult)
What’s more, I wrote an auto-retry function for twisted to retry any async function automatically, you can read An auto-retry recipe for Twisted.
Beautifulsoup
It is not a difficult job to get page from a website. Parsing html is a much more difficult job. There are standard modules of Python, but they are too simple. The biggest trouble of parsing html is: there are so many websites don’t follow the standard of html or xhtml. You can see lots of syntax error in those pages. It makes parsing become a difficult job. So I need an html parser that can deal wrong html syntax well. Then, here comes BeautifulSoup, it is an html parser written in Python, it can handle wrong html syntax well. But there is a problem, it is not efficient. For example, you want to find a specific tag, then you write:
soup.find('div', dict(id='content'))
It is okay when you do this in a small page. But it is a big problem if you do that in a big page, its tag finding method is very very slow. At first, I expect the bottleneck will be on network, but with beautifulsoup, the bottleneck is on parsing and finding tags. You can notice that when you run your spider, the CPU usage rate is 100% all the time. I run profile for my program, most of the time of running are in soup.find. For performance reason, I have to find another solution.
lxml
Then, I find a nice article: Python HTML Parser Performance, it shows comparison of performance of different Python html parsers. The most impressive one is lxml. At first, I am worrying about that is it difficult to find target tags with lxml. And I notice that it provides xpath! It is much easier to write xpath then find methods of beautifulsoup. And it is also much more efficient to use lxml to parse and find target tags. Here are some real life example I wrote:
def getNextPageLink(self, tree): """Get next page link @param tree: tree to get link @return: Return url of next page, if there is no next page, return None """ paging = tree.xpath("//span[@class='paging']") if paging: links = paging[0].xpath("./a[(text(), '%s')]" % self.localText['next']) if links: return str(links[0].get('href')) return None
listPrice = tree.xpath("//*[@class='priceBlockLabel']/following-sibling::*") if listPrice: detail['listPrice'] = self.stripMoney(listPrice[0].text)
With beautifulsoup, I have to write logic in Python to find target tags. With lxml, I write almost all logic in xpath, it is much easier to write.
Useful FireFox tool
With xpath, it is not a difficult job to find target tags. But it would be wonderful if you can try xpath on websites, right? I find there are some plugins of FireFox are very useful for writing spiders. Here are some useful tools for analysis:
Example
I wrote an example to show how it looks like.
# -*- coding: utf8 -*- import cStringIO as StringIO from twisted.internet import reactor from twisted.web.client import getPage from twisted.python.util import println from lxml import etree def parseHtml(html): parser = etree.HTMLParser(encoding='utf8') tree = etree.parse(StringIO.StringIO(html), parser) return tree def extractTitle(tree): titleText = unicode(tree.xpath("//title/text()")[0]) return titleText d = getPage('') d.addCallback(parseHtml) d.addCallback(extraTitle) d.addBoth(println) reactor.run()
This is a very simple program, it grabs title of google.com and prints it out. Very elegance, isn’t it? 😀
Conclusion
One year has been passed since I wrote this article in Chinese. Today, I still use Python + Twited + lxml for grabbing data from websites. You might not agree what I said, but they are best tool to write spider (crawler or whatever) for me.
Learn English.
Learn to not be a jerk.
Sorry for my poor English, I already fix most of wrong grammar and typo. If you find anything wrong, please let me know. Thanks your advice.
Fuck Thomas. I had seen javascript for the getNextPageLink function. Yours is a tribute to the elegance of Python and lxml.
Bravo.
Thanks for the article.
Hey Thomas: vaffanculo!!! (learn italian)
ANY language is “write once”. That isn’t the languages fault it is the programmers. The same goes for Perl. Perl can be used as a “write once” language. That isn’t Perl’s fault. It is and always has been the programmers fault.
lxml is nice, thanks for sharing your knowledge.
regards
Govind
Nice article, and your English is pretty good!
Thanks for the article, your English is fine 🙂
Thomas is a loser and he knows it 🙂
Your knowledge of perl seems woefully outdated. It is trivial to make Perl modules, and I’d bet perl has the largest set of modules of any language out there ().
It’s also possible to do perfectly reasonable OO, and if you want anal OO (positively stop anyone from doing anything outside the published interface) you can do that, too (See Conway’s Perl Best Practices and “inside out classes”).
I do plenty of web page parsing in perl, and we’ve got stuff like Soup, more than once choice, in fact, for non-compliant HTML.
Sorry for my out-of-date knowledge about perl, but even so, I don’t like the design idea of perl. It just put too many things to syntaxes, make it like a mess. You got tons of $$, $%, $&… and so on. There are so many dollar signs for different meanings. There are also tons of syntaxes for different tasks, e.g. read a line from file. How can you tell what the hell it is if you don’t have a manual, or you don’t remeber what it is?
When I don’t know what python is, I can read some of simple python programs. When I know python, I can read almost all python programs that I can find. But…with perl, when I don’t know perl, I can’t read anything written in perl, they look like spell. When I know perl, it is still difficult to read a perl program, I have to read manual all the time when I encounter those plenty syntax. They just put too many things into syntax. Why you have to have syntaxes for everything? Why don’t just put them into modules? Do you need “Turn on the light of your kitchen” syntax, too? I don’t think so.
Also, there are so many dirty ways to achieve same task. I have seen so many Perl programs modify global variable to to make something works. Well, it is really really a bad practice, what about another guy also modify the global variable in his module and expect it works?
It is interesting, losts of perl guys hate Python, and lots of python guys hate perl. I think that’s because people trust different idea of design. The idea of python is “There should be one– and preferably only one –obvious way to do it.”. And the idea of perl is “There’s More Than One Way To Do It.”.
So, I am sorry, I hate Perl.
Keep at the python, your english is fine by the way !
good choice with lxml. Its fast, lightweight and pretty good generally. A lot of people tend to think twisted is the solution for everything network related, I strongly disagree.
本來不想回的,可是看到這麼多人可以睜眼說瞎話實在忍不住。意思雖然勉強可以看懂,光第一段問題就一堆。
At first, I use C/C++ to write programs for grabbing data from websites.
At first是指一開始不是嗎?你現在還是活在你的”一開始”嗎?為什麼這一整段都是用現在式?這是這篇最嚴重的錯誤,不對的時態讓人讀起來感覺非常的奇怪。很多美國人第一句看起來不對後面就都不讀了。還好你這句還算吸引人–因為沒有人會用C++來做這種小工具。
And I try to write a library for these tasks, but it is not easy to implement a HTTP client library.
句子不要一開頭就用And。雖然你用了And,你上句說的跟這句說的還是八竿子打不著,接不起來。還有,一個段落前面兩個句子都讀完了,卻還是不知道你這段是想要說什麼。
So I use cUrl library for downloading pages, but even so, it is not productive to write web spider in C/C++.
句子不要一開頭就用So。還一個句子兩個so勒。第二個so指的是什麼?這樣子寫,意思一樣,還沒用到so:Even with cUrl library, it was unproductive to write a web spider in C/C++. 且上下句都還是可以連貫。有了這句,上句也不需要了。
When I am developing spiders, I need to modify program frequently, but the compiling time is costly.
不必說的話就不要說了,I need to modify program frequently不是廢話嗎?少了那句後,可以變得很簡潔。一個句子裡有二個以上的連接詞很奇怪。
There is also no regular express for C/C++ (Now we have boost).
regular “expression”。本來只想說一次,忍不住:不對的時態會改變意思。你這句翻成中文:現在C/C++也沒有regex(現在我們有boost)。這樣沒有抵觸嗎?刮號不知道怎麼用就不要用。其實文章裡也不應該用。
With C/C++, I also have to deal with many annoying details like memory managing, string handling and e.t.c.
memory “management”。不要用etc。 “like memory management and string handling”就很好了。大家都知道你在說C/C++,已經重覆很多次了,所以”With C/C++”不必要。這個句子也不像一個段要結束的樣子。後面應該還要有至少一個句子。類似,For all these reasons, I have long abandoned this approach to look for other solutions.
最後,你這一段也才幾個句子,用了個幾個”C/C++”跟”but”?…..讀起來就很煩。你自己寫的文章有自己先讀過嗎?
Don’t apologize for your english, it is perfectly fine. The person that made that comment, Thomas, is what we call in the U.S., a dickhead.
我是美国人。你的英文比他的中文好!
PHP doesn’t support OO? You’re an idiot. Go back to the drawing board.
@your english is fine?! :
感謝你的意見,我英文程度到哪裡我自己很清礎,說我英文爛,這也是事實,我不會因為這樣而動怒或怎樣,相反的我會再一次檢查我的文章,確實有很多地方要修正
如果你說我沒讀過我自己的文章,並不是那樣,我已經修改過n次了,因為原文是中文,我大略上照著中文的句子寫,受中文的影響,所以會有很奇怪的句子或用法出現,感謝你的提醒,你提到的部份我已經盡量改好了
如果因為我覺得我自己英文很爛而不去用它,永遠就是那樣爛,這篇文章除了分享,也算是練習我自己的英文能力,有人能指出我文章裡的錯誤,其實我蠻高興的,修正了錯誤,只要記得,下次就不容易再犯同樣的錯誤,所謂的進步不就是這樣嗎?
@Manny:
I am sorry, I didi’t say “PHP doesn’t support OO”, I said “it doesn’t support OO well”. They are quite different. At that time I wrote this article, php did’t support OO well. For now, I have no idea how is php going. I did’t write php for a long while. I use TurboGears2 to build web application.
Victor,
Thanks for the article. Ixml will help me with a project I’m working on and I’m going to look into Twisted.
I’ve recently come back to Python after going over to ruby because of rails, but have found django to fit into my current projects.
Hope to read more articles from you (to bad I can’t read Chinese).
BTW, you’re English is not poor at all, I would like to see anybody who complains about your article speak and write in Chinese (or any other language).
Also, I will try to check out the rest of your site using google’s translate. I know it will not do the job right, but hopefully it can do a good enough job for me to get what you are trying to say.
Best Regards
“BTW, you’re English is not poor at all”
hahahaha. I rest my case.
PHP has had proper OO support for many years, what planet are you from? The date on your article isn’t that long ago (this month). Yikes, man.
Pingback: Everybody Needs Some Kind of Bailout | HOT Trends and Breaking News
To Manny:
It doesn’t matter if PHP has OO. It still sucks really bad compared to C# and Python.
If you want something “up-to-date”, take a look at this:
PHP is like a bicycle. You can attach a lot of bells and whistles (like OO) to it, but it will NEVER go faster than an airplane.
The world has already moved so far ahead that PHP simply has no hope of catching up.
Cool stuff! btw, also checkout Feedity – – I use it a lot these days for creating custom RSS feeds from various webpages. It is simple to use and gives great results. Hope it helps. Chao 🙂
Have you tried Scrapy?. It’s a very powerful (and simple) web crawling/screen-scraping framework which is also built on Python and Twisted.
Hi,
First, thanks for sharing that useful information. It is kind of difficult to find info about this topic.
At the present moment, Im chosing the techology/language to develop a project which scraps several websites and
I have a couple of questions for you:
1.- Have you tried sitescraper (a tool based on lxml)? How was it?
2.- I have seen a great Java library called htmlunit for webscraping.
Isn’t the Java time performance MUCH better than lxml/Python ?
To scrap 10 or 50 websites at the same time.
Dont you think it is very time consuming to use lxml/python instead java? and what about memory ?
That links points out some bechmarks written in pure python
However, I have read that using native-libraries like lxml the time is significant less.
Thanks!
@jose:
1. I didn’t try it.
2. lxml is based on library written in C language. It is very fast. You can reference to this article:
It has the best overall performance among Python html parsers. And talking about the performance of network framework, you can read this article:
Twisted is not the best, but is is good enough, and the most important thing is it has full stack of protocols implementations.
Hope this could be helpful for you.
@jose
The libxml2 parser that lxml uses is actually faster than pretty much any parser that exists in the Java world. And for web scaping, your code will usually be limited by the network, not so much by the CPU.
Best Site good looking photos nudist naked lolitas >:-DDD
I’d like to pay this in, please Tiny Models Girls
:-OOO
The first thing you need to do before anything else is to get yourself a domain name. A domain name is the name you want to give to your website. For example, the domain name of the website you’re reading is “thesitewizard the name.:
Our personal web-site
<"'
casio android | https://blog.ez2learn.com/2009/09/26/the-best-choice-to-grab-data-from-websites-python-twisted-lxml/ | CC-MAIN-2021-10 | refinedweb | 3,110 | 75.61 |
Adventures in web scraping and data analysis
Not a member? You should
Already have an account?
To make the experience fit your profile, pick a username and tell us what interests you.
We found
and
based on your interests.
Choose more interests.
x-gzip -
1.05 MB -
06/09/2016 at 18:15
I'll continue to put up interesting things as I think of them. Here are a few interesting tidbits.
Most often used post tags:
Perhaps unsurprisingly, arduino hacks are near the top of the list.
If you look at the most prolific authors you get:
Plotting the number of articles per week, segregated by the top ten authors, over time gives the following picture:
You can clearly see where submitters became active and when when they stopped. Brian had a early submission somewhere in 2006 before he joined HAD. Mike Szczys was active early and then starting tailing off around 2013-- other behind the scenes activities I imagine.
Here is the data requested: featured per week and %featured.
The above was for articles with the "Featured" post marker. If you include "Featured","Retrotechtacular","Hackaday Columns", "The Hackaday Prize", "Ask Hackaday", "Hackaday Store", "Interviews", that roughly triples the number of articles, but the overall shape looks the same.
OK, first plot of the data before I go to bed. I munged the data and plotted posts per day as a function of time. Not surprisingly, the number of posts per day have been going up since the early days. Somewhat surprisingly the maximum posts per day was way back in Feb 28, 2011 when there were no less than 16 posts! Here you go:
Staying true to its name, most days early on had one article per day. Now the mode appears to be 8 per day.
I started off knowing nothing about web scraping. I found a good link which shows how to scrape using python:
Found a few websites that explain the xtree syntax and I was off to the races. So a few baby steps first.
from lxml import html
import requests
page = requests.get('')
tree = html.fromstring(page.content)
# get post titles
tree.xpath('//article/header/h1/a/text()')
# get post IDs
tree.xpath('//article/@id')
# get Date of publication
tree.xpath('//article/header/div/span[@class="entry-date"]/a/text()')
Eventually wrote a script to scrape the entire HAD archives. On Wednesday June 8th at 11PM Pacific time, it had 3223 pages. Decided to include article ID, date of publication, title, author, #comments, "posted ins", and tags. Here is a quick and dirty python script to output all data to a tab delimited file:
from lxml import html
import requests
fh = open("Hackaday.txt", 'w')
for pageNum in xrange(1,3224,1):
page = requests.get(''%pageNum)
tree = html.fromstring(page.content)
titles = tree.xpath('//article/header/h1/a/text()')
postIDs = tree.xpath('//article/@id')
dates = tree.xpath('//article/header/div/span[@class="entry-date"]/a/text()')
authors = tree.xpath('//article/header/div/a[@rel="author"]/text()')
commentCounts = tree.xpath('//article/header/div/a[@class="comments-counts comments-counts-top"]/text()')
commentCounts =[i.strip() for i in commentCounts]
for i in xrange(len(titles)):
posts.append(tree.xpath('//article[%d]/footer/span/a[@rel="category tag"]/text()'%(i+1)))
tags.append(tree.xpath('//article[%d]/footer/span/a[@rel="tag"]/text()'%(i+1)))
for i in xrange(len(titles)):
#print postIDs[i] + '\t' + dates[i] +'\t' +titles[i] +'\t' + authors[i]+'\t'+commentCounts[i]+ '\t' + ",".join(posts[i]) + '\t' + ",".join(tags[i])
fh.write(postIDs[i] + '\t' + dates[i] +'\t' +titles[i] +'\t' + authors[i]+'\t'+commentCounts[i]+ '\t' + ",".join(posts[i]) + '\t' + ",".join(tags[i]) + '\n')
fh.close()
I felt a bit guilty about scraping the entire website but Brian said it was OK. The html file for each page is ~60KB times 3223 pages is about 193 MB of data. This was distilled down to 3.5 MB of data and took about 25 minutes.
The latested post is #207753 and the earliest is post # 7. The numbers are not sequential and there are total of 22556 articles. The file looks like this
post-207753 June 8, 2016 Hackaday Prize Entry: The Green Machine Anool Mahidharia 1 Comment The Hackaday Prize 2016 Hackaday Prize,arduino,Coating machine,grbl,Hackaday Prize,linear motion,motor,raspberry pi,Spraying machine,stepper driver,the hackaday prize
post-208524 June 8, 2016 Rainbow Cats Announce Engagement Kristina Panos 1 Comment ATtiny Hacks attiny,because cats,blinkenlights,RGB LED,smd soldering,wedding announcements
post-208544 June 8, 2016 Talking Star Trek Al Williams 8 Comments linux hacks,software hacks computer speech,natural language,speech recognition,star trek,text to speech,voice command,voice recognition
.....
post-11 September 9, 2004 hack the dakota disposable camera Phillip Torrone 1 Comment digital cameras hacks
post-10 September 8, 2004 mod the cuecat, and scan barcodes… Phillip Torrone 1 Comment misc hacks
post-9 September 7, 2004 make a nintendo controller in to a usb joystick Phillip Torrone 22 Comments computer hacks,macs hacks
post-8 September 6, 2004 change the voice of an aibo ers-7 Phillip Torrone 10 Comments robots hacks
post-7 September 5, 2004 radioshack phone dialer – red box Phillip Torrone 38 Comments misc hacks
Addendum: for whatever reason, two articles were missing the posts/tags fields. I fixed them manually and uploaded the corrected file.
View all 4 project logs
Already have an account?
chewabledrapery
RoGeorge
Vedran
Edward C. Deaver, IV
Become a member to follow this project and never miss any updates
Contact Hackaday.io
Give Feedback Terms of Use
Hackaday API
© 2021 Hackaday
Yes, delete it
Cancel
You are about to report the project "Hackaday statistics", please tell us the reason.
Your application has been submitted.
Are you sure you want to remove yourself as
a member for this project?
Project owner will be notified upon removal. | https://hackaday.io/project/12158-hackaday-statistics | CC-MAIN-2021-43 | refinedweb | 982 | 57.98 |
Greetings,
This, I hope, is a simply answered question.
Based on Agile Web D. with Rails (depot application), I’m
developing a single table application for contact info. There is only
an admin side to this, so there’s always authentication.
Part of the info record (member) is changed_by and changed_at which I
automatically want updated. Changed_at looks after itself (yay!);
however changed_by doesn’t. Since I know who is accessing the table
(everyone has a user_name) I’ve stored user_name in the session. I
can retrieve and display this information to the input form that is
collecting the info, so I know that there is session[:user_name].
Getting it into the active record is more difficult. What I have done
is inside the class for member.rb I’ve added a callback method:
def before_save
self.changed_by = “Rick”
end
and this will work. However,
def before_save
self.changed_by = session[:user_name]
end
does not, complaining that session is undefined :-(.
Obviously session is not available everywhere. How do I reference the
session correctly from inside the member class?
All flames and grace welcome.
Regards,
Rick W. | https://www.ruby-forum.com/t/access-to-session-data/55833 | CC-MAIN-2020-50 | refinedweb | 185 | 62.14 |
Cerebral uses a single state tree to store all the state of your application. Even though you split up your state into modules, at the end of the day it will look like one big tree:
{ title: 'My Project', someOtherModule: { foo: 'bar' } }
You will normally store other objects, arrays, strings, booleans and numbers in it. Forcing you to think of your state in this simple form gives us benefits.
Let us add some new state to the application to show of some more Cerebral. In our main/index.js file:
import { App } from 'cerebral' import Devtools from 'cerebral/devtools' const app = App({ state: { title: 'My Project', posts: [], users: {}, userModal: { show: false, id: null }, isLoadingItems: false, isLoadingUser: false, error: null } }, {...})
We are going to load posts from JSONPlaceholder. We also want to be able to click a post to load information about the user who wrote it, in a modal. For this to work we need some state. All the state defined here is pretty straight forward, but why do we choose an array for the posts and an object for the users?
Data in this context means entities from the server that are unique, they have a unique id. Both posts and users are like this, but we still choose to store posts as arrays and users as an object. Choosing one or the other is as simple as asking yourself, “What am I going to do with the state?”. In this application we are only going to map over the posts to display a list of posts, nothing more. Arrays are good for that. But users here are different. We want to get a hold of the user in question with an id, userModal.id. Objects are very good for this. Cause we can say:
users[userModal.id]
No need to iterate through an array to find the user. Normally you will store data in objects, because you usually have the need for lookups. An object also ensures that there will never exist two entities with the same id, unlike in an array. | https://cerebraljs.com/docs/introduction/state.html | CC-MAIN-2019-26 | refinedweb | 344 | 81.22 |
When it is said that in Linux everything is file then it really stands true. Most of the operations that we can do on files can be done on other entities like socket, pipe, directories etc.
There are certain situations where a software utility might have to travel across directories in the Linux system to find or match something. This is the use-case where the programmer of that utility has to deal with directory programming. So, in this article we will cover the following basics of directory programming with an example.
- Creating directories.
- Reading directories.
- Removing directories.
- Closing the directory.
- Getting the current working directory.
We will go through the functions that are used for each step above and then finally we will see an example that will summarize all the directory operations.
1. Creating Directories
Linux system provides the following system call to create directories :
#include <sys/stat.h> #include <sys/types.h> int mkdir(const char *pathname, mode_t mode);
The ‘pathname’ argument is used for the name of the directory.
From the man page :.
2. Reading Directories
A family of functions is used for reading the contents of the directory.
1. First a directory stream needs to be opened. This is done by the following system call :
#include <sys/types.h> #include <dirent.h> DIR *opendir(const char *name);
From the man page :
The opendir() function opens a directory stream corresponding to the directory name, and returns a pointer to the directory stream. The stream is positioned at the first entry in the directory.
2. Next, to read the entries in directory, the above opened stream is used by the following system call :
#include struct dirent *readdir(DIR *dirp);
From the man page : */ };
3. Removing Directories
Linux system provides the following system call to remove directories :
#include <unistd.h> int rmdir(const char *pathname);
From the man page :
rmdir() removes the directory represented by ‘pathname’ if it is empty. IF the directory is not empty then this function will not succeed.
4. Closing directories
Linux system provides the following system call to close the directories :
#include <sys/types.h> #include <dirent.h> int closedir(DIR *dirp);
From the man page :
The closedir() function closes the directory stream associated with dirp. A successful call to closedir() also closes the underlying file descriptor associated with dirp. The directory stream descriptor dirp is not available after this call.
5. Getting Current Working Directory
Linux system provides the following system call to get the CWD :
#include <unistd.h> char *getcwd(char *buf, size_t size);
From the man page :
The getcwd() function copies an absolute path name of the current working directory to the array pointed to by buf, which is of length size.This function returns a null-terminated string containing an absolute path name that is the current working directory of the calling process. The path name is returned as the function result and via the argument buf, if present. If the length of the absolute path name of the current working directory, including the terminating null byte, exceeds size bytes, NULL is returned, and errno is set to ERANGE; an application should check for this error, and allocate a larger buffer if necessary.
6. An Example
#include<stdio.h> #include<stdlib.h> #include<string.h> #include<dirent.h> #include <sys/stat.h> #include <sys/types.h> #include <unistd.h> int main (int argc, char *argv[]) { if(2 != argc) { printf("\n Please pass in the directory name \n"); return 1; } DIR *dp = NULL; struct dirent *dptr = NULL; // Buffer for storing the directory path char buff[128]; memset(buff,0,sizeof(buff)); //copy the path set by the user strcpy(buff,argv[1]); // Open the directory stream if(NULL == (dp = opendir(argv[1])) ) { printf("\n Cannot open Input directory [%s]\n",argv[1]); exit(1); } else { // Check if user supplied '/' at the end of directory name. // Based on it create a buffer containing path to new directory name 'newDir' if(buff[strlen(buff)-1]=='/') { strncpy(buff+strlen(buff),"newDir/",7); } else { strncpy(buff+strlen(buff),"/newDir/",8); } printf("\n Creating a new directory [%s]\n",buff); // create a new directory mkdir(buff,S_IRWXU|S_IRWXG|S_IRWXO); printf("\n The contents of directory [%s] are as follows \n",argv[1]); // Read the directory contents while(NULL != (dptr = readdir(dp)) ) { printf(" [%s] ",dptr->d_name); } // Close the directory stream closedir(dp); // Remove the new directory created by us rmdir(buff); printf("\n"); } return 0; }
The above example should be now self explanatory.
The output of above example is :
# ./direntry /home/himanshu/practice/linux Creating a new directory [/home/himanshu/practice/linux/newDir/] The contents of directory [/home/himanshu/practice/linux] are as follows [redhat] [newDir] [linuxKernel] [..] [ubuntu] [.]
Get the Linux Sysadmin Course Now!
{ 6 comments… read them below or add one }
Valuable article with clear example..
Hi,
Thanks a lot, Very useful article..
good job.
why can’t you use bash script to do the same?
what are the 2 args we are supposed to give to run it??
great and clear article.
wish it showed how to run over the files in the directory as well. | http://www.thegeekstuff.com/2012/06/c-directory/ | CC-MAIN-2014-42 | refinedweb | 852 | 55.74 |
NAME | DESCRIPTION | EXAMPLES | SEE ALSO
FNS defines policies for naming objects in the federated namespace. The goal of these policies is to allow easy and uniform composition of names. The policies use the basic rule that objects with narrower scopes are named relative to objects with wider scopes.
FNS policies are described in terms of the following three categories: global, enterprise, and application.
A global naming service is a naming service that has world-wide scope. Internet DNS and X.500 are examples of global naming services. The types of objects named at this global level are typically countries, states, provinces, cities, companies, universities, institutions, and government departments and ministries. These entities are referred to as enterprises.
Enterprise-level naming services are used to name objects within an enterprise. Within an enterprise, there are naming services that provide contexts for naming common entities such as organizational units, physical sites, human users, and computers. Enterprise-level naming services are bound below the global naming services. Global naming services provide contexts in which the root contexts of enterprise-level naming services can be bound.
Application-level naming services are incorporated in applications offering services such as file service, mail service, print service, and so on. Application-level naming services are bound below enterprise naming services. The enterprise-level naming services provide contexts in which contexts of application-level naming services can be bound.
FNS has policies for global and enterprise naming. Naming within applications is left to individual applications or groups of related applications and not specified by FNS.
FNS policy specifies that DNS and X.500 are global naming services that are used to name enterprises. The global namespace is named using the name . . . . A DNS name or an X.500 name can appear after the . . . . Support for federating global naming services is planned for a future release of FNS.
Within an enterprise, there are namespaces for organizational units, sites, hosts, users, files and services, referred to by the names orgunit, site, host, user, fs, and service. In addition, these namespaces can be named using these names with an added underscore ('_') prefix (for example, host and _host have the same binding). The following table summarizes the FNS policies.
In Solaris, an organizational unit name corresponds to an NIS+ domain name and is identified using either the fully-qualified form of its NIS+ domain name, or its NIS+ domain name relative to the NIS+ root. Fully-qualified NIS+ domain names have a terminal dot ('.'). For example, assume that the NIS+ root domain is "Wiz.COM." and "sales" is a subdomain of that. Then, the names org/sales.Wiz.COM. and org/sales both refer to the organizational unit corresponding to the same NIS+ domain sales.Wiz.COM.
User names correspond to names in the corresponding NIS+ passwd.org_dir table. The file system context associated with a user is obtained from his entry in the NIS+ passwd.org_dir table.
Host names correspond to names in the corresponding NIS+ hosts.org_dir table. The file system context associated with a host corresponds to the files systems exported by the host.
names a conference room videoconference located in the north wing of the site associated with the organizational unit accounts_payable.finance.
names a user mjones in the organizational unit finance.
names a machine inmail belonging to the organizational unit finance.
names a file pub/blue-and-whites/FY92-124 belonging to the organizational unit accounts_payable.finance.
names the calendar service of the organizational unit accounts_payable.finance. This might manage the meeting schedules of the organizational unit.
names a printer speedy in the b5.mtv site.
names a file directory usr/dist available in the site admin.
names the calendar service of the user jsmith.
names the file bin/games/riddles of the user jsmith.
names the mailbox service associated with the machine mailhop.
names the directory pub/saf/archives.91 found under the root directory of the machine mailhop.
fncreate(1M), nis+(1), xfn(3XFN), fns(5), fns_initial_context(5), fns_references(5)
NAME | DESCRIPTION | EXAMPLES | SEE ALSO | http://docs.oracle.com/cd/E19683-01/817-0684/6mgfg0ppv/index.html | CC-MAIN-2014-35 | refinedweb | 671 | 50.53 |
I have installed python 2 after installing python 3.And now when I executing my python file by clicking on file (not by cmd) its run python 2 ,but I want python 3.
I have tried script:
import sys
print (sys.version)
2.7.11
If the current default windows application for
.py files is currently
python2 (i.e.
C:\python27\python.exe) and not the new
py.exe launcher, you can just change the default windows application for the file type. Right-click on file -> properties -> click the change button for default application and change it to the python3 executable.
If the default application for the file is the
py.exe windows launcher, you can add a shebang line in your scripts to force the python executable and the launcher should respect it. Add this as the first line of your file
#!C:\python3\python.exe
If you're python3 installation path is different, make sure to use that instead. | https://codedump.io/share/U7FG4jTsnDt2/1/execute-python-3-not-python-2 | CC-MAIN-2017-34 | refinedweb | 161 | 84.47 |
Hello everyone. I am in a programming fundamentals class which we are using python. I am working on a Lab problem and really am not 100% sure how to do it.
The Problem:
A shipping company (Fast Freight Shipping Company) wants a program that asks the user to enter the weight of a package and then display the shipping charges. (This was the easy part)
Shipping Costs:
2 pounds or less = $1.10
over 2 but not more than 6 pounds = $2.20
over 6 but not more than 10 pounds = $3.70
over 10 = 3.80
My teacher added on to this saying we need to
Next, you will enhance this lab to include the following:
a. Include the name of the shipper and recipient.
b. Print a properly formatted invoice.
c. The shipper is required to purchase insurance on their package. The insurance
rates are based on the value of the contents of the package and are as follows:
Package Value Rate
0 – 500 3.99
501 – 2000 5.99
2000 10.99
2. The printed invoice must include the name of the shipper and recipient. As well, this
invoice will display the total of the shipping charge and the insurance cost.
So here is my code
def main(): #define shipping rates less_two = 1.10 twoPlus_six = 2.20 sixPlus_ten = 3.70 tenPlus = 3.80 shippingClass = 0 insuranceClass = 0 insurance1 = 3.99 insurance2 = 5.99 insurance3 = 10.99 #Getting name of shipper and recipient company = str(input('Please enter the name of the shipping company: ')) customer = str(input('Please enter the name of the customer :')) #get weight of the package weight = float(input('Enter the weight of the package: ')) insurance = shippingRate(weight, shippingClass) #shipping classification if weight <= 2.0: shippingClass = less_two elif weight > 2.0 and weight <= 6.0: shippingClass = twoPlus_six elif weight > 6.0 and weight <= 10.0: shippingClass = sixPlus_ten elif weight > 10.0: shippingClass = tenPlus else: print('Weight has to be a possitive number') print() if insurance <= 500 and insurance >= 0: insuranceClass = insurance1 elif insurance > 500 and insurance <= 2000: insuranceClass = insurance2 else: insuranceClass = insurance3 total = shippingRate + insuranceClass #New variables insurance = shippingRate(weight, shippingClass) shippingRate(weight, shippingClass) #Display print('The total with insurance will be $', total, '.', sep='') print() int(input('Press ENTER to end')) #calculating shipping cost. def shippingRate(poundage, classification): shipping_rate = poundage * classification print('The shipping will cost $', format(shipping_rate, '.2f'), '.', sep='') main()
I am getting this error
Traceback (most recent call last): File "F:\Python\ACC - Labs\Lab 3\lab3_shipping.py", line 50, in <module> main() File "F:\Python\ACC - Labs\Lab 3\lab3_shipping.py", line 31, in main if insurance <= 500 and insurance >= 0: TypeError: unorderable types: NoneType() <= int()
I'm sure you can see that I do not have a complete understanding of what I am doing yet so any help or direction would be great. Also I am sorry for the sloppy code. I did the first part and now trying to do the 2nd and it is just a mess right now. After I figure it out I was going to clean it up.
Thank you for anyone who spends time to help me.
Edited by nUmbdA: edit | https://www.daniweb.com/programming/software-development/threads/436830/python-typeerror-unorderable-types | CC-MAIN-2018-05 | refinedweb | 526 | 67.76 |
How To Use the Django One-Click Install Image for Ubuntu 16.04
Introduction
Django is a high-level Python framework for developing web applications rapidly. DigitalOcean's Django One-Click Application quickly deploys a preconfigured development environment to your VPS with Django, Nginx, Gunicorn, and Postgres.
Creating a Django Droplet
To create a Django Droplet, start on the Droplet creation page. In the Choose an image section, click the One-click apps tab and select the Django 1.8.7.
Once it's created, navigate to in your favorite browser to verify that Django is running. You'll see a page with the header It worked! Congratulations on your first Django-powered page.
You can now log into to your Droplet as root.
- ssh root@your_server_ip
Make sure to read the Message of the Day, which contains important information about your installation, like the username and password for both the Django user and the Postgres database.
Login output------------------------------------------------------------------------------- Thanks for using the DigitalOcean Django One-Click Application Image The "ufw" firewall is enabled. All ports except for 22, 80, and 443 are BLOCKED. Let's Encrypt has been pre-installed for you. If you have a domain name, and you will be using it with this 1-Click app, please see: Django is configured to use Postgres as the database back-end. You can use the following SFTP credentials to upload your files (using FileZilla/WinSCP/Rsync): Host: your_server_ip User: django Pass: 2fd21d69bb13890c960b965c8c88afb1 You can use the following Postgres database credentials: DB: django User: django Pass: 9853a37c3bc81bfc15f264de0faa9da5 Passwords have been saved to /root/.digitalocean_passwords
If you need to refer back to this later, you can find the information in the file
/etc/update-motd.d/99-one-click.
Configuration Details
The Django project is served by Gunicorn, which listens on
/home/django/gunicorn.socket. Gunicorn is proxied by Nginx, which listens on port
80.
The Nginx configuration file is located at
/etc/nginx/sites-enabled/django. If you rename the project folder, remember to change the path to your static files.
Gunicorn is started on boot by a Systemd file at
/etc/systemd/system/gunicorn.service. This Systemd script also sources a configuration file located at
/etc/gunicorn.d/gunicorn.py that sets the number of worker processes. You can find more information on configuring Gunicorn in the Gunicorn project's documentation.
The Django project itself is located at
/home/django/django_project.
Note: If you rename the project folder, you need to make a few configuration file updates. Specifically, you need to change the path to your static files in the Nginx configuration. You also need to update the
WorkingDirectory,
name, and
pythonpath in the Gunicorn Systemd file.
The project can be started, restarted, or stopped using the Gunicorn service. For instance, to restart the project after having made changes, run:
- systemctl restart gunicorn.service
While developing, it can be annoying to restart the server every time you make a change. In that case, you might want to use Django's built-in development server, which automatically detects changes:
- systemctl stop gunicorn.service
- python manage.py runserver 0.0.0.0:8000
You can then access the application through in a browser. This built-in server does not offer the best performance, so it's best practice to use the Gunicorn service for production.
Writing Your First Django App
There are many in-depth guides on writing Django applications, but this step will just get you up and running with a very basic Django app.
If you haven't already, log into your server as root.
ssh root@your_server_ip
Next, switch to the django user.
- su django
Move into the project directory.
- cd /home/django/django_project
Now create a new app called
hello.
python manage.py startapp hello
This will create a new directory named
hello in the folder
django_project. The whole directory tree will be structured like this:
. ├── django_project │ ├── __init__.py │ ├── __init__.pyc │ ├── settings.py │ ├── settings.pyc │ ├── settings.py.orig │ ├── urls.py │ ├── urls.pyc │ ├── wsgi.py │ └── wsgi.pyc ├── hello │ ├── admin.py │ ├── __init__.py │ ├── migrations │ │ ├── __init__.py │ ├── models.py │ ├── tests.py │ └── views.py └── manage.py
It's not necessary, but you can generate this output yourself with the
tree utility. Install it with
sudo apt-get install tree and then use
tree /home/django/django_project.
Next, create your first view. Open the file
hello/views.py for editing using
nano or your favorite text editor.
- nano hello/views.py
It'll look like this originally:
from django.shortcuts import render # Create your views here.
Modify it to match the following. This tells Django to return Hello, world! This is our first view. as an HTTP response.
from django.shortcuts import render from django.http import HttpResponse def index(request): return HttpResponse("Hello, world! This is our first view.")
Save and close the flie. Next, we need to connect the view we just created to a URL. To do this, open
django_project/urls.py for editing.
- nano django_project/urls.py
Add the following two lines to the file, which imports the view you just created and sets it to the default URL:
. . . from django.conf.urls import include, url from django.contrib import admin from hello import views urlpatterns = [ url(r'^$', views.index, name='index'), url(r'^admin/', include(admin.site.urls)), ]
Save and close the file, then log out of the django user and return to the root shell.
- exit
As root, restart the project.
- systemctl restart gunicorn.service
Now, if you reload your Droplet's IP address,, you'll see a page with Hello, world! This is our first view.
Next Steps
You're ready to start working with Django. From here, you can:
- Follow our Initial Server Setup guide to give
sudoprivileges to your user, lock down root login, and take other steps to make your VPS ready for production.
- Use Fabric to automate deployment and other administration tasks.
2 Comments | https://www.digitalocean.com/community/tutorials/how-to-use-the-django-one-click-install-image-for-ubuntu-16-04 | CC-MAIN-2017-30 | refinedweb | 984 | 59.7 |
Hi folks, I previously wrote to this list about a performance problem I was having with Twisted, Quixote, and (I thought) HTTP/1.1, which I erroneously thought was a problem in Twisted's ability to deal with HTTP/1.1... I've since spent lots of time digging, and first figured out that the problem wasn't really in Twisted (and it really didn't have anything to do with HTTP/1.1, though persistent connections did contribute. More accurately, the lack of persistent connections would mask the problem.), and then eventually figured out what the problem REALLY was. It was an odd little thing that had to do with Linux, Windows, network stacks, slow ACKs, and sending more packets than were needed. Well, I don't want to go into much more detail, because your time is valuable. First, for those that haven't heard of it, Quixote is a python based web publishing framework that doesn't include a web server. Instead, it can be published through a number of mechanisms: CGI, FastCGI, SCGI, or mod_python, plus it has interfaces for Twisted and Medusa. I think I may be missing one, but I'm not sure. It's home page is at We (the quixote-users folks) seem to have a lack of expertise in Twisted :) The interface between twisted and quixote: A twisted request object is used to create a quixote request object, quixote is called to publish the request, and then the output of quixote is wrapped into a producer which twisted then finishes handling. Actually, that's how it has been for quite some time, except for the producer bit. My modifications revolved around creating the producer class that (I think/hope) works well in the Twisted framework, and let's twisted publish it when it's ready (i.e., in it's event loop). Formerly, quixote's output was just pushed out through the twisted request object's write() method. Which could cause REALLY bad performance; the bug I was chasing. In many cases it did just fine, however. This was also just a generally bad idea, because, for instance, publishing a large file could consume large amounts of RAM until it was done being pushed over the wire. It's also worth mentioning that a quixote Stream object (noticable in the source) is a producer, but it uses the iterator protocol instead of .more() or resumeProducing(). I'm hoping that someone can take a look at the finished product (just the interface module) and say something like, "you're nuts! you're doing this all wrong!", or "yeah, this looks like the right general idea, except maybe this bit here...". Also, if anyone can share a brief one-liner or two about whether or not I should leave in the hooks for pb and threadable, I'd appreciate it (quixote is almost always run single threaded... Maybe just always...). I also changed the demo/test code at the bottom of the module from using the Application object to using the reactor. I'd appreciate any feedback on that and the SSL code (it's also new...) as well. If anyone should want to actually run this, it'll work with Quixote-1.0b1, and the previous 'stable' (I say that because it was the latest version for several months...) version 0.7a3. I wrote the interface against twisted 1.2.0, but I think it'll work with older versions. I just don't know how old. Oh, and if you wanna drop it in a quixote install, it lives as quixote.server.twisted_http Thanks in advance for any help, Jason Sibre -------------- next part -------------- #!/usr/bin/env python """ twist -- Demo of an HTTP server built on top of Twisted Python. """ __revision__ = "$Id: medusa_http.py 21221 2003-03-20 16:02:41Z akuchlin $" # based on qserv, created 2002/03/19, AMK # last mod 2003.03.24, Graham Fawcett # tested on Win32 / Twisted 0.18.0 / Quixote 0.6b5 # # version 0.2 -- 2003.03.24 11:07 PM # adds missing support for session management, and for # standard Quixote response headers (expires, date) # # modified 2004/04/10 jsibre # better support for Streams # wraps output (whether Stream or not) into twisted type producer. # modified to use reactor instead of Application (Appication # has been deprecated) import urllib from twisted.protocols import http from twisted.web import server from quixote.http_response import Stream # Imports for the TWProducer object from twisted.spread import pb from twisted.python import threadable from twisted.internet import abstract class QuixoteTWRequest(server.Request): def process(self): self.publisher = self.channel.factory.publisher environ = self.create_environment() ## this seek is important, it doesnt work without it ## (It doesn't matter for GETs, but POSTs will not ## work properly without it.) self.content.seek(0,0) qxrequest = self.publisher.create_request(self.content, environ) self.quixote_publish(qxrequest, environ) resp = qxrequest.response self.setResponseCode(resp.status_code) for hdr, value in resp.generate_headers(): self.setHeader(hdr, value) if resp.body is not None: TWProducer(resp.body, self) else: self.finish() def quixote_publish(self, qxrequest, env): """ Warning, this sidesteps the Publisher.publish method, Hope you didn't override it... """ pub = self.publisher output = pub.process_request(qxrequest, env) # don't write out the output, just set the response body # the calling method will do the rest. if output: qxrequest.response.set_body(output) pub._clear_request() def create_environment(self): """ Borrowed heavily from twisted.web.twcgi """ # Twisted doesn't decode the path for us, # so let's do it here. This is also # what medusa_http.py does, right or wrong. if '%' in self.path: self.path = urllib.unquote(self.path) serverName = self.getRequestHostname().split(':')[0] env = {"SERVER_SOFTWARE": server.version, "SERVER_NAME": serverName, "GATEWAY_INTERFACE": "CGI/1.1", "SERVER_PROTOCOL": self.clientproto, "SERVER_PORT": str(self.getHost()[2]), "REQUEST_METHOD": self.method, "SCRIPT_NAME": '', "SCRIPT_FILENAME": '', "REQUEST_URI": self.uri, "HTTPS": (self.isSecure() and 'on') or 'off', } client = self.getClient() if client is not None: env['REMOTE_HOST'] = client ip = self.getClientIP() if ip is not None: env['REMOTE_ADDR'] = ip xx, xx, remote_port = self.transport.getPeer() env['REMOTE_PORT'] = remote_port env["PATH_INFO"] = self.path qindex = self.uri.find('?') if qindex != -1: env['QUERY_STRING'] = self.uri[qindex+1:] else: env['QUERY_STRING'] = '' # Propogate HTTP headers for title, header in self.getAllHeaders().items(): envname = title.replace('-', '_').upper() if title not in ('content-type', 'content-length'): envname = "HTTP_" + envname env[envname] = header return env class TWProducer(pb.Viewable): """ A class to represent the transfer of data over the network. JES Note: This has more stuff in it than is minimally neccesary. However, since I'm no twisted guru, I built this by modifing twisted.web.static.FileTransfer. FileTransfer has stuff in it that I don't really understand, but know that I probably don't need. I'm leaving it in under the theory that if anyone ever needs that stuff (e.g. because they're running with multiple threads) it'll be MUCH easier for them if I had just left it in than if they have to figure out what needs to be in there. Furthermore, I notice no performance penalty for leaving it in. """ request = None def __init__(self, data, request): self.request = request self.data = "" self.size = 0 self.stream = None self.streamIter = None self.outputBufferSize = abstract.FileDescriptor.bufferSize if isinstance(data, Stream): # data could be a Stream self.stream = data self.streamIter = iter(data) self.size = data.length elif data: # data could be a string self.data = data self.size = len(data) else: # data could be None # We'll just leave self.data as "" pass request.registerProducer(self, 0) def resumeProducing(self): """ This is twisted's version of a producer's '.more()', or an iterator's '.next()'. That is, this function is responsible for returning some content. """ if not self.request: return if self.stream: # If we were provided a Stream, let's grab some data # and push it into our data buffer buffer = [self.data] bytesInBuffer = len(buffer[-1]) while bytesInBuffer < self.outputBufferSize: try: buffer.append(self.streamIter.next()) bytesInBuffer += len(buffer[-1]) except StopIteration: # We've exhausted the Stream, time to clean up. self.stream = None self.streamIter = None break self.data = "".join(buffer) if self.data: chunkSize = min(self.outputBufferSize, len(self.data)) data, self.data = self.data[:chunkSize], self.data[chunkSize:] else: data = "" if data: self.request.write(data) if not self.data: self.request.unregisterProducer() self.request.finish() self.request = None def pauseProducing(self): pass def stopProducing(self): self.data = "" self.request = None self.stream = None self.streamIter = None # Remotely relay producer interface. def view_resumeProducing(self, issuer): self.resumeProducing() def view_pauseProducing(self, issuer): self.pauseProducing() def view_stopProducing(self, issuer): self.stopProducing() synchronized = ['resumeProducing', 'stopProducing'] threadable.synchronize(TWProducer) class QuixoteFactory (http.HTTPFactory): def __init__(self, publisher): self.publisher = publisher http.HTTPFactory.__init__(self, None) def buildProtocol (self, addr): p = http.HTTPFactory.buildProtocol(self, addr) p.requestFactory = QuixoteTWRequest return p def run (): from twisted.internet import reactor from quixote import enable_ptl from quixote.publish import Publisher enable_ptl() import quixote.demo # Port this server will listen on http_port = 8080 namespace = quixote.demo # If you want SSL, make sure you have OpenSSL, # uncomment the follownig, and uncomment the # listenSSL() call below. ##from OpenSSL import SSL ##class ServerContextFactory: ## def getContext(self): ## ctx = SSL.Context(SSL.SSLv23_METHOD) ## ctx.use_certificate_file('/path/to/pem/encoded/ssl_cert_file') ## ctx.use_privatekey_file('/path/to/pem/encoded/ssl_key_file') ## return ctx publisher = Publisher(namespace) ##publisher.setup_logs() qf = QuixoteFactory(publisher) reactor.listenTCP(http_port, qf) ##reactor.listenSSL(http_port, qf, ServerContextFactory()) reactor.run() if __name__ == '__main__': run() | https://twistedmatrix.com/pipermail/twisted-web/2004-April/000311.html | CC-MAIN-2018-26 | refinedweb | 1,571 | 53.98 |
Warnings and ErrorsBy Hugo Giraudel
Warnings and errors are the way a program has to communicate with a developer or user. For instance, when you inadvertently introduce a syntax error in your code, the program/language (whatever it is) is likely to throw an error explaining your mistake and how you should fix it.
Sass providing easy ways to build public APIs (essentially using functions and mixins), there is nothing surprising in having a way to emit warnings and errors as part of the language. It is especially useful when checking arguments from mixins and functions.
Both warnings and errors are emitted in the current output channel. When compiling Sass by hand or with a CLI-based tool such as Grunt or Gulp, the output stream is the console. For tools including user interfaces like Codekit or Prepros, it is likely that they catch and display them as part of their interface. Online playgrounds like CodePen and SassMeister manage to catch errors, so don’t be alarmed if you cannot test warnings in there.
Warnings
Warnings, through the
@warn directive, are messages that are displayed in the current output stream without stopping the execution process. They come in handy when wanting to let the user know about something going on with the code, for instance when a mixin assumes something, which might be wrong or incorrect.
The warning directive could not be any easier: it is the
@warn token followed by a string. There is no configuration whatsoever or extra options.
@warn 'Ohai! I am a warning message.';
Errors
Errors, through the
@error directive, are messages that are displayed in the current output stream, but unlike warnings, they do stop the compilation. When Sass meets an
@error directive, it prints the given message and stops execution right away.
The syntax for the
@error directive is the exact same as the one from
@warn: the
@error token followed by a string.
@error 'Uh-ho, something is going wrong.';
Example
One of the best examples I have encountered for
@warn lives in the Sass-MQ library. It is meant to let the developer know that the
mq-px2em mixin is assuming that a unitless value should be considered as a pixel value.
@function mq-px2em($px, $base-font-size: $mq-base-font-size) { @if unitless($px) { @warn 'Assuming #{$px} to be in pixels, attempting to convert it into pixels.'; @return mq-px2em($px + 0px); } @else if unit($px) == em { @return $px; } @return ($px / $base-font-size) * 1em; }
I think this is a nice addition because the program is effectively able to keep going, but something might possibly go wrong. The user deserves to know, hence the warning.
Regarding errors, the simplest use case would be parameter validation in a mixin. For instance:
@mixin size($width, $height: $width) { @if not is-length($width) or not is-length($height) { @error '`size` mixin is expecting lengths.'; } width: $width; height: $height; }
Engine compatibility
Sass warnings are fully compatible across all Sass engines and there is no known bug to this day about their implementation. On the other hand, Sass errors are only supported since Sass 3.4. Using the
@error directive in an unsupported environment will result in a parsing error.
One way to handle errors in unsupported (pre 3.4) environments is to wrap tests in functions, warn then return null.
@function do-something($value) { @if not $value { @warn "Error message"; @return null; } // Function core when `$value` is okay }
One downside we could raise about this way of doing is that it does not actually stop Sass from processing.
If we want to trigger an error so that Sass completely stops, Eric Suzanne found a neat little solution. The idea is to create an empty function with no return statement.
@function error() {}
When you want to throw an error, you simply call this function. Because Sass expects a
@return statement in a function, it will throw the following error:
Function error finished without @return
That does not tell much about the error, but at least Sass is not running anymore. I am not a big fan of this technique since there is no way of outputing a specific message, but it has the benefit of working fine in unsupported environment. | https://www.sitepoint.com/sass-reference/warnings-and-errors/ | CC-MAIN-2017-04 | refinedweb | 705 | 60.35 |
BeakerX's table and plot widgets both support the scroll wheel. For tables, the scroll wheel scrolls vertically. For plots, the scroll wheel zooms. However, for the notebook as a whole, the scroll wheel also has a meaning, to scroll the notebook. So there's an ambiguity, and the UI needs a way to resolve it and decide where the scroll events go.
BeakerX's approach starts with idea of focus, a widget that would be the target of any user commands. And indeed, in BeakerX when you click on a table or a plot, it gets the focus. This is represented by a green outline around the widget. The outline is styled to match the blue one that Jupyter uses to indicate the current cell.
The widget will keep the focus as long as the mouse remains over it. Feel free to interact with the widget, clicking, scrolling, and zooming away. When you are done and move the mouse elsewhere, the green outline will vanish. Focus is returned to the notebook, and the next scroll event will go to the page, and not to a widget.
This keyless blur is a BeakerX innovation.
Try it with the widgets below:
import pandas as pd from beakerx import *
TableDisplay([{'y1':4, 'm3':2, 'z2':1}, {'m3':4, 'z2':2}])
pd.read_csv('../resources/data/interest-rates.csv')
rng = pd.date_range('1/1/2011', periods=1000, freq='H') ts = pd.Series(np.random.randn(len(rng)), index=rng) df = pd.DataFrame(ts, columns=['mV']) SimpleTimePlot(df, ['mV']) | https://nbviewer.jupyter.org/github/twosigma/beakerx/blob/0.21.0/doc/python/ScrollZoom.ipynb | CC-MAIN-2021-39 | refinedweb | 254 | 76.52 |
Introduction and Base Project .
Before starting here if you are familiar with creating Django project feel free to skip this tutorial and start the next.
If not just make sure you have installed python and Django in your system by follow command, if you want more detail follow Django Website
pip install django
above command will install Django application in system, now we need to create project directory where we can store our code and manage to run, copy below code, and run in your CMD/Terminal
django-admin startproject langtests
This will create the langtests directory in the current working directory. To create the languages app, cd into langtests and execute:
python3 manage.py startapp languages
The directory structure of our project will be the following:
Here, langtests contains 1 manage.py file, one sub directory which contains 4 python files with extension .py file such as settings.py, urls.py, wsgi.py and _init.py. and last i.e. third one is 2nd sub folder which is our application i.e., languages, this contains 5 python files and one folder with name migrations, migrations folder store all db file info will discuss this in other post, now look at 5 diff files namely __init_.py, admin.py, models.py, tests.py, and views.py,
Overview of each file:
_init_.py: An empty file that tells Python that this directory should be considered a Python package.
settings.py: Contains all the information of projects such as Database, Template, Middleware, Third Party, i18N, Static information and many more. Settings/configuration for this Django project. Django settings will tell you all about how settings work.
urls.py: The URL (Uniform Resource Locator) declarations for this Django project; a “table of contents” of your Django-powered site. You can read more about URLs (Uniform Resource Locators) in URL dispatcher.
wsgi.py: An entry-point for WSGI-compatible web servers to serve your project. See How to deploy with WSGI for more details.
If you have worked with Django, you already know what these files are. If not, I recommend following the Django tutorial as it will allow you to understand most of the basic concepts.
Let get touch some code introductory code part:
Let create a simple view for our 'languages' app's index page. Open languages/views.py and paste the code below:
from django.http import HttpResponse def index(request): output = 'Welcome to my site.' return HttpResponse(output)
This view will now be mapped to a URL. Created a file named urls.py in the languages directory that contains the following code:
from django.contrib import admin from django.urls import path urlpatterns = [ path('admin/', admin.site.urls), ]
Finally, from the root URLconf, map a URL. Include the URLs for the language apps in urlpatterns in langtests/urls.py as follows:
from django.conf.urls import include from django.contrib import admin from django.urls import path urlpatterns = [ path('admin/', admin.site.urls), path('languages/', include('languages.urls')), ]
To test the view, start the Django server using below command
python3 manage.py runserver
and navigate to in your browser.
Disclaimer: This is a personal [blog, post, statement, opinion]. The views and opinions expressed here are only those of the author and do not represent those of any organization or any individual with whom the author may be associated, professionally or personally.
Discussion (0) | https://dev.to/epampythonpractice/django-internationalization-tutorial-1-introduction-and-base-project-3e8o | CC-MAIN-2022-33 | refinedweb | 563 | 58.99 |
What the approach below, I’ve used Python and SQLAlchemy to build up “big SQL” from small, unit-testable expressions. This keeps the codebase manageable without sacrificing the – often quite impressive – analytics performance obtainable with ordinary databases.
“Big SQL” resists being teased apart into independent, manageable functions because in defining inputs and outputs we have to let them be materialized somewhere. Persisting any bulky intermediate results, however, is going to clobber performance.
The example (code here) shows one way of taming complex SQL-based logic:
- SQLAlchemy Core is used to create modular, testable expression elements that are developed separately but composed and executed efficiently at runtime
- Unit Testing is supported by turning literal values in fixtures into input tables on-the-fly
- SQLAlchemy ORM is used to add an entity mapping for results so they can be rendered as JSON etc.
The demo code doesn’t do very impressive analytics – it performs a simple “categorising pivot” to summarise sales of items across different price bands. Extending this example for logistic regression or deep learning is left as an exercise…
Limits of SQLAlchemy ORM
For those who have encountered SQLAlchemy mostly as an ORM, this extract from the example will be familiar – constructing a query for some entities that already exist:
.join(m.Book) \
.join(m.BookSale) \
.join(m.Transaction) \
.filter(m.Genre.name == 'Art') \
.filter(and_(m.Transaction.create_date >= start_date, m.Transaction.create_date < datetime(2016, 2, 1)))
The query returns a Genre entity which, thanks to Alchemy’s lazy loading of all the related entities, will magically appear with the contained books and sales nested within it when referenced.
This is fine for directly accessing the existing model and doing simple things like COUNT(*). However, it can’t be used to transform and aggregate in order to derive any new entities. To create the kind of pivot used in the example means creating new table-like entities for the categories and aggregates.
SQLAlchemy Core
To build up the summary query, we start with a similar query to the above, this time in SQLAlchemy Core.
return select([
booksales_te.c.book_id,
books_te.c.price,
books_te.c.genre_id,
transactions_te.c.id.label('transaction_id'),
transactions_te.c.create_date]) \
.select_from(
booksales_te
.join(books_te)
.join(genres_te)
.join(transactions_te)
) \
.where(between(transactions_te.c.create_date, bindparam('start_date'), bindparam('end_date')))
The joins and where-clause filters are different from ORM – just different enough to be confusing. The SQL-dressed-as-Python syntax can be offputting at first – it’s more cluttered in some ways so the promised maintainability benefit is mostly in the bigger picture.
Table expressions as parameters
The source entities (Book, BookSale etc.) are passed as “table expression” parameters rather than being mentioned explicitly. This is so any derived expression can be mocked by a literal value in a test.
Note that this function returns the expression for finding the result, not the result itself. This is what we want when we’re composing bigger expressions from small ones.
Bind variable parameters
The expression being built up can have real parameters of its own, of course. These are the bind variables, such as the start_date above, which are supplied when it’s executed. We avoid passing these in along with the table expressions to allow the final expressions to be cached and reused (both cached within Alchemy and in the database) as the cost of recreating it each time is substantial.
There doesn’t seem to be a neat way to compose parameters along with expressions as they all share the same namespace so crafting anything here would probably be elaborate enough obscure the rest of the example.
Composing table expressions (TEs)
Following the pattern where all functions take TEs as input and return a new TE, it’s obviously pretty easy to apply a MAX function on the relevant sales, the output of the above function being given to this one:
return select([
func.max(sales_for_period_te.c.price).label('max_price')
])
Unit testing
The example aggregates the sales figures by price band and by genre.
(The bands are calculated dynamically based on the number required, the max. price of the relevant sales and the rounding unit to be applied to the from- and to-price limits, e.g. rounding to 50 pence).
So, after max. price, the next ingredient to calculate is the increment or price difference to be used between bands:
return select([
func.greatest(
(max_price_te.c.max_price + (rounding_unit * max_num_bands - 1)) / rounding_unit / max_num_bands
* rounding_unit,
rounding_unit).label('incr')
])
Here we have a function that’s worth unit testing. In this case, I’ve chosen to throw a range of inputs at it, including base and edge cases, and check the results, all in one test. This at least keeps things concise:
for fixture in [
{'max_price_in': {'max_price': 1}, 'result': [{'increment': 50}]},
{'max_price_in': {'max_price': 50}, 'result': [{'increment': 50}]},
{'max_price_in': {'max_price': 99}, 'result': [{'increment': 50}]},
{'max_price_in': {'max_price': 499}, 'result': [{'increment': 100}]},
{'max_price_in': {'max_price': 500}, 'result': [{'increment': 100}]},
{'max_price_in': {'max_price': 501}, 'result': [{'increment': 150}]},
{'max_price_in': {'max_price': 1001}, 'result': [{'increment': 250}]},
{'max_price_in': {'max_price': 10001}, 'result': [{'increment': 2050}]},
]:
results = self.exec(get_increment_te(
TestTable('max_price', [fixture['max_price_in']]),
50,
5))
self.assertEqual(results, fixture['result'])
Table expression inputs are provided as literal dicts which are turned into real – if transient – tables by the TestTable class. The column names and types are inferred from the dictionary keys and values.
The table expression output is executed and the results turned into another plain dict. (Remember, we’re just getting row-like results here, not the mapped entities retrieved in ORM).
The important thing of course is that we’re able to test any expression, including intermediate ones hiding in the middle of a big compound query.
The final expression
There is sometimes a bit of voodoo involved in coaxing Alchemy to compose expressions as you’d like, at least in version 1.0. Essentially we can choose whether to include a subexpression as a common table expression (CTE), so reusing both it and its results, or as a subquery, so reusing the same expression in a new context. Whether the new context ends up producing different results will depend on whether it’s correlated to the enclosing query – obviously if the results are going to be the same the subquery should be factored out into a CTE.
In the following, the sales history is retrieved once as the sales CTE – this is definitely something that should only be done once:
m.BookSale.__table__,
m.Book.__table__,
m.Genre.__table__,
m.Transaction.__table__).cte('sales')
max_price_cte = get_max_price_te(
sales_for_period_cte) \
.cte('max_price')
increment_cte = get_increment_te(
max_price_cte,
rounding_unit=50,
max_num_bands=5) \
.cte('incr')
Deriving the price bands is also non-trivial so this expression also has a unit test. (Although it looks like part of the expression, the max_num_bands value ends up being a bind variable, just a static one):
max_price_cte,
increment_cte,
max_num_bands=5) \
.cte('price_bands')
We now map the sales to the bands and, finally, perform the aggregation by band and genre to give our completed expression:
sales_for_period_cte,
price_bands_te).cte('sales_in_bands')
return get_total_sales_by_band_and_genre_te(
sales_in_bands_cte)
Mapping results to classes
The above query will get our results, and quite efficiently too. So what’s the point of trying to retrofit an entity class on this result set? The reasons are more to do with graphs than OO:
- Generating JSON, e.g. via Marshmallow, relies on serializing a graph of dicts.
- Results-as-classes means that foreign key values can be made to work as relationships, just as with a normal mapping
To map to a class we’re going to have to provide the information ORM can’t figure out for itself, such as keys and relationships. We don’t have to define columns though – it’s able to get these from the query expression.
__table__ = mapped_te().alias('total_sales')
# Create a primary key for anything needing object identity, e.g. Marshmallow
__mapper_args__ = {
'primary_key': [__table__.c.price_band, __table__.c.genre_id]
}
genre = relationship(m.Genre, primaryjoin="SalesByPriceBandAndGenre.genre_id == Genre.id", viewonly=True)
The compound primary key might not be a great idea – in other situations generating a row number or other single surrogate key would probably be better.
Since the columns that the mapper digs out of the query won’t include foreign key attributes we have to provide that explicitly in the relationship. The viewonly flag stops Alchemy getting into trouble by trying to track changes.
Should this be allowed?
This “classical mapping” – mentioning the
SELECTable in the __table__ definition, is discouraged in the Alchemy docs. The recommendation there is to use a column_property or similar small tweak to an existing mapping, but this kind of thing can’t handle the kind of transformations needed for analytics. (I found that trying to shoehorn even quite a simple calculation into this model – a percentile rank – didn’t really work).
Other uses
This pattern can obviously apply whenever a query threatens to become too complex to maintain as a single statement or too slow to be run as separate statements. This applies particularly when exploiting the stats-over-advanced-groupings (cubes etc.) now in Postgres.
It can also start to influence how data models are designed. For example, returning a graph with nodes derived from different entities would usually need careful implementation in the model, deriving entities from some Node class and requiring the full ORM machinery to be thrown in. Here, where diverse source tables can be transformed into a common structure without too much trouble, and that structure then queried using WITH RECURSIVE…. UNION ALL and mapped to a class, there’s much less need to decide up-front what analysis views of the data are going to be needed. | https://tech.labs.oliverwyman.com/blog/2016/08/23/analytics-with-sqlalchemy/ | CC-MAIN-2020-05 | refinedweb | 1,595 | 51.38 |
Image: 1
<o:p>
Introduction
I still remember it was a neatly done report that got me my first pay raise (every one likes pay raise right?). Ever since, I am very passionate about report writing. In this article, I will guide you through step by step how to create a simple report using MS Reporting Services 2005; and host it with Smart Client application.
So, are you ready to get your share of pay raise? Why not! Who knows, your neatly done report can just do that.
Prior to this article, I wrote three others, which were addressing different issues related to the reporting services. However, all of them were targeted towards the intermediate-advance level audience. From all the feedbacks I received, one was common, quite a few of you asked for an article which will be specially geared towards the novice-beginner level.<o:p>
I assume the reader has the basic understanding of the Visual Studio 2005 IDE and comfortable with writing code using C#. You don’t have to know the MS Reporting Services to understand this article; although, any pervious experience with the report writing would help to fast track yourself.
Although, I am calling this article 101, my intention is to adopt the applied approach rather then discussing each and every topic associated with reporting services. I am touching on most common aspect of report designing with most commonly used controls. I would strongly encourage you to please go through MSDN documentation for more detailed information.
*Updated to add Access Database interface.
Let’s roll up our sleeves, it’s reporting time<o:p>
Please take a look at Image 1. How complex is that report? How much time do you think it will take to create such a report? Well, as for complexity, it is a simple report extracted out of source NorthWind->Products (Sql Server 2000) and lists all the products information with summary totals.<o:p>
About time, obviously, it should not take you hours to do it. About R&D and trial & error time, I leave that to you; dig down deep; the deeper you will explore, the better the treasure you will find.<o:p>
Here it is, the million $ question: How to start? What is going to be the first step?<o:p>
Often, it is very easy to find out what should be the first step. Have you seen a house built before the foundation? No! So, have I given you a hint here? Sure, we must first develop the Smart Client to host our report.<o:p>
Step 1: Create Windows Application Project<o:p>
Please do the following to create a Windows Application (Smart Client) project:<o:p>.<o:p>
Please update following properties of Form1:
Form1.Text = “MS Reporting Services 101 with Smart Client”
Form1.Size = 750, 300
Feel free to change any other property of Form1 as per your requirement.<o:p>
Step 2: Add Report Viewer to the Form<o:p>
So, what is report viewer? As we need the DVD player to play a DVD; same goes with the reports, we need a report viewer to have the report preview done.<o:p>
For all those who are brand.<o:p>
Please perform following actions to setup Report Viewer Control on Form1:<o:p>
After step 1 and step 2, your project should look as per Image 2.<o:<o:lock v:<o:p>
Image: 2<o:p>
Step 3: Add DataSet to the Project <o:p>
Hurray! We are done with the foundation. It’s time to put walls around the foundation; eventually these walls will hold the doors and windows of your home. DataSet is just that for Report Viewer, it holds and provides the raw data from data source to be processed and ready to be outputted on the Smart Client interface.<o:p>
Following step is required to have DataSet added to project: <o:p>
Let’s add a DataTable to our newly created DataSet. DataTable is essential to load the reporting data; we will use the information from DataSet/DataTable while designing the report.<o:p>
Following step are required to have DataTable added to DataSet(dsProduct): <o:p>
<o:p>
Image: 3<o:p>
Let’s start adding columns to DataTable(dtProductList). Your designer screen should look like Image 4. Right-click on dtProductList and select Add -> Column to start adding columns to DataTable.<o:p>
Image: 4<o:p>
Please repeat the action for following columns:<o:p>
As you are adding columns, by default it is string data type. Please go to properties windows after selecting column to change it from String to Integer or Double.<o:p>
Please see image 5. Your DataTable should look the same. Also, you can see the properties window to change the data type.<o:p>
Image: 5<o:p>
Have you heard of “Typed DataSet”? If not, then we have just created a Typed DataSet here. Please consult online help to know more about Typed DataSet.<o:p>
Step 4: Add Report to the Project <o:p>
Alright, so far we created the project; added Report Viewer and DataSet. Now, it is the time to deal with star of the show! Let’s create that neat report. <o:p>
Following steps is required to have Report (rptProductList.rdlc): <o:p>
Typically, after add action is finished your screen should be similar to Image 6. When a report is added to project, it is ready to use the DataSet for designing. <o:p>
Image: 6<o:p>
Weather this is your very first report or you are a reporting junkie like me; we have to deal with the most basic building blocks of report writing, which is: Header, Body and Footer.<o:p>
Typically, reports are designed with specific page size and layout in mind. Our report is Letter size and Portrait layout. You can explore various properties attached to report layout by right clicking anywhere on open designer surface and select properties.<o:p>
It is always advisable to draw a prototype of your report on paper, before you start the design attempt. As you can see in Image 1, we have Report Name and Report Date in header section. The body section has the product list information together with summary totals; and footer carries the Page Numbers.<o:p>
Let’s start working on Page Header:<o:p>
When new report is added to project, by default, all you will see in report designer is the body section. Right click on report designer surface anywhere other then body and select Page Header. This will add header to report. Feel free to adjust the height of header and body section. See Image 7, I have reduced the height of body and increased the height of the header.<o:p>
Image: 7<o:p>
While inside the report designer, if you explore the Toolbox, you will see variety of controls which can be used to design report. For our example, we will use, TextBox, Line and Table control. I would encourage you to go through the online documents if you need detailed information for all available controls.<o:p>
Header Section<o:p>
Let’s start designing the header. We will start by dragging two TextBox and dropping on header section. Texbox can show both static and dynamic data. Line control is used to separate header from body section.<o:p>
After dropping controls over report designer surface, you can control the look and feel by changing associated properties. We will designate one TextBox to report title and another one to show current date. We can directly type static text into TextBox control by selecting it and start typing inside.<o:p>
Please change following properties of Title TextBox:
Value = “Product List”
Color = Purple (you like purpule too for title right?)
Please change following properties of Date TextBox:
Value = ="Run Data: " & Today
Please note Value property for Date TextBox starts with a “=” sign. This is not a simple static text, instead it is an expression. This expression is a result of string “Run Date” and VB.NET script keyword Today (to get current system date).<o:p>
You can specify desired names to all objects in report; I choose to stay with default name for most of the controls, however, for demo purpose I did specified “txtTitle” to Title TextBox.<o:p>
Please refer to Image 8; your finished design for header should look relatively same.<o:p>
Image: 8<o:p>
Body Section<o:p>
Body section, also referred as details section, is by far the most important part of the report. As you can see when we added the report to the project; body section was added for us automatically. All we have to do is start putting controls on it.<o:p>
Traditionally, Body section is used to display details (in our example it is product information) usually more then one row of information. Body section can expand as per the growth of reported data. Often report is designed with intention to have one physical page (Letter/A4 etc.) output; in this case Body section still can be used to display information.<o:p>
Out of Table, Matrix and List, the three most commonly used control on Body section; we will use Table control for our example. All three can repeat information; Matrix goes a step further and even produces Pivot output.<o:p>
Let’s drag and drop Table control on body section of report designer surface. If you notice, this action will produce a table with three rows and three columns. You may have also noticed that center column also has been labeled: Header, Detail and Footer.<o:p>
Now, don’t be surprise if I tell you that Table control is nothing but bunch of TextBox attached together! Yes, each and every Cell in Table is like TextBox, which means you can either type static text on it or specify a dynamic expression.<o:p>
Before we start designing the Body section, let’s add two more columns (remember we have total of five columns in the report). Adding columns is easy; please do the following to get new columns added to report:<o:p>
Make sure your report resemble to Image 9. Feel free to adjust the width of column based on length of data it will hold. <o:p>
Image: 9<o:p>
I am sure majority of us have used Excel or something similar; think of same for Table control as mini worksheet. We can apply borders, change font of individual cell etc. etc. So, all you have to do is to think of desired formatting theme and start applying it.<o:p>
Starting with first column to the last one, please click on individual column header cell and type the following text:
Header 1: “Product Name”
Header 2: “Packaging”
Header 3: “Unit Price”
Header 4: “Units in Stock”
Header 5: “Stock Value”
Let’s continue to do so the same for Detail section, here one thing to know is, instead of text we have to type the expression which is columns from dsProduct.dtProductInfo. You can either type the expression or simply drag and drop the column from Data Sources Toolbar (see Image 7 on left side).
In case if you decide to type it out, starting with first column to the last one, please click Units in Stock and Unit Value.
Tip: If you drag and drop the column to detail section of Table control, it will try to add column header automatically, if column header is empty.
Finally, let’s add summary total in footer section of Table control. Please make sure to select footer cell on column 4 and 5 inside Body section and type following text:
Cell 4: “Total Value:”
Cell 5: “=SUM(Fields!UnitsInStock.Value * Fields!UnitPrice.Value)”
Please check the expression in Cell 5; I am using a built-in function SUM() to find out total stock value of all the products listed in report.<o:p>
Footer Section<o:p>
Before we start writing some cool C# code to bring our report alive, let’s finish the report footer section. As we have added report header earlier, similarly we have to right click on open report designer surface and select Page Footer (see Image 7). <o:p>
Drag and drop a Line and TexBox control on Footer section. Please type the following expression inside TextBox:
Value: ="Page: " & Globals!PageNumber & "/" & Globals!TotalPages
As you can see I have used PageNumber and TotalPages, both are Global variables maintained by the reporting engine.
Tip: Make sure all expression you type must start with “=” in front of it.
Please make sure your report looks like Image 10. As you can see I have introduced some color and right alignment to numeric data etc. Feel free to try out all the different formatting options, just think of Table control as mini spreadsheet with columns and rows and now you know all the formatting you can try on them.<o:p>
Image: 10<o:p>
Expression Builder<o:p>
Expression builder is a very powerful feature of Reporting Services. As you can see in Image 11, Stock Value is calculated with the help of SUM function. All fields in DataSet can be access with “Fields!” keyword. <o:p>
Image: 11<o:p>
Step 5: Let’s write some C# code to bring life to our report<o:p>
Phew… I hope you guys are not exhausted already. Hang in there; we are on last step now. It’s like we have waited for that long nine months and the time has come to witness the miracle of birth.<o:p>
From solution explorer, select Form1. Right click on surface of form and select View Code.
using System.Data.SqlClient;<o:p>
using Microsoft.Reporting.WinForms;<o:p>
Make sure the Form1_Load event has following code:
private void Form1_Load(object sender, EventArgs e)<o:p>
{<o:p>
//declare connection string<o:p>
string cnString = @"(local); Initial Catalog=northwind;" +<o:p>
"User Id=northwind;Password=northwind";<o:p>
//use following if you use standard security<o:p>
//string cnString = @"Data Source=(local);Initial <o:p>
//Catalog=northwind; Integrated Security=SSPI";<o:p>
//declare Connection, command and other related objects<o:p>
SqlConnection conReport = new SqlConnection(cnString);<o:p>
SqlCommand cmdReport = new SqlCommand();<o:p>
SqlDataReader drReport;<o:p>
DataSet dsReport = new dsProduct();<o:p>
try<o:p>
{<o:p>
//open connection<o:p>
conReport.Open();<o:p>
//prepare connection object to get the data through reader and<o:p>
populate into dataset<o:p>
cmdReport.CommandType = CommandType.Text;<o:p>
cmdReport.Connection = conReport;<o:p>
cmdReport.CommandText = "Select TOP 5 * FROM<o:p>
Products Order By ProductName";<o:p>
//read data from command object<o:p>
drReport = cmdReport.ExecuteReader();<o:p>
//new cool thing with ADO.NET... load data directly from reader<o:p>
to dataset<o:p>
dsReport.Tables[0].Load(drReport);<o:p>
//close reader and connection<o:p>
drReport.Close();<o:p>
conReport.Close();<o:p>
//provide local report information to viewer<o:p>
rpvAbraKaDabra.LocalReport.ReportEmbeddedResource = <o:p>
"rsWin101.rptProductList.rdlc";<o:p>
//prepare report data source<o:p>
ReportDataSource rds = new ReportDataSource();<o:p>
rds.Name = "dsProduct_dtProductList";<o:p>
rds.Value = dsReport.Tables[0];<o:p>
rpvAbraKaDabra.LocalReport.DataSources.Add(rds);<o:p>
//load report viewer<o:p>
rpvAbraKaDabra.RefreshReport();<o:p>
}<o:p>
catch (Exception ex)<o:p>
//display generic error message back to user<o:p>
MessageBox.Show(ex.Message);<o:p>
finally<o:p>
//check if connection is still open then attempt to close it<o:p>
if (conReport.State == ConnectionState.Open)<o:p>
{<o:p>
conReport.Close();<o:p>
}<o:p>
}<o:p>
You might be wondering why I have used “TOP 5” for select query; the reason is, I wanted to limit the output so that I can show you summary total in Image 1.
Tip: Name property of ReportDataSource object should be always “DataSet_DataTable”.
Can I use Access instead of SQL Server 2000?
Yes, you can use the Access database. Please make sure the following changes are applied to the above mentioned code to get the data reported from NorthWind Access Database.
Although Northwind database comes with the Access database installation; in case if you don’t have it then you can get it from here:
Revised code should look like the following:
using System.Data.OleDb;<o:p>
string cnString = @"Provider=Microsoft.Jet.OLEDB.4.0;<o:p>
Data Source=c:\nwind.mdb;User Id=admin;Password=;";<o:p>
OleDbConnection conReport = new OleDbConnection(cnString);<o:p>
OleDbCommand cmdReport = new OleDbCommand();<o:p>
OleDbDataReader drReport;<o:p>
//prepare connection object to get the data through<o:p>
reader and populate into dataset<o:p>
Products Order By ProductName";<o:p>
//new cool thing with ADO.NET... load data directly<o:p>
from reader to dataset<o:p>
conReport.Close();<o:p>
rpvAbraKaDabra.LocalReport.ReportEmbeddedResource =<o:p>
rds.Value = dsReport.Tables[0];<o:p>
MessageBox.Show(ex.Message);<o:p>
conReport.Close();<o:p>
Conclusion<o:p>
Although, I tried to keep the language of this article as simple as possible; however, please feel free to get back to me if you need any further clarification. I consider myself a budding author; I have to learn a lot; it is the reader like you, who has always helped me to improve my writing.<o:p>
I am looking forward to receive any comments/suggestion you have for me.
Thank you for reading; I sincerely hope this article will help you a bit or two to know reporting services better through my applied approach.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
reportViewer1.LocalReport.ReportEmbeddedResource = "{appname}.Report.rdlc";
MemoryStream mStream = new MemoryStream();
mStream.Write(Resource1.Report, 0, Resource1.Report.Length - 1);
reportViewer1.LocalReport.LoadReportDefinition(mStream);
... the ultimate solution
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/articles/15318/applied-ms-reporting-services-using-smart-clie | CC-MAIN-2016-50 | refinedweb | 3,058 | 63.9 |
Hi On Fri, Jun 22, 2007 at 10:08:58PM -0400, Ronald S. Bultje wrote: > Hi, > > On 6/22/07, Ramiro Ribeiro Polla <ramiro at lisha.ufsc.br> wrote: > > > >It no longer applies. Could you send an updated patch? > >And please svn diff it from the source folder. > > > I think it's common to use a source-folder patch (i.e. one that is applied > with -p1 from the source folder)? At least that is kernel-style... Anyone > else wants me to switch to -p0 (does anyone really care?)? > > New patch with get_bits_long for oggparsetheora, removed the (offset) cast > in ffm.c attached. I left the get_pts() in the mpeg reader as-is, if > everyone hates it I'll change it... [...] > @@ -694,15 +694,9 @@ > > static int64_t get_pts(const uint8_t *p) > { > - int64_t pts; > - int val; > - > - pts = (int64_t)((p[0] >> 1) & 0x07) << 30; > - val = (p[1] << 8) | p[2]; > - pts |= (int64_t)(val >> 1) << 15; > - val = (p[3] << 8) | p[4]; > - pts |= (int64_t)(val >> 1); > - return pts; > + return ((int64_t)((p[0] >> 1) & 0x07) << 30) | > + ((AV_RB16(p + 1) >> 1) << 15) | > + (AV_RB16(p + 3) >> 1); > } i: <> | http://ffmpeg.org/pipermail/ffmpeg-devel/2007-June/031191.html | CC-MAIN-2015-06 | refinedweb | 182 | 83.86 |
A), is the standard comparative measure for capital costs used in energy industries. The specific Overnight Capital Costs used include:
- Civil and structural costs
- Mechanical equipment supply and installation
- Electrical and instrumentation and control
- Project indirect costs
- Other owners costs: design studies, legal fees, insurance costs, property taxes and local electrical linkages to the Grid.
However fickle, as in the week in July 2014, clearly shown above, where Wind-Power input across Germany was close to zero for several days. Similarly an established high pressure system, with little wind over the whole of Northern Europe is a common occurrence in winter months, when electricity demand is at its highest..
192 thoughts on “Renewable Energy – Solar and Wind-Power: capital costs and effectiveness compared”
Now, that’s all very true but it doesn’t include the costs of the externalities.
The justification for renewables is that they save money that would be spent coping with climate change.
That may not be a strong argument but it is the justification that is used.
When the argument in favor of renewables is predicated on CO2 emissions substantially affecting the climate system, there’s a hell of a lot of proving models to do before we commit to them as our main source of power.
Come on man. Green scams have made almost difference in the yearly world increase in CO2 and have only resulted in small reductions in the countries where the renewables are installed. There is therefore almost no reduction in ‘climate change’ due to spending trillions of dollars on the green scams.
What climate change in the last 18 years?
And since the externalities are fabricated in both extent and cost, you only underscore how much of a scam the climate industry is getting away with.
But this the lie behind it all.
Since renewables require 100% backup from conventionally fossil fuel powered generation, and since such generation when used in ramp up/ramp down mode uses just as much fuel as it would if it were run at steady full output, there is in practice no reduction in CO2 emissions. The only reason that there has been a reduction in UK CO2 emissions is because the spare capacity/surplus capicity of the UK national grid has been reduced down from about 16% to 6 to 8%. If the UK had not installed any wind or solar, and simply scaled back surplus/safety capacity down from about 16% to about 8% there would have been the same reduction in CO2 emissions.
The is not difficult to understand since the position is similar to fuel usage in a car. A car uses least fuel when run at a steady freeway/motorway speed (say 56mph) and uses most fuel when run in urban/in town usage, with stop/start acceleration even though the car may then be averaging only about 15 to 20mph. Considerable fuel is used in the ramp up process.
“The justification for renewables is that they save money that would be spent coping with climate change.”
WHAT saved money?!
Even Zuckerburg’s scientists agree: : Google: renewables “simply won’t work”
Nov 21, 2014
Energy: solar Energy: wind
Via The Register we learn that some of Google’s top engineers have been tasked with making renewable energy cheaper than fossil fuels. We also learn that they have given up.
At the start.
Original article at The Register (El Reg) is here:
The origignal IEEE article is quite different from the quote-mined Register version
I ended my own experiment with Solar power today as my second Fronius inverter died in 3 years. They cost $2000 each, though the first one was replaced under warranty. With my 3600W system, I never saw an electrical savings of more than $1/day or so. And, I live in Southern California and have clear days most days of the year. The inverters are the weak point in the system, and keeping one working eats up all the potential money the generated power would save. Complete waste of $15,000.
Wally being trained as a Electronic tech when I penciled solar out I quickly ascertained that it would never pay for itself, even with the “rebates and subsidies”, I have little faith of electronic running any kind of power last much over ten years, let alone the twenty to thirty that would be needed to get a decent rate of return.
AGW is not real therefore there are no externalities from that source. In terms of real pollution renewables can hold their heads high in terms of their polluting output.
An easy argument to counter, but it is not the one I hear.
I basically get incessant arguments that often are based on government green energy lies, name plate capacity, no accounting for intermittency other then “hey often when the sun does not shine, the wind blows”, the work together.
I have to agree on reflection though. They do not listen to any argument, no matter how well constructed.
So our governments have squandered billions on a dream that any engineer could show to be a waste of time, effort and money. A sobering thought.
Thanks for a good post.
What amateurs don’t get is “non-dispatchability”.
The true measure of the value of renewable energy is how many votes it gets for its political supporters. As they keep getting re-elected, I’d say renewable energy is very effective.
… and don’t forget the effectiveness of getting those same politicians elected with our tax dollars… now routed to green energy
scamsinvestments run by Democrat party donors and therefore right back into the politicians pocket. That is also “effective.” All based on the ignorance of their hippy wannabe (or has been) tree hugging, medical marijuana infused supporters and the equally uneducated Hollywood left… all whooped up by the scientifically scatterbrained, mathematically challenged cheerleaders in the “news” media.
In related news we have Google.
Thanks for that drastic insight into the deceitful benefits of RES. Just let us assume that of these 0.5 trillion the yield nowspentwasted was put aside for the improbable case of a necessary enforced adaptation in, let us say 40 years, this yield alone would be sufficient to pay for that adaptation, I presume. The RES systems will be written off and scrapped for long and the taxpayer/consumer will pay and pay and pay for ever. That RES system will never ever reach the break even point,
Attributing a capacity number to an intermittent source is fraudulent in a marketplace that requires continuous availability.
As indicated the upfront costs for wind and solar, for the nominal, not name plate outputs look like poor investments. What would the numbers look like over 5/10/15 years when cost of fuel, connection costs, and maintenance is thrown into the numbers? A net present value calculation for the time periods above might be interesting to look at. At some point there will have to be investment in load leveling storage capacity be it pumped storage or large scale chemical energy storage systems such as V redox flow batteries. These costs should be included as well. If the ultimate goal were to do away with fossil fuel based generation these storage options would no longer be options rather they would be essential components of low to no carbon based energy generation.
look on the bright side , its kept the cement industry happy.
225,000 wind turbines x 800 tons of cement per turbine/foundations.
You mean: 225 000 Wind turbines x 1000 tons of CO2…LOL!
Thats 225 millon tons of CO2.
Wow!
1,000 tons of CO2 from 800 tons of cement. Now that’s a good trick.
And, oh by the way, despite your magical (tax payer funded) climate models, Mother Nature does not appear to think “warming” is driven by CO2 in Earth’s ecosystem.
The carbon tax on cement manufacturing in my province (British Columbia) means that cement is imported instead, bypassing the carbon tax.
That will be true anywhere that such a tax is applied.
An enlightening and thoroughly researched factual article. Great visuals. Thank you Ed.
Any business faced with incredibly poor ROCE figures like that would face bankruptcy. To survive, they would need to borrow lots and lots of money. Actually, come to think of it . . . .
(ROCE – Return on Capital Employed).
Uh…my personal experience is I paid $34,000 for 11,000W residential PV system. Within 90 days, I received a $20,000 politically-mandated rebate from my utility plus a $10,200 Federal tax credit. So net cost is $34,000 – 20,000 – 10,200 = $3,800.
I save about $2,500/year on my electric bill
My TPR&RPR ROE is somewhere in the neighborhood of 65%.
(TPR & RPR = Tax Payer Rip-off & Rate Payer Rip-off)
Living in Phoenix, Arizona, we thought Geo-thermal might be a good alternative, so last year we had someone come over and give an estimate. When the cost turned out to be more than a third of the value of our home, we decided to look at solar. Unfortunately, they wanted us to cut down ALL of the trees which provide shade to the south side of our house (3 pines, 2 palms, 1 each of ficus, grapefruit, lemon, and tangerine), so we decided to pass on that also.
We did not fail to note the irony in having to cut down our trees so we could “go green”.
So in other words you are being subsidized by me and other taxpayers, what do you do when we finally say “to hell with that.”
Yes, it is a “beggar thy neighbour” benefit. Everyone else’s taxes pay for your returns.
I had a landowner in West Texas where we were producing oil and gas ask me to review his “wind rights’ proposal.The calculations showed that a 1 megawatt windmill would take 17 years to return its capital cost assuming no maintenance. Having looked at the mechanics of the system-gear box, bearings,generator, etc., it is doubtful that the useful mechanical life would exceed 10 years in the best circumstances, much less in the dusty abrasive conditions of West Texas.
Billy Jack , did you factor in aux power back to the turbine when it is not turning to keep it from seizing.
No, I didn’t consider that, but driving from Dallas to Midland I drive through several massive installations that I’d guess two out of ten mills are not rotating. I was more interested in calculating the hydraulic capacity of the support structures, when they are all shut down once the tax credits are used up and we begin taking them down to use as pipeline material or as housing for “undocumented workers”.
Conventional power turbines require internal inspections every 9 months with boroscopes (a very short outage of 3-4 days), and routine minor maintenance every quarter (every 3 months). Open-and-inspection outages every 18 months, and major re-open-inspect-and-replacement dis-assemblies every 36 months (3 years). A “Throwaway” small wind turbine is much, much more highly stressed with vibrations, corrosion, and damage from oscillating forces up at the tall end of a long pendulum being shook every 1.5 seconds every minute of the year. True, a wind turbine is lighter, but that also means that it cannot enjoy the large bearings and heavy mass around the moving parts that provide redundancy and increase wear life.
Net? Few wind turbines last longer than 7 seven years. But they are not INTENDED to last longer than 7-9 years. They ONLY need to last long enough to get the renewable energy tax credits and tax write-off’s and construction subsidies for their corporate-CAGW-funded start-up money spongers (er, sponsors)!
NO wind turbine has been funded by private money for return-of-investment from the power produced over the 20, 30, or 40 long term lifetime of a conventional power plant. And many, many power plants are still generating money and power usefully after 50 years.
In the US, there are no construction subsidies. The only subsidy is based on actual power produced. If the machine goes out of production, the subsidy stops.
Good points.
Have you a source for the 7 year longevity? That would be useful.
Chris is incorrect about construction subsidies. They are called “investment tax credits” and accrue to the owner when the facility is “substantially complete.”
@oeman50: Thank you for that, I was not aware of the program. However, for wind it only applies to units smaller than 100kW. Not utility scale. For solar it apparently applies to all sizes.
Chris4692,
That’s bull, here in Maryland, we are being hit with a surcharge to help off set the cost of a wind farm to be built off the coast of Ocean City MD about 10 miles out. So we’re already paying higher energy costs because of RES.
The insanity is maddening!
ouch.
Even at a conservatively estimated 16x the cost of conventional power, the energy expenditure to build and run renewables exceeds by far the energy they will produce in their lifetime.
They are ruinously expensive, waste energy, increase CO2 emissions, and are generally all pain, no gain, regardless of whether one believes in manbearpig or not.
Tagerbaek, good point. But to our community, remember that it matters not one jot that renewables ‘increase CO2 emissions’. We already know that this incy-wincy microscopic amount of atmospheric gas is not a threat. I agree with ‘ruinously expense’ and ‘waste energy’ though. Adding, intrusive, ugly, intermittent, self-righteous, subsidy-grabbing, energy-bill-increasing, rip-off would be more fitting.
Can you point me to a site that quantifies the cost to run renewables?
This may be of interest : (referenced on WUWT a few months back):
“Energy intensities, EROIs, and energy payback times of electricity generating plants “. H Weissbach et al
Abstract
“The Energy Returned on Invested, EROI, has been evaluated for typical power plants representing wind
energy, photovoltaics, solar thermal, hydro, natural gas, biogas, coal and nuclear power. The strict energyuered” (?) scenario. The results show that nuclear,
hydro, coal, and natural gas power systems (in this order) are one order of magnitude more effective than
photovoltaics and wind power.”
( I copied the abstract from my pdf copy in Open Office , so some words were a bit mangled , like “unbuered” which is presumably unburdened or unbundled – but I think that the context is clear)
You might find it here somewhere :
I’ll note that they place the fuel cost savings at $28 /MWhr of renewable energy produced, and the cost of the added maintenance in conventional plants due to increased variability of the wind at around $1 /MWhr of renewable energy produced. I’ve not encountered the total maintenance cost of wind or solar.
you can also look for the studies and reports concerning Ivanpah solar facility, a recent report showed that it has produced about half of what it was “supposed to because clouds etc…” also it has been burning birds of all sizes out of the sky and another study showed that instead of the one hour a day of the boilers being “warmed up” by burning natural gas it has been burning natural gas for about 5 hours a day. I might add that since it is not a combined cycle gas generator its use of gas is much less efficient than an actual combined cycle gas generator to generate electricity, so all the “carbon credits” given to it are even more of a sham. There are also reports of pilots being “blinded” by the glare of the reflectors.
“renewables” don’t ever seem to measure up to their “supposed to” performance numbers yet the scammers.. oops politicians keep ramming them down the tax payers throat.
Joe!
@JoeCivis
The second image on the following page includes Ivanpah in the distance (the three pairs of very bright lights to the right of the freeway (I15)):
It also shows to unwary Californians that the amount of radiation you continually receive in a jet at 39,000 feet is 29 times what you receive on the golf course. If you’re frightened of radiation – beware: those in Business and First get the same.
Ed Hoskins;
Your diagram seems made by a functional illiterate, with terminology such as “5 times less effective” Does this mean 20%?
Should have read your comment… I posted exactly the same thing below.
Yea, I hate the “5 times less” construct, too.
I guess “5” is a more impressive number than “0.20”.
the opposite of 5 x more (read the otherside of the Bar Chart) is 5x less or a fifth. To write 20% on the left would mean writing up to 55,500% more on the right, which would be equally correct but less elegant on the chart. I would have gone along with nitpicking of 1/5th but not 20%. I’m bored and at my computer so am prepared to answer…..your excuse??
If I may…
1/5 as much is not five times less. -4 is five times less than 1.
Well my Ga Tech physics and math education says “bovine excrement”. That’s my excuse.
My previous comment intended for MWH
So 5 x more is 6 times as much.
It does mean that.
But is this a question of style or substance?
Personally, I’m not stylish (according to the fiancé).
I agree! one times less is zero. 5 or 9 times less is a negative number.
Renewable energy is yet another manifestation of the human yearning for Utopia, in this case a Gaia-compliant Utopia where unicorns traverse the capital cost burden by jumping from a pot of gold to a pot of fiat money.
Liberals who scold us often on the topic of “sustainability” demand schemes whose economic models cannot be sustainable. Wind and solar power are only economically sustainable when the cost of conventional power is artificially boosted by regulations for which there is little scientific basis. Thus the demands for “clean air” that are never satisfied by compliance with the previous round of emission reductions.
Most of the capital costs of solar and wind is borne by government, which in turn gets that money far more by creating it out of thin air, or by issuing perpetual debt, than by tax revenues. Particularly in the case of the US (who is also paying for the strategic defense of Germany) government has the power to create near infinite amounts of money with which it pays for near-infinite amounts of government.
One of the best ways to force energy costs to be rationalized with respect to the total economic efficiency of the technology is to revoke the ability of government to create near-infinite money. This would terminate government’s appetite for subsidies. People would have to have “sustainable” project economics.
The left has long scolded us that we are not paying the “full cost of oil”. Yes indeed! The cost of having a navy to protect Gulf oil should be levied on imported oil. And fair is fair. The full cost of wind (including a reserve for decommissioning of abandoned turbines), and the full cost of solar must be acknowledged in the cost of power. Let us have that level of transparency. Otherwise we have based our economic decisions on spreadsheets built of lies, all lies.
“Most of the capital costs of solar and wind is borne by government,”
In the US this is false. There is no US Government subsidy of capital costs of utility scale wind or solar. The subsidy is based on electricity actually produced. Revenue produced by sale of renewable energy credits can be said to be a regulatory subsidy, but that also depends on electricity actually produced and is not a construction subsidy.
There are tax credits for construction of home scale renewable generation, but that is a small fraction of the total.
There were extensive grants from the federal government and tax credits for such equipment. I personally have had discussions on this very topic with executives of wind power companies. The federal government has granted permits to wind and solar companies to use federal lands, often without paying the same lease or use fees they would pay on the open market.
Case in point, a few years ago, a business that processed walnuts in central Missouri was flooded and millions of pounds of walnuts were rendered useless for use as food. These walnuts were eventually sold to a nearby utility for use in their coal fired power plant. The article from the utility boasted that if they used at least 5% of the walnut hulls in their coal feed, by federal regulation **they could count all the power produced as being “green” power**. This type of fraud is also a subsidy since “green power” sold for more on the market than non-green power.
All of these factors have the effect of a subsidy. Sadly subsidies cost us far more than we can possibly know.
Chris says…”In the US, there are no construction subsidies?
Then please tell Google to stop asking for the Fed govt to throw good money after bad regarding the pay back of the building costs of the Ivanpah Solar facility. (I never found clarity regarding if this was a government loan, or a government loan guarantee, but the tax payer is being asked to foot the bill for the poor wag, since electrical output is 1/2 of predicted (about like warming, models vs. reality)
I also recall a whole bunch of rich green folk contributed to Obama’s campaign and received government loans that were not paid back, (Solyndra plus a host of other failed investments.)
Of course the greens will argue and show bogus statistics of how much conventional producers and oil is subsidized. They will assert, “big oil gets ten to 20 times the subsidies of wind and solar}
(They get the same tax write offs, applied to PROFITS any international company gets) Wind and solar is massively subsidized, top to bottom.
Ask your green friend how much tax wind and solar paid vs. conventional? Not only is “Big Oil” and conventional power production not subsidized, it (all three nations) pays about between 200 to 400 billion every year in taxes, after tax write offs. (What greens call subsidies) So the taxes paid by for profit equal what over the 15 years of build up to get to 5% wind and solar? Five trillion maybe? Do not forget all the individual tax paid by all the workers, and all the companies that work for big Oil, that manufacture well heads, ship the product etc, etc.
The 500 billion wasted is about likely ten times that, minimum, when you consider the tax revenue that all the subsidized green is NOT generating,, plus the Government imposed inefficiencies on regulating conventional power to the back seat, thus raising the cost of power on every poor slob with a utility bill, or who buys ANY product made with something we call ENERGY, the life blood of EVERY economy.
The full cost of that fleet or only part?
It could be argued that the parts of the US Navy stationed in the Indian Ocean have more than one role.
The costs of running a country are not subsidies to its industries.
MCourtney
The cost of oil is paid by consumers; the cost of the Navy is paid by tax payers and Chinese loans.
Combining the 2 as the cost of fuel would be a regressive tax (Ayn Rand would like it, though).
This is always a burr under my saddle…
What is ‘x times less’? Is that 1/x as much? Then say that.
“x times less” is so vague as to be meaningless. I pass it by.
Exactly… when I read that, I know I’m reading something by someone who’s bad at math.
Only half times less works as intended. Well, 0 too.
It’s a real pity you did not include solar hot water heating. Solar water heating was so effective that it paid for itself in 2-5 years (in Scotland) and there was absolutely no need for a subsidy.
What this very clearly demonstrates is that none of this renewable subsidy had anything really to do with saving energy use. It was instead a get rich quick scheme dreamt up by wind scamsters and gullible greens.
Just to emphasise the point. If the aim really had been to reduce carbon, then the first scheme that anyone would have chosen would have been solar hot water heating. This shows that the aim was not to reduce carbon – but instead to create a market for “renewable electricity”
@richard What you say about cement is especially interesting. The production and installation of a ton of concrete creates about zero point nine tons of CO2. Thus the pad for an industrial wind turbine adds nearly 720 tons of CO2 in one fell swoop. How much does the equivalent production via natural gas create during its installation?
Underneath every wind turbine park is a city of concrete in pristine ground.
From the moment the first part of a wind turbine is built, to its final days of dismantling for the scrap heap, it is reliant on fossil fuels.
Interesting Jardinero, I’ll add your 720 tonnes (1 tonne of CO2 is about the size of a three-bedroom house) to my list of CO2 emitters . . . .but, (I’ll keep saying this until I’m blue in the face), to us, it doesn’t matter how much CO2 is added. We all know it represents a microscopic amount of inert atmospheric gas and is not a threat. The warmists think otherwise of course – which is the main reason why we’ve been left with their legacy of renewable rip-offs.
Focusing on capital costs is of course entirely misleading, since the direct financial savings from renewables comes from fuel (operating) costs, not to mention environmental and human health externalities. And what a lot of people don’t realize (or admit) is that fossil fuels, despite being “mature” industries, are subsidized to the tune of $500 billion a year:
This is a very highly questionable statement.
The problem with every assertion of fossil fuel subsidies that I have found does not give enough detail to be able to tell what is being counted as a subsidy. If a coal plant (or other equipment) is given an accelerated depreciation schedule, the subsidy is the interest saved, not the capital cost deducted, it’s never stated how this is calculated. If an oil well is given a depletion allowance: the subsidy is the difference between the allowance and the development cost, not the total of the allowance. Studies I’ve seen report the total of the depletion allowance as a subsidy.
These studies are mostly for propaganda. They cannot be counted as serious work.
Chris4692,
You are right, it is propaganda. It neglects to mention that _all_extractive industries: mining, minerals, timber, etc. are entitled to a depletion allowance and this corresponds to capital depreciation in other businesses such as wind turbines, solar panels, etc.
A fact ignored by the left is that the oil depletion allowance was eliminated for major producers in 1975. Other minerals, etc remain in effect. The lie about oil subsidies is touted to confuse the issue of huge subsidies for renewables..
That fossil fuel subsidy claim has been effectively and lengthily rebutted several times on WUWT, most recently within a couple of months.
Let’s get rid of all subsidies and see renewables compete on their own merits.
There is a second problem with intermettant, non controllable ‘green’ power sources. They produce power at times when power is not required.
Utilities use highly efficient combined cycle gas turbines for base power. Single cycle gas turbines are use for peak and variable demand. There is roughly 30% net difference in efficiency between combined cycle and single cycle. Combined cycle power plants take almost a day to start up and can not hence be turned on/off.
Wind turbine power output varies as the cube root of wind speed. A wind farm power output can change 60% in 20 minutes. Large wind turbine for utilities to turn off combine cycle power plants and use more in efficient single cycle power plants. This problem limits wind turbines maximum possible reduction in CO2 emission to about 10%, in cases where there is not hydro electric power to handle the swings.
Green scams do not work for basic engineering reasons.
I followed your link. Great piece of disinformation. Go down to the US section. it says that
The three largest fossil fuel subsidies were:
1.Foreign tax credit ($15.3 billion)
2.Credit for production of non-conventional fuels ($14.1 billion)
3.Oil and Gas exploration and development expensing ($7.1 billion)
Every corporation doing business outside the US get a Foreign tax credit. If a company makes money in another country and that country taxes that profit, then this tax is a credit against that profit. IT the foreign tax credit was not issued, then the US and the foreign country would tax the same bit of profit. This would make it so that US companies could not operate internationally.
#2 does anyone know what this is? maybe ethanol.
#3 says oil companies can expense the cost of doing business. novel concept.
On the other hand, it says
The three largest renewable fuel subsidies were:
1.Alcohol Credit for Fuel Excise Tax ($11.6 billion)
2.Renewable Electricity Production Credit ($5.2 billion)
3.Corn-Based Ethanol ($5.0 billion)
If you propose that all energy companies get treated as any other company and not get direct tax dollar subsidies. I would agree with that.
A tax credit is not a subsidy. It merely is less tax paid. To think it is a subsidy is to think that the government owns all money.
Indeed, please ask any green friend how much tax all of the wind and solar companies paid. Do not forget to include the dozen plus Solyndra’s out there.
This is completely ignorant. To make it simple, governments do not tax total revenue, they tax profits. The so called subsidy is the cost to produce the fossil fuels, Labor, equipment,etc.are extracted from the gross revenue before taxes. All businesses are taxed the same way or there would be far fewer businesses. .
As always, numbers can be explained in so many ways.
Certainly the carbon fuel industry is not without it’s subsidies. Let’s assume that all numbers in the article are correct and so is wikipedia’s $500 billion. How much tax is being levied in your country for each litre/gallon of petrol you stick in your car? In some countries, Europe in particular, it tops 50% of the price at the pump. That turns into a lot of $ being paid to the government. That some of that comes back as subsidy is not totally unfair.
Europe is now closing in on 300 million cars registered. Assume each car is using an average of 20 litres a week. At a total tax rate of close to US$1 per litre that is US$6 billion per week, and that is just for petrol use in the EU. Makes US$500 billion worldwide subsidy seem like chicken you know what to me.
chicken
dropping
residue
detritus [Pratchett has Detritus as a troll. Do we know any trolls? ]
Auto
Wiki is very wrong here.
Solar is a viable source of energy. It works very well for space heating and domestic hot water, particularly when designed into the structure. It doesn’t do very well in the production of electricity except in very remote areas where some electricity is better than none. Off grid as a supplement it is OK, on grid, not so much. The same can be said for wind. It works well for pumping of water in an off grid application, on grid, again not so much.
Intermittancy and efficiency kill wind and solar when coupled to the grid.
Those running the grid already know how to manage the intermittancy.
The big problem in having lots of small sources connected to the grid is safety of the workers when the power goes out. All these independent sources must be isolated from the system before linemen can work on the power lines or before firemen can work around the burning structure.
Chris4692 says:
“Those running the grid already know how to manage the intermittancy.”
They can manage it only within certain ranges. The larger the penetration of renewables into the grid, the greater the amount of conventional, dispatchable generation that must be kept available.
oeman50
November 21, 2014 at 8:50 am
The larger the penetration of renewables into the grid, the greater the amount of conventional, dispatchable generation that must be kept available.
=======
That cannot happen! Large scale implementation of intermittent energy MUST lead to blackouts.
The greater the penetration of renewables, the less financially viable the backup generation will become. The less an asset is used, the more difficult it becomes to justify it. In extremis, no one is going to build a plant to be run 5% of the time. And that will lead to interesting “unforeseen,” unintended, consequences.
Chris4692 as someone who works with “those running the grid..” they have a significant problem managing the intermittency and it is at greatly increased cost to those who use electricity. As others have stated conventional generation must wait in standby mode waiting for “renewables” to either produce or not produce and so even if the renewables produce when they are scheduled to the standby gets paid for playing back up.
Reblogged this on SiriusCoffee and commented:
What matters the economics of a thing, when the faith of true believers is at stake?
+1. Especially when the true believers get to tap other people’s money to sustain their work of faith.
+1 also
Noting the need to hoover up other peoples’ money to make a chance of vertebracy.
Auto
It should be noted that the people who claim renewables are competitive also claim that oil companies that pay billions of dollars in taxes (“royalties” are a tax) are “subsidized”. Math is not their strong suit.
John E.
Be realistic, please.
Some of the greenies can manage a multi-million dollar/teuro/pound/etc. subsidy perfectly well.
I am sure their income tax returns are models of transparency, accuracy and veracity.
Auto
PS – Mods – this is meant seriously. Greenies are arrow-straight, and will pay to the cent their dues [legal – and morally constrained . . . . . . . . .]. Sarc/
Royalties are paid to the owner of the mineral rights, either public agency or private landowner. Royalties are not a tax, it is more of “cost of goods sold” in accounting terms.
The problem in the US is that government subsidies produce malinvestment. It also stifles innovation. There are so many fingers in the pie that almost impossible to stop. Once you get involved in politics you get all sorts of stupid decisions – like the congressman who makes sure that contracts to build planes and helicopters in his district is funded even though the navy does not want the planes.
What is sad it that if all the money that is being spent on alternative energy was diverted into fusion research, we might actually come up with the energy supply of the future.
Ed Hoskins,
Thank you for so patiently and logically laying out the situation with full supporting method and data. Please keep doing posts like this.
My view is that the ” ~$0.5trillion in capital costs alone, (conservatively estimated, only accounting for the primary capital costs )” was authoritarian intervention which distorted the free marketplace. This will cause three results which increases non-productivity thus reducing the wealth per capita: 1) it makes the free marketplace economic calculation, to best use capital productively, incorrect; 2) it prevents capital from freely flowing to better use of the capital as determined by the free market place’s private values; 3) the inefficiencies of return on the capital investment destroys wealth as compared to better efficiencies in other investments.
Such central planning inspired intervention in the free marketplace is as stupid as its basis in the collectivist philosophy of Marx that sought the demise of both individual freedom in the market and the private ownership of capital.
John
They didn’t listen six year ago. Here’s a surprisingly ‘skeptic’ article in the Guardian from August 2008.
If the philosophy of this piece was extrapolated, every energy plant constructed in any part of these three countries would be of one type. They would be all gas, or all coal, or all nuclear. But each type of production has its own characteristics, so producers have a mix to take advantage of each.
It should give the essayist pause to consider that companies building their own wind power installations are doing so with their own funds. They know about the intermittancy. They know about power factors. They know about the maintenance required. They know about transmission losses. They know about reserve capacity requirements. They know statistically how much the wind blows what direction and how hard. They know what the out put will be. They know what the power source will do to their system. They also know about the subsidy and the renewable energy credits.
They do the economic analysis of all these factors, and factors that none of us know about, in more detail than anyone commenting here knows about (because much of what they know is proprietary – if any of their engineers is reading this discussion they cannot comment) and they decide to invest their own funds in the project.
There is a net advantage to these projects or they would not be constructed. Which should give the essayist some pause.
You seemed quite intelligent for part of that … whatever it was. Why are you talking through your hat?
Chris
It is called Alround Guaranteed Wincome or AGW. Paid by the collective (or government, take your pick) to the plant operator. How can one refuse?
The energy market is not truly functional because electricity is not a commodity. It cannot be stored (ok hydro pumps but that is restrained by geography and expensive).
Yet the grid requires smooth flows of electricity – not dropping out or ramping up unexpectedly. Thus there is a clear requirement for easily turn-off-and-onable power sources that can be relied upon. Wind and Solar don’t so that. Therefore, they ought to be paying a premium to get into the system at all. And they do – but it is paid for by Governments through legislation or subsidy.
And as they are more diffuse than power plants (you have to catch all the winds not just transfer the fuel to the power plant) you need amore grid connections too. That conversion is an extra cost that isn’t put on the wind plant builders.
The choice to build wind may be rational but not necessarily beneficial for the consumer.
The grid inherently has variable demands for electricity. On a very large scale, the demands can be predicted reasonably well; response to the variable demand is a technical problem that is already handled minute to minute every day.
Wind can be predicted 24 hours ahead to within about 20% and better at shorter timescales: it’s fluctuation can be accommodated. It does not suddenly drop off.
Stand by for the variation in wind is not provided by turning a generator on and off. The excess capacity is throughout the system. No particular plant is varied to compensate for variations in the wind output: the entire system varies. There is no currently readily available technique for storage of large scale electricity. None is needed. Natural gas units can provide the fluctuation necessary as long as the amount of wind power is a small part of the system. The loss of efficiency converting to storage and converting back to electricity would make any system not worthwhile.
@Chris
I was unable to find a comment I’d read a year or so ago to the effect that the grid as a whole could buffer local fluctuations in wind and sunshine. It said that experience was showing that these variations didn’t average out over even large geographical areas.
Anyway, here’s a comment from a 10/22/14 Judith Curry thread on this topic
Oops– Make that: “the grid as a whole could NOT buffer”
>There is a net advantage to these projects or they would not be constructed.
That is nonsense. They are doing so because renewable energy state mandate laws have been passed, and federal money makes the projects feasible.
Or maybe the don’t want the fines and jail time for state law as writen
.
Currently, 29 states have renewable electricity mandates (REM) and 7 states have renewable electricity goals. These mandates require utilities to sell or produce a certain percent of their electricity from sources defined as “renewable.”
And maybe you have no concept of Utility Boards setting rates, which have to be payed to the monopoly providers that guaranty a profit on your and every other purchaser’s electricity.
Chris4692 – in this you are mistaken. As a person who work with the “grid folks” wind and solar do not have to pay any of the increased costs that they force onto the market. They have gotten and get large tax breaks and many local and state subsidies provided by tax payer dollars and by everyone else who uses electricity paying higher rates to provide back up services for their intermittent product. The reliable prediction of wind and solar that you assert does not bear out in the actual numbers production numbers. search the latest report on the Ivanpah solar facility and its production was 50% of what was “reliably predicted” if your home was provided with only 50% of what was reliably predicted most likely you would have lots of wasted money in spoiled food from your non-electrified refrigerator. If you are actually curious there are many reports on the day to day performance of solar and wind generators by the actual grid operators.
Cheers,
Joe
The waste of all those taxpayer dollars is another reason the perpetrators of the AGW hoax should stand trial for crimes against humanity.
English quibble: As soon as anything becomes ONE time less, it vanishes, so any statement that something is “9 times less” (or whatever) is meaningless. (There is no problem whatever in saying that something is 9 times MORE. It’s only the negative that is wrong.) The only correct way to express the negative is to say that the alternative is only a ninth as effective. (I realize that this becomes more awkward for a number that includes a fraction, like 4.2…..)
Ian M
This morning’s Los Angeles Times reports that Bureau of Land Management has rejected the application from a Spanish company to construct a combined solar/wind energy project in California’s Silurian Valley.
[begin quote]
The Bureau of Land Management on Thursday denied a Spanish company’s application to build a controversial renewable energy facility in the Mojave Desert’s remote Silurian Valley, deciding the sprawling project “would not be in the public interest.”
The closely watched decision is considered a bellwether for how the federal agency will handle future requests to develop renewable energy projects outside established development areas.
The company had planned a side-by-side wind and solar facility. Thursday’s decision applies only to the solar portion of the project. The wind energy aspect is still in the planning stages.
[…]
If the money that went into commercial efforts to integrate these technologies into the power grid had gone to homeowners, and allowed them to afford the hardware that would lower their monthly bills when paid off, the economy would have been bolstered in several ways, right at the grass roots level. The roofs of buildings are much more practical for solar panels than remote acreage where the environmental impact is higher. The need for grid integration equipt. is reduced to the consumer being required to provide automatic disconnection in case of service interruptions for safety. The load on the grid would be reduced as power was generated at the consumer end of the chain. Brownouts due to hot sunny days in urban areas would no doubt be reduced.
The way it works now, we’ll have to wait for trickle-down from the corporate profits to help the economy (thanks Ronnie), but negative numbers also seem to eventually trickle down also.
They don’t work under 3 feet of snow, they are a boondoggle in most of the USA.
Good point. They tap a resource that is not abundant enough to be universally practical.
In Germany, regular fuel ist taxed at almost 63%. The current fuel price is € 1,41 per liter, (that is US$ 5.33 per gallon). 88,22 ct/liter(or US$ 3.33 per gallon) is tax.
I designed, developed, built and operated power plants of all stripes for some forty years, twenty of those years doing “renewable” energy plants. I could quibble with a few of the author’s details, but the overall thrust of the work is dead on point. Wind power is an economic dog by any measure and solar is utterly absurd. I’ve also run the numbers using the most fantastical assumptions of fossil fuel price-escalators, alternate energy conversion efficiency improvements and remotely achievable capital cost reductions that green advocates could dream up; still “no contest”. The problem with wind and solar is always the same; “energy density”. There is simply too much physical material required for too little power output. End of story.
Bravo, Claude.
True Claude, thank you.
I wrote similar conclusions in 2002 that were confirmed by E.On Netz insightful “Wind Report 2005”:.
______________________
E.On Netz, in their report “Wind Power 2005” describes the problems.%.
So Ed Hoskins, given that the Substitution Factor is the governing factor (in most wind power applications, and in in the absence or a super-battery), I suggest that the real capacity of wind power in Germany today is between 12.5 and 25 times less than gas-fired power, not 4.2.
Regards to all, Allan
Here’s the full article.
They could have figured this out with a few calculations on a back of a napkin, but had to spend a billion dollars in research and a new computer model to figure out the obvious. To be fair that billion was also used to make Google offset its carbon emissions which has close to zero as possible affect on CO2 concentrations. Google is still committed to spending big dollars on saving the planet from certain catastrophe by pursuing new technologies such as “…a method of extracting CO2 from the atmosphere and sequestering the carbon.” Makes sense, maybe a big balloon in space could suck up all the excess CO2. I wonder who controls the CO2 vacumn valve These guys also might like my idea of shooting ice cubes at the sun in order to reduce global temperature, or building a huge magnet in space to control the tilt of the earth or better yet, putting a mini Google inverter on everyone so that people provide energy as demonstrated in the Matrix movies.
Meanwhile another headline, “The_Moral Issueo f Climate change”,
This article has a ray of sunshine in it, I mean good sunshine, not like the bad sunshine that contributes to global warming. Here is an admission that climate change is not a scientific issue but a moral issue. Maybe alarmists can start a new moral majority. People of my age might remember when a small minority started a political action group to promote a religious agenda in government. If Jerry Falwell was alive, I wonder if he would support the climate change moral majority. Probably not, like one person’s junk is another person’s treasure, what one man may consider moral another may consider a mental disorder.
“…a method of extracting CO2 from the atmosphere and sequestering the carbon.”
Sounds like another Sequestered Carbon Accounting Method to me. Perhaps landfilling trees?
J a e: Like it! A great fla (four letter acronym)
A report from Google/Stanford Univ. engineers shows renewables will NEVER work based on math and physics instead of unicorn flatulence. The authors totally buy into the AGW theory but still say there is no hope.
The numbers from Mann/EPA/Climate Fraud are all Gruber based.
Renewable Subsidies
WORLD ENERGY OUTLOOK 2014 FACTSHEET, How will global energy markets evolve to 2040?
World Energy Outlook International Energy Agency
Slides
Note: Slide 4/15 – EU electricity costs ~ 200% of China.
Slide 5/15 – US oil peaks ~ 2020 and declines to negligible by 2040. Major oil growth relies on the OPEC cartel!
Slide 10/15 – Major growth in Hydropower. Note Renewable Subsidies ~ $117 billion in 2030.
TomL, you beat me to it!
If only they would make the small step from ‘… there is no hope!’ to ‘.. well, actually, there is no need :)’. One small step for a man, one giant leap …’ (Sorry!)
No apologies necessary,phaedo. When the world awakes from CO2 hypnotism, we might actually enter an age of enlightenment where children are taught to think instead of programmed by propeganda.
OK… i just finished writing propaganda 50 times on the chalk board… can I go out on the playground?
on-grid supplement by PV solar can be cost effective in some locations….but you need hi sun flux (low latitude and low cloud cover average). here in Las Vegas, I put a 5kW PV system on our home after doing the calcs….with the fed tax 30% credit, the cash outlay is equivalent to a non-revocable CD (till house sale) that pays an 11% tax-free return (tax free since it is actually a discount in utility bills). without the fed credit, it is still a 7.7% tax-free return, which is better than one can do zero risk elsewhere in the markets. (and return goes up as utility rates increase). other points one can argue are the fairness of the state’s rules on net metering (the utility grid serves as a battery storage), which doesn’t reduce infrastructure needs for peak loading and the lack of a time-of-use metering (same purpose, would charge folks for peak loading), either of which would reduce cost effectiveness. still, southwest US (away from coastal cloud cover) is not bad for solar. i wouldn’t touch it in NE US or europe, though……
Jeff, your experience in Las Vegas pretty much tracks mine in Azusa, CA, and my mother’s in Sierra Vista, AZ. With current incentives, solar in the sunbelt makes economic sense for individual homeowners, though not necessarily for the society as a whole. If the green ideologues succeed in their attempt to make fossil energy prices skyrocket, solar will soon make economic sense without any incentives at all. But again, an advantage to individuals, at the expense of the country as a whole.
Jeff (for others too),
I encourage anyone thinking of PV solar to get a few solar lights and watch them for a year or so. As you indicate, the technology works in some places and not well in others.
I am at 47 XXX North and 2,200 feet elevation. I bought a box of small stick-in-the-ground solar lights that I’ve mounted on top of fence posts. In high-sun summer (we have short nights) those things produce light from dark to dawn. Now with low sun and a clear sky they light up for just a couple of hours – gone by 10 PM. On cloudy days, like yesterday, they might as well have still been in the box they came in.
Here is a useful site:
At my latitude, it tells me there are 15.7 hours of daylight on June 22.
That’s 47° N.
They don’t work for very long here in Florida in the dry season. If they don’t work here I am not sure where they work, but one place you see a lot of them is Vermont, most of the winter they are covered in snow. Only thing I can think of is that the government paid these people to put them in, or they are stupid, or both.
Thanks Jeff. So, after all, people are not nailing solar panels to their roof for ecological reasons and belief in global warming and that all that enormous amount of CO2 is so terribly evil and that unprecedented warming will kill us all. They’re doing it to make lots of money. Took me a while.
Here @ 51 N, we have little sunshine from mid-November to mid or late January. What we DO have is pretty close to the horizon (especially on the northern side of a hill round London).
Solar, as a point supply, is useful – some railway signals, teamed with small-scale wind, say.
Not as a baseload; not here – London ; and not in autumn/winter.
Auto
The post has a chart of German origin for a week in August regarding intermittency and non-dispatchability (colors of gray, yellow, blue, +).
Below is a link to electricity produced in a region of the USA and the balancing needed to accommodate the various sources. It is a near real time (updates every 5 minutes) chart. Underneath the chart is an explanation of sources. The “thermal” sources are varied and interesting.
At the time of this writing – mid-morning Friday – the green line for wind is very close to zero. Last week the area filled, from the north, with very cold air that is now stagnant, and we have an air advisory. New weather systems from the west will move this air out in a few days. The wind power will kick in, and the green line will ramp up.
Check back.
I like how they don’t chart power sent to Cali-land. Wonder now they get paid for it if they cannot show on the chart.
Clicking on the chart around the wind line shows. 4515 MW of installed power in 4/2/13. What does that work out to for capacity factor?
DD,
The BPA does say they are a net exporter. They just don’t give specifics on this page.
Note that one of the lines is direct current. Here are the coordinates of the station just south of the Dalles Dam on the Columbia River.
45.606082, -121.111377
Regional History is interesting:
You will find this interesting.
ASME SmartBrief
Hydropower plants pay wind farms to take a break
Hydropower plants face surging water levels as snow melts in springtime — and the best way to get the excess water downstream is to push in through the turbines, generating huge amounts of surplus power. That’s putting hydro-plant operators in the unusual position of having to pay wind-farm operators to suspend their operations, making room for hydro facilities to dump their excess energy into the grid. Jefferson Public Radio (Ashland, Ore.) (11/12)
Like everything involved with climate obsession, the policies and actions pushed by the obsessed are expensive failures.
I am considering the ‘Broken Window Fallacy’ as applicable to highly relative unproductivity of capital investment in solar and wind. It fits.
Hazlitt said generally of the principle central to understanding the broken window fallacy,
John
“Across the board overall solar energy is about ~34 times the cost of comparable standard Gas Fired generation and 9 times less effective. Wind-Power is only ~12 times the comparable cost and about 4 times less effective.”
34 versus only 12 times the costs and 9 times versus 4 times less effective.
Just dump the entire scheme immediately and never discuss it again.
Very interesting analysis thanks. Disappointing that Hydro was left out of all the graphs, as it is fully renewable and cost effective.
I lose interest in analysis that looks at solar averages when the deviations within this sector are huge.
Photovoltaics … such a cute idea … was it hatched in southern California?
Let’s get back to reality … how are things looking in Buffalo?
All that snow covering all those roofs … do you have to get up on the rood to shovel off the snow so you can have power?
I don’t know what all these environmentalists are on, but they need to get on back down to earth and get real.
+1 heh.
Get a roof rake if you have snow build up. Takes a while longer but you don’t risk falling.
Of course you could build your house with a nice steep A frame chalet style. Let gravity do the work for you.
A friend bought a solar charger for his electronics before he went on a holiday to Honduras because he was told electricity was intermittent in a lot of places. After getting back he said that was an understatement. He could charge his phone in 2-3 hours down there with his solar charger even when cloudy (yes you could get reception but no electricity, go figure). Back up in Canada where he lives even in summer it is 10 hours or multiple days to do it.
Location, location, location!
Enough snow and even an A frame roof still has snow on it. As for a roof rack for small low roofs ok , but not for a big one. With a roof rake you won’t fall through the roof, but you risk have an avalanche of snow falling on you.
+ 2 Leon. It’s fun watching melting pack-ice snow slide down your neighbour’s solar panels and into their yard. The same yard they’ve just spent 3 hours clearing snow from.
A “trick” previously employed in Germany was to run electricity “backwards” through PV panels under snow until the snow in contact with the panel lubricated the rest to slide off. Glacier calving on a small scale. Only takes about 50kWh per kWp of PV solar.
Apparently; it’s viable if you get paid 4 or 5 times more for in-feed power than what you tap off the grid. Not many get such a good deal nowadays.
When you have that much snow, you clear it off your roof to keep the roof from collapsing. Amazingly as of last report, very few people in Buffalo have lost their power.
Here is a nice reasonable proposal: Take away all so-called subsidies from oil/gas and wind & solar. Also take away the taxes each pay in to taxing authorities. Let’s see which are still standing after 12 months, 18 months and 24 months.
Seems to be something wrong with this link
Fixed it for you. Why don’t you try next time?
-the data is in English
in the “Overnight Capital Costs compared: $billion/Gigawatt” graph, there is a reference figure of $5.53 billion for “Dual Unit Nuclear”.
the European Commission has approved Britain’s heavily-subsidised Hinkley Point nuclear reactor (in the cause of decarbonisation***); however, altho Bloomberg wrote “Nukes and Shale Win The Day in U.S.-China Climate Deal” on 17 November, the costs for Hinkley are spiralling, & there are continuing project delays:
20 Nov: CarbonBrief: Simon Evans: How the UK’s nuclear new-build plans keep getting delayed
Doubts surfaced again today with the Times reporting a “secret government review” into French firm EDF’s plan to build a new plant at Hinkley Point in Somerset…
The news follows an announcement from EDF that its Flamanville plant in Normandy is facing further delays. The project uses identical designs to the Hinkley scheme.
Flamanville was supposed to take five years to build and begin operating by 2012. Instead it will now take 10 years, and open in 2017. A third identical project at Olkiluoto in Finland is nearly a decade behind schedule.
***New nuclear capacity is a key part of UK government plans for decarbonisation…
Carbon Brief asked EDF when construction of the plant itself will begin and how long it will take to finish. EDF said that level of detail was not yet available…
It isn’t only the finish date that has changed for the UK’s new nuclear plans. The costs have also skyrocketed.
Back in 2008 the white paper on new nuclear in the UKsuggested it would cost £2.8 billion to build a first of its kind 1.6 gigawatt plant, with a range of between £2 and £3.6 billion.
The government later said in 2013 that the the Hinkley C project of two 1.6 gigawatt reactors would cost £16 billion. When the European Commission gave the deal the green light in October it said the project would cost £24 billion…
12 Nov: Financial Times: Nuclear plants closure bill to reach $100bn……
end the war on coal & there wouldn’t be such a panic over energy in general.
All that money and yet one of the simplest of ideas goes unused. I’d love to see some testing of this guy’s idea (). It could offset a lot of heating and cooling if it works. But why test an idea that’s been in use with data collected for 2 decades when you can blow half a billion on Solyndra?
I don’t agree with the numbers in the article cost of rooftop solar-powered electricity will be on par with prices for common coal or oil-powered generation in just two years — and the technology to produce it will only get cheaper.
That cost is for nameplate capacity at high noon. You need to divide it by the annual capacity factor, which will be less than 0.5 even at the equator. Needless to say, if your utility company isn’t forced to accept your excess power at retail, ripping off the rest of the rate payers, your savings will also be lower. Ask Denmark and Germany about having to practically give away their excess power to the European grid while paying full price for power supplied when the wind isn’t blowing and the sun isn’t shining. Of course you could always invest in a lot of high quality, deep discharge batteries…
Seems like it’d have to be darn near free to be cost effective where I live, and for what? We have several centuries before fossil fuels are rare and we are currently witnessing the dawn of fusion engines. CO2 has been most accurately shown to follow warming and not to be the primary driver of global temperature. So, our focus should be on using our resources in a way that benefits world economy for the present, while minimizing the actual (mostly local and regional) pollution effects of the present technology. Solar power (PV) is only available up to half of any given year at any given location and is only efficient where solar density is high. Even if you had shown proof of your data, those prices mean that PV panels alone for my needs (7KW) will cost $14,000 (at $2 a watt) beside the supporting hardware and installation costs. If I sold my house I could never recoup that.
In England the sun appears about as bright as the moon does in Australia during the day but still someone thinks that they might get something out of solar power. With people as deluded as this that they think that increasing CO2 causes increasing temperatures, even when World temperatures have not increased for 18 years, the future looks as dim as the sun in England.
. . . . and rest assured ntesdorf, it’s been very dim over here in the UK. Clocks gone back, fog, idle wind (turbines stationary), no sunshine (solar panels dormant), dark and no water either (Rutland county, east/central England). Just finished my second bottle of wine to cheer me up.
Gosh, I hope that’s an exaggeration.
I’m on my 2nd between the two of us.
MCourtney, it’s Friday night. My lovely hard working wife and I have consumed 3 x bottles of NZ Sauv Blanc – so I lied. Unfortunately my wife retired to bed about an hour ago – so she had one bottle, I drank two.
Much the same here then.
Just piously pastoral.
Fridays are quite nice.
Haiku
[Gesundheit! (Hope your allergies get better.) .mod]
Is that pious as in devout? Like it’s our belief, faith and conviction to consume quaffable Sauv Blanc (ABC) and for yourself and me to become totally emersed in all things WUWT (and JD @ Breitbart and Booker @ The ST and, etc.). As for ‘pastoral’ – there are too many definitions. Enjoy your evening. Back tomorrow.
Might have possibilities for preheating/cooling make-up air to large buildings, but that’s as far as 55-60 deg. discharge air will go. Those folks were probably more astounded that he likes his house at 60F. Betting he’s got a heater in the bathroom…
Sorry, that was for TRM…
Oh… you have wine too?
What I really like is that nuclear power generation is claimed to be too expensive by comparison, of course, to a modern combined cycle gas fired power plant, not solar, not wind.
You have not discussed line losses here which are potentially significant when transporting the power from relatively remote locations.
It seems to me that one solution to the problem of intermittent supply is dispersed peaker stations using gas turbines. Not the most efficient but capable of supplying instant capacity near to the demand.
But those “peaker gas turbines” are much less efficient (34 – 38 %) compared to combined cycle turbines (60 – 64%) and break down (cracked compressors casings, bearings, turbine vanes, burners, air coolers, exhaust structures, compressor blades, etc.) than steady cycle units. Sure, they start and stop quickly. And burn out, break, and need more constant repairs. I’ve stood inside exhaust ducts, looking through the cracks at the mountains outside.
Price of renewables: ~$0.5trillion.
Price of saving the planet?
Priceless.
There are some things in life money can’t buy. For everything else, there’s GreenieCard.
Governments do not mind spending these horrendous sums on renewable energy – because.
1. It is not their money – it is money they appropriate from taxpayers.
2. Being seen to spend up large on “saving the world” i.e. “renewables” gets votes.
Lets not blame the politicians for all this, they are simply doing these ridiculous things because the public at large want them to.
In other words the greens, the UN with their manufactured crisis and the IPCC with their grinding pessimism are winning the race to break western economies.
All this is consistent with Agenda 21 policies where even capitalism and therefore wealth for the masses is under threat.
During our earthquakes, our government took advantage of the “unprecedented opportunity” to build a new Agenda 21 conforming city. They have started this process by abusing private property rights and have ripped the land from beneath many suburbanites in order to “preserve waterways” and are now working on the CBD (where the real estate is next to worthless) by having “compulsory purchases” of strategic sites.
And in spite of court decisions ruling the Governmental powers are being abused, they are still getting away with it. In fact they got reelected recently with an improved majority.
A lot of this is in my blog at
Cheers
Roger
“At least ~$0.5 trillion.”
This of course is low number.
Now, there is a place where solar energy does make economic sense.
It’s chosen as the best way to generate electrical power, and does not require
evil government laws forcing people to pay for it.
Nor are governments paying them to use solar energy. It’s simply a cheaper
and more reliable way to get electrical power.
And has nothing to do with stupid evil drug crazed hippies ideas.
You mean in space?
Yes.
In geostationary orbit satellites can get a nearly constant source of electrical power from solar panels:
“A geostationary satellite experiences eclipses only during two near-equinox periods a year (March, September) and these eclipses last no more than 72 minutes a day, or 5% of the total time. ”
The hundreds of geostationary communication satellite require a considerable amount electrical power to beam a strong enough signal to Earth.
The Satellite in low earth orbit can be more blocked by the Earth, but they still get more hours per day of solar energy than anywhere on Earth at earth surface, or assuming near cloudless conditions one only get about 25% of an average day with usable amounts of sunlight due to Earth’s thick atmosphere.
Of course if in Germany or UK you aren’t getting much in terms of getting cloudless conditions and during the wintertime it is near useless.
About only vaguely reasonable place on Earth to harvest solar energy is in desert regions- which also are not regions with a lot of world’s population- but again Earth’s atmosphere still makes it so one can only get about 6 hours of a significant amount sunlight per day. Whereas if on the Moon which is a vacuum, one can get 12 hours solar energy per average 24 hour “day”- 12 hours of average 1360 watts per meter of solar flux, whereas it’s only in hours around noon on earth that one gets the max solar flux is around 1000 watts per square meter [if there is clear skies].
It solar energy wasn’t actually a scam, what would have been promoted would using solar energy to heat water- such use of solar energy is or can be economically sound- it save you the energy costs of heating hot water.
But the scam was the lie that costs of PV panels was the main issue- that it was problem these idiot politicians could solve- if they wasted enough tax dollars.
Profrt-iec
The introduction of the Renewable Heat Incentive (RHI) is a scam of the highest order. The DECC claims that 8.6m UK homes will install renewable heating – Heat pumps, solar thermodynamic etc) by 2025.
What they fail to grasp is that draughty, poorly insulated, ageing housing stock simply doesn’t have the thermal insulation required to maintain a warm home when ambient temperatures drop. The RHI payments have hooked people in, but they soon feel remorse when it actually gets cold outside.
The fact is, these renewable systems are not the silver bullet the DECC claims they are. If they were, there wouldn’t be any need to subsidise them with RHI payments up to 19.2p per kWh and put your elderly neighbour’s bills up.
The government’s green agenda has created an industry of failure and inefficiency.
The solar generation figures from Comet Churymov–Gerasimenko aren’t looking too good…
More seriously, this is an excellent article. The numbers show just how mad this whole thing is.
Recently Christopher Booker, when discussing a proposed new offshore wind farm, stated that a recently built gas-fired station would, on average, generate 8 times more power and cost just half as much to build. And of course its output would be completely reliable and controllable.
Renewables are a completely idiotic solution for a problem that almost certainly doesn’t exist. This is a major reason why I’m now proud to be a UKIP voter. I’ll never vote Conservative again until, at the very least, they’ve promised to scrap the Climate Change Bill, a suicide note that will cost my country around 1.3 trillion pounds over the next few decades. It will achieve nothing except to remorselessly push up the price of energy. As others have said, I regard this squandering of resources as a crime against humanity. Think of what could have been done with a small fraction of that money….
Chris
They’re all starting to go bust
Edits: “Had conventional Gas Fired technology had been used” — one “had” suffices
“if an power generating installation” — a power …
“The actual savings of CO2 emissions may be hardly exceeded over their installed working life” — nonsensical. Rephrase.
Unfair, irrelevant, useless post.
The renewable stuff is sold to the public as it doesn’t need fuel. So, of course it may, and do, cost more : wouldn’t you be willing to pay more for a car that wouldn’t need fuel ?
The point is : how much more ?
How much more is worth a device that doesn’t burn fuel nor produce ashes ? Is the investment worth the variable cost (gas, maintenance, etc.) that won’t be incurred ?
But this post doesn’t answer these questions. That’s why it’s irrelevant.
It does need gas and it produces ashes as well. The amount of maintenance required is huge, that isn’t even figured in here. Secondly, they do fail and they burn quite nicely. Secondly, the stress on a power system of non consistent power sources causes a ton of problems. It’s a maintenance head ache and costs real money. The variable cost of natural gas and coal never range over the expense of these supposed renewable energy sources.
I said a long time ago, it’s not big oil that beat you, it’s not the conservatives or anti global warming crowd. It was Microsoft. It was an Excel spreadsheet that beat wind and solar. When they do the math, it just doesn’t make sense.
Greetings,
There is something I wanted to show you, it is extremely interesting, you have to see it! Take a look here
My Best, gerjaison | https://wattsupwiththat.com/2014/11/21/renewable-energy-solar-and-wind-power-capital-costs-and-effectiveness-compared/ | CC-MAIN-2020-50 | refinedweb | 12,348 | 62.78 |
#include <line.h>
Inheritance diagram for line::
The wavelet Lifting Scheme "line" wavelet approximates the data set using a line with with slope (in contrast to the Haar wavelet where a line has zero slope is used to approximate the data).
The predict stage of the line wavelet "predicts" that an odd point will lie midway between its two neighboring even points. That is, that the odd point will lie on a line between the two adjacent even points. The difference between this "prediction" and the actual odd value replaces the odd element.
The update stage calculates the average of the odd and even element pairs, although the method is indirect, since the predict phase has over written the odd value. 56 of file line.h. | http://www.bearcave.com/misl/misl_tech/wavelets/forecast/doc/classline.html | CC-MAIN-2017-47 | refinedweb | 126 | 67.38 |
view raw
Could you guys please explain to me how to set main class in SBT project ? I'm trying to use version 0.13.
My directory structure is very simple (unlike SBT's documentation). In the root folder I have
build.sbt
name := "sbt_test"
version := "1.0"
scalaVersion := "2.10.1-local"
autoScalaLibrary := false
scalaHome := Some(file("/Program Files (x86)/scala/"))
mainClass := Some("Hi")
libraryDependencies ++= Seq(
"org.scalatest" % "scalatest_2.10" % "2.0.M5b" % "test"
)
EclipseKeys.withSource := true
project
Hi.scala
object Hi {
def main(args: Array[String]) = println("Hi!")
}
sbt compile
sbt run
The system cannot find the file C:\work\externals\sbt\bin\sbtconfig.txt.
[info] Loading project definition from C:\work\test_projects\sbt_test\project
[info] Set current project to sbt_test (in build file:/C:/work/test_projects/sbt_test/)
java.lang.RuntimeException: No main class detected.
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) No main class detected.
[error] Total time: 0 s, completed Apr 8, 2013 6:14:41 PM
You need to put your application's source in
src/main/scala/,
project/ is for build definition code. | https://codedump.io/share/6pjfjsigWTQ5/1/how-to-set-main-class-in-sbt-013-project | CC-MAIN-2017-22 | refinedweb | 195 | 53.78 |
Setting up Sapper with Netlify CMS
What are Sapper and Netlify CMS?
Sapper
Sapper is Svelte's answer to Next.js/Nuxt.js. It's a way of rendering Svelte code on the server so your site is compatible with JavaScript-free devices, and so it renders immediately instead of waiting for a JS blob to download, parse, and run.
Sapper ordinarily runs as a full server application, but using the
sapper export command we can generate a static version of our site that we can host on Github Pages or, in this case, Netlify. That's a great way to have a very fast site that's free for small-to-medium traffic numbers.
Netlify CMS
Netlify CMS is as open-source content management system, meaning it's a way to create blog posts and web pages through a web page. Since it's from Netlify, the static site host, it's designed to work with static site generators like Hugo and Jekyll. We'll be adapting it to work with Sapper.
Putting these together
Adapting Netlify CMS to work with Sapper is pretty straightforward. First we'll follow Netlify's directions for adding the CMS to a generic site. That'll give us a web interface that drops Markdown files into our Sapper site's git repository. Next, we'll update our Sapper site code to see those Markdown files and render them as blog posts.
You can copy/paste the same code changes we make in this tutorial to support adding entire pages to Sapper, or to add multiple content sections, like a personal and a professional blog.
Let's get started!
* Note: If you want to skip all of this and just get something working, you can clone the repository I made for this tutorial.
Let's do it - Netlify CMS
Prepare your workspace
Create a project
Start your project by cloning the Sapper template git repository.
$ npx degit "sveltejs/sapper-template#webpack" my-site $ cd my-site $ npm install
Commit that project
Go ahead and commit and push this to Github so you can create your Netlify project, which is tied to your Git repo.
Create a new repository on Github, and substitute that URL in the fourth command.
$ git init $ git add . $ git commit -am "Degit'd the Sapper starter project" $ git remote add origin <your github project URL here> $ git push
Go to Netlify and create a project
Now that we have a barebones Sapper project in Git, it's time to tell Netlify that we'd like to host that project. Sign up at Netlify.com and begin creating your new Netlify project.
Click the button to create a new site from a Git repo.
Use Github as the source for your project. Netlify CMS only supports Github at this time.
Select the repo you created in Github
Configure your build process. Set the build command to generate the static Sapper site, and the publish directory to the directory where Sapper exports to.
Install the Netlify CMS
Back in our workspace it's time to add the Netlify CMS code to our project.
Add the CMS code
Create the directory
static/admin, then add the below snippet to the file
static/admin/index.html. This file contains the code that bootstraps the Netlify CMS.
<> </body> </html>
Configure Netlify CMS
Edit
static/admin/config.yml and add the following:
backend: name: git-gateway branch: master # Branch to update (optional; defaults to master) publish_mode: editorial_workflow # Allows you to save drafts before publishing them media_folder: static/uploads # Media files will be stored in the repo under static/images/uploads public_folder: /uploads # The src attribute for uploaded media will begin with /images/uploads collections: - name: "blog" # Used in routes, e.g., /admin/collections/blog label: "Blog" # Used in the UI folder: "static/_posts" # The path to the folder where the documents are stored create: true # Allow users to create new documents in this collection slug: "{{slug}}" # Filename template, e.g., title.md fields: # The fields for each document, usually in front matter - {label: "Layout", name: "layout", widget: "hidden", default: "blog"} - {label: "Title", name: "title", widget: "string"} - {label: "Publish Date", name: "date", widget: "datetime"} - {label: "Body", name: "body", widget: "markdown"}
Set up Netlify authentication
We'll use Netlify's authentication service -- called "Identity" -- tot let users log into our CMS and create posts. We'll also wire up Netlify with write access to our Git repo so the CMS can actually add the content to the repo.
Activate Identity
Follow Netlify's directions to activate Identity and connect your git account to your Netlify project. Also, invite yourself as a user to the project.
Add Identity code to your site
We need to add the Netlify Identity code to both our admin page (so we can log in) and our main site (so it can redirect us back to the admin after we log in).
Take this snippet:
<script src=""></script>
And add it to the head section of your
static/admin/index.html:
<> + <script src=""></script> </body> </html>
Also add it to the
<svelte:head> section of your
src/routes/index.svelte:
<svelte:head> <title>Sapper project template</title> + <script src=""></script> </svelte:head>
- You'll also need to add this snippet to the top of your
src/routes/index.svelte:
<script> import { onMount } from 'svelte'; onMount(() => { if (window.netlifyIdentity) { window.netlifyIdentity.on("init", user => { if (!user) { window.netlifyIdentity.on("login", () => { document.location.href = "/admin/"; }); } }); } }); </script>
Success part 1!
Commit and push your code.
$ git add . $ git commit -am "Configured the site to run the Netlify CMS" $ git push
Wait for Netlify to deploy it, then visit your admin site (the URL will be your Netlify site +
/admin, like), log in, and create a post! You won't see the post on your published site yet -- Sapper still doesn't know anything about the Netlify CMS content. Let's fix that!
Lets do it - Sapper rendering markdown blog posts
Here's where the real work comes in. Sapper, out of the box, reads posts from a rather unwieldy
_posts.json file. We're going to replace that with reading from Markdown files that Netlify CMS creates in our repo.
Install dependencies
You'll need to install a few packages for managing the markdown files:
npm install mz glob markdown-it front-matter
globmakes it easy to get a list of Markdown files
mzwraps the standard Node.js
fslibrary in promises, so we can
async/
awaitour way to success
front-matterreads the metadata out of our blog posts and separates it from the markdown content
markdown-itwill render our markdown content
Update the
blog.json server route
The built-in Sapper blog engine reads a list of all blog entries from the
/blog.json server route, which is controlled by the
src/routes/blog/index.json.js file. We're going to open that file and replace the whole thing with this:
import fm from 'front-matter'; import glob from 'glob'; import {fs} from 'mz'; import path from 'path'; export async function get(req, res) { // List the Markdown files and return their filenames const posts = await new Promise((resolve, reject) => glob('static/_posts/*.md', (err, files) => { if (err) return reject(err); return resolve(files); }), ); // Read the files and parse the metadata + content const postsFrontMatter = await Promise.all( posts.map(async post => { const content = (await fs.readFile(post)).toString(); // Add the slug (based on the filename) to the metadata, so we can create links to this blog post return {...fm(content).attributes, slug: path.parse(post).name}; }), ); // Sort by reverse date, because it's a blog postsFrontMatter.sort((a, b) => (a.date < b.date ? 1 : -1)); res.writeHead(200, { 'Content-Type': 'application/json', }); // Send the list of blog posts to our Svelte component res.end(JSON.stringify(postsFrontMatter)); }
Now, when the
blog.json server route is called, Sapper will scan the list of markdown files at
static/_posts/, read the metadata for each one, and create a list of the blog titles, dates, and any other fields (besides content) that we added to our Netlify CMS
fields section.
Edit the per-post Svelte component
Next up, we need to update our Svelte component to fetch the Markdown files instead of the old JSON content, then render those files to HTML and present the content to the user.
Remove an unused server route
Sapper provides a server route for extracting post content from the
_posts.js file. Since we're not using that file, we need neither the file nor the server route. remove both:
$ cd src/routes/blog $ rm _posts.js [slug].json.js
Render markdown posts
Next, open
src/routes/blog/[slug].svelte and replace both
<script> blocks with this code:
<script context="module"> export async function preload({ params, query }) { // the `slug` parameter is available because // this file is called [slug].svelte const res = await this.fetch(`_posts/${params.slug}.md`); if (res.status === 200) { return { postMd: await res.text() }; } else { this.error(res.status, data.message); } } </script> <script> import fm from 'front-matter'; import MarkdownIt from 'markdown-it'; export let postMd; const md = new MarkdownIt(); $: frontMatter = fm(postMd); $: post = { ...frontMatter.attributes, html: md.render(frontMatter.body) }; </script>
We've changed the default Sapper code in two ways:
- We fetch text from the server instead of JSON
- We break up that text into metadata and content, and render the content.
When we put the metadata and content back together, we're passing the rest of the Svelte component the same data it expected to get from the old
[slug].json.js server route, and now everything renders!
Success part 2!
See our work in action
Our site now works! You can see it for yourself by running
npm run dev and visiting .
Verify the exported site
If you want to verify your site exports correctly before committing and sending it to Netlify, you can do this:
$ npm run export $ npx serve __sapper__/export
if the export command didn't produce any
500 messages, visit, click around and confirm that everything works like you expect.
Send it to Netlify
$ git add . $ git commit -am "Configured the site to read and publish markdown blog posts" $ git push
After waiting for Netlify to publish your site, you can visit your site and see the glorious blog post you created earlier!
You're now all set with a hosted and operational Sapper blog! | https://spiffy.tech/blog/setting-up-sapper-with-netlify-cms/ | CC-MAIN-2019-47 | refinedweb | 1,732 | 61.97 |
Forum Index
I'm trying to use the node_dlang pakckage but the code example from this repo isn't working
the command dub build is fine but node example.js retuns an error saying the module.node is not a valid win32 application. How do I fix this?
dub build
node example.js
On Sunday, 6 June 2021 at 04:25:39 UTC, Jack wrote:
Looking at node_dlang's dub.json, it's building a DLL then renaming it to module.node. The JS script then causes node to load the DLL.
So I expect this error may be related to a 32-bit vs. 64-bit issue. I assume you are on 64-bit Windows, in which case recent versions of dub compile as 64-bit by default. So if that's the case, and your installation of node is 32-bit, you would see this error. Ditto when you're loading a 32-bit DLL in a 64-bit process.
On Sunday, 6 June 2021 at 06:10:18 UTC, Mike Parker wrote:
that's right, I was on 64bit system and node was 32bit installation (I didn't even realize I installed the 32bit instead of 64bit). I just installed node 64bit version, that fixed, thanks!
the npm was 32bit so the switch to 64bit worked then npm example.js ran file but electron requires a 32bit module, so I need to switch back to npm 32bit. Now, I can't build the example from node_dlang with --arch=x86, returns the error:
command:
$ dub --arch=x86
output:
Performing "debug" build using C:\D\dmd2\windows\bin\dmd.exe for x86, x86_omf.
node_dlang 0.4.11: building configuration "node_dlang_windows"...
..\..\AppData\Local\dub\packages\node_dlang-0.4.11\node_dlang\source\node_dlang.d(137,11): Error: none of the overloads of `this` are callable using argument types `(string, string, ulong, Throwable)`, candidates are:.
On Sunday, 6 June 2021 at 15:42:55 UTC, Jack wrote:
0.4.11\node_dlang\source\node_dlang.d(137,11): Error: none of the overloads of this are callable using argument types (string, string, ulong, Throwable), candidates are:
this
(string, string, ulong, Throwable).
object.Exception.this(string msg, string file = __FILE__, uint line = cast(uint)__LINE__, Throwable nextInChain = null)
object.Exception.this(string msg, Throwable nextInChain, string file = __FILE__, uint line = cast(uint)__LINE__)
The error is from line 137 of node_dlang.d. Looking at it, we see this:
super (message, file, line, nextInChain);
This is in the constructor of the JSException class, a subclass of Exception, calling the superclass constructor. According to the error message, one or more of the arguments in this list does not match any Exception constructor's parameter list.
JSException
Exception
Looking closer, we can see that the arguments to the super constructor are all declared in the JSException constructor like this:
this (
napi_value jsException
, string message = `JS Exception`
, string file = __FILE__
, ulong line = cast (ulong) __LINE__
, Throwable nextInChain = null)
Compare that with the constructors in the Exception class and you should see that the problem is ulong line. The equivalent argument in the superclass is size_t. In 32-bit, size_t is defined as uint, not ulong. So it's passing a ulong to a uint, which is a no-no.
ulong line
size_t
uint
ulong
The JSException constructor should be modified to this:
, size_t line = __LINE__
The README does say it hasn't been tested with 32-bit. So there may be more such errors.
Unrelated, but I recommend you use --arch=x86_mscoff so that you can use the same linker and object file format as -m64 uses (MS link, or lld, and PE/COFF), rather than the default (which is OPTLINK and OMF). It may save you further potential headaches.
--arch=x86_mscoff
-m64
Hello, I'm the author of the library.
Indeed I only tested it on 64-bit systems. I can try to make it 32-bit compatible if needed.
Aside from the auto-translated headers, in the case of Windows the repo also includes node.lib compiled for 64-bit, so unless the 32-bit version is added it will also give errors.
I'm pretty sure it does work with electron as I have used it myself.
On Sunday, 6 June 2021 at 17:32:57 UTC, Mike Parker wrote:
[...]
Thanks, I managed to get out this error by making everything 64bit so no type sizes mismatch anymore. The electron itself was still 32bit, which was causing the error to load but it was gone once I reinstalled the 64version.
Unrelated, but I recommend you use --arch=x86_mscoff
thanks for the tip. in this case, i get the same error above not found the proper overload. But it's to avoid futher potential headaches, I'll edit the node_dlang.d file to make it work. Thanks!
so that you can use the same linker and object file format as -m64 uses (MS link, or lld, and PE/COFF), rather than the default (which is OPTLINK and OMF). It may save you further potential headaches.
On Sunday, 6 June 2021 at 21:44:44 UTC, NotSpooky wrote:
Hello, I'm the author of the library.
nice works, thanks for the library!
Indeed I only tested it on 64-bit systems. I can try to make it 32-bit compatible if needed.
for me i think it would be needed, i made everything 64bit so it at least compiled and ran.
I managed to compile the module and it passed this test:
const nativeModule = require('./module.node');
const assert = require('assert');
assert(nativeModule.ultimate() == 42);
The D code looks like this:
import std.stdio : stderr;
import node_dlang;
extern(C):
void atStart(napi_env env)
{
import std.stdio;
writeln ("Hello from D!");
}
int ultimate()
{
return 42;
}
mixin exportToJs! (ultimate, MainFunction!atStart);
its builds sucessfully with dub no arguments (no --arch=x86_mscoff yet because it requires to fix other compilation error). The dub script looks like this:
dub
```json
{
"authors": [
"test"
],
"dependencies": {
"node_dlang": "~>0.4.11"
},
"description": "using electron from D",
"license": "proprietary",
"name": "eled",
"targetType": "dynamicLibrary",
"targetName" : "module.node",
"targetPath": "bin"
}
```
then i test with node:
node test.js
it works fine. However, when I attempt to use it in the prescript within electron, I get this error:
A JavaScript error occured in the browser process
---------------------------
Uncaught Exception:
Error: A dynamic link library (DLL) initialization routine failed.
\\?\C:\Users\001\Desktop\ele\module.node
at process.func [as dlopen] (VM70 asar_bundle.js:5)
at Object.Module._extensions..node (VM43 loader.js:1138)
at Object.func [as .node] (VM70 asar_bundle.js:5)
at Module.load (VM43 loader.js:935)
at Module._load (VM43 loader.js:776)
at Function.f._load (VM70 asar_bundle.js:5)
at Function.o._load (VM75 renderer_init.js:33)
at Module.require (VM43 loader.js:959)
at require (VM50 helpers.js:88)
at Object.<anonymous> (VM88 C:\Users\001\Desktop\ele\preload.js:6)
```
the lines of this in the prescript goes like this:
```javascript
const nativeModule = require('./module.node');
const assert = require('assert');
assert(nativeModule.ultimate() == 42);
What am I missing?
On Monday, 7 June 2021 at 02:33:38 UTC, Jack wrote:
Does your code / node_dlang initialize Druntime before calling writeln?
Try replacing the writeln with puts (from core.stdc.stdio) which doesn't require an initialized runtime.
node_dlang
writeln
puts
core.stdc.stdio
On Monday, 7 June 2021 at 17:22:48 UTC, MoonlightSentinel wrote:
Does your code / node_dlang initialize Druntime before calling writeln?
actually i didnt so I just added:
shared static this()
{
Runtime.initialize();
}
shared static ~this()
{
Runtime.terminate();
}
but it didn't change anything
Try replacing the writeln with puts (from core.stdc.stdio) which doesn't require an initialized runtime.
I've tried just removed the writeln() call it didn't change anything either | https://forum.dlang.org/thread/vmpldzrbccasfoiwdngx@forum.dlang.org | CC-MAIN-2021-25 | refinedweb | 1,293 | 68.36 |
On Sun, 27 Jun 1999 allbery@ece.cmu.edu wrote:> On 27 Jun, Jason Thorpe wrote:> +-----> | Alexander Viro <viro@math.psu.edu> wrote:> | > doesn't unmap the stuff. Oh, shit, there is such thing as pending> | > unlink... Does vgone() force it?> | > | Regarding unlink()... those aren't operations on vnodes. Those are> | operations on the filesystem namespace, and are thus (correctly)> | unaffected.> +--->8> > I believe what he meant is "how is deallocation of a pending-unlink> file whose only reference is an open fd which has been revoked dealt> with"?> > (To which my own answer would be: "deallocated on close as usual, no> reason to treat this case specially that I know of".)When it's already remounted r/o?-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.rutgers.eduPlease read the FAQ at | http://lkml.org/lkml/1999/6/27/165 | CC-MAIN-2013-48 | refinedweb | 144 | 60.72 |
Several comments on the Versioning discussion:
-- New suggestion: has anyone looked at "Open Software Description Format"
or OSD?
This is a pre-existing XML format for describing software releases - just
about what we want. I'm not sure I like all the fields they use in their
format, but you can always extend it with namespaces. Check it out.
-- I really like Thomas' idea below, where the actual version info is
stored in an XML file, and then read in by a Version.class. As he points
out, both communities are happy: XML and text parsers have the info, and
Java programs have the info. This would also make it simple to have the
version info in just one place for the entire project, since the
documentation could also parse the XML file and pull out version info too.
One important question is where should the XML version file live, both in
source trees and in binary outputs? Although the Version.class can
gracefully fail when it doesn't find the version.xml (or whatever name)
file, I'd rather that this didn't happen too often.
Re: "Thomas B. Passin" <tpassin@mitretek.org> said:
> ... "How about the
> version is described by an xml file, and the Version class could read
> that file to produce its output."
-- I also agree with Mike on several points. the version numbers should
probably be stored as strings, or as several *separate* numeric fields.
Yes, it's a tiny bit more programming work to compare versions then, but
it's worth it in my experience.
Also, we should have at least three levels or fields (major, minor,
maintenance release? or something like that?) in the versioning scheme.
It's better to start off with slightly more info in this area, since we'll
probably find additional uses for version info later on.
Re: Mike Pogue <mpogue@apache.org> said (among other things):
> 1) The information should be strings. Do NOT count on it being a
> floating point number with a single dot...
-- I also support points 2 and 3 that Stefano makes below.
Java projects should be distributed as jar files. While some
non-developers may be more experienced wit hzip (or gzip) files, I think
that's a fairly minor point. A big point is that a jar tool exists on most
platforms we'll be on, so we can have just one file for distribution (which
is a big win). Also, the jar format can be read by many zip utilties, so
you can always use that to unzip in a pinch.
The name-type-version.jar is a pretty good scheme too. The important point
I like about this is that the name and version are the same ones that the
Version file/class will support, which makes writing automated tools much
easier.
Re: Stefano Mazzocchi <stefano@apache.org> wrote:
> 2) archive type
>
> I propose that every java project is distributed as one or more jar
> file.
>
> 3) package name
>
> I propose the following name model for packages
>
> name[-type]-version.jar
>
> where
>
> name := the project name
> type := an optional indication (bin|src|all|???)
> version := the version information
>
> note that "name" and "version" _must_ be the one passed by "getName()"
> and "getVersion()" methods in Version.
Hope this wasn't too long a discourse!
---- ----
- Shane Automation, Test, & Build guy
mailto:shane_curcuru@lotus.com AIM:xsltest | http://mail-archives.apache.org/mod_mbox/xml-general/199911.mbox/%3COF388A7B8D.C941B0FA-ON05256832.004927D1@LocalDomain%3E | CC-MAIN-2019-26 | refinedweb | 563 | 73.88 |
On Thu, 25 Apr 2002 12:48, Peter Donald wrote:
> Currently there can not be multiple roles with same classname but different
> classdata
Yes, I know. Doing away with that restriction is the whole point.
> and I can't honestly see where it would be viable.
We are awful close to being able to drop in two different versions of an
antlib, and have it work. Not necessarily in the same project, but certainly
across projectrefs or <ant> calls. The main issue is naming - obviously this
isn't going to work if we have a global role namespace.
> Java is not designed to work that way
No? Bugger. That's going to make things harder, I guess :)
> and until we go
> there I don't see any need to burden the rest of the system with
> complexitites until we know we are going to use it.
Ok. I'm not planning on ignoring complexity. What I'd like to do is to *try*
to solve the issues we have with roles (can't have more than one with the
same name, can't have a typeless role, that kinda thing), without impacting
complexity too much. If it turns out to be horribly complex, then, we just
have to let it go. I think it's worth exploring.
> Given that the classname is not meaningless - it accurately describes
> exactly what the role is ;)
Fair enough. Not meaningless. But not any more meaningful than the short
name, either.
> I would be opposed to untyped roles but you knew I would say that :)
Yeah, I knew. Would you be opposed to making them available to task writers
to use, without us using them in myrmidon, and provided they fell out of
whatever role solution we come up with?
> The longer ones map to physical representations. It is easy to verify if a
> component actually implements service by checking the interfaces
> implemented by component and see if one of them has same name as role
> (minus any decoration if necessary).
Checking the name doesn't guarantee much at all when the component, the role,
and the thing doing the checking are all in different classloaders. That's
why we added the role's Class object to the role registry - to make the check
foolproof.
> If you are a user you need not know about physical name or suffer through
> ClassLoader hell. If you are a develoepr you need not know the logical
> name.
You do if you are writing something that sits between the user and the
internals. Like, say, a task. Or if you happen to be working on the
infrastructure: What's this name here? Is is a short name, a long name, a
class name? Ug.
Like I said before, if we were to change a role's (programmatic) identifier
from the role's class name, to the role's Class object, then it really
doesn't matter what the role names are.
--
Adam
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/ant-dev/200204.mbox/%3C200204251446.22372.adammurdoch@apache.org%3E | CC-MAIN-2016-36 | refinedweb | 516 | 71.44 |
In today’s Programming Praxis exercise, our goal is to provide two different solutions for the well known SEND + MORE = MONEY sum, in which each letter must be replaced by a valid digit to yield a correct sum. Let’s get started, shall we?
A quick import:
import Data.List
I’ll be honest, the only reason I wrote this first solution this way is because the exercise explicitly called for checking all possible solutions using nested loops. It’s so horribly inefficient! Take the test whether all digits are unique for example: normally you’d remove each chosen digit from the list of options for all subsequent ones, but we’re not allowed to do that. I normally also wouldn’t do the multiplications this explicitly, but to avoid overlap with the second solution I left it like this. Unsurprisingly, it takes almost a minute and a half to run.
send1 :: ([Integer], [Integer], [Integer]) send1 = head [([s,e,n,d], [m,o,r,e], [m,o,n,e,y]) | s <- [1..9], e <- [0..9], n <- [0..9], d <- [0..9] , m <- [1..9], o <- [0..9], r <- [0..9], y <- [0..9] , length (group $ sort [s,e,n,d,m,o,r,y]) == 8 , 1000*(s+m) + 100*(e+o) + 10*(n+r) + d+e == 10000*m + 1000*o + 100*n + 10*e + y]
This is actually the solution I started with: since all digits need to be unique, you can simply generate the permutations of the numbers 0 through 9, backtracking when s or m are zero or when the numbers don’t add up correctly. By writing a function to do the multiplication and assinging some variables we not only make things more readable, but we also get to use the problem statement directly in the code, which I find conceptually satisfying. I do have the distinct impression that this is what we’re supposed to make in part 2 of this exercise though, since it runs in about a second, which is significantly faster than the two provided solutions.
send2 :: (Integer, Integer, Integer) send2 = head [ (send, more, money) | (s:e:n:d:m:o:r:y:_) <- permutations [0..9] , s /= 0, m /= 0, let fill = foldl ((+) . (* 10)) 0 , let send = fill [s,e,n,d], let more = fill [m,o,r,e] , let money = fill [m,o,n,e,y], send + more == money]
A quick test shows that both algorithms produce the correct solution.
main :: IO () main = do print send1 print send2
Tags: bonsai, code, Haskell, kata, money, more, praxis, programming, send, sum
December 23, 2013 at 9:07 pm |
I have solved the problem using my set-cover package. The compiled program runs in half a second:
December 23, 2013 at 10:06 pm |
In send2 you test every assignment twice, because you ignore the last two digits in “(s:e:n:d:m:o:r:y:_) <- permutations [0..9]". | http://bonsaicode.wordpress.com/2012/07/31/programming-praxis-send-more-money-part-1/ | CC-MAIN-2014-10 | refinedweb | 488 | 62.58 |
In this tutorial, we will learn to use FreeRTOS software timers with Arduino. Unlike, hardware timers of Arduino, Software timers are provided by the FreeRTOS kernel and they do not use Arduino timers hardware resources. Because they are implemented by FreeRTOS and under the control of RTOS kernel. Most, importantly, they do not use Arduino processing time, until we actually call software timer call back function.
FreeRTOS Software Timers Introduction
In real-time operating systems, software timers provide the functionality to execute a task or function after a specific interval of time. In other words, they help to create a periodic task with a fixed frequency. Hence, we can create a function and attach a software timer with it.
Software Timers Configuration Setting
The use of software timers is optional in FreeRTOS. Before using them in your application, you should enable them by following these steps:
First, build the FreeRTOS source file by going to this location FreeRTOS/Source/timers.c as part of your project. But in FreeRTOS Arduino library, timers.c builds automatically when we build Arduino code.
But you should Set configUSE_TIMERS to 1 in FreeRTOSConfig.h. To set configUSE_TIMERS, first go to the FreeRTOS Arduino library folder and open FreeRTOSConfig.h file.
After that set configUSE_TIMERS to 1 as shown below:
Software Timer’s Callback Function
One most important term is the software timer’s callback function. The function which executes only by the software timer call is known as software timer’s callback function. This is a prototype of a call back function.
void ATimerCallback( TimerHandle_t xTimer );
It return void and only argument it takes as a paramter is a handle to the software timer. You will learn about its use in example section.
Note: The software timer call back function must be short. It should run in a single execution from start to end and must not enter blocking state. Also to avoid its function to enter blocking state, do not call vTaskDelay() API.
Period of Software Timer
A Software timer period defines the interval between timer starts and after a specified time software timer’s call back function start to execute. For example, if we set the period to 100ms, a software timer will start and as soon as it reaches 100ms, the timer callback function starts its execution.
Types of Software Timers
FreeRTOS software timers can be configured in two modes. Details of both types are given below:
One-shot timers
One-shot timers execute its callback functions only once. For example, it will start and after the specified time executes call back function. But it will not restart itself automatically. We should restart it manually.
Auto-reload timers
Unlike one-shot, auto-reload timers are used for periodic execution of functions. They will re-start themselves after executing a callback function.
Now let’s see the difference between one-shot auto-reload timers with the help of a diagram. This picture depicts the difference between them. The dashed vertical lines mark the times at which a tick interrupt occurs.
According to this diagram, Timer1 is a one-shot type with a period of 6-time ticks, and Timer2 is an auto-reload type with a period of 5-time ticks. Both timers start at time t1.
Timer1 starts at t1 and its call back function starts to execute after 6 ticks at time t7. But its callback function will not execute again. Because timer1 is a one-shot timer. Similarly, Timer2 starts at t1, and its call back function executes after every 5 ticks at t6, t11, t17, and so on.
FreeRTOS Creating and Starting a Software Timer
In this section, we will learn to create and start a software timer in one-shot and auto-relaod mode using Arduino.
xTimerCreate() API Function
xTimerCreate() API function is used to create a timer. Therefore, we must create a timer with the help of this FreeRTOS API before using a timer. FreeRTOS software timer can be created either before staring a scheduler or after the scheduler has started.
Like FreeRTOS tasks and queues, reference variables are also used for timers that are of type TimerHandle_t. This is a xTimerCreate() API function prototype. It returns a TimerHandle_t to reference to the software timer it creates.
TimerHandle_t xTimerCreate( const char * const pcTimerName, TickType_t xTimerPeriodInTicks, UBaseType_t uxAutoReload, void * pvTimerID, TimerCallbackFunction_t pxCallbackFunction );
For more information on xTimerCreate() visit this link.
This example creates the one shot timer, storing the handle to the created timer in xOneShotTimer.
- First argument to this function is a name of the timer
- Second argument specifies the period that is 3333
- Setting third argument to pdFALSE creates a one-shot software timer and setting pdTRUE creates an auto-reload timer
- Fourth argument used to specify timer ID, but This example does not use the timer id. Therefore, it is set to 0.
- The last argument is the name of the callback function.
TimerHandle_t xOneShotTimer; xOneShotTimer = xTimerCreate("OneShot",3333, pdFALSE, 0, prvOneShotTimerCallback );
Start and Stop Software Timer
It is used to start a timer that is already created. When we create a timer with xTimerCreate(), it will be in a dormant state. In the dormant state, software timer exists, and can be referenced by its handle, but is not running, so its callback functions will not execute. Therefore, we must start a timer with xTimerStart() after creating it.
Similarly, we can also stop a timer using xTimerStop() function. After stopping the timer, it enters into a dormant state. This is a prototype of a timer start function.
BaseType_t xTimerStart( TimerHandle_t xTimer, TickType_t xTicksToWait );
The first argument to xTimerStart() is a handle of the reference timer that you want to start and the second argument is a starting time of the timer. For more information on xTimerStart() visit this link.
This example starts the software timers, using a block time of 0 (no block time). The scheduler has not been started yet so any block time specified here would be ignored anyway
TimerHandle_t xOneShotTimer; xTimer1Started = xTimerStart( xOneShotTimer, 0 );
The xTimerDelete() API function deletes a timer. By calling xTimerDelete(), it will not exist and also removes it reference variable.
Create and Start FreeRTOS Software Timer with Arduino
This code generates on-shot timer and auto-load timer with 6 seconds and one second periods respectively.
#include <Arduino_FreeRTOS.h> #include <timers.h> #include <task.h> /* The periods assigned to the one-shot and auto-reload timers are 6 second and one econd respectively. */ #define mainONE_SHOT_TIMER_PERIOD pdMS_TO_TICKS( 3333 ) #define mainAUTO_RELOAD_TIMER_PERIOD pdMS_TO_TICKS( 500 ) //create reference hanlders for one-shot and auto-relaod timers TimerHandle_t xAutoReloadTimer, xOneShotTimer; BaseType_t xTimer1Started, xTimer2Started; void setup() { Serial.begin(9600); // Enable serial communication library. /* Create the one shot timer, storing the handle to the created timer in xOneShotTimer. */ xOneShotTimer = xTimerCreate( /* Text name for the software timer - not used by FreeRTOS. */ "OneShot", /* The software timer's period in ticks. */ mainONE_SHOT_TIMER_PERIOD, /* Setting uxAutoRealod to pdFALSE creates a one-shot software timer. */ pdFALSE, /* This example does not use the timer id. */ 0, /* The callback function to be used by the software timer being created. */ prvOneShotTimerCallback ); /* Create the auto-reload timer, storing the handle to the created timer in xAutoReloadTimer. */ xAutoReloadTimer = xTimerCreate( /* Text name for the software timer - not used by FreeRTOS. */ "AutoReload", /* The software timer's period in ticks. */ mainAUTO_RELOAD_TIMER_PERIOD, /* Setting uxAutoRealod to pdTRUE creates an auto-reload timer. */ pdTRUE, /* This example does not use the timer id. */ 0, /* The callback function to be used by the software timer being created. */ prvAutoReloadTimerCallback ); /* Check the software timers were created. */ if( ( xOneShotTimer != NULL ) && ( xAutoReloadTimer != NULL ) ) { /* Start the software timers, using a block time of 0 (no block time). The scheduler has not been started yet so any block time specified here would be ignored anyway. */ xTimer1Started = xTimerStart( xOneShotTimer, 0 ); xTimer2Started = xTimerStart( xAutoReloadTimer, 0 ); /* The implementation of xTimerStart() uses the timer command queue, and xTimerStart() will fail if the timer command queue gets full. The timer service task does not get created until the scheduler is started, so all commands sent to the command queue will stay in the queue until after the scheduler has been started. Check both calls to xTimerStart() passed. */ if( ( xTimer1Started == pdPASS ) && ( xTimer2Started == pdPASS ) ) { /* Start the scheduler. */ vTaskStartScheduler(); } } } void loop() { // put your main code here, to run repeatedly: } static void prvOneShotTimerCallback( TimerHandle_t xTimer ) { TickType_t xTimeNow; /* Obtain the current tick count. */ xTimeNow = xTaskGetTickCount(); /* Output a string to show the time at which the callback was executed. */ Serial.print("One-shot timer callback executing "); Serial.println(xTimeNow/31); } static void prvAutoReloadTimerCallback( TimerHandle_t xTimer ) { TickType_t xTimeNow; /* Obtain the current tick count. */ xTimeNow = xTaskGetTickCount(); /* Output a string to show the time at which the callback was executed. */ Serial.print("Auto-reload timer callback executing "); Serial.println( xTimeNow/31 ); }
Arduino Serial Monitor Output
Now upload this code to Arduino. As you can observe from the output of serial monitor that one-shot timer executes call back function only once after 6 seconds and auto-reload timer executes its call back function every one second.
Other FreeRTOS tutorials: | https://microcontrollerslab.com/freertos-create-software-timers-with-arduino/ | CC-MAIN-2021-39 | refinedweb | 1,495 | 56.66 |
Are.
Contents
- Contents
- Morgan's question
- The really short answer
- What is multi-threading?
- Multi-threading gets complicated
- Using threads in Unity
- ThreadPool
- Coroutines vs Threads
- Managing a thread with a promise
- Example Unity project
- Conclusion
- Resources
Morgan's question
I have recently started working with your C-Sharp-Promise library for Unity, and it's been great. I am however wondering if this library deals with threading so I can run synchronous code as a promise, or is this something I'd have to implement myself? Or is this where you would wrap it with a unity coroutine in a promise? I am trying to avoid unity coroutines at all costs ;)
The really short answer
The C-Sharp promises library doesn't contain any support for threading and doesn't really have anything to do with threading, but you can easily use the promise library with threads. Just instantiate a promise, pass it into your thread function then call
Resolve on the promise when the thread has completed it's work. Also don't forget to call
Reject if anything goes wrong, that allows you to chain your error handling.
Threads and co-routines are different, which I'll explain later in this post, and it's difficult and unecessary to avoid coroutines in Unity, so maybe don't try too hard to do that! Co-routines are ugly but they can be managed with promises and unlike worker threads, co-routines can access the Unity API without having to jump through hoops.
Please read on for code examples and a longer discussion about threads, promises and Unity...
What is multi-threading?
Creating a multi-threaded application is the process of splitting your application up so that you may have multiple streams of code execution. When developing a game you might split rendering, physics and game logic into separate threads.
Why do this? Because you can achieve better performance if your game executes theses separate tasks in parallel. This means each thread can potentially run on a separate CPU core and when that happens you really can get massive performance improvements. Even when there is only a single CPU core you still can gain some performance benefits, say if one thread is blocked doing IO the other threads can continue to do their respective jobs.
Of course whether or not you can increase performance through multi-threading depends a lot on the situation at hand and on your architecture. You can't just throw multi-threading into the mix and expect it to work: you are going to need an understanding of how it can help and if it fits your problem.
Separate threads are typically used in games for loading assets, so that while the game continues to run, render and be interactive, its assets are being loaded in the background. Open world games especially make use of this kind of technique.
Multi-threading gets complicated
There is a big problem with threading: it can complicate things massively.
Starting a thread and managing its lifetime is not too difficult. Having two threads in a application is very manageable, say you have the main thread and then one worker thread for loading assets.
Still there are a series of potential problems you must deal with and they almost always center on how threads are synchronized and how they share data.
Problems that can occur when adding even one extra thread:
- Race conditions
- Deadlocks
- Livelocks
- False sharing
- Shared data corruption
- Timing based issues
- Non-determinstic code execution: unrepeatable sequences of thread interaction
These problems all contribute to complexity in debugging, maintenance nightmares and weird bugs that only show for users and can never again be reproduced.
So you need to be very very careful about how you share data between your threads and how you synchronize them.
Have I turned you off threads yet?
It gets worse. Have you heard of Metcalf's Law? I've mentioned it before in this blog and I think it applies to multi-threading as well. As you add more communicating threads to your game the complexity of the communication between them becomes exponentially worse. As a rule, the more threads you have the more difficult it is to debug your code and understand what the hell it is doing.
So don't go getting into multi-threading lightly or because it sounds cool.
It's an advanced programming technique and you can easily get bogged down in thread-related problems. Needless to say this isn't good for your project. Please make sure you weigh up the costs and benefits.
When you have multiple threads running and working on the same data at the same time it can be difficult to avoid corrupting your data. If possible you should only allow a single thread at a time to work on a block of data. If you need multiple threads to work on the data at the same time you'll have to use locking to prevent the threads from smashing the data and locking can cause it's own problems, for example deadlocks.
An alternative to locking is to use lockfree data structures to manage your data. Don't try and roll your own though. If Jon Skeet can't do it you most certainly can't. Lockfree data structures are fantastic, presuming you have robust and bulletproof code to handle this. If you are interested then Julian Bucknall's blog is a good place to start.
Using threads in Unity
So you want to use threads with Unity?
Normally in a game engine we'd use different threads for rendering, effects, physics, etc. But Unity takes care of most of that for us. Also, rather unfortunately we can only access the Unity API from the main thread, this means we can't use the Unity API from worker threads. This is a big limitation! However threads are still useful in Unity for loading data and doing rare and expensive computations.
Eventually you will probably need to join a worker thread back to the main thread in order to do something with the data that was loaded or the expensive calculation that was performed. You can't use the Unity API from a worker thread, so any subsequent work that is to be done with the Unity API must be dispatched to run on the main thread.
Starting a worker thread is really simple. First you need a thread function that will be run within the worker thread:
private void ThreadFn(object threadInput) { // // Do work in the thread. // // ... }
To start the worker thread: instantiate a
Thread and call
Start on it:
var thread = new Thread(ThreadFn); var threadInput = ...; // Some input object to pass to the thread. thread.Start(threadInput);
You'll need to import the
System.Threading namespace to compile this code.
ThreadPool
When you have a bunch of threads to manage it might be easier for you to use the
ThreadPool. Again you need a thread function, but this time you don't need to explictly create a thead, just queue your thread function to be executed on the next available thread in the thread pool:
var threadInput = ...; // Some input object to pass to the thread. ThreadPool.QueueUserWorkItem(ThreadFn, threadInput);
Using a thread pool can be a little more efficient as threads can be reused multiple times, so there there is no overhead next time to create the thread, if there is one available it will be reused.
Coroutines vs Threads
So what do coroutines have to do with threads?
Well, nothing really. Coroutines are not threads. A coroutine might seem like it is a thread, but coroutines execute within the main thread.
The difference between a coroutine and a thread is very much like the difference between cooperative multitasking and preemptive multitasking. Note that a coroutine runs on the main thread and must voluntarily yield control back to it, if control is not yielded (this is where the coroutine must be cooperative) then your coroutine will hang your main thread, thus hanging your game.
Because coroutines run in the main thread you won't have any of the added complexity that you get when using multiple threads that must share data. That's a good argument for sticking with coroutines if you can. Unfortunately the things that threads are good for don't necessarily apply to coroutines. If you try do any blocking operations in a coroutines you will block the main thread, so you should consider using worker threads in these scenarios.
Managing a thread with a promise
Now let's look at how to manage a thread with a promise. We'll get straight into the detail. If you need an indepth introduction to promises, please see my earlier article on the subject.
The following example code shows the basics of managing a thread with a promise:
public class ExampleThreadStarter : MonoBehaviour { // // The thread function: This function runs in the thread. // private void ThreadFn(object threadInput) { var promise = (Promise)threadInput; // // Do work in the thread. // // ... // // Resolve the promise when the thread is complete. // promise.Resolve(); } // // Call this function to start the thread. // public IPromise StartWorkerThread() { var thread = new Thread(ThreadFn); var promise = new Promise(); thread.Start(promise); return promise; } }
To retrieve a result back from the thread, some kind of output from the thread, you must use the generic version of
Promise. For example say you want to get a string back, you need to instantiate your promise as follows:
var promise = new Promise<string>();
Now pass the string-resolving promise into your thread function. When the work is done resolve the promise with an appropriate value:
private void ThreadFn(object threadInput) { var promise = (IPendingPromise<string>)threadInput; // // Do work in the thread. // // ... // // Resolve the promise with an appropriate // value when the thread is complete. // promise.Resolve("Hi from inside the thread!"); }
Also be sure to call
Reject on your promise if anything goes wrong. If you don't do this you will silently lose errors - you won't know that any problem has happened. It's best to put an exception handler around your thread function so you can reject the promise if anything goes wrong:
private void ThreadFn(object threadInput) { var promise = (IPendingPromise<string>)threadInput; try { // // Do work in the thread. // // // Resolve the promise when the thread is complete. // promise.Resolve("Hi from inside the thread!"); } catch (Exception ex) { promise.Reject(ex); } }
Now you can start your thread and wait for the promise to resolve or reject:
var aGameObject = ... var threadStarter = aGameObject.GetComponent<ExampleThreadStarter>(); threadStarter.StartWorkerThread() .Then(result => { Debug.Log("Thread completed, result is: " + result); }) .Catch(ex => { Debug.Log("An error occurred in the thread!"); })
There's nothing more to it than that!
Example Unity project
Working example code and support is available for my Patrons. The example project demonstrates use of
Thread,
ThreadPool,
Promise and
Dispatcher.
Conclusion
My advice: Only use threads when you really need to. Then restrict the number of threads to something that you can manage.
Working with a large number of threads is difficult and fraught with problems. Your product will be much more likely to exhibit seemingly random and unreproduceable bugs.
Don't avoid coroutines, they are a necessary evil when working with Unity. Do wrap your coroutines in promises, they will make your life a bit easier.
Resources
C-Sharp-Promise library: | https://codecapers.com.au/threads-promises-and-unity/ | CC-MAIN-2021-49 | refinedweb | 1,890 | 62.68 |
React Hooks
While I was learning React while in The Flatiron School’s Software Engineering bootcamp I learned about creating components by using class components only. We learned about some lifecycle methods in general and state was to be managed with a constructor. Once we were encouraged to check out functional components and useState to manage our state I enjoyed the look and feel of this style of React.
Using functional components and useState amongst the other react Hooks make the code look much cleaner and easier to understand. I have been using useState and useEffect in my recent projects and I have enjoyed using them. I want to dig deeper into the full lifecycle and become familiar all of the Hooks to become a more experienced React developer. In this blog I will be covering The 3 most common hooks in detail and a brief introduction to the other 7.
What is a Hook?
To learn more about Hooks we should start with the definition. The official react documentation says, “Hooks are functions that let you “hook into” React state and lifecycle features from function components. Hooks don’t work inside classes — they let you use React without classes.” Hooks are functions that are prepended buy the “use”.
Rules of Hooks
- Only call Hooks at the top level
- Only use Hooks in functional components
- You cannot call Hooks from regular JavaScript functions they must be React functions.
- Node version 6 or above
- NPM version 5.2 or above
- Import all appropriate hooks you plan to use at the top of a component.
import { useState } from 'react';
What are the Hooks available?
There are currently 10 hooks available as well as custom hooks.
- useState
- useEffect
- useContext
- useReducer
- useCallback
- useMemo
- useRef
- useImperativeHandle
- useLayoutEffect
- useDebugValue
useState()
The most important and often used hook. The purpose is to handle reactive data in the form of state. When a change is made to state you want it to update throughout the component. UseState takes 1 optional argument which is the default state. UseState returns an array of 2 values. The first value is the reactive value of the state and the second is the setter in which you call to change the state when necessary. These are local variables that can be named anything but the setter must be prepended by “set”.
[name, setName] = useState("programmer")
//name in state will default to programmerconst eventToChangeName = () => {
setName("Adam")
}
//name will be Adam after function is called
useEffect()
useEffect is one of the more confusing Hooks. To understand useEffect you must understand the React component lifecycle, which I will likely do a blog on soon. A simple refresher of lifcycle methods are componentDidMount(), componentDidUpdate(), componentWillUnmount(). UseEffect allows us to handle these lifecycle methods in one function. UseEffect takes it’s first argumet as a function you define. This will run once when mounted then every time state changes. An issue I have run into with this in my own projects has been doing a fetch request inside useEffect to set state asynchronously. The fetch will run then set state. After that is completed state will be updated and the componentDidUpdate() role of useEffect will run again, creating an infinite loop. To avoid this useEffect takes a second argument which is an array of dependencies. If you pass in an empty array it will run once. If you add state in tyhe array of dependencies it will run every time thaty state is updated.
useEffect(() => {
eventToChangeName()
}, [])
useContext()
UseContext allows us to work with React’s context API which shared data throughout the entire component tree without passing props. You can create a context and call it from useContext to use in a component on a different level than the one you are on.
cont pets = {
bobo: 'Cat' ,
lucy: 'Dog'
}const PetContext = createContext(pets);function App(props) {
return(
<PetContext.Provider value={pets.bobo}> <FindTheAnimal /> </PetContext.Provider>
)
}function FindTheAnimal = () => {
const animal = useContext(PetContext)
return (
<p> animal </p>
)
}
useRef()
UseRef allows you to create a mutable object that will keep the same reference between renders. This is used when you want to store a value like useState but do not want to trigger a re-render of the page. A common use for useRef is to grab HTML elements from the DOM.
function App() { const myButton = useRef(null) const click = () => myButton.current.click() return(
<button ref={myButton}> </button>
)
}
useReducer()
useReducer is a state management hook that manages state in a different way. This Hook is used in Redux to dispatch actions to the store. The useReducer function takes in an argument of a reducer you are using and returns an array of the state and the dispatch. It can also take a second argument of a default value of the state. Using the Redux pattern and useReducer is helpful in a large app whith many components to manage state.
useMemo()
UseMemo is used to optimize computation and improve performance. You can use this when you know there is something hurting performance. Like useEffect, you can set a dependency to determine when these computations take place.
const [counter, setCounter] = useState(2)const expensiveCount = useMemo(() => {
return count ** 2
}, count)
This function only will take place when count changes to avoid happening every re-render. This will memoize a return value. If you want to memoize an entire function you would use the next Hook.
useCallback()
When you define a function in a component a new function object is created on render. This is slow when passing the same function down to multiple children components. Wrapping the function in useCallback will increase performance rendering the same thing multiple times.
useImperativeHandle()
If you build a reusable React library un react you may need to a native DOM element. This hook comes in if you want to change the behavior of that ref.
useLayoutEffect()
This is just like useEffect but with one difference. It will run but before updates have been rendered. React will wait for the code to run before it updates you the user.
useDebugValue()
Use useDebugValue inside custom hooks to see the name of the Hooks inside dev tools along with the Hooks involved to create the custom Hook. | https://adamadolfo8.medium.com/react-hooks-b259ee985b5d | CC-MAIN-2021-43 | refinedweb | 1,026 | 55.54 |
0
#ifndef HYS_MAIN_HEADER_H #define HYS_MAIN_HEADER_H #include <windows.h> #include <windowsx.h> #ifndef HYS_GLOBAL_VARIABLES #define HYS_GLOBAL_VARIABLES namespace hys { const short Null = 0; const bool True = 1; const bool False = 0; }; #endif // HYS_GLOBAL_VARIABLES #endif // HYS_MAIN_HEADER_H
I want to avoid using macros as much as possible (C++ Coding Standards, by Herb Sutter & Andrei Alexandrescu), so I've decided to try to use my own constant global variables instead of NULL, TRUE, FALSE etc. and see what happens (I'm just trying different ways to do things, it's just an experiment!).
This #ifndef is supposed to check if this namespace was defined before in other files before. It is placed in my main header file.
It does work but I was wondering if this is violating any standards/rules of C++? Can this be a good use of #ifndef, #endif? | https://www.daniweb.com/programming/software-development/threads/329491/use-ifndef-to-check-global-variables | CC-MAIN-2017-39 | refinedweb | 137 | 54.83 |
(writing the first program)
If you are tired of talking and want some action – let’s not waste anymore time. In this part you will learn how to set up a project using MPLab, write a simple program and run it on your microcontorller.
Before we start I must mention that there are 2 IDEs that Microchip has made. MPLab and MPLab X. I tried MPLab X few times and even when it wasn’t beta anymore there were still things that doesn’t work. On the other hand the old MPLab has a crappy editor. Because you can use external editor I would recommend using MPLab and just to cover it all I’ll write about MPLab X at the end of this tutorial.
Downloading and Installing MPLab
If you bought PIC Kit 2 you might already have the MPLab installation CD, but as it’s written on the CD it’s recommended to download the latest version:
I found this permanent link that redirects to big-bad-url, but if it doesn’t work go to Microchip website and search for “MPLab 8” or Google it.
Scroll to the bottom of the page and download the file with title MPLAB IDE v<N.NN>, where the <N.NN> is the version (for example MPLAB IDE v8.92).
When you install make sure that you select Hi-Tech C and Microchip C18. You can leave the rest by default.
Starting a project
I prefer to start a project with the project wizard because it’s less likely to forget something. From menu “Project” select “Project Wizzard…”.
The first step (well technically the second) is choosing a device. That’s not permanent, so if you want you can change it later. I’ll use PIC16F648A as an example:
You can use any of PIC16F628A or PIC16F627A – they are fully compatible. Actually for the first example there will be no much difference what MCU you use as long as it’s Microchip’s PIC16 series.
Next thing is to choose toolsuite (or in other words compilers, linkers … and stuff). Choose Hi-Tech C and I promise we’ll have a look at assembly language later. One step at a time:
If you see red cross in front of the Hi-Tech compiller … then you forgot to install it. If you do, close MPLab, run the setup again and make sure you check the “Hi-Tech C Compiller” option.
On the third step you have to select where your proiect file will be located at. Choose a separate directory for each project – like “C:\MCU\Projects\Program1\project-name” and not
“C:\MCU\Projects\project-name”:
Yes, we are just going to blink a LED. The first thing you must learn in microcontrollers is to always take small steps because there are so many things that can go wrong and you can loose days to track multiple prblems.
The next step is to add some files to the project. Of course when you start a new project you have no files, so – skip this step.
That’s it. Click on finish and you have a project.
Now choose File->New (or click on the “New File” icon). Choose “File->Save as …”. Pay close attention on what path you are in when saving a file. MPLab does not change directory to project directory as you might think. If you do not save all files in the project directory you will be sorry later. Save the file as “blink.c” (do I have to tell it’s without the quotes?!?). Then add the file to the project:
Ok. Now you can start with dummy program which does nothing:
void main(void){ while(1){ } }
Now press F10 to compile. It’s implortant to compile as often as possible and resolve the problems before they become too many. A common problem on first compile is this error:
Error [939] ; . no file arguments
It means that you forgot to add the file to the project.
If it compiles you can continue with making the program actually do something:
#include "htc.h" __CONFIG(FOSC_INTOSCIO & MCLRE_OFF & BOREN_OFF & PWRTE_ON & WDTE_OFF & LVP_OFF); void init(void){ PCONbits.OSCF = 0; // set internal oscillator to 48kHz // by default all ports are set to inputs and analog functions are enabled CMCON = 0x07; TRISA0 = 0; // set RA0 (pin 17) as output RA0 = 0; // we start with the LED off } void main(void){ unsigned int i; while(1){ // repeat forever RA0 = 1; // LED on for(i=0;i<255;i++){ // wait a bit // do nothing } RA0 = 0; // LED off for(i=0;i<255;i++){ // wait a bit more } } //while } Press again F10 to compile. | https://www.microlab.info/beginners/138-first-step-with-microcontrollers-part-2.html | CC-MAIN-2021-25 | refinedweb | 778 | 80.21 |
an action result
}
You can easily post a comic book to that action method using JSON.
Under the hood, ASP.NET MVC uses the DefaultModelBinder in combination with the JsonValueProviderFactory to bind that value.
DefaultModelBinder
JsonValueProviderFactory
A question on an internal mailing list recently asked the question (and I’m paraphrasing here), “Why not cut out the middle man (the value provider) and simply deserialize the incoming JSON request directly to the model (ComicBook in this example)?”
ComicBook
Great question! Let me provide a bit of background to set the stage for the answer.
There are a couple of different content types you can use when posting data to an action method.
You may not realize it, but when you submit a typical HTML form, the content type of that submission is application/x-www-form-url-encoded.
application/x-www-form-url-encoded
As you can see in the screenshot below from Fiddler, the contents of the form is posted as a set of name value pairs separated by ampersand characters. The name and value within each pair are separated by an equals sign.
By the time you typically interact with this data (outside of model binding), it’s in the form of a dictionary like interface via the Request.Form name value collection.
Request.Form
The following screenshot shows what such a request looks like using Fiddler.
When content is posted in this format, the DefaultModelBinder calls into the FormValueProvider asking for a value for each property of the model. The FormValueProvider is a very thin abstraction over the Request.Form collection.
FormValueProvider
Another content type you can use to post data is application/json. As you might guess, this is simply JSON encoded data.
application/json
Here’s an example of a bit of JavaScript I used to post the same content as before but using JSON. Note that this particular snippet requires jQuery and a browser that natively supports the JSON.stringify method.
JSON.stringify
<script type="text/javascript">
$(function() {
var comicBook = { Title: "Groo", IssueNumber: 101 }
var comicBookJSON = JSON.stringify(comicBook);
$.ajax({
url: '/home/update',
type: 'POST',
dataType: 'json',
data: comicBookJSON,
contentType: 'application/json; charset=utf-8',
});
});
</script>
When this code executes, the following request is created.
Notice that the content is encoded as JSON rather than form url encoded.
JSON is a serialization format so it’s in theory possible that we could straight deserialize that post to a ComicBook instance. Why don’t we do that? Wouldn’t it be more efficient?
To understand why, let’s suppose we did use serialization and walk through a common scenario. Suppose someone submits the form and they enter a string instead of a number for the field IssueNumber. You’d probably expect to see the following.
IssueNumber
Notice that the model binding was able to determine that the Title was submitted correctly, but that the IssueNumber was not.
If our model binder deserialized JSON into a ComicBook it would not be able to make that determination because serialization is an all or nothing affair. When serialization fails, all you know is that the format didn’t match the type. You don’t have access to the granular details we need to provide property level validation. So all you’d be able to show your users is an error message stating something went wrong, good luck figuring out what.
Instead, what we really want is a way bind each property of the model one at a time so we can determine which of the fields are valid and which ones are in error. Fortunately, the DefaultModelBinder already knows how to do that when working with the dictionary-like IValueProvider interface.
IValueProvider
So all we need to do is figure out how to expose the posted JSON encoded content via the IValueProvider interface. As I wrote before, Jonathan Carter had the bit of insight that provided the solution to this problem. He realized that you could have the JSON value provider deserialize the incoming JSON post to a dictionary. Once you have a dictionary, it’s pretty easy to implement IValueProvider and the DefaultModelBinder already knows how to bind those values to a type while providing property level validation. Score!
The answer I provided only tells part of the story of why this is implemented as a value provider. There’s another aspect that was illustrated by my co-worker Levi. Sadly, for someone so gifted intellectually, he has no blog, so I’ll paraphrase his words here (with a bit of verbatim copying).
As I mentioned earlier, value providers provide an abstraction over where values actually come from. Value providers are responsible for aggregating the values that are part of the current request, e.g. from Form collection, the query string, JSON, etc. They basically say “I don’t know what a ‘FirstName’ is for or what you can do with it, but if you ask me for a ‘FirstName’ I can give you what I have.”
Model binders are responsible for querying the value providers and building up objects based on those results. They basically say “I don’t know where directly to find a ‘FirstName’, ‘LastName’, or ‘Age’, but if the value provider is willing to give them to me then I can create a Person object from them.”
Since model binders aren’t locked to individual sources (with some necessary exceptions, e.g. HttpPostedFile), they can build objects from an aggregate of sources. If your Person type looks like this:
HttpPostedFile
Person
public class Person {
int Id { get; set; }
[NonNegative]
int Age { get; set; }
string FirstName { get; set; }
string LastName { get; set; }
}
And a client makes a JSON POST request to an action method (say with the url /person/edit/1234 with the following content:
{
"Age": 30,
"FirstName": "John",
"LastName": "Doe"
}
The DefaultModelBinder will pull the Id value from the RouteData and the Age, FirstName, and LastName values from the JSON when building up the Person object. Afterwards, it’ll perform validation without having to know that the various values came from different sources.
Id
RouteData
Age
Even better, if you wrote a custom Person model binder and made it agnostic as to the current IValueProvider, you’d get the correct behavior on incoming JSON requests without having to change your model binder code one tiny iota. Neither of these is possible if the model binder is hard-coded to a single provider.
The goal of this post was to provide a bit of detail around an interesting aspect of how ASP.NET MVC turns strings sent to a web server into strongly typed objects passed into your action methods.
Going back to the original question, the answer is simply, we use a value provider for JSON to enable property level validation of the incoming post and also so that model binding can build up an object by aggregating multiple sources of data without having to know anything about those sources.... | http://haacked.com/archive/2011/06/30/whatrsquos-the-difference-between-a-value-provider-and-model-binder.aspx | CC-MAIN-2013-20 | refinedweb | 1,157 | 51.78 |
A process is nothing but a running instance of a program. It is also defined as a program in action.
The concept of a process is fundamental concept of a Linux system. Processes can spawn other processes, kill other processes, communicate with other processes and much more.
In this tutorial, we will discuss life cycle of a process and touch base on various aspects that a process goes through in its life cycle.
1. Code Vs Program Vs Process
Lets first understand difference between code, program and process.
Code: Following is an example of code :
#include <stdio.h> #include <unistd.h> int main(void) { printf("\n Hello World\n"); sleep(10); return 0; }
Lets save the above piece of code in a file named helloWorld.c. So this file becomes code.
Program: Now, when the code is compiled, it produces an executable file. Here is how the above code is compiled :
$ gcc -Wall helloWorld.c -o helloWorld
This would produce an executable named helloWorld. This executable is known as a program.
Process: Now, lets run this executable :
$ ./helloWorld Hello World
Once run, a process corresponding to this executable(or program) is created. This process will execute all the machine code that was there in the program. This is the reason why a process is known as running instance of a program.
To check the details of the newly created process, run the ps command in following way :
$ ps -aef | grep hello* 1000 6163 3017 0 18:15 pts/0 00:00:00 ./helloWorld
To understand the output of ps command, read our article on 7 ps command examples.
2. Parent and Child Process
Every process has a parent process and it may or may not have child processes. Lets take this one by one. Consider the output of ps command on my Ubuntu machine :
1000 3008 1 0 12:50 ? 00:00:23 gnome-terminal 1000 3016 3008 0 12:50 ? 00:00:00 gnome-pty-helper 1000 3017 3008 0 12:50 pts/0 00:00:00 bash 1000 3079 3008 0 12:58 pts/1 00:00:00 bash 1000 3321 1 0 14:29 ? 00:00:12 gedit root 5143 2 0 17:20 ? 00:00:04 [kworker/1:1] root 5600 2 0 17:39 ? 00:00:00 [migration/1] root 5642 2 0 17:39 ? 00:00:00 [kworker/u:69] root 5643 2 0 17:39 ? 00:00:00 [kworker/u:70] root 5677 2 0 17:39 ? 00:00:00 [kworker/0:2] root 5680 2 0 17:39 ? 00:00:00 [hci0] root 5956 916 0 17:39 ? 00:00:00 /sbin/dhclient -d -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /run/sendsigs. root 6181 2 0 18:35 ? 00:00:00 [kworker/1:0] root 6190 2 0 18:40 ? 00:00:00 [kworker/1:2] 1000 6191 3079 0 18:43 pts/1 00:00:00 ps -aef
Integers in second and third column of the above output represent process ID and parent process ID. Observe the figures highlighted in bold. When I executed the command ‘ps -aef’, a process was created, its process ID is 6191. Now, look at its parents process ID, it is 3079. If you look towards the beginning of the output you will see that ID 3079 is the process ID of bash process. This confirms that bash shell is the parent for any command that you run through it.
Similarly, even for processes that are not created through shell, there is some parent process. Just run ‘ps -aef’ command on your Linux machine and observe the PPID (parent process ID)column. You will not see any empty entry in it. This confirms that every process has a parent process.
Now, lets come to child processes. Whenever a process creates another process, the former is called parent while latter is called child process. Technically, a child process is created by calling fork() function from within the code. Usually when you run a command from shell, the fork() is followed by exec() series of functions.
We discussed that every process has a parent process, this can bring a question that what will happen to a child process whose parent process is killed? Well, this is a good question but lets come back to it sometime later.
3. The init Process
When Linux system is booted, First thing that gets loaded into memory is vmlinuz. It is the compressed Linux kernel executable. This results in the creation of init process. This is the first process that gets created. Init process has PID of one, and is the super parent of all the processes in a Linux session. If you consider Linux process structure as a tree then init is the starting node of that tree.
To confirm that init is the first process, you can run the pstree command on your Linux box. This command displays the tree of processes for a Linux session.
Here is a sample output :
init-+-NetworkManager-+-dhclient | |-dnsmasq | `-3*[{NetworkManager}] |-accounts-daemon---2*[{accounts-daemon}] |-acpid |-at-spi-bus-laun-+-dbus-daemon | `-3*[{at-spi-bus-laun}] |-at-spi2-registr---{at-spi2-registr} |-avahi-daemon---avahi-daemon |-bamfdaemon---3*[{bamfdaemon}] |-bluetoothd |-colord---{colord} |-console-kit-dae---64*[{console-kit-dae}] |-cron |-cups-browsed |-cupsd |-2*[dbus-daemon] |-dbus-launch |-dconf-service---2*[{dconf-service}] |-evince---3*[{evince}] |-evinced---{evinced} |-evolution-sourc---2*[{evolution-sourc}] |-firefox-+-plugin-containe---16*[{plugin-containe}] | `-36*[{firefox}] |-gconfd-2 |-gedit---3*[{gedit}] |-6*[getty] |-gnome-keyring-d---7*[{gnome-keyring-d}] |-gnome-terminal-+-bash | |-bash-+-less | | `-pstree | |-gnome-pty-helpe | `-3*[{gnome-terminal}] |-gvfs-afc-volume---2*[{gvfs-afc-volume}] |-gvfs-gphoto2-vo---{gvfs-gphoto2-vo} |-gvfs-mtp-volume---{gvfs-mtp-volume} |-gvfs-udisks2-vo---{gvfs-udisks2-vo} |-gvfsd---{gvfsd} |-gvfsd-burn---2*[{gvfsd-burn}] |-gvfsd-fuse---4*[{gvfsd-fuse}] ... ... ...
The output confirms that init is at the top of process tree. Also, if you observe the text in bold, you will see the complete parent child relation of pstree process. Read more about pstree in our article on tree and pstree.
Now, lets come back to the question (we left open in the last section) about the consequences when parent process gets killed while child is still alive. Well in this case, the child obviously becomes orphan but is adopted by the init process. So, init process becomes the new parent of those child processes whose parents are terminated.
4. Process Life Cycle
In this section, we will discuss the life cycle of a normal Linux process covers before it is killed and removed from kernel process table.
- As already discussed, a new process is created through fork() and if a new executable is to be run then exec() family of functions is called after fork(). As soon as this new process is created, it gets queued into the queue of processes that are ready to run.
- If only fork() was called then it is highly likely that new process runs in user mode but if exec() is called then the new process will run in kernel mode until a fresh process address space is created for it.
- While the process is running, a higher priority process can pre-empt it through an interrupt. In this case, the pre-empted process again goes into queue of processes that are ready to run. This process is picked up by the scheduler at some later stage.
- A process can enter into kernel mode while running. This is possible when it requires access some resource like text file which is kept on hard disk. As operations involving access to hardware may take time, it is highly likely that process will go to sleep and will wake up only when the requested data is available. When the process is awakened, it does not mean that it will start executing immediately, it will again queue up and will be picked for execution by scheduler at appropriate time.
- A process can be killed through many ways. It can call exit() function to exit or can process Linux signals to exit. Also, some signals cannot be caught and cause the process to terminate immediately.
- There are different types of Linux process. Once the process is killed, it does not get completely eliminated. An entry containing some information related to it is kept in the Kernel process address table till the parent process explicitly calls wait() or waitpid() functions to get the exit status of child process. Until parent process does this, the terminated process is known as zombie process.
{ 14 comments… add one }
thanks for your informative posts! I’ve just one thing to add to this one: The PID of init is 1 instead of 0, at least on my servers
Just 2 corrections:
– Init has PID 1, not 0.
– When a process die, its children also dies, unless you run it with nohup, or use disown on a running process to keep it running after is parent process died.
Very informative and easily digestible article. Thanks a lot.
@Oli, @Enrique,
Thanks for pointing out about init’s PID. It is corrected now.
Hi,
Thanks a lot, useful article…
– When a process die, its children also dies, unless you run it with nohup, or use disown on a running process to keep it running after is parent process died.
After nohup/disown the init adopts the ophans as stated above?
Why is that -Wall there??
How about info on zombie/runaway process ?
clearly written. I will try to find more in my laptop.
Note that the majority of Linux distributions are switching (or have switched) from init to systemd (I think it’s only Debian/Ubuntu and Gentoo still using init now). So if you do pstree on most Linux systems and you see systemd instead of init as described here, that is why.
Systemd’s role so far as this article is concerned is the same as init, but its workings are different and it also only starts daemons as needed (similar to xinetd) and offers automatic recovery when subprocess daemons die.
@Thomas Sebastian
“Why is that -Wall there??”
I’m not sure if you are asking this because you don’t know this command or you don’t understand why he uses it for this simple program.
If it is the first: -Wall is an optional command for gcc and shows all warnings the compiler throws. It’s a comand used frequently used for debugging.
If it is the second: For this simple program, -Wall is not needed, because there won’t be any warnings to be displayed, but it doesn’t hurt to use it.
Just precious.
@Mike, just i checked in RHEL 6.4 its showing init.
I would like to see some things about multythreading, in c++11, and it would be nice to tell people abut the differences in Windows, Apple and Linux stuff. | http://www.thegeekstuff.com/2013/07/linux-process-life-cycle/ | CC-MAIN-2017-04 | refinedweb | 1,820 | 73.17 |
."
Maybe... (Score:5, Funny)
Re:Maybe... (Score:5, Funny)
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
Re:Maybe... (Score:5, Funny)
Re: (Score:2)
Re:Maybe... (Score:5, Funny)
T,FTFY. HAND.
Re:Maybe... (Score:5, Funny)
That joke was old when I was in school (Score:4, Insightful)
Re: (Score:3)
Re: (Score:3, Funny)
Re: (Score:3, Funny)
Guess it's not all it's cracked up to be.
As expected. (Score:5, Funny)
no, wait....
Re: (Score:2)
Re:As expected. (Score:5, Funny)
(asdf-just-because-the-code-is-all-in-one-namespace-p
asdf-does-not-make-it-ineffective))
Re:As expected. (Score:5, Funny)
I heard this was so he could have more time to work on HURD
Well, he could always port HURD so it runs on Emacs
...
Re:As expected. (Score:5, Funny)
Re:As expected. (Score:5, Funny)
No wait...
Are maintainers even necessary? (Score:5, Funny)
Re:Are maintainers even necessary? (Score:5, Funny)
Re: (Score:3, Funny)
Re:Are maintainers even necessary? (Score:5, Funny)
May His Next Adventure Be Twice as Fruitfull (Score:4, Interesting)
I remember being told this before rushing home to d/l and install it.
It gave me a hunger for linux too and though I never mastered its complexities for most things I do,It is amazing and I hope it stays maintained.
RMS is amazing,I wish him well in any venture he chooses.
Re:May His Next Adventure Be Twice as Fruitfull (Score:5, Funny)
For example, to make picture-mode work for photographs, you'd need a canvas about the size of an aircraft carrier flight deck to express the pixels as text, more RAM than Dodge's truck division to hold the image, and a great deal of patience to scroll it on a typical LCD.
Really, it's OK to pick the proper tool for the job.
Re: (Score:3, Funny)
Re: (Score:3, Informative)
Or go with the XEmacs fork, which supports real bitmaps (and has been maintained without RMS for years).
Butterfly effect? (Score:5, Funny)
Wow (Score:5, Funny)
Re: (Score:2, Funny)
Re:Wow (Score:5, Funny)
I kid, I kid... all the best wishes for RMS and Emacs both.
Re: (Score:3, Informative)
Re: (Score:3, Funny)
Re: (Score:3, Interesting)
Indeed he can. In 2003 I attended a free software press conference in Soissons, France at which he spoke at length in French about free software, GNU, and a few other topics. While socially awkward, he is quite capable in front of groups and spoke well and without the need for assistance.
Unfortunately for me, in 2003 I knew exactly zero French so I mostly stared blankly for an hour and a half. Oh well...
:)
Re: (Score:3, Insightful)
Then how do you know he was speaking French?
Re:Wow (Score:5, Funny)
Of course, being Slashdot, comparing Bill Gates to him is Official Policy....
Re: (Score:2)
Re: (Score:2)
Those that want to look past Castro's brutality and focus on "stickin' it to the man" are in very poor taste.
That kind of sloppy thinking abets the Castros of the world in their efforts to suppress political freedom and practice brutality.
The media are a bunch of sycophants for doing this, and it's a shame that the more technically sharp on
Emacs bloat (Score:5, Insightful)
Bring back 19.34b!
Re:Emacs bloat (Score:4, Funny)
Please explain.
Re: (Score:2)
No, the GP was referring to the emacs shrink (Score:3, Informative)
Me: Every release since 19.34b has suffered badly from bloat and other crud.
Emacs: What would happen if 19 34b has suffered badly from bloat and other crud?
Me: MULE was probably the worst offender.
Emacs: Why do you say mule was probably the worst offender?
Me: Such a fat jackass, that MULE.
Emacs: Can you elaborate on that?
Re: (Score:3, Insightful)
Re:Emacs bloat (Score:4, Funny)
syntax highlighting
I love you (Score:5, Funny)
Actually, I see the problem as the exact opposite. It used to be that people would ask themselves "I got this huge powerful 20 MHz computer with 4 megabytes of RAM, how will I ever I ever use all that power", and the nerd overhearing it would answer "use Emacs", and despite advances in computers, Emacs could keep track and was always the program that could fully utilize your hardware.
However, somewhere along the way we lost out to the competition. I see kids in the Emacs fora who, with a straight face, say they prefer Emacs because it is such as lean and mean editing machine. It is so sad. People nowadays go to Microsoft, KDE or Gnome for software to fully utilize their machines. In the olden days, Emacs would have offered a superset of all of these environments!
I think it is good RMS is stepping back. We need young people to revitalize Emacs, and once again make it a leader in resource consumption. We need to get back to our roots. We need EGACS: Eight Gigabytes And Constantly Swapping.
Re:I love you (Score:5, Funny)
Wait... how do you pronounce Eclipse?
I pronounce it (Score:3, Funny)
Downhill since 18, mostly because of windowing (Score:4, Interesting)
Re: (Score:3, Informative)
hmm (Score:5, Funny)
Re:hmm (Score:4, Insightful)
Re:hmm (Score:5, Funny)
Re: (Score:3, Funny)
The article is EXTREMELY misleading (Score:5, Funny)
Re: (Score:3, Funny)
Damnit RMS .... (Score:5, Funny)
Re:Damnit RMS .... (Score:5, Funny)
(Had to say it)
Re: (Score:2)
Petty, but it had to be said.
Goodbye (Score:5, Funny)
More time to work on HURD? (Score:4, Funny)
Re:More time to work on HURD? (Score:5, Informative) [debian.org]
Question on "use" (Score:2)
Re: (Score:2)
The reason he is leaving.. (Score:5, Funny)
I'm honestly surprised he's been maintaining it (Score:2)
Favorite from reddit: (Score:5, Funny)
Good news for MS coders! (Score:5, Funny)
Real reason? (Score:5, Funny)
More a story on Emacs than on RMS (Score:3, Interesting)
"Gerd Moellmann was the Emacs maintainer from the beginning of Emacs 21 development until the release of 21.1."
Yet RMS has had a decades-long involvement with Emacs. It seems he will continue to be involved, so what's the big deal? More generally, GNU has always been about freedom first, development second.
editing LaTeX under Emacs (Score:5, Informative)
Re:Stallman is still around? (Score:5, Insightful)
Yeah, I lose track of his ideas after a point (ethics), but I'm a firm believer in "credit where due".
Certainly more deserving of something like a Nobel Peace Prize than some of the nitwits that have besmirched the concept in recent history.
Anyone know how to nominate someone for [medaloffreedom.com]
Re: (Score:3, Interesting)
Obviously you have never met RMS.
I can't decide whether to put a ":-)" on that or not. I'll just leave it ambiguous. He's yelled at me. I won the argument by leaving.
Re:Stallman is still around? (Score:5, Insightful)
Hence the fact that I taper off from agreement when the discussion gets abstract: his philosophical basis leaves me unmoved.
However, when you consider the impact of the GPL, GCC, and the FSF world-wide, and into the future, the Nobel Peace Prize makes sense, even if the fellow himself has some cantankerous moments.
In any case, I submit that the man's overall historical impact may rank with Gutenberg, and for the same reason: taking information out of the hands of the elite and offering a level playing field. Gutenberg did it for literacy, Stallman for programming.
Re:Stallman is still around? (Score:4, Interesting)
By the early '90s, people were routinely giving source code to their customers, rather than trusting "code escrow" services.
I wasn't only giving source - I was also giving a (legit original paid-for) CD with the compiler and tools.
I figured it was just good marketing - giving them the source was an additional incentive to deal with me instead of a competitor, and when it came time for mods, after they screwed it up, I'd get the business of making it right
:-)
At that point I had not yet heard of RMS or the term "open source" - it just made good sense to help differentiate oneself in a competitive market.
"We have 3 bids, all about the same price, but one of them is also giving us the source code." - gee, which one would YOU deal with?
I vote for the RMS peace prize (Score:4, Insightful)
By the early '60s, people were routinely giving source code to their customers.
Mr. Stallman explains in his historical writings and speeches how he first saw free software ethics in action in the early behavior of both academic and commercial software developers. When vendors moved, in a very large way, away from free source, he recognized the danger, and opposed the trend with his proselytizing for free software. The whole context in which you worked in the early 90's was shaped by that.
You don't mention what sort of software you provide to your customers. Unless it includes an operating system kernel, then they depend either on binary-only code from MS or Apple, or on free code that depends one way or another on Mr. Stallman's free software movement (yes, even if it's not licensed under GPL).
I started studying computing in 1969, and devoted my career to it. I contributed to the world as much as I could figure out and accomplish. Mr. Stallman's contributions are so many orders of magnitude greater than mine, I am filled with awe. All of my software development, research, or teaching today depends on things that he supported in various ways. I have no interest in carping about his personal affect, nor the things that he didn't do in addition to all that he did, nor the things that could conceivably have been done better if someone else who didn't do them had done them. Nor in the supposition that those ignorant of his work were therefore not aided by it.
Re: (Score:3, Insightful)
This sentence, as far as Gutenberg is concerned, makes no sense whatsoever. Medieval nobles were illiterate, they didn't consider it worth their time to learn how to read. The thing is, if you were able to read, you would go after a literacy-requiring work,
Re:Stallman is still around? (Score:4, Insightful)
You can certainly attack the comparison on technical grounds.
It's like a car, see...
Re: (Score:3, Insightful)
In a similar fashion, programming is a skill set possessed by relatively few people, but I don't think scarcity of available code or a lack of oppor
Re: (Score:3, Insightful)
You can learn to write programs from books that teach the material, but to learn to write good programs requires seeing other good programs. It takes a very long time to go from your built-in BASIC interpreter and a manual to writing actually useful, well-designed programs, but having access to the source for other programs can accelerate that process.
Microsoft's compiler is very good, and if you're learning to write Hello, World! then there's no real difference between using it and using gcc. But if you
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Re:Stallman is still around? (Score:4, Interesting)
This is an oversimplification of what happened then.
If the Church (not only the Pope, but a lot of people; just the Pope disagreeing meant nothing if the others agreed) saw a problem in what you wrote, they would send someone knowledgeable on the subject to talk to you ("inquisitor" means "asker"), requesting you both to talk on the subject. This talk could proceed for as long as it was needed for one to convince the other, or for both to agree that an agreements was unreachable. Depending on what of these things happened, this was the procedure:
a) In case you were convinced by the inquisitor, nothing happened, of course. You both went back to your lives.
b) In case the inquisitor was convinced by you, what historically happened many times, he would take the subject back to the Vatican where it would enter the list of themes to be debated in the next council. Afterwards, once the council happened, one of two things could happen after some months of debate: the Church as a whole might conclude you were in fact correct, and change accordingly (what also happened historically many times), or it could conclude you and the inquisitor were wrong. What, however, didn't exclude the possibility of the theme being the subject of other councils, and the Church position change again, what also happened many times.
As for you yourself, the practical consequences while your position wasn't agreed upon by the Church were similar to the next case:
c) In case you both agreed that you couldn't reach an agreement on the subject, a document was presented to you wish you was expected to sign. This document basically said that you were aware that your arguments weren't strong enough to convince other sages as much knowledgeable on the subject as you; thus, that the Church's position on the matter could very well be the correct, that you're just unable to fully appreciate it; and thus, that since it's not a certainty, it isn't worth disclosing to less knowledgeable people as a proven fact, so to avoid social distress. You signed it, and while nothing happened to you, you could still bring the subject to discussion and investigation on Universities.
d) The last alternative was you refusing to sign the document, and then walking around preaching your ideas as if they were pure facts, trying to convince the simple people as a compensation for the fact you didn't manage to convince those at your own knowledge level, i.e., by becoming a cult leader and, as more and more non-scholars were convinced by you, a source of social unrest. This would set you as an heretical and put an excommunication decree over your head, with the consequences we know.
So, it's extremely naive, historically, to think the Church went directly to 'd'. It rarely happened, and most of the time the Church was a very reasonable entity for the time (for example, by threatening with excommunication those civil official who used more than one torture session on a suspect, as the custom was a lot of torture sessions; and by dismissing as unfounded and freeing the accused in 99 of each 100 witchcraft trials). They assumed that the unrestricted diffusion as fact of unproven and unsustainable hypotheses and theories would result in utter chaos, and history has shown they were correct in this regards as far as the immediately following centuries is concerned, as the many religious wars of the subsequent Modern Age have shown.
In fact, it took a lot of blood for societies to develop the profound concept of "Just don't care what your neighbor think, damn it!". Now we know this is possible, but at the time no one dreamed of such a possibility, and contrasting their stance of "perfect the proof, reach unanimity on it, and only then diffuse it" with the current understanding that "complete freedom of
Re:Stallman is still around? (Score:4, Insightful)
A lot of us use Emacs extensively for code writing. It's a helpful tool.
Re: (Score:2)
Less cheekily, I'd say he's after building a community that has a homogeneous view. Kinda like the Amish, with source code instead of plows.
The point about tapering off that I'm making is this: it's one thing to state your views in a positive way, and quite another to anathematize others who disagree.
Stallman's desire for community is simply one among many
Re: (Score:2, Informative)
Re:Stallman is still around? (Score:5, Funny)
Emacs vs Vi
GPL vs BSDL
GNU/Linux vs Linux
Free vs Open Source
etc etc...
Not that I'm trying to discredit his contributions to Free/Opensource Software, but a "peace" award might be a bit off the mark
Re: (Score:3, Insightful)
He offers precise feedback on where he disagrees with others.
He does get shrill and baffling when he ventures into the abstract, and calls others "unethical".
For me to follow his train of thought there, he would have to publish a complete philosophical model.
But so what? His flamewars have contributed far less carbon to the atmosphere than those of other Nobel laureates.
Re: (Score:3, Insightful)
Re: (Score:2)
That's only because we're closer to the sun than to all the other stars. In other words, its a matter of perspective. Take a step back, and you'll KNOW that vi outshines emacs
;-0
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:3, Funny)
Re:Stallman is still around? (Score:5, Funny)
Re:Stallman is still around? (Score:5, Insightful)
I'm not sure what you think you're proving. I mean...
Re: (Score:2)
Ok, I admit that was funny.
Silly argument in any case. Kate is a much better editor than either vim (stateful editor? No thanks!) or emacs (ever tried to get line numbers on that thing?)
Runs and hides
Re:Stallman is still around? (Score:5, Informative)
Re: (Score:3, Funny)
1. Simplicity: [_] Notepad [X] Ed
2. Less bloat: [_] Notepad [X] Ed
3. More users: [_] Notepad [X] Ed
and, remember, it's the standard!
Re: (Score:3, Insightful)
Re: (Score:2)
The reason RMS is stepping down is Emacs doesn't need any more developement - its self-aware,
Re:What I sadly discovered about RMS and GNU/GPL. (Score:4, Interesting)
I saw RMS about 10 years ago, and found him to be a real 'hippie'. It was really quite embarrassing.
But I saw him again just 2 years ago and found that he'd changed a lot. He gave a very good speech and talked about the copyright on books. He proposed a two year copyright length on books, extended if it sells well to five years etc. He put forward his reasoning (Most books go out of print after two years), and the reaction from book writers during his research (positive), etc. It was a very reasonable argument. He brought up the philisophy of being free, but it was more of an undertone, than a dominant statement.
I think RMS has matured a lot during the years. Maybe listen to one of his recent talks and give him a fair ear. If you still don't like him, then fair enough. | http://developers.slashdot.org/story/08/02/23/1313229/rms-steps-down-as-emacs-maintainer?sdsrc=next | CC-MAIN-2015-14 | refinedweb | 3,166 | 70.33 |
On Tue, Jul 08, 2003 at 03:21:04PM +0200, Nardy Pillards wrote > > > > > >libwsock32.a, libmswsock.a and mswsock.h are within MinGW. > > > (example > makefile) > > > > > >so are winsock.h and winsock2.h > > > > > > > > Yes, I know. I've done applications with winsock earlier with the mingw > > compiler. > > .c sources, .h headers: > inserted #if WIN32 with <winsocket.h> in stead of the 'BSD-like' socket.h > lines Now that we have sockets support for windows, it might be worthwhile to make the external player and the random.org RNG support. What is needed? Is it just #if HAVE_SYS_SOCKET_H #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <sys/un.h> #endif #elif HAVE_WINSOCKET_H #include <winsocket.h> #endif (assuming we define a HAVE_WINSOCKET_H in config.h) Are there any other changes needed? Jørn | http://lists.gnu.org/archive/html/bug-gnubg/2003-07/msg00133.html | CC-MAIN-2015-22 | refinedweb | 137 | 74.15 |
How to deploy NFT tokens on TomoChain
Create your unique ERC721 tokens (ie: CryptoKitties) on TomoChain!
This article will explain:
- What is a Non-Fungible Token (NFT)
- Use-cases of NFT
- How-to step-by-step deploy a NFT token on TomoChain
What is a Non-Fungible Token (NFT)?
Fungible tokens are all equal and interchangeable. For instance, dollars or Bitcoins or 1 kilogram of pure gold or ERC20 tokens. All TOMO coins are equivalent too, they are the same and have the same value. They are interchangeable 1:1. This is a fungible token.
Non-fungible tokens (NFTs) are all distinct and special. Every token is rare, with unique atributes and different value. For instance: CryptoKitty tokens, collectible cards, airplane tickets or real art paintings. Every item has its own characteristics and specifics and is clearly differentiable to another one. They are not interchangeable 1:1. They are distinguishable.
Think of Non-Fungible Tokens (NFT) as a rare collectible on the TomoChain network. Every token has unique characteristics, its own metadata and special attributes
Non-Fungible Tokens (NFT) are used to create verifiable digital scarcity. NFTs are unique and distinctive tokens that you can mainly find on EVM blockchains.
The ERC-721 is the standard interface for Non-Fungible Tokens (but there are also other NFTs, like ERC1155). ERC721 is a set of rules to make your NFT easy for other people / apps / contracts to interface with.
ERC721 is a free, open standard that describes how to build non-fungible or unique tokens on EVM compatible blockchains. While most tokens are fungible (not distinguishable), ERC721 tokens are all unique, with individual identities and properties. Think of them like rare, one-of-a-kind collectables — each unit is a unique item with its own serial number.
ERC20: identical tokens. ERC721: unique tokens
Some high demand non-fungible tokens are applications like CryptoKitties, Decentraland, CryptoPunks, and many others.
CryptoKitties
At the end of 2017, NFTs made a remarkable entrance in the blockchain world with the sucess of CryptoKitties. Each one is a unique collectible item, with its own serial number, which can be compared to its DNA card. This unleashed an unprecedented interest for NFTs, that went so far as to clog the Ethereum network. The CryptoKitties market alone generated $12 million dollars in two weeks after its launch, and over $25 million in total. Some rare cryptokitties were even sold for 600 ETH ($170,000).
The strength of NFTs resides in the fact that each token is unique and cannot be mistaken for another one– unlike bitcoins, for example, which are interchangeable with one another.
Crypto Item Standard (ERC-1155)
One step further in the non-fungible token space is the ERC-1155 Standard proposed by the Enjin team, also known as the “Crypto Item Standard”. This is an improved version of ERC-721 which will actually be suitable for platforms where there are tens of thousands of digital items and goods.
Online games can have up to 100,000 different digital items. The current problem with ERC-721 is that if we would like to tokenize all those 100,000 items, then we would need to deploy 100,000 separate smart contracts.
ERC-1155 standard combines ERC-20 and ERC-721 tokens in its smart contract. Each token is saved in the contract with a minimal set of data that distinguishes it from others. This allows for the creation of bigger collections which contain multiple different items.
Use-cases of Non-Fungible Tokens (NFT)
Most of the time when people think about ERC-721 or NFT, they refer to the most notably successful CryptoKitties. But there are many other usability applications for NFT contracts:
- Software titles or software licences to guarantee anti-piracy, privacy and transferability — like Collabs.io
- Betting in real time on the outcome of a video game being live-streamed
- Gaming in general is an important field of experimentation and development for the uses of NFT in the future. TomoChain is having mini contests for games on blockchain, and is welcoming all developers to build blockchain games
- Concert tickets and sports match tickets can be tokenized and name-bound, preventing fraud and at the same time offering fans an option to have a single place where to collect all their past event experiences
- Digital Art (or physical art!) has already entered the game and showed an important usage of ERC721. Digital art auctions were the first application and still are the first thought of non-fungible token standards. The auctions organized by Christie’s revealed the appeal of the public for crypto-collectibles. Several digital art assets were sold during this event, the high point being the sale of the ‘Celestial Cyber Dimension’, an ERC721 CryptoKitty piece of art, for $140,000
- Real Estate assets, to carry out transfers of houses, land and other ‘tokenized’ properties through smart contracts
- Financial instruments like loans, burdens and other responsibilities, or a futures contract to buy 1,000 barrels of oil for $60k on May 1
- KYC compliance check to verify users. Receiving a specific NFT token in your wallet similar to the blue checkmark ☑️ on Twitter — like Wyre
- and more…
Crypto-Collectibles are more than a passing craze. It is easy to see the reason why, especially when you look at the potential of the crypto-collectible technology, including: securing digital ownership, protecting intellectual property, tracking digital assets and overall creating real world value.
How to deploy a NFT token on TomoChain
This article will create a basic ERC721 token using the OpenZeppelin implementation of the ERC721 standard. Look at the links in order to familiarize yourself with the requirements as they can sometimes be hidden in the excellent OpenZeppelin ERC721 implementations.
The assets that your ERC721 tokens (NFT) represent will influence some of the design choices for how your contract works, most notably how new tokens are created.
- You can have an initial supply of tokens defined during token creation
- You can have a function, which is only callable by the contract creator (or others — if you allow this) that will issue new tokens when called
For example, in CryptoKitties, players are able to “breed” their Kitties, which creates new Kitties (tokens). However, if your ERC721 token represents something more tangible, like concert tickets, you may not want token holders to be able to create more tokens. In some cases, you may even want token holders to be able to “burn” their tokens, effectively destroying them.
Let’s Start the NFT Tutorial
We will now implement a NFT collectible token, like CryptoKitties but with simpler logic.
You’ll learn how to create non fungible tokens, how to write tests for your smart contracts and how to interact with them once deployed.
We’ll build non-fungible collectibles: gradient tokens. Every token will be represented as a unique CSS gradient and will look somewhat like this:
1. Creating a new project
Create a new directory and move inside it. Then start a new
Truffle project:
mkdir nft-tutorial
cd nft-tutorial
truffle init
We will use OpenZeppelin ERC721 implementation, which is quick and easy and broadly used. Install
OpenZeppelin in the current folder.
npm install openzeppelin-solidity
2. Preparing your TOMO wallet
Create a TOMO wallet. Then grab a few tokens:
TomoChain (testnet): Get free tokens from faucet (grab ~60 TOMO)
TomoChain (mainnet): You will need real TOMO from exchanges
Go to Settings menu, select Backup wallet and then Continue. Here you can see your wallet’s private key and the 12-word recovery phrase.
Write down your 12-word recovery phrase.
3. Writing the Smart Contract
3.1 GradientToken.sol
We’ll be extending now the OpenZeppelin ERC721 token contracts to create our Gradient Token.
- Go to
contracts/folder and create a new file called
GradientToken.sol
- Copy the following code
pragma solidity ^0.5.4;import 'openzeppelin-solidity/contracts/token/ERC721/ERC721Full.sol';
import 'openzeppelin-solidity/contracts/ownership/Ownable.sol';// NFT Gradient token
// Stores two values for every token: outer color and inner colorcontract GradientToken is ERC721Full, Ownable {
using Counters for Counters.Counter;
Counters.Counter private tokenId;
struct Gradient {
string outer;
string inner;
}
Gradient[] public gradients; constructor(
string memory name,
string memory symbol
)
ERC721Full(name, symbol)
public
{}
// Returns the outer and inner colors of a token
function getGradient( uint256 gradientTokenId ) public view returns(string memory outer, string memory inner){
Gradient memory _gradient = gradients[gradientTokenId]; outer = _gradient.outer;
inner = _gradient.inner;
} // Create a new Gradient token with params: outer and inner
function mint(string memory _outer, string memory _inner) public payable onlyOwner {
uint256 gradientTokenId = tokenId.current();
Gradient memory _gradient = Gradient({ outer: _outer, inner: _inner });
gradients.push(_gradient);
_mint(msg.sender, gradientTokenId);
tokenId.increment();
}
}
We inherited from two contracts: ERC721Full to make it represent a non-fungible token, and from the Ownable contract.
Every token will have a unique
tokenId, like a serial number. We also added two attributes:
inner and
outer to save CSS colors.
Ownable allows managing authorization. It assigns ownership to deployer (when the contract is deployed) and adds modifier onlyOwner that allows you to restrict certain methods only to contract owner. Also, you can transfer ownership. You can approve a third party to spend tokens, burn tokens, etc.
Our solidity code is simple and I would recommend a deeper dive into the ERC-721 standard and the OpenZeppelin implementation.
You can see the functions to use in OpenZeppelin ERC721 here and here.
You can find another ERC721 smart contract example by OpenZeppelin here.
4. Config Migrations
4.1 Create the migration scripts
In the
migrations/ directory, create a new file called
2_deploy_contracts.js and copy the following:
const GradientToken = artifacts.require("GradientToken");module.exports = function(deployer) {
const _name = "Gradient Token";
const _symbol = "GRAD";
return deployer
.then(() => deployer.deploy(GradientToken, _name, _symbol));
};
This code will deploy or migrate our contract to TomoChain, with the name
Gradient Token and the symbol
GRAD.
4.2 Configure truffle.js
Now we set up the migrations: the blockchain where we want to deploy our smart contract, specify the wallet address to deploy, gas, price, etc.
1. Install Truffle’s
HDWalletProvider, a separate npm package to find and sign transactions for addresses derived from a 12-word
mnemonic.
npm install truffle-hdwallet-provider
2. Open
truffle.js file (
truffle-config.js on Windows). You can edit here the migration settings: networks, chain IDs, gas... You have multiple networks to migrate your ICO, you can deploy: locally,
ganache, public
Ropsten (ETH) testnet,
TomoChain (testnet),
TomoChain (Mainnet), etc…
Both Testnet and Mainnet network configurations are described in the official TomoChain documentation — Networks. We need the
RPC endpoint, the
Chain id and the
HD derivation path.
Replace the
truffle.js file with this new content:
const HDWalletProvider = require('truffle-hdwallet-provider');
const infuraKey = "a93ffc...<PUT YOUR INFURA-KEY HERE>";// const fs = require('fs');
// const mnemonic = fs.readFileSync(".secret").toString().trim();
const mnemonic = '<PUT YOUR WALLET 12-WORD RECOVERY PHRASE HERE>';module.exports = { networks: {
// Useful for testing. The `development` name is special - truffle uses it by default
development: {
host: "127.0.0.1", // Localhost (default: none)
port: 8545, // Standard Ethereum port (default: none)
network_id: "*", // Any network (default: none)
}, // Useful for deploying to a public network.
// NB: It's important to wrap the provider as a function.
ropsten: {
//provider: () => new HDWalletProvider(mnemonic, `{infuraKey}`),
provider: () => new HDWalletProvider(
mnemonic,
`{infuraKey}`,
0,
1,
true,
"m/44'/889'/0'/0/", // Connect with HDPath same as TOMO
), deploying to TomoChain testnet
tomotestnet: {
provider: () => new HDWalletProvider(
mnemonic,
"",
0,
1,
true,
"m/44'/889'/0'/0/",
),
network_id: "89",
gas: 3000000,
gasPrice: 10000000000000, // TomoChain requires min 10 TOMO to deploy, to fight spamming attacks
}, // Useful for deploying to TomoChain mainnet
tomomainnet: {
provider: () => new HDWalletProvider(
mnemonic,
"",
0,
1,
true,
"m/44'/889'/0'/0/",
),
network_id: "88",
gas: 3000000,
gasPrice: 10000000000000, // TomoChain requires min 10 TOMO to deploy, to fight spamming attacks
}, //
compilers: {
solc: {
version: "0.5.4", //"
// }
}
}
}
3. Remember to update the
truffle.js file using your own wallet recovery phrase. Copy the 12 words previously obtained from your wallet and paste it as the value of the
mnemonic variable.
const mnemonic = '<PUT YOUR WALLET 12-WORD RECOVERY PHRASE HERE>';
⚠️ Warning: In production, it is highly recommend storing the
mnemonicin another secret file (loaded from environment variables or a secure secret management system)...
4.3 Ganache
You can use
Ganache blockchain to test your smart contracts locally, before migrating to a public blockchain like
Ethereum (Ropsten) or
Tomochain.
On a separate console window, install
Ganache and run it:
npm install -g ganache-cli
ganache-cli -p 8545
Ganache will start running, listening on port
8545. Automatically you will have 10 available wallets with their private keys and
100 ETH each. You can use them to test your smart contracts.
5. Adding Tests
We will add now tests to check our smart contracts.
When you deploy contracts your first contract will usually be the deployer. This test will check that.
Create
GradientTokenTest.js in
/test directory and write the following test:
const GradientToken = artifacts.require("GradientToken");contract("Gradient token", accounts => {
it("Should make first account an owner", async () => {
let instance = await GradientToken.deployed();
let owner = await instance.owner();
assert.equal(owner, accounts[0]);
});
});
Here we run the
contract block, that deploys our contract. We wait for the contract to be deployed and request
owner() which returns owner’s address. Then we assert that the owner address is the same as
account[0].
Note: Make sure that
Ganacheis running (on a different console).
Run the test:
truffle test
The test should pass. This means that the smart contract works correctly and it did successfully what it was expected to do.
Adding more tests
Every NFT token will have a unique ID. The first minted token has
ID: 0, the second one has
ID: 1, and on and on…
Now we’ll test the
mint function. Add the following test:
describe("mint", () => {
it("creates token with specified outer and inner colors", async () => {
let instance = await GradientToken.deployed();
let owner = await instance.owner(); let token0 = await instance.mint("#ff00dd", "#ddddff");
let token1 = await instance.mint("#111111", "#ffff22");
let token2 = await instance.mint("#00ff00", "#ffff00"); let gradients1 = await instance.getGradient( 1 );
assert.equal(gradients1.outer, "#111111");
assert.equal(gradients1.inner, "#ffff22");
});
});
This test is simple. First we check that we can mint new tokens. We mint 3 tokens. Then we expect that the unique attributes
outer and
inner of token with tokenId =
1 are saved correctly and we assert it by using the
getGradient function that we created before.
The test passed.
6. Deploying
6.1 Start the migration
You should have your smart contract already compiled. Otherwise, now it’s a good time to do it with
truffle compile.
Note: Check that you have enough
TOMOtokens in your wallet!! I recommend at least
60 TOMOto deploy this smart contract
Back in our terminal, migrate the contract to TomoChain testnet network:
truffle migrate --network tomotestnet
To deploy to TomoChain mainnet is very similar:
truffle migrate --network tomomainnet
The migrations start…
Starting migrations...
======================
> Network name: 'tomotestnet'
> Network id: 89
> Block gas limit: 840000001_initial_migration.js
======================Deploying 'Migrations'
----------------------
> transaction hash: 0x67c0f12247d0bb0add43e81e8ad534df9cd7d3473ef76f5b60cee3e3d34bae1a
> Blocks: 2 Seconds: 5
> contract address: 0x6056dC38715C7d2703a8aA94ee68A964eaE86fdc
> account: 0x169397F515Af9E93539e0F483f8A6FC115de660C
> balance: 90.05683
> gas used: 273162
> gas price: 10000 gwei
> value sent: 0 ETH
> total cost: 2.73162 ETH> Saving artifacts
-------------------------------------
> Total cost: 2.73162 ETH2_deploy_contracts.js
=====================Deploying 'GradientToken'
-------------------------
> transaction hash: 0xca09a87ad8f834644dcb85f8ea89beff74b818eff11d355e0774e6b60c51718c
> Blocks: 2 Seconds: 5
> contract address: 0x8B830F38b798B7b39808A059179f2c228209514C
> account: 0x169397F515Af9E93539e0F483f8A6FC115de660C
> balance: 60.64511
> gas used: 2941172
> gas price: 10000 gwei
> value sent: 0 ETH
> total cost: 29.41172 ETH> Saving artifacts
-------------------------------------
> Total cost: 29.41172 ETHSummary
=======
> Total deployments: 2
> Final cost: 32.14334 ETH
Congratulations! You have already deployed your non-fungible token (NFT) to TomoChain! The deployment fees were
32.14 TOMO.
Read the output text on the screen. The NFT token contract address is (yours will be different):
0x8B830F38b798B7b39808A059179f2c228209514C
⚠️ Note: TomoChain’s smart contract creation fee: gas price 10000 Gwei, gas limit >= 1000000
*** Troubleshooting ***
- Error:
smart contract creation cost is under allowance. Why?Increasing transaction fees for smart contract creation is one of the ways TomoChain offers to defend against spamming attacks. Solution: edit
truffle.jsand add more gas/gasPrice to deploy.
- Error:
insufficient funds for gas * price + value. Why? You don’t have enough tokens in your wallet for gas fees. Solution: you need more funds in your wallet to deploy, go to faucet and get more tokens.
7. Interacting with the smart contract
7.1 Minting new Tokens
Now to create a new Gradient Token you can call:
GradientToken(gradientTokenAddress).mint("#001111", "#002222")
You can call this function via MyEtherWallet/Metamask or Web3... In a DApp or game this would probably be called from a button click in a UI.
Let’s use MyEtherWallet (MEW) to interact with the contract. We use MetaMask to connect to the GradientToken owner wallet in TomoChain (testnet), then we will call function
mint() to mint the first token.
In MyEtherWallet, under menu
Contract >
Interact with Contract two things are required:
- Contract Address: you got this address when you deployed
- ABI: file
build/contracts/GradientToken.json, search
"abi": […
]and copy everything inside brackets, including brackets. Then paste on MEW
On the right you will see a dropdown list with the functions. Select
mint. MEW will show two fields:
outer and
inner. Input two colors, like
#ff0000 or
#0000ff and click the button Write. Confirm with MetaMask.
Here is our contract address, and the new
mint transaction:
You can use MEW to Write and to Read functions, like
getGradient! This way you can check if values are correct, totalSupply, transfer tokens...
Note: In
Ethereum (Ropsten), the Etherscan page with our migrated contract will change after the first token is minted. A new link will be displayed now to track the ERC721 token
GRAD.
What’s next?
A few suggestions to continue from here:
- You could add a lot of different attributes to your unique NFT tokens
- You can connect to a JS front end and show your tokens
- You can have some buttons to interact with the tokens (buy, sell, change, transfer, change attributes/colors, etc…)
- You can iterate on this basic code and create a new CryptoKitties game :)
Congratulations! You have learnt about non-fungible tokens, use-cases of NFTs and how to deploy NFT tokens on TomoChain.
Now we are looking forward to see your awesome ideas implemented! | https://medium.com/tomochain/how-to-deploy-nft-tokens-on-tomochain-fe476a68594d?source=collection_home---4------2----------------------- | CC-MAIN-2019-30 | refinedweb | 3,047 | 55.84 |
- ! Thank you for posting code again!
Admin
As long as this is Java, then _inputStream.read() will return -1 when it reaches the end of the stream, so it will be possible for _char == -1 to be true.
Also, catching an OutOfMemoryError is extremely bad practice, as is catching anything derived from Error...
JDK5.0 API - java.lang.Error
Because OOME is thrown when the JVM is out of memory (suprise suprise...) the JVM cannot handle the catch case and just exits.
Admin
I think the problem is that (char)-1 is *not* -1 as char is an unsigned type.
The catch(OutOfMemoryError) actually made me go 'WTF?' for real
Admin
Ah, too true - Friday afternoon and my mind is no longer here ;-)
Admin
>I think the problem is that (char)-1 is *not* -1 as char is an unsigned type.
Well,Java does not have unsigned data types, o that part is OK.
The problen is that stream.read() actually returns an int, in which case last 8 bits will technically have 'char' value; (int)-1 will signify failure or EOF. However, in this case by assigning the value directly to char type, there's no way to distinguish between (unsigned char)255 -> valid value and (int)-1-> EOF
Admin
What is this trying to do? Read as much data from _inputStream as it will offer, stripping off up to two CR+LF's from the front? Or any combination of CR+LF's, and the inner pair of if() blocks are completely redundant? (I figure I could work it out by poking around the net, but why do that when someone who does Java for a living can prolly figure it out in five seconds)
Admin
Parsing HTTP headers, I suspect.
Admin
I suppose they didnt realise the reason they were running out of memory in the first place....
while (true) {
char _char;
Now unless java has some wierdness - im pretty sure thats allocating a new block of memory each loop?
Admin
Java does have one unsigned datatype, namely char: its range is 0 to 0xFFFF.
So char == -1 will in fact return true if char is 0xFFFF.
Admin
No, that will create a char on the stack, but it will be reused every loop (or recreated in the same place on teh stack).
This part will allocate a new byte on the heap though, which if it wasnt for Javas magic garbage collection, would leak a lot of memory (being a C++ coder, this line made me go "WTF!")
_inputStream.read(
new byte[_inputStream.available()]);
however, the Java spec states that the garbage collecter *is not guarenteed to run at any time during your programs execution*! So depending on the platform this is running on, those bytes may not get cleaned up till the program quits.
Admin
LOL
I write much better code when I'm smoking hashish, people underestimate its ability to help solve complex problem...
Admin
Cannot believe the HTTP client company is so lazy that they did not even strip out their comments in their code. That's a serious business (not technical) mistaken.
Admin
Ow. My head.
Admin
No, it won't. I just checked.
Java only does four types of math: int, long, float, and double. byte, short, and char all get promoted to an int whenever any math is done with them (including comparisons). I suppose you could argue that boolean operations count as math so that's a fifth type.
In any case, all math operations involving bytes, shorts, and chars are done with ints. So the following code:
public class test {
public static void main(String[] args) {
char c = (char) -1;
System.out.println(c == -1);
System.out.println(-1 == '\uFFFF');
}
}
Prints:
false
false
I just tested it. And, yes, it's really annoying that Java has a policy of "no unsigned datatypes" and then treats chars as essentially unsigned shorts.
Admin
Actually, that's not quite true: OOME is thrown when there is not enough memory to honour a request to allocate more. Depending on what you are doing, this may be recoverable (e.g. if you're allocating a 128MB block).
Admin
In the case of the Sun JVM, objects won't become candidates for GC inside a method - so allocating objects in a long-running loop in a single method is likely to cause you problems.
We've resolved GC issues in long-running loops (e.g. numerical analysis) by extracting methods out.
Admin
however, the Java spec states that the garbage collecter *is not guarenteed to run at any time during your programs execution*! So depending on the platform this is running on, those bytes may not get cleaned up till the program quits.
On mobile phones it typically runs GC when memory is about to run out. I did a test once, and was like WTF, when showing available memory and it was continually shrinking. I was curious what will happen, when it runs out, but it just ran GC and freed all unused memory.
Admin
It does not. It is a variable local to the loop, same stack slot will be used at each loop.
Admin
The real WTF is that if this is indeed Java Code, that they payed a "huge" amount for a closed source HTTPClient - i dont know if that was like 10 years ago because the jakarta http commons util exists for quiet some time (years)....
Admin
Being serious about this stripping comments? WTF.
Admin
The real WTF is that there were two WTFs on the same day!
Admin
And one disappeared for a while the other day.... Some WTFery is going on at The Daily WTF...
Admin
Maybe I get your remark wrong, but a while loop generating a stack?
That's nonsense, the assembly equivalent of a while loop would be a conditional jump, that doesn't create a call on the stack.
Admin
I think the problem is when a read() return -1, it is not detected by the if() (because of the conversion to char), then it sits in the while(true) loop, appending one char (with value 0xFFFF) to the StringBuffer at each iteration until there is no more memory and the processor emits smoke.
They could just read the whole buffer (since they allocate it at the end any way), then search for "\r\n\r\n" in-memory (much more simple to do) and then build a StringBuffer from the first part of byte[] buffer.
Admin
As a novice at programming it's no surprise I don't understand everything that is said on here, but what I find very confusing is all the different answers and explainations. A little help please.
Admin
You're thinking the same thing, just saying it differently. The variable is local to the block, you can't access it outside the scope of the while block. As far as the java language cares, it only exists inside the while block, and the specifications require it to be initialized to zero at the point of declaration.
Now the compiler probably does NOT actually allocate new stack space of it every time it enters the loop, but instead optimizes it by allocating stack space for all variables on function entry. Now all it has to do is zero the value at the beginning of the loop.
Strictly speaking in compiler theory though, you only need to put the variable on the stack inside of the loop. It's the optimizer that will move it back outside :)
Admin
Anyway, what was this http client doing on server ? This CodeSOD smells like a J2ME...
Admin
The OOME in this case really tells the whole story, I'm pretty sure of what's happening here without even trying to understand the conditional logic in there. Something similar has actually happened to me -- except I fixed the loop rather than catching the OOME :)
The string buffer has an internal buffer that is being added to with each pass of the loop. When the exit condition for a loop is wrong and the loop runs forever adding one char every time this will very quickly allocate all available memory to the string buffer.
Just think: A processor with a memory bandwidth of gigabytes per second and a computer with maybe a single gigabyte of ram.
StringBuffer sb = new StringBuffer();
while(true) { sb.appendChar('x'); }
will run out of memory within seconds, especially if the VM size is restricted.
Admin
Fewww... when I start reading, and got to the comments, I was sure that someone had posted MY code. I have put both
Admin
'bout time somebody posts a non-WTF version of this code.
People just gotta love this Paula HTTP Client. I can imagine management decision.. "WHAT? 100 % CPU usage? We need better hardware!"
Admin
*Ouch*... My brain... :)
Seriously, the little side note:
Sounds a bit like me, if I do not sound too high on myself... :) My biggest problem is not that I need to know how something works, but I expect other developers to as well. My bad... :)
Peace!
Admin
Well maybe you're a computer (did you pass the turing test already) but the phrase :
Sounds like a load of bollocks to me...
Unless you're a computer compiling shit won't help you understand it.
I precompile it in my head to figure it out. WTF!
Admin
Actually, in Java, local variables must be explicitly assigned to (Java Language Specification, chapter 16) before they are read and the compiler must be able to prove it or it refuses to compile.
Admin
OK - maybe not "compile" in the sense of doing translation to opcodes... My fault for copying without editing. But there were smileys present, and I did say "a bit like me"...
But along those lines, I am sure that you can get an idea of what happens when you see code like
- You can see a possible bug simply by knowing how the code will (or wil not) execute depending on the value of bValue.
Hey - I once debugged something by having a hex dump of a memory area read to me over the phone, which was kinda like reverse-compiling... :)
Peace!
--------------------------------------------------
Just because you do not know what I am talking about, that does not mean that I do not.
Admin
Actually, the specification only requires fields to be initialized to zero; a local variable will not be automatically initialized. If it is used before it is initialized, you will get a 'the local variable var may not have been initialized' compiler error.
Admin
And the semantics of the or operator (remember VB6).
duh!
Admin
<font color="#009933">> And the semantics of the or operator (remember VB6).</font>
Ahhh... But I do not need to worry about that, nor how it works in the many other languages that I do not currently have a need for. It is but one of the many benefits of being a highly-paid specialist. I am sure that one day, you will understand.
<font color="#660000">>> Hey - I once debugged something by having a hex dump of a
>> memory area read to me over the phone, which was kinda like
>> reverse-compiling... :)
</font><font color="#009933">> duh!</font>
Easy, kid... I said "kinda like" and there is no information in my post that clarifies what kind of memory I was looking at. So jumping to a conclusion would be inappropriate. ...Just like taking an inappropriate tone with those whom you do not know.
Admin
Maybe you're one of those selfnamed (highly-paid) specialists the ends up (with his atrocities ) on theDailywtf?
Till the cold freemarket wind sweeps across the plain, and its back to shoveling dung.
Admin
Well it can help. I do it a bit with Java. Having written a JVM from scratch in the past, I know vaguely what it does behind the scenes and what the various opcodes are and when they are generated. However, that still doesn't really satisfy me properly because the implementations of various JVMs are all different and you just have to trust that it's doing something sensible and not be too much of a control freak about it.
When I first encountered interpreted and compiled-but-virtual languages, I didn't like them because I was a further step removed from the processor and felt uncomfortable not being able to control the processor properly. But those sorts of things crop up in other places too. I was used to programming in C and it is relatively easy to know how that is going to compile. Use C++ though, and the preprocessor is doing loads behind your back and bloating your code for you.
I thought about it and realised that if I wanted to be a complete control freak then I would really have to program in assembler because even in plain C, the optimiser can be doing strange things behind your back. Useful, but still strange. And also, even in assembler, you need to know your CPU inside-out to know what it's doing because CPUs contain various levels of microcode and nanocode for various instructions.
So knowing how your language is compiled can give you hints as to how efficient your algorithms will be and how much memory they will consume. A lot of it is still up to various lower-level factors over which we have less control so keep that in mind. Caring about it, however, is a significant step towards being a good programmer.
Admin
Gasp! A highly paid specialist that doesn't know jack... why am I not surprised??
captcha: null (maybe if I was a highly paid specialist then I would know what this means)
Admin
> Maybe you're one of those selfnamed (highly-paid) specialists
> the ends up (with his atrocities ) on theDailywtf?
I can honestly say that no, I have never seen my code on TDWTF...! I have not even seen code close to it. While I have written some (really) bad code in my younger days (some of it easily WTF-worthy!) and even being a college dropout, I do not think that "atrocity" applied since I had a VIC-20! :)
As far as the freemarket wind, remember there are companies out there that are competent enough to recognize experience and wisdom, and they will pay for it, even when the market sucks. Trust in that when things start to look bad - there is always a position for a skilled (or better skilled) developer. The last think a company needs when the software market sucks is to put out a poor(er)-quality product, which will only cause them to lose further sales. That is the best time for them to spend money wisely.
Peace!
Admin
> Gasp! A highly paid specialist that doesn't know jack... why am I not surprised??
Heh... One might wonder just exactly how you would be aware of my body of knowledge (or not). Not to point too fine a point on it, but I did not say that I did not know VB6 (or any other languages for that matter), I said that I do not have to worry about the ones that I do not currently have a need for.
> captcha: null (maybe if I was a highly paid specialist then I would know what this means)
Perhaps... One can hope... :) J/K
Peace!
Admin
i'm sorry, but i don't like the whole new "code snippet of the day" idea. to me, the daily wtf is code like this, and while i like a few stories mixed in, we've had nothing but stories. code is what i think should be the main diet of the daily wtf, like it used to be.
Admin
Um, all he said was that he didn't need to know the semantics of the VB6 or operator. He didn't actually say he didn't know it... just that it was unnecessary since he doesn't work with VB6.
Er, wait, I mean... He suggested people who make lots of money don't write VB6! He is TEH STUPID ELITIST!1 BURN HIMMMMM
Admin
Agreed, love the code, this is why I visit this place!
Admin
Well, I think the community here is quite heterogeneous with respect to knowledge on coding, and on IT in general. Which is natural. No one has a full grasp of the unbounded ocean of possible IT knowledge anymore... and, of course, different life histories... As for your request for help: I think you should try to be a little bit more specific. Whence could we know what you know or not?
When it comes down to discussions on code snippets in programming languages I don't know (or don't know well) I also often don't understand the comments. But WTF? Sometimes I learn something from them, nevertheless. And otherwise I just skip them.
Admin
So... Rather than getting a story per day which may or may not have code in it, we will be getting two stories per day, one of which is guaranteed to have code. How is this not an improvement?
Admin
It never ceases to amaze me when people, presented with free entertainment, feel the obligation to criticize and complain because it's not exactly what they were looking for.
Personally, I like Alex' writing - I usually get a laugh-of-the-day out of the story. The code, if present, is just an added bonus.
Cable has hundreds of channels - most with nothing of value to watch, but there is only one TDWTF!
Admin
I have been lurking around here for several months only and don't know how TDWTF formerly was. For my taste, a good mix of stories w/o code, stories with code, WTF code snippet collections and WTF collections of codeless stuff is preferable. The problem with pure code WTFs is that often only a (rather small?) subset of people here would benefit from them. Or does anyone here know well all the programming languages that frequently occured on TDWTF so far? Whereas I daresay that everyone can profit from reading about management WTFs. And the subtitle of TDWTF is "Curious Perversions in Information Technology", after all. For me, that does include not only code, but also people, hardware, management, even politics.
Nevertheless, I admit that the assortment during the last weeks may have been a bit onesided.
Admin
OOME is much worse than a failed call to malloc. The VM spec states that after an error is thrown everything may be broken and all guarantees the lang and VM gave you may be long gone. In Azureus they do catch and report errors, probably with a catch(Throwable) line: I got an OOME and after that every action I tried to do involving a torrent threw an AbstractMethodError because the VM invalidate part of the class. Never catch an Error in Java. | https://thedailywtf.com/articles/comments/Please_Supply_a_Test_Case | CC-MAIN-2018-39 | refinedweb | 3,166 | 71.04 |
Hello all, I'd like to revive this sig. I know I've been quiet of late, but hey, so has everyone else. In the interest of restarting the discussion, I guess I'll try to restate what I think we need to accomplish here. But I represent it only as my opinion. I hope others will comment, or suggest wholly different agendas, whatever. Anyway, this post is just intended to kick start renewed discussion of what the goals should be and the means for achieving them. First off, "THE GOAL", in my mind, is to come up with a set of core enabling code that would ultimately be directly absorbed by Guido into the Python distribution. When it's all said and done, I want Python extension programmers to be able to pull down a stock distribution, and using that, write extension modules in C++ with the full conveniences accorded by the C++ language. A lot of the action for Python, in the circles I travel, is in the business of hooking Python up to other code. I frequently find myself writing this "glue code" that we call Python compiled extensions, which talks to Python's C api in a pretty low level C-ish way, and talks to other compiled assets, using fairly upscale C++. I'd like the code which lives in this space, to be able to talk to the Python API in a similarly upscale fashion.]. Both of these seem like very reasonable issues to me, but I'm optimistic that both can be satisfactorily redressed. Regarding the first point, I figure that much of the C++ support machinery that I envision being developed in this SIG, is pretty lightweight stuff. For example, the CXX_Objects package that Paul has been distributing, is a pretty thin veneer over the Python C API, which happens to really make life a lot easier on the Python extension programmer working in C++, without really representing all that much code. I think that is representative of how most of the work of this SIG can come out. Moreover, I think there's a pretty good set of dividing lines between some of the different subsystems, so that we should be able to advocate some of them to Guido for inclusion into the core Python distribution, without necessarily having to couch it as an all-or-nothing kind of arrangement. As for consensus, well, everybody speak up! In the sig charter, I proposed the following list of example topics for this sig. I'll just quote that list, and insert some comments betwinst. > 1.Autoconf support for enabling C++ support in Python. This must be > managed in a way which does not change the C API, or the behavior of > Python if not configured for C++ support, in any (observable?) way. In > other words, conventional C extension modules must continue to work, > whether Python is configured with C++ support or not. I've recently produced a new patch set for fixing the Python-1.5.2 configure script to support C++. Yell loud if you don't agree, but I'm under the impression that there is a pretty universal awareness of the basic issue that we need main() compiled in C++ in order to support the possibility of the python executable hosting other C++ code buried in extension modules. This much we need, and I anticipate we can all agre on, almost irrespective of exactly /what/ the specific C++ interface we come up with is. Or stated another way, people wanting to write Python extension modules in C++ will need this, no matter exactly what they do for the rest of the C++ binding.. > 3.Introducing C++ classes for the various major Python abstractions, > such as dictionaries, lists, tuples, modules, etc. Paul and I spent some real time talking over this when I was at LLNL, and his CXX_Objects package is a really good incarnation of the basic ideas I think are important here. I've got to get reacquainted with the Python C aPI and see how much coverage we have right now. A pretty basic way of saying what classes I figure we need, is to look at the list of all Python C API entry points. They all look sort of like: Py<Pkg>_<Operation>. To me, this would seem to translate naturally to: namespace Py { class <Pkg> { PyObject *<Operation>( ... ); The idea then is to come up with C++ classes representing the major subdivisions of the Python C API, with member functions corresponding to the various function entry points in this package family in the C API. That's at least a pretty decent first approximation to what should be done. A little additional work to bring [] subscripting operators to Dicts and so forth, seems prudent as well. So anyway, I think Paul's CXX_Objects thing looks to me like a pretty significant step along the path toward "wrapping" the Python C API. > 4.Providing semantics for these classes which are natural both to the > Python programmer, as well as the C++ programmer. For example, the > PyDict class should have an operator[], and should also support STL > style iterators. Many issues of this form.. There may be some issues here of just how far we can go with this. Python is very dynamically typed, and C++ is statically typed. For instance, a PyDict holds PyObject *'s, not int's. So there may be some issues about just how "normal" we can make the Python C++ wrapper classes look, in comparison to their STL counterparts, but we should do what we can.>() ); Or whatever. Anyway, hopefully you can understand the basic idea I'm trying to convey. > 5.Method invocation. How to make the invocation of C++ member > functions as natural and/or painless as possible. I have code that makes it possible to invoke methods of an object in the Python language, and have it result in invoking methods of an underlying C++ object. I'll dust that code off and try to post it soon. There were some comments on this code about a year ago, and I tried to fold in some of the constructive criticisms that I received, when last I rewrote that. Anyway, I'll get that cleaned up, and post it to the www site so poeople can look it over. > 6.Error handling. How to irradicate NULL checking in favor of C++ > style exceptions. I really want to expunge this business of NULL checking of the results of Python C API calls. The wrapper classes envisioned above, should do that, and throw appropriate exceptions when something goes gonzo in the C aPI functions they call. Then in the C++ code which uses these wrapper classes, you could replace all blecherous NULL checking, with a conventional C++ try block. Or even just let it go unhandled, if you have been suitably rigorous in the use of ctor/dtor pairs. For this to work, there needs to be a try/catch block around this methodobject invoker thingie in the Python core. Again, I have code that implements an "exception safe" method invocation semantic. I'll repackage that for 1.5.2, and post it for comment. In any event, I hope people will comment on this, and voice any other views or raise any other issues that ought to be heard and considered by all. Also, if you have code which you think should be considered by the larger community in the context of this effort to codify a C++ interface for Python extension programming, then by all means, send it to me, and I'll try to post it on the SIG www page. We should get all the ideas and competing propositions out on the table where every one can see, review, comment, contribute. One last thing for this initial attempt at revitalizing this SIG. In the past, there has been a lot of talk about the dearth of compilers at the level anticipated by the code that I and Paul have posted. There are some developments on this front that I think are worthing bringing to people's attention: 1) Sun C++ 5.0 is out. Although still a far cry short of "ANSI C++", it is a huge step up over their previous 4.2 product. They left out partial specialization and member templates, which is extremely dissapointing, but still, the overall product is a huge step up over its direct predecessor. 2) MS VC++ 6.0 is reputed to have support for "inline member templates" and most of the rest of the modern template machinery. Evidently it still lacks out of line member templtes, and partial specialization, but--so I am told--on the whole it is a big step forward. 3) For those who may not have heard yet, EGCS has been named the official successor to the GCC throne. This is extremely good news for'll be running precise evaluations of this over the next few days, but I'm pretty sure late model egcs snapshots are "there". Expect a GCC 3.0 shortly, which will be bascially a shaken down release of the current egcs developers snapshots. In other words, we are probably just days or weeks away from having an extremely high quality freeware virtually-ISO C++ compiler. Cheers to all. Please express your views.. -- | https://mail.python.org/pipermail/cplusplus-sig/1999-April/000104.html | CC-MAIN-2016-30 | refinedweb | 1,562 | 68.7 |
Key:shoulder
Indicates the presence of a
shoulder on highways.
A shoulder, often serving as an emergency stopping lane, is a reserved lane by the verge of a road or motorway - on the right in countries which drive on the right, or on the left side in those which drive on the left (Japan, the UK, Australia, etc.).
Usage
Tag a highway way with shoulder=yes if it has a shoulder.
You may find it useful to tag roads with shoulder=no if it might be otherwise assumed to have a shoulder. For example, in some countries, most highway=motorway will have a shoulder but there may be exceptions.
If there is a shoulder on just one side of the road, there are two commonly used ways of tagging:
- shoulder=yes/no/left/right/both
- shoulder:left/right/both=yes/no
Both kinds of tagging are in use with about the same amount of occurrences as of 04/2016.
Unless otherwise tagged, a shoulder so tagged should be assumed paved, wide enough to be used as an emergency refuge for cars, and wide enough for through passage by bicycles.
Access permissions are assumed to be inherited from the national defaults for that type of highway, unless otherwise tagged.
Refinement
You may want to add further details of the shoulder using normal tags, suitably namespaced. Here are some examples of tags that are in use:
- shoulder:width=4
- shoulder:surface=concrete
- shoulder:access:psv=yes
- shoulder:smoothness=excellent
- shoulder:line=continuous
would indicate that the shoulder is 4 meters wide, has a excellently smooth concrete surface, can be used by buses and is separated from the road using a solid line.
If the shoulders are different on both sides of the road, consider using the :left/:right suffixes in the keys:
History
This tag was first proposed in 2010 by Xan and has since gained 'In use' status through widespread usage. It was moved to formal tag documentation in 2016 by Richard after discussion on the tagging mailing list. See initially thread (where Daniel Tremblay comments if there is tag for shoulders), and subsequent threads on the OSM forum and talk-us lists. | http://wiki.openstreetmap.org/wiki/Shoulder | CC-MAIN-2017-26 | refinedweb | 362 | 53.85 |
For some reason I assumed for a long time that NetBSD didn't support any kind of socket credentials. However, I recently discovered that it indeed supports them through the LOCAL_CREDS socket option. Unfortunately it behaves quite differently from other methods. This poses some annoying portability problems in applications not designed in the first place to support it (e.g. D-Bus, the specific program I'm fighting right now).
LOCAL_CREDS works as follows:
- The receiver interested in remote credentials uses setsockopt(2) to enable the LOCAL_CREDS option in the socket.
- The sender sends a message through the channel either with write(2) or sendmsg(2). It needn't do anything special other than ensuring that the message is sent after the receiver has enabled the LOCAL_CREDS option.
- The receiver gets the message using recvmsg(2) and parses the out of band data stored in the control buffer: a struct sockcred message that contains the remote credentials (UID, GID, etc.). This does not provide the PID of the remote process though, as other implementations do.
To ensure this restriction there needs to be some kind of synchronization protocol between the two peers. This is illustrated in the following example: it assumes a client/server model and a "go on" message used to synchronize. The server could do:
- Wait for client connection.
- Set LOCAL_CREDS option on remote socket.
- Send a "go on" message to client.
- Wait for a response, which carries the credentials.
- Parse the credentials.
- Connect to server.
- Wait until "go on" message.
- Send any message to the server.
#include <sys/param.h>
#include <sys/types.h>
#include <sys/inttypes.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int
main(void)
{
int sv[2];
int on = 1;
ssize_t len;
struct iovec iov;
struct msghdr msg;
struct {
struct cmsghdr hdr;
struct sockcred cred;
gid_t groups[NGROUPS - 1];
} cmsg;
/*
* Create a pair of interconnected sockets for simplicity:
* sv[0] - Receive end (this program).
* sv[1] - Write end (the remote program, theorically).
*/
if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
err(EXIT_FAILURE, "socketpair");
/*
* Enable the LOCAL_CREDS option on the reception socket.
*/
if (setsockopt(sv[0], 0, LOCAL_CREDS, &on, sizeof(on)) == -1)
err(EXIT_FAILURE, "setsockopt");
/*
* The remote application writes the message AFTER setsockopt
* has been used by the receiver. If you move this above the
* setsockopt call, you will see how it does not work as
* expected.
*/
if (write(sv[1], &on, sizeof(on)) == -1)
err(EXIT_FAILURE, "write");
/*
* Prepare space to receive the credentials message.
*/
iov.iov_base = &on;
iov.iov_len = 1;
memset(&msg, 0, sizeof(msg));
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_control = &cmsg;
msg.msg_controllen = sizeof(struct cmsghdr) +
SOCKCREDSIZE(NGROUPS);
memset(&cmsg, 0, sizeof(cmsg));
/*
* Receive the message.
*/
len = recvmsg(sv[0], &msg, 0);
if (len < 0)
err(EXIT_FAILURE, "recvmsg");
printf("Got %zu bytesn", len);
/*
* Print out credentials information, if received
* appropriately.
*/
if (cmsg.hdr.cmsg_type == SCM_CREDS) {
printf("UID: %" PRIdMAX "n",
(intmax_t)cmsg.cred.sc_uid);
printf("EUID: %" PRIdMAX "n",
(intmax_t)cmsg.cred.sc_euid);
printf("GID: %" PRIdMAX "n",
(intmax_t)cmsg.cred.sc_gid);
printf("EGID: %" PRIdMAX "n",
(intmax_t)cmsg.cred.sc_egid);
if (cmsg.cred.sc_ngroups > 0) {
int i;
printf("Supplementary groups:");
for (i = 0; i < cmsg.cred.sc_ngroups; i++)
printf(" %" PRIdMAX,
(intmax_t)cmsg.cred.sc_groups[i]);
printf("n");
}
} else
errx(EXIT_FAILURE, "Message did not include credentials");
close(sv[0]);
close(sv[1]);
return EXIT_SUCCESS;
} | https://jmmv.dev/2006/08/localcreds-socket-credentials.html | CC-MAIN-2020-16 | refinedweb | 565 | 53.37 |
The Tkinter framework provides some standard GUI widgets for use in building Graphical User Interfaces in Python. If you need more freedom you can use the Canvas widget included in Tkinter, which gives you an area where you can draw custom shapes.
The Tkinter framework provides some standard GUI widgets for use in building Graphical User Interfaces in Python. If you need more freedom you can use the Canvas widget included in Tkinter, which gives you an area where you can draw custom shapes.
If you're unfamiliar with the basics of using Tk in Python, brush up by taking a quick read of a previous Quickstart to building GUI based applications in Python.
We'll start with a basic Tk application. This will create a window containing a Canvas widget 300 pixels by 300. Note: you can type all the commands in this article into your interpreter, and the GUI window will be updated as you go.
from Tkinter import * root = Tk() canvas = Canvas(root, width=300, height=300) canvas.pack(fill=BOTH)
The resulting application doesn't look like much, you should have a square, grey window. More interesting is to draw shapes to the canvas -- the follow code, using the create_rectangle method of the canvas, draws a green square to the top-left corner:
square = canvas.create_rectangle(0,0,150,150, fill="green")
create_rectangle takes four positional arguments representing the top-left and bottom-right coordinates of the rectange, and then a list of optional named parameters. In this case we set the fill colour to green. For a full list, you can browse the documentation for Tkinter rectangles here.
Likewise, you can draw other shapes to the canvas. The create_oval canvas method works in the same way as the create_rectangle method, except that it draws the ellipse contained within the bounding rectangle. The following line draws a blue circle directly below the square:
circle = canvas.create_oval(0,150,150,300, fill="blue")
The next basic shape type to learn is the polygon, which allows you to draw objects of any shape. The create_polygon method takes in any number of positions and draws the shape that is formed by using all the positions as vertices. The following example draws a red diamond to the right of the square:
diamond = canvas.create_polygon(150,75,225,0,300,75,225,150, fill="red")
Polygons could have any amount of points, so you could have just as easily drawn a five, six or seven sided shape here, rather than a diamond.
Lastly you can add text to the canvas by using the create_text method. This method takes the centre point of the text object, then optional arguments including the colour. The most important of these arguments is
text which is the text to be written to the canvas. In the following example we write some simple text to the bottom-right corner of the window:
text = canvas.create_text(230,230, text="Tkinter canvas", fill="purple", font=("Helvectica", "16"))
We also manually set the font, to make the text appear larger. At their simplest, font's are simply a tuple containing a font name and size -- in this case, 16 point Helvectica.
Deleting items from the canvas
You may have noticed that we've been storing the value returned by the create methods used to add shapes to the canvas. Each creation method returns an object indentifier, which allows us to manipulate the objects after they have been added.
You can delete any item from the canvas by using canvas.delete:
canvas.delete(square) canvas.delete(text)
By adding and removing items from the canvas, you can create more sophisticated and customised feedback to users of your applications. The canvas is highly customisable and allows more complicated interactions with objects.
Stay tuned in the coming weeks when we'll explore freeform GUIs using Python in more detail. | https://www.techrepublic.com/article/tkinter-canvas-freeform-guis-in-python/ | CC-MAIN-2021-10 | refinedweb | 646 | 61.56 |
In March 2021, Arm introduced the next-generation Armv9 architecture with increasingly capable security and artificial intelligence (AI). This was followed by the launch of the new Arm Total Compute solutions in May, which include the first ever Armv9 CPUs. The biggest new feature that developers will see immediately is the enhancement of vector processing. It will enable increased machine learning (ML) and digital signal processing (DSP) capabilities across a wider range of applications. In this blog post, we share the advantages and benefits of version two of the Scalable Vector Extension (SVE2).
Figure 1. Extending Vector Processing for ML and DSP in Armv9 (from Arm Vision Day)
Applications that process large amounts of data can be sped up by taking advantage of parallel execution instructions, known as SIMD (Single Instruction Multiple Data) instructions. SVE was first introduced as an optional extension by Armv8.2 architecture, following the existing Neon technology. SVE2 was introduced for Armv9 CPUs as a feature extension of SVE. The main difference between SVE2 and SVE is the functional coverage of the instruction set. SVE was designed for High Performance Computing (HPC) and ML applications. SVE2 extends the SVE instruction set to enable data-processing domains beyond HPC and ML such as computer vision, multimedia, games, LTE baseband processing, and general-purpose software. We see SVE and SVE2 as an evolution of our SIMD architecture, bringing many useful features beyond those already provided by Neon.
The SVE2 design concept enables developers to write and build software once, then run the same binaries on different AArch64 hardware with various SVE2 vector length implementations, as the name suggests. Since some laptop and mobile devices have different vector lengths, SVE2 can reduce the cost of cross-platform support by sharing code. Removing the requirement to rebuild binaries allows software to be ported more easily. The scalability and portability of the binaries means that developers do not have to know and care about the vector length for their target devices. This particular benefit of SVE2 is more effective when the software is shared across platforms or used over an extended period of time.
In addition to that, SVE2 produces more concise and easier to understand assembler code than Neon. This significantly reduces the complexity of the generated code, making it easier to develop and easier to maintain. This provides an overall better developer experience.
So, how can you make the most of SVE2? There are several ways to write or generate SVE2 code:A library that uses SVE2
1) A library that uses SVE2
2) SVE2-enabled Compiler
3) SVE2 Intrinsics in C/C++
#include <arm_sve.h>
void saxpy(const float x[], float y[], float a, int n) {
for (int i = 0; i < n; i += svcntw()) {
svbool_t pg = svwhilelt_b32(i, n);
svfloat32_t vec_x = svld1(pg, &x[i]);
svfloat32_t vec_y = svld1(pg, &y[i]);
vy = svmla_x(pg, vy, vx, a);
svst1(pg, &y[i], vy);
}
}
Code 1. SVE2 Intrinsic Example
4) SVE2 Assembly
Code 2. SVE2 Assembly Example
If there are SVE2-enabled libraries that provide the functionality you need, then using them may be the easiest option. Assembly can generally give impressive performance for certain applications, but it is more difficult to write and maintain due to register management and readability. Another alternative approach is to use intrinsics, which generates appropriate SVE2 instructions and allows functions to be called from C/C++ code, thus improving readability. In addition to libraries and intrinsics, SVE2 allows you to let compilers auto-vectorize code, improving ease of use while maintaining high performance. More information about how to program for SVE2 can be found on this Arm Developer page.
SVE2 not only makes vector length scalable, but also has many other features. In this section, we will show you some examples of the benefit of using SVE2 and some of the new instructions that have been added.
Non-linear data-access patterns are common in a variety of applications. Many existing SIMD algorithms spend a lot of time re-arranging data structures into a vectorizable form. SVE2’s gather-load and scatter-store allows direct data transfer between non-contiguous memory locations and SVE2 registers.
Figure 2. Gather-Load and Scatter-Store
An example of a process that can benefit from this is FFT (Fast Fourier Transform). This operation is useful in many fields such as image compression and wireless communications. This scatter-store feature is ideal for the butterfly operation addressing used in FFT.
The SVE2 instruction set implements complex-valued integer operations. They are especially useful for operations with complex calculations such as quaternions used to represent orientation and rotation of objects in games. For example, the multiplication of signed 16-bit complex vectors in SVE2 assembly can be up to 62% faster than in Neon assembly. Below is the C code version of the vector multiplication. Similarly, the computation of an 8x8 inverse matrix using complex numbers was found to be about 13% faster.
struct cplx_int16_t {
int16_t re;
int16_t im;
};
int16_t Sat(int32_t a) {
int16_t b = (int16_t) a ;
if (a > MAX_INT16) b = 0x7FFF; // MAX_INT16 = 0x00007FFF
if (a < MIN_INT16) b = 0x8000; // MIN_INT16 = 0xFFFF8000
return b ;
}
void vecmul(int64_t n, cplx_int16_t * a, cplx_int16_t * b, cplx_int16_t * c) {
for (int64_t i=0; i<n; i++) {
c[i].re = Sat((((int32_t)(a[i].re * b[i].re) +
(int32_t)0x4000)>>15) -
(((int32_t)(a[i].im * b[i].im) +
(int32_t)0x4000)>>15));
c[i].im = Sat((((int32_t)(a[i].re * b[i].im) +
(int32_t)0x4000)>>15) +
(((int32_t)(a[i].im * b[i].re) +
(int32_t)0x4000)>>15));
}
}
Code 3. C code of Vector Multiply with Complex 16-bit Integer Elements
There are also several new instructions introduced in SVE2, such as bitwise permute, string processing, and cryptography. Among them, I would like to highlight the histogram acceleration instructions. Image histogram is widely used in the fields of computer vision and image processing, for example, by using libraries such as OpenCV. It can be used in techniques like image thresholding and image quality improvements. Modern day cameras and smartphones utilize this kind of information to calculate exposure control and white balance to provide better picture quality.
Figure 3. Image Histogram
The histogram acceleration instructions, newly introduced in SVE2, provide a count of two vector registers whose specific elements match. With these instructions, the histogram can be computed with fewer instructions and faster than before. For example, the histogram calculation is conventionally coded as below. In my experiment to check the capabilities of SVE2, compilers are not yet mature enough to recognize the loop pattern and pick up the newest instructions. Therefore, we prepared assembly code with the specialized instructions to allow this loop to be vectorized. The result was that the assembly optimized with SVE2 is about 29% faster than the C code compiled. Neon does not offer a way to vectorize this kind of process. The assembly code used to measure the performance in this section, as well as compiler version and options, can be found in the Appendix at the end of this blog.
void calc_histogram(unsigned int * histogram, uint8_t * records, unsigned int nb_records) {
for (unsigned int i = 0; i < nb_records; i++) {
histogram[records[i]] += 1;
}
}
Code 4. C code of Histogram Computation for an Image
SVE2 is great instruction set for computer vision, games and beyond. There are many other features that we have not mentioned here, so if you want to know more about SVE2 please have a look at this page. Also, more detailed information on SVE2 programming examples can be found here. In the future, there will be more examples of use cases and applications using SVE2 that are effective, not just the highlighting the differences in primitive operations. Also, compiler optimization should get better as time goes on.
The first SVE2-enabled hardware releases will be available at the start of 2022. We are really excited that SVE2 will provide better programmer productivity and enable enhanced ML and DSP capabilities across a wider range of devices and applications. We are very much looking forward to the wider deployment of SVE2.
Learn more about SVE2
Here is the assembly code used for the performance measurement in this blog post. I used clang version 12.0.0 to compile the code, and -march=armv8-a+sve2+sve2-bitperm as the compilation option. The platform environment is on a simulator using Cortex-A core with a 128-bit vector length.
size .req x0 // int64_t n
aPtr .req x1 // cplx_int16_t * a
bPtr .req x2 // cplx_int16_t * b
outPtr .req x3 // cplx_int16_t * c
aPtr_1st .req x4
bPtr_1st .req x5
outPtr_1st .req x6
count .req x7
PTRUE p2.h
LSL size, size, #1
DUP z31.s, #0
CNTH count
WHILELT p4.h, count, size
B.NFRST .L_tail_vecmul
ADDVL aPtr_1st, aPtr, #-1
ADDVL bPtr_1st, bPtr, #-1
ADDVL outPtr_1st, outPtr, #-1
.L_unrolled_loop_vecmul:
LD1H z0.h, p2/z, [aPtr_1st, count, LSL #1]
LD1H z2.h, p4/z, [aPtr, count, LSL #1]
LD1H z1.h, p2/z, [bPtr_1st, count, LSL #1]
LD1H z3.h, p4/z, [bPtr, count, LSL #1]
Code 5. Optimized SVE2 Assembler Code of Vector Multiply with Complex 16-bit Integer Elements
MOV w2, w2
MOV x4, #0
WHILELO p1.s, x4, x2
B.NFRST .L_return
.L_loop:
LD1B z1.s, p1/Z, [x1, x4]
LD1W z2.s, p1/Z, [x0, z1.s, UXTW #2]
HISTCNT z0.s, p1/Z, z1.s, z1.s
ADD z2.s, p1/M, z2.s, z0.s
ST1W z2.s, p1, [x0, z1.s, UXTW #2]
INCW x4
WHILELO p1.s, x4, x2
B.FIRST .L_loop
.L_return:
RET
Code 6. Vector Length Agnostic SVE2 Assembler Code of 8-bit Pixels Image Histogram | https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/sve2 | CC-MAIN-2021-49 | refinedweb | 1,596 | 57.37 |
Components and supplies
Apps and online services
About this project
You can also read this project on ElectroPeak's official website.
It would be fun to see how your Instagram posts perform in action! We are going to build a gauge that shows your likes per minute speed. In this article, you will learn how to get data from web pages by ESP8266 and send them to Arduino to analyze and run other actuators. At the end of this article, you can :
- Connect the ESP8266 to the internet and get data from web pages.
- Use Arduino to read ESP8266 data and analyze them.
- Get data from social media such as Instagram.
- Make a gadget that can show you the speed of Instagram’s likes.
An Introduction to ESP8266
Wireless interfacing, connecting to the web and remote controlling are features that can be very helpful in many projects.. It is one of the best solutions for adding wifi to projects and (but not the only one.)
This microchip comes with different types of module like ESP-01, ESP-12 or other development boards and breakouts like NodeMCU devkit, Wemos and Adafruit Huzzah. The difference is their pins, components needed for easier usage and also price. The microchip has 32 pins that 16 pins of it are GPIO; depending on the model, the number of GPIOs provided is different. For ESP-01 it is just two pins but other models like breakouts have all of them. When using ESP-8266, you will need a serial interface to communicate and program. Simple modules usually don’t have serial converter (FTDI is usually suggested but other converters can be used, too) and it should be provided separately. Regulators, built-in LEDs, and pull-up or down resistors are other features that some models may have; the lowest cost between all of these modules is for ESP-01 and it’s our choice now.
ESP-01 is the first module that comes for esp-8266 and it has just two GPIO pin and needs 3.3V for power. It doesn’t have a regulator, so make sure to have a reliable power supply. It doesn’t have a converter, therefore you need USB to TTL convertor. Converter for this module (and also other models of ESP) should be in 3.3V mode. The reason for this is the convertor will make 0 and 1 via pulses, and voltage of these pulses should be recognizable for ESP, so check this before buying. Because of the limited quantity GPIO pins and also their low current (12mA per each one), we may need more pins or more current; so we can easily use Arduino with a module to access its IO pins (another way to access more GPIO pins is wiring out a very thin wire on the chip to the pin headers you need, but its not a good and safe solution). If you don’t want to use another board, you can design or use a circuit to increase current.
In this project, We want to connect ESP-01 to the Internet and get some data from Instagram pages. Then we send the data to Arduino and after processing it, Arduino changes the location of Servo pointer according to data. Let’s do it.
Circuit
Code
First we write a code for ESP-01 to get data from Instagram pages and send them "Servo.h" Servo myservo; String inputString = ""; // a String to hold incoming data boolean stringComplete = false; // whether the string is complete long flike; long like; long mlike; void setup() { // initialize serial: Serial.begin(115200); myservo.attach(9); // reserve 200 bytes for the inputString: inputString.reserve(200); } void loop() { // print the string when a newline arrives: if (stringComplete) { flike=like; like=inputString.toInt(); Serial.println(like); // clear the string: inputString = ""; stringComplete = false; } mlike=like-flike; mlike=mlike*20; //Serial.print(mlike); if (mlike==0) {mlike = 0;} if (mlike==1) mlike = 20; if (mlike<=10 && mlike>1) mlike = map(mlike, 1, 10, 20, 50); if (mlike<=30 && mlike>10) mlike = map(mlike, 10, 30, 50, 70); if (mlike<=50 && mlike>30) mlike = map(mlike, 30, 50, 70, 90); if (mlike<=70 && mlike>50) mlike = map(mlike, 50, 70, 90, 110); if (mlike<=100 && mlike>70) mlike = map(mlike, 70, 100, 110, 130); if (mlike<=200 && mlike>100) mlike = map(mlike, 100, 200, 130, 150); if (mlike<=500 && mlike>200) mlike = map(mlike, 200, 500, 150, 170); if (mlike<=1000 && mlike>500) mlike = map(mlike, 500, 1000, 170, 180); myservo.write(mlike); //Serial.print(" "); //Serial.println(mlike); delay(15); } /* its time to upload the ESP-01 code. We want to use Arduino IDE to upload the sketch to ESP. Before uploading the code, you should select ESP board for IDE.
Go to File>Preferences and put in the additional boards. Then download and install it. Now you can see the ESP boards in Tools>Board. Select “Generic ESP8266 Module” and copy the code in a new sketch. Download the “InstagramStats” library and add it to IDE. Note that we have modified the library, So you should download it here. Then you should set USB to TTL Converter as Uploader hardware. Just plug the converter in and set the right port in Tools>Port. It’s ready to Upload.
#include "InstagramStats.h" #include "ESP8266WiFi.h" #include "WiFiClientSecure.h" #include "JsonStreamingParser.h" char ssid[] = "Electropeak.com"; // your network SSID (name) char password[] = "electropeak1928374650"; // your network key WiFiClientSecure client; InstagramStats instaStats(client); unsigned long delayBetweenChecks = 1000; //mean time between api requests unsigned long whenDueToCheck = 0; //Inputs String userName = "arduino.cc"; // Replace your Username void setup() { Serial.begin(115200); WiFi.mode(WIFI_STA); WiFi.disconnect(); delay(100); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); } IPAddress ip = WiFi.localIP(); } void getInstagramStatsForUser() { InstagramUserStats response = instaStats.getUserStats(userName); Serial.println(response.followedByCount); } void loop() { unsigned long timeNow = millis(); if ((timeNow > whenDueToCheck)) { getInstagramStatsForUser(); whenDueToCheck = timeNow + delayBetweenChecks; } }
Assembling
Upload the code and wire up the circuit according to the picture. Now it’s time to make a frame for this circuit. we used a laser cutting machine to make a frame with plexiglass and designed a gauge sketch to stick on it. We have also made a pointer for the gauge with paper.
After assembling, just plug in the power supply and see the speed of likes.
What’s Next?
You can improve this project as you wish. Here are a few suggestions:
- Change the InstagramStats library to receive other data such as the number of followers and so on.
- Change the speed of getting data to decrease your internet utilization.
- Try to get the data from videos posts on Instagram.
Code
InstagramStats.zipC/C++
No preview (download only).
Author
ElectroPeak
- 9 projects
- 127 followers
Published onNovember 14, 2018
Members who respect this project
you might like | https://create.arduino.cc/projecthub/electropeak/instagram-likes-speedometer-with-arduino-and-esp8266-8957b7 | CC-MAIN-2019-13 | refinedweb | 1,133 | 64.3 |
Twisted.
Twisted is an asynchronous framework. This means standard database modules cannot be used directly, as they typically work something like:
#.
adbapi will do blocking database operations in separate threads, which trigger callbacks in the originating thread when they complete. In the meantime, the original thread can continue doing normal work, like servicing other requests.:
Now we can do a database query:
# adbapi.Transaction, which basically mimics a DB-API cursor.
In all cases a database transaction will be committed after your database usage is finished, unless an exception is raised in which case it will be rolled back.
def parameter munging –
runQuery(query, params, ...) maps directly onto
cursor.execute(query, params, ...).") | http://twistedmatrix.com/documents/current/core/howto/rdbms.html | CC-MAIN-2016-26 | refinedweb | 112 | 50.94 |
Functions in the Kernel namespace control RTOS kernel information. More...
Attach a function to be called by the RTOS idle task.
Attach a function to be called when a thread terminates.
Read the current RTOS kernel millisecond tick count.
The tick count corresponds to the tick count the RTOS uses for timing purposes. It increments monotonically from 0 at boot, so it effectively never wraps. If the underlying RTOS only provides a 32-bit tick count, this method expands it to 64 bits.
Kernel::Clock::now()to get a chrono time_point instead of an integer millisecond count.
Maximum duration for Kernel::Clock::duration_u32-based APIs.
Definition at line 114 of file Kernel.h. | https://os.mbed.com/docs/mbed-os/v6.0/mbed-os-api-doxy/namespacertos_1_1_kernel.html | CC-MAIN-2021-04 | refinedweb | 113 | 60.82 |
Is it normal for "new VideoCapture()" to take AGES?
I've built an inventory robot for a warehouse. It has two IP cameras (Vivotek CC8130). Its onboard computer is a FitPC, the operator console is a random Lenovo notebook. Both computers run OpenCV 2.4.9 on Debian GNU/Linux Jessie/sid. The onboard program and the operator console share the same codebase. They are written in Java 1.6.
In principle, everything works. I can open a video stream into the cameras from both computers. Both "http" and "rtsp" work. Once the stream is open, performance is reasonable....
...but what absolutely puzzles me: on both computers, it takes AGES (something like one minute per camera) to perform the following operation (both for RTSP and HTTP, but HTTP seems to take especially long):
logger.info("Attempting to open video capture for mast camera."); vcMast = new VideoCapture(""); // VideoCapture vcMast = new VideoCapture("rtsp://127.0.0.1:5554/live2.sdp");
This occurs every time. It occurs when connecting directly and likewise when going through SSH tunnels (the sample code shows "127.0.0.1" for the camera IP, because it was copied during SSH testing). To provide a little context of what I'm doing:
// Import libraries. import org.opencv.core.Core; import org.opencv.core.Mat; import org.opencv.highgui.Highgui; import org.opencv.highgui.VideoCapture; /.../ public class ClientMain { /.../ // Initialize OpenCV static{ System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } /.../ public VideoCapture vcMast = null; public VideoCapture vcFloor = null; /.../ logger.info("Welcome to OpenCV " + Core.VERSION); /.../ try { logger.info("Attempting to open video capture for mast camera."); vcMast = new VideoCapture(""); // VideoCapture vcMast = new VideoCapture("rtsp://127.0.0.1:5554/live2.sdp"); /.../
....and here it hangs for a whole minute, before getting to:
if (vcMast.isOpened())
P.S. What I notice: when it finally gets unstuck, and I get to call "vcMast.read(frame)", where "frame" is an object of type "Mat"... I immediately get a bunch of past frames, as if they had been accumulating in a buffer somewhere. Once the buffer is spent, I start getting fresh frames.
P.S. I have by now measured that RTSP works significantly better than HTTP. Unfortunately, I cannot get RTSP to work via SSH tunnels.
I know that for "ffmpeg" (using which, as far as I know, OpenCV does its RTSP work) accepts the command line option "-rtsp_transport tcp", which makes it use TCP. Does anyone know, is there a way to specify this in OpenCV, especially when using the Java API?
Hi, from me experience with IP cameras they do take awhile to connect with OpenCV, from memory it would take about 30 seconds. | https://answers.opencv.org/question/43161/is-it-normal-for-new-videocapture-to-take-ages/ | CC-MAIN-2019-51 | refinedweb | 438 | 61.22 |
In this tutorial, you will learn how you can extract tables in PDF using camelot library in Python. Camelot is a Python library and a command-line tool that makes it easy for anyone to extract data tables trapped inside PDF files, check their official documentation and Github repository. Let's dive in !
Related tutorial: How to Convert HTML Tables into CSV Files in Python.
First, you need to install required dependencies for this library to work properly, and then you can install the library using the command line:
pip3 install camelot-py[cv]
Note that you need to make sure that you have Tkinter and ghostscript (which are the required dependencies) installed properly in your computer.
Now that you have installed all requirements for this tutorial, open up a new Python file and follow along:
import camelot # PDF file to extract tables from file = "foo.pdf"
I have a PDF file in the current directory called "foo.pdf" which is a normal page that contains one table shown in the following image:
Just a random table, let's extract it in Python:
# extract all the tables in the PDF file tables = camelot.read_pdf(file)
read_pdf() function extracts all tables in a PDF file, let's print number of tables extracted:
# number of tables extracted print("Total tables extracted:", tables.n)
This outputs:
Total tables extracted: 1
Sure enough, it contains only one table, printing this table as a Pandas DataFrame:
# print the first table as Pandas DataFrame print(tables[0].df)
Output:
0 1 2 3 4 5 6 0 Cycle \nName KI \n(1/km) Distance \n(mi) Percent Fuel Savings 1 Improved \nSpeed Decreased \nAccel Eliminate \nStops Decreased \nIdle 2 2012_2 3.30 1.3 5.9% 9.5% 29.2% 17.4% 3 2145_1 0.68 11.2 2.4% 0.1% 9.5% 2.7% 4 4234_1 0.59 58.7 8.5% 1.3% 8.5% 3.3% 5 2032_2 0.17 57.8 21.7% 0.3% 2.7% 1.2% 6 4171_1 0.07 173.9 58.1% 1.6% 2.1% 0.5%
That's precise, let's export the table to a CSV file:
# export individually tables[0].to_csv("foo.csv")
Or if you want to export all tables in one go:
# or export all in a zip tables.export("foo.csv", f="csv", compress=True)
f parameter indicates the file format, in this case "csv". By setting compress parameter equals to True, this will create a ZIP file that contains all the tables in CSV format.
You can also export the tables to HTML format:
# export to HTML tables.export("foo.html", f="html")
or you can export to other formats such as JSON and Excel too.
It is worth to note that Camelot only works with text-based PDFs and not scanned documents. If you can click and drag to select text in your table in a PDF viewer, then it is a text-based PDF, so this will work on papers, books, documents and much more!
So this won't convert image characters to digital text, if you wish so, you can use OCR techniques to convert image optical characters to actual text that can be manipulated in Python.
Alright, this is it for this tutorial, check their official documentation for more information.
Read also: How to Convert Speech to Text in Python.
Happy Coding ♥View Full Code | https://www.thepythoncode.com/article/extract-pdf-tables-in-python-camelot | CC-MAIN-2020-16 | refinedweb | 571 | 69.41 |
Learning Java/Applets
Contents
See also[edit]
Introduction
Applets are Java programs that are used in Internet computing. They can be viewed using an applet viewer or any browser. An applet can perform functions like displaying graphics, animation, accept user input, etc. Applet is derived either from Applet or the newer JApplet class. Both inherit from Container, so if you know how to build JFrame or Frame (used in standalone applications), you largely know how to build the applet. Also, for many simple applets, it is common to just register the mouse listener on the same applet, call repaint() from mouse events when required and provide the paint(Graphics g) method to draw the applet how it must look at the given moment of time.
Applets are different from applications
They are not full featured application programs. They basically are developed for small tasks. There are many restrictions with applets.
- Applets do not use the main() method of Java (the traditional Java programs are required to do so). Method init() is called on startup and must setup the applet. The rest of activity usually happen in various event listeners, registered by init() on the applet components.
- Applets can not read or write to a local computer. This feature provides security to the applets from the local computer and to the local computer from applets.
- Applets can't communicate with other services of the network, apart the originating server.
- Applets can't use other language libraries like C and C++. Traditional programs can do so using so-called native methods.
Applet security restrictions can be lifted up by creating a so-called signed applet that verifies your identity through an independent authority server. However, it is complex and expensive to do this in a proper way, and if done wrongly (like self-signing) the signature can make the applet look untrustworthy.
Simple example[edit]
The following example is made simple enough to illustrate the essential use of Java applets through its java.applet package. (UTF-8 also works) with the same name as the class and .java extension, i.e. HelloWorld.java. The resulting HelloWorld.class applet should be installed on the web server and is invoked within an HTML page by using an <APPLET> or an <OBJECT> detail in Sun's official page about the APPLET tag.[1]. When you complete your first functional applet, you likely will want to share it somewhere. Unlike pictures, applets are currently not accepted in Wikipedia, but there are some alternative initiatives like Ultrastudio.org. Many applets are also deployed at SourceForge.net project pages. Of course, you can also have your own website, but there your applet may be more difficult to find.
Example with mouse listener[edit]
The following applet does all activities that majority of educational applets need: it responds to mouse clicks and drags and orders to repaint itself after the 100 ms in order to reflect the mouse manipulations. Applets can also interact with the keyboard, but this is less common. The applet below shows the mouse position in black if moved, in blue if dragged and repaints in red if clicked.
import java.applet.Applet; import java.awt.Color; import java.awt.Graphics; import java.awt.event.MouseEvent; import java.awt.event.MouseListener; import java.awt.event.MouseMotionListener; public class HelloMouse extends Applet implements MouseMotionListener, MouseListener { // The "applet state" int x = -1; int y = -1; Color color = Color.BLACK; // Register mouse listener here. Mouse listeners can be the // same class as the applet if the listener methods // are added. public void init() { // Forwared mouse movements to mouseMoved, mouseDragged addMouseMotionListener(this); // Forwared mouse clicks. addMouseListener(this); } // This method is mandatory, but can be empty. public void stop() {} // Print a message on the screen (x=20, y=10). public void paint(Graphics g) { g.setColor(color); g.drawString("The mouse is at "+x+","+y, 20, 10); } public void mouseDragged(MouseEvent e) { x = e.getX(); y = e.getY(); color = Color.BLUE; repaint(100); // Repaint after 100 ms, } public void mouseMoved(MouseEvent e) { x = e.getX(); y = e.getY(); color = Color.BLACK; repaint(100); // Repaint after 100 ms, } } public void mouseClicked(MouseEvent e) { color = Color.RED; repaint(100); } public void mouseEntered(MouseEvent e) {} public void mouseExited(MouseEvent e) {} public void mousePressed(MouseEvent e) {} public void mouseReleased(MouseEvent e) {} }
Mouse listeners allow to detect not just the mouse manipulations but also when the mouse enters or leaves the applet area. The MouseEvent structure, that is passed to every method of the listener, contains information about the coordinates of the mouse pointer and also which button has been pressed.
Example with timer[edit]
The following example shows how register a timer to change applets on "its own initiative" after the programmed period of time. Timer is one of the basic elements of animations, non-interactive demonstrations and computer games. An applet can implement both ActionListener and mouse listeners, combining periodic actions with responses to the user manipulation.
import java.applet.Applet; import java.awt.Graphics; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.Timer; public class HelloTimer extends Applet implements ActionListener { // The "applet state" that advances up every second. int x = 0; Timer timer; // Register mouse listener here. Mouse listeners can be the // same class as the applet if the listener methods // are added. public void init() { // Create javax.swing.Timer that fires action events every second // The second parameter is the action listener. As this applet implements // ActionListener, we can pass "this" here. timer = new Timer(1000, this); timer.start(); } // This method is mandatory, but can be empty. public void stop() {} // Print a message on the screen (x=20, y=10). public void paint(Graphics g) { g.drawString("Counting: "+x, 20, 10); } // This method is called by the timer. public void actionPerformed(ActionEvent e) { x = x + 1; repaint(10); // Repaint in 10 ms; } }
Events listener methods, called by javax.swing.Timer, run in a Swing thread. This means, it is safe to do Swing manipulations like setting texts for labels, etc., without using InvokeAndWait. Mind that there are more classes named "Timer" in other packages of Java system library, so be sure you are importing the right one.
References[edit]
- ↑ Java.Sun.com Sun's APPLET tag page | https://en.wikiversity.org/wiki/Learning_Java/Applets | CC-MAIN-2019-43 | refinedweb | 1,040 | 58.99 |
We're a place where coders share, stay up-to-date and grow their careers.
I have come up with this
sumPair :: Num a => (a, a) -> a
sumPair (x, y) = x + y
isMoreThanOnce :: Eq a => a -> [a] -> Bool
isMoreThanOnce x xs = (>1).length.(filter (==x)) $ xs
createPairs :: (Num a, Eq a) => [a] -> a -> [(a, a)]
createPairs xs x
| isMoreThanOnce x xs = tuples.addX.whithoutX $ xs
| otherwise = tuples.whithoutX $ xs
where whithoutX = filter (/=x)
tuples = map ((,) x)
addX = (x:)
createAllPairs :: (Num a, Eq a) => [a] -> [(a, a)]
createAllPairs xs = flatmap (createPairs xs) xs
problem :: (Num a, Eq a) => a -> [a] -> Bool
problem k xs = elem k.map sumPair.createAllPairs $ xs
Here's another solution in Haskell
import qualified Data.Set as Set
solve :: Integral a => a -> [a] -> Bool
solve = solve' Set.empty
solve' :: Integral a => Set.Set a -> a -> [a] -> Bool
solve' _ _ [] = False
solve' possibleAddends targetSum (x:xs)
| Set.member x possibleAddends = True
| otherwise = solve' (Set.insert addend possibleAddends) targetSum xs
where addend = targetSum - x.
I have come up with this
Here's another solution in Haskell | https://dev.to/juancuiule/comment/758e | CC-MAIN-2022-21 | refinedweb | 178 | 74.9 |
I want to compile a custom c++ extension that uses functionality from libpng. This is what my current
jit.py file looks like:
from torch.utils.cpp_extension import load from glob import glob import os os.environ["CC"] = "gcc-11" os.environ["CXX"] = "g++-11" sources = glob("src/*.cpp") font_renderer = load(name="font_renderer", sources=sources, extra_cflags=["-std=c++17"], verbose=True) help(font_renderer)
If I try to run this, I get an error with the first libpng function that I try and use,
undefined symbol: png_get_image_width
Since I want to
#include <png.h> in one of my source files, how do I go about adding this simple dependency?
Thank you. | https://discuss.pytorch.org/t/how-to-jit-compile-c-extension-with-libpng/156794 | CC-MAIN-2022-33 | refinedweb | 110 | 60.01 |
Clear the confusion and add tags and tag clouds to your Oracle+Rails application.
Published June 2007
Social computing has taken the Internet by storm in the last few years and one of the signatures of the trend has been the notion of "tagging." While tagging is certainly not new, its latest incarnation is novel in its application—at least as far as the Web application world is concerned. Sharing tag data has allowed users of the latest round of Web applications to search for and share data like never before.
Tagging by itself won't add incredible functionality to your Rails application, but the features you build on top of tagging can add a layer of richness to your user's experience. You might find yourself quickly becoming addicted to new features that leverage your simple little strings.
This article will show you just how easy it is to add tag functionality to your site through the use of the acts_as_taggable_on_steroids plugin.
Believe it or not, the hardest part of adding tags to your Rails application is figuring out which of the libraries to use. There are a number of competing implementations of the acts_as_taggable idea. At the time of writing there are at least four advertised ways to implement tagging in a Rails application:
acts_as_taggable gem. This gem was the first of the implementations written for Rails and as such, it's really meant for older versions of 1.0. It will work on the current Rails versions (1.1, 1.2 and Edge Rails—"Edge Rails" being a fancy term for running the very latest, or HEAD, version of Rails). The Gem has one really significant weakness that makes it a non-starter for usage in your Rails application. It requires a separate join table for every model you want to tag. If you're looking to simply tag one model, though, this may be the simplest solution. You can tag more than one model, but you're required to create a separate join table and association for each model tagged, adding a significant amount of overhead.
acts_as_taggable plugin. This plugin (as opposed to the gem) was written by David Heinemeier Hanson, the creator of Rails. It was implemented using some of the more advanced features available in the newer versions of Rails like :has_many_through and polymorphic associations. It's a good start, but Hanson himself admits that the plugin is only half-baked and the core Rails team has opted not to apply many of the patches that have been submitted by the Rails community. (There's even a reported SQL injection vulnerability that, to my knowledge, has not been patched in the official releases!) This is probably the easiest implementation to install and use, though, so it may be something you want to look at. It's accessible directly through the Rails SVN server, so all that's needed is to call
$ script/plugin install acts_as_taggable
Unfortunately, according to the Rails core developers, the plugin was intended only as a proof of concept and there are
quite a few issues with it.
acts_as_taggable_on_steroids plugin. Although the name of the plugin sounds like it should be additional functionality built atop the original acts_as_taggable plugin, it's really not. It is a rewrite of the original plugin and it does add tests and some additional functionality. And adding to the extreme confusion is that the original author of the acts_as_taggable gem posted a blog entry called
"Tagging on Steroids with Rails". Whew! (We'll focus on this plugin, since the "official" Rails plugin
isn't intended for production use.)
has_many_polymorphs plugin. A relative newcomer on the block, has_many_polymorphs is a more abstract, very powerful plugin that can be used for adding tags to your model, although it's not built specifically for that purpose. I'll explain some of the problems that the has_many_polymorphs plugin solves later; for now, we'll favor the acts_as_taggable_on_steroids due to its more straightforward nature and ease of use for developers new to Rails.
Polymorphic associations are an advanced Rails topic, and although they are used in the acts_as_taggable_on_steroids plugin, they're used as plumbing and it's not necessary to fully understand them.
First, read my article
"Guide to Ruby on Rails Migrations". In that article readers learn how to use Rails Migrations to create a database schema for an online social music cataloging application called Discographr. You'll be building on that application for the rest of this article.
Let's try out the acts_as_taggable_on_steroids plugin first. It's not included in the official Rails SVN so you'll need to find it first. Rails comes with a handy script/plugin command that you can use to find the plugin you're looking for:
$ script/plugin discover -l | grep viney
$ script/plugin source
$ script/plugin install acts_as_taggable_on_steroids
$ script/plugin install \
Take a quick look to make sure the plugin was in fact installed in your RAILS_ROOT/vendor/plugin directory.
Next you need to make the changes to the database to support the plugin and allow you to add tags to your models. To do this you'll use Migrations, just as in "The Great Ruby Migration"):
$ script/generate migration AddTags
exists db/migrate
create db/migrate/004_add_tags.rb
Now edit the generated migration file you just created, 004_add_tags.rb, adding this to the self.up method:
def self.up
create_table :tags do |t|
t.column :name, :string
end
create_table :taggings do |t|
t.column :tag_id, :integer
t.column :taggable_id, :integer
t.column :taggable_type, :string
t.column :created_at, :datetime
end
end
Look in RAILS_ROOT/vendor/plugins/acts_as_taggable_on_steroids/test and you'll find all the tests written for the plugin. Tests not only serve as an excellent safety net that allows you to feel much freer to change your code, knowing that you can always run rake test and see what breaks, but they also provide for excellent example usage of a library. It's almost as good as built in documentation! So, you can tell exactly what the plugin requires to function by looking in the schema.rb file under the test directory.
Next, you need to run the migration to actually add the required tables to the schema:
$ rake db:migrate
(in /Users/mattkern/dev/discographr)
== AddTags: migrating =========================================================
-- create_table(:tags, {:force=>true})
-> 0.7773s
-- create_table(:taggings, {:force=>true})
-> 1.0804s
== AddTags: migrated (1.8754s) ===============================================
Now it's time to use the added functionality. The first thing you need to do is add an acts_as helper to your models. In Discographr you want all the models to be taggable, so you'll need to add a single line to each of the models. Rails uses acts_as helpers to add functionality or extend the ActiveRecord::Base class and subclasses. So, for example, acts_as_taggable adds tagging functionality to your models in a single call (theoretically), while acts_as_tree adds hierarchical tree functionality to your models. Start off by adding tags to the Artist model.
class Artist < ActiveRecord::Base
acts_as_taggable
end
If you've worked with Rails in the past you know that the :has_many, :has_one, and :has_and_belongs_to_many methods add an awful lot of methods to your models. Since the plugin relies on the :has_many method and polymorphic associations, you get all that functionality by virtue of declaring your model as acts_as_taggable. You'll see those added methods in action later.
So, go ahead and add tags to the other models in Discographr. You want all of the models you've already defined to be taggable so you'll add the acts_as_taggable call to the Album and Song models:
class Album < ActiveRecord::Base
belongs_to :artist
acts_as_taggable
end
class Song < ActiveRecord::Base
belongs_to :album
acts_as_taggable
end
Now that all your models are taggable, you can start using the tag information in your application. As you haven't really written any controllers at this point, you'll use the handy script/console to test-drive your newly taggable models. Fire up the console by issuing:
$ script/console
>>
a = Artist.new( :name => "Bob Dylan" )
=> #<Artist:0x35589dc @new_record=true, @attributes={"created_on"=>nil, "name"=>"Bob Dylan", "updated_on"=>nil}>
>>
a.save
=> true
>>
alb = Album.new( :release_name => "Good As I Been To You",
?>
:artist => a,
?>
:year => 1992 )
=> #<Album:0x3500ef8 @new_record=true, @attributes={"created_on"=>nil, "artist_id"=>10001,
"updated_on"=>nil, "year"=>1992, "release_name"=>"Good As I Been To You"}, ={}>>>
>>
alb.save
=> true
>>
s = Song.new( :title => "Froggie Went A-Courtin'",
?>
:length => 386,
?>
:track_number => 13,
?>
:album => alb )
=> #<Song:0x34450f4 @new_record=true, @album=#<Album:0x3500ef8 @new_record=false,
@new_record_before_save=true, @attributes={"created_on"=>Sun Feb 18 22:50:47 PST 2007, "artist_id"=
10001, "updated_on"=>Sun Feb 18 22:50:47 PST 2007, "id"=>10000, "year"=>1992, "release_name"=>"Good As
I Been To You"}, @errors=#<ActiveRecord::Errors:0x34f65d4 @base=#<Album:0x3500ef8 ...>,
@errors={}>, ={}>>>, @attributes={"created_on"=>nil, "track_number"=>13, "title"=
"Froggie Went A-Courtin'", "updated_on"=>nil, "album_id"=>10000, "length"=>386}>
>>
s.save
=> true
Now, create a few tags. Remember, the plugin defined the Tag model for you so you can create tags just like any other model you might define:
>>
t1 = Tag.new( :name => "male vocalist" )
=> #<Tag:0x3361700 @new_record=true, @attributes={"name"=>"male vocalist"}>
>>
t1.save
=> true
>>
t2 = Tag.new( :name => "children's songs" )
=> #<Tag:0x333a150 @new_record=true, @attributes={"name"=>"children's songs"}>
>>
t2.save
=> true
>>
s.tags << t1
=> [#<Tag:0x3361700 @new_record=false, @new_record_before_save=true, @attributes={"name"=>"male
vocalist", "id"=>10000}, @errors=#<ActiveRecord::Errors:0x335133c @base=#<Tag:0x3361700 ...>,
@errors={}>>]
>>
s.tags << t2
=> [#={}>>]
>>
s.tags.count
=> 2
The << method (remember that in Ruby everything called on an object is a method, << included) was added to your Artist model when you added the acts_as_taggable helper. Recall that it adds an :has_many :tags helper to your model thereby adding the append operator (method!). So, you've just added two tags to your "Bob Dylan" artist object.
>>
a.tags
=> [#={}>>]
>>
a.tags.count
=> 2
But, it gets better. The acts_as_taggable_on_steroids plugin adds quite a few convenience methods you can use to make development even easier and quicker. The plugin is designed to replicate the functionality provided by the original acts_as_taggable plugin. This means that you can do things like:
>>
a2 = Artist.new( :name => "The Flaming Lips" )
=> #<Artist:0x326d600 @new_record=true, @attributes={"created_on"=>nil,
"name"=>"The Flaming Lips", "updated_on"=>nil}>
>> a2.save
=> true
>> a2.tag_list = "alternative, male vocalist"
=> "alternative, male vocalist"
>> a2.save
=> true
>> a2.tags.count
=> 2
One of the most common approaches to visualizing tag data in most social and folksonomy based applications is the notion of a tag cloud. You've seen these before on sites like Flickr and del.icio.us. A tag cloud is basically a visual representation of the tags present in a system sized according to popularity (in other words, frequency). The acts_as_taggable plugin provides a convenient method for calculating tag frequencies called tag_counts. Unfortunately, the developer of the plugin must have had a MySQL bias because the definition of the tag_counts method includes a find_by_sql call that uses invalid SQL with aggregation functions. Oracle follows the standard and requires that any non-aggregate expressions must be included in the GROUP BY clause. Thus tag_counts breaks even though Oracle is doing the right thing!
There's still hope though; simply add the following to the end of the init.rb file found in the plugin at RAILS_ROOT/vendor/plugins/act_as_taggable_on_steroids:
require File.dirname(__FILE__) + '/lib/acts_as_taggable_override'
module ActiveRecord
module Acts #:nodoc:
module Taggable #:nodoc:
module SingletonMethods
def tag_counts(options = {})
logger.debug{"Using overriden tag_counts method"}
options.assert_valid_keys :start_at, :end_at, :conditions, :at_least, :at_most, :order
start_at = sanitize_sql(['taggings.created_at >= ?', options[:start_at]]) if options[:start_at]
end_at = sanitize_sql(['taggings.created_at <= ?', options[:end_at]]) if options[:end_at]
options[:conditions] = sanitize_sql(options[:conditions]) if options[:conditions]
conditions = [options[:conditions], start_at, end_at].compact.join(' and ')
at_least = sanitize_sql(['count(*) >= ?', options[:at_least]]) if options[:at_least]
at_most = sanitize_sql(['count(*) <= ?', options[:at_most]]) if options[:at_most]
having = [at_least, at_most].compact.join(' and ')
order = "order by #{options[:order]}" if options[:order]
limit = sanitize_sql(['limit ?', options[:limit]]) if options[:limit]
Tag.find_by_sql <<-END
select tags.id, tags.name, count(*) as count
from tags left outer join taggings on tags.id = taggings.tag_id
left outer join #{table_name} on #{table_name}.id = taggings.taggable_id
where taggings.taggable_type = '#{name}'
#{"and #{conditions}" unless conditions.blank?}
group by tags.id, tags.name
having count(*) > 0 #{"and #{having}" unless having.blank?}
#{order}
END
end
end
end
end
end
>>
Artist.tag_counts
=> [#<Tag:0x34495b4 @attributes={"name"=>"male vocalist", "id"=>10000,
"count"=>2.0}>, #<Tag:0x3449578 @attributes={"name"=>"children's songs",
"id"=>10001, "count"=>1.0}>, #<Tag:0x344953c @attributes={"name"=>"alternative",
"id"=>10002, "count"=>1.0}>]
The tag_counts method allows you to add conditions like :start_at, :end_at, :at_least, :at_most and :order. These allow you to make queries like:
Artist.tag_counts(:start_at => 7.days.ago) # retrieves tags for added in the last 7 days
Artist.tag_counts(:at_least => 10) # retrieves tags that have been used 10 or more times.
Artist.tag_counts(:order => "name") # order the array of tags by name alphabetically
$ script/generate scaffold Artist Catalog
$ script/server
In order to make your tag cloud method available to our entire application you'll put the following code in the RAILS_ROOT/app/helpers/application_helper.rb file. You could put it in the catalog_helper.rb file, but you'll probably want to use the tag_cloud outside the catalog controller, too.
def tag_cloud(tag_counts)
ceiling = Math.log(tag_counts.max { |a,b| a.count <=> b.count }.count)
floor = Math.log(tag_counts.min { |a,b| a.count <=> b.count }.count)
range = ceiling - floor
tag_counts.each do |tag|
count = tag.count
size = (((Math.log(count) - floor)/range)*66)+33
yield tag, size
end
end
Next you'll have to add your CSS. As you used the scaffold command to generate a basic view you can add to the scaffold.css file under RAILS_ROOT/public/stylesheets:
.tagCloud {
margin: 10px;
font-size: 40px;
}
.tagCloud li {
display:inline;
}
Finally, now that the plumbing is in place, you need to add the code to display the tag cloud in a view. Again, since you used the script/generate scaffold command the necessary views and controller actions have been created for you. Let's go ahead and add the tag cloud to the list action and view. First, change Catalog_Controller's list action in RAILS_ROOT/app/controllers/catalog_controller.rb to:
def list
@artists = Artist.find(:all)
@artist_tag_count = Artist.tag_counts()
end
So, now that the controller has the data needed to create the tag cloud, finish up with the view. Open up the default view for the list action at RAILS_ROOT/app/views/catalog/list.rhtml. list.rhtml was automatically generated by the scaffold command along with the catalog controller. Add the following to the end of the list.rhtml file (after the last line):
<br />
&ol
<% tag_cloud(@artist_tag_count.sort_by {|t| t.name}) do |tag, size| %>
<li><%= link_to h("#{tag.name}"), { :action => "show_tag/#{tag.name}" },
{ :style => "font-size: #{size}%"}%></li>
<% end %>
</ol>
<%= link_to 'Previous page', { :page => @artist_pages.current.previous } if @artist_pages.current.previous %>
<%= link_to 'Next page', { :page => @artist_pages.current.next } if @artist_pages.current.next %>
Finally you need to add the show_tag action that is referenced as the action to which your tags link in the cloud. Add this last action to the end of the RAILS_ROOT/app/controllers/catalog_controller.rb file:
def show_tag
@artists = Artist.find_tagged_with(params[:id])
@artist_tag_count = Artist.tag_counts()
render :template => "catalog/list"
end
After all that your list.rhtml file should read as follows:
<h1>Listing artists</h1>
<table>
<tr>
<% for column in Artist.content_columns %>
<th><%= column.human_name %></th>
<% end %>
</tr>
<% for artist in @artists %>
<tr>
<% for column in Artist.content_columns %>
<td><%=h artist.send(column.name) %></td>
<% end %>
<td><%= link_to 'Show', :action => 'show', :id => artist %></td>
<td><%= link_to 'Edit', :action => 'edit', :id => artist %></td>
<td><%= link_to 'Destroy', { :action => 'destroy', :id => artist },
:confirm => 'Are you sure?', :method => :post %></td>
</tr>
<% end %>
</table>
<br />
<%= link_to 'New artist', :action => 'new' %>
<br />
<ol class="tagCloud">
<% tag_cloud(@artist_tag_count.sort_by {|t| t.name}) do |tag, size| %>
<li><%= link_to h("#{tag.name}"), { :action => "show_tag/#{tag.name}" },
{ :style => "font-size: #{size}%"}%></li>
<% end %>
</ol>
class CatalogController < ApplicationController
def index
list
render :action => 'list'
end
# GETs should be safe (see)
verify :method => :post, :only => [ :destroy, :create, :update ],
:redirect_to => { :action => :list }
def list
@artists = Artist.find(:all)
@artist_tag_count = Artist.tag_counts()
end
def show
@artist = Artist.find(params[:id])
end
def new
@artist = Artist.new
end
def create
@artist = Artist.new(params[:artist])
if @artist.save
flash[:notice] = 'Artist was successfully created.'
redirect_to :action => 'list'
else
render :action => 'new'
end
end
def edit
@artist = Artist.find(params[:id])
end
def update
@artist = Artist.find(params[:id])
if @artist.update_attributes(params[:artist])
flash[:notice] = 'Artist was successfully updated.'
redirect_to :action => 'show', :id => @artist
else
render :action => 'edit'
end
end
def destroy
Artist.find(params[:id]).destroy
redirect_to :action => 'list'
end
def show_tag
@artists = Artist.find_tagged_with(params[:id])
@artist_tag_count = Artist.tag_counts()
render :template => "catalog/list"
end
end
Now going to should display a list of Artists with CRUD functionality along with a tag cloud beneath it:
Clicking on the tags should display a filtered list of artists by tag with the same tag cloud below.
That is but one example of the experience that tags can bring to your application. Tag clouds can convey a great amount of meaning in a very simple format—the mark of a powerful user interface.
There are quite a few shortcomings to the acts_as_taggable_on_steroids plugin, as you've seen already. As of this writing, recently the has_many_polymorphs plugin has started to gain acceptance as a more powerful replacement for the acts_as_taggable plugins. As is the case with many solutions that gain flexibility and power, it is a bit more abstract than the plugins we've looked at in this article. As such the plugin is not as much an out-of-the-box, straightforward solution for tagging but it does an excellent job of avoiding many of the inherent challenges of the acts_as_taggable_on_steroids plugin—including potential model definition clashes, incompatibility with Oracle without tweaking, inflexible tag separators, and perhaps most important, the inability to easily query across models for a given tag.
That said, the acts_as_taggable_on_steriods plugin is a very powerful and simple to use extension for any Rails application.
Matt Kern has been searching for and developing ways to make life easier through technologies like Rails for years—mostly an attempt at finding ways to spend ever more time roaming the mountains of Central Oregon with his family. He is the founder of Artisan Technologies Inc. and co-founder of Atlanta PHP. | http://www.oracle.com/technetwork/articles/kern-rails-tagging-082955.html | CC-MAIN-2015-18 | refinedweb | 3,051 | 57.98 |
Join us in Chat. Click link in menu bar to join. Unofficial chat day is every Friday night (US time).
0 Members and 1 Guest are viewing this topic.
#include "sys/ATMega8.h"
Timer.h: 198: undefined reference to 'pgm_Timers' timer.c: 782: undefined reference to ' NUMBER_OF_TIMERS' core.c: (.text+0x8c): undefined reference to 'appInitSoftware'
I have wrote your "My first program" Hello world from the pdf file and came up with 25 errors.
Writer rprintfInit(Writer writer)
c:\program files\winavr\bin\..\lib\gcc\avr\4.1.1\..\..\..\..\avr\bin\ld.exe: region text is full
For the ATMega328P I believe I need to do some modification with the $50 robot board.
For the ATMega328P I believe I need to do some modification with the $50 robot board. | http://www.societyofrobots.com/robotforum/index.php?topic=9561.0 | CC-MAIN-2014-49 | refinedweb | 129 | 68.87 |
Bio::Graphics::FeatureFile -- A set of Bio::Graphics features, stored in a file
use Bio::Graphics::FeatureFile; my $data = Bio::Graphics::FeatureFile->new(-file => 'features.txt'); # create a new panel and render contents of the file onto it my $panel = $data->new_panel; my $tracks_rendered = $data->render($panel); # or do it all in one step my ($tracks_rendered,$panel) = $data->render; # for more control, render tracks individually my @feature_types = $data->types; for my $type (@feature_types) { my $features = $data->features($type); my %options = $data->style($type); $panel->add_track($features,%options); # assuming we have a Bio::Graphics::Panel } # get individual settings my $est_fg_color = $data->setting(EST => 'fgcolor'); # or create the FeatureFile by hand # add a type $data->add_type(EST => {fgcolor=>'blue',height=>12}); # add a feature my $feature = Bio::Graphics::Feature->new( # params ); # or some other SeqI $data->add_feature($feature=>'EST');
The Bio::Graphics::FeatureFile module reads and parses files that describe sequence features and their renderings. It accepts both GFF format and a more human-friendly file format described below. Once a FeatureFile object has been initialized, you can interrogate it for its consistuent features and their settings, or render the entire file onto a Bio::Graphics::Panel.
This module is a precursor of Jason Stajich's Bio::Annotation::Collection class, and fulfills a similar function of storing a collection of sequence features. However, it also stores rendering information about the features, and does not currently follow the CollectionI interface.
There are two types of entry in the file format: feature entries, and formatting entries. They can occur in any order. See the Appendix for a full example.
Formatting entries are in the form:
[Stanza Name] option1 = value1 option2 = value2 option3 = value3 [Stanza Name 2] option1 = value1 option2 = value2 ...
There can be zero or more stanzas, each with a unique name. The names can contain any character except the [] characters. Each stanza consists of one or more option = value pairs, where the option and the value are separated by an "=" sign and optional whitespace. Values can be continued across multiple lines by indenting the continuation lines by one or more spaces, as in:
[Named Genes] feature = gene glyph = transcript2 description = These are genes that have been named by the international commission on gene naming (The Hague).
Typically configuration stanzas will consist of several Bio::Graphics formatting options. A -option=>$value pair passed to Bio::Graphics::Panel->add_track() becomes a "option=value" pair in the feature file.
Feature entries can take several forms. At their simplest, they look like this:
Gene B0511.1 Chr1:516..11208
This means that a feature of type "Gene" and name "B0511.1" occupies the range between bases 516 and 11208 on a sequence entry named Chr1. Columns are separated using whitespace (tabs or spaces). Embedded whitespace can be escaped using quote marks or backslashes:
Gene "My Favorite Gene" Chr1:516..11208 (a #include located in a file that is itself an include file) are #allowed. You may also use one of the shell wildcard characters * and #? to include all matching files in a directory.
The following are examples of valid #include directives:
#include "/usr/local/share/my_directives.txt" #include 'my_directives.txt' #include chromosome3_features.gff3 #include gff.d/*.conf
You can enclose the file path in single or double quotes as shown above. If there are no spaces in the filename the quotes are optional. The #include directive is case insensitive, allowing you to use #INCLUDE or #Include if you prefer.
Include file processing is not very smart and will not catch all circular #include references. You have been warned!
The special comment "#exec 'command'" will spawn a shell and incorporate the output of the command into the configuration file. This command will be executed quite frequently, so it is suggested that any time-consuming processing that does not need to be performed on the fly each time should be cached in a local file.
Return the version number -- needed for API checking by GBrowse
Create a new Bio::Graphics::FeatureFile using @args to initialize the object. Arguments are -name=>value pairs:
Argument Value -------- ----- -file Read data from a file path or filehandle. Use "-" to read from standard input. -text Read data from a text scalar. -allow_whitespace If true, relax GFF2 and GFF3 parsing rules to allow columns to be delimited by whitespace rather than tabs. -map_coords Coderef containing a subroutine to use for remapping all coordinates. -smart_features Flag indicating that the features created by this module should be made aware of the FeatureFile object by calling their configurator() method. -safe Indicates that the contents of this file is trusted. Any option value that begins with the string "sub {" or \&subname will be evaluated as a code reference. -safe_world If the -safe option is not set, and -safe_world is set to a true value, then Bio::Graphics::FeatureFile will evalute "sub {}" options in a L<Safe::World> environment with minimum permissions. Subroutines will be able to access and interrogate Bio::DB::SeqFeature objects and perform basic Perl operations, but will have no ability to load or access other modules, to access the file system, or to make system calls. This feature depends on availability of the CPAN-installable L<Safe::World> module.
The -file and -text arguments are mutually exclusive, and -file will supersede the other if both are present.
-map_coords points to a coderef with the following signature:
($newref,[$start1,$end1],[$start2,$end2]....) = coderef($ref,[$start1,$end1],[$start2,$end2]...)
See the Bio::Graphics::Browser (part of the generic genome browser package) for an illustration of how to use this to do wonderful stuff.
The -smart_features flag is used by the generic genome browser to provide features with a way to access the link-generation code. See gbrowse for how this works.
If the file is trusted, and there is an option named "init_code" in the [GENERAL] section of the file, it will be evaluated as perl code immediately after parsing. You can use this to declare global variables and subroutines for use in option values.
Like new() but caches the parsed file in /tmp/bio_graphics_ff_cache_* (where * is the UID of the current user). This can speed up parsing tremendously for files that have many includes.
Note that the presence of an #exec statement always invalidates the cache and causes a full parse.
Return the modification time of the indicated feature file without performing a full parse. This takes into account the various #include and #exec directives and returns the maximum mtime of any of the included files. Any #exec directive will return the current time. This is useful for caching the parsed data structure.
Render features in the data set onto the indicated Bio::Graphics::Panel. If no panel is specified, creates one.
All arguments are optional.
$panel is a Bio::Graphics::Panel that has previously been created and configured.
$position_to_insert indicates the position at which to start inserting new tracks. The last current track on the panel is assumed.
$options is a scalar used to control automatic expansion of the tracks. 0=auto, 1=compact, 2=expanded, 3=expand and label, 4=hyperexpand, 5=hyperexpand and label.
$max_bump and $max_label indicate the maximum number of features before bumping and labeling are turned off.
$selector is a code ref that can be used to filter which features to render. It receives a feature and should return true to include the feature and false to exclude it.
In a scalar context returns the number of tracks rendered. In a list context, returns a three-element list containing the number of features rendered, the created panel, and an array ref of all the track objects created.
Instead of a Bio::Graphics::Panel object, you can provide a hash reference containing the arguments that you would pass to Bio::Graphics::Panel->new(). For example, to render an SVG image, you could do this:
my ($tracks_rendered,$panel) = $data->render({-image_class=>'GD::SVG'}); print $panel->svg;
Get/set the current error message.
Get/set the "smart_features" flag. If this is set, then any features added to the featurefile object will have their configurator() method called using the featurefile object as the argument.
If true, then GFF3 and GFF2 parsing is relaxed to allow whitespace to delimit the columns. Default is false.
Add a new Bio::FeatureI object to the set. If $type is specified, the object's primary_tag() will be set to that type. Otherwise, the method will use the feature's existing primary_tag() to index and store the feature.
Add a new feature type to the set. The type is a string, such as "EST". The hashref is a set of key=>value pairs indicating options to set on the type. Example:
$features->add_type(EST => { glyph => 'generic', fgcolor => 'blue'})
When a feature of type "EST" is rendered, it will use the generic glyph and have a foreground color of blue.
Change an individual option for a particular type. For example, this will change the foreground color of EST features to my favorite color:
$features->set('EST',fgcolor=>'chartreuse')
In the two-element form, the setting() method returns the value of an option in the configuration stanza indicated by $stanza. For example:
$value = $features->setting(general => 'height')
will return the value of the "height" option in the [general] stanza.
Call with one element to retrieve all the option names in a stanza:
@options = $features->setting('general');
Call with no elements to retrieve all stanza names:
@stanzas = $features->setting;
$value = $browser->setting(gene => 'fgcolor');
Tries to find the setting for designated label (e.g. "gene") first. If this fails, looks in [TRACK DEFAULTS]. If this fails, looks in [GENERAL].
This works like setting() except that it is also able to evaluate code references. These are options whose values begin with the characters "sub {". In this case the value will be passed to an eval() and the resulting codereference returned. Use this with care!
This works like code_setting() except that it evaluates anonymous code references in a "Safe::World" compartment. This depends on the Safe::World module being installed and the -safe_world option being set to true during object construction.
This gets or sets and "safe" flag. If the safe flag is set, then calls to setting() will invoke code_setting(), allowing values that begin with the string "sub {" to be interpreted as anonymous subroutines. This is a potential security risk when used with untrusted files of features, so use it with care.
This gets or sets and "safe_world" flag. If the safe_world flag is set, then values that begin with the string "sub {" will be evaluated in a "safe" compartment that gives minimal access to the system. This is not a panacea for security risks, so use with care.
These routines are used internally to get and set the source of a sub {} callback.
Given a feature type, returns a list of track configuration arguments suitable for suitable for passing to the Bio::Graphics::Panel->add_track() method.
Return the name of the glyph corresponding to the given type (same as $features->setting($type=>'glyph')).
Return a list of all the feature types currently known to the feature file set. Roughly equivalent to:
@types = grep {$_ ne 'general'} $features->setting;
This is similar to the previous method, but will return *all* feature types, including those that are not configured with a stanza.().
Two APIs:
1) original API: # Reference to an array of all features of type "$type" $features = $features-E<gt>features($type) # Reference to an array of all features of all types $features = $features-E<gt>features() # A list when called in a list context @features = $features-E<gt>features() 2) Bio::Das::SegmentI API: @features = $features-E<gt>features(-type=>['list','of','types']); # variants $features = $features-E<gt>features(-type=>['list','of','types']); $features = $features-E<gt>features(-type=>'a type'); $iterator = $features-E<gt>features(-type=>'a type',-iterator=>1); $iterator = $features-E<gt>features(-type=>'a type',-seq_id=>$id,-start=>$start,-end=>$end);"; }
Usage : $db->get_feature_by_name(-name => $name) Function: fetch features by their name Returns : a list of Bio::DB::GFF::Feature objects Args : the name of the desired feature Status : public
This method can be used to fetch a named feature from the file.
The full syntax is as follows. Features can be filtered by their reference, start and end positions
@f = $db->get_feature_by_name(-name => $name, -ref => $sequence_name, -start => $start, -end => $end);
This method may return zero, one, or several Bio::Graphics::Feature objects.
Title : search_notes Usage : @search_results = $db->search_notes("full text search string",$limit) Function: Search the notes for a text string Returns : array of results Args : full text search string, and an optional row limit Status : public
Each row of the returned array is a arrayref containing the following fields:
column 1 Display name of the feature column 2 The text of the note column 3 A relevance score.
Provided for compatibility with older BioPerl and/or Bio::DB::GFF APIs.
Return the list of reference sequences referred to by this data file.
Return the minimum coordinate of the leftmost feature in the data set.
Return the maximum coordinate of the rightmost feature in the data set.
Returns stat() information about the data file, for featurefile objects created using the -file option. Size is in bytes. mtime, atime, and ctime are in seconds since the epoch.
Given a feature, determines the configuration stanza that bests describes it. Uses the feature's type() method if it has it (DasI interface) or its primary_tag() method otherwise.
Given a feature, tries to generate a URL to link out from it. This uses the 'link' option, if one is present. This method is a convenience for the generic genome browser.
Given a feature, tries to generate a citation for it, using the "citation" option if one is present. This method is a convenience for the generic genome browser.
Get/set the name of this feature set. This is a convenience method useful for keeping track of multiple feature sets.
# file begins [general] pixels = 1024 bases = 1-20000 reference = Contig41 height = 12 [mRNA] glyph = gene key = Spliced genes [Cosmid] glyph = segments fgcolor = blue key = C. elegans conserved regions [EST] glyph = segments bgcolor= yellow connector = dashed height = 5; [FGENESH] glyph = transcript2 bgcolor = green description = 1
Bio::Graphics::Panel, Bio::Graphics::Glyph, Bio::DB::SeqFeature::Store::FeatureFileLoader, Bio::Graphics::Feature, Bio::Graphics::FeatureFile
Lincoln Stein <lstein@cshl.org>.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See DISCLAIMER.txt for disclaimers of warranty. | http://search.cpan.org/~lds/Bio-Graphics/lib/Bio/Graphics/FeatureFile.pm | CC-MAIN-2014-23 | refinedweb | 2,409 | 55.03 |
CodePlexProject Hosting for Open Source Software
I was trying to use NivoSlider () plugin with BlogEngine.Net, but geting all sort of javascript errors all the time. I've tryied $j = jQuery.noconflict()...
Can someone help to integrate this plugin into my blog? :]
I would try not using 'noconflict'. BE 1.6, and even BE 1.5, no longer uses the $ alias. But, at the end of blog.js, it will assign the $ alias to the 'BlogEngine' namespace if $ is not already being
used. So, if you don't use noconflict, then jQuery will use $ and BE should leave the $ alias alone. And then things might start working.
I have "js/jquery-1.4.2.min.js" in root directory on blog. I'm adding "<script>" tags in "site,master" of my theme, also NivoSlider is added in "site.master"... when I'm loading my blog in browser there are lot
of javascript errors... and the slider isn't visible...
I just installed it on a test blog I have. The one problem I ran across was the 'pathing' (or SRC) for the JS file. If you can use an absolute URL to the CSS and JS files, that'll be easiest. For reference, here's what I put in my site.master
file:
<link rel="stylesheet" href="nivo-slider.css" type="text/css" media="screen" />
<script src="" type="text/javascript"></script>
<script src="" type="text/javascript"></script>
<script type="text/javascript">
$(window).load(function() {
$('#slider').nivoSlider();
});
</script>
You should probably AVOID what I did by directly referencing the NivoSlider JS file on the creator's website. Instead, if you can replace that last <script> tag so it points to your own site (probably via an absolute URL beginning with http://),
that would be more considerate.
The CSS file (nivo-slider.css) I put in the theme folder.
Thanks Ben. I'll try this on my blog... hope it works.
I am not sure if you were able to get your plugin working but I have been working on getting Jquery to function in BlogEngine and have finally worked out the best way to get it to work with pretty much any theme.
Just add this into your masterpage .cs file and make sure you know where your files are at to update the src path.
HtmlGenericControl js = new HtmlGenericControl("script");
js.Attributes["type"] = "text/javascript";
js.Attributes["src"] = Request.ApplicationPath + "/scripts/jquery.js";
Page.Header.Controls.Add(js);
HtmlGenericControl js2 = new HtmlGenericControl("script");
js2.Attributes["type"] = "text/javascript";
js2.InnerText = "$j = jQuery.noConflict();";
Page.Header.Controls.Add(js2);
The other thing that I have noticed is when you download the jquery plugin you want to use (not the default jquery file you always need) you will have to customize it and change all of the $ to $J since you will be running in noConflict mode.
Here is a live demo of a jquery slider I have working in BlogEngine.NET
It is the MoviesForMyBlog Plugin on the side
The plugin is downloadable from
Jason
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://blogengine.codeplex.com/discussions/207697 | CC-MAIN-2017-34 | refinedweb | 540 | 76.93 |
leon de boer wrote:Lets list the ones I know that don't have a c++ compiler
Quote:$ mpicc gaussian.c
gaussian.c: In function ‘main’:
gaussian.c:73:8: warning: assignment from incompatible pointer type [-Wincompatible-pointer-types]
data = (double *) malloc(num_cols * sizeof(double *));
^
$
data = (double **) malloc(num_cols * sizeof(double **));
/*???? data = new double*[num_cols];*/
for(int i = 0; i < num_cols; i++)
data[i] = (double *) malloc(num_rows * sizeof(double));
/* First you malloc a set of pointers to a double ... not a pointer to a pointer to a double */
/* data is a pointer to a pointer of double .. so you need to allocate pointers to doubles */
data = (double **) malloc(num_cols * sizeof(double *));
/* Then for each pointer of double you malloc a 1D array of doubles to that pointer */
for(int i = 0; i < num_cols; i++)
data[i] = (double *) malloc(num_rows * sizeof(double));
/* see how the thing inside the sizeof is always one asterix less than what malloc returns */
double* data1D = (double*) malloc(num_rows * num_cols * sizeof(double));
data[i][j]
data1D[j*num_cols + i]
cfront
ChrisFromWales wrote:The last version of cfront was released around the time that the C++ language added templates
[...]
cfront version 3.0.3 from May 1994 [...]
I am having difficulty processing the following problem.
Quote:The purpose of this Task is to let you express your problem-solving skills, programing skills, as well as to reveal your style of coding.
<pre> ifstream inFile;
inFile >> num_rows;
file_buffer.resize(num_rows);
FILE* inFile;
inFile = fopen(argv[1], "r");
fgets(strNum_rows, 20, inFile);
num_rows = atoi(strNum_rows);
vector<double> file_buffer;
file_buffer.resize(num_rows);
file_buffer
vector
int *file_buffer = malloc(num_rows * sizeof(int)); // assumes you are reading ints from file
fgets()
++
send_buffer = new double[num_rows];
send_buffer = (double *)malloc(num_rows * sizeof(double);
zak100 wrote:
Please let me know.
void *calloc(size_t nitems, size_t size)
send_buffer = (double *)calloc(num_rows, sizeof(double));
Preprocessor are executed before compilation. This is a macro processor, which is used automatically by the C compiler to transform your program before actual compilation.
In simple words, preprocessor directives tells the compiler to preprocess the source code before compiling. All the preprocessor commands are begin with "#" symbol.
The most common use of the preprocessor is to include header files. In C and C++, all symbols must be declared in a file before they can used. They don’t always need to be defined*, but the compiler needs to know they exist somewhere. A preprocessor is just another technique to help a programming language be more useful. There are numerous techniques available and every language designer must choose the ones they like.
Aakashdata wrote:What if, we don't use it?
#include <stdio.h>
int main (void)
{
printf("hello world\n");
return 0;
}
#include <stdio.h>
int main (void)
{
volatile int i = 0;
i = i++;
}
leon de boer wrote:Even stopped it optimizing away
int
#include
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Messages/5622285/processing-Multithread-in-MFC | CC-MAIN-2022-05 | refinedweb | 501 | 53.81 |
In the introduction to the TestNG tutorial, we highlighted TestNG briefly and focussed on how TestNG fetches its power from its annotations. Tests annotate to decide the sequence of the tests that run. Annotations are taken from the Java language and are an excellent tool for the testers using TestNG. This tutorial includes the TestNG Annotations with the following content:
- What are TestNG annotations?
- Benefits of TestNG annotations
- Hierarchy of TestNG annotations
- Multiple Test Case Scenario
- TestNG Case Priorities In TestNG
What Are TestNG Annotations?
Annotations, in general, mean "a comment" or "a note" on a diagram, etc. to denote its meaning. TestNG also uses them for the same reason. TestNG annotations are the code that is written inside your source test code logic to control the flow of the execution of tests. It is essential to annotate your methods in TestNG to run the tests. TestNG will ignore the method which does not contain an annotation since it won't know when to execute this method.
A TestNG annotation starts from the symbol "@" and whatever follows is the annotation name. These names are predefined, and we will discuss them in the later section of this tutorial. Apart from the '@' symbol and the header file (of course), there is nothing you require to run TestNG annotations. There are many types of annotations in TestNG. In the section, you will find their definition along with their meaning.
Types Of TestNG Annotations
In TestNG, there are ten types of annotations:
- @BeforeSuite - The
@BeforeSuitemethod in TestNG runs before the execution of all other test methods.
- @AfterSuite - The
@AfterSuitemethod in TestNG runs after the execution of all other test methods.
- @BeforeTest - The
@BeforeTestmethod in TestNG runs before the execution of all the test methods that are inside that folder.
- @AfterTest - The
@AfterTestmethod in TestNG executes after the execution of all the test methods that are inside that folder.
- @BeforeClass - The
@BeforeClassmethod in TestNG will run before the first method invokes of the current class.
- @AfterClass - The
@AfterClassmethod in TestNG will execute after all the test methods of the current class execute.
- @BeforeMethod - The
@BeforeMethodmethod in TestNG will execute before each test method.
- @AfterMethod - The
@AfterMethodmethod in TestNG will run after each test method is executed.
- @BeforeGroups - The
@BeforeGroupsmethod in TestNG run before the test cases of that group execute. It executes just once.
- @AfterGroups - The
@AfterGroupsmethod in TestNG run after the test cases of that group execute. It executes only once.
These annotations have self-explanatory meanings. It is one of the primary reasons to prefer TestNG as it is simple and easy to learn. If TestNG draws so much from its annotations, there must be a few benefits associated with it.
Why Use Annotations?
TestNG annotations boast the following benefits:
- Easy To Learn - The annotations are very easy to learn and execute. There is no predefined rule or format, and the tester just needs to annotate methods using their judgment.
- Can Be Parameterized - Annotations can also be parameterized, just like any other method in Java.
- Strongly Typed- Annotations type strongly, and the errors can be encountered during the run time, which saves time for the testers.
- No Need To Extend Any Class - While using the annotations, there is no need to extend any Test class like JUnit.
Now that we know the benefits and the annotations used, its time to use them in our code. But hey!, as I said, you control the flow of the program using these annotations. For this, we must know what test will execute first and what next. So before we jump onto the coding part, let's see the hierarchy of these annotations.
Hierarchy In TestNG Annotations
TestNG provides many annotations to write good test source code while testing software. So, how will TestNG figure out which test case to run first and then the next and so on? The answer is a hierarchy in these annotations. TestNG contains a hierarchy among the annotations. This hierarchy is as follows (top being the highest priority):
@BeforeSuite
@BeforeTest
@BeforeClass
@BeforeMethod
@Test
@AfterMethod
@AfterClass
@AfterTest
@AfterSuite
To demonstrate the hierarchy, we have written a small code for you.
Now with the below example code, it will be clear to you quickly.
import org.testng.annotations.AfterClass; import org.testng.annotations.AfterMethod; import org.testng.annotations.AfterSuite; import org.testng.annotations.AfterTest; import org.testng.annotations.BeforeClass; import org.testng.annotations.BeforeMethod; import org.testng.annotations.BeforeSuite; import org.testng.annotations.BeforeTest; import org.testng.annotations.Test; public class TestNG { public void testCase1() { System.out.println("This is the A Normal Test Case"); } public void beforeMethod() { System.out.println("This will execute before every Method"); } public void afterMethod() { System.out.println("This will execute after every Method"); } public void beforeClass() { System.out.println("This will execute before the Class"); } public void afterClass() { System.out.println("This will execute after the Class"); } public void beforeTest() { System.out.println("This will execute before the Test"); } public void afterTest() { System.out.println("This will execute after the Test"); } public void beforeSuite() { System.out.println("This will execute before the Test Suite"); } public void afterSuite() { System.out.println("This will execute after the Test Suite"); } }
Can you guess the output based on the hierarchy? Give it a thought before seeing the output below.
The output of the above code will be like this:
It is visible that the
@Suite annotation is the very first and the very lastly executed. Then
@Test followed by
@Class. Now, if you notice, the
@Method has run twice. As
@Test is a method in the class, hence
@Method
Multiple Test Case Scenario
Numerous test cases can run by setting the priority of the test in the test methods. How? Hold that thought as we will surely take it up later in the tutorial. But, what if we forget about the priorities for a second? What's the protocol for running multiple test cases in TestNG?
If there are multiple
@Test cases, TestNG runs the test cases in the alphabetical order. So, a test as
public void alpha(){ }
will run before the following test case:
public beta(){ }
Test Priority in TestNG
Although TestNG annotations decide in which order the tests will run, priorities do more or less the same job.
The priorities are an additional option that we can put to use with the test annotations. This attribute decides the priority of the annotation. But remember that priority check happens after the annotation check by TestNG. So the TestNG annotation hierarchy is followed first and then priority-based execution. The larger the priority number, the lower is its priority. So a method with priority 1 will run after the test with priority 0. A genuine question after learning this is, what if the priorities are the same for two methods? Let's see those.
TestNG Methods With Same Priorities
It might happen (intentionally or unintentionally) that the tester decides to provide the same priorities for different methods under TestNG annotations. In that case, TestNG runs the test cases in the alphabetical order. So the following test cases:
public void b_method(){ System.out.println("B Method"); } public void a_method(){ System.out.println("A method"); }
Will have the following output:
That is, in alphabetical order.
Okay, I think you got it. Two tests with no priority will run alphabetically. Test cases with the same priority also run alphabetically. But, what about the combination of them?
TestNG Test Case With and Without Priority
This section will explain how the TestNG will execute the test cases with and without the priority option. For this, I will add two more methods to our previous code.
import org.testng.annotations.Test; public class TestNG { (priority = 1) public void b_method() { System.out.println("This is B method"); } (priority = 1) public void a_method() { System.out.println("This is A method"); } public void d_method() { System.out.println("This is D Method"); } public void c_method() { System.out.println("This is C Method"); } }
Execute the following code and see the output:
The test cases without the priority attribute are given the "priority" and executed before the methods with priority. Also, they run alphabetically. I hope the priority attribute is clear in TestNG annotations. It brings us to the end of the concepts of annotations. Annotations are the core of TestNG, and mastering them means mastering TestNG. So keep practicing and keep experimenting to learn. In the next tutorial, we will see the TestNG groups.
Common Questions
Can we use parameters in TestNG Annotations?
Yes, using parameters is a very common way to use annotations. Parameters can be used similarly to a method in Java. An example of using parameters along with the annotations would be:
public class ParameterInTestNG { public void prameterTestOne(String param) { System.out.println("Test one suite param is: " + param); }
We have a dedicated tutorial for this TestNG Parameters. You can get a clear explanation there.
Can we set priority manually in TestNG annotations?
Definitely yes! TestNG provides this feature of defining the priority as a parameter in its annotations. TestNG figures out the priority and which test to run with the help of this parameter.
public class TestNGFirstTest { // Second Highest Priority public void a_test() { } // Lowest Priority public void c_test() { } // Highest Priority public void b_test() { } }
Are multiple parameters allowed in annotations?
Yes, you can use multiple parameters in the annotations.
Similar Articles
| https://www.toolsqa.com/testng/testng-annotations/ | CC-MAIN-2022-27 | refinedweb | 1,552 | 50.23 |
:
10-20-2014
Record Information
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph
-
366622
oclc
-
15802799
System ID:
UF00028315:03636
This item is only available as the following downloads:
( PDF )
Full Text
PAGE 1
POLL OCTOBER 20, 2014Floridas Best Community Newspaper Serving Floridas Best Community50 CITRUS COUNTY ONLINE POLL:Your choice?How will you vote on Amendment 2, Use of Marijuana for Certain Medical Conditions? A. Yes. B. No. To vote, visit www. chronicleonline.com. Click on the word Opinion in the menu to see the poll. Results will appear next Monday. Find last weeks online poll results./ Page A3 HIGH85LOW61Partly cloudy to mostly sunny.PAGE A4TODAY& next morning MONDAY When you see someone holding a sign asking for money, how do you respond? QUESTION OF THE WEEK Allison Meakins It is not our job to decide who is needy and who isnt. That is a higher powers choice to decide. We always give food and pray ers. It is really easy to make up Ziplock bags with a toothbrush, protein bar, toothpaste and small bars of soap. ... Linda Keplinger Wiese I stopped to give money to a man holding a sign and after others saw me, two other people also stopped to give money. Its amazing how an act of kindness can encourage others to do the same. ... Jeri Nixon Ziebarth My son gave lunch to one guy who is on the same corner a lot. The man was thankful. We could see him as we drove away sitting down and eating right then. ... We usually give to charities because of scammers out there. Andrea King Richards Sometimes you have to think of these people as your family and then the question would be: Would you give food and a few dollars to a family member? Tammie Herbert Griffin I give. Id hate to not give and be wrong. Contribute! Like us at facebook.com/ citruscounty chronicle and respond to our Question of the Week. INDEX Classifieds................B9 Comics....................B8 Crossword................B7 Editorial..................A10 Entertainment..........A4 Horoscope................A4 Lottery Numbers......B3 Lottery Payouts........B3 Movies......................B8 Obituaries................A6 TV Listings................B7 Clarification A statement from an Oct.8 WTSP Channel10 News report used in a story on Page A1 of Fridays edition, County health care workers ready to respond, warrants clarification. Tampa International Airport has an average of two dozen passengers arrive daily from Africa, with a half-dozen traveling from West African countries. Readers can alert the Citrus County Chronicle to any errors in news articles by emailing newsdesk@chronicle online.com or by calling 352-563-5660. Putting Citrus front and center A.B. SIDIBE Staff writerThe business of tourism in Citrus County has been on a blitzkrieg of sorts recently all manner of strategies and efforts are under way to sell tourists on the hidden ecological and sparkling gem that is this county. And, the reward, officials hope, is hordes of outsiders trudging through with their pocketbooks open to peek or swim with manatees, go fishing, scalloping or go spelunk underwater. Better yet, go bike or walk the mossdraped, oak-canopied trails of the county. Nov. 1, the countys tourism director Adam Thomas and Tara Tufo will leave for the World Travel Market 2014 gathering in London, England, to push for more visitors to the county. The Board of County Commissioners at their last meeting gave Thomas and Tufo the green light to attend this networking convention and another one next March in Berlin, Germany. The funding of the trips is coming from the bed tax charged by hotels. A record 94.7million visitors came to Florida in 2013, an increase of 3.5percent over 2012, according to state officials. The previous high was 91.5million in 2012. It takes spending at least $600,000 to crack the international market. That is obviously the kind of funds we dont have around here. So we picked these two events, which are the biggest in the world and attract tour operators and people involved in tourism from all over, Adam ThomasCitrus County Tourist Development Council chief. See CITRUS/ Page A9 Square space Special to the ChronicleStumpknockers on the Square was once Allens 5, 10 & 25 Store. From five and dime to catfish and hush puppies Editors note: This is part of an occasional Then and Now series spotlighting historic buildings around Citrus County, what they were originally and what they are used for now.NANCYKENNEDY Staff writerINVERNESS Today, 110 W. Main St. in Inverness is a place where you can get a plate of catfish, cole slaw and hush puppies. But for more than 50 years, it was the place in Inverness to buy hair combs and batteries, greeting cards, shoes, hardware and pencils. The owners of Stumpknockers on the Square, brothers Tim and John Channell, remember when their downtown Inverness restaurant was Allens 5, 10 & 25 Store. When we were kids, our mom would take us in there shopping, although we liked the building across the street better because we could get half-penny candy, 50pieces for a quarter, Tim Channell said. Still, when I bought the Stumpknockers on the Square is owned by brothers Tim and John Channell. Monday CONVERSATION Bringing focus to domestic violence NANCYKENNEDY Staff writerINVERNESS The typical victim of domestic violence is young and she is old, educated and non-educated, poor, middle class and wealthy. In other words, there is no typical woman who ends up at a domestic violence shelter such as CASA, the Citrus Abuse Shelter Association. According to Diana Finegan, CASA executive director, one in four women will be a victim of domestic violence some time in her life. Finegan is the focus of todays Monday Conversation. She is also a part of the Color Citrus Purple campaign encouraging people to wear purple on Wednesday, Oct. 22, to send the message that Domestic violence has no place in our community in recognition of October as Domestic Violence Month.CHRONICLE: Tell me about CASA and your role here. Why are you involved with domestic violence? FINEGAN: CASA incorporated in 1983, started by a group of concerned rape crisis volunteers ... who realized the great need with domestic violence. Ive been with CASA for 15 years. Why domestic violence? I think it found me. I started out in the field of education, and the daughter of the former director was in my class. I was young and wanted to make an impact. CHRONICLE: Whats the capacity at the shelter? Are you always full? FINEGAN: We have 32 beds, not including cribs. At times we can be full at 25 people, because when youre mixing women and sometimes teenage boys and teenage girls and babies and small children, you have different configurations of the space and the bedrooms. Two summers ago we had 22 kids. CHRONICLE: I know you cant disclose the shelter location, but what is the building on Turner Camp Road in Inverness? FINEGAN: This is the CASA outreach building. We have offices here, meeting rooms, we conduct classes and womens empowerment groups. Weve been here since November 2008. CHRONICLE: Whats the average stay at the shelter? FINEGAN: That depends. We say six weeks, but weve helped See MONDAY/ Page A9 Expelled Nazis paid millions in Social Security Associated PressOS See NAZIS/ Page A11 See SQUARE/ Page A9Showtime: Volleyball teams set for district tourneys /B1 VOL. 120 ISSUE 74
PAGE 2
A2MONDAY, OCTOBER20, 2014CITRUSCOUNTY(FL) CHRONICLE OCALA EAST 352-861-2275 3405 SW COLL EGE RD, STE 207 Col ours BEVERLY HILLS 352-527-0779 4065 N. Lecanto Hwy Suite 400 000JJSH Introducing The ReSound Series with LiNX Technology Only at NuTech Hearing M-F 9:00 to 4:00 p.m. Sat. and Sun. Appt. Needed Its So Free Leave Your Purse And Checkbook At Home! Find In Ad And Save An Additional $ 250 THE FREE s FR EE Demo With Surround Sound Hearing FREE Consultation FREE Hearing Test FREE Gift Bag FREE 30 Day Trial FREE Video View Of Your Ear Drum And Ear Canal THE ZERO s 0 Money Down 0 Interest* (1) 0 Commitment* (2) 0 Pressure Any Pair Of Li NX Hearing Aids (3) DIGITAL HEARING AIDS FROM $777.00 CALL FOR MORE NUTECH HEARING LOCATIONS *(1) With approved credit and 12 month maximum term. *(2) Prepay for hearing aid trial. 100% money r efunded if returned within 30 days. *(3) Cannot be combined with any other offer. Void where prohibi ted by law.
PAGE 3
Amputation doesnt stop this volunteer BUSTERTHOMPSON Staff writerBEVERLY HILLSHope never left Ray Albros mind during one of his most troubling times. It was hope that kept him volunteering at the Beverly Hills Surveillance Unit after he recovered from the amputation of his right and last remaining leg as a result of diabetic complications. Everyday is an improvement, Albro said. I feel that you have to have goals to try to do something, and youre not going to be able to do something until you try it either. Albro, 75, has had a total of four surgeries on his legs from 2008 to this year each one removing a portion of his body, but not his spirit in returning back to the organization hes loved and volunteered with since 2000. I feel as though Im helping people even though Im sitting here, Albro said about returning back as dispatcher for the unit. Im helping the people that are out in patrol cars, making sure theyre alright, and if somebody needs something I have the radio and the phone. Albro is no stranger to law enforcement. He was with the police force in Oneida, New York, since the 1970s until his retirement to Citrus County in 1997. His main trade though was as a tool and dye maker. I worked all night in the police department and all day in the machine shop, Albro said. He joined with the Beverly Hills Unit in 2000, keeping an eye out for Beverly Hills as a surveillance patrol car driver and serving in multiple positions. Albro faced three surgeries on his left leg from 2008 to 2011, but still came back to the unit with encouragement from his fellow volunteers. I did mainly dispatching and things like that, and then I was captain so I didnt have to go on the road as much, Albro said. Everybody was fine and everybody was encouraging and happy that I was able to do it. A majority of his right leg was amputated in the fourth and most recent surgery in April. Albro debated going back to volunteering, but he returned in August with the support of his squad. I didnt think I should be here or that I should go because I couldnt do anything, and that was my biggest fear I have was not doing my share, he said. A lot of the people there kept after me and I have guys that pick me up and take me over ... they got me right back in there. Capt. Jim Barrows of the Beverly Hills Surveillance Unit has seen Albro as strength to the volunteers. Hes a good man, and been involved in a lot of roles, Barrows said. Just by showing up means hes not afraid to advance on life. Albro was also licensed earlier this month to drive with hand controls an adaptation hes still getting used to. Everythings a little different; you got to think what youre doing and not go down the road and have a big conversation with anyone, Albro said. My foot still wants to come up and hit the pedals. His goal is to eventually go back north to see his grandchildren and hunt in the wilds of northeast. Albro will still continue to volunteer at the unit, alongside the many volunteers who are there for him as he gets more and more mobile. I want to be there for as long as I can, and Im hoping it gets better as it goes on, Albro said. With friends, it makes a big difference.Contact Chronicle reporter Buster Thompson at 352-5642916 or bthompson@chronicle online.com. Around theCOUNTY Group to discuss constitutional issuesAmericans United for Separation of Church and State-Nature Coast Chapter will meet at 4p.m. Tuesday, Oct.21, at Lakes Region Library, 1511 Druid Road, Inverness. The public is welcome to join in the discussion about constitutional issues pertaining to separation of church and state. For information, call 352344-9211 or email nature coastau@hotmail.com.Hatcher to address Homosassa GOPThe regular monthly meeting of the Homosassa River Republican Club will be at 6p.m. Thursday, Oct.23, at the Homosassa Lions Club on Homosassa Trail, Homosassa. The speakers will be Mary Hatcher, candidate for 5th Judicial Circuit, Group3, judgeship; Dennis Siebert and Caitlin Wilcox, candidates for Seat2 on the Homosassa Water District Board; and Scott Adams, Citrus County commissioner. Cookies and coffee will be served. From staff reports STATE& LOCAL Page A3MONDAY, OCTOBER 20, 2014 CITRUSCOUNTYCHRONICLE QUESTION: How strictly managed should Three Sisters Springs be? Ban people from swimming and paddling in there during the winter months. 36 percent (177 votes) Limit the number of people swimming or paddling in there at any given time. 17 percent (82 votes) Keep it a freefor-all. No restrictions necessary. 16 percent (80 votes) Charge people $5 each to enter. Thatd solve the crowding problem. 30 percent (147 votes) Total votes: 485. ONLINE POLL RESULTS Veterans Appreciation Week honors WWII veterans Special to the ChronicleCitrus Countys 22nd annual Veterans Appreciation Week will be celebrated from Oct. 25 through Nov. 16. This years theme, Honoring Our Greatest Generation, World War II Veterans, recognizes the selfless sacrifice and devotion to duty of our Greatest Generation, who saved the world from the tyranny of the Axis Powers. Special Veterans Appreciation Week commemorative pins are available from DAV Chapter 70 by calling 352-860-0123 or emailing john_d_seaman@yahoo.com. The schedule of Veterans Appreciation Week activities: Veterans Appreciation Honor the Heroes Concert, NCCB, 2:30 p.m. Saturday, Oct. 25, First United Methodist Church, Homosassa; 2:30 p.m. Sunday, Oct. 26, Cornerstone Baptist Church, Inverness. For information, contact Cindy Hazzard, 352601-7397; nccommunityband@ earthlink. net. Veterans in the Classroom, Nov. 3 to 14. To volunteer, contact Mac McLeod at 352-746-1384 or cmcleod670@earthlink. net; or Bob Crawford at 352-2709025 or baddogusmc@tampabay. rr.com. Veterans Flea Market, 7 a.m. to 2 p.m. Wednesday, Nov. 5, Stokes Flea Market. To schedule a free table for a veterans service organization, call Dinah Williams at 352-746-7200 two Wednesdays prior to Nov. 5. Veterans Program, 2 p.m. to 3:30 p.m. Friday, Nov. 7, Inverness Primary School. Veterans and their guests are invited. Veterans are requested to wear their military or VSO uniform. For information, contact Mary Tyler at 352-726-2632 or tylerm@citrus.k12.fl.us. Veterans Social, 5 to 6:30 p.m. Friday, Nov. 7, American Legion Post 155. Sponsored by 40 & 8. $7 at door. For information, contact John Kaiserian at 352-746-1959 or johnk40and8 @yahoo.com. Veterans Fair, 10 a.m. to 2 p.m. Saturday, Nov. 8, Crystal River Mall. Opening ceremony 9:45 a.m. For information, contact Sam Dininno at 352-527-5915 or samuel.dininno@bocc.citrus.fl.us. Military Ball, 5:30 p.m. Saturday, Nov. 8, West Citrus Elks, Homosassa. Sponsored by the Marine Corps League Citrus Detachment 819. Tickets $35. For information, contact Morgan Patterson at 352-746-1135 or mpatterson41@tampabay.rr.com. Veterans Appreciation Program, 6 p.m. Sunday, Nov. 9, Cornerstone Baptist Church, Inverness. Ice cream social follows program. Veterans are requested to wear their military or VSO uniform. Ice cream social follows program. For information, contact Ray Michael at 352637-3265 or rmichael5@tampa bay.rr.com. Women Veterans Luncheon, noon Monday, Nov. 10, 320 N. Citrus Ave. Hosted by Crystal River Womens Club. For information, contact Leslie Martineau at 352746-2396 or lmartineau_2001 @yahoo.com. Marine Corps Ball, 6 p.m. Monday, Nov. 10, Citrus Hills Country Club. Sponsored by Marine Corps League Det. 1139. Tickets $40. For information, contact Chris Gregoriou at 352-7957000 or allprestige@yahoo.com. Never Forget 5K Run, Registration 6:30 to 7 a.m. Tuesday, Nov. 11, Courthouse Square, Inverness. Run 8:45 a.m. For information, contact Dennis Flanagan at 352-697-1815 or integralpm97@yahoo.com or visit. Veterans Day Parade, 10 a.m. Tuesday, Nov. 11, Inverness. Staging at Citrus High School parking area beginning 8:30 a.m. For information, contact Chris Gregoriou at 352-795-7000 or allprestige@yahoo.com. Memorial Service, Tuesday, Nov. 11, following parade, Old County Courthouse Heritage Museum, Inverness: For information, contact Mac McLeod at 352-746-1384 or cmcleod670@ earthlink.net. Veterans Day luncheon, Tuesday, Nov. 11, following memorial service: VFW 4337, Inverness. Hosted by VFW 4337. VSO commanders and auxiliary presidents, local dignitaries and their guests are invited. For information, contact John Lowe at 352-344-4702 or thelowes@ tampabay.rr.com. Veterans Day motorcycle ride from parade to Fallen Heroes Monument, Tuesday, Nov. 11. Police escort from old Publix parking lot at 11 a.m. to monument. For information, contact Tom Voelz at 352-7952884 or tvoelz816@gmail.com. Massing of the Colors, 3 p.m. Sunday, Nov. 16, Cornerstone Baptist Church, Inverness: For information, contact Reggie Thurlow at 352-563-1101 or rcri@embarqmail.com. The Great Florida Yard Sale, Nov. 7 to 9: For information, contact Lori Greene at 352610-1306 or yardsale.com. Economic malaise clouds governors race Associated PressLAKEpercent. But along this stretch of central Florida, a crucial swing-voting area, the numbers are little more than an abstraction to middle-class voters who see a tepid turnaround. I keep hearing theres a recovery, but I dont know if I see a recovery, said Kevin McVeigh, a 49-yearold software developer who described himself as an undecided Republican. You feel like youre just standing still. Thats were doing, and we arent doing better because of Rick Scott, Crist said. Frankly, were in a stall and a squeeze right now. The official number of unemployed residents, 590,000, is down from about 1 million in January 2011. Indomitable spirit BUSTER THOMPSON/ChronicleRay Albro works at the dispatchers desk inside the Beverly Hills Surveillance Unit building. He has returned as a dispatcher following his recovery from the amputation of his right leg. SO YOU KNOW Corrections and clarifications for stories on any page of the newspaper are now run on Page A1.
PAGE 4
Birthday Big changes are coming your way this year. You will get positive results if you go with the flow and let events unfold naturally. Keep life simple by avoiding overindulgence and overspending.Stick to a healthy routine. Libra (Sept. 23-Oct. 23) Get your facts straight and your paperwork in order before dealing with banks, government agencies or other institutions. Scorpio (Oct. 24-Nov. 22) Greater freedom will be yours if you ask for help. You will get a good response from people in a position to influence your future. Sagittarius (Nov. 23-Dec. 21) Consider the consequences before taking action. Not everyone will play by the rules, so make the appropriate preparations and then counterattack. Capricorn (Dec. 22-Jan. 19) An opportunity to travel should not be missed. You have a lot to learn, but you must be willing to listen to others. Aquarius (Jan. 20-Feb. 19) You need to have a serious discussion with a loved one. The time is right to discuss the future and the pros and cons of moving in a new direction. Pisces (Feb. 20-March 20) Money is headed your way. You will be hard to resist, so let everyone know what you want and expect. Aries (March 21-April 19) Consider all of the options available to you. Think about altering your location or lifestyle to get the most out of an opportunity. Taurus (April 20-May 20) Love is in the stars. If something is important to you, see to the arrangements yourself. Waiting for someone else to make things happen will be a waste of time. Gemini (May 21-June 20) Your strengths and weaknesses will be tested. It may seem you are meeting opposition at every turn, but with a little persistence, you will come out ahead. Cancer (June 21-July 22) Communication is your strength. Group discussions with people from different backgrounds will give you greater insight you can utilize in your personal projects. Leo (July 23-Aug. 22) Whether you need to collect money or possessions or pay someone back, its time to deal with such matters. If your life is not going the way you envisioned, determine whats required to improve it. Virgo (Aug. 23-Sept. 22) Dont neglect your responsibilities. If things have become unsettled or out of control, back up and consider the best way to turn things around. TodaysHOROSCOPES Today is Monday, Oct. 20, the 293rd day of 2014. There are 72 days left in the year. Todays Highlights in History: On Oct. 20, 1944, during World War II, Gen. Douglas MacArthur stepped ashore at Leyte in the Philippines, 21 1803, the U.S. Senate ratified the Louisiana Purchase. In 1947, the House Un-American Activities Committee opened hearings into alleged Communist influence and infiltration in the U.S. motion picture industry. In 1968, former first lady Jacqueline Kennedy married Greek shipping magnate Aristotle Onassis. In 1981, a bungled armored truck robbery carried out by members of radical groups in Nanuet, New York, left a guard and two police officers dead. Ten years ago: A U.S. Army staff sergeant, Ivan Chip Frederick, pleaded guilty to abusing Iraqi detainees at Abu Ghraib prison. (Frederick was sentenced to eight years in prison; he was paroled in 2007.) One year ago: A suicide bomber slammed his explosives-laden car into a busy caf in Baghdad, killing some three dozen people. Todays Birthdays: Actor William Christopher is 82. Japans Empress Michiko is 80. Singer Tom Petty is 64. Actor Viggo Mortensen is 56. Actor Kenneth Choi is 43. Rapper Snoop Lion (formerly Snoop Dogg) is 43. Actor Sam Witwer is 37. Actor John Krasinski is 35. Actress Katie Featherston is 32. Actress Jennifer Nicole Freeman is 29. Thought for Today: Being a politician is a poor profession. Being a public servant is a noble one. President Herbert C. Hoover (18741964).Today inHISTORY CITRUSCOUNTY(FL) CHRONICLE Todays active pollen: Elm, ragweed, grasses Todays count: 6.0/12 Tuesdays count: 6.6 Wednesdays count: 6 Fury blasts Gone Girl from top of box officeLOS ANGELES The bloody World War II drama Fury blew past Gone Girl at theaters this weekend. Gone Girl was tops at the box office for two weeks before Brad Pitt and his rag-tag group of tank mates in Fury blasted the film to second place. Sonys Fury captured $23.5million in ticket sales during its opening weekend, according to studio estimates Sunday. Foxs Gone Girl followed with $17.8million. The weeksmillion, followed by Disneys Alexander and the Terrible, Horrible, No Good, Very Bad Day with $12million. Were now in full adult moviegoing season and well see a lot more adult-skewing fare, said Fox distribution chief Chris Aronson, who added that the colorful Book of Life suits any audience. Another new film rounds out the top five: Relativitys Nicholas Sparks romance The Best of Me, starring Michelle Monaghan and James Marsden, debuted with $10.2million. Birdman, the Alejandro Gonzalez Inarritu drama starring Michael Keaton, opened in just four theaters and boasted a per-screen average of $103,750. It opens in additional locations next week. Overall box office is up almost 25percent from the same weekend last year, Dergarabedian said, and the strong fall showing at cinemas is making up for a year-to-date box-office deficit that dropped from 6percent to 4percent in the last month. Estimated ticket sales for Friday through Sunday at U.S. and Canadian theaters, according to Rentrak. Final domestic figures will be released today. 1. Fury, $23.5million. 2. Gone Girl, $17.8million. 3. The Book of Life, $17million. 4. Alexander and the Terrible, Horrible, No Good, Very Bad Day, $12million. 5. The Best of Me, $10.2million. 6. Dracula Untold, $9.9million. 7. The Judge, $7.94million. 8. Annabelle, $7.92million. 9. The Equalizer, $5.4million. 10. The Maze Runner, $4.5million.Letterman cue card man fired NEW YORK David Lettermans longtime cue-card holder says he wound up cuing his own firing by getting aggressive with a colleague. Tony Mendez told the New York Post in a story published Sunday he lost his job after grabbing a co-worker by the shirt Oct. 9 behind the scenes at CBSs Late Show with David Letterman. CBS directed an inquiry to Lettermans production company, Worldwide Pants. A spokesman said Worldwide Pants wont comment on personnel matters. Attempts to reach Mendez werent immediately successful. The 69-year-old Mendez told the Post he knows he knows he shouldnt have laid a hand on his colleague. He said Letterman wasnt apprised of any tensions between the two. Mendez has become familiar to Late Show viewers, appearing in episodes going back to at least 1997. From wire reports Associated PressBrad Pitt stars as Sgt. Don Wardaddy Collier in Columbia Pictures Fury. A4MONDAY, OCTOBER20, 2014 000J5M8 in Todays Citrus County Chronicle LEGAL NOTICES Lien Notices . . . . . . . . . . . . . . . . . . . . . . B12 Miscellaneous Notices . . . . . . . . . . . . . . . B12 Foreclosure Sale/Action Notices . . B11, B12
PAGE 5
CITRUSCOUNTY(FL) CHRONICLEMONDAY, OCTOBER20, 2014 A5 $ 21,800 $ 24,500 000JJTH 2014 PRIUSs 2014 TUNDRAs OR LEASE A NEW 2014 PRIUS $ 239 per month for 36 months* T141461 T141450 VILLAGE TOYOTA SALE DAYS! VILLAGE TOYOTA 2431 S. Suncoast Blvd., Homosassa 352-628-5100 Of CRYSTAL RIVER Delivering Delivering Quality Cars, Quality Cars, Preserving Preserving Quality Quality Standards Standards $ 21,800 2014 RAV4s OR LEASE A NEW 2014 RAV4 XLE $ 239 per month for 36 months* 60 months T141280 OR LEASE A NEW 2015 COROLLA $ 179 per month for 36 months* OWN IT FOR $ 15,900 2014 COROLLAs 0 % APR 60 months T141342 OR LEASE A NEW 2014.5 CAMRY SE $ 189 per month for 36 months* OWN IT FOR $ 18,700 2014.5 CAMRYs 0 % OWN IT FOR OWN IT FOR 0 % APR 72 months* OWN IT FOR 0 % APR 60 months* 0 % APR 36 months* 3p.m. Monday through Friday (subject to holiday closures), 712 S. School Ave., Lecanto. 352-513-4960. Please call for a list of required documentation., 1201 Parkside Ave., Inverness, to assist Citrus County residents facing temporary hardship. Call CUB at 352344-2242 or citrusunited basket 8 to 10a.m. Wednesdays, and the second Wednesday monthly is distribution of bagged canned goods, dry goods and meat from 8to 10a.m. at 5310 S. Suncoast Blvd., Homosassa Springs. Open to Homosassa residents only. to28-9087 or 352302-9925. Hernando Seventh-day Adventist Church, 1880 N. Trucks Ave., Hernando, provides food distribution for needy families from 10a.m. to 11:30a.m. the second Tuesday monthly. Call 352-212-5159. Raymond Ferrari Sr., 95HOMOSASSARaymond Anthony Ferrari Sr., 95, of Homosassa, Florida, passed away early Saturday morning, Oct.18, 2014, of natural causes at the Hospice House of Citrus Country. Ray was born Sept.11, 1919, in Brooklyn, New York, where he grew up; and met and married Angelina Mirabella, his wife of 69years. Ray moved with his family to Seaford, Long Island, and was employed at Grumman Aerospace, where he worked for 39years. While at Grumman, Ray was chosen to be part of the Lunar Module Team and participated in the national effort to land American astronauts on the moon. Ray was an avid painter, sculptor and wood carver. Some of his prominent works were the handcarved Stations of the Cross at both St. Thomas and St. Benedict churches. Ray also carved the exterior medallions on each side of the entrance to St. Benedict. He was a World WarII veteran, serving as an aviation engineer in the U.S. Navy. After he retired from Grumman Aerospace, he moved with his wife to the Sugarmill Woods community of Homosassa, Florida, in 1978. He continued his artistic endeavors and became an avid golfer as a member of the Sugarmill Woods Country Club. Ray is survived by his wife of 69years, Angelina Ferrari; his three sons, Stephen, Raymond Jr. and Richard and their spouses; eight grandchildren; and one greatgrandson. There will be a viewing from 5to 7p.m. Tuesday, Oct.21, at Wilder Funeral Home and a funeral Mass at 11a.m. Oct.22 at St. Thomas The Apostle Catholic Church, Homosassa, Florida. Arrangements are entrusted to Wilder Funeral Home, funeral.com.Adelaide Hoffman, 92BEVERLY HILLSAdelaide Hoffman, 92, Beverly Hills and Crystal River, died Oct.18, 2014. Chas. E. Davis Funeral Home with Crematory is assisting the family with private arrangements.Joseph Waddington II, 52CRYSTAL RIVERJoseph Hampton Waddington II, 52, went to be with the Lord on Oct.17, 2014. Born April 4, 1962, in Largo, Florida, his family moved to Crystal River in 1965. He graduated from Gupton Jones College of Mortuary Science in 1988 and spent his career serving communities in Central Florida. He also attended Lee University in Cleveland, Tennessee. He has been a longtime member of Crystal River Church of God, and was a former member of Living Waters Worship Center, Ocala. Joe was very active in music ministry since he was a very young boy, and traveled for many years singing with various Southern gospel groups. He is preceded in death by his father, Joseph H. WaddingtonI. He is survived by his loving wife of 19years, Susan McCray Waddington; son Joseph H. WaddingtonIII; mother, Doris Lee Waddington; five sisters, Yvonne Waddington (Jim) Bartek, Harleyville, Alabama; Sharon Waddington (Robert) Bilby, Homosassa, Florida; Lynn Hutchinson, Golden Meadows, Louis iana; Laura Waddington (Glen) Smith, Beverly Hills, Florida; and Karen Waddington (James) Surber, Homosassa, Florida; foster sister Mary Cobane, St. Petersburg, Florida; foster sister Laurleen Aungst, Crystal River, Florida; foster brother William Poucher, Brooksville, Florida; and many nieces and nephews. Funeral service will be 11a.m. Wednesday, Oct.22, 2014, at Crystal River Church of God, with Pastor Ronnie Reid officiating. Family and friends will be welcomed for a visitation at 10a.m. Arrangements under the care of Countryside Funeral Home, Anthony, Florida. In lieu of flowers, please make donations to Joseph Waddington Memorial Fund.Sign the guest book at, OCTOBER20,FWB 20/20 Eyecare N OW A CCEPTING Over 1,000 Frames In Stock with Purchase of Lenses AND Get a 2nd Pair of Glasses FREE FREE Frames ( $ 89.00 Value) 000JJBA 16176 Cortez Blvd. Brooksville, FL 34601 352-597-8839 Kelli K. Maw, MD, MPH Board Certified, Family Medicine JosephWaddingtonII Raymond Ferrari Obituaries are at www. chronicleonline.com. SO YOU KNOW The Citrus County Chronicles policy permits both free and paid obituaries. Email obits@chronicle online.com or phone 352-563-5660 for details..
PAGE 7
CITRUSCOUNTY(FL) CHRONICLEMONDAY, OCTOBER20, 2014 A7 Latest Hearing Aids Local Service Lowest Prices!RechargeableTMFindKN3 15 Years at this Location FATHER & SONS HEARING AID CENTERS HURRY SALE ENDS FRIDAY 10/24/14REXTON SIEMENS Starkey Now HEAR this:Hearing Aids May Save Your MarriageDoes this scenario sound familiar to you or someone you care about? Please read one of the famous letters written to Ann LandersDear hearing aids, Ann? Often, being patient is not the best solution to the problem. -Contented Now in California Dear California: I often suggest hearing aids, and have been told, Thanks, they helped a lot
PAGE 8
LifeSouth Community Blood Centers bloodmobile schedule for October. 5p.m. weekdays, 8a.m. to 4p.m. Saturdays and 10a.m. to 5p.m. Sundays. To find a donor center or a blood drive near you, call 352527-3061. Donors must be at least 17, or 16 with parental permission, weigh a minimum of 110pounds and be in good health to be eligible to donate. A photo ID is also required. Visit for details. 10a.m. to 3p.m. Monday, Oct.20, Withlacoochee Technical Institute, 1201 W. Main St., Inverness. Free 6-inch. BUSTERTHOMPSON Staff writerCollected Lyngbya from Kings Bay, Crystal River, is transported to a drying field off of Southeast Eighth Avenue next to Jim LeGrove Memorial Park before being moved to the Path Homeless Shelter as fertilizer in their organic gardens to provide food to the homeless community. This Lyngbya recycling process has been coordinated for over a year by Save Crystal River, Kings Bay Rotary Club, Path Homeless Shelter and FDS to reuse over four years of collected Lyngbya from clean-ups in Kings Bay. According to an Oct. 2. 2011, project proposal from Kings Bay Rotary, Once removed Lyngbya can be tilled into the soil and used as an enriched humus, a sort of natural recycled fertilizer that will retain moisture and provide nutrients like nitrogen and phosphorus. Expenses are paid for by donations to Save Crystal River and the Kings Bay Rotary Charitable Foundation, with FDS paying for its own fuel. To request dried Lyngbya for personal fertilizer, call Save Crystal Rivers President Bob Mercer at 352-795-9230 with contact information. A8MONDAY, OCTOBER20, 2014CITRUSCOUNTY(FL) CHRONICLELOCAL Submit Your Best images of Autumn for a chance to Most Votes Will Receive a Gift Basket Full of Goodies from Snows Country Market & Jewelry from Jim Green Jewelers.Submit Photos Until November 2Vote November 3-19 Blackshears II Aluminum 795-9722 Free Estimates Licensed & Insured RR 0042388 Years As Your Hometown Dealer 000JFSO HWY. 44 CRYSTAL RIVER Rescreen Seamless Gutters Garage Screens New Screen Room Glass Room Conversions 000JIYY HEARING HEALTH EVENT! Homosassa 5699 S. Suncoast Blvd., Janack Plaza 352-436-4393 Inverness 2036 Hwy. 44 West, Colonial Plaza 352-419-0763 Dunnellon 20170 E. Pennsylvania Ave 352-502-4337 2014 2014 2014 2014 000JD83 BOB MERCER/Special to the ChronicleAn FDS truck is loaded with dried Lyngbya to be used as fertilizer at the Path Homeless Shelter gardens in Inverness. From clean water nemesis to fertilizer BloodDRIVES Special to the ChronicleDuring October, donors can get a team T-shirt. TO GET FERTILIZER Call Save Crystal Rivers President Bob Mer cer at 352-795-9230 with contact information. Donors must be at least 17, or 16 with parental permission, weigh a minimum of 110 pounds and be in good health to be eligible to donate. A photo ID is also required.
PAGE 9
Thomas said. Our hope is to let people know when they come to Orlando, they can come here and experience eco-tourism, something they may not be used to doing where they come from, he added. Thomas said studies have shown most tourists who arrive in big metropolitan centers such as Tampa and Orlando often engage in side trips, and that Citrus Countys unique combo of manatees and the springs lends itself to being potentially attractive. Thomas said his agency partnered with Visit Florida and will be subleasing a smaller booth from them to showcase the many natural activities which await potential visitors to the county. Thomas said officials are trying to produce value based on limited funds for marketing and going to these two premier events is one of the ways to achieve that. Thomas and Tufo also will be carrying with them several brochures and thumb files for online use. A presence on the Internet and social media is another method officials are using to push the county as a preferred destination. The Tourism Development Council and the county have been involved in a host of recent activities aimed at putting the countys bucolic splendor on display. A film crew from Brand USA, a national partner with TDC, brought a film crew to town to capture natural beauty of the county. The film was done in German for that audience. The TDC also has agreed to a Bright House contract to shine a natural spotlight on the county in both the Orlando and Tampa markets via a camera link on Kings Bay to show morning sunrises. Eco-tourism has become very popular around the world and specifically in Europe. And, Thomas said, European tourists seem to have more disposable income at this juncture. He thinks the strength of the dollar against their currencies is fairly weak making it possible for the tourists to splurge. Thomas said he and Tufo expect to hit about 25 meetings a day during the four-day event and hope it will help us further our reach around the world, especially given the limited (financial) resources we have available. For more about what the county has to offer, go to visitcitrus.com.Contact Chronicle reporter A.B. Sidibe at 352564-2925 or asidibe@ chronicleonline.com. people for much longer. Some women come with nothing but the clothes on their backs. Some have jobs, some dont. For some, its not safe for them to be anywhere in our community and we need to help them get relocation assistance that comes through the Attorney Generals office or get them to another domestic violence shelter. Some come badly beaten and need to heal. CHRONICLE: What do people incorrectly believe about domestic violence? FINEGAN: People often judge the victim. They ask, Why does she stay? and Why does she go back to him? Sometimes they think its something she did wrong instead of asking, Why does he hit her? and Why is he getting away with it? The question we need to be asking is, As a society, why arent we holding the abuser more accountable? CHRONICLE: Last month, Ray Rice was caught on video hitting his then-fiance. What does that do for domestic violence? Does that make it OK or not OK to be an abuser? FINEGAN: It certainly doesnt show a good role model for our children. Our laws are getting better, but I think as a society were tolerating more and more violence in our media movies, TV, video games. In my opinion, any sports player or actor, rapper, singer, etc., should be held to a higher standard. When society is giving you millions of dollars and putting you on a pedestal, whether you like it or not, youre a role model. It doesnt show a good role model for our children if the NFL is tolerating any type of violence. CHRONICLE: Here in Citrus County, do you see domestic violence increasing or decreasing? FINEGAN: Its about the same. And its not just hitting, but also other forms of abuse that are just as harmful. Abuse is about systematic power and control over another person. CHRONICLE: Is there a prevention? Can you preventdomestic violence? FINEGAN: We can educate, and in that regard that could be prevention. If we can teach our children about healthy relationships, teach the community about what is acceptable and whats not acceptable, over time if we could make social changes, I think we can prevent it. We would have to change a lot as a society, though. Wed have to be less tolerant of violence. CHRONICLE: Lets switch gears whats happening Oct.22? FINEGAN: Were trying to Color Citrus Purple. Wed like everyone to wear purple, and on that day there will be a live telethon on ABC Action News for all the domestic violence centers in their viewing area. So, Ill be on television, and a couple of other people from Citrus County, wearing purple, taking calls and promoting Citrus County. CHRONICLE: What are some of the immediate needs of CASA? FINEGAN: Our two greatest, constant needs are paper products toilet paper and paper towels. We need whatever you need for your home, times 20. CHRONICLE: Anything else you want our readers to know? FINEGAN: One out of four women will be a victim of domestic violence, and thats a sobering statistic. When I talk to womens groups and tell them that, they look around and think, This is a church group and its not happening here. But it is. A lot of women who have been victimized hide it, and you dont know. It could be anybody. LOCALCITRUSCOUNTY(FL) CHRONICLEMONDAY, OCTOBER20, 2014 A9 000EXJS btn ftrJFUN KNIGHTS OF COLUMBUS 352/746-6921 Located County Rd. 486 & Pine Cone Lecanto, FL (1/2 Mile East of County Rd. 491) 000JFUG PROGRESSIVE JACKPOT WEDNESDAY & FRIDAY Doors Open 4:30 PM Games Start 6:00 PM ALL PAPER BINGO PRIZES $ 50 TO $ 250 WINNER T AKES ALL POT-O-GOLD Smoke-Free Environment FREE Coffee & Tea TV Monitors for Your Convenience ~ Sandwiches & Snacks ~ 000JFU 776 N. Enterprise Pt., Lecanto 746-7830 Visit our Showroom Next to Stokes Flea Market on Hwy. 44 Visit Our New Website For Great Specials Wood Laminate Tile Carpet Vinyl Area Rugs 000JFTR 527-0012 72 HOUR BLIND FACTORY FAUX WOOD BLINDS, TOP TREATMENTS SHADES, SHUTTERS, VERTICALS ADO WRAP, CELLULAR BLIND S 1657 W. GULF TO LAKE HWY (2 MI. E. OF HWY. 491 & 44) LECANTO 2012 2012 2012 2012 000JKF. B 10 I 19 For a Day or Night of Fun and to Meet New Friends. Come and Play! To place your Bingo ads, call 563-5592 9203147 000JIKB $ 99 through the end of the year Unlimited Classes Offer expires Oct. 31, 2014AWP H APPY B IRTHDAY P ETIE L OVE D ON NANCY KENNEDY /ChronicleDiana Finegan, executive director of Citrus Abuse Shelter Association, says one in four women will be a victim of domestic violence some time in her life. October is Domestic Violence Awareness Month and people are encouraged to Color Citrus Purple by wearing purple on Wednesday, Oct.22, and send the message that Domestic violence has no place in our community. MONDAYContinued from Page A1 CITRUSContinued from Page A1 building back in 1998, it felt comfortable buying a little piece of our hometown. Built in the early 1920s by Jack Kibler, what is now Stumpknockers was used as the Inverness post office, then Vanns Drug Store and Ernest Johnstons Restaurant before George Pop Allen and son Otto opened their five and dime in 1932. Allen had first come to Crystal River from Kansas in 1924 and opened a retail lumber business. In 1931, he moved to Inverness and opened his retail lumber business. The lumber business was along the railroad tracks. Allen wanted to test out a new business idea, a dime store, so he bought $60 worth of merchandise from W.T. Grants 10-Cent Store in Tampa and placed the items on boards and sawhorses, pricing it at cost. He wanted to see if anyone would be interested in such a store. Within one year, he extended his display of merchandise to 60feet along the railroad tracks and, in 1932, he moved his business to 110 W. Main St. Its been said that in 1932, Allens was called one of the most modern department stores in Florida and stayed open until midnight on Saturdays to serve customers after a late movie in town. Pop Allen died in 1962, and Otto kept the five-anddime business until 1985. When he closed the store, he donated store memorabilia to the Citrus County Historical Society, where its currently on display at the Historic Courthouse. Between the time Allens left and Stumpknockers came, the building housed A-Z Furniture, Seaweed Sams and Main Exchange restaurants. The buildings historic plaque was dedicated Sept. 11, 2001. I love that were downtown Inverness, Tim Channell said. At the time we came, the downtown area was really struggling. A lot of the stores were empty back then. Now things are going great. SQUAREContinued from Page A1 I love that were downtown Inverness. At the time we came, the downtown area was really struggling. A lot of the stores were empty back then. Now things are going great. Tim Channell, co-owner, Stumpknockers on the Square.
PAGE 10
OPINION Page A10MONDAY, OCTOBER 20, 2014 To the point on Cent for CitrusNo.Bernie Leven Citrus SpringsFire Ambulance serviceRecently on a trip into the city of Ocala, going to the eye doctor. I saw a red fire rescue ambulance park at Gateway plaza. I was able to speak to the men and found out just what they do as to life support services. These men are all trained paramedics and perform advance life support services at any scene they approach. I am trying to keep this letter simple and to the point. This is what we need here in Citrus County. It was a wonderful experience to speak to these men and feel the passion they had for the lives they save everyday. One would think that the Board of County Commissioners pass an ordnance that required this needed services and fund it in stead of trying to built a port to nowhere or a marina for special interest or any other of their zany ideas.Charles Knecht Sr. DunnellonElection boils down to representationRe: Homosassa Special Water District Seat 2 Article Oct. 8, 2014, Issue No. 1: All about the water tower. The election is not all about the water tower. That decision has already been made by the board that the tank will either be sold to the preservation group (which the board prefers) or be torn down by the end of the year. That means this will not be on the agenda going forward. The election is about being a commissioner representing all the members of the district and not just a special interest group for the next four years. And please be assured that the district had looked at all the options that are available for all projects. So use business sense when voting since the district is a $3,144,414 business.Dennis Seibert Homosassa. You dont have the luxury of staying home, the former president intoned at Arkansas State in Jonesboro, so Im pleading with you: Vote. In Wisconsin, where Democrat Mary Burke has a shot at deposing the Republican governor, Scott Walker, Michelle Obama struck a similar theme aimed at young women and blacks. When we stay home, they win, she told a large crowd at a Milwaukee convention center. This is a hard year to be a Democrat. Every voting model gives Republicans at least a 3 in 5 chance of winning the six seats they need to control the Senate for the rest of Barack Obamas presidency. Second-term presidents almost always lose allies in Congress during off-year elections. Obamas favorable rating hovers at a dismal 42 percent nationally, but its even worse in crucial battleground states like Arkansas (34 percent) and Louisiana (37 percent). And Republicans are more motivated than Democrats. According to a recent Gallup survey, 32 percent of respondents say theyll be voting to oppose the president while only 20 percent see their ballot as a message of support for Obama. Still, this election is not quite over. If the Democrats have any chance at all of retaining the Senate and its a slim one their hopes rest on galvanizing their own voters in Jonesboro and Milwaukee and countless other communities. One by one. Door by door. Cellphone by cellphone. As union organizer Joan Zeiger told theNew York Timesat Michelle Obamas rally: The biggest fear of the Republican Party is high turnout. Shes right, and that fear has come true before. Two years ago, Mitt Romney was absolutely convinced he was going to win. On Election Day, he was drafting an acceptance speech while his aides were organizing transition teams and planning fireworks displays. Romney lost for many reasons, but one of the biggest was his failure to anticipate the effectiveness of his rivals turnout machine. The Democrats had invested heavily in local organizers and advanced technologies that maximized their ability to contact and motivate voters. We went into the evening confident we had a good path to victory, one Romney adviser told CBS. I dont think there was one person who saw this coming. Now, 2014 is not 2012. For one thing, Republicans have learned from their failure and copied the Democrats strategy. As one conservative activist told the Washington Post, People used to make fun of President Obamas background, but community organizing works. For another, key Democratic constituents racial minorities, young people, single women are difficult to motivate in off-year elections. A lot of Democrats dont vote during midterms, Obama told a party gathering last winter. We just dont. Heres where the Democratic turnout machine could again make a difference. TheNew York Timesdid an indepth analysis of campaign expenditures and concluded, Democrats are making much greater investments in the ground game than Republicans. A Timespoll, conducted with CBS and YouGov, reflected the impact of that investment. More voters have been contacted by Democratic than Republican campaigns in every state but Kansas and Kentucky, where Republican senators fought competitive primaries, the paper reports. One example is Iowa, where an open Senate seat has turned into a tight race between Democrat Bruce Braley and Republican Joni Ernst. The latest figures show Democrats with 148 paid staffers in the state, compared to 11 for the Republicans. Another example is Alaska, where the Democrat, Sen. Mark Begich, is fighting for his life. The Washington Postreports, however, that Begich has a secret weapon: an expensive, sophisticated political field operation that reaches into tiny villages along rivers and in mountain ranges throughout the vast Last Frontier. We have knocked on every single door in rural Alaska, Begich said. This is unbelievable. No ones ever done it like this ever. So he discounts polls that give Republican Dan Sullivan an edge. I dont care if were up or down, he says. Were winning on the ground because we will turn out more voters. Is he right? Probably not. But if the Democrats pull an upset, in Alaska and elsewhere, their secret weapon will be the main reason.Steve and Cokie Roberts can be contacted by email at stevecokie@gmail.com. To an honest man, it is an honor to have remembered his duty.Plautus, The Three-Peny Day, 194 BC The Democrats secret weapon BAD TRIP Drug scare sobering reminder of dangers An ambulance last week transported a 14-yearold Citrus High School female student to the hospital after she began exhibiting strange symptoms following the ingestion of an ecstasy pill. A fellow student reportedly provided the pill to her and pills to two other male students, who also took them while in school. While its not often a student suffers a bad trip while in school, this incident acts as a sobering reminder that young people do experiment with drugs and drugs can be extremely dangerous. Without getting to preachy, it is important to remember drugs can kill, or seriously impair those who improperly administer them or use them recreationally. Local schools take every precaution to prevent these types of things from happening, but they cant be everywhere at all times. Students share some of the responsibility. Schools are tight communities and news travels fast. Students who witness or hear of drug possession and use have an obligation to speak up their actions might save a life. Parents also share responsibility in getting to know who their children and teens are friends with and what they do while with them. No parent ever wants to hear that dreaded call from school that their child has been arrested or has just been sent to the hospital. One bad decision can change a life; be the one who decides to change it for the better. THE ISSUE:Drugs in schools.OUR OPINION:Drugs can be dangerous. Conspiracy theoryAgain today we tried to watch the county commissioners meeting on Bright House Channel 622 and theres absolutely no sound. This has happened more than once now. It seems almost like an intention to keep the meetings private from the public who cannot actually go and attend the meetings.More toward the middleAs a subscriber to both the Tampa Bay (Times) and the Citrus County Chronicle, I have to admit that the Chronicle is far less biased. And one thing that I especially notice day after day is that they are more likely to print cartoons on both sides of the fence, whereas the Tampa Bay (Times) stays pretty much on the liberal side of the fence. And the cartoon in todays paper (Oct. 14) the political cartoon on the Opinion page, that is is very accurate. It shows a portrait of the IRS, the VA, the Department of Justice and the CDC. The only thing lacking; the cartoonist, above this, should have had Obama pulling the strings. Congratulations.Give me a person on phoneThis is in regards to the people that call in about the Lecanto VA Center. Yes, it is very, very, very difficult to get through their phone system. They are absolutely right. Every time you call, you get a recording saying, Please press 1 for veterans. Well, when you press the 1, it says, Im sorry, I didnt hear your selection. So you press 1 again and what do you get? Im sorry, I didnt hear your selection. You press 1 again. What do you get? Im sorry, I didnt hear your selection. It keeps going on and on and on. And when you push 0, which is operator, then it goes back to that recording, Im sorry I didnt hear your selection. So, yes, they do have to get their phone system fixed. I think Ill take a drive down there and face them face to face rather than call them on the phone.
PAGE 11zingers son, who lives in the U.S., confirmed his father receives Social Security payments and said he deserved them. The deals allowed the Justice Departments former Nazi-hunting unit, the Office of Special Investigations, to skirt lengthy deportation hearings and increased the number of Nazis it expelled from the U.S. But internal U.S. government records obtained by the AP reveal heated objections from the State Department to OSIs practices. Social Security benefits became tools, U.S. diplomatic officials said, to secure agreements in which Nazi suspects would accept the loss of citizenship and voluntarily leave the United States. Itsrilers war machine accountable. Amid the objections, the practice known as Nazi dumping stopped. But the benefits loophole wasnt closed. Justice Department spokesman Peter Carr said in an emailed statement that Social Security payments never were employed to persuade Nazi suspects to depart voluntarily. The Social Security Administration refused the APs request for the total number of Nazi suspects who received benefits and the dollar amounts of those payments. Spokes man didnt make any sense. Citrus County Sheriffs OfficeBurglaries A residential burglary was reported at 3:17p.m. Wednesday, Oct.15, in the 5400 block of S. Loni Point, Homosassa. A vehicle burglary was reported at 4:46p.m. Oct.15 in the 3200 block of E. Crown Drive, Inverness.Thefts A grand theft was reported at 1:15p.m. Friday, Oct.3, in the 3500 block of W. Blossom Drive, Beverly Hills. A grand theft was reported at 11:36a.m. Wednesday, Oct.15, in the 90 block of S. Fillmore St., Beverly Hills. A grand theft was reported at 11:04a.m. Thursday, Oct.16, in the 1500 block of S. Skylark Terrace, Inverness. A grand theft was reported at 12:24p.m. Oct.16 in the 4000 block of S. Florida Ave., Inverness. A petit theft was reported at 12:40p.m. Oct.16 in the 400 block of Poinsettia Ave., Inverness. A petit theft was reported at 5:16p.m. Oct.16 in the 300 block of N. Suncoast Blvd., Crystal River. A grand theft was reported at 9:27p.m. Oct.16 in the 40 block of S.J. Kellner Blvd., Beverly Hills. A larceny petit theft was reported at 9:17a.m. Friday, Oct.17, in the 6200 block of W. Monticello St., Homosassa.Vandalism A vandalism was reported at 8:40p.m. Thursday, Oct.16, in the 5100 block of E. Live Oak Lane, Inverness.NATION/LOCALCITRUSCOUNTY(FL) CHRONICLEMONDAY, OCTOBER20, 2014 A11 000J9D4 For more information on early voting or vote by mail call the Supervisor of Elections Office at 341-6740 or visit Are You Ready To Vote? General Election Tuesday, November 4 th Check Voter Status Name Change Address Change Know Your Polling Place Location Request a Vote by Mail Ballot Information on Early Voting Have Proper Photo and Signature ID 000J9d4 000JGQG Not a Chain Store No Salesmen 31 Years of Experience You Can Trust HEAR CLEARER NOW! HEAR CLEARER NOW! HEAR CLEARER NOW! 100% SATISFACTION GUARANTEED Our Services Teeth Whitening Botox / Fillers Facials Massage Therapy Photo RejuventationAchieve optimal health and the chance to look and feel your best. Call or stop in today! 352-746-6327 1982 N. Prospect Ave., Lecanto, FL 34461 Be Our Guest$20 OffYour First Spa Service** Special offer excludes weight loss and hormone replacementLike Us On Facebook For Specials & Events Chemical Peels Laser Hair Removal Laser Liposuction Hormone Replacement for men & women 000JJXH For theRECORD ON THE NET For more information about arrests made by the Citrus County Sheriffs Office, go to and click on the Public Information link, then on Arrest Reports. NAZISContinued from Page A1 Associated PressThis undated file photo shows Martin Bartesch in a photo belonging to his daughter Ann Bresnen of Chicago. An Associated Press investigation found dozens of suspected Nazi war criminals and SS guards collected millions of dollars in Social Security payments after being forced out of the United States.
PAGE 12
Bagpiper Associated PressRe-enactors play the bagpipes while marching to raise the colors during the 41st annual Fort Massac Encampment in Metropolis, Ill. The weekend event had mock battles, period artisans, storytelling and childrens activities. Indiana man a suspect in deaths HAMMOND, Ind. An Indiana man confessed to killing a woman whose body was found in a Motel6 and told police where the bodies of three more women could be found, police said Sunday. Gary police found the bodies of three women at different locations in Gary late Saturday and early Sunday, following up on information the 43-year-old man provided during questioning, Hammond police Lt. Rich Hoyda said. The Lake County coroners office on Sunday identified the victim found in Hammond as 19-year-old Afrika Hardy and ruled she had been strangled. The coroners office said a second victim had been identified by family members as 35-year-old Anith Jones of Merrillville. Autopsies had not yet been completed on her or the other two women, who have not yet been identified. The Post-Tribune of Merrillville reported Jones had been missing since Oct.8 and Gary police had searched a block recently looking for her. Hoyda said the mans name is not being released because he has not been charged. He is being held in the Hammond City Jail.Robotic device helps groom walk DEWITT, N.Y. New York resident Matt Ficarra has been paralyzed from the chest down since an accident three years ago, but that didnt stop him from walking down the aisle. Ficarra was able to stand and walk during the wedding ceremony in suburban Syracuse on Saturday with the help of a batterypowered robotic exoskeleton called an Ekso. Ficarra has been paralyzed since he broke his neck in a boating accident in 2011. He married Jordan Basile in the ballroom of the Doubletree Hotel in DeWitt. The couple leaves today for a Jamaican honeymoon.3-year-old girl beaten to death NEW YORK A man was under arrest Sunday in the beating death of his girlfriends 3-year-old daughter, whose 5-year-old brother also was assaulted. The girl was beaten after she apparently soiled her pants, the Daily News said. Police said Kelsey Smith, 20, was arrested on charges of assault and acting in a manner injurious to a child less than 17 years old. It wasnt clear if he had a lawyer. Smith was taken into police custody later Saturday in Queens and was hospitalized in stable condition. From wire reports Nation BRIEFS NATION& WORLD Page A12MONDAY, OCTOBER 20, 2014 CITRUSCOUNTYCHRONICLE Protester Associated PressA woman supporter takes part Sunday in an anti-government rally in Lahore, Pakistan. Thousands of protestors led by Muslim cleric Tahir-ul-Qadri rallied in Lahore demanding Prime Minister Nawaz Sharifs ouster over alleged fraud in last years election. Kenya: Suspected bombers killedNAIROBI, Kenya 220 pounds of TNT from the vehicle which was intercepted at Dolo, along KenyShabab, which is allied to al-Qaida. Al-Shabab has vowed to carry violent terrorism in Kenya because Kenyan troops are in Somalia fighting the militants. Sweden: Three sub sightingsHELSINKIedens armed forces routinely hunted for Soviet submarines in its waters. The armed forces published a photograph taken on Sunday by a passerby showing a partially submerged object in the water from a distance, but it was unclear what kind of vessel was in question. World BRIEFS From wire reports Associated PressLOS ANGELESIt was last summer and Israeli-Palestinian tensions were at the highest theyd wasnt unfamiliar. Still, its one thing to get a liquored-up audience laughing at lines like, Take my motherin-law please. Its another to bring people from across the world who dislike each other together and hope they will laugh at each other. And yet, thats what hes trying to do with what he calls the first Funniest Person in the World competition. Masada has scoured comedy festivals from Afghanistan to South Korea and Egypt to Israel for candidates and had online voters winnow the list to 10 semifinalists who would perform at the Laugh Factory and before a worldwide Internet audience today. After online voters narrow the list to five, the finalists will travel to the Laugh Factorys sister club in Las Vegas. There, following another competition and vote, the winner will be crowned on Oct.24, United Nations Day. It might sound stupid, Masada said. But some people, they sit down, they break bread together, they never hurt each other. Some people, they sit down, they laugh together, they never hurt each other. werent strictly amateurs which they arent in the Olympics, anyway they would be from all over the world. CDC to revise Ebola protocol Associated PressWve got to be completely covered. So thatsdays. The team would not be sent to West Africa or other overseas location, and would be called upon domestically only if deemed prudent by our public health professionals, Pentagon press secretary Rear Adm. John Kirby said in a statement Sunday. Ebolas. China considers legal reforms Associated PressBEIJING The most important meeting of the year for the 205 members of Chinas ruling Communist Partys Central Committee, beginning Monday, will focus on how to rule the country in accordance with law. That has fed hopes that the party might move to respect the letter and spirit of the constitution, but some legal experts and political analysts say the countrys leaders are intent on expanding power, not limiting it. There may be some efforts at the four-day plenum to discourage rampant corruption in lowlevel courts, they say, but the key goal will be to build a legal system that protects and strengthens the partys political dominance. There is absolutely zero chance that the plenum session will see support for constitutional reform that imposes meaningful checks on party power, said Carl Minzner, a law professor and expert on the Chinese legal system at Fordham Law School in New York. As usual, this years plenary session will be held in a conclave in Beijing, and its decisions, expected to be announced after the conclusion, set the broad policy framework for the upcoming year. Its not clear if the meeting will discuss the protests in Hong Kong, where prodemocracy students have occupied key streets for three weeks to demand that Beijing change its decision to screen candidates for first open elections in the semiautonomous city in 2017. Party-controlled media are already gearing up to tout great legal progress to come, but some observers expect the party to continue something it has done since President Xi Jinping took power nearly two years ago: Step up efforts to suppress criticism and dissent. The developments over the past year under Xis leadership have signaled deep disregard for the law as a tool for resolving grievances in an impartial manner, said Maya Wang, researcher with Human Rights Watch. The detentions and sham trials of activists ... show just how Chinas legal system has remained an instrument of the partys power. Yet the party will seek changes to bring some fairness to the local level, where unrest stemming from lack of justice has flared up into violence. Comic diplomacy Associated PressJamie Masada, owner of the venerable Hollywood nightclub The Laugh Factory, speaks Nov.20, 2006, at the club in West Hollywood, Calif. It was this summer and Israeli-Palestinian tensions were at the highest theyd been in some time when Masada hit on a formula for world peace: Forget about guns and bombs, and just tell jokes to each other. He knew itd be a challenge to bring together people from across the world who dislike each other, and hope they will laugh at each other. And yet, thats what hes trying to do with what he calls the firstFunniestPerson in the World competition. On the road to find the worlds funniest person Associated PressA Chinese policeman stands on duty Oct.11 near shields and sticks in front of Tiananmen Gate in Beijing.
PAGE 13
Baseball, golf, hockey/ B2 Scoreboard/B3 Sports briefs/B3 NFL/B4, B5 College football/B6 Puzzles/ B7 Comics/ B8 Classifieds/ B9 Jaguars snap long losing skid with win over Browns. / B4 S PORTsS Section B MONDAY, OCTOBER 20, 2014 c Keselowski keeps title hopes alive at Talladega TALLADEGA, Ala. Brad Keselowski pulled away in overtime Sunday at Talladega Super speedway to earn an automatic berth into the third round of NASCARs championship race. Needing to win to stay alive in the Chase for the Sprint Cup championship, the 2012 Sprint Cup champion came through with his series-best sixth victory of the year. But the Hendrick Motorsports trio of Jimmie Johnson, Dale Earnhardt Jr. and Kasey Kahne. Keselowski had a triumphant ending to a tu multuous lotte. He faded over the final two laps and forced himself into a must-win situation Sunday. I know theres probably some people out there that arent really happy I won, Keselowski said. I can understand that. But Im a man like anyone else and not real proud of last week. But Im real proud of today. Keselowskis win meant one driver ahead of him on points was out of the Chase. Kahne was the last one out as part of a crushing day for Hendrick. Jeff Gordon is the lone Hendrick driver left in the Chase. Johnson failed to defend his champi onship, missing a chance to match Richard Petty with a seventh career title. Earnhardt won the Daytona 500 and two other races this season, put ting up his best year in almost a decade and stamping himself an early championship favorite. Not anymore. Well just go and try and win some races be fore the years out, Earnhardt said. That all weve got left. Associated Pressrfrrrntb rrn SEC has 4 of top 5 in AP poll Mar shall, which plays one of the weakest schedules in the country and re alistically has little chance of being part of the football final four. And for those who think Mar shalls chances are much closer to none than slim, lets just say its best to never say never. There are also 17 one-loss teams, from No. 4 Alabama to un ranked Minnesota and Duke that have every right to dream big. I had wouldnt they have a case to play for the na tional championship? The selec tion committees first top 25 comes out Oct. 28, and this race promises to take plenty of twists and turns before the field is set on Dec. 7. For now the Southeastern Con ference is dominating the top of The Associated Press college football poll. The SEC on Sunday became the first conference to hold four of the top five spots in the rankings all from the western division. Glad were not playing the Mis sissippis this year, though I dont know who you want to play over on the western side, said South Car olina Vir ginia and Notre Dames loss at Florida State, to inch up a spot during an off week. The Egg Bowl between Missis sippi State and Ole Miss and the Iron Bowl between Auburn and Alabama, both played the Satur days top-five party, the Seminoles showed again Sat urday night that resiliency breeds good fortune. Behind a brilliant second half by Jameis Winston, and with the help of a late offen sive pass interference penalty against Notre Dame, Florida State escaped again. PREVIEWIts showtime C.J. RISAK Its time to sort things out. The 2014 volleyball season will end for at least one Citrus County team, and there is a chance only one will survive this week as the state district tournaments unfold. The three-team 2A-3 tourney starts at 7 p.m. tonight when Seven Rivers Christian battles First Academy of Leesburg at Ocala St. John Lutheran. On Tuesday at Lecanto High School, the 5A-6 semifinals will be contested, with top-seed Lecanto going against No. 4 Dunnellon at 5:30 p.m. and No. 2 Crystal River taking on No. 3 Citrus at 7 p.m. The finals in 2A-3 will be at 7 p.m. Thursday, with Mondays winner meet ing top-seed St. John Lutheran at St. John. The winners in Tuesdays 5A-6 matches will meet for the district championship at 6 p.m. Thursday at Lecanto. Any team reaching Thursdays dis trict final will advance to the regional round of play, which starts Oct. 28 (2A) and Oct. 29 (5A). Once again in 5A-6, there is no clearcut favorite. Lecanto and Crystal River both went 5-1 in district play. Both beat each other once, while not losing to anyone else in the district.Canes welcome return of King r rf C.J. RISAK It didnt take long to realize this would be a transitional season for the Citrus volleyball team. A year ago, they were 5A-6 champions and host to a first-round regional match. But with so many players from that squad having graduated, including five starters, it was apparent it would take some time to pull it together. Things were beginning to take shape during the Bishop McLaughlin Tournament in mid-September Citrus won two of its three matches played when disaster struck. A girl landed on it, was how Kayla King described the ankle injury she suffered. I sprained it and tore some tissue. I missed about three weeks. MATT PFIFFNER /Chroniclefrnftrn tnrrtrrrrrnr rrfttrrrrrntr nrfftbMATT PFIFFNER /Chroniclefrfrrntrrnrr ftrfnrrrftfrntt See KING/ Page B3 See SHOWTIME/ Page B3 Top 25 pollFor the complete AP Top 25 college football poll, see Page B3.
PAGE 14
B2 Trade for Shields spurred Royals turnaround KANSAS CITY, Mo. The moment Alex Gordon knew the Kansas City Royals were serious about winning can be traced to a cold December day when his wife heard they had traded for James Shields. The franchise had long suf fered through a forgettable cast of starting pitchers, from Jay Wi tasick to Darrell May to Runelvys Hernandez. Hot pros pects flamed out. Free agents fizzled. And every year, the Roy als languished near the AL Cen tral cellar. But things changed in Decem ber 2012. General manager Day ton Moore thought enough pieces had been assembled and all that was missing was the right starting pitcher some one who could be the staff ace and change a clubhouse culture accustomed to losing. Moore called up the Rays and made the deal. Thats when I knew, Gordon said, that we were going for it. Two years later, a trade that was panned by many has helped the Royals reach the World Se ries for the first time since 1985. Shields, who will start Game 1 at home Tuesday night against San Francisco, has been everything Moore had hoped he would be. Those opportunities to ac quire a top rotation starter and an impact pitcher like Wade Davis, theyre not presented year-in and year-out, Moore said. We were fortunate the timing of it was such that it was staring us in the face and put us in a position to compete in 2014. It was a gamble. The Royals sent baseballs top minor league talent, Wil Myers, and a bevy of other promising prospects to the Rays. Shields went 13-9 with a 3.15 ERA last year, helping the Roy als to their best record in more than 20 years. And over the course of the season, Davis es tablished himself as one of the most dominant late-inning re lievers in the game. This year, Shields finished the regular season 14-8 with a 3.21 ERA, helping the Royals return to the postseason for the first time since their only World Se ries. Hes earned the nickname Big Game James for a reason, Royals manager Ned Yost said. On a team with few veterans, Shields has proved invaluable in October. And when he takes the mound against the Giants, hell be drawing on the experi ence he gained in 2008, when he tossed 5 2/3 innings for the Rays against the Phillies in the only other World Series start of his nine-year big league career. Hes been tremendous, said Greg Holland, the Royals AllStar closer. He takes that start ing five as kind of collectively, Hey, we want to be the back bone of this team. We want to throw 200 innings apiece. We want to push each other, learn from each other. I think he also leads by ex ample, club house jovial between games and ratchets up the intensity when its time to compete. Once hes on the mound, he stalks around like a lion, often roaring as he heads back to the dugout after a big strikeout. His teammates took notice, adopting many of his mannerisms. I just try to be myself and hopefully its contagious, Shields said. Thats about it. I mean, I have fun with this game. I feel like Im a grinder. I feel like I have a winning attitude, so hopefully it feeds off these guys, and we have fun with it. Were whats going on in Kansas City, he said. Were going to go out there and play our game and trust our ability to win ballgames. Associated Press rfntbnf fffbf Ilonen beats Stenson Rangers net 2 in 4 seconds rfnt btt ASH, England Mikko Ilonen defeated top-seeded Henrik Stenson 3 and 1 in the final Sun day to win the World Match Play Championship. Ilonen fought back from being 1 down after four holes against the fifth-ranked Swede on the London Club course at Ash in Kent. It was the 34-year-old Finns fifth European Tour vic tory and his second this season after winning the Irish Open. Earlier Sunday, Ilonen ended Joost Luitens unbeaten run by beating the Dutchman 2 and 1 in the semifinals, while Stenson won 1 up at the last hole against George Coetzee of South Africa. While I didnt play so well this morning, I didnt make any mistakes this afternoon against Henrik, said Ilonen. (I) felt like I had a good chance to beat him and I did. Three years ago Ilonens ca reer was in chaos, having sus tained an ankle injury that required surgery and kept him out of the second half of the 2011 season. However, after falling to 334th in the world rankings early in 2012, Ilonen has continued to im prove. He finished 23rd on the money list last year. Stenson seized the early ini tiative, imme diately to stay 3 up at the 14th. However, Stenson then bird , Sten son said. Victory earned Ilonen his highest tour prize of $830,000. In the third-place playoff, Lu iten defeated Coetzee at the first extra hole. Ben Martin wins 1st PGA Tour title in Las Vegas LAS VEGAS Ben Martin made a 45-foot eagle putt on the 16th hole that sent him to his first PGA Tour victory in the Shriners Hospitals for Children Open. Martin was one shot behind Kevin Streelman when his eagle put him back in the lead. He closed with a birdie for a 3-under 68 and a twoshot. Baek wins playoff to take LPGA South Korea INCHEON, South Korea Kyu Jung Baek of South Korea won a three-way playoff to win the LPGAs KEB-HanaBank Championship. Baek shot a final-round 67 to fin ish tied at 10-under 278 with Brittany Lincicome of the United States and compatriot In Gee Chun, then won the first playoff hole on the Ocean Course at the Sky72 Golf Club to take the title. No. 2-ranked Inbee Park of South Korea shot a 67 to finish one stroke behind the leaders. U.S. Womens Open champion Michelle Wie finished two strokes back after a 67 that included three birdies and an eagle on the par-5 No. 5. Wie was playing in her first tournament since withdrawing during the first round of the Evian Champi onship in September after reinjuring a stress fracture in her right hand. Jay Haas wins Greater Hickory Kia Classic CONOVER, N.C. Jay Haas be came the 18th player to win a Cham pions Tour at 60 or older, closing with a 5-under 66 for a two-stroke victory in the Greater Hickory Kia Classic. The 60-year-old former Wake For est Spas Champions Course. He ended a 27-month, 49-event winless streak since June 2012. Players 60 and older have won 22 events on the tour, with Hale Irwin ac complishing the feat three times and Tom Watson and Jimmy Powell doing it twice each. Mike Fetchick is the old est winner at 63 years in the 1985 Hil ton Head Seniors Invitational. Joe Durant and Kirk Triplett tied for second. Durant and Triplett, the winner last week in Cary, shot 66. NEW YORK Martin St. Louis and Rick Nash scored 4 seconds apart late in the second period, tying a New York Rangers record, and Henrik Lund qvist made 33 saves Sun day in a 4-0 victory over the San Jose Sharks.. Kings 2, Wild 1 LOS ANGELES Jona than Quick made 40 saves and Tyler Toffoli had a goal and an assist in the Los Angeles Kings fourth straight victory, 2-1 over the Minnesota Wild. Niklas Backstrom stopped 14 shots and Matt Cooke scored with 13:13 to play for the Wild, who lost back-toback games in Southern Cali fornia despite largely impressive performances. Flames 4, Jets 1 WINNIPEG, Manitoba Mason Raymond had a goal and two assists, and the Cal gary Flames beat Winnipeg 4-1 to hand the Jets their fourth straight loss. Dennis Wideman, Johnny Gaudreau and Raymond scored against Ondrej Pavelec in a span of 6 min utes, 42 seconds, during the second period as Calgary (4-3-0) finished a six-game road trip with its fourth win away from home. Jonas Hiller stopped 34 shots for the Flames. Ducks 3, Blues 0 ANAHEIM, Calif. Sami Vatanen scored two powerplay goals and Frederik Ander sen stopped 27 shots in his first career shutout, leading the Anaheim Ducks to a 3-0 victory over the St. Louis Blues. Vatanen had his first career multigoal game, scoring on a pair of blistering one-timers as the Ducks (5-1-0) won their fifth straight. Associated Press ntnn fnnnn
PAGE 15
AIRWAVES TODAYS SPORTS MAJOR LEAGUE BASEBALL 2 p.m. (MLB) League Championship Series: Teams TBA (taped) NBA PRESEASON BASKETBALL 7 p.m. (NBA) Chicago Bulls at Cleveland Cavaliers COLLEGE FOOTBALL 1 p.m. (FSNFL) Washington at Oregon (taped) 7 p.m. (FSNFL) Oklahoma State at Texas Christian (taped) 8 p.m. (ESPNU) Kentucky at LSU (taped) 12 a.m. (ESPNU) Furman at South Carolina (taped) 2:30 a.m. (ESPNU) Arkansas vs. Georgia (taped) NFL FOOTBALL 8:15 p.m. (ESPN) Houston Texans at Pittsburgh Steelers NHL HOCKEY 9:30 p.m. (SUN) Tampa Bay Lightning at Edmonton Oilers SOCCER 3 p.m. (NBCSPT) English Premier League: West Bromwich Albion FC vs Manchester United FC 7:30 p.m. (FS1) Womens CONCACAF World Cup Qualifying: Haiti vs USA TENNIS 7:30 a.m. (TENNIS) WTA Championships, Round 1 12:30 p.m. (TENNIS) ATP Swiss Indoors Basel, Early Rounds 4:30 p.m. (TENNIS) ATP Valencia Open, Early Round (sameday tape) Note: Times and channels are subject to change at the discretion of the network. If you are unable to locate a game on the listed channel, please contact your cable provider. CALENDAR TODAYS PREP SPORTS VOLLEYBALL 2A-3 District Semifinals at St. John Lutheran 7 p.m. Seven Rivers Christian vs. First Academy Leesburg BOYS GOLF 1A-3 Regional Meet at Chiefland Golf & CC 8 a.m. Seven Rivers Christian GIRLS GOLF 1A-3 Regional Meet at Chiefland Golf & CC 8 a.m. Seven Rivers Christian In a season that lasts perhaps eight weeks, thats a sizable segment, par ticularly to a team trying to rebuild. And King was a valuable asset: She was leading the Hurricanes in kills per match when she went down. She tried to help during her time spent on the sidelines, on crutches with a stationary boot wrapped around her injured ankle. And she tried to stay positive, even as her team lost five of the eight matches it played. It was fun watching my team learn ing to work together, King said. And I tried to help them, improve them, but . It was fun watching them, but at the same time I wanted to be out there with them. King has become what she looked up to last season a leader. The only players with considerable experience returning from last seasons title team are herself, Jordan Josey and Morgan Cleary. Josey, who plays middle blocker, felt Kings absence in particular. It just takes one more person away that I was comfortable with, knowing she was going to have my back, Josey said. Her and Morgan (Cleary) were the only people I played with last year. So it just takes one more person away that I knew would be there, if I block (someone) to cover behind me, to be there to set or to have another person to hit. Hitting is something King does es pecially well, despite being maybe 5-5, according to her coach, Sandy VanDervort. In a four-set loss to Seven Rivers Christian last week, King showed it with nine kills. Shes aver aging eight kills per match. Her presence back on the court was also evident in Joseys play against the Warriors the senior co-captain was able to roam around the net more freely, collecting nine kills and nine blocks. VanDervort cant say enough about Kings value to the team. Shes huge. Shes a leader, and its not just her play, she said. Shes a leader because shell help others. Shell tell them if they need to make a change right on the court. A lot of times players will look to her if they hit the ball into the net or something, and shell just real quickly tell them what they did wrong. Shes a real easy go-to person, they look up to her. They know shes a strong player. Shes just a junior, but King knows her position. I feel like they rely on me, like Im a senior, so they look to me more, she said. So Im working with them. Like last year I looked up to the seniors, this year they look up to me and Jor dan and Morgan. And she is good. She just has an incredible vertical, which just gives her that elevation and makes her look bigger than she is, VanDervort said. But she can get very low when she needs to and she can get very high when she needs to. And yet, like her teammates, shes had to adjust to a change in position, making this team a work still in progress. Shes pretty consistent, VanDer vort said. She will be more consis tent, its her just getting back into it. And I have changed things up. Last week I had her hitting from strong side, this week I changed her to weak side because thats where the team needs her. She can be more effective there both on offense and defense. I believe this is the combination where theyre the strongest. Its district tournament time start ing Tuesday and King knows what Cit rus must do. We need to talk a lot, weve had some issues, but we just need to work together and trust each other. If we work hard and play like we did (against Seven Rivers), I believe we can (advance in district play). I just need to make sure I stay pos itive all the time and work together with my team and talk a lot. Nows the time to come together. The AP Top 25 The Top 25 teams in The Associated Press college football poll, with first-place votes in parentheses, records through Oct. 18, total points based on 25 points for a first-place vote through one point for a 25th-place vote, and previous ranking: Recor d. AUTO RACING NASCAR Sprint Cup GEICO 500 Sunday At Talladega Superspeedway Talladega, Ala. Lap length: 2.66 miles (Start position in parentheses) 1. (5) Brad Keselowski, Ford, 194 laps, 118.4 rating, 47 points, $288,361. 2. (13) Matt Kenseth, Toyota, 194, 71.7, 43, $228,207. 3. (33) Clint Bowyer, Toyota, 194, 78, 41, $180,329. 4. (29) Landon Cassill, Chevrolet, 194, 85.3, 0, $129,475. 5. (11) Ryan Newman, Chevrolet, 194, 59.8, 40, $134,521. 6. (7) Travis Kvapil, Chevrolet, 194, 76.6, 38, $122,860. 7. (18) Kurt Busch, Chevrolet, 194, 110.6, 37, $102,115. 8. (26) Marcos Ambrose, Ford, 194, 98.8, 36, $130,125. 9. (39) Kevin Harvick, Chevrolet, 194, 94.1, 36, $134,261. 10. (19) Casey Mears, Chevrolet, 194, 84.3, 34, $121,919. 11. (40) Joey Logano, Ford, 194, 87.1, 33, $131,544. 12. (8) Kasey Kahne, Chevrolet, 194, 104.4, 33, $108,086. 13. (30) Austin Dillon, Chevrolet, 194, 67.2, 31, $142,697. 14. (36) Reed Sorenson, Chevrolet, 194, 78.5, 30, $105,973. 15. (22) Cole Whitt, Toyota, 194, 67.5, 30, $89,940. 16. (34) Michael Waltrip, Toyota, 194, 55.7, 28, $96,686. 17. (42) Kyle Larson, Chevrolet, 194, 96.4, 27, $114,681. 18. (38) Denny Hamlin, Toyota, 194, 63.2, 27, $96,536. 19. (27) Danica Patrick, Chevrolet, 194, 79.9, 26, $95,661. 20. (1) Brian Vickers, Toyota, 194, 47.1, 24, $129,594. 21. (15) Carl Edwards, Ford, 194, 45.9, 23, $102,511. 22. (4) Ryan Blaney, Ford, 194, 73.9, 0, $82,940. 23. (3) AJ Allmendinger, Chevrolet, 194, 51.3, 21, $100,273. 24. (2) Jimmie Johnson, Chevrolet, 194, 118.2, 22, $142,859. 25. (24) Greg Biffle, Ford, 194, 66.4, 20, $128,002. 26. (43) Jeff Gordon, Chevrolet, 194, 50.5, 19, $129,197. 27. (12) Martin Truex Jr., Chevrolet, 194, 87.7, 18, $115,252. 28. (35) Josh Wise, Ford, 194, 49.7, 16, $84,440. 29. (21) David Gilliland, Ford, 194, 57, 16, $104,419. 30. (25) David Ragan, Ford, 194, 62.9, 15, $103,633. 31. (28) Dale Earnhardt Jr., Chevrolet, 194, 98.5, 14, $91,931. 32. (16) Trevor Bayne, Ford, 194, 72.7, 0, $80,640. 33. (9) Terry Labonte, Ford, 193, 33.3, 11, $80,490. 34. (37) Tony Stewart, Chevrolet, accident, 190, 46.3, 11, $115,436. 35. (31) Jamie McMurray, Chevrolet, 189, 63.6, 10, $117,900. 36. (20) Paul Menard, Chevrolet, accident, 188, 71.3, 8, $108,439. 37. (10) Michael Annett, Chevrolet, accident, 187, 73.9, 7, $79,821. 38. (23) Mike Wallace, Toyota, 186, 26.6, 0, $74,805. 39. (17) Aric Almirola, Ford, 166, 56.3, 5, $108,312. 40. (41) Kyle Busch, Toyota, 145, 31.1, 4, $115,217. 41. (6) Michael McDowell, Ford, accident, 127, 44.4, 3, $62,805. 42. (32) J.J. Yeley, Toyota, accident, 102, 56.3, 0, $58,805. 43. (14) Alex Bowman, Toyota, accident, 102, 44, 1, $55,305. BASEBALL WORLD SERIES (Best-of-7) All games televised by Fox Tuesday, Oct. 21: San Francisco (Bumgarner 18-11) at Kansas City (Shields 14-8), 8:07 p.m. Wednesday, Oct. 22: San Francisco (Peavy 6-4) at Kansas City (Ventura 14-10), 8:07 p.m. Friday, Oct. 24: Kansas City at San Francisco (Hudson 9-13), 8:07 p.m. Saturday, Oct. 25: Kansas City at San Fran cisco (Vogelsong 8-13), 8:07 p.m. x-Sunday, Oct. 26: Kansas City at San Fran cisco, 8:07 p.m. x-Tuesday, Oct. 28: San Francisco at Kansas City, 8:07 p.m. x-Wednesday, Oct. 29: San Francisco at Kansas City, 8:07 p.m. HOCKEY NHL standings EASTERN CONFERENCE Atlantic Division GP W L O T Pts GF GA Montreal 6 5 1 0 10 20 20 Ottawa 5 4 1 0 8 14 10 Tampa Bay 5 3 1 1 7 17 10 Detroit 5 3 1 1 7 11 8 Boston 7 3 4 0 6 15 17 Toronto 6 2 3 1 5 15 19 Florida 5 1 2 2 4 5 11 Buffalo 6 1 5 0 2 8 22 Metropolitan Division GP W L O T Pts GF GA N.Y. Islanders 5 4 1 0 8 20 15 Washington 5 3 0 2 8 18 11 Pittsburgh 4 3 1 0 6 16 10 Columbus 5 3 2 0 6 15 12 New Jersey 5 3 2 0 6 17 16 N.Y. Rangers 6 3 3 0 6 17 20 Philadelphia 5 1 2 2 4 17 21 Carolina 4 0 2 2 2 10 15 WESTERN CONFERENCE Central Division GP W L O T Pts GF GA Nashville 5 3 0 2 8 12 8 Chicago 4 3 0 1 7 12 7 Dallas 5 2 1 2 6 15 17 St. Louis 5 2 2 1 5 12 9 Minnesota 4 2 2 0 4 10 4 Colorado 6 1 4 1 3 9 20 Winnipeg 5 1 4 0 2 8 15 Pacific Division GP W L O T Pts GF GA Anaheim 6 5 1 0 10 21 13 Los Angeles 6 4 1 1 9 15 10 San Jose 6 4 1 1 9 20 15 Calgary 7 4 3 0 8 19 17 Vancouver 4 3 1 0 6 13 10 Arizona 4 2 2 0 4 13 18 Edmonton 5 0 4 1 1 11 25 NOTE: Two points for a win, one point for overtime loss. Sundays Games Los Angeles 2, Minnesota 1 N.Y. Rangers 4, San Jose 0 Calgary 4, Winnipeg 1 Anaheim 3, St. Louis 0 Todays Games Tampa Bay at Edmonton, 9:30 p.m. POINT SPREADS Major League Baseball World Series Tomorrow FAVORITE LINE UNDERDOG LINE San Francisco -110 at Kansas City +100 Odds to Win Series Kansas City -110 San F rancisco -110 NCAA Football Tomorrow FAVORITE OPEN TODAY O/U UNDERDOG Arkansas St. 1 2 at La.-Laf ayette Thursday at East Carolina 26 26 UConn at Virginia Tech 3 1 Miami Friday at South Alabama 12 13 T roy at Cincinnati 11 11 South Flor ida at Boise St. 6 6 BYU Oregon 18 17 Calif ornia-x Saturday at Auburn 16 17 South Carolina N. Illinois 20 20 at E. Michigan Mississippi St. 14 14 at K entucky at Clemson 15 15 Syr acuse Minnesota 6 6 at Illinois Akron 1 1 at Ball St. Cent. Michigan 3 3 at Buff alo at W. Michigan 9 9 Ohio Boston College 12 12 at Wake Forest at UCF 11 10 T emple at Virginia 5 6 Nor th Carolina at Pittsburgh 3 3 Georgia T ech at Wisconsin 11 11 Mar yland at Missouri 21 20 V anderbilt at Navy 8 8 San Jose St. at Utah St. OFF OFF UNL V at Toledo 14 14 UMass UCLA 14 13 at Color ado at Nebraska 17 17 Rutgers at Miami (Ohio) 5 6 K ent St. at Arkansas 23 22 U AB at TCU 21 20 T exas Tech Memphis 22 22 at SMU at Stanford 14 13 Oregon St. Alabama 16 16 at T ennessee at Michigan St. 17 16 Michigan at Colorado St. 18 18 W yoming Mississippi 3 3 at LSU Arizona 3 3 at W ashington St. Southern Cal 1 1 at Utah Georgia Southern 15 15 at Georgia St. at Rice 14 15 Nor th Texas Louisiana Tech 10 10 at Souther n Miss. at UTSA 11 11 UTEP at Marshall 27 27 F AU at La.-Monroe 3 3 T exas St. at W. Kentucky 11 11 Old Dominion at Oklahoma St. 3 3 W est Virginia at Kansas St. 9 10 T exas Ohio St. 13 13 at P enn St. at Washington OFF OFF Ar izona St. Nevada 4 4 at Ha waii x-at Santa Clara, Calif. Off Key Utah St. QB questionable Arizona St. and Washington QBs questionable NFL Tonight FAVORITE OPEN TODAY O/U UNDERDOG at Pittsburgh 4 3 (44) Houston Thursday at Denver 6 6 (53) San Diego Sunday Detroit-x 3 3 (47) Atlanta at Tampa Bay 2 2 (42) Minnesota at New England 6 7 (49) Chicago at Kansas City 6 6 (43) St. Louis Seattle 3 3 (44) at Carolina at N.Y. Jets 2 3 (41) Buff alo Miami 4 4 (43) at Jville Houston 2 2 (43) at Tenn. at Cincinnati 2 2 (46) Baltimore at Arizona 2 2 (48) Philadelphia Indianapolis 2 2 (48) at Pitt. at Cleveland 7 7 (43) Oakland at New Orleans 2 1 (54) Green Ba y Oct. 27 at Dallas 7 9 (49) W ash. x-at London NHL FAVORITE LINE UNDERDOG LINE Tampa Bay -160 at Edmonton +140 TRANSACTIONS BASKETBALL National Basketball Association SACRAMENTO KINGS Exercised the 2015-16 contract option on G Ben McLemore. HOCKEY National Hockey League CAROLINA HURRICANES Recalled F Patrick Brown from Charlotte (AHL). TAMPA BAY LIGHTNING Recalled F Jonathan Drouin and D Luke Witkowski from Syracuse (AHL). VANCOUVER CANUCKS Assigned C Bo Horvat to Utica (AHL). KING Continued from Page B1 LOTTERY rfntnfb CASH 3 (early) 9 1 2 CASH 3 (late) 7 4 8 PLAY 4 (early) 3 2 3 9 PLAY 4 (late) 6 9 1 1 FANTASY 5 5 16 23 35 36 Players should verify winning numbers by calling 850-487-7777 or at. rffnb Powerball: 20 26 27 36 54 Powerball: 19 5-of-5 PB No winner No Florida winner 5-of-5 3 winners $1 million No Florida winner Lotto: 1 21 25 27 29 40 6-of-6 No winner 5-of-6 17 $6,712 4-of-6 1,582 $75.50 3-of-6 34,022 $5 Fantasy 5: 8 16 17 21 30 5-of-5 1 winner $248,175.46 4-of-5 349 $1 14.50 3-of-5 10,939 $10 B3 Lecanto got the top seed and will go against winless Dunnellon. Consider ing the Tigers failed to win a single set against any district rival shows how dominant the Panthers should be in this match. Shannon Fernandez, Olivia Grey and Annalee Garcia provide an offensive spark that wont be easy for Dunnellon to stop. Citrus versus Crystal River is an other matter. A year ago, Citrus was in the title mix when all three teams went 4-2 in district play. Citrus eventually emerged with the crown, Crystal River finishing second. But the Hurricanes are in a transi tion year, with only two seniors on the team. Outside hitter Kayla King, a key part of the Citrus attack, has returned after missing three weeks with an ankle injury. She teams with Jordan Josey and Cheyann Reneer to give the Hurricanes a solid attack. However, defensive lapses persist. Crystal River, led by outside hitter Cassidy Wardlow and middle hitters Abby Epstein and Kaylan Simms, will be difficult to stop offensively, and the Pirate defense has improved signifi cantly since the seasons start. Expect this to be a good match. Crys tal River has had its moments this sea son, both good and bad and sometimes in the same match. Last week against Seven Rivers the Pirates won two tight sets against the Warriors, then got progressively worse as Seven Rivers fought back to win in five sets. Crystal River has the ability, but it cant let up too early. Now for what is arguably one of the most competitive districts, even if there are just three teams. St. John, First Academy and Seven Rivers had a combined 55-12 record this season, with St. John capturing the 2A-3 regular-season title with a 4-0 record. First Academy and Seven Rivers were both 1-3. Seven Rivers, which dominated all three of its county rivals, has a very dif ficult task ahead. The Warriors, 18-7 overall, must beat First Academy (17-4) tonight to at least qualify for regionals. That wont be easy they lost to the Eagles in three close sets in their first meeting, then beat them in five sets in their second. How effective Alyssa Gage and the improving Julia Eckart can both at tack and block will be imperative. And of course, any attack starts with passes from both the defense, led by Tessa Kacer, and setter Kim Iwaniec, whose role has expanded during the season. Watch out for First Academys Emma Gray and Victoria Gause, who lead their team in kills, usually set by Alyssa Rojas. Warriors sweep the county Seven Rivers did indeed manage it. The Warriors accomplished their goal of sweeping all their matches against county foes, beating all three of them last week. Counting a pair of wins at the Bishop McLaughlin Tournament, Seven Rivers went 8-0 against Le canto, Crystal River and Citrus. As coach Wanda Grey said, one of the teams major pre-season goals was to beat their county rivals every time. The Warriors did just that. County regular season leaders TEAM RECORDS Lecanto, 14-6 overall, 5-1 in 5A-6; Crystal River, 15-10 overall, 5-1 in 5A-6; Citrus, 7-13 overall, 2-4 in 5A-6; Seven Rivers Christian, 18-7 overall, 1-3 in 2A-3. INDIVIDUAL STATISTICS KILLS: Alyssa Gage (Seven Rivers), 359 (14.4 per match); Cassidy Wardlow (Crystal River), 218 (9.1); Kayla King (Citrus), 94 (7.8); Abby Epstein (Crystal River), 190 (7.6); Julia Eckart (Seven Rivers), 186 (7.4). KILL PERCENTAGE: Epstein (Crystal River), .353; Gage (Seven Rivers), .350; Julia Eckart (Seven Rivers), .326; Wardlow (Crystal River), .285; Kaylan Simms (Crystal River), .274. ASSISTS TO KILLS: Kim Iwaniec (Seven Rivers), 519 (20.8 per match); Katie Eichler (Crystal River), (18.2); Shannon Fernandez (Lecanto), 212 (12.5); Gage (Seven Rivers), 257 (10.3); Natalie Dodd (Citrus), 126 (6.3). BLOCKS: Epstein (Crystal River), 96 (3.8 per match); Kaylan Simms (Crystal River), 61 (3.6 17 matches); Cheyann Reneer (Cit rus), 64 (3.2); Gage (Seven Rivers), 78 (3.1); DeeAnna Mohering (Lecanto), 45 (2.6). DIGS: Wardlow (Crystal River), 319 (13.3 per match); Kim Iwaniec (Seven Rivers), 292 (11.7); Tessa Kacer (Seven Rivers), 290 (11.6); Gage (Seven Rivers), 274 (11.0); Erin Smilgen (Lecanto), 180 (10.6); Eichler (Crystal River), 241 (9.6). SERVING ACES: Garcia (Lecanto), 50 (2.5); Eckart (Seven Rivers), 59 (2.4); Iwaniec (Seven Rivers), 59 (2.4); Wardlow (Crystal River), 53 (2.2); Epstein (Crystal River), 50 (2.0). SHOWTIME Continued from Page B1 FOOTBALL Citrus County Speedway opens Saturday for racing The Citrus County Speedway is re opening next Saturday, Oct. 25, for a night of racing that will include six car classes as well as Halloween festivities. Non-Winged Sprints, Modified Mini Stocks, Pure Stocks, Street Stocks, Mini Stocks and Pro Hornets will race for points, and there will be contests for Hal loween car decorating and kids cos tumes as part of the Inverness tracks Trick-or-Treat Night festivities. The Sprints, which are currently led by Herb Neumann Jr., a longtime racing vet eran from Inverness who holds 14 com bined stock car championship, will wrap up its points season with a 30-lapper. The remaining five divisions will go 25 laps and then return for one more night of points racing on Nov. 8. The Sportsman division will headline the group with a 50-lap points event on that latter date. Speedway lease owner and promoter Gary Laplant announced the cancellation of the remainder of the season on Sept. 30, citing personal and financial reasons, but noted the possibility for more races before the end of the year. The track is seeking sponsorships for races on the pair of nights. Grandstand admissions are $13 for adults, $9 for students, seniors and military personnel, and $5 for children age 11 and under (children under 42 inches in height get in free). Heat races begin at 6 p.m. See for more details. Sean Arnold, correspondent SPORTS BRIEF
PAGE 16
AMERICAN CONFERENCE East W L T Pct PF P A Home A way AFC NFC Div New England 5 2 0 .714 187 154 3-0-0 2-2-0 4-2-0 1-0-0 2-1-0 Buffalo 4 3 0 .571 135 142 2-2-0 2-1-0 1-3-0 3-0-0 1-1-0 Miami 3 3 0 .500 147 138 1-2-0 2-1-0 2-2-0 1-1-0 1-1-0 N.Y. Jets 1 6 0 .143 121 185 1-3-0 0-3-0 1-3-0 0-3-0 0-1-0 South W L T Pct PF P A Home A way AFC NFC Div Indianapolis 5 2 0 .714 216 136 3-1-0 2-1-0 5-1-0 0-1-0 3-0-0 Houston 3 3 0 .500 132 120 2-1-0 1-2-0 2-1-0 1-2-0 0-1-0 Tennessee 2 5 0 .286 121 172 1-2-0 1-3-0 2-3-0 0-2-0 1-1-0 Jacksonville 1 6 0 .143 105 191 1-2-0 0-4-0 1-4-0 0-2-0 0-2-0 North W L T Pct PF P A Home A way AFC NFC Div Baltimore 5 2 0 .714 193 104 3-1-0 2-1-0 2-2-0 3-0-0 2-1-0 Cincinnati 3 2 1 .583 134 140 2-0-1 1-2-0 2-2-0 1-0-1 1-0-0 Pittsburgh 3 3 0 .500 124 139 1-1-0 2-2-0 2-2-0 1-1-0 1-2-0 Cleveland 3 3 0 .500 140 139 2-1-0 1-2-0 2-3-0 1-0-0 1-2-0 West W L T Pct PF P A Home A way AFC NFC Div Denver 5 1 0 .833 189 121 4-0-0 1-1-0 3-0-0 2-1-0 1-0-0 San Diego 5 2 0 .714 184 114 3-1-0 2-1-0 4-1-0 1-1-0 1-1-0 Kansas City 3 3 0 .500 142 121 1-1-0 2-2-0 3-2-0 0-1-0 1-1-0 Oakland 0 6 0 .000 92 158 0-4-0 0-2-0 0-5-0 0-1-0 0-1-0 NATIONAL CONFERENCE East W L T Pct PF P A Home A way NFC AFC Div Dallas 6 1 0 .857 196 147 3-1-0 3-0-0 4-1-0 2-0-0 1-0-0 Philadelphia 5 1 0 .833 183 132 4-0-0 1-1-0 3 South W L T Pct PF P A Home A way NFC AFC Div Carolina 3 3 1 .500 158 195 2-1-0 1-2-1 3-1-0 0-2-1 1-0-0 New Orleans 2 4 0 .333 155 165 2-0-0 0-4-0 2-3-0 0-1-0 1-1-0 Atlanta 2 5 0 .286 171 199 2-1-0 0-4-0 2-3-0 0-2-0 2-0-0 Tampa Bay 1 5 0 .167 120 204 0-3-0 1-2-0 0-4-0 1-1-0 0-3-0 North W L T Pct PF P A Home A way NFC AFC Div Detroit 5 2 0 .714 140 105 3-1-0 2-1-0 4-1-0 1-1-0 2-0-0 Green Bay 5 2 0 .714 199 147 3-0-0 2-2-0 3-2-0 2-0-0 2-1-0 Chicago 3 4 0 .429 157 171 0-3-0 3-1-0 2-2-0 1-2-0 0-1-0 Minnesota 2 5 0 .286 120 160 1-2-0 1-3-0 2-3-0 0-2-0 0-2-0 West W L T Pct PF P A Home A way NFC AFC Div Arizona 5 1 0 .833 140 119 3-0-0 2-1-0 3-0-0 2-1-0 1-0-0 San Francisco 4 3 0 .571 158 165 2-1-0 2-2-0 3-2-0 1-1-0 1-1-0 Seattle 3 3 0 .500 159 141 2-1-0 1-2-0 2-2-0 1-1-0 0-1-0 St. Louis 2 4 0 .333 129 176 1-3-0 1-1-0 2-4-0 0-0-0 1-1-0 Jaguars 24, Browns 6 Cleveland 3 3 0 0 6 Jacksonville 0 7 3 14 24 First Quarter CleFG Cundiff 40, 6:30. Second Quarter CleFG Cundiff 22, 4:16. JaxA.Robinson 31 pass from Bortles (Scobee kick), :27. Third Quarter JaxFG Scobee 30, 10:00. Fourth Quarter JaxD.Robinson 8 run (Scobee kick), 5:58. JaxJohnson 3 run (Scobee kick), 4:35. ACleveland, Tate 16-36, Crowell 7-18, West 5-8, Hawkins 1-8, Hoyer 1-(minus 1). Jacksonville, D.Robinson 22-127, Bortles 5-37, Johnson 6-16, Lee 2-5. PASSINGCleveland, Hoyer 16-41-1-215. Jacksonville, Bortles 17-31-3-159. RECEIVINGCleNone. Lions 24, Saints 23 New Orleans 0 10 7 6 23 Detroit 0 3 7 14 24 Second Quarter NOJohnson 13 pass from Brees (S. Graham kick), 13:56. DetFG Prater 21, 5:29. NOFG S.Graham 27, :00. Third Quarter NOStills 46 pass from Brees (S.Graham kick), 13:42. DetBell 1 run (Prater kick), 6:19. Fourth Quarter NOFG S.Graham 48, 13:33. NOFG S.Graham 36, 5:24. DetTate 73 pass from Stafford (Prater kick), 3:38. DetFuller 5 pass from Stafford (Prater kick), 1:48. ANew Orleans, K.Robinson 3-26, Ingram 10-16, Thomas 6-13, Brees 1-13, Johnson 1-5. Detroit, Bell 18-48, Bush 4-10, Stafford 2-1. PASSINGNew Orleans, Brees 28-45-1342. Detroit, Stafford 27-40-2-299. RECEIVINGNewNone. Ravens 29, Falcons 7 Atlanta 0 0 0 7 7 Baltimore 7 10 3 9 29 First Quarter BalDaniels 5 pass from Flacco (Tucker kick), 11:47. Second Quarter BalPierce 1 run (Tucker kick), 2:38. BalFG Tucker 38, :00. Third Quarter BalFG Tucker 38, 11:28. Fourth Quarter AtlWhite 4 pass from Ryan (Bryant kick), 7:12. BalSuggs safety, 3:44. BalT.Smith 39 pass from Flacco (Tucker kick), 1:46. AAtlanta, S.Jackson 8-22, Freeman 2-20, Smith 3-10, Rodgers 2-9, Ryan 1-7. Baltimore, Forsett 23-95, Pierce 8-21, Flacco 1-4, Taliaferro 4-3. PASSINGAtlanta, Ryan 29-44-0-228. Baltimore, Flacco 16-25-2-258. RECEIVINGAtlAtlanta, Bryant 57 (SH). Redskins 19, Titans 17 Tennessee 3 7 0 7 17 Washington 3 3 7 6 19 First Quarter WasFG Forbath 31, 10:08. TenFG Succop 36, 3:41. Second Quarter WasFG Forbath 31, 7:34. TenWright 14 pass from Whitehurst (Succop kick), 1:04. Third Quarter WasGarcon 70 pass from McCoy (Forbath kick), 12:27. Fourth Quarter WasFG Forbath 27, 13:27. TenHagan 38 pass from Whitehurst (Succop kick), 7:41. WasFG Forbath 22, :00. A,227. T en WTennessee, Sankey 16-56, Whitehurst 2-10, L.Washington 1-8, Battle 2-3, McCluster 1-(minus 1). Washington, Morris 1854, Helu Jr. 5-29, Young 1-14, McCoy 2-3. PASSINGTennessee, Whitehurst 17-26-1160. Washington, Cousins 10-16-1-139, McCoy 11-12-0-128. RECEIVINGTennessee,None. Rams 28, Seahawks 26 Seattle 3 3 7 13 26 St. Louis 7 14 0 7 28 First Quarter SeaFG Hauschka 24, 9:01. StLMason 6 run (Zuerlein kick), 5:19. Second Quarter StLCunningham 5 pass from A.Davis (Zuerlein kick), 13:12. StLBailey 90 punt return (Zuerlein kick), 7:05. SeaFG Hauschka 35, :07. Third Quarter SeaWilson 19 run (Hauschka kick), 4:22. Fourth Quarter SeaHelfet 19 pass from Wilson (pass failed), 9:44. StLKendricks 4 pass from A.Davis (Zuerlein kick), 5:36. SeaBaldwin 9 pass from Wilson (Hauschka kick), 3:18. ASeattle, Wilson 7-106, Lynch 18-53, Turbin 2-7, Michael 2-5. St. Louis, Mason 18-85, Austin 5-16, Cunningham 2-3, A.Davis 2-(minus 2). PASSINGSeattle, Wilson 23-36-0-313. St. Louis, A.Davis 18-21-0-152, Hekker 1-1-0-18. RECEIVINGSt. Louis, Zuerlein 52 (WR). Packers 38, Panthers 17 Carolina 0 3 0 14 17 Green Bay 21 7 10 0 38 First Quarter GBNelson 59 pass from A.Rodgers (Crosby kick), 11:51. GBLacy 5 run (Crosby kick), 5:53. GBStarks 13 run (Crosby kick), 2:07. Second Quarter GBCobb 3 pass from A.Rodgers (Crosby kick), 4:07. CarFG Gano 33, :00. Third Quarter GBD.Adams 21 pass from A.Rodgers (Crosby kick), 10:20. GBFG Crosby 34, :08. Fourth Quarter CarBenjamin 13 pass from Newton (Gano kick), 9:39. CarBersin 1 pass from Anderson (Gano kick), 1:24. ACarCarolina, Newton 17-31-1-205, Anderson 5-8-0-43. Green Bay, A.Rodgers 19-22-0-255, Flynn 0-2-0-0. RECEIVINGCarNone. Sundays milestones Peyton Manning broke Brett Favres record for touchdown passes with his 509th. The mile stone touchdown pass was an 8-yarder to Demaryius Thomas with 3:09 left in the first half that gave Denver a 21-3 lead over San Francisco. Mannings team mates played keep-away with the milestone memento before Man ning got the ball and congratula tions from his teammates. Manning went into the game with 506 and needed just four drives to break the record. He threw a 3-yard TD pass to Emmanuel Sanders on Denvers first drive and tied the record when Wes Welker took a pass over the mid dle for 39 yards. Manning reached the milestone in his 246th regular-season game. Favre needed 302. The Colts improved to 5-2 with a 27-0 rout of the Bengals, earning the 500th victory in franchise history. The Colts record is 500-444-7. It was Indys first shutout since beating Tennessee in the final game of the 2008 season. ... Jacksonville beat Cleveland 24-6 for its first victory since beating Houston on Dec. 15, 2013. ... The Lions over came a 14-point deficit en route to a 24-23 come-from-behind win against New Orleans. It marked the 10th time an NFL team has come back from a deficit of at least 14 points to win in 2014, al ready tied for the second most such comebacks through Week 7 of any season since at least 1970. ... Colts wide receiver Reg gie Wayne had four catches for 15 yards and became the ninth player in NFL history with 14,000 yards receiving. ... The Colts Ahmad Bradshaw leads all run ning backs with six TD catches this season. He is the first run ning back with six touchdown catches in his teams first seven games of a season since San Di eg. ... The Lions Matthew Stafford tied Bobby Layne for Detroits career lead with his 118th touchdown pass. Jags in win column JACKSONVILLE Denard Robinson ran for a career-high 127 yards and a touchdown, Jackson vill games first touchdown. It was really all the Jaguars needed on a day in which coach Gus Bradleys defense deliv ered time and time again. The Browns (3-3) settled for field goals in two trips inside the 20-yard line and failed to convert on fourth-and-1 at the 24. Cleveland, which entered the game with the leagues third-best rushing attack, was held in check. The Browns ran 30 times for 69 yards. Lions 24, Saints 23 DETROIT Matthew Stafford threw two touchdown passes in the final 3:38, including the winner to Corey Fuller with 1:48 remaining, and the Detroit Lions rallied for a 24-23 victory over the New Orleans Saints. The Saints (2-4) were in control late in the fourth quarter when Stafford found Golden Tate for a 73-yard catch. Redskins 19, Titans 17 LANDOVER, Md. Kai Forbath kicked a 22-yard field goal on the last play of the game, and Colt McCoy stepped in after Kirk Cousins was benched at halftime to lead the Wash ington Redskins to a 19-17 win over the Tennessee Titans. McCoy completed 11 of 12 passes for 128 yards and a touchdown in his first meaningful role in a win since Nov. 20, 2011, when he led the Cleve land Browns to a 14-10 victory over the Jacksonville Jaguars. The Redskins snapped a four-game losing streak to improve to 2-5. The Titans fell to 2-5. McCoys first pass was a careerlong 70-yard touchdown to Pierre Gar con after the Redskins trailed 10-6 at halftime. Charlie Whitehurst was 17 for 26 for 160 yards with two touchdowns and one interception for the Titans. Rams 28, Seahawks 26 ST. LOUIS Punter Johnny Hek kers pass from the St. Louis 18 caught the Seattle Seahawks by surprise for the last of three big plays by Rams special teams in a 28-26 victory over the defending Super Bowl champions. Stedman Bailey had a 90-yard touchdown on a trick return that fooled the Seahawks into thinking another player was going to catch the punt, and Benny Cunninghams 75-yard kickoff return set up an early touchdown for the Rams (2-4). Russell Wilson rushed for 106 yards on seven carries and also passed for two touchdowns while going 23 for 36 for 313 yards. The Seahawks (3-3) dominated sta tistically, outgaining the Rams 463272. Doug Baldwins 9-yard reception cut the deficit to two with 3:18 to go, but the Rams were able to run out the clock after Hekkers completion to Cunningham. Packers 38, Panthers 17 GREEN BAY, Wis. Aaron Rodg ers threw for 255 yards and three touchdowns, Randall Cobb torched the Carolina secondary for 121 yards re ceiving and the Green Bay Packers routed the Carolina Panthers 38-17. Sure-tackling Green Bay (5-2) lim ited quarterback Cam Newton in the first half. The Packers scored touch downs rushing touchdowns in the first half. Rodgers was 19 of 22 in carving up a Carolina secondary playing without starting cornerback Josh Norman. Ravens 29, Falcons 7 BALTIMORE Elvis Dumervil and Pernell McPhee each had two sacks, part of a dominant defensive perfor mance that carried the Baltimore Ra vens past the Atlanta Falcons 29-7. to make it 20-7. It was Atlan tas first fourth-quarter score in five games. Terrell Suggs sacked Ryan for a safety with 3:39 left and Joe Flacco threw a 39-yard touchdown pass to Torrey Smith on a fourth-and-9 to seal Baltimores fourth win in five games. The Ravens allowed only four first downs in the pivotal first half and fin ished with five sacks in dealing the Falcons their fourth straight defeat. Associated Press Jacksonville Jaguars outside linebacker Telvin Smith (50) intercepts a pass in front of Cleveland Browns running back Ben Tate (44) during the second half Sunday in Jacksonville. The Jaguars beat the Browns 24-6 for their first win of the season. B4 rfntb NFL scoreboard Thursdays Game New England 27, N.Y. Jets 25 Sundays Todays.
PAGE 17
B5 Dolphins 27, Bears 14 Miami 7 7 7 6 27 Chicago 0 0 7 7 14 First Quarter MiaClay 13 pass from Tannehill (Sturgis kick), 6:51. Second Quarter MiaM.Wallace 10 pass from Tannehill (Sturgis kick), 5:20. Third Quarter ChiForte 10 pass from Cutler (Gould kick), 7:59. MiaMiller 2 run (Sturgis kick), :31. Fourth Quarter MiaFG Sturgis 33, 13:32. ChiForte 1 run (Gould kick), 7:38. MiaFG Sturgis 19, 2:13. AMiami, Miller 18-61, Tannehill 6-48, Dan.Thomas 7-25, M.Wallace 1-4, Damie.Williams 1-(minus 1). Chicago, Forte 12-49, Cutler 2-3. PASSINGMiami, Tannehill 25-32-0-277. Chicago, Cutler 21-34-1-190. RECEIVINGMMiami, Sturgis 50 (WR), 37 (BK). Chiefs 23, Chargers 20 Kansas City 0 10 3 10 23 San Diego 7 7 0 6 20 First Quarter SDPhillips 1 pass from Rivers (Novak kick), 3:15. Second Quarter KCCharles 16 run (Santos kick), 14:51. KCFG Santos 28, 3:11. SDGates 27 pass from Rivers (Novak kick), :14. Third Quarter KCFG Santos 40, 8:35. Fourth Quarter KCSherman 11 pass from A.Smith (Santos kick), 14:50. SDFG Novak 24, 9:36. SDFG Novak 48, 1:57. KCFG Santos 48, :21. AKansas City, Charles 22-95, A.Smith 6-29, Davis 10-25, Thomas 1-5. San Diego, Oliver 15-67, R.Brown 1-2. PASSINGKansas City, A.Smith 19-28-0221. San Diego, Rivers 17-31-1-205. RECEIVINGKNone. Cardinals 24, Raiders 13 Arizona 7 7 7 3 24 Oakland 0 10 3 0 13 First Quarter AriTaylor 2 pass from Palmer (Catanzaro kick), 1:47. Second Quarter AriFloyd 33 pass from Palmer (Catanzaro kick), 5:37. OakMcFadden 1 run (Janikowski kick), 1:56. OakFG Janikowski 29, :45. Third Quarter OakFG Janikowski 53, 7:17. AriTaylor 4 run (Catanzaro kick), 2:55. Fourth Quarter AriFG Catanzaro 41, :29. AArizona, Ellington 24-88, Taylor 12-40, Jo.Brown 1-(minus 5). Oakland, McFadden 14-48, Jones-Drew 3-6, Carr 2-2. PASSINGArizona, Palmer 22-31-1-253. Oakland, Carr 16-28-0-173. RECEIVINGArizona,None. Colts 27, Bengals 0 Cincinnati 0 0 0 0 0 Indianapolis 3 7 7 10 27 First Quarter IndFG Vinatieri 23, :33. Second Quarter IndBradshaw 1 run (Vinatieri kick), 12:08. Third Quarter IndAllen 32 pass from Luck (Vinatieri kick), 9:47. Fourth Quarter IndBradshaw 10 pass from Luck (Vinatieri kick), 12:09. IndFG Vinatieri 50, 1:55. ACincinnati, Bernard 7-17, Hill 4-15, Dalton 1-0. Indianapolis, Richardson 14-77, Bradshaw 10-52, Herron 5-37, Luck 4-5, Moncrief 1-0. PASSINGCincinnati, Dalton 18-38-0-126. Indianapolis, Luck 27-42-0-344. RECEIVINGCNone. Cowboys 31, Giants 21 N.Y. Giants 0 14 0 7 21 Dallas 7 7 7 10 31 First Quarter DalEscobar 15 pass from Romo (Bailey kick), 5:06. Second Quarter NYGBeckham Jr. 9 pass from Manning (J.Brown kick), 11:24. NYGFells 27 pass from Manning (J.Brown kick), 7:53. DalWilliams 18 pass from Romo (Bailey kick), 2:17. Third Quarter DalEscobar 26 pass from Romo (Bailey kick), 6:15. Fourth Quarter DalMurray 1 run (Bailey kick), 9:11. NYGBeckham Jr. 5 pass from Manning (J.Brown kick), 5:28. DalFG Bailey 49, :59. AN.Y. Giants, A.Williams 18-51, Hillis 6-29, Beckham Jr. 1-13, Manning 1-11. Dallas, Murray 28-128, Dunbar 3-16, Randle 2-7, Romo 2-5. PASSINGN.Y. Giants, Manning 21-33-0248. Dallas, Romo 17-23-1-279. RECEIVINGNNone. Bills 17, Vikings 16 Minnesota 3 10 0 3 16 Buffalo 0 10 0 7 17 First Quarter MinFG Walsh 40, 1:50. Second Quarter BufWatkins 26 pass from Orton (Carpenter kick), 9:23. MinPatterson 4 pass from Bridgewater (Walsh kick), 6:17. BufFG Carpenter 31, 4:01. MinFG Walsh 55, :15. Fourth Quarter MinFG Walsh 33, 11:45. BufWatkins 2 pass from Orton (Carpenter kick), :01. AMinnesota, McKinnon 19-103, Asiata 6-24, Felton 2-21, Bridgewater 1-7, Patterson 1-3. Buffalo, Spiller 1-53, Dixon 1351, Jackson 3-12, Summers 1-3, Orton 1-(minus 1). PASSINGMinnesota, Bridgewater 15-262-157. Buffalo, Orton 31-43-1-283. RECEIVINGMinNone. Streaks and stats The Cowboys have won six in a row. They are off to their best start (6-1) since winning six of their first seven on the way to a 13-3 finish in 2007. ... The Cardi nals (5-1) are off to their best start since 1976. ... The Ben gals Kevin Huber punted a franchise-record-tying 11 times, 10 of which followed three-and-outs, in a 27-0 loss to the Colts. The teams were a combined 0 for 14 on third-down conversions in the first half, marking the first time thatsneys team record of 29 set in 1992-93. ... The Raiders (0-6) have lost 12 in a row going back to last sea son. They are off to their worst start to a season since losing their first 13 games in 1962 the year before late owner Al Davis joined the franchise. Fins top Bears Associated Press rfrntfbbnf fnnff rfntbn CHICAGO Ryan Tannehill threw for 277 yards and two touch downs in an efficient perfor mance, and the Miami Dolphins beat the Chicago Bears 27-14 on Sunday.. Lamar Miller also had a 2-yard touchdown run for the Dolphins (3-3), who had lost three of four since an opening victory over New England. The Bears (3-4) remained win less in three home games this sea son and have dropped five of their last seven at Soldier Field. Matt Forte scored two touchdowns and Jeremiah Ratliff finished with a career-best 3 1/2 sacks. Cowboys 31, Giants 21 ARLINGTON, Texas Tony Romo threw three touchdown passes, DeMarco Murray broke Jim Browns 56-year-old NFL record with his sev enth straight 100-yard rushing game to start a season, and the Dallas Cow boys won their sixth straight by beating the New York Giants 31-21. Romo had a fourth scoring pass overturned on replay. Instead, Murray wound up with his seventh rushing touchdown of the season on a 1-yard plunge. Murray finished with 128 yards rush ing to pass Brown, who hit the century mark in the first six games of the 1958 season for Cleveland. The Cowboys (6-1) are off to their best start since they went 13-3 in 2007 and were the top seed in the NFC be fore losing to New York in their first playoff game. Eli Manning had three touchdown passes for the Giants (3-4), who have lost to the NFC Easts top two teams in consecutive weeks. Chiefs 23, Chargers 20 SAN DIEGO Cairo Santos kicked a 48-yard field goal field goal with 21 seconds left and the Kansas City Chiefs beat San Diego 23-20, snap ping the Chargers five-game winning streak. The Chiefs moved into field goal range thanks to Alex Smith, who com pleted threeteam race, pulling within 1 1/2 games of San Diego (5-2). The Denver Bron cos (4-1) hosted San Francisco on Sunday night. The Chargers flunked their sternest test in a month and lost for the first time since a defeat at Arizona in the season opener. Bills 17, Vikings 16 ORCHARD PARK, N.Y. Kyle Orton hit Sammy Watkins on a 2-yard touchdown for the winning score with 1 second remaining as the Buffalo Bills beat the Minnesota Vikings 17-16. The touchdown capped a 15-play, 80-yard drive which Orton extended by converting a fourth-and-20 and a thirdand-12. Orton set up the decisive score with a 28-yard pass to Chris Hogan at the Vikings 2. Orton overcame an interception, a lost fumble and six sacks to finish 31 of 43 for 283 yards and two touch downs both to Watkins. It was Ortons. Colts 27, Bengals 0 INDIANAPOLIS Andrew Luck threw two touchdown passes and the Colts defense dominated Cincinnati in a 27-0 victory. Luck was 27 of 42 for 344 yards as Indianapolis (5-2) won its fifth straight. It was Indys first shutout since December 2008. Cincinnati (3-2-1), which hasnt won since starting 3-0, endured its first shutout since December 2009 and had a franchise record tying 11 punts Sun day. Andy Dalton was 18 of 38 for 126 yards. Indy churned out 506 yards, struck early and pulled away late. Ahmad Bradshaws 1-yard TD run made it 10-0 in the second quarter and Luck threw two second-half TD passes to make it 24-0. Colts linebacker Erik Walden was ejected in the first half for making con tact with umpire Bruce Stritesky. Cardinals 24, Raiders 13 OAKLAND, Calif. Carson Palmer threw two touchdown passes in his re turn to Oakland and the Arizona Cardi nals sent the Raiders to their 12th straight loss with a 24-13 victory. Stepfan Taylor caught one touch down pass and ran for another, and Andre Ellington gained 160 yards from scrimmage for the Cardinals (5-1), who are off to their best start since 1976. Darren McFadden ran for a touch down for the Raiders (0-6), off to their worst start to a season since losing their first 13 games in 1962 the year before late owner Al Davis joined the franchise. Associated Press bnbf ttbnf fr
PAGE 18
TALLAHASSEE With Florida States perfect re cord still intact and a second-half rally against Notre Dame complete, Ja an other week of controversy for Winston, who has been the subject of a sexual as sault allegation and a stu dent conduct code inquiry over the past two years. This week, the school said it was investigating whether Winston received benefits for autographs being sold online. But on Saturday night, Winstons mission was to dig the Seminoles out of trouble. And he did. Florida State had pro tection issues in the first half and Winston never seemed completely com fortable. Fisher said those were cleaned up at half time and suddenly Win ston had room to operate. The reigning Heisman winner drove the Semi noles to touchdowns on three of their first four drives, each taking a mini mum of seven plays. He spread the ball around and hit big plays to receiv ers Rashad Greene, Travis Rudolph and Jesus Wil son. Even running back Karlos Williams caught a 21-yarder. Williams called Win stons work poetry in motion. This wasnt the first time Winston had to shine in the second half. Okla homa. Were playing Notre Dame. Were not playing a high school team. The Irish nearly pulled off the upset, moving to the 2-yard line in the games final moments. Ev erett Golson threw a touchdown pass to Corey Robinson with 13 seconds remaining. But Notre Dame was called for pass interfer ence when a receiver blocked the defender re sponsible for Robinson, and the touchdown was erased. Notre Dame coach Brian Kelly was not happy with the call. We execute that play every day, Kelly said. And we do it legally and thats the way we coach it. We dont coach illegal plays. The Irish moved back to fourth and goal from the 18-yard line. Linebacker Jacob Pugh picked off the desperation pass in the back of the end zone. Ballgame. We fight for each other, its a brotherhood, FSU linebacker Terrance Smith said. We fight for the guys next to us and were not going to let the guys next to us down. Golson threw for 313 yards and three touch downs, but Winston won the duel in the second half as he completed his first 13 passes against a de fense FSUs schedule and the win may be its last chance to make a decisive impres sion on the College Foot ball Playoff selection committee. I aint worried about the doubters, FSU cor nerback P.J. Williams said. We just know we will do whatever we have to do to win games. Were not going down. B6 Deadline: October 22 | Noon GARAGE SALE WEEKENDCOMMUNITY For Only Oct 24-26 Meet the local candidates and hear their positions. Circuit Court Judge County Commission U.S. House REFRESHMENTS WILL BE AVAILABLE FOR PURCHASE to benefit the Boys & Girls Clubs of Citrus County For more information call Mike Wright at 352-563-3228 or email mwright@chronicleonline.com Political Forum Political Forum T u e s d a y O c t o b e r 2 1 2 0 1 4 C o l l e g e o f C e n t r a l F l o r i d a C F L e c a n t o C a m p u s D O O R S O P E N : 6 P M F O R U M S T A R T S A T 7 P M T u e s d a y O c t o b e r 2 1 2 0 1 4 Tuesday, October 21, 2014 C o l l e g e o f C e n t r a l F l o r i d a College of Central Florida C F L e c a n t o C a m p u s CF Lecanto Campus D O O R S O P E N : 6 P M DOORS OPEN: 6PM F O R U M S T A R T S A T 7 P M FORUM STARTS AT 7PM 000JDPL No. 2 Noles survive scare from No. 5 Irish Embarrassing loss could cost Muschamp rf GAINESVILLE Florida coach Will Muschamp trudged across the field and into the locker room amid a scattering of boos Saturday night. There were still a couple Fire Mus champ chants in the distance, too. Those could become reality soon, maybe in the next few days. Marcus Murphy scored three touch downs, including two on special teams, and Missouri embarrassed Florida 42-13. The Tigers (5-2, 2-1 Southeastern Con ference) scored on a kickoff return, a punt return, a fumble return and an in terception return. They managed just seven first downs and 119 yards, includ ing 20 passing, but won by essentially let ting the Gators self-destruct in nearly every way imaginable. And they did. The Gators (3-3, 2-3) turned the ball over six times and lost at home for the second time in eight days, fueling specu lation that Muschamp has coached his final game in Gainesville. The Fire Mus champ chants broke out in third quarter and could be heard throughout an emp tying Florida Field. Im really worried about this football team right now, Muschamp said when asked about his job security. Thats re ally what Im worried about. Im not get ting concerned about things I dont have any control over, other than this team. I think thats the most important thing right now. The Gators have dropped 12 of their last 19 games and trail the divisionleading Bulldogs by two games with three remaining. Athletic director Jeremy Foley said in September that he wouldnt make any de cisions about Muschamps half time today. Then again, there doesnt seem to be a number of strong candidates out there for the choosing. Its one of those losses where you just have to look in the mirror and ask your selfidas six turnovers. He added an 82-yard punt return for a score early in the third. That was the dagger, Muschamp said. Driskel fumbled on Floridas first se ries and threw an interception in the sec ond quarter that led to a field goal. He also had a fumble returned 21 yards for a touchdown by Markus Golden and an in terception returned 46 yards for a score by Darvin Ruise both in Missouris 22-point third quarter. That third quarter, I wish that could happen every game, Missouri coach Gary Pinkel said. That was amazing. Harris, a freshman who played for the first time since a female student with drew inter ceptions. He also was sacked four times. He now has 12 turnovers in the last four games. At the end of the day, youre not going to win many games turning the ball over six times, Muschamp said. Associated Press rffntfbrfnfr frr Associated Press tff fn tfbr ff tffnff
PAGE 19
MONDAY, OCTOBER20, 2014 B7CITRUSCOUNTY(FL) CHRONICLEENTERTAINMENT PHILLIPALDER Newspaper Enterprise Assn. Alaska State Troopers Alaska State Troopers Alaska State Troopers Drugs, Inc. Meth Boom Montana Drugs, Inc. Windy City High Drugs, Inc. Meth Boom Montana (NICK) 28 36 28 35 25HenrySam & ThunderMaxFull HseFull HseFull HseFull HsePrincePrinceFriendsFriends (OWN) 103 62 103 Breaking DownBreaking DownDateline on OWNDateline on OWNDateline on OWNDateline on OWN (OXY) 44 123 The Premonition (1999) Burt Reynolds.Snapped PGSnapped PGSnapped PGSnapped PG (SHOW) 340 241 340 4 Alex Cross (2012) Tyler Perry. A serial killer pushes Cross to the edge. Homeland Iron in the Fire MA The Affair (In Stereo) MA Homeland Iron in the Fire MA The Affair (In Stereo) MA (SPIKE) 37 43 37 27 36 A Man Apart (2003) R The Fast and the Furious (2001) Vin Diesel. An undercover cop infiltrates the world of street racing. 2 Fast 2 Furious (2003) Paul Walker. Two friends and a U.S. customs agent try to nail a criminal. (STARZ) 370 271 370 At Middleton (2013) R O Brother, Where Art Thou? (2000) George Clooney. PG-13 About Last Night (2014) Kevin Hart. R American Hustle (2013) Christian Bale. R (SUN) 36 31 36 Canoe Worlds Sport Fishing Ship Shape TV SportsmanFlorida Sport Fishing the Flats Lightning Live! (N) NHL Hockey Tampa Bay Lightning at Edmonton Oilers. From Rexall Place in Edmonton, Alberta. (SYFY) 31 59 31 26 29 Hostel Part II (2007, Horror) R Saw: The Final Chapter (2010, Horror) Tobin Bell, Costas Mandylor. RStarve (2014) Bobby Campo. Premiere. Trapped pals fight for their lives in an abandoned school. NR Hellboy (TBS) 49 23 49 16 19AmericanAmericanAmericanAmericanAmericanAmericanAmericanAmericanBig BangBig BangConan (N) (TCM) 169 53 169 30 35Too Mny Cook Way Back Home (1932, Drama) Phillips Lord. NR Saboteur (1942, Suspense) Robert Cummings, Priscilla Lane. PG Kings Row (1942, Drama) Ann Sheridan, Ronald Reagan. NR (DVS) (TDC) 53 34 53 24 26Fast N Loud (In Stereo) Fast N Loud (In Stereo) Fast N Loud: Revved Up (N) Fast N Loud A Chevy Impala. Fast N Loud (In Stereo) PG Fast N Loud A Chevy Impala. (TLC) 50 46 50 29 30Say YesSay YesUndercover BossUndercover BossUndercover BossUndercover BossUndercover Boss (TMC) 350 261 350 Twilight Saga-2 Flirting With Disaster (1996) Ben Stiller. R Quartet (2012, Comedy-Drama) Maggie Smith. (In Stereo) PG-13 Silver Linings Playbook (2012) Bradley Cooper. (In Stereo) R (TNT) 48 33 48 31 34Castle The Fifth Bullet PG Castle Tick, Tick, Tick ... PG Castle Boom! PG (DVS) Castle The Third Man (In Stereo) PG Major Crimes Letting It Go Law & Order Church (TOON) 38 58 38 33 TeenClarenceGumballRegularKing/HillKing/HillClevelandClevelandAmericanRickFam. GuyFam. Guy (TRAV) 9 106 9 44Bizarre FoodsFoodFoodBizarre FoodsBizarre FoodsBizarre FoodsBizarre Foods (truTV) 25 55 25 98 55Commercials Commercials truTV Top FunniestJokersJokersJokersJokersWorlds Funniest (TVL) 32 49 32 34 24HillbilliesHillbilliesHillbilliesHillbilliesFamFeudFamFeudFamFeudThe ExesRaymondRaymondFriendsFriends (USA) 47 32 47 17 18NCIS Tony and Ziva become trapped. PG NCIS A commander is abducted. PG WWE Monday Night RAW (N) (In Stereo Live) PG Chrisley Knows Chrisley Knows (WE) 117 69 117 CSI: Miami (In Stereo) CSI: Miami Chip/Tuck CSI: Miami Reality stars murder. CSI: Miami Collateral Damage CSI: Miami Dissolved CSI: Miami Seeing Red (WGN-A) 18 18 18 18 20Funny Home VideosFunny Home VideosFunny Home VideosFunny Home VideosFunny Home VideosParksParks Dear Annie: Ive been in an abusive marriage for nearly 15 years, and I cant take another day. My husband has never hit me. Its all mental and emotional abuse. He calls me horrible names in front of our children. He has constant tantrums where he screams, throws things, breaks things and threatens me, saying if I leave, hell kill me, destroy my life and take our children away. I have no access to money, and he has driven all of my friends away. I have nowhere to go. There are no shelters in my rural area, and Im scared of what he may do when I leave. However, Im determined. Ive written him a very long letter explaining why and promising that I dont want any money from him, so he doesnt have to worry about that. And I plan to give him this letter in the next few days. I want to hand it to him. I dont want to be sneaky and leave the letter and walk out the door. But Im afraid. I dont have anyone to discuss these things with. My mother said she didnt want to hear it and it was my problem. Please help me. Too Scared To Leave Dear Too Scared: Please do not do anything rash. Before you leave, you need to have your next step planned and ready, whether it is finding a shelter, staying with friends or relatives, or leaving town. It would be unwise to hand your abusive husband a letter and walk out the door. We know you want to do the honorable thing, but your safety is more important right now. We urge you to call the National Domestic Violence Hotline (thehotline.org) at 1-800-799-SAFE. Someone there will guide you through the process. Dear Annie: My husband and I are retired and live in upstate New York with our son and his family. Our son broke his back and neck in a freak accident. He has fully recovered, but now is addicted to pain medication. He has no job and no insurance. Is there any way to get him the help he needs to be a functioning adult again? He would give anything to be better, but cant afford the treatment. Desperately Concerned Mom Dear Mom: This must be a terribly difficult situation for everyone, but the fact that your son wants to get better is encouraging. Please look into statefunded drug and alcohol rehab centers through the Substance Abuse and Mental Health Services Administration at findtreatment.samhsa. gov, or call their treatment referral line at 800-662-HELP Well be thinking of you. Dear Annie: Best Friend in Trouble was pretty sure her best friends husband was cheating on her with his sister-in-law. She asked whether she should tell her friend. I say, YES! I wish someone had told me when my husband was cheating. At a company holiday party, I actually sat next to the woman my husband was having an affair with. Probably everyone in the room knew except me. One of my good friends discovered his wife was cheating when he contracted an STD. Another found out when his wife became pregnant. Hed had a vasectomy. Ive known a few people who have cheated, and let me tell you, if they dont get caught, they keep right on doing it. After I realized my husband was seeing another woman, I learned that my own sister knew he was cheating and didnt tell me. I could never forgive her for keeping it a secret. I wish I had known sooner. Best Friend should tell her friend what she knows and then let the wife decide what she wants to do about it. Still SmartingAnn) FAULTNINTH WEIGHTDISMAY Saturdays Jumbles: Answer: When the plane hit turbulence, everything WENTFLYING Now arrange the circled letters to form the surprise answer, as suggested by the above cartoon.THAT SCRAMBLED WORD GAMEby David L. Hoyt and Jeff Knurek Unscramble these four Jumbles, one letter to each square, to form four ordinary words. TARFD FYCAN YIELLK HURCOS Tribune Content Agency, LLC All Rights Reserved. Check out the new, free JUSTJUMBLE app Answer here: MONDAY EVENING OCTOBER The battle rounds continue. PGThe Blacklist (N) NewsJ. Fallon # (WEDU) PBS 3 3 14 6World News Nightly Business PBS NewsHour (N) (In Stereo) Antiques Roadshow Jacksonville (N) G Antiques Roadshow G Independent Lens Twin Sisters G Living With Parkinsons G % (WUFT) PBS 5 5 5 41News at 6BusinessPBS NewsHour (N)Antiques RoadshowAntiques RoadshowIndependent LensWorldT. Smiley ( (WFLA) NBC 8 8 8 8 8NewsNightly NewsNewsChannel 8Extra (N) PG The Voice The Battles, Part 3 The battle rounds continue. (N) (In Stereo) PG The Blacklist The Front (N) NewsTonight Show ) (WFTV) ABC 20 20 20 NewsWorld News Jeopardy! (N) G Wheel of Fortune Dancing With the Stars (N) (In Stereo Live) PG Castle Childs Play (N) PG Eyewit. News Jimmy Kimmel (WTSP) CBS 10 10 10 10 1010 News, 6pm (N) Evening News Wheel of Fortune Jeopardy! (N) G Big Bang Theory The Millers PG Scorpion Plutonium Is Forever (N) NCIS: Los Angeles The 3rd Choir 10 News, 11pm (N) Letterman ` (WTVT) FOX 13 13 13 13NewsNewsTMZ (N) PG The Insider (N) Gotham Viper (N) (DVS) Sleepy Hollow The Weeping Lady FOX13 10:00 News (N) (In Stereo) NewsAccess Hollywd 4 (WCJB) ABC 11 11 4 NewsABC EntLets AskDancing With the Stars (N) PG Castle (N)) PGRightThisMinuteDancing With the Stars (N) (In Stereo Live) PG Castle Childs Play (N) PG NewsJimmy Kimmel @ (WMOR) IND 12 12 16Modern Family Modern Family Big Bang Theory Big Bang Theory Law & Order: Special Victims Unit Law & Order: Special Victims Unit AngerAngerThe OfficePrince L (WTOG) CW 4 4 4 12 12King of Queens King of Queens Mike & Molly Mike & Molly The Originals Every Mothers Son Jane the Virgin Chapter Two Two and Half Men Two and Half Men Friends PG Friends O (WYKE) FAM 16 16 16 15Soundtrack Cin Citrus Today INN News County Court Every Minute Mobil 1 The Grid Steel Dreams Players Parking Chop Cut RebuildMotorsportsRaceline G Mobil 1 The Grid S (WOGX) FOX 13 7 7TMZ PGSimpsonsBig BangBig BangGotham Viper Blue Bloods Blue Bloods (A&E) 54 48 54 25 27Storage Wars PG Storage Wars PG Storage Wars PG Storage Wars PG Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty (AMC) 55 64 55 Friday the 13th (1980) R Friday the 13th, Part 2 (1981, Horror) Amy Steel, John Furey. R Friday the 13th Part III (1982, Horror) Dana Kimmell, Paul Kratka. R Friday the 13th: The Final Chapter (ANI) 52 35 52 19 21To Be AnnouncedTo Be AnnouncedGator Boys (In Stereo) PG Rattlesnake Republic: Texas Sized PG North Woods Law: On the Hunt (N) PG Gator Boys (In Stereo) PG (BET) 96 19 96 The Real (N) (In Stereo) PG Meet the Browns (2008, Comedy-Drama) Tyler Perry, Angela Bassett, David Mann. PG-13 Husbands Johnson Family Vacation (2004) Cedric the Entertainer. PG-13 (BRAVO) 254 51 254 Housewives/Atl.Housewives/Atl.Housewives/Atl.ManzodHousewives/NJTBAHappensManzod (CC) 27 61 27 33Colbert Report Daily ShowSouth Park Tosh.0 Futurama PG Futurama PG South Park MA South Park MA South Park MA South Park MA Daily ShowColbert Report (CMT) 98 45 98 28 37Reba As Is PG Reba PG Raising Hope PG Raising Hope PG Starsky & Hutch (2004) Ben Stiller. Two detectives investigate a cocaine dealer. PG-13 Cops Reloaded Cops Reloaded Cops Reloaded (CNBC) 43 42 43 Mad Money (N)The Profit Shark Tank PGShark Tank PGThe Profit The Profit (CNN) 40 29 40 41 46SituationCrossfireErin Burnett OutFrontAnderson CooperRoots: Our Journeys Home (N) CNN Tonight (N) (DISN) 46 40 46 6 5Dog With a Blog G Dog With a Blog G Jessie G Girl MeetsAustin & Ally G Twitches (2005, Fantasy) Tia Mowry. (In Stereo) Wolfblood PG Jessie G My Babysitter My Babysitter (ESPN) 33 27 33 21 17Monday Night Countdown (N) (Live) NFL Football Houston Texans at Pittsburgh Steelers. (N Subject to Blackout)SportCtr (ESPN2) 34 28 34 43 49SportsCenter (N)BaseballSports.30 for 30 World/Poker World/Poker Football Final (EWTN) 95 70 95 48NewsCubaDaily Mass GThe Journey HomeNewsRosaryThe World OverWordWomen (FAM) 29 52 29 20 28Boy Meet World The Nightmare Before Christmas (1993) PG The Hunger Games (2012) Jennifer Lawrence. In a dystopian society, teens fight to the death on live TV. PG-13 The 700 Club (In Stereo) G (FLIX) 118 170 The Other Sister (1999) Juliette Lewis, Tom Skerritt. (In Stereo) PG-13 Promised Land (1987, Drama) Jason Gedrick. (In Stereo) R When a Man Loves a Woman (1994) Andy Garcia. (In Stereo) R (FNC) 44 37 44 32Special ReportGreta Van SusterenThe OReilly FactorThe Kelly File (N)Hannity (N) The OReilly Factor (FOOD) 26 56 26 DinersDinersGuys GamesHungryHungryMy. DinMy. DinRestaurant: Im.Restaurant: Im. (FS1) 732 112 732 Americas PregameQualifyingWomens Soccer MLBMLB 2014: That JustFOX Sports Live (N) (FSNFL) 35 39 35 SportsMo.ShipCollege Football Oklahoma State at Texas Christian. (Taped) World Poker World Poker (FX) 30 60 30 51 Kung Fu Panda (2008, Comedy) Voices of Jack Black, Angelina Jolie. PG How to Train Your Dragon (2010, Fantasy) Voices of Jay Baruchel. PG How to Train Your Dragon (2010, Fantasy) Voices of Jay Baruchel. PG (GOLF) 727 67 727 Golf Central (N)The Golf Fix (N) GFeherty Seven Days in Utopia (2011) GSeven Days (HALL) 59 68 59 45 54The Waltons The Estrangement G The Waltons The Nurse G The Waltons The Intruders G The Middle PG The Middle PG The Middle PG The Middle PG Golden Girls Golden Girls (HBO) 302 201 302 2 2The MajesticLast Week To. Leap Year (2010, Romance-Comedy) Amy Adams. (In Stereo) PG Private Violence (2014, Documentary) NR The Final Shot Foo Fighters: Sonic Highways MA (HBO2) 303 202 303 Pacific Rim (2013) Charlie Hunnam. PG-13 Last Week To. Real Time With Bill Maher MA Boardwalk Empire MA Enemy of the State (1998, Suspense) Will Smith. (In Stereo) R (HGTV) 23 57 23 42 52Love It or List It GLove It or List It GLove It or List It GLove It or List It GHuntersHunt IntlLove It or List It G (HIST) 51 54 51 32 42Swamp People Day of Reckoning PG Swamp People Lethal Encounters PG Swamp People (In Stereo) PG Swamp People (In Stereo) PG Swamp People (In Stereo) PG Swamp People Cannibal Gator PG (LIFE) 24 38 24 31 Killers (2010, Action) Ashton Kutcher, Katherine Heigl. PG-13 27 Dresses (2008, Romance-Comedy) Katherine Heigl. PG-13 13 Going on 30 (2004) Jennifer Garner, Judy Greer. Premiere. PG-13 (LMN) 50 119 Born Bad (2011, Suspense) Meredith Monroe, Bonnie Dennison. NR Death Clique (2014, Crime Drama) Lexi Ainsworth, Barbara Alyn Woods. NR Deadly Friends (2004, Docudrama) Jessica Par, Brendan Fletcher. R (MAX) 320 221 320 3 3 Mama (2013, Horror) Jessica Chastain. (In Stereo) PG-13 The Knick Crutchfield MA Fight Club (1999) Brad Pitt. Men vent their rage by beating each other in a secret arena. RGreat Gatsby WANT MORE PUZZLES? Look for Sudoku and Wordy Gurdy puzzles in the Classified pages. Oswald Spengler, a German philosopher who died in 1936, said, The secret of all victory lies in the organization of the non-obvious. That is a perfect way to lead into this weeks deals, in which the obvious play is not right. Here, South is in six hearts. West leads the diamond king. South wins with his ace and cashes the heart ace-king, but East discards a spade on the second round. What should declarer do next? After North used a three-diamond transfer bid, Souths three spades showed the spade ace, four-card heart support and, typically, a doubleton somewhere. Then North, who knew the partnership had the values for a small slam, transferred again with four diamonds before jumping to the small slam. (Yes, he might have control-bid four clubs to show his ace.) South is faced with two losers: the heart queen and the diamond queen. He must discard dummys two diamonds before West can ruff in and cash his diamond queen. With only five spades between the two hands, it looks obvious to play on that suit first but it is wrong! West ruffs the third spade and cashes his diamond queen. If West has fewer than three clubs, the contract is unmakable. So South should play on that suit first. If it breaks 3-3, then he hopes West also has at least three spades. But when West turns up with four clubs, one diamond can disappear on the club 10 and the second on the third spade, Wests ruff coming too late for him.
PAGE 20
B8MONDAY, OCTOBER20, 2014CITRUSCOUNTY(FL) CHRONICLECOMICS Pickles Crystal River Mall 9; 564-6864 Fury (2014) (R) 1p.m., 4:15p.m., 7:30p.m. The Best of Me (PG 13)1:25p.m., 4:40p.m., 7:45p.m., The Book of Life (PG) In3D. 4:45p.m., Nopasses The Book of Life (PG) 1:50p.m., 7:20p.m. Alexander and the Terrible, Horrible, No Good, Very Bad Day (PG) 1:35p.m., 4:20p.m., 7:35p.m. Dracula Untold (PG-13) 1:10p.m., 4:30p.m., 8p.m. The Judge (2014) (R) 1:15p.m., 3:50p.m., 7p.m. Nopasses. Annabelle (R) 1:45p.m., 4:50p.m., 7:50p.m. Gone Girl (R) 12:45p.m., 4p.m., 7:15p.m. Nopasses. The Equalizer (R) 12:50p.m., 4:05p.m., 7:05p.m. Citrus Cinemas 6 Inverness; 637-3377 Fury (2014) (R) 12:45p.m., 4p.m., 7:15p.m. The Book of Life (PG) In3D 4:30p.m. Nopasses The Book of Life (PG) 1:15p.m. 7:30p.m. Alexander and the Terrible, Horrible, No Good, Very Bad Day (PG) 1:30p.m., 4:20p.m., 7:05p.m. Dracula Untold (PG-13) 1p.m., 4:10p.m., 7:20p.m. The Judge (2014) (R) 12:30p.m., 3:45p.m., 7p.m. Nopasses. Gone Girl (R) 12:15p.m., 3:30p.m., 6:50p.m. Nopasses G NJGWWP RUNMTL FMZGT GEEJKUR UOJ FGN ROJ FJTU UONMBLO GTX DR JTTMVWJX VP OJN REGNR. EGNWP RDZMTPrevious Solution: God put us here, on this carnival ride. We close our eyes never knowing where itll take us next. Carrie Underwood (c) 2014 by NEA, Inc., dist. by Universal Uclick 10-20
PAGE 21
MONDAY,OCTOBER20,2014B187 000J5LW 000J5M2 Requirements HS Diploma or GED Valid Florida Driver License $8.50 per hour Full-Time 40 hrs/wkSheriffs Ranches Enterprises000JLUYApply in person to Thrift Store located at 200 SE US HWY 19 (Kings Bay Plaza) Crystal River FL 34429EOE/DFWPFIELD REPRESENTATIVE ASSISTANT DECORATIVE BATHSET 4 peice, like new, ivory/stainless steel, ($20 ) 352-613-7493 DOWNSIZING 27 TV & stand, $40. 7-pc. queen bedding set, comforter, $40. 4 pr. Curtains & 4 extension rods, $25. (443) 752-7304/appt. DOWNSIZING Washer & Dryer, Amana, $500, used 2-wks. Kenmore microwave, $40. (443) 752-7304/appt. FIBERGLASS CRITTER CAGE 20x24 MEDIUM ONLY $20.00 352 464 0316 GENERAL MERCHANDISE SPECIALS!!! -6 LINES -10 DAYSup to 2 ITEMS $1 $200. $11.50 $201 $400. $16.50 $401 $800. $21.50 $801 $1500. $26.50 352-563-5966 Gold Christmas Tree Ornaments Some move 12 @ $20 ea; Seiko Watch-gold $50. 352-746-9896 I WANT TO BUY A HOUSE or MOBILE Any Area, Condition, Situation. 726-9369 LOST DOG Mini pin lost on Meadow St. and Tina ct. Black and brown 352-422-4119 Memory Foam Pillows from Mattress Firm, New standard, 2 Pk. cost $79.sell for $50. 352-382-0069 NIKKOHappy Holidays dishes for 8 All the bells & whistles. Plus table cloth & napkins.All you need for your holiday table! $700 (352)746-9896 PAPER SHREDDER ROYALJMD500 With can Good Condition $15. 352-621-0175 PLAYSTATION 2 Games Madagascar & Sly 2 Band of Thieves $6 EA352-613-0529 SEARS CRAFTSMAN Air Compressor/ Paint Sprayer 20 GAL. $100.00 352 464 0316 SILVER TEA SET Brand New Wilcox, Duberry Floral $125. (352) 726-7421 MENS BLACK SUIT Jacket 46, Pants 40 White Shirt 17 IN NEW CONDITION $70 firm 344-1066 MENS JEANS Three pair, Wranglers. size 32x30 $20. Linda 423-4163 48 FIBER OPTIC CHRISTMAS TREE Good cond, almost new, multicolor LED lights, $70 (352)465-1616 APPLIANCES like new washers/dryers, stoves, fridges 30 day warranty trade-ins, 352-302-3030 ATTENTION FESTIVAL VENDORS! Table top display with lights. $75. (352) 382-5067 ATTENTION:VIAGRA and CIALIS USERS! Acheaper alternative to high drugstore prices! 50 Pill Special $99 FREE Shipping! 100% Guaranteed. CALLNOW: 1-800-943-8953 AUTOMATIC POOL CLEANER Baracuda by Zodiac, includes hoses, ex condition. $100 Call 352-270-8475 AUTOMATIC POOL CLEANER Navigator by Hayward, includes hoses, ex condition. $100 352-270-8475 BEALLS GIFT CERTIFICATE $100.00 selling for $75.00. Will meet you there to verify. Linda 423-4163 CABINET FORMICA Top/dark base 1 drawer 2 drs 32L18W 34H good condition $20. 352-621-0175 Couch & chair with coffee table & 2 end tables, very good cond. $150 24 ft. Aluminum Extension Ladder $60 (304) 678-4070 CRITTER CAGE 12x20 SMALL $15.00 352 464 0316 DINING TABLE Solid Oak pedestal table with 4 leaves and 4 chairs. $200. cash only 352-746-9618 Mattr ess Liquidation 50%-80% OFF RETAIL WHY PAY MORE? (352) 484-4772 Qn. Sz. Bedroom Set, dark wood $800 Stanley Qn Bedroom Set $1,000. Living Rm. Furn. $900. Oak Ent. Center. $300. HD Flat Scrn. 37 $200 352-586-6125 SEVERALFURNITURE ITEMS, call for appt. after 11a.m (352) 628-4766 SOFAHIDEABED Tan/cranberry/green plaid 74L32w 29H $60. 352-621-0175 SOLID OAK SMALL COMPUTER DESK with pull out shelf & drawer $75 OBO 352-527-1399 Solid Wood Dining Table w/ 6 chairs & 2 extentions. Great condition! $400. (352) 527-1543 TRADE IN MATTRESS SETS Starting at $50. Very Good Condition 352-621-4500 RICH BEDDING New & Used Furniture 352-503-6801 BLACK & DECKER HedgeTrimmer 16 blade, $15; Scotts Lawn Spreader $15. (352) 382-0069 Bobs Discarded Lawn Mower Service Free Pick-up. (352) 637-1225 CRAFTSMAN RIDING MOWER, 24 hp 48cut, 132 hrs. Electric edger, electric trimmer $800 for all (352) 527-8989 HUQSVARNA Riding Mower $875.; TROY-BUILT self propelled mower $75 (352) 249-7335 John Deere 25 Riding Lawnmowerand edger $350. for both (352) 746-2393 WHEELBARROW small, used, works well, $15.00; Black&Decker 3/8 Electric Drill $6.00 352-382-0069 BOOTS JChrisholm Size 10 light tan color. Worn once. Great condition. $45. 352-212-2556 ENGERSOL-RAND Golf CartLifted W/ 22.11/10 new tires Ready for hunting. $2200 obo 447-5545 FREEZER$100 352-586-6125 FRIGIDAIRE 20.7 CU FT Upright Freezer, not frost free paid $650 Asking $300; GE Family Roaster Oven $40. 352-860-2095 GE DISHWASHER Excellent Condition. Color:Bisque $100.00 352-860-0212 MICROWAVE OVEN Over the Range Microwave. Excellent Condition. $75.00 352-860-0212 Side by Side Refrigerator, $600. Electric Stove, $350. 352-586-6125 SMITTYSAPPLIANCE REPAIR.Also W anted Dead or Alive W ashers & Dryers. FREE PICK UP! 352-564-8179 Bankruptcy Auction-Onsite & Online Oct. 28th at 10am Tuxedo Fruit Company 3487 S. US Hwy 1 Fort Pierce, Fl 34982 Citrus Packing Plant, Forklifts, Trailers, Compressors, Pallet Wrap Machine, Office Furniture & Equipment auctions.com 2 Preview Days: 10/20 & 10/27 10am-4pm Case #14-23036-EPK 10%-13%BP (800) 840-BIDS Subj to confirm. AB-1098 AU-3219, Eric Rubin FALLAUCTION!! 10/25/14 @ 10AM 650 Lots of Antiques, Collectibles,Art, Jewelry & More! See website for catalog & photos. www atmantiqueauctions.com 13%BP(-3% cash) VS MC DS AB3279AE450 AU1593 352-795-2061 1 Older Shop Smith $60. 1 Radial Arm Saw $60 (304) 678-4070 Cell Fiberglass 20 Extension Ladder, $95. 10Aluminum Step Ladder, $35. (352) 382-5521 PANASONIC VHS PLAYER 4 head vhs player with remote. Works Fine. $20. 352-382-5275 T.V. SPEAKER BAR 40 inch vizio speaker bar with remote.never used. $ 50.00 firm 352-382-5275 Vizio 55 TV 3 yrs. old $275 1 Boss 3-2-1 DVD Home Entertainment Sys. cost $1,300 Asking $600. Walter (352) 527-3552 200 SQ FT TILE & MATCHING TOILET Baby blue 4x4 tiles, bull nose, soap dish $100. for all 352-563-0054 AUDIOVOX CAR VHS PLAYER Attaches to headrests for viewing in the car. $25. 352-364-6704. Like new! XBOX 360 Game system with 2 wireless controllers and power cord. $180. Used less than 6 months. 352-476-8744 6 PC.Fancy Wrought Iron & Wicker, Glass top dining set & matching beautiful wine rack hutch. Exc cond. $400 (352) 270-8475 BAKERS TABLE Great for placing microwave & assorted items. Can e-mail pic. $15 352-566-6589 CHAIRS 2 rust brown upholstered rocker style living room, great shape, no holes/stains, ($35) 352-613-7493 Cherrywood Dining Room Table w/ 6 chairs & 3 ext. $600; Cherrywood Breakfront & Hutch, $600. (352) 513-4768 COUCH & EXTRA PillowsTeal/tan/light red 37L18W 29H Good Condition $40. 352-621-0175 PT HousekeepersUpscale Country Club Restaurant Now accepting applications for part -time housekeepers. Apply in person at 505 E Hartford St. Mon-Sat., 2-5pm AIRLINE CAREERS -Start Here-Get FAAcertified with hands on training in Aviation Maintenance. Financial aid for qualified students. Job placement assistance. CallAIM 866-314-5838 AIRLINE MECHANIC CAREERS begin here Get FAAapproved Aviation Maintenance hands on training. FinancialAid for qualified students. Job placement assistance. CALL Aviation Institute of Maintenance 877-741-9260 www .FixJet s.com MEDICAL BILLING TRAINEES NEEDED Become a Medical Office Assistant! NO EXPERIENCE NEEDED! Online training can get you job ready! HS Diploma/GED & PC/Internet needed! 1 888 528 5547 JAZZ IMPROV study jazz w/Rick D. all instruments welcome (352) 344-5131 Well Established and HIGHLY profitable franchise retail store in Crystal River. Call Pat for details at 1-813-230-7177 BASEBALLCARDS 1970s-thru-2000s. Most in mint condition. Approximately 35,000 cards. Prices vary based on card and year. Topps/Bowman/Fleer 352-794-3097 or by E-mail: Edwardsplace @hotmail.com FRANKLIN MINT 1981 J. LUNGER PEWTER 7 small animals. $10.00 each or $50. for all 352-344-1066 HEATWAVE POOL& SPAHEATER (H120) round titanium exchanger excellent condition 352 344 0291 asking $875 obo APPLIANCES like new washers/dryers, stoves, fridges 30 day warranty trade-ins, 352-302-3030 Exp. Medical AssistantFT For Busy Medical Office, EMR exp. a plus.Fax Resume: (352) 564-4222 or call (352) 564-0444 Front Desk ReceptionistFor a busy Medical Office up to 35 hours per week, exp. preferred but will train. Call (352) 686-6385 Front Office HelpExp. Necessary For Busy Medical Office,Fax Resume: (352) 564-4222 or Call (352) 564-0444 MEDICAL BILLINGMulti location medical practice is seeking individual to be part of a billing team in our Citrus County location. Employee must be computer literate, able to multitask and solve problems. Will be working with accounts, patients, financial closes for day and month and insurance companies on receipt of payments, posting and collections. Must be able to update programs as needed, work well within a team and resolve account conflict in a professional manner. Only experienced and qualified personnel need apply. Fax resume to: 746-9320 DRIVERClass B CDL or responsible person able to get Class B CDL. Heavy lifting required daily for shingle delivery truck. Apply within: SUNNILAND ROOFING SUPPLY 6130 N. Florida Ave. Hernando, 34442 (352) 465-4900 Driver Trainees Needed NOW!Become a driver for Werner Enterprises. Earn $800 per week! Local CDLTraining. 1-877-214-3624 SEAMSTRESS NEEDEDMust have professional expo. in Pageant Gowns, Wedding Dresses & Tuxedos & Suit & Suit Alterations Call (352) 795-5686 Crystal River TRUSS BUILDERSExp preferred Call Bruce Component Systems, Inc. (352) 628-0522 Ext 15 P/T RECEPTIONISTUpscale Country Club Activity Center needing part-time receptionist. Require professional able to multi-task. Must be proficient in Word, Powerpoint and Publisher. Call 746-7633 Monday-Friday for Appointment I I I I I I I I Tell that special person Happy Birthday with a classified ad under Happy Notes. Only $28.50 includes a photo Call our Classified Dept for details352-563-5966 I I I I I I I I EXP. STYLISTFor Suites to open in mid Nov. Downtown Dunnellon. Booth rental available. Call Donna 352-220-7260 Tell that special person Happy Birthday with a classified ad under Happy Notes. Only $28.50 includes a photo Call our Classified Dept for details352-563-5966 ADMISSIONS COORDINATORSeeking Experienced Admissions Professional Do you want to help make a difference? We are looking for that Special Person to add to our Team! Gr eat Benefits!! Medical & Dental Ins. FREE Life Ins. Competitive Salary Bonus Potential 100% match up to 4% on 401K Plus lots more!! APPL Y TO: Peter.Misura@ northporthealth.com CRYSTAL RIVER HEALTH & REHAB EOE CaregiverMidnight Shift must have all certificates and background check, Call for interview, (352) 344-5555 xt.101 Executive EZ Desk Chair, u pick-up (352) 637-2153 Free Kittens 2 females, sisters 4 months old Must go together 352-601-5426 Free Male Dog, husky/black lab mix breed, long grey hair, current shots, call 352-201-2758 Free to good Home two older Paso finoMares both in good health, very easy riders. Great with kids 527-9948 Hernando Have you seen Louie? Small male cat, grey w/ blk stripes, yellow eyes. By Seven Rivers Hosp. 563-5018/795-7650 Lost Cat Pure white 3 yr old cat reward-highlands area -352-419-7636 Lost Dog Red & White Welsch Corgi 11y/o Lost in Beverly Hills on 10/17 Please call (352) 533-2150 Lost dog, Silky Terrier last seen in Homosasaa on 10/11 near Litflower & Linder. (802) 598-6716 Lost Small Multi-Poo White/Tan Dog, Pine Ridge Area Conestoga Street Chipped, (352) 464-1519 Found Small Dog in the area of New Florida Ave, btwn Washington and California, Beverly Hills, pls call (352) 693-8243 SEVERALFURNITURE ITEMS, call for appt. after 11a.m (352) 628-4766 WE BUYRVS, TRAVELTRAILERS, 5TH WHEELS, MOTOR HOMES Call US 352-201-6945 $$ CASH PAID $$FOR JUNK VEHICLES no title ok 634-5389 BUYING JUNK CARS Running or Not CASH PAID-$300 & UP (352) 771-6191 FREE REMOV AL Appliances,AC Units Pool Heaters, Lawn Tractors 352-270-4087 T AURUS MET AL Recycling Best Prices for your cars or trucks also biggest U-Pull-It with thousands of vehicles offering lowest price for parts 352-637-2100 7 yr old female Bassett Hound and 7 yr old female Min Pin housebroken, healthy, spayed, free to good home pls call for interview (352) 287-3843 FREE CATS stripped gray, all gray or black & white Call (352) 746-1904 Tweet Tweet Tweet Follow the Chronicle on citruschronicle news as it happens right at your finger tips
PAGE 22
B10MONDAY,OCTOBER20,2014!1 Fall-Winter to $ave on W ater lic/ins 352-465-3086 AFFORDABLE LAWN CARE Cuts $10 & Up Res./Comm., Lic/Ins. 563-9824, 228-7320 D & R TREE SERVICE Lawn & Landscape Specialist. Lic. & Ins. Free Est. 352-302-5641 RICHARD STOKES HOME REPAIRS Int/Ext, Vinyl Windows Rescreening, Land Clearing 302-6840 NESTO MEDINA HANDYMAN SERVICE Specializing in Tile (305) 992-3805 Cell God Bless You!! RICHARD STOKES HOME REPAIRS Int/Ext, Vinyl Windows Rescreening, Land Clearing 302-6840 WARD HANDYMAN All Home Rep airs -Pressure Washing -Roof Coating, -Re-screens, Painting Driveway sealcoat Lic & Ins(352)464-3748 HOME CLEANING reliable & exp. lic/ins needs based, refs Bonded-352-212-6659 NA TURE COAST CLEANING $20 hr. Windows $25hr. 352-489-2827 Kats Kritter KarePET SITTING (352) 270-4672 LARR YS TRACT OR SER VICE GRADING & BUSHHOGGING ***352-302-3523*** **ABOVEALL** M & W INTERIORS All Home Improvement Northern Quality Southern prices! (352) 794-3368 Seasoned Oak Fire WoodF ALL SPECIAL $70. 4x7 stack, will deliver (352) 344-2696 Airport/Taxi Transportation DAYS Transportation Airports, Ports & Med DaysT ransport ation. com or (352) 613-0078 SMITTYSAPPLIANCE REPAIR.Also W anted Dead or Alive W ashers & Dryers. FREE PICK UP! 352-564-8179 CONSIGNMENT USA WE DO IT ALL!!! TRANSMISSIONS AIR CONDITIONING AUTO REPAIRS FREE TOWING FREE ESTIMATE 461-4518, 644 N US19 BOAT CLEANING Scuba diver w/tools to clean, find, salvage, anything underwater. Dmitry (352) 257-3788 Carpentry/Painting 30 years exp. Mobile home repairs. Low hourly rates. 220-4638 JEFFS CLEANUP/HAULING Clean outs/ Dump Runs Brush Removal. Lic. 352-584-5374 CURBAPPEAL Yardscape, Curbing, Flocrete. River Rock Reseals & Repairs. Lic. (352) 364-2120 000J5M5 Richard Max Simms Realtor Broker Owner NOW IS A GREAT TIME TO LIST YOUR HOME! CALL RICHARD FOR A FREE, NO OBLIGATION MARKET / CREDIT ANALYSIS! Buy, Sell or Refi, LLC 352-527-1655 ForSale.com editHer e .com SELLYOUR HOMEIN THEClassifieds SPECIAL! 30 Days $58.50Its Easy Call Today (352) 563-5966CITY3/2 mobile on canal unfurnished no pets f/l/s $600.00 mth 352-860-2795 HERNANDOWatsons Fish Camp 55+ Rental Community (352) 726-2225 INGLISCharming furn or unfurn effic./cottage, all utilities incld. No smoking. $600.352-422-2994 WATERFRONT RENT OR SELLHomosassa-Riverhaven-3BR 3BA.Canal w/dock.Lg Garage for RV or boat. 11430 W Waterway-$1300/mo Cell (352) 442-4210 IVORYTON INN ESSEX, CT Clean comfortable Rooms & Suites from $300 weekly. With private patio from $400 weekly. Julie (860) 836-3501 juliecr owell@ sbcglobal.net1/1, All Utilities Incl,d. $600. mo. + Sec., 352-634-5499 INVERNESS2/1 or 1/1 near CM Hospital $525 or $475 incld water/garb 352-422-2393 HERNANDOWATSONs Fish Camp 55+ Rental Community (352) 726-2225 Invernessnew 1bd mother-in -law apt. fully furn, all util,cable incl.$700 mo + sec (941) 650-7703 Brentwood& Terra Vista of Citrus Hills Homes & Townhomes Furnished & unfurnished. Starting at $1000/ per month, social membership included Six months minimum. Terra Vista Realty Group.Call 746-6121 **INVERNESS**Golf & Country loc. 3/2/2 Spacious pool home $850. ( 908) 322-6529 CITRUS SPRINGS3/2/1 home, CHA, 1,939 SF, no pets, 1st, last and sec reqd $730/mo 352-489-1411 INVERNESS3 bedroom. 2 bath. Waterfront, pool, 3/2/2 $1100/mo (541) 499-5025 INVERNESS3/2/2, wheel chair access. $775. mo.,1st, last.& $500. sec. Ref. Req. 352-637-2840 Youve been asking for a RENTAL SPECIAL and we listened SPECIAL INCLUDES: 7 days in print 7 days in our online Rental Finder Up to 6 lines Todays NewAdsFor only $50.00 Call us today! 352-563-5966 Your world first.Every Dayvautomotive Classifieds REPOGREAT SHAPE 40K MUST SEE!! 352-795-1272 located in Homosassa HOMOSASSA3/2 + Den, c/h/a, Clean $700. mo. f/l/s 352-634-6340 HOMOSASSA 3bd/2ba, 1 acre, skylight, decking, 2 sheds, parquat floor, fireplace, $55k obo (352) 563-9857 HOMOSASSA RENT TO OWN3360Arundel Terrace 2BR/1 BA, Tile floors, washer /dryer, lrg lot, Own for $3000 down & $476.29 per mo or Rent for $550 per mo. Call for appointment Tony Tubolina Broker/ Owner (727) 385-6330 DANS MH & RV PARK 3 Large Lots $175 mo. 2 Small Lots $165 mo. Inclds Water, Mowing & Trash (352) 447-2043 FLORAL CITYLAKEFRONT 1 Bedrm. AC, Clean, No Pets (352) 344-1025 Rental Complex (19 Apt) For Sale; 2bd Apt. For Rent (352) 228-7328 FLORAL OAKSAPARTMENTS SUZIE QSusie Q, a 1-y.o. Rhodesian Ridgeback/ Lab female, HW-negative, housebroken, wt. 35 lbs. She is very alert & playful, lively, good with other dogs, does not care about cats. Affectionate, friendly, plays with a ball, plays in water, good for young family. Call Joanne @ 352-795-1288 or Dreama @ 813-244-7324! HOMOSASSA1/1, Near US 19 $350mo. 1st/last/sec. 352-634-2368 Youve been asking for a RENTAL SPECIAL and we listened SPECIAL INCLUDES: 7 days in print 7 days in our online Rental Finder Up to 6 lines Todays NewAdsFor only $50.00 Call us today! 352-563-5966 JUNIORJunior 1 1/2 y.o. Blackmouth Cur mix, neutered, housebrkn, UTD on shots, microchipped, crate-trained. Athletic, loves to run in yard, will need daily exercise. Good w/leash training, loves car rides, loyal to his humans. Knows basic commands, Call Laci @ 352-212-8936, email Lacihendershot @yahoo.com. NewbornCHIHUAHUA PUPPIES 5 wks old, 3 boys, black $300. ea. (352) 419-7025 PUPPIES Miniature Short Hair Daschunds 4 Males CKC, health shots & cert. $400 ea; Call Sarah 786-879-0221 or text SADIESadie an approx 4-y.o. Bulldog mix, tan w/white markings, spayed, HW-negative, housebrkn, very sweet & calm, wt. 70 lbs, gets along well w/other dogs, well-mannered, UTD on shots. Friendly & gentle, shy @ first, warms up quickly. Call Joanne @ 352-795-1288 or Dreama @ 813-244-7324. Sgt. StanSgt. Stan neutered Lab mix, 6-8 y.o., HW--negative. Wt. 69 lbs. Extremely well behaved, knows many commands. Dog friendly, would be best in home without cats. Handsome & mellow. Awesome dog, would be great addition to any family. Call Marti @ 786-367-2834. THORThor, 9-month-old puppy, playful, very friendly, good with other dogs. Loves to play with water hose or sprinklers, chases tennis balls. Can sit for treats, is eager to learn. He is neutered, microchipped, UTD on vaccinations. Call Christina @ 352-464-3908, email christina.heady@ yahoo.com. 16 ft. TrailerDouble Axle $1,200. obo (352) 697-2409 CARGO TRAILER 17 x 14 dual axles, elec. brakes, new 10 ply tires, E-track, side door, drop ramp door $2,500. (352) 322-1813 CARGO TRAILER 2012, 5X8, side door bench, diamond plate front & fenders, 15 chrome wheels, round top, $1,250. (352) 860-1106 50 AUDIO BOOKS All by recent and noteable authors $5 each for sell or trade. (352) 212-1854. 2 Pair of Love Birds with cages, $300. (352) 634-4237 Beautiful SunConure w/cage $225. Lovebird, apricot green & blue w/cage $150. (352) 746-6542. NEW SKYLIGHT LEXAN 20x20 $40.00 352 464 0316 STILTS For doing Sheetrock work. Good shape, only $50. 352 464 0316 4 TOILET SEAT RISER Makes it much easier to get up. $20.00 352-464-0316 Electric Go-Go Elite Travel Scooter $200 OBO; also New Wheelchair $75 OBO (352) 382-1795 MOBILITY CHAIR JAZZY 3, New battery, excellent cond. $450 OBO(352) 476-1113 OxygenConcentratorInogenOne -Regain Independence & Enjoy Greater Mobility. 100% Portable! Long-Lasting Battery. Try It Risk Free Call 800-619-5300 For Cash Pur chase Only STRAT STYLE ELECTRIC GUITAR LOOKS,SOUNDS, PLAYS GOOD! $40 352-601-6625 Carvin Electric Guitar CT6 deep tiger eye finish, mid 2005 Model gold hardware, locking tuners, $950. (352) 746-7745 EQUALIZER peavey 15 band stereo, new, ($35) 352-212-1596 JAZZ IMPROV study jazz w/Rick D. all instruments welcome (352) 344-5131 JEAN BAPTISTE LG SIZE BUGLE w/Mouthpiece & Case Great condition $50 Josh 423-4163 MICROPHONE akg d800s, low impedance, good stage mic, great shape,($10) 352-212-1596 MONITORS TOA12 good shape, both for ($45 ) 352-212-1596 SPEAKER STANDS quicklock heavy duty, great shape, both for ($35) 352-212-1596 SPEAKERS radio shack 10 PA, pole mountable, good shape, both for ($35) 352-212-1596 FOLDING TABLE Heavy Duty, brown 5long x 30 wide Excellent condition. $30. 352-270-3909 TREADMILLMANUAL Excell. cond. Almost new non elec. easily moved and stored $50. Call 352-257-4076 1994 EZ-Go Golf Cart Very good cond w/ charger $1850. (352) 601-2480 Club Car1 year old, Battery with charger, lots of extras $1,795. 352-476-5687 BICYCLE LOCK New Brinks adjustable shackle solid brass 2x 6 all purpose $10. Dunnellon 352.465.8495 BICYCLE RACKS 1-1/4 receiver hitches 3-bike & 2-bike, Heavy Duty $25. ea. Dunnellon 352.465.8495 Club Car 2008 Super Clean Golf Cart, Two-Tone Seats. Charger Included. $3,800. Call Love Motorsports @ 352-621-3678 GOLF IRONS NewAdamsTight Lies MRH 7-SW senior graphite $100. Dunnellon 465.8495 Need a JOB?#1 Employment source is Classifieds
PAGE 23
MONDAY,OCTOBER20,2014B 11 CITRUS COUNTY (FL) CHRONICLE CLASSIFIEDS 000J5M1 FORD1964 Galaxy 500 2dr, w/skirts, original paint & interior, 352 big block, 102k mi. 2 owners, $8800. partial trade? (352) 870-8058 PONTIAC 1989 Firebird Formula 23k doc. mi. pristine, original owner, $11,500. obo 352-634-3806 CJ7 RENEGADE1980, 6-Cyl., 4-spd, hard top, metal doors, rear seat. Rough, not running, but would be good for parts or project. Clear Title. $800, or best offer. Local, call between 1p-5p (352) 697-3522 CONSIGNMENT USA WE DO IT ALL!!! TRANSMISSIONS AIR CONDITIONING AUTO REPAIRS FREE TOWING FREE ESTIMATE 461-4518, 644 N US19 BUICK2005, Rendezvous $5,995. 352-341-0018 CHEVY2000, Blazer, 2 Door $2,995. 352-341-0018 Ford2000 Excursion 164k mi. TV, 6 track, electric brakes, tow pkg. $6000. obo (352) 503-7284 Mercedes Benz2001 CLK 320 Convertible, Looks and runs great, A Must See $6,000(207)-730-2636 CHEVY, Suburban, 4 x 4, Driven 6,000 miles a year $3,995. Consignment USA (352) 795-4440 CHRYSLER2012 Town & Country Wheelchair van with 10 lowered floor, ramp and tie downs Call Tom for more info 352-325-1306 HONDA, Odyssey white 6 passenger, leather inter. just detailed. $3,500. (352) 212-7501 HONDA2006 Shadow 600CC runs & looks like new new tires & battery red w/xtras, asking $2800. (352) 344-2715 KAWASAKI2009 500, Windsheild, sissy bar, 8,000 mil Excel. cond. $3,500 obo (352) 860-1106 YAMAHA, C3, 49CC Scooter Red, excel. cond. 281 miles, $1,800. (248) 420-9625 cell BUICK2000 LeSabre 55k mi, extra clean new tires, $4950. (352) 257-3894 BUICK2007 LaCrosse CX 4 DR.3.8Lv6 = Bronze LOW MILEAGE Beverly Hills Area $8500. ONLY SERIOUS BUYERS Please E-Mail MROSSITRA VEL@ GMAIL.COM or call (480) 391-0057 CHEVROLET2004 IMPALA63,000 miles perfect cond. $5,500 (352) 237-3507 CONSIGNMENT USA WE DO IT ALL!!! TRANSMISSIONS AIR CONDITIONING AUTO REPAIRS FREE TOWING FREE ESTIMATE 461-4518, 644 N US19 FORD1994 Tempo 190k mi new brakes, battery, cold a/c, $600. obo (352) 341-1103 FORD2002, Taurus $3,995. 352-341-0018 HYUNDAI2002, Elantra, Auto trans,pw., pl. $2,995 352-341-0018 LINCOLN1988Towncar, 4-door, senior owned, garage kept, 45kmi $4200 (352) 860-1106 or 201-4499 LINCOLN1993 127K Miles, Drives like new, must see! $2200. OBO (352) 447-5545 MAZDA2010 MX5 Miata Sport red conv, 32k mi, auto new tires, exc. cond. MUST SEE! ask, $16,295. (352) 897-4432 MINI COOPER2005, Power windows, locks, $7,995. 352-341-0018 NISSAN2012 Altima 2.5 S Grey, 19K mi, Keyless entry, Power Seat/Windows, tinted. Mint cond. $13,500 352-746-6432 Oldsmobile2001 Maroon Aurora 107k mi. exc. new ac, brakes, & more $5750. aft.6p (352) 637-5525 PONTIAC2001 Grand Am GT 81k mi, extra clean leather, alum wheels $4500(352) 257-3894 SELL YOUR VEHICLE IN THEClassifieds**3 SPECIALS ** 7 days $26.50 14 days $38.50 30 Days $58.50 Call your Classified representative for details. 352-563-5966 Your Worldof garage sales Classifieds ww.chronicleonline.com NC Mtnsnear Lake Lure. New cabin on 1.5 acres, huge porches, vaulted ceiling, 1,200sf, ready to finish. $74,900 Call 828-286-1666 2 JET SKIS onTrailer, 04, Yamaha 2002 Honda, Around 200 hrs. ea., 3 passengers ea. Must sell due to health $6,995. 352-726-3263 KAWASAKI1996 Jet Ski 750 Good condition, No time to ride. $1100. (352) 287-3656 ** BUY, SELL** & TRADE CLEAN USED BOATS THREE RIVERS MARINE US 19 Crystal River **352-563-5510** Eddy LineKayak, 14 Equinox carbon lite construction-2 hatches, gently used, new, cost $1800. asking $1400. (352) 586-3850 WE HA VE BOA TS GULF TO LK MARINE We Pay CASH For Used Clean Boats Pontoon, Deck & Fishing Boats **(352)527-0555** boatsupercenter.com WILDERNESS14.5 2 person Kayak exc. cond. yellow/ orange sunset, $400. (352) 586-3850 CHEVY1990 Class C, Awning, generator, 31k miles, 2 ACs, Runs Perfect $5,800 (727) 207-1619 Crystal River HITCHHIKER36 ft, 4 slide outs hydraulic brakes, self leveling sys. + extras $33,000. 352-637-3996 WE BUYRVS, TRUCKS, TRAILERS, 5TH WHEELS, & MOTOR HOMES Call US 352-201-6945 Winnebago2005, 37B, 38ft Long, 3 slides 53k mi, $69,400 pics on rvtrader.com (352) 344-3181 BUYING JUNK CARS Running or Not CASH PAID-$300 & UP (352) 771-6191 T AURUS MET AL Recycling Best Prices for your cars or trucks also biggest U-Pull-It thousands of vehicles offering lowest price for parts 352-637-2100 Tennessee MountainsNEW CABIN $149,900 3 BR/2.5 BA sold as is 28.5 Acres, Creeks, Mountain Views, Trout Stream, Minutes to Watts Bar Lake. Power, Roads, Financing Call 877-520-6719 or Remax 423-756-5700 ONLY $49,00010 Acres Mini Farms Paved Street Call John 305-607-7886 REALTY USA (407) 599-5000 Tweet Tweet Tweet Follow the Chronicle on citruschroniclenews as it happens right at your finger tipsTAKE NEW LISTINGS BUYING OR SELLING TOP PERFORMANCEReal estate Consultant tpauelsen@ hotmail.com Your Citrus County Residential Sales Specialist!MICHAEL J. RUTKOWSKI(U.S. Army Retired) Realtor (352) 422-4362 Michael.Rutkowski @ERA.com Integrity First in all Aspects of Life!ERA American Realty & Investments. MICHELE ROSERealtor Simply put I ll work harder 352-212-5097 isellcitruscounty@yahoo.com Craven Realty, Inc. 352-726-1515 ARBOR LAKES 55+ Gated Community Corner Cul-de-Sac UNIQUE 2/2/2 VILLA w/den/covered lanai Inground Pool. Many Upgrades $179.900 Appt. (352) 726-7339 2BR/2BA/2CG, Large lanai, FL room, Private Street $85,000 (352) 419-4447 or (352) 201-8177!!! AGENT ADIN THE CHRONICLE CLASSIFIEDS SPECIALS 30 Days $55.50 Its Easy Call . 5 INCOMEPROPERTIES For Sale make offer, 1 or all TERMS (352)422-3670
PAGE 24
B12MONDAY,OCTOBER20,2014 CLASSIFIEDS CITRUSCOUNTY( FL ) CHRONICLE 429-1020 MCRN 10/28/14 Lien Sale PUBLIC NOTICE NOTICE OF PUBLIC SALE Citrus Mini Storage is wishing to avail itself of the provisions of applicable laws of this state, Civil Code Section 83.801 -83.809, hereby gives notice of sale under said law, to wit: On October 28, James B Strickland 34 Household/ Business Items Ryan Wilson 44 Household/ Business Items Suzette Berry B October 13 & 20, 2014. 438-1020 MCRN OA-14-01 Dept. of Plan. & Dev. PUBLIC NOTICE NOTICE OF INTENT TO CONSIDER AN ORDINANCE AMENDING THE CITRUS COUNTY CODE The Citrus County Board of County Commissioners (BCC) proposes to adopt the following by ordinance: OA 14 01 DEP AR TMENT OF PLANNING AND DEVELOPMENT AN ORDINANCE OF CITRUS COUNTY, FLORIDA, A POLITICAL SUBDIVISION OF THE STATE OF FLORIDA, AMENDING THE FEE SCHEDULE OF CHAPTER 54 OF THE CITRUS COUNTY CODE, ALSO KNOWN AS THE CITRUS COUNTY IMPACT FEE ORDINANCE, FOR TRANSPORTATION, SCHOOLS, PARKS, LIBRARY, FIRE, EMERGENCY MEDICAL SERVICES (EMS), LAW, AND PUBLIC BUILDINGS; PROVIDING FOR SHORT TITLE, AUTHORITY, APPLICABILITY, AND ADOPTION OF TECHNICAL REPORT; PROVIDING FOR INTENT AND PURPOSE; PROVIDING FOR DEFINITIONS AND RULES OF CONSTRUCTION; PROVIDING FOR FEE TO BE IMPOSED; PROVIDING FOR INDIVIDUAL ASSESSMENT; PROVIDING FOR CREDITS; PROVIDING FOR BENEFIT DISTRICTS; PROVIDING FOR USE OF FUNDS; PROVIDING FOR RETURN OF FEES; PROVIDING FOR LIBERAL CONSTRUCTION, SEVERABILITY, AND PENALTY; CONFLICTS OF LAW; CODIFICATION, INCLUSION IN CODE, AND SCRIVENERS ERRORS; MODIFICATION; AND PROVIDING AN EFFECTIVE DATE. A public workshop on the proposed ordinance will be held by the Board of County Commissioners on November 4, 2014, at 1:45 PM, at the Citrus County Courthouse, 110 N. Apopka Avenue, Room 100, Inverness, Florida. Interested parties may appear at the meeting and be heard with respect to the proposed ordinance amendment. A copy of the proposed ordinance Land Development Division at (352) 527-5239. this meeting because of a disability or physical impairment should contact the County Administrators Published October 20, 2014 OA-14-01 420-1027 MCRN How has been filed against you and you are required to serve a copy of your written defenses, if any, to it on Plaintiffs attorney, Donald F. Perrin, Esq., DONALD F. PERRIN, P.A., Post Office Box 250, Inverness, FL 34451-0250 within thirty (30) days after the first publication of this notice and file the original with the Clerk of this Court either before service on Plaintiffs attorney or immediately thereafter; otherwise a default will be entered against you for the relief demanded in the Complaint. DATED this 23rd day of September, 2014. (SEAL) ANGELA VICK, Clerk of the Court and Comptroller By:/s/ Amy Holmes, As Deputy Clerk Published October 6, 13, 20, & 27,2014. 426-1020 MCRN Seymour, Deborah L. 09-2014-CA-000751 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY, FLORIDA CIVIL ACTION CASE NO.: 09-2014-CA-000751 DIVISION: WELLS FARGO FINANCIAL SYSTEM FLORIDA, INC., Plaintiff, vs. DEBORAH L. SEYMOUR A/K/A DEBORAH SEYMOUR A/K/A DEBORAH LEA SEYMOUR, et al, Defendant(s). NOTICE OF ACTION To:DEBORAH L. SEYMOUR A/K/A DEBORAH SEYMOUR A/K/A DEBORAH LEA SEYMOUR Last Known Address: 8302 Marinazzo Terrace Crystal River, FL 344 71, OF DE ROSA, INC., UNIT 5, REVISED, ACCORDING TO THE MAP OR PLAT THEREOF AS RECORDED IN PLAT BOOK 11, PAGE 29, OF THE PUBLIC RECORDS OF CITRUS COUNTY, FLORIDA. A/K/A 8302 MARINAZZO TERRACE, CRYSTAL RIVER, FL 34428 1st day of October, 13 & 20, 2014 14-148734 427-1103 MCRN Collins, Casey J. 2013-CC-664 NOA PUBLIC NOTICE IN THE COUNTY COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY IN THE STATE OF FLORIDA Case No: 2013-CC-664 CITRUS COUNTY CLERK OF COURT AND COMPTROLLER Plaintiff, V. CASEY JAMES COLLINS, Individually and LORRAINE ROSENBERGER. Individually, STATE OF FLORIDA, DEPARTMENT OF REVENUE, CHILD SUPPORT ENFORCEMENT PROGRAM, ALISON T. WEST, Individually, TENNILLE COLLINS, Individually, TENNILLE SCALZI, Individually, LINDA A. MULIK, Individually, GERALD M SCHWARTZ d/b/a ALLIED JUDGMENT RECOVERY LLC., CITRUS COUNTY MUNICIPAL SERVICES BENEFIT UNIT OF WATER AND WASTE WATER SERVICES, STATE OF FLORIDA, and XTYLINE, INC., Defendants. NOTICE OF ACTION TO: GERALD M. SCHWARTZ d/b/a ALLIED JUDGMENT RECOVERY, LLC, CASEY JAMES COLLINS, LINDA A. MULIK and ALISON T. WEST $6,589.58 resulting from the tax deed sale that occurred on June 19, 2013 and referenced as 2013-078. 428-1103 MCRN Runnels, Timothy A. 2014-SC-273 NOA PUBLIC NOTICE IN THE COUNTY COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY IN THE STATE OF FLORIDA Case No: 2014-SC-273 CITRUS COUNTY CLERK OF COURT AND COMPTROLLER Plaintiff, V. TIMOTHY ALAN RUNNELS, Individually, HUMBERTO GONZALEZ JR., Individually, and KAREN V. GONZALEZ, Individually, and ISAOA/ATIMA, and PALISADES COLLECTION, LLC. Assignee of Polaris, A Florida Limited Liability Company and GE MONEY BANK A Corporation F/K/A/ GE CAPITAL CONSUMER CARD CO. Defendants. NOTICE OF ACTION TO: TIMOTHY ALAN RUNNELS, HUMBERTO GONZALEZ JR and ISAOA/ATIMA $4,162.09 resulting from the tax deed sale that occurred on December 11, 2013 and referenced as 2013-350. 432-1020 MCRN Macphee, James D. 2013 CA 824 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE 5TH JUDICIAL CIRCUIT, IN AND FOR CITRUS COUNTY, FLORIDA CIVIL DIVISION CASE NO.:2013 CA 824 FEDERAL NATIONAL MORTGAGE ASSOCIATION, Plaintiff, vs. JAMES D MACPHEE, et al., Defendants. NOTICE OF ACTION TO: JAMES D. MACPHEE Current Residence: 1405 E WEDGEWOOD LN HERNANDO, FL 34442 433-1020 MCRN Hall, Sanford A. 2013 CA 001491 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE 5TH JUDICIAL CIRCUIT, IN AND FOR CITRUS COUNTY, FLORIDA CIVIL DIVISION CASE NO.:2013 CA 001491 IN RE: S&P CAPITAL CORPORATION vs. SANFORD A. HALL and HARTLYN J. HALL TO: SANFORD A. HALL and HARTLYN J. HALL YOU ARE HEREBY NOTIFIED that an action to foreclose a mortgage on the following property located in Citrus County, Florida: Lot(s) 12, THE KINGS FOREST, according to the Plat thereof on file in the office of the Clerk of the Circuit Court in and for Citrus County, Florida, recorded in Plat Book 11, Page 148. Said lands situate, lying and being in Citrus County, Florida. has been filed against you and you are required to serve a copy of your written defenses, if any, to it on William G. Shofstall, attorney for Plaintiff, S&P CAPITAL CORPORATION, whose address is P.O. Box 210576, West Palm Beach, Florida 33421, and file the original with the Clerk of the above-styled court on or before thirty (30) days after the first date of publication; otherwise a default will be entered against you for the relief prayed for the Complaint. WITNESS my hand and the Seal of said Court at Citrus County, Florida on this 2nd day of September, 2014. ANGELA VICK, CLERK OF THE CIRCUIT COURT CITRUS COUNTY, FLORIDA (Circuit Court Seal) By: /s/ Vivian Cancel, As Deputy Clerk Published October 13 & 20, 2014 434-1027 MCRN Fengarinas, Amanda N. 2014-CA-000756 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY, FLORIDA CIVIL DIVISION CASE NO. 2014-CA-000756 BRANCH BANKING AND TRUST COMPANY, Plaintiff, v. AMANDA NICOLE FENGARINAS AS PERSONAL REPRESENTATIVE OF THE ESTATE OF GREGORY F. FENGARINAS, et al, Defendants. NOTICE OF ACTION TO: UNKNOWN HEIRS OF GREGORY F. FENGARINAS,: UNKNOWN YOU ARE NOTIFIED that an action to foreclose a mortgage on the following property in CITRUS County, Florida, to-wit: LOTS 21 AND 22, BLOCK B OF MEADOW WOOD, ACCORDING TO THE MAP OR PLAT THEREOF RECORDED IN PLAT BOOK 4, PAGE 108, PUBLIC RECORDS OF CITRUS November 19, 2014, or within thirty (30) days after the first publication of this Notice of Action, and file the original with the Clerk of this Court at 110 N Apopka A venue, Inver ness FL 34450 either before service on Plaintiffs attorney or immediately thereafter; otherwise, a default will be entered against you for the relief demanded in the complaint petition. WITNESS my hand and seal of the Court on this 6th day of October, 2014. ANGELA VICK, Clerk of the Court and Comptroller (SEAL) By: /S/ VIVIAN CANCEL, Deputy Clerk Published October 20 & 27, 2014 436-1110 MCRN Byers, John R. 2014-CA-838 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT OF THE STATE OF FLORIDA, IN AND FOR CITRUS COUNTY CIVIL DIVISION Case Number: 2014-CA-838. AMENDED NOTICE OF ACTION TO: UNKNOWN HEIRS, DEVISEES, GRANTEES, ASSIGNEES, LIENORS, CREDITORS, TRUSTEES OR ANY OTHER PARTIES CLAIMING BY, THROUGH, UNDER OR AGAINST MARK T. ROE a/k/a MARK ROE, deceased. (Address Unknown) YOU ARE NOTIFIED that an action to quiet title to the following described real property in Citrus County, Florida: Lot 15 of Magnolia Beach Park Unrecorded Subdivision: Commence at the SE corner of the NE 1/4 of Section 33, Township 18 South, Range 20 East, Citrus County, Florida, thence N 89 W along the South line of said NE 1/4 417.65 feet, thence N 7 40 E 537.6 feet to the Point of beginning, thence S 89 20 E 140 feet, thence N 7 40 E 76.8 feet, thence N 89 20 W 140 feet, thence S 7 40 W 76.8 feet to the point of Beginning. Property Address: 1295 N. Beach Park Dr., Inverness, FL 34453 has been filed against you, and you are required to serve a copy of your written defenses, if any, to it on Adam J. Knight, Esq. attorney for Plaintiff, whose address is 601 S. Fremont Avenue, Tampa, Florida 33606 on or before 30 days from the first date of publication and to file the original with the Clerk of this Court either before service on plaintiffs attorney or immediately thereafter; otherwise a default will be entered against you for the relief demanded in the Complaint. The action was instituted in the Fifth Judicial Court for Citrus County in the State of Florida and is styled as follows: (s) DATED on October 7, 2014. Angela Vick, Clerk of the Court and Comptroller [CIRCUIT COURT SEAL] By: /s/ Vivian Cancel, As Deputy Clerk Published October 20 & 27, November 3 & 10, 2014 11755.00 437-1027 MCRN Tucker,Annie M. 09-2014-CA-000743 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY, FLORIDA Case No.: 09-2014-CA-000743 CITIMORTGAGE, INC. Plaintiff, v. ANNIE M. TUCKER A/K/A ANNIE MAE TUCKER A/K/A ANNIE TUCKER A/K/A ANNIE M. STEVENFIELD, et al Defendant(s). NOTICE OF ACTION FOR FORECLOSURE PROCEEDING-PROPERTY TO: PAUL DUCAN BASS HEIR OF PATSY L. BASS A/K/A PATSY BASS ADDRESS UNKNOWN BUT WHOSE LAST KNOWN ADDRESS IS: 5220 S LECANTO HIGHWAY LECANTO, FL 34461 Residence unknown, if living, including any unknown spouse of the said Defendants, if either has remarried and if either or both of said Defendant(s) Defendant(s) as may be infants, incompetents or otherwise not sui juris. YOU ARE HEREBY NOTIFIED that an action has been commenced to foreclose a mortgage on the following real property, lying and being and situated in Citrus County, Florida, more particularly described as follows:. The South 1/2 of the S 1/2 of the N 1/2 of the NE 1/4 of Section 35, Township 19 South, Range 18 East, Citrus County, Florida, LESS and EXCEPT the right of way of State Road No. 490. LESS AND EXCEPT the following: 439-1027 MCRN Carella, Marie 09-2014-CA-000524 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY, FLORIDA CIVIL ACTION CASE NO.: 09-2014-CA-000524 DIVISION: BANK OF AMERICA, N.A., Plaintiff, vs. THE UNKNOWN HEIRS, DEVISEES, GRANTEES, ASSIGNEES, LIENORS, CREDITORS, TRUSTEES, OR OTHER CLAIMANTS CLAIMING BY, THROUGH, UNDER, OR AGAINST, MARIE E. CARELLA ALSO KNOWN AS MARIE R. CARELLA ALSO KNOWN AS E. CARELLA MARIE, DECEASED et al, Defendant(s). NOTICE OF ACTION To: THE UNKNOWN HEIRS, DEVISEES, GRANTEES, ASSIGNEES, LIENORS, CREDITORS, TRUSTEES, OR OTHER CLAIMANTS CLAIMING BY, THROUGH, UNDER, OR AGAINST, MARIE E. CARELLA ALSO KNOWN AS MARIE R. CARELLA ALSO KNOWN AS E. CARELLA MARIE, 11 AND THE SOUTH HALF OF LOT 10, BLOCK 3 OF ROYAL OAKS, ACCORDING TO THE PLAT THEREOF AS RECORDED IN PLAT BOOK 13, PAGES(S) 51 THROUGH 54, INCLUSIVE, OF THE PUBLIC RECORDS OF CITRUS COUNTY, FLORIDA. A/K/A 3383 S BELGRAVE DRIVE, INVERNESS, FL 34452 November 19, 2014 10th day of September, 20 & 27, 2014 14-141042 Commence at the NW corner of the S 1/2 of the N 1/2 of the NE 1/4 of Section 35, Township 19 South, Range 18 East, thence S 0`45 E along the West line of said Northeast 1/4 a distance of 331.30 feet to the Point of Beginning, said point being the NW corner of land described in Official Records Book 257, Page 562, of the South 122.73. AND Commence at the NW corner of the S 1/2 of N 1/2 of the NE 1/4 of Section 35, Township 19 South, Range 18 East, Citrus County, Florida, thence S 0`45 E along the West line of the said NE 1/4 a distance of 331.30 feet to the Point of Beginning, said point also being the NW corner of land described in Official records Book 257, page 562, of the the 20 feet of the South 122.73 feet. Together with an easement described as follows: The southerly 20 feet and the Westerly 20 feet of the South 122.73 feet of the S 1/2 of the N 1/2 of the NE 1/4 of Section 35, Township 19 South, Range 18 East, LESS and EXCEPT The right of way of State Road No. 491. Together with a 1985 Thomas Doublewide, VIN #846780A and VIN#846780B AND Together with a 1980 Doublewide. COMMONLY KNOWN AS: 5220 S Lecanto Highway, Lecanto, FL 34461 November 7th day of October, October 20 & 27, 2014 FL-97001267-14 GLORIA S. VICKERY-MACPHEE Current Residence: 1405 E WEDGEWOOD LN HERNANDO, FL 34442 UNKNOWN TENANT Current Residence: 1405 E WEDGEWOOD LN HERNANDO, FL 34442 YOU ARE NOTIFIED that an action for Foreclosure of Mortgage on the following described property: LOT 25, BLOCK I, FAIRVIEW ESTATES. ACCORDING TO PLAT THEREOF RECORDED IN PLAT BOOK 12. PAGES 49 THROUGH 60 INCLUSIVE. 12,. WITNESS my hand and the seal of this Court this 22nd day of October 2013. Betty Striffler, Clerk of Court [CIRCUIT COURT SEAL] By: /s/ Dawn Napel, Deputy Clerk Published October 13 & 20,2014 12-19057 The action is asking the court to decide how the following real or personal property should be divided: The Overbid funds in the amount of | http://ufdc.ufl.edu/UF00028315/03636 | CC-MAIN-2018-39 | refinedweb | 32,312 | 74.08 |
On Monday 19 September 2005 04:30 pm, Rich Apodaca wrote:
> On Mon, 19 Sep 2005 09:12:15 +0200, Egon Willighagen wrote
> > From what I've seen, I think CDKTools and Octet use a builder as stand in
> > for the data classes, correct?
>
> Yes - and no. My take on the AtomContainer class (the main "data class") is
> that it combines two completely unrelated behaviours: construction
> (addAtom, addBond, etc.) and query (getAtomCount, getBondCount, etc.). In
> other words, what I propose is:
>
> AtomContainer -> AtomContainer + CDKBuilder
This might be good in the long term... but in the short term I don't want to
step away from having things out of sight for users.
> In those situations in which AtomContainer behaves as its own builder (in
> file i/o, for example), CDKBuilder is designed to be a replacement.
> However, AtomContainer remains the main data class in those situations in
> which information needs to be extracted from an AtomContainer (in
> UniversalIsomorphismTester, for example).
>
> > This is not the step I would like to make, though I do want it's
> > functionality. The reason is that this would change the user
> > experience regarding the CDK API. Something I don't want to change.
>
> It needn't change the CDK API at all, just add to it. Check out the current
> release of CDKTools (). I've introduced CDKBuilder as
> a layer on top of CDK. Clients that want to continue letting AtomContainer
> build itself can continue to do so. But clients wanting the flexibility of
> the Builder Pattern can take advantage of CDKBuilder. The more extreme
> option would be to remove the mutator methods (addAtom, addBond, etc.) from
> the AtomContainer interface. And I agree that this would very much change
> the CDK API. But what I'm doing with CDKTools and CDKBuilder just gives
> developers a more flexible option for building AtomContainers, without
> changing the CDK experience.
>
> > My main 'problem' I have with this design is that this indeed is a
> > replacement for the AtomContainer.
>
> Not at all. The two can co-exist side-by-side (see above).
>
> > interface. So, a lot of methods, but the class can be singular
> > (only one instance can exist in the VM world), so I don't expect too
> > many memory problems.
> >
> > In the end all library classes in the CDK do not instantiate data
> > classes directly, but use the builder instead.
>
> I realize that this is in line with the current CDK setup. Unfortunately, I
> don't see how this moves the ball forward. If the object returned by
> ChemObject.getBuilder() is a static Singleton instance of a class with a
> large interface, then the same effect can be achieved by adding the
> following method to any ChemObjectBuilder interface implementation:
>
> public interface ChemObjectBuilder
> {
> // ... methods
> }
>
> public class BasicChemObjectBuilder implements ChemObjectBuilder
> {
> private static ChemObjectBuilder instance = null;
>
> // ... implement ChemObjectBuilder interface
>
> public static ChemObjectBuilder getInstance()
> {
> // .. the usual Singleton Pattern code
> if (instance == null)
> {
> instance = new BasicChemObjectBuilder();
> }
>
> return instance;
> }
> }
Yes, this is what I had in mind. Except that the the ChemObjectReader does not
know which ChemObjectBuilder class to instantiate. Hence the
ChemObject.getChemObjectBuilder() method.
> Although this can be done, there really is no need for it. Java has a
> highly-optimized garbage collector that can easily handle a few unused
> ChemObjectBuilders. I'm not against it per se, I just think it's
> unnecessary. And this is the kind of optimization that is best performed at
> a late stage with a good profiling tool, so its performance benefit, if
> any, can be quantified.
That might be true, though I do not see why using a singleton would make it
possibly slower. Is there a reason not to use it?
> Aside from this, the solution you propose still couples builder-like
> behavior to query-like behavior.
Yes, that's the current design. I know that Octet uses immutable classes,
which might be better, but *I* don't want to make that transition now. That
is, if someone comes with a good patch, I won't object to applying it at all.
But I'm not going to write that patch at this moment myself.
I'm just trying to tweak things such that I can fullfill my personal needs
here, *without* modifying the API and breaking things too much...
> Why should Atom (which currently inherits
> from ChemObject) need to know anything about its builder? I'm advocating
> completely decoupling these behaviors because I've found this to be the
> most flexible solution.
Can you give a practical example where this flexibility comes in?
> I think what you're getting at is that interface implementations should
> never directly instantiate the objects they need to work with?
Yes, I think that about formulates one of the goals...
> But there
> are better ways to accomplish this, for example, through "Dependency
> Injection" (). There are
> entire frameworks designed to make this simpler, such as Spring and
> PicoContainer. I'm planning on implementing a solution like this in Octet
> (), but only as a late-stage optimization, when I'm
> certain the entire package is stable.
Will have to look at this.
> This is a very interesting discussion. I don't want to give the impression
> that what I'm proposing is the only way to do it. There are may viable
> solutions, but I'm interested in how you and the rest of the CDK developers
> see the relative advantages and disadvantages of the approaches we discuss.
Likewise, I do not overly prefer my solution to others, but just being
pragmatic here, considering time limitations (no time for thinking through
and explaning a large API design) and trying not to break anything...
It's an interesting discussion indeed. What do others think about this?
Egon
View entire thread | http://sourceforge.net/p/cdk/mailman/message/9934449/ | CC-MAIN-2016-07 | refinedweb | 948 | 65.01 |
08 April 2011 16:00 [Source: ICIS news]
TORONTO (ICIS)--EMS Group's first-quarter sales rose by 8.4% year on year to Swiss francs (Swfr) 438m ($476m, €334m), partially due to the successful launch of new products, the Switzerland-based specialty chemicals firm said on Friday.
In local currency terms, sales for the three months ended 31 March rose 18.1% from the 2010 first quarter, but lower foreign exchange rates and high raw material prices hampered growth in Swiss franc terms, the company said.
“A sustained positive economic trend in the main sales markets worldwide, together with successfully launched new applications with speciality products, led to a very positive development of sales volumes,” the company said.
?xml:namespace>
However,
“The future supply of raw materials will not keep pace with increasing demand, resulting in strong price increases at increasingly short intervals,” the company said.
“As a consequence, price increases for secondary products and stronger inflationary trends must be expected,” it added.
($1 = Swfr0.92, €1 = Swfr1.31) | http://www.icis.com/Articles/2011/04/08/9451227/swiss-ems-q1-sales-rise-8.4-warns-of-rising-raw-material-costs.html | CC-MAIN-2015-06 | refinedweb | 170 | 51.18 |
Sigh. Why is everything so easy in Groovy? I had a small request from a client to change a lot of database records. Now, I am no SQL guy, but I know Java and Groovy. So I wrote a little script to access the database and change the value of some records:
import groovy.sql.Sql // Just copy mysql-connector.jar in $GROOVY_HOME/lib and we // use the MySql driver. def sql = Sql.newInstance("jdbc:mysql://localhost/db", "user", "password", "com.mysql.jdbc.Driver") // Make sure we can update the records. sql.resultSetConcurrency = java.sql.ResultSet.CONCUR_UPDATABLE sql.eachRow("select * from articles where description like '%IN STOCK%'") { println "Change record with id ${it.id}" it.description = it.description.substring(0, it.description.indexOf('IN STOCK')) } println "Done." | http://mrhaki.blogspot.com/2009/04/groovy-even-makes-database-access-easy.html | CC-MAIN-2018-39 | refinedweb | 127 | 53.78 |
#include <graph_lock.hpp>
The locking implementation is basically two families of continuations. The first family is called the scopelock_continuation This family completes the lock of a scope. It iterates over the owners of the replicas of the vertex, and issue remote calls to acquire locks on them.
The second family is called partiallock_continuation It completes the lock on local vertices. It iterates over the owned vertices within the scope of the vertex, acquiring locks.
Definition at line 56 of file graph_lock.hpp.
The parameters passed on to the partial lock continuation
Requests a lock on the scope surrounding globalvid. This globalvid must be owned by the current machine. When lock is complete the handler is called.
Definition at line 69 of file graph_lock.hpp.
Isues an unlock on the scope surrounding globalvid. A lock on this scope MUST have been acquired before or very bad things will happen
Definition at line 101 of file graph_lock.hpp. | http://select.cs.cmu.edu/code/graphlab/doxygen/html/classgraphlab_1_1graph__lock.html | CC-MAIN-2014-10 | refinedweb | 156 | 58.58 |
When I want to make a value type read-only outside of my class I do this:
public class myClassInt { private int m_i; public int i { get { return m_i; } } public myClassInt(int i) { m_i = i; } }
What can I do to make a
List<T> type readonly (so they can't add/remove elements to/from it) outside of my class? Now I just declare it public:
public class myClassList { public List<int> li; public myClassList() { li = new List<int>(); li.Add(1); li.Add(2); li.Add(3); } }
There is limited value in attempting to hide information to such an extent. The type of the property should tell users what they're allowed to do with it. If a user decides they want to abuse your API, they will find a way. Blocking them from casting doesn't stop them:
public static class Circumventions { public static IList<T> AsWritable<T>(this IEnumerable<T> source) { return source.GetType() .GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance) .Select(f => f.GetValue(source)) .OfType<IList<T>>() .First(); } }
With that one method, we can circumvent the three answers given on this question so far:
List<int> a = new List<int> {1, 2, 3, 4, 5}; IList<int> b = a.AsReadOnly(); // block modification... IList<int> c = b.AsWritable(); // ... but unblock it again c.Add(6); Debug.Assert(a.Count == 6); // we've modified the original IEnumerable<int> d = a.Select(x => x); // okay, try this... IList<int> e = d.AsWritable(); // no, can still get round it e.Add(7); Debug.Assert(a.Count == 7); // modified original again
Also:
public static class AlexeyR { public static IEnumerable<T> AsReallyReadOnly<T>(this IEnumerable<T> source) { foreach (T t in source) yield return t; } } IEnumerable<int> f = a.AsReallyReadOnly(); // really? IList<int> g = f.AsWritable(); // apparently not! g.Add(8); Debug.Assert(a.Count == 8); // modified original again
To reiterate... this kind of "arms race" can go on for as long as you like!
The only way to stop this is to completely break the link with the source list, which means you have to make a complete copy of the original list. This is what the BCL does when it returns arrays. The downside of this is that you are imposing a potentially large cost on 99.9% of your users every time they want readonly access to some data, because you are worried about the hackery of 00.1% of users.
Or you could just refuse to support uses of your API that circumvent the static type system.
If you want a property to return a read-only list with random access, return something that implements:
public interface IReadOnlyList<T> : IEnumerable<T> { int Count { get; } T this[int index] { get; } }
If (as is much more common) it only needs to be enumerable sequentially, just return
IEnumerable:
public class MyClassList { private List<int> li = new List<int> { 1, 2, 3 }; public IEnumerable<int> MyList { get { return li; } } }
UPDATE Since I wrote this answer, C# 4.0 came out, so the above
IReadOnlyList interface can take advantage of covariance:
public interface IReadOnlyList<out T>
And now .NET 4.5 has arrived and it has... guess what...
So if you want to create a self-documenting API with a property that holds a read-only list, the answer is in the framework. | https://www.dowemo.com/article/70377/how-uses-properties-when-processing-list-lt;membe-amp; | CC-MAIN-2018-26 | refinedweb | 551 | 63.39 |
Message box widget. More...
#include <TGUI/Widgets/MessageBox.hpp>
Message box widget.
Signals:
Copy constructor.
Add a button to the message box.
When receiving a callback with the ButtonClicked trigger then callback.text will contain this caption to identify the clicked button.::ChildWindow.
Makes a copy of another message box.
Creates a new message box widget.
Returns the renderer, which gives access to functions that determine how the widget is displayed.
Return the text of the message box.
Returns the size of the text.
Overload of assignment operator.
Reload the widget.
When primary is an empty string the built-in white theme will be used.
Reimplemented from tgui::ChildWindow.
Changes the font of the text in the widget and its children.
When you don't call this function then the font from the parent widget will be used.
Reimplemented from tgui::ChildWindow.
Change. | https://tgui.eu/documentation/v0.7/classtgui_1_1MessageBox.html | CC-MAIN-2022-21 | refinedweb | 143 | 72.42 |
I'm doing some C++ homework, it's for an online class, and my ****ing teacher never answers me back about questions I have. Ultimately I'm having to do it all on my own basically. I feel like I'm close with this one (or maybe not), but a little mixed up. I'm getting an "in function int main" error but don't know where, and "void value not ignored as it ought to be" error on the line: "totalCommission = calcCommission (commission1, commission2, commission3, commission4)". Can anyone help?
------------------------------------------------------------------------------------------------
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
int main()
{
const double commission = 0.1;
double salesAmount = 0.0;
double totalCommission = 0.0;
double sale1 = 0.0;
double sale2 = 0.0;
double sale3 = 0.0;
double sale4 = 0.0;
double commission1 = 0.0;
double commission2 = 0.0;
double commission3 = 0.0;
double commission4 = 0.0;
void calcCommission(double, double, double, double);
void displayCommission();
void getTotalCommission();
cout << "Enter first saleman's total sales: ";
cin >> sale1;
cout << "Enter second saleman's total sales: ";
cin >> sale2;
cout << "Enter third saleman's total sales: ";
cin >> sale3;
cout << "Enter fourth saleman's total sales: ";
cin >> sale4;
salesAmount = sale1 + sale2 + sale3 + sale4;
cout << "Total sales: " << salesAmount << endl;
totalCommission = calcCommission (commission1, commission2, commission3, commission4);
cout << "Commission: " << totalCommission << endl;
system("pause");
return 0;
}
---------------------------------------------------------------------------------------------------
View Tag Cloud
Forum Rules | http://forums.codeguru.com/showthread.php?518706-Void-functions&p=2044239&mode=threaded | CC-MAIN-2017-22 | refinedweb | 221 | 51.24 |
Some Popular Object-Oriented Design Patterns
- Builder
- Object Pool
- Singleton (a special type of Object Pool)
- Decorator
Some Popular Functional Design Patterns
- Event
- Recursion
- Pipes and Filters
- Bridge
Many of these design patterns are simply more popular in one paradigm over another. For example Pipes and Filters and Bridge can be used in OOP, while Decorator can be used in functional programming. These are just some examples. While many design patterns can be used in multiple paradigms it may be simpler to implement a design pattern a certain way that involves a specific paradigm over another. Recursion is however an example of a multi-paradigm design pattern that is generally implemented in the same fashion in both paradigms.
Since I have been studying design patterns, I have often asked myself why don't schools teach design patterns first. And I remember that I used to hate programming theory, not understanding the purpose behind learning about how a bunch of really old guys' programming patterns would help me to learn how to program. But now more than ever I understand that programming theory (like design patterns) is a foundation for computer science. It includes proofs, mathematics, discrete math, computer architecture and organization, O-notation, and design patterns. One very simple and basic design pattern, and yet one of the most important and highly used design patterns, is the Interface design pattern. If you are using C# or Java then interfaces are part of the programming language. Recently I dug a little deeper into what interfaces really were.
The simple definition of an interface (generically in computer science) is:
A contract defining that any class that implements an interface must implement all the method definitions given in the interface. (The Code Project - The Interface Construct in C#)
As I said C# and Java define the Interface Pattern explicitly. Here are some example interfaces in C# and Java (they should look very similar):
Given the enumerated type WeaponType:
(Language-independent)
- Code: Select all
WeaponType = Firearm | Melee | Missile;
(Java)
- Code: Select all
package com.nathandelane.interfaces;
public interface IWeapon
{
String getName();
double getWeight();
WeaponType getWeaponType();
long getDamageIncurance();
}
(C#)
- Code: Select all
public namespace Nathandelane.Interfaces
{
public interface IWeapon
{
string GetName();
double GetWeight();
WeaponType GetWeaponType();
long GetDamageIncurance();
}
}
(Note that methods of an interface in C# and Java are inherently public and it is a syntax error to include an access modifier in a method declaration in an interface.)
In these languages and some modern dynamic languages like Ruby interfaces are trivial, and simple to implement. Once I realized that, I wondered how one does it in C++. I read a lot of information and learned that at some point Microsoft added a __interface keyword that can be used similarly to the C# and Java variants, but that wasn't exactly what I wanted, so I consulted with Bjarne Stroustrup, the inventor of C++ to see what he said, and I learned that an interface is really just a completely abstract or (in C++) virtual class. The class itself is not what is virtual but all of its members are. They must also be public. Anyway this is the basic result:
- Code: Select all
#include <string>
class IWeapon
{
public:
virtual std::string GetName() = 0; // The '= 0' makes this a pure virtual function.
virtual double GetWeight() = 0;
virtual WeaponType GetWeaponType() = 0;
virtual long GetDamageIncurance() = 0;
};
This pattern is more similar to a abstract class in high-level strongly-typed languages like C# and Java. An abstract class is also, by definition, and interface, however an abstract class may define some methods and leave other methods virtual or abstract. Abstract classes may also define a constructor and fields, though typically the constructor or constructors are protected a little more by giving it protected access. Here are some examples in Java and C#:
(Java)
- Code: Select all
package com.nathandelane.interfaces;
abstract class AbstractWeapon
{
private String name;
private double weight;
private WeaponType type;
private long damageIncurance;
protected AbstractWeapon(String name, double weight, WeaponType type, long damageIncurance)
{
this.name = name;
this.weight = weight;
this.type = type;
this.damageIncurance = damageIncurance;
}
abstract String getName(); // Note that these methods can also have a body.
abstract double getWeight();
abstract WeaponType getWeaponType();
abstract long getDamageIncurance();
}
(C#)
- Code: Select all
public namespace Nathandelane.Interfaces
{
public abstract class AbstractWeapon
{
private String _name;
private double _weight;
private WeaponType _type;
private long _damageIncurance;
protected AbstractWeapon(String name, double weight, WeaponType type, long damageIncurance)
{
_name = name;
_weight = weight;
_type = type;
_damageIncurance = damageIncurance;
}
public virtual String getName(); // Now this method doesn't need to be overridden because it is virtual.
public abstract double getWeight(); // Note that these methods can also have a body.
public abstract WeaponType getWeaponType();
public abstract long getDamageIncurance();
}
So there are a couple of different rules in C#. It has both virtual and abstract keywords. A virtual function and an abstract function are both by definition virtual or abstract, but the virtual keyword makes it so that the implementing class is not required to override the method. An abstract method must be overridden by an implementing class. Also both abstract (and virtual methods in C#) may have a body in an abstract class in C# or Java. Sometimes you will see that the default body throws an exception, such as a new NotImplementedException, in case somebody tries to use the method before it is implemented.
Anyway, besides practicing programming now, I am suggesting more and more understanding why, when, and how to correctly implement design patterns. If anybody wants to pick my brain a little more on design patterns or any of the other ideas I mentioned in this post, feel free to contact me. I should be more available these days.
Thanks.
Nathandelane | http://www.hackthissite.org/forums/viewtopic.php?f=36&t=6334 | CC-MAIN-2014-15 | refinedweb | 946 | 50.16 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.