- What is the current state of the project?
- How can the current SVN head be built?
- SVN Head contains two build systems, one based on ant and one based on GNU autotools. The ant based build system runs on all supported platforms, while the GNU autotools mechanism is restricted to POSIX platforms. The ant based build system also automatically downloads and builds the APR libraries, while the GNU autotools mechanism requires those libraries to be already available.
- Are there binaries available?
A nightly build is performed on GUMP, based on the ant build system. To make sure that the GNU autotools based build system also works properly, a nightly (unofficial) build based on it is created at. This build also executes the unit tests. The build is based on SVN HEAD, but usually contains some additional patches which are not yet committed to the SVN repository. The platform is a Debian GNU/Linux 3.1 system using gcc 3.3.5.
- Where can I learn more? / Is there a log4cxx book?
Log4cxx is patterned after the log4j API, so the log4j book should go a long way towards getting your head around using the C++ API. The two should be functionally and structurally equivalent (down to the bugs). It would be nice to compile a list of good log4cxx or generic log4j examples on the OtherResources page.
- This FAQ seems short, don't people ask other questions, or am I the only one not getting it?
Have you seen the formal FAQ for log4cxx? Your question may be answered there. Also, feel free to help build up this FAQ on the wiki.
In <<Short introduction to log4cxx>>, it's said "The log4cxx environment is fully configurable programmatically." - But how do I do this?
- For starters:
#include <log4cxx/logger.h>
#include <log4cxx/helpers/pool.h>
#include <log4cxx/basicconfigurator.h>
#include <log4cxx/fileappender.h>
#include <log4cxx/simplelayout.h>

int main()
{
    log4cxx::FileAppender * fileAppender = new log4cxx::FileAppender(
        log4cxx::LayoutPtr(new log4cxx::SimpleLayout()), "logfile", false);
    log4cxx::helpers::Pool p;
    fileAppender->activateOptions(p);
    log4cxx::BasicConfigurator::configure(log4cxx::AppenderPtr(fileAppender));
    log4cxx::Logger::getRootLogger()->setLevel(log4cxx::Level::getDebug());
    log4cxx::LoggerPtr logger = log4cxx::Logger::getLogger("logger");
    LOG4CXX_INFO(logger, "Created FileAppender appender");
    return 0;
}
So I've learned how to do various commands such as if-then statements and javax.swing (the Java text boxes), and I honestly don't know where to progress after this point. Am I good enough to start developing with something like JFrame, or are there other things I should learn before moving on to something like that? (Also note I'm planning to stick with just 2D.)
Can't attach my most recent file to the thread (it keeps saying it's an invalid file), so the code is here (it's just a simple RPG fight with some basic attack commands:
A = attack
H = heal
M = magic)
Code
import javax.swing.JOptionPane;

public class Battlebackup {
    public static void main(String[] args) {
        JOptionPane myIO = new JOptionPane();
        myIO.showMessageDialog(null, "a slimey boi starts to sneak up on you");
        int turn = 0;
        int PHP = 10;
        int EHP = 5;
        int MP = 3;
        int none = 0;
        while (turn < 100) {
            String inputText = myIO.showInputDialog("What do you do?" + " HP:" + PHP + " MP:" + MP);
            if (inputText.equals("a")) {
                myIO.showMessageDialog(null, "you do a friggin shank boi, inflicting one damage to the slime");
                EHP -= 1;
                none += 1;
            }
            if (MP > 0) {
                if (inputText.equals("m")) {
                    myIO.showMessageDialog(null, "you throw your book of harry potter spells, dealing 3 dmg");
                    EHP -= 3;
                    MP -= 1;
                    none += 1;
                }
            }
            if (MP > 0) {
                if (inputText.equals("h")) {
                    myIO.showMessageDialog(null, "you eat your book of harry potter spells, gaining 8hp");
                    PHP += 8;
                    MP -= 1;
                    none += 1;
                }
            }
            if (none > 0) {
                none -= 1;
            } else {
                myIO.showMessageDialog(null, "you do nothin, great");
            }
            myIO.showMessageDialog(null, "Slimy boi takes out a slimy gun and shoots you, dealing 3 damage");
            PHP -= 3;
            if (EHP <= 0) {
                myIO.showMessageDialog(null, "you defeat the slime and get 69gold xd");
                turn += 1000;
            }
            if (PHP <= 0) {
                myIO.showMessageDialog(null, "you died");
                turn += 1000;
            }
        }
    }
}
Learning Elm From A Drum Sequencer (Part 2)
In part one of this two-part article, we began building a drum sequencer in Elm. We learned the syntax, how to read and write type-annotations to ensure our functions can interact with one another, and the Elm Architecture, the pattern in which all Elm programs are designed.
In this conclusion, we’ll work through large refactors by relying on the Elm compiler, and set up recurring events that interact with JavaScript to trigger drum samples.
Check out the final code here, and try out the project here. Let’s jump to our first refactor!
Refactoring With The Elm Compiler
The thought of AI taking over developer jobs is actually pleasant for me. Rather than worry that I'll have less to program, I imagine delegating the difficult and boring tasks to the AI. And this is how I think about the Elm Compiler.
The Elm Compiler is my expert pair-programmer who’s got my back. It makes suggestions when I have typos. It saves me from potential runtime errors. It leads the way when I’m deep and lost midway through a large refactor. It confirms when my refactor is completed.
Refactoring Our Views
We’re going to rely on the Elm Compiler to lead us through refactoring our model from
track : Track to
tracks : Array Track. In JavaScript, a big refactor like this would be quite risky. We'd need to write unit tests to ensure we're passing the correct parameters to our functions, then search through the code for any references to the old code. Fingers crossed, we'd catch everything and our code would work. In Elm, the compiler catches all of that for us. Let's change our type and let the compiler guide the way.
The first error says our model doesn’t contain track and suggests we meant tracks, so let’s dive into View.elm. Our view function calling
model.track has two errors:
- Track should be Tracks.
- And renderTrack accepts a single track, but now tracks are an array of tracks.
We need to map over our array of tracks in order to pass a single track to
renderTrack. We also need to pass the track index to our view functions in order to make updates on the correct one. Similar to
renderSequence,
Array.indexedMap does this for us.
view : Model -> Html Msg
view model =
    div []
        (Array.toList <| Array.indexedMap renderTrack model.tracks)
We expect another error to emerge because we’re now passing an index to
renderTrack, but it doesn’t accept an index yet. We need to pass this index all the way down to
ToggleStep so it can be passed to our update function.
Let's start with renderTrack. Array.indexedMap always passes the index as its first value. We change renderTrack's type annotation to accept an Int, for the track index, as its first argument. We also add it to the arguments before the equals sign. Now we can use trackIndex in our function to pass it to renderSequence.
renderTrack : Int -> Track -> Html Msg
renderTrack trackIndex track =
    div [ class "track" ]
        [ p [] [ text track.name ]
        , div [ class "track-sequence" ]
            (renderSequence trackIndex track.sequence)
        ]
We need to update the type annotation for
renderSequence in the same way. We also need to pass the track index to
renderStep. Since
Array.indexedMap only accepts two arguments, the function to apply and the array to apply the function to, we need to contain our additional argument with parentheses. If we wrote our code without parentheses,
Array.indexedMap renderStep trackIndex sequence, the compiler wouldn’t know if
trackIndex should be bundled with
sequence or with
renderStep. Furthermore, it would be more difficult for a reader of the code to know where
trackIndex was being applied, or if
Array.indexedMap actually took four arguments.
renderSequence : Int -> Array Step -> List (Html Msg)
renderSequence trackIndex sequence =
    Array.indexedMap (renderStep trackIndex) sequence
        |> Array.toList
Finally, we’ve passed our track index down to
renderStep. We add the index as the first argument then add it to our
ToggleStep message in order to pass it to the update function.
renderStep : Int -> Int -> Step -> Html Msg
renderStep trackIndex stepIndex step =
    let
        classes =
            if step == Off then
                "step"
            else
                "step _active"
    in
        button
            [ onClick (ToggleStep trackIndex stepIndex step)
            , class classes
            ]
            []
Refactoring Our Update Functions
Considering incorrect arguments, the compiler has found two new errors regarding
ToggleStep.
We’ve added
trackIndex to it, but haven’t updated it for the track index. Let’s do that now. We need to add it in as an
Int.
type Msg = ToggleStep Int Int Step
Our next batch of errors are in the Update function.
First, we don’t have the right number of arguments for
ToggleStep since we’ve added the track index. Next, we are still calling
model.track, which no longer exists. Let’s think about a data model for a moment:
model = {
  tracks: [
    {
      name: "Kick",
      clip: "kick.mp3",
      sequence: [On, Off, Off, Off, On, etc...]
    },
    {
      name: "Snare",
      clip: "snare.mp3",
      sequence: [Off, Off, Off, Off, On, etc...]
    },
    etc...
  ]
  etc...
}
In order to update a sequence, we need to traverse through the Model record, the tracks array, the track record, and finally, the track sequence. In JavaScript, this could look something like
model.tracks[0].sequence[0], which has several spots for failure. Updating nested data can be tricky in Elm because we need to cover all cases; when it finds what it expects and when it doesn’t.
Some functions, like
Array.set handle it automatically by either returning the same array if it can’t find the index or a new, updated array if it does. This is the kind of functionality we’d like because our tracks and sequences are constant, but we can’t use
set because of our nested structure. Since everything in Elm is a function, we write a custom helper function that works just like set, but for nested data.
This helper function should take an index, a function to apply if it finds something at the index value, and the array to check. It either returns the same array or a new array.
setNestedArray : Int -> (a -> a) -> Array a -> Array a
setNestedArray index setFn array =
    case Array.get index array of
        Nothing ->
            array

        Just a ->
            Array.set index (setFn a) array
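For readers more at home in JavaScript, the same safe-update idea can be sketched as a small helper. This is a hypothetical analogue, not part of the Elm code: it hands back the original array untouched when the index is out of range, and otherwise returns a new array with the function applied at that index.

```javascript
// Hypothetical JavaScript analogue of Elm's setNestedArray:
// returns the same array when the index is out of range,
// otherwise a new array with setFn applied at that index.
function setNestedArray(index, setFn, array) {
  if (index < 0 || index >= array.length) {
    return array; // nothing found: echo the original back unchanged
  }
  const copy = array.slice(); // never mutate the caller's array
  copy[index] = setFn(copy[index]);
  return copy;
}
```

Calling it with an out-of-range index simply echoes the input, which mirrors the Nothing branch above.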
In Elm
a means anything. Our type annotation reads
setNestedArray accepts an index, a function that returns a function, the array to check, and it returns an array. The
Array a annotation means we can use this general purpose function on arrays of anything. We run a case statement on
Array.get. If we can’t find anything at the index we pass, return the same array back. If we do, we use
set and pass the function we want to apply into the array.
As our
let...in block is about to become large under the
ToggleStep branch, we can move the local functions into their own private functions, keeping the update branches more readable. We create
updateTrackStep which will utilize
setNestedArray to dig into our nested data. It will take: a track index, to find the specific track; a step index, to find which step on the track sequence was toggled; all of the model tracks; and return updated model tracks.
updateTrackStep : Int -> Int -> Array Track -> Array Track
updateTrackStep trackIndex stepIndex tracks =
    let
        toggleStep step =
            if step == Off then
                On
            else
                Off

        newSequence track =
            setNestedArray stepIndex toggleStep track.sequence

        newTrack track =
            { track | sequence = (newSequence track) }
    in
        setNestedArray trackIndex newTrack tracks
We still use
toggleStep to return the new state,
newSequence to return the new sequence, and
newTrack to return the new track. We utilized
setNestedArray to easily set the sequence and the tracks. That leaves our update function short and sweet, with a single call to
updateTrackStep.
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        ToggleStep trackIndex stepIndex step ->
            ( { model | tracks = updateTrackStep trackIndex stepIndex model.tracks }
            , Cmd.none
            )
From right to left, we pass our array of tracks on
model.tracks, the index of the specific step to toggle, and the index of the track the step is on. Our function finds the track from the track index within
model.tracks, finds the step within the track’s sequence, and finally toggles the value. If we pass an track index that doesn’t exist, we return the same set of tracks back. Likewise, if we pass a step index that doesn’t exist, we return the same sequence back to the track. This protects us from unexpected runtime failures, and is the way updates must be done in Elm. We must cover all branches or cases.
Refactoring Our Initializers
Our last error lies in Main.elm because our initializers are now misconfigured.
We’re still passing a single track rather than an array of tracks. Let’s create initializer functions for our tracks and an initializer for the track sequences. The track initializers are functions with assigned values for the track record. We have a track for the hi-hat, kick drum, and snare drum, that have all their steps set to Off.
initSequence : Array Step
initSequence =
    Array.initialize 16 (always Off)


initHat : Track
initHat =
    { sequence = initSequence
    , name = "Hat"
    }


initSnare : Track
initSnare =
    { sequence = initSequence
    , name = "Snare"
    }


initKick : Track
initKick =
    { sequence = initSequence
    , name = "Kick"
    }
To load these to our main
init function, we create an array from the list of initializers,
Array.fromList [ initHat, initSnare, initKick ], and assign it to the model’s tracks.
init : ( Model, Cmd.Cmd Msg )
init =
    ( { tracks = Array.fromList [ initHat, initSnare, initKick ] }
    , Cmd.none
    )
With that, we’ve changed our entire model. And it works! The compiler has guided us through the code, so we don’t need to find references ourself. It’s tough not lusting after the Elm Compiler in other languages once you’ve finished refactoring in Elm. That feeling of confidence once the errors are cleared because everything simply works is incredibly liberating. And the task-based approach of working through errors is so much better than worrying about covering all the application’s edge cases.
Handling Recurring Events Using Subscriptions
Subscriptions is how Elm listens for recurring events. These events include things like keyboard or mouse input, websockets, and timers. We’ll be using subscriptions to toggle playback in our sequencer. We’ll need to:
- Prepare our application to handle subscriptions by adding to our model
- Import the Elm time library
- Create a subscription function
- Trigger updates from the subscription
- Toggle our subscription playback state
- And render changes in our views
Preparing Our App For Subscriptions
Before we jump into our subscription function, we need to prepare our application for dealing with time. First, we need to import the Time module for dealing with time.
import Time exposing (..)
Second, we need to add fields to our model handling time. Remember when we modeled our data we relied on
playback,
playbackPosition, and
bpm? We need to re-add these fields.
type alias Model =
    { tracks : Array Track
    , playback : Playback
    , playbackPosition : PlaybackPosition
    , bpm : Int
    }


type Playback
    = Playing
    | Stopped


type alias PlaybackPosition =
    Int
Finally, we need to update our
init function because we’ve added additional fields to the model.
playback should start
Stopped, the
playbackPosition should be at the end of the sequence length, so it starts at 0 when we press play, and we need to set the beat for
bpm.
init : ( Model, Cmd.Cmd Msg )
init =
    ( { tracks = Array.fromList [ initHat, initSnare, initKick ]
      , playback = Stopped
      , playbackPosition = 16
      , bpm = 108
      }
    , Cmd.none
    )
Subscribing To Time-Based Events In Elm
We’re ready to handle subscriptions. Let’s start by creating a new file, Subscriptions.elm, creating a
subscription function, and importing it into the Main module to assign to our Main program. Our
subscription function used to return
always Sub.none, meaning there would never be any events we subscribed to, but we now want to subscribe to events during playback. Our
subscription function will either return nothing,
Sub.none, or update the playback position one step at a time, according to the BPM.
main : Program Never Model Msg
main =
    Html.program
        { view = view
        , update = update
        , subscriptions = subscriptions
        , init = init
        }


subscriptions : Model -> Sub Msg
subscriptions model =
    if model.playback == Playing then
        Time.every (bpmToMilliseconds model.bpm) UpdatePlaybackPosition
    else
        Sub.none
During playback, we use
Time.every to send a message,
UpdatePlaybackPosition to our update function to increment the playback position.
Time.every takes a millisecond value as its first argument, so we need to convert BPM, an integer, to milliseconds. Our helper function,
bpmToMilliseconds takes the BPM and does the conversion.
bpmToMilliseconds : Int -> Float
bpmToMilliseconds bpm =
    let
        secondsPerMinute =
            Time.minute / Time.second

        millisecondsPerSecond =
            Time.second

        beats =
            4
    in
        ((secondsPerMinute / (toFloat bpm) * millisecondsPerSecond) / beats)
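To sanity-check the arithmetic, here is the same conversion in plain JavaScript (illustrative only, not part of the project): at 108 BPM with four steps per beat, each step lasts roughly 138.9 milliseconds.

```javascript
// Milliseconds per sixteenth-note step for a given BPM,
// mirroring the Elm calculation (60 / bpm * 1000) / 4.
function bpmToMilliseconds(bpm) {
  const secondsPerMinute = 60;
  const millisecondsPerSecond = 1000;
  const beats = 4; // steps per beat in our 16-step, 4-beat grid
  return (secondsPerMinute / bpm) * millisecondsPerSecond / beats;
}
```

At 60 BPM this comes out to exactly 250 ms per step, which is an easy value to eyeball against the sequencer's playback speed.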
Our function is pretty simple. With hard-coded values it would look like
(60 / 108 * 1000) / 4. We use a
let...in block for readability to assign millisecond values to our calculation. Our function first converts our BPM integer, 108, to a float, divides the BPM by
secondsPerMinute, which is 60, multiplies it by the number of milliseconds in a second, 1000, and divides it by the number of beats in our time signature, 4.
We’ve called
UpdatePlaybackPostion, but we haven’t used it yet. We need to add it to our message type. Time functions return a time result, so we need to include
Time to the end of our message, though we don’t really care about using it.
type Msg
    = ToggleStep Int Int Step
    | UpdatePlaybackPosition Time
With our subscription function created, we need to handle the missing branch in our update function. This is straightforward: increment the playbackPosition by 1 until it hits the 16th step (15 in the zero-based array).
UpdatePlaybackPosition _ ->
    let
        newPosition =
            if model.playbackPosition >= 15 then
                0
            else
                model.playbackPosition + 1
    in
        ( { model | playbackPosition = newPosition }, Cmd.none )
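The wrap-around logic is just modular arithmetic. A hypothetical JavaScript rendering of the same step makes it easy to check:

```javascript
// Advance the playback position one step, wrapping from the
// last step (index 15) back to 0 — the same logic as the Elm branch.
function nextPosition(position) {
  return position >= 15 ? 0 : position + 1;
}
```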
You’ll notice rather than passing the
Time argument into our update branch we’ve used an underscore. In Elm, this signifies there are additional arguments, but we don’t care about them. Our model update is significantly easier here since we’re not dealing with nested data as well. At this point, we’re still not using side-effects, so we use
Cmd.none.
Toggling Our Playback State
We can now increment our playback position, but there is nothing to switch the model from Stopped to Playing. We need a message to toggle playback, as well as views to trigger the message and an indicator for which step is being played. Let's start with the messages.
StartPlayback ->
    ( { model | playback = Playing }, Cmd.none )

StopPlayback ->
    ( { model
        | playback = Stopped
        , playbackPosition = 16
      }
    , Cmd.none
    )
StartPlayback simply switches playback to Playing, whereas StopPlayback switches it and resets the playback position. We can take an opportunity to make our code more followable by turning 16 into a constant and using it where appropriate. In Elm, everything is a function, so constants look no different. Then, we can replace our magic numbers with initPlaybackPosition in StopPlayback and in init.
initPlaybackPosition : Int
initPlaybackPosition =
    16
With our messages set, we can now focus on our view functions. It’s common to set playback buttons next to the BPM display, so we’ll do the same. Currently, our view function only renders our tracks. Let’s rename
view to
renderTracks so it can be a function we call from the parent view.
renderTracks : Model -> Html Msg
renderTracks model =
    div []
        (Array.toList <| Array.indexedMap renderTrack model.tracks)


view : Model -> Html Msg
view model =
    div [ class "step-sequencer" ]
        [ renderTracks model
        , div [ class "control-panel" ]
            [ renderPlaybackControls model ]
        ]
Now, we create our main view which can call our smaller view functions. Give our main div a class,
step-sequencer, call
renderTracks, and create a div for our control panel which contains the playback controls. While we could keep all these functions in the same view, especially since they have the same type annotation, I find breaking functions into smaller pieces helps me focus on one piece at a time. Restructuring, later on, is a much easier diff to read as well. I think of these smaller view functions like partials.
renderPlaybackControls will take our entire model and return HTML. This will be a div that wraps two additional functions. One to render our button, renderPlaybackButton, and one that renders the BPM display, renderBPM. Both of these will accept the model since the attributes are on the top-level of the model.
renderPlaybackControls : Model -> Html Msg
renderPlaybackControls model =
    div [ class "playback-controls" ]
        [ renderPlaybackButton model
        , renderBPM model
        ]
Our BPM display only shows numbers, and eventually, we want users to be able to change them. For semantics, we should render the display as an input with a number type. Some attributes (like type) are reserved in Elm. When dealing with attributes, these special cases have a trailing underscore. We’ll leave it for now, but later we can add a message to the on change event for the input to allow users to update the BPM.
renderBPM : Model -> Html Msg
renderBPM model =
    input
        [ class "bpm-input"
        , value (toString model.bpm)
        , maxlength 3
        , type_ "number"
        , Html.Attributes.min "60"
        , Html.Attributes.max "300"
        ]
        []
Our playback button will toggle between the two playback states: Playing and Stopped.
renderPlaybackButton : Model -> Html Msg
renderPlaybackButton model =
    let
        togglePlayback =
            if model.playback == Stopped then
                StartPlayback
            else
                StopPlayback

        buttonClasses =
            if model.playback == Playing then
                "playback-button _playing"
            else
                "playback-button _stopped"
    in
        button
            [ onClick togglePlayback
            , class buttonClasses
            ]
            []
We use a local function,
togglePlayback, to attach the correct message to the button’s on click event, and another function to assign the correct visual classes. Our application toggles the playback state, but we don’t yet have an indicator of its position.
Connecting Our Views And Subscriptions
It’s best to use real data to get the length of our indicator rather than a magic number. We could get it from the track sequence, but that requires reaching into our nested structure. We intend to add a reduction of the on steps in
PlaybackSequence, which is on the top-level of the model, so that’s easier. To use it, we need to add it to our model and initialize it.
import Set exposing (..)


type alias Model =
    { tracks : Array Track
    , playback : Playback
    , playbackPosition : PlaybackPosition
    , bpm : Int
    , playbackSequence : Array (Set Clip)
    }


init : ( Model, Cmd.Cmd Msg )
init =
    ( { tracks = Array.fromList [ initHat, initSnare, initKick ]
      , playback = Stopped
      , playbackPosition = initPlaybackPosition
      , bpm = 108
      , playbackSequence = Array.initialize 16 (always Set.empty)
      }
    , Cmd.none
    )
Since a
Set forces uniqueness in the collection, we use it for our playback sequence. That way we won’t need to check if the value already exists before we pass it to JavaScript. We import
Set and assign
playbackSequence to an array of sets of clips. To initialize it we use
Array.initialize, pass it the length of the array, 16, and create an empty set.
Onto our view functions. Our indicator should render a series of HTML list items. It should light up when the playback position and the indicator position are equal, and be dim otherwise.
renderCursorPoint : Model -> Int -> Set String -> Html Msg
renderCursorPoint model index _ =
    let
        activeClass =
            if model.playbackPosition == index && model.playback == Playing then
                "_active"
            else
                ""
    in
        li [ class activeClass ] []


renderCursor : Model -> Html Msg
renderCursor model =
    ul [ class "cursor" ]
        (Array.toList <|
            Array.indexedMap (renderCursorPoint model) model.playbackSequence
        )


view : Model -> Html Msg
view model =
    div [ class "step-sequencer" ]
        [ renderCursor model
        , renderTracks model
        , div [ class "control-panel" ]
            [ renderPlaybackControls model ]
        ]
In
renderCursor we use an indexed map to render a cursor point for each item in the playback sequence.
renderCursorPoint takes our model to determine whether the point should be active, the index of the point to compare with the playback position, and the set of steps which we aren’t actually interested in. We need to call
renderCursor in our view as well.
With our cursor in place, we can now see the effects of our subscription. The indicator lights up on each step as the subscription sends a message to update the playback position, and we see the cursor moving forward.
While we could handle time using JavaScript intervals, using subscriptions seamlessly plugs into the Elm runtime. We maintain all the benefits of Elm, plus we get some additional helpers and don’t need to worry about garbage collection or state divergence. Further, it builds on familiar patterns in the Elm Architecture.
Interacting With JavaScript In Elm
Adoption of Elm would be much more difficult if the community was forced to ignore all JavaScript libraries and/or rewrite everything in Elm. But to maintain its no runtime errors guarantee, it requires types and the compiler, something JavaScript can’t interact with. Luckily, Elm exposes ports as a way to pass data back and forth to JavaScript and still maintain type safety within. Because we need to cover all cases in Elm, if for an undefined reason, JavaScript returns the wrong type to Elm, our program can correctly deal with the error instead of crashing.
We’ll be using the HowlerJS library to easily work with the web audio API. We need to do a few things in preparation for handling sounds in JavaScript. First, handle creating our playback sequence.
Using The Compiler To Add To Our Model
Each track should have a clip, which will map to a key in a JavaScript object. The kick track should have a kick clip, the snare track a snare clip, and the hi-hat track a hat clip. Once we add it to the
Track type, we can lean on the compiler to find the rest of the missing spots in the initializer functions.
type alias Track =
    { name : String
    , sequence : Array Step
    , clip : Clip
    }


initHat : Track
initHat =
    { sequence = initSequence
    , name = "Hat"
    , clip = "hat"
    }


initSnare : Track
initSnare =
    { sequence = initSequence
    , name = "Snare"
    , clip = "snare"
    }


initKick : Track
initKick =
    { sequence = initSequence
    , name = "Kick"
    , clip = "kick"
    }
The best time to add or remove these clips to the playback sequence is when we toggle steps on or off. In
ToggleStep we pass the step, but we should also pass the clip. We need to update
renderTrack,
renderSequence, and
renderStep to pass it through. We can rely on the compiler again and work our way backward. Update
ToggleStep to take the track clip and we can follow the compiler through a series of “not enough arguments.”
type Msg = ToggleStep Int Clip Int Step
Our first error is the missing argument in the update function, where
ToggleStep is missing the
trackClip. At this point, we pass it in but don’t do anything with it.
ToggleStep trackIndex trackClip stepIndex step ->
    ( { model | tracks = updateTrackStep trackIndex stepIndex model.tracks }
    , Cmd.none
    )
renderStep is missing arguments to pass the clip to
ToggleStep. We need to add the clip to our on click event, and we need to allow
renderStep to accept a clip.
renderStep : Int -> Clip -> Int -> Step -> Html Msg
renderStep trackIndex trackClip stepIndex step =
    let
        classes =
            if step == On then
                "step _active"
            else
                "step"
    in
        button
            [ onClick (ToggleStep trackIndex trackClip stepIndex step)
            , class classes
            ]
            []
When I was new to Elm, I found the next error challenging to understand. We know it’s a mismatch to
Array.indexedMap, but what does
a and
b mean in
Int -> a -> b and why is it expecting three arguments when we’re already passing four? Remember
a means anything, including any function.
b is similar, but it means anything that’s not a. Likewise, we could see a function that transforms values three times represented as
a -> b -> c.
We can break down the arguments when we consider what we pass to
Array.indexedMap.
Array.indexedMap (renderStep trackIndex) sequence
Its annotation,
Int -> a -> b, reads
Array.indexedMap takes an index, any function, and returns a transformed function. Our two arguments come from
(renderStep trackIndex) sequence. An index and array item are automatically pulled from the array,
sequence, so our anything function is
(renderStep trackIndex). As I mentioned earlier, parentheses contain functions, so while this looks like two arguments, it’s actually one.
Our error asking for
Int -> a -> b but pointing out we’re passing
Main.Clip -> Int -> Main.Step -> Html.Html Main.Msg says we’re passing the wrong thing to
renderStep, the first argument. And we are. We haven’t passed in our clip yet. To pass values to functions when using an indexed map, they are placed before the automatic index. Let’s compare our type annotation to our arguments.
renderStep : Int -> Clip -> Int -> Step -> Html Msg
renderStep trackIndex trackClip stepIndex step =
    ...

Array.indexedMap (renderStep trackIndex) sequence
If
sequence returns our step index and step, we can read our call as
Array.indexedMap renderStep trackIndex stepIndex step which makes it very clear where our
trackClip should be added.
Array.indexedMap (renderStep trackIndex trackClip) sequence
We need to modify
renderSequence to accept the track clip, as well pass it through from
renderTrack.
renderSequence : Int -> Clip -> Array Step -> List (Html Msg)
renderSequence trackIndex trackClip sequence =
    Array.indexedMap (renderStep trackIndex trackClip) sequence
        |> Array.toList


renderTrack : Int -> Track -> Html Msg
renderTrack trackIndex track =
    div [ class "track" ]
        [ p [] [ text track.name ]
        , div [ class "track-sequence" ]
            (renderSequence trackIndex track.clip track.sequence)
        ]
Reducing Our Steps Into A Playback Sequence
Once we’re clear of errors our application renders again, and we can focus on reducing our playback sequence. We’ve already passed the track clip into the
ToggleStep branch of the update function, but we haven’t done anything with it yet. The best time to add or remove clips from our playback sequence is when we toggle steps on or off so let’s update our model there.
Rather than use a
let...in block in our branch, we create a private helper function to update our sequence. We know we need the position of the step in the sequence, the clip itself, and the entire playback sequence to modify.
updatePlaybackSequence : Int -> Clip -> Array (Set Clip) -> Array (Set Clip)
updatePlaybackSequence stepIndex trackClip playbackSequence =
    let
        updateSequence trackClip sequence =
            if Set.member trackClip sequence then
                Set.remove trackClip sequence
            else
                Set.insert trackClip sequence
    in
        Array.set stepIndex (updateSequence trackClip) playbackSequence
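JavaScript's built-in Set supports the same membership test, so the toggle can be sketched like this (an illustrative analogue, with hypothetical names, not code from the project):

```javascript
// Toggle a clip in a step's set of clips: remove it when present,
// add it when absent — returning a new Set rather than mutating.
function updateSequence(trackClip, sequence) {
  const next = new Set(sequence); // copy, so the caller's Set is untouched
  if (next.has(trackClip)) {
    next.delete(trackClip);
  } else {
    next.add(trackClip);
  }
  return next;
}
```

Toggling the same clip twice returns the step to its original state, just as clicking a step on and off does in the Elm version.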
In
updatePlaybackSequence we use
Array.set to find the position of the playback sequence to update, and a local function,
updateSequence to make the actual change. If the clip already exists, remove it, otherwise add it. Finally, we call
updatePlaybackSequence from the
ToggleStep branch in the update function to make the updates whenever we toggle a step.
ToggleStep trackIndex trackClip stepIndex step ->
    ( { model
        | tracks = updateTrackStep trackIndex stepIndex model.tracks
        , playbackSequence = updatePlaybackSequence stepIndex trackClip model.playbackSequence
      }
    , Cmd.none
    )
Elm makes updating multiple record fields quite easy. Additional fields are added after a comma, much like a list, with their new values. Now when we toggled steps, we get a reduced playback sequence. We’re ready to pass our sequence data to JavaScript using a command.
Using Commands To Send Data To JavaScript
As I’ve mentioned, commands are side-effects in Elm. Think of commands as way to cause events outside of our application. This could be a save to a database or local storage, or retrieval from a server. Commands are messages for the outside world. Commands are issued from the update function, and we send ours from the
UpdatePlaybackPosition branch. Every time the playback position is incremented, we send our clips to JavaScript.
UpdatePlaybackPosition _ -> let newPosition = if model.playbackPosition >= 15 then 0 else model.playbackPosition + 1 stepClips = Array.get newPosition model.playbackSequence |> Maybe.withDefault Set.empty in ( { model | playbackPosition = newPosition } , sendClips (Set.toList stepClips) )
We use a local function to get the set of clips from the playback sequence.
Array.get returns the set we asked for or nothing if it can’t find it, so we need to cover that case and return an empty set. We use a built-in helper function,
Maybe.withDefault, to do that. We’ve seen several updates to our model thus far, but now we’re sending a command. We use
sendClips, which we’ll define in a moment, to send the clips to JavaScript. We also need to convert our set to a List because that’s a type JavaScript understands.
sendClips is a small port function that only needs a type declaration. We send our list of clips. In order to enable the port, we need to change our update module to a port module. From
module Update exposing (update) to
port module Update exposing (update). Elm can now send data to JavaScript, but we need to load the actual audio files.
port module Update exposing (update) port sendClips : List Clip -> Cmd msg
In JavaScript, we load our clips in a samples object, map over the list of clips Elm sends us, and play the samples within the set. To listen to elm ports, we call subscribe on the port
sendClips, which lives on the Elm application ports key.
(() => { const kick = new Howl({ src: [''] }); const snare = new Howl({ src: [''] }); const hat = new Howl({ src: [''] }); const samples = { kick: kick, snare: snare, hat: hat, }; const app = Elm.Main.embed(document.body); app.ports.sendClips.subscribe(clips => { clips.map(clip => samples[clip].play()); }); })();
Ports ensure type safety within Elm while ensuring we can communicate to any JavaScript code/package. And commands handle side-effects gracefully without disturbing the Elm run time, ensuring our application doesn’t crash.
Load up the completed step sequencer and have some fun! Toggle some steps, press play, and you’ve got a beat!
Wrapping Up And Next Steps
Elm has been the most invigorating language I’ve worked in lately. I feel challenged in learning functional programming, excited at the speed I get new projects up and running, and grateful for the emphasis on developer happiness. Using the Elm Architecture helps me focus on what matters to my users and by focusing on data modeling and types I’ve found my code has improved significantly. And that compiler! My new best friend! I’m so happy I found it!
I hope your interest in Elm has been piqued. There is still much more we could do to our step sequencer, like letting users change the BPM, resetting and clearing tracks, or creating sharable URLs to name a few. I’ll be adding more to the sequencer for fun over time, but would love to collaborate. Reach out to me on Twitter @BHOLTBHOLT or the larger community on Slack. Give Elm a try, and I think you’ll like it!
Further Reading
The Elm community has grown significantly in the last year, and is very supportive as well as resourceful. Here are some of my recommendations for next steps in Elm:
- Official Getting Started Guide
- A GitBook written by Evan, Elm’s creator, that walks you through motivations for Elm, syntax, types, the Elm Architecture, scaling, and more.
- Elm Core Library
- I constantly refer to the documentation for Elm packages. It’s written well (though the type annotations took a bit of time to understand) and is always up to date. In fact, while writing this, I learned about classList, which is a better way to write class logic in our views.
- Frontend Masters: Elm
- This is probably the most popular video course on Elm by Richard Feldman, who’s one of the most prolific members of the Elm community.
- Elm FAQ
- This is a compilation of common questions asked in various channels of the Elm community. If you find yourself stuck on something or struggling to understand some behavior, there’s a chance it’s been answered here.
- Slack Channel
- The Elm Slack community is very active and super friendly. The #beginners channel is a great place to ask questions and get advice.
- Elm Seeds
- Short video tutorials for learning additional concepts in Elm. New videos come out on Thursdays.
| https://www.smashingmagazine.com/2018/01/learning-elm-drum-sequencer-part-2/ | CC-MAIN-2022-05 | refinedweb | 5,274 | 65.32 |
Dragging an dynamically created object out of a ListView
Hi there,
I want to drag an object out of a ListView and then later on use it for something.
- in the ListView is a set of objects
- dragging is initiated by press and hold
- the list item is removed from the list
- the dragged object can be dragged around (aka used for something)
The problem is, that I can't get the dragged object to move immediately after the press and hold. I always have to release the mouse and press the generated object again to get it moving.
@// DragItem.qml
import QtQuick 2.0
Rectangle {
width: 100; height: 100
border.width: 1
MouseArea {
anchors.fill: parent
drag.target: parent
onPressed: parent.color = "purple"
onReleased: parent.color = "orange"
}
}
@
@// Main.qml
import QtQuick 2.0
Item {
id: root
width: 400; height: 400
ListView { width: 100; height: 400 spacing: 10 model: ListModel { id: listModel ListElement { color: "blue" } ListElement { color: "yellow" } ListElement { color: "red" } } delegate: Rectangle { width: 100; height: 100 color: model.color MouseArea { anchors.fill: parent onPressAndHold: { var position = mapToItem(root, x, y) var component = Qt.createComponent("DragItem.qml") var properties = { "x": position.x, "y": position.y, "color": model.color } component.createObject(root, properties) listModel.remove(index) } } } }
}
@ | https://forum.qt.io/topic/35375/dragging-an-dynamically-created-object-out-of-a-listview | CC-MAIN-2017-47 | refinedweb | 205 | 52.76 |
Cache in Python
Get FREE domain for 1st year and build your brand new site
We have explored caching by demonstrating it with a Python code example where intermediate results are cached and improved the overall performance. This is done through functools.lru_cache in Python.
Before learning about Cache in Python, Let's first learn about What is Caching?
What is Caching?
Caching is a part of the memoization technique to store results of already computed results rather than computing them again and again with a constraint on memory to use.
Let's take an example:
Here is an expression:
F(x,y) = 2x + 3y - 4
You have to calculate results for the given Co-ordinates (x,y) in an order:
(2,3), (4,5), (4,5), (2,3), (7,8), (4,5), (2,3), (4,5)
You will observe that pairs (2,3) and (4,5) appear repeatedly.
To solve this problem, You can use two different approaches:
First Approach:
You can calculate one by one for each pair(x,y). But this approach leads to extra computation for already computed values.
Second Approach:
You can store the results of each computation or if it's already computed, fetching the results directly from the memory. Well, this approach reduces the computation but leads to the usage of memory.
How can you solve this problem if billions of queries are there but didn't have that much memory to store the results?
Here, you need a Cache.
In this blog, you will read about LRU Cache with constrained maxSize which is an inbuilt tool of python.
LRU Cache
Caches are basically a hash table. Every data that goes inside it is hashed and stored making it accessible at O(1).
LRU Cache is a cache object that deletes the least-recently-used items if the memory available to the cache is exceeded.
Let's again take an example:
Here is a Cache with a size to store three pairs which is initially empty:
Expression as follows:
F(x,y) = 2x + 4y - 4
Let's go through the following queries one by one:
(2,3), (4,5), (2,3), (7,8), (5,6), (4,5)
Query 1:
(2,3)
F(2,3) = 12
LRU becomes:
Query 2:
(4,5)
F(4,5) = 24
LRU becomes:
Query 3:
(2,3)
Already calculated, Stored in Cache. So, you don't need to compute it again.
LRU becomes:
Query 4:
(7,8)
F(7,8) = 42
LRU becomes:
(4,5) -> 24 is the last object present in the cache, if any new object comes it will be popped out from the cache.
Query 5:
(5,6)
F(5,6) = 30
LRU becomes:
Last Object (4,5) -> 24 popped out.
Query 6:
(4,5)
You will see that for (4,5) result was calculated earlier but isn't stored in Cache. So, you need to compute it again.
F(4,5) = 24
LRU becomes:
Hoping that you had understood how LRU Cache works, Let's see how to use it in python.
LRU Cache in Python
You can use LRU Cache using Python’s functools.lru_cache decorator.
Case 1: Calculating time for recursive calculation of equation for 1,000,000 times without using Cache.
import timeit def solveEquation(x,y): return 2*x + 4*y - 4 time = timeit.timeit('solveEquation(2,3)', globals=globals(), number=1000000)
By running the above code you will get time as:
0.14199211914092302
Case 2: Now calculating time for calculating equation using functools.lru_cache with a recursive function.
import timeit import functools @functools.lru_cache(maxsize=3) def solveEquation(x,y): return 2*x + 4*y - 4 time = timeit.timeit('solveEquation(2,3)', globals=globals(), number=1000000)
By running the above code you will get time as:
0.08165495004504919
By observing the Case-1 and Case-2, you will observe that using functools.lru_cache improves the time-performance of the computation.
Hoping that you had understood the difference between using functools.lru_cache and not using it.
Now, Let's breakdown the @functools.lru_cache(maxsize=128, typed=False) for a better understanding of how to use it.
functools
functools is a python module used for higher-order functions: functions that act on or return other functions.
lru_cache
lru_cache is a decorator applied directly to a user function to add the functionality of LRU Cache.
maxsize
maxsize is the maximum number of objects you can store in a cache.
If you didn't pass maxsize as a parameter, then by default maxsize will be 128.
If maxsize is set to None, the LRU feature is disabled and the cache can grow without bound.
typed
If typed is set to true, function arguments of different types will be cached separately. For example, f(3) and f(3.0) will be treated as distinct calls with distinct results.
Functions
This decorator provides a cache_clear() function for clearing the cache.
Function cache_info() returns a named tuple showing hits, misses, maxsize, and currsize.
Hoping that you have understood the Cache and how to use it.
Thanks for reading | https://iq.opengenus.org/cache-in-python/ | CC-MAIN-2021-43 | refinedweb | 841 | 73.68 |
i can some one help me with jframe and panel. i want to create button using jpanel. i look online but they seem to have different ways to do it and i have no idea which is the best way to do this. ex some people extends jframe at top. other create jframe j = new jframe in main etc...
This is what iam trying to do. dont worry about my logic here. this is just so i can understand how to extends etc....
**
main.java will call aaaa.java.
aaaa.java will call bbbb.java
bbbb.java will create one button and that button will have a actionlistener.
**
Code:public class Main extends JFrame{ public Main() { // TODO Auto-generated constructor stub } public static void main(String[] args){ aaaa a = new aaaa(); //?? } }
Code:public class aaaa extends JFrame{ //??? public aaaa() { bbbb b = new bbbb(); //?? } }
Code:public class bbbb implements ActionListener{ //???? JButton b1 = new JButton("check here"); public bbbb() { //?? } public void actionPerformed(ActionEvent e) { } } | http://forums.devshed.com/java-help-9/button-using-javax-946282.html | CC-MAIN-2018-13 | refinedweb | 162 | 71 |
Create a Frame in Java
in java AWT package. The frame in java works like the main window where your components
(controls) are
added to develop a application. In the Java AWT, top-level... Create a Frame in Java
Frame
Java Frame What is the difference between a Window and a Frame
java-awt - Java Beginners
java-awt how to include picture stored on my machine to a java frame...());
JFrame frame = new JFrame();
frame.getContentPane().add(panel... information,
java - Swing AWT
information,
Thanks...java i want a program that accepts string from user in textfield1 and prints same string in textfield2 in awt hi,
import java,
Java AWT Package Example
swings - Swing AWT
:// What is Java Swing Technologies? Hi friend,import...(); } public TwoMenuItem(){ JFrame frame; frame = new JFrame("Two menu
Help Required - Swing AWT
JFrame("password example in java");
frame.setDefaultCloseOperation...();
}
});
}
}
-------------------------------
Read for more information.... the password by searching this example's\n"
+ "source code
java - Swing AWT
java Hello Sir/Mam,
I am doing my java mini... for upload image in JAVA SWING.... Hi Friend,
Try the following code...(String[] args) {
JFrame frame = new JFrame("Upload Demo");
JPanel panel = new
java - Swing AWT
public void main(String args[]) throws
Exception {
JFrame frame = new
Line Drawing - Swing AWT
) {
System.out.println("Line draw example using java Swing");
JFrame frame = new...Line Drawing How to Draw Line using Java Swings in Graph chart... using java Swing
import javax.swing.*;
import java.awt.Color;
import
frame with title free hand writing
://
Thanks...frame with title free hand writing create frame with title free hand writing, when we drag the mouse the path of mouse pointer must draw line ao arc
Java Program - Swing AWT
Java Program Write a Program that display JFileChooser that open... JFrame {
public static JFrame frame;
public JList list;
public...(String [] args) {
frame = new Uploader();
frame.setVisible(true
java - Swing AWT
java how can i add items to combobox at runtime from jdbc Hi Friend,
Please visit the following link:
Thanks Hi Friend
JFileChooser - Swing AWT
);
}
public static void main(String s[]) {
JFrame frame = new JFrame("Directory chooser file example");
FileChooser panel = new FileChooser... for more information,
Thanks
b+trees - Swing AWT
b+trees i urgently need source code of b+trees in java(swings/frames).it is urgent.i also require its example implemented using any string...;
String nodeName;
public static void main(String[] args) {
JFrame frame
java - Swing AWT
What is Java Swing AWT What is Java Swing AWT
JList - Swing AWT
is the method for that? You kindly explain with an example. Expecting solution as early...() {
//Create and set up the window.
JFrame frame = new JFrame("Single list example");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE
Java logical error - Swing AWT
Java logical error Subject:Buttons not displaying over image
Dear Sir/Madam,
I am making a login page using java 1.6 Swings.
I have want to apply picture in the backgoud of the frame and want to place buttons
JTable - Cell selection - Swing AWT
information.
Thanks.
Amardeep... main(String[] args) {
JTableDemo frame = new JTableDemo
How to save data - Swing AWT
to :
Thanks...",
Collection frame work - Java Beginners
Collection frame work How to a sort a list of objects ordered by an attribute of the object
JavaThread "AWT-EventQueue-176 - Applet
#
# Java VM: Java HotSpot(TM) Client VM (1.5.0_05-b05 mixed mode)
# Problematic frame.../Conditional;Ljava/awt/Component;)V
v ~RuntimeStub::alignment_frame_return Runtime1..._frame_return Runtime1 stub
j java.awt.EventDispatchThread.pumpEvents(ILjava/awt
JTable Cell Validation? - Swing AWT
://
Thanks it's not exactly...JTable Cell Validation? hi there
please i want a simple example...(table);
JLabel label=new JLabel("JTable validation Example",JLabel.CENTER
creating a modal frame - Java Beginners
creating a modal frame i have a JFrame with 2 textfields. i want that this frame should open in a modal mode. how to do this? please tell. thanks
Radio Button In Java
on the frame. The
java AWT , top-level window, are represent by the CheckBoxGroup... for the procedure of inserting checkbox group on
the Java AWT frame.
Program Description:
Here...
Radio Button In Java
Java Swings-awt - Swing AWT
Java Swings-awt Hi,
Thanks for posting the Answer... I need to design a tool Bar which looks like a Formating toolbar in MS-Office Winword(Standard & Formating) tool Bar.
Please help me... Thanks in Advance
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/81558 | CC-MAIN-2013-20 | refinedweb | 760 | 58.58 |
Hi everyone!<br />
Thanks for taking the time out of your busy schedule to visit my corner of the web. I'm going to use this blog as a sounding board for my thoughts on pretty much everything, so if you're interested in anything, check back often. I'll be updating a lot and there will always be something different.<br /><p><div class="blogdisclaim"><a href="">This post originally appeared on an external website</a></div>
Converting a DataSet to an ADODB.Recordset
Hey everybody.? Right now I'm working on an "on-the-fly" method of converting a .Net DataSet to an ADODB.Recordset. After doing countless searches on google (and any site therein), I found that the majority of conversion methodologies required storing the file to hard disk (which can be pretty painful if you do web development), or did just a plain lousy job of explaining anything.<br />
<br />
Once I get the process fleshed out and a little better commented, I'll post it. Just to give some background, the conversion methods are encapsulated in their own class, so it should be fairly portable. I'm hoping to have this up by either the end of this week or the beginning of next. Enjoy!<br />
Drunken Musings I
<p>Hey!<br />
In my last post, I mentioned that I would have something set up for converting a .Net DataSet to an ADODB.Recordset. I actually have this finished, but as I am a lazy *******, I don't have it set up yet.<br /></p>
<p>Actually, here's my blog/thought of the day. Why do ex's make life difficult? My ex-fiancee (never got married) is trying to make life difficult. While we were dating/engaged, I backed her car into a pole, and I told her I would fix it. I have hit some extreme situations since then and have not been able to get it fixed - for those that know me, you'll know what I'm talking about, and for those that don't and are curious, please drop me a line. Anyway, I can only talk to my ex through e-mail, which is tough enough, and she has managed to "misunderstand" a bunch of things. Now she's threatening legal action after I have saved her from a lot of financial ruin. Am I wrong for being slightly upset about this? Why do lawyers need to be brought in? Why can't things be easier? Why does my ex hate me? Oh well. I will not let this situation beat me down. Thanks for listening. The conversion thing is coming.<br /></p>
<p>Later!<br /></p>
Life Update
Ok, so I'm on this whole kick right now to better myself and create a better me. Well, of course that includes changing my email on every frigging service I belong to, which is too many to count. How in the **** did humanity ever live in this world without belonging to fifty different news service organizations, online stores and such? It really makes me wonder who out there has my email address, and what could they be doing with it? I'm sure there is a ninety year old Ukrainian woman sitting in her hovel rubbing her hands together saying something along the lines of "He he he... now who can I email as Scott Allender?" Probably would sound a lot better if you think with a ninety year old Ukrainian woman accent.<br />
Cool New Tool
Well, special thanks to friends Han Gerwitz and Ryan Stephenson. With their direction, I found this awesome new bookmark utility called del.icio.us. This thing is awesome; I can pretty much upload whatever the **** I want as a bookmark and boom, it stores and puts it into html (ideal for a home page) and rss feed (great for aggregators if you want to know about my tastes and stuff). Anyway, this thing rocks! So much so that I've devoted an entire blog to it.<br />
<br />
All right, I need to get out more. I know.<br />
Convert a .Net DataSet to an ADODB.Recordset
All right, without further ado and sorry for the wait, here's the code for converting a DataSet to a Recordset without using a hard file, i.e. the Microsoft way. I've got this thing embedded in a web service, but feel free to incorporate it into a standalone dll. As my intention for this thing is to share it as much as possible (it was **** trying to figure out how to make this work without a hard file), feel free to copy and use it however you want.<br />
<br />
Imports System.Data
Imports System.IO
Imports System.Xml
Imports System.Xml.XPath
Imports System.Xml.Xsl

Public Class VbConvert
'**************************************************************************
' Method Name : GetADORS
' Description : Takes a DataSet and converts it into ADO Recordset
'               persistence-format XML, entirely in memory (no file
'               on disk).
' Output      : Returns the recordset XML as a String if successful.
'               If not, raises an error via Err.Raise.
' Input parameters:
' 1. DataSet object
'**************************************************************************
Public Function GetADORS(ByVal ds As DataSet) As String
Dim strRsXml As String
'Create an xmlwriter object, to write the ADO Recordset Format XML
Try
Dim sw As New MemoryStream
Dim xwriter As New XmlTextWriter(sw, System.Text.Encoding.Default)
'call this Sub to write the ADONamespaces to the XMLTextWriter
WriteADONamespaces(xwriter)
'call this Sub to write the ADO Recordset Schema
WriteSchemaElement(ds, xwriter)
Dim TransformedDatastrm As New MemoryStream
'Call this Function to transform the Dataset xml to ADO Recordset XML
TransformedDatastrm = TransformData(ds)
'Pass the Transformed ADO REcordset XML to this Sub
'to write in correct format.
HackADOXML(xwriter, TransformedDatastrm)
xwriter.BaseStream.Position = 0
Dim sr As New StreamReader(xwriter.BaseStream)
strRsXml = sr.ReadToEnd
xwriter.Flush()
xwriter.Close()
sr.Close()
Return strRsXml.ToString
Catch ex As Exception
'Returns error message to the calling function.
Err.Raise(100, ex.Source, ex.ToString)
End Try
End Function
Private Sub WriteADONamespaces(ByRef writer As XmlTextWriter)
'The following is to specify the encoding of the xml file
'writer.WriteProcessingInstruction("xml", "version='1.0' encoding='ISO-8859-1'")
'The following is the ADO recordset format:
'<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882"
'     xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882"
'     xmlns:rs="urn:schemas-microsoft-com:rowset"
'     xmlns:z="#RowsetSchema">
'Write the root element
writer.WriteStartElement("", "xml", "")
'Append the ADO Recordset namespaces
writer.WriteAttributeString("xmlns", "s", Nothing, "uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882")
writer.WriteAttributeString("xmlns", "dt", Nothing, "uuid:C2F41010-65B3-11d1-A29F-00AA00C14882")
writer.WriteAttributeString("xmlns", "rs", Nothing, "urn:schemas-microsoft-com:rowset")
writer.WriteAttributeString("xmlns", "z", Nothing, "#RowsetSchema")
writer.Flush()
End Sub
Private Sub WriteSchemaElement(ByVal ds As DataSet, ByRef writer As XmlTextWriter)
'ADO Recordset format for defining the schema
' <s:Schema
' <s:ElementType
' </s:ElementType>
' </s:Schema>
'write element schema
writer.WriteStartElement("s", "Schema", "uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882")
writer.WriteAttributeString("id", "RowsetSchema")
'write element ElementTyoe
writer.WriteStartElement("s", "ElementType", "uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882")
'write the attributes for ElementType
writer.WriteAttributeString("name", "", "row")
writer.WriteAttributeString("content", "", "eltOnly")
writer.WriteAttributeString("rs", "updatable", "urn:schemas-microsoft-com:rowset", "true")
WriteSchema(ds, writer)
'write the end element for ElementType
writer.WriteFullEndElement()
'write the end element for Schema
writer.WriteFullEndElement()
writer.Flush()
End Sub
Private Sub WriteSchema(ByVal ds As DataSet, ByRef writer As XmlTextWriter)
Dim i As Int32 = 1
Dim dc As DataColumn
For Each dc In ds.Tables(0).Columns
dc.ColumnMapping = MappingType.Attribute
writer.WriteStartElement("s", "AttributeType", "uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882")
'write all the attributes
writer.WriteAttributeString("name", "", dc.ToString)
writer.WriteAttributeString("rs", "number", "urn:schemas-microsoft-com:rowset", i.ToString)
writer.WriteAttributeString("rs", "baseCatalog", "urn:schemas-microsoft-com:rowset", Me.DBNAME)
writer.WriteAttributeString("rs", "baseTable", "urn:schemas-microsoft-com:rowset", _
dc.Table.TableName.ToString)
writer.WriteAttributeString("rs", "keycolumn", "urn:schemas-microsoft-com:rowset", _
dc.Unique.ToString)
writer.WriteAttributeString("rs", "autoincrement", "urn:schemas-microsoft-com:rowset", _
dc.AutoIncrement.ToString)
'write child element
writer.WriteStartElement("s", "datatype", "uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882")
'write attributes
writer.WriteAttributeString("dt", "type", "uuid:C2F41010-65B3-11d1-A29F-00AA00C14882", _
GetDatatype(dc.DataType.ToString))
writer.WriteAttributeString("dt", "maxlength", "uuid:C2F41010-65B3-11d1-A29F-00AA00C14882", _
dc.MaxLength.ToString)
writer.WriteAttributeString("rs", "maybenull", "urn:schemas-microsoft-com:rowset", _
dc.AllowDBNull.ToString)
'write end element for datatype
writer.WriteEndElement()
'end element for AttributeType
writer.WriteEndElement()
writer.Flush()
i = i + 1
dc = Nothing
Next
End Sub
'Function to get the ADO compatible datatype
' Feel free to add as many ADO compatible data types, I needed only two.
Private Function GetDatatype(ByVal dtype As String) As String
Select Case (dtype)
Case "System.Int32"
Return "int"
Case "System.Float"
Return "float"
Case Else
Return "string"
End Select
End Function
'Transform the data set format to ADO Recordset format
'This only transforms the data
Private Function TransformData(ByVal ds As DataSet) As MemoryStream
Dim config As New System.Configuration.AppSettingsReader
Dim xslFile As String = config.GetValue("rsXsl", Type.GetType("System.String"))
Dim instream As New MemoryStream
Dim outstream As New MemoryStream
'write the xml into a memorystream
ds.WriteXml(instream, XmlWriteMode.IgnoreSchema)
instream.Position = 0
'load the xsl document
Dim xslt As New XslTransform
xslt.Load(xslFile)
'create the xmltextreader using the memory stream
Dim xmltr As New XmlTextReader(instream)
'create the xpathdoc
Dim xpathdoc As XPathDocument = New XPathDocument(xmltr)
'create XpathNavigator
Dim nav As XPathNavigator
nav = xpathdoc.CreateNavigator
'Create the XsltArgumentList.
Dim xslArg As XsltArgumentList = New XsltArgumentList
'Create a parameter that represents the current date and time.
Dim tablename As String
xslArg.AddParam("tablename", "", ds.Tables(0).TableName)
'transform the xml to a memory stream
xslt.Transform(nav, xslArg, outstream)
instream = Nothing
xslt = Nothing
xmltr = Nothing
xpathdoc = Nothing
nav = Nothing
'outstream.Position = 0
Return outstream
End Function
'**************************************************************************
' Method Name : HackADOXML
' Description : The XSLT output does not use full end elements. For
'               example, it emits <root attr=""/> instead of
'               <root attr=""></root>. An ADO Recordset cannot read
'               every element in that form, so this method rewrites
'               the output into the element forms ADO accepts.
'**************************************************************************
Private Sub HackADOXML(ByRef wrt As XmlTextWriter, ByVal ADOXmlStream As System.IO.MemoryStream)
ADOXmlStream.Position = 0
Dim rdr As New XmlTextReader(ADOXmlStream)
Dim outStream As New MemoryStream
rdr.MoveToContent()
'if the ReadState is not EndofFile, read the XmlTextReader for nodes.
Do While rdr.ReadState <> ReadState.EndOfFile
If rdr.Name = "s:Schema" Then
wrt.WriteNode(rdr, False)
wrt.Flush()
ElseIf rdr.Name = "z:row" And rdr.NodeType = XmlNodeType.Element Then
wrt.WriteStartElement("z", "row", "#RowsetSchema")
rdr.MoveToFirstAttribute()
wrt.WriteAttributes(rdr, False)
wrt.Flush()
ElseIf rdr.Name = "z:row" And rdr.NodeType = XmlNodeType.EndElement Then
'The following is the key statement that closes the z:row
'element without generating a full end element
wrt.WriteEndElement()
wrt.Flush()
ElseIf rdr.Name = "rs:data" And rdr.NodeType = XmlNodeType.Element Then
wrt.WriteStartElement("rs", "data", "urn:schemas-microsoft-com:rowset")
ElseIf rdr.Name = "rs:data" And rdr.NodeType = XmlNodeType.EndElement Then
wrt.WriteEndElement()
wrt.Flush()
End If
rdr.Read()
Loop
wrt.WriteEndElement()
wrt.Flush()
End Sub
Private Const DBNAME As String = "******"
End Class
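For anyone consuming this from the COM side, here's a rough sketch of how a VB6 or classic ASP client might rehydrate the returned XML into a live ADODB.Recordset. Fair warning: GetRecordsetXml below is just a stand-in for however you actually call the web service, so treat this as a starting point rather than gospel.

' Hypothetical VB6/ASP consumer - GetRecordsetXml is a placeholder
Dim xml As String
xml = GetRecordsetXml()

' Push the persisted-recordset XML into an ADO Stream
Dim stm As New ADODB.Stream
stm.Open
stm.WriteText xml
stm.Position = 0

' Recordset.Open can load a persisted recordset straight from a Stream
Dim rs As New ADODB.Recordset
rs.Open stm

Do While Not rs.EOF
    Debug.Print rs.Fields(0).Value
    rs.MoveNext
Loop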
What's Going On?
Ok, it's about 1 am on Tuesday morning, and I'm sitting here after working 11 hours and cleaning my house for about 5 hours. When did life get so mundane? I mean seriously, I'm 24, and I spent a night cleaning my house. Granted, I could use a mundane night, so I shouldn't really be complaining, but still... I remember thinking to myself that when I was grown up things would be so much easier and more exciting than when I was a little kid. Now, I would love the opportunity to spend a single afternoon being childlike. I guess that's the cosmic joke, huh? You always want to be in a state of mind other than the one you're currently in. Kind of a take on the grass is always greener thing.<br />
<br />
Anyway, today should be much more interesting. I start working with a new client, which is always exciting, I get to interview my first candidate for a roommate, and... I don't know. I guess something will come up, and if it doesn't, great. I can take the time to relax.<br />
<br />
Oh by the way, this past weekend was great. I finished my second bike century, a hundred-mile bike ride. This ride was sponsored by the American Diabetes Association, and if you want to donate go to the <a href="">American Diabetes Association Donation</a> page. Diabetes is a serious, treatable illness.<br />
Romance and Stuff
Ok, so I really like this girl. She's extremely attractive, but there are several things impeding a potential relationship between her and myself. To elaborate, she's drop dead gorgeous, but something that has been bothering me is that the more I find out about her, the more she reminds me of my ex-fiance. It's kind of tough to reconcile that. I just can't seem to get past the things that I know about this new woman, because they are so close to what I have already experienced.
<br />
Should a learning experience (my time with my ex-fiance) cripple potential experiences with other women?<br />
Romance and Stuff
<p>Most of us tend to be attracted to the same kind of person over and over again. Those are the people we like. Those are the people who like us. We have fun together. Our values are compatible. Emotionally, instinctively, intellectually, we have preferences in people just as we have them in food and art. And they feel the same way about us.</p>
<p>Duh!</p>
<p>You should turn this dilemma into an asset. How did you enjoy the relationship with your fiancee? (Two E's please, we live in a new era and you don't want to make a gender typo!) What was the cause of the failure? Did you have good times or did you fight a lot? Was it a blind date or an accident of hormones that turned into a train wreck? Or was it a good match that just turned bad because of fate and perhaps a bit of inexperience and immaturity?</p>
<p>If this woman doesn't turn out to be the one you keep for life, you'll probably notice that many or most of her successors also bear a lot of resemblance to one another.</p>
<p>If this similarity is so spooky that it just creeps you out, then go with your gut and bail. Or if you're doggedly looking for the adventure that comes with diversity and more of the same feels like a big disappointment, then keep looking. But if you're just wondering, I'd say stop wondering and go for it. This type of woman seems to be your type. Enjoy!</p>
Life Update II
So yeah, I just got back from Chicago where my youngest nephew, Austin Michael Bates, was christened. I have such love for my sister's family; it's almost like it was my own. Anyway, anytime I go to Chicago I always return refreshed and ready for anything. Which is good, as I have my family's reunion this upcoming week, more fun with my new client, a roommate to prepare for, and, as always, the other comings and goings in my life. On the plus side, I feel pretty confident that my life will keep me very busy this week.
Scott's Thoughts & Stuff
The first section of this article gives some background. The second section describes how to install and get the example scripts working for you so that
you can verify that they work and have something to build off of. The third section discusses how to use the library I wrote for your own applications,
giving you the guidance to utilize the library to its fullest. The fourth section describes the actual code of the library, how it works, and why I made the choices that I did.
If you are like me, you will probably want to know more about how I make C# and PHP work together in code and not just blindly use my library.
However, I have included detailed usage instructions just in case you only want to copy, paste, and make it work.
Several weeks ago, I was writing a proof of concept program in C# that, in part, required connecting to a PHP script securely. I did not have the option
of using SSL, so I opted for something more customized. Figuring that a specific encryption algorithm will work exactly the same regardless of the language
that is implementing it, I simply used the built-in algorithms provided in C# and PHP. This assumption is true, for the most part; however, I was having a surprisingly difficult
time getting the encryption to actually work between Microsoft's library and PHP's Mcrypt library. After several hours of failure, I eventually broke down and Googled a
quick solution. Normally, I find an example close to what I need in a few minutes, but I looked for a long time and could not find a suitable solution.
Nobody seemed to quite have a complete answer to the many people out there who were asking for help trying to do exactly what I was trying to do. Many
questions were answered unsatisfactorily, none answered the full topic, others did not get any response at all, and an occasional person would eventually
respond to his own question saying nothing more than "I got it to work," but would not help out the rest of us by posting his solution. I had already spent
enough time trying to find a "quick" solution, so at this point, I went ahead and wrote my own. It took me a full Saturday, but I got it working. This article
details what I did to get it working, provides a library that you can use, and should hopefully help out others who are in the same dilemma that I was.
The Advanced Encryption Standard is a widely used block cipher based on the Rijndael encryption algorithm. And by based, I mean that AES is a subset of Rijndael
in that it has a fixed block size of 128 bits, whereas Rijndael supports variable block lengths of 128, 192, or 256 bits. Do not confuse block length with
key length. AES supports key lengths of 128, 192, or 256 bits (when you see references to AES-128, AES-192, or AES-256, the number indicates the key size and not the block size).
AES is a symmetric algorithm, meaning that the same key must be used both to encrypt and decrypt a message. Although the algorithm is strong, you must have
a secure way of getting the key to both parties without anybody intercepting the key along the way and using it to read your messages. This is where we need an asymmetric encryption algorithm.
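Since both sides must run the same algorithm with the same key, it helps to see the symmetric round trip in isolation. The sketch below is not part of this article's library; it simply exercises the framework's built-in `Aes` class (CBC mode with PKCS7 padding, the framework defaults) with a freshly generated 256-bit key:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class AesDemo
{
    // Encrypt a UTF-8 string with AES in CBC mode (the framework default).
    public static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key; // 32 bytes for AES-256
            aes.IV = iv;   // 16 bytes -- the AES block size
            using (ICryptoTransform enc = aes.CreateEncryptor())
            {
                byte[] data = Encoding.UTF8.GetBytes(plainText);
                return enc.TransformFinalBlock(data, 0, data.Length);
            }
        }
    }

    // Decrypt with the SAME key and IV -- that is what "symmetric" means.
    public static string Decrypt(byte[] cipher, byte[] key, byte[] iv)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.IV = iv;
            using (ICryptoTransform dec = aes.CreateDecryptor())
            {
                byte[] data = dec.TransformFinalBlock(cipher, 0, cipher.Length);
                return Encoding.UTF8.GetString(data);
            }
        }
    }

    static void Main()
    {
        using (Aes aes = Aes.Create())
        {
            aes.KeySize = 256; // forces generation of a fresh 256-bit key
            byte[] c = Encrypt("secret message", aes.Key, aes.IV);
            Console.WriteLine(Decrypt(c, aes.Key, aes.IV)); // prints "secret message"
        }
    }
}
```

If the PHP side is handed the same 32-byte key and 16-byte IV and uses the same mode and padding, it will decrypt these bytes identically; getting those parameters to agree on both sides is exactly the problem the next paragraphs address.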
The RSA algorithm is an asymmetric, public key encryption method. This means that two different keys are involved: an encryption key and a decryption key. These keys are referred
to as public and private keys, respectively. The private key is kept on the server away from prying eyes, while the public key can be sent to as many people
as you desire--even your enemies. Anything encrypted with one of the keys can only be decrypted with the other--you can not even decrypt a message with
the same key you encrypted it with. In this way, a server (say, a PHP script) can send you a public RSA key which you can then use to encrypt something that
only the server can decrypt. This is how you can securely transmit your symmetric AES key to a remote host for use in the AES algorithm.
But why transmit and use an AES key when you already have RSA keys that you can use? Because, RSA is very slow. In fact, it is slow enough that we only want
to use it long enough to transmit a single key for use in our symmetric algorithm. This is basically what is happening behind the scenes when you request
a website on an HTTPS site. The server sends you the public key, you use the public key to encrypt a random key you generate, and then you send
that encrypted key to the server to establish a session through which you can securely communicate via AES encrypted messages. This is exactly the way that my
implementation works.
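To make that handshake concrete, here is a self-contained sketch with both roles collapsed into one process. It is illustrative only -- the real server half lives in PHP, and in this article the RSA pair is generated with OpenSSL rather than in code. The `fOAEP = false` flag selects PKCS#1 padding rather than OAEP, matching the non-OAEP mode used on the PHP side:

```csharp
using System;
using System.Security.Cryptography;

class HybridDemo
{
    static void Main()
    {
        // Stand-in for the server's key pair (1024 bits, as in the article).
        using (var rsa = new RSACryptoServiceProvider(1024))
        {
            // Client: a random 256-bit AES key and 128-bit IV.
            byte[] aesKey = new byte[32], aesIV = new byte[16];
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(aesKey);
                rng.GetBytes(aesIV);
            }

            // Wrap each with the public key; fOAEP=false means PKCS#1 padding.
            byte[] wrappedKey = rsa.Encrypt(aesKey, false);
            byte[] wrappedIV = rsa.Encrypt(aesIV, false);

            // Server: only the private-key holder can unwrap them.
            byte[] k = rsa.Decrypt(wrappedKey, false);
            byte[] v = rsa.Decrypt(wrappedIV, false);
            Console.WriteLine(
                Convert.ToBase64String(k) == Convert.ToBase64String(aesKey) &&
                Convert.ToBase64String(v) == Convert.ToBase64String(aesIV)); // prints "True"
        }
    }
}
```

Note that the slow RSA operations happen exactly twice per session; every message after that uses the fast AES key that was just exchanged.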
Please be aware that in real-world applications, there is another step where your computer verifies that the public key you got from a web
site is valid by verifying that it is signed by a certificate authority. Without this step, it is possible for a determined individual to pretend to be the web site you
are trying to access by issuing you a public key that he himself made. This attacker could then intercept your messages halfway by pretending to be the website
you are trying to access, read the messages in plain text, then encrypt them with the real site's public key, and send them to the real site. Note that this attack is not
prevented in this article, so you should not use this encryption here for anything that is too sensitive--for an application like this, you should buy a signed
certificate for use with an SSL encrypted web site.
For the purpose of this article, we will be using a self-signed RSA certificate. There is no need to buy a certificate for this application, as we are not going
to check the signature anyway. We are not going to be having browser notifications in our application warning us that the issuer is not trusted. The keys we generate
ourselves are far more secure than we really need them to be.
This is a good time for me to warn you about the dangers of writing your own encryption algorithm. While it may be fun to write your own encryption scheme (and I have done so many times),
it is not a good idea to use it for anything. I can't seem to stress this point enough. Many people (including myself at one point) think that by creating your
own algorithm, your attackers will not know the algorithm or the keys, making it twice as difficult to break.
The flaw in this reasoning is twofold. One is that to decrypt something, you have to have the algorithm. If you have the algorithm, even if it's compiled into an obfuscated
assembly, it can be decompiled and studied for flaws. If the strength of your algorithm relies at all, even a little bit, on someone not knowing your algorithm, then it's not
a secure algorithm. All of the strength of the algorithm should rest in the strength of the chosen keys. Two, if you have a compromised key, then you only lose the information
secured by that key, and anyone who was using a different key is still safe. If you have a compromised algorithm, then every bit of information, regardless of the key it was
encrypted with, may be compromised. If the algorithm is found to be bad, then your scheme's 1024 bit keys may turn out to be worthless.
"But the entire world knows how the AES algorithm works! The algorithm is readily available and explained to anyone interested, so how can it be more secure than my own algorithm?"
Simply put, AES and RSA have undergone extensive testing and have been scrutinized by experts. Really smart people all over the world spend a lot of time trying to break it,
and have still failed. They have proven its strength by their failure. Always use an algorithm that has been proven to be secure, because you have no idea how many flaws you
are overlooking when you make your own.
I think a comment by Garth J. Lancaster says it best when he states that even though people
think that being obscure about security concerns is the best way, he would trust them less than an open solution he can verify. He says (and I definitely agree) that the only thing
that should be private in an encryption scheme is the certificate or key. [Read Comment] If the encryption
method is known, it should not weaken the encrypted information at all. In fact, if the attacker knows that you used AES, he might give up trying to break the algorithm from the start.
All you have to do is generate cryptographically secure keys, and keep them safe.
Then there's the legal issues of writing your own algorithm. I'll forgo the details, so just suffice it to say that if you intend to use your own encryption scheme to store
other people's sensitive information (credit cards or such like) and use a faulty method to do so, you could be in a lot of trouble. Always use an approved encryption scheme when
you are dealing with other people's information. Have a look at the FIPS 140-2 standard at NIST.gov
or see all the documents here. FIPS 140-3 is due to come out shortly, so keep an eye on it. You are required to adhere
to these standards if you run a business. Stay away from DES, since it has been shown that a 56 bit DES key can be broken in under a day (a 64 bit DES key is actually only 56 bits,
since the other 8 bits are just for parity).
Security risks aside, making your own algorithm (that's worth anything) takes a lot of time. Why not simply call the built-in encryption functions that are most likely provided
by the language you are using? Experts created, tested, and optimized it just for you, so simply drop it in and use it.
For legal reasons, OpenSSL is not included in the download for this article. You can download it manually from here
and follow the instructions below to install it. (Optionally, you can get it from the developer of OpenSSL, but then you would have to compile the source code yourself.)
Download the latest Light installer from the list. For me, this was v1.0.0d (32-bit). Run the installer and if you get a popup like the one below (see picture),
click OK--we only need access to 3 files in this package so nothing bad is going to happen.
When you get to the following "Copy OpenSSL DLLs to:" screen (see picture), select the "The OpenSSL binaries (/bin) directory" option.
Now that OpenSSL is installed, navigate to the folder you installed it to in an Explorer window. I installed mine into the default location
of C:\OpenSSL-Win32\bin. Once you are there, open a command window and CD to that directory (e.g., type "cd C:\OpenSSL-Win32\bin").
You are now ready to generate public and private RSA keys.
To generate your own RSA keys, type the following commands at the prompt.
Let's generate a 1024 bit RSA private key and store it as temp.key:
openssl genrsa -aes256 -out temp.key 1024
You can optionally change the 1024 above to 2048 if you want a more secure key (it's unnecessary though--even web sites mostly use 1024 bit
keys); just know that it will be a bit slower when encrypting and decrypting. Now, to convert the private key to the format we will use:
openssl rsa -in temp.key -out private.key
During the next step, OpenSSL will ask you to enter some information about your web site. You can make this up however you want.
Create the public certificate (for distribution to clients) like this:
openssl req -new -x509 -nodes -sha1 -key private.key -out public.crt -days 3650
If you prefer, included in the download is a .bat file that I wrote which you can simply copy into the bin
folder of OpenSSL and run to generate an SSL key pair for you. It will pause to ask you for information such as a
password and country information, so just give it what it wants and it will generate an RSA key pair for you.
A key pair that I made is included in the download for you to play with (in case you can't get it to work yourself or just want to skip
this step). Please note that the whole world has access to that private key since it is available for download, so do not use it for anything important.
Now you have both the public and private keys necessary for RSA encryption. There is one more step we must take to keep our private key secure on the server.
You will notice that the private key is saved in a .key file in plain text. If we upload this to our web site as-is, then anybody can view it by simply going
to yoursite.com/private.key and downloading it. You can either set up your web server to disallow people from downloading .key
files (the harder way and the way we won't do it), or you can store it in a variable in a PHP script. If you try to open the PHP file, you will just get a blank page instead
of the key. To do this, we are just going to copy the contents of private.key into a PHP variable named $PrivateRSAKey, like this:
<?php $PrivateRSAKey = "-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQE...0FgBdzxrcF0b
-----END RSA PRIVATE KEY-----"; ?>
Now we can just use the PHP include() function to load our private key into our PHP script. To use the functions in my encryption library,
the variable must have the name given above ($PrivateRSAKey) or it will not be recognized. Also, make sure you do not upload the private.key file
to your web server! That would be a big security mistake. Once you generate a good .php file for your private key, you might as well delete private.key
so that it's not floating around. In the download, I have provided a simple tool (made from 5 lines of C#) to automatically convert a .key file to a variable
in a .php file. You may use it if you are having problems copying and pasting the string and messing up the format or line ending characters.
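The converter really can be that small. A plausible shape for it (a guess at the shipped tool, not its actual source; file names are the ones used elsewhere in this article) is:

```csharp
using System.IO;

class KeyToPhp
{
    // Wraps the PEM text from a .key file in a PHP variable assignment so the
    // web server will execute the file instead of serving its contents.
    public static void ConvertKey(string keyPath, string phpPath)
    {
        string key = File.ReadAllText(keyPath).Trim();
        // PHP double-quoted strings may span lines, so the PEM newlines are fine.
        File.WriteAllText(phpPath,
            "<?php $PrivateRSAKey = \"" + key + "\"; ?>");
    }

    static void Main()
    {
        ConvertKey("private.key", "private.php");
    }
}
```

Run it in the same folder as private.key; it emits private.php with the key wrapped in the $PrivateRSAKey variable that the library expects.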
You now have a public key in the format of a certificate (public.crt) and a private key stored in a PHP variable (private.php).
The next step is to upload the PHP portion of this script to a web server that has PHP. I uploaded the PHP scripts to another computer on my local network that
is running WAMP (Windows, Apache, MySQL, and PHP). Upload everything in the PHP folder of the download for this article to a folder on your web server that you want
to access securely. I uploaded mine to my other computer's web root in a folder named "enc," so the scripts will be at for me.
Next, add the public.crt and private.php files (created in step 1 above) in to this folder as well, overwriting the included sample keys. (If you would rather
use my keys temporarily until you get things working, then skip copying the keys over until later.) Note that in the few seconds that it takes you to upload
your private key to your web server, it is vulnerable to interception. If this is a problem, then you should either upload it via an HTTPS link to the web server
or find a way to generate it on the server itself. This is the only time the private key will travel across the network.
The PHP encryption library this article uses is PHP Sec Lib. This library is an excellent pure PHP implementation of the common encryption algorithms that
we will be using. The code for PHP Sec Lib is included in this download for your convenience, but can also be downloaded from SourceForge.
You now have the necessary files uploaded to your web site, and should have a working PHP script on the Internet that you can access from the C# side of the connection.
This is the easy part, as far as setting up a working example is concerned. Simply open up the solution file from the download in Visual Studio and hit Compile.
When the program runs, in the third box, type in the full URL to the example.php that you uploaded to your site (for me, this is),
and click "Establish Connection". Give it a moment. When it says that the connection has been established (the "Send Message" button will
become available), type in a message and hit "Send Message". If all works correctly, then you will get back a meaningful response from PHP (as opposed to garbage characters or a bomb-out).
The C# code example takes care of retrieving the public key from the server and using it to send a symmetric key it generates. If you want, you can also test
out the HTTP asynchronous POST functionality if you enter the correct URL to test.php, or the RSA encryption (without transmitting an AES key) by entering
the correct URL to rsa.php.
The example defaults to because that is the name of the test computer on my home network.
I'm going to show you how to use this library by walking you through the creation of a simple application: a high score submitter! We are going to make
a lame game in C# (all you do is type in what you want your score to be). The game will post a user's high score to our secure PHP script which will return
his ranking amongst all gamers. We are going to encrypt this transmission because we do not want sore losers to use HTTP interception software to capture
the posted data (the score), modify their score, and then resend the package so that they can feel warm inside for cheating to the top of a score list.
Let's get started. Before we write our own, we will look at an example. If you have uploaded the entire examples folder, then you will
have uploaded example.php also. As its name suggests, this is the example page that we will build off of. Its contents are displayed below:
<?php
// Set the location to the public and private keys
$PrivateKeyFile = "private.php";
$PublicKeyFile = "public.crt";
include("secure.php");
if ($AESMessage != "")
{
SendEncryptedResponse("Got: " . $AESMessage . ", GOOD!");
}
?>
As you can see, we simply set the location of the private key (in PHP format) and the public key, include the encryption library, and then write our own code to do whatever.
Here is how you process an encrypted incoming message:
Any time this script receives a valid encrypted message, it will automatically be decrypted with the AES decryption key in the currently
established session (you will establish this from the C# side later). After it is decrypted, the message will be stored in the variable $AESMessage.
If this variable is not empty (thus the check for != ""), then there is a plain text message in the variable that you can process
however you want. In the example, I am just sending it back to the sender plus a little extra. We are going to change this a bit by first parsing out the winner's name and his
score. Since we are going to eventually send these from the C# side in the format name(comma)score, let's parse
out those two strings using PHP's explode(delimiter, string) function:
if ($AESMessage != "")
{
// Get the username and high score from the message that was sent
$split = explode(",", $AESMessage);
$username = $split[0];
$score = $split[1];
}
How about we add a little ranking code:
$rank = "";
if ($score < 100)
$rank = "Loser!";
else if ($score < 1000)
$rank = "Not bad...";
else if ($score < 10000)
$rank = "Pretty Good.";
else if ($score < 100000)
$rank = "Amazing!";
else if ($score < 1000000)
$rank = "~YOU DA BOMB~";
else
$rank = "YOU ARE A GRAND MASTER!";
Now that we have done what we wanted with the data sent to us, we can send an encrypted response back to the client that accessed this page by using
the SendEncryptedResponse() function. This function automatically encrypts the text you give it and returns it to the client. Note that you can
only call this function once, and once you call it, the script exits. Nothing will be processed after a call to this function. As a reminder,
do not use any echo commands in your script anywhere, as this will mess up the response to the client. Here is how we respond:
SendEncryptedResponse("Name: " . $username . " Rank: " . $rank);
With the call of this function, you are done. The message you pass it will be encrypted and sent to the C# program that called this page in the first place.
Here is the full source code for the PHP script:
<?php
// Set the location to the public and private keys
$PrivateKeyFile = "private.php";
$PublicKeyFile = "public.crt";
include("secure.php");
if ($AESMessage != "")
{
// Get the username and high score from the message that was sent
$split = explode(",", $AESMessage);
$username = $split[0];
$score = $split[1];
$rank = "";
if ($score < 100)
$rank = "Loser!";
else if ($score < 1000)
$rank = "Not bad...";
else if ($score < 10000)
$rank = "Pretty Good.";
else if ($score < 100000)
$rank = "Amazing!";
else if ($score < 1000000)
$rank = "~YOU DA BOMB~";
else
$rank = "YOU ARE A GRAND MASTER!";
SendEncryptedResponse("Name: " . $username . " Rank: " . $rank);
}
?>
That is it for the PHP side of things. Let us move on to the C# library...
We got the pseudo-high-score-board all set up, now all we need to do is write a pseudo-game to send it some data. First, open up Visual Studio and start
a new Windows Forms application. Once you are there, go ahead and drag a Textbox, a NumericUpDown, and a Button on to the form.
Make it look pretty, add some Labels, and be sure to set the NumericUpDown's Maximum property to something really high, like a hundred million.
Here is what I made my "game" interface look like:
Now add an OnClick event to the button (double click the button in the Designer view). In the Solution Explorer pane, right click on References
and click "Add Reference". Browse until you find the cs2phpCryptography.dll library and add it (you can find this in the Library folder of the download). Also make
sure that you add a reference to the library at the top of your code-behind, like this:
using CS2PHPCryptography;
Hopefully you are not stuck. I am trying to be extra clear in case the people that are reading this article are mainly PHP developers and are not too familiar with C#.
Next, we are going to create a SecurePHPConnection object within the Form class, and instantiate it as follows:
SecurePHPConnection secure;
public Form1()
{
InitializeComponent();
secure = new SecurePHPConnection();
}
Right below the instantiation statement, we are going to subscribe to two of the events in the class: OnConnectionEstablished and OnResponseReceived.
OnConnectionEstablished will be raised when, as my self-documenting naming suggests, a secure connection has been established with a remote PHP script.
OnResponseReceived will be raised whenever we get a response from the remote script (this only happens after we send a message). Here is how you subscribe:
public Form1()
{
InitializeComponent();
secure = new SecurePHPConnection();
secure.OnConnectionEstablished +=
new SecurePHPConnection.ConnectionEstablishedHandler(
secure_OnConnectionEstablished);
secure.OnResponseReceived +=
new SecurePHPConnection.ResponseReceivedHandler(
secure_OnResponseReceived);
}
void secure_OnResponseReceived(object sender, ResponseReceivedEventArgs e)
{
throw new NotImplementedException();
}
void secure_OnConnectionEstablished(object sender,
OnConnectionEstablishedArgs e)
{
throw new NotImplementedException();
}
We are almost there. Now we need to give the class the location of the PHP script that it will be posting its information to. Here is how
we can do it (substituting my URL for the location you uploaded your PHP script to):
secure.SetRemotePhpScriptLocation("");
Now we can finally start the request for a secure connection. It takes a while since we are using RSA (and some of you are probably going to use 2048 bit
keys and make it even slower), so I set it up to do all of its stuff in the background and call back via the aforementioned event when it
succeeds. Here is what you do to start the call:
secure.EstablishSecureConnectionAsync();
You may also wish to set the button's Enabled property to false so that the user cannot click the Send button until a connection has been established.
Just add the following code:
button1.Enabled = false;
Good! We have the connection being started when the program runs. Now in the event code for OnConnectionEstablished, we can enable the button
to let the user know that it is OK to submit his high score now.
void secure_OnConnectionEstablished(object sender, OnConnectionEstablishedArgs e)
{
button1.Enabled = true;
}
The next thing we will add is the code to send the actual high score to the PHP script. We can add this in the button click event code. First, we will get the username and
score and put them into a single string separated by a comma, just like we set up our PHP script to expect:
private void button1_Click(object sender, EventArgs e)
{
string name = textBox1.Text;
decimal score = numericUpDown1.Value;
string message = name + "," + score.ToString();
}
Now we can send the message. After we check to make sure that it is OK to send a message in the first place, we can simply call the SendMessageSecureAsync() function
to have our message automatically encrypted with a 256 bit AES key and sent to the PHP script.
if (secure.OKToSendMessage)
{
secure.SendMessageSecureAsync(message);
}
Once PHP processes the request and sends a response, we can process its response in the OnResponseReceived event. The variable e passed
into the event contains the response from the remote server. I chose to use a MessageBox to display the response:
void secure_OnResponseReceived(object sender, ResponseReceivedEventArgs e)
{
MessageBox.Show(e.Response);
}
Now it is time to hit F5 to compile and test it out! Once the application starts, the button will be grayed out for a while until the connection is established.
Once it is enabled, type in a name and a score, send it off, and see what happens.
It works for me. Hopefully it worked for you as well. Here is the complete C# code listing for Lame Game:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using CS2PHPCryptography;
namespace Lame_Game
{
public partial class Form1 : Form
{
SecurePHPConnection secure;
public Form1()
{
InitializeComponent();
secure = new SecurePHPConnection();
secure.OnConnectionEstablished +=
new SecurePHPConnection.ConnectionEstablishedHandler(
secure_OnConnectionEstablished);
secure.OnResponseReceived +=
new SecurePHPConnection.ResponseReceivedHandler(
secure_OnResponseReceived);
secure.SetRemotePhpScriptLocation("");
secure.EstablishSecureConnectionAsync();
button1.Enabled = false;
}
void secure_OnResponseReceived(object sender, ResponseReceivedEventArgs e)
{
MessageBox.Show(e.Response);
}
void secure_OnConnectionEstablished(object sender,
OnConnectionEstablishedArgs e)
{
button1.Enabled = true;
}
private void button1_Click(object sender, EventArgs e)
{
string name = textBox1.Text;
decimal score = numericUpDown1.Value;
string message = name + "," + score.ToString();
if (secure.OKToSendMessage)
{
secure.SendMessageSecureAsync(message);
}
}
}
}
Pretty simple code considering that we are communicating with a remote server in AES encrypted messages. OK. Now on to the cool stuff...
I put together this diagram to visually demonstrate what is actually happening across the Internet:
First, the C# program (from here on referred to as the client) posts to the PHP script (the server) the following plain text request: "getkey=y".
Second, the server sees that the client wants the public RSA key, so it gives it to him in plain text by returning it in the response to the web page request.
Third, the client generates a 256 bit AES key and a 128 bit initialization vector using a cryptographically secure random number generator.
The key and the initialization vector (IV) make up the symmetric key that we need to have on both sides of the connection to correctly encrypt and decrypt
using the AES algorithm. Both the key and the IV are encrypted separately using the public RSA key (provided by the server) and placed
into the data section of a POST request which is then sent off to the server.
Fourth, the server uses its private RSA key to decrypt both the AES key and IV which it stores in a session variable on the server for use exclusively with this one client.
If multiple clients are connected to the same server, they will each have a different AES key. The server now has the proper key it needs and uses it to send
the string "AES OK", encrypted with the AES key, back to the client.
Fifth, the client gets the response from the server, decrypts it using the AES key that was just established for this session, then verifies that it says "AES OK".
If not, then there is a problem. At this point, a secure connection has been established. Each time this initialization process takes place, a
new AES key is made, used for just that session, and then discarded when done.
The last "stage" is more of a loop. The client sends any message it wants to the server, the server processes the request and sends a message back,
and then the client processes the server's response. In the diagram above, the black arrows represent plain text travelling through the Internet, the blue
line represents an RSA encrypted package, and the red lines represent AES encrypted messages.
When the client is done, he (optionally) sends the AES encrypted message "CLOSE CONNECTION" to the server who then destroys the session variables containing
the AES keys and returns "DISCONNECTED".
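The client-side half of steps three and four can be sketched as follows. This is illustrative only -- the real library does this internally and asynchronously, and the POST field names below are placeholders, not the ones it actually sends:

```csharp
using System;
using System.Security.Cryptography;

class SessionKeyGen
{
    // Step three of the diagram: fill a 256-bit key and 128-bit IV from a
    // cryptographically secure RNG (never from System.Random).
    public static void MakeSessionKey(out byte[] key, out byte[] iv)
    {
        key = new byte[32]; // 256-bit AES key
        iv = new byte[16];  // 128-bit IV (the AES block size)
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(key);
            rng.GetBytes(iv);
        }
    }

    static void Main()
    {
        byte[] key, iv;
        MakeSessionKey(out key, out iv);
        // Each value is RSA-encrypted separately, then Base64-encoded into the
        // POST body; the field names here are made up for illustration.
        string post = "aeskey=" + Convert.ToBase64String(key) +
                      "&aesiv=" + Convert.ToBase64String(iv);
        Console.WriteLine(key.Length * 8 + "-bit key, " + iv.Length * 8 + "-bit IV");
        // prints "256-bit key, 128-bit IV"
    }
}
```

Because the key and IV are regenerated on every call, each session gets its own symmetric key, which is what lets the server discard the session state at "CLOSE CONNECTION" without losing anything.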
Keeping with the pattern I have made, we will first look at the PHP code. Since it is a lot shorter than the C# code, this works out well.
Since the full code is way too long to print here, I will just present the code segments applicable to what I am currently talking about.
Let us start with RSA. Right off the bat, I want to let you know that the libraries I have written do not allow the server to encrypt with RSA
nor the client to decrypt with RSA--you can only encrypt on the client side and decrypt on the server side (this only applies to RSA and not AES, which
we have working both ways). This is not because I could not do it, but because for this implementation, I am only using RSA to transmit a symmetric key
and thus do not need it to work both ways. Also, since the algorithm is slow, there is no reason why you would want to use RSA bidirectional unless you
were using it for some sort of signature verification, and that is currently beyond the scope of this article.
First of all, the client will post to the server the variable "getkey=y". When the server sees this, it will simply spit out the
entire public key file for the client and then exit.
//
// The remote user is requesting a public certificate.
//
if (isset($_POST['getkey']))
{
echo file_get_contents($PublicKeyFile);
exit;
}
Since we are using the pure PHP implementation of RSA from PHP Sec Lib, encryption in RSA is actually quite easy. I start out with a code
example to initialize the engine for decryption:
$rsa = new Crypt_RSA();
$rsa->setEncryptionMode(CRYPT_RSA_ENCRYPTION_PKCS1);
$rsa->loadKey($PrivateRSAKey);
Not too hard. Notice that the encryption mode is set to PKCS1 (v2.1), this is simply the encryption standard that we are using. The alternative to PKCS1 is OAEP (Optimal Asymmetric
Encryption Padding), which is supposedly a little more secure, but we are not going to use it for this application. The PHP variable $PrivateRSAKey comes
from the private.php file we generated earlier and included above. If you did not read that section, then just know that the private key file was copied in its plain
text entirety from the .key file and in to a variable in a .php file so that people could not download it by typing the direct address
to it in their browser. The private key itself is in the PKCS1 unencrypted format. And by unencrypted, I mean that the private key itself is not encrypted--it is totally capable
of being used for encrypting.
When the client sends us the AES key he has chosen for us to use in this session, it will come encrypted with the public key the server gave him.
Now is a good time to introduce the special URL Safe Base64 encoding I utilize throughout both the client and server code. Here is what it looks like in PHP:
function Base64UrlDecode($x)
{
return base64_decode(str_replace(array('_','-'), array('/','+'), $x));
}
function Base64UrlEncode($x)
{
return str_replace(array('/','+'), array('_','-'), base64_encode($x));
}
It is just standard base64 encoding with all the "/" characters replaced with "_" and all the "+" characters replaced with "-". This is the standard way of making
a base64 encoded string URL friendly. We do not necessarily need to do this from the server side, but I chose to do it for all the transmissions just to make sure
all messages use the same formatting.
Going back to decrypting the AES keys that were sent to us encrypted with a public RSA key, let's look at how it is done:
$_SESSION['key'] = Base64UrlEncode($rsa->decrypt(Base64UrlDecode($_POST['key'])));
$_SESSION['iv'] = Base64UrlEncode($rsa->decrypt(Base64UrlDecode($_POST['iv'])));
Notice that the key and the initializing vector are sent separately and that they are base64 encoded. We decode the posted base64 encoded
string to a byte array, decrypt it with the private RSA key we just loaded, re-base64 encode the byte array into a string, and then store it in a session variable for
later use. Now we have a securely transmitted AES key that we can use for the rest of this session. The RSA decryption part runs slowly (a few seconds), but
fortunately, this is the only place we need it--AES is set up now and it is really fast.
Here is how we get the AES portion working. Once again, initialization of the AES engine is pretty straightforward:
$aes = new Crypt_AES(CRYPT_AES_MODE_CBC);
$aes->setKeyLength(256);
$aes->setKey(Base64UrlDecode($_SESSION['key']));
$aes->setIV(Base64UrlDecode($_SESSION['iv']));
$aes->enablePadding(); // This is PKCS7
As you can see, we are setting the key length to 256 bits, the strongest allowed by AES. Next, we are setting the key itself to the AES key previously stored
in $_SESSION['key'], and setting the initialization vector to the value stored in the current session as well. The padding that we
are enabling is PKCS7 padding. The AES we are using requires that we pad our message to a multiple of the block size (a fixed 128 bits, or 16 bytes). There are
several types of padding formats that can be used, and this is where some people cannot get their PHP encrypted messages to decrypt correctly in another language. PKCS7 pads
the message to the correct length by adding bytes whose value is equal to the number of bytes that are added. Here is an example:
$_SESSION['key']
The string: T H I S I S A T E S T
In Hex: 54 48 49 53 49 53 41 54 45 53 54
Since the message is not in a multiple of 16 bytes (it's 11 bytes), we must add 5 (16 - 11 = 5) bytes to pad it out to length. With PKCS7 padding,
each of the 5 bytes will have the hexadecimal value 05, the value coming from the number of bytes we are adding:
54 48 49 53 49 53 41 54 45 53 54 05 05 05 05 05
This padded message can then be sent through the AES cipher. Do not get this confused with zero padding where messages are just padded with zeroes,
or with one of the many other padding schemes out there. We are choosing to use this one to get it to work with C#.
The CBC mode you see above is setting the AES block cipher's mode to use Cipher Block Chaining. This is what uses the initialization vector we
are setting. The basic idea behind CBC is that each block is encrypted with some information obtained from the previous block in a message. This means
that if you have any problem decrypting one block in a message, all subsequent blocks will be unreadable as well. An interesting aside is that the initialization
vector does not necessarily need to be kept private (like the key does); however, it is a good idea to keep it safe anyway for extra security.
Next, to decrypt a message, we do as follows:
$AESMessage = $aes->decrypt(Base64UrlDecode($_POST['data']));
Noting again how we first get the bytes from the base64 encoded string before running it through the cipher. This line of code is where the $AESMessage
variable is set that you can use to access whatever the client sent to you. Once the server processes what is in this variable, it will send back a response to the client
using the SendEncryptedResponse() function. This function initializes the AES block cipher in the same manner as it was initialized above, and then does this:
echo Base64UrlEncode($aes->encrypt($message));
printing out the base64 encoded string containing an encrypted message. This is the message that the client will receive as a response to his encrypted HTTP request.
After this, we call the PHP exit command to prevent the server from outputting any more text. If anything is output but one single encrypted message, then the client
will get confused and explode (or at least give a notification that there was a problem).
The only aspect of the server side yet to discuss is where we destroy the session at the request of the client.
if ($AESMessage == "CLOSE CONNECTION")
{
echo Base64UrlEncode($aes->encrypt("DISCONNECTED"));
$_SESSION['key'] = "llama";
$_SESSION['iv'] = "llama";
unset($_SESSION['key']);
unset($_SESSION['iv']);
session_destroy();
exit;
}
It may not be necessary to change the variable contents, then unset them, and then destroy the session, but it doesn't hurt. llama represents
my bogus data, plus I hear they are good at destroying data. Just before we kill the session, we are going to send one last encrypted message to the client letting
him know that we got his request to curl up and die. The downside to sending these commands in plain text like this is that if the client wants to send the string
"CLOSE CONNECTION" for something he is doing unrelated to our protocol, it will close the connection on him.
That is it for the PHP code. Now we can delve into the client side C#.
There are quite a few aspects to the client side of this library that are not related directly to encryption. We will quickly browse over them just
so you know what is happening when we call the functions later on.
First we will look at the HttpControl class which deals with sending the actual data across the Internet to the server.
HttpControl
public string Post(string url, string data, ProxySettings settings)
{
try
{
byte[] buffer = Encoding.ASCII.GetBytes(data);
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
// Use a proxy
if (settings.UseProxy)
{
IWebProxy proxy = request.Proxy;
WebProxy myProxy = new WebProxy();
Uri newUri = new Uri(settings.ProxyAddress);
myProxy.Address = newUri;
myProxy.Credentials = new NetworkCredential(
settings.ProxyUsername, settings.ProxyPassword);
request.Proxy = myProxy;
}
// Send request
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = buffer.Length;
request.CookieContainer = cookies;
Stream postData = request.GetRequestStream();
postData.Write(buffer, 0, buffer.Length);
postData.Close();
// Get and return response
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream Answer = response.GetResponseStream();
StreamReader answer = new StreamReader(Answer);
return answer.ReadToEnd();
}
catch (Exception ex)
{
return "ERROR: " + ex.Message;
}
}
This is pretty much the standard way to POST data to a website from C#. This class is provided in the library in addition to the cryptography stuff in case you ever need to use it.
Notice the following important line of code:
request.CookieContainer = cookies;
This is where we make sure that we send back the cookie information which identifies our session. Without this line of code, we could not have PHP save
our key and IV for use throughout the whole session, meaning each request we would make could not be linked with all of our other requests and we would have
to resend a key with every transaction. When we explicitly send back the cookie that the server gives us (containing the SESSIONID of our connection),
the server can look up our key and IV in its list of connected clients and resume our session.
SESSIONID
Since I am often behind a proxy, I have added the necessary overloads to the POST and GET functions in case you ever need to send information through one.
Ignore the fact that I am catching all errors and returning the error message in a string. Once in a while, you will not be able to access a page, and instead
of getting an encrypted response, you will get an HTML formatted error which could break some stuff in the code.
There is also an AsyncHttpControl class that does exactly the same thing as the HttpControl, except all the calls
to POST or GET will run in the background and raise an OnHttpResponse event with the response text when a response is received.
This is nice in a UI environment where you do not want the UI to freeze up on the user.
AsyncHttpControl
OnHttpResponse
There is also a class called PostPackageBuilder that makes it a little more straightforward for creating POST data.
You simply add an identifier and value pair to the collection through the AddVariable(varName, varValue) method and send
it to the HttpControl class. If you do not want to manually concatenate your variables into a form of "x=1&y=2&z=llama",
then this class will make things work smoother for you.
PostPackageBuilder
AddVariable(varName, varValue)
The final helper class we will look at before we get to the encryption stuff is the static Utility class. This class currently
only contains functions to encode and decode in our special URL friendly base64 format (already introduced in the PHP code above).
Utility
Let's get down to business. First I'm going to throw a class diagram of the RSA class at you, then we will discuss it.
To use the classes that we are going to use to do the RSA encryption, we need two using statements:
using
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
The certificate we have generated and are going to load is an X509 certificate, meaning that it contains, among other things, the public key,
the name of the person who the certificate is made for, and how long the certificate is good for. For our purposes, we are going to ignore
everything but the public key, since that is all we need for the moment. Verifying signatures and expiration dates is beyond the scope of this article.
This class contains a function to load a certificate from either a string or a file. The certificate in string format is simply the entire contents
of the certificate file loaded in to a string; we can use this function to directly load a certificate from the server's response to our request for the public key.
Here is what our sample 1024 bit public RSA certificate looks like:
-----BEGIN CERTIFICATE-----
MIID2jCCA0OgAwIBAgIJAPEru6Ch9es0MA0GCSqGSIb3DQEBBQUAMIGlMQswCQYD
VQQGEwJVUzEQMA4GA1UECBMHRmxvcmlkYTESMBAGA1UEBxMJUGVuc2Fjb2xhMRsw
GQYDVQQKExJTY290dCBUZXN0IENvbXBhbnkxGTAXBgNVBAsTEFNlY3VyaXR5IFNl
Y3Rpb24xFjAUBgNVBAMTDVNjb3R0IENsYXl0b24xIDAeBgkqhkiG9w0BCQEWEXNz
bEBzcGFya2hpdHouY29tMB4XDTExMDcwNDEzMDczM1oXDTIxMDcwMTEzMDczNFow
gaUxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdGbG9yaWRhMRIwEAYDVQQHEwlQZW5z
YWNvbGExGzAZBgNVBAoTElNjb3R0IFRlc3QgQ29tcGFueTEZMBcGA1UECxMQU2Vj
dXJpdHkgU2VjdGlvbjEWMBQGA1UEAxMNU2NvdHQgQ2xheXRvbjEgMB4GCSqGSIb3
DQEJARYRc3NsQHNwYXJraGl0ei5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJ
AoGBAKLEwtnhSD3sUMidycowAhupy59PMh8FYX6ebKy4NYqEiFONzrujkGtAZgmU
aCAQBEmGcfBUDVd4ew72Xjikq0WhBUju+wmrIcgnQcIMAXMkZ2gBV12SkvCzRrJf
5zqO0rC0x/tBli/46KGrzyYLl7K3QFx3MQPNvVO+w/b0coatAgMBAAGjggEOMIIB
CjAdBgNVHQ4EFgQU+6E6OauoEUohJOAgC8OXU3xaHn4wgdoGA1UdIwSB0jCBz4AU
+6E6OauoEUohJOAgC8OXU3xaHn6hgaukgagwgaUxCzAJBgNVBAYTAlVTMRAwDgYD
VQQIEwdGbG9yaWRhMRIwEAYDVQQHEwlQZW5zYWNvbGExGzAZBgNVBAoTElNjb3R0
IFRlc3QgQ29tcGFueTEZMBcGA1UECxMQU2VjdXJpdHkgU2VjdGlvbjEWMBQGA1UE
AxMNU2NvdHQgQ2xheXRvbjEgMB4GCSqGSIb3DQEJARYRc3NsQHNwYXJraGl0ei5j
b22CCQDxK7ugofXrNDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAJ8l
RVFiLgfxiHsrPvhY+i05FYnDskit9QTnBv2ScM7rfK+EKfOswjxv9sGdGqKaTYE6
84XCmrwxCx42hNOSgMGDiZAlNoBJdJbF/bw2Qr5HUmZ8G3L3UlB1+qyM0+JkXMqk
VcoIR7Ia5AGZHe9/QAwD3nA9rf3diH2LWATtgWNB
-----END CERTIFICATE-----
As you can see, it is a base64 encoded string sandwiched between a header and footer. The constructor for the X509Certificate2 class
that we are going to use to load the certificate expects a string containing just the base64 part of this certificate, so we need to parse it out:
X509Certificate2
key = key.Split(new string[] { "-----" }, StringSplitOptions.RemoveEmptyEntries)[1];
key.Replace("\n", "");
This will split the certificate into an array of three strings ["BEGIN CERTIFICATE", "MIID2jCC...ATtgWNB", "END CERTIFICATE"] of which we want
the second element: the base64 encoded certificate. We also need to remove the line ending characters to reconstruct the base64 string into one
long string since it is stored in the file in separate lines of 64 characters each.
Correctly reading in the certificate is one of the areas that might have been confusing for some of you trying to get this to work on your own,
since one of the X509Certificate2 class' constructors expects an XML formatted certificate file, but you have a key generated by OpenSSL in a different format.
You also may not have known exactly what to send to the constructor for a certificate in the format of a byte array. Now that we have the base64 encoded certificate string, we can decode
it to a byte array and send it to the constructor:
return new X509Certificate2(Convert.FromBase64String(key));
If any of the above statements fail while loading a key, we catch the general Exception and throw a FormatException because the only thing that should
cause this code to fail is if the certificate entered was not in the expected format (for example, missing the "-----BEGIN CERTIFICATE-----" tag at the top).
Exception
FormatException
Now that we have a public certificate, we can use it to encrypt stuff (later we will encrypt an AES key). Here is how we encrypt with the public key:
RSACryptoServiceProvider publicprovider = (RSACryptoServiceProvider)cert.PublicKey.Key;
return publicprovider.Encrypt(message, false);
Remember that data encrypted with the public key can only be decrypted with the private key, and that you should only encrypt small messages with RSA
because it's slow. Now we can look at the AES algorithm.
Take a look at the class diagram for the AEStoPHPCryptography class and then we will set up the AES engine. First, we are going to need
to add a using statement for the following namespace:
AEStoPHPCryptography
using System.Security.Cryptography;
Remember from the network model above that the client is responsible for generating the AES key and the initialization vector needed for encryption. We are going to do that first, like this:
Key = new byte[256 / 8];
IV = new byte[128 / 8];
RNGCryptoServiceProvider random = new RNGCryptoServiceProvider();
random.GetBytes(Key);
random.GetBytes(IV);
The RNGCryptoServiceProvider class provides a pretty secure pseudorandom number generator which we are going to use to generate a new AES key each time
we start a new connection to the server. Remember that each client will generate and use a different AES key, and that the server will keep
track of which key to use with which client on the server side. Notice that the Key is a byte array 32 bytes (256 bits) long, and that the IV
is a byte array of 16 bytes (128 bits) long. We fill both of the arrays with random bytes to use for our key. If the bytes are not random (say we type in a password instead), then we will
severely reduce the security of our system. This is why we will just automatically generate the keys using a cryptographically secure random number generator.
RNGCryptoServiceProvider
Key
IV
Now that we have the keys, we can use them to both encrypt and decrypt (we will send them to the server in a different class, so hold on). First we are going
to encrypt something. Let's initialize the AES engine:
RijndaelManaged aes = new RijndaelManaged();
aes.Padding = PaddingMode.PKCS7;
You might be wondering why we are using the RinjdaelManaged class and not some AES class. This is because AES is essentially Rinjdael
with a fixed key size (see the introduction at the top of this article for more information on this). Notice again that we are setting the padding to PKCS7. You will remember from
the PHP section what this is (and if not, you can scroll up and look at it). Now we set the cipher to use CBC, a key size of 256 bits, and then give it the key and IV that we just generated.
RinjdaelManaged
aes.Mode = CipherMode.CBC;
aes.KeySize = 256;
aes.Key = Key;
aes.IV = IV;
Here is where the actual magic happens. We create an ICryptoTransform object which will do the actual encryption for us:
ICryptoTransform
ICryptoTransform encryptor = aes.CreateEncryptor(aes.Key, aes.IV);
We are going to need a place to save the encrypted data too, so go create a MemoryStream:
MemoryStream
MemoryStream msEncrypt = new MemoryStream();
The CryptoStream is what we will write out the message into. It will use the specified encryption algorithm (AES) and write the data after
it has been encrypted to whatever stream we give it, which in our case is a memory stream.
CryptoStream
CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write);
We have a stream to write the encrypted data to, we have a stream to write plain text to that will encrypt the data, now we just need
a StreamWriter to actually allow us to write to the encryption stream:
StreamWriter
StreamWriter swEncrypt = new StreamWriter(csEncrypt);
Here is where we write to the stream that encrypts the message and in turn writes the encrypted message to the MemoryStream:
swEncrypt.Write(plainText);
Free up the resources that we were using:
swEncrypt.Close();
csEncrypt.Close();
aes.Clear();
Finally, we return the encrypted data held in the MemoryStream in the form of a byte array. This is the data that we will then encode
into a base64 string and send to the server.
return msEncrypt.ToArray();
Decryption works in much the same way, except we create a Decryptor from the AES object instead of an Encryptor:
Decryptor
Encryptor
ICryptoTransform decryptor = aes.CreateDecryptor(aes.Key, aes.IV);
And we read the bytes from the CryptoStream instead of writing to it:
string plaintext = srDecrypt.ReadToEnd();
For your convenience, here is the complete code for the encryption function:
/// <summary>
/// Encrypt a message and get the encrypted message in a URL safe form of base64.
/// </summary>
/// <param name="plainText">The message to encrypt.</param>
public string Encrypt(string plainText)
{
return Utility.ToUrlSafeBase64(Encrypt2(plainText));
}
/// <summary>
/// Encrypt a message using AES.
/// </summary>
/// <param name="plainText">The message to encrypt.</param>
private byte[] Encrypt2(string plainText)
{
try
{
RijndaelManaged aes = new RijndaelManaged();
aes.Padding = PaddingMode.PKCS7;
aes.Mode = CipherMode.CBC;
aes.KeySize = 256;
aes.Key = Key;
aes.IV = IV;
ICryptoTransform encryptor = aes.CreateEncryptor(aes.Key, aes.IV);
MemoryStream msEncrypt = new MemoryStream();
CryptoStream csEncrypt =
new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write);
StreamWriter swEncrypt = new StreamWriter(csEncrypt);
swEncrypt.Write(plainText);
swEncrypt.Close();
csEncrypt.Close();
aes.Clear();
return msEncrypt.ToArray();
}
catch (Exception ex)
{
throw new CryptographicException("Problem trying to encrypt.", ex);
}
}
You should be able to get this particular function working in an application that does not use my library by just copying and pasting it in there,
changing Utility.ToUrlSafeBase64 to Convert.ToBase64String, and making sure that you have a 32 byte long key array and a 16 byte long IV array created somewhere.
But we are all using my library, right?
Utility.ToUrlSafeBase64
Convert.ToBase64String
At last, we arrive at the meat of the C# client library that this whole article has been leading up to: the SecurePHPConnection class!
This class does everything for you. It will, in a background thread, do the following and then raise an event when it has established a secure connection:
Once the connection is established, the rest (as the first) is abstracted away from you. You simply call a function to send a message and wait for a response
by either blocking until you get a response or waiting for an event to be raised saying that you got a response.
Here is what we set up in the constructor:
http = new HttpControl();
rsa = new RSAtoPHPCryptography();
aes = new AEStoPHPCryptography();
background = new BackgroundWorker();
background.DoWork += new DoWorkEventHandler(background_DoWork);
background.RunWorkerCompleted +=
new RunWorkerCompletedEventHandler(background_RunWorkerCompleted);
sender = new BackgroundWorker();
sender.DoWork += new DoWorkEventHandler(sender_DoWork);
sender.RunWorkerCompleted +=
new RunWorkerCompletedEventHandler(sender_RunWorkerCompleted);
As you can see, we are using the HttpControl for communication, the RSA class for the key exchange, and the AES class for general communication (all three of these classes
have been previously discussed). We set up two background workers to handle the time consuming stuff without tying up the UI. The class defines two events that anyone can subscribe to:
/// <summary>
/// Event raised when a secure connection
/// has been established with the remote PHP script.
/// </summary>
public event ConnectionEstablishedHandler OnConnectionEstablished;
public delegate void ConnectionEstablishedHandler(object sender,
OnConnectionEstablishedEventArgs e);
/// <summary>
/// Event raised when an encrypted transmission
/// is received as a response to something you sent.
/// </summary>
public event ResponseReceivedHandler OnResponseReceived;
public delegate void ResponseReceivedHandler(object sender,
ResponseReceivedEventArgs e);
The first one we will raise whenever we successfully complete our background connection routine of getting an RSA key, sending the AES
key, and verifying that all is good. The second one we will raise whenever we receive a response to an asynchronous message that we had previously sent (we will not raise this
event when the blocking SendMessageSecure() function is called, only for SendMessageSecureAsync()).
SendMessageSecure()
We are going to need to save the URL of the server's PHP script that we are going to contact, so we can write a simple set function like this:
public void SetRemotePhpScriptLocation(string phpScriptLocation)
{
address = phpScriptLocation;
...
}
And when the client is ready to start the secure connection, he calls this function to start the connection request in the background:
public void EstablishSecureConnectionAsync()
{
if (!background.IsBusy)
background.RunWorkerAsync();
}
Here is the code to establish the actual connection. First we are going to send the request for the public certificate:
string cert = http.Post(address, "getkey=y");
Once we get the certificate (in cert), we can use it to initialize our RSA cipher:
cert
rsa.LoadCertificateFromString(cert);
Next we tell out AES class to go ahead and generate a secure AES key and IV:
aes.GenerateRandomKeys();
And encrypt them with the public RSA key we just got from the server:
string key = Utility.ToUrlSafeBase64(rsa.Encrypt(aes.EncryptionKey));
string iv = Utility.ToUrlSafeBase64(rsa.Encrypt(aes.EncryptionIV));
Now we can securely send the key and the IV to the server for use in our secure session using the HttpControl:
string result = http.Post(address, "key=" + key + "&iv=" + iv);
If all went well up to here, we should get the encrypted response from the server of "AES OK" indicating that all is well on both his side and our side.
A secure tunnel (of sorts) has been created! We can now notify the client by raising the OnConnectionEstablished event:
if (OnConnectionEstablished != null)
{
OnConnectionEstablished(this, new OnConnectionEstablishedEventArgs());
}
Once the client knows that everything has been successfully set up to allow encrypted communication with the server, he can send as many messages
as he wants using the SendMessageSecure() function. Here is what the code looks like:
string encrypted = aes.Encrypt(message);
string response = http.Post(address, "data=" + encrypted);
return aes.Decrypt(response);
As you can see, the message is encrypted and sent in a regular POST request to the server. The response from the server is decrypted and
returned to whomever called the function. This repeats until the client quits or sends a disconnection request.
And that is all there is to it! If you copy over the XML documentation with the .dll for this library, you should have access to enough documentation
to help you use all of the features it provides to the fullest.
Hopefully this article was a help to those of you who either are having problems getting encryption to work between C# and PHP, or are just interested
in anything encryption related. Either way, if you create anything really cool with this library, drop me a message--I would love to see it.
All code included in the download to this article is released under the GPL v3. My library allows you to encrypt in C# and decrypt in PHP, and to encrypt in PHP
and decrypt in C#, using either RSA or AES.
PHP Sec Lib, which is included in the download, is an Open Source project from SourceForge. (See links above.)
This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3):
After install openssl i try to build key, but Approach to the problem::
now What I do? D'Oh! | :doh:
MemoryStream msDecrypt = new MemoryStream (cipherText);
CryptoStream csDecrypt = new CryptoStream (msDecrypt, decryptor, CryptoStreamMode.Read);
StreamReader srDecrypt = new StreamReader (csDecrypt, Ecoding.UTF8, false);
SET NAMES 'utf8'
FormatException was unhandled by user code
The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/223081/Encrypting-Communication-between-Csharp-and-PHP?msg=4507178 | CC-MAIN-2015-11 | refinedweb | 9,600 | 61.36 |
User talk:Anton Latukha
Bots
Hi,
if you are interested in bots, have a look at User:Lahwaacz/ArchWiki:Bots (a draft for the future ArchWiki:Bots page). There is not much content yet, but the links might be interesting for you.
To answer some questions on your user page:
- There is no server-side bot on ArchWiki; all bots are started manually at random intervals. Wikipedia has some continuously running bots, so the idea is not crazy. -- Lahwaacz (talk) 15:37, 12 September 2016 (UTC)
- Yes, I just thought there are many style points and handy ideas that can be automatically parsed on the server side. Just create parsing rules. There is only one problem: the inner workings of MediaWiki are unknown (to me). Start with simple and dumb things, and as the process goes, everyone is going to understand and become confident, and writing new rules can become a standard process. Testing, and then applying. Do you have some sort of test server? Testing can be done simply by parsing some closed, test-filled section of the running wiki. PiroXiline (talk) 18:39, 12 September 2016 (UTC)
- I have a MediaWiki instance installed on localhost for testing, see User:Lahwaacz/LocalArchWiki if you are interested in details. As for style rule checking, there is an open issue for which I have too little time. -- Lahwaacz (talk) 19:07, 12 September 2016 (UTC)
- The <br> tags cannot be replaced automatically because there is no way to tell if it was added intentionally or not. Even if the lists were the only problem, it would be hard because parsing lists and tags correctly is very difficult. -- Lahwaacz (talk) 15:37, 12 September 2016 (UTC)
- Thanks. I can live without that br idea )) PiroXiline (talk) 18:39, 12 September 2016 (UTC)
- But really, I thought: Kate accomplished the highlighting very precisely. Tags work right, any lists right, any mix I saw was right. It even highlights your "The <br>" example.
- I tried to find their highlighting sources already, while researching the colouring of output (I wanted to see what they use for such a great highlight of everything), but did not see them at first glance. They changed the old process of committing highlights that googles well. Somewhere today I found a page where they pointed out how to do it. But I went off.
- I mean, to highlight, they need to parse it in the first place. ~~~~
- As I said, the other way, the Atom package translated from Mac, did not work completely. But it is on GitHub. ~~~~
- [[Help:Style#Code formatting]] says that "^ " is for simple one-liners and {{bc}} for multi-line code blocks.
I hope you find the other answers soon!
-- Lahwaacz (talk) 15:37, 12 September 2016 (UTC)
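The automated style-rule checking discussed above could be sketched in a few lines. This is only a toy illustration — the function name and the detection heuristic are my own invention, not part of any existing ArchWiki bot. Following the point made about `<br>` tags, it only flags suspicious tags inside list items for human review instead of replacing them, since a program cannot tell whether a tag was intentional:

```python
import re

def flag_br_in_lists(wikitext):
    """Flag (not replace) <br> tags that appear inside wiki list items.

    Replacing them automatically is unsafe -- a human must decide whether
    each tag was intentional -- so this rule only reports line numbers.
    """
    findings = []
    for lineno, line in enumerate(wikitext.splitlines(), start=1):
        # MediaWiki list items start with *, #, ; or : markers.
        if re.match(r"[*#;:]+\s*", line) and re.search(r"<br\s*/?>", line, re.I):
            findings.append(lineno)
    return findings

sample = "* first item<br>\n* second item\nplain paragraph<br>\n"
print(flag_br_in_lists(sample))  # -> [1]
```

A real server-side checker would need a proper wikitext parser rather than line-based regexes, for exactly the reasons given above: lists and tags interact in ways a regex cannot capture.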
- Thank you.
- No need to correct the typos in my userpage Notes section; there can be complete scrambles there, not covered with tags and very messy. From today I use that section so as not to forget things and ideas as I go.
- I do not know the inner workings of the ArchWiki workflow at the level needed now: which pages are proper for posting which suggestions (because I do not know what pages exist for that), or what you admins track.
- For now I see that many fit in /Style.
- <Offtopic>
- I learn to use the best tools that each do one thing properly. Bookmarks are for permanently storing interesting sites, the OneTab extension for temporarily offloading interesting sites to process, Pocket for reading things later, and so on. An extensive folder structure to sort things. And so on.
- And then I am gradually going to formulate/research questions; iterating and evolving create shapes.
- And then make all the prettiness.
- And then post it to the proper section.
- Now I moved to Kate anyway, there is very pretty highlighting. And not to flood with commits. And to go underwater ))
- If there was inline editor with MediaWiki markup (so you simultaneously see all the text with the markup in dynamic HTML mode, see Abricotine for Markdown to understand).
- It going to be waay better for all world. It would speed-up writing and make cozy to work with markup of any wiki articles dramatically.
- And there is no such now. I looked. There are wikED, and in Atom some port from Mac of MediaWiki markup, which not work properly and abandoned (3 commits) long time ago. PiroXiline (talk) 18:39, 12 September 2016 (UTC)
- As you will maybe find out, the MediaWiki markup language is very difficult and annoying to work with from programs. That's why there are so few visual editors, and most of them have many bugs or even conflict with ArchWiki's style guidelines. We also cannot hope for any development of the language; by now it is practically a dead language. It was developed for Wikipedia and aims for maximum backwards compatibility. Wikipedia has about 15 years of history, which is many hundreds of terabytes of data, so the language cannot be changed easily. For now you can enable "Use live preview" in the preferences, and there are probably some userscripts to have syntax highlighting in the browser. -- Lahwaacz (talk) 19:21, 12 September 2016 (UTC)
- Yeah. It has its limits. But it is not true that it is all dead. Maybe the current language is dead.
- But wikis are too important to the world.
- There is going to be a rise of questions; problems arise.
- Someone creates a new language. New concepts.
- And of course there are going to be conversions, translators to the new language or platform.
- The main thing is, the info is here.
- Then a tool is going to be developed to translate everything to the new stage. Maybe in one solid snapshot. There is no other way. No one is going to start from scratch to gather all that info that was done worldwide by all people. - PiroXiline (talk) 20:10, 12 September 2016 (UTC)
- I dressed ArchWiki in dark mode using your user script feature. In the most popular "Archlinux and ArchAssault Dark" official homepage style everything works very smoothly. I do not know why there is no single official ArchWiki dark skin. It is so pleasant to look at and work here in dark mode. Or at least suggest one for everyone )
- And as a wiki-side script it is uniplatform. My Arch, Chrome, Fox, Android - everything ticks like a clock.
- But the wikEd edit window looks crazy white now. Kate is better for me anyway. Emacs - I still lose myself there.
- In the skin all works; I only nitpick that Show Changes marks changed text in grey on light blue - not very contrasting colors. But it must be one touch to the script.
- I am trying to slowly, gradually work on Color output in console.
- But I deliberately chose a very hard, almost impossible topic. I am doing the research on my own.
- I figured out how everything works together, and how to properly separate all the terms from each other.
- But there are Xresources, VT types, the ncurses library, TERMCAP, escape codes. That is where I need to dig in. I started a little on theory today, offline.
- I am going to go full freelance some day soon, so my work on the wiki is going to slow down.
- I am thinking of ways to contribute not only with ideas,
- but by building new references, style guide references. Nitpicking of contradictions also helps.
- The main point I understand: only some of my style ideas really need to be forced into rules on users. There is too much of everything already. We need to try to ease the start even more. There are around 10 long articles you need to read to make something good and to understand things.
- I understand that my language needs to be in the Wu wei manner: to introduce and to explain why doing it this way is great. Like RFCs work in the Wu wei manner. To explain, to recommend. And to start trends.
- I installed Wiki Monkey today, but have not understood it yet, because I have not used it.
- I want to create messy and less messy notes and then say Butt! to it. And look at what it does, and what I can do ))
- Your answers are very helpful. They shorten the circuit of research. -- PiroXiline (talk)
Idea about interlinks
Listen everyone. I have an idea. ))
I know I am not the first one. I remember when nonexistent titles were broadly used.
It was a long, long time ago. Maybe the technology, the wiki engine and the critical mass of IT professionals are ready to make a quantum leap again.
Distributed version control, and all open source with it, made a great step by changing to the GitHub concept.
But really, with a little automation by templates we can exploit MediaWiki in our favor. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
Algorithm
- It is sane to forbid the creation of links to non-existent titles, because it is like winning a lottery: the probability of hitting the target in the future is tiny.
- If you want to reference or show something:
- Find and make a link to the ArchWiki subsection if it illustrates the topic.
- Find another source of information.
- Try to search for an authoritative source.
- For basic information
- Official site. Make external link.
- Wiki in the project Github repository. Make external link.
- For detailed technical information
- Special wiki. Make a MW:Help:Interwiki_linking link.
- Special Wikipedia articles. Make a MW:Interwiki_linking link.
- Other very, very great sources:
- Great forum posts
- Bug reports on the Arch bug tracker or GitHub, or on the official bug tracker. Relative to scope.
- And add the short main info where you write (like the package name for pacman).
- PiroXiline (talk) 18:58, 10 September 2016 (UTC)
- Maybe we can put this checklist in one of our existing articles? — Kynikos (talk) 02:43, 11 September 2016 (UTC)
- That is why I try to write sections in this structured manner.
- I clearly see that link posting can be described in one algorithm.
- But it is better placed as help, guidance, reference - not a rule. ArchWiki is overburdened with a rule threshold for newcomers. Anyway, people are going to love using a good help-checklist.
- I think the checklist needs more thought, research and incremental work before placing. -- PiroXiline (talk) 18:07, 22 September 2016 (UTC)
Exception
- If you saw the resources and now see that the topic needs a more technical explanation, in Arch Linux scope, like
ghb:
- Only one exception here is real and practical.
- If the name of a link to a non-existent article is one strict word like "Kate" (remark: most (95%) of List_of_applications entries are one-word names). (Maybe someday ease the restriction to strict well-known or popular names like Open Shot, KDE Connect.)
- Then make a link named Name of article to the non-existent article.
This can be automatically controlled by a bot. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
- How do you define "strict well-known or popular names" without generating controversy? – Kynikos (talk) 02:51, 11 September 2016 (UTC)
- Here are my thoughts about that:
Strict well-known or popular names, acronyms
- Names of core utilities, GNU projects.
- Acronyms of protocols and standards widely adopted in their fields.
- If the name is the literal name of a man page, or man(1) redirects the name as an alias to another man page.
- If the name is the literal name of a package (or meta-package) in the Core, Extra or Community repositories of Arch Linux (as you know, Core, Extra and Community have one shared namespace).
- Also maybe:
- Acronyms overall.
- With the checklist from the comment points:
- Acronyms & single-word names.
- Acronyms have plural meanings over the world.
- But almost absolutely all acronyms have one meaning on ArchWiki:
- People understand scope and context and filter out what is meant. So, say, people can filter out 1/10 of acronym meanings here.
- Roughly half, if not more, of what is left is software-development related (not ArchWiki scope).
- The dev world loves to reinvent the wheel in the first place.
- Then comes the love of loudly naming every sneeze.
- Because of branding. To serve food on the table, and some want huge money. Every new small thing is called a 'technology' (like in Microsoft) and has a name. Frameworks. Development concepts, patterns. Languages with their different and excessive terminology for everything.
All that mostly does not touch ArchWiki.
- ArchWiki is really not a place where devs would post detailed software-development articles. It is not the place. It is a place for ArchWiki admins, and maybe Arch devs.
In other words: only major dev acronyms are used, and all others are used relatively rarely and specifically on ArchWiki.
- StackOverflow launched a technical documentation initiative.
So all that goes out of the way. (1/2)
- Most acronyms, thank God, have one sense from the open-source perspective. Proprietary terms are used rarely on ArchWiki. (1/2 of acronyms)
- Then all this, in our case, is filtered by the *nix perspective (Win, Mac, iOS, Android are important topics on ArchWiki, but I point out they are not excessively or majorly used and covered). (1/2)
- Half of what is left is Linux related. The portion of Mac OS X and BSD goes away (say 3/4 of acronyms stay).
- ArchWiki is a narrow platform for more administration- and architecture-oriented Linux topics, revolving around Arch - if I can say, an Arch Linux 'management' wiki. And then many acronyms (for example, those of other distributions, or many related to kernel internals) fall off in the Arch Linux perspective. (say 4/5 of acronyms are left)
- Let us also remember that not every acronym really needs an ArchWiki reference page. Many acronyms are too small in info or scope to sanely require any reference to another ArchWiki place; many are discussed internally and mostly in articles specific to them (like Java terms). So if needed, the contributor can provide a direct link to that ArchWiki article section (as it all works now), or the person is going to search for the related article by himself. (say 3/4 of terms stay)
- Note: All of that also applies to single-word names. Maybe change the word acronym to single-word name, and include the term acronym in single-word name.
- So we have the estimation:
- 1*(1/10)*(1/2)*(1/2)*(1/2)*(3/4)*(4/5)*(3/4) = 0.005625
- The probability that an acronym is going to really, seriously collide is 0.006 in a reference article. I think we can live with 0.6% of references not showing us the proper place. But in return we have a huge gain in cross-referencing and condensing of information.
- An example off the top of the head: DRM - Digital Rights Management and Direct Rendering Manager. Which do you think is going to be pointed to on an ArchWiki reference page? There is no sense in creating a Digital Rights Management info stash on ArchWiki; it is a very fragmented theme in ArchWiki scope. And then DRM logically always points to an external link, or to the proper place on ArchWiki for that product/format. So obviously we discuss Direct Rendering Manager here. And the reference [[DRM]] is also a bad reference page now, because it is covered in ATI, NVIDIA, Intel. And that is why [[DRM]], if it comes to exist, will become a sort of term-reference and disambiguation page.
- So we can safely assume acronyms in Arch Linux always have one meaning. -- PiroXiline (talk) 16:58, 22 September 2016 (UTC)
- I'm lost, this all started because you were proposing to allow red links in some cases, right? Then I asked you to define "strict well-known or popular names" objectively (especially the emphasized adjectives), and you wrote another essay on the meaning of "acronym"? How did we even get there? You seem to have very elaborate ideas, but you should focus on explaining them in a more exact way, getting to the point without digressing to loosely related matters...
- — Kynikos (talk) 14:20, 23 September 2016 (UTC)
- Yes. The first list says what can safely be allowed.
- Then I go into the details of a thought estimation, looking at the question: how many identical acronyms (or single-word names) with different meanings collide on ArchWiki. Like the DRM example.
- -- PiroXiline (talk) 23:45, 23 September 2016 (UTC)
- Please note that Simplicity is one of the founding philosophies of our community: this usually also means that if something is very complicated to implement, with many exceptions and special cases, then maybe it's the implementation itself that should be rethought; in this case you would like red links allowed for some cases, but defining these cases requires paragraphs over paragraphs of rules apparently... Maybe this Simply means that disallowing red links altogether as it is now is the best idea after all?
- — Kynikos (talk) 14:20, 23 September 2016 (UTC)
- Yeah. I proposed and wrote a lot of stuff. This is the biggest, most complex idea. That is why I am proposing to move this whole idea to my talk page. It is very multipart. Maybe I can chip out simple and useful things, and propose them one by one.
- I also agree that sometimes I go into very detailed explanations, because I want to be sure that what I wrote is explained, understandable and has some ground.
- You have a point about complexity and the rethought process. I thought to leverage the community, so ideas start to evolve and take shape. And forbidding red links is a simple and effective solution for now.
- -- PiroXiline (talk)
Automatic control of exception by Bot
- If a new commit was done
- Check whether the added content has a link to a nonexistent article
- If the name of that article is more than one word
- Strip the linkage. Add quotes. If the word 'see', 'look' or 'look at' was used before the link, change it to 'search for'.
So any new links to nonexistent articles are going to be stripped, and we require them to be in the Name of article form, so the text is left untouched. Example: "For troubleshooting this, see Extensive debugging" converts to "For troubleshooting this, search for Extensive debugging". You are doing troubleshooting research, and the word 'search' encourages you to search for that term, and also leaves a feeling that you should make a link to useful information there for other people after you. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
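The stripping step above can be sketched as a regex pass over the wikitext. This is only an illustrative sketch, not an actual ArchWiki bot: the function name and pattern are mine, and a real bot would also need the list of existing titles (e.g. fetched from the MediaWiki API):

```python
import re

# [[Title]] or [[Title|label]], optionally preceded by "see" / "look at" / "look".
LINK_RE = re.compile(r"(?:(see|look at|look)\s+)?\[\[([^\]|]+)(?:\|[^\]]*)?\]\]",
                     re.IGNORECASE)

def strip_multiword_links(wikitext, existing_titles):
    """Unlink references to nonexistent multi-word titles: drop the [[...]]
    markup, quote the title, and turn a leading 'see'/'look (at)' into
    'search for'. Links to existing or one-word titles are kept."""
    def repl(match):
        verb, title = match.group(1), match.group(2).strip()
        if title in existing_titles or " " not in title:
            return match.group(0)              # keep the link as-is
        quoted = '"%s"' % title
        return "search for " + quoted if verb else quoted
    return LINK_RE.sub(repl, wikitext)
```

For example, strip_multiword_links("For troubleshooting this, see [[Extensive debugging]].", set()) yields 'For troubleshooting this, search for "Extensive debugging".', while a one-word link like [[Kate]] passes through untouched.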
Change Template for empty page
If a human opens an empty page, today the template says to him:
There is currently no text in this page. You can search for this page title in other pages, search the related logs, or edit this page.
If we could change the message of that template to:
There is currently no such article.
You can:
- Create a redirect to Archwiki section where topic is covered.
- search on other pages
- Create this page
- Search the related logs
Logs are in last place, because the average user does not know what they are. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
- I get a different message when opening a non-existing page, specifically MediaWiki:Newarticletext. Anyway expanding it could be a good idea. – Kynikos (talk) 02:50, 11 September 2016 (UTC)
Automation and process of one-word article redirect creation
If Create a redirect is hit, the editor opens automatically with the Redirect template and the searched title already placed inside. An example stub link shows how to place a proper redirect link to an internal ArchWiki page.
The individual leaves this tab open. He found no information there, and he is ready to contribute, because he is researching this topic now anyway.
He continues his searches on the wiki (maybe he already knows where to find it, because he is researching the topic now, or he knows the wiki well and just checked whether the article exists).
If he finds a source, he places the link. He needs to preview all that and submit.
The comment section can be generated automatically or filled with a standard message, if we decide on going-commando auto-interlinking. Or the individual needs to write a message - this is the more protection-wise approach.
So many referencing articles are going to be created. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
- This is not very clear: are you suggesting to pre-populate new pages with some content derived from the title? – Kynikos (talk) 02:56, 11 September 2016 (UTC)
- If it happens, it is going to be great to pre-populate the new page with the {{Redirect|Target article|reason}} template.
- So Target article is:
Place the proper internal name of the article/section here, like 'Help:Editing#Internal_links'; see Help:Editing#Internal_links and Template:Redirect.
- And MAYBE reason is:
Place a proper explanation of why this page is internally linked to that place. But I do not think an exquisite reason is needed.
- Better let the contributor provide the reason in the commit summary. He needs to provide it there anyway. And the history of commits of these pages is small and directly shows everything. -- PiroXiline (talk) 17:46, 22 September 2016 (UTC)
- Why would anyone create a new article and want Template:Redirect put there automatically? And, even if possible to set up, this should happen every time you create a new page? It would be too confusing and complicated, let us close this :) — Kynikos (talk) 14:29, 23 September 2016 (UTC)
- I thought we discussed that several weeks ago. It is surprising to see you forgot.
- We were talking about creating automatic redirects for simple terms, names of apps, technologies, acronyms, so that people's attention and effort are focused on the sections about those terms.
- I suggested: while a human is on a new (not created) page, add a "Create redirect to ArchWiki section" action, so that it is easy for people to create a redirect to the proper place on the wiki. So people after that are going to be streamed there.
- I did not say anywhere that 'this should happen every time you create a new page'. I wrote 'If Create a redirect is hit' (on a new page). And we are discussing that here.
- So I do not know why you closed this. And I see that, like all the others with fresh energy that came and faded out, I am getting demotivated by things like this. -- PiroXiline (talk) 23:25, 23 September 2016 (UTC)
Improved Archwiki interlinkage concept
That is going to greatly improve ArchWiki interlinkage. People will create interconnections by themselves. And fix broken links by themselves.
But for now, someone simply writes Kate, and the redirect is going to point to List_of_applications/Documents.
If someone finds or makes a better description of Kate cheats, the link will be switched there by people.
Someone writes KDE Connect. Then he himself, or someone else, goes to that link and creates a reference to the KDE#Integrate_Android subsection, where it is appropriate to explain what KDE Connect is and point to the package, the AUR, the official site.
So now we do not need to explain on every mention what
gdb is and where information about it lives, because GDB points to Step-by-step_debugging_guide#Gdb. And we make there a base of GDB knowledge: extensive overwriting makes it a bigger section over time, links to external resources emerge, more cases with gdb get covered. This improves bug reporting to Arch Linux and to all open source.
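For reference, such a pointer page needs nothing beyond the standard MediaWiki redirect syntax; the whole content of the GDB page would be a single line:

```
#REDIRECT [[Step-by-step debugging guide#Gdb]]
```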
If someone moves Step-by-step_debugging_guide#Gdb elsewhere, people - or the human who moved it - are going to link it right again; he probably researched and saw that GDB points there.
And when the information on GDB grows, it becomes a real GDB article. The individual who creates it is going to delete the redirect in GDB. And now all GDB links point to the GDB article. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
- This is not very clear: are you suggesting to create more redirects to existing article sections? We are already encouraging that :) — Kynikos (talk) 02:58, 11 September 2016 (UTC)
Conditions
So all this works under these conditions:
- Protect all this with strict restrictions on links to non-existent articles, per #Algorithm.
- #Exception only for one-word links.
- Automate a bot to strip the others.
- Add a redirect feature to the template of non-existent pages.
- The trend starts automatically.
I need to point out that in Help:Editing#Internal_links, linked from the main page, nonexistent links have been encouraged for a long time already, and no one went mad. So this #Algorithm is a clean-up, a clearing out, and protection. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
- Since a system like this, even if approved, would take time to be implemented, I have fixed the inconsistency from #Sentences about non-existent articles contradict each-other for the moment. – Kynikos (talk) 01:53, 11 September 2016 (UTC)
Results
Because now ArchWiki asks, and making a link is simple - it takes one click - people are going to start making links from nonexistent pages to the corresponding information on ArchWiki.
All this automatically creates all the single-word links and points them to information.
What needs to be addressed:
- Make the automation. Tweak the templates and the bot.
- Clearly explain how to get to edit mode if a page is a reference, so people can delete the reference. Or make this a feature.
- We get many new little pages, and the engine makes references.
- Articles are going to become more atomic. That is very great, but it also means more linkage and more articles.
- If the link in a reference is not satisfiable, you need to go to the search page. Big deal.
So these are the benefits:
- People automatically start creating one-word articles, linking them to the proper ArchWiki subsections.
- People automatically fix broken links.
- People see that one-word links are a trend and start generating one-word nonexistent links. Which boosts point 1.
- The all-around interlinkage of the wiki improves greatly.
- Links focus people's effort in one place. So when a topic gets critical mass, an article spawns.
- Every link automatically points to the proper article, both when the article is created and as people track changes.
- Articles are going to become more atomic.
What problems do the admins see? ArchWiki and its admins have some experience in this. - PiroXiline (talk) 18:58, 10 September 2016 (UTC)
The idea has a wide spread. Maybe, so that we do not get lost in a myriad of details, I will move all of this to my talk page. And then find where it is proper to start, and publish point by point - so as to build incremental changes and discuss them, and move point by point to the whole picture. -- PiroXiline (talk)
Weekly Meeting Notes 2017¶
- Table of contents
- Weekly Meeting Notes 2017
- December 20, 2017
- December 06, 2017
- November 29, 2017
- November 22, 2017
- November 15, 2017
- November 8, 2017
- November 1, 2017
- October 25, 2017
- October 18, 2017
- October 11, 2017
- October 04, 2017
- September 27, 2017
- September 20, 2017
- September 13, 2017
- September 6, 2017
- August 30, 2017
- August 23, 2017
- August 16, 2017
- August 09, 2017
- August 02, 2017
- July 26, 2017
- July 19, 2017
- July 12, 2017
- July 5, 2017
- June 14, 2017
- June 07, 2017
- May 31, 2017
- May 24, 2017
- May 17, 2017
- May 10, 2017
- April 26, 2017
- April 19, 2017
- March 22, 2017
- March 01, 2017
- February 22, 2017
- February 15, 2017
- January 18, 2017
- January 18, 2017
- January 11, 2017
- January 04, 2017
December 20, 2017¶
Parag Mhashilkar, Marco Mascheroni, Dennis Box
- v3_2_21 status
- Dennis
- Other tickets
- Marco Mascheroni
- Will look at monitoring breakdown for Meta Sites
December 06, 2017¶
Marco Mambelli, Parag Mhashilkar, Marco Mascheroni, Dennis Box
- Project News
- Stakeholders meeting on Dec 7 3:30 - 4:30pm.
- v3_2_21 status
- It is possible to include these tickets if they get resolved by the end of this week.
- Meta-sites ticket: Mambelli reviewed and gave feedback. Mascheroni is going over the comments.
- Dennis got the Condor-CE issues resolved and can work on unprivileged Singularity.
November 29, 2017¶
Dave Dykstra, Dennis Box, Marco Mambelli, Marco Mascheroni, Dave Mason, Parag Mhashilkar
- Project News
- Stakeholders meeting on Dec 7 3:30 - 4:30pm
- Singularity
- Found the cause of why bind mounts were disappearing on SL7.4 with overlay turned on. A fix has been put in for CVMFS 2.4.3. Dave has built a new version 2.4.2-1.1 for OSG with the patch. It should be released soon.
- Singularity 2.4.1 was released upstream; it broke EL6.
- There is no quality control in Singularity. Not enough testing is going on for EL6.
- Dave found out how to attach into a Singularity container to get access as a user and see processes.
- Deployment at FNAL
- Currently deployed at UNL and Tier-2
- At the beginning of February CMS wants Singularity installed on every EL6 host. CMS is requiring Singularity to be on SL6, but it may not go there. Working to have a test cluster deployed by the end of this year - a combination of T1 & LPC. The big transition is the move to container (Docker) based SL7. There are CMS meetings internally on how to proceed with this and move forward. runc and charliecloud will run if unprivileged namespaces are available.
- Dennis: Set up SL7 to try out Singularity.
- CMS
- Nothing from Dave for today's meeting
- v3_2_20
- It is in OSG integration testing and should be released in OSG 3.2 & 3.3 (December)
- OSG moved to a rolling release schedule.
- Developers
November 22, 2017¶
Dennis Box, Marco Mambelli, Marco Mascheroni, Jeff Dost
- OSG Operations
- Developers
- Dennis Box
- Marco Mambelli
- Marco Mascheroni
- Meta sites: slides presentation
This is solution1, first step towards possible multi-site configuration: - to set limits across a ret of Entries - to avoid double counting because of multiple CEs pointing to the same cluster Some notes about the solution proposed (beside what's in the presentation): There is an Infosys section that may not be handled correctly: how to correlate the info DB that tells how to create the entry first. Is there to connect in BDII, but it is mostly a failed attempt - BDII is not updated - not all info is in BDII, e.g. if a site prefers 8 cores This solution is similar to a proposal from Jeff several years ago: one more level of the tree where there is a site level and entries underneath. For that, you should be able to put the attributes in the entry set level or the entry level. Jeff will forward an old google doc about this tree-like structure. Don’t worry about the requests for this iteration but may help w/ decisions to leave options open for the future. Monitoring has not been touched, so you will see only the aggregate numbers for the entry set. Entry sets will look like a new entry and no sub-entries underneath. Jeff: For site debugging would be important to see the single CEs because one may misbehave and would be difficult to troubleshoot if you cannot single it out. Would like to look at both: total and be able to drill down to single CEs. The configuration directory allows multiple entry files. The splitting of Factory entries is because of different groups owning and sharing these files/entries: 1. staff shared w/ CMS and non-CMS; 2. CMS only entries; 3. ITB factory has all of the production files plus one (or more) for testing It is similar to condor config.d: files are read alphabetically and newer can overwrite entries in older ones There are situations where entries from a set are in the same file. 
But sometime entries are shared across multiple VOs, with a queue per VO or a queue accepting multiple VOs In an entry set all the entries have to have the same auth method and trust domain Double counting only matters per VO, we don’t care about non CMS, because when 2 different VOs are involved counting is different. Could an entry be added to different entry sets (eg. different entry sets for different VOS)? yes if different VOs are involved. There would be no double counting problem Monitoring is the most important missing feature, the other features mentioned here are only thought experiments and would need to be tested with the first version. An entry sets is all in one file in this first version. An entry set is a block, the last file that defines it is overriding the entry set definition Marco next steps will be: - finish the test - define what to add in the current version - prepare a working version that can be released and tested
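For illustration only - the actual configuration syntax was still being designed at the time of this meeting - a grouped-entries block in the factory XML might look something like this (element and attribute names are hypothetical, as are the hosts):

```xml
<!-- hypothetical fragment: two CEs of the same cluster grouped so that
     limits and counting apply to the set rather than to each entry -->
<entry_set alias="T2_Example_Set" enabled="True">
  <config>
    <max_jobs><per_entry glideins="400"/></max_jobs>
  </config>
  <entries>
    <entry name="T2_Example_ce01" gatekeeper="ce01.example.edu ce01.example.edu:9619"/>
    <entry name="T2_Example_ce02" gatekeeper="ce02.example.edu ce02.example.edu:9619"/>
  </entries>
</entry_set>
```

The point of the sketch is only the grouping: the limit lives at the set level, while each CE keeps its own entry for drill-down debugging.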
- v3.2.20 has been released and is in OSG testing
November 15, 2017¶
Dennis Box, Marco Mambelli, Marco Mascheroni
Small attendance because of CMS week and Supercomputing. December 7 will be the next stakeholder meeting
- v3.2.20 Status
- RC4 has been tested successfully; the release will be cut after the meeting.
- Developers
- Marco Mascheroni
- Marco Mambelli
- Working on v3.2.20 RC and release. Did single core and multicore jobs testing.
- Dennis Box
- Been testing release candidate. No problems on SL6 and SL7 upgrades.
November 8, 2017¶
Dennis Box, Marco Mambelli, Marco Mascheroni, James Letts, Antonio, Jeff Dost, Parag Mhashilkar
- CMS
- James:
- #17221: Marco Mambelli - Work has not started yet
- Some sites are shortening the proxy lifetime to 24 hours. There is a feature in HTCondor that lets you refresh the proxy.
- Jeff: We only see this in the European sites and don't see this in
- Antonio: KIT admins refused to do it, saying that it is a security measure.
- Parag: This will have a bigger impact on the VO.
- OSG Factory Operations (Jeff)
- Will respond to Marco's email.
- Will try to put rc3 on ITB and try to send pilots/tests
- v3.2.20 Status
- RC3 is out. So far it is ok
- Jenkins is reporting a unit test failing. Marco is investigating
- Developers
- Marco Mascheroni
- Marco Mambelli
- Working on issues for v3.2.20 and its release.
- Haven't updated the FactoryEntryStatusNow page. waiting on Jeff's feedback.
- Dennis Box
November 1, 2017¶
Dennis Box, Marco Mambelli, Dave Dykstra, Erik Vandeering, Dave Mason
- Singularity (Dave Dykstra)
- Problems with EL7.4 kernel (bind mounts not working) are not showing up in new installations
Hyunwoo Kim is moving to HEPcloud development, Dennis will increase effort
Lorena will likely start January 22
- v3_2_20 status
- Not much progress this week. Need to make changes and cut a release candidate.
- Dennis Box
- Working on 2531
October 25, 2017¶
Dennis Box, Marco Mascheroni, Marco Mambelli
- v3_2_20 status
- Not much progress this week. Need to make changes and cut a release candidate.
- Dennis Box
- Working on 2531, we talked about the categories (binned values with the name of the biggest value: JobsStart_0, 2, 5, 10, many) and adding them to the web monitoring
October 18, 2017¶
Jeff Dost, Dave Dykstra, Parag Mhashilkar, Dennis Box, Marco Mascheroni, Marco Mambelli
- Singularity (Dave Dykstra)
- 2.4 released. Dave recommends OSG wait for a couple of months and meanwhile test it on a couple of big sites before releasing to all sites. Several of the pull requests from Dave are included.
- Problems with EL7.4 kernel
- With Docker at UNL: all bind mount points disappear in the middle of the jobs. It is not easily reproducible, and the UNL admins are trying to reproduce it.
- Also issues with autofs: with the SL7.4 kernel and Docker, FNAL notices autofs crashes once in a while. The workaround is to never unmount.
- When the timeout is 0, autofs does not update the access time of the file - a known issue with the EL7 kernel. The workaround again is to set timeout > 0 and never unmount.
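The workaround amounts to one option in the automounter master map; a sketch for a CVMFS mount (the value is illustrative - the point is just a large non-zero timeout so volumes are effectively never unmounted):

```
# /etc/auto.master - avoid --timeout=0; use a large value instead
/cvmfs  program:/etc/auto.cvmfs  --timeout=86400
```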
- CMS
- Talking to Jeff about the top collector in the global pool, trying to debug why factory records (??). A year ago we started filtering non-essential classads. The frontend queries the collector for idle glideins, so the frontend may get outdated idle counts, which will affect the frontend pressure. Claimed idle, claimed running. Query the top and secondary collectors and see how much they differ.
- A third source is analyze_glideins. Where is the info coming from - the frontend, or stats scavenged from the glidein output? The formatting of analyze_glideins should consider multicore, if it does not already.
- OSG
- voms-proxy-fake: a red herring for the GlideinWMS team. The main issue is the proxy format: condor_submit crashes the schedd with a proxy that is missing attributes. It has nothing to do with the condor_root_switchboard.
- condor_root_switchboard
- condor 8.7 does not support the switchboard. The proposal from the HTCondor team is to pass the code to us to maintain, and they will help us trim it.
- v3_2_20 status
- Not much progress this week. Need to make changes and cut a release candidate.
- Marco Mascheroni
- An experiment in CMS: the frontend match expression changed format from list to string and it crashed. The error handling can be improved.
- Need to work on the meta sites.
October 11, 2017¶
Jeff Dost, Dave Mason, Parag Mhashilkar, Dennis Box, Marco Mascheroni, Marco Mambelli.
- CMS
- Discussions with James
- After a shutdown at CERN, CMS started up long-lived pilots with OpenStack. They start and stop at the same time. It took about 24 hours to ramp up to speed. Jeff is waiting on info from CERN and can use the spread. CERN asked to limit to 1 VM per cycle. We may be able to increase the number of glideins submitted per loop but further control the rate of submission to OpenStack through HTCondor-G config knobs.
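The HTCondor-G knobs in question are presumably the gridmanager throttles; a sketch with illustrative values (the macro names are standard HTCondor configuration):

```
# condor_config fragment on the factory schedd:
# cap how many submissions can be in flight / queued per remote resource
GRIDMANAGER_MAX_PENDING_SUBMITS_PER_RESOURCE = 5
GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE = 1000
```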
- #13069 - Meta Sites: No progress from Marco Mascheroni last week. Plan to resume the work next week
- #17221 - Glidein auto removal: No update. Working on v3.2.20 release.
- condor_switchboard support dropped by htcondor.
October 04, 2017¶
Jeff Dost, Dave Dykstra, Marco Mambelli, Dennis Box, Parag Mhashilkar, Marco Mascheroni, Dave Mason
- Singularity
- New release v2.4 is being prepared
- Dave had to make couple of changes to have it compiled for OSG
- Brian has a security concern: mounting arbitrary image files as a whole file system. Could potential FS race conditions be exploited? There is a configuration option to turn it off. There is a FW attribute, pinned_only, that allows you to append to a file but not modify it. It needs to be set by root. Works on EXT/local/Lustre FS but not on NFS/BeeGFS. Does not impact unprivileged singularity.
- v3_2_20 status
- Going through log glideinwms and htcondor warn/error messages for RC.
- Cleared some issues and found some more issues
- factory is trying to qedit even if the config has it disabled. Working with Marco to fix it.
- CMS
- No updates
- Developers
- Marco Mascheroni
- Working on meta sites
- Focus on fixing the condor qedit
- Dennis Box
- Checked in automated tests and scripts.
- Hyunwoo Kim
- Busy with GPGrid/HEPCloud
- Marco Mambelli
- Testing release candidate
- condor switchboard has been removed from htcondor 8.7.2. So far it is in 8.6.4. We need a solution for alternative ways of doing it.
September 27, 2017¶
Jeff Dost, Marco Mambelli, Dennis Box, Parag Mhashilkar, Marco Mascheroni, Dave Mason
- v3_2_20
- Tested rc2. No unexpected errors.
- Cut release today
- Next release to be shorter release cycle
- Singularity
- Jeff's monitoring request to be in this cycle
- Futurize stage 1
- CMS
- James
- Looking at the retire time of the glideins; studies show that the requested wall clock time in jobs is sufficiently accurate and can be used to drain. If we cut it in half we won't. As per Jeff, James can override the jobs' lifetime
- Request from Antonio: a solution that does not remove a bunch of glideins and cause churn. #17221
- Developers
- Marco Mascheroni
- Meta sites. Done with configuration path. Working on making entry sets advertise as a single entry.
- Dennis Box
- Testing RC2 and automating tests.
- Need to understand how to monitor logs for errors.
- Marco Mambelli
- Doing testing/install/upgrade for v3.2.20 rc
September 20, 2017¶
Dave Dykstra, Jeff Dost, Marco Mambelli, Dennis Box, Hyunwoo Kim, Antonio Perez, Parag Mhashilkar
- Dave Dykstra
- OSG has released singularity
- Singularity is in CVMFS
- Learned that kernel 7.4 is default kernel and updating to latest kernel should get unprivileged singularity access if you enable it.
- New singularity v2.3.2
- Has fixes to loading images from docker hub
- Is holding references to calling process in image directory so it does not get unmounted.
- v3_2_20
- Frontend RRD bug has been fixed. Needs to be merged. In fix rrd option, some files were not fixed.
- Build RC later today.
- OSG Operation
- CMS
- Meta Sites & Aliases: There has recently been discussion with CERN; CERN has deployed a large number (~15) of different CEs and some of them are redundant. This is imposing limitations on the factory side. Marco Mascheroni has been working on this for the past couple of releases and is dedicating his effort to this issue.
- The method for long-queued pilots is helping a bit. There are still a number of sites complaining. Need to open a ticket for removing glideins in a smart way, rather than auto-retiring old idle glideins
- Developers
September 13, 2017¶
Marco Mambelli, Marco Mascheroni, Dennis Box, Hyunwoo Kim, Dave Mason, Parag Mhashilkar
- v3_2_20
- Still getting errors in the logs when there is an upgrade or a new entry is added. In the ticket for adding new monitoring info to the factory. Sent mail to Jeff with a link, asking for his feedback.
- Other issues have been fixed
- Feature change of loading singularity executable which is very specific with SL7 and kernel version
- Additional testing
- Looking at logs and found couple of issues with rc that were handled
- CMS
- Developers
- Marco Mascheroni
- Nothing much to report
- Hyunwoo
- Fixed issues found in rc last week
- Working on init scripts where second execution of reconfig is blocked
- Next planning on new two singularity tickets
- Dennis
- Not much progress last week
- Marco Mambelli
- Fixed unittests for master branch
September 6, 2017¶
Marco Mambelli, Marco Mascheroni, Dennis Box, Hyunwoo Kim, Eric Vaandering, Dave Dykstra
- Singularity
- Singularity unprivileged installed in OASIS.
- Would be nice to try unprivileged singularity bin before the other one (requested in ticket)
- Brian brought up a new cvmfs feature (used by CMS, LIGO): access to cvmfs by voms proxy. Requires a different Linux session (setsid starts a new session and changes the parent to be init) so that the proxy is not shared.
- Is condor using different sessions (startd, starter)? - Marco will ask Friday
- It also affects signals: we may need a process to trap and send signals to the new session. Singularity is not changing session; inside singularity a script could do that (this way the new parent would be the Singularity process and not init).
- Dave will open a new ticket for this
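A minimal sketch of the session-isolation idea above (illustrative only, not glideinwms code; command and function names are assumed): starting the payload with setsid gives it a fresh session, so per-session state is not shared with the parent, and signals then have to be forwarded to the new process group explicitly by a wrapper.

```python
import os
import signal
import subprocess

def run_in_new_session(cmd):
    # start_new_session=True makes the child call setsid() before exec,
    # so it becomes the leader of a fresh session/process group
    return subprocess.Popen(cmd, start_new_session=True)

def forward_signal(proc, sig=signal.SIGTERM):
    # signals delivered to the parent are not seen by the new session,
    # so a wrapper must relay them to the child's process group
    os.killpg(proc.pid, sig)  # after setsid, the pgid equals the child's pid
```

A wrapper process would install handlers with signal.signal() that call forward_signal, matching the "trap and send signals to the new session" idea above.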
- Marco Mascheroni, working on other cms duties mostly, and in part on 1st part of metasite ticket. Will move to 2nd part soon
- Testing 3.2.20 RC1
- Dennis, observed an error about a missing key when accessing RRD data. The attribute is there (in RRD) when a fresh install is done, not w/ upgrade. Investigating further
- Dennis did install and upgrade smoke tests and they work.
- Marco did a manual install w/ smoke test and it works fine. Observed an error when a new entry is added, only visible at the first reconfig with the new entry
- Both Marco and Dennis are investigating further
- HyunWoo - found minor bug in singularity, fixed already
- No news from Eric
August 30, 2017¶
Parag Mhashilkar, Dennis Box, Jeff Dost, James Letts, Hyunwoo Kim
- v3_2_20 rc1 testing
- Dennis could not find v3_2_20 rc1
- Basic smoke tests are ok
- Hyunwoo will be testing it today
- Jeff: It's in osg-3.4 and not in osg-3.3 development
- CMS (James)
- CPU efficiency in GlideinWMS and HTCondor. Cutting the queue to 3 hours has been helpful.
- Next wastage is draining of glideins. CMS will be looking at the configuration in case of multicore glideins
- Feature request from computing operations
- Production workflows would find it useful if they could easily stop jobs from starting at a site (maybe it is broken)
- Seems more like a condor_qedit for existing jobs + condor_config changes to add a default start/site exclusion
- OSG Operations (Jeff Dost)
- Developer
- Dennis Box
- Not much progress
- Hyunwoo
- Marco Mambelli reviewed singularity ticket. Made changes accordingly
- Start working on asynchronous feature of SL7 sysctl.
August 23, 16, 09, 2017¶
Dave Dykstra, Marco Mambelli, Parag Mhashilkar, Jeff Dost
- Singularity - Dave Dykstra
- Logging for singularity is working in HTCondor 8.6.5 and is in the August OSG release. Waiting on Marco to test the bug fix.
- Singularity seems to be deployed at CMS Tier-2. Maybe Atlas too (?)
- Any job/glidein going to an HTCondor CE has logging
- The RHEL 7.4 release has an optional feature which allows singularity to run as an unprivileged user. It needs a kernel boot parameter. Dave will be testing it. Need to verify that it works.
- OSG operations. Jeff Dost.
- Jeff sent email with details
- v3_2_20
- #13807: Hyunwoo looking at the singularity feedback from Brian and VOs. Should be done by end of this week.
- #14559: Marco should be done by end of this week.
- Release candidate end of the week
- #17343: Issues with SL7 and reload.
- Jeff: how is reload done? Cycle: STOP - RECONFIG - RESTART. For the scale of OSG, a reload can take up to 5 mins. Any protection against back-to-back quick reloads? Marco thinks if a reload is in process it will go into oblivion. Jeff suggests having a warning message that a reload is in progress.
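One way the suggested guard could work (a sketch under assumptions — the lock path and function names are made up, not actual glideinwms code): hold a non-blocking exclusive lock for the duration of a reload, and print a warning instead of starting a second one.

```python
import fcntl
import sys

def guarded_reload(lockpath, do_reload):
    """Run do_reload() unless another reload already holds the lock."""
    with open(lockpath, "w") as lockfile:
        try:
            # non-blocking exclusive lock: fails fast if a reload is running
            fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print("warning: a reload is already in progress", file=sys.stderr)
            return False
        do_reload()  # the lock is released when the file is closed
        return True
```

flock() locks are per open file description, so a second invocation (even from the same host) is refused until the first reload finishes.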
August 02, 2017¶
Marco Mambelli, Marco Mascheroni, Dennis Box, Antonio, Parag Mhashilkar, Hyunwoo Kim, James Letts, Eric Vaandering
- v3_2_20
- Marco Mambelli
- Marco Mascheroni
- CMS
- Antonio
- Continuing preparation for scale tests starting today. Got feedback from Edgar/Jeff/Marco/Parag on multi glidein slots. Running scale tests on grid sites in opportunistic & dedicated unused resources.
- multiple p-slot
- multiple p-slot per glidein
- push collector & negotiator and multiple queries
- intensity of jobs
- timeline is dependent on several factors but focus in August or into September
- An issue addressed in the past: putting monitoring info into the glidein job's classad through qedit, which is expensive. Maybe a good idea to enable this with a dedicated factory for the above scale testing.
- Tuning on queue limits
- Reduced idle glidein time limits to 1 hour. Not cancelling held glideins yet. Maybe have a time limit for held glideins for their removal.
- Hyunwoo
- Writing up how our singularity support is different from what UNL is using
- Our code requires new features in glideinwms, just like factory-frontend
- From CMS: frustration from sites setting up singularity. May have some issues with isolation of the user environment and running CMS software.
- Will resume "why my job is not running"
- Thomas
- Working on Jenkins
- Document Work
- documented coding guidelines
- Marco Mambelli
- Need to test changes with python 3 as well
- Dennis Box
- Did not do a whole lot on glideinwms.
- Automated testing script: found an issue with EOF and named pipes. Removed tee and piping and it works.
July 26, 2017¶
Marco Mambelli, Marco Mascheroni, Dennis Box, Antonio, Parag Mhashilkar, Dave Mason
- v3_2_20
- There are 2 high priority tickets under progress
- #14559 (Mambelli): Should be completed before the end of the week.
- Jeff wants to make sure that monitoring is fixed, as it is confusing: some of the lines show pilots while some show cores, and the info is overlaid in monitoring. In the fixed-slots case, glideins should match how man... In addition to cores, glideins and cores requested, he also wants to see partially used glideins and unused cores. ONLY WANT TOTAL PILOTS and TOTAL CORES in the factory monitoring. SLOT info is not relevant. More info Nov 2016.
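A toy illustration of the totals requested above (the field names are made up for the example, not the factory's actual monitoring schema): aggregate per-glidein core counts into total pilots, total cores, unused cores, and partially used glideins.

```python
def totals(glideins):
    """glideins: list of dicts with 'cores' (provisioned) and 'used' cores."""
    t = {"pilots": len(glideins), "cores": 0, "unused_cores": 0,
         "partially_used": 0}
    for g in glideins:
        t["cores"] += g["cores"]
        t["unused_cores"] += g["cores"] - g["used"]
        # a glidein with some, but not all, of its cores claimed
        t["partially_used"] += int(0 < g["used"] < g["cores"])
    return t
```

For example, one full 8-core glidein plus one 8-core glidein running a single 3-core job would report 2 pilots, 16 cores, 5 unused cores and 1 partially used glidein.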
- #13069 (Mascheroni)
- Didn't get a chance to work on this and will try to work on it. May not be able to complete it for this release
- There are two parts. Introduced new tags in the factory configuration for an entry set that can list entries. Entry sets are published differently.
- A collection of sites has different weighting factors.
- Grouping of entries only happens for similar or standard entries. Are we approaching the xrootd model of federated entries? Multiple factories with the same entries do not see each other's limits. Not easy to solve without making factories aware of each other. This is a rate problem, as entries in multiple factories fill up fast.
- CMS
- we are removing glideins every hour. Three hours was too long.
- Met
- Factory OPS: ITB & RC testing
- Only testing on ITB when ready to upgrade Production.
- Was ramping up a new hire but it didn't work out. Understaffed to help out with testing releases and RCs
- Burt
- Quality control issues in glideinwms have been bubbling up, and this is not reflecting well on the project, group and lab
July 19, 2017¶
Marco Mambelli, Marco Mascheroni, Hyunwoo Kim, Dennis Box, Eric Vaanderig, Antonio, Parag Mhashilkar
- v3_2_20
- We still plan to go in OSG's August Release [], deadline (dev freeze) is 7/24; we want to reserve about a week for RC testing
- Antonio
- No new request compared to GWMS stakeholder meeting.
- Main work is on improving CPU efficiency: condor issues, GWMS issues
- Would like to check more the number of activations reported by Glideins. Number of pilots with no activation is an interesting metric.
- Currently removing pilots from the queues if they are older than 3 hours; thinking of reducing this to 1 hour. This is going to cost a lot of pilot renewals: + it will keep all the pilots relevant, - it may do some harm to the factory
Things seem to be OK; we had this in place already for a couple of weeks. Applied this only to the main Frontend group, dedicated resources, no opportunistic (and HPC) resources
- This has not been helping a lot because requests lately have been very spiky, so even 3 hours seems not enough. Whitelisting is also very specific on where some jobs can run; this increases spikiness.
- Marco Mascheroni
- Marco Mambelli
- Fix unit tests
- Working on 14559
- Reminder to run unit tests
- Reminder that futurize tests of the code will be added soon
- Eric, Parag
- No news
July 12, 2017¶
Marco Mambelli, Marco Mascheroni, Thomas Hein (Hyunwoo Kim, Dennis Box sent updates)
- v3_2_20
- We want to go in OSG's August Release [], deadline (dev freeze) is 7/24; we want to reserve about a week for RC testing, so we have almost 2 weeks
- Hyunwoo
- Worked on Singularity OSG scripts.
- Working more on tests to verify that Singularity is OK
- Dennis
- Worked on entry scalability
- Problem with select: there is a limit set at compile time of 1024 file descriptors, which would limit the Factory to ~510 entries. Possible solutions (excluding recompiling the interpreter): use multiple selects on segments of the file descriptors, use multiple entry groups, or use poll instead of select
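The select() ceiling mentioned above comes from the compile-time FD_SETSIZE constant: select() cannot watch any descriptor numbered 1024 or higher, no matter how few are watched at once, while poll() registers descriptors individually and has no such limit. A minimal POSIX sketch of the poll-based alternative:

```python
import os
import select

def wait_readable(fds, timeout_ms=100):
    # poll() has no FD_SETSIZE ceiling, so fd numbers >= 1024 are fine
    poller = select.poll()
    for fd in fds:
        poller.register(fd, select.POLLIN)
    return [fd for fd, _ in poller.poll(timeout_ms)]

r, w = os.pipe()
os.write(w, b"ping")
ready = wait_readable([r])  # the read end is reported ready
```

Swapping select.select() for a poll object like this is the "use poll instead of select" option listed above.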
- Marco Mascheroni
- Work on 13069: Balancing glidein pressure to sites that are aliases or Meta-Sites
- Split the ticket in 2: 1. add entry sets with entries sharing common elements, 2. manage balancing within the set
- Marco Mambelli
- Troubleshoot and fix empty Factory job stats bug
- Working on 14559
- Python3
- Marco forwarded to the list Thomas email to the GlideinWMS developers highlighting the changes in syntax and new suggested idioms
July 5, 2017¶
Marco Mambelli, Hyunwoo Kim, Dennis Box, Thomas Hein
- v3_2_20
- We want to go in OSG's August Release [], deadline (dev freeze) is 7/24; we want to reserve about a week for RC testing, so we have almost 2 weeks
- All 3.2.20 High priority tickets should go in, Marco, Dennis and HyunWoo agree
- Python3
- Introduced Thomas work: modernization of the code, getting it ready for Python 3, adding futurize tests in Jenkins
- Thomas will send an email to the GlideinWMS developers highlighting the changes in syntax and new suggested idioms
June 14, 2017¶
Parag Mhashilkar, Marco Mambelli, Hyunwoo Kim, Dave Dykstra
- Singularity (Dave)
- Worked on logging. Got the plugin from Brian. Can now keep track of jobs that start, stop and are running. Configurable to clean up tracking if no info is there for more than x hours. Depends on the glideins reporting to the HTCondor-CE Collector.
- Will ask Ken Herner to configure some sites to send classads to Dave's test collector
- Working with Hyunwoo on Singularity. Do not have an option to disable singularity in the job's classad.
- EL6 limitation: cannot automount inside a container a dir that is already mounted outside. So this script needs to mount the dirs on EL6.
- v3_2_19
- ITB should have v3.2.19 since Suchandra tested in ITB
- The condor view problem went away with the collector using shared port
- Brian and Tim Theisen have been pushing for the collector to use shared port
- Hyunwoo
- Worked on Singularity OSG scripts.
- Struggling with the CVMFS issues. Should check on EL6 that mounting the same dir in multiple containers on the same worker node is ok
- Marco Mambelli
- Multiple node submission on Cori
- The Fermicloud shared home dir has issues. Will have to restart rpcbind and ypbind. The port mapper daemon is having some issues. We need a fix from Scientific Linux.
June 07, 2017¶
Parag Mhashilkar, Marco Mambelli, Dennis Box, Marco Mascheroni, James Letts, Hyunwoo Kim
- v3_2_19
- Is in OSG testing repo
- Should be in the June OSG release
- v3_2_19-1 in osg-v3.3 and v3_2_19-2 in osg-v3.4 (removes redundant dependency in spec file)
- CMS
- No news
- Developers
- Marco Mascheroni
- Will start working on the Meta sites.
- Marco Mambelli
- Hyunwoo Kim
- Singularity ticket: Will make more progress based on Brian's scripts
- Dennis Box
- Been busy with Jobsub:
- Tested upgrading to 3.2.19 and a fresh install; both worked
May 31, 2017¶
Parag Mhashilkar, Dave Dykstra, Eric Vaandering, Marco Mambelli, Dennis Box, Marco Mascheroni, James Letts, Hyunwoo Kim
- Singularity (Dave)
- Trying to demonstrate the logging feature when singularity runs jobs. Test setup with HTCondor CE. Can send ads to CE.
- Brian has written support that allows python plugins. Can load a python plugin now, but there is a problem with the shared object being loaded. Brian is trying to help with that.
- Even Wisconsin builds don't help
- Marco helping with site setting env variable to forward all glidein info to the collector
- Marco talked to OSG and they will add singularity with the next release. It is currently in OSG upcoming. Glideinwms will support singularity in the next production release. The ticket is being worked on.
- Glideinwms - singularity status: Hyunwoo started with scripts from Brian Bockelman, but they are tailored to their environment. Has simple scripts working.
- Dave will need help from Glideinwms team to test it.
- CMS
- Upcoming release will include
- Improve glideins scale down
- Linking Frontend monitoring from Factory monitoring
- Log number of activation/claims per glidein
- Support for HTCondor v8.6 configuration
- Requires upgrade to factory and frontend
- Glideinwms v3.2.19
- Final release done yesterday
- Now in OSG production
- Developers
- Marco Mascheroni
- Scale down ticket
- Meta Sites
- Dennis Box
- Testing v3.2.19
- Been busy with Jobsub
- Automated deployment: Problems with logging
- Hyunwoo Kim
- Singularity Support: May need 2-3 weeks
- Marco Mambelli
- HTCondor 8.6 changes
- Job being sent to default schedd. Changes to HTCondor configuration parsing along with how we do it was the issue. New settings are compatible with old HTCondor versions
- condor_config_val now defaults to tool environment and based on the daemon calling it, dump will dump different info.
- Glideinwms 3.3 will go in OSG-3.4-upcoming
- Students coming June 19
May 24, 2017¶
Attending: Marco, Marco, Dennis
Parag started the meeting, HyunWoo sent email
- Developers:
- HyunWoo:
- I have been working on this singularity issue and I am currently adding new code for this ticket.
- Dennis:
- Moved 2 tickets to 3.2.20, will complete the one in feedback
- Marco Mambelli:
- Will complete 15892 by tomorrow morning
- Marco Mascheroni:
- 16414: applied feedback; we discussed it in the meeting. Better to start the timeout from the queueing time than from EnterCurrentStatus, to avoid resets in case of preemption or jumping through hold.
- CMS
Marco discussed at the CMS meeting and Antonio would prefer to have direct control of the complete periodic_remove expression, not just removing Idle glideins.
To avoid jobs that go to idle after being held.
These may be missed in the expression.
Marco Msc will investigate the behavior of the factory with held jobs. According to Marco Mmb there is no periodic release, and
there is no reason to automatically release glideins that had been held? (Correction: the release is needed for glideins already running jobs. There, a hold due to a temporary network problem could otherwise kill user jobs.)
May 17, 2017¶
Parag Mhashilkar, Marco Mascheroni, Dave Dykstra, James Letts, Antonio, Hyunwoo Kim, Eric Vaandering, Marco Mambelli
- Singularity
- Feature for logging in HTCondor in who is using singularity is in 8.6.3 and is available in upcoming
- Asking Tony to try it out. Dave is learning to install HTCondor CE
- Can we send parameters to glidein -- yes
- CMS
- Removing from the factory queues any glideins older than x days. Marco Mascheroni is working on it and will have a prototype by today. Will work with Mambelli on how to test it. The question is how to pass the info from frontend to factory.
- v3.2.19
- Will try to make it by OSG release
- Status didn't change much since last week
- Updated the status of tickets that will released in v3_2_19
May 10, 2017¶
Parag Mhashilkar, Marco Mascheroni, Dave Dykstra, Dennis Box, James Letts, Antonio
- CMS
- Trying to improve the efficiency of global pool
- Checking age of pilot in the system. Noticed that we keep a lot of old pilots.
- Marco: Talked to James during HTCondor week. Removing pilots idle in local queue but not in remote queue. Removing pilots in remote queue is tricky
- Antonio: If we lose our spot in the queue, that's ok. If we are in fair share, losing our spot is ok. There are held pilots. Killing old pilots is ok. Also it is ok to remove remote pilots that are over a couple of days old. There are multiple CEs per site. There are single entries per CE. If we are over-provisioning, we start removing old pilots (older than 24 hours)
- Check if we remove held glideins and also forcefully.
- Start removing idle glideins that are local and also idle for more than x hours
- Use periodic remove
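A sketch of the removal policy being discussed (thresholds are placeholders; in HTCondor's job ClassAds, JobStatus == 1 means Idle and QDate is the queue-entry time): flag glideins that have sat idle past a cutoff so they can be removed, or equivalently encoded in a periodic-remove expression.

```python
import time

def stale_idle(queue, max_idle=24 * 3600, now=None):
    """Return ids of glideins idle (JobStatus == 1) longer than max_idle."""
    now = time.time() if now is None else now
    return [job["id"] for job in queue
            if job["JobStatus"] == 1 and now - job["QDate"] > max_idle]
```

The same condition, expressed as an HTCondor periodic-remove policy, would be roughly `JobStatus == 1 && (time() - QDate) > 24*3600`.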
- Plots+Notes from Antonio
- QDate for all the pilots in the CMS global pool for running, idle and held status in the factory schedds. As we discussed, the main observation is that we apparently have quite old stuff in the queues, even if the majority of pilots which are running are relatively "fresh" (they were requested within 2 to 3 days ago)
- Looking at it from a site's perspective (PIC Tier 1), this plot shows pilots running and queued classified according to the date they entered the local queue. As it shows, in some cases such as this, we are running old pilots requested about a week ago. This is basically breaking the correspondence between workload pressure and glidein pressure in situations when we have fluctuating workloads in the global pool (e.g. requests that only take about a couple of days to complete, then pretty much nothing for the next few days). Hence the suggestion to fixing it by trying to remove any pilots in the system that did not start running perhaps 48h (or 24h) after requested by the FE, in order to recover the correlation to existing demand.
- Then, looking at it as a function of time, this is the number of pilots classified as running, idle or held from factory perspective, for a couple of sites:
- As I mentioned, there are clear differences in the mix amongst CMS sites. For PIC above, we have about 5x total more glideins than max running (=size of the CMS pledge at the site). Going through the full queue explains the picture in c): once the pilots get to run, they are already a week old. In contrast, the second site (UK Tier-1 at RAL) has much less in queue compared to running, so it's going to be running fresher pilots (=following more closely the actual demand for their resources).
- Marco Mascheroni
- Looking at removing idle glideins
- Meta sites: lower in priority
- Marco Mambelli
- v3_2_19 Status - OSG dev freeze in 2017-05-30
- RC next week. Some small bug fixes.
- Singularity support should be done by end of this week
- Changes in HTCondor for subsystem config changes. Impact is visible with HTCondor 8.6
April 26, 2017¶
Parag Mhashilkar, Marco Mascheroni, Dave Dykstra, Dennis Box, James Letts
- glexec & singularity
- Singularity taking over at OSG
- HTCondor CE plugin to collect logs from glideinwms pilots, to tell the collector which VO users are running jobs etc. Brian wrote the plugin. This is a requirement for FNAL to install it here. Dave suspects other sites may be interested in running this plugin. The plugin will be part of HTCondor
- We are waiting on FNAL for feature request
- CILogon Basic CA is not recognized in Europe. FNAL uses this CA so users can't run in Europe. Following options:
- JNR is only allowing FNAL users. Mine is trying to see if this can be distributed
- BNL has a package but it will take up to a year
- OSG can make agreement with CILogon to give a list of Identity providers that are approved to be Silver level which is approved in Europe
- OSG does not plan to distribute glexec in OSG-3.4 which is expected in next few months
- CMS:
- Parag: Is it possible to kill a job if it is the only one in a multicore glidein?
- James: When trying to drain, in the case of some sites it took upwards of 72 hours to drain them
- takes a while for the central manager to go away
- wait for glideins to go away
- requests the frontend has made
- when job pressure is low we still get a lot of glideins.
- Can we automate the removal of glideins?
- Marco Mascheroni
- Create and assign to yourself a ticket on auto removal of glideins, to help with faster draining
- Dennis Box
- No meeting next week. Most of us will be at HTCondor week
April 19, 2017¶
Notes from James Letts
Ad hoc meeting between the GlideinWMS Developers and the CMS Submission
Infrastructure Group
Attending: Marco Mascheroni, Antonio, James, Marco Mambelli
- Since the regular glideinWMS developers meeting has been cancelled for the past couple of weeks, CMS called an informal chat to discuss some issues we have been having with ramping down the Global Pool. Bursty job submission patterns are something SI cannot do anything about, but we noted that while we can ramp up the Global Pool very quickly and efficiently (with few idle cores), when job pressure becomes reduced then draining glideins and even new running glideins that were submitted to the sites' batch systems during high pressure waste a lot of CPU cores. Since this happens often, it has become visible to the sites and to their funding agencies. CMS SI have made fixing this problem a top priority.
- After defining the problem, we proposed two areas where improvements might be made in glideinWMS, realizing that solutions may also need to come from HTCondor. One possibility would be cancelling no-longer-needed idle glideins when the job pressure is reduced. Firstly we need to understand the current logic inside glideinWMS.
- During the switchover of the HTCondor Central Manager machine at CERN, it was observed that the idle glidein queue at PIC was over 48h long. This was during a period when the pool was contracting, so new glideins were not needed according to frontend pressure. Apparently there is functionality in glideinWMS to remove idle glideins at the site batch queues when the frontend pressure drops. Is this turned on?
- ACTION ITEM: Marco Mambelli will investigate what is the current mechanism is, regarding the aggressiveness of submission, investigate consequences for ramp up and ramp down, what the built-in delays are, etc.
- CMS noted that the glideins at T2_CH_CERN ramp up as well as down really efficiently, unlike most other sites, e.g. T2_BE_IIHE, for which plots were compared.
- ACTION ITEM: Marco Mambelli will investigate if there are any special factory settings for the entries at T2_CH_CERN responsible for this behaviour.
- Everyone noted that the CMS Global Pool has a single frontend, that makes the request for resources effectively a single-user model. Could the frontend request removal of idle glideins when no longer needed? Tune retire time?
- We also discussed controlling the pilot pressure, i.e. the number of idle glideins sent to individual sites. CMS observes that sites with multiple CEs get more pilot submissions. Can glidein factories be site aware and scale pressure appropriately? Are entries in fact being tuned to reflect changes in sites, i.e. addition of resources? In principle the tunings are in git.
- On the HTCondor side, we are investigating depth-wise filling of multi-core glideins.
March 22, 2017¶
Parag Mhashilkar, Marco Mascheroni, Hyunwoo Kim, Dennis Box, James Letts
- CMS:
- Fighting scalability issues with HTCondor central managers when we get to 200K scale. Queries from different systems are blocking top level Collector. Currently, part of the problem is running out of memory on CERN Machine
- v3.3.2 Status
- v3.3.2 rc3 released yesterday
- Issues with HTCondor 8.4.11 and 8.6.0 where the new-style config SCHEDD2.BLAH does not work correctly
- With both the 8.5 and 8.6 series you get warnings with old configurations
March 01, 2017¶
Parag Mhashilkar, Marco Mascheroni, Hyunwoo Kim, Dennis Box, Eric Vaandering, James Letts, Antonio Perez-Calero,
- Release Status
- 3.2.18 released earlier this week.
- 3.3.2 working on the release and bug fixes
- CMS
- Most changes we are working on are on the HTCondor collector and negotiator scalability issues
- Developer updates
February 22, 2017¶
Parag Mhashilkar, Marco Mascheroni, Hyunwoo Kim, Dennis Box, Eric Vaandering, Dave Dykstra
- Dave Dykstra (glexec)
- At JNR we were submitting jobs from here but it was failing because of the CILogon Basic CA. They get their config from EGI. In Europe, for some VOs anyone is allowed to join. At NIKEF they are working on functionality to list the VOs one can join
- Brian has been asking users to use SL7 starting April (?) if site has singularity.
- Release Status: v3.2.18
February 15, 2017¶
Parag Mhashilkar, Marco Mascheroni, Hyunwoo Kim, Jeff Dost, Dennis Box, James Letts, Antonio, Dave Mason
- Project News
- Marco Mambelli is taking over as the Technical Lead
- Will be switching to zoom
- Release Status:
- v3.2.17 is in OSG release
- Issues with use of daemon function and pid file for the glideinwms services
- v3.2.18 is being worked on and will be in the next OSG release
- CMS
- Factory Operations
- Covered in topics above
January 18, 2017¶
Parag Mhashilkar, Marco Mascheroni, Hyunwoo Kim, Eric Vaandering, Jeff Dost, Dennis Box
- v3.2.17 Release Status
- Release is out in development repo and will be available in OSG Feb release
- CMS
- No Big News
January 18, 2017¶
Parag Mhashilkar, Marco Mambelli, Hyunwoo Kim, Dennis Box, Eric Vaandering, Jeff Dost
- v3.2.17 Release Status
- Marco tested rc1 on SL6 and SL7
----
January 11, 2017¶
Parag Mhashilkar, Marco Mambelli, Hyunwoo Kim, Dennis Box, Eric Vaandering, Jeff Dost
- CMS
- No new news
- OSG Factory Operations
- Jeff: Ken Herner reported issues after changing the 2MB to 2048 KB.
- v3.2.17 Release Status
January 04, 2017¶
Parag Mhashilkar, Marco Mambelli, Hyunwoo Kim, Dennis Box
- Release candidate tomorrow, so resolve all the pending tickets. | https://cdcvs.fnal.gov/redmine/projects/glideinwms/wiki/Weekly_Meeting_Notes_2017 | CC-MAIN-2019-30 | refinedweb | 6,720 | 62.78 |
Most code in Twisted is not thread-safe. For example, writing data to a transport from a protocol is not thread-safe. Therefore, we want a way to schedule methods to be run in the main event loop. This can be done using the function callFromThread:
from twisted.internet import reactor

def notThreadSafe(x):
    """do something that isn't thread-safe"""
    # ...

def threadSafeScheduler():
    """Run in thread-safe manner."""
    # will run 'notThreadSafe(3)' in the event loop
    reactor.callFromThread(notThreadSafe, 3)

reactor.run()
Sometimes we may want to run methods in threads. For example, in order to access blocking APIs. Twisted provides methods for doing so using the IReactorThreads API. Additional utility functions are provided in twisted.internet.threads. Basically, these methods allow us to queue methods to be run by a thread pool.
For example, to run a method in a thread we can do:
from twisted.internet import reactor

def aSillyBlockingMethod(x):
    import time
    time.sleep(2)
    print x

# run method in thread
reactor.callInThread(aSillyBlockingMethod, "2 seconds have passed")
reactor.run()
The utility methods are not part of the reactor APIs, but are implemented in the twisted.internet.threads module.
If we have multiple methods to run sequentially within a thread, we can do:
from twisted.internet import reactor, threads

def aSillyBlockingMethodOne(x):
    import time
    time.sleep(2)
    print x

def aSillyBlockingMethodTwo(x):
    print x

# run both methods sequentially in a single thread
commands = [(aSillyBlockingMethodOne, ["Calling First"], {}),
            (aSillyBlockingMethodTwo, ["And the second"], {})]
threads.callMultipleInThread(commands)
reactor.run()
For example, to run a method in a thread and get its result as a Deferred, we can do:

from twisted.internet import reactor, threads

def doLongCalculation():
    # .... do long calculation here ...
    return 3

def printResult(x):
    print x

# run method in thread and get result as defer.Deferred
d = threads.deferToThread(doLongCalculation)
d.addCallback(printResult)
reactor.run()
If we wish to call a method in the reactor thread from another thread and wait for its result, we can use blockingCallFromThread:

from twisted.internet import reactor, threads
from twisted.web.client import getPage
from twisted.web.error import Error

def inThread():
    try:
        result = threads.blockingCallFromThread(
            reactor, getPage, "http://twistedmatrix.com/")
    except Error, exc:
        print exc
    else:
        print result
    reactor.callFromThread(reactor.stop)

reactor.callInThread(inThread)
reactor.run()
The thread pool is implemented by twisted.python.threadpool.ThreadPool.
We may want to modify the size of the thread pool, increasing or decreasing the number of threads in use. We can do this quite easily:
from twisted.internet import reactor

reactor.suggestThreadPoolSize(30)
#21874 closed Cleanup/optimization (fixed)
Consistent handling for app modules with no (or empty) __path__
Description
This ticket is a followup from discussion that occurred on in the course of resolving #17304. The description here is lengthy as I've tried to be comprehensive, but really the issue impacts only very unusual edge-case app modules.
Currently in master, if an app module has no __path__ attribute, its appconfig gets a path of None. This is intended to indicate that the app has no directory path that can be used for finding things like templates, static assets, etc. It is likely to break some naive code that iterates over app-configs using their paths. (For instance, the current code in AppDirectoriesFinder breaks with a TypeError if any app-config has a path of None).
If an app module would have an empty __path__, currently an obscure IndexError would be raised.
If an app module has a __path__ with more than one element in it (namespace package), ImproperlyConfigured is raised.
All of the above is bypassed if the app uses an AppConfig subclass that has an explicit path as a class attribute - in that case, that path is unconditionally used and no path detection occurs.
The __file__ attribute is not currently used at all for determining an app's filesystem path.
Here are some types of app-modules that could be referenced in INSTALLED_APPS, and the contents of their __path__ and __file__ attributes:
A) An ordinary package, e.g 'myapp' where myapp is a directory containing an __init__.py. In this case, the module object's __path__ will be ['/path/to/myapp/'] and its __file__ will be '/path/to/myapp/__init__.pyc'
B) A namespace package with only a single location. In this case, __path__ is length-one, just as in (A), and there is no __file__ attribute.
C) A namespace package with multiple locations. __path__ is length > 1, and there is no __file__.
D) A single-file module, e.g. 'myapp' where there is just myapp.py. In this case, the module will have no __path__ attribute, and __file__ will be /path/to/myapp.pyc.
E) A package inside a zipped egg, e.g. 'myapp' where there is a zipped egg myapp-0.1-py2.7.egg added to sys.path in a pth file, and the zipfile internally contains a file myapp/__init__.py. In this case, __path__ will contain the bogus non-path ['/path/to/myapp-0.1-py2.7.egg/myapp/'] and __file__ will similarly be a bogus non-path '/path/to/myapp-0.1-py2.7.egg/myapp/__init__.pyc'. (A package or module within an unzipped egg is no different from case (A) or (D).)
F) A module loaded by some hypothetical custom loader which leaves __path__ empty.
So currently for cases (A), (B), and (E) we use the single entry in __path__ (even though in case (E) that path doesn't exist), and for case (C) we raise ImproperlyConfigured and require the user to explicitly supply a path on the AppConfig. For case (D) we set appconfig.path to None, and for case (F) we raise an obscure IndexError. I consider our handling of cases (D) and (F) sub-optimal.
I think it would be preferable if appconfig.path were never set to None. The algorithm I would propose for appconfig.path is this: (i) if AppConfig.path is set explicitly, use it, (ii) If app_module.__path__ exists and is length 1, use its only element, (iii) if app_module.__file__ exists, use os.path.dirname(__file__), and (iv) if no path can be deduced from the __path__ and __file__ attributes, then raise ImproperlyConfigured and tell the user they need to set the path explicitly on their AppConfig.
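The proposed order (i)-(iv) is easy to sketch; app_path and the standalone ImproperlyConfigured below are illustrative, not Django's actual code:

```python
import os
import types

class ImproperlyConfigured(Exception):
    pass

def app_path(app_module, explicit_path=None):
    """Resolve an app's filesystem path per rules (i)-(iv) above."""
    if explicit_path is not None:          # (i) explicit AppConfig.path wins
        return explicit_path
    paths = list(getattr(app_module, "__path__", []))
    if len(paths) == 1:                    # (ii) ordinary package
        return paths[0]
    filename = getattr(app_module, "__file__", None)
    if filename is not None:               # (iii) single-file module
        return os.path.dirname(filename)
    raise ImproperlyConfigured(            # (iv) give up, demand explicit path
        "Cannot deduce a filesystem path; set AppConfig.path explicitly.")

# case (D): a single-file module
single = types.ModuleType("myapp")
single.__file__ = "/path/to/myapp.py"
print(app_path(single))  # /path/to
```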
Raising ImproperlyConfigured for any type of app module that we supported in previous versions of Django would be a backwards-incompatible change. As far as I am aware, only cases (A) and (E) were previously explicitly supported, and neither of those would raise ImproperlyConfigured with my proposal.
Case (D) seems to work in previous versions of Django, though I don't believe that it was ever explicitly supported. In any case, it would be covered in my proposal by the fallback to os.path.dirname(__file__).
Case (F) is a bit vague, and I don't believe is a backwards-compatibility concern - I think my proposal provides reasonable handling of any combination of __path__ and __file__ that any custom loader might set.
If my proposal is rejected for whatever reason and we opt to stick with something closer to the status quo in master, then the following actions should be taken to resolve this ticket: set appconfig.path to None when __path__ exists but is empty to avoid the IndexError in case (F), and update AppDirectoriesFinder in contrib.staticfiles to not blow up if it hits an appconfig.path that is None (and perhaps audit other uses of appconfig.path in Django to ensure they can handle it being None).
One issue that will need to be resolved for my proposal to move forward is that currently the test suite instantiates AppConfig in a number of places with app_module=None, which currently passes silently but under my proposal would cause ImproperlyConfigured to be raised. I plan to look into this issue and propose a patch.
Feedback on the concept welcome, in particular if I've missed any important cases.
Change History (7)
comment:1 follow-up: ↓ 2 Changed 2 years ago by aaugustin
- Triage Stage changed from Unreviewed to Accepted
comment:2 in reply to: ↑ 1 Changed 2 years ago by carljm
Indeed, I agree. That's why I didn't include case (E) in my list of cases where I think we need better handling. If a module loader is going to return bogus paths, I think Django should just pass them along. (Plus, almost all uses of appconfig.path are likely to take roughly the form mypath = os.path.join(appconfig.path, 'something'); if os.path.exists(mypath): ... and a bogus path won't blow up that code.)
I assume wheels don't matter here because they're expanded to regular files during installation (but I could be wrong).
That's correct - wheels and unzipped eggs are both loaded by the normal filesystem module loading process, so there is nothing special about them from this perspective..
Yep, I discovered this later last night. I also discovered why Django's test suite may have misleadingly made it seem that egg-loaded modules would have no __path__; because our egg-template-loader tests have a function that creates fake eggs and stuffs them into sys.modules, and those fake eggs (unlike real ones) have no __path__.
comment:3 Changed 2 years ago by aaugustin
- Keywords app-loading added
comment:4 Changed 2 years ago by carljm
- Has patch set
- Owner changed from nobody to carljm
- Status changed from new to assigned
Pull request:
comment:5 Changed 2 years ago by carljm
It is possible that I am wrong about which way we should go on this, and that ultimately we will want to provide a way for app authors to explicitly say "this app has no filesystem location, please don't try to load any files (e.g. templates or static assets) from it."
Even with this pull request, that's possible by just giving a bogus path, like eggs do - but that's a bit ugly as an official recommendation for a real use case.
My inclination is to go with this strict (but backwards compatible) approach for 1.7 and see if use cases pop up for apps without a filesystem location. But that means that third-party code written for 1.7 and assuming that appconfig.path must be a string would later be broken if we decide that None is a valid value for appconfig.path after all.
comment:6 Changed 2 years ago by aaugustin
Apps without a path are unlikely to be found in the wild; the most common problematic case is eggs that have an invalid path. Let's not agonize over a virtually non-existent problem ;-)
Patch looks good and well tested.
I have three small suggestions, bordering on bikeshedding:
- Put a reference to this ticket in a comment, your original explanation is worth referencing.
- Extract the logic that determines the path in a separate function, because it's getting a bit long for __init__ which is already quite long.
- Avoid relying on errmsg in the conditionals; it may be marginally less efficient, but it's more readable:
# Convert paths to list because Python 3.3 _NamespacePath does
# not support indexing.
paths = list(getattr(app_module, '__path__', []))
# Fallback to __file__ if no suitable path is found.
if len(paths) != 1:
    filename = getattr(app_module, '__file__', None)
    if filename is not None:
        paths = [os.path.dirname(filename)]
# Handle errors
# ...
comment:7 Changed 2 years ago by Carl Meyer <carl@…>
- Resolution set to fixed
- Status changed from assigned to closed
Hi Emil,

The reason has to do with the definitions of (>>=) for Writer and
(WriterT m). Looking at Control.Monad.Writer (ghc-6.2.2),

newtype Writer w a = Writer { runWriter :: (a, w) }

instance (Monoid w) => Monad (Writer w) where
    m >>= k = Writer $
        let (a, w)  = runWriter m
            (b, w') = runWriter (k a)
        in (b, w `mappend` w')

newtype WriterT w m a = WriterT { runWriterT :: m (a, w) }

instance (Monoid w, Monad m) => Monad (WriterT w m) where
    return a = WriterT $ return (a, mempty)
    m >>= k = WriterT $ do
        (a, w)  <- runWriterT m
        (b, w') <- runWriterT (k a)
        return (b, w `mappend` w')

Patterns in "let" expressions bind lazily, so Writer's (>>=) is lazy in both its arguments and thus can handle the infinite recursion of your "foo". However, patterns in "do" expressions bind strictly, so WriterT's (>>=) is strict in its arguments; it tries to evaluate "foo" completely, causing a stack overflow.

You may use "case" expressions instead of "let" statements to bind patterns strictly. Conversely, you can also make a do statement bind patterns lazily, using lazy patterns (see eg)

Hope that helps,
-Judah

On Tue, 15 Feb 2005 17:45:26 +0100, Emil Axelsson <emax at cs.chalmers.se> wrote:
> Hello,
>
> _______________________________________________
> Haskell mailing list
> Haskell at haskell.org
>
Cx51 User's Guide
The #include directive tells the C preprocessor to include
the contents of the file specified in the input stream to the
compiler and then continue with the rest of the original file.
For example, the header file, file.h contains the
following:
char *func (void);
The program file, myprog.c includes this header file:
#include "file.h"
void main (void)
{
while (1)
{
printf (func());
}
}
The output generated by the preprocessor is:
char *func (void);
void main (void)
{
while (1)
{
printf (func());
}
}
Header files typically contain variable and function declarations
along with macro definitions. But, they are not limited to only
those. A header file may contain any valid C program fragment.
However, this practice is error-prone, very confusing, and not recommended.
Content-type: text/html
mkdir - Creates a directory
#include <sys/stat.h>
#include <sys/types.h>
int mkdir (
const char *path,
mode_t mode );
Interfaces documented on this reference page conform to industry standards as follows:
mkdir(): XPG4, XPG4-UNIX
Refer to the standards(5) reference page for more information about industry standards and associated tags.
path
    Specifies the name of the new directory.

mode
    Specifies the permission and attribute bits of the new directory; this value is modified by the process's file creation mask (see the umask() function).
The mkdir() function creates a new directory with the following attributes:

- The owner ID is set to the process's effective user ID.
- The group ID is set to the group ID of its parent directory.
- Permission and attribute bits are set according to the value of the mode parameter modified by the process's file creation mask (see the umask() function). This parameter is constructed by logically ORing values described in the sys/mode.h header file.
- The new directory is empty, except for . (dot) and .. (dot-dot).
To execute the mkdir() function, a process must have search permission to reach the parent directory of the path parameter, and write permission in that parent directory, since the directory is updated.
Upon successful completion, the mkdir() function returns a value of 0 (zero). If the mkdir() function fails, a value of -1 is returned, and errno is set to indicate one of the following error conditions:

- The directory in which the entry for the new link is being placed cannot be extended because the user's quota of disk blocks or i-nodes on the file system containing the directory is exhausted.
- The named file already exists.
- The path parameter is an invalid address.
- A component of the path parameter does not exist or points to an empty string.
- Unable to allocate a directory buffer.
- The file system does not contain enough space to hold the contents of the new directory or to extend the parent directory of the new directory.
- A component of the path prefix is not a directory.
- The named file resides on a read-only file system.
[Digital] For NFS file access, if the mkdir() function fails, errno may also be set to one of the following values:

- Indicates either that the system file table is full, or that there are too many files currently open in the system.
- Indicates a stale NFS file handle. A client cannot make a directory because the server has unmounted or unexported the remote directory.
Functions: chmod(2), mknod(2), rmdir(2), umask(2)
Commands: chmod(1), mkdir(1), mknod(8)
Standards: standards(5)
Take 40% off Deep Learning for Natural Language Processing by entering fccraaijmakers into the discount code box at checkout at manning.com.
Multitask learning is concerned with learning several things at the same time. An example is to learn both speech tagging and sentiment analysis at the same time, or learning two topic taggers in one go. Why is that a good idea? Ample research has demonstrated, for quite some time already, that multitask learning improves the performance on certain tasks separately. This gives rise to the following application scenario:
You’re training classifiers on a number of NLP tasks, but the performance is disappointing. It turns out your tasks can be decomposed into separate subtasks. Can multitask learning be applied here, and, if it does, does it improve performance on the separate tasks when learned together?
— Scenario: Multitask learning.
The main motivation for multitask learning is classifier performance improvement. The reason why multitask learning can boost performance is rooted in statistics. Every machine learning algorithm suffers from inductive bias: a set of implicit assumptions underlying its computations. An example of such an inductive bias is the maximization of distances between class boundaries carried out by support vector machines. Another example is the bias in nearest neighbor-based machine learning, where the assumption is that the neighbors (in the feature space) of a specific test data point are in the same class as the test data point. An inductive bias isn’t necessarily a bad thing; it incorporates a form of optimized specialization.
In multitask learning, learning two tasks at the same time -with their own separate inductive biases- produces an overall model that aims for one inductive bias: an inductive bias that optimizes for the two tasks at the same time. This approach may lead to better generalization properties of the separate tasks, meaning that the resulting classifier can handle unseen data better. Often that classifier turns out to be a stronger classifier for both separate tasks.
A good question to ask is which tasks can be combined in such a way that performance on the separate tasks benefits from learning them at the same time. Should these tasks be conceptually related? How to define task relatedness at all? This is a topic beyond the scope of this article. We should focus our experiments on combining tasks that, reasonably, seem to fit together. For instance, named entity recognition may benefit from part of speech tagging, or learning to predict restaurant review sentiment may be beneficial for predicting consumer products sentiment. First, we discuss the preprocessing and handling of our data. After that, we go into the implementation of the three types of multitask learning.
Data
As mentioned, we use the following datasets for multitask learning:
- Two different datasets for consumer review-based sentiment (restaurant and electronic product reviews).
- The Reuters news dataset, with forty-six topics from the news domain.
- Joint learning of Spanish part of speech tagging and named entity tagging.
For the sentiment datasets, we verify if learning sentiment from two domains in parallel (restaurant and product reviews) improves the sentiment assignment to the separate domains. This is a topic called domain transfer: how to transfer knowledge from one domain to another during learning, in order to supplement small data sets with additional data.
The Reuters news dataset entails a similar type of multitask learning. Given a number of topics, assigned to documents, can we create sensible combinations of pairs of two topics (topics A+B, learned together with topics C+D) that, when learned together, benefit the modeling of the separate topics? And how can we turn such a pairwise discrimination scheme into a multiclass classifier. Below, we find out.
Finally, the last task addresses multitask learning applied to shared data with different labelings. In this task, we create two classifiers, one focusing on part of speech tagging, and the other on named entity recognition. Do these tasks benefit from each other? Let’s take a look at each of our datasets in turn.
Consumer reviews: Yelp and Amazon
We use two sentiment datasets: sets of Yelp restaurant reviews and Amazon consumer reviews, labeled for positive or negative sentiment. These datasets can be obtained from Kaggle.
The Yelp dataset contains restaurant reviews, with data like:
The potatoes were like rubber and you could tell they had been made up ahead of time being kept under a warmer.,0
The fries were great too.,1
Not tasty and the texture was just nasty.,0
Stopped by during the late May bank holiday off Rick Steve recommendation and loved it.,1

The Amazon dataset contains reviews of consumer products:

So there is no way for me to plug it in here in the US unless I go by a converter.,0
Good case, Excellent value.,1
Great for the jawbone.,1
Tied to charger for conversations lasting more than 45 minutes.MAJOR PROBLEMS!!,0
The mic is great.,1
I have to jiggle the plug to get it to line up right to get decent volume.,0
Data handling
First, let us discuss how to load the sentiment data into our model. The overall schema is the following.
Figure 1. Sentiment data processing schema.
The following procedure converts our data into feature vectors, labeled with integer-valued class labels.
Listing 1: Load sentiment data.
def loadData(train, test):
    global Lexicon
    with io.open(train, encoding="ISO-8859-1") as f:
        trainD = f.readlines()                              ❶
        f.close()
    with io.open(test, encoding="ISO-8859-1") as f:
        testD = f.readlines()                               ❷
        f.close()
    all_text = []
    for line in trainD:
        m = re.match("^(.+),[^\s]+$", line)
        if m:
            all_text.extend(m.group(1).split(" "))          ❸
    for line in testD:
        m = re.match("^(.+),[^\s]+$", line)
        if m:
            all_text.extend(m.group(1).split(" "))          ❹
    Lexicon = set(all_text)                                 ❺
    x_train = []
    y_train = []
    x_test = []
    y_test = []
    for line in trainD:                                     ❻
        m = re.match("^(.+),([^\s]+)$", line)
        if m:
            x_train.append(vectorizeString(m.group(1), Lexicon))
            y_train.append(processLabel(m.group(2)))
    for line in testD:                                      ❼
        m = re.match("^(.+),([^\s]+)$", line)
        if m:
            x_test.append(vectorizeString(m.group(1), Lexicon))
            y_test.append(processLabel(m.group(2)))
    return (np.array(x_train), np.array(y_train)), (np.array(x_test), np.array(y_test))  ❽
❶ Read the training data into an array of lines.
❷ Similar for the test data.
❸ Extend the all_text array with training data. We need this for a lexicon for vectorization of our data.
❹ Similar for the test data.
❺ Build a lexicon.
❻ Vectorize the training data (see below for vectorizeString), using the lexicon.
❼ Similar for test data.
❽ Return the vectorized training and test data.
The vectorizeString function converts a string into a vector of word indices, using a lexicon. It’s based on the familiar one_hot function of Keras we have encountered before:
Listing 2: Vectorizing strings.
def vectorizeString(s, lexicon):
    vocabSize = len(lexicon)
    result = one_hot(s, round(vocabSize * 1.5))
    return result
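Under the hood, one_hot does not build one-hot vectors at all; it hashes each word to an integer index in [1, n). A rough stdlib approximation (using md5 instead of Keras's default hash function, so that the indices are stable across runs) might look like this:

```python
import hashlib

def hashing_trick(text, n):
    """Map each word to an integer in [1, n) by hashing -- a sketch of
    what Keras's one_hot/hashing_trick does (md5 keeps it deterministic)."""
    words = text.lower().split()
    return [int(hashlib.md5(w.encode()).hexdigest(), 16) % (n - 1) + 1
            for w in words]

print(hashing_trick("the mic is great", 1000))
```

Note that different words can collide onto the same index; a larger n (the round(vocabSize * 1.5) above) makes collisions less likely.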
The processLabel function creates a global dictionary for the class labels in the dataset:
Listing 3: Creating a class label dictionary.
def processLabel(x):
    if x in ClassLexicon:
        return ClassLexicon[x]
    else:
        ClassLexicon[x] = len(ClassLexicon)
        return ClassLexicon[x]
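Because processLabel fills the dictionary in first-seen order, the integer assigned to each label depends only on the order in which labels appear. A quick demonstration (restated here with its own ClassLexicon so the snippet is self-contained):

```python
ClassLexicon = {}

def processLabel(x):
    if x in ClassLexicon:
        return ClassLexicon[x]
    else:
        ClassLexicon[x] = len(ClassLexicon)
        return ClassLexicon[x]

labels = [processLabel(l) for l in ["1", "0", "1", "0", "0"]]
print(labels)        # [0, 1, 0, 1, 1]
print(ClassLexicon)  # {'1': 0, '0': 1}
```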
The final processing of the data takes place after this: padding the feature vectors to uniform length, and converting the integer-based class labels to binary vectors with the Keras built-in to_categorical:
x_train = pad_sequences(x_train, maxlen=max_length, padding='post')
x_test = pad_sequences(x_test, maxlen=max_length, padding='post')

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
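What the two helpers do is easy to mimic with plain NumPy: pad_sequences(..., padding='post') right-pads (or truncates) every index vector to maxlen, and to_categorical turns integer labels into one-hot rows. A sketch (pad_post and to_categorical_sketch are illustrative names, not the Keras implementations):

```python
import numpy as np

def pad_post(seqs, maxlen):
    out = np.zeros((len(seqs), maxlen), dtype=int)
    for i, s in enumerate(seqs):
        trunc = s[:maxlen]
        out[i, :len(trunc)] = trunc   # zeros fill the tail
    return out

def to_categorical_sketch(labels, num_classes):
    eye = np.eye(num_classes, dtype=int)
    return eye[np.asarray(labels)]   # pick the one-hot row per label

print(pad_post([[3, 1, 4], [1, 5]], 4))
# [[3 1 4 0]
#  [1 5 0 0]]
print(to_categorical_sketch([0, 1, 1], 2))
# [[1 0]
#  [0 1]
#  [0 1]]
```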
Now that our data is in place, let us establish a baseline result: what does a standard, single-task classifier produce for these datasets? Here’s our single-task setup:
Listing 4: Single-task sentiment classifier.
(x_train,y_train),(x_test,y_test)=loadData(train,test)  ❶
num_classes=len(ClassLexicon)  ❶
epochs = 100
batch_size=128
max_words=len(Lexicon)+1
max_length = 1000

x_train = pad_sequences(x_train, maxlen=max_length, padding='post')  ❷
x_test = pad_sequences(x_test, maxlen=max_length, padding='post')

y_train = keras.utils.to_categorical(y_train, num_classes)  ❸
y_test = keras.utils.to_categorical(y_test, num_classes)

inputs=Input(shape=(max_length,))  ❹
x=Embedding(300000, 16)(inputs)  ❺
x=Dense(64,activation='relu')(x)  ❻
x=Flatten()(x)  ❼
y=Dense(num_classes,activation='softmax')(x)  ❽

model=Model(inputs=inputs, outputs=y)  ❾
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_split=0.1)
❶ Load the training and test data, and derive the number of classes from the label lexicon.

❷ Pad training and test data to a pre-specified length.
❸ Convert the labels to a one-hot vector (binary, categorical) representation.
❹ Our input layer.
❺ Input data is embedded with a 300,000 words embedding, producing 16-dimensional vectors.
❻ Create a dense layer with output dimension of 64.
❼ Flatten the data, and add a dense layer.
❽ Pass the dense layer output to a softmax output layer, producing class probabilities.
❾ Create the model, and fit it on the data.
Running this model on Amazon and Yelp produces the following accuracy scores:
- Amazon: 77.9%
- Yelp: 71.5%
These single-task scores are our baseline. Does multitask learning improve on these scores?
If you want to find out, you’ll have to check out the book on Manning’s liveBook platform here. | https://freecontent.manning.com/multitask-learning/ | CC-MAIN-2021-49 | refinedweb | 1,531 | 50.23 |
{-# OPTIONS -Wall #-}

--------------------------------------------------------------------------------
-- |
-- Module      :  Wumpus.Drawing.Chains.Base
-- Copyright   :  (c) Stephen Tetley 2010-2011
-- License     :  BSD3
--
-- Maintainer  :  stephen.tetley@gmail.com
-- Stability   :  unstable
-- Portability :  GHC
--
-- Generate points in an iterated chain.
--
-- \*\* WARNING \*\* - unstable. Names are not so good, also
-- Wumpus-Basic has a @chain1@ operator...
--
--------------------------------------------------------------------------------

module Wumpus.Drawing.Chains.Base
  (
    Chain
  , LocChain

  , unchain
  , zipchain
  , zipchainWith

  , unchainTD
  , zipchainTD
  , zipchainWithTD

  ) where

import Wumpus.Basic.Kernel              -- package: wumpus-basic

import Wumpus.Core                      -- package: wumpus-core


-- | A 'Chain' is a list of points. The list is often expected to
-- be infinite, but if it was a Stream it would lose the ability
-- to use list comprehensions.
--
type Chain u = [Point2 u]

-- | A LocChain is a function from a starting point to a 'Chain'.
--
type LocChain u = Point2 u -> Chain u


-- | Note - commonly a 'Chain' may be infinite, so it is only
-- unrolled a finite number of times.
--
unchain :: Int -> LocGraphic u -> Chain u -> TraceDrawing u ()
unchain i op chn = go i chn
  where
    go n _ | n <= 0 = return ()
    go _ []         = return ()
    go n (x:xs)     = draw (op `at` x) >> go (n-1) xs


zipchain :: [LocGraphic u] -> Chain u -> TraceDrawing u ()
zipchain (g:gs) (p:ps) = draw (g `at` p) >> zipchain gs ps
zipchain _      _      = return ()


zipchainWith :: (a -> LocGraphic u) -> [a] -> Chain u -> TraceDrawing u ()
zipchainWith op xs chn = go xs chn
  where
    go (a:as) (p:ps) = draw (op a `at` p) >> go as ps
    go _      _      = return ()


-- | Variant of 'unchain' where the drawing argument is a
-- @TraceDrawing@ not a @LocGraphic@.
--
unchainTD :: Int -> (Point2 u -> TraceDrawing u ()) -> Chain u -> TraceDrawing u ()
unchainTD i op chn = go i chn
  where
    go n _ | n <= 0 = return ()
    go _ []         = return ()
    go n (x:xs)     = (op x) >> go (n-1) xs


zipchainTD :: [Point2 u -> TraceDrawing u ()] -> Chain u -> TraceDrawing u ()
zipchainTD (g:gs) (p:ps) = g p >> zipchainTD gs ps
zipchainTD _      _      = return ()


zipchainWithTD :: (a -> Point2 u -> TraceDrawing u ()) -> [a] -> Chain u -> TraceDrawing u ()
zipchainWithTD op xs chn = go xs chn
  where
    go (a:as) (p:ps) = op a p >> go as ps
    go _      _      = return ()


-- Notes - something like TikZ\'s chains could possibly be
-- achieved with a Reader monad (@local@ initially seems better
-- for \"state change\" than @set@ as local models a stack).
--
-- It\'s almost tempting to put point-supply directly in the
-- Trace monad so that TikZ style chaining is transparent.
-- (The argument against is: how compatible this would be with
-- the Turtle monad for example?).
--
- Colours: 'gray' is a colour too, 'k' is short for black.
- Don't like the border on a legend? You need to stick the legend into a variable, "leg = legend()" for example, and say "leg.draw_frame(False)" (this pearl of wisdom comes courtesy of the matplotlib source code).
- Want to make the legend text (and hence the legend) smaller? Try legend(prop={"size":10}).
- To adjust the position or font of text in a plot, it may be easiest to save as SVG, open in Inkscape (a truly fantastic program), click on the image and hit 'Ungroup' a few times (CTRL+SHIFT+G), make the adjustments, resave the SVG, and finally Export to Bitmap (300 dpi is probably good enough).
- Want two overlapping histograms on the same plot? Use the width keyword to hist() to set the bar widths to half the bin size. Also, catch the rectangles used to make the second histogram so that you can offset the bars by adding half the bin size:
Update 11/05/2010: With current version of Matplotlib, you need to replace width=width with rwidth=0.5.
a, b, c = hist(x, bins, facecolor='k', width=width)
for rect in c:
rect.set_x(rect.get_x() + width)
You might also want to store the first rectangle of each histogram in a list and pass it as a first argument to legend.
- To add a regression line between x=100 and x=600 (and get the R and significance values for free), try:
from scipy import stats
grad, inter, r, p, std_err = stats.linregress(x, y)
plot((100, 600), [grad*x + inter for x in [100, 600]])
- Line styles: "-" for solid, "--" for dashed, ":" for dotted. Unfortunately, the gaps in 'dashed' are too big. To define your own line style, you need to catch the line in a variable, e.g. "lines = plot(...)", and use lines[0].set_dashes((5,2)) for dashes of length 5 separated by gaps of 2 (unknown units).
- Want Greek symbols and subscripts in axes labels? It worked for me on windows with something like the following:
Note that the first time you run this on Windows, matplotlib might need to connect to the internet to download a needed component (MiKTeX). Also, when you make changes to the TeX stuff you might need to run your program twice to pick up on the changes.
rc('text', usetex=True)
xlabel(r"$\rho_{8.0}", fontsize="large")
Update 03/01/2011: With current version of Matplotlib, a TeX parser is included.
The Angstrom symbol can be include in a legend as follows: "RMSD ($\AA$)". For more information on available symbols, see the Matplotlib mathtext page. | http://baoilleach.blogspot.com/2007_12_01_archive.html | CC-MAIN-2014-42 | refinedweb | 443 | 71.34 |
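The regression-line recipe above is easy to sanity-check with plain NumPy: a degree-1 polyfit computes the same least-squares line that scipy.stats.linregress fits (the data points here are made up for the check):

```python
import numpy as np

# Points that lie exactly on a known line y = 0.5*x + 10
x = np.array([100, 200, 300, 400, 500, 600], dtype=float)
y = 0.5 * x + 10

grad, inter = np.polyfit(x, y, 1)        # same least-squares fit as linregress
print(round(grad, 6), round(inter, 6))   # 0.5 10.0

# endpoints of the segment plotted between x=100 and x=600
ends = [grad * v + inter for v in (100, 600)]
```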
This article discusses the Swing concept of a pluggable look and feel and offers some thoughts on how to use it in a way that is both convenient for the programmer and desirable for the user experience.
What Does "Pluggable Look and Feel" Mean?
Swing is a part of the Java Foundation Classes (JFC) and implements a set of graphical user interface (GUI) components with a pluggable look and feel (L&F). In very short terms, this means that Swing-based applications can appear as if they are native Windows, Mac OS X, GTK+, or Motif applications, or they can have a unique Java look and feel through the "Metal" package. Applications can also provide a completely new user experience by implementing a totally unprecedented L&F.
In its early days, Java offered only GUI elements that used native components (rendered and handled by the underlying windowing system). To be platform-independent, Java could provide only such elements and features that had counterparts on all supporting platforms. Today, we know that this was not enough for rich client applications. The growing demand for powerful UI elements led to the development of JFC and Swing, which overcame the limitations of the original implementation. Unlike their predecessors (the original AWT components are still available, of course), Swing components do not rely on the underlying windowing system; they are "lightweight," rendered entirely in Java. Having said that, one might argue that in a very strict sense they are not "native." For the Java look and feel, this is certainly true. And even the Windows look and feel in its current implementation only mimics its native counterpart. But this is not a general problem of Swing or the pluggable look and feel concept. We could write a new look and feel that uses native Windows APIs to create, paint, and maintain GUI elements.
"Pluggable" means that both visual representation and (physical) behavior of a GUI element can change, even while the component is on display, without modifying the program. When Swing applications create a new button by instantiating the
JButton class, this new object knows how it should paint itself on screen and how it should react to mouse movements and mouse clicks. However, it delegates these tasks (such as rendering and mouse handling) to certain specialized classes, because if the button itself contained the code that creates its visual representation, we would need to change this code or append new drawing methods if we wanted to modify its look. For this reason, Swing provides means to install and select custom look and feels as well as to create new ones. Implementing a new look and feel is a huge task, beyond the scope of this article; technical background information can be found in the excellent O'Reilly book Java Swing, by Marc Loy, Robert Eckstein, Dave Wood, James Elliott, and Brian Cole.
javax.swing.UIManager
Many L&F-related aspects are handled within the
javax.swing.UIManager class. Applications use this class to query which look and feels are present, to obtain certain L&F names, and to set the look and feel the program wishes to use. At this point, we will have a short look at some of its interesting methods. We will cover them in greater detail later.
public static String getCrossPlatformLookAndFeelClassName()
This method returns the fully qualified class name of the so-called cross-platform L&F. If you want your application to have a unique Java look, choose the L&F that is returned by this method. We will have a closer look at this L&F in the next section.
public static String getSystemLookAndFeelClassName()
This method returns the class name of a L&F that implements the host environment's native look and feel. The name of the returned L&F, therefore, is platform-dependent. If you want your application to appear as native applications do, you might choose the L&F returned here.
public static UIManager.LookAndFeelInfo[] getInstalledLookAndFeels()
This method returns a list of currently available L&Fs. In addition to the cross-platform and native look and feels, the Java runtime usually provides the Motif L&F. As I have mentioned earlier, Swing provides the means to create a completely new look and feel. If such a package is installed properly, it will be returned here, as well.
public static void setLookAndFeel(String className)
Finally, this method is used to set the current default look and feel.
The Java Look and Feel
Now we will have a closer look on how pluggable look and feel affects programming. Consider the following minimal Swing program.
import javax.swing.*; import javax.swing.border.*; import java.awt.*; import java.awt.event.*; public class SuperSimple2 extends JFrame { JLabel msgLabel; public SuperSimple2() { super("SuperSimple"); setDefaultCloseOperation(DO_NOTHING_ON_CLOSE); addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) { System.exit(0); } }); ActionListener al = new ActionListener() { public void actionPerformed(ActionEvent e) { msgLabel.setText(((JButton)e.getSource()).getText()); } }; JButton button; JPanel buttonPanel = new JPanel(); buttonPanel.setBorder( new TitledBorder("please click a button")); for (int i = 0; i < 3; i++) { button = new JButton("Button #" + (i + 1)); button.addActionListener(al); buttonPanel.add(button); } JPanel cp = new JPanel(new BorderLayout()); cp.setBorder(new EmptyBorder(10, 10, 10, 10)); msgLabel = new JLabel("no button clicked!"); cp.add(msgLabel, BorderLayout.NORTH); cp.add(buttonPanel, BorderLayout.CENTER); setContentPane(cp); pack(); setVisible(true); } public static void main(String [] args) { new SuperSimple2(); } }
The program does the same thing, displaying a window containing some buttons, no matter which operating system your machine has. Its visual appearance will differ, though. On a Mac, it will (at least to a certain extent) look like a native application. Other systems probably show something similar to Figure 1:
Figure 1. The Java look and feel
What we see here is called the Java look and feel. Since the package that contains this L&F is named
javax.swing.plaf.metal, it is commonly referred to as Metal. Metal has been Sun's visual appearance of choice for Java programs. With J2SE 1.5, we will get a slightly new look, calledOcean, that will replace the old one (now to be calledSteel). Technically, Ocean is not a new L&F but implemented as
javax.swing.plaf.metal.MetalTheme, and therefore aims to be compatible with existing Metal applications (concerning
getPreferredSize(), for example). The Java look and feel is often referred to as a cross-platformlook and feel, because it should be available on any environment providing Java and JFC.
Native Vs. Cross-Platform Look and Feel
If you look at Figure 1, you might argue that the program does not look familiar. The term "look and feel" refers to the visual appearance and physical behavior of GUI components; however, it covers additional aspects that go beyond that component level. It is also about terminology, layout, and general program behavior. For example, on some platforms you "quit" an application, on others you "exit;" some have one main application window with several internal frames, while others prefer multiple top-level windows. Or think of the way a program is launched or where it puts its files and settings. We might call this user experience. A program may use GUI elements that look like (or even are) native ones, yet it still may feel strange if it behaves different than the majority of native applications do. Why is that? Through our daily routine, we get used to certain fonts and colors; we expect things to be at a certain place. A Windows XP user expects to close a window by clicking on a red widget in the upper right corner of the window. Consequently, a program with a closing element in the upper left corner of its window is likely to feel strange to him/her because he/she is not used to that. The same is true for Mac users: a program with a menu bar inside of the top level window surely feels unfamiliar, since Mac applications usually have one menu bar at the top of the screen.
So why does our program use Metal?
UIManagermaintains a default look and feel that is chosen if an application does not set one explicitly. On many systems, this will be Metal.
The underlying question is: should a Java program look and feel native anyway? One of the great ideas behind Java has always been "write once, run anywhere." Taking that into account, shouldn't a Java program look and feel the same, regardless which platform it runs on? Wouldn't it be desirable if all Java programs had the same appearance? A program can provide a unique user experience through the Java look and feel and through a very comprehensive set of style guides, which you can find in the book Java Look and Feel Design Guidelines. It provides ample information about how a program using the Java look and feel should behave; how it should organize its menus and dialogs; and which colors, fonts and icons to use. Sun once coined the term "100% pure Java." If you want your program to be recognized as a Java application, you should use the Java look and feel and provide a user experience according to the guidelines of this book. Sun surely promotes this idea. And that is why on many systems, Metal is the default look and feel. On the other hand, many people consider Metal ugly. I definitely won't comment on that since this is personal taste. However, several points back Metal's bad reputation:
- There are several alternative look and feels available.
- There are even completely new windowing toolkits. One that gained widespread public recognition is SWT, the Standard Widget Toolkit which is used in the Eclipse universal tool platform.
- J2SE 1.5 brings a sort of brushed-up, new look.
Time will tell if Ocean achieves a better reputation than Steel did. Figure 2 shows the new look.
Figure 2. SwingSet using the new J2SE 1.5 Ocean look
As I have already pointed out, users get used to certain behavior (remember my window-closing example) and often work with several programs simultaneously. If the appearance and behavior of our Java program differs significantly from native applications, users are likely to get confused and frustrated. Additionally, many people have strong expectations concerning visual appearance. That is why on a Mac, my sample program won't show the Java look and feel — the Mac defaults to its own system look and feel. Java programs need to integrate smoothly into the Mac OS X environment, otherwise Mac users will never accept such a program. For these reasons, many Swing programs contain code similar to this:
try { UIManager.setLookAndFeel( UIManager.getSystemLookAndFeelClassName()); } catch (Exception e) { }
Forcing the program to use the system look and feel undoubtedly has several advantages:
- The look and feel is familiar to the user.
- It is not immediately visible that the program is written in Java.
Unfortunately, this has a seldom-acknowledged drawback. Although
getSystemLookAndFeelClassName() returns a L&F that claims to provide a native look, the user might have installed one that even better resembles the host environment. But in current versions of Java, the value returned by this method is hard-coded. Sun must have been aware of this issue, since the upcoming J2SE 1.5 allows the user to set the return value of
getSystemLookAndFeelClassName().
With J2SE 1.4.2, Sun introduced the GTK+ look and feel. On many Linux-based systems it would be adequate to return it as the native L&F. But currently, this is not the case. Additionally,
getInstalledLookAndFeels() does not mention this L&F at all. Sun has fixed this in J2SE 1.5; however, it may still take some time until this version will gain widespread use. As a result, on Linux, most Java programs show the Motif look and feel.
On Windows systems, there are several issues with Sun's implementation of the Windows look and feel. If an application uses it, its components may not look or behave exactly like native ones. This is because the Windows L&F only mimics its native counterparts, rather than using them. The WinLAF project on java.net aims to fix these glitches. If you are interested in what these problems are, please have a look at WinLAF's release notes. Of course, it would be best if the underlying issues were fixed in Java itself. Until this is the case, the WinLAF project provides a way to make Java applications with the Windows look and feel appear as close to their native counterparts as possible. Figures 3 and 4 illustrate the sometimes subtle differences.
Figure 3. SwingSet using Sun's Windows look and feel
Figure 4. SwingSet using WinLAF's look and feel
How can applications benefit from the WinLAF project? According to the installation instructions, using WinLAF requires two steps:
- Put the
winlaf-0.4.jarfile in the
CLASSPATHof your application.
- In your
main()method, include a line of code like this:
net.java.plaf.LookAndFeelPatchManager.initialize();
However, this ties an application to the WinLAF project, which is not what we want. But it takes just a few steps to make Swing applications benefit from the WinLAF corrections. These are:
- Copy the WinLAF .jar file to the ext directory of your Java installation.
- Create a little wrapper class and put it in a .jar file.
- Put that jar file into the
extdirectory as well.
- Modify the swing.properties file to include the new look and feel and set the L&F as the default one.
The code of the little wrapper class is as follows.
package com.sun.java.swing.plaf.windows; public class WinLAFWrapper extends WindowsLookAndFeel { public WinLAFWrapper() { try { javax.swing.UIManager.setLookAndFeel(this); net.java.plaf.LookAndFeelPatchManager.initialize(); } catch (Exception e) { } } }
Some Thoughts on Making Choices
Let us reconsider the alternatives concerning choosing a look and feel:
- Choose the cross-platform look and feel.
- Choose the native L&F.
- Provide an individual look and feel and force Java to use this.
- Let the user decide.
Any of the first three has the disadvantage that it does not respect users' preferences. Programs that force Java to use a specific look and feel to render its components will not benefit from the WinLAF project; neither will they show the appropriate system look and feel on some operating systems. As we have seen, there might be an enhanced version of the system look and feel, and it is quite probable that the users would like to use this. And on some systems, Java simply assumes the wrong look and feel to be the native one. Consequently, the program must allow the user to decide which look and feel to use. The main reason is flexibility. If someone likes the Java look, he/she can choose it. If others prefer a tight integration into the host environment, let them have it. And the ones in favor of really individual look and feel may choose their preferred one, too. Technically, this is not a big deal. You can query the available look and feels using
getInstalledLookAndFeels() and set one with another method of
UIManager; for example,
public static void setLookAndFeel(LookAndFeel newLookAndFeel) or
public static void setLookAndFeel(String className).
However, since there is no standard dialog box for selecting L&Fs, applications usually implement one on their own. Although it is desirable that users can choose which look and feel the application should use, doing so should be the same in all applications. Clearly, the best solution would be if Sun provided a standard look and feel chooser. Since this probably will not be the case for some time, it is perfectly valid to provide one. But please consider the following question: why do users have to select a preferred look and feel for almost any application they install? Swing provides means to set a look and feel as the default one. If a user has done so, it is quite likely that this is the one he/she likes best. Why can't an application respect his/her decision?
Specifying a Default Look and Feel
Several L&F-related settings can be managed through theswing.properties file, which resides in%JAVA_HOME%\lib or $JAVA_HOME/lib, depending on your operating system. This file may not be present. If it is not, you can create it using an editor that can save plain ASCII files. A more pleasant way is to use a specialized tool such as TKPLAFUtility.
Figure 5. The TKPLAFUtility program
The default look and feel is specified through the
swing.defaultlaf property. You can set it in theswing.properties file or set it as a command-line parameter. The value of this property is the fully qualified class name of the look and feel to use. In an application, you can use the method
public static LookAndFeel getLookAndFeel() to determine it.
Conclusion
This article introduced and discussed some aspects of the pluggable look and feel concept of the Swing GUI elements. My goal has been to make you aware of some issues that might arise while utilizing the concept. | https://community.oracle.com/docs/DOC-983327 | CC-MAIN-2017-22 | refinedweb | 2,846 | 55.44 |
Running Tasks in the Background¶
When running a Firework, the Firetasks are run sequentially in the main thread. One way to run a background thread would be to write a Firetask that spawns a new Thread to perform some work. However, FireWorks also has a built-in method to run background tasks via BackgroundTasks. Each BackgroundTask is run in its own thread, in parallel to the main Firetasks, and can be repeated at stated intervals.
BackgroundTasks parameters¶
BackgroundTasks have the following properties:
- tasks - a list of Firetasks to execute in sequence. The Firetasks can read the initial spec of the Firework. Any action returned by Firetasks within a BackgroundTask is NOT interpreted (including instructions to store data).
- num_launches - the total number of times to repeat the BackgroundTask. 0 or less indicates infinite repeats.
- sleep_time - amount of time in seconds to sleep before repeating the BackgroundTask
- run_on_finish - if True, the BackgroundTask will be run one last time at the end of the Firework
Setting one or more BackgroundTasks (via file)¶
The easiest way to set BackgroundTasks is via the Python code. However, if you are using flat files, you can define BackgroundTasks via the
_background_tasks reserved keyword in the FW spec:
spec: _background_tasks: - _fw_name: BackgroundTask num_launches: 0 run_on_finish: false sleep_time: 10 tasks: - _fw_name: ScriptTask script: - echo "hello from BACKGROUND thread #1" use_shell: true - _fw_name: BackgroundTask num_launches: 0 run_on_finish: true sleep_time: 5 tasks: - _fw_name: ScriptTask script: - echo "hello from BACKGROUND thread #2" use_shell: true _tasks: - _fw_name: ScriptTask script: - echo "starting"; sleep 30; echo "ending" use_shell: true
The specification above has two BackgroundTasks, one which repeats every 10 seconds and another which repeats every 5 seconds.
Setting one or more BackgroundTasks (via Python)¶
You can define a BackgroundTask via:
bg_task = BackgroundTask(ScriptTask.from_str('echo "hello from BACKGROUND thread"'), num_launches=0, sleep_time=5, run_on_finish=True)
and add it to a Firework via:
fw = Firework([my_firetasks], spec={'_background_tasks':[bg_task]})
Python example¶
The following code runs a script that, in the main thread, prints ‘starting’, sleeps, then prints ‘ending’. In separate threads, two background threads run at different intervals. The second BackgroundTask has
run_on_finish set to True, so it also runs after the main thread finishes:
from fireworks import Firework, FWorker, LaunchPad, ScriptTask from fireworks.features.background_task import BackgroundTask from fireworks.core.rocket_launcher import rapidfire # set up the LaunchPad and reset it launchpad = LaunchPad() launchpad.reset('TODAYS DATE') # set TODAYS DATE to be something like 2014-02-10 firetask1 = ScriptTask.from_str('echo "starting"; sleep 30; echo "ending"') bg_task1 = BackgroundTask(ScriptTask.from_str('echo "hello from BACKGROUND thread #1"'), sleep_time=10) bg_task2 = BackgroundTask(ScriptTask.from_str('echo "hello from BACKGROUND thread #2"'), num_launches=0, sleep_time=5, run_on_finish=True) # create the Firework consisting of a custom "Fibonacci" task firework = Firework(firetask1, spec={'_background_tasks': [bg_task1, bg_task2]}) ## store workflow and launch it locally launchpad.add_wf(firework) rapidfire(launchpad, FWorker()) | https://pythonhosted.org/FireWorks/backgroundtask.html | CC-MAIN-2017-13 | refinedweb | 467 | 51.58 |
An ID3v2 attached picture frame implementation. More...
#include <attachedpictureframe.h>
Detailed Description
An ID3v2 attached picture frame implementation.
This is an implementation of ID3v2 attached pictures. Pictures may be included in tags, one per APIC frame (but there may be multiple APIC frames in a single tag). These pictures are usually in either JPEG or PNG format.
Member Enumeration Documentation
This describes the function or content of the picture.
- Enumerator:
-
Constructor & Destructor Documentation
Constructs an empty picture frame. The description, content and text encoding should be set manually.
Constructs an AttachedPicture frame based on data.
Destroys the AttahcedPictureFrame instance.
Member Function Documentation
Returns a text description of the image.
Returns the mime type of the image. This should in most cases be "image/png" or "image/jpeg".
Called by parse() to parse the field data. It makes this information available through the public API. This must be overridden by the subclasses.
Implements TagLib::ID3v2::Frame.
Reimplemented in TagLib::ID3v2::AttachedPictureFrameV22.
Returns the image data as a ByteVector.
- Note
- ByteVector has a data() method that returns a const char * which should make it easy to export this data to external programs.
- See Also
- setPicture()
- mimeType()
Render the field data back to a binary format in a ByteVector. This must be overridden by subclasses.
Implements TagLib::ID3v2::Frame.
Sets a textual description of the image to desc.
Sets the mime type of the image. This should in most cases be "image/png" or "image/jpeg".
Sets the image data to p. p should be of the type specified in this frame's mime-type specification.
- See Also
- picture()
- mimeType()
- setMimeType()
Set the text encoding used for the description.
- See Also
- description()
Returns the text encoding used for the description.
- See Also
- setTextEncoding()
- description()
Returns a string containing the description and mime-type
Implements TagLib::ID3v2::Frame.
Friends And Related Function Documentation
Member Data Documentation
The documentation for this class was generated from the following file: | http://taglib.github.io/api/classTagLib_1_1ID3v2_1_1AttachedPictureFrame.html | CC-MAIN-2014-15 | refinedweb | 323 | 52.66 |
Decision table?
1 answer
Actually when defining decisions tables you have the option to use Groovy Script.
But based on what you are trying to achieve I would rather create a custom connector that use Bonita Engine APIs and also a data source to get information from your external MySQL database and provide as output maybe a set of boolean describing which transition(s) should be taken (or maybe code that on an integer).
thank you for your reply,
for my case, after a Xor gateway I have three activities.
the transition to these activities are dynamic and according to some information (information of the actor who runs the instance and other data).
so the transition is based on the control over this information if it is satisfied the transition will be taken, if no other transition is taken.
I don't understand the idea to do a connector for that purpose if the decision table will meet the need.
If all information are available in Bonita (actor who runs the instance, business data values...) you don't need to create a custom connector. A Groovy script in the transition condition or in the descision table should do the job just fine.
hello,
some information is retrieved from the organization, for each actor I detect his age and his job title.
after that, I retrieve from an association rule base (MySQL) that if age has a specific value as well as the job title, another data has a specific value. according to this value, I guide the actor to the activity with the same data value. (sorry if the idea is not clear but I make it as much as simple).
so I don't how to proceed to do that.
first, i should import that:
import groovy.sql.Sql
import org.bonitasoft.engine.identity.User;
import org.bonitasoft.engine.identity.UserCriterion;
I'm soo newbie with groovy and the Bonita script. | https://community.bonitasoft.com/questions-and-answers/decision-table | CC-MAIN-2019-09 | refinedweb | 322 | 54.12 |
Introduction To GUI With Tkinter In Python
Hi, welcome to your first Graphical User Interface (GUI) tutorial with Tkinter in Python.
1. What Is A Graphical User Interface (GUI)

2. What Is Tkinter

3. Introduction To Tkinter

4. Tkinter Widgets

5. Geometry Management

6. Organizing Layout And Widgets

7. Binding Functions

8. Mouse Clicking Events

9. Classes

10. Drop-Down Menus

11. Alert Boxes

12. Simple Shapes

13. Images And Icons
1. What Is A Graphical User Interface(GUI)
A GUI is a desktop app which helps you to interact with a computer. GUIs are used to perform many different tasks on desktops, laptops, and other electronic devices. Here, we are mainly talking about laptops and desktops.
- GUI apps like Text-Editors are used to create, read, update and delete different types of files.
- GUI apps like Sudoku, Chess, Solitaire, etc.., are games which you can play.
- GUI apps like Chrome, Firefox, Microsoft Edge, etc.., are used to surf the Internet.
These are some of the different types of GUI apps which we use daily on laptops and desktops. We are going to learn how to create those types of apps.
As this is an Introduction to GUI, we will create a simple Calculator GUI app.
2. What Is Tkinter
Tkinter is an inbuilt Python module used to create simple GUI apps. It is the most commonly used module for GUI apps in the Python.
You don't need to worry about installing the Tkinter module, as it comes with Python by default.
Note:-
- I am going to use Python 3.6. So, kindly update Python if you're using an older version.
- To install Python 3.6 on Ubuntu, follow this link: Install Python 3.6 in Ubuntu
- To install the latest version of Python on Windows, go to the official Python website, then download and install the exe file.
- To install on a Mac, go to the official Python download page, download the Mac OS X installer, and install it on your machine.
- Follow this tutorial along with the practice, so that you can understand it very quickly.
- Don't copy the code. Try to write it yourself, modifying it as you like.
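Before starting, you can quickly confirm that Tkinter shipped with your Python installation. This short check imports the module and prints the bundled Tcl/Tk version (the `TkVersion` attribute is part of the standard `tkinter` module and does not require opening a window):

```python
# confirm that the tkinter module is importable and report the bundled Tcl/Tk version
import sys
import tkinter

print("Python version:", sys.version.split()[0])
print("Tk version:", tkinter.TkVersion)  # a float such as 8.6 on most modern installs
```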
3. Introduction To Tkinter
Run the following code to create a simple window with the text Hello World!.
Steps:-
- import the module tkinter.
- Initialize the main window with tkinter.Tk() and assign it to a variable window. This creates a blank window with close, maximize and minimize buttons.
- Rename the title of the window as you like with window.title(title_of_the_window).
- Label is used to display text (or other objects) in the window. Here, we are adding a Label with some text.
- The pack() attribute of a widget displays the widget in the size it requires.
- Finally, call the mainloop() method to display the window until you manually close it.
import tkinter

window = tkinter.Tk()
# to rename the title of the window
window.title("GUI")
# pack is used to show the object in the window
label = tkinter.Label(window, text = "Hello World!").pack()

window.mainloop()
That's a basic program to create a simple GUI interface. You will see a window similar to the following.
4. Tkinter Widgets
Widgets are similar to elements in HTML. Tkinter provides different types of widgets corresponding to the different types of elements you find in HTML.
Let's see a brief introduction to some of these widgets in Tkinter.
- Button:- Button widget is used to place the buttons in the tkinter.
- Canvas:- Canvas is used to draw shapes in your GUI.
- Checkbutton:- Checkbutton is used to create check buttons in your GUI.
I mentioned only some of the widgets that are present in Tkinter. You can find the complete list of widgets in the official Python documentation.
5. Geometry Management
All widgets in Tkinter have some geometry measurements. These measurements let you organize the widgets within their parent frames, windows, etc.
Tkinter has the following three Geometry Manager classes.
- pack():- It organizes the widgets in blocks, which means a widget occupies the entire available width. It's the standard method to show widgets in the window.
- grid():- It organizes the widgets in a table-like structure. You will see details about grid later in this tutorial.
- place():- It's used to place the widgets at a specific position you want.
6. Organizing Layout And Widgets
To arrange the layout in the window, we will use the Frame class. Let's create a simple program to see how Frame works.
Steps:-
- Frame is used to create divisions in the window. You can align the frames as you like with the side parameter of the pack() method.
- Button is used to create a button in the window. It takes several parameters like text (the value of the button), fg (color of the text), bg (background color), etc.

Note:- The first parameter of any widget is the parent container in which to place the widget. In the code below, we place widgets in window, top_frame, and bottom_frame.
import tkinter

window = tkinter.Tk()
window.title("GUI")

# creating 2 frames TOP and BOTTOM
top_frame = tkinter.Frame(window).pack()
bottom_frame = tkinter.Frame(window).pack(side = "bottom")

# now, create some widgets in the top_frame and bottom_frame
btn1 = tkinter.Button(top_frame, text = "Button1", fg = "red").pack()    # 'fg - foreground' is used to color the contents
btn2 = tkinter.Button(top_frame, text = "Button2", fg = "green").pack()  # 'text' is used to write the text on the Button
btn3 = tkinter.Button(bottom_frame, text = "Button3", fg = "purple").pack(side = "left")  # 'side' is used to align the widgets
btn4 = tkinter.Button(bottom_frame, text = "Button4", fg = "orange").pack(side = "left")

window.mainloop()
The above code produces the following window.
Now, we will see how to use the fill parameter of pack()
import tkinter

window = tkinter.Tk()
window.title("GUI")

# creating 3 simple Labels containing any text
# sufficient width
tkinter.Label(window, text = "Suf. width", fg = "white", bg = "purple").pack()
# width of X
tkinter.Label(window, text = "Taking all available X width", fg = "white", bg = "green").pack(fill = "x")
# height of Y
tkinter.Label(window, text = "Taking all available Y height", fg = "white", bg = "black").pack(side = "left", fill = "y")

window.mainloop()
If you run the above code, you will get output similar to the following.
6.1. Grid
Grid is another way to organize the widgets. It uses the row and column concept of a matrix. The (row, column) positions in a 2 x 2 matrix look like this:

0 0    0 1
1 0    1 1
See the below example to get an idea of how it works.
import tkinter

window = tkinter.Tk()
window.title("GUI")

# creating 2 text labels and input labels
tkinter.Label(window, text = "Username").grid(row = 0)  # this is placed in 0 0
# 'Entry' is used to display the input-field
tkinter.Entry(window).grid(row = 0, column = 1)  # this is placed in 0 1
tkinter.Label(window, text = "Password").grid(row = 1)  # this is placed in 1 0
tkinter.Entry(window).grid(row = 1, column = 1)  # this is placed in 1 1

# 'Checkbutton' is used to create the check buttons
tkinter.Checkbutton(window, text = "Keep Me Logged In").grid(columnspan = 2)
# 'columnspan' tells to take the width of 2 columns
# you can also use 'rowspan' in the similar manner

window.mainloop()
You will get the following output.
7. Binding Functions
Binding a function means calling that function whenever a particular event occurs.
- In the below example, when you click the button, it calls a function called say_hi.
- Function say_hi creates a new label with the text Hi.
import tkinter

window = tkinter.Tk()
window.title("GUI")

# creating a function called say_hi()
def say_hi():
    tkinter.Label(window, text = "Hi").pack()

tkinter.Button(window, text = "Click Me!", command = say_hi).pack()
# 'command' is executed when you click the button
# in this above case we're calling the function 'say_hi'.

window.mainloop()
The above program will produce the following results.
Another way to bind functions is using events. Events are things like mouse moves, mouse clicks, key presses, scrolling, etc.
The following program also produces the same output as the above one.
The '<Button-1>' parameter of the bind method is the left-click event, i.e., when you click the left mouse button, the bind method calls the function say_hi.
<Button-1> for left click
<Button-2> for middle click
<Button-3> for right click
Here, we are binding the left click event to a button. You can bind it to any other widget you want.
- You will have different parameters for different events
Let's see
import tkinter

window = tkinter.Tk()
window.title("GUI")

# creating a function with an argument 'event'
def say_hi(event):  # you can rename 'event' to anything you want
    tkinter.Label(window, text = "Hi").pack()

btn = tkinter.Button(window, text = "Click Me!")
btn.bind("<Button-1>", say_hi)  # 'bind' takes 2 parameters: 1st is 'event', 2nd is 'function'
btn.pack()

window.mainloop()
8. Mouse Clicking Events
Clicking events are of 3 different types namely leftClick, middleClick, and rightClick.
Now, you will learn how to call a particular function based on the event that occurs.
- Run the following program and click the left, middle and right mouse buttons to call a specific function.
- That function will create a new label with the mentioned text.
import tkinter

window = tkinter.Tk()
window.title("GUI")

# creating 3 different functions for 3 events
def left_click(event):
    tkinter.Label(window, text = "Left Click!").pack()

def middle_click(event):
    tkinter.Label(window, text = "Middle Click!").pack()

def right_click(event):
    tkinter.Label(window, text = "Right Click!").pack()

window.bind("<Button-1>", left_click)
window.bind("<Button-2>", middle_click)
window.bind("<Button-3>", right_click)

window.mainloop()
If you run the above program, you will see a blank window. Now, click the left, middle and right mouse buttons to call the respective functions.

You will get results similar to the following.
9. Classes
Classes come in handy when you're developing large software or anything else that's big.
Let's see how we use Classes in the GUI apps.
import tkinter

class GeeksBro:

    def __init__(self, window):
        self.text_btn = tkinter.Button(window, text = "Click Me!", command = self.say_hi)
        # create a button to call a function called 'say_hi'
        self.text_btn.pack()

        self.close_btn = tkinter.Button(window, text = "Close", command = window.quit)
        # closing the 'window' when you click the button
        self.close_btn.pack()

    def say_hi(self):
        tkinter.Label(window, text = "Hi").pack()

window = tkinter.Tk()
window.title("GUI")
geeks_bro = GeeksBro(window)
window.mainloop()
The above program produces output similar to the following.
10. Drop-Down Menus
I hope all of you know what drop-down menus are. You will create drop-down menus in Tkinter using the Menu class. Follow the steps below to create drop-down menus.
Steps:-
- Create a root menu to hold the sub menus using tkinter.Menu(para); it takes a parameter telling it where to place the menu.
- You have to tell tkinter to initiate the menu using window_variable.config(menu = para); the menu parameter is the root menu you previously defined.
- Now, create sub menus using the same tkinter.Menu(para) method, passing the root menu as the parameter.
- root_menu.add_cascade(label = para1, menu = para2) creates the name of the sub menu; it takes 2 parameters, label (the name of the sub menu) and menu (the sub menu itself).
- sub_menu.add_command() adds an option to the sub menu.
- sub_menu.add_separator() adds a separator line.
Let's see the example to understand it fully.
import tkinter window = tkinter.Tk() window.title("GUI") def function(): pass # creating a root menu to insert all the sub menus root_menu = tkinter.Menu(window) window.config(menu = root_menu) # creating sub menus in the root menu file_menu = tkinter.Menu(root_menu) # it intializes a new su menu in the root menu root_menu.add_cascade(label = "File", menu = file_menu) # it creates the name of the sub menu file_menu.add_command(label = "New file.....", command = function) # it adds a option to the sub menu 'command' parameter is used to do some action file_menu.add_command(label = "Open files", command = function) file_menu.add_separator() # it adds a line after the 'Open files' option file_menu.add_command(label = "Exit", command = window.quit) # creting another sub menu edit_menu = tkinter.Menu(root_menu) root_menu.add_cascade(label = "Edit", menu = edit_menu) edit_menu.add_command(label = "Undo", command = function) edit_menu.add_command(label = "Redo", command = function) window.mainloop()
You will see the output something similar to the following. Click to the File and Edit menus to look at the options.
11. Alert Box
You can create alert boxes in the tkinter using messagebox method. You can also create questions using the messasgebox method.
import tkinter import tkinter.messagebox window = tkinter.Tk() window.title("GUI") # creating a simple alert box tkinter.messagebox.showinfo("Alert Message", "This is just a alert message!") # creating a question to get the response from the user [Yes or No Question] response = tkinter.messagebox.askquestion("Simple Question", "Do you love Python?") # If user clicks 'Yes' then it returns 1 else it returns 0 if response == 1: tkinter.Label(window, text = "You love Python!").pack() else: tkinter.Label(window, text = "You don't love Python!").pack() window.mainloop()
You will see the following output.
12. Simple Shapes
You are going to draw some basic shapes with the Canvas provided by tkinter in GUI.
See the example.
import tkinter window = tkinter.Tk() window.title("GUI") # creating the 'Canvas' area of width and height 500px canvas = tkinter.Canvas(window, width = 500, height = 500) canvas.pack() # 'create_line' is used to create a line. Parameters:- (starting x-point, starting y-point, ending x-point, ending y-point) line1 = canvas.create_line(25, 25, 250, 150) # parameter:- (fill = color_name) line2 = canvas.create_line(25, 250, 250, 150, fill = "red") # 'create_rectangle' is used to create rectangle. Parameters:- (starting x-point, starting y-point, width, height, fill) # starting point the coordinates of top-left point of rectangle rect = canvas.create_rectangle(500, 25, 175, 75, fill = "green") # you 'delete' shapes using delete method passing the name of the variable as parameter. canvas.delete(line1) # you 'delete' all the shapes by passing 'ALL' as parameter to the 'delete' method # canvas.delete(tkinter.ALL) window.mainloop()
You will see the following shapes in your GUI window.
Just run dir(tkinter.Canvas) to see all the available methods for creating different shapes.
13. Images And Icons
You can add Images and Icons using PhotoImage method.
Let's how it works.
import tkinter window = tkinter.Tk() window.title("GUI") # taking image from the directory and storing the source in a variable icon = tkinter.PhotoImage(file = "images/haha.png") # displaying the picture using a 'Label' by passing the 'picture' variriable to 'image' parameter label = tkinter.Label(window, image = icon) label.pack() window.mainloop()
You can see the icon in the GUI.
Now, you're able to:-
- Understand the tkinter code.
- Create frames, labels, buttons, binding functions, events, etc..,
- To develop simple GUI apps.
Now, we're going to create a simple Calculator GUI with all the stuff you have learned till now.
14. Creating Calculator
Every GUI apps include two steps.
- Creating User Interface
- Adding functionalities to the GUI
Let's start creating Calculator.
from tkinter import * # creating basic window window = Tk() window.geometry("312x324") # size of the window width:- 500, height:- 375 window.resizable(0, 0) # this prevents from resizing the window window.title("Calcualtor") ################################### functions ###################################### # 'btn_click' function continuously updates the input field whenever you enters a number def btn_click(item): global expression expression = expression + str(item) input_text.set(expression) # 'btn_clear' function clears the input field def btn_clear(): global expression expression = "" input_text.set("") # 'btn_equal' calculates the expression present in input field def btn_equal(): global expression result = str(eval(expression)) # 'eval' function evalutes the string expression directly # you can also implement your own function to evalute the expression istead of 'eval' function input_text.set(result) expression = "" expression = "" # 'StringVar()' is used to get the instance of input field input_text = StringVar() # creating a frame for the input field input_frame = Frame(window, width = 312, height = 50, bd = 0, highlightbackground = "black", highlightcolor = "black", highlightthickness = 1) input_frame.pack(side = TOP) # creating a input field inside the 'Frame' input_field = Entry(input_frame, font = ('arial', 18, 'bold'), textvariable = input_text, width = 50, bg = "#eee", bd = 0, justify = RIGHT) input_field.grid(row = 0, column = 0) input_field.pack(ipady = 10) # 'ipady' is internal padding to increase the height of input field # creating another 'Frame' for the button below the 'input_frame' btns_frame = Frame(window, width = 312, height = 272.5, bg = "grey") btns_frame.pack() # first row clear = Button(btns_frame, text = "C", fg = "black", width = 32, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_clear()).grid(row = 0, column = 0, columnspan = 3, padx = 1, pady = 1) divide = Button(btns_frame, text = "/", fg = "black", 
width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("/")).grid(row = 0, column = 3, padx = 1, pady = 1) # second row seven = Button(btns_frame, text = "7", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(7)).grid(row = 1, column = 0, padx = 1, pady = 1) eight = Button(btns_frame, text = "8", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(8)).grid(row = 1, column = 1, padx = 1, pady = 1) nine = Button(btns_frame, text = "9", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(9)).grid(row = 1, column = 2, padx = 1, pady = 1) multiply = Button(btns_frame, text = "*", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("*")).grid(row = 1, column = 3, padx = 1, pady = 1) # third row four = Button(btns_frame, text = "4", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(4)).grid(row = 2, column = 0, padx = 1, pady = 1) five = Button(btns_frame, text = "5", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(5)).grid(row = 2, column = 1, padx = 1, pady = 1) six = Button(btns_frame, text = "6", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(6)).grid(row = 2, column = 2, padx = 1, pady = 1) minus = Button(btns_frame, text = "-", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("-")).grid(row = 2, column = 3, padx = 1, pady = 1) # fourth row one = Button(btns_frame, text = "1", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(1)).grid(row = 3, column = 0, padx = 1, pady = 1) two = Button(btns_frame, text = "2", fg = "black", width = 10, height = 
3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(2)).grid(row = 3, column = 1, padx = 1, pady = 1) three = Button(btns_frame, text = "3", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(3)).grid(row = 3, column = 2, padx = 1, pady = 1) plus = Button(btns_frame, text = "+", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("+")).grid(row = 3, column = 3, padx = 1, pady = 1) # fourth row zero = Button(btns_frame, text = "0", fg = "black", width = 21, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(0)).grid(row = 4, column = 0, columnspan = 2, padx = 1, pady = 1) point = Button(btns_frame, text = ".", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click(".")).grid(row = 4, column = 2, padx = 1, pady = 1) equals = Button(btns_frame, text = "=", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_equal()).grid(row = 4, column = 3, padx = 1, pady = 1) window.mainloop()
See the Calculator which you have made. If you have any doubt in the above code, feel free to ask in the comment sections.
15. EndNote
This tutorial is not meant to teach you the complete tkinter like documentation. But,
What you have learned in this tutorial is fair enough to make some simple GUI apps. You have to learn more methods for styling and interaction with the objects in GUI.
You can't find the complete documentation of tkinter anywhere. But, learn it by doing and asking someone who knows tkinter well.
Hope you enjoyed this tutorial. Happy Coding!
If you would like to learn more about Python, take DataCamp's Python Data Science Toolbox (Part 1) course. | https://www.datacamp.com/community/tutorials/gui-tkinter-python | CC-MAIN-2019-35 | refinedweb | 3,314 | 60.11 |
Please note: Answer edited after Xavier's Answer
I am trying to use different Build Flavors for one same Application project in Android Studio. However, I seem to be having a terrible time configuring it to work appropriately.
Steps:
productFlavors {
flavor1 {
packageName 'com.android.studio.test.flavor1'
}
flavor2 {
packageName 'com.android.studio.test.flavor2'
}
}
com.foo.test
src
src/flavor1/java/com/foo/test/MainActivity.java
If you got in the Studio preferences, under the Gradle section, you can enable auto-import for your project (we'll enable this by default later). This will let Studio re-import your build.gradle whenever you edit it.
Creating flavors doesn't mean you're going to use custom code for them so we don't create the folders. You do need to create them yourself.
If you look at my IO talk you'll see how we mix in together values from the flavors and build type to create the variant.
For the Java source:
src/main/java src/flavor1/java src/debug/java
are all 3 used to create a single output. This means they can't define the same class.
If you want to have a different version of the same class in the two flavor you'll need to create it in both flavors.
src/flavor1/java/com/foo/A.java src/flavor2/java/com/foo/A.java
And then your code in src/main/java can do
import com.foo.A
depending on the flavor selected, the right version of com.foo.A is used.
This also means both version of A must have the same API (at least when it comes to the API used by classes in src/main/java/...
Edit to match revised question
Additionally, it's important to put the same A class only in source folders that are mutually exclusive. In this case src/flavor1/java and src/flavor2/java are never selected together, but main and flavor1 are.
If you want to provide a different version of an activity in different flavor do not put it in src/main/java.
Do note that if you had 3 flavors and only wanted a custom one for flavor1, while flavor2 and flavor3 shared the same activity you could create a common source folders for those two other activities. You have total flexibility in creating new source folders and configuring the source set to use them.
On to your other points:
It's normal that the 2nd flavor source folder is not blue. You need to switch to the 2nd flavor to enable it, and then you'll be able to create packages and classes inside. Until then, Studio doesn't consider it to be a source folder. We'll hopefully improve this in the future to make the IDE aware of those unactive source folders.
I think it's also normal that you can't create resource files in the res folder. The menu system hasn't been updated to deal with all these extra resource folders. This will come later. | https://codedump.io/share/NMQ02CvavEw6/1/using-build-flavors---structuring-source-folders-and-buildgradle-correctly | CC-MAIN-2017-26 | refinedweb | 507 | 56.66 |
\\\nAfter joking for a long time that I'd learn Kubernetes, I finally got serious about doing it two weeks ago. I had a chat with my boss and he encouraged me to learn something outside my domain, so it seemed like a good opportunity to finally knuckle down and get my head around it.\n\n\\\n \n\nI'm a self-taught coder. Most of my foundational knowledge has come from spending nights at my computer reading documentation, watching tutorials online, and getting hands-on. I've always been a learner that shares, meaning when I learn something technical, I like to write a blog post about it. Partly so when I have to look it up again I can easily search for it, but also so I can share with others what I've learned in the hope that it helps anyone who happens to find it.\n\n\\\n \n\nI'm still in the early stages of my Kubernetes journey, but I've set myself a goal of obtaining my Certified Kubernetes Application Developer (CKAD) certification this year. That's a long way off as I'm still getting my head around the basics, but I've set myself the goal as something to aim for.\n\nWith that goal in mind, my short-term goal is to get my head around the basics and create a series of blog posts covering each topic.\n\n\\\nSo with that intro out of the way, let's dive into our first topic, Pods!\n\nIn this blog post, we'll cover:\n\n\\\n* What are Pods?\n* What's the difference between single container and multi-container Pods?\n* How can we create Pods in Kubernetes?\n* How can we deploy and interact with our Pods using kubectl?\n* How can we ensure that our Pods in Kubernetes are healthy?\n\n## What are Pods?\n\nPods are Kubernetes Objects that are the basic unit for running our containers inside our Kubernetes cluster. In fact, Pods are the smallest object of the Kubernetes Model.\n\nKubernetes uses pods to run an instance of our application, and a single pod represents a single instance of that application. 
We can scale out our application horizontally by adding more Pod replicas.\n\n\\\nPods can contain a single or multiple containers as a group that shares the same resources within that Pod (storage, network resources, namespaces). Pods typically have a 1-1 mapping with containers, but in more advanced situations, we may run multiple containers in a Pod.\n\nPods are ephemeral resources, meaning that Pods can be terminated at any point and then restarted on another node within our Kubernetes cluster. They live and die, but Pods will never come back to life.\n\n \n\nPod containers will share the name network namespace and interface. Container processes need to bind to different ports within a Pod, and ports can be reused by containers in separate containers. Pods do not span nodes within our Kubernetes cluster.\n\n## What's the difference between single container and multi-container Pods?\n\nLike I mentioned before, Pods can contain either a single or multiple containers.\n\n\\\nRunning a **single container in a Pod** is a common use case. Here, the Pod acts as a wrapper around the single container, and Kubernetes manages the Pods rather than the containers directly.\n\n\\\nWe can also run **multiple containers in a Pod**. Here, the Pod wraps around an application that's composed of multiple containers and shares resources.\n\n\\\nIf we need to run multiple containers within a single Pod, it's recommended that we only do this in cases where the containers are tightly coupled.\n\n## How can we create Pods in Kubernetes?\n\nWe can define Pods in Kubernetes using YAML files. Using these YAML files, we can create objects that interact with the Kubernetes API (Pods, Namespace, Deployments, etc.). 
Under the hood, kubectl converts the information that we have defined in our YAML file to JSON, which makes the request to the Kubernetes API.\n\n\\\nI'm a fan of YAML, it's easy to understand what's going on, and thanks to extensions in tools like Visual Studio Code, they're easy to create and manage.\n\n\\\nI get that indentation can be a pain. I use the [YAML code extension]() that Red Hat has developed in Visual Studio code to help me write my YAML files. For intellisense for Kubernetes, I use the [Kubernetes code extension]().\n\n\\\nLet's take a look at an example YAML definition for a Kubernetes Pod:\n\n\\\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-2\n labels:\n name: nginx-2\n env: production\nspec:\n containers:\n - name: nginx\n image: nginx\n```\n\n\\\nLet's break this down a bit. To create Kubernetes objects using YAML, we need to set values for the following fields.\n\n\\\n**apiVersion** - This defines the Kubernetes API version that we want to use in this YAML file. You can read more about API versioning in Kubernetes [here]().\n\n**kind** - This defines what kind of Kubernetes object we want to create.\n\n**metadata** - This is data that helps us uniquely identify the object that we want to create. Here we can provide a name for our app, as well as apply labels to our object.\n\n**spec** - This defines the state that we want or our object. The format that we use for spec. For our Pod file, we have provided information about the containers that we want to host on our Pod.\n\n\\\nTo see what else we can define in our Pod YAML file, [this documentation from Kubernetes]() will help you.\n\n## How can we deploy and interact with our Pods using kubectl?\n\nThere are a few ways that we can use kubectl to deploy and interact with our Pods!\n\n\\\n*Wait, what is kubectl?*\n\n\\\nkubectl, (kube-control, or as some people call it, kube-cuddle) is the Kubernetes command-line tool. 
It allows us to run commands against Kubernetes clusters.\n\n\\\n \n\nWith kubectl, we can create a Pod using our YAML definition file like so:\n\n\\\n```bash\nkubectl apply -f mypod.yaml\n```\n\n\\\nWe can list all of our Pods like so:\n\n\\\n```bash\nkubectl get pods\n```\n\n\\\nWe can expose a container port externally using kubectl. Remember, by default, Pods and Containers are only accessible within the Kubernetes Cluster. Using Kubectl, we can run the following command:\n\n\\\n```bash\nkubectl port-forward mypod 8080:80\n```\n\n\\\nWe can also delete the pod. We can do this by deleting the pod directly like so:\n\n\\\n```bash\nkubectl delete pod mypod\n```\n\n\\\nThis will cause the Pod to be destroyed and created. We can also delete the Deployment that manages the Pod (I'll talk about Kubernetes Deployments in a future post) like so:\n\n\\\n```bash\nkubectl delete deployment mydeployment\n```\n\n## How can we ensure that our Pods in Kubernetes are healthy?\n\nKubernetes relies on Probes to determine whether or not a Pod is healthy. Probes are diagnostic operations that are performed periodically by the Kubelet on containers.\n\n\\\nThere are three types of Probes:\n\n* **Liveness Probes** - These are used to determine if a Pod is healthy and running as expected. If the liveness probe fails, kubelet will kill the container, and the container will then restart according to its defined policy (We can define these as either Always, OnFailure, and Never)\n* **Readiness Probes** - These are used to determine whether or not a pod should receive requests. If the probe fails, the endpoints controller removes the IP address of the Pod from the endpoints of all Services that match the Pod.\n* **Startup Probe** - This is used to determine whether the container has started. All other probes are disabled if a startup probe is provided until it succeeds.\n\n\\\nTo perform diagnostic checks, Kubelet will call a Handler that has been implemented by the container. 
These types of handlers are:\n\n\\\n* **ExecAction** - This executes an action inside the container\n* **TCPSocketAction** - TCP checks are performed against the container’s IP address on a specified port.\n* **HTTPGetAction** - HTTP Get requests are performed against a Pods IP address on a specified port and path.\n\n\\\nThese probes can result in either a *Success*, *Failure,* or *Unknown* result.\n\n## Conclusion\n\nHopefully, this article has helped you understand the basics of Pods in Kubernetes!\n\nIf you want to learn more about Pods in Kubernetes, these are the resources I used to get my head around them:\n\n\\\n* [Official Kubernetes Pod Documentation]()\n* [Kubernetes Pod YAML Reference]()\n* [Pod Lifecycle]()\n\n\\\nHappy Coding 💻👩💻👨💻\n\n\\\nAlso published [here](). | https://hackernoon.com/understanding-pods-in-kubernetes | CC-MAIN-2021-39 | refinedweb | 1,512 | 62.98 |
On 3/17/06, A.M. Kuchling <amk at amk.ca> wrote: > Thought: We should drop all of httplib, urllib, urllib2, and ftplib, > and instead adopt some third-party library for HTTP/FTP/whatever, > write a Python wrapper, and use it instead. (The only such library I > know of is libcurl, but doubtless there are other candidates; see > for a list.) > > Rationale: > > * HTTP client-side support is pretty complicated. HTTP itself > has many corners (httplib.py alone is 1420 lines long, and urllib2 > is 1300 lines). > > * There are many possible permutations of proxying, SSL on/off, > and authentication. We probably haven't tested every permutation, > and probably lack the volunteer effort to test them all. > If you search for 'http' in the bug tracker, you find about 16 or so > bugs submitted for httplib/urllib/urllib2, most of them for one > permutation or another. > > With a third-party library, the work of maintaining RFC compliance falls > to someone else. > > * A third-party library might support more features than we have time > to implement. > > A downside: these libraries would be in C, and might be the source of > security bugs. Python code may be buggy, but probably won't fall prey > to buffer overflow. We'd also have to keep in sync with the library. > There is also the issue that PyPy could have problems since they have always preferred we keep pure Python versions of stuff around when possible (I assume IronPython has .NET Internet libraries to use). > Similar arguments could be made for a server-side solution, but here I > have no idea what we might choose. A server-side HTTP implementation > + a WSGI gateway might be all that Python 3000 needs. > > Good idea? Dumb idea? Possibly good. We have the precendent of zlib, expat, etc. The key is probably the license is compatible with ours (which libcurl seems to be: MIT/X derivative). I know that having fixed urllib bugs I sure wouldn't mind if I didn't have to read another RFC on URL formats. 
=) But maybe this also poses a larger question of where for Py3K we want to take the stdlib. Ignoring its needed cleanup and nesting of the namespace, do we want to try to use more external tools by importing them and writing a Pythonic wrapper? Or do we want to not do that and try to keep more things under our control and go with the status quo? Or do we want to really prune down the stdlib and use more dynamic downloading ala Cheeseshop and setuptools? I support the first even though it makes problems for PyPy since it should allow us to have more quality code with less effort on our part. I also support the second since we seem to be able to pull it off. For the third option I would want to be very careful with what is and is not included since Python's "batteries included" solution is an important part of Python and I would not want that to suffer. -Brett | https://mail.python.org/pipermail/python-dev/2006-March/062544.html | CC-MAIN-2016-50 | refinedweb | 507 | 71.44 |
So I have tried this script which kinda makes my way more hard, I want to write Funding details more simplified, like one single row for each year(or any other good way to present it)
import csv
import json
json_data = [{"Founded": "2004", "Company": "InsideSales.com","Funding": "$201.2M", "Funding Details": [[" 2015", "$60.65M "], [" 2014", "$100M "], [" 2013", "$36.55M "], [" 2012", "$4M "]]}]
f = csv.writer(open("CRUNCHBASEDATA.csv", "wb+"))
f.writerow(['Founded','Company','Funding','Funding Details'])
for obj in indata:
f.writerow([obj['Founded'],obj['Company'],obj['Funding'],obj['Funding Details']])
[[u' 2015', u'$60.65M '], [u' 2014', u'$100M '], [u' 2013', u'$36.55M '], [u' 2012', u'$4M ']]
Untested, but rearrange your data. Something like
funding_details = json_data[0]["Funding Details"] ... f.writerow( ... 'Funding', 'Funding_2014', 'Funding_2013', ... ) f.writerow( ... obj['Funding'], funding_details[0][1], funding_details[1][1], ... )
If the funding years are not always 2014, 2013, 2012 in that order you may need to build a dict with entries for the years of interest set to blank, then overwrite the ones that are present:
funding = { u' 2014':'', u' 2013':'', u' 2012':'', u' 2011':'' } for elem in funding_details: funding[ elem[0] ] = elem[1]
and then the data you pass to writerow will include
funding[ u' 2014'] etc. which will still be blank if it did not exist in the JSON funding_details list.
You might also want to investigate
csv.dictwriter | https://codedump.io/share/Jvvw1Z5luscF/1/converting-a-json-to-csv-a-bit-tedious-one | CC-MAIN-2017-09 | refinedweb | 227 | 67.45 |
Unit testing is a way through which your small block of code. Unit testing is not used to find the bugs. It allows the developers to test their deviation from their goal/ aim or to check whether the method performs the correct computation or it is calculating required result or not.
You will learn that how to write a unit test case using junit frame work.
There are some naming convention which should be followed during writing a test case. When when you are going to write a test case for a class then you should write the class name ending with 'Test' word. for example the test class name for class Example should be ExampleTest. If you write like this then it will not give an error. When you write a test case for the method then the method name should be start with the key word 'test'. for example if you want to write a test case for method 'add' then the name of this method in the Test class should be testAdd. The test word is added before the Method name and the fist letter of the method name becomes capital. If this 'test' precede before the method name, then unit testing will not formed with this method.
Before writing a Test Case you must download and include the junit jar file. You can download jar file from Junit site and include this file into your project.
Now write a test class and extend the class TestCase of junit.framework.TestCase. The class TestCase extends the class Assert and implements the interface Test. The Assert class consist a set of assertion method which is used to write a test case, such as assertTrue(........), assertEquals(........) etc.
Then define a instance variable, and store the state of fixture in this variable.
Override the method setUp() and tearDown() of class TestCase. Use setUp() method to initialize the fixture state and use tearDown() method to clean up the test.
Now consider a simple java class which has a method addString, this method adds two string.
ExampleClass.java
package net.roseindia; public class ExampleClass { static String s; public static String adString(String x, String y) { s = x.concat(y); return s; } }
Now the test class for this is
ExampleClassTest.java
package net.roseindia; import junit.framework.TestCase; public class ExampleClassTest extends TestCase { String fistString; String secondString; String finalResult; String result; @Override protected void setUp() throws Exception { // Initializing Values fistString = "rose"; secondString = "india"; finalResult = "roseindia"; // Calculating Value from method result = ExampleClass.adString(fistString, secondString); super.setUp(); } public void testSum() { // Testing Value assertEquals(result, finalResult); } @Override protected void tearDown() throws Exception { // TODO Auto-generated method stub super.tearDown(); } }
When you will run this program using eclipse as a Junit Test case, the output will look like this.
You can download the source code and configure it into your eclipse.
Download Select Source Code | http://www.roseindia.net/struts/struts/struts2.2.1/writingunittests.html | CC-MAIN-2014-10 | refinedweb | 479 | 65.32 |
----------------------------------------------------------- This is an automatically generated e-mail. To reply, visit: -----------------------------------------------------------
Ship it! This is great! Thanks Jojy! docs/containerizer.md (lines 23 - 24) <> I wouldn't claim it 'the best'. Maybe one of the best is more appropriate if you really want to say it. docs/containerizer.md (lines 53 - 54) <> What does that mean? An agent is either running on Linux or running on OSX, how is that possible? docs/containerizer.md (line 66) <> I won't mentionen the name 'isolators' as it's confusing with our isolators in Mesos containerizer. docs/containerizer.md (line 75) <> Mention that this is the native Mesos container solution. s/isolators/pluggable isolators/ docs/containerizer.md (line 82) <> e.g. cgroups/namespaces s/Use fine grined/Want fine grined/ docs/containerizer.md (line 83) <> s/Use/Want/ docs/containerizer.md (lines 84 - 85) <> s/filesystem/storage/disk/ docs/containerizer.md (line 86) <> I would combine this with the first bullet: ``` Allow Mesos to control the task's runtime environment without depending on other container technologies (e.g., docker). ``` - Jie Yu On Dec. 29, 2015, 4:36 p.m., Jojy Varghese wrote: > > ----------------------------------------------------------- > This is an automatically generated e-mail. To reply, visit: > > ----------------------------------------------------------- > > (Updated Dec. 29, 2015, 4:36 > > | https://www.mail-archive.com/reviews@mesos.apache.org/msg19222.html | CC-MAIN-2017-26 | refinedweb | 204 | 54.49 |
In, you are given a working code that produces a simple counter which can be incremented or decremented via two buttons on an HTML page.
Your task is to add a new feature to this code, which is to create a new button that will reset the counter. Following you will find a proposed solution to the task. What we did was to create a new message which when received it would zero the model value and then we attached it to a new button we created.
The full solution is as follows and we highlighted all the lines that were added for this additional functionality to work:
import Html exposing (Html, button, div, text) import Html.App as Html import Html.Events exposing (onClick) main = Html.beginnerProgram { model = model , view = view , update = update } -- MODEL type alias Model = Int model : Model model = 0 -- UPDATE type Msg = Increment | Decrement | Reset update : Msg -> Model -> Model update msg model = case msg of Increment -> model + 1 Decrement -> model - 1 Reset -> 0 -- VIEW view : Model -> Html Msg view model = div [] [ button [ onClick Decrement ] [ text "-" ] , div [] [ text (toString model) ] , button [ onClick Increment ] [ text "+" ] , button [ onClick Reset ] [ text "Reset" ] ]
You can download the solution from here 1-button.elm (compressed) (287 downloads)
This post is also available in: Greek
Oh wow thank you! I thought I kept trying to set “model = 0” and the compiler would fail.
Cheers | https://bytefreaks.net/programming-2/elm/an-introduction-to-elm-series-solution-to-buttons-example | CC-MAIN-2020-29 | refinedweb | 231 | 57 |
Free
Remarks:
Due to the length of this article, if you don't have enough time or want to check the detailed functions of FreeHttp for the time being, it is recommended that you read Chapter 6 [Six: Quick Start] , Chapter 7 [Seven: Simple Practice] (This 2 Chapters can help you quickly understand the basic functions of FreeHttp)
If you are interested in the FreeHttp code implementation, or if you plan to modify the FreeHttp functionality, you can find the relevant content (source code address and engineering structure) at the end of the chapter [Implementation and Source Code]
The basic operation interface is as shown below and is mainly divided into 5 parts...
(As shown above: select fiddler default update session, click the get button, the yellow area is the obtained information)
Indicates the url matching method (matches the content in the rear text box), supports Contain, StartWith, Is, Regex, AllPass
(As shown above: when the mouse hovers over the area, there will be a matching mode prompt).)
Confirm the creation of a new rule in the create mode
Confirm saving the current rule in edit mode
This button has the same meaning as the confirmation button in the "Rules Editing Control Bar" below.. )
Because the header field name in the RFC@2616 request header is not case sensitive, the meaning of host and hoST is the same. Once the rule is matched, the host header in the request header will be removed..
For detailed usage, please see [8: Parameterized Data Settings] (Understanding the setting of parameterized data will not affect the main function of using freehttp)
If you are familiar with the Http original message, you can click on the icon in the figure below to enter the raw mode and edit the original message to be replaced.)
Raw mode supports all the functions of request replace above, including parameterized data that will be introduced in [8: Parameterized Data Settings] .")
The editing area is used to control the head header that matches the http response, and deletes the specified response head.
The editing logic is consistent with the removal of the request header in [2.1.2], and the description will not be repeated here.
The editing area is used to control the head of the modified http response, and the specified response head is added.
The editing logic is consistent with the addition of the request header in [2.1.3], and the description will not be repeated). )
The rule control edit bar consists of 3 parts, as shown in Figure 1, 2, 3, 4, rule control, 5 quick rule editing, 6 tampering tools and general settings.)
The "cancel edit" function is relatively simple and is only used to clear the information saved in the editing area.
Clear the information directly in the creation mode, and cancel the editing status of the current rule in the edit mode.
.
For detailed usage, please see [8: Parameterized Data Settings] (Understanding the setting of parameterized data will not affect the main function of using freehttp)
The current version has a total of 6 quick rules to help you quickly complete the tampering rules settings.)
The quick rule is for Request Modific, which can add a specified UserAgent to the request matching the rule.
As shown in the above figure, the item is relatively simple, just fill in the UserAgent you need.
There are 4 tools in the current version for your convenience or other settings..
This item provides some basic settings for the FreeHttp plugin.
This item provides centralized management of parameterized data for FreeHttp
After selecting this item, pop-up layer manager window, you can add, modify, debug, etc. parameters in the manager.
The following [eight: parameterized data settings] will introduce the use of parameterized data in detail, here is not specified)
This area only displays the operation of the tampering rule and the execution log.
The log unified format starts with data and uses color to distinguish between errors, prompts, and information logs..
The above "parameter data manage" is mainly divided into the above three parts.
1: parameter data manage category (click on different categories to switch between lists))
The parameters you add in the parameterized data manager can be used directly in the "Request Replsce", "Response Replace" rules.)
As shown in the above figure, you can drag and drop the parameters you want in the parameter manager to any part of the editing area. It will also be added automatically for you (the parameters added by dragging and dropping are all "next". , you can modify it manually) (-)
Finally, use str-len to get the isBeta parameter in the request line as shown above. Click OK after completion..
As shown above, you are using a browser (valid for any client that uses the fiddler proxy) to access
You can see that the interface that does not exist has returned the data as expected, and the name was successfully removed.
After downloading the project and loading it successfully, you can see the basic structure as shown above.
The following is an overview of the features of the main namespace in the figure.
You can modify the code of each part of FreeHttp directly according to your needs to modify or extend the function of FreeHttp, so that he can better meet your individual needs.
If you find any questions or comments, please contact me at (you may also contact mycllq@hotmail.com to submit your questions or suggestions)
(Reference from and translate by Google )。
A. Cutting Banner time limit per test:2 seconds memory limit per test:256 megaby...
扫码关注云+社区
领取腾讯云代金券
我来说两句 | https://cloud.tencent.com/developer/article/1408554 | CC-MAIN-2021-04 | refinedweb | 933 | 55.98 |
:
This looks like a great approach. Leave comments on Mitch’s individual posts if you’ve got stuff to add and get ready to get certified!
This.
Updated – Added Hobart Community LaunchUpdated:
Another meeting this evening where we kind of covered off some of what was left after the first three sessions.
Transcript
Skills being measured
This just in from Brian Goldfarb, Technical Product Manager on ASP.NET
WeRead this article to get started ( seriously :)!)Provide feedback at the forum / get your questions answered
Update: Scott Guthrie’s also posted some more about this tool:
You can also see a walkthrough and description of it here:
Co.
We got "together" again last night for our third study group session. Once again, I've posted the transcript. This is also linked from our Skills Being Measured page.
I’d have to say we didn’t get nearly as far as we might have this evening. I’m not sure whether this was due to a McFlurry incident early in the proceedings, or just a general Monday-itis, but we covered off some of the ADO.NET stuff.
Next session is this Wednesday at 1930 (Sydney time – UTC+11)
I’ve also just posted my findings on the AdRotator and Dynamic Image Generation.
As part of my study group homework, I promised to bone up on the AdRotator class in ASP.NET2.0 and report back to the group. Turns out this lead to another cool thing that I needed to report back on - dynamic image generation.
The source code I’ll refer to throughout this post is available for download. Of course, it’s intended as a sample only. Use it at your own risk, no warranty expressed or implied, contents may be hot. Don’t say you weren’t warned. The source code demonstrates the 3 modes of populating an AdRotator control and a technique for dynamically displaying images stored as binary data in fields of a database.
The AdRotator class basically allows you to display a random image (more on where images come from and how "random" works in a bit) with an associated bit of ALT text and (optionally) a hyperlink from the image to somewhere else. You’ve seen this kind of thing on web pages before, in fact it goes back to ASP.NET 1.0 and has been implemented in any number of other web frameworks as well. Since we’ve been studying for the ASP.NET 2.0 exam, that’s what I’ll cover here.
AdRotator populates its image and navigation properties in 3 different ways, which I’ll call “Database”, “XML” and “Programmatic” for reasons that should become obvious, if they’re not already.
In database mode, an AdRotator is linked to a DataSource that needs, at the very least, a column for the URL of the image, a column for the URL to navigate to when the imaged is clicked and a column for the string to be displayed in the ALT tag of the image. In the MSDN documentation (online and offline), the full allowable schema is listed and includes some compulsory fields:
Note that the name of these fields can be overridden in the AdRotator by setting the ImageURLField, NavigateURLField and AlternateTextField properties to the appropriate field name.
In addition, the following fields are optional.
Of course, with the cool data handling functionality of data sources in ASP.NET 2.0, you don’t need a table anything like this, you can pull the data into the appropriate structure with a SQL statement in the DataSource itself. So, if you were dynamically displaying the thumbnail image from the ProductPhoto table in the AdventureWorks sample database, allowing a click to redirect to the LargePhoto image in the same table (again, dynamically rendered) and taking a description from the Product table for the Alternative text (OK, from now on I’ll call it Alternate text and grind my teeth), you could use something like
<asp:SqlDataSource</asp:SqlDataSource>
Which produces a resultset with the 4 required columns. Of course, writing SQL statements like that is much easier when you do it like this:
The AdRotator control itself then looks like this:
<asp:AdRotator
All of the properties (except the DataSourceID and the name, of course) are left at their default values because I've used the default names for the fields in the DataSource definition.
The next way to populate the fields of an AdRotator is to use a static XML file. It has a very similar structure to the schema of the table required for the Database method. All of the details of the schema are available in MSDN (online and offline). There’s an example file included in the accompanying downloads. Here's an extract (of a file called Advertisments.xml):
<Ad> <ImageUrl>~/ProductThumbNail.ashx?ID=70</ImageUrl> <NavigateUrl>~/ProductPhoto.ashx?ID=70</NavigateUrl> <AlternateText>Product Number 70</AlternateText> <Impressions>100</Impressions></Ad>
<
And the AdRotator looks like this
<asp:AdRotator
Again, the default values work here because I’ve followed the naming conventions.
The third (and potentially most interesting from a developers’ point of view, although the database method is pretty cool too) is to populate the properties programmatically whenever the AdRotator AdCreated event fires (occurs once per round trip to the server after the creation of the control, but before the page is rendered). The event handler is passed an AdCreatedEventArgs object. You set the 3 properties, ImageURL, NavigateURL and AlternateText (familiar looking names aren’t they) and they are used to render the control.
The AdRotator control looks like this:
<asp:AdRotator
And in the code-behind file (mine's in C#, but VB.NET would work just as well), I just wired up an event handler (via the properties grid in my case, because I'm lazy)
protected void AdRotator3_AdCreated(object sender, AdCreatedEventArgs e){ // Just for this example - see the download for a different way of using ProductID int ProductID = 1; e.ImageUrl = "~/ProductThumbNail.ashx?ID=" + ProductID.ToString(); e.NavigateUrl = "~/ProductPhoto.ashx?ID=" + ProductID.ToString(); e.AlternateText = "Programatically generated text for product ID " + ProductID.ToString();}
This is potentially very powerful. You could do real-time image targeting based on the navigation history of this user, based on personalisation they’ve done or any number of other criteria.
Up until now, I’ve glossed over the fact that the AdRotator class expects URLs pointing to images, not byte streams or arrays of binary data, or anything else that might represent an image. This potentially presents a problem when (as we have in our example), the images are stored as binary data in a database file. You don’t want to have to write them out to disk, where they’ll take up as much space again and need to be kept in sync with the database version. Up until Beta 1 of VS2005, it looked like this was going to be very easy, just use the Dynamic Image Control. Alas, it was dropped between Beta 1 and Beta 2 (see). Fortunately, that very same page lists some code that can be adapted to do exactly what we need.
Essentially, we need to create an HTTP Handler class that intercepts calls to a particular URL and returns the image in a format that the browser expects. There are docs on the IHTTPHandler interface (which the specialised dynamic image generation classes in the example implement) in the MSDN docs (online and offline). HTTP handlers give you a means of interacting with the low-level request and response services of the IIS Web server and provide functionality much like ISAPI extensions but with a simpler programming model.
Here’s the code for one of the Handler classes (ProductThumbNail.ashx). The other one is identical, except a different TableAdaptor is used to retrieve the bigger photo.
<%@ WebHandler Language="C#" Class="ProductThumbNail" %>
using System;using System.Drawing;using System.Drawing.Imaging;using System.IO;using System.Web;using System.Web.Caching;using System.Data;using System.Data.Sql;using System.Web.UI.WebControls;using System.Web.Configuration;using System.Configuration;using System.Collections;
public class ProductThumbNail : IHttpHandler{ public void ProcessRequest (HttpContext context) { // Get the image ID from querystring, and use it to generate a cache key. String imageID = context.Request.QueryString["ID"]; String cacheKey = context.Request.CurrentExecutionFilePath + ":" + imageID; Byte[] imageBytes; // Check if the cache contains the image. Object cachedImageBytes = context.Cache.Get(cacheKey); if (cachedImageBytes != null) { imageBytes = (Byte[])cachedImageBytes; } else { GetImageTableAdapters.ThumbNailTableAdapter oTA = new GetImageTableAdapters.ThumbNailTableAdapter(); GetImage.ThumbNailDataTable oDT = oTA.GetData(int.Parse(imageID)); if (oDT.Rows.Count == 0) { // MAGIC NUMBER! // Photo ID 1 displays "No Image Available" image oDT = oTA.GetData(1); } imageBytes = (byte[])oDT.Rows[0]["ThumbNailPhoto"]; //; } }}
<%
using
public
The important method to implement in the IHTTPHandler interface is ProcessRequest(). This gives you low-level access to the context object which, in turn, gives to access to the Request and Response objects so you can get the ID passed as a query string in the URL (using Request) and shove stuff back out with the Response object. The stuff to shove back out is just a byte array grabbed from the binary column in the table. Notice, I’ve just grabbed the contents of the ThumbNailPhoto column in the first row and cast it to a byte array.
To send the image back, just set the content type (note that this could come from the table as well, or could be dynamically determined from the header in the binary if required), tell the Response object just to stream the bits back, don't wait for the stream to close, and then just write the byte array to the output stream.
A couple of other things to note here (and I’m not claiming they’re best practice – comments always welcome):
Notice that instead of programatically creating connections and commands, I’ve opted to create a dataset (GetImage.xsd in the App_Code folder) with a couple of table adaptors. The naming convention adopted by ASP.NET gets me every time I use these. The TableAdaptors are in the <DataSetName>TableAdaptors namespace and aren’t available to intellisense until you’ve built the project after the xsd is created. The data tables are classes of the <DataSetName> class. So in this case The GetImageTableAdaptors namespace contains the ThumbNailTableAdaptor and the ProductPhotoTableAdaptor classes. The GetImage class has two properties – instances of the ThumbNail and ProductPhoto classes.
Configuring these datasets is heaps easier than remembering how to use them. Just add a DataSet to the project and the first TableAdaptor is added automatically. I added a very simple SQL statement with a parameter (the ProductPhotoID) to return a single row. To add the other, just right-click on the xsd design surface and choose Add TableAdaptor.
Note that I’ve opted for the very simplest Table Adaptors – no INSERT, UPDATE or DELETE clauses. These are only ever designed to be read-only.
This bit was already in the code on the MSDN site about the DynamicImageControl being pulled, but I left it in there because it seemed like such a good idea. The cache engine takes a key to see whether it has already looked up the image and stored it for future use. In this case I’ve left the cache duration at 2 hours. Obviously, you’d massage this based on how often images were likely to change.
I2. Highlight news and tips from around the world3. Present interviews with both local and international VFP developers
It has three main aims:1. Keep the local Australian developer community informed2. Highlight news and tips from around the world3..
Update – Fixed the transcript link
We got "together" again last night for our second study group session. Once again, I've posted the transcript. This is also linked from our Skills Being Measured page. | https://blogs.msdn.com/b/acoat/archive/2005/11.aspx?Redirected=true&PostSortBy=MostRecent&PageIndex=1 | CC-MAIN-2015-48 | refinedweb | 1,970 | 53.61 |
If the first run of the type checker succeeds, then running it again but with refined types will presumably also succeed, and will not achieve anything. Whitebox macros are useful when they are needed for the (first) type-checking phase to complete successfully – i.e., they guide the type checker for the right types and implicits to be inferred.
What kinds of macros should Scala 3 support?
If that would be interesting for the development group, we’d be happy to help run such a more detailed survey on a larger scale.
It would definitely be interesting!
I was wondering what would the best method here be, and I suppose any automated means of checking the source code would raise both privacy concerns and be hard to do accurately (because of transitive dependencies).
So a self-reported usage report would be the way to go. We could ask people which macro libraries they (consciously
) use in their projects, with a division between blackbox/whitebox/annotation macros if the library offers multiple. Maybe @olafurpg has a good starting list of such projects? Plus a free-form area where people could state if they have custom macros, and what are the use-cases.
What do you think?
Adam
When designing the research try to keep in mind that, at least in my experience, a lot of scala codebase is close to Java and written and maintained by people not that familiar with advanced topics such as macro and their differentiation. So if you are interested in real statistics, try to keep questions very specific, so the person who answers them doesnt need to know what is whitebox/blackbox macro or if they’re even using macros in the first place
Ah yes, sure, I wouldn’t expect anyone not interested in macros development to know about the difference. What I had in mind is asking about specific features of a library, if it offers both blackbox/whitebox/annotation macros.
@adamw But I think that users would likely not know what kinds of macros they were using. At least not the difference between whitebox and blackbox. Annotation/def macros is easier to answer, so this could be a useful datapoint.
Maybe it’s easier to ask: Can you give us a shortlist of the macros you use most often (name of macro and library where it comes from)? Given the list, we can do the classification ourselves.
This is very true, and I like the question. Perhaps you should even add a list of 10 or 20 “common” macro, because perhaps user don’t really know that what they use is a macro. That’s the counterpart of having macro so nicely integrated, most user may not even they use them.
For ex, from an user point of view, it is not evident that:
def values = ca.mrvisser.sealerate.values[Whatever]
Is a macro call.
Yes, you are right, sorry I didn’t express what I had mind clearly enough before :). As I was replying to @Krever, I thought about explicitly asking about usage of a set of known libraries which use macros (shapeless, circe etc.). Here the data set obtained by @olafurpg might be very helpful.
Plus a free-form field so that people might add to the list if anything is missing.
That looks like a good plan. So we collect a set of libraries that define macros and do are survey which of these libraries are used in their projects?
What if a library defines blackbox and whitebox macros, you won’t know which are being used.
Also how do you collect the set of libraries?
I have authored two open-source Scala projects (
Chymyst and
curryhoward), and in both I use
def macros. Both projects have an embedded DSL flavor, so most likely my perception is skewed as to what features were “important” in def macros. In brief, here is what
def macros do for me - and I don’t see mention of these features in Olafur’s summary:
- Enable compile-time reflection: for example, I can say
f[A => B => Int](x + y)where
fis a macro, and I can use reflection to inspect the type expression
A => B => Intat compile time. Then I can build a type AST for that type expression and compute something (e.g. a type class instance or whatever). Note that
defmacros do not convert type parameters into AST; only the
x+ywill be converted to an AST when the macro is expanded. So, here I am using macros to have a staged compilation, where I use reflection at the first stage, and create code to be compiled at the second stage. (The
curryhowardproject uses this to create code for an expression by using a logic theorem prover on this expression’s type signature.)
- Inspect the name and the type signature of the expression to the left of the equals sign. For example, in the
curryhowardproject I can write
def f[A,B](x: A, y: A => B): B = implementwhere
implementis a macro. In that macro, I can see that the left-hand side is defining a method called
fwith type parameters
Aand
B, return type
B, arguments
x,
yof specific types. I can also inspect the enclosing class to determine that
fis a method of that class, and to see what other methods that class have. Another use of the “left-side inspection” that’s great for DSLs is a construction such as
val x = makeVarwhere
makeVaris a macro that uses the name
xas a string to initialize something, instead of writing
val x = makeVar(name="x"). The result is a more concise DSL.
It would be a pity to see such useful features disappear when the new Scala 3 macro system is created.
On the other hand, I also encountered the breakage in the current
def macros, as soon as I tried to transform ASTs. Even a very simple transformation - such as removing
if true and replacing
{ case x if true => f(x) } by
{ case x => f(x) } - leads to a compiler crash despite all my efforts to preserve the syntax tree’s attributes. It would be good to fix this kind of breakage in the new macros.
Another question: I noticed that type class derivation in Kittens and in Shapeless is limited to polynomial functors. Contravariant functors, for example, are not derived, and generally function types such as A => B are not supported for type class derivation. I wonder if this limitation will continue in Scala 3. I hope not!
The way it seems to me,
' means “quote,” and is a good choice because it’s a quote symbol but is not used in Scala for strings – actually it’s used for
Symbols, which are like code identifiers. So
'{ ... } is completely new syntax – a new kind of literal expression for
Exprs. On the other hand,
~ is a standard prefix operator in Scala (along with
-,
+, and
!), and normally means “bitwise negation.” So it looks like it’s just a method on Expr that “flips” it from an Expr into an actual value.
To summarize for people like me who haven’t done much macro programming and did not understand the docs well:
': Real Thing into Representation of Thing (It now stands for “something”)
~: Representation of Thing into Real Thing (It now is something)
Which fits very much into @nafg’s reasoning for
' and
~.
The real question is what kinds of macros shouldn’t Scala 3 support?
– a whitebox fan
ps) but in today’s racially-charged environment, I’m glad blackbox is catching a break for once. Is there an actual Marvel hero named Blackbox? because there oughtta be.
Thanks, good to know these use cases. The way you describe it, it looks like these would work in the new system. The whole point of exposing Tasty trees is to allow decompositions like the ones you describe.
Hi folks new poster here! just to let you know we (me+NEU/CTU folks) are working on a large-scale analysis of macro usage for exactly this purpose. The idea is to look at how macros (of all kinds) are used in the wild and generate some use cases so we can have an informed decision of how many constructs would be supported by a transition. It seems there is some interest in this already so it would be interesting to hear your thoughts if you haven’t posted already!
Hi there!
We use macros in couple projects:
It allows getting handy and efficient JSON/binary serialization. Here are code and results of benchmarks which compares (in some custom domain) both of them with the best Scala/Java serializers that have binding to Scala case classes and collections:
The most interesting feature that we are waiting for is opaque types (SIP-35) that would be work properly with macros. It would allow us to avoid using annotations (like @named, @stringified, etc.) for case class fields to tune representation properties or binding.
Instead, we want to use some configuration functions for macro calls which will override defaults without touching sources of data structures, like some of these configuration functions:
But instead of strings, they should have some type parameter(s) like it is modeled here: | https://contributors.scala-lang.org/t/what-kinds-of-macros-should-scala-3-support/1850?page=2 | CC-MAIN-2018-22 | refinedweb | 1,541 | 68.91 |
Collaboration Policy (read carefully)

Work alone. You may consult any material you want, including the course notes, text and additional sources (including anything you find on the Web). If you use a source other than the course materials or text, you should cite the source in your answer.

Answer all 10 questions (100 points total). There is no time limit on this exam, but you shouldn't spend more than three hours on it.
You may not discuss this exam with any human, other than the course staff (who will answer clarification questions only). You may not use a Java compiler during this exam, unless you write your own, but you may use ESC/Java, a C compiler and Splint as much as you want.
Subtyping
1. (10) Assume the following code is valid Java code:

    MysteryType1 mt1;
    MysteryType2 mt2;
    ... (anything could be here)
    mt1 = mt2.ossifrage (mt1);

Could MysteryType2 be a subtype of Wombat (as specified below) that satisfies the substitution principle? Explain why or why not clearly.

    public class Wombat extends Object {
       // OVERVIEW: A Wombat represents a confused bat.

       public Wombat ()
          // EFFECTS: Initialize this to a new Wombat.

       public Wombat ossifrage (Object o)
          // EFFECTS: Returns the ossifrage of this and o.
    }

2. (10) What can you determine about the specification of the MysteryType2 class? Be as specific as possible, but do not make any assumptions about the class that are not necessary for the code above to be correct. (Note: you should not assume MysteryType2 is a subtype of the Wombat type mentioned in the previous question.)

3. (10) Recall the SimObject class from Problem Set 5, excerpted and slightly modified below:

    abstract public class SimObject implements Runnable {
       // OVERVIEW: A SimObject is an object that represents a simulator object.
       //    It has a grid (a circular reference to the grid containing this object),
       //    location (integer row and column), initialization state (boolean
       //    that is true if it has been initialized).
       //
       //    A typical SimObject is < grid, row, col, initialized >.
       //
       //    Note: SimObject is an abstract class. The executeTurn method must
       //    be implemented by subclasses.

       //@ghost public boolean isInitialized;

       public SimObject ()
          // EFFECTS: Creates a new, uninitialized SimObject.
          //@ensures !isInitialized
       { }

       public void init (int row, int col, /*@non_null@*/ Grid grid)
          //@requires !isInitialized
          //@ensures isInitialized
          // EFFECTS: Initializes this object with grid at row and col.
       { }

       abstract public void executeTurn()
          //@requires isInitialized
          // EFFECTS: Executes one turn for this object.
       ;

       public /*@non_null@*/ Color getColor ()
          // EFFECTS: Returns the color that represents this SimObject, for
          //    painting the grid.
       { }
    }

Would the SnakeSimObject class specified below satisfy the substitution principle rules to be a subtype of SimObject? For full credit, your answer must clearly explain why or why not.

    public class SnakeSimObject extends SimObject {
       // OVERVIEW: A SnakeSimObject is an object that represents a slimy snake.
       //    It has a grid (a circular reference to the grid containing this object),
       //    location (integer row and column), initialization state (boolean
       //    that is true if it has been initialized), length (integer
       //    that gives the length of the snake), and direction (compass direction
       //    that gives the direction the snake is facing).
       //
       //    A typical SnakeSimObject is < grid, row, col, initialized,
       //    length, direction >, e.g., < grid, 5, 12, true, 4, S >.

       public SnakeSimObject ()
          // EFFECTS: Creates a new, uninitialized SnakeSimObject with
          //    length 3 and facing direction N.
          //@ensures !isInitialized
       { }

       public void init (int row, int col, /*@non_null@*/ Grid grid)
          //@requires !isInitialized
          //@ensures isInitialized
          // EFFECTS: Initializes this snake object with grid at row and
          //    col. If the length of the snake is not positive or is
          //    greater than the number of rows in grid, sets the
          //    length of this to 1.
       { }

       public void executeTurn()
          // REQUIRES: true
          // EFFECTS: Executes one turn for this object. If this is not
          //    initialized (!isInitialized), does nothing. Otherwise,
          //    the snake will move one square in the direction it is
          //    facing.
       { }

       public /*@non_null@*/ Color getColor ()
          // EFFECTS: Returns greenish-brown snake color.
       { }
    }

4. (10) Suppose we changed the specification of the constructor to:

    public SnakeSimObject ()
       // EFFECTS: Creates a new initialized SnakeSimObject at (0,0)
       //    with length 3 and facing direction N.
       //@ensures isInitialized
    { }

Illustrate why this specification would not satisfy the substitution principle by showing a short code fragment where a SnakeSimObject could not be safely used in place of a SimObject and explain what could go wrong.
Concurrency

5. (10) Explain how the implementation below could exhibit a race condition if this code were used as part of a distributed voting machine (where the same Candidate object may be referenced by multiple threads).

     1  public class Candidate {
     2     private String name;
     3     private int votes;
     4
     5     public Candidate (String n) {
     6        // EFFECTS: Initializes this to the candidate named n with 0
     7        //    votes.
     8        name = n;
     9     }
    10
    11     public void tallyVote () {
    12        // EFFECTS: Increases this candidate's vote total by 1.
    13        System.err.println ("Recorded vote for: " + name);
    14        votes = getVotes () + 1;
    15     }
    16
    17     public int getVotes () {
    18        return votes;
    19     }
    20  }

6. (5) Explain what small change you would make to the code to fix the problem you identified in question 5.
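For concreteness, the multi-threaded scenario described in question 5 (several threads referencing the same Candidate object) can be exercised with a small driver. The sketch below is illustrative only: it inlines a simplified copy of the Candidate class (the println is removed so runs stay quiet), and the names, thread count, and iteration counts are arbitrary choices for the demonstration, not part of the exam code.

```java
// Driver that exercises the question-5 scenario: two threads share one
// Candidate object and each calls tallyVote repeatedly.
public class RaceDemo {
    // Simplified inline copy of the exam's Candidate class.
    static class Candidate {
        private String name;
        private int votes;

        Candidate (String n) { name = n; }

        void tallyVote () {
            // Unsynchronized read-modify-write of the shared votes field.
            votes = getVotes () + 1;
        }

        int getVotes () { return votes; }
    }

    // Runs two threads, each tallying perThread votes on the same
    // Candidate, waits for both to finish, and returns the final total.
    static int runDemo (final int perThread) throws InterruptedException {
        final Candidate c = new Candidate ("snake");
        Runnable voter = new Runnable () {
            public void run () {
                for (int i = 0; i < perThread; i++) {
                    c.tallyVote ();
                }
            }
        };
        Thread t1 = new Thread (voter);
        Thread t2 = new Thread (voter);
        t1.start ();
        t2.start ();
        t1.join ();
        t2.join ();
        return c.getVotes ();
    }

    public static void main (String[] args) throws InterruptedException {
        int total = runDemo (100000);
        // If every increment took effect, the total would be 200000; the
        // result of an actual run depends on how the threads interleave.
        System.out.println ("Tallied " + total + " of 200000 votes");
    }
}
```

Because tallyVote is not atomic, the total reported by a run of this driver need not equal the number of tallyVote calls; the driver only sets up the situation the question asks you to analyze.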
7. (15) In Problem Set 5, Question 7, we suggested solving the deadlock where three philosophers interact by adding a rep invariant that requires colleague relationships to be symmetric (that is, if A is B's colleague, B must be A's colleague). Suppose we did not want to place this restriction on colleagues. Instead, we should be able to have three philosophers where A is B's colleague, B is C's colleague and C is A's colleague. Describe a locking strategy that would allow this without ever deadlocking. A good strategy description is worth 10 points, but for full credit, you must show (pseudocode is fine) how you would modify code for the philosophize method below to implement your strategy.

    public void philosophize () {
        ...
    }
Memory Management8. (10) Consider the following classes (similar to Problem Set 4 and Lecture 17):public class Species { // OVERVIEW: Immutable record type for representing a species with a name and genome. /*@non_null@*/ private String name; /*@non_null@*/ private Genome genome; public Species (/*@non_null@*/ String n, /*@non_null@*/ Genome g) { name = n; genome = g; } public /*@non_null@*/ String getName () { return name; } } public class SpeciesSet { // OVERVIEW: SpeciesSets are unbounded, mutable sets of Species. // A typical SpeciesSet is { x1, ..., xn } private Vector els; //@invariant els != null //@invariant els.elementType == \type(Species) //@invariant els.containsNull == false // invariant els does not contain two species with the same name. public SpeciesSet () { // EFFECTS: Initializes this to be empty: { } els = new Vector (); } public void insert (/*@non_null@*/ Species s) { // MODIFIES: this // EFFECTS: If s already contains a Species whose name matches // the name of s, does nothing. Otherwise, adds s to the // elements of this: this_post = this_pre U { s } if (getIndex (s) < 0) els.add (s); } private int getIndex (/*@non_null@*/ Species s) // EFFECTS: If a species with the same name as s is in this // returns index where it appears, else returns -1. //@ensures \result >= -1 //@ensures \result < els.elementCount { for (int i = 0; i < els.size (); i++) { if (s.getName ().equals (((Species) els.elementAt (i)).getName ())) { return i; } } return -1; } }Fill in the missing parts in the small code fragment below so that a mark and sweep garbage collector that runs at the marked location would collect exactly 2 garbage Species objects.SpeciesSet ss = new SpeciesSet (); Species s1 = new Species ("snake", "ACATGA"); Species s2 = [[[ fill in space 1 ]]]; Species s3 = [[[ fill in space 2 ]]]; ss.insert (s1); s1 = null; ss.insert (s2); s2 = null; ss.insert (s3); s3 = null; ← mark and sweep collector runs here ...
9. (10) In a language with explicit memory management like C, explain why it would be useful to change the specification of insert to:public boolean insert (/*@non_null@*/ Species s) // MODIFIES: this // EFFECTS: If s already contains a Species whose name matches // the name of s, returns false and does nothing. // Otherwise, returns true and adds s to the elements of // this: this_post = this_pre U { s }10. (10, a good answer is worth bonus points; a complete language design and implementation is worth an automatic A+ in CS201J) After being barred from using Sun's Java trademarks, a large software company in Mondred, Washington has decided to attempt to implement a new language that will offer the performance advantages of C with the saftey and ease of development advantages of Java. They call their new language Db (pronounced "D flat"), not to be confused with its tonal near-equivalent C# ("C sharp"). In particular, they would like it to be possible to build a reference counting garbage collector that will work for Db but still be able to manipulate memory addresses directly in at least some of the ways allowed by C. Assume they will start their design with C, and add or remove features to produce Db. Either argue why their goal is impossible to achive, or explain what changes they should make to C to achieve their goal.
11. (Optional, no credit, but I might give bonus points for good answers) Describe what CS201J is about using one word or a short phrase.
12. (Optional, no credit) Do you feel your performance on this exam will fairly reflect your understanding of the course material so far? If not, explain why. | http://www.cs.virginia.edu/cs201j-fall2002/exams/exam2.html | CC-MAIN-2018-51 | refinedweb | 1,520 | 61.87 |
globals.d.ts
We discussed global vs. file modules when covering projects and recommended using file based modules and not polluting the global namespace.
Nevertheless, if you have beginning TypeScript developers you can give them a
globals.d.ts file to put interfaces / types in the global namespace to make it easy to have some types just magically available for consumption in all your TypeScript code.
For any code that is going to generate JavaScript we highly recommend using file modules.
globals.d.tsis great for adding extensions to
lib.d.tsif you need to.
- It's good for quick
declare module "some-library-you-dont-care-to-get-defs-for";when doing TS to JS migrations. | https://basarat.gitbooks.io/typescript/content/docs/project/globals.html | CC-MAIN-2019-04 | refinedweb | 117 | 50.43 |
The Biopython Project is an international association of developers of freely available Python () tools for computational molecular biology. Python is an object oriented, interpreted, flexible. and even documentation.. Biopython runs on many platforms (Windows, Mac, and on the various flavors of Linux and Unix). ().
Bio.PDB: [18, Hamelryck and Manderick, 2003];
Bio.Cluster: [14, De Hoon et al., 2004];
Bio.Graphics.GenomeDiagram: [2, Pritchard et al., 2006];
Bio.Phylo and
Bio.Phylo.PAML: [9, Talevich et al., 2012];
For example, this will only work under Python 2:
>>> print "Hello World!" Hello World!
If you try that on Python 3 you’ll get a
SyntaxError.
Under Python 3 you must write:
>>> print("Hello World!") Hello World!
Surprisingly that will also work on Python 2 – but only for simple examples printing one thing. In general you need to add this magic line to the start of your Python scripts to use the print function under Python 2.6 and 2.7:
from __future__ import print_function
If you forget to add this magic import, under Python 2 you’ll see extra brackets produced by trying to use the print function when Python 2 is interpreting it as a print statement and a tuple.
>>> import Bio
>>> print(Bio.__version__)
...

If the “import Bio” line fails, Biopython is not installed. If the second line fails, your version is very out of date. If the version string ends with a plus, you don’t have an official release, but a snapshot of the in development code.
import Bio” line fails, Biopython is not installed. If the second line fails, your version is very out of date. If the version string ends with a plus, you don’t have an official release, but a snapshot of the in development code.
Seq object missing the upper & lower methods described in this Tutorial?
Use str(my_seq).upper() to get an upper case string. If you need a Seq object, try Seq(str(my_seq).upper()) but be careful about blindly re-using the same alphabet.
Seq object translation method support the cds option described in this Tutorial?
Bio.SeqIO and Bio.AlignIO read and write?
Bio.SeqIO and Bio.AlignIO functions parse, read and write take filenames? They insist on handles!
Bio.SeqIO.write() and Bio.AlignIO.write() functions accept a single record or alignment? They insist on a list or iterator!
Use [...] to create a list of one element.
str(...) give me the full sequence of a Seq object?
Bio.Blast work with the latest plain text NCBI blast output?
Bio.Entrez.parse() work? The module imports fine but there is no parse function!
Bio.Entrez.efetch() stopped working? First, add retmode="text" to your call. Second, they are now stricter about how to provide a list of IDs – Biopython 1.59 onwards turns a list into a comma separated string automatically.
Bio.Blast.NCBIWWW.qblast() give the same results as the NCBI BLAST website?
Bio.Blast.NCBIXML.read() work? The module imports but there is no read function!
SeqRecord object have a letter_annotations attribute?
SeqRecord to get a sub-record?
SeqRecord objects together?
Bio.SeqIO.convert() or Bio.AlignIO.convert() work? The modules import fine but there is no convert function!
parse and write functions as described in this tutorial (see Sections 5.5.2 and 6.2.1).
Bio.SeqIO.index() work? The module imports fine but there is no index function!
Bio.SeqIO.index_db() work? The module imports fine but there is no index_db function!
MultipleSeqAlignment object? The Bio.Align module imports fine but this class isn’t there!
The older Bio.Align.Generic.Alignment class supports some of its functionality, but using this is now discouraged.
Use the subprocess module directly.
For more general questions, the Python FAQ pages may be useful.

Later chapters go deeper: the Cookbook (Chapter 18, this has some cool tricks and tips), the Advanced section (Chapter 20), extracting data from Swiss-Prot for certain orchid proteins in Chapter 10, and ClustalW multiple sequence alignments of orchid proteins in Section 6.

Parsing sequence files is a simple loop with Bio.SeqIO:

from Bio import SeqIO
for seq_record in SeqIO.parse("ls_orchid.fasta", "fasta"):
    print(seq_record.id)
    print(repr(seq_record.seq))
    print(len(seq_record))

for seq_record in SeqIO.parse("ls_orchid.gbk", "genbank"):
    print(seq_record.id)
    print(repr(seq_record.seq))
    print(len(seq_record))

A Seq object can be treated much like a string – you can iterate over its letters and take its length:

>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> my_seq = Seq("GATCG", IUPAC.unambiguous_dna)
>>> for index, letter in enumerate(my_seq):
...     print("%i %s" % (index, letter))
0 G
1 A
2 T
3 C
4 G
>>> print(len(my_seq))
5
You can access elements of the sequence in the same way as for strings (but remember, Python counts from zero!):
>>> print(my_seq[0]) #first letter
G
>>> print(my_seq[2]) #third letter
T
>>> print(my_seq[-1]) #last letter
G
The
Seq object has a
.count() method, just like a string.
Note that this means that like a Python string, this gives a
non-overlapping count:
>>> from Bio.Seq import Seq
>>> "AAAA".count("AA")
2
>>> Seq("AAAA").count("AA")
2
For some biological uses, you may actually want an overlapping count (i.e. 3 in this trivial example). When searching for single letters, this makes no difference:
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> my_seq = Seq('GATCGATGGGCCTATATAGGATCGAAAATCGC', IUPAC.unambiguous_dna)
>>> len(my_seq)
32
>>> my_seq.count("G")
9

Since calling str() on a Seq object returns the full sequence as a string, you often don’t have to do the conversion explicitly. Python does it automatically in the print function (and the print statement under Python 2):
>>> print(my_seq)
GATCGATGGGCCTATATAGGATCGAAAATCGC
>>> str(my_seq)
'GATCGATGGGCCTATATAGGATCGAAAATCGC'
>>> from Bio.Alphabet import IUPAC
>>> from Bio.Seq import Seq
>>> protein_seq = Seq("EVRNAK", IUPAC.protein)
>>> dna_seq = Seq("ACGT", IUPAC.unambiguous_dna)
>>> protein_seq + dna_seq
Traceback (most recent call last):
...
TypeError: Incompatible alphabets IUPACProtein() and IUPACUnambiguousDNA()
You may often have many sequences to add together, which can be done with a for loop like this:
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_dna
>>> list_of_seqs = [Seq("ACGT", generic_dna), Seq("AACC", generic_dna), Seq("GGTT", generic_dna)]
>>> concatenated = Seq("", generic_dna)
>>> for s in list_of_seqs:
...     concatenated += s
...
>>> concatenated
Seq('ACGTAACCGGTT', DNAAlphabet())
Or, a more elegant approach is to the use built in
sum function with its optional start value argument (which otherwise defaults to zero):
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_dna
>>> list_of_seqs = [Seq("ACGT", generic_dna), Seq("AACC", generic_dna), Seq("GGTT", generic_dna)]
>>> sum(list_of_seqs, Seq("", generic_dna))
Seq('ACGTAACCGGTT', DNAAlphabet())
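The same `sum` trick works for any type supporting `+`, given an appropriate start value — here shown with plain Python lists rather than Seq objects:

```python
# sum() adds the items onto the start value using +, so any type that
# supports + can be concatenated this way. The default start of 0 would
# raise a TypeError here, hence the explicit empty list.
parts = [[1, 2], [3, 4], [5]]
combined = sum(parts, [])
print(combined)  # [1, 2, 3, 4, 5]
```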
Unlike the Python string, the Biopython
Seq does not (currently) have a
.join method.
Python strings have very useful
upper and
lower methods for changing the case.
As of Biopython 1.53, the
Seq object gained similar methods which are alphabet aware.
For example,
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_dna
>>> dna_seq = Seq("acgtACGT", generic_dna)
>>> dna_seq
Seq('acgtACGT', DNAAlphabet())
>>> dna_seq.upper()
Seq('ACGTACGT', DNAAlphabet())
>>> dna_seq.lower()
Seq('acgtacgt', DNAAlphabet())
These are useful for doing case insensitive matching:
>>> "GTAC" in dna_seq False >>> "GTAC" in dna_seq.upper() True
Note that strictly speaking the IUPAC alphabets are for upper case sequences only, thus:
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> dna_seq = Seq("ACGT", IUPAC.unambiguous_dna)
>>> dna_seq
Seq('ACGT', IUPACUnambiguousDNA())
>>> dna_seq.lower()
Seq('acgt', DNA())
As mentioned earlier, an easy way to just reverse a
Seq object (or a
Python string) is slice it with -1 step:
>>> my_seq[::-1]
Seq('CGCTAAAAGCTAGGATATATCCGGGTAGCTAG', IUPACUnambiguousDNA())

If you try to (reverse) complement a protein sequence, the alphabet objects catch the mistake:

>>> protein_seq = Seq("EVRNAK", IUPAC.protein)
>>> protein_seq.complement()
Traceback (most recent call last):
...
ValueError: Proteins do not have complements!
The example in Section 5.5.3 combines the
Seq
object’s reverse complement method with
Bio.SeqIO for sequence input/output.
These methods were added in Biopython 1.49. For older releases you would have to use the
Bio.Seq
module’s functions instead, see Section 3.
The example in Section 18.1.3 combines the
Seq object’s
translate method with
Bio.SeqIO for sequence input/output.
In the previous sections we talked about the
Seq object translation method (and mentioned the equivalent function in the
Bio.Seq module – see
Section 3).
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> seq1 = Seq("ACGT", IUPAC.unambiguous_dna)
>>> seq2 = Seq("ACGT", IUPAC.unambiguous_dna)
So, what does Biopython do? Well, the equality test is the default for Python objects – it tests to see if they are the same object in memory. This is a very strict test:
>>> seq1 == seq2
False
>>> seq1 == seq1
True
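This identity-based default is ordinary Python behaviour for any class that does not define `__eq__`, as a small stand-in class (not Biopython code) shows:

```python
class Tag:
    """A minimal class with no __eq__, so == falls back to object identity."""
    def __init__(self, value):
        self.value = value

a = Tag("ACGT")
b = Tag("ACGT")
print(a == b)  # False - different objects in memory, despite equal values
print(a == a)  # True  - the very same object
```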
Observe what happens if you try to edit the sequence:
>>> my_seq[5] = "G" Traceback (most recent call last): ... TypeError: 'Seq' object does not support item assignment] = "C" >>> mutable_seq MutableSeq('GCCATCGTAATGGGCCGCTGAAAGGGTGCCCGA', IUPACUnambiguousDNA()) >>> mutable_seq.remove("T") >>> mutable_seq MutableSeq('GCCACGTAATGGGCCGCTGAAAGGGTGCCCGA', IUPACUnambiguousDNA()) >>> mutable_seq.reverse() >>> mutable_seq MutableSeq('AGCCCGTGGGAAAGTCGCCGGGTAATGCGCACCG', IUPACUnambiguousDNA())
You can also get a string from a
MutableSeq object just like from a
Seq object (Section 3.4).
The SeqRecord class allows higher level features such as identifiers and annotation (including SeqFeature objects) to be associated with the sequence, and is used throughout the sequence input/output interface Bio.SeqIO. In addition to the coverage of SeqRecord and SeqFeature objects in this chapter, you may also want to read the SeqRecord wiki page (), and the built in documentation (also online – SeqRecord and SeqFeature):
The main information is presented as attributes of the class. Usually you won’t create
a
SeqRecord “by hand”, but instead use
Bio.SeqIO to read in a
sequence file for you (see Chapter 5 and the examples
below). However, creating
a SeqRecord can be quite simple.
To create a
SeqRecord, at a minimum you just need a Seq object.
Working with per-letter-annotations is similar,
letter_annotations is a
dictionary like attribute which will let you assign any Python sequence (i.e.
a string, list or tuple) which has the same length as the sequence:
>>> simple_seq_r.letter_annotations["phred_quality"] = [40, 40, 38, 30]
>>> print(simple_seq_r.letter_annotations)
{'phred_quality': [40, 40, 38, 30]}
>>> print(simple_seq_r.letter_annotations["phred_quality"])
[40, 40, 38, 30]
The
dbxrefs and
features attributes are just Python lists, and
should be used to store strings and
SeqFeature objects (discussed later
in this chapter) respectively. The key idea about each
SeqFeature object is to describe a region on a parent sequence, typically a
SeqRecord object. That region is described with a location object, typically a range between two positions (see Section 4.3.2 below).
The
SeqFeature class has a number of attributes, so first we’ll list them and their general features, and then later in the chapter work through examples to show how this applies to a real life example. The attributes of a SeqFeature are:
.location – the location of the SeqFeature on the sequence that you are dealing with, see Section 4.3.2 below. The SeqFeature delegates much of its functionality to the location object, and includes a number of shortcut attributes for properties of the location:
.location.ref – any (different) reference sequence the location is referring to. Usually just None.
.location.ref_db – specifies the database any identifier in .ref refers to. Usually just None.
.location.strand – the strand on the sequence that the feature is located on. For double stranded nucleotide sequence this may either be 1 for the top strand, −1 for the bottom strand, 0 if the strand is important but is unknown, or None if it doesn’t matter. This is None for proteins, or single stranded sequences.
CompoundLocation object, and should now be ignored.
The key idea about each
SeqFeature object is to describe a
region on a parent sequence, for which we use a location object,
typically describing a range between two positions. To try to clarify the terminology we’re using:
<100 and >200 are all positions.
I just mention this because sometimes I get confused between the two.
Unless you work with eukaryotic genes, most
SeqFeature locations are
extremely simple - you just need start and end coordinates and a strand.
That’s essentially all the basic
FeatureLocation object does.
In practise of course, things can be more complicated. First of all we have to handle compound locations made up of several regions. Secondly, the positions themselves may be fuzzy (inexact).
Biopython 1.62 introduced the
CompoundLocation as part of
a restructuring of how complex locations made up of multiple regions
are represented.
The main usage is for handling ‘join’ locations in EMBL/GenBank files.
So far we’ve only used simple positions. One complication in dealing with feature locations comes in the positions themselves, which may be fuzzy (inexact). There are several types of fuzzy positions, so we have five classes to deal with them:
position attribute of the object.
`<13', signifying that the real position is located somewhere less than 13.
Here’s an example where we create a location with fuzzy end points:
>>> from Bio import SeqFeature
>>> start_pos = SeqFeature.AfterPosition(5)
>>> end_pos = SeqFeature.BetweenPosition(9, left=8, right=9)
>>> my_location = SeqFeature.FeatureLocation(start_pos, end_pos)
Note that the details of some of the fuzzy-locations changed in Biopython 1.59, in particular for BetweenPosition and WithinPosition you must now make it explicit which integer position should be used for slicing etc. For a start position this is generally the lower (left) value, while for an end position this would generally be the higher (right) value.
If you print out a
FeatureLocation object, you can get a nice representation of the information:
>>> print(my_location)
[>5:(8^9)]
We can access the fuzzy start and end positions using the start and end attributes of the location:
>>> my_location.start
AfterPosition(5)
>>> print(my_location.start)
>5
>>> my_location.end
BetweenPosition(9, left=8, right=9)
>>> print(my_location.end)
(8^9)
If you don’t want to deal with fuzzy positions and just want numbers, they are actually subclasses of integers so should work like integers:
>>> int(my_location.start)
5
>>> int(my_location.end)
9
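That “positions are subclasses of int” design is easy to see with a simplified stand-in (this mimics, but does not reproduce, Biopython’s classes):

```python
class MyAfterPosition(int):
    """Simplified fuzzy position: an int that prints with a '>' prefix."""
    def __str__(self):
        return ">%i" % int(self)

pos = MyAfterPosition(5)
print(isinstance(pos, int))  # True - usable anywhere an int is expected
print(pos + 1)               # 6
print(str(pos))              # >5
print("GATCGATGG"[pos:])     # ATGG - works directly in slicing
```

Because the fuzzy notation lives only in the string representation, arithmetic and slicing stay simple.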
For compatibility with older versions of Biopython you can ask for the
nofuzzy_start and
nofuzzy_end attributes of the location
which are plain integers:
>>> my_location.nofuzzy_start
5
>>> my_location.nofuzzy_end
9
Notice that this just gives you back the position attributes of the fuzzy locations.
Similarly, to make it easy to create a position without worrying about fuzzy positions, you can just pass in numbers to the
FeaturePosition constructors, and you’ll get back out
ExactPosition objects:
>>> exact_location = SeqFeature.FeatureLocation(5, 9)
>>> print(exact_location)
[5:9]
>>> exact_location.start
ExactPosition(5)
>>> int(exact_location.start)
5
>>> exact_location.nofuzzy_start
5
That is most of the nitty gritty about dealing with fuzzy positions in Biopython. It has been designed so that dealing with fuzziness is not that much more complicated than dealing with exact positions, and hopefully you find that true!
You can use the Python keyword
in with a
SeqFeature or location
object to see if the base/residue for a parent coordinate is within the
feature/location or not.
For example, suppose you have a SNP of interest and you want to know which features this SNP is within, and lets suppose this SNP is at index 4350 (Python counting!). Here is a simple brute force solution where we just check all the features one by one in a loop:
>>> from Bio import SeqIO
>>> my_snp = 4350
>>> record = SeqIO.read("NC_005816.gb", "genbank")
>>> for feature in record.features:
...     if my_snp in feature:
...         print("%s %s" % (feature.type, feature.qualifiers.get('db_xref')))
...
source ['taxon:229193']
gene ['GeneID:2767712']
CDS ['GI:45478716', 'GeneID:2767712']
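Support for the `in` keyword comes from Python’s `__contains__` protocol; a simplified location class (not Biopython’s actual implementation) illustrates the idea:

```python
class SimpleLocation:
    """Half-open [start, end) range supporting the 'in' keyword."""
    def __init__(self, start, end):
        self.start = start
        self.end = end

    def __contains__(self, position):
        # Python calls this method to evaluate "position in location".
        return self.start <= position < self.end

loc = SimpleLocation(4342, 4780)
print(4350 in loc)  # True
print(100 in loc)   # False
print(4780 in loc)  # False - the end is exclusive, Python-style
```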
Note that gene and CDS features from GenBank or EMBL files defined with joins are the union of the exons – they do not cover any introns.
A
SeqFeature or location object doesn’t directly contain a sequence, instead the location (see Section 4.3.2) describes how to get this from the parent sequence. For example consider a (short) gene sequence with location 5:18 on the reverse strand, which in GenBank/EMBL notation using 1-based counting would be complement(6..18), like this:
>>> from Bio.Seq import Seq
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> example_parent = Seq("ACCGAGACGGCAAAGGCTAGCATAGGTATGAGACTTCCTTCCTGCCAGTGCTGAGGAACTGGGAGCCTAC")
>>> example_feature = SeqFeature(FeatureLocation(5, 18), type="gene", strand=-1)
You could take the parent sequence, slice it to extract 5:18, and then take the reverse complement. If you are using Biopython 1.59 or later, the feature location’s start and end are integer like so this works:
>>> feature_seq = example_parent[example_feature.location.start:example_feature.location.end].reverse_complement()
>>> print(feature_seq)
AGCCTTTGCCGTC
This is a simple example so this isn’t too bad – however once you have to deal with compound features (joins) this is rather messy. Instead, the
SeqFeature object has an
extract method to take care of all this:
>>> feature_seq = example_feature.extract(example_parent)
>>> print(feature_seq)
AGCCTTTGCCGTC
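Under the simplifying assumptions of this example (one simple location on the reverse strand, plain strings instead of Seq objects), what extract does here can be sketched in pure Python:

```python
# Plain-string sketch of extracting a reverse-strand feature:
# slice out the region, then take its reverse complement.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def extract_reverse_strand(parent, start, end):
    """Return the reverse complement of parent[start:end]."""
    return parent[start:end].translate(COMPLEMENT)[::-1]

parent = "ACCGAGACGGCAAAGGCTAGCATAGGTATGAGACTTCCTTCCTGCCAGTGCTGAGGAACTGGGAGCCTAC"
print(extract_reverse_strand(parent, 5, 18))  # AGCCTTTGCCGTC
```

The real method also handles compound (join) locations and alphabets, which is exactly why it is worth using.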
The length of a
SeqFeature or location matches
that of the region of sequence it describes.
>>> print(example_feature.extract(example_parent))
AGCCTTTGCCGTC
>>> print(len(example_feature.extract(example_parent)))
13
>>> print(len(example_feature))
13
>>> print(len(example_feature.location))
13
For simple
FeatureLocation objects the length is just
the difference between the start and end positions. However,
for a
CompoundLocation the length is the sum of the
constituent regions.
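So for a hypothetical compound location, the length calculation amounts to summing each region’s span (the two-exon coordinates below are made up for illustration):

```python
# Regions as half-open (start, end) pairs, e.g. a two-exon join:
regions = [(12, 48), (76, 150)]
length = sum(end - start for start, end in regions)
print(length)  # 110 = 36 + 74
```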
The
format() method of the
SeqRecord class gives a string containing your record formatted using one of the output file formats supported by Bio.SeqIO, such as FASTA.
>>> print(record.features[20])
type: gene
location: [4342:4780](+)
qualifiers:
    Key: db_xref, Value: ['GeneID:2767712']
    Key: gene, Value: ['pim']
    Key: locus_tag, Value: ['YP_pPCP05']
<BLANKLINE>
>>> print(record.features[21])
type: CDS
location: [4342:4780](+)
qualifiers:
    Key: db_xref, Value: ['GeneID:2767712']
    Key: gene, Value: ['pim']
    Key: locus_tag, Value: ['YP_pPCP05']
<BLANKLINE>
>>> print(sub_record.features[20])
type: CDS
location: [42:480](+)

See Sections 18.1.7 and 18.1.8 for some FASTQ examples where the per-letter annotations (the read quality scores) are also sliced.
You can add
SeqRecord objects together, giving a new
SeqRecord.
What is important here is that any common
per-letter annotations are also added, all the features are preserved (with their
locations adjusted), and any other common annotation is also kept (like the id, name
and description).
For an example with per-letter annotation, we’ll use the first record in a
FASTQ file. Chapter 5 will explain the
SeqIO functions:
>>> from Bio import SeqIO
>>> record = next(SeqIO.parse("example.fastq", "fastq"))
>>> len(record)
25
>>> print(record.seq)
CCCTTCTTGTCTTCAGCGTTTCTCC
>>> print(record.letter_annotations["phred_quality"])
[26, 26, 18, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 22, 26, 26, 26, 26, 26, 26, 26, 23, 23]
Let’s suppose this was Roche 454 data, and that from other information
you think the TTT should be only TT. We can make a new edited
record by first slicing the
SeqRecord before and after the “extra”
third T:
>>> left = record[:20]
>>> print(left.seq)
CCCTTCTTGTCTTCAGCGTT
>>> print(left.letter_annotations["phred_quality"])
[26, 26, 18, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 22, 26, 26, 26, 26]
>>> right = record[21:]
>>> print(right.seq)
CTCC
>>> print(right.letter_annotations["phred_quality"])
[26, 26, 23, 23]
Now add the two parts together:
>>> edited = left + right
>>> len(edited)
24
>>> print(edited.seq)
CCCTTCTTGTCTTCAGCGTTCTCC
>>> print(edited.letter_annotations["phred_quality"])
[26, 26, 18, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 22, 26, 26, 26, 26, 26, 26, 23, 23]
Easy and intuitive? We hope so! You can make this shorter with just:
>>> edited = record[:20] + record[21:]
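The same slice-and-add edit works on plain Python strings and lists, which is essentially what happens to the sequence and its parallel quality list (values taken from the example above):

```python
seq = "CCCTTCTTGTCTTCAGCGTTTCTCC"                 # 25 letters
qual = [26, 26, 18, 26, 26, 26, 26, 26, 26, 26,
        26, 26, 26, 26, 26, 22, 26, 26, 26, 26,
        26, 26, 26, 23, 23]                        # 25 matching scores

# Drop the letter at index 20 from both, keeping them in sync:
edited_seq = seq[:20] + seq[21:]
edited_qual = qual[:20] + qual[21:]
print(edited_seq)                          # CCCTTCTTGTCTTCAGCGTTCTCC
print(len(edited_seq), len(edited_qual))   # 24 24
```

The SeqRecord slice and addition do this for every per-letter annotation automatically, so the two can never fall out of step.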
Now, for an example with features, we’ll use a GenBank file. Suppose you have a circular genome:
>>> record.dbxrefs
['Project:58037']
>>> record.annotations.keys()
['comment', 'sequence_version', 'source', 'taxonomy', 'keywords', 'references', 'accessions', 'data_file_division', 'date', 'organism', 'gi']
You can shift the origin like this:
>>> shifted = record[2000:] + record[:2000]
>>> shifted
SeqRecord(seq=Seq('GATACGCAGTCATATTTTTTACACAATTCTCTAATCCCGACAAGGTCGTAGGTC...GGA', IUPACAmbiguousDNA()), id='NC_005816.1', name='NC_005816', description='Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, complete sequence.', dbxrefs=[])
>>> len(shifted)
9609
Note that this isn’t perfect in that some annotation like the database cross references and one of the features (the source feature) have been lost:
>>> len(shifted.features)
40
>>> shifted.dbxrefs
[]
>>> shifted.annotations.keys()
[]
This is because the
SeqRecord slicing step is cautious in what annotation
it preserves (erroneously propagating annotation can cause major problems). If
you want to keep the database cross references or the annotations dictionary,
this must be done explicitly:
>>> shifted.dbxrefs = record.dbxrefs[:]
>>> shifted.annotations = record.annotations.copy()
>>> shifted.dbxrefs
['Project:10638']
>>> shifted.annotations.keys()
['comment', 'sequence_version', 'source', 'taxonomy', 'keywords', 'references', 'accessions', 'data_file_division', 'date', 'organism', 'gi']
Also note that in an example like this, you should probably change the record identifiers since the NCBI references refer to the original unmodified sequence.
One of the new features in Biopython 1.57 was the
SeqRecord object’s
reverse_complement method. This tries to balance ease of use with worries
about what to do with the annotation in the reverse complemented record.
For the sequence, this uses the Seq object’s reverse complement method. Any features are transferred with the location and strand recalculated. Likewise any per-letter-annotation is also copied but reversed (which makes sense for typical examples like quality scores). However, transfer of most annotation is problematical.
For instance, if the record ID was an accession, that accession should not really
apply to the reverse complemented sequence, and transferring the identifier by
default could easily cause subtle data corruption in downstream analysis.
Therefore by default, the
SeqRecord’s id, name, description, annotations
and database cross references are all not transferred by default.
The
SeqRecord object’s
reverse_complement method takes a number
of optional arguments corresponding to properties of the record. Setting these
arguments to
True means copy the old values, while
False means
drop the old values and use the default value. You can alternatively provide
the new desired value instead.
Consider this example record:
>>> from Bio import SeqIO
>>> record = SeqIO.read("NC_005816.gb", "genbank")
>>> print("%s %i %i %i %i" % (record.id, len(record), len(record.features), len(record.dbxrefs), len(record.annotations)))
NC_005816.1 9609 41 1 11
Here we take the reverse complement and specify a new identifier – but notice how most of the annotation is dropped (but not the features):
>>> rc = record.reverse_complement(id="TESTING")
>>> print("%s %i %i %i %i" % (rc.id, len(rc), len(rc.features), len(rc.dbxrefs), len(rc.annotations)))
TESTING 9609 41 0 0

The workhorse function Bio.SeqIO.parse() reads in sequence data as SeqRecord objects, typically inside a for loop:

from Bio import SeqIO
for seq_record in SeqIO.parse("ls_orchid.fasta", "fasta"):
    print(seq_record.id)
    print(repr(seq_record.seq))
    print(len(seq_record))

The same works for other formats, such as GenBank:

from Bio import SeqIO
for seq_record in SeqIO.parse("ls_orchid.gbk", "genbank"):
    print(seq_record.id)
    print(seq_record.seq)
    print(len(seq_record))

You can also use the iterator in a list comprehension, for example to pull out the record identifiers:

>>> identifiers = [seq_record.id for seq_record in SeqIO.parse("ls_orchid.gbk", "genbank")]
>>> identifiers
['Z78533.1', 'Z78532.1', 'Z78531.1', 'Z78530.1', 'Z78529.1', 'Z78527.1', ..., 'Z78439.1']
There are more examples using
SeqIO.parse() in a list
comprehension like this in Section 18. Instead of using a for loop, you can also use the next() function on an iterator to step through the entries, like this:
from Bio import SeqIO

record_iterator = SeqIO.parse("ls_orchid.fasta", "fasta")

first_record = next(record_iterator)
print(first_record.id)
print(first_record.description)

second_record = next(record_iterator)
print(second_record.id)
print(second_record.description)
Note that if you try to use
next() and there are no more results, you’ll get the special
StopIteration exception.
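This is standard Python iterator behaviour, easy to see with a plain list standing in for a sequence file:

```python
record_ids = iter(["Z78533.1", "Z78532.1"])
print(next(record_ids))  # Z78533.1
print(next(record_ids))  # Z78532.1
try:
    next(record_ids)     # a third call has nothing left to return
except StopIteration:
    print("no more records")
```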
One special case to consider is when your sequence files have multiple records, but you only want the first one. In this situation the following code is very concise:
from Bio import SeqIO
first_record = next(SeqIO.parse("ls_orchid.gbk", "genbank"))
A word of warning here – using the
next() function like this will silently ignore any additional records in the file.

You can also turn the record iterator into a list of SeqRecord objects using the built in Python function list():

from Bio import SeqIO
records = list(SeqIO.parse("ls_orchid.gbk", "genbank"))

To extract annotation such as the organism for every record:

from Bio import SeqIO
all_species = []
for seq_record in SeqIO.parse("ls_orchid.gbk", "genbank"):
    all_species.append(seq_record.annotations["organism"])
print(all_species)
Another way of writing this code is to use a list comprehension:
from Bio import SeqIO
all_species = [seq_record.annotations["organism"] for seq_record in
               SeqIO.parse("ls_orchid.gbk", "genbank")]

For a FASTA file the species is not held in structured annotation, but you can extract it from the description line:

from Bio import SeqIO
all_species = []
for seq_record in SeqIO.parse("ls_orchid.fasta", "fasta"):
    all_species.append(seq_record.description.split()[1])
Instead of using a filename, you can give
Bio.SeqIO a handle
(see Section 22.1), and in this section
we’ll use handles to parse sequence from compressed files.
As you’ll have seen above, we can use
Bio.SeqIO.read() or
Bio.SeqIO.parse() with a filename - for instance this quick
example calculates the total length of the sequences in a multiple
record GenBank file using a generator expression:
>>> from Bio import SeqIO
>>> print(sum(len(r) for r in SeqIO.parse("ls_orchid.gbk", "gb")))
67518
Here we use a file handle instead, using the
with statement
to close the handle automatically:
>>> from Bio import SeqIO
>>> with open("ls_orchid.gbk") as handle:
...     print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
...
67518
Or, the old fashioned way where you manually close the handle:
>>> from Bio import SeqIO
>>> handle = open("ls_orchid.gbk")
>>> print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
67518
>>> handle.close()
Now, suppose we have a gzip compressed file instead? These are very
commonly used on Linux. We can use Python’s
gzip module to open
the compressed file for reading - which gives us a handle object:
>>> import gzip
>>> from Bio import SeqIO
>>> handle = gzip.open("ls_orchid.gbk.gz", "r")
>>> print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
67518
>>> handle.close()
Similarly if we had a bzip2 compressed file (sadly the function name isn’t quite as consistent):
>>> import bz2
>>> from Bio import SeqIO
>>> handle = bz2.BZ2File("ls_orchid.gbk.bz2", "r")
>>> print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
67518
>>> handle.close()
If you are using Python 2.7 or later, the
with-version works for
gzip and bz2 as well. Unfortunately this is broken on older versions of
Python (Issue 3860) and you’d
get an
AttributeError about
__exit__ being missing.
There is a gzip (GNU Zip) variant called BGZF (Blocked GNU Zip Format), which can be treated like an ordinary gzip file for reading, but has advantages for random access later which we’ll talk about later in Section 5.4.4.
In the previous sections, we looked at parsing sequence data from a file
(using a filename or handle), and from compressed files (using a handle).
Here we’ll use
Bio.SeqIO with another type of handle, a network
connection, to download and parse sequences from the internet.
Note that just because you can download sequence data and parse it into
a
SeqRecord object in one go doesn’t mean this is a good idea.
In general, you should probably download sequences once and save them to
a file for reuse.
Section 9.6 talks about the Entrez EFetch interface in more detail, but for now let’s just connect to the NCBI and get a few Opuntia (prickly-pear) sequences:

from Bio import Entrez
from Bio import SeqIO
Entrez.email = "A.N.Other@example.com"
handle = Entrez.efetch(db="nucleotide", rettype="fasta", retmode="text", id="6273291")
seq_record = SeqIO.read(handle, "fasta")
handle.close()
print("%s with %i features" % (seq_record.id, len(seq_record.features)))
Expected output:
gi|6273291|gb|AF191665.1|AF191665 with 0 features
The NCBI will also let you ask for the file in other formats, in particular as
a GenBank file.
from Bio import Entrez
from Bio import SeqIO

Entrez.email = "A.N.Other@example.com"
handle = Entrez.efetch(db="nucleotide", rettype="gb", retmode="text", id="6273291")
seq_record = SeqIO.read(handle, "gb")
handle.close()
print("%s with %i features" % (seq_record.id, len(seq_record.features)))
Notice this time we have three features.
Now let’s fetch several records. This time the handle contains multiple records,
so we must use the
Bio.SeqIO.parse() function:
from Bio import Entrez
from Bio import SeqIO

Entrez.email = "A.N.Other@example.com"
handle = Entrez.efetch(db="nucleotide", rettype="gb", retmode="text",
                       id="6273291,6273290,6273289")
for seq_record in SeqIO.parse(handle, "gb"):
    print("%s with %i features" % (seq_record.id, len(seq_record.features)))
handle.close()

See Chapter 9 for more about the Bio.Entrez module, and make sure to read about the NCBI guidelines for using Entrez (Section 9.1).
Now let’s use a handle to download a SwissProt file from ExPASy,
something covered in more depth in Chapter 10:

from Bio import ExPASy
from Bio import SeqIO

handle = ExPASy.get_sprot_raw("O23729")
seq_record = SeqIO.read(handle, "swiss")
handle.close()
print(seq_record.id)
print(seq_record.description)
print(seq_record.annotations["keywords"])
We’re now going to introduce three related functions in the
Bio.SeqIO
module which allow dictionary like random access to a multi-sequence file.
There is a trade off here between flexibility and memory usage. In summary:
Bio.SeqIO.to_dict() is the most flexible but also the most memory demanding option (see Section 5.4.1). This is basically a helper function to build a normal Python dictionary with each entry held as a SeqRecord object in memory, allowing you to modify the records.
Bio.SeqIO.index() is a useful middle ground, acting like a read only dictionary and parsing sequences into SeqRecord objects on demand (see Section 5.4.2).
Bio.SeqIO.index_db() also acts like a read only dictionary but stores the identifiers and file offsets in a file on disk (as an SQLite3 database), meaning it has very low memory requirements (see Section 5.4.3), but will be a little bit slower.
See the discussion for a broad overview (Section 5.4.5).
The next thing that we’ll do with our ubiquitous orchid files is to show how
to index them and access them like a database using the Python
dictionary
data type (like a hash in Perl). This is very useful for moderately large files
where you only need to access certain elements of the file, and makes for a nice
quick ’n dirty database. For dealing with larger files where memory becomes a
problem, see Section 5.4.2 below.
You can use the function
Bio.SeqIO.to_dict() to make a SeqRecord dictionary
(in memory). By default this will use each record’s identifier (i.e. the
.id
attribute) as the key. Let’s try this using our GenBank file:
>>> from Bio import SeqIO
>>> orchid_dict = SeqIO.to_dict(SeqIO.parse("ls_orchid.gbk", "genbank"))
There is just one required argument for
Bio.SeqIO.to_dict(), a list or
generator giving
SeqRecord objects. Here we have just used the output
from the
SeqIO.parse function. As the name suggests, this returns a
Python dictionary.
Since this variable
orchid_dict is an ordinary Python dictionary, we can look at all of the keys we have available:
>>> len(orchid_dict)
94
>>> orchid_dict.keys()
['Z78484.1', 'Z78464.1', 'Z78455.1', 'Z78442.1', 'Z78532.1', 'Z78453.1', ..., 'Z78471.1']
If you really want to, you can even look at all the records at once:
>>> orchid_dict.values() #lots of output!
...

The same approach works with the FASTA file, and you can also supply your own key_function to choose the dictionary keys. For example, using a helper function get_accession() to extract the accession number from each FASTA identifier:

orchid_dict = SeqIO.to_dict(SeqIO.parse("ls_orchid.fasta", "fasta"))
orchid_dict = SeqIO.to_dict(SeqIO.parse("ls_orchid.fasta", "fasta"), key_function=get_accession)

Another possibility is to use the sequence's SEGUID checksum as the dictionary key:

>>> from Bio.SeqUtils.CheckSum import seguid
>>> seguid_dict = SeqIO.to_dict(SeqIO.parse("ls_orchid.gbk", "genbank"),
...                             lambda rec: seguid(rec.seq))
>>> record = seguid_dict["MN/s0q9zDoCVEEc+k/IFwCNF2pY"]
>>> print(record.id)
Z78532.1
>>> print(record.description)
C.californicum 5.8S rRNA gene and ITS1 and ITS2 DNA.
That should have retrieved the record Z78532.1, the second entry in the file.
As the previous couple of examples tried to illustrate, using
Bio.SeqIO.to_dict() is very flexible. However, because it holds
everything in memory, the size of file you can work with is limited by your
computer’s RAM. In general, this will only work on small to medium files.
For larger files you should consider
Bio.SeqIO.index(), which works a little differently. Although
it still returns a dictionary like object, this does not keep
everything in memory. Instead, it just records where each record
is within the file – when you ask for a particular record, it then parses
it on demand.
As an example, let’s use the same GenBank file as before:
>>> from Bio import SeqIO
>>> orchid_dict = SeqIO.index("ls_orchid.gbk", "genbank")
>>> len(orchid_dict)
94
>>> orchid_dict.keys()
['Z78484.1', 'Z78464.1', 'Z78455.1', 'Z78442.1', 'Z78532.1', 'Z78453.1', ..., 'Z78471.1']
>>> seq_record = orchid_dict["Z78475.1"]
>>> print(seq_record.description)
P.supardii 5.8S rRNA gene and ITS1 and ITS2 DNA.
>>> seq_record.seq
Seq('CGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGATCATTGTTGAGATCACAT...GGT', IUPACAmbiguousDNA())
>>> orchid_dict.close()
Note that
Bio.SeqIO.index() won’t take a handle,
but only a filename. There are good reasons for this, but it is a little
technical. The second argument is the file format (a lower case string as
used in the other
Bio.SeqIO functions). You can use many other
simple file formats, including FASTA and FASTQ files (see the example in
Section 18.1.11). However, alignment
formats like PHYLIP or Clustal are not supported. Finally as an optional
argument you can supply an alphabet, or a key function.
Here is the same example using the FASTA file - all we change is the filename and the format name:
>>> from Bio import SeqIO
>>> orchid_dict = SeqIO.index("ls_orchid.fasta", "fasta")
>>> len(orchid_dict)
94
>>> orchid_dict.keys()
['gi|2765613|emb|Z78488.1|PTZ78488', ...]
Suppose you want to use the same keys as before? Much like with the
Bio.SeqIO.to_dict() example in Section 5.4.1.1,
you’ll need to write a tiny function to map from the FASTA identifier
(as a string) to the key you want:
def get_acc(identifier):
    """Given a SeqRecord identifier string, return the accession number as a string.

    e.g. "gi|2765613|emb|Z78488.1|PTZ78488" -> "Z78488.1"
    """
    parts = identifier.split("|")
    assert len(parts) == 5 and parts[0] == "gi" and parts[2] == "emb"
    return parts[3]
Then we can give this function to the
Bio.SeqIO.index()
function to use in building the dictionary:
>>> from Bio import SeqIO
>>> orchid_dict = SeqIO.index("ls_orchid.fasta", "fasta", key_function=get_acc)
>>> print(orchid_dict.keys())
['Z78484.1', 'Z78464.1', 'Z78455.1', 'Z78442.1', 'Z78532.1', 'Z78453.1', ..., 'Z78471.1']
Easy when you know how?
The dictionary-like object from
Bio.SeqIO.index() gives you each
entry as a
SeqRecord object. However, it is sometimes useful to
be able to get the original raw data straight from the file. For this
use the
get_raw() method which takes a
single argument (the record identifier) and returns a string (extracted
from the file without modification).
A motivating example is extracting a subset of records from a large
file where either
Bio.SeqIO.write() does not (yet) support the
output file format (e.g. the plain text SwissProt file format) or
where you need to preserve the text exactly (e.g. GenBank or EMBL
output from Biopython does not yet preserve every last bit of
annotation).
Let’s suppose you have downloaded the whole of UniProt in the plain
text SwissProt file format from their FTP site
()
and uncompressed it as the file
uniprot_sprot.dat, and you
want to extract just a few records from it:
>>> from Bio import SeqIO
>>> uniprot = SeqIO.index("uniprot_sprot.dat", "swiss")
>>> with open("selected.dat", "w") as out_handle:
...     for acc in ["P33487", "P19801", "P13689", "Q8JZQ5", "Q9TRC7"]:
...         out_handle.write(uniprot.get_raw(acc))
There is a longer example in Section 18.1.5 using the
SeqIO.index() function to sort a large sequence file (without loading everything into memory at once).
Biopython 1.57 introduced an alternative,
Bio.SeqIO.index_db(), which
can work on even extremely large files since it stores the record information
as a file on disk (using an SQLite3 database) rather than in memory. Also,
you can index multiple files together (providing all the record identifiers
are unique).
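The trick behind Bio.SeqIO.index_db() - identifiers and byte offsets stored in an SQLite3 table, with seek() used for retrieval - can be sketched with the standard library alone. This toy version is not the real implementation; it only handles a simple single-file FASTA case, but it shows why the memory footprint stays tiny:

```python
import os
import sqlite3
import tempfile

# Write a toy FASTA file to index.
path = os.path.join(tempfile.mkdtemp(), "toy.fasta")
with open(path, "w") as handle:
    handle.write(">Z78533.1\nACGTACGT\n>Z78532.1\nTTTTAAAA\n")

# One pass over the file, recording identifier -> byte offset in SQLite.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE offsets (key TEXT PRIMARY KEY, offset INTEGER)")
with open(path) as handle:
    while True:
        offset = handle.tell()
        line = handle.readline()
        if not line:
            break
        if line.startswith(">"):
            key = line[1:].split(None, 1)[0]
            db.execute("INSERT INTO offsets VALUES (?, ?)", (key, offset))

# Random access: look up the offset, seek, and read just that record.
def get_raw(key):
    (offset,) = db.execute(
        "SELECT offset FROM offsets WHERE key=?", (key,)).fetchone()
    with open(path) as handle:
        handle.seek(offset)
        lines = [handle.readline()]
        while True:
            line = handle.readline()
            if not line or line.startswith(">"):
                break
            lines.append(line)
        return "".join(lines)

print(get_raw("Z78532.1"))
```

Only the key/offset pairs live in the database; the sequence data stays on disk until asked for, which is exactly the index_db() trade-off described above.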
The
Bio.SeqIO.index() function takes three required arguments:
SeqIOmodule).
As an example, consider the GenBank flat file releases from the NCBI FTP site,, which are gzip compressed GenBank files. As of GenBank release 182, there are 16 files making up the viral sequences, gbvrl1.seq, …, gbvrl16.seq, containing in total almost one million records. You can index them like this:
>>> from Bio import SeqIO
>>> files = ["gbvrl%i.seq" % (i+1) for i in range(16)]
>>> gb_vrl = SeqIO.index_db("gbvrl.idx", files, "genbank")
>>> print("%i sequences indexed" % len(gb_vrl))
958086 sequences indexed
That takes about two minutes to run on my machine. If you rerun it then the index file (here gbvrl.idx) is reloaded in under a second. You can use the index as a read only Python dictionary - without having to worry about which file the sequence comes from, e.g.
>>> print(gb_vrl["GQ333173.1"].description) HIV-1 isolate F12279A1 from Uganda gag protein (gag) gene, partial cds.
Just as with the
Bio.SeqIO.index() function discussed above in
Section 5.4.2.2, the dictionary like object also lets you
get at the raw text of each record:
>>> print(gb_vrl.get_raw("GQ333173.1"))
LOCUS       GQ333173                 459 bp    DNA     linear   VRL 21-OCT-2009
DEFINITION  HIV-1 isolate F12279A1 from Uganda gag protein (gag) gene, partial
            cds.
ACCESSION   GQ333173
...
//
Very often when you are indexing a sequence file it can be quite large – so you may want to compress it on disk. Unfortunately efficient random access is difficult with the more common file formats like gzip and bzip2. In this setting, BGZF (Blocked GNU Zip Format) can be very helpful. This is a variant of gzip (and can be decompressed using standard gzip tools) popularised by the BAM file format, samtools, and tabix.
To create a BGZF compressed file you can use the command line tool
bgzip
which comes with samtools. In our examples we use a filename extension
*.bgz, so they can be distinguished from normal gzipped files (named
*.gz). You can also use the
Bio.bgzf module to read and write
BGZF files from within Python.
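The property BGZF exploits - that a gzip stream may consist of several independently compressed members, which readers transparently concatenate - can be seen with the standard gzip module alone. This only illustrates the format idea; real BGZF files add block-size metadata and should be written with bgzip or Bio.bgzf:

```python
import gzip
import io

# Compress two chunks as two separate gzip members and concatenate them.
block1 = gzip.compress(b">seq1\nACGT\n")
block2 = gzip.compress(b">seq2\nTTTT\n")
data = block1 + block2

# A standard gzip reader decompresses the concatenated members as one
# stream - which is why BGZF files can also be read by ordinary gzip tools,
# while a BGZF-aware reader can jump straight to any block.
with gzip.open(io.BytesIO(data), "rb") as handle:
    text = handle.read()

print(text)  # b'>seq1\nACGT\n>seq2\nTTTT\n'
```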
The
Bio.SeqIO.index() and
Bio.SeqIO.index_db() can both be
used with BGZF compressed files. For example, if you started with an
uncompressed GenBank file:
>>> from Bio import SeqIO
>>> orchid_dict = SeqIO.index("ls_orchid.gbk", "genbank")
>>> len(orchid_dict)
94
>>> orchid_dict.close()
You could compress this (while keeping the original file) at the command line using the following command – but don’t worry, the compressed file is already included with the other example files:
$ bgzip -c ls_orchid.gbk > ls_orchid.gbk.bgz
You can use the compressed file in exactly the same way:
>>> from Bio import SeqIO
>>> orchid_dict = SeqIO.index("ls_orchid.gbk.bgz", "genbank")
>>> len(orchid_dict)
94
>>> orchid_dict.close()
or:
>>> from Bio import SeqIO
>>> orchid_dict = SeqIO.index_db("ls_orchid.gbk.bgz.idx", "ls_orchid.gbk.bgz", "genbank")
>>> len(orchid_dict)
94
>>> orchid_dict.close()
The
SeqIO indexing automatically detects the BGZF compression. Note
that you can’t use the same index file for the uncompressed and compressed files.
So, which of these methods should you use and why? It depends on what you are
trying to do (and how much data you are dealing with). However, in general
picking
Bio.SeqIO.index() is a good starting point. If you are dealing
with millions of records, multiple files, or repeated analyses, then look at
Bio.SeqIO.index_db().
Reasons to choose
Bio.SeqIO.to_dict() over either
Bio.SeqIO.index() or
Bio.SeqIO.index_db() boil down to a need
for flexibility despite its high memory needs. The advantage of storing the
SeqRecord objects in memory is they can be changed, added to, or
removed at will. In addition to the downside of high memory consumption,
indexing can also take longer because all the records must be fully parsed.
Both
Bio.SeqIO.index() and
Bio.SeqIO.index_db() only parse
records on demand. When indexing, they scan the file once looking for the
start of each record and do as little work as possible to extract the
identifier.
Reasons to choose
Bio.SeqIO.index() over
Bio.SeqIO.index_db()
include:

Faster to build the index (more noticeable with simple file formats).
Slightly faster access as SeqRecord objects (the difference is only really noticeable for simple to parse file formats).
Can use any immutable Python object as the dictionary keys (e.g. a tuple of strings, or a frozen set), not just strings.
Reasons to choose
Bio.SeqIO.index_db() over
Bio.SeqIO.index()
include:
Not memory limited – the index is kept on disk. With tens of millions of records, holding the index in memory with Bio.SeqIO.index() can require more than 4GB of RAM and therefore a 64bit version of Python.
Because the index is stored on disk, it can be reused between sessions, and you can index multiple files together.
The get_raw() method can be much faster, since for most file formats the length of each record is stored as well as its offset.
We’ve talked about using
Bio.SeqIO.parse() for sequence input (reading files), and now we’ll look at
Bio.SeqIO.write() which is for sequence output (writing files). This is a function taking three arguments: some
SeqRecord objects, a handle or filename to write to, and a lower case string giving the sequence format, e.g. SeqIO.write(my_records, "my_example.faa", "fasta"). The
Bio.SeqIO.write() function returns the number of
SeqRecord objects written to the file.
Note - If you tell the
Bio.SeqIO.write() function to write to a file that already exists, the old file will be overwritten without any warning.
Some people like their parsers to be “round-tripable”, meaning if you read in
a file and write it back out again it is unchanged. This requires that the parser
must extract enough information to reproduce the original file exactly.
Bio.SeqIO does not aim to do this.
As a trivial example, any line wrapping of the sequence data in FASTA files is
allowed. An identical
SeqRecord would be given from parsing the following
two examples which differ only in their line breaks:
>YAL068C-7235.2170 Putative promoter sequence
TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCACAGTTTTCGTTAAGA
GAACTTAACATTTTCTTATGACGTAAATGAAGTTTATATATAAATTTCCTTTTTATTGGA

>YAL068C-7235.2170 Putative promoter sequence
TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCA
CAGTTTTCGTTAAGAGAACTTAACATTTTCTTATGACGTAAATGA
AGTTTATATATAAATTTCCTTTTTATTGGA
To make a round-tripable FASTA parser you would need to keep track of where the sequence line breaks occurred, and this extra information is usually pointless. Instead Biopython uses a default line wrapping of 60 characters on output. The same problem with white space applies in many other file formats too. Another issue in some cases is that Biopython does not (yet) preserve every last bit of annotation (e.g. GenBank and EMBL).
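You can check the line wrapping point with a few lines of throwaway Python (a toy parser, not Bio.SeqIO): joining the sequence lines of either variant gives an identical sequence:

```python
def fasta_seq(text):
    """Concatenate the sequence lines of a single-record FASTA string."""
    lines = text.strip().splitlines()
    assert lines[0].startswith(">")
    return "".join(lines[1:])

two_line = """>YAL068C-7235.2170 Putative promoter sequence
TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCACAGTTTTCGTTAAGA
GAACTTAACATTTTCTTATGACGTAAATGAAGTTTATATATAAATTTCCTTTTTATTGGA
"""

three_line = """>YAL068C-7235.2170 Putative promoter sequence
TACGAGAATAATTTCTCATCATCCAGCTTTAACACAAAATTCGCA
CAGTTTTCGTTAAGAGAACTTAACATTTTCTTATGACGTAAATGA
AGTTTATATATAAATTTCCTTTTTATTGGA
"""

print(fasta_seq(two_line) == fasta_seq(three_line))  # True
```

This is exactly the information a "round-tripable" parser would have to preserve and Bio.SeqIO deliberately does not: where the line breaks fell.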
Occasionally preserving the original layout (with any quirks it may have) is
important. See Section 5.4.2.2 about the
get_raw()
method of the
Bio.SeqIO.index() dictionary-like object for one potential
solution.
In the previous example we used a list of
SeqRecord objects as input to the
Bio.SeqIO.write() function, but it will also accept a
SeqRecord iterator like we get from
Bio.SeqIO.parse() – this lets us do file conversion by combining these two functions.
For this example we’ll read in the GenBank format file ls_orchid.gbk and write it out in FASTA format:
from Bio import SeqIO

records = SeqIO.parse("ls_orchid.gbk", "genbank")
count = SeqIO.write(records, "my_example.fasta", "fasta")
print("Converted %i records" % count)
Still, that is a little bit complicated. So, because file conversion is such a common task, there is a helper function letting you replace that with just:
from Bio import SeqIO

count = SeqIO.convert("ls_orchid.gbk", "genbank", "my_example.fasta", "fasta")
print("Converted %i records" % count)
The
Bio.SeqIO.convert() function will take handles or filenames.
Watch out though – if the output file already exists, it will overwrite it!
To find out more, see the built in help:
>>> from Bio import SeqIO
>>> help(SeqIO.convert)
...

There are also examples in Sections 18.1.9 and 18.1.10 in the cookbook chapter which look at inter-converting between different FASTQ formats.
Finally, as an added incentive for using the
Bio.SeqIO.convert() function
(on top of the fact your code will be shorter), doing it this way may also be
faster! The reason for this is the convert function can take advantage of
several file format specific optimisations and tricks..7):
>>> from Bio import SeqIO
>>> for record in SeqIO.parse("ls_orchid.gbk", "genbank"):
...     print(record.id)
...     print(record.seq.reverse_complement())
Now, if we want to save these reverse complements to a file, we’ll need to make
SeqRecord objects.
We can use the
SeqRecord object’s built in
.reverse_complement() method (see Section 4.8) but we must decide how to name our new records.
This is an excellent place to demonstrate the power of list comprehensions which make a list in memory:
>>> from Bio import SeqIO
>>> records = [rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
...            for rec in SeqIO.parse("ls_orchid.fasta", "fasta")]
>>> len(records)
94
Now list comprehensions have a nice trick up their sleeves, you can add a conditional statement:
>>> records = [rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
...            for rec in SeqIO.parse("ls_orchid.fasta", "fasta") if len(rec)<700]
>>> len(records)
18

However, building a list like this holds all the records in memory at once. If you only want to pass the records on to Bio.SeqIO.write(), you can use a generator expression instead, which evaluates lazily:

>>> records = (rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
...            for rec in SeqIO.parse("ls_orchid.fasta", "fasta") if len(rec)<700)
As a complete example:
>>> from Bio import SeqIO
>>> records = (rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
...            for rec in SeqIO.parse("ls_orchid.fasta", "fasta") if len(rec)<700)
>>> SeqIO.write(records, "rev_comp.fasta", "fasta")
18
There is a related example in Section 18.1.3. If instead of writing your records to a file or handle you want a string containing them in a particular file format, use the
SeqRecord class’
format() method (see Section 4.5).
Note that although we don’t encourage it, you can use the
format() method to write to a file, for example something like this:
from Bio import SeqIO

out_handle = open("ls_orchid_long.tab", "w")
for record in SeqIO.parse("ls_orchid.gbk", "genbank"):
    if len(record) > 100:
        out_handle.write(record.format("tab"))
out_handle.close()
While this style of code will work for a simple sequential file format like FASTA or the simple tab separated format used here, it will not work for more complex or interlaced file formats. This is why we still recommend using
Bio.SeqIO.write(), as in the following example:
from Bio import SeqIO

records = (rec for rec in SeqIO.parse("ls_orchid.gbk", "genbank") if len(rec) > 100)
SeqIO.write(records, "ls_orchid.tab", "tab")
Making a single call to
SeqIO.write(...) is also much quicker than
multiple calls to the
SeqRecord.format(...) method.
This chapter is about Multiple Sequence Alignments, by which we mean a collection of
multiple sequences which have been aligned together – usually with the insertion of gap
characters, and addition of leading or trailing gaps – such that all the sequence
strings are the same length. Such an alignment can be regarded as a matrix of letters,
where each row is held as a
SeqRecord object internally.
We will introduce the
MultipleSeqAlignment object which holds this kind of data,
and the
Bio.AlignIO module for reading and writing them as various file formats
(following the design of the
Bio.SeqIO module from the previous chapter).
For this example we’ll use the seed alignment for the Phage_Coat_Gp8 (PF05371) entry, taken from a now out of date release of PFAM. We can load this file as follows (assuming it has been saved to disk as “PF05371_seed.sth” in the current working directory):
>>> from Bio import AlignIO
>>> alignment = AlignIO.read("PF05371_seed.sth", "stockholm")
This code will print out a summary of the alignment:
>>> print(alignment)
SingleLetterAlphabet() alignment with 7 rows and 52 columns
...

>>> print("Alignment length %i" % alignment.get_alignment_length())
Alignment length 52
>>> for record in alignment:
...     print("%s %s" % (record.id, record.dbxrefs))
To write alignments out, use Bio.AlignIO.write(). This is a function taking three arguments: some MultipleSeqAlignment objects (or, for backwards compatibility, the obsolete
Alignment objects), a handle or filename to write to, and a sequence format.
Here is an example, where we start by creating a few
MultipleSeqAlignment objects the hard way (by hand, rather than by loading them from a file).
Note we create some
SeqRecord objects to construct the alignment from.
from Bio.Alphabet import generic_dna
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Align import MultipleSeqAlignment

align1 = MultipleSeqAlignment([
    SeqRecord(Seq("ACTGCTAGCTAG", generic_dna), id="Alpha"),
    SeqRecord(Seq("ACT-CTAGCTAG", generic_dna), id="Beta"),
    SeqRecord(Seq("ACTGCTAGDTAG", generic_dna), id="Gamma"),
])

align2 = MultipleSeqAlignment([
    SeqRecord(Seq("GTCAGC-AG", generic_dna), id="Delta"),
    SeqRecord(Seq("GACAGCTAG", generic_dna), id="Epsilon"),
    SeqRecord(Seq("GTCAGCTAG", generic_dna), id="Zeta"),
])

align3 = MultipleSeqAlignment([
    SeqRecord(Seq("ACTAGTACAGCTG", generic_dna), id="Eta"),
    SeqRecord(Seq("ACTAGTACAGCT-", generic_dna), id="Theta"),
    SeqRecord(Seq("-CTACTACAGGTG", generic_dna), id="Iota"),
])

my_alignments = [align1, align2, align3]
Now that we have a list of MultipleSeqAlignment objects, we’ll write them to a PHYLIP format file:
from Bio import AlignIO

AlignIO.write(my_alignments, "my_example.phy", "phylip")

The Bio.AlignIO.write() function returns the number of alignments written to the file.
Bio.AlignIO.write() function returns the number of alignments written to the file.
Note - If you tell the
Bio.AlignIO.write() function to write to a file that already exists, the old file will be overwritten without any warning.
Converting between sequence alignment file formats with
Bio.AlignIO works
in the same way as converting between sequence file formats with
Bio.SeqIO
(Section 5.5.2). We generally load the alignment(s) using
Bio.AlignIO.parse() and then save them using the
Bio.AlignIO.write()
– or just use the
Bio.AlignIO.convert() helper function.
For this example, we’ll load the PFAM/Stockholm format file used earlier and save it as a Clustal W format file:
from Bio import AlignIO

count = AlignIO.convert("PF05371_seed.sth", "stockholm", "PF05371_seed.aln", "clustal")
print("Converted %i alignments" % count)
Or, using
Bio.AlignIO.parse() and
Bio.AlignIO.write():
from Bio import AlignIO

alignments = AlignIO.parse("PF05371_seed.sth", "stockholm")
count = AlignIO.write(alignments, "PF05371_seed.aln", "clustal")
print("Converted %i alignments" % count)

In this case we know there is only one alignment in the file, so we could have used Bio.AlignIO.read() instead – but note the alignment must then be passed to Bio.AlignIO.write() as a single element list:

from Bio import AlignIO

alignment = AlignIO.read("PF05371_seed.sth", "stockholm")
AlignIO.write([alignment], "PF05371_seed.aln", "clustal")

Converting to PHYLIP format works the same way:

AlignIO.convert("PF05371_seed.sth", "stockholm", "PF05371_seed.phy", "phylip")

One of the big handicaps of the original PHYLIP alignment file format is that the sequence identifiers are strictly truncated at ten characters. In this example, as you can see the resulting names are still unique - but they are not very readable. As a result, a more relaxed variant of the original PHYLIP format is now quite widely used:
from Bio import AlignIO AlignIO.convert("PF05371_seed.sth", "stockholm", "PF05371_seed.phy", "phylip-relaxed")
This time the output looks like this, using a longer indentation to allow all the identifiers to be given in full:
 7 52
COATB_BPIKE/30-81   AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIRLFKKFSS KA
Q9T0Q8_BPIKE/1-52   AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIKLFKKFVS RA
COATB_BPI22/32-83   DGTSTATSYA TEAMNSLKTQ ATDLIDQTWP VVTSVAVAGL AIRLFKKFSS KA
COATB_BPM13/24-72   AEGDDP---A KAAFNSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS KA
COATB_BPZJ2/1-49    AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFAS KA
Q9T0Q9_BPFD/1-49    AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS KA
COATB_BPIF1/22-73   FAADDATSQA KAAFDSLTAQ ATEMSGYAWA LVVLVVGATV GIKLFKKFVS RA
If you have to work with the original strict PHYLIP format, then you may need to compress the identifiers somehow – or assign your own names or numbering system. This following bit of code manipulates the record identifiers before saving the output:
from Bio import AlignIO

alignment = AlignIO.read("PF05371_seed.sth", "stockholm")
name_mapping = {}
for i, record in enumerate(alignment):
    name_mapping[i] = record.id
    record.id = "seq%i" % i
print(name_mapping)
AlignIO.write([alignment], "PF05371_seed.phy", "phylip")

This keeps a mapping from the new short names back to the original identifiers before writing the (strict) PHYLIP file. If instead of a file you want a string containing the alignment in a particular format, you can use the alignment object’s
format() method.
This takes a single mandatory argument, a lower case string which is supported by
Bio.AlignIO as an output format. For example:
from Bio import AlignIO

alignment = AlignIO.read("PF05371_seed.sth", "stockholm")
print(alignment.format("clustal"))

Internally the format() method uses the StringIO string based handle and calls Bio.AlignIO.write(). You can do this yourself if you prefer:

from Bio import AlignIO
from StringIO import StringIO

alignments = AlignIO.parse("PF05371_seed.sth", "stockholm")
out_handle = StringIO()
AlignIO.write(alignments, out_handle, "clustal")
clustal_data = out_handle.getvalue()
print(clustal_data)
Now that we’ve covered loading and saving alignments, we’ll look at what else you can do with them.
First of all, in some senses the alignment objects act like a Python
list of
SeqRecord objects (the rows). With this model in mind hopefully the actions
of
len() (the number of rows) and iteration (each row as a
SeqRecord)
make sense:
>>> from Bio import AlignIO
>>> alignment = AlignIO.read("PF05371_seed.sth", "stockholm")
>>> print("Number of rows: %i" % len(alignment))
Number of rows: 7

You can also use the list-like
append and
extend methods to add
more rows to the alignment (as
SeqRecord objects). Keeping the list
metaphor in mind, simple slicing of the alignment should also make sense -
it selects some of the rows giving back another alignment object:
>>> print(alignment[3:7])
SingleLetterAlphabet() alignment with 4 rows and 52 columns
What if you wanted to select by column? Those of you who have used the NumPy matrix or array objects won’t be surprised at this - you use a double index.
>>> print(alignment[2, 6]) T
Using two integer indices pulls out a single letter, short hand for this:
>>> print(alignment[2].seq[6]) T
You can pull out a single column as a string like this:
>>> print(alignment[:, 6]) TTT---T
You can also select a range of columns. For example, to pick out those same three rows we extracted earlier, but take just their first six columns:
>>> print(alignment[3:6, :6]) SingleLetterAlphabet() alignment with 3 rows and 6 columns AEGDDP COATB_BPM13/24-72 AEGDDP COATB_BPZJ2/1-49 AEGDDP Q9T0Q9_BPFD/1-49
Leaving the first index as
: means take all the rows:
>>> print(alignment[:, :6]) SingleLetterAlphabet() alignment with 7 rows and 6 columns AEPNAA COATB_BPIKE/30-81 AEPNAA Q9T0Q8_BPIKE/1-52 DGTSTA COATB_BPI22/32-83 AEGDDP COATB_BPM13/24-72 AEGDDP COATB_BPZJ2/1-49 AEGDDP Q9T0Q9_BPFD/1-49 FAADDA COATB_BPIF1/22-73
This brings us to a neat way to remove a section. Notice columns 7, 8 and 9 which are gaps in three of the seven sequences:
>>> print(alignment[:, 6:9]) SingleLetterAlphabet() alignment with 7 rows and 3 columns TNY COATB_BPIKE/30-81 TNY Q9T0Q8_BPIKE/1-52 TSY COATB_BPI22/32-83 --- COATB_BPM13/24-72 --- COATB_BPZJ2/1-49 --- Q9T0Q9_BPFD/1-49 TSQ COATB_BPIF1/22-73
Again, you can slice to get everything after the ninth column:
>>> print(alignment[:, 9:]) SingleLetterAlphabet() alignment with 7 rows and 43 columns ATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIRLFKKFSSKA COATB_BPIKE/30-81 ATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIKLFKKFVSRA Q9T0Q8_BPIKE/1-52 ATEAMNSLKTQATDLIDQTWPVVTSVAVAGLAIRLFKKFSSKA COATB_BPI22/32-83 AKAAFNSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA COATB_BPM13/24-72 AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFASKA COATB_BPZJ2/1-49 AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA Q9T0Q9_BPFD/1-49 AKAAFDSLTAQATEMSGYAWALVVLVVGATVGIKLFKKFVSRA COATB_BPIF1/22-73
Now, the interesting thing is that addition of alignment objects works by column. This lets you do this as a way to remove a block of columns:
>>> edited = alignment[:, :6] + alignment[:, 9:]
>>> print(edited)
SingleLetterAlphabet() alignment with 7 rows and 49 columns
AEPNAAATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIRLFKKFSSKA COATB_BPIKE/30-81
AEPNAAATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIKLFKKFVSRA Q9T0Q8_BPIKE/1-52
DGTSTAATEAMNSLKTQATDLIDQTWPVVTSVAVAGLAIRLFKKFSSKA COATB_BPI22/32-83
AEGDDPAKAAFNSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA COATB_BPM13/24-72
AEGDDPAKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFASKA COATB_BPZJ2/1-49
AEGDDPAKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA Q9T0Q9_BPFD/1-49
FAADDAAKAAFDSLTAQATEMSGYAWALVVLVVGATVGIKLFKKFVSRA COATB_BPIF1/22-73
Another common use of alignment addition would be to combine alignments for
several different genes into a meta-alignment. Watch out though - the identifiers
need to match up (see Section 4.7 for how adding
SeqRecord objects works). You may find it helpful to first sort the
alignment rows alphabetically by id:
>>> edited.sort()
>>> print(edited)
SingleLetterAlphabet() alignment with 7 rows and 49 columns
DGTSTAATEAMNSLKTQATDLIDQTWPVVTSVAVAGLAIRLFKKFSSKA COATB_BPI22/32-83
FAADDAAKAAFDSLTAQATEMSGYAWALVVLVVGATVGIKLFKKFVSRA COATB_BPIF1/22-73
AEPNAAATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIRLFKKFSSKA COATB_BPIKE/30-81
AEGDDPAKAAFNSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA COATB_BPM13/24-72
AEGDDPAKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFASKA COATB_BPZJ2/1-49
AEPNAAATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIKLFKKFVSRA Q9T0Q8_BPIKE/1-52
AEGDDPAKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA Q9T0Q9_BPFD/1-49
Note that you can only add two alignments together if they have the same number of rows.
Depending on what you are doing, it can be more useful to turn the alignment object into an array of letters – and you can do this with NumPy:
>>> import numpy as np
>>> from Bio import AlignIO
>>> alignment = AlignIO.read("PF05371_seed.sth", "stockholm")
>>> align_array = np.array([list(rec) for rec in alignment], np.character)
>>> print("Array shape %i by %i" % align_array.shape)
Array shape 7 by 52
If you will be working heavily with the columns, you can tell NumPy to store the array by column (as in Fortran) rather then its default of by row (as in C):
>>> align_array = np.array([list(rec) for rec in alignment], np.character, order="F")
Note that this leaves the original Biopython alignment object and the NumPy array in memory as separate objects - editing one will not update the other!

Biopython includes wrappers for several command line alignment tools in the Bio.Align.Applications module (see Section 6.4.5), and wrappers for the EMBOSS
packaged versions of the PHYLIP tools (which EMBOSS refer to as one
of their EMBASSY packages - third party tools with an EMBOSS style
interface).
We won’t explore all of these alignment tools in this section, just a
sample, but the same principles apply.
ClustalW is a popular command line tool for multiple sequence alignment
(there is also a graphical interface called ClustalX). Biopython’s
Bio.Align.Applications module has a wrapper for this alignment tool
(and several others). If the tool is not on your PATH, you may need to specify the full path to the executable. For example:
>>> import os
>>> from Bio.Align.Applications import ClustalwCommandline
>>> clustalw_exe = r"C:\Program Files\new clustal\clustalw2.exe"
>>> clustalw_cline = ClustalwCommandline(clustalw_exe, infile="opuntia.fasta")
>>> assert os.path.isfile(clustalw_exe), "Clustal W executable missing"
>>> stdout, stderr = clustalw_cline()
Remember, in Python strings
\n and
\t are by default
interpreted as a new line and a tab – which is why we’ve put a letter
“r” at the start for a raw string that isn’t translated in this way.
This is generally good practice when specifying a Windows style file name.
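A quick sanity check of the raw string behaviour (plain Python, nothing Biopython specific):

```python
# In a normal string "\t" is a single tab character; in a raw string it is
# a backslash followed by the letter t - exactly what a Windows path needs.
print(len("\t"))   # 1 - a single tab character
print(len(r"\t"))  # 2 - a backslash and the letter t

path = r"C:\Program Files\new clustal\clustalw2.exe"
print("\n" in path)  # False - the backslashes survive intact
```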
Internally this uses the
subprocess module which is now the recommended way to run another
program in Python. This replaces older options like the
os.system()
and the
os.popen* functions.
Now, at this point it helps to know about how command line tools “work”. When you run a tool at the command line, it will often print text output directly to screen. This text can be captured or redirected, via two “pipes”, called standard output (the normal results) and standard error (for error messages and debug messages). There is also standard input, which is any text fed into the tool. These names get shortened to stdin, stdout and stderr. When the tool finishes, it has a return code (an integer), which by convention is zero for success.
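You can see all three pieces - stdout, stderr and the return code - with the standard subprocess module and a trivial child process (a portable Python one-liner here, rather than a real alignment tool):

```python
import subprocess
import sys

# Run a child process, capturing stdout and stderr separately.
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('result'); sys.stderr.write('debug\\n')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)

print(result.returncode)      # 0 means success by convention
print(result.stdout.strip())  # the normal output pipe
print(result.stderr.strip())  # the error/debug pipe
```

The Biopython wrappers do essentially this for you, raising an exception when the return code is non zero and handing back stdout and stderr as strings.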
When you run the command line tool like this via the Biopython wrapper, it will wait for it to finish, and check the return code. If this is non zero (indicating an error), an exception is raised. The wrapper then returns two strings, stdout and stderr.
In the case of ClustalW, when run at the command line all the important output is written directly to the output files. Everything normally printed to screen while you wait (via stdout or stderr) is boring and can be ignored (assuming it worked).
What we care about are the two output files, the alignment and the guide
tree. We didn’t tell ClustalW what filenames to use, but it defaults to
picking names based on the input file. In this case the output should be
in the file
opuntia.aln.
You should be able to work out how to read in the alignment using
Bio.AlignIO by now:
>>> from Bio import AlignIO
>>> align = AlignIO.read("opuntia.aln", "clustal")

The opuntia.dnd guide tree file is a standard Newick tree file, and Bio.Phylo can parse these:
>>> from Bio import Phylo
>>> tree = Phylo.read("opuntia.dnd", "newick")
>>> Phylo.draw_ascii(tree)
Chapter 13 covers Biopython’s support for phylogenetic trees in more depth.

MUSCLE is a more recent multiple sequence alignment tool than ClustalW. By default MUSCLE writes its alignment in FASTA format, and the Bio.AlignIO module should be able to read this alignment using format="fasta". You can also ask MUSCLE for strict Clustal output, e.g. muscle -in opuntia.fasta -out opuntia.aln -clwstrict
The
Bio.AlignIO module should be able to read these alignments
using format="clustal".
You would then run MUSCLE command line string as described above for
ClustalW, and parse the output using
Bio.AlignIO to get an
alignment object.

>>> from Bio.Align.Applications import MuscleCommandline
>>> muscle_cline = MuscleCommandline(input="opuntia.fasta")
>>> print(muscle_cline)
muscle -in opuntia.fasta
If we run this via the wrapper, we get back the output as a string. In order
to parse this we can use
StringIO to turn it into a handle.
Remember that MUSCLE defaults to using FASTA as the output format:
>>> from Bio.Align.Applications import MuscleCommandline
>>> muscle_cline = MuscleCommandline(input="opuntia.fasta")
>>> stdout, stderr = muscle_cline()
>>> from StringIO import StringIO
>>> from Bio import AlignIO
>>> align = AlignIO.read(StringIO(stdout), "fasta")
The above approach is fairly simple, but if you are dealing with very large output
text the fact that all of stdout and stderr is loaded into memory as a string can
be a potential drawback. Using the
subprocess module we can work directly
with handles instead:
>>> import subprocess
>>> import sys
>>> from Bio.Align.Applications import MuscleCommandline
>>> muscle_cline = MuscleCommandline(input="opuntia.fasta")
>>> child = subprocess.Popen(str(muscle_cline),
...                          stdout=subprocess.PIPE,
...                          stderr=subprocess.PIPE,
...                          shell=(sys.platform != "win32"))
>>> from Bio import AlignIO
>>> align = AlignIO.read(child.stdout, "fasta")

You can even feed MUSCLE its input via stdin, avoiding the input file altogether. Note this is a bit more advanced and fiddly, so don’t bother with this technique unless you need to. This time we build the MUSCLE command line with no input filename, asking for strict ClustalW output:

>>> muscle_cline = MuscleCommandline(clwstrict=True)
>>> print(muscle_cline)
muscle -clwstrict
Now for the fiddly bits using the
subprocess module, stdin and stdout:
>>> import subprocess
>>> import sys
>>> child = subprocess.Popen(str(muscle_cline),
...                          stdin=subprocess.PIPE,
...                          stdout=subprocess.PIPE,
...                          stderr=subprocess.PIPE,
...                          universal_newlines=True,
...                          shell=(sys.platform != "win32"))
>>> from Bio import SeqIO
>>> records = SeqIO.parse("opuntia.fasta", "fasta")
>>> count = SeqIO.write(records, child.stdin, "fasta")
>>> child.stdin.close()
>>> from Bio import AlignIO
>>> align = AlignIO.read(child.stdout, "clustal")

A word of caution though: dealing with errors in this style of calling external programs is much more complicated. It also becomes far harder to diagnose problems, because you can’t try running MUSCLE manually outside of Biopython (because you don’t have the input file to supply). There can also be subtle cross platform issues (e.g. Windows versus Linux, Python 2 versus Python 3), and how you run your script can have an impact (e.g. at the command line, from IDLE or an IDE, or as a GUI script). These are all generic Python issues though, and not specific to Biopython.
If you find working directly with subprocess like this scary, there is an alternative. If you execute the tool with muscle_cline() you can supply any standard input as a big string, muscle_cline(stdin=...). So, provided your data isn’t very big, you can prepare the FASTA input in memory as a string using StringIO (see Section 22.1):
>>> from Bio import SeqIO
>>> records = (r for r in SeqIO.parse("opuntia.fasta", "fasta") if len(r) < 900)
>>> from StringIO import StringIO
>>> handle = StringIO()
>>> SeqIO.write(records, handle, "fasta")
6
>>> data = handle.getvalue()
You can then run the tool and parse the alignment as follows:
>>> stdout, stderr = muscle_cline(stdin=data)
>>> from Bio import AlignIO
>>> align = AlignIO.read(StringIO(stdout), "clustal")
You might find this easier, but it does require more memory (RAM) for the strings used for the input FASTA and output Clustal formatted data.

Moving on to pairwise alignments, the EMBOSS needle tool performs a Needleman-Wunsch global alignment between two sequences. You can build up its command line by setting each option as a property of the wrapper object:

>>> from Bio.Emboss.Applications import NeedleCommandline
>>> needle_cline = NeedleCommandline()
>>> needle_cline.asequence = "alpha.faa"
>>> needle_cline.bsequence = "beta.faa"
>>> needle_cline.gapopen = 10
>>> needle_cline.gapextend = 0.5
>>> needle_cline.outfile = "needle.txt"
>>> print(needle_cline)
needle -outfile=needle.txt -asequence=alpha.faa -bsequence=beta.faa -gapopen=10 -gapextend=0.5
>>> print(needle_cline.outfile)
needle.txt
Next we want to use Python to run this command for us. As explained above, for full control, we recommend you use the built in Python subprocess module, but for simple usage the wrapper object usually suffices:
>>> stdout, stderr = needle_cline()
>>> print(stdout + stderr)
Needleman-Wunsch global alignment of two sequences
Next we can load the output file with
Bio.AlignIO as
discussed earlier in this chapter, as the emboss format:
>>> from Bio import AlignIO
>>> align = AlignIO.read("needle.txt", "emboss")

Many of the EMBOSS tools can also write their output to stdout (use stdout=True rather than the outfile argument), and read one of the inputs from stdin (e.g. asequence="stdin"), much as in the MUSCLE examples above.
Your first introduction to running BLAST was probably via the NCBI web-service. In fact, there are lots of ways you can run BLAST, which can be categorised several ways. The most important distinction is running BLAST locally (on your own machine), and running BLAST remotely (on another machine, typically the NCBI servers). We’re going to start this chapter by invoking the NCBI online BLAST service from within a Python script.
NOTE: The following Chapter 8 describes
Bio.SearchIO, an experimental module in Biopython. We
intend this to ultimately replace the older
Bio.Blast module, as it
provides a more general framework handling other related sequence
searching tools as well. However, until that is declared stable, for
production code please continue to use the
Bio.Blast module
for dealing with NCBI BLAST.

To call the online version of BLAST we use the function qblast() in the Bio.Blast.NCBIWWW module. It has three non-optional arguments (the BLAST program to use, the database to search against, and your query sequence), plus a number of optional arguments; for example:
expect sets the expectation or e-value threshold.
For more about the optional BLAST arguments, we refer you to the NCBI’s own documentation, or that built into Biopython:
>>> from Bio.Blast import NCBIWWW
>>> help(NCBIWWW.qblast)
...
Note that the default settings on the NCBI BLAST website are not quite the same as the defaults on QBLAST. If you get different results, you’ll need to check the parameters (e.g. the expectation value threshold and the gap values).
For example, if you have a nucleotide sequence you want to search against the nucleotide database (nt) using BLASTN, and you know the GI number of your query sequence, you can use:
>>> from Bio.Blast import NCBIWWW
>>> result_handle = NCBIWWW.qblast("blastn", "nt", "8332116")

Alternatively, if the query sequence is already in a FASTA formatted file, we can read the file contents into a string and use that as the query argument:

>>> from Bio.Blast import NCBIWWW
>>> fasta_string = open("m_cold.fasta").read()
>>> result_handle = NCBIWWW.qblast("blastn", "nt", fasta_string)
We could also have read in the FASTA file as a
SeqRecord and then
supplied just the sequence itself:
>>> from Bio.Blast import NCBIWWW
>>> from Bio import SeqIO
>>> record = SeqIO.read("m_cold.fasta", format="fasta")
>>> result_handle = NCBIWWW.qblast("blastn", "nt", record.seq)

Or, use the SeqRecord’s format method to make a FASTA string (which will include the existing identifier):

>>> record = SeqIO.read("m_cold.fasta", format="fasta")
>>> result_handle = NCBIWWW.qblast("blastn", "nt", record.format("fasta"))

Whatever arguments you give the qblast() function, you should get back your results in a handle object (by default in XML format). The next step would be to parse the XML output into Python objects representing the search results (Section 7.3),
but you might want to save a local copy of the output file first.
I find this especially useful when debugging my code that extracts
info from the BLAST results (because re-running the online search
is slow and wastes the NCBI computer time).
We need to be a bit careful since we can use
result_handle.read() to
read the BLAST output only once – calling
result_handle.read() again
returns an empty string.
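This read-once behaviour is not specific to BLAST results; it is how Python file-like handles work in general. A quick illustration using an in-memory handle from the standard library:

```python
from io import StringIO

# Any handle behaves like this, including the one returned by qblast().
handle = StringIO("pretend this is BLAST XML output")
first = handle.read()   # consumes everything in the handle
second = handle.read()  # the handle is now exhausted

print(repr(first))   # the full content
print(repr(second))  # '' - an empty string, nothing left to read
```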
>>> save_file = open("my_blast.xml", "w")
>>> save_file.write(result_handle.read())
>>> save_file.close()
>>> result_handle.close()
After doing this, the results are in the file
my_blast.xml and the
original handle has had all its data extracted (so we closed it). However,
the
parse function of the BLAST parser (described
in 7.3) takes a file-handle-like object, so
we can just open the saved file for input:
>>> result_handle = open("my_blast.xml")
Now that we’ve got the BLAST results back into a handle again, we are ready to do something with them, so this leads us right into the parsing section (see Section 7.3 below). You may want to jump ahead to that now ….
Running BLAST locally (as opposed to over the internet, see Section 7.1) has at least two major advantages:
Dealing with proprietary or unpublished sequence data can be another reason to run BLAST locally. You may not be allowed to redistribute the sequences, so submitting them to the NCBI as a BLAST query would not be an option.
Unfortunately, there are some major drawbacks too – installing all the bits and getting it setup right takes some effort:
To further confuse matters there are several different BLAST packages available, and there are also other tools which can produce imitation BLAST output files, such as BLAT.
The “new” NCBI BLAST+ suite was released in 2009. This replaces the old NCBI “legacy” BLAST package (see below).
This section will show briefly how to use these tools from within Python. If you have already read or tried the alignment tool examples in Section 6.4 this should all seem quite straightforward. First, we construct a command line string (as you would type in at the command line prompt if running standalone BLAST by hand). Then we can execute this command from within Python.
For example, taking a FASTA file of gene nucleotide sequences, you might want to run a BLASTX (translation) search against the non-redundant (NR) protein database. Assuming you (or your systems administrator) has downloaded and installed the NR database, you might run:
blastx -query opuntia.fasta -db nr -out opuntia.xml -evalue 0.001 -outfmt 5
This should run BLASTX against the NR database, using an expectation cut-off value of 0.001 and produce XML output to the specified file (which we can then parse). On my computer this takes about six minutes - a good reason to save the output to a file so you can repeat any analysis as needed.
From within Biopython we can use the NCBI BLASTX wrapper from the
Bio.Blast.Applications module to build the command line string,
and run it:
>>> from Bio.Blast.Applications import NcbiblastxCommandline
>>> help(NcbiblastxCommandline)
...
>>> blastx_cline = NcbiblastxCommandline(query="opuntia.fasta", db="nr", evalue=0.001,
...                                      outfmt=5, out="opuntia.xml")
>>> blastx_cline
NcbiblastxCommandline(cmd='blastx', out='opuntia.xml', outfmt=5, query='opuntia.fasta', db='nr', evalue=0.001)
>>> print(blastx_cline)
blastx -out opuntia.xml -outfmt 5 -query opuntia.fasta -db nr -evalue 0.001
>>> stdout, stderr = blastx_cline()
In this example there shouldn’t be any output from BLASTX to the terminal,
so stdout and stderr should be empty. You may want to check the output file
opuntia.xml has been created.
As you may recall from earlier examples in the tutorial, the
opuntia.fasta
contains seven sequences, so the BLAST XML output should contain multiple results.
Therefore use
Bio.Blast.NCBIXML.parse() to parse it as described below in
Section 7.3.
NCBI BLAST+ (written in C++) was first released in 2009 as a replacement for
the original NCBI “legacy” BLAST (written in C) which is no longer being updated.
There were a lot of changes – the old version had a single core command line
tool
blastall which covered multiple different BLAST search types (which
are now separate commands in BLAST+), and all the command line options
were renamed.
Biopython’s wrappers for the NCBI “legacy” BLAST tools have been deprecated
and will be removed in a future release.
To try to avoid confusion, we do not cover calling these old tools from Biopython
in this tutorial.
You may also come across Washington University BLAST
(WU-BLAST), and its successor, Advanced Biocomputing
BLAST (AB-BLAST, released in 2009, not free/open source). These packages include
the command line tools
wu-blastall and
ab-blastall, which mimicked
blastall from the NCBI “legacy” BLAST suite.
Biopython does not currently provide wrappers for calling these tools, but should be able
to parse any NCBI compatible output from them.
As mentioned above, BLAST can generate output in various formats, such as XML, HTML, and plain text. Originally, Biopython had parsers for BLAST plain text and HTML output, as these were the only output formats offered at the time. Unfortunately, the BLAST output in these formats kept changing, each time breaking the Biopython parsers. Our HTML BLAST parser has been removed, but the plain text BLAST parser is still available (see Section 7.5). Use it at your own risk, it may or may not work, depending on which BLAST version you’re using.
(see the appendix section on handles).
>>> from Bio.Blast import NCBIXML
>>> blast_records = NCBIXML.parse(result_handle)
>>> blast_record = next(blast_records)
# ... do something with blast_record
>>> blast_record = next(blast_records)
# ... do something with blast_record
# ... and so on, once per query sequence

Note that, as with other Biopython parsers, you can step through the records only once; if you need to use them again, convert the iterator into a list with blast_records = list(blast_records). The BLAST record class itself is described in Section 7.4.
The PSIBlast record object is similar, but has support for the rounds that are used in the iteration steps of PSIBlast. The class diagram for PSIBlast is shown in Figure 7.3.
Depending on which BLAST versions or programs you’re using, our plain text BLAST parser may or may not work. Use it at your own risk!

>>> from Bio.Blast import NCBIStandalone
>>> blast_parser = NCBIStandalone.BlastParser()
>>> blast_iterator = NCBIStandalone.Iterator(result_handle, blast_parser)
>>> blast_record = next(blast_iterator)

If a record cannot be parsed, for example because the query was a low quality sequence, the parser raises an exception which you can catch:

>>> try:
...     blast_record = next(blast_iterator)
... except NCBIStandalone.LowQualityBlastError as info:
...     print("LowQualityBlastError detected in id %s" % info[1])
You can run the standalone version of PSI-BLAST (the legacy NCBI command line tool
blastpgp, or its replacement
psiblast) using the wrappers in the
Bio.Blast.Applications module.

Similarly, you can run the standalone version of RPS-BLAST (either the legacy NCBI
command line tool
rpsblast, or its replacement with the same name)
using the wrappers in the
Bio.Blast.Applications module.
At the time of writing, the NCBI do not appear to support running an RPS-BLAST search via the internet.
You can use the
Bio.Blast.NCBIXML parser to read the XML output from
current versions of RPS-BLAST.
WARNING: This chapter of the Tutorial describes an experimental
module in Biopython. It is being included in Biopython and documented
here in the tutorial in a pre-final state to allow a period of feedback
and refinement before we declare it stable. Until then the details will
probably change, and any scripts using the current
Bio.SearchIO
would need to be updated. Please keep this in mind! For stable code
working with NCBI BLAST, please continue to use Bio.Blast described
in the preceding Chapter 7.
Biological sequence identification is an integral part of bioinformatics. Several tools are available for this, each with their own algorithms and approaches, such as BLAST (arguably the most popular), FASTA, HMMER, and many more. In general, these tools usually use your sequence to search a database of potential matches. With the growing number of known sequences (hence the growing number of potential matches), interpreting the results becomes increasingly hard as there could be hundreds or even thousands of potential matches. Naturally, manual interpretation of these searches’ results is out of the question. Moreover, you often need to work with several sequence search tools, each with its own statistics, conventions, and output format. Imagine how daunting it would be when you need to work with multiple sequences using multiple search tools.
We know this too well ourselves, which is why we created the
Bio.SearchIO
submodule in Biopython.
Bio.SearchIO allows you to extract information
from your search results in a convenient way, while also dealing with the
different standards and conventions used by different search tools.
The name
SearchIO is a homage to BioPerl’s module of the same name.
In this chapter, we’ll go through the main features of
Bio.SearchIO to
show what it can do for you. We’ll use two popular search tools along the way:
BLAST and BLAT. They are used merely for illustrative purposes, and you should
be able to adapt the workflow to any other search tools supported by
Bio.SearchIO in a breeze. You’re very welcome to follow along with the
search output files we’ll be using. The BLAST output file can be downloaded
here,
and the BLAT output file
here.
Both output files were generated using this sequence:
>mystery_seq
CCCTCTACAGGGAAGCGCTTTCTGTTGTCTGAAAGAAAAGAAAGTGCTTCCTTTTAGAGGG
The BLAST result is an XML file generated using
blastn against the NCBI
refseq_rna database. For BLAT, the sequence database was the February 2009
hg19 human genome draft and the output format is PSL.
We’ll start from an introduction to the
Bio.SearchIO object model. The
model is the representation of your search results, thus it is core to
Bio.SearchIO itself. After that, we’ll check out the main functions in
Bio.SearchIO that you may often use.
Now that we’re all set, let’s go to the first step: introducing the core object model.
Despite the wildly differing output styles among many sequence search tools, it turns out that their underlying concept is similar:
Realizing this generality, we decided to use it as the base for creating the
Bio.SearchIO object model. The object model consists of a nested
hierarchy of Python objects, each one representing one concept outlined above.
These objects are:
QueryResult, to represent a single search query.
Hit, to represent a single database hit.
Hit objects are contained within QueryResult objects, and each QueryResult contains zero or more Hit objects.
HSP (short for high-scoring pair), to represent region(s) of significant alignment between query and hit sequences. HSP objects are contained within Hit objects, and each Hit has one or more HSP objects.
HSPFragment, to represent a single contiguous alignment between query and hit sequences. HSPFragment objects are contained within HSP objects. Most sequence search tools like BLAST and HMMER unify HSP and HSPFragment objects, as each HSP will only have a single HSPFragment. However, there are tools like BLAT and Exonerate that produce HSP objects containing multiple HSPFragment objects. Don’t worry if this seems a tad confusing now, we’ll elaborate more on these two objects later on.
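The nesting described above can be sketched with plain Python classes. These are simplified stand-ins for illustration only, not the real Bio.SearchIO classes (which carry many more attributes):

```python
# Minimal stand-ins for the four-level hierarchy, for illustration only.
class HSPFragment:
    def __init__(self, query_seq, hit_seq):
        self.query_seq = query_seq
        self.hit_seq = hit_seq

class HSP:
    def __init__(self, fragments):
        self.fragments = list(fragments)  # one or more contiguous alignments

class Hit:
    def __init__(self, id, hsps):
        self.id = id
        self.hsps = list(hsps)  # one or more HSPs per hit

class QueryResult:
    def __init__(self, id, hits):
        self.id = id
        self.hits = list(hits)  # zero or more hits per query

# A BLAST-like result: each HSP holds exactly one fragment...
blast_like = QueryResult("query1", [
    Hit("hitA", [HSP([HSPFragment("ACGT", "ACGT")])]),
])
# ...while a BLAT/Exonerate-like HSP may hold several fragments.
blat_like = QueryResult("query2", [
    Hit("chr19", [HSP([HSPFragment("AC", "AC"), HSPFragment("GT", "GT")])]),
])

print(len(blast_like.hits[0].hsps[0].fragments))  # 1
print(len(blat_like.hits[0].hsps[0].fragments))   # 2
```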
These four objects are the ones you will interact with when you use
Bio.SearchIO. They are created using one of the main
Bio.SearchIO
methods:
read,
parse,
index, or
index_db. The
details of these methods are provided in later sections. For this section, we’ll
only be using read and parse. These functions behave similarly to their
Bio.SeqIO and
Bio.AlignIO counterparts:
read is used for search output files with a single query and returns a QueryResult object
parse is used for search output files with multiple queries and returns a generator that yields QueryResult objects
With that settled, let’s start probing each
Bio.SearchIO object,
beginning with
QueryResult.
The QueryResult object represents a single search query and contains zero or more Hit objects. Let’s see what it looks like using the BLAST file we have:
>>> from Bio import SearchIO
>>> blast_qresult = SearchIO.read('my_blast.xml', 'blast-xml')
>>> print(blast_qresult)
Program: blastn (2.2.27+)
  Query: 42291 (61)
         mystery_seq
 Target: refseq_rna
   Hits: ----  -----  ----------------------------------------------------------
            #  # HSP  ID + description
         ----  -----  ----------------------------------------------------------
            0      1  gi|262205317|ref|NR_030195.1|  Homo sapiens microRNA 52...
            1      1  gi|301171311|ref|NR_035856.1|  Pan troglodytes microRNA...
            2      1  gi|270133242|ref|NR_032573.1|  Macaca mulatta microRNA ...
            3      2  gi|301171322|ref|NR_035857.1|  Pan troglodytes microRNA...
            4      1  gi|301171267|ref|NR_035851.1|  Pan troglodytes microRNA...
            5      2  gi|262205330|ref|NR_030198.1|  Homo sapiens microRNA 52...
            6      1  gi|262205302|ref|NR_030191.1|  Homo sapiens microRNA 51...
            7      1  gi|301171259|ref|NR_035850.1|  Pan troglodytes microRNA...
            8      1  gi|262205451|ref|NR_030222.1|  Homo sapiens microRNA 51...
            9      2  gi|301171447|ref|NR_035871.1|  Pan troglodytes microRNA...
           10      1  gi|301171276|ref|NR_035852.1|  Pan troglodytes microRNA...
           11      1  gi|262205290|ref|NR_030188.1|  Homo sapiens microRNA 51...
           12      1  gi|301171354|ref|NR_035860.1|  Pan troglodytes microRNA...
           13      1  gi|262205281|ref|NR_030186.1|  Homo sapiens microRNA 52...
           14      2  gi|262205298|ref|NR_030190.1|  Homo sapiens microRNA 52...
           15      1  gi|301171394|ref|NR_035865.1|  Pan troglodytes microRNA...
           16      1  gi|262205429|ref|NR_030218.1|  Homo sapiens microRNA 51...
           17      1  gi|262205423|ref|NR_030217.1|  Homo sapiens microRNA 52...
           18      1  gi|301171401|ref|NR_035866.1|  Pan troglodytes microRNA...
           19      1  gi|270133247|ref|NR_032574.1|  Macaca mulatta microRNA ...
           20      1  gi|262205309|ref|NR_030193.1|  Homo sapiens microRNA 52...
           21      2  gi|270132717|ref|NR_032716.1|  Macaca mulatta microRNA ...
           22      2  gi|301171437|ref|NR_035870.1|  Pan troglodytes microRNA...
           23      2  gi|270133306|ref|NR_032587.1|  Macaca mulatta microRNA ...
           24      2  gi|301171428|ref|NR_035869.1|  Pan troglodytes microRNA...
           25      1  gi|301171211|ref|NR_035845.1|  Pan troglodytes microRNA...
           26      2  gi|301171153|ref|NR_035838.1|  Pan troglodytes microRNA...
           27      2  gi|301171146|ref|NR_035837.1|  Pan troglodytes microRNA...
           28      2  gi|270133254|ref|NR_032575.1|  Macaca mulatta microRNA ...
           29      2  gi|262205445|ref|NR_030221.1|  Homo sapiens microRNA 51...
           ~~~
           97      1  gi|356517317|ref|XM_003527287.1|  PREDICTED: Glycine ma...
           98      1  gi|297814701|ref|XM_002875188.1|  Arabidopsis lyrata su...
           99      1  gi|397513516|ref|XM_003827011.1|  PREDICTED: Pan panisc...
We’ve just begun to scratch the surface of the object model, but you can see that
there’s already some useful information. By invoking
QueryResult object, you can see:
Bio.SearchIO truncates the hit table overview, by showing only hits numbered 0-29, and then 97-99.
Now let’s check our BLAT results using the same procedure as above:
>>> blat_qresult = SearchIO.read('my_blat.psl', 'blat-psl')
>>> print(blat_qresult)
Program: blat (<unknown version>)
  Query: mystery_seq (61)
         <unknown description>
 Target: <unknown target>
   Hits: ----  -----  ----------------------------------------------------------
            #  # HSP  ID + description
         ----  -----  ----------------------------------------------------------
            0     17  chr19  <unknown description>
You’ll immediately notice that there are some differences. Some of these are caused by the way PSL format stores its details, as you’ll see. The rest are caused by the genuine program and target database differences between our BLAST and BLAT searches:
Bio.SearchIO knows that the program is BLAT, but in the output file there is no information regarding the program version so it defaults to ‘<unknown version>’.
All the details you saw when invoking print can be accessed individually using Python’s object attribute access notation:
>>> print("%s %s" % (blast_qresult.program, blast_qresult.version))
blastn 2.2.27+
>>> print("%s %s" % (blat_qresult.program, blat_qresult.version))
blat <unknown version>
>>> blast_qresult.param_evalue_threshold  # blast-xml specific
10.0
For a complete list of accessible attributes, you can check each format-specific documentation. Here are the ones for BLAST and for BLAT.
Having looked at using
QueryResult objects, let’s drill
down deeper. What exactly is a
QueryResult? In terms of Python objects,
QueryResult is a hybrid between a list and a dictionary. In other words,
it is a container object with all the convenient features of lists and
dictionaries.
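One way to picture such a hybrid is a container whose __getitem__ dispatches on the key type: integers and slices behave like a list, strings behave like a dictionary keyed on item IDs. This is just an illustrative sketch, not Bio.SearchIO's actual implementation:

```python
class HybridContainer:
    """List/dict hybrid: index by position, slice, or item ID."""

    def __init__(self, items):
        self._items = list(items)  # (id, value) pairs, order preserved

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        return (value for _, value in self._items)

    def __contains__(self, key):
        return any(item_id == key for item_id, _ in self._items)

    def __getitem__(self, key):
        if isinstance(key, slice):
            # slicing returns a new container of the same type
            return HybridContainer(self._items[key])
        if isinstance(key, int):
            return self._items[key][1]      # list-like access by rank
        for item_id, value in self._items:  # dict-like access by ID
            if item_id == key:
                return value
        raise KeyError(key)

hits = HybridContainer([("hitA", 10), ("hitB", 20), ("hitC", 30)])
print(hits[0])         # 10 - by rank
print(hits["hitB"])    # 20 - by ID
print(len(hits[1:]))   # 2 - a slice gives a new container
print("hitC" in hits)  # True - membership test by ID
```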
Like Python lists and dictionaries,
QueryResult objects are iterable.
Each iteration returns a
Hit object:
>>> for hit in blast_qresult:
...     hit
...
Hit(id='gi|262205317|ref|NR_030195.1|', query_id='42291', 1 hsps)
Hit(id='gi|301171311|ref|NR_035856.1|', query_id='42291', 1 hsps)
Hit(id='gi|270133242|ref|NR_032573.1|', query_id='42291', 1 hsps)
Hit(id='gi|301171322|ref|NR_035857.1|', query_id='42291', 2 hsps)
Hit(id='gi|301171267|ref|NR_035851.1|', query_id='42291', 1 hsps)
...
To check how many items (hits) a
QueryResult has, you can simply invoke
Python’s
len method:
>>> len(blast_qresult)
100
>>> len(blat_qresult)
1
Like Python lists, you can retrieve items (hits) from a
QueryResult using
the slice notation:
>>> blast_qresult[0]  # retrieves the top hit
Hit(id='gi|262205317|ref|NR_030195.1|', query_id='42291', 1 hsps)
>>> blast_qresult[-1]  # retrieves the last hit
Hit(id='gi|397513516|ref|XM_003827011.1|', query_id='42291', 1 hsps)
To retrieve multiple hits, you can slice
QueryResult objects using the
slice notation as well. In this case, the slice will return a new
QueryResult object containing only the sliced hits:
>>> blast_slice = blast_qresult[:3]  # slices the first three hits
>>> print(blast_slice)
...
Like Python dictionaries, you can also retrieve hits using the hit’s ID. This is particularly useful if you know a given hit ID exists within a search query results:
>>> blast_qresult['gi|262205317|ref|NR_030195.1|']
Hit(id='gi|262205317|ref|NR_030195.1|', query_id='42291', 1 hsps)
You can also get a full list of
Hit objects using
hits and a full
list of
Hit IDs using
hit_keys:
>>> blast_qresult.hits
[...]  # list of all hits
>>> blast_qresult.hit_keys
[...]  # list of all hit IDs
What if you just want to check whether a particular hit is present in the query
results? You can do a simple Python membership test using the
in keyword:
>>> 'gi|262205317|ref|NR_030195.1|' in blast_qresult
True
>>> 'gi|262205317|ref|NR_030194.1|' in blast_qresult
False
Sometimes, knowing whether a hit is present is not enough; you also want to know
the rank of the hit. Here, the
index method comes to the rescue:
>>> blast_qresult.index('gi|301171437|ref|NR_035870.1|')
22
Remember that we’re using Python’s indexing style here, which is zero-based. This means our hit above is ranked at no. 23, not 22.
Also, note that the hit rank you see here is based on the native hit ordering present in the original search output file. Different search tools may order these hits based on different criteria.
If the native hit ordering doesn’t suit your taste, you can use the
sort
method of the
QueryResult object. It is very similar to Python’s
list.sort method, with the addition of an option to create a new sorted
QueryResult object or not.
Here is an example of using
QueryResult.sort to sort the hits based on
each hit’s full sequence length. For this particular sort, we’ll set the
in_place flag to
False so that sorting will return a new
QueryResult object and leave our initial object unsorted. We’ll also set
the
reverse flag to
True so that we sort in descending order.
>>> for hit in blast_qresult[:5]:  # id and sequence length of the first five hits
...     print("%s %i" % (hit.id, hit.seq_len))
...
gi|262205317|ref|NR_030195.1| 61
gi|301171311|ref|NR_035856.1| 60
gi|270133242|ref|NR_032573.1| 85
gi|301171322|ref|NR_035857.1| 86
gi|301171267|ref|NR_035851.1| 80
>>> sort_key = lambda hit: hit.seq_len
>>> sorted_qresult = blast_qresult.sort(key=sort_key, reverse=True, in_place=False)
>>> for hit in sorted_qresult[:5]:
...     print("%s %i" % (hit.id, hit.seq_len))
...
gi|397513516|ref|XM_003827011.1| 6002
gi|390332045|ref|XM_776818.2| 4082
gi|390332043|ref|XM_003723358.1| 4079
gi|356517317|ref|XM_003527287.1| 3251
gi|356543101|ref|XM_003539954.1| 2936
The advantage of having the
in_place flag here is that we’re preserving
the native ordering, so we may use it again later. You should note that this is
not the default behavior of
QueryResult.sort, however, which is why we
needed to set the
in_place flag to
False explicitly.
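The in_place flag follows a common Python API pattern, sketched below in a few lines (this is the general shape of such a method, not Bio.SearchIO's actual code):

```python
class SortableResults:
    def __init__(self, hits):
        self.hits = list(hits)

    def sort(self, key=None, reverse=False, in_place=True):
        if in_place:
            # mutate this object, like list.sort (returns None)
            self.hits.sort(key=key, reverse=reverse)
        else:
            # leave self untouched; hand back a sorted copy instead
            return SortableResults(sorted(self.hits, key=key, reverse=reverse))

results = SortableResults([3, 1, 2])
copy = results.sort(in_place=False)
print(results.hits)  # [3, 1, 2] - native ordering preserved
print(copy.hits)     # [1, 2, 3] - the sorted copy
```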
At this point, you know enough about
QueryResult objects to make it
work for you. But before we go on to the next object in the
Bio.SearchIO
model, let’s take a look at two more sets of methods that could make it even
easier to work with
QueryResult objects: the
filter and
map
methods.
If you’re familiar with Python’s list comprehensions, generator expressions
or the built in
filter and
map functions,
you’ll know how useful they are for working with list-like objects (if you’re
not, check them out!). You can use these built in methods to manipulate
QueryResult objects, but you’ll end up with regular Python lists and lose
the ability to do more interesting manipulations.
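The difference is easy to demonstrate with a toy container (an illustrative sketch, not the real QueryResult): the built in filter hands you back a plain list, while a method on the container can return a new instance of the same type, keeping its metadata and methods:

```python
class ResultContainer:
    def __init__(self, items, program="blastn"):
        self.items = list(items)
        self.program = program  # metadata we would like to keep

    def item_filter(self, func):
        # returns a *new ResultContainer*, so metadata and methods survive
        return ResultContainer(filter(func, self.items), program=self.program)

results = ResultContainer([1, 2, 3, 4, 5])

plain = list(filter(lambda x: x > 2, results.items))  # just a list now
kept = results.item_filter(lambda x: x > 2)           # still a container

print(type(plain).__name__)  # 'list' - metadata such as .program is gone
print(type(kept).__name__)   # 'ResultContainer'
print(kept.program)          # 'blastn' - still available after filtering
print(kept.items)            # [3, 4, 5]
```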
That’s why QueryResult objects provide their own flavor of
filter and
map methods. Analogous to
filter, there are
hit_filter and
hsp_filter methods. As their names imply, these methods filter their QueryResult object on either its Hit objects or its HSP objects. Similarly, analogous to
map,
QueryResult
objects also provide the
hit_map and
hsp_map methods. These
methods apply a given function to all hits or HSPs in a
QueryResult
object, respectively.
Let’s see these methods in action, beginning with
hit_filter. This method
accepts a callback function that checks whether a given
Hit object passes
the condition you set or not. In other words, the function must accept as its
argument a single
Hit object and returns
True or
False.
Here is an example of using
hit_filter to filter out
Hit objects
that only have one HSP:
>>> filter_func = lambda hit: len(hit.hsps) > 1  # the callback function
>>> len(blast_qresult)  # no. of hits before filtering
100
>>> filtered_qresult = blast_qresult.hit_filter(filter_func)
>>> len(filtered_qresult)  # no. of hits after filtering
37
>>> for hit in filtered_qresult[:5]:  # quick check for the hit lengths
...     print("%s %i" % (hit.id, len(hit.hsps)))
...
gi|301171322|ref|NR_035857.1| 2
gi|262205330|ref|NR_030198.1| 2
gi|301171447|ref|NR_035871.1| 2
gi|262205298|ref|NR_030190.1| 2
gi|270132717|ref|NR_032716.1| 2
hsp_filter works the same as
hit_filter, only instead of looking
at the
Hit objects, it performs filtering on the
HSP objects in
each hit.
As for the
map methods, they too accept a callback function as their
arguments. However, instead of returning
True or
False, the
callback function must return the modified
Hit or
HSP object
(depending on whether you’re using
hit_map or
hsp_map).
Let’s see an example where we’re using
hit_map to rename the hit IDs:
>>> def map_func(hit):
...     # renames 'gi|301171322|ref|NR_035857.1|' to 'NR_035857.1'
...     hit.id = hit.id.split('|')[3]
...     return hit
...
>>> mapped_qresult = blast_qresult.hit_map(map_func)
>>> for hit in mapped_qresult[:5]:
...     print(hit.id)
...
NR_030195.1
NR_035856.1
NR_032573.1
NR_035857.1
NR_035851.1
Again,
hsp_map works the same as
hit_map, but on
HSP
objects instead of
Hit objects.
Hit objects represent all query results from a single database entry.
They are the second-level container in the
Bio.SearchIO object hierarchy.
You’ve seen that they are contained by
QueryResult objects, but they
themselves contain
HSP objects.
Let’s see what they look like, beginning with our BLAST search:
>>> from Bio import SearchIO
>>> blast_qresult = SearchIO.read('my_blast.xml', 'blast-xml')
>>> blast_hit = blast_qresult[3]  # fourth hit from the query result
>>> print(blast_hit)
Query: 42291
       mystery_seq
  Hit: gi|301171322|ref|NR_035857.1| (86)
       Pan troglodytes microRNA mir-520c (MIR520C), microRNA
 HSPs: ----  --------  ---------  ------  ---------------  ---------------------
          #   E-value  Bit score    Span      Query range              Hit range
       ----  --------  ---------  ------  ---------------  ---------------------
          0   8.9e-20    100.47       60           [1:61]                [13:73]
          1   3.3e-06     55.39       60           [0:60]                [13:73]
You see that we’ve got the essentials covered here:
query_id and
query_description attributes.
id,
description, and
seq_len, respectively.
Now let’s contrast this with the BLAT search. Remember that in the BLAT search we had one hit with 17 HSPs.
>>> blat_qresult = SearchIO.read('my_blat.psl', 'blat-psl')
>>> blat_hit = blat_qresult[0]  # the only hit
>>> print(blat_hit)
Query: mystery_seq
       <unknown description>
  Hit: chr19 (59128983)
       <unknown description>
 HSPs: ----  --------  ---------  ------  ---------------  ---------------------
          #   E-value  Bit score    Span      Query range              Hit range
       ----  --------  ---------  ------  ---------------  ---------------------
          0         ?          ?       ?           [0:61]    [54204480:54204541]
          1         ?          ?       ?           [0:61]    [54233104:54264463]
          2         ?          ?       ?           [0:61]    [54254477:54260071]
          3         ?          ?       ?           [1:61]    [54210720:54210780]
          4         ?          ?       ?           [0:60]    [54198476:54198536]
          5         ?          ?       ?           [0:61]    [54265610:54265671]
          6         ?          ?       ?           [0:61]    [54238143:54240175]
          7         ?          ?       ?           [0:60]    [54189735:54189795]
          8         ?          ?       ?           [0:61]    [54185425:54185486]
          9         ?          ?       ?           [0:60]    [54197657:54197717]
         10         ?          ?       ?           [0:61]    [54255662:54255723]
         11         ?          ?       ?           [0:61]    [54201651:54201712]
         12         ?          ?       ?           [8:60]    [54206009:54206061]
         13         ?          ?       ?          [10:61]    [54178987:54179038]
         14         ?          ?       ?           [8:61]    [54212018:54212071]
         15         ?          ?       ?           [8:51]    [54234278:54234321]
         16         ?          ?       ?           [8:61]    [54238143:54238196]
Here, we’ve got a similar level of detail as with the BLAST hit we saw earlier. There are some differences worth explaining, though:
Bio.SearchIO does not attempt to guess what it is, so we get a ‘?’ similar to the e-value and bit score columns.
In terms of Python objects,
Hit behaves almost the same as Python lists,
but contains
HSP objects exclusively. If you’re familiar with lists, you
should encounter no difficulties working with the
Hit object.
Just like Python lists,
Hit objects are iterable, and each iteration
returns one
HSP object it contains:
>>> for hsp in blast_hit:
...     hsp
...
HSP(hit_id='gi|301171322|ref|NR_035857.1|', query_id='42291', 1 fragments)
HSP(hit_id='gi|301171322|ref|NR_035857.1|', query_id='42291', 1 fragments)
You can invoke
len on a
Hit to see how many
HSP objects it
has:
>>> len(blast_hit)
2
>>> len(blat_hit)
17
You can use the slice notation on
Hit objects, whether to retrieve single
HSP or multiple
HSP objects. Like
QueryResult, if you slice
for multiple
HSP, a new
Hit object will be returned containing
only the sliced
HSP objects:
>>> blat_hit[0]  # retrieve single items
HSP(hit_id='chr19', query_id='mystery_seq', 1 fragments)
>>> sliced_hit = blat_hit[4:9]  # retrieve multiple items
>>> len(sliced_hit)
5
>>> print(sliced_hit)
Query: mystery_seq
       <unknown description>
  Hit: chr19 (59128983)
       <unknown description>
 HSPs: ----  --------  ---------  ------  ---------------  ---------------------
          #   E-value  Bit score    Span      Query range              Hit range
       ----  --------  ---------  ------  ---------------  ---------------------
          0         ?          ?       ?           [0:60]    [54198476:54198536]
          1         ?          ?       ?           [0:61]    [54265610:54265671]
          2         ?          ?       ?           [0:61]    [54238143:54240175]
          3         ?          ?       ?           [0:60]    [54189735:54189795]
          4         ?          ?       ?           [0:61]    [54185425:54185486]
You can also sort the
HSP inside a
Hit, using the exact same
arguments like the sort method you saw in the
QueryResult object.
Finally, there are also the
filter and
map methods you can use
on
Hit objects. Unlike in the
QueryResult object,
Hit
objects only have one variant of
filter (
Hit.filter) and one
variant of
map (
Hit.map). Both of
Hit.filter and
Hit.map work on the
HSP objects a
Hit has.
HSP (high-scoring pair) represents region(s) in the hit sequence that
contains significant alignment(s) to the query sequence. It contains the actual
match between your query sequence and a database entry. As this match is
determined by the sequence search tool’s algorithms, the
HSP object
contains the bulk of the statistics computed by the search tool. This also makes
the distinction between
HSP objects from different search tools more
apparent compared to the differences you’ve seen in
QueryResult or
Hit objects.
Let’s see some examples from our BLAST and BLAT searches. We’ll look at the BLAST HSP first:
>>> from Bio import SearchIO >>> blast_qresult = SearchIO.read('my_blast.xml', 'blast-xml') >>> blast_hsp = blast_qresult[0][0] # first hit, first hsp >>> print(blast_hsp) Query: 42291 mystery_seq Hit: gi|262205317|ref|NR_030195.1| Homo sapiens microRNA 520b (MIR520... Query range: [0:61] (1) Hit range: [0:61] (1) Quick stats: evalue 4.9e-23; bitscore 111.29
Just like
QueryResult and
Hit, invoking
HSP shows its general details:
HSP.
These details can be accessed on their own using the dot notation, just like in
QueryResult and
Hit:
>>> blast_hsp.query_range (0, 61)
>>> blast_hsp.evalue 4.91307e-23
They’re not the only attributes available, though.
HSP objects come with
a default set of properties that makes it easy to probe their various
details. Here are some examples:
>>> blast_hsp.hit_start # start coordinate of the hit sequence 0 >>> blast_hsp.query_span # how many residues in the query sequence 61 >>> blast_hsp.aln_span # how long the alignment is 61
Check out the HSP documentation for a full list of these predefined properties.
Furthermore, each sequence search tool usually computes its own statistics /
details for its
HSP objects. For example, an XML BLAST search also
outputs the number of gaps and identical residues. These attributes can be
accessed like so:
>>> blast_hsp.gap_num # number of gaps 0 >>> blast_hsp.ident_num # number of identical residues 61
These details are format-specific; they may not be present in other formats.
To see which details are available for a given sequence search tool, you
should check the format’s documentation in
Bio.SearchIO. Alternatively,
you may also use
.__dict__.keys() for a quick list of what’s available:
>>> blast_hsp.__dict__.keys() ['bitscore', 'evalue', 'ident_num', 'gap_num', 'bitscore_raw', 'pos_num', '_items']
Finally, you may have noticed that the
query and
hit attributes
of our HSP are not just regular strings:
>>> blast_hsp.query SeqRecord(seq=Seq('CCCTCTACAGGGAAGCGCTTTCTGTTGTCTGAAAGAAAAGAAAGTGCTTCCTTT...GGG', DNAAlphabet()), id='42291', name='aligned query sequence', description='mystery_seq', dbxrefs=[]) >>> blast_hsp=[])
They are
SeqRecord objects you saw earlier in
Section 4! This means that you can do all sorts of
interesting things you can do with
SeqRecord objects on
HSP.query
and/or
HSP.hit.
It should not surprise you now that the
HSP object has an
alignment property which is a
MultipleSeqAlignment object:
>>> print(blast_hsp.aln) DNAAlphabet() alignment with 2 rows and 61 columns CCCTCTACAGGGAAGCGCTTTCTGTTGTCTGAAAGAAAAGAAAG...GGG 42291 CCCTCTACAGGGAAGCGCTTTCTGTTGTCTGAAAGAAAAGAAAG...GGG gi|262205317|ref|NR_030195.1|
Having probed the BLAST HSP, let’s now take a look at HSPs from our BLAT
results for a different kind of HSP. As usual, we’ll begin by invoking print on it:
>>> blat_qresult = SearchIO.read('my_blat.psl', 'blat-psl') >>> blat_hsp = blat_qresult[0][0] # first hit, first hsp >>> print(blat_hsp) Query: mystery_seq <unknown description> Hit: chr19 <unknown description> Query range: [0:61] (1) Hit range: [54204480:54204541] (1) Quick stats: evalue ?; bitscore ? Fragments: 1 (? columns)
Some of the outputs you may have already guessed. We have the query and hit IDs and descriptions, and the sequence coordinates. Values for evalue and bitscore are ‘?’, as BLAT HSPs do not have these attributes. The biggest difference here, though, is that you don’t see any sequence alignments displayed. If you look closer, the PSL format itself does not store any hit or query sequences, so
Bio.SearchIO won’t create any sequence or alignment objects. What happens
if you try to access
HSP.query,
HSP.hit, or
HSP.aln?
You’ll get the default value for these attributes, which is
None:
>>> blat_hsp.hit is None True >>> blat_hsp.query is None True >>> blat_hsp.aln is None True
This does not affect other attributes, though. For example, you can still access the length of the query or hit alignment. Despite not displaying any sequences, the PSL format still has this information, so Bio.SearchIO can extract it:
>>> blat_hsp.query_span # length of query match 61 >>> blat_hsp.hit_span # length of hit match 61
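These span values follow directly from the coordinate pairs: with Python-style zero-based, half-open coordinates, a span is simply the end minus the start. A quick stdlib-only sketch using the numbers from the BLAT HSP above (this is an illustration, not Bio.SearchIO’s internal code):

```python
# spans under zero-based, half-open coordinates are just end - start
def span(coord_pair):
    start, end = coord_pair
    return end - start

query_range = (0, 61)                # from the BLAT HSP shown above
hit_range = (54204480, 54204541)

print(span(query_range))  # 61
print(span(hit_range))    # 61
```

This is also why a 61-residue match over `[0:61]` and one over `[54204480:54204541]` report the same span.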
Other format-specific attributes are still present as well:
>>> blat_hsp.score # PSL score 61 >>> blat_hsp.mismatch_num # the mismatch column 0
So far so good? Things get more interesting when you look at another ‘variant’ of HSP present in our BLAT results. You might recall that in BLAT searches, sometimes we get our results separated into ‘blocks’. These blocks are essentially alignment fragments that may have some intervening sequence between them.
Let’s take a look at a BLAT HSP that contains multiple blocks to see how
Bio.SearchIO deals with this:
>>> blat_hsp2 = blat_qresult[0][1] # first hit, second hsp >>> print(blat_hsp2) Query: mystery_seq <unknown description> Hit: chr19 <unknown description> Query range: [0:61] (1) Hit range: [54233104:54264463] (1) Quick stats: evalue ?; bitscore ? Fragments: --- -------------- ---------------------- ---------------------- # Span Query range Hit range --- -------------- ---------------------- ---------------------- 0 ? [0:18] [54233104:54233122] 1 ? [18:61] [54264420:54264463]
What’s happening here? We still have some essential details covered: the IDs and descriptions, the coordinates, and the quick statistics are similar to what you’ve seen before. But the fragments detail is all different. Instead of showing ‘Fragments: 1’, we now have a table with two data rows.
This is how
Bio.SearchIO deals with HSPs having multiple fragments. As
mentioned before, an HSP alignment may be separated by intervening sequences
into fragments. The intervening sequences are not part of the query-hit match,
so they should not be considered part of query nor hit sequence. However, they
do affect how we deal with sequence coordinates, so we can’t ignore them.
Take a look at the hit coordinate of the HSP above. In the
Hit range: field,
we see that the coordinate is
[54233104:54264463]. But looking at the
table rows, we see that not the entire region spanned by this coordinate matches
our query. Specifically, the intervening region spans from
54233122 to
54264420.
Why, then, do the query coordinates seem to be contiguous, you ask? This is perfectly fine. In this case it means that the query match is contiguous (no intervening regions), while the hit match is not.
All these attributes are accessible from the HSP directly, by the way:
>>> blat_hsp2.hit_range # hit start and end coordinates of the entire HSP (54233104, 54264463) >>> blat_hsp2.hit_range_all # hit start and end coordinates of each fragment [(54233104, 54233122), (54264420, 54264463)] >>> blat_hsp2.hit_span # hit span of the entire HSP 31359 >>> blat_hsp2.hit_span_all # hit span of each fragment [18, 43] >>> blat_hsp2.hit_inter_ranges # start and end coordinates of intervening regions in the hit sequence [(54233122, 54264420)] >>> blat_hsp2.hit_inter_spans # span of intervening regions in the hit sequence [31298]
Most of these attributes are not readily available from the PSL file we have,
but
Bio.SearchIO calculates them for you on the fly when you parse the
PSL file. All it needs are the start and end coordinates of each fragment.
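The calculation can be sketched in plain Python. The helper below is illustrative, not Bio.SearchIO’s actual implementation, but it mirrors the idea: intervening regions are the gaps between consecutive fragment coordinate pairs:

```python
def inter_ranges(fragment_ranges):
    """Illustrative sketch: compute (start, end) pairs of the intervening
    regions between consecutive, sorted fragment coordinate pairs."""
    return [
        (prev_end, next_start)
        for (_, prev_end), (next_start, _) in zip(fragment_ranges, fragment_ranges[1:])
    ]

# fragment coordinates from blat_hsp2 above
ranges = [(54233104, 54233122), (54264420, 54264463)]
print(inter_ranges(ranges))                                   # [(54233122, 54264420)]
print([end - start for start, end in inter_ranges(ranges)])   # [31298]
```

The same subtraction used for spans above gives `hit_inter_spans` from `hit_inter_ranges`.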
What about the
query,
hit, and
aln attributes? If the
HSP has multiple fragments, you won’t be able to use these attributes as they
only fetch single
SeqRecord or
MultipleSeqAlignment objects.
However, you can use their
*_all counterparts:
query_all,
hit_all, and
aln_all. These properties return a list containing SeqRecord or MultipleSeqAlignment objects from each HSP fragment, respectively. There are other attributes that behave similarly, i.e. they only work
for HSPs with one fragment. Check out the
HSP documentation
for a full list.
Finally, to check whether you have multiple fragments or not, you can use the
is_fragmented property like so:
>>> blat_hsp2.is_fragmented # BLAT HSP with 2 fragments True >>> blat_hsp.is_fragmented # BLAT HSP from earlier, with one fragment False
Before we move on, you should also know that we can use the slice notation on
HSP objects, just like
QueryResult or
Hit objects. When
you use this notation, you’ll get an
HSPFragment object in return, the
last component of the object model.
HSPFragment represents a single, contiguous match between the query and
hit sequences. You could consider it the core of the object model and search
result, since it is the presence of these fragments that determines whether your search has results or not.
In most cases, you don’t have to deal with
HSPFragment objects directly
since not that many sequence search tools fragment their HSPs. When you do have
to deal with them, what you should remember is that
HSPFragment objects
were written to be as compact as possible. In most cases, they only contain
attributes directly related to sequences: strands, reading frames, alphabets,
coordinates, the sequences themselves, and their IDs and descriptions.
These attributes are readily shown when you invoke
HSPFragment. Here’s an example, taken from our BLAST search:
>>> from Bio import SearchIO >>> blast_qresult = SearchIO.read('my_blast.xml', 'blast-xml') >>> blast_frag = blast_qresult[0][0][0] # first hit, first hsp, first fragment >>> print(blast_frag) Query: 42291 mystery_seq Hit: gi|262205317|ref|NR_030195.1| Homo sapiens microRNA 520b (MIR520... Query range: [0:61] (1) Hit range: [0:61] (1)
At this level, the BLAT fragment looks quite similar to the BLAST fragment, save for the query and hit sequences which are not present:
>>> blat_qresult = SearchIO.read('my_blat.psl', 'blat-psl') >>> blat_frag = blat_qresult[0][0][0] # first hit, first hsp, first fragment >>> print(blat_frag) Query: mystery_seq <unknown description> Hit: chr19 <unknown description> Query range: [0:61] (1) Hit range: [54204480:54204541] (1) Fragments: 1 (? columns)
In all cases, these attributes are accessible using our favorite dot notation. Some examples:
>>> blast_frag.query_start # query start coordinate 0 >>> blast_frag.hit_strand # hit sequence strand 1 >>> blast_frag.hit # hit sequence, as a SeqRecord=[])
Before we move on to the main functions, there is something you ought to know
about the standards
Bio.SearchIO uses. If you’ve worked with multiple
sequence search tools, you might have had to deal with the many different ways
each program deals with things like sequence coordinates. It might not have been
a pleasant experience as these search tools usually have their own standards.
For example, one tool might use one-based coordinates, while another uses zero-based coordinates. Or, one program might reverse the start and end coordinates if the strand is minus, while others don’t. In short, this often creates an unnecessary mess that must be dealt with.
We realize this problem ourselves and we intend to address it in
Bio.SearchIO. After all, one of the goals of
Bio.SearchIO is to
create a common, easy to use interface to deal with various search output files.
This means creating standards that extend beyond the object model you just saw.
Now, you might complain, "Not another standard!". Well, eventually we have to choose one convention or the other, so this is necessary. Plus, we’re not creating something entirely new here; just adopting a standard we think is best for a Python programmer (it is Biopython, after all).
There are three implicit standards that you can expect when working with
Bio.SearchIO:
In Bio.SearchIO, all sequence coordinates follow Python’s coordinate style: zero-based and half open.
In Bio.SearchIO, start coordinates are always less than or equal to end coordinates. This isn’t always the case with all sequence search tools, as some of them have larger start coordinates when the sequence strand is minus.
Strand and reading frame values follow a single convention: strands are 1 (plus strand), -1 (minus strand), 0 (protein sequences), and None (no strand). For reading frames, the valid choices are integers from -3 to 3 and None.
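To make these conventions concrete, here is a small, hypothetical helper (not part of Bio.SearchIO) showing how a tool’s one-based, possibly strand-flipped coordinates map onto this standard:

```python
def normalize(start, end, one_based=True):
    """Hypothetical helper: convert tool-specific coordinates to the
    Bio.SearchIO convention (zero-based, half open, start <= end)."""
    strand = 1
    if start > end:          # some tools flip coordinates on the minus strand
        start, end = end, start
        strand = -1
    if one_based:
        start -= 1           # one-based fully-closed -> zero-based half-open
    return start, end, strand

print(normalize(1, 61))    # (0, 61, 1)
print(normalize(61, 1))    # (0, 61, -1)
```

This is essentially the bookkeeping Bio.SearchIO’s parsers must perform for each supported format, so you don’t have to.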
Note that these standards only exist in
Bio.SearchIO objects. If you
write
Bio.SearchIO objects into an output format,
Bio.SearchIO
will use the format’s standard for the output. It does not force its standard
over to your output file.
There are two functions you can use for reading search output files into
Bio.SearchIO objects:
read and
parse. They’re essentially
similar to
read and
parse functions in other submodules like
Bio.SeqIO or
Bio.AlignIO. In both cases, you need to supply the
search output file name and the file format name, both as Python strings. You
can check the documentation for a list of format names
Bio.SearchIO
recognizes.
Bio.SearchIO.read is used for reading search output files with only one
query and returns a
QueryResult object. You’ve seen
read used in
our previous examples. What you haven’t seen is that
read may also accept
additional keyword arguments, depending on the file format.
Here are some examples. In the first one, we use
read just like
previously to read a BLAST tabular output file. In the second one, we use a
keyword argument to modify the parser’s behavior so it parses the BLAST tabular variant with comments
in it:
>>> from Bio import SearchIO >>> qresult = SearchIO.read('tab_2226_tblastn_003.txt', 'blast-tab') >>> qresult QueryResult(id='gi|16080617|ref|NP_391444.1|', 3 hits) >>> qresult2 = SearchIO.read('tab_2226_tblastn_007.txt', 'blast-tab', comments=True) >>> qresult2 QueryResult(id='gi|16080617|ref|NP_391444.1|', 3 hits)
These keyword arguments differ among file formats. Check the format documentation to see whether it has keyword arguments that modify its parser’s behavior.
As for
Bio.SearchIO.parse, it is used for reading search output
files with any number of queries. The function returns a generator object that
yields a
QueryResult object in each iteration. Like
Bio.SearchIO.read, it also accepts format-specific keyword arguments:
>>> from Bio import SearchIO >>> qresults = SearchIO.parse('tab_2226_tblastn_001.txt', 'blast-tab') >>> for qresult in qresults: ... print(qresult.id) gi|16080617|ref|NP_391444.1| gi|11464971:4-101 >>> qresults2 = SearchIO.parse('tab_2226_tblastn_005.txt', 'blast-tab', comments=True) >>> for qresult in qresults2: ... print(qresult.id) random_s00 gi|16080617|ref|NP_391444.1| gi|11464971:4-101
Sometimes, you’re handed a search output file containing hundreds or thousands
of queries that you need to parse. You can of course use
Bio.SearchIO.parse for this file, but that would be grossly inefficient
if you need to access only a few of the queries. This is because
parse
will parse all queries it sees before it fetches your query of interest.
In this case, the ideal choice would be to index the file using
Bio.SearchIO.index or
Bio.SearchIO.index_db. If the names sound
familiar, it’s because you’ve seen them before in Section 5.4.2.
These functions also behave similarly to their
Bio.SeqIO counterparts,
with the addition of format-specific keyword arguments.
Here are some examples. You can use
index with just the filename and
format name:
>>> from Bio import SearchIO >>> idx = SearchIO.index('tab_2226_tblastn_001.txt', 'blast-tab') >>>()
Or also with the format-specific keyword argument:
>>> idx = SearchIO.index('tab_2226_tblastn_005.txt', 'blast-tab', comments=True) >>> sorted(idx.keys()) ['gi|11464971:4-101', 'gi|16080617|ref|NP_391444.1|', 'random_s00'] >>> idx['gi|16080617|ref|NP_391444.1|'] QueryResult(id='gi|16080617|ref|NP_391444.1|', 3 hits) >>> idx.close()
Or with the
key_function argument, as in
Bio.SeqIO:
>>> key_function = lambda id: id.upper() # capitalizes the keys >>> idx = SearchIO.index('tab_2226_tblastn_001.txt', 'blast-tab', key_function=key_function) >>>()
Bio.SearchIO.index_db works like
index, only it writes the
query offsets into an SQLite database file.
It is occasionally useful to be able to manipulate search results from an output
file and write it again to a new file.
Bio.SearchIO provides a
write function that lets you do exactly this. It takes as its arguments
an iterable returning
QueryResult objects, the output filename to write
to, the format name to write to, and optionally some format-specific keyword
arguments. It returns a four-item tuple, which denotes the number of
QueryResult,
Hit,
HSP, and
HSPFragment objects that
were written.
>>> from Bio import SearchIO >>> qresults = SearchIO.parse('mirna.xml', 'blast-xml') # read XML file >>> SearchIO.write(qresults, 'results.tab', 'blast-tab') # write to tabular file (3, 239, 277, 277)
You should note that different file formats require different attributes of the
QueryResult,
Hit,
HSP and
HSPFragment objects. If
these attributes are not present, writing won’t work. In other words, you can’t
always write to the output format that you want. For example, if you read a
BLAST XML file, you wouldn’t be able to write the results to a PSL file as PSL
files require attributes not calculated by BLAST (e.g. the number of repeat
matches). You can always set these attributes manually, if you really want to
write to PSL, though.
Like
read,
parse,
index, and
index_db,
write
also accepts format-specific keyword arguments. Check out the documentation for
a complete list of formats
Bio.SearchIO can write to and their arguments.
Finally,
Bio.SearchIO also provides a
convert function, which is
simply a shortcut for
Bio.SearchIO.parse and
Bio.SearchIO.write.
Using the convert function, our example above would be:
>>> from Bio import SearchIO >>> SearchIO.convert('mirna.xml', 'blast-xml', 'results.tab', 'blast-tab') (3, 239, 277, 277)
As
convert uses
write, it is only limited to format conversions
that have all the required attributes. Here, the BLAST XML file provides all the
default values a BLAST tabular file requires, so it works just fine. However,
other format conversions are less likely to work since you need to manually
assign the required attributes first. show a warning message with the name and URL of the missing DTD file. The parser will proceed to access the missing DTD file through the internet, allowing the parsing of the XML file to continue. However, the parser is much faster if the DTD file is available locally. For this purpose, please download the DTD file from the URL in the warning message and place it in the directory
...site-packages/Bio/Entrez/DTDs, containing the other DTD files. If you don’t have write access to this directory, you can also place the DTD file in
~/.biopython/Bio/Entrez/DTDs, where
~ represents your home directory. Since this directory is read before the directory
...site-packages/Bio/Entrez/DTDs, you can also put newer versions of DTD files there if the ones in
...site-packages/Bio/Entrez/DTDs become outdated.

The email parameter will be mandatory from June 1, 2010. In case of excessive usage, NCBI will attempt to contact a user at the e-mail address provided prior to blocking access to the E-utilities.
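The DTD lookup order described above can be sketched as follows. The paths here are illustrative placeholders, not the exact ones Bio.Entrez computes:

```python
import os

def find_dtd(filename):
    """Illustrative sketch of the DTD search order: the per-user
    directory is consulted before the installed package directory."""
    candidates = [
        os.path.join(os.path.expanduser("~"), ".biopython", "Bio", "Entrez", "DTDs"),
        os.path.join("site-packages", "Bio", "Entrez", "DTDs"),  # placeholder path
    ]
    for directory in candidates:
        path = os.path.join(directory, filename)
        if os.path.isfile(path):
            return path
    return None  # fall back to fetching the DTD over the internet
```

Because the per-user directory is checked first, a newer DTD placed there shadows an outdated copy shipped with the package.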
>>> from Bio import Entrez
>>> Entrez.tool = "MyLocalScript"

The tool parameter will default to Biopython.
You can also use ESearch to search GenBank. Here we’ll do a quick search for the matK gene in Cypripedioideae orchids (see Section 9.
For most of their databases, the NCBI supports several different file formats. Requesting a specific file format from Entrez using
Bio.Entrez.efetch() requires specifying the
rettype and/or
retmode optional arguments. The different combinations are described for each database type on the pages linked to on NCBI efetch webpage (e.g. literature, sequences and taxonomy).
One common usage is downloading sequences in the FASTA or GenBank/GenPept plain text formats (which can then be parsed with
Bio.SeqIO, see Sections 5.3.1 and 9.6). From the Cypripedioideae example above, we can download GenBank record 186972394 using
Bio.Entrez.efetch:
>>> from Bio import Entrez
>>> Entrez.email = "A.N.Other@example.com"  # Always tell NCBI who you are
>>> handle = Entrez.efetch(db="nucleotide", id="186972394", rettype="gb", retmode="text")

The arguments
rettype="gb" and
retmode="text" let us download this record in the GenBank format.
Note that until Easter 2009, the Entrez EFetch API let you use “genbank” as the return type, however the NCBI now insists on using the official return types of “gb” or “gbwithparts” (or “gp” for proteins) as described online. Also note that until Feb 2012, the Entrez EFetch API would default to returning plain text files, but now defaults to XML.
Alternatively, you could for example use
rettype="fasta" to get the Fasta-format; see the EFetch Sequences Help page for other options. Remember –", retmode="text") >>>): # Downloading... net_handle = Entrez.efetch(db="nucleotide",id="186972394",rettype="gb", retmode="text") out_handle = open(filename, "w") out_handle.write(net_handle.read()) out_handle.close() net_handle.close() print("Saved") print("Parsing...") record = SeqIO.read'
So, that dealt with sequences. For examples of parsing file formats specific to the other databases (e.g. the
MEDLINE format used in PubMed), see Section 9.12.
If you want to perform a search with
Bio.Entrez.esearch(), and then download the records with
Bio.Entrez.efetch(), you should use the WebEnv history feature – see Section 9.15.
ELink, available from Biopython as
Bio.Entrez.elink(), can be used to find related items in the NCBI Entrez databases. For example, you can use this to find nucleotide entries for an entry in the gene database,
and other cool stuff.
Let’s use ELink to find articles related to the Biopython application note published in Bioinformatics in 2009. The PubMed ID of this article is 19304878:
>>> from Bio import Entrez
>>> Entrez.email = "A.N.Other@example.com"
>>> pmid = "19304878"
>>> record = Entrez.read(Entrez.elink(dbfrom="pubmed", id=pmid))

One of the related articles, 14630660, is about the Biopython PDB parser.
We can use a loop to print out all PubMed IDs:
>>> for link in record[0]["LinkSetDb"][0]["Link"]: ... print(link["Id"]) 19304878 14630660 18689808 17121776 16377612 12368254 ......
Now that was nice, but personally I am often more interested to find out if a paper has been cited. Well, ELink can do that too – at least for journals in Pubmed Central (see Section 9.15.3).
For help on ELink, see the ELink help page. There is an entire sub-page just for the link names, describing how different databases can be cross referenced.
EGQuery provides counts for a search term in each of the Entrez databases (i.e. a global query). This is particularly useful to find out how many items your search terms would find in each database without actually performing lots of separate searches with ESearch (see the example in Section 9.14). A related utility, ESpell, retrieves spelling suggestions; the main use of this is for GUI tools to provide automatic suggestions for search terms.
The
Entrez.read function reads the entire XML file returned by Entrez into a single Python object, which is kept in memory. To parse Entrez XML files too large to fit in memory, you can use the function
Entrez.parse. This is a generator function that reads records in the XML file one by one. This function is only useful if the XML file reflects a Python list object (in other words, if
Entrez.read on a computer with infinite memory resources would return a Python list).
For example, you can download the entire Entrez Gene database for a given organism as a file from NCBI’s ftp site. These files can be very large. As an example, on September 4, 2009, the file
Homo_sapiens.ags.gz, containing the Entrez Gene database for human, had a size of 116576 kB. This file, which is in the
ASN format, can be converted into an XML file using NCBI’s
gene2xml program (see NCBI’s ftp site for more information):
gene2xml -b T -i Homo_sapiens.ags -o Homo_sapiens.xml
The resulting XML file has a size of 6.1 GB. Attempting
Entrez.read on this file will result in a
MemoryError on many computers.
The XML file
Homo_sapiens.xml consists of a list of Entrez gene records, each corresponding to one Entrez gene in human.
Entrez.parse retrieves these gene records one by one. You can then print out or store the relevant information in each record by iterating over the records. For example, this script iterates over the Entrez gene records and prints out the gene numbers and names for all current genes:
>>> from Bio import Entrez >>> handle = open("Homo_sapiens.xml") >>> records = Entrez.parse(handle) >>> for record in records: ... status = record['Entrezgene_track-info']['Gene-track']['Gene-track_status'] ... if status.attributes['value']=='discontinued': ... continue ... geneid = record['Entrezgene_track-info']['Gene-track']['Gene-track_geneid'] ... genename = record['Entrezgene_gene']['Gene-ref']['Gene-ref_locus'] ... print(geneid, genename)
This will print:
1 A1BG 2 A2M 3 A2MP 8 AA 9 NAT1 10 NAT2 11 AACP 12 SERPINA3 13 AADAC 14 AAMP 15 AANAT 16 AARS 17 AAVS1 ...
Three things can go wrong when parsing an XML file:
The first case occurs if, for example, you try to parse a Fasta file as if it were an XML file:
>>> from Bio import Entrez
>>> handle = open("NC_005816.fna")  # a Fasta file
>>> record = Entrez.read(handle)
Traceback (most recent call last):
  ...
Bio.Entrez.Parser.NotXMLError: Failed to parse the XML data (syntax error: line 1, column 0). Please make sure that the input data are in XML format.
Here, the parser didn’t find the
<?xml ... tag with which an XML file is supposed to start, and therefore decides (correctly) that the file is not an XML file.
When your file is in the XML format but is corrupted (for example, by ending prematurely), the parser will raise a CorruptedXMLError. Here is an example of an XML file that ends prematurely:
<>
which will generate the following traceback:
>>> record = Entrez.read(handle)
Traceback (most recent call last):
  ...
Bio.Entrez.Parser.CorruptedXMLError: Failed to parse the XML data (no element found: line 16, column 0). Please make sure that the input data are not corrupted.
>>>
Note that the error message tells you at what point in the XML file the error was detected.
The third type of error occurs if the XML file contains tags that do not have a description in the corresponding DTD file. This is an example of such an XML file:
<?xml version="1.0"?> <!DOCTYPE eInfoResult PUBLIC "-//NLM//DTD eInfoResult, 11 May 2002//EN" ""> <eInfoResult> <DbInfo> <DbName>pubmed</DbName> <MenuName>PubMed</MenuName> <Description>PubMed bibliographic record</Description> <Count>20161961</Count> <LastUpdate>2010/09/10 04:52</LastUpdate> <FieldList> <Field> ... </Field> </FieldList> <DocsumList> <Docsum> <DsName>PubDate</DsName> <DsType>4</DsType> <DsTypeName>string</DsTypeName> </Docsum> <Docsum> <DsName>EPubDate</DsName> ... </DbInfo> </eInfoResult>
In this file, for some reason the tag
<DocsumList> (and several others) are not listed in the DTD file
eInfo_020511.dtd, which is specified on the second line as the DTD for this XML file. By default, the parser will stop and raise a ValidationError if it cannot find some tag in the DTD:
>>> from Bio import Entrez
>>> handle = open("einfo3.xml")
>>> record = Entrez.read(handle)
Traceback (most recent call last):
  ...
Bio.Entrez.Parser.ValidationError: Failed to find tag 'DocsumList' in the DTD. To skip all tags that are not represented in the DTD, please call Bio.Entrez.read or Bio.Entrez.parse with validate=False.
Optionally, you can instruct the parser to skip such tags instead of raising a ValidationError. This is done by calling
Entrez.read or
Entrez.parse with the argument
validate equal to False:
>>> from Bio import Entrez >>> handle = open("einfo3.xml") >>> record = Entrez.read(handle, validate=False) >>>
Of course, the information contained in the XML tags that are not in the DTD are not present in the record returned by
Entrez.read.

Records downloaded from PubMed can be parsed with Bio.Medline. Use Medline.read for a file containing a single record, and Medline.parse to iterate over a file with several records:

>>> from Bio import Medline
>>> with open("pubmed_result1.txt") as handle:
...     record = Medline.read(handle)
...
>>> with open("pubmed_result2.txt") as handle:
...     for record in Medline.parse(handle):
...         print(record["TI"])

In both of these examples, for simplicity we have naively combined ESearch and EFetch. In this situation, the NCBI would expect you to use their history feature, as illustrated in Section 9.15. Since each Medline record behaves like a dictionary, missing fields are best handled with get:

...     print("title:", record.get("TI", "?"))
...     print("authors:", record.get("AU", "?"))
...     print("source:", record.get("SO", "?"))

A search may return many IDs in record["IdList"]:
>>> len(record["IdList"]) 814
Let’s look at the first five results:
>>> idlist = ",".join(record["IdList"][:5])
>>> print(idlist)
187237168,187372713,187372690,187372688,187372686
>>> handle = Entrez.efetch(db="nucleotide", id=idlist, retmode="xml")
>>> records = Entrez.read(handle)
For simplicity, this example does not take advantage of the WebEnv history feature – see Section 9.15. Note that with older versions of Biopython you had to supply a comma separated list of GI numbers to Entrez, but as of Biopython 1.59 you can pass a list and this is converted for you:
>>> gi_str = ",".join(gi_list)
>>> handle = Entrez.efetch(db="nuccore", id=gi_str, rettype="gb", retmode="text")

In particular, please note that for simplicity, this example does not use the WebEnv history feature. You should use this for any non-trivial search and download work, see Section 9.15. As in Section 9.12.1 above, you can then use
Bio.Medline to parse the saved records.
Back in Section 9.7 we mentioned ELink can be used to search for citations of a given paper. Unfortunately this only covers journals indexed for PubMed Central (doing it for all the journals in PubMed would mean a lot more work for the NIH). Let’s try this for the Biopython PDB parser paper, PubMed ID 14630660:
>>> from Bio import Entrez
>>> Entrez.email = "A.N.Other@example.com"
>>> pmid = "14630660"
>>> results = Entrez.read(Entrez.elink(dbfrom="pubmed", db="pmc",
...                                    LinkName="pubmed_pmc_refs", from_uid=pmid))
>>> pmc_ids = [link["Id"] for link in results[0]["LinkSetDb"][0]["Link"]]
>>> pmc_ids
['2744707', '2705363', '2682512', ..., '1190160']
Great - eleven articles. But why hasn’t the Biopython application note been found (PubMed ID 19304878)? Well, as you might have guessed from the variable names, these are not actually PubMed IDs, but PubMed Central IDs. Our application note is the third citing paper in that list, PMCID 2682512.
So, what if (like me) you’d rather get back a list of PubMed IDs? Well we can call ELink again to translate them. This becomes a two step process, so by now you should expect to use the history feature to accomplish it (Section 9.15).
But first, taking the more straightforward approach of making a second (separate) call to ELink:
>>> results2 = Entrez.read(Entrez.elink(dbfrom="pmc", db="pubmed", LinkName="pmc_pubmed", ... from_uid=",".join(pmc_ids))) >>> pubmed_ids = [link["Id"] for link in results2[0]["LinkSetDb"][0]["Link"]] >>> pubmed_ids ['19698094', '19450287', '19304878', ..., '15985178']
This time you can immediately spot the Biopython application note as the third hit (PubMed ID 19304878).
Now, let’s do that all again but with the history feature.

Prosite records can be pulled from the parser one at a time with next; each record carries its accession, name, and pdoc attributes:

>>> record = next(records)
>>> record.accession
'PS00001'
>>> record.name
'ASN_GLYCOSYLATION'
>>> record.pdoc
'PDOC00001'
>>> record = next(records)
>>> record.accession
'PS00004'
>>> record.name
'CAMP_PHOSPHO_SITE'
>>> record.pdoc
'PDOC00004'
>>> record = next(records)

Enzyme records parsed from a saved file behave like dictionaries:

>>> from Bio.ExPASy import Enzyme
>>> with open("lipoprotein.txt") as handle:
...     record = Enzyme.read(handle)
...
>>> record["PR"]
['PDOC00110']
>>> record["CC"] ['Hydrolyzes triacylglycerols in chylomicrons and very low-density lipoproteins (VLDL).', 'Also hydrolyzes diacylglycerol.'] >>>
Bio.PDB is a Biopython module that focuses on working with crystal structures of biological macromolecules.
First we create a PDBParser object:
>>> from Bio.PDB.PDBParser import PDBParser >>> p = PDBParser(PERMISSIVE=1)
The PERMISSIVE flag indicates that a number of common problems (see 11.7.1) associated with PDB files will be ignored (but note that some atoms and/or residues will be missing). If the flag is not present a PDBConstructionException will be generated if any problems are detected during the parse operation.
The Structure object is then produced by letting the PDBParser object parse a PDB file (the PDB file in this case is called ’pdb1fat.ent’, ’1fat’ is a user defined name for the structure):
>>> structure_id = "1fat" >>> filename = "pdb1fat.ent" >>> s = p.get_structure(structure_id, filename)
You can extract the header and trailer (simple lists of strings) of the PDB file from the PDBParser object with the get_header and get_trailer methods. The header is returned as a dictionary; among its keys are

structure_reference (which maps to a list of references),

journal_reference,

author, and

compound (which maps to a dictionary with various information about the crystallized compound).
The dictionary can also be created without creating a Structure object, i.e. directly from the PDB file:
>>> from Bio.PDB import parse_pdb_header >>> with open(filename, 'r') as handle: ... header_dict = parse_pdb_header(handle)
Similarly to the case of PDB files, first create an MMCIFParser object:
>>> from Bio.PDB.MMCIFParser import MMCIFParser >>> parser = MMCIFParser()
Then use this parser to create a structure object from the mmCIF file:
>>> structure = parser.get_structure('1fat', '1fat.cif')
To have some more low level access to an mmCIF file, you can use the
MMCIF2Dict class:
>>> from Bio.PDB.MMCIF2Dict import MMCIF2Dict >>> mmcif_dict = MMCIF2Dict('1fat.cif')
That’s not yet supported, but we are definitely planning to support that in the future (it’s not a lot of work). Contact the Biopython developers (biopython-dev@biopython.org) if you need this.

Use the PDBIO class to write out (parts of) a structure. To write out only the glycine residues, for example, subclass Select and pass an instance to the save method:

>>> from Bio.PDB import PDBIO, Select >>> class GlySelect(Select): ... def accept_residue(self, residue): ... if residue.get_resname() == 'GLY': ... return True ... else: ... return False ... >>> io = PDBIO() >>> io.set_structure(s) >>> io.save('gly_only.pdb', GlySelect())
If this is all too complicated for you, the Dice module contains a handy extract function that writes out all residues in a chain between a start and end residue.
The overall layout of a Structure object follows the so-called SMCRA (Structure/Model/Chain/Residue/Atom) architecture (see Fig. 11.1). This is the way many structural biologists/bioinformaticians think about structure, and it provides a simple but efficient way to deal with structure. As an example of the identifiers used, a residue id of (' ', 10, 'A') indicates that the residue has a blank hetero field, that its sequence identifier is 10 and that its insertion code is "A".
To get the entity’s id, use the
get_id method:
>>> entity.get_id()
You can check if the entity has a child with a given id by using the
has_id method:
>>> entity.has_id(entity_id)
The length of an entity is equal to its number of children:

>>> nr_children = len(entity)
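To make the parent/child behaviour concrete, here is a toy sketch of the idea (hypothetical code, not Bio.PDB’s actual Entity implementation):

```python
class Entity:
    """Toy sketch of Bio.PDB's parent/child pattern (not the real class)."""

    def __init__(self, entity_id):
        self.id = entity_id
        self.child_dict = {}          # child id -> child entity

    def add(self, child):
        self.child_dict[child.id] = child

    def __getitem__(self, child_id):  # entity[child_id]
        return self.child_dict[child_id]

    def __len__(self):                # len(entity) == number of children
        return len(self.child_dict)

    def get_id(self):
        return self.id

    def has_id(self, child_id):
        return child_id in self.child_dict


# A "chain" with two "residues", identified by residue id tuples:
chain = Entity("A")
chain.add(Entity((" ", 10, " ")))
chain.add(Entity((" ", 11, " ")))
print(len(chain))                    # -> 2
print(chain.has_id((" ", 10, " ")))  # -> True
```

The real classes add parent links, ordered child lists and more, but the dictionary-based child lookup is the core of the SMCRA design.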
The id of the Model object is an integer, which is derived from the position
of the model in the parsed file (they are automatically numbered starting from
0).
Crystal structures generally have only one model (with id 0), while NMR files usually have several models. Whereas many PDB parsers assume that there is only one model, the
Structure class in
Bio.PDB is designed such that it can easily handle PDB files with more than one model.
As an example, to get the first model from a Structure object, use
>>> first_model = structure[0]
The Model object stores a list of Chain children.
The id of a Chain object is derived from the chain identifier in the PDB/mmCIF file, and is a single character (typically a letter). Each Chain in a Model object has a unique id. As an example, to get the Chain object with identifier “A” from a Model object, use
>>> chain_A = model["A"]
The Chain object stores a list of Residue children.
A residue id is a tuple with three elements:

the hetero-field (hetfield): this is 'W' in the case of a water molecule; 'H_' followed by the residue name for other hetero residues (e.g. 'H_GLC' in the case of a glucose molecule); and blank (' ') for standard amino and nucleic acids;

the sequence identifier (resseq), an integer describing the position of the residue in the chain;

the insertion code (icode), a string.
Unsurprisingly, a Residue object stores a set of Atom children. It also contains a string that specifies the residue name (e.g. “ASN”) and the segment identifier of the residue (well known to X-PLOR users, but not used in the construction of the SMCRA data structure).
Let’s look at some examples. Asn 10 with a blank insertion code would have residue id (' ', 10, ' '). Water 10 would have residue id ('W', 10, ' ').
A Residue object has a number of additional methods:
>>> residue.get_resname() # returns the residue name, e.g. "ASN" >>> residue.is_disordered() # returns 1 if the residue has disordered atoms >>> residue.get_segid() # returns the SEGID, e.g. "CHN1" >>> residue.has_id(name) # test if a residue has a certain atom
You can use is_aa(residue) to test if a Residue object is an amino acid.

In a PDB file, an atom name consists of four characters, typically with leading and trailing spaces; for ease of use these spaces are normally stripped (e.g. amino acid Cα atoms are called '.CA.' in a PDB file, where the dots represent spaces). In cases where stripping the spaces would create problems (i.e. two atoms called 'CA' in the same residue) the spaces are kept.
To manipulate the atomic coordinates, use the transform method of the Atom object. Use the set_coord method to specify the atomic coordinates directly.
An Atom object has the following additional methods:

>>> a.get_name() # atom name (spaces stripped, e.g. "CA") >>> a.get_id() # id (equals atom name) >>> a.get_coord() # atomic coordinates >>> a.get_vector() # atomic coordinates as Vector object >>> a.get_bfactor() # isotropic B factor >>> a.get_occupancy() # occupancy >>> a.get_altloc() # alternative location specifier >>> a.get_sigatm() # standard deviation of atomic parameters >>> a.get_siguij() # standard deviation of anisotropic B factor >>> a.get_anisou() # anisotropic B factor >>> a.get_fullname() # atom name (with spaces, e.g. ".CA.")
To represent the atom coordinates, siguij, anisotropic B factor and sigatm Numpy arrays are used.
The get_vector method returns a Vector object representation of the coordinates of the Atom object, allowing you to do vector operations on atomic coordinates. Vector implements the full set of 3D vector operations, matrix multiplication (left and right) and some advanced rotation-related operations as well.
As an example of the capabilities of Bio.PDB’s Vector module, suppose that you would like to find the position of a Gly residue’s Cβ atom, if it had one. Rotating the N atom of the Gly residue along the Cα-C bond over -120 degrees roughly puts it in the position of a virtual Cβ atom. The Vector module also has methods to rotate (rotmat) or reflect (refmat) one vector on top of another.
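The virtual Cβ construction can be sketched without Bio.PDB using Rodrigues’ rotation formula; Bio.PDB’s rotaxis does the equivalent work internally. The coordinates below are illustrative only:

```python
import math

def rotate(v, axis, theta):
    """Rotate 3-vector v around `axis` by theta radians (Rodrigues' formula)."""
    n = math.sqrt(sum(a * a for a in axis))
    ux, uy, uz = (a / n for a in axis)
    x, y, z = v
    c, s = math.cos(theta), math.sin(theta)
    d = ux * x + uy * y + uz * z            # axis . v
    return (x * c + (uy * z - uz * y) * s + ux * d * (1 - c),
            y * c + (uz * x - ux * z) * s + uy * d * (1 - c),
            z * c + (ux * y - uy * x) * s + uz * d * (1 - c))

def virtual_cb(n_at, ca, c):
    """Rotate N around the CA-C axis by -120 degrees to approximate C-beta."""
    axis = tuple(a - b for a, b in zip(c, ca))     # CA -> C bond vector
    v = tuple(a - b for a, b in zip(n_at, ca))     # CA -> N, centred at CA
    r = rotate(v, axis, math.radians(-120.0))
    return tuple(a + b for a, b in zip(r, ca))     # translate back onto CA

# Toy coordinates: CA at the origin, C along x, N in the xy plane.
ca = (0.0, 0.0, 0.0)
c = (1.5, 0.0, 0.0)
n_at = (-0.5, 1.4, 0.0)
cb = virtual_cb(n_at, ca, c)
```

Since rotation preserves lengths, the virtual Cβ sits at the same distance from Cα as the N atom it was built from.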
These are some examples:
>>> model = structure[0] >>> chain = model['A'] >>> residue = chain[100] >>> atom = residue['CA']
Note that you can use a shortcut:
>>> atom = structure[0]['A'][100]['CA']
Bio.PDB can handle both disordered atoms and point mutations (i.e. a Gly and an Ala residue in the same position).
Disorder should be dealt with from two points of view: the atom and the residue points of view. In general, we have tried to encapsulate all the complexity that arises from disorder. If you just want to loop over all Cα atoms, for example, you do not care that some residues have a disordered side chain. All Atom objects that represent the same physical atom are stored in a DisorderedAtom object (see Fig. 11.1). Each Atom object in a DisorderedAtom object can be uniquely indexed using its altloc specifier. Point mutations are handled analogously by the DisorderedResidue object (see Fig. 11.1), which stores the Residue objects that share the same position. A hetero residue named "GLC", for example, would have residue id ("H_GLC", 1, " ").
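The DisorderedAtom idea can be sketched in a few lines (a toy model, not the real Bio.PDB class): one wrapper per physical atom, indexable by altloc, delegating attribute access to the currently selected child.

```python
from types import SimpleNamespace

class DisorderedAtom:
    """Toy model of the disordered-atom wrapper (not the real Bio.PDB class):
    one child atom per altloc, attribute access forwarded to the selected one."""

    def __init__(self):
        self.children = {}        # altloc specifier -> atom-like object
        self.selected = None

    def disordered_add(self, altloc, atom):
        self.children[altloc] = atom
        if self.selected is None:
            self.selected = altloc

    def disordered_select(self, altloc):
        self.selected = altloc

    def __getitem__(self, altloc):    # unique indexing by altloc specifier
        return self.children[altloc]

    def __getattr__(self, name):      # delegate everything else to the selected child
        return getattr(self.children[self.selected], name)


atom = DisorderedAtom()
atom.disordered_add("A", SimpleNamespace(occupancy=0.6))
atom.disordered_add("B", SimpleNamespace(occupancy=0.4))
print(atom.occupancy)        # -> 0.6 (the selected child's value)
atom.disordered_select("B")
print(atom.occupancy)        # -> 0.4
```

This is why code that is unaware of disorder still works: the wrapper behaves like a single atom, normally the one with the highest occupancy.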
>>> from Bio.PDB.PDBParser import PDBParser >>> parser = PDBParser() >>> structure = parser.get_structure("test", "1fat.pdb") >>> model = structure[0] >>> chain = model["A"] >>> residue = chain[1] >>> atom = residue["CA"]
>>> p = PDBParser() >>> structure = p.get_structure('X', 'pdb1fat.ent') >>> for model in structure: ... for chain in model: ... for residue in chain: ... for atom in residue: ... print(atom) ...
There is a shortcut if you want to iterate over all atoms in a structure:
>>> atoms = structure.get_atoms() >>> for atom in atoms: ... print(atom) ...
Similarly, to iterate over all atoms in a chain, use
>>> atoms = chain.get_atoms() >>> for atom in atoms: ... print(atom) ...
or if you want to iterate over all residues in a model:
>>> residues = model.get_residues() >>> for residue in residues: ... print(residue) ...
You can also use the
Selection.unfold_entities function to get all residues from a structure:
>>> res_list = Selection.unfold_entities(structure, 'R')
or to get all atoms from a chain:

>>> atom_list = Selection.unfold_entities(chain, 'A')
>>> residue_id = ("H_GLC", 10, " ") >>> residue = chain[residue_id]
>>> for residue in chain.get_list(): ... residue_id = residue.get_id() ... hetfield = residue_id[0] ... if hetfield[0]=="H": ... print(residue_id) ...
>>> for model in structure.get_list(): ... for chain in model.get_list(): ... for residue in chain.get_list(): ... if residue.has_id("CA"): ... ca = residue["CA"] ... if ca.get_bfactor() > 50.0: ... print(ca.get_coord()) ...
>>> for model in structure.get_list(): ... for chain in model.get_list(): ... for residue in chain.get_list(): ... if residue.is_disordered(): ... resseq = residue.get_id()[1] ... resname = residue.get_resname() ... model_id = model.get_id() ... chain_id = chain.get_id() ... print("%s %s %s %s" % (model_id, chain_id, resname, resseq)) ...
To extract polypeptides from a structure, construct a list of Polypeptide objects from a Structure object using PolypeptideBuilder as follows:
>>> model_nr = 1 >>> polypeptide_list = build_peptides(structure, model_nr) >>> for polypeptide in polypeptide_list: ... print(polypeptide) ...
A Polypeptide object is simply a UserList of Residue objects, and is always created from a single Model (in this case model 1). You can use the resulting Polypeptide object to get the sequence as a Seq object or to get a list of Cα atoms as well. Polypeptides can be built using a C-N or a Cα-Cα.
The first thing to do is to extract all polypeptides from the structure (as above). The sequence of each polypeptide can then easily be obtained from the Polypeptide objects:

>>> seq = polypeptide.get_sequence()
The minus operator for atoms has been overloaded to return the distance between two atoms.
# Get some atoms >>> ca1 = residue1['CA'] >>> ca2 = residue2['CA'] # Simply subtract the atoms to get their distance >>> distance = ca1-ca2
Use the vector representation of the atomic coordinates, and the calc_angle function from the Vector module:
>>> vector1 = atom1.get_vector() >>> vector2 = atom2.get_vector() >>> vector3 = atom3.get_vector() >>> angle = calc_angle(vector1, vector2, vector3)
Use the vector representation of the atomic coordinates, and the calc_dihedral function from the Vector module:
>>> vector1 = atom1.get_vector() >>> vector2 = atom2.get_vector() >>> vector3 = atom3.get_vector() >>> vector4 = atom4.get_vector() >>> angle = calc_dihedral(vector1, vector2, vector3, vector4)
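For reference, the torsion angle computed here can also be written out by hand. A plain-Python sketch (no Biopython needed; the sign convention may differ from calc_dihedral’s):

```python
import math

def dihedral(p1, p2, p3, p4):
    """Torsion angle in radians defined by four 3D points."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    b1, b2, b3 = sub(p2, p1), sub(p3, p2), sub(p4, p3)
    n1, n2 = cross(b1, b2), cross(b2, b3)          # normals of the two planes
    b2u = tuple(x / math.sqrt(dot(b2, b2)) for x in b2)
    m1 = cross(n1, b2u)
    return math.atan2(dot(m1, n2), dot(n1, n2))
```

With four coplanar points the result is 0 (cis arrangement) or ±180 degrees (trans).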
Use NeighborSearch to perform neighbor lookup. The neighbor lookup is done using a KD tree module written in C (see Bio.KDTree), making it very fast. It also includes a fast method to find all point pairs within a certain distance of each other.
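A brute-force stand-in shows what the neighbor search computes (the real KD tree implementation is much faster on whole structures):

```python
import math

def search_all(coords, radius):
    """Brute-force stand-in for a NeighborSearch all-pairs query: all index
    pairs (i, j), i < j, whose points lie within `radius` of each other."""
    pairs = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= radius:
                pairs.append((i, j))
    return pairs

print(search_all([(0, 0, 0), (1, 0, 0), (5, 0, 0)], 2.0))  # -> [(0, 1)]
```

The KD tree turns the quadratic double loop into a spatial lookup, which is what makes contact analysis of full structures practical.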
Use a Superimposer object to superimpose two coordinate sets. This object calculates the rotation and translation matrix that rotates two lists of atoms on top of each other in such a way that their RMSD is minimized. Of course, the two lists need to contain the same number of atoms. The algorithm is based on singular value decomposition ([17, Golub & Van Loan]).
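The quantity being minimized is the plain coordinate RMSD; as a reference, a minimal implementation:

```python
import math

def rmsd(coords1, coords2):
    """Root-mean-square deviation between two equal-length lists of 3D points.
    (The superposition step minimizes this over all rotations/translations.)"""
    assert len(coords1) == len(coords2) and coords1
    total = sum((a - b) ** 2
                for p, q in zip(coords1, coords2)
                for a, b in zip(p, q))
    return math.sqrt(total / len(coords1))

print(rmsd([(0, 0, 0), (1, 1, 1)], [(0, 0, 0), (1, 1, 1)]))  # -> 0.0
```

Note that this is only the scoring function; finding the optimal rotation additionally requires the SVD step mentioned above.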
To superimpose two structures based on their active sites, use the active site atoms to calculate the rotation/translation matrices (as above), and apply these to the whole molecule.
First, create an alignment file in FASTA format, then use the StructureAlignment class. This class can also be used for alignments with more than two structures.
Half Sphere Exposure (HSE) is a new, 2D measure of solvent exposure [20]. Basically, it counts the number of Cα atoms around a residue in the direction of its side chain, and in the opposite direction (within a radius of 13 Å). Despite its simplicity, it outperforms many other measures of solvent exposure.
For this functionality, you need to install DSSP (and obtain a license for it, which is free for academic use). Then use the DSSP class, which maps Residue objects to their secondary structure (and accessible surface area). The DSSP codes are listed in Table 11.1. Note that DSSP (the program, and thus by consequence the class) cannot handle multiple models!
The DSSP class can also be used to calculate the accessible surface area of a residue (but see also section 11.6.9). Residue depth can be calculated with the ResidueDepth class (which requires Michel Sanner’s MSMS program); this class behaves as a dictionary mapping Residue objects to (residue depth, Cα depth) tuples. The Cα depth is the distance of a residue’s Cα atom to the solvent accessible surface.
It is well known that many PDB files contain semantic errors (not the structures themselves, but their representation in PDB files). Bio.PDB tries to handle this in two ways. The PDBParser object can behave in two ways: a restrictive way and a permissive way, which is the default.
These errors indicate real problems in the PDB file (for details see [18, Hamelryck and Manderick, 2003]). In the restrictive state, PDB files with errors cause an exception to occur. This is useful to find errors in PDB files.
Some errors, however, are automatically corrected.
Structures can be downloaded from the PDB (Protein Data Bank) by using the retrieve_pdb_file method on a PDBList object. The argument for this method is the PDB identifier of the structure; an optional argument can be used to specify the local directory in which the downloaded file is stored. See the API documentation for more details. Thanks again to Kristian Rother for donating this module.

Bio.PDB has been used to perform a large scale search for active site similarities between protein structures in the PDB [19, Hamelryck, 2003], and to develop a new algorithm that identifies linear secondary structure elements [26, Majumdar et al., 2005].
Judging from requests for features and information, Bio.PDB is also used by several LPCs (Large Pharmaceutical Companies :-).
Bio.PopGen is a Biopython module supporting population genetics, still in development; APIs that are made available on our official public releases should be much more stable.

The module supports coalescent simulation via Fastsimcoal2 (). Fastsimcoal2 allows for, among others, population structure, multiple demographic events, simulation of multiple types of loci (SNPs, sequences, STRs/microsatellites and RFLPs) with recombination, diploidy, multiple chromosomes, and ascertainment bias. Notably, Fastsimcoal2 doesn’t support any selection model. We recommend reading Fastsimcoal2’s documentation, available at the link above.
The input for Fastsimcoal2 is a parameter file specifying the demography and the loci to simulate; Biopython can generate this file and then call Fastsimcoal2 to simulate the demography (below we will see how Biopython can take care of calling Fastsimcoal2). We recommend reading the Fastsimcoal2 documentation to understand the full potential available in modeling chromosome structures. In this subsection we only discuss how to implement chromosome structures using the Biopython interface, not the underlying Fastsimcoal2 capabilities. Microsatellite loci, for example, can be specified with a mutation parameter of 0.0 and a range constraint of 0.0 (for information about these parameters please consult the Fastsimcoal2 documentation; you can use them to simulate various mutation models, including the typical, for microsatellites, stepwise mutation model among others).
We now discuss how to run Fastsimcoal2 from inside Biopython. It is required that the binary for Fastsimcoal2 is called fastsimcoal21 (or fastsimcoal21.exe on Windows based platforms); please note that the typical name when downloading the program is in the format fastsimcoal2_x_y. As such, when installing Fastsimcoal2 you will need to rename the downloaded executable so that Biopython can find it.
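Since the controller simply looks the binary up by name, the rename step can be scripted. A sketch in Python (the `fastsimcoal2_*` glob pattern and the alias names follow the text above, and are assumptions rather than a fixed convention):

```python
import glob
import os
import shutil
import sys

def install_fastsimcoal_alias(directory="."):
    """Copy the downloaded fastsimcoal2_x_y binary to the name Biopython's
    controller looks for. Returns the alias path, or None if nothing matched."""
    alias = "fastsimcoal21.exe" if sys.platform.startswith("win") else "fastsimcoal21"
    alias_path = os.path.join(directory, alias)
    candidates = sorted(glob.glob(os.path.join(directory, "fastsimcoal2_*")))
    if not candidates:
        return None
    if not os.path.exists(alias_path):
        shutil.copy(candidates[0], alias_path)   # keep the original file too
        os.chmod(alias_path, 0o755)              # make sure it is executable
    return alias_path
```

Run this once in the directory where you unpacked the download, or simply rename the file by hand.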
It is possible to run Fastsimcoal2 on an existing parameter file from inside Biopython using the FastSimCoalController class:

>>> from Bio.PopGen.SimCoal.Controller import FastSimCoalController >>> ctrl = FastSimCoalController() >>> ctrl.run_fastsimcoal('simple.par', 50)

The results will be stored in a directory named after the parameter file (here, 'simple'), and can then be read back for analysis.
The Bio.Phylo module was introduced in Biopython 1.54. Following the lead of SeqIO and AlignIO, it aims to provide a common way to work with phylogenetic trees independently of the source data format, as well as a consistent API for I/O operations.
Bio.Phylo is described in an open-access journal article [9, Talevich et al., 2012], which you might also find helpful.
To get acquainted with the module, let’s start with a tree that we’ve already constructed, and inspect it a few different ways. Then we’ll colorize the branches, to use a special phyloXML feature, and finally save it.
Create a simple Newick file named simple.dnd using your favorite text editor, or use simple.dnd provided with the Biopython source code:
(((A,B),(C,D)),(E,F,G));
This tree has no branch lengths, only a topology and labelled terminals. (If you have a real tree file available, you can follow this demo using that instead.)
Launch the Python interpreter of your choice:
% ipython -pylab
For interactive work, launching the IPython interpreter with the
-pylab flag enables
matplotlib integration, so graphics will pop up automatically. We’ll use that during
this demo.
Now, within Python, read the tree file, giving the file name and the name of the format.
>>> from Bio import Phylo >>> tree = Phylo.read("simple.dnd", "newick")
Printing the tree object as a string gives us a look at the entire object hierarchy.
>>> print(tree) Tree(rooted=False, weight=1.0) Clade(branch_length=1.0) Clade(branch_length=1.0) Clade(branch_length=1.0) Clade(branch_length=1.0, name='A') Clade(branch_length=1.0, name='B') Clade(branch_length=1.0) Clade(branch_length=1.0, name='C') Clade(branch_length=1.0, name='D') Clade(branch_length=1.0) Clade(branch_length=1.0, name='E') Clade(branch_length=1.0, name='F') Clade(branch_length=1.0, name='G')
The Tree object contains global information about the tree, such as whether it’s rooted or unrooted. It has one root clade, and under that, it’s nested lists of clades all the way down to the tips.
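To see how this nested-clade structure arises from the Newick string, here is a toy topology-only parser (it ignores branch lengths, quoting and comments; Bio.Phylo’s parser handles the full format):

```python
def parse_newick(s):
    """Tiny topology-only Newick parser: returns nested lists, with leaf
    names as plain strings."""
    s = s.strip().rstrip(";")
    pos = 0

    def clade():
        nonlocal pos
        if s[pos] == "(":
            pos += 1                      # consume '('
            children = [clade()]
            while s[pos] == ",":
                pos += 1
                children.append(clade())
            pos += 1                      # consume ')'
            return children
        start = pos
        while pos < len(s) and s[pos] not in ",()":
            pos += 1
        return s[start:pos]               # a leaf name

    return clade()

print(parse_newick("(((A,B),(C,D)),(E,F,G));"))
# -> [[['A', 'B'], ['C', 'D']], ['E', 'F', 'G']]
```

Each pair of parentheses becomes one internal clade, which is exactly the Clade nesting shown in the printed tree above.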
The function
draw_ascii creates a simple ASCII-art (plain text) dendrogram. This is a
convenient visualization for interactive exploration, in case better graphical tools aren’t
available.
>>> from Bio import Phylo >>> tree = Phylo.read("simple.dnd", "newick") >>> Phylo.draw_ascii(tree) __________________ A __________________| | |__________________ B __________________| | | __________________ C | |__________________| ___________________| |__________________ D | | __________________ E | | |__________________|__________________ F | |__________________ G <BLANKLINE>
If you have matplotlib or pylab installed, you can create a graphic
using the
draw function (see Fig. 13.1):
>>> tree.rooted = True >>> Phylo.draw(tree)
The functions
draw and
draw_graphviz support the display of different
colors and branch widths in a tree.
As of Biopython 1.59, the
color and
width attributes are available on the
basic Clade object and there’s nothing extra required to use them.
Both attributes refer to the branch leading the given clade, and apply recursively, so
all descendent branches will also inherit the assigned width and color values during
display.
In earlier versions of Biopython, these were special features of PhyloXML trees, and using the attributes required first converting the tree to a subclass of the basic tree object called Phylogeny, from the Bio.Phylo.PhyloXML module.
In Biopython 1.55 and later, this is a convenient tree method:
>>> tree = tree.as_phyloxml()
In Biopython 1.54, you can accomplish the same thing with one extra import:
>>> from Bio.Phylo.PhyloXML import Phylogeny >>> tree = Phylogeny.from_tree(tree)
Note that the file formats Newick and Nexus don’t support branch colors or widths, so if you use these attributes in Bio.Phylo, you will only be able to save the values in PhyloXML format. (You can still save a tree as Newick or Nexus, but the color and width values will be skipped in the output file.)
Now we can begin assigning colors. First, we’ll color the root clade gray. We can do that by assigning the 24-bit color value as an RGB triple, an HTML-style hex string, or the name of one of the predefined colors.
>>> tree.root.color = (128, 128, 128)
Or:
>>> tree.root.color = "#808080"
Or:
>>> tree.root.color = "gray"
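The three notations are interchangeable; a small helper showing the equivalence (the named-color table here is a tiny illustrative subset, not Bio.Phylo’s full list):

```python
def to_hex(color):
    """Normalize an (r, g, b) triple, '#rrggbb' string, or color name
    to a '#rrggbb' string. Toy helper for illustration only."""
    named = {"gray": (128, 128, 128), "salmon": (250, 128, 114), "blue": (0, 0, 255)}
    if isinstance(color, str) and not color.startswith("#"):
        color = named[color.lower()]      # look up a named color
    if isinstance(color, tuple):
        return "#%02x%02x%02x" % color    # pack the RGB triple as hex
    return color.lower()

print(to_hex((128, 128, 128)))  # -> #808080
print(to_hex("gray"))           # -> #808080
```

So all three assignments above set the root to the same 24-bit color value.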
Colors for a clade are treated as cascading down through the entire clade, so when we colorize the root here, it turns the whole tree gray. We can override that by assigning a different color lower down on the tree.
Let’s target the most recent common ancestor (MRCA) of the nodes named “E” and “F”. The
common_ancestor method returns a reference to that clade in the original tree, so when
we color that clade “salmon”, the color will show up in the original tree.
>>> mrca = tree.common_ancestor({"name": "E"}, {"name": "F"}) >>> mrca.color = "salmon"
If we happened to know exactly where a certain clade is in the tree, in terms of nested list
entries, we can jump directly to that position in the tree by indexing it. Here, the index
[0,1] refers to the second child of the first child of the root.
>>> tree.clade[0, 1].color = "blue"
Finally, show our work (see Fig. 13.1.1):
>>> Phylo.draw(tree)
Note that a clade’s color includes the branch leading to that clade, as well as its descendents. The common ancestor of E and F turns out to be just under the root, and with this coloring we can see exactly where the root of the tree is.
My, we’ve accomplished a lot! Let’s take a break here and save our work. Call the write function with a file name or handle — here we use standard output, to see what would be written — and the format phyloxml. PhyloXML saves the colors we assigned, so you can open this phyloXML file in another tree viewer like Archaeopteryx, and the colors will show up there, too.
>>> import sys >>> Phylo.write(tree, sys.stdout, "phyloxml") <phy:phyloxml xmlns: <phy:phylogeny <phy:clade> <phy:branch_length>1.0</phy:branch_length> <phy:color> <phy:red>128</phy:red> <phy:green>128</phy:green> <phy:blue>128</phy:blue> </phy:color> <phy:clade> <phy:branch_length>1.0</phy:branch_length> <phy:clade> <phy:branch_length>1.0</phy:branch_length> <phy:clade> <phy:name>A</phy:name> ...
The rest of this chapter covers the core functionality of Bio.Phylo in greater detail. For more examples of using Bio.Phylo, see the cookbook page on Biopython.org:
Like SeqIO and AlignIO, Phylo handles file input and output through four functions:
parse,
read,
write and
convert,
all of which support the tree file formats Newick, NEXUS, phyloXML and NeXML, as
well as the Comparative Data Analysis Ontology (CDAO).
The
read function parses a single tree in the given file and returns it. Careful; it
will raise an error if the file contains more than one tree, or no trees.
>>> from Bio import Phylo >>> tree = Phylo.read("Tests/Nexus/int_node_labels.nwk", "newick") >>> print(tree)
(Example files are available in the Tests/Nexus/ and Tests/PhyloXML/ directories of the Biopython distribution.)
To handle multiple (or an unknown number of) trees, use the parse function, which iterates through each of the trees in the given file:
>>> trees = Phylo.parse("Tests/PhyloXML/phyloxml_examples.xml", "phyloxml") >>> for tree in trees: ... print(tree)
Write a tree or iterable of trees back to file with the
write function:
>>> trees = list(Phylo.parse("phyloxml_examples.xml", "phyloxml")) >>> tree1 = trees[0] >>> others = trees[1:] >>> Phylo.write(tree1, "tree1.xml", "phyloxml") 1 >>> Phylo.write(others, "other_trees.xml", "phyloxml") 12
Convert files between any of the supported formats with the
convert function:
>>> Phylo.convert("tree1.dnd", "newick", "tree1.xml", "nexml") 1 >>> Phylo.convert("other_trees.xml", "phyloxml", "other_trees.nex", "nexus") 12
To use strings as input or output instead of actual files, use
StringIO as you would
with SeqIO and AlignIO:
>>> from Bio import Phylo >>> from io import StringIO >>> handle = StringIO("(((A,B),(C,D)),(E,F,G));") >>> tree = Phylo.read(handle, "newick")
The simplest way to get an overview of a Tree object is to print it:
>>> from Bio import Phylo >>> tree = Phylo.read("PhyloXML/example.xml", "phyloxml") >>> print(tree) Phylogeny(description='phyloXML allows to use either a "branch_length" attribute...', name='example from Prof. Joe Felsenstein's book "Inferring Phyl...', rooted=True) Clade() Clade(branch_length=0.06) Clade(branch_length=0.102, name='A') Clade(branch_length=0.23, name='B') Clade(branch_length=0.4, name='C')
This is essentially an outline of the object hierarchy Biopython uses to represent a tree. But more likely, you’d want to see a drawing of the tree. There are three functions to do this.
As we saw in the demo,
draw_ascii prints an ascii-art drawing of the tree (a
rooted phylogram) to standard output, or an open file handle if given. Not all of the
available information about the tree is shown, but it provides a way to quickly view the
tree without relying on any external dependencies.
>>> tree = Phylo.read("example.xml", "phyloxml") >>> Phylo.draw_ascii(tree) __________________ A __________| _| |___________________________________________ B | |___________________________________________________________________________ C
The
draw function draws a more attractive image using the matplotlib
library. See the API documentation for details on the arguments it accepts to
customize the output.
>>> tree = Phylo.read("example.xml", "phyloxml") >>> Phylo.draw(tree, branch_labels=lambda c: c.branch_length)
draw_graphviz draws an unrooted cladogram, but requires that you have Graphviz,
PyDot or PyGraphviz, NetworkX, and matplotlib (or pylab) installed. Using the same example as
above, and the
dot program included with Graphviz, let’s draw a rooted tree (see
Fig. 13.3):
>>> tree = Phylo.read("example.xml", "phyloxml") >>> Phylo.draw_graphviz(tree, prog='dot') >>> import pylab >>> pylab.show() # Displays the tree in an interactive viewer >>> pylab.savefig('phylo-dot.png') # Creates a PNG file of the same graphic
(Tip: If you execute IPython with the
-pylab option, calling
draw_graphviz causes
the matplotlib viewer to launch automatically without manually calling
show().)
This exports the tree object to a NetworkX graph, uses Graphviz to lay out the nodes, and
displays it using matplotlib.
There are a number of keyword arguments that can modify the resulting diagram, including
most of those accepted by the NetworkX functions
networkx.draw and
networkx.draw_graphviz.
The display is also affected by the
rooted attribute of the given tree object.
Rooted trees are shown with a “head” on each branch indicating direction (see
Fig. 13.3):
>>> tree = Phylo.read("simple.dnd", "newick") >>> tree.rooted = True >>> Phylo.draw_graphviz(tree)
The “prog” argument specifies the Graphviz engine used for layout. The default,
twopi, behaves well for any size tree, reliably avoiding crossed branches. The
neato program may draw more attractive moderately-sized trees, but sometimes will
cross branches (see Fig. 13.3). The
dot program may be useful
with small trees, but tends to do surprising things with the layout of larger trees.
>>> Phylo.draw_graphviz(tree, prog="neato")
This viewing mode is particularly handy for exploring larger trees, because the matplotlib viewer can zoom in on a selected region, thinning out a cluttered graphic.
>>> tree = Phylo.read("apaf.xml", "phyloxml") >>> Phylo.draw_graphviz(tree, prog="neato", node_size=0)
Note that branch lengths are not displayed accurately, because Graphviz ignores them when
creating the node layouts. The branch lengths are retained when exporting a tree as a NetworkX
graph object (
to_networkx), however.
See the Phylo page on the Biopython wiki () for
descriptions and examples of the more advanced functionality in
draw_ascii,
draw_graphviz and
to_networkx.
The
Tree objects produced by
parse and
read are containers for recursive
sub-trees, attached to the
Tree object at the
root attribute (whether or not the
phylogenic tree is actually considered rooted). A
Tree has globally applied information
for the phylogeny, such as rootedness, and a reference to a single
Clade; a
Clade has node- and clade-specific information, such as branch length, and a list of
its own descendent
Clade instances, attached at the
clades attribute.
So there is a distinction between
tree and
tree.root. In practice, though, you
rarely need to worry about it. To smooth over the difference, both
Tree and
Clade inherit from
TreeMixin, which contains the implementations for methods
that would be commonly used to search, inspect or modify a tree or any of its clades. This
means that almost all of the methods supported by
tree are also available on
tree.root and any clade below it. (
Clade also has a
root property, which
returns the clade object itself.)
For convenience, we provide a couple of simplified methods that return all external or internal nodes directly as a list:
These both wrap a method with full control over tree traversal,
find_clades. Two more
traversal methods,
find_elements and
find_any, rely on the same core
functionality and accept the same arguments, which we’ll call a “target specification” for
lack of a better description. These specify which objects in the tree will be matched and
returned during iteration. The first argument can be any of the following types:
a TreeElement instance, which tree elements will match by identity;

a string, which matches tree elements’ string representation, in particular a clade’s name (added in Biopython 1.56);

a class or type, where every tree element of the same type (or sub-type) will be matched;

a dictionary where keys are tree element attributes and values are matched to the corresponding attribute of each tree element. If an attribute value is a string, it is treated as a regular expression that must match the whole attribute string. Given a tree with clades named Foo1, Foo2 and Foo3, tree.find_clades({"name": "Foo1"}) matches Foo1, {"name": "Foo.*"} matches all three clades, and {"name": "Foo"} doesn’t match anything.
Since floating-point arithmetic can produce some strange behavior, we don’t support matching floats directly. Instead, use the boolean True to match every element with a nonzero value in the specified attribute, then filter on that attribute manually with an inequality (or exact number, if you like living dangerously).
If the dictionary contains multiple entries, a matching element must match each of the given attribute values — think “and”, not “or”.
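A toy version of this matching logic (a sketch of the semantics described above, not Bio.Phylo’s actual implementation):

```python
import re
from types import SimpleNamespace

def matches(element, **spec):
    """Return True if every keyword matches the element's attribute ('and'
    logic). String values are regular expressions that must match the whole
    attribute; True matches any non-empty/non-zero value."""
    for attr, want in spec.items():
        have = getattr(element, attr, None)
        if isinstance(want, str):
            if not (isinstance(have, str) and re.fullmatch(want, have)):
                return False
        elif want is True:
            if not have:
                return False
        elif have != want:
            return False
    return True


clade = SimpleNamespace(name="Foo1", branch_length=0.3)
print(matches(clade, name="Foo.*"))  # -> True
print(matches(clade, name="Foo"))    # -> False: must match the whole string
```

Note how multiple keywords combine with "and" semantics: every condition must hold for the element to match.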
After the target, there are two optional keyword arguments:
terminal: a boolean value to select for or against terminal clades (a.k.a. leaf nodes). True searches for only terminal clades, False for non-terminal (internal) clades, and the default, None, searches both terminal and non-terminal clades, as well as any tree elements lacking the is_terminal method.
Finally, the methods accept arbitrary keyword arguments which are treated the same way as a
dictionary target specification: keys indicate the name of the element attribute to search for,
and the argument value (string, integer, None or boolean) is compared to the value of each
attribute found. If no keyword arguments are given, then any TreeElement types are matched.
The code for this is generally shorter than passing a dictionary as the target specification:
tree.find_clades({"name": "Foo1"}) can be shortened to
tree.find_clades(name="Foo1").
(In Biopython 1.56 or later, this can be even shorter:
tree.find_clades("Foo1"))
Now that we’ve mastered target specifications, here are the methods used to traverse a tree:
find_clades: find each element as with find_elements, but return the corresponding clade object. (This is usually what you want.) The result is an iterable through all matching objects, searching depth-first by default. This is not necessarily the same order as the elements appear in the Newick, Nexus or XML source file!

find_elements: find the matching tree elements themselves, rather than running find_clades on them. PhyloXML trees often do have complex objects attached to clades, so this method is useful for extracting those.

find_any: return the first element found by find_elements(), or None. This is also useful for checking whether any matching element exists in the tree, and can be used in a conditional.
Two more methods help navigating between nodes in the tree:
These methods provide information about the whole tree (or any clade).
With the unit_branch_lengths=True option, only the number of branches (levels in the tree) is counted.
The rest of these methods are boolean checks:
Each returns True if the condition holds, and False otherwise.
if subclade in clade: ...
These methods modify the tree in-place. If you want to keep the original tree intact, make a complete copy of the tree first, using Python’s copy module:
tree = Phylo.read('example.xml', 'phyloxml') import copy newtree = copy.deepcopy(tree)
Use reverse=True to sort clades deepest-to-shallowest.
If the outgroup is identical to self.root, no change occurs. If the outgroup clade is terminal (e.g. a single terminal node is given as the outgroup), a new bifurcating root clade is created with a 0-length branch to the given outgroup. Otherwise, the internal node at the base of the outgroup becomes a trifurcating root for the whole tree. If the original root was bifurcating, it is dropped from the tree.
In all cases, the total branch length of the tree stays the same.
(This uses root_with_outgroup under the hood.)
These are given the specified branch_length and the same name as this clade’s root plus an integer suffix (counting from 0); for example, splitting a clade named "A" produces the sub-clades "A0" and "A1".
See the Phylo page on the Biopython wiki () for more examples of using the available methods.
The phyloXML file format includes fields for annotating trees with additional data types and visual cues.
See the PhyloXML page on the Biopython wiki () for descriptions and examples of using the additional annotation features provided by PhyloXML.
While Bio.Phylo doesn’t infer trees from alignments itself, there are third-party programs available that do. These are supported through the module Bio.Phylo.Applications, using the same general framework as Bio.Emboss.Applications, Bio.Align.Applications and others.
Biopython 1.58 introduced a wrapper for PhyML (). The program accepts an input alignment in phylip-relaxed format (that’s Phylip format, but without the 10-character limit on taxon names) and a variety of options. A quick example:
>>> from Bio import Phylo >>> from Bio.Phylo.Applications import PhymlCommandline >>> cmd = PhymlCommandline(input='Tests/Phylip/random.phy') >>> out_log, err_log = cmd()
This generates a tree file and a stats file with the names [input filename]_phyml_tree.txt and [input filename]_phyml_stats.txt. The tree file is in Newick format:
>>> tree = Phylo.read('Tests/Phylip/random.phy_phyml_tree.txt', 'newick') >>> Phylo.draw_ascii(tree)
A similar wrapper for RAxML () was added in Biopython 1.60, and FastTree () in Biopython 1.62.
Note that some popular Phylip programs, including dnaml and protml, are already available through the EMBOSS wrappers in Bio.Emboss.Applications if you have the Phylip extensions to EMBOSS installed on your system. See Section 6.4 for some examples and clues on how to use programs like these.
Biopython 1.58 brought support for PAML (), a suite of programs for phylogenetic analysis by maximum likelihood. Currently the programs codeml, baseml and yn00 are implemented. Due to PAML’s usage of control files rather than command line arguments to control runtime options, usage of this wrapper strays from the format of other application wrappers in Biopython.
A typical workflow would be to initialize a PAML object, specifying an alignment file, a tree file, an output file and a working directory. Next, runtime options are set via the set_options() method or by reading an existing control file. Finally, the program is run via the run() method and the output file is automatically parsed to a results dictionary.
Here is an example of typical usage of codeml:
>>> from Bio.Phylo.PAML import codeml >>> cml = codeml.Codeml() >>> cml.alignment = "Tests/PAML/alignment.phylip" >>> cml.tree = "Tests/PAML/species.tree" >>> cml.out_file = "results.out" >>> cml.working_dir = "./scratch" >>> cml.set_options(seqtype=1, ... verbose=0, ... noisy=0, ... RateAncestor=0, ... model=0, ... NSsites=[0, 1, 2], ... CodonFreq=2, ... cleandata=1, ... fix_alpha=1, ... kappa=4.54006) >>> results = cml.run() >>> ns_sites = results.get("NSsites") >>> m0 = ns_sites.get(0) >>> m0_params = m0.get("parameters") >>> print(m0_params.get("omega"))
Existing output files may be parsed as well using a module’s read() function:
>>> results = codeml.read("Tests/PAML/Results/codeml/codeml_NSsites_all.out") >>> print(results.get("lnL max"))
Detailed documentation for this new module currently lives on the Biopython wiki.
Bio.Phylo is under active development. Here are some features we might add in future releases:
Currently, Bio.Nexus contains some useful features that have not yet been ported to Bio.Phylo classes — notably, calculating a consensus tree. If you find some functionality lacking in Bio.Phylo, try poking through Bio.Nexus to see if it’s there instead.
We’re open to any suggestions for improving the functionality and usability of this module; just let us know on the mailing list or our bug database.
Finally, if you need additional functionality not yet included in the Phylo module, check if it’s available in another of the high-quality Python libraries for phylogenetics such as DendroPy () or PyCogent (). Since these libraries also support standard file formats for phylogenetic trees, you can easily transfer data between libraries by writing to a temporary file or StringIO object.
This chapter gives an overview of the functionality of the
Bio.motifs package included in Biopython. It is intended
for people who are involved in the analysis of sequence motifs, so I’ll
assume that you are familiar with basic notions of motif analysis. In
case something is unclear, please look at Section 14.10
for some relevant links.
Most of this chapter describes the new
Bio.motifs package included
in Biopython 1.61 onwards, which is replacing the older
Bio.Motif package
introduced with Biopython 1.50, which was in turn based on two former
Biopython modules,
Bio.AlignAce and
Bio.MEME. It provides
most of their functionality with a unified motif object implementation.
Speaking of other libraries, if you are reading this you might be interested in TAMO, another Python library designed to deal with sequence motifs. It supports more de novo motif finders, but it is not part of Biopython and has some restrictions on commercial use.
Since we are interested in motif analysis, we need to take a look at
Motif objects in the first place. For that we need to import
the Bio.motifs library:
>>> from Bio import motifs
and we can start creating our first motif objects. We can either create
a
Motif object from a list of instances of the motif, or we can
obtain a
Motif object by parsing a file from a motif database
or motif finding software.
Suppose we have these instances of a DNA motif:
>>> from Bio.Seq import Seq >>> instances = [Seq("TACAA"), ... Seq("TACGC"), ... Seq("TACAC"), ... Seq("TACCC"), ... Seq("AACCC"), ... Seq("AATGC"), ... Seq("AATGC"), ... ]
then we can create a Motif object as follows:
>>> m = motifs.create(instances)
The instances are saved in an attribute
m.instances, which is essentially a Python list with some added functionality, as described below.
Printing out the Motif object shows the instances from which it was constructed:
>>> print(m) TACAA TACGC TACAC TACCC AACCC AATGC AATGC <BLANKLINE>
The length of the motif is defined as the sequence length, which should be the same for all instances:
>>> len(m) 5
The Motif object has an attribute
.counts containing the counts of each
nucleotide at each position. Printing this counts matrix shows it in an easily readable format:
>>> print(m.counts) 0 1 2 3 4 A: 3.00 7.00 0.00 2.00 1.00 C: 0.00 0.00 5.00 2.00 6.00 G: 0.00 0.00 0.00 3.00 0.00 T: 4.00 0.00 2.00 0.00 0.00 <BLANKLINE>
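Since the counts matrix is just a per-position tally of the letters in the instances, the same numbers can be reproduced with a few lines of plain Python (this is only a sanity check of the tally, not how Bio.motifs stores it internally):

```python
# Tally each nucleotide per position, reproducing the counts matrix by hand.
instances = ["TACAA", "TACGC", "TACAC", "TACCC", "AACCC", "AATGC", "AATGC"]

counts = {nt: [0] * len(instances[0]) for nt in "ACGT"}
for seq in instances:
    for pos, nt in enumerate(seq):
        counts[nt][pos] += 1

print(counts["A"])  # [3, 7, 0, 2, 1], matching m.counts['A']
```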
You can access these counts as a dictionary:
>>> m.counts['A'] [3, 7, 0, 2, 1]
but you can also think of it as a 2D array with the nucleotide as the first dimension and the position as the second dimension:
>>> m.counts['T', 0] 4 >>> m.counts['T', 2] 2 >>> m.counts['T', 3] 0
You can also directly access columns of the counts matrix
>>> m.counts[:, 3] {'A': 2, 'C': 2, 'T': 0, 'G': 3}
Instead of the nucleotide itself, you can also use the index of the nucleotide in the sorted letters in the alphabet of the motif:
>>> m.alphabet IUPACUnambiguousDNA() >>> m.alphabet.letters 'GATC' >>> sorted(m.alphabet.letters) ['A', 'C', 'G', 'T'] >>> m.counts['A',:] (3, 7, 0, 2, 1) >>> m.counts[0,:] (3, 7, 0, 2, 1)
The motif has an associated consensus sequence, defined as the sequence of
letters along the positions of the motif for which the largest value in the
corresponding columns of the
.counts matrix is obtained:
>>> m.consensus Seq('TACGC', IUPACUnambiguousDNA())
as well as an anticonsensus sequence, corresponding to the smallest values in
the columns of the
.counts matrix:
>>> m.anticonsensus Seq('GGGTG', IUPACUnambiguousDNA())
You can also ask for a degenerate consensus sequence, in which ambiguous nucleotides are used for positions where there are multiple nucleotides with high counts:
>>> m.degenerate_consensus Seq('WACVC', IUPACAmbiguousDNA())
Here, W and V follow the IUPAC nucleotide ambiguity codes: W is either A or T, and V is A, C, or G [10]. The degenerate consensus sequence is constructed following the rules specified by Cavener [11].
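To illustrate where the ambiguity codes come from, here is a minimal sketch of the set-to-code lookup; note that the actual selection of which nucleotides a column keeps follows Cavener’s rules, which this sketch does not implement:

```python
# Simplified illustration: map a set of candidate nucleotides to its IUPAC code.
# Bio.motifs uses Cavener's rules to decide which nucleotides a column keeps;
# here we only show the final set -> code lookup.
IUPAC_CODES = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "AG": "R", "CT": "Y", "CG": "S", "AT": "W", "GT": "K", "AC": "M",
    "CGT": "B", "AGT": "D", "ACT": "H", "ACG": "V",
    "ACGT": "N",
}

def code(nucleotides):
    return IUPAC_CODES["".join(sorted(set(nucleotides)))]

print(code("AT"))   # 'W': first column of the example motif (A and T both frequent)
print(code("ACG"))  # 'V': fourth column (A, C and G all observed)
```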
We can also get the reverse complement of a motif:
>>> r = m.reverse_complement() >>> r.consensus Seq('GCGTA', IUPACUnambiguousDNA()) >>> r.degenerate_consensus Seq('GBGTW', IUPACAmbiguousDNA()) >>> print(r) TTGTA GCGTA GTGTA GGGTA GGGTT GCATT GCATT <BLANKLINE>
The reverse complement and the degenerate consensus sequence are only defined for DNA motifs.
If we have internet access, we can create a weblogo:
>>> m.weblogo("mymotif.png")
We should get our logo saved as a PNG in the specified file.
Creating motifs from instances by hand is a bit boring, so it’s useful to have some I/O functions for reading and writing motifs. There are no well-established standards for storing motifs, but a couple of formats are used more widely than others.
One of the most popular motif databases is JASPAR. In addition to the motif sequence information, the JASPAR database stores a lot of meta-information for each motif. The module
Bio.motifs contains a specialized class
jaspar.Motif in which this meta-information is represented as attributes:
matrix_id - the unique JASPAR motif ID, e.g. ’MA0004.1’
name - the name of the TF, e.g. ’Arnt’
collection - the JASPAR collection to which the motif belongs, e.g. ’CORE’
tf_class - the structural class of this TF, e.g. ’Zipper-Type’
tf_family - the family to which this TF belongs, e.g. ’Helix-Loop-Helix’
species - the species to which this TF belongs; may have multiple values, specified as taxonomy IDs, e.g. 10090
tax_group - the taxonomic supergroup to which this motif belongs, e.g. ’vertebrates’
acc - the accession number of the TF protein, e.g. ’P53762’
data_type - the type of data used to construct this motif, e.g. ’SELEX’
medline - the Pubmed ID of literature supporting this motif; may have multiple values, e.g. 7592839
pazar_id - external reference to the TF in the PAZAR database, e.g. ’TF0000003’
comment - free-form text containing notes about the construction of the motif
The
jaspar.Motif class inherits from the generic
Motif class and therefore provides all the facilities of any of the motif formats — reading motifs, writing motifs, scanning sequences for motif instances etc.
JASPAR stores motifs in several different ways including three different flat file formats and as an SQL database. All of these formats facilitate the construction of a counts matrix. However, the amount of meta information described above that is available varies with the format.
The first of the three flat file formats contains a list of instances. As an example, these are the beginning and ending lines of the JASPAR
Arnt.sites file showing known binding sites of the mouse helix-loop-helix transcription factor Arnt.
>MA0004 ARNT 1 CACGTGatgtcctc >MA0004 ARNT 2 CACGTGggaggtac >MA0004 ARNT 3 CACGTGccgcgcgc ... >MA0004 ARNT 18 AACGTGacagccctcc >MA0004 ARNT 19 AACGTGcacatcgtcc >MA0004 ARNT 20 aggaatCGCGTGc
The parts of the sequence in capital letters are the motif instances that were found to align to each other.
We can create a
Motif object from these instances as follows:
>>> from Bio import motifs >>> with open("Arnt.sites") as handle: ... arnt = motifs.read(handle, "sites") ...
The instances from which this motif was created is stored in the
.instances property:
>>> print(arnt.instances[:3]) [Seq('CACGTG', IUPACUnambiguousDNA()), Seq('CACGTG', IUPACUnambiguousDNA()), Seq('CACGTG', IUPACUnambiguousDNA())] >>> for instance in arnt.instances: ... print(instance) ... CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG CACGTG AACGTG AACGTG AACGTG AACGTG CGCGTG
The counts matrix of this motif is automatically calculated from the instances:
>>> print(arnt.counts) 0 1 2 3 4 5 A: 4.00 19.00 0.00 0.00 0.00 0.00 C: 16.00 0.00 20.00 0.00 0.00 0.00 G: 0.00 1.00 0.00 20.00 0.00 20.00 T: 0.00 0.00 0.00 0.00 20.00 0.00 <BLANKLINE>
This format does not store any meta information.
JASPAR also makes motifs available directly as a count matrix,
without the instances from which it was created. This
pfm format only
stores the counts matrix for a single motif.
For example, this is the JASPAR file
SRF.pfm containing the counts matrix for the human SRF transcription factor:
2  9  0  1 32  3 46  1 43 15  2  2
1 33 45 45  1  1  0  0  0  1  0  1
39  2  1  0  0  0  0  0  0  0 44 43
4  2  0  0 13 42  0 45  3 30  0  0
We can create a motif for this count matrix as follows:
>>> with open("SRF.pfm") as handle: ... srf = motifs.read(handle, "pfm") ... >>> print(srf.counts) 0 1 2 3 4 5 6 7 8 9 10 11 A: 2.00 9.00 0.00 1.00 32.00 3.00 46.00 1.00 43.00 15.00 2.00 2.00 C: 1.00 33.00 45.00 45.00 1.00 1.00 0.00 0.00 0.00 1.00 0.00 1.00 G: 39.00 2.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 44.00 43.00 T: 4.00 2.00 0.00 0.00 13.00 42.00 0.00 45.00 3.00 30.00 0.00 0.00 <BLANKLINE>
As this motif was created from the counts matrix directly, it has no instances associated with it:
>>> print(srf.instances) None
We can now ask for the consensus sequence of these two motifs:
>>> print(arnt.counts.consensus) CACGTG >>> print(srf.counts.consensus) GCCCATATATGG
As with the instances file, no meta information is stored in this format.
The
jaspar file format allows multiple motifs to be specified in a single file. In this format each of the motif records consist of a header line followed by four lines defining the counts matrix. The header line begins with a
> character (similar to the Fasta file format) and is followed by the unique JASPAR matrix ID and the TF name. The following example shows a
jaspar formatted file containing the three motifs Arnt, RUNX1 and MEF2A:
>MA0004.1 Arnt A [ 4 19 0 0 0 0 ] C [16 0 20 0 0 0 ] G [ 0 1 0 20 0 20 ] T [ 0 0 0 0 20 0 ] >MA0002.1 RUNX1 A [10 12 4 1 2 2 0 0 0 8 13 ] C [ 2 2 7 1 0 8 0 0 1 2 2 ] G [ 3 1 1 0 23 0 26 26 0 0 4 ] T [11 11 14 24 1 16 0 0 25 16 7 ] >MA0052.1 MEF2A A [ 1 0 57 2 9 6 37 2 56 6 ] C [50 0 1 1 0 0 0 0 0 0 ] G [ 0 0 0 0 0 0 0 0 2 50 ] T [ 7 58 0 55 49 52 21 56 0 2 ]
The motifs are read as follows:
>>> fh = open("jaspar_motifs.txt") >>> for m in motifs.parse(fh, "jaspar"): ... print(m) TF name Arnt Matrix ID MA0004.1 TF name RUNX1 Matrix ID MA0002.1 Matrix: 0 1 2 3 4 5 6 7 8 9 10 A: 10.00 12.00 4.00 1.00 2.00 2.00 0.00 0.00 0.00 8.00 13.00 C: 2.00 2.00 7.00 1.00 0.00 8.00 0.00 0.00 1.00 2.00 2.00 G: 3.00 1.00 1.00 0.00 23.00 0.00 26.00 26.00 0.00 0.00 4.00 T: 11.00 11.00 14.00 24.00 1.00 16.00 0.00 0.00 25.00 16.00 7.00 TF name MEF2A Matrix ID MA0052.1 Matrix: 0 1 2 3 4 5 6 7 8 9
Note that printing a JASPAR motif yields both the counts data and the available meta-information.
In addition to parsing these flat file formats, we can also retrieve motifs from a JASPAR SQL database. Unlike the flat file formats, a JASPAR database allows storing of all possible meta information defined in the JASPAR
Motif class. It is beyond the scope of this document to describe how to set up a JASPAR database (please see the main JASPAR website). Motifs are read from a JASPAR database using the
Bio.motifs.jaspar.db module. First connect to the JASPAR database using the JASPAR5 class, which models the latest JASPAR schema:
>>> from Bio.motifs.jaspar.db import JASPAR5 >>> >>> JASPAR_DB_HOST = <hostname> >>> JASPAR_DB_NAME = <db_name> >>> JASPAR_DB_USER = <user> >>> JASPAR_DB_PASS = <password> >>> >>> jdb = JASPAR5( ... host=JASPAR_DB_HOST, ... name=JASPAR_DB_NAME, ... user=JASPAR_DB_USER, ... password=JASPAR_DB_PASS ... )
Now we can fetch a single motif by its unique JASPAR ID with the
fetch_motif_by_id method. Note that a JASPAR ID consists of a base ID and a version number separated by a decimal point, e.g. ’MA0004.1’. The
fetch_motif_by_id method allows you to use either the fully specified ID or just the base ID. If only the base ID is provided, the latest version of the motif is returned.
>>> arnt = jdb.fetch_motif_by_id("MA0004")
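Fully specified IDs of this form can be taken apart with ordinary string handling, e.g.:

```python
# Split a fully specified JASPAR matrix ID into its base ID and version.
matrix_id = "MA0004.1"
base_id, version = matrix_id.split(".")
print(base_id, version)  # MA0004 1
```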
Printing the motif reveals that the JASPAR SQL database stores much more meta-information than the flat files:
>>> print(arnt)
We can also fetch motifs by name. The name must be an exact match (partial matches or database wildcards are not currently supported). Note that as the name is not guaranteed to be unique, the
fetch_motifs_by_name method actually returns a list.
>>> motifs = jdb.fetch_motifs_by_name("Arnt") >>> print(motifs[0])
The
fetch_motifs method allows you to fetch motifs which match a specified set of criteria. These criteria include any of the above described meta information as well as certain matrix properties such as the minimum information content (
min_ic in the example below), the minimum length of the matrix or the minimum number of sites used to construct the matrix. Only motifs which pass ALL the specified criteria are returned. Note that selection criteria which correspond to meta information which allow for multiple values may be specified as either a single value or a list of values, e.g.
tax_group and
tf_family in the example below.
>>> motifs = jdb.fetch_motifs( ... collection = 'CORE', ... tax_group = ['vertebrates', 'insects'], ... tf_class = 'Winged Helix-Turn-Helix', ... tf_family = ['Forkhead', 'Ets'], ... min_ic = 12 ... ) >>> for motif in motifs: ... pass # do something with the motif
An important thing to note is that the JASPAR
Motif class was designed to be compatible with the popular Perl TFBS modules. Therefore some specifics about the choice of defaults for background and pseudocounts as well as how information content is computed and sequences searched for instances is based on this compatibility criteria. These choices are noted in the specific subsections below.
TFBS modules appear to allow a choice of custom background probabilities (although the documentation states that a uniform background is assumed). However, the default is to use a uniform background. Therefore it is recommended that you use a uniform background for computing the position-specific scoring matrix (PSSM). This is the default when using the Biopython motifs module.
TFBS modules use a pseudocount equal to √N * bg[nucleotide], where N represents the total number of sequences used to construct the matrix. To apply this same pseudocount formula, set the motif pseudocounts attribute using the jaspar.calculate_pseudocounts() function:
>>> motif.pseudocounts = motifs.jaspar.calculate_pseudocounts(motif)

Note that it is possible for the counts matrix to have an unequal number of sequences making up the columns. The pseudocount computation uses the average number of sequences making up the matrix. However, when
normalize is called on the counts matrix, each count value in a column is divided by the total number of sequences making up that specific column, not by the average number of sequences. This differs from the Perl TFBS modules because the normalization is not done as a separate step and so the average number of sequences is used throughout the computation of the pssm. Therefore, for matrices with unequal column counts, the pssm computed by the motifs module will differ somewhat from the pssm computed by the Perl TFBS modules.
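As a sketch of that pseudocount formula with stdlib Python only (assuming a uniform background and taking N = 20, the number of sequences behind the Arnt matrix shown earlier):

```python
import math

# sqrt(N) * bg[nucleotide]: the TFBS-compatible pseudocount formula.
n = 20  # total number of sequences used to build the matrix (Arnt example)
background = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

pseudocounts = {nt: math.sqrt(n) * p for nt, p in background.items()}
print(round(pseudocounts["A"], 3))  # 1.118
```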
The information content of a matrix is computed using the mean method of the PositionSpecificScoringMatrix class. Of note, in the Perl TFBS modules the default behaviour is to compute the IC without first applying pseudocounts, even though by default the PSSMs are computed using pseudocounts as described above.
Searching for instances of Perl TFBS motifs was usually performed using a relative score threshold, i.e. a score in the range 0 to 1. In order to compute the absolute PSSM score corresponding to a relative score one can use the equation:
>>> abs_score = (pssm.max - pssm.min) * rel_score + pssm.min

To convert the absolute score of an instance back to a relative score, one can use the equation:
>>> rel_score = (abs_score - pssm.min) / (pssm.max - pssm.min)

For example, using the Arnt motif from before, let’s search a sequence with a relative score threshold of 0.8.
>>> from Bio.Alphabet.IUPAC import unambiguous_dna >>> test_seq = Seq("TAAGCGTGCACGCGCAACACGTGCATTA", unambiguous_dna) >>> arnt.pseudocounts = motifs.jaspar.calculate_pseudocounts(arnt) >>> pssm = arnt.pssm() >>> max_score = pssm.max >>> min_score = pssm.min >>> abs_score_threshold = (max_score - min_score) * 0.8 + min_score >>> for position, score in pssm.search(test_seq, threshold=abs_score_threshold): ... rel_score = (score - min_score) / (max_score - min_score) ... print("Position %d: score = %5.3f, rel. score = %5.3f" % ( position, score, rel_score)) ... Position 2: score = 5.362, rel. score = 0.801 Position 8: score = 6.112, rel. score = 0.831 Position -20: score = 7.103, rel. score = 0.870 Position 17: score = 10.351, rel. score = 1.000 Position -11: score = 10.351, rel. score = 1.000
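The two conversion formulas are inverses of each other. A quick plain-Python check, using made-up values standing in for pssm.min and pssm.max:

```python
# Round-trip between relative (0..1) and absolute PSSM scores.
# min_score/max_score are hypothetical values, not from a real PSSM.
min_score, max_score = -10.0, 5.0

def to_absolute(rel_score):
    return (max_score - min_score) * rel_score + min_score

def to_relative(abs_score):
    return (abs_score - min_score) / (max_score - min_score)

print(to_absolute(0.8))               # 2.0
print(to_relative(to_absolute(0.8)))  # 0.8
```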
MEME [12] is a tool for discovering motifs in a group of related DNA or protein sequences. It takes as input a group of DNA or protein sequences and outputs as many motifs as requested. Therefore, in contrast to JASPAR files, MEME output files typically contain multiple motifs. Here is an example.
The top of an output file generated by MEME shows some background information, including the version of MEME used:
******************************************************************************** MEME - Motif discovery tool ******************************************************************************** MEME version 3.0 (Release date: 2004/08/18 09:07:01) ...
Further down, the input set of training sequences is recapitulated:
******************************************************************************** TRAINING SET ******************************************************************************** DATAFILE= INO_up800.s ALPHABET= ACGT Sequence name Weight Length Sequence name Weight Length ------------- ------ ------ ------------- ------ ------ CHO1 1.0000 800 CHO2 1.0000 800 FAS1 1.0000 800 FAS2 1.0000 800 ACC1 1.0000 800 INO1 1.0000 800 OPI3 1.0000 800 ********************************************************************************
and the exact command line that was used:
******************************************************************************** COMMAND LINE SUMMARY ******************************************************************************** This information can also be useful in the event you wish to report a problem with the MEME software. command: meme -mod oops -dna -revcomp -nmotifs 2 -bfile yeast.nc.6.freq INO_up800.s ...
Next is detailed information on each motif that was found:
******************************************************************************** MOTIF 1 width = 12 sites = 7 llr = 95 E-value = 2.0e-001 ******************************************************************************** -------------------------------------------------------------------------------- Motif 1 Description -------------------------------------------------------------------------------- Simplified A :::9:a::::3: pos.-specific C ::a:9:11691a probability G ::::1::94:4: matrix T aa:1::9::11:
To parse this file (stored as
meme.dna.oops.txt), use
>>> handle = open("meme.dna.oops.txt") >>> record = motifs.parse(handle, "meme") >>> handle.close()
The
motifs.parse command reads the complete file directly, so you can
close the file after calling
motifs.parse.
The header information is stored in attributes:
>>> record.version '3.0' >>> record.datafile 'INO_up800.s' >>> record.command 'meme -mod oops -dna -revcomp -nmotifs 2 -bfile yeast.nc.6.freq INO_up800.s' >>> record.alphabet IUPACUnambiguousDNA() >>> record.sequences ['CHO1', 'CHO2', 'FAS1', 'FAS2', 'ACC1', 'INO1', 'OPI3']
The record is an object of the
Bio.motifs.meme.Record class.
The class inherits from list, and you can think of
record as a list of Motif objects:
>>> len(record) 2 >>> motif = record[0] >>> print(motif.consensus) TTCACATGCCGC >>> print(motif.degenerate_consensus) TTCACATGSCNC
In addition to these generic motif attributes, each motif also stores its specific information as calculated by MEME. For example,
>>> motif.num_occurrences 7 >>> motif.length 12 >>> evalue = motif.evalue >>> print("%3.1g" % evalue) 0.2 >>> motif.name 'Motif 1'
In addition to using an index into the record, as we did above, you can also find it by its name:
>>> motif = record['Motif 1']
Each motif has an attribute
.instances with the sequence instances
in which the motif was found, providing some information on each instance:
>>> len(motif.instances) 7 >>> motif.instances[0] Instance('TTCACATGCCGC', IUPACUnambiguousDNA()) >>> motif.instances[0].motif_name 'Motif 1' >>> motif.instances[0].sequence_name 'INO1' >>> motif.instances[0].start 620 >>> motif.instances[0].strand '-' >>> motif.instances[0].length 12 >>> pvalue = motif.instances[0].pvalue >>> print("%5.3g" % pvalue) 1.85e-08
TRANSFAC is a manually curated database of transcription factors, together with their genomic binding sites and DNA binding profiles [27]. While the file format used in the TRANSFAC database is nowadays also used by others, we will refer to it as the TRANSFAC file format.
A minimal file in the TRANSFAC format looks as follows:
ID motif1 //
This file shows the frequency matrix of motif
motif1 of 12 nucleotides.
In general, one file in the TRANSFAC format can contain multiple motifs. For
example, this is the contents of the example TRANSFAC file
transfac.dat:
VV  EXAMPLE January 15, 2013
XX
//
ID  motif1
P0      A      C      G      T
01      1      2      2      0      S
02      2      1      2      0      R
03      3      0      1      1      A
...
11      0      2      0      3      Y
12      1      0      3      1      G
//
ID  motif2
P0      A      C      G      T
01      2      1      2      0      R
02      1      2      2      0      S
...
09      0      0      0      5      T
10      0      2      0      3      Y
//
To parse a TRANSFAC file, use
>>> handle = open("transfac.dat") >>> record = motifs.parse(handle, "TRANSFAC") >>> handle.close()
The overall version number, if available, is stored as
record.version:
>>> record.version 'EXAMPLE January 15, 2013'
Each motif in
record is an instance of the
Bio.motifs.transfac.Motif
class, which inherits both from the
Bio.motifs.Motif class and
from a Python dictionary. The dictionary uses the two-letter keys to
store any additional information about the motif:
>>> motif = record[0] >>> motif.degenerate_consensus # Using the Bio.motifs.Motif method Seq('SRACAGGTGKYG', IUPACAmbiguousDNA()) >>> motif['ID'] # Using motif as a dictionary 'motif1'
TRANSFAC files are typically much more elaborate than this example, containing lots of additional information about the motif. Table 14.2.3 lists the two-letter field codes that are commonly found in TRANSFAC files:
Each motif also has an attribute
.references containing the
references associated with the motif, using these two-letter keys:
Printing the motifs writes them out in their native TRANSFAC format:
>>> print(record) VV EXAMPLE January 15, 2013 XX // ID motif1 XX XX // ID motif2 XX P0 A C G T 01 2 1 2 0 R 02 1 2 2 0 S 03 0 5 0 0 C 04 3 0 1 1 A 05 0 0 4 1 G 06 5 0 0 0 A 07 0 1 4 0 G 08 0 0 5 0 G 09 0 0 0 5 T 10 0 2 0 3 Y XX // <BLANKLINE>
You can export the motifs in the TRANSFAC format by capturing this output in a string and saving it in a file:
>>> text = str(record) >>> handle = open("mytransfacfile.dat", 'w') >>> handle.write(text) >>> handle.close()
Speaking of exporting, let’s look at export functions in general.
We can use the
format method to write the motif in the simple JASPAR
pfm format:
>>> print(arnt.format("pfm")) 4.00 19.00 0.00 0.00 0.00 0.00 16.00 0.00 20.00 0.00 0.00 0.00 0.00 1.00 0.00 20.00 0.00 20.00 0.00 0.00 0.00 0.00 20.00 0.00
Similarly, we can use
format to write the motif in the JASPAR
jaspar format:
>>> print(arnt.format("jaspar")) >MA0004.1 Arnt A [ 4.00 19.00 0.00 0.00 0.00 0.00] C [ 16.00 0.00 20.00 0.00 0.00 0.00] G [ 0.00 1.00 0.00 20.00 0.00 20.00] T [ 0.00 0.00 0.00 0.00 20.00 0.00]
To write the motif in a TRANSFAC-like matrix format, use
>>> print(m.format("transfac")) P0 A C G T 01 3 0 0 4 W 02 7 0 0 0 A 03 0 5 0 2 C 04 2 2 3 0 V 05 1 6 0 0 C XX // <BLANKLINE>
To write out multiple motifs, you can use
motifs.write.
This function can be used regardless of whether the motifs originated from a TRANSFAC file. For example,
>>> two_motifs = [arnt, srf] >>> print(motifs.write(two_motifs, 'transfac')) P0 A C G T 01 4 16 0 0 C 02 19 0 1 0 A 03 0 20 0 0 C 04 0 0 20 0 G 05 0 0 0 20 T 06 0 0 20 0 G XX // P0 A C G T 01 2 1 39 4 G 02 9 33 2 2 C 03 0 45 1 0 C 04 1 45 0 0 C 05 32 1 0 13 A 06 3 1 0 42 T 07 46 0 0 0 A 08 1 0 0 45 T 09 43 0 0 3 A 10 15 1 0 30 T 11 2 0 44 0 G 12 2 1 43 0 G XX // <BLANKLINE>
Or, to write multiple motifs in the
jaspar format:
>>> two_motifs = [arnt, mef2a] >>> print(motifs.write(two_motifs, "jaspar")) >MA0004.1 Arnt A [ 4.00 19.00 0.00 0.00 0.00 0.00] C [ 16.00 0.00 20.00 0.00 0.00 0.00] G [ 0.00 1.00 0.00 20.00 0.00 20.00] T [ 0.00 0.00 0.00 0.00 20.00 0.00] >MA0052.1 MEF2A A [ 1.00 0.00 57.00 2.00 9.00 6.00 37.00 2.00 56.00 6.00] C [ 50.00 0.00 1.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00] G [ 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00 50.00] T [ 7.00 58.00 0.00 55.00 49.00 52.00 21.00 56.00 0.00 2.00]
The
.counts attribute of a Motif object shows how often each
nucleotide appeared at each position along the alignment. We can
normalize this matrix by dividing by the number of instances in the
alignment, resulting in the probability of each nucleotide at each
position along the alignment. We refer to these probabilities as
the position-weight matrix. However, beware that in the literature
this term may also be used to refer to the position-specific scoring
matrix, which we discuss below.
Usually, pseudocounts are added to each position before normalizing.
This avoids overfitting of the position-weight matrix to the limited
number of motif instances in the alignment, and can also prevent
probabilities from becoming zero. To add a fixed pseudocount to all
nucleotides at all positions, specify a number for the
pseudocounts argument:
>>> pwm = m.counts.normalize(pseudocounts=0.5) >>> print(pwm) 0 1 2 3 4 A: 0.39 0.83 0.06 0.28 0.17 C: 0.06 0.06 0.61 0.28 0.72 G: 0.06 0.06 0.06 0.39 0.06 T: 0.50 0.06 0.28 0.06 0.06 <BLANKLINE>
Alternatively,
pseudocounts can be a dictionary specifying the
pseudocounts for each nucleotide. For example, as the GC content of
the human genome is about 40%, you may want to choose the
pseudocounts accordingly:
>>> pwm = m.counts.normalize(pseudocounts={'A':0.6, 'C': 0.4, 'G': 0.4, 'T': 0.6}) >>> print(pwm) 0 1 2 3 4 A: 0.40 0.84 0.07 0.29 0.18 C: 0.04 0.04 0.60 0.27 0.71 G: 0.04 0.04 0.04 0.38 0.04 T: 0.51 0.07 0.29 0.07 0.07 <BLANKLINE>
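The normalization itself is simple arithmetic: each count gets its nucleotide’s pseudocount added, and everything is divided by the column total (the number of instances plus the sum of all pseudocounts). Reproducing the first column of the matrix above in plain Python:

```python
# Reproduce the first column of the position-weight matrix by hand.
# Counts at position 0 of the example motif: A=3, C=0, G=0, T=4 (7 instances).
counts = {"A": 3, "C": 0, "G": 0, "T": 4}
pseudocounts = {"A": 0.6, "C": 0.4, "G": 0.4, "T": 0.6}

total = sum(counts.values()) + sum(pseudocounts.values())  # 7 + 2.0 = 9.0
pwm_column = {nt: (counts[nt] + pseudocounts[nt]) / total for nt in "ACGT"}
print(round(pwm_column["A"], 2))  # 0.4, matching the printed matrix
```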
The position-weight matrix has its own methods to calculate the consensus, anticonsensus, and degenerate consensus sequences:
>>> pwm.consensus Seq('TACGC', IUPACUnambiguousDNA()) >>> pwm.anticonsensus Seq('GGGTG', IUPACUnambiguousDNA()) >>> pwm.degenerate_consensus Seq('WACNC', IUPACAmbiguousDNA())
Note that due to the pseudocounts, the degenerate consensus sequence calculated from the position-weight matrix is slightly different from the degenerate consensus sequence calculated from the instances in the motif:
>>> m.degenerate_consensus Seq('WACVC', IUPACAmbiguousDNA())
The reverse complement of the position-weight matrix can be calculated directly from the
pwm:
>>> rpwm = pwm.reverse_complement() >>> print(rpwm) 0 1 2 3 4 A: 0.07 0.07 0.29 0.07 0.51 C: 0.04 0.38 0.04 0.04 0.04 G: 0.71 0.27 0.60 0.04 0.04 T: 0.18 0.29 0.07 0.84 0.40 <BLANKLINE>
Using the background distribution and PWM with pseudo-counts added,
it’s easy to compute the log-odds ratios, telling us what are the log
odds of a particular symbol to be coming from a motif against the
background. We can use the
.log_odds() method on the position-weight
matrix:
>>> pssm = pwm.log_odds() >>> print(pssm) 0 1 2 3 4 A: 0.68 1.76 -1.91 0.21 -0.49 C: -2.49 -2.49 1.26 0.09 1.51 G: -2.49 -2.49 -2.49 0.60 -2.49 T: 1.03 -1.91 0.21 -1.91 -1.91 <BLANKLINE>
Here we can see positive values for symbols more frequent in the motif than in the background and negative for symbols more frequent in the background. 0.0 means that it’s equally likely to see a symbol in the background and in the motif.
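Each PSSM entry is just log2 of the motif probability over the background probability. For example, the 0.68 printed for A at the first position follows from the position-weight matrix above (0.40 for A) and a uniform background of 0.25:

```python
import math

# Log-odds score of one PWM entry against a uniform background.
p_motif = 0.4        # probability of A at position 0 (from the PWM above)
p_background = 0.25  # uniform background

score = math.log2(p_motif / p_background)
print(round(score, 2))  # 0.68, matching the printed PSSM
```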
This assumes that A, C, G, and T are equally likely in the background. To
calculate the position-specific scoring matrix against a background with
unequal probabilities for A, C, G, T, use the
background argument.
For example, against a background with a 40% GC content, use
>>> background = {'A':0.3,'C':0.2,'G':0.2,'T':0.3} >>> pssm = pwm.log_odds(background) >>> print(pssm) 0 1 2 3 4 A: 0.42 1.49 -2.17 -0.05 -0.75 C: -2.17 -2.17 1.58 0.42 1.83 G: -2.17 -2.17 -2.17 0.92 -2.17 T: 0.77 -2.17 -0.05 -2.17 -2.17 <BLANKLINE>
The maximum and minimum score obtainable from the PSSM are stored in the
.max and
.min properties:
>>> print("%4.2f" % pssm.max) 6.59 >>> print("%4.2f" % pssm.min) -10.85
The mean and standard deviation of the PSSM scores with respect to a specific
background are calculated by the
.mean and
.std methods.
>>> mean = pssm.mean(background) >>> std = pssm.std(background) >>> print("mean = %0.2f, standard deviation = %0.2f" % (mean, std)) mean = 3.21, standard deviation = 2.59
A uniform background is used if
background is not specified.
The mean is particularly important, as its value is equal to the
Kullback-Leibler divergence or relative entropy, and is a measure for the
information content of the motif compared to the background. As in Biopython
the base-2 logarithm is used in the calculation of the log-odds scores, the
information content has units of bits.
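This equivalence can be checked by hand: summing p · log2(p/b) over all nucleotides and positions of the position-weight matrix reproduces the mean reported above. A plain-Python sketch using the example motif’s counts, the pseudocounts and the 40% GC background from earlier:

```python
import math

# Relative entropy (Kullback-Leibler divergence) of the example motif,
# computed directly from its counts; matches pssm.mean(background).
counts = {"A": [3, 7, 0, 2, 1], "C": [0, 0, 5, 2, 6],
          "G": [0, 0, 0, 3, 0], "T": [4, 0, 2, 0, 0]}
pseudocounts = {"A": 0.6, "C": 0.4, "G": 0.4, "T": 0.6}
background = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}

total = 7 + sum(pseudocounts.values())  # instances + pseudocounts = 9.0
mean = 0.0
for nt in "ACGT":
    for count in counts[nt]:
        p = (count + pseudocounts[nt]) / total
        mean += p * math.log2(p / background[nt])

print(round(mean, 2))  # 3.21, the value reported by pssm.mean(background)
```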
The
.reverse_complement,
.consensus,
.anticonsensus, and
.degenerate_consensus methods can be applied directly to PSSM objects.
The most frequent use for a motif is to find its instances in some sequence. For the sake of this section, we will use an artificial sequence like this:
>>> test_seq=Seq("TACACTGCATTACAACCCAAGCATTA", m.alphabet) >>> len(test_seq) 26
The simplest way to find instances, is to look for exact matches of the true instances of the motif:
>>> for pos, seq in m.instances.search(test_seq): ... print("%i %s" % (pos, seq)) ... 0 TACAC 10 TACAA 13 AACCC
We can do the same with the reverse complement (to find instances on the complementary strand):
>>> for pos, seq in r.instances.search(test_seq): ... print("%i %s" % (pos, seq)) ... 6 GCATT 20 GCATT
It’s just as easy to look for positions giving rise to high log-odds scores against our motif:
>>> for position, score in pssm.search(test_seq, threshold=3.0): ... print("Position %d: score = %5.3f" % (position, score)) ... Position 0: score = 5.622 Position -20: score = 4.601 Position 10: score = 3.037 Position 13: score = 5.738 Position -6: score = 4.601
The negative positions refer to instances of the motif found on the
reverse strand of the test sequence, and follow the Python convention
on negative indices. Therefore, the instance of the motif at
pos
is located at
test_seq[pos:pos+len(m)] both for positive and for
negative values of
pos.
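As a quick check of this convention, plain Python slicing reproduces the matched words for the positions reported above (here using the test sequence as an ordinary string, so no Biopython is needed):

```python
# A negative position refers to a hit on the reverse strand, but the slice
# test_seq[pos:pos+motif_length] still extracts the matched word.
test_seq = "TACACTGCATTACAACCCAAGCATTA"  # same sequence as above, as a plain string
motif_length = 5

for pos in (0, -20, 10, 13, -6):  # positions reported by pssm.search above
    word = test_seq[pos:pos + motif_length]
    strand = "+" if pos >= 0 else "-"
    print("%3d (%s): %s" % (pos, strand, word))
```

Note that the reverse-strand hits at positions −20 and −6 extract the word GCATT on the forward strand; the motif itself matches its reverse complement there.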
You may notice the threshold parameter, here set arbitrarily to 3.0. Since this is a log2 score, we are now looking only for words that are eight times more likely to occur under the motif model than in the background. The default threshold is 0.0, which selects everything that looks more like the motif than the background.
You can also calculate the scores at all positions along the sequence:
>>> pssm.calculate(test_seq)
array([  5.62230396,  -5.6796999 ,  -3.43177247,   0.93827754,
        -6.84962511,  -2.04066086, -10.84962463,  -3.65614533,
        -0.03370807,  -3.91102552,   3.03734159,  -2.14918518,
        -0.6016975 ,   5.7381525 ,  -0.50977498,  -3.56422281,
        -8.73414803,  -0.09919716,  -0.6016975 ,  -2.39429784,
       -10.84962463,  -3.65614533], dtype=float32)
In general, this is the fastest way to calculate PSSM scores.
The scores returned by
pssm.calculate are for the forward strand
only. To obtain the scores on the reverse strand, you can take the reverse
complement of the PSSM:
>>> rpssm = pssm.reverse_complement()
>>> rpssm.calculate(test_seq)
array([ -9.43458748,  -3.06172252,  -7.18665981,  -7.76216221,
        -2.04066086,  -4.26466274,   4.60124254,  -4.2480607 ,
        -8.73414803,  -2.26503372,  -6.49598789,  -5.64668512,
        -8.73414803, -10.84962463,  -4.82356262,  -4.82356262,
        -5.64668512,  -8.73414803,  -4.15613794,  -5.6796999 ,
         4.60124254,  -4.2480607 ], dtype=float32)
If you want to use a less arbitrary way of selecting thresholds, you can explore the distribution of PSSM scores. Since the space for a score distribution grows exponentially with motif length, we are using an approximation with a given precision to keep computation cost manageable:
>>> distribution = pssm.distribution(background=background, precision=10**4)
The
distribution object can be used to determine a number of different thresholds.
We can specify the requested false-positive rate (probability of “finding” a motif instance in background generated sequence):
>>> threshold = distribution.threshold_fpr(0.01)
>>> print("%5.3f" % threshold)
4.009
or the false-negative rate (probability of “not finding” an instance generated from the motif):
>>> threshold = distribution.threshold_fnr(0.1)
>>> print("%5.3f" % threshold)
-0.510
or a threshold (approximately) satisfying some relation between the false-positive rate and the false-negative rate (fnr/fpr ≃ t):
>>> threshold = distribution.threshold_balanced(1000)
>>> print("%5.3f" % threshold)
6.241
or a threshold satisfying (roughly) the equality between the false-positive rate and the −log of the information content (as used in patser software by Hertz and Stormo):
>>> threshold = distribution.threshold_patser()
>>> print("%5.3f" % threshold)
0.346
For example, in the case of our motif, you can get a threshold giving you exactly the same results (for this sequence) as searching for instances with the balanced threshold with a rate of 1000.
>>> threshold = distribution.threshold_fpr(0.01)
>>> print("%5.3f" % threshold)
4.009
>>> for position, score in pssm.search(test_seq, threshold=threshold):
...     print("Position %d: score = %5.3f" % (position, score))
...
Position 0: score = 5.622
Position -20: score = 4.601
Position 13: score = 5.738
Position -6: score = 4.601
To facilitate searching for potential TFBSs using PSSMs, both the position-weight matrix and the position-specific scoring matrix are associated with each motif. Using the Arnt motif as an example:
>>> from Bio import motifs
>>> with open("Arnt.sites") as handle:
...     motif = motifs.read(handle, 'sites')
...
>>> print(motif.counts)
        0      1      2      3      4      5
A:   4.00  19.00   0.00   0.00   0.00   0.00
C:  16.00   0.00  20.00   0.00   0.00   0.00
G:   0.00   1.00   0.00  20.00   0.00  20.00
T:   0.00   0.00   0.00   0.00  20.00   0.00
<BLANKLINE>
>>> print(motif.pwm)
        0      1      2      3      4      5
A:   0.20   0.95   0.00   0.00   0.00   0.00
C:   0.80   0.00   1.00   0.00   0.00   0.00
G:   0.00   0.05   0.00   1.00   0.00   1.00
T:   0.00   0.00   0.00   0.00   1.00   0.00
<BLANKLINE>
>>> print(motif.pssm)
        0      1      2      3      4      5
A:  -0.32   1.93   -inf   -inf   -inf   -inf
C:   1.68   -inf   2.00   -inf   -inf   -inf
G:   -inf  -2.32   -inf   2.00   -inf   2.00
T:   -inf   -inf   -inf   -inf   2.00   -inf
<BLANKLINE>
The negative infinities appear here because the corresponding entry in the frequency matrix is 0, and we are using zero pseudocounts by default:
>>> for letter in "ACGT":
...     print("%s: %4.2f" % (letter, motif.pseudocounts[letter]))
...
A: 0.00
C: 0.00
G: 0.00
T: 0.00
If you change the
.pseudocounts attribute, the position-weight matrix and the position-specific scoring matrix are recalculated automatically:
>>> motif.pseudocounts = 3.0
>>> for letter in "ACGT":
...     print("%s: %4.2f" % (letter, motif.pseudocounts[letter]))
...
A: 3.00
C: 3.00
G: 3.00
T: 3.00
>>> print(motif.pwm)
        0      1      2      3      4      5
A:   0.22   0.69   0.09   0.09   0.09   0.09
C:   0.59   0.09   0.72   0.09   0.09   0.09
G:   0.09   0.12   0.09   0.72   0.09   0.72
T:   0.09   0.09   0.09   0.09   0.72   0.09
<BLANKLINE>
>>> print(motif.pssm)
        0      1      2      3      4      5
A:  -0.19   1.46  -1.42  -1.42  -1.42  -1.42
C:   1.25  -1.42   1.52  -1.42  -1.42  -1.42
G:  -1.42  -1.00  -1.42   1.52  -1.42   1.52
T:  -1.42  -1.42  -1.42  -1.42   1.52  -1.42
<BLANKLINE>
You can also set the
.pseudocounts to a dictionary over the four nucleotides if you want to use different pseudocounts for them. Setting
motif.pseudocounts to
None resets it to its default value of zero.
The position-specific scoring matrix depends on the background distribution, which is uniform by default:
>>> for letter in "ACGT":
...     print("%s: %4.2f" % (letter, motif.background[letter]))
...
A: 0.25
C: 0.25
G: 0.25
T: 0.25
Again, if you modify the background distribution, the position-specific scoring matrix is recalculated:
>>> motif.background = {'A': 0.2, 'C': 0.3, 'G': 0.3, 'T': 0.2}
>>> print(motif.pssm)
        0      1      2      3      4      5
A:   0.13   1.78  -1.09  -1.09  -1.09  -1.09
C:   0.98  -1.68   1.26  -1.68  -1.68  -1.68
G:  -1.68  -1.26  -1.68   1.26  -1.68   1.26
T:  -1.09  -1.09  -1.09  -1.09   1.85  -1.09
<BLANKLINE>
Setting
motif.background to
None resets it to a uniform distribution:
>>> motif.background = None
>>> for letter in "ACGT":
...     print("%s: %4.2f" % (letter, motif.background[letter]))
...
A: 0.25
C: 0.25
G: 0.25
T: 0.25
If you set
motif.background equal to a single value, it will be interpreted as the GC content:
>>> motif.background = 0.8
>>> for letter in "ACGT":
...     print("%s: %4.2f" % (letter, motif.background[letter]))
...
A: 0.10
C: 0.40
G: 0.40
T: 0.10
Note that you can now calculate the mean of the PSSM scores over the background against which it was computed:
>>> print("%f" % motif.pssm.mean(motif.background))
4.703928
as well as its standard deviation:
>>> print("%f" % motif.pssm.std(motif.background))
3.290900
and its distribution:
>>> distribution = motif.pssm.distribution(background=motif.background)
>>> threshold = distribution.threshold_fpr(0.01)
>>> print("%f" % threshold)
3.854375
Note that the position-weight matrix and the position-specific scoring matrix are recalculated each time you call
motif.pwm or
motif.pssm, respectively. If speed is an issue and you want to use the PWM or PSSM repeatedly, you can save them as a variable, as in
>>> pssm = motif.pssm
Once we have more than one motif, we might want to compare them.
Before we start comparing motifs, I should point out that motif boundaries are usually quite arbitrary. This means we often need to compare motifs of different lengths, so comparison needs to involve some kind of alignment. This means we have to take into account two things:
To align the motifs, we use ungapped alignment of PSSMs and substitute zeros for any missing columns at the beginning and end of the matrices. This means that effectively we are using the background distribution for columns missing from the PSSM. The distance function then returns the minimal distance between motifs, as well as the corresponding offset in their alignment.
To give an example, let us first load another motif,
which is similar to our test motif
m:
>>> with open("REB1.pfm") as handle:
...     m_reb1 = motifs.read(handle, "pfm")
...
>>> m_reb1.consensus
Seq('GTTACCCGG', IUPACUnambiguousDNA())
>>> print(m_reb1.counts)
        0      1      2      3      4      5      6      7      8
A:  30.00   0.00   0.00 100.00   0.00   0.00   0.00   0.00  15.00
C:  10.00   0.00   0.00   0.00 100.00 100.00 100.00   0.00  15.00
G:  50.00   0.00   0.00   0.00   0.00   0.00   0.00  60.00  55.00
T:  10.00 100.00 100.00   0.00   0.00   0.00   0.00  40.00  15.00
<BLANKLINE>
To make the motifs comparable, we choose the same values for the pseudocounts and the background distribution as our motif
m:
>>> m_reb1.pseudocounts = {'A': 0.6, 'C': 0.4, 'G': 0.4, 'T': 0.6}
>>> m_reb1.background = {'A': 0.3, 'C': 0.2, 'G': 0.2, 'T': 0.3}
>>> pssm_reb1 = m_reb1.pssm
>>> print(pssm_reb1)
        0      1      2      3      4      5      6      7      8
A:   0.00  -5.67  -5.67   1.72  -5.67  -5.67  -5.67  -5.67  -0.97
C:  -0.97  -5.67  -5.67  -5.67   2.30   2.30   2.30  -5.67  -0.41
G:   1.30  -5.67  -5.67  -5.67  -5.67  -5.67  -5.67   1.57   1.44
T:  -1.53   1.72   1.72  -5.67  -5.67  -5.67  -5.67   0.41  -0.97
<BLANKLINE>
We’ll compare these motifs using the Pearson correlation. Since we want it to resemble a distance measure, we actually take 1−r, where r is the Pearson correlation coefficient (PCC):
>>> distance, offset = pssm.dist_pearson(pssm_reb1)
>>> print("distance = %5.3g" % distance)
distance = 0.239
>>> print(offset)
-2
This means that the best PCC between motif
m and
m_reb1 is obtained with the following alignment:
m:      bbTACGCbb
m_reb1: GTTACCCGG
where
b stands for background distribution. The PCC itself is
roughly 1−0.239=0.761.
Currently, Biopython has only limited support for de novo motif finding. Namely, we support running and parsing of AlignAce and MEME. Since the number of motif finding tools is growing rapidly, contributions of new parsers are welcome.
Let’s assume you have run MEME on sequences of your choice with your
favorite parameters and saved the output in the file
meme.out. You can retrieve the motifs reported by MEME by
running the following piece of code:
>>> from Bio import motifs
>>> with open("meme.out") as handle:
...     motifsM = motifs.parse(handle, "meme")
...
>>> motifsM
[<Bio.motifs.meme.Motif object at 0xc356b0>]
Besides the most wanted list of motifs, the result object contains more useful information, accessible through properties with self-explanatory names:
.alphabet
.datafile
.sequence_names
.version
.command
The motifs returned by the MEME parser can be treated exactly like regular Motif objects (with instances); they also provide some extra functionality, by adding additional information about the instances.
>>> motifsM[0].consensus
Seq('CTCAATCGTA', IUPACUnambiguousDNA())
>>> motifsM[0].instances[0].sequence_name
'SEQ10;'
>>> motifsM[0].instances[0].start
3
>>> motifsM[0].instances[0].strand
'+'
>>> motifsM[0].instances[0].pvalue
8.71e-07
We can do very similar things with the AlignACE program. Assume you have
your output in the file
alignace.out. You can parse your output
with the following code:
>>> from Bio import motifs
>>> with open("alignace.out") as handle:
...     motifsA = motifs.parse(handle, "alignace")
...
Again, your motifs behave as they should:
>>> motifsA[0].consensus
Seq('TCTACGATTGAG', IUPACUnambiguousDNA())
In fact you can even see that AlignAce found a motif very similar to the MEME one; it is just a longer version of the reverse complement of the MEME motif:
>>> motifsM[0].reverse_complement().consensus
Seq('TACGATTGAG', IUPACUnambiguousDNA())
If you have AlignAce installed on the same machine, you can also run it directly from Biopython. A short example of how this can be done is shown below (other parameters can be specified as keyword parameters):
>>> command = "/opt/bin/AlignACE"
>>> input_file = "test.fa"
>>> from Bio.motifs.applications import AlignAceCommandline
>>> cmd = AlignAceCommandline(cmd=command, input=input_file, gcback=0.6, numcols=10)
>>> stdout, stderr = cmd()
Since AlignAce prints all of its output to standard output, you can get to your motifs by parsing the first part of the result:
>>> motifs = motifs.parse(stdout, "alignace")
Cluster analysis is the grouping of items into clusters based on the similarity of the items to each other. In bioinformatics, clustering is widely used in gene expression data analysis to find groups of genes with similar gene expression profiles. This may identify functionally related genes, as well as suggest the function of presently unknown genes.
The Biopython module
Bio.Cluster provides commonly used clustering algorithms and was designed with the application to gene expression data in mind. However, this module can also be used for cluster analysis of other types of data.
Bio.Cluster and the underlying C Clustering Library are described by De Hoon et al. [14].
The following four clustering approaches are implemented in
Bio.Cluster:
The data to be clustered are represented by a n × m Numerical Python array
data. Within the context of gene expression data clustering, typically the rows correspond to different genes whereas the columns correspond to different experimental conditions. The clustering algorithms in
Bio.Cluster can be applied both to rows (genes) and to columns (experiments).
Often in microarray experiments, some of the data values are missing, which is indicated by an additional n × m Numerical Python integer array
mask. If
mask[i,j]==0, then
data[i,j] is missing and is ignored in the analysis.
The k-means/medians/medoids clustering algorithms and Self-Organizing Maps (SOMs) include the use of a random number generator. The uniform random number generator in
Bio.Cluster is based on the algorithm by L’Ecuyer [25], while random numbers following the binomial distribution are generated using the BTPE algorithm by Kachitvichyanukul and Schmeiser [23]. The random number generator is initialized automatically during its first call. As this random number generator uses a combination of two multiplicative linear congruential generators, two (integer) seeds are needed for initialization, for which we use the system-supplied random number generator
rand (in the C standard library). We initialize this generator by calling
srand with the epoch time in seconds, and use the first two random numbers generated by
rand as seeds for the uniform random number generator in
Bio.Cluster.
In order to cluster items into groups based on their similarity, we should first define what exactly we mean by similar.
Bio.Cluster provides eight distance functions, indicated by a single character, to measure similarity, or conversely, distance:
'e': Euclidean distance;
'b': City-block distance.
'c': Pearson correlation coefficient;
'a': Absolute value of the Pearson correlation coefficient;
'u': Uncentered Pearson correlation (equivalent to the cosine of the angle between two data vectors);
'x': Absolute uncentered Pearson correlation;
's': Spearman’s rank correlation;
'k': Kendall’s τ.
The first two are true distance functions that satisfy the triangle inequality

d(u,v) ≤ d(u,w) + d(w,v) for all u, v, w,

and are therefore referred to as metrics. In everyday language, this means that the shortest distance between two points is a straight line.
The remaining six distance measures are related to the correlation coefficient, where the distance d is defined in terms of the correlation r by d=1−r. Note that these distance functions are semi-metrics that do not satisfy the triangle inequality. For example, for

u = (1, 0, −1);  v = (1, 1, 0);  w = (0, 1, 1);

we find a Pearson distance d(u,w) = 1.8660, while d(u,v)+d(v,w) = 1.6340.
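These numbers can be checked directly. The sketch below uses the vectors u = (1, 0, −1), v = (1, 1, 0), w = (0, 1, 1), chosen here because they reproduce the two distances quoted above (an assumption, as an illustration only):

```python
import math

def pearson_distance(x, y):
    """Pearson distance d = 1 - r, with r the Pearson correlation coefficient."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return 1.0 - cov / (sx * sy)

# Example vectors (an assumption; they reproduce the distances quoted above).
u = (1, 0, -1)
v = (1, 1, 0)
w = (0, 1, 1)

print("d(u,w)        = %.4f" % pearson_distance(u, w))  # 1.8660
print("d(u,v)+d(v,w) = %.4f" % (pearson_distance(u, v) + pearson_distance(v, w)))  # 1.6340
```

Since d(u,w) exceeds d(u,v)+d(v,w), the triangle inequality indeed fails for the Pearson distance.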
In
Bio.Cluster, we define the Euclidean distance as

d = (1/n) ∑i (xi − yi)²

Only those terms are included in the summation for which both xi and yi are present, and the denominator n is chosen accordingly. As the expression data xi and yi are subtracted directly from each other, we should make sure that the expression data are properly normalized when using the Euclidean distance.
The city-block distance, alternatively known as the Manhattan distance, is related to the Euclidean distance. Whereas the Euclidean distance corresponds to the length of the shortest path between two points, the city-block distance is the sum of distances along each dimension. As gene expression data tend to have missing values, in
Bio.Cluster we define the city-block distance as the sum of distances divided by the number of dimensions:

d = (1/n) ∑i |xi − yi|
This is equal to the distance you would have to walk between two points in a city, where you have to walk along city blocks. As for the Euclidean distance, the expression data are subtracted directly from each other, and we should therefore make sure that they are properly normalized.
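The two definitions can be sketched directly in Python. This is an illustration of the formulas above (including the handling of missing values via a mask), not the Bio.Cluster implementation; note in particular that this Euclidean "distance" is the mean squared difference, without a square root, and weights are omitted:

```python
def euclidean(x, y, mask_x, mask_y):
    # Sum over dimensions where both values are present; divide by that count.
    terms = [(a - b) ** 2 for a, b, ma, mb in zip(x, y, mask_x, mask_y) if ma and mb]
    return sum(terms) / len(terms)

def cityblock(x, y, mask_x, mask_y):
    # Same masking rule, but summing absolute differences.
    terms = [abs(a - b) for a, b, ma, mb in zip(x, y, mask_x, mask_y) if ma and mb]
    return sum(terms) / len(terms)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 1.0, 3.0]
mask_x = [1, 1, 1, 0]   # the last value of x is missing
mask_y = [1, 1, 1, 1]

print(euclidean(x, y, mask_x, mask_y))  # (1 + 4 + 4) / 3 = 3.0
print(cityblock(x, y, mask_x, mask_y))  # (1 + 2 + 2) / 3
```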
The Pearson correlation coefficient is defined as

r = (1/n) ∑i ((xi − x̄)/σx) ((yi − ȳ)/σy)
in which x̄, ȳ are the sample means of x and y respectively, and σx, σy are the sample standard deviations of x and y. The Pearson correlation coefficient is a measure for how well a straight line can be fitted to a scatterplot of x and y. If all the points in the scatterplot lie on a straight line, the Pearson correlation coefficient is either +1 or -1, depending on whether the slope of the line is positive or negative. If the Pearson correlation coefficient is equal to zero, there is no correlation between x and y.
The Pearson distance is then defined as

dP = 1 − r.
As the Pearson correlation coefficient lies between -1 and 1, the Pearson distance lies between 0 and 2.
By taking the absolute value of the Pearson correlation, we find a number between 0 and 1. If the absolute value is 1, all the points in the scatter plot lie on a straight line with either a positive or a negative slope. If the absolute value is equal to zero, there is no correlation between x and y.
The corresponding distance is defined as

dA = 1 − |r|
where r is the Pearson correlation coefficient. As the absolute value of the Pearson correlation coefficient lies between 0 and 1, the corresponding distance lies between 0 and 1 as well.
In the context of gene expression experiments, the absolute correlation is equal to 1 if the gene expression profiles of two genes are either exactly the same or exactly opposite. The absolute correlation coefficient should therefore be used with care.
In some cases, it may be preferable to use the uncentered correlation instead of the regular Pearson correlation coefficient. The uncentered correlation is defined as

rU = (1/n) ∑i (xi/σx(0)) (yi/σy(0))

where

σx(0)² = (1/n) ∑i xi²  and  σy(0)² = (1/n) ∑i yi²
This is the same expression as for the regular Pearson correlation coefficient, except that the sample means x̄, ȳ are set equal to zero. The uncentered correlation may be appropriate if there is a zero reference state. For instance, in the case of gene expression data given in terms of log-ratios, a log-ratio equal to zero corresponds to the green and red signal being equal, which means that the experimental manipulation did not affect the gene expression.
The distance corresponding to the uncentered correlation coefficient is defined as

dU = 1 − rU
where rU is the uncentered correlation. As the uncentered correlation coefficient lies between -1 and 1, the corresponding distance lies between 0 and 2.
The uncentered correlation is equal to the cosine of the angle of the two data vectors in n-dimensional space, and is often referred to as such.
As for the regular Pearson correlation, we can define a distance measure using the absolute value of the uncentered correlation:

dAU = 1 − |rU|
where rU is the uncentered correlation coefficient. As the absolute value of the uncentered correlation coefficient lies between 0 and 1, the corresponding distance lies between 0 and 1 as well.
Geometrically, the absolute value of the uncentered correlation is equal to the cosine between the supporting lines of the two data vectors (i.e., the angle without taking the direction of the vectors into consideration).
The Spearman rank correlation is an example of a non-parametric similarity measure, and tends to be more robust against outliers than the Pearson correlation.
To calculate the Spearman rank correlation, we replace each data value by its rank, obtained by ordering the data in each vector by value. We then calculate the Pearson correlation between the two rank vectors instead of the data vectors.
As in the case of the Pearson correlation, we can define a distance measure corresponding to the Spearman rank correlation as

dS = 1 − rS
where rS is the Spearman rank correlation.
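A sketch of this two-step procedure (rank transformation, then Pearson correlation on the ranks; tied values receive their average rank) follows. It illustrates the definition only, not the Bio.Cluster implementation:

```python
import math

def ranks(values):
    # 1-based ranks, with ties replaced by their average rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_distance(x, y):
    # Pearson correlation of the rank vectors, turned into a distance d = 1 - r.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return 1.0 - cov / (sx * sy)

print(spearman_distance([1, 2, 3, 4], [10, 20, 30, 40]))  # 0.0: perfect monotonic agreement
print(spearman_distance([1, 2, 3, 4], [40, 30, 20, 10]))  # 2.0: perfect disagreement
```

Because only the ranks enter the calculation, a single extreme outlier changes the result far less than it would for the Pearson correlation on the raw values.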
Kendall’s τ is another example of a non-parametric similarity measure. It is similar to the Spearman rank correlation, but instead of the ranks themselves only the relative ranks are used to calculate τ (see Snedecor & Cochran [29]).
We can define a distance measure corresponding to Kendall’s τ as

dK = 1 − τ
As Kendall’s τ is always between -1 and 1, the corresponding distance will be between 0 and 2.
For most of the distance functions available in
Bio.Cluster, a weight vector can be applied. The weight vector contains weights for the items in the data vector. If the weight for item i is wi, then that item is treated as if it occurred wi times in the data. The weights do not have to be integers.
For the Spearman rank correlation and Kendall’s
τ,
weights do not have a well-defined meaning and are therefore not implemented.
The distance matrix is a square matrix with all pairwise distances between the items in
data, and can be calculated by the function
distancematrix in the
Bio.Cluster module:
>>> from Bio.Cluster import distancematrix
>>> matrix = distancematrix(data)
where the following arguments are defined:

data (required)
  Array containing the data for the items.
mask (default: None)
  Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
weight (default: None)
  The weights to be used when calculating distances. If weight==None, then equal weights are assumed.
transpose (default: 0)
  Determines if the distances between the rows of data are to be calculated (transpose==0), or between the columns of data (transpose==1).
dist (default: 'e', Euclidean distance)
  Defines the distance function to be used (one of the eight distance measures described above).
To save memory, the distance matrix is returned as a list of 1D arrays. The number of columns in each row is equal to the row number. Hence, the first row has zero elements. An example of the return value is
[array([]), array([1.]), array([7., 3.]), array([4., 2., 6.])]
This corresponds to the distance matrix

0.0  1.0  7.0  4.0
1.0  0.0  3.0  2.0
7.0  3.0  0.0  6.0
4.0  2.0  6.0  0.0
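The expansion from the ragged lower-triangular form to the full symmetric matrix can be sketched with a small helper (as_square is a hypothetical name, not part of Bio.Cluster; plain lists stand in for NumPy arrays):

```python
def as_square(ragged):
    # Expand the ragged lower-triangular rows returned by distancematrix
    # into a full symmetric matrix with zeros on the diagonal.
    n = len(ragged)
    full = [[0.0] * n for _ in range(n)]
    for i, row in enumerate(ragged):
        for j, d in enumerate(row):
            full[i][j] = d
            full[j][i] = d
    return full

ragged = [[], [1.0], [7.0, 3.0], [4.0, 2.0, 6.0]]  # same shape as the example above
for row in as_square(ragged):
    print(row)
# [0.0, 1.0, 7.0, 4.0]
# [1.0, 0.0, 3.0, 2.0]
# [7.0, 3.0, 0.0, 6.0]
# [4.0, 2.0, 6.0, 0.0]
```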
The centroid of a cluster can be defined either as the mean or as the median of each dimension over all cluster items. The function
clustercentroids in
Bio.Cluster can be used to calculate either:
>>> from Bio.Cluster import clustercentroids
>>> cdata, cmask = clustercentroids(data)
where the following arguments are defined:

data (required)
  Array containing the data for the items.
mask (default: None)
  Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
clusterid (default: None)
  Vector of integers showing to which cluster each item belongs. If clusterid is None, all items are assumed to belong to the same cluster.
method (default: 'a')
  Specifies whether the arithmetic mean (method=='a') or the median (method=='m') is used to calculate the cluster center.
transpose (default: 0)
  Determines if the centroids of the rows of data are to be calculated (transpose==0), or the centroids of the columns of data (transpose==1).

This function returns the tuple (cdata, cmask). The centroid data are stored in the 2D Numerical Python array cdata, with missing data indicated by the 2D Numerical Python integer array cmask. The dimensions of these arrays are (number of clusters, number of columns) if transpose is 0, or (number of rows, number of clusters) if transpose is 1. Each row (if transpose is 0) or column (if transpose is 1) contains the averaged data corresponding to the centroid of each cluster.
Given a distance function between items, we can define the distance between two clusters in several ways. The distance between the arithmetic means of the two clusters is used in pairwise centroid-linkage clustering and in k-means clustering. In k-medoids clustering, the distance between the medians of the two clusters is used instead. The shortest pairwise distance between items of the two clusters is used in pairwise single-linkage clustering, while the longest pairwise distance is used in pairwise maximum-linkage clustering. In pairwise average-linkage clustering, the distance between two clusters is defined as the average over the pairwise distances.
To calculate the distance between two clusters, use
>>> from Bio.Cluster import clusterdistance
>>> distance = clusterdistance(data)
where the following arguments are defined:

data (required)
  Array containing the data for the items.
mask (default: None)
  Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
weight (default: None)
  The weights to be used when calculating distances. If weight==None, then equal weights are assumed.
Partitioning algorithms divide items into k clusters such that the sum of distances over the items to their cluster centers is minimal.
The number of clusters k is specified by the user.
Three partitioning algorithms are available in
Bio.Cluster:
These algorithms differ in how the cluster center is defined. In k-means clustering, the cluster center is defined as the mean data vector averaged over all items in the cluster. Instead of the mean, in k-medians clustering the median is calculated for each dimension in the data vector. Finally, in k-medoids clustering the cluster center is defined as the item which has the smallest sum of distances to the other items in the cluster. This clustering algorithm is suitable for cases in which the distance matrix is known but the original data matrix is not available, for example when clustering proteins based on their structural similarity.
The expectation-maximization (EM) algorithm is used to find this partitioning into k groups. In the initialization of the EM algorithm, we randomly assign items to clusters. To ensure that no empty clusters are produced, we use the binomial distribution to randomly choose the number of items in each cluster to be one or more. We then randomly permute the cluster assignments to items such that each item has an equal probability to be in any cluster. Each cluster is thus guaranteed to contain at least one item.
We then iterate:
To avoid clusters becoming empty during the iteration, in k-means and k-medians clustering the algorithm keeps track of the number of items in each cluster, and prohibits the last remaining item in a cluster from being reassigned to a different cluster. For k-medoids clustering, such a check is not needed, as the item that functions as the cluster centroid has a zero distance to itself, and will therefore never be closer to a different cluster.
As the initial assignment of items to clusters is done randomly, usually a different clustering solution is found each time the EM algorithm is executed. To find the optimal clustering solution, the k-means algorithm is repeated many times, each time starting from a different initial random clustering. The sum of distances of the items to their cluster center is saved for each run, and the solution with the smallest value of this sum will be returned as the overall clustering solution.
How often the EM algorithm should be run depends on the number of items being clustered. As a rule of thumb, we can consider how often the optimal solution was found; this number is returned by the partitioning algorithms as implemented in this library. If the optimal solution was found many times, it is unlikely that better solutions exist than the one that was found. However, if the optimal solution was found only once, there may well be other solutions with a smaller within-cluster sum of distances. If the number of items is large (more than several hundreds), it may be difficult to find the globally optimal solution.
The EM algorithm terminates when no further reassignments take place. We noticed that for some sets of initial cluster assignments, the EM algorithm fails to converge due to the same clustering solution reappearing periodically after a small number of iteration steps. We therefore check for the occurrence of such periodic solutions during the iteration. After a given number of iteration steps, the current clustering result is saved as a reference. By comparing the clustering result after each subsequent iteration step to the reference state, we can determine if a previously encountered clustering result is found. In such a case, the iteration is halted. If after a given number of iterations the reference state has not yet been encountered, the current clustering solution is saved to be used as the new reference state. Initially, ten iteration steps are executed before resaving the reference state. This number of iteration steps is doubled each time, to ensure that periodic behavior with longer periods can also be detected.
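The restart strategy described above can be sketched in plain Python. This is a simplified illustration, not the Bio.Cluster implementation: it caps each EM pass at a fixed number of iterations instead of detecting periodic solutions, and handles empty clusters with a crude fallback:

```python
import random

def kmeans(data, k, npass=10, seed=0):
    """Repeated EM for k-means; returns (best_assignment, best_error)."""
    rng = random.Random(seed)
    n = len(data)
    best_assign, best_error = None, float("inf")
    for _ in range(npass):
        # Random initial assignment; crudely make sure every cluster label occurs.
        assign = [rng.randrange(k) for _ in range(n)]
        for c in range(k):
            assign[rng.randrange(n)] = c
        for _ in range(100):  # cap the EM iterations of one pass
            # M-step: compute the mean data vector of each cluster.
            centers = []
            for c in range(k):
                members = [data[i] for i in range(n) if assign[i] == c]
                if not members:  # fallback for an empty cluster
                    members = [data[rng.randrange(n)]]
                centers.append([sum(col) / len(members) for col in zip(*members)])
            # E-step: reassign each item to the nearest center (squared Euclidean).
            new = [
                min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(data[i], centers[c])))
                for i in range(n)
            ]
            if new == assign:  # converged: no reassignments took place
                break
            assign = new
        error = sum(
            sum((a - b) ** 2 for a, b in zip(data[i], centers[assign[i]]))
            for i in range(n)
        )
        if error < best_error:  # keep the solution with the smallest error
            best_assign, best_error = assign, error
    return best_assign, best_error

data = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]]
assign, error = kmeans(data, 2)
print(assign, error)
```

With this well-separated toy data, every restart converges to the same partition, so the optimal solution is found in all passes; on harder data the restarts are what make the result reliable.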
The k-means and k-medians algorithms are implemented as the function
kcluster in
Bio.Cluster:
>>> from Bio.Cluster import kcluster
>>> clusterid, error, nfound = kcluster(data)
where the following arguments are defined:

data (required)
  Array containing the data for the items.
nclusters (default: 2)
  The number of clusters k.

While all eight distance measures are accepted by kcluster, from a theoretical viewpoint it is best to use the Euclidean distance for the k-means algorithm, and the city-block distance for k-medians. This function returns a tuple (clusterid, error, nfound), where clusterid is an integer array containing the number of the cluster to which each item was assigned, error is the within-cluster sum of distances for the optimal clustering solution, and nfound is the number of times this optimal solution was found.
The
kmedoids routine performs k-medoids clustering on a given set of items, using the distance matrix and the number of clusters passed by the user:
>>> from Bio.Cluster import kmedoids
>>> clusterid, error, nfound = kmedoids(distance)
where the following arguments are defined (the full signature is kmedoids(distance, nclusters=2, npass=1, initialid=None)):

distance (required)
  Matrix containing the distances between the items. This matrix can be specified in three ways: as a 2D Numerical Python array (in which case only the lower-left part of the array will be accessed):

  distance = array([[0.0, 1.1, 2.3], [1.1, 0.0, 4.5], [2.3, 4.5, 0.0]])

  as a 1D Numerical Python array containing consecutively the distances in the lower-left part of the distance matrix:

  distance = array([1.1, 2.3, 4.5])

  or as a list containing the rows of the lower-left part of the distance matrix:

  distance = [array([]), array([1.1]), array([2.3, 4.5])]

nclusters (default: 2)
  The number of clusters k.
npass (default: 1)
  The number of times the k-medoids clustering algorithm is performed, each time with a different (random) initial condition. If initialid is given, the value of npass is ignored, as the clustering algorithm behaves deterministically in that case.
initialid (default: None)
  Specifies the initial clustering to be used for the EM algorithm.

This function returns a tuple (clusterid, error, nfound), where clusterid is an array containing the number of the cluster to which each item was assigned, error is the within-cluster sum of distances for the optimal k-medoids clustering solution, and nfound is the number of times the optimal solution was found. Note that the cluster number in clusterid is defined as the item number of the item representing the cluster centroid.
Hierarchical clustering methods are inherently different from the k-means clustering method. In hierarchical clustering, the similarity in the expression profile between genes or experimental conditions is represented in the form of a tree structure. This tree structure can be shown graphically by programs such as Treeview and Java Treeview, which has contributed to the popularity of hierarchical clustering in the analysis of gene expression data.
The first step in hierarchical clustering is to calculate the distance matrix, specifying all the distances between the items to be clustered. Next, we create a node by joining the two closest items. Subsequent nodes are created by pairwise joining of items or nodes based on the distance between them, until all items belong to the same node. A tree structure can then be created by retracing which items and nodes were merged. Unlike the EM algorithm, which is used in k-means clustering, the complete process of hierarchical clustering is deterministic.
Several flavors of hierarchical clustering exist, which differ in how the distance between subnodes is defined in terms of their members. In
Bio.Cluster, pairwise single, maximum, average, and centroid linkage are available.
For pairwise single-, complete-, and average-linkage clustering, the distance between two nodes can be found directly from the distances between the individual items. Therefore, the clustering algorithm does not need access to the original gene expression data, once the distance matrix is known. For pairwise centroid-linkage clustering, however, the centroids of newly formed subnodes can only be calculated from the original data and not from the distance matrix.
The implementation of pairwise single-linkage hierarchical clustering is based on the SLINK algorithm (R. Sibson, 1973), which is much faster and more memory-efficient than a straightforward implementation of pairwise single-linkage clustering. The clustering result produced by this algorithm is identical to the clustering solution found by the conventional single-linkage algorithm. The single-linkage hierarchical clustering algorithm implemented in this library can be used to cluster large gene expression data sets, for which conventional hierarchical clustering algorithms fail due to excessive memory requirements and running time.
The result of hierarchical clustering consists of a tree of nodes, in which each node joins two items or subnodes. Usually, we are not only interested in which items or subnodes are joined at each node, but also in their similarity (or distance) as they are joined. To store one node in the hierarchical clustering tree, we make use of the class
Node, which is defined in
Bio.Cluster. An instance of
Node has three attributes:
left
right
distance
Here,
left and
right are integers referring to the two items or subnodes that are joined at this node, and
distance is the distance between them. The items being clustered are numbered from 0 to (number of items − 1), while clusters are numbered from -1 to −(number of items−1). Note that the number of nodes is one less than the number of items.
To create a new
Node object, we need to specify
left and
right;
distance is optional.
>>> from Bio.Cluster import Node
>>> Node(2, 3)
(2, 3): 0
>>> Node(2, 3, 0.91)
(2, 3): 0.91
The attributes
left,
right, and
distance of an existing
Node object can be modified directly:
>>> node = Node(4, 5)
>>> node.left = 6
>>> node.right = 2
>>> node.distance = 0.73
>>> node
(6, 2): 0.73
An error is raised if
left and
right are not integers, or if
distance cannot be converted to a floating-point value.
The Python class
Tree represents a full hierarchical clustering solution. A
Tree object can be created from a list of
Node objects:
>>> from Bio.Cluster import Node, Tree
>>> nodes = [Node(1, 2, 0.2), Node(0, 3, 0.5), Node(-2, 4, 0.6), Node(-1, -3, 0.9)]
>>> tree = Tree(nodes)
>>> print(tree)
(1, 2): 0.2
(0, 3): 0.5
(-2, 4): 0.6
(-1, -3): 0.9
The
Tree initializer checks if the list of nodes is a valid hierarchical clustering result:
>>> nodes = [Node(1, 2, 0.2), Node(0, 2, 0.5)]
>>> Tree(nodes)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: Inconsistent tree
Individual nodes in a
Tree object can be accessed using square brackets:
>>> nodes = [Node(1, 2, 0.2), Node(0, -1, 0.5)]
>>> tree = Tree(nodes)
>>> tree[0]
(1, 2): 0.2
>>> tree[1]
(0, -1): 0.5
>>> tree[-1]
(0, -1): 0.5
As a
Tree object is read-only, we cannot change individual nodes in a
Tree object. However, we can convert the tree to a list of nodes, modify this list, and create a new tree from this list:
>>> tree = Tree([Node(1, 2, 0.1), Node(0, -1, 0.5), Node(-2, 3, 0.9)])
>>> print(tree)
(1, 2): 0.1
(0, -1): 0.5
(-2, 3): 0.9
>>> nodes = tree[:]
>>> nodes[0] = Node(0, 1, 0.2)
>>> nodes[1].left = 2
>>> tree = Tree(nodes)
>>> print(tree)
(0, 1): 0.2
(2, -1): 0.5
(-2, 3): 0.9
This guarantees that any
Tree object is always well-formed.
To display a hierarchical clustering solution with visualization programs such as Java Treeview, it is better to scale all node distances such that they are between zero and one. This can be accomplished by calling the
scale method on an existing
Tree object:
>>> tree.scale()
This method takes no arguments, and returns
None.
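A sketch of what such a rescaling amounts to, assuming the node distances are simply divided by the largest one (illustrative only; Tree.scale operates in place on the tree's own nodes):

```python
def scale_distances(distances):
    """Rescale a list of node distances so the largest becomes 1.0.
    Assumes all distances are non-negative."""
    maximum = max(distances)
    if maximum == 0.0:
        return list(distances)  # nothing to scale
    return [d / maximum for d in distances]

print(scale_distances([0.2, 0.5, 0.9]))
```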
After hierarchical clustering, the items can be grouped into k clusters based on the tree structure stored in the
Tree object by cutting the tree:
>>> clusterid = tree.cut(nclusters=1)
where
nclusters (defaulting to
1) is the desired number of clusters k.
This method ignores the top k−1 linking events in the tree structure, resulting in k separated clusters of items. The number of clusters k should be positive, and less than or equal to the number of items.
This method returns an array
clusterid containing the number of the cluster to which each item is assigned.
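The idea behind cutting can be sketched in plain Python with a union-find over the joins, skipping the last k−1 of them (a sketch of the concept only; Bio.Cluster's Tree.cut is implemented in C and returns an equivalent assignment as an array):

```python
def cut_tree(nodes, nclusters):
    """Assign items to nclusters clusters by ignoring the top
    nclusters-1 joins in a list of (left, right) node pairs."""
    nitems = len(nodes) + 1
    parent = list(range(nitems))

    def find(i):
        # Follow parent pointers to the set representative.
        while parent[i] != i:
            i = parent[i]
        return i

    representative = {}  # negative subnode index -> an item inside it
    for k, (left, right) in enumerate(nodes):
        a = left if left >= 0 else representative[left]
        b = right if right >= 0 else representative[right]
        if k < len(nodes) - (nclusters - 1):  # skip the top nclusters-1 joins
            parent[find(a)] = find(b)
        representative[-(k + 1)] = a

    # Number the resulting clusters 0, 1, 2, ... in order of appearance.
    labels = {}
    clusterid = []
    for i in range(nitems):
        root = find(i)
        if root not in labels:
            labels[root] = len(labels)
        clusterid.append(labels[root])
    return clusterid

print(cut_tree([(1, 2), (0, -1), (-2, 3)], nclusters=2))  # → [0, 0, 0, 1]
```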
To perform hierarchical clustering, use the
treecluster function in
Bio.Cluster.
>>> from Bio.Cluster import treecluster
>>> tree = treecluster(data)
where the following arguments are defined:
data
mask(default:
None)
mask[i,j]==0, then
data[i,j]is missing. If
mask==None, then all data are present.
weight(default:
None)
weight==None, then equal weights are assumed.
To apply hierarchical clustering on a precalculated distance matrix, specify the
distancematrix argument when calling
treecluster function instead of the
data argument:
>>> from Bio.Cluster import treecluster
>>> tree = treecluster(distancematrix=distance)
In this case, the following arguments are defined:
distancematrix
distance = array([[0.0, 1.1, 2.3], [1.1, 0.0, 4.5], [2.3, 4.5, 0.0]])
distance = array([1.1, 2.3, 4.5])
distance = [array([]), array([1.1]), array([2.3, 4.5])]
treeclustermay shuffle the values in the distance matrix as part of the clustering algorithm, be sure to save this array in a different variable before calling
treeclusterif you need it later.
method
method=='s': pairwise single-linkage clustering
method=='m': pairwise maximum- (or complete-) linkage clustering
method=='a': pairwise average-linkage clustering
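The three accepted shapes of the distance matrix carry the same information; dropping the diagonal and upper triangle of a symmetric matrix gives the memory-saving left-triangular form. A sketch with plain lists (Bio.Cluster itself works with NumPy arrays):

```python
def to_triangular(matrix):
    """Keep only the left-lower triangle of a symmetric distance matrix,
    as a list of rows of increasing length (row i has i elements); the
    diagonal and upper triangle are redundant for a symmetric matrix."""
    return [row[:i] for i, row in enumerate(matrix)]

full = [[0.0, 1.1, 2.3],
        [1.1, 0.0, 4.5],
        [2.3, 4.5, 0.0]]
print(to_triangular(full))  # → [[], [1.1], [2.3, 4.5]]
```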
When calling
treecluster, either
data or
distancematrix should be
None.
Self-Organizing Maps (SOMs) were invented by Kohonen to describe neural networks (see for instance Kohonen, 1997 [24]). Tamayo (1999) first applied Self-Organizing Maps to gene expression data [30].
SOMs organize items into clusters that are situated in some topology. Usually a rectangular topology is chosen. The clusters generated by SOMs are such that neighboring clusters in the topology are more similar to each other than clusters far from each other in the topology.
The first step to calculate a SOM is to randomly assign a data vector to each cluster in the topology. If rows are being clustered, then the number of elements in each data vector is equal to the number of columns.
An SOM is then generated by taking rows one at a time, and finding which cluster in the topology has the closest data vector. The data vector of that cluster, as well as those of the neighboring clusters, are adjusted using the data vector of the row under consideration. The adjustment is given by
The parameter τ is a parameter that decreases at each iteration step. We have used a simple linear function of the iteration step:
τinit is the initial value of τ as specified by the user, i is the number of the current iteration step, and n is the total number of iteration steps to be performed. While changes are made rapidly in the beginning of the iteration, at the end of iteration only small changes are made.
All clusters within a radius R are adjusted to the gene under consideration. This radius decreases as the calculation progresses as
in which the maximum radius is defined as
where (Nx, Ny) are the dimensions of the rectangle defining the topology.
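The update rule and the linear decay of τ can be sketched in a few lines of plain Python (illustrative only; somcluster implements the full algorithm, including the shrinking neighbourhood radius, in C):

```python
def tau(tau_init, i, n):
    """Linearly decreasing learning parameter: tau_init at step 0,
    approaching zero by the final step n."""
    return tau_init * (1.0 - float(i) / n)

def adjust(cell, row, t):
    """Move a cluster's data vector towards the data vector of the
    current row by a fraction t of their difference."""
    return [c + t * (r - c) for c, r in zip(cell, row)]

# One update with tau_init = 0.02 at the first of 100 iterations:
cell = adjust([0.0, 1.0], [1.0, 0.0], tau(0.02, 0, 100))
print(cell)
```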
The function
somcluster implements the complete algorithm to calculate a Self-Organizing Map on a rectangular grid. First it initializes the random number generator. The node data are then initialized using the random number generator. The order in which genes or microarrays are used to modify the SOM is also randomized. The total number of iterations in the SOM algorithm is specified by the user.
To run
somcluster, use
>>> from Bio.Cluster import somcluster
>>> clusterid, celldata = somcluster(data)
where the following arguments are defined:
data(required)
Principal Component Analysis (PCA) is a widely used technique for analyzing multivariate data. A practical example of applying Principal Component Analysis to gene expression data is presented by Yeung and Ruzzo (2001) [33]. Before applying principal component analysis, the mean is typically subtracted from each column in the data matrix. In the example above, this effectively centers the ellipsoidal cloud around its centroid in 3D space, with the principal components describing the variation of points in the ellipsoidal cloud with respect to their centroid.
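Centering the data is easy to do by hand before calling pca. A minimal sketch with plain Python lists (Bio.Cluster itself expects a NumPy array):

```python
def center_columns(data):
    """Subtract each column's mean from that column; returns the
    centered matrix and the list of column means."""
    means = [sum(column) / float(len(column)) for column in zip(*data)]
    centered = [[value - mean for value, mean in zip(row, means)]
                for row in data]
    return centered, means

centered, means = center_columns([[1.0, 2.0], [3.0, 6.0]])
# Each centered column now sums to zero.
print(centered, means)
```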
The function
pca below first uses the singular value decomposition to calculate the eigenvalues and eigenvectors of the data matrix. The singular value decomposition is implemented as a translation in C of the Algol procedure
svd [16], which uses Householder bidiagonalization and a variant of the QR algorithm. The principal components, the coordinates of each data vector along the principal components, and the eigenvalues corresponding to the principal components are then evaluated and returned in decreasing order of the magnitude of the eigenvalue. If data centering is desired, the mean should be subtracted from each column in the data matrix before calling the
pca routine.
To apply Principal Component Analysis to a rectangular matrix
data, use
>>> from Bio.Cluster import pca
>>> columnmean, coordinates, components, eigenvalues = pca(data)
This function returns a tuple
columnmean, coordinates, components, eigenvalues:
columnmean
data.
coordinates
datawith respect to the principal components.
components
eigenvalues
The original matrix
data can be recreated by calculating
columnmean + dot(coordinates, components).
Cluster/TreeView are GUI-based programs for clustering gene expression data. They were originally written by Michael Eisen while at Stanford University.
Bio.Cluster contains functions for reading and writing data files that correspond to the format specified for Cluster/TreeView. In particular, by saving a clustering result in that format, TreeView can be used to visualize the clustering results. We recommend using Alok Saldanha’s TreeView program, which can display hierarchical as well as k-means clustering results.
An object of the class
Record contains all information stored in a Cluster/TreeView-type data file. To store the information contained in the data file in a
Record object, we first open the file and then read it:
>>> from Bio import Cluster
>>> handle = open("mydatafile.txt")
>>> record = Cluster.read(handle)
>>> handle.close()
This two-step process gives you some flexibility in the source of the data. For example, you can use
>>> import gzip  # Python standard library
>>> handle = gzip.open("mydatafile.txt.gz")
to open a gzipped file, or
>>> import urllib  # Python standard library
>>> handle = urllib.urlopen("")
to open a file stored on the Internet before calling
read.
The
read command reads the tab-delimited text file
mydatafile.txt containing gene expression data in the format specified for Michael Eisen’s Cluster/TreeView program. For a description of this file format, see the manual to Cluster/TreeView. It is available at Michael Eisen’s lab website and at our website.
A
Record object has the following attributes:
data
mask
dataarray, if any, are missing. If
mask[i,j]==0, then
data[i,j]is missing. If no data were found to be missing,
maskis set to
None.
geneid
genename
genenameis set to
None.
gweight
gweightis set to
None.
gorder
gorderis set to
None.
expid
eweight
eweightis set to
None.
eorder
eorderis set to
None.
uniqid
After loading a
Record object, each of these attributes can be accessed and modified directly. For example, the data can be log-transformed by taking the logarithm of
record.data.
To calculate the distance matrix between the items stored in the record, use
>>> matrix = record.distancematrix()
where the following arguments are defined:
transpose(default:
0)
dataare to be calculated (
transpose==0), or between the columns of
data(
transpose==1).
dist(default:
'e', Euclidean distance)
This function returns the distance matrix as a list of rows, where the number of columns of each row is equal to the row number (see section 15.1).
To calculate the centroids of clusters of items stored in the record, use
>>> cdata, cmask = record.clustercentroids()
clusterid(default:
None)
clusterid is not given; see section 15.2 for a description.
To calculate the distance between clusters of items stored in the record, use
>>> distance = record.clusterdistance()
where the arguments are as defined previously.
To perform hierarchical clustering on the items stored in the record, use
>>> tree = record.treecluster()
where the following arguments are defined:
transpose
transpose==0, genes (rows) are being clustered. If
transpose==1, microarrays (columns) are clustered.
To perform k-means or k-medians clustering on the items stored in the record, use
>>> clusterid, error, nfound = record.kcluster()
where the following arguments are defined:
nclusters(default:
2).
To calculate a Self-Organizing Map of the items stored in the record, use
>>> clusterid, celldata = record.somcluster()
where the arguments are as defined previously.
To save the clustering result, use
>>> record.save(jobname, geneclusters, expclusters)
where the following arguments are defined:
jobname
jobnameis used as the base name for names of the files that are to be saved.
geneclusters
kcluster. In case of hierarchical clustering,
geneclustersis a
Treeobject.
expclusters
kcluster. In case of hierarchical clustering,
expclustersis a
Treeobject.
This method writes the text files
jobname.cdt,
jobname.gtr,
jobname.atr,
jobname*.kgg, and/or
jobname*.kag for subsequent reading by the Java TreeView program. If
geneclusters and
expclusters are both
None, this method only writes the text file
jobname.cdt; this file can subsequently be read into a new
Record object.
This is an example of a hierarchical clustering calculation, using single linkage clustering for genes and maximum linkage clustering for experimental conditions. As the Euclidean distance is being used for gene clustering, it is necessary to scale the node distances in
genetree such that they are all between zero and one. This is needed for the Java TreeView code to display the tree diagram correctly. To cluster the experimental conditions, the uncentered correlation is being used. No scaling is needed in this case, as the distances in
exptree are already between zero and two. The example data
cyano.txt can be found in the
data subdirectory.
>>> from Bio import Cluster
>>> handle = open("cyano.txt")
>>> record = Cluster.read(handle)
>>> handle.close()
>>> genetree = record.treecluster(method='s')
>>> genetree.scale()
>>> exptree = record.treecluster(dist='u', transpose=1)
>>> record.save("cyano_result", genetree, exptree)
This will create the files
cyano_result.cdt,
cyano_result.gtr, and
cyano_result.atr.
Similarly, we can save a k-means clustering solution:
>>> from Bio import Cluster
>>> handle = open("cyano.txt")
>>> record = Cluster.read(handle)
>>> handle.close()
>>> (geneclusters, error, ifound) = record.kcluster(nclusters=5, npass=1000)
>>> (expclusters, error, ifound) = record.kcluster(nclusters=2, npass=100, transpose=1)
>>> record.save("cyano_result", geneclusters, expclusters)
This will create the files
cyano_result_K_G2_A2.cdt,
cyano_result_K_G2.kgg, and
cyano_result_K_A2.kag.
median(data)
returns the median of the 1D array
data.
mean(data)
returns the mean of the 1D array
data.
version()
returns the version number of the underlying C Clustering Library as a string.
Using the data in Table 16.1, we create and initialize a k-nearest neighbors model as follows:
>>> from Bio import kNN
>>> k = 3
>>> model = kNN.train(xs, ys, k)
where
xs and
ys are the same as in the previous section.

GenomeDiagram was added to Biopython 1.50, having previously been available as a separate Python module dependent on Biopython.
GenomeDiagram is described in the Bioinformatics journal publication by Pritchard et al. (2006) [2],
which includes some examples images. There is a PDF copy of the old manual here, which has some
more examples.
As the name might suggest, GenomeDiagram was designed for drawing whole genomes, in particular prokaryotic genomes, either as linear diagrams (optionally broken up into fragments to fit better) or as circular wheel diagrams. Have a look at Figure 2 in Toth et al. (2006) [3] for a good example. It proved also well suited to drawing quite detailed figures for smaller genomes such as phage, plasmids or mitochondria, for example see Figures 1 and 2 in Van der Auwera et al. (2009) [4] (shown with additional manual editing).

For a circular diagram you would typically call the draw method with something like:

gd_diagram.draw(format="circular", circular=True, pagesize=(20*cm,20*cm),
                start=0, end=len(record), circle_core=0.7)

All the examples so far have used the default sigil for each feature, a plain box, which was all that was available in the last publicly released standalone version of GenomeDiagram. Arrow sigils were included when GenomeDiagram was added to Biopython 1.50:
#Default uses a BOX sigil
gd_feature_set.add_feature(feature)

#You can make this explicit:
gd_feature_set.add_feature(feature, sigil="BOX")

#Or opt for an arrow:
gd_feature_set.add_feature(feature, sigil="ARROW")
Biopython 1.61 added three more sigils,
#Box with corners cut off (making it an octagon)
gd_feature_set.add_feature(feature, sigil="OCTO")

#Box with jagged edges (useful for showing breaks in contigs)
gd_feature_set.add_feature(feature, sigil="JAGGY")

#Arrow which spans the axis with strand used only for direction
gd_feature_set.add_feature(feature, sigil="BIGARROW")
These are shown below. Most sigils fit into a bounding box (as given by the default BOX sigil), either above or below the axis for the forward or reverse strand, or straddling it (double the height) for strand-less features. The BIGARROW sigil is different, always straddling the axis with the direction taken from the feature’s strand.
We introduced the arrow sigils in the previous section.
Biopython 1.61 adds a new
BIGARROW sigil which always straddles
the axis, pointing left for the reverse strand or right otherwise:
#A large arrow straddling the axis: gd_feature_set.add_feature(feature, sigil="BIGARROW")
All the shaft and arrow head options shown above for the
ARROW sigil can be used for the
BIGARROW sigil too.
Now let’s return to the pPCP1 plasmid from Yersinia pestis biovar Microtus, and the top down approach used in Section 17.1.3, but take advantage of the sigil options we’ve now discussed. This time we’ll use arrows for the genes, and overlay them with strand-less features (as plain boxes) showing the position of some restriction digest sites.
from reportlab.lib import colors
from reportlab.lib.units import cm
from Bio.Graphics import GenomeDiagram
from Bio import SeqIO
from Bio.SeqFeature import SeqFeature, FeatureLocation

record = SeqIO.read(")

gd_diagram.draw(format="circular", circular=True, pagesize=(20*cm,20*cm),
                start=0, end=len(record), circle_core=0.5)
gd_diagram.write("plasmid_circular_nice.pdf", "PDF")
gd_diagram.write("plasmid_circular_nice.eps", "EPS")
gd_diagram.write("plasmid_circular_nice.svg", "SVG")
And the output:
All the examples so far have used a single track, but you can have more than one track – for example show the genes on one, and repeat regions on another. In this example we’re going to show three phage genomes side by side to scale, inspired by Figure 6 in Proux et al. (2002) [5]. We’ll need the GenBank files for the following three phage:
NC_002703– Lactococcus phage Tuc2009, complete genome (38347 bp)
AF323668– Bacteriophage bIL285, complete genome (35538 bp)
NC_003212– Listeria innocua Clip11262, complete genome, of which we are focussing only on integrated prophage 5 (similar length).
You can download these using Entrez if you like, see Section 9.6 for more details. For the third record we’ve worked out where the phage is integrated into the genome, and slice the record to extract it (with the features preserved, see Section 4.6), and must also reverse complement to match the orientation of the first two phage (again preserving the features, see Section 4.8):
from Bio import SeqIO
A_rec = SeqIO.read("NC_002703.gbk", "gb")
B_rec = SeqIO.read("AF323668.gbk", "gb")
C_rec = SeqIO.read("NC_003212.gbk", "gb")[2587879:2625807].reverse_complement(name=True)
The figure we are imitating used different colors for different gene functions. One way to do this is to edit the GenBank file to record color preferences for each feature - something Sanger’s Artemis editor does, and which GenomeDiagram should understand. Here however, we’ll just hard code three lists of colors.
Note that the annotation in the GenBank files doesn’t exactly match that shown in Proux et al., they have drawn some unannotated genes.
from reportlab.lib.colors import red, grey, orange, green, brown, blue, lightblue, purple

A_colors = [red]*5 + [grey]*7 + [orange]*2 + [grey]*2 + [orange] + [grey]*11 + [green]*4 \
         + [grey] + [green]*2 + [grey, green] + [brown]*5 + [blue]*4 + [lightblue]*5 \
         + [grey, lightblue] + [purple]*2 + [grey]
B_colors = [red]*6 + [grey]*8 + [orange]*2 + [grey] + [orange] + [grey]*21 + [green]*5 \
         + [grey] + [brown]*4 + [blue]*3 + [lightblue]*3 + [grey]*5 + [purple]*2
C_colors = [grey]*30 + [green]*5 + [brown]*4 + [blue]*2 + [grey, blue] + [lightblue]*2 \
         + [grey]*5
Now to draw them – this time we add three tracks to the diagram, and also notice they are given different start/end values to reflect their different lengths (this requires Biopython 1.59 or later).
from Bio.Graphics import GenomeDiagram

name = "Proux Fig 6"
gd_diagram = GenomeDiagram.Diagram(name)
max_len = 0
for record, gene_colors in zip([A_rec, B_rec, C_rec], [A_colors, B_colors, C_colors]):
    max_len = max(max_len, len(record))
    gd_track_for_features = gd_diagram.new_track(1, name=record.name,
                                                 greytrack=True,
                                                 start=0, end=len(record))
    gd_feature_set = gd_track_for_features.new_set()

    i = 0
    for feature in record.features:
        if feature.type != "gene":
            #Exclude this feature
            continue
        gd_feature_set.add_feature(feature, sigil="ARROW",
                                   color=gene_colors[i], label=True,
                                   name=str(i+1),
                                   label_position="start",
                                   label_size=6, label_angle=0)
        i += 1

gd_diagram.draw(format="linear", pagesize='A4', fragments=1,
                start=0, end=max_len)
gd_diagram.write(name + ".pdf", "PDF")
gd_diagram.write(name + ".eps", "EPS")
gd_diagram.write(name + ".svg", "SVG")
The result:
I did wonder why in the original manuscript there were no red or orange genes marked in the bottom phage. Another important point: here the phage are shown with different lengths because they are all drawn to the same scale (and they really are different lengths).
The key difference from the published figure is they have color-coded links between similar proteins – which is what we will do in the next section.
Biopython 1.59 added the ability to draw cross links between tracks - both simple linear diagrams as we will show here, but also linear diagrams split into fragments and circular diagrams.
Continuing the example from the previous section inspired by Figure 6 from Proux et al. 2002 [5], we would need a list of cross links between pairs of genes, along with a score or color to use. Realistically you might extract this from a BLAST file computationally, but here I have manually typed them in.
My naming convention continues to refer to the three phage as A, B and C. Here are the links we want to show between A and B, given as a list of tuples (percentage similarity score, gene in A, gene in B).
#Tuc2009 (NC_002703) vs bIL285 (AF323668)
A_vs_B = [
    (99, "Tuc2009_01", "int"),
    (33, "Tuc2009_03", "orf4"),
    (94, "Tuc2009_05", "orf6"),
    (100, "Tuc2009_06", "orf7"),
    (97, "Tuc2009_07", "orf8"),
    (98, "Tuc2009_08", "orf9"),
    (98, "Tuc2009_09", "orf10"),
    (100, "Tuc2009_10", "orf12"),
    (100, "Tuc2009_11", "orf13"),
    (94, "Tuc2009_12", "orf14"),
    (87, "Tuc2009_13", "orf15"),
    (94, "Tuc2009_14", "orf16"),
    (94, "Tuc2009_15", "orf17"),
    (88, "Tuc2009_17", "rusA"),
    (91, "Tuc2009_18", "orf20"),
    (93, "Tuc2009_19", "orf22"),
    (71, "Tuc2009_20", "orf23"),
    (51, "Tuc2009_22", "orf27"),
    (97, "Tuc2009_23", "orf28"),
    (88, "Tuc2009_24", "orf29"),
    (26, "Tuc2009_26", "orf38"),
    (19, "Tuc2009_46", "orf52"),
    (77, "Tuc2009_48", "orf54"),
    (91, "Tuc2009_49", "orf55"),
    (95, "Tuc2009_52", "orf60"),
]
Likewise for B and C:
#bIL285 (AF323668) vs Listeria innocua prophage 5 (in NC_003212)
B_vs_C = [
    (42, "orf39", "lin2581"),
    (31, "orf40", "lin2580"),
    (49, "orf41", "lin2579"),  #terL
    (54, "orf42", "lin2578"),  #portal
    (55, "orf43", "lin2577"),  #protease
    (33, "orf44", "lin2576"),  #mhp
    (51, "orf46", "lin2575"),
    (33, "orf47", "lin2574"),
    (40, "orf48", "lin2573"),
    (25, "orf49", "lin2572"),
    (50, "orf50", "lin2571"),
    (48, "orf51", "lin2570"),
    (24, "orf52", "lin2568"),
    (30, "orf53", "lin2567"),
    (28, "orf54", "lin2566"),
]
For the first and last phage these identifiers are locus tags, for the middle phage there are no locus tags so I’ve used gene names instead. The following little helper function lets us lookup a feature using either a locus tag or gene name:
def get_feature(features, id, tags=["locus_tag", "gene"]):
    """Search list of SeqFeature objects for an identifier under the given tags."""
    for f in features:
        for key in tags:
            #tag may not be present in this feature
            for x in f.qualifiers.get(key, []):
                if x == id:
                    return f
    raise KeyError(id)
We can now turn those list of identifier pairs into SeqFeature pairs, and thus
find their location co-ordinates. We can now add all that code and the following
snippet to the previous example (just before the
gd_diagram.draw(...)
line – see the finished example script
Proux_et_al_2002_Figure_6.py
included in the Doc/examples folder of the Biopython source code)
to add cross links to the figure:
from Bio.Graphics.GenomeDiagram import CrossLink
from reportlab.lib import colors

#Note it might have been clearer to assign the track numbers explicitly...
for rec_X, tn_X, rec_Y, tn_Y, X_vs_Y in [(A_rec, 3, B_rec, 2, A_vs_B),
                                          (B_rec, 2, C_rec, 1, B_vs_C)]:
    track_X = gd_diagram.tracks[tn_X]
    track_Y = gd_diagram.tracks[tn_Y]
    for score, id_X, id_Y in X_vs_Y:
        feature_X = get_feature(rec_X.features, id_X)
        feature_Y = get_feature(rec_Y.features, id_Y)
        color = colors.linearlyInterpolatedColor(colors.white, colors.firebrick,
                                                 0, 100, score)
        link_xy = CrossLink((track_X, feature_X.location.start, feature_X.location.end),
                            (track_Y, feature_Y.location.start, feature_Y.location.end),
                            color, colors.lightgrey)
        gd_diagram.cross_track_links.append(link_xy)
There are several important pieces to this code. First the
GenomeDiagram object
has a
cross_track_links attribute which is just a list of
CrossLink objects.
Each
CrossLink object takes two sets of track-specific co-ordinates (here given
as tuples, you can alternatively use a
GenomeDiagram.Feature object instead).
You can optionally supply a colour, border color, and say if this link should be drawn
flipped (useful for showing inversions).
You can also see how we turn the BLAST percentage identity score into a colour, interpolating between white (0%) and a dark red (100%). In this example we don’t have any problems with overlapping cross-links. One way to tackle that is to use transparency in ReportLab, by using colors with their alpha channel set. However, this kind of shaded color scheme combined with overlap transparency would be difficult to interpret. The result:
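The score-to-colour mapping is plain linear interpolation between two RGB triples; a sketch of the arithmetic (ReportLab's colors.linearlyInterpolatedColor does the equivalent for its Color objects; the firebrick values below are approximate):

```python
def interpolate(low, high, minimum, maximum, value):
    """Linearly interpolate between two RGB triples for a value in
    [minimum, maximum]."""
    fraction = (value - minimum) / float(maximum - minimum)
    return tuple(l + fraction * (h - l) for l, h in zip(low, high))

white = (1.0, 1.0, 1.0)
firebrick = (0.698, 0.133, 0.133)  # approximately ReportLab's firebrick
print(interpolate(white, firebrick, 0, 100, 50))
```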
There is still a lot more that can be done within Biopython to help
improve this figure. First of all, the cross links in this case are
between proteins which are drawn in a strand specific manner. It can
help to add a background region (a feature using the ‘BOX’ sigil) on the
feature track to extend the cross link. Also, we could reduce the vertical
height of the feature tracks to allocate more to the links instead – one
way to do that is to allocate space for empty tracks. Furthermore,
in cases like this where there are no large gene overlaps, we can use
the axis-straddling
BIGARROW sigil, which allows us to further
reduce the vertical space needed for the track. These improvements
are demonstrated in the example script
Proux_et_al_2002_Figure_6.py
included in the Doc/examples folder of the Biopython source code.
The result:
Beyond that, finishing touches you might want to do manually in a vector image editor include fine tuning the placement of gene labels, and adding other custom annotation such as highlighting particular regions.
Although not really necessary in this example since none of the cross-links overlap, using a transparent color in ReportLab is a very useful technique for superimposing multiple links. However, in this case a shaded color scheme should be avoided.
You can control the tick marks to show the scale (after all, every graph should show its units) and the number of the grey-track labels.

If you have old code written for the standalone version of GenomeDiagram, note that you will need to change to the American spellings, although for several years the Biopython version of GenomeDiagram supported both. Also, we have not included the old module
GenomeDiagram.GDUtilities yet. This included a number of
GC% related functions, which will probably be merged under
Bio.SeqUtils later on.
The
Bio.Graphics.BasicChromosome module allows drawing of chromosomes.
There is an example in Jupe et al. (2012) [6]
(open access) using colors to highlight different gene families.
Here is a very simple example - for which we’ll use Arabidopsis thaliana.
You can skip this bit, but first I found the length of each of the five chromosomes by reading in its sequence file, along these lines:

record = SeqIO.read(filename, "fasta")
print(name, len(record))
This gave the lengths of the five chromosomes, which we’ll now use in
the following short demonstration of the
BasicChromosome module:
from reportlab.lib.units import cm
from Bio.Graphics import BasicChromosome

entries = [("Chr I", 30432563),
           ("Chr II", 19705359),
           ("Chr III", 23470805),
           ("Chr IV", 18585042),
           ("Chr V", 26992728)]

max_len = 30432563 #Could compute this
telomere_length = 1000000 #For illustration

chr_diagram = BasicChromosome.Organism()
chr_diagram.page_size = (29.7*cm, 21*cm) #A4 landscape

for name, length in entries:
    cur_chromosome = BasicChromosome.Chromosome(name)
    #Set the scale to the MAXIMUM length plus the two telomeres in bp,
    #want the same scale used on all five chromosomes so they can be
    #compared to each other
    cur_chromosome.scale_num = max_len + 2 * telomere_length

    #Add an opening telomere
    start = BasicChromosome.TelomereSegment()
    start.scale = telomere_length
    cur_chromosome.add(start)

    #Add a body - using bp as the scale length here
    body = BasicChromosome.ChromosomeSegment()
    body.scale = length
    cur_chromosome.add(body)

    #Add a closing telomere
    end = BasicChromosome.TelomereSegment(inverted=True)
    end.scale = telomere_length
    cur_chromosome.add(end)

    #This chromosome is done
    chr_diagram.add(cur_chromosome)

chr_diagram.draw("simple_chrom.pdf", "Arabidopsis thaliana")
This should create a very simple PDF file, shown here:
This example is deliberately short and sweet. The next example shows the location of features of interest.
Continuing from the previous example, let’s also show the tRNA genes. We’ll get their locations by parsing the GenBank files for the five Arabidopsis thaliana chromosomes. You’ll need to download these files from the NCBI FTP site, and preserve the subdirectory names or edit the paths below:
from reportlab.lib.units import cm
from Bio import SeqIO
from Bio.Graphics import BasicChromosome

entries = [("Chr I", "CHR_I/NC_003070.gbk"),
           ("Chr II", "CHR_II/NC_003071.gbk"),
           ("Chr III", "CHR_III/NC_003074.gbk"),
           ("Chr IV", "CHR_IV/NC_003075.gbk"),
           ("Chr V", "CHR_V/NC_003076.gbk")]

max_len = 30432563 #Could compute this
telomere_length = 1000000 #For illustration

chr_diagram = BasicChromosome.Organism()
chr_diagram.page_size = (29.7*cm, 21*cm) #A4 landscape

for index, (name, filename) in enumerate(entries):
    record = SeqIO.read(filename, "genbank")
    length = len(record)
    features = [f for f in record.features if f.type == "tRNA"]
    #Record an Artemis style integer color in the feature's qualifiers,
    #1 = Black, 2 = Red, 3 = Green, 4 = blue, 5 = cyan, 6 = purple
    for f in features:
        f.qualifiers["color"] = [index + 2]

    cur_chromosome = BasicChromosome.Chromosome(name)
    #Set the scale to the MAXIMUM length plus the two telomeres in bp,
    #want the same scale used on all five chromosomes so they can be
    #compared to each other
    cur_chromosome.scale_num = max_len + 2 * telomere_length

    #Add an opening telomere
    start = BasicChromosome.TelomereSegment()
    start.scale = telomere_length
    cur_chromosome.add(start)

    #Add a body - again using bp as the scale length here
    body = BasicChromosome.AnnotatedChromosomeSegment(length, features)
    body.scale = length
    cur_chromosome.add(body)

    #Add a closing telomere
    end = BasicChromosome.TelomereSegment(inverted=True)
    end.scale = telomere_length
    cur_chromosome.add(end)

    #This chromosome is done
    chr_diagram.add(cur_chromosome)

chr_diagram.draw("tRNA_chrom.pdf", "Arabidopsis thaliana")
It might warn you about the labels being too close together - have a look at the forward strand (right hand side) of Chr I, but it should create a colorful PDF file, shown here:
Biopython now has two collections of “cookbook” examples – this chapter (which has been included in this tutorial for many years and has gradually grown), and which is a user contributed collection on our wiki.
We’re trying to encourage Biopython users to contribute their own examples to the wiki. In addition to helping the community, one direct benefit of sharing an example like this is that you could also get some feedback on the code from other Biopython users and developers - which could help you improve all your Python code.
In the long term, we may end up moving all of the examples in this chapter to the wiki, or elsewhere within the tutorial.
This section shows some more examples of sequence input/output, using the
Bio.SeqIO module described in Chapter 5.
Often you’ll have a large file with many sequences in it (e.g. FASTA file or genes, or a FASTQ or SFF file of reads), a separate shorter list of the IDs for a subset of sequences of interest, and want to make a new sequence file for this subset.
Let’s say the list of IDs is in a simple text file, as the first word on each line. This could be a tabular file where the first column is the ID. Try something like this:
from Bio import SeqIO

input_file = "big_file.sff"
id_file = "short_list.txt"
output_file = "short_list.sff"

wanted = set(line.rstrip("\n").split(None, 1)[0] for line in open(id_file))
print("Found %i unique identifiers in %s" % (len(wanted), id_file))

records = (r for r in SeqIO.parse(input_file, "sff") if r.id in wanted)
count = SeqIO.write(records, output_file, "sff")
print("Saved %i records from %s to %s" % (count, input_file, output_file))
if count < len(wanted):
    print("Warning %i IDs not found in %s" % (len(wanted) - count, input_file))
Note that we use a Python set rather than a list, since this makes testing membership faster.
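A set uses hashing, so a membership test stays fast however many identifiers it holds, while a list has to be scanned element by element. In miniature (an illustrative sketch, nothing Biopython-specific):

```python
# Build a list of 10000 fake identifiers and a set with the same content.
wanted_list = ["id%i" % i for i in range(10000)]
wanted_set = set(wanted_list)

# Same answer either way, but the set lookup is (near) constant time,
# while the list lookup may scan all 10000 entries.
print("id9999" in wanted_list, "id9999" in wanted_set)
```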
In Section 5.9 we saw how to use the
Seq
object’s
translate method, and the optional
cds argument
which enables correct translation of alternative start codons.
We can combine this with
Bio.SeqIO as
shown in the reverse complement example in Section 5.5.3:

from Bio.SeqRecord import SeqRecord
from Bio import SeqIO

def make_protein_record(nuc_record):
    """Returns a new SeqRecord with the translated sequence."""
    return SeqRecord(seq = nuc_record.seq.translate(cds=True), \
                     id = "trans_" + nuc_record.id, \
                     description = "translation of CDS, using default table")

proteins = (make_protein_record(nuc_rec) for nuc_rec in \
            SeqIO.parse("coding_sequences.fasta", "fasta"))
SeqIO.write(proteins, "translations.fasta", "fasta")
This should work on any FASTA file of complete coding sequences.
If you are working on partial coding sequences, you may prefer to use
nuc_record.seq.translate(to_stop=True) in the example above, as
this wouldn’t check for a valid start codon etc.
Often you’ll get data from collaborators as FASTA files, and sometimes the
sequences can be in a mixture of upper and lower case. In some cases this is
deliberate (e.g. lower case for poor quality regions), but usually it is not
important. You may want to edit the file to make everything consistent (e.g.
all upper case), and you can do this easily using the
upper() method
of the
SeqRecord object (added in Biopython 1.55):
from Bio import SeqIO
records = (rec.upper() for rec in SeqIO.parse("mixed.fas", "fasta"))
count = SeqIO.write(records, "upper.fas", "fasta")
print("Converted %i records to upper case" % count)
How does this work? The first line is just importing the
Bio.SeqIO
module. The second line is the interesting bit – this is a Python
generator expression which gives an upper case version of each record
parsed from the input file (mixed.fas). In the third line we give
this generator expression to the
Bio.SeqIO.write() function and it
saves the new upper cases records to our output file (upper.fas).
The reason we use a generator expression (rather than a list or list comprehension) is this means only one record is kept in memory at a time. This can be really important if you are dealing with large files with millions of entries.
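You can see the difference in miniature with plain Python (an illustrative sketch, nothing Biopython-specific):

```python
import sys

# A list comprehension materialises every item up front...
upper_list = [("read%06i" % i).upper() for i in range(100000)]
# ...while a generator expression produces one item at a time, on demand.
upper_gen = (("read%06i" % i).upper() for i in range(100000))

# The generator object itself stays tiny no matter how many items it will yield.
print(sys.getsizeof(upper_list), sys.getsizeof(upper_gen))
```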
Suppose you wanted to sort a sequence file by length (e.g. a set of
contigs from an assembly), and you are working with a file format like
FASTA or FASTQ which
Bio.SeqIO can read, write (and index).
If the file is small enough, you can load it all into memory at once
as a list of
SeqRecord objects, sort the list, and save it:
from Bio import SeqIO
records = list(SeqIO.parse("ls_orchid.fasta", "fasta"))
records.sort(key=len)
SeqIO.write(records, "sorted_orchids.fasta", "fasta")

The only clever bit is specifying a key function for how to sort the records (here we sort them by length). If you wanted the longest records first, you could use the reverse argument:

from Bio import SeqIO
records = list(SeqIO.parse("ls_orchid.fasta", "fasta"))
records.sort(key=len, reverse=True)
SeqIO.write(records, "sorted_orchids.fasta", "fasta")
Now that’s pretty straight forward - but what happens if you have a
very large file and you can’t load it all into memory like this?
For example, you might have some next-generation sequencing reads
to sort by length. This can be solved using the
Bio.SeqIO.index() function.

from Bio import SeqIO

#Get the lengths and ids, and sort on length
len_and_ids = sorted((len(rec), rec.id) for rec in
                     SeqIO.parse("ls_orchid.fasta", "fasta"))
ids = reversed([id for (length, id) in len_and_ids])
del len_and_ids #free this memory
record_index = SeqIO.index("ls_orchid.fasta", "fasta")
records = (record_index[id] for id in ids)
SeqIO.write(records, "sorted.fasta", "fasta")
First we scan through the file once using
Bio.SeqIO.parse(),
recording the record identifiers and their lengths in a list of tuples.
We then sort this list to get them in length order, and discard the lengths.
Using this sorted list of identifiers
Bio.SeqIO.index() allows us to
retrieve the records one by one, and we pass them to
Bio.SeqIO.write()
for output.
These examples all use
Bio.SeqIO to parse the records into
SeqRecord objects which are output using
Bio.SeqIO.write().
What if you want to sort a file format which
Bio.SeqIO.write() doesn’t
support, like the plain text SwissProt format? Here is an alternative
solution using the
get_raw() method added to
Bio.SeqIO.index()
in Biopython 1.54 (see Section 5.4.2.2).

from Bio import SeqIO

#Get the lengths and ids, and sort on length
len_and_ids = sorted((len(rec), rec.id) for rec in
                     SeqIO.parse("ls_orchid.fasta", "fasta"))
ids = reversed([id for (length, id) in len_and_ids])
del len_and_ids #free this memory
record_index = SeqIO.index("ls_orchid.fasta", "fasta")
handle = open("sorted.fasta", "w")
for id in ids:
    handle.write(record_index.get_raw(id))
handle.close()
As a bonus, because it doesn’t parse the data into
SeqRecord objects
a second time it should be faster.

The FASTQ examples below use some real data downloaded from the ENA sequence read archive: a 2MB compressed file which unzips to a 19MB file, SRR020192.fastq. This is some Roche 454 GS FLX single end data from virus infected California sea lions.
First, let’s count the reads:
from Bio import SeqIO
count = 0
for rec in SeqIO.parse("SRR020192.fastq", "fastq"):
    count += 1
print("%i reads" % count)
Now let’s do a simple filtering for a minimum PHRED quality of 20:
from Bio import SeqIO
good_reads = (rec for rec in \
              SeqIO.parse("SRR020192.fastq", "fastq") \
              if min(rec.letter_annotations["phred_quality"]) >= 20)
count = SeqIO.write(good_reads, "good_quality.fastq", "fastq")
print("Saved %i reads" % count)
This pulled out only 14580 reads out of the 41892 present. A more sensible thing to do would be to quality trim the reads, but this is intended as an example only.

Suppose GATGACGGTGT is a 5’ primer sequence we want to look for in some FASTQ formatted read data. As in the example above, we’ll use the SRR020192.fastq file downloaded from the ENA. The same approach would work with any other supported file format (e.g. FASTA files).
This code uses
Bio.SeqIO with a generator expression (to avoid loading
all the sequences into memory at once), and the
Seq object’s
startswith method to see if the read starts with the primer sequence:
from Bio import SeqIO
primer_reads = (rec for rec in \
                SeqIO.parse("SRR020192.fastq", "fastq") \
                if rec.seq.startswith("GATGACGGTGT"))
count = SeqIO.write(primer_reads, "with_primer.fastq", "fastq")
print("Saved %i reads" % count)
That should find 13819 reads from SRR020192.fastq and save them to a new FASTQ file, with_primer.fastq.
Now suppose that instead you wanted to make a FASTQ file containing these reads
but with the primer sequence removed? That’s just a small change as we can slice the
SeqRecord (see Section 4.6) to remove the first eleven
letters (the length of our primer):
from Bio import SeqIO
trimmed_primer_reads = (rec[11:] for rec in \
                        SeqIO.parse("SRR020192.fastq", "fastq") \
                        if rec.seq.startswith("GATGACGGTGT"))
count = SeqIO.write(trimmed_primer_reads, "with_primer_trimmed.fastq", "fastq")
print("Saved %i reads" % count)
Again, that should pull out the 13819 reads from SRR020192.fastq, but this time strip off the first eleven characters, and save them to another new FASTQ file, with_primer_trimmed.fastq.
Finally, suppose you want to create a new FASTQ file where these reads have their primer removed, but all the other reads are kept as they were? If we want to still use a generator expression, it is probably clearest to define our own trim function:
from Bio import SeqIO

def trim_primer(record, primer):
    if record.seq.startswith(primer):
        return record[len(primer):]
    else:
        return record

trimmed_reads = (trim_primer(record, "GATGACGGTGT") for record in \
                 SeqIO.parse("SRR020192.fastq", "fastq"))
count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq")
print("Saved %i reads" % count)
This takes longer, as this time the output file contains all 41892 reads. Again, we’re using a generator expression to avoid any memory problems. You could alternatively use a generator function rather than a generator expression, like this:
from Bio import SeqIO

def trim_primers(records, primer):
    """Removes perfect primer sequences at start of reads.

    This is a generator function, the records argument should
    be a list or iterator returning SeqRecord objects.
    """
    len_primer = len(primer) #cache this for later
    for record in records:
        if record.seq.startswith(primer):
            yield record[len_primer:]
        else:
            yield record

original_reads = SeqIO.parse("SRR020192.fastq", "fastq")
trimmed_reads = trim_primers(original_reads, "GATGACGGTGT")
count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq")
print("Saved %i reads" % count)
This form is more flexible if you want to do something more complicated where only some of the records are retained – as shown in the next example.
This is essentially a simple extension to the previous example. We are going to pretend GATGACGGTGT is an adaptor sequence in some FASTQ formatted read data, again using the SRR020192.fastq file from the ENA.
This time however, we will look for the sequence anywhere in the reads, not just at the very beginning:
from Bio import SeqIO

def trim_adaptors(records, adaptor):
    """Trims perfect adaptor sequences.

    This is a generator function, the records argument should
    be a list or iterator returning SeqRecord objects.
    """
    len_adaptor = len(adaptor) #cache this for later
    for record in records:
        index = record.seq.find(adaptor)
        if index == -1:
            #adaptor not found, so won't trim
            yield record
        else:
            #trim off the adaptor
            yield record[index+len_adaptor:]

original_reads = SeqIO.parse("SRR020192.fastq", "fastq")
trimmed_reads = trim_adaptors(original_reads, "GATGACGGTGT")
count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq")
print("Saved %i reads" % count)
Because we are using a FASTQ input file in this example, the
SeqRecord
objects have per-letter-annotation for the quality scores. By slicing the
SeqRecord object the appropriate scores are used on the trimmed
records, so we can output them as a FASTQ file too.
Compared to the output of the previous example where we only looked for a primer/adaptor at the start of each read, you may find some of the trimmed reads are quite short after trimming (e.g. if the adaptor was found in the middle rather than near the start). So, let’s add a minimum length requirement as well:
from Bio import SeqIO

def trim_adaptors(records, adaptor, min_len):
    """Trims perfect adaptor sequences, checks read length.

    This is a generator function, the records argument should
    be a list or iterator returning SeqRecord objects.
    """
    len_adaptor = len(adaptor) #cache this for later
    for record in records:
        len_record = len(record) #cache this for later
        if len(record) < min_len:
            #Too short to keep
            continue
        index = record.seq.find(adaptor)
        if index == -1:
            #adaptor not found, so won't trim
            yield record
        elif len_record - index - len_adaptor >= min_len:
            #after trimming this will still be long enough
            yield record[index+len_adaptor:]

original_reads = SeqIO.parse("SRR020192.fastq", "fastq")
trimmed_reads = trim_adaptors(original_reads, "GATGACGGTGT", 100)
count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq")
print("Saved %i reads" % count)
By changing the format names, you could apply this to FASTA files instead. This code also could be extended to do a fuzzy match instead of an exact match (maybe using a pairwise alignment, or taking into account the read quality scores), but that will be much slower.
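As a starting point for such a fuzzy match, here is a sketch (our own helper, not part of Biopython) which tolerates a limited number of substitutions, though not indels:

```python
def find_adaptor_fuzzy(seq, adaptor, max_mismatches=1):
    """Return the start index of the first adaptor hit allowing up to
    max_mismatches substitutions (no indels), or -1 if not found.

    This is a naive O(len(seq) * len(adaptor)) scan, fine for short
    adaptors; a pairwise alignment would also cope with indels.
    """
    n = len(adaptor)
    for start in range(len(seq) - n + 1):
        window = seq[start:start + n]
        mismatches = sum(1 for a, b in zip(window, adaptor) if a != b)
        if mismatches <= max_mismatches:
            return start
    return -1
```

You would then use the returned index in place of `record.seq.find(adaptor)` in the trimming functions above.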
Back in Section 5.5.2 we showed how to use
Bio.SeqIO to convert between two file formats. Here we’ll go into a
little more detail regarding FASTQ files which are used in second generation
DNA sequencing. Please refer to Cock et al. (2009) [7]
for a longer description. FASTQ files store both the DNA sequence (as a string)
and the associated read qualities.
PHRED scores (used in most FASTQ files, and also in QUAL files, ACE files and SFF files) have become a de facto standard for expressing the probability that a given base call is wrong. The early Solexa pipeline instead used “Solexa” scores, based on a different log transformation, with a different ASCII encoding in its FASTQ files. To convert an old Solexa/Illumina FASTQ file into the standard Sanger FASTQ format:

from Bio import SeqIO
SeqIO.convert("solexa.fastq", "fastq-solexa", "standard.fastq", "fastq")
If you want to convert a new Illumina 1.3+ FASTQ file, all that gets changed is the ASCII offset because although encoded differently the scores are all PHRED qualities:
from Bio import SeqIO
SeqIO.convert("illumina.fastq", "fastq-illumina", "standard.fastq", "fastq")
Note that using
Bio.SeqIO.convert() like this is much faster
than combining
Bio.SeqIO.parse() and
Bio.SeqIO.write()
because optimised code is used for converting between FASTQ variants
(and also for FASTQ to FASTA conversion).

These conversions all output the FASTQ standard as used by Sanger, the NCBI, and elsewhere (format name fastq or fastq-sanger).
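As background, a PHRED quality is Q = -10*log10(p) for error probability p, while the old Solexa scale used Q = -10*log10(p/(1-p)). A sketch of the interconversion (the function names are ours; Bio.SeqIO does this for you):

```python
from math import log10

def solexa_from_phred(q_phred):
    """Convert a PHRED quality to the old Solexa scale."""
    return 10 * log10(10 ** (q_phred / 10.0) - 1)

def phred_from_solexa(q_solexa):
    """Convert an old Solexa quality to the PHRED scale."""
    return 10 * log10(10 ** (q_solexa / 10.0) + 1)

# At high qualities the two scales converge; they only differ
# noticeably for poor quality (low) scores.
print(solexa_from_phred(40), solexa_from_phred(10))
```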
For more details, see the built in help (also online):
>>> from Bio.SeqIO import QualityIO
>>> help(QualityIO)
...
FASTQ files hold both sequences and their quality strings. FASTA files hold just sequences, while QUAL files hold just the qualities. Therefore a single FASTQ file can be converted to or from paired FASTA and QUAL files.
Going from FASTQ to FASTA is easy:
from Bio import SeqIO
SeqIO.convert("example.fastq", "fastq", "example.fasta", "fasta")
Going from FASTQ to QUAL is also easy:
from Bio import SeqIO
SeqIO.convert("example.fastq", "fastq", "example.qual", "qual")
However, the reverse is a little more tricky. You can use
Bio.SeqIO.parse()
to iterate over the records in a single file, but in this case we have
two input files. There are several strategies possible, but assuming that the
two files are really paired the most memory efficient way is to loop over both
together. The code is a little fiddly, so we provide a function called
PairedFastaQualIterator in the
Bio.SeqIO.QualityIO module to do
this. This takes two handles (the FASTA file and the QUAL file) and returns
a
SeqRecord iterator:
from Bio.SeqIO.QualityIO import PairedFastaQualIterator
for record in PairedFastaQualIterator(open("example.fasta"), open("example.qual")):
    print(record)
This function will check that the FASTA and QUAL files are consistent (e.g.
the records are in the same order, and have the same sequence length).
You can combine this with the
Bio.SeqIO.write() function to convert a
pair of FASTA and QUAL files into a single FASTQ files:
from Bio import SeqIO
from Bio.SeqIO.QualityIO import PairedFastaQualIterator
handle = open("temp.fastq", "w") #w=write
records = PairedFastaQualIterator(open("example.fasta"), open("example.qual"))
count = SeqIO.write(records, handle, "fastq")
handle.close()
print("Converted %i records" % count)
FASTQ files are often very large, with millions of reads in them. Due to the
sheer amount of data, you can’t load all the records into memory at once.
This is why the examples above (filtering and trimming) iterate over the file
looking at just one
SeqRecord at a time.
However, sometimes you can’t use a big loop or an iterator - you may need
random access to the reads. Here the
Bio.SeqIO.index() function
may prove very helpful, as it allows you to access any read in the FASTQ file
by its name (see Section 5.4.2).
Again we’ll use the SRR020192.fastq file from the ENA, although this is actually quite a small FASTQ file with less than 50,000 reads:
>>> from Bio import SeqIO
>>> fq_dict = SeqIO.index("SRR020192.fastq", "fastq")
>>> len(fq_dict)
41892
>>> fq_dict.keys()[:4]
['SRR020192.38240', 'SRR020192.23181', 'SRR020192.40568', 'SRR020192.23186']
>>> fq_dict["SRR020192.23186"].seq
Seq('GTCCCAGTATTCGGATTTGTCTGCCAAAACAATGAAATTGACACAGTTTACAAC...CCG', SingleLetterAlphabet())
When testing this on a FASTQ file with seven million reads, indexing took about a minute, but record access was almost instant.
The example in Section 18.1.5 shows how you can use the
Bio.SeqIO.index() function to sort a large FASTA file – this
could also be used on FASTQ files.
If you work with 454 (Roche) sequence data, you will probably have access to the raw data as a Standard Flowgram Format (SFF) file. This contains the sequence reads (called bases) with quality scores and the original flow information.
A common task is to convert from SFF to a pair of FASTA and QUAL files,
or to a single FASTQ file. These operations are trivial using the
Bio.SeqIO.convert() function (see Section 5.5.2):
>>> from Bio import SeqIO
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff", "reads.fasta", "fasta")
10
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff", "reads.qual", "qual")
10
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff", "reads.fastq", "fastq")
10
Remember the convert function returns the number of records, in this example just ten. This will give you the untrimmed reads, where the leading and trailing poor quality sequence or adaptor will be in lower case. If you want the trimmed reads (using the clipping information recorded within the SFF file) use this:
>>> from Bio import SeqIO
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff-trim", "trimmed.fasta", "fasta")
10
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff-trim", "trimmed.qual", "qual")
10
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff-trim", "trimmed.fastq", "fastq")
10
If you run Linux, you could ask Roche for a copy of their “off instrument” tools (often referred to as the Newbler tools). This offers an alternative way to do SFF to FASTA or QUAL conversion at the command line (but currently FASTQ output is not supported), e.g.
$ sffinfo -seq -notrim E3MFGYR02_random_10_reads.sff > reads.fasta
$ sffinfo -qual -notrim E3MFGYR02_random_10_reads.sff > reads.qual
$ sffinfo -seq -trim E3MFGYR02_random_10_reads.sff > trimmed.fasta
$ sffinfo -qual -trim E3MFGYR02_random_10_reads.sff > trimmed.qual
The way Biopython uses mixed case sequence strings to represent the trimming points deliberately mimics what the Roche tools do.
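Applying that convention yourself is straightforward; this sketch (our own helper, not a Biopython function) strips the lower case clipped regions from a sequence string:

```python
def clip_soft_trimmed(seq):
    """Remove leading/trailing lower case letters, which mark the
    regions flagged for clipping in an SFF-derived sequence string."""
    left = 0
    while left < len(seq) and seq[left].islower():
        left += 1
    right = len(seq)
    while right > left and seq[right - 1].islower():
        right -= 1
    return seq[left:right]
```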
For more information on the Biopython SFF support, consult the built in help:
>>> from Bio.SeqIO import SffIO
>>> help(SffIO)
...
>>> from Bio import SeqIO
>>> record = SeqIO.read("NC_005816.fna", "fasta")
>>> table = 11
>>> min_pro_len = 100
>>> for strand, nuc in [(+1, record.seq), (-1, record.seq.reverse_complement())]:
...     for frame in range(3):
...         length = 3 * ((len(record)-frame) // 3) #Multiple of three
...         for pro in nuc[frame:frame+length].translate(table).split("*"):
...             if len(pro) >= min_pro_len:
...                 print("%s...%s - length %i, strand %i, frame %i" \
...                       % (pro[:30], pro[-3:], len(pro), strand, frame))
re). >>> sizes = [len(rec) for rec in SeqIO.parse("ls_orchid.fasta", "fasta")] >>> sizes = [len(rec) for rec in SeqIO.parse("ls_orchid.fasta", "fasta")] this:
from Bio import SeqIO
from Bio.SeqUtils import GC
gc_values = sorted(GC(rec.seq) for rec in SeqIO.parse("ls_orchid.fasta", "fasta"))

Several of the substitution matrix examples in this chapter use an alignment summary object, which we’ll assume is called
summary_align, generated like this (see Section 6.4.1):

>>> from Bio import AlignIO
>>> from Bio import Alphabet
>>> from Bio.Alphabet import IUPAC
>>> from Bio.Align import AlignInfo
>>> alpha = Alphabet.Gapped(IUPAC.protein)
>>> c_align = AlignIO.read(filename, "clustal", alphabet=alpha)
>>> summary_align = AlignInfo.SummaryInfo(c_align)
Printed, such a substitution matrix looks something like this (the order can vary):

D   2
E  -1  1
H  -5 -4  3
K -10 -5 -4  1
R  -4 -8 -4 -2  2
    D  E  H  K  R

Using BioSQL you can load a GenBank file into a database, extract the record with its features - and get more or less the same
thing as if you had loaded the GenBank file directly as a SeqRecord using
Bio.SeqIO (Chapter 5).
Biopython’s BioSQL module is currently documented on our wiki pages.

For the examples in this chapter, suppose you want to test a module called ‘Biospam’ (no quotes); the corresponding test script is named test_Biospam.py.
Generate the expected output by running python run_tests.py -g test_Biospam.py from within the Tests directory; this writes an output file test_Biospam to the directory Tests/output. The regression testing framework is nifty enough that it’ll put the output in the right place in just the way it likes it.
Open the file Tests/output/test_Biospam and double check the output to make sure it is all correct.
python run_tests.py. This will run all of the tests, and you should see your test run (and pass!).
As an example, the
test_Biospam.py test script to test the
addition and
multiplication functions in the
Biospam
module could look as follows:
from __future__ import print_function
from Bio import Biospam

print("2 + 3 =", Biospam.addition(2, 3))
print("9 - 1 =", Biospam.addition(9, -1))
print("2 * 3 =", Biospam.multiplication(2, 3))
print("9 * (- 1) =", Biospam.multiplication(9, -1))
We generate the corresponding output with
python run_tests.py -g test_Biospam.py, and check the output file
output/test_Biospam:
test_Biospam
2 + 3 = 5
9 - 1 = 8
2 * 3 = 6
9 * (- 1) = -9
Often, the difficulty with larger print-and-compare tests is to keep track which line in the output corresponds to which command in the test script. For this purpose, it is important to print out some markers to help you match lines in the input script with the generated output.
We want all the modules in Biopython to have unit tests, and a simple
print-and-compare test is better than no test at all. However, although
there is a steeper learning curve, using the
unittest framework
gives a more structured result, and if there is a test failure this can
clearly pinpoint which part of the test is going wrong. The sub-tests can
also be run individually which is helpful for testing or debugging.
The
unittest-framework has been included with Python since version
2.1, and is documented in the Python Library Reference (which I know you
are keeping under your pillow, as recommended). There is also
online documentation
for unittest.
If you are familiar with the
unittest system (or something similar
like the nose test framework), you shouldn’t have any trouble. You may
find looking at the existing examples within Biopython helpful too.
Here’s a minimal
unittest-style test script for
Biospam,
which you can copy and paste to get started:
import unittest
from Bio import Biospam

class BiospamTestAddition(unittest.TestCase):

    def test_addition1(self):
        result = Biospam.addition(2, 3)
        self.assertEqual(result, 5)

    def test_addition2(self):
        result = Biospam.addition(9, -1)
        self.assertEqual(result, 8)

class BiospamTestDivision(unittest.TestCase):

    def test_division1(self):
        result = Biospam.division(3.0, 2.0)
        self.assertAlmostEqual(result, 1.5)

    def test_division2(self):
        result = Biospam.division(10.0, -2.0)
        self.assertAlmostEqual(result, -5.0)

if __name__ == "__main__":
    runner = unittest.TextTestRunner(verbosity = 2)
    unittest.main(testRunner=runner)
In the division tests, we use
assertAlmostEqual instead of
assertEqual to avoid tests failing due to roundoff errors; see the
unittest chapter in the Python documentation for details and for other functionality available in
unittest (online reference).
These are the key points of
unittest-based tests:
Test cases are subclasses of unittest.TestCase and cover one basic aspect of your code.
You can use the methods setUp and tearDown for any repeated code which should be run before and after each test method. For example, the setUp method might be used to create an instance of the object you are testing, or open a file handle. The tearDown should do any “tidying up”, for example closing the file handle.
The test method names all start with test_ and each test should cover one specific part of what you are trying to test. You can have as many tests as you want in a class.
The script ends with

if __name__ == "__main__":
    runner = unittest.TextTestRunner(verbosity = 2)
    unittest.main(testRunner=runner)

to execute the tests when the script is run by itself (rather than imported from run_tests.py). If you run this script, then you’ll see something like the following:
$ python test_BiospamMyModule.py
test_addition1 (__main__.BiospamTestAddition) ... ok
test_addition2 (__main__.BiospamTestAddition) ... ok
test_division1 (__main__.BiospamTestDivision) ... ok
test_division2 (__main__.BiospamTestDivision) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.059s

OK
You could of course add docstrings to the test methods; these are then shown when the tests are run, which can make the output easier to follow:

import unittest
from Bio import Biospam

class BiospamTestAddition(unittest.TestCase):

    def test_addition1(self):
        """An addition test"""
        result = Biospam.addition(2, 3)
        self.assertEqual(result, 5)

    def test_addition2(self):
        """A second addition test"""
        result = Biospam.addition(9, -1)
        self.assertEqual(result, 8)

class BiospamTestDivision(unittest.TestCase):

    def test_division1(self):
        """Now let's check division"""
        result = Biospam.division(3.0, 2.0)
        self.assertAlmostEqual(result, 1.5)

    def test_division2(self):
        """A second division test"""
        result = Biospam.division(10.0, -2.0)
        self.assertAlmostEqual(result, -5.0)

if __name__ == "__main__":
    runner = unittest.TextTestRunner(verbosity = 2)
    unittest.main(testRunner=runner)
Running the script will now show you:
$ python test_BiospamMyModule.py
An addition test ... ok
A second addition test ... ok
Now let's check division ... ok
A second division test ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.001s

OK
If your module contains docstring tests (see section 19.3),
you may want to include those in the tests to be run. You can do so as
follows by modifying the code under
if __name__ == "__main__":
to look like this:
if __name__ == "__main__":
    unittest_suite = unittest.TestLoader().loadTestsFromName("test_Biospam")
    doctest_suite = doctest.DocTestSuite(Biospam)
    suite = unittest.TestSuite((unittest_suite, doctest_suite))
    runner = unittest.TextTestRunner(sys.stdout, verbosity = 2)
    runner.run(suite)

(You will also need to import doctest and sys at the top of the test script for this to work.)
This is only relevant if you want to run the docstring tests when you
execute
python test_Biospam.py; with
python run_tests.py, the docstring tests are run automatically
(assuming they are included in the list of docstring tests in
run_tests.py, see the section below).
Python modules, classes and functions support built in documentation using docstrings. The doctest framework (included with Python) allows the developer to embed working examples in the docstrings, and have these examples automatically tested.
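For example, a (hypothetical) Biospam.addition function could carry a doctest in its docstring like this; running the module executes the embedded examples and checks their output:

```python
def addition(a, b):
    """Return the sum of a and b.

    >>> addition(2, 3)
    5
    >>> addition(9, -1)
    8
    """
    return a + b

if __name__ == "__main__":
    # Scan this module's docstrings and check every embedded example.
    import doctest
    doctest.testmod()
```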
Currently only a small part of Biopython includes doctests. The
run_tests.py script takes care of running the doctests.
For this purpose, at the top of the
run_tests.py script is a
manually compiled list of modules to test, which
allows us to skip modules with optional external dependencies which may
not be installed (e.g. the Reportlab and NumPy libraries). So, if you’ve
added some doctests to the docstrings in a Biopython module, in order to
have them included in the Biopython test suite, you must update
run_tests.py to include your module. Currently, the relevant part
of
run_tests.py looks as follows:
# This is the list of modules containing docstring tests.
# If you develop docstring tests for other modules, please add
# those modules here.
DOCTEST_MODULES = ["Bio.Seq",
                   "Bio.SeqRecord",
                   "Bio.SeqIO",
                   "...",
                  ]
#Silently ignore any doctests for modules requiring numpy!
try:
    import numpy
    DOCTEST_MODULES.extend(["Bio.Statistics.lowess"])
except ImportError:
    pass
Note that we regard doctests primarily as documentation, so you should stick to typical usage. Generally complicated examples dealing with error conditions and the like would be best left to a dedicated unit test.
Note that if you want to write doctests involving file parsing, defining
the file location complicates matters. Ideally use relative paths assuming
the code will be run from the
Tests directory, see the
Bio.SeqIO doctests for an example of this.
To run the docstring tests only, use
$ python run_tests.py doctest
Many of the older Biopython parsers were built around an event-oriented design that includes Scanner and Consumer objects.
Scanners take input from a data source and analyze it line by line,
sending off an event whenever it recognizes some information in the
data. For example, if the data includes information about an organism
name, the scanner may generate an
organism_name event whenever it
encounters a line containing the name.
Consumers are objects that receive the events generated by Scanners.
Following the previous example, the consumer receives the
organism_name event, and then processes it in whatever manner
necessary in the current application.
This is a very flexible framework, which is advantageous if you want to
be able to parse a file format into more than one representation. For
example, the
Bio.GenBank module uses this to construct either
SeqRecord objects or file-format-specific record objects.
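A toy sketch of the pattern (these class names are our own, not Biopython's actual Scanner and Consumer classes):

```python
import io

class OrganismScanner:
    """Reads a handle line by line and fires events on a consumer."""
    def feed(self, handle, consumer):
        for line in handle:
            if line.startswith("ORGANISM"):
                # fire an organism_name event with the parsed value
                consumer.organism_name(line.split(None, 1)[1].strip())

class ListConsumer:
    """Collects organism_name events into a list; a different consumer
    could build a SeqRecord, or a format-specific record object."""
    def __init__(self):
        self.names = []
    def organism_name(self, name):
        self.names.append(name)

consumer = ListConsumer()
OrganismScanner().feed(io.StringIO("ORGANISM Yersinia pestis\nFEATURES ...\n"),
                       consumer)
print(consumer.names)
```

Swapping in a different consumer changes the output representation without touching the scanner, which is exactly the flexibility described above.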
More recently, many of the parsers added for
Bio.SeqIO and
Bio.AlignIO take a much simpler approach, but only generate a
single object representation (
SeqRecord and
MultipleSeqAlignment objects respectively). In some cases the
Bio.SeqIO parsers actually wrap
another Biopython parser - for example, the
Bio.SwissProt parser
produces SwissProt format specific record objects, which get converted
into
SeqRecord objects.
This module provides a class and a few routines for generating substitution matrices, similar to BLOSUM or PAM matrices, but based on user-provided data. Additionally, you may select a matrix from MatrixInfo.py, a collection of established substitution matrices. The
SeqMat class derives from a dictionary:
class SeqMat(dict)
The dictionary is of the form
{(i1,j1):n1, (i1,j2):n2,...,(ik,jk):nk} where i, j are alphabet letters, and n is a value.
self.alphabet: a class as defined in Bio.Alphabet
self.ab_list: a list of the alphabet’s letters, sorted. Needed mainly for internal purposes
__init__(self,data=None,alphabet=None, mat_name='', build_later=0):
data: can be either a dictionary, or another SeqMat instance.
alphabet: a Bio.Alphabet instance. If not provided, construct an alphabet from data.
mat_name: matrix name, such as "BLOSUM62" or "PAM250"
build_later: default false. If true, the user may supply only an alphabet and an empty dictionary, if intending to build the matrix later. This skips the sanity check of alphabet size vs. matrix size.
entropy(self,obs_freq_mat)
obs_freq_mat: an observed frequency matrix. Returns the matrix’s entropy, based on the frequency in
obs_freq_mat. The matrix instance should be LO or SUBS.
sum(self): calculates the sum of values for each letter in the matrix’s alphabet, and returns it as a dictionary of the form
{i1: s1, i2: s2,...,in: sn}, where i is an alphabet letter and s is the sum of the values involving that letter.
print_mat(self,f,format="%4d",bottomformat="%4s",alphabet=None)
prints the matrix to file handle f.
format is the format field for the matrix values;
bottomformat is the format field for the bottom row, containing matrix letters. Example output for a 3-letter alphabet matrix:
A  23
B  12  34
C   7  22  27
    A   B   C
The
alphabet optional argument is a string of all characters in the alphabet. If supplied, the order of letters along the axes is taken from the string, rather than by alphabetical order.
The following section is laid out in the order by which most people wish to generate a log-odds matrix. Of course, interim matrices can be generated and investigated. Most people just want a log-odds matrix, that’s all.

Matrices can be stored full or half. A full protein alphabet matrix would be of the size 20x20 = 400. A half matrix of that alphabet would be 20x20/2 + 20/2 = 210. That is because same-letter entries (the matrix diagonal) appear only once, and each other pair is stored once rather than twice. Given an alphabet size of N:

Full matrix size: N*N
Half matrix size: N(N+1)/2
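The arithmetic above is easy to check with a throwaway sketch (the function names are ours):

```python
def full_matrix_size(n):
    # every ordered letter pair gets an entry
    return n * n

def half_matrix_size(n):
    # unordered pairs plus the diagonal: n*(n-1)/2 + n == n*(n+1)/2
    return n * (n + 1) // 2

print(full_matrix_size(20), half_matrix_size(20))
```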
The SeqMat constructor automatically generates a half-matrix, if a full matrix is passed. If a half matrix is passed, letters in the key should be provided in alphabetical order: (’A’,’C’) and not (’C’,’A’).
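If your raw counts may have pair keys in either order, a small helper (our own, not part of SubsMat) can normalise them to the alphabetical order SeqMat expects:

```python
def half_matrix_key(a, b):
    """Return the letter pair in alphabetical order, e.g. ('C','A') -> ('A','C')."""
    return (a, b) if a <= b else (b, a)

# Merge counts recorded in inconsistent orders into one half-matrix dictionary.
counts = {}
for pair, n in [(("C", "A"), 7), (("A", "C"), 3), (("A", "A"), 23)]:
    key = half_matrix_key(*pair)
    counts[key] = counts.get(key, 0) + n
```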
At this point, if all you wish to do is generate a log-odds matrix, please go to the section titled Example of Use. The following text describes the nitty-gritty of internal functions, to be used by people who wish to investigate their nucleotide/amino-acid frequency data more thoroughly.
Use:
OFM = SubsMat._build_obs_freq_mat(ARM)
The OFM is generated from the ARM, only instead of replacement counts, it contains replacement frequencies.
Use:
EFM = SubsMat._build_exp_freq_mat(OFM,exp_freq_table)
exp_freq_table: should be a FreqTable instance. See section 20.2.2 for detailed information on FreqTable. Briefly, the expected frequency table has the frequencies of appearance for each member of the alphabet. It is implemented as a dictionary with the alphabet letters as keys, and each letter’s frequency as a value. Values sum to 1.
The expected frequency table can (and generally should) be generated from the observed frequency matrix. So in most cases you will generate
exp_freq_table using:
>>> exp_freq_table = SubsMat._exp_freq_table_from_obs_freq(OFM) >>> EFM = SubsMat._build_exp_freq_mat(OFM, exp_freq_table)
But you can supply your own
exp_freq_table, if you wish.
Use:
SFM = SubsMat._build_subs_mat(OFM,EFM)
Accepts an OFM and an EFM, and returns a matrix holding the quotient of the corresponding values (observed frequency divided by expected frequency).
Use:
LOM=SubsMat._build_log_odds_mat(SFM[,logbase=10,factor=10.0,round_digit=1])
logbase: base of the logarithm used to generate the log-odds values.
factor: factor used to multiply the log-odds values. Each entry is generated by log(SFM[key])*factor, and rounded to the
round_digit place after the decimal point, if required.
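Numerically, each entry is computed like this sketch (the helper name log_odds_value is ours):

```python
import math

def log_odds_value(ratio, logbase=10, factor=10.0, round_digit=1):
    """Scaled, rounded log of an observed/expected frequency ratio."""
    return round(math.log(ratio, logbase) * factor, round_digit)
```

A ratio of 1 (observed matches expected) gives a log-odds of zero; over-represented replacements score positive, under-represented ones negative.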
As most people would want to generate a log-odds matrix, with minimum hassle, SubsMat provides one function which does it all:
make_log_odds_matrix(acc_rep_mat,exp_freq_table=None,logbase=10, factor=10.0,round_digit=0):
acc_rep_mat: user provided accepted replacements matrix
exp_freq_table: expected frequencies table. Used if provided, if not, generated from the
acc_rep_mat.
logbase: base of logarithm for the log-odds matrix. Default base 10.
round_digit: number after decimal digit to which result should be rounded. Default zero.
FreqTable.FreqTable(UserDict.UserDict)
alphabet: A Bio.Alphabet instance.
data: frequency dictionary
count: count dictionary (in case counts are provided).
read_count(f): read a count file from stream f. Then convert to frequencies
read_freq(f): read a frequency data file from stream f. Of course, we then don’t have the counts, but it is usually the letter frquencies which are interesting.
A 35 B 65 C 100
And will be read using the
FreqTable.read_count(file_handle) function.
An equivalent frequency file:
A 0.175 B 0.325 C 0.5
Conversely, the residue frequencies or counts can be passed as a dictionary. Example of a count dictionary (3-letter alphabet):
{'A': 35, 'B': 65, 'C': 100}
Which means that an expected data count would give a 0.5 frequency for ’C’, a 0.325 probability of ’B’ and a 0.175 probability of ’A’ out of 200 total, sum of A, B and C)
A frequency dictionary for the same data would be:
{'A': 0.175, 'B': 0.325, 'C': 0.5}
Summing up to 1.
When passing a dictionary as an argument, you should indicate whether it is a count or a frequency dictionary. Therefore the FreqTable class constructor requires two arguments: the dictionary itself, and FreqTable.COUNT or FreqTable.FREQ indicating counts or frequencies, respectively.
Read expected counts. readCount will already generate the frequencies Any one of the following may be done to geerate the frequency table (ftab):
>>> from SubsMat import * >>> ftab = FreqTable.FreqTable(my_frequency_dictionary, FreqTable.FREQ) >>> ftab = FreqTable.FreqTable(my_count_dictionary, FreqTable.COUNT) >>> ftab = FreqTable.read_count(open('myCountFile')) >>> ftab = FreqTable.read_frequency(open('myFrequencyFile'))
Getting feedback on the Biopython modules is very important to us. Open-source projects like this benefit greatly from feedback, bug-reports (and patches!) from a wide variety of contributors.
The main forums for discussing feature requests and potential bugs are the Biopython mailing lists:
Additionally, if you think you’ve found a new bug, you can submit it to our issue tracker at (this has replaced the older tracker hosted at). This way, it won’t get buried in anyone’s Inbox and forgotten about.
We encourage all our uses to sign up to the main Biopython mailing list. Once you’ve got the hang of an area of Biopython, we’d encourage you to help answer questions from beginners. After all, you were a beginner once.
We’re happy to take feedback or contributions - either via a bug-report or on the Mailing List. While reading this tutorial, perhaps you noticed some topics you were interested in which were missing, or not clearly explained. There is also Biopython’s built in documentation (the docstrings, these are also online), where again, you may be able to help fill in any blanks.
As explained in Chapter 18, Biopython now has a wiki collection of user contributed “cookbook” examples, – maybe you can add to this?
We currently provide source code archives (suitable for any OS, if you have the right build tools installed), and Windows Installers which are just click and run. This covers all the major operating systems.
Most major Linux distributions have volunteers who take these source code releases, and compile them into packages for Linux users to easily install (taking care of dependencies etc). This is really great and we are of course very grateful. If you would like to contribute to this work, please find out more about how your Linux distribution handles this.
Below are some tips for certain platforms to maybe get people started with helping out:
You must first make sure you have a C compiler on your Windows computer, and that you can compile and install things (this is the hard bit - see the Biopython installation instructions for info on how to do this).
Once you are setup with a C compiler, making the installer just requires doing:
python setup.py bdist_wininst
Now you’ve got a Windows installer. Congrats! At the moment we have no trouble shipping installers built on 32 bit windows. If anyone would like to look into supporting 64 bit Windows that would be great.
To make the RPM, you just need to do:
python setup.py bdist_rpm
This will create an RPM for your specific platform and a source RPM in the directory
dist. This RPM should be good and ready to go, so this is all you need to do! Nice and easy.
Once you’ve got a package, please test it on your system to make sure it installs everything in a good way and seems to work properly. Once you feel good about it, send it off to one of the Biopython developers (write to our main mailing list at biopython@biopython.org if you’re not sure who to send it to) and you’ve done it. Thanks!
Even if you don’t have any new functionality to add to Biopython, but you want to write some code, please consider extending our unit test coverage. We’ve devoted all of Chapter 19 to this topic.
There are no barriers to joining Biopython code development other than an interest in creating biology-related code in Python. The best place to express an interest is on the Biopython mailing lists – just let us know you are interested in coding and what kind of stuff you want to work on. Normally, we try to have some discussion on modules before coding them, since that helps generate good ideas – then just feel free to jump right in and start coding!
The main Biopython release tries to be fairly uniform and interworkable, to make it easier for users. You can read about some of (fairly informal) coding style guidelines we try to use in Biopython in the contributing documentation at. We also try to add code to the distribution along with tests (see Chapter 19 for more info on the regression testing framework) and documentation, so that everything can stay as workable and well documented as possible (including docstrings). This is, of course, the most ideal situation, under many situations you’ll be able to find other people on the list who will be willing to help add documentation or more tests for your code once you make it available. So, to end this paragraph like the last, feel free to start working!
Please note that to make a code contribution you must have the legal right to contribute it and license it under the Biopython license. If you wrote it all yourself, and it is not based on any other code, this shouldn’t be a problem. However, there are issues if you want to contribute a derivative work - for example something based on GPL or LPGL licenced code would not be compatible with our license. If you have any queries on this, please discuss the issue on the biopython-dev mailing list.
Another point of concern for any additions to Biopython regards any build time or run time dependencies. Generally speaking, writing code to interact with a standalone tool (like BLAST, EMBOSS or ClustalW) doesn’t present a big problem. However, any dependency on another library - even a Python library (especially one needed in order to compile and install Biopython like NumPy) would need further discussion.
Additionally, if you have code that you don’t think fits in the distribution, but that you want to make available, we maintain Script Central () which has pointers to freely available code in Python for bioinformatics.
Hopefully this documentation has got you excited enough about Biopython to try it out (and most importantly, contribute!). Thanks for reading all the way through!
If you haven’t spent a lot of time programming in Python, many questions and problems that come up in using Biopython are often related to Python itself. This section tries to present some ideas and code that come up often (at least for us!) while using the Biopython libraries. If you have any suggestions for useful pointers that could go here, please contribute!
Handles are mentioned quite frequently throughout this documentation, and are also fairly confusing (at least to me!). Basically, you can think of a handle as being a “wrapper” around text information.
Handles provide (at least) two benefits over plain text information:
Handles can deal with text information that is being read (e. g. reading
from a file) or written (e. g. writing information to a file). In the
case of a “read” handle, commonly used functions are
read(),
which reads the entire text information from the handle, and
readline(), which reads information one line at a time. For
“write” handles, the function
write() is regularly used.
The most common usage for handles is reading information from a file,
which is done using the built-in Python function
open. Here, we open a
handle to the file m_cold.fasta
(also available online
here):
>>> handle = open("m_cold.fasta", "r") >>> handle.readline() ">gi|8332116|gb|BE037100.1|BE037100 MP14H09 MP Mesembryanthemum ...\n"
Handles are regularly used in Biopython for passing information to parsers.
For example, since Biopython 1.54 the main functions in
Bio.SeqIO
and
Bio.AlignIO have allowed you to use a filename instead of a
handle:
from Bio import SeqIO for record in SeqIO.parse("m_cold.fasta", "fasta"): print(record.id, len(record))
On older versions of Biopython you had to use a handle, e.g.
from Bio import SeqIO handle = open("m_cold.fasta", "r") for record in SeqIO.parse(handle, "fasta"): print(record.id, len(record)) handle.close()
This pattern is still useful - for example suppose you have a gzip compressed FASTA file you want to parse:
import gzip from Bio import SeqIO handle = gzip.open("m_cold.fasta.gz") for record in SeqIO.parse(handle, "fasta"): print(record.id, len(record)) handle.close()
See Section 5.2 for more examples like this, including reading bzip2 compressed files.
One useful thing is to be able to turn information contained in a
string into a handle. The following example shows how to do this using
cStringIO from the Python standard library:
>>>>> print(my_info) A string with multiple lines. >>> from StringIO import StringIO >>> my_info_handle = StringIO(my_info) >>> first_line = my_info_handle.readline() >>> print(first_line) A string <BLANKLINE> >>> second_line = my_info_handle.readline() >>> print(second_line) with multiple lines.
This document was translated from LATEX by HEVEA. | http://biopython.org/DIST/docs/tutorial/Tutorial.html | CC-MAIN-2014-35 | refinedweb | 47,456 | 58.38 |
.
The.
To see the above application in action, click the following link to start a ClickOnce installation (requires the .NET Framework 2.0):
To download the source code for the application (in C#), click the following link:
You will also need to download an evaluation version of Studio Enterprise 2007 v1.5 (or update an earlier version) in order to build and run the application.
The following sections describe the user interface elements of the C1RibbonEarth application, all of which are provided by the C1Ribbon and C1StatusBar components.
The Home Tab provides access to the most commonly used commands. It consists of four groups:
The Actions Tab controls user preferences and provides access to infrequently used commands. It consists of two groups:
To open the Application Menu, click the C1 logo in the upper left corner of the form. The left pane of the Application Menu contains commands that apply to the entire map:
The following figure shows the Application Menu with the Open commands visible.
When the Open submenu is not visible, the right pane of the Application Menu displays a list of place names previously visited using the Find command, as shown in the following illustration. Click one of these items to reset the map to the corresponding location.
The bottom pane of the Application Menu contains a single button for exiting the application.
The Quick Access Toolbar, or QAT, appears to the right of the Application Icon above the Home Tab. Initially, it contains the same two command buttons that appear in the Zoom group on the Home Tab. It also contains a dropdown button that opens a menu for customizing the contents and position of the QAT:
The Config Toolbar appears on the right side of the ribbon on the same row as the Tabs. It contains a single dropdown menu named Style for selecting one of the pre-defined color schemes: Blue (the default), Silver, or Black. The following figure shows the Black color scheme:
The Status Bar at the bottom of the form contains a text label on the left side that usually displays the current latitude/longitude of the center of the map. It is also used to display a "Not found" message when the Find command is unable to locate the specified place name.
The right side of the Status Bar contains a track bar control that provides an alternate interface to the zoom commands. Drag the slider to adjust the zoom level continuously, or click the plus/minus buttons to invoke the Zoom In/Out commands. Note that if either extreme is reached, the corresponding command buttons are grayed out in both the Home Tab and the QAT.
There are two ways to go about designing a form that incorporates C1Ribbon. One is to use the ComponentOne Ribbon Application project template that is installed with Ribbon for .NET, which initializes C1Ribbon and C1StatusBar components containing Microsoft Word-like text formatting and viewing commands.
The other way is to add C1Ribbon and C1StatusBar components to an empty form. The ribbon automatically docks to the top, while the status bar docks to the bottom. Next, add a Panel component and set its Dock property to Fill. The Panel serves as a container for the form's content. In the sample application, this is a WebBrowser control.
WebBrowser
Note that the main form created by the project template has rounded corners and a visual appearance that differs from a standard form. This is because it inherits from C1.Win.C1Ribbon.C1RibbonForm instead of System.Windows.Forms.Form. To make this change, switch to code view and add the following line:
C1.Win.C1Ribbon.C1RibbonForm
System.Windows.Forms.Form
using C1.Win.C1Ribbon;
Then change the parent class of the form to C1RibbonForm:
C1RibbonForm
public partial class Form1 : C1RibbonForm
After switching back to the designer, the form should look something like this:
You can use the SmartDesigner feature to add new tabs, add groups to tabs, and add items to groups. For example, clicking a group name opens a toolbar that provides access to various actions and property categories. The first toolbar button, Actions, opens a dropdown menu that lets you add a new item to the group, as shown in the following figure:
After you have added an item such as a button, you can then click it to open a toolbar with commands that apply to that type of object. For example, the following figure shows how to change the large and small images associated with a newly added ribbon button. The second button on the floating toolbar opens a dialog for selecting built-in Office 2007-style images. You can also use this dialog to import images from project resources or local files.
When you use the SmartDesigner to add tabs, groups, and ribbon controls, they are given unique names such as ribbonButton1, ribbonGroup2, and so forth. Ultimately, you will have to write event handling code that performs a specific action on the form's content whenever a button is clicked or a selection is made from a combo box or dropdown menu. Therefore it is a good idea to assign meaningful names to ribbon controls as you create them, making your code easier to understand and maintain.
ribbonButton1
ribbonGroup2
Now let's examine the structure of some of the control groups in the Home Tab of the sample application.
The Map Type group contains a set of three toggle buttons for controlling the display of the map. Only one button can be selected at a time.
The following controls constitute the Map Type group:
mapGroup
RibbonGroup
mapToggleGroup
RibbonToggleGroup
mapStreetButton
RibbonToggleButton
Pressed
True
mapSatelliteButton
False
mapHybridButton
Since all three buttons have the LargeImage property set, the buttons are arranged horizontally. The application responds to button clicks by handling the PressedButtonChanged event of the RibbonToggleGroup element:
LargeImage
PressedButtonChanged
private void mapToggleGroup_PressedButtonChanged(object sender, EventArgs e)
{
// One of the buttons in the MapType group was pressed
InvokeTag(mapToggleGroup.PressedButton);
}
The PressedButton property of the RibbonToggleGroup element returns the RibbonToggleButton that was clicked. InvokeTag is a private member function that uses the Tag property of its RibbonItem argument to perform the desired action. For most RibbonButton and RibbonToggleButton elements in the sample application, the Tag property is set to the name of a JavaScript function in the HTML page displayed in the WebBrowser control. The implementation details are discussed later in this article.
PressedButton
InvokeTag
Tag
RibbonItem
RibbonButton
The Zoom group contains two command buttons for incrementing and decrementing the zoom level. Unlike the toggle buttons in the Map Type group, these buttons do not have a Pressed property.
The following controls constiture the Zoom group:
zoomGroup
zoomInButton
zoomOutButton
Since neither button has the LargeImage property set, the buttons are stacked vertically. Both buttons have the ShowInQat and ShowInQatMenu properties set to True, which causes both buttons to appear in the Quick Access Toolbar and the Customize Quick Access Toolbar dropdown menu.
ShowInQat
ShowInQatMenu
The Pan group contains four command buttons for moving the center point of the map. The structure of the Pan group is similar to that of the Zoom group except that separators are used to gain more control over how the buttons are stacked.
Without the separators, the Left, Up, and Down buttons would be stacked vertically.
The Places and Addresses group contains a text box and a command button for entering street addresses, place names, or zip codes. These controls are aligned horizontally by being placed inside a toolbar.
The following controls constitute the Places and Addresses group:
findGroup
findToolbar
RibbonToolBar
findEditBox
RibbonEditBox
KeyPress
findButton
Typically, you would use a RibbonToolBar to present formatting and alignment commands as in Word 2007.
The Google Maps API is a free beta service that lets you use JavaScript to add maps to public web sites that are free to consumers. At the time of this writing, localhost and file-based URLs are not supported. A complete discussion of the Google Maps API is beyond the scope of this article. For more information, visit the following URL:
Note that the current version of the Google Maps API requires registration for an API key (one per site). However, you do not need to register for an API key to run the C1RibbonEarth sample, as the API is already hosted at the following URL:
If you visit this site in a browser, you will see a map that supports panning by dragging the mouse, but cannot be manipulated in any other way. The map becomes fully interactive only when it is viewed within the C1RibbonEarth application.
The following listing shows a minimal implementation of a Google Maps host site:
<html>
<head>
<title>Google Maps with C1Ribbon</title>
<script src=
""
type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
function load()
{
if (GBrowserIsCompatible())
{
var div = document.getElementById("map");
var map = new GMap2(div);
map.setCenter(new GLatLng(44, -98), 3);
}
}
//]]>
</script>
</head>
<body onload="load()" onunload="GUnload()" style="margin:0">
<div id="map" style="width:100%; height:100%"></div>
</body>
</html>
The first <script> tag specifies the URL for the JavaScript API, which includes the software version and API key as part of the query string. The HTML body consists of a single <div> tag named map that occupies the entire area of the browser window without margins. When the page loads, the load() function is called to create and associate a GMap2 object with the <div> tag. The map is then centered about a specific latitude/longitude value (44, -98) at zoom level 3 (the higher the number, the more detailed the image).
<script>
<div>
map
load()
GMap2
In order for our Windows Forms application to be able to do anything interesting with this site, it needs to be able to call methods on the GMap2 object. To facilitate this, we do the following:
For example, the GMap2 object supports parameterless zoomIn() and zoomOut() methods that increment and decrement the current zoom level. The following code mirrors these methods on the <div> element:
zoomIn()
zoomOut()
div.proxy = map;
div.zoomIn = function()
{
this.proxy.zoomIn();
}
div.zoomOut = function()
{
this.proxy.zoomOut();
}
The first assignment statement creates the proxy property. Subsequent references to this property will return the GMap2 instance. Within the body of each anonymous function, the reserved word this denotes the <div> element, so the expression this.proxy returns the same GMap2 instance.
proxy
this
this.proxy
Note that user-defined methods need not mirror the exact syntax of the underlying GMap2 object. For example, the GMap2 object supports a setMapType() method that accepts one argument, a constant value that specifies the type of map to display. For convenience, our host site implementation declares three distinct parameterless methods corresponding to the three toggle buttons in the Map Type group on the Home Tab:
setMapType()
div.setMapTypeStreet = function()
{
this.proxy.setMapType(G_NORMAL_MAP);
}
div.setMapTypeSatellite = function()
{
this.proxy.setMapType(G_SATELLITE_MAP);
}
div.setMapTypeHybrid = function()
{
this.proxy.setMapType(G_HYBRID_MAP);
}
The sample application contains a WebBrowser control that navigates to the host site in the Load event of the main form. The application handles the DocumentCompleted event on the WebBrowser control by setting a Boolean flag. Once this flag has been set, the entire page has been loaded and all DOM elements are available for scripting. At this point, the following code can be used within the event handler for a ribbon button to increment the zoom level:
Load
DocumentCompleted
HtmlElement map = webBrowser.Document.GetElementById("map");
if (map != null)
map.InvokeMember("zoomIn");
The HtmlElement named map refers to the <div> tag, not the GMap2 object. The InvokeMember() method executes the named method without arguments. If arguments are required, use the overloaded version of InvokeMember. For example, the following code centers the map about a random position:
HtmlElement
InvokeMember()
InvokeMember
HtmlElement map = webBrowser.Document.GetElementById("map");
if (map != null)
{
Random r = new Random();
double lat = (r.NextDouble() * 180.0) - 90.0;
double lng = (r.NextDouble() * 360.0) - 180.0;
object[] args = { lat, lng };
map.InvokeMember("setCenter", args);
}
The corresponding user-defined method on the host site is declared as follows:
div.setCenter = function(lat, lng)
{
this.proxy.setCenter(new GLatLng(lat, lng), 3);
}
The Google Maps API supports event listeners that can be used to execute anonymous JavaScript functions whenever certain actions occur, such as mouse events. In a typical web site, you would handle these events to update DOM elements on the page. In our host site, we use window.external to convey changes in location and zoom level to the Windows Forms application.
window.external
For example, the following JavaScript code sets up a listener for the move event, which fires repeatedly while the map view is changing:
move
GEvent.addListener(map, "move", function()
{
if (window.external)
{
if (typeof(window.external.StatusText) != "undefined")
window.external.StatusText = map.getCenter().toString();
}
});
If the host site is opened in a browser outside of the Windows Forms application, window.external will be null, and no action will be taken. Otherwise, if window.external supports the StatusText property, then this event handler sets its value to the string representation of the map's center (latitude/longitude).
StatusText
In the sample application, the ObjectForScripting property of the WebBrowser control is set to an object that defines window.external for JavaScript code that executes on the HTML page:
ObjectForScripting
webBrowser.ObjectForScripting = new ScriptingObject(this);
In the preceding statement, this is an instance of the main application form (RibbonEarthForm). The ScriptingObject class is defined as follows:
RibbonEarthForm
ScriptingObject
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.AutoDispatch)]
public class ScriptingObject
{
private RibbonEarthForm me;
public ScriptingObject(RibbonEarthForm form)
{
me = form;
}
public string StatusText
{
set { me.StatusText = value; }
}
public string RecentLocation
{
set { me.RecentLocation = value; }
}
public int ZoomLevel
{
set { me.ZoomLevel = value; }
}
}
The ScriptingObject class simply saves the value of the RibbonEarthForm argument passed to its constructor and implements three write-only properties as pass-throughs to the saved form. This is necessary because the parent class of the main form (C1RibbonForm) does not have the ComVisible attribute, which is required in order for an object to serve as window.external. If the main form inherited from System.Windows.Forms.Form instead of C1RibbonForm, then the main form could have been given the ComVisible attribute and a separate ScriptingObject class would not have been necessary.
ComVisible
The RibbonEvent event is a catchall for responding to user interaction with the C1Ribbon control. In the sample application, this event is used to restore focus to the WebBrowser control whenever the user clicks a button or otherwise gives focus to the ribbon:
RibbonEvent
C1Ribbon
private void c1Ribbon_RibbonEvent(object sender, RibbonEventArgs e)
{
// Restore focus to the browser control after interacting with the ribbon
switch (e.EventType)
{
case RibbonEventType.ChangeCommitted:
case RibbonEventType.ChangeCanceled:
case RibbonEventType.Click:
case RibbonEventType.DialogLauncherClick:
case RibbonEventType.DropDownClosed:
{
if (c1Ribbon.Focused)
webBrowser.Focus();
break;
}
}
}
The C1Ribbon and C1StatusBar controls make it easy to incorporate the new Office 2007 Ribbon UI in your Windows Forms applications. Intuitive visual designers let you customize ribbon elements directly on the design surface. You can also save ribbon layouts as XML for quick reloading into another form.
C1StatusBar
Visit the following URL to download a trial version of Ribbon for .NET:
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here | https://www.codeproject.com/Articles/20369/Using-ComponentOne-Ribbon-for-NET-as-an-Interface | CC-MAIN-2017-51 | refinedweb | 2,567 | 54.32 |
Hot questions for Using Neural networks in google colaboratory
Question:
I've recently started to use Google Colab, and wanted to train my first Convolutional NN. I imported the images from my Google Drive thanks to the answer I got here.
Then I pasted my code to create the CNN into Colab and started the process. Here is the complete code:
Part 1: Setting up Colab to import picture from my Drive
(part 1 is copied from here as it worked as exptected for me
Step
Step 2:
from google.colab import auth auth.authenticate_user()
Step 3:}
Step 4:
!mkdir -p drive !google-drive-ocamlfuse drive
Step 5:
print('Files in Drive:') !ls drive/
Part 2: Copy pasting my CNN
I created this CNN with tutorials from a Udemy Course. It uses keras with tensorflow as backend. For the sake of simplicity I uploaded a really simple version, which is plenty enough to show my problems
from keras.models import Sequential from keras.layers import Conv2D from keras.layers import MaxPooling2D from keras.layers import Flatten from keras.layers import Dense from keras.layers import Dropout from keras.optimizers import Adam from keras.preprocessing.image import ImageDataGenerator
parameters
imageSize=32 batchSize=64 epochAmount=50
CNN
classifier=Sequential() classifier.add(Conv2D(32, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu')) #convolutional layer classifier.add(MaxPooling2D(pool_size = (2, 2))) #pooling layer classifier.add(Flatten())
ANN
classifier.add(Dense(units=64, activation='relu')) #hidden layer classifier.add(Dense(units=1, activation='sigmoid')) #output layer classifier.compile(optimizer = "adam", loss = 'binary_crossentropy', metrics = ['accuracy']) #training method
image preprocessing
train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True) test_datagen = ImageDataGenerator(rescale = 1./255) training_set = train_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/training_set', target_size = (imageSize, imageSize), batch_size = batchSize, class_mode = 'binary') test_set = test_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/test_set', target_size = (imageSize, imageSize), batch_size = batchSize, class_mode = 'binary') classifier.fit_generator(training_set, steps_per_epoch = (8000//batchSize), epochs = epochAmount, validation_data = test_set, validation_steps = (2000//batchSize))
Now comes my Problem
First of, the training set I used is a database with 10000 dog and cat pictures of various resolutions. (8000 training_set, 2000 test_set)
I ran this CNN on Google Colab (with GPU support enabled) and on my PC (tensorflow-gpu on GTX 1060)
This is an intermediate result from my PC:
Epoch 2/50 63/125 [==============>...............] - ETA: 2s - loss: 0.6382 - acc: 0.6520
And this from Colab:
Epoch 1/50 13/125 [==>...........................] - ETA: 1:00:51 - loss: 0.7265 - acc: 0.4916
Why is Google Colab so slow in my case?
Personally I suspect a bottleneck consisting of pulling and then reading the images from my Drive, but I don't know how to solve this other than choosing a different method to import the database.
Answer:
As @Feng has already noted, reading files from drive is very slow. This tutorial suggests using some sort of a memory mapped file like hdf5 or lmdb in order to overcome this issue. This way the I\O Operations are much faster (for a complete explanation on the speed gain of hdf5 format see this).
Question:
I have a dataset of images on my Google Drive. I have this dataset both in a compressed .zip version and an uncompressed folder.
I want to train a CNN using Google Colab. How can I tell Colab where the images in my Google Drive are?
official tutorial does not help me as it only shows how to upload single files, not a folder with 10000 images as in my case.
Then I found this answer, but the solution is not finished, or at least I did not understand how to go on from unzipping. Unfortunately I am unable to comment this answer as I don't have enough "stackoverflow points"
I also found this thread, but here all the answer use other tools, such as Github or dropbox
I hope someone could explain me what I need to do or tell me where to find help.
Edit1:
I have found yet another thread asking the same question as mine: Sadly, of the 3 answers, two refer to Kaggle, which I don't know and don't use. The third answer provides two links. The first link refers to the 3rd thread I linked, and the second link only explains how to upload single files manually.
Answer:
To update the answer. You can right now do it from Google Colab
# Load the Drive helper and mount from google.colab import drive # This will prompt for authorization. drive.mount('/content/drive') !ls "/content/drive/My Drive"
Question:
I am training a neural network for Neural Machine Translation on Google Colaboratory. I know that the limit before disconnection is 12 hrs, but I am frequently disconnected before that (after 4 or 6 hrs). The amount of time required for the training is more than 12 hrs, so I save a checkpoint every 5000 epochs.
I don't understand: when I am disconnected from the Runtime (with the GPU in use), is the code still executed by Google on the VM? I ask because I can easily save the intermediate models on Drive, and so continue the training even if I am disconnected.
Does anyone know?
Answer:
Yes, for ~1.5 hours after you close the browser window. To keep things running longer, you'll need an active tab.
Question:
I am tuning hyperparameters for a neural network via grid search on Google Colab. I got a "transport endpoint is not connected" error after my code had executed for 3-4 hours. I found out that this is because Google Colab doesn't want people to use the platform for a long time period (not quite sure though).
However, funnily, when I reopened the browser after the exception was thrown, the cell was still running. I am not sure what happens to the process once this exception is thrown.
Thank you
Answer:
In Google Colab, you can use their GPU service for up to 12 hours; after that it will halt your execution. If you ran it for 3-4 hours, it will just stop displaying data continuously in your browser window (if left idle), and refreshing the window will restore that connection.
In case you ran it for 34 hours, then it will definitely be terminated (hyphens matter). This is apparently done to discourage people from mining cryptocurrency on their platform. In case you have to run your training for more than 12 hours, all you need to do is enable checkpoints on your Google Drive, and then you can restart the training once a session is terminated; if you are good enough with the requests library in Python, you can automate it.
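A minimal sketch of the checkpoint idea (the file naming scheme here is my own invention, not a Colab feature): write the training state every few thousand epochs to a directory that lives on Drive, and look up the newest file when the session restarts:

```python
import os
import pickle
import re

def save_checkpoint(state, ckpt_dir, epoch):
    """Write the training state for one epoch; on Colab, ckpt_dir would
    live under /content/drive so it survives a terminated session."""
    os.makedirs(ckpt_dir, exist_ok=True)
    path = os.path.join(ckpt_dir, "ckpt_%06d.pkl" % epoch)
    with open(path, "wb") as f:
        pickle.dump(state, f)
    return path

def load_latest_checkpoint(ckpt_dir):
    """Return the most recently saved state, or None to start from scratch."""
    if not os.path.isdir(ckpt_dir):
        return None
    names = [n for n in os.listdir(ckpt_dir) if re.fullmatch(r"ckpt_\d{6}\.pkl", n)]
    if not names:
        return None
    # zero-padded epoch numbers make lexicographic max the newest file
    with open(os.path.join(ckpt_dir, max(names)), "rb") as f:
        return pickle.load(f)
```

With a real framework you would serialize model weights and optimizer state instead of a plain dict, but the resume logic is the same.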
Question:
Whenever I specify the path like "C:\Users\Admin\Desktop\tumor", I get a "file not found" error from cv2.imread(). Can anyone explain the correct way to read these images?
Answer:
You'll need to transfer files to the backend VM. Recipes are in the I/O example notebook:
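The underlying issue is that the notebook executes on a Linux VM which has no access to your local C:\ drive, so the path must first exist on the VM. A small helper (my own sketch; it assumes cv2 is available, as it is on Colab) that makes the failure obvious:

```python
import os

def read_image_checked(path):
    """Raise a clear error if `path` doesn't exist on the Colab VM,
    instead of letting cv2.imread silently return None."""
    if not os.path.exists(path):
        raise FileNotFoundError(
            "%r is not on this VM; upload it (files.upload) or "
            "mount Drive (drive.mount) first" % path)
    import cv2  # assumed present on the Colab VM
    img = cv2.imread(path)
    if img is None:
        raise ValueError("cv2 could not decode %r" % path)
    return img
```

After uploading or mounting, the path you pass is the VM-side path (e.g. 'tumor/scan1.png' or '/content/drive/My Drive/tumor/scan1.png'), not the Windows one.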
Or, you can use a local runtime as described here: | https://thetopsites.net/projects/neural-network/google-colaboratory.shtml | CC-MAIN-2021-31 | refinedweb | 1,178 | 57.37 |
On Tue, Oct 03, 2017 at 11:45:38AM +0800, Fengguang Wu wrote:
> Hi Josh,
>
> On Mon, Oct 02, 2017 at 04:31:09PM -0500, Josh Poimboeuf wrote:
> > On Mon, Oct 02, 2017 at 04:26:54PM -0500, Josh Poimboeuf wrote:
> > > Fengguang, assuming it's reliably recreatable, any chance you could
> > > recreate with the following patch?
>
> Sure, I'll try your patch on v4.14-rc3 since it looks the most
> reproducible kernel. For the bisected 4879b7ae05, the warning only
> shows up once out of 909 boots according to the below stats. So I'm
> not sure whether it's the _first_ bad commit. To double confirm, I
> just queued 5000 more boot tests for each of its parent commits.

Fengguang, here's an improved version of the patch based on Linus'
suggestions. If you can use it instead, that would be helpful because
it has a better chance of dumping useful data. Thanks!

diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
index d145a0b1f529..191012762aa0 100644
--- a/arch/x86/kernel/unwind_frame.c
+++ b/arch/x86/kernel/unwind_frame.c
@@ -44,7 +44,8 @@ static void unwind_dump(struct unwind_state *state)
 		state->stack_info.type, state->stack_info.next_sp,
 		state->stack_mask, state->graph_idx);
 
-	for (sp = state->orig_sp; sp; sp = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
+	for (sp = PTR_ALIGN(state->orig_sp, sizeof(long)); sp;
+	     sp = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
 		if (get_stack_info(sp, state->task, &stack_info, &visit_mask))
 			break;
 
@@ -178,7 +179,7 @@ static struct pt_regs *decode_frame_pointer(unsigned long *bp)
 {
 	unsigned long regs = (unsigned long)bp;
 
-	if (!(regs & 0x1))
+	if ((regs & (sizeof(long)-1)) != 1)
 		return NULL;
 
 	return (struct pt_regs *)(regs & ~0x1);
@@ -221,6 +222,10 @@ static bool update_stack_state(struct unwind_state *state,
 			   &state->stack_mask))
 		return false;
 
+	/* Make sure the frame pointer is properly aligned: */
+	if ((unsigned long)frame & (sizeof(long)-1))
+		return false;
+
 	/* Make sure it only unwinds up and doesn't overlap the prev frame: */
 	if (state->orig_sp && state->stack_info.type == prev_type &&
 	    frame < prev_frame_end)
| http://lkml.org/lkml/2017/10/3/516 | CC-MAIN-2018-13 | refinedweb | 321 | 56.76
Model Creation and Initialisation
When some (user) code creates a subclass of models.Model, a lot goes on behind the scenes in order to present a reasonably normal looking Python class to the user, whilst providing the necessary database support when it comes time to save or load instances of the model. This document describes what goes on from the time a model is imported until an instance is created by the user.
A couple of preliminaries: Firstly, if you just want to develop systems with Django, this information is not required at all. It is here for people who might want to poke around the internals of Django or fix any bugs that they may find.
Secondly, by the nature of the beast, class creation and initialisation delves a bit into the depths of Python's operation. A familiarity with the difference between a class and an instance, for example, will be helpful here. Section 3.2 of the Python Language Reference might be of assistance.
Unless otherwise noted, all of the files referred to here are in the source code tree in the django/db/models/ directory. For reference purposes, imagine that we have a models.py file containing the following model.
from django.db import models

class Example(models.Model):
    name = models.CharField(maxlength=50)
    static = "I am a normal static string attribute"

    def __unicode__(self):
        return self.name
Importing A Model
From the moment a file containing a subclass of Model is imported, Django is involved.
In the process of parsing the file during import, the Python interpreter needs to create Example, our subclass. The Model class (see base.py) has a __metaclass__ attribute that defines ModelBase (also in base.py) as the class to use for creating new classes. So ModelBase.__new__ is called to create this new Example class. It is important to realise that we are creating the class object here, not an instance of it. In other words, Python is creating the thing that will eventually be bound to the Example name in our current namespace.
Metaprogramming -- overriding the normal class creation process -- is not very commonly used, so a quick description of what happens might be in order. ModelBase.__new__, like all __new__ methods is responsible for setting up any class-specific features. The more well-known __init__ method, on the other hand, is responsible for setting up instance-specific features. __new__ is passed the name of the new class as a string (Example in our case), any base classes (Model and object here) -- as actual class objects, so we can look inside them, examine types and so forth -- and a dictionary of attributes that need to be installed. This attribute dictionary includes all the attributes and methods from the new class (Example) as well as the parent classes (e.g. Model). So it includes all of the utility functions from the Model class like save() and delete() as well as the new field objects we are creating in Example.
The __new__ method is required to return a class object that can then be instantiated (by calling Example() in our case). If you are interested in seeing more examples of this, the Python Cookbook from O'Reilly has a whole chapter on metaprogramming.
Installing The Attributes
Now things start to get interesting. A new class object with the Example name is created in the right module namespace. A _meta attribute is added to hold all of the field validation, retrieval and saving machinery. This is an Options class from options.py. Initially, it takes values from the inner Meta class, if any, on the new model, using defaults if Meta is not declared (as in our example).
Each attribute is then added to the new class object. This is done by calling ModelBase.add_to_class with the attribute name and object. This object could be something like a normal unbound method, or a string, or a property, or -- of most relevance here -- a Field subclass that we want to do something with. The add_to_class() method checks to see if the object has a contribute_to_class method that can be called. If it doesn't, this is just a normal attribute and it is added to the new class object with the given name. If we do have a contribute_to_class method on the new attribute object, this method is called and given a reference to the new class along with the name of the new attribute.
Putting this in the context of our example, the static and __unicode__ attributes would be added normally to the class as a string object and unbound method, respectively. When the name attribute is considered, add_to_class would be called with the string name and an instance of the models.CharField class (which is defined in fields/__init__.py). Note that we are passed an instance of CharField, not the class itself. The object we are passed in add_to_class has already been created. So that object knows it has a maxlength of 50 in our case. And that the default verbose name is not being overridden and so on. CharField instances have a contribute_to_class method, so that will be called and passed the new Example class object and the string name (the attribute name that we are creating).
When Field.contribute_to_class (or one of the similar methods in a subclass of Field) is called, it does not add the new attribute to the class we are creating (Example). Instead it adds itself to the Example._meta class, ending up in the Example._meta.fields list. If you are interested in the details you can read the code in fields/__init__.py; I am glossing over the case of relation fields like ForeignKey and ManyToManyField here, but the principle is the same for all fields. The important thing to realise is that they are not added as attributes on the main class, but, rather, they are stored in the _meta attribute and will be called upon at save or load or delete time.
There are no attributes on the final class object for the model fields (the things that are derived from Field, that is). These attributes are only added in when __init__() is called, which is discussed below.
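A toy version of the machinery described so far may help (heavily simplified, using Python 3 metaclass syntax where the code discussed here used __metaclass__; it is not Django's actual implementation):

```python
class Field:
    """Stand-in for a django Field: it knows how to install itself."""
    def contribute_to_class(self, cls, name):
        self.name = name
        cls._meta.append(self)        # recorded in metadata, not as a class attr

class ModelBase(type):
    def __new__(mcs, name, bases, attrs):
        # create the bare class object first, with empty metadata
        new_cls = super().__new__(mcs, name, bases, {"_meta": []})
        for attr_name, value in attrs.items():
            mcs.add_to_class(new_cls, attr_name, value)
        return new_cls

    @staticmethod
    def add_to_class(cls, name, value):
        if hasattr(value, "contribute_to_class"):
            value.contribute_to_class(cls, name)   # Field-like: goes into _meta
        else:
            setattr(cls, name, value)              # normal attribute or method

class Model(metaclass=ModelBase):
    pass

class Example(Model):
    name = Field()
    static = "I am a normal static string attribute"
```

After class creation, Example.static is an ordinary attribute, while the Field instance is reachable only through Example._meta, mirroring the behaviour described above.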
Preparing The Class
Once all of the attributes have been added, the class creation is just about finished. The Model._prepare method is called. This sets up a few model methods that might be required depending on other options you have selected and adds a primary key field if one has not been explicitly declared. Finally, the class_prepared signal is emitted and any registered handlers run.
The main beneficiary of receiving class_prepared at the moment is manipulators.py. It catches this signal and uses it to add default add- and change-manipulators to the class.
Registering The Class
Once Django has created the new class object, a copy is saved in an in-memory cache (a Python dictionary) so that it can be retrieved by other classes if needed. This is required so that things like reverse lookups for related fields can be performed (see Options.get_all_related_objects() for example).
The registration itself is done right at the end of the ModelBase.__new__ method (recall that we still have not completely finished this method and have not returned from it yet). The register_models() function in loading.py is called and is passed the name of the Django application this model belongs to and the new class object we are creating via __new__. However, it is not quite as simple as saving a reference to this class...
Files can be imported into Python in many different ways. We have
from application.models import Example
or
from application import models

myExample = models.Example()
or even
from project.application import models
Each of these imports is really importing exactly the same model. However, due to the way importing is implemented in Python, the last case in particular is not usually identified as the same import as the first two cases. In the last case, the models module has a name of project.application.models, whilst in the first two cases it is called application.models.
This might not be a big problem if we already used consistent import statements. However, it is convenient to be able to leave off the project name inside an application, so that we can move the application between projects. Also, although we might import something from a particular path, if Django imports it as part of working out all the installed models (which it has to do to work out reverse related fields, again), it might be imported with a slightly different name. This "slightly different name" is used to derive the application name and so, if we are not careful, we might end up registering the same model under two or more different application names: such as registering Example under application and project.application, even though it is the same Example class. And this leads to problems down the line, because if the reverse relations are computed as referring to the project.application Example class and we happen to have a copy of the application Example class in our code or shell, things do not work.
If the previous paragraph seemed a bit confusing, just bear in mind that we only want to register each model exactly once, regardless of its import path, providing we can tell that it is the same model.
We can work out whether a model is the same as one we already registered by looking at the source filename (which you can retrieve via Python's introspection facilities: see register_model() for the details, if you care). So register_model is careful not to register a model more than once. If it is called a second time for the same model, it just does nothing.
So we have created the class object and registered it. There is one last subtlety to take care of: if we are creating this model for the second time for some reason -- for example, due to a second import via a slightly different path -- we do not want to return our new object to the user. This will lead to the problem described above of one class object being used to compute things like related fields and another object being given to the user to work with; the latter will not have all the right information. The effects of making this mistake (and the difficulties in diagnosing it) are well illustrated in ticket #1796. Instead of always returning the new object we have created, we return whatever object is registered for this model by looking it up via get_model() in loading.py. If this is the first time this class object has been created, we end up returning the one we just created. If it is the second time, then the call to register_model threw away our newly created object in favour of the first instance. So we return that first instance.
In this way, by being very careful at model construction, we only ever end up with one instance of each class object. Since every Model sub-class must call ModelBase.__new__ when it is first created, we will always be able to catch the duplicates (unless there is a bug in the duplicate detection code in register_model() :-) ).
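The net effect of this registration dance can be sketched as follows (a toy registry; the real code also keys models off their source filename, as described above):

```python
_registry = {}

def register_model(app_label, model_cls):
    """First registration wins; later duplicates are thrown away."""
    _registry.setdefault((app_label, model_cls.__name__), model_cls)

def get_model(app_label, model_name):
    return _registry.get((app_label, model_name))

def make_model(app_label, name):
    """Mimics the end of ModelBase.__new__: build, register, then return
    whatever object actually ended up registered."""
    cls = type(name, (), {})
    register_model(app_label, cls)
    return get_model(app_label, name)
```

A second creation of the "same" model (e.g. via a re-import under a different path) hands back the first class object, so only one instance ever circulates.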
After all this work, Python can then take the new object and bind it to the name of the class -- Example for us. There it sits quietly until somebody wants to create an instance of the Example class.
Creating A Model Instance
When a piece of Python code runs something like
e1 = Example(name = 'Fred')
the Example.__init__() method is called. As mentioned above, this sets up the instance-specific features of the class and returns a Python class instance type.
Fortunately, after the hard work of understanding how the Example class was created, understanding the __init__() method is much easier. We really only need to consider the Model.__init__ method here, since it is assumed that any subclass with its own __init__() will eventually call the superclass __init__() method as well. We can boil the whole process down to a few reasonably simple steps.
- Emit the pre_init signal. This is caught by any code that might want to attach to the new instance early on. Currently the GenericForeignKey class uses this signal to do some setting up (see GenericForeignKey.instance_pre_init() in fields/generic.py).
- Run through all of the fields in the model and create attributes for each one (recall that the class object does not have these attributes as the field instances have all been put into the _meta attribute):
- Because assigning to a many-to-many relation involves a secondary database table (the join table), initialising these fields requires special handling. So if a keyword argument of that name is passed in, it is assigned to the corresponding instance attribute after the necessary processing.
- For normal field attributes, an attribute on the new instance is created that contains either the value passed into __init__() or the default value for that field.
- For any keyword arguments in the __init__() that remain unprocessed, check to see if there is a property on the class with the same name that can be set and, if so, call that with the passed in value.
- If any keyword arguments remain unprocessed, raise an AttributeError exception, because the class cannot handle them and the programmer has made a mistake.
- Run through any positional arguments that have been passed in and try to assign them to field attributes in the order that the fields appear in the class declaration.
- Emit the post_init signal. Nothing in Django's core uses this signal, but it can be useful for code built on top of Django that would like to hook into particular class instances prior to the creator receiving the instance back again.
At the end of these six steps, we have a normal Python class with an attribute for each field (along with all the normal non-field attributes and methods). Each of these field attributes just holds a normal Python type, rather than being any kind of special class (the exception here are relation fields, again, such as many-to-many relations). Other methods in the class can work with these attributes normally, as well as access the Field subclasses that control them via the _meta attribute on the instance.
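Steps 2 to 5 can be boiled down to a sketch like this (a simplified stand-in: a plain tuple plays the role of _meta, relation fields and signals are ignored):

```python
class SimpleModel:
    # declaration order of the fields, as Django keeps it in _meta
    fields = ("name", "age")
    defaults = {"name": "", "age": 0}

    def __init__(self, *args, **kwargs):
        # positional arguments fill fields in declaration order
        for field, value in zip(self.fields, args):
            setattr(self, field, value)
        # remaining fields come from keywords or their defaults
        for field in self.fields[len(args):]:
            setattr(self, field, kwargs.pop(field, self.defaults[field]))
        # leftover keywords may match a settable property on the class
        for key in list(kwargs):
            prop = getattr(type(self), key, None)
            if isinstance(prop, property) and prop.fset is not None:
                prop.fset(self, kwargs.pop(key))
        # anything still unclaimed is a programming mistake
        if kwargs:
            raise AttributeError("unexpected keyword(s): %s" % sorted(kwargs))
```

As in the real Model.__init__, every declared field ends up as a plain instance attribute, and an unknown keyword raises AttributeError.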
Malcolm Tredinnick
June, 2006 | https://code.djangoproject.com/wiki/DevModelCreation?version=5 | CC-MAIN-2016-22 | refinedweb | 2,404 | 61.26 |
Opened 7 years ago
Closed 2 years ago
#25167 closed New feature (duplicate)
Provide option to disable ajax text/plain response in technical_500_response
Description
Line in question:
Ticket #10841 from a long time ago made a change to return text/plain responses unconditionally when the request was made via ajax. It would be handy if this special casing of ajax requests could be disabled with a setting or something.
I just spent the last 45 minutes tracking down why the debug error pages were not rendering with styles on a new project I was basing on an older one that did not exhibit the issue.
Turns out I hit this before and the approach I came up with in the past was to monkey patch the handler and forcefully set ajax to false. Otherwise it seems like there is a lot of code to copy and also warnings in the source about writing error handlers. So I'd rather not, but I need to display the error messages during development...
Below is a sample of the monkey patch. It would probably be better to move the condition outside of the function and disable it when not in DEBUG mode. I am going to do that now I guess, but I figure it was worthwhile to raise the issue.
from django.conf import settings
from django.core.handlers.base import BaseHandler

handle_uncaught_exception = BaseHandler.handle_uncaught_exception

def _handle_uncaught_exception_monkey_patch(self, request, resolver, exc_info):
    if settings.DEBUG:
        request.is_ajax = lambda: False
    return handle_uncaught_exception(self, request, resolver, exc_info)

BaseHandler.handle_uncaught_exception = _handle_uncaught_exception_monkey_patch
I guess you use something like Chrome inspector which renders HTML just fine? It would be a nice feature, but I doubt adding a setting will be a popular solution.
There was an idea to replace settings.DEFAULT_EXCEPTION_REPORTER_FILTER with a DEFAULT_EXCEPTION_REPORTER setting: implementing that would likely make customizing this behavior easy. | https://code.djangoproject.com/ticket/25167 | CC-MAIN-2022-21 | refinedweb | 305 | 54.12
> First of all, I know this code is long and probably inefficient as > hell, but I was just practicing and so didn't take the time to sit > down and plan it out. Anyway, I stumbled on a run time error that I > can't for the life of me figure out. Everything runs smoothly until > the final call to getInput(), which asks for input of type float. When > I input any number, rather than store the value in the variable and > return to main() which should print it, it seems to loop back to the > beginning of main() instead. As in it reprompts for a single character > and then an integer and so on, until it gets to the float, at which > point it loops back again.
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <string.h>

enum {INT_NUM, LONG_NUM, FLOAT_NUM, SINGLE_CHAR, ALPHA_STRING, ALPHA_NUMERIC};
char buff[128];

void getInput(const char *prompt, short in_type, void *ptr)
{
    int err_flag = 0;
    size_t i;
    int (*test_type) (int) = isdigit;
    char *nl;

    do {
        if (err_flag)
            printf("Invalid entry.\n");
        err_flag = 0;
        printf("%s ", prompt);
        fflush(stdout);
        fgets(buff, 128, stdin);
        if ((nl = strchr(buff, '\n')))
            *nl = 0;
        switch (in_type) {
        case INT_NUM:
        case LONG_NUM:
            test_type = isdigit;
            break;
        case FLOAT_NUM:
            for (i = 0; i < strlen(buff); i++) {
                if ((!test_type(buff[i])) && (buff[i] != '.'))
                    err_flag = 1;
            }
            break;
        case SINGLE_CHAR:
            if (strlen(buff) > 1) {
                err_flag = 1;
                break;
            }
            else
                test_type = isalpha;
            break;
        case ALPHA_STRING:
            test_type = isalpha;
            break;
        case ALPHA_NUMERIC:
            test_type = isalnum;
            break;
        }

        if (in_type != FLOAT_NUM) {
            for (i = 0; i < strlen(buff); i++) {
                if (!test_type(buff[i]))
                    err_flag = 1;
            }
        }
    } while (err_flag);

    switch (in_type) {
    case INT_NUM:
        *(int *)ptr = atoi(buff);
        break;
    case LONG_NUM:
        *(long *)ptr = atol(buff);
        break;
    case FLOAT_NUM:
        *(float *)ptr = atof(buff);
        break;
    case SINGLE_CHAR:
        *(char *)ptr = buff[0];
        break;
    case ALPHA_STRING:
    case ALPHA_NUMERIC:
        strcpy((char *)ptr, buff);
        break;
    }
}

int main(void)
{
    /* declarations as implied by the calls below */
    char love;
    int stoopid;
    char hate[128];
    float crazy;

    getInput("Enter a single letter: ", SINGLE_CHAR, &love);
    printf("%c\n", love);
    memset(buff, 0, sizeof buff);
    getInput("Enter a whole number: ", INT_NUM, &stoopid);
    printf("%d\n", stoopid);
    memset(buff, 0, sizeof buff);
    getInput("Enter a string WITHOUT whitespace: ", ALPHA_STRING, hate);
    printf("%s\n", hate);
    memset(buff, 0, sizeof buff);
    getInput("Enter a floating point number: ", FLOAT_NUM, &crazy);
    printf("%f\n", crazy);

    return 0;
}
> > void getInput(const char *prompt, short in_type, void *ptr)
[...]
> /* Let's see if I understand this part of the code. This finds the ending > newline character and replaces it with the string terminator. If there is no > newline char, then it leaves it like it is, assuming that in the absence of > the newline, it must instead already end with the string terminator. Correct? > */
> /* I haven't seen the memset() function before now, but I would guess it > works like this: arg1 is string in memory that you want to work with. arg2 is > the symbol you wish to replace each element with. arg3 is the size of each > element in the string. Is this correct? Either way, I have been told that > resetting the elements in buff is unecessary as long as I test fgets() return > value when getting the input. */
> > memset(buff, 0, sizeof buff);
To be honest, I didn't bother to look at why you were zeroing buff. It isn't something I bother with.
--
| http://computer-programming-forum.com/47-c-language/1fb7446a139990b3.htm | CC-MAIN-2020-24 | refinedweb | 661 | 70.63
16 July 2012 08:20 [Source: ICIS news]
SINGAPORE (ICIS)--Asian benzene prices may continue rising in the near term to track the buoyant US and European markets, supported by tight supply of prompt cargoes globally, market sources said on Monday.
At midday, spot prices in Asia were hovering at $1,185-1,195/tonne (€972-980/tonne) FOB (free on board) Korea, $5/tonne higher than last Friday’s closing levels and up $130-135/tonne from 1 June, according to ICIS.
“Prices are likely to firm for the time being [as] all regions look very tight,” a Singapore-based trader said.
Asia is a key exporter of benzene to the
The
Strong crude prices have been supporting the gains in the global benzene market, with US spot prices of the aromatics product at 1,563-1,608/tonne FOB (free on board) Barges on 13 July, up by $393-421/tonne from 1 June, according to ICIS.
Prompt supply in the
The same situation is persisting in the European benzene market, driving up spot prices, they said.
Asian suppliers are scrambling to get hold of more spot cargoes to export to the
“Everyone wants to ship to the
Freight rates for loading 6,000-9,000 tonnes of benzene from
Asia is expected to ship at least 60,000 tonnes of benzene to the
These estimated volumes may go up further if traders can get hold of more vessels, said the Korean trader.
But
The inter-month spread for August and September loading cargoes were assessed at a $35/tonne backwardation on 13 July.
Benzene production from a number of Asian crackers has been reduced in recent weeks due to poor margins, while increasing use of lighter feedstocks like liquefied petroleum gas (LPG) at crackers also leads to lower aromatics output, traders said.
In South Korea, production at some TDP units was cut in recent weeks because of squeezed margins, while in Japan, production hiccups are limiting benzene supply, they said.
Late last week, JX Nippon Oil was forced to shut production at its Mizushima-based refinery for safety checks, which may result in less supply of benzene from Japan for the US market, traders said.
The current uptrend in the Asian benzene market, however, does not reflect the weakness in downstream styrene monomer (SM) and phenol sectors.
“It is mostly short covering by traders which is driving up the [benzene] prices over here, in USG (US Gulf) and even in Europe,” said another Singapore-based trader.
| http://www.icis.com/Articles/2012/07/16/9578361/Asia-benzene-may-firm-up-in-near-term-on-tight-supply.html | CC-MAIN-2014-10 | refinedweb | 421 | 57.74
In this series I will comment on what I like in some of the languages I use. I will cover things that I find convenient, things that might lead me to write correct code, things that tend to make my code more readable, and possibly other things that I just like for no clearly-articulated reason. The purpose of this series is to help me think about what features I would put in a language if I created my own.
Posts in this series: Syntax, Deployment, Metaprogramming, Ownership
What aspects of syntax and layout – how the code appears in your text editor – do I like? This is entirely irrelevant to the computer that is running the code, and an implementation detail of the language compiler or interpreter, but is extremely important to me.
- Python and Scheme’s low-symbol approach. Python and Scheme tend towards the use of English words, rather than symbols to express concepts. This makes code easier to read for someone unfamiliar with the language, and also for me.
- Python’s unambiguous block structure. Python uses indentation to express block structure (e.g. for variable scope, namespaces and logic structures). Programmers in most languages (notably C-like and Lisp-like ones) tend to use indentation to help humans understand the block structure of their code, but the computer uses different symbols to understand the same thing. This can lead to ambiguity where a human may mis-read the real block structure of some code because the indentation is inconsistent with the symbols. Python avoids the problem by making indentation the syntax understood by the compiler. [Go avoids the same problem by stating that code with inconsistent indenting (as defined by the gofmt program) is invalid.]
- Scheme’s simplicity. Scheme (like other Lisp dialects) has very simple rules to describe its syntax. This means you need to learn very little before you can understand the structure of Scheme programs, and it is unlikely you will be confused by rare structures.
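The second point can be illustrated with a tiny example (my own): the indentation the reader sees is the same thing the interpreter parses, so the two cannot disagree.

```python
def sign(n):
    if n < 0:
        label = "negative"
    else:
        label = "non-negative"
    # the dedent alone ends the if/else block; there is no separate
    # brace or keyword for the indentation to contradict
    return label
```

In a brace-delimited language the same function could be indented misleadingly while still compiling; here the misleading version simply is a different program.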
Of course, there are trade-offs in all these points. Using fewer symbols can make programs verbose (although I find Python feels very concise). The lack of an end-of-block symbol makes short Python snippets look clean, but can make longer sections hard to understand (perhaps better editor support would help with this?). Scheme’s use of brackets for everything means there are a lot of brackets, so working out which does what job can be difficult.
Nevertheless, the goals of reducing the number of symbols with special meaning, allowing humans and computers to use the same ways of understanding code, and being as simple as possible are good goals to have.
| https://www.artificialworlds.net/blog/2012/02/10/goodness-in-programming-languages-part-1-syntax-and-layout/ | CC-MAIN-2021-10 | refinedweb | 457 | 60.35
I ran into an interesting max javascript feature yesterday. I found the workaround, but thought I’d mention it. Basically, initializing a String at declaration will interfere with testing for a file with the same name as that string.
Or, more verbosely, the following function will always report that the file with the filename passed to it exists, even if the file does not.
function checkForFile(fileName) //check to see if file exists
{
var s = new String(fileName);
f = new File(s, "read");
if (f.isopen) //if succeed in opening file
{
f.close();
post("found file: " + fileName + "\n");
return (true);
}
else //file doesn’t exist
{
f.close(); //anal retentively close file
post("could not find file: " + fileName + "\n");
return (false);
}
}
If I change the string declaration to be like this, however:
var s = new String();
s = fileName;
Then it works fine. Don’t pick on programming style, by the way. I know that it is fully unnecessary to declare s in the first place. This was just a concise example. It is not specific to a local function either. If a string is declared with that text outside of the function, the function still misreports the file.
String is obviously part of javascript 1.5, but File IO is a max/msp add. I’m curious about what’s going on under the hood to make that happen.
sasha
Can't help with an answer, but I can confirm that it is happening for me too.
I had struck this the first time I used f.isopen, and had incorrectly thought that it could not be used to check for the existence of files, so I have been laboriously trawling through folder objects to check for files instead.
Sorry that I can't help, but thank you for posting this.
| https://cycling74.com/forums/topic/string-declaration-and-opening-files/ | CC-MAIN-2015-48 | refinedweb | 313 | 73.37
Hi all,
after upgrading to BW 7.5 on HANA, changing the sort order of date characteristics like 0CALMONTH and 0CALYEAR does not work anymore in BOA 2.4, if those characteristics are restricted within the Query. If a sorting other than the Default is defined for the characteristic in the Query, it is displayed correctly on the first execution of the Query. If the sorting is now changed in BOA, it is applied and cannot be changed again.
Sorting is possible when moving the "Characteristic Restrictions" to the "Default Values".
Other characteristics seem to not have the problem.
Is there any way to solve this issue? Can anyone check if sorting of those characteristics works in a later BOA version with BW 7.5. on HANA?
Thanks
Stephan
SAP provided Note 2654334 for this.
Dear Olga,
thanks for the information. The note solved our problem.
Regards
Michael
Hi Michael,
our system is BW 7.5 SP10, so 2494060 - which I had already checked - does not apply.
I found out that sorting by text does work for characteristics in the customer namespace (Z* in our case) referencing SAP time charecteristics. It seems only the 0* InfoObjects cannot be sorted by text.
Regards
Stephan
Hello Stephan - I haven't experienced any issue with 2.6 SP1 on 7.5 (but not using HANA)
Is there a chance you could download 2.6 with the latest SP on a desktop to try yourself? 2.4 is several years old...
Hello Tammy, thank you for the quick reply!
I will try to get 2.6 SP1 (or later) installed. It may take a while, since I do not have the authorization to do this myself.
Hello Tammy, after installing Version 2.6 SP1 sorting in a query still does not work properly. There is no problem running the same query in one of the other (non-HANA) Systems. I am afraid we have to create an incident. regarding this.
Hi Stephan,
did you already hear something concerning your incident?
We suffer from the same problem and I just tested with BOA 2.6 SP2 but the problem still exists. We're also running BOA on BW 7.5 on HANA. Beside of the dates 0CALMONTH and 0CALYEAR, I found the error with the date fields 0FISCPER and 0FISCYEAR. For example in the SAP demo query 0D_PU_C01_Q0016. If you restrict 0CALYEAR or 0CALMONTH you can't sort them properly, whereas the purchasing organization (in that case 0D_PUR_ORG) works perfectly fine.
best regards
Michael
P.S.: Mabye it's a Java problem. The sorting error only occurs in the BOA, SAP Portal or Java Web (via RSRT) but I couldn't reproduce it using the BEx Analyzer or ABAP BICS mode in the RSRT.
Hi Michael,
we are still in contact with SAP regarding this issue. It seems all time characteristics are affected - although I have not tested it with 0CALDAY, -WEEK, etc..
Also, I found out yesterday, that Design Studio has the same problem. WebI seems to be working.
There are workarounds available:
1) Move the filter values to the default values.
2) Use a second characteristic containing the same values as the time characteristic for filtering.
Both workarounds are not feasible for us, because we would have to change hundreds of Queries. I will update this thread if we get any information from SAP.
Grüße, Stephan
Hallo Stephan,
changing all the queries wouldn't be a solution for us as well. So I will look forward what you achieve. We will patch our Java Stack the next days and I will let you know, if this solved anything.
By the way, how did you open the WebI file? As far as I know, they can be opened in two different ways. If you just double click on them in the BI launchpad then they will open in a HTML mode but if you right click on them and select "Change" then they will open in a Java runtime enviroment. Do you think you can compare the two modes?
regards
Michael
Hallo Michael,
I created a new document, changed the sorting in editing mode several times and saved the document. The selected sorting is still used when I just double-click the file. Not sure how to sort anything in display mode, I am not really familiar with WebI.
Regards
Stephan
Tried it out today with WebI also and couldn't reproduce the problem here either.
Today we installed the latest patches for NW7.50 SPS09 in our sandbox system but the error still exists... :( Because of that, we opened an incident at SAP. Hope, they can help us.
Best regards
Michael
Hallo Michael,
SAP released note 2622886 today. After implementing the correction I am not entirely sure it is working as expected: when sorting a time characteristic by text, it seems the sorting is still not working. I am aware that sorting a time characteristic by text does not make much sense, but I think it should work as expected anyway.
Can you confirm this behaviour?
Regards
Stephan
Hi Stephan,
we just implemented the note and it helped. In our system the sorting works with the values as well as with the Texts (we run BW 7.5 SP9).
Did you have a look at the required note 2494060 ()?
Best regards
Michael | https://answers.sap.com/questions/455520/boa-sorting-date-characteristics-in-query-with-bw.html | CC-MAIN-2018-30 | refinedweb | 905 | 76.22 |
A test case isn’t just a test case: it lives in the (generally extremely large) space of inputs for the software system you are testing. If we have a test case that triggers a bug, here’s one way we can look at it:
The set of test cases triggering a bug is a useful notion since we can search it. For example, a test case reducer is a program that searches for the smallest test case triggering a bug. It requires a way to transform a test case into a smaller one, for example by deleting part of it. The new variant of the test case may or may not trigger the bug. The process goes like this:
I’ve spent a lot of time watching reducers run, and one thing I’ve noticed is that the reduction process often triggers bugs unrelated to the bug that is the subject of the reduction:
Sometimes this is undesirable, such as when a lax interestingness test permits the reduction of one bug to get hijacked by a different bug. This happens all the time when reducing segfaults, which are hard to tell apart. But on the other hand, if we’re looking for bugs then this phenomenon is a useful one.
It seems a bit counterintuitive that test case reduction would lead to the discovery of new bugs since we might expect that the space of inputs to a well-tested software system is mostly non-bug-triggering with a few isolated pockets of bug-triggering inputs scattered here and there. I am afraid that that view might not be realistic. Rather, all of the inputs we usually see occupy a tiny portion of the space of inputs, and it is surrounded by huge overlapping clouds of bug-triggering inputs. Fuzzers can push the boundaries of the space of inputs that we can test, but not by as much as people generally think. Proofs remain the only way to actually show that a piece of software does the right thing any significant chunk of its input space. But I digress. The important fact is that reducers are decently effective mutation-based fuzzers.
In the rest of this post I’d like to push that idea a bit farther by doing a reduction that doesn’t correspond to a bug and seeing if we can find some bugs along the way. We’ll start with this exciting C++ program
#include <iostream> int main() { std::cout << "Hello World!++" << std::endl; }
and we’ll reduce it under the criterion that it remains a viable Hello World implementation. First we’ll preprocess and then use delta’s topformflat to smoosh everything together nicely:
g++ -std=c++11 -E -P -O3 hello-orig.cpp | topformflat > hello.c.
Anyhow, the result is 550 KB of goodness:
Here’s the code that checks if a variant is interesting — that is, if it acts like a Hello World implementation:
g++ -O3 -std=c++11 -w hello.cpp >/dev/null 2>&1 &&
ulimit -t 1 &&
./a.out | grep Hello
The ulimit is necessary because infinite loops sometimes get into the program that is being reduced.
To find compiler crashes we’ll need a bit more elaborate of a test:
if
g++ -O3 -std=c++11 -w hello.cpp >compiler.out 2>&1
then
ulimit -t 1 &&
./a.out | grep Hello
else
if
grep 'internal compiler error' compiler.out
then
exit 101
else
exit 1
fi
fi
When the compiler fails we look at its output and, if it contains evidence of a compiler bug, exit with code 101, which will tell C-Reduce that it should save a copy of the input files that made this happen.
The compiler we’ll use is g++ r231221, the development head from December 3 2015. Let’s get things going:
creduce --nokill --also-interesting 101 --no-default-passes \
--add-pass pass_clex rm-tok-pattern-4 10 ../test.sh hello.cpp
The -also-interesting 101 option indicates that the interestingness test will use process exit code 101 to tell C-Reduce to make a snapshot of the directory containing the files being reduced, so we can look at it later. --no-default-passes clears C-Reduce’s pass schedule and -add-pass pass_clex rm-tok-pattern-4 10 add a single pass that uses a small sliding window to remove tokens from the test case. The issue here is that not all of C-Reduce’s passes are equally effective at finding bugs. Some passes, such as the one that removes dead variables and the one that removes dead functions, will probably never trigger a compiler bug. Other passes, such as the one that removes chunks of lines from a test case, eliminate text from the test case so rapidly that effective fuzzing doesn’t happen. There are various ways to deal with this problem, such as probabilistically rejecting improvements or rejecting improvements that are too large, but for this post I’ve chosen the simple expedient of running just one pass that (1) makes progress very slowly and (2) seems to be a good fuzzer.
The dynamics of this sort of run are interesting: as the test case walks around the space of programs, you can actually see it brush up against a compiler bug and then wander off into the weeds again:
The results are decent: after about 24 hours, C-Reduce caused many segfaults in g++ and triggered six different internal compiler errors: 1, 2, 3, 4, 5, 6. One of these was already reported, another looks probably like a duplicate, and four appear to be new.
I did a similar run against Clang++ r254595, again the development head from December 3. This produced segfaults and also triggered 25 different assertions:
llvm/include/llvm/Support/Casting.h:230: typename cast_retty
I have to admit that I felt a bit overwhelmed by 25 potential bug reports, and I haven’t reported any of these yet. My guess is that a number of them are already in the bugzilla since people have been fuzzing Clang lately. Anyway, I’ll try to get around to reducing and reporting these. Really, this all needs to be automated so that when subsequent reductions find still more bugs, these just get added to the queue of reductions to run.
If you were interested in reproducing these results, or in trying something similar, you would want to use C-Reduce’s master branch. I ran everything on an Ubuntu 14.04 box. While preparing this post I found that different C-Reduce command line options produced widely varying numbers and kinds of crashes.
Regarding previous work, I believe — but couldn’t find written down — that the CERT BFF watches out for new crashes when running its reducer. In a couple of papers written by people like Alex Groce and me, we discussed the fact that reducers often slip from triggering one bug to another.
The new thing in this post is to show that triggering new bugs while reducing isn’t just some side effect. Rather, we can go looking for trouble, and we can do it without being given a bug to reduce in the first place. A key enabler for easy bug-finding with C-Reduce was finding a simple communication mechanism by which the interestingness test can give C-Reduce a bit of out-of-band information that a variant should be saved for subsequent inspection. I’m not trying to claim that reducers are awesome fuzzers, but on the other hand, it might be a little bit awesome that mutating Hello World resulted in triggering 25 different assertion violations in a mature and high-quality compiler. I bet we’d have done even better by starting with a nice big fat Boost application.
28 thoughts on “Reducers are Fuzzers”
Triggering 25 different assertions in Clang is really amazing. Well done!
How well tested are compilers for robustness in the face of invalid inputs, though? These wouldn’t seem to be high priority bugs.
As I already told John in emails about this, what really interests me is:
How does a reducer fare against a C++ mutation tool? That is, if you had a competitor that just generates similar valid C++ programs to your starting point, does it behave much like this, or is there something special about reduction? As far as I know, there aren’t any free, useful, working C++ mutation tools. I tried using Jamie Andrews’ nice C tool, but of course templates + operator replacement ends in tears. The tiny set of actual syntactically valid things generated didn’t do anything useful.
Most mutation-based fuzzers for anything with a real grammar need to at least try to stay close to that grammar, so a pure file-fuzzer will also be useless. So is C-Reduce 1) just nice to have around for this in absence of a better mutator or 2) actually doing something special? I would place blind money on 2, but I’m not sure why. I feel like reduction does push you somewhere interesting in the space.
PS ==yang in that 25 is quite amazing.
PS anyone know of a C++ mutation tool?
Should the first line of second paragraph say “set of test cases” instead of “set of programs”?
Paul, as a compiler implementor, the reason you look at these bugs is that they cause the compiler the become internally inconsistent. Just because I’ve triggered these inconsistencies with invalid inputs doesn’t mean they cannot also be triggered by valid ones.
Magnus, thanks, fixed, I thought I had gotten all of those already, as you might guess I started out writing this post one way and then switched around the discussion to be more general in the beginning.
ICEs on invalid input are only interesting if they
happen _before_ an error was reported by the compiler.
All your gcc bugs don’t fall in this category, so they
have very low priority.
octoploid: How about the LLVM ones?
I cannot speak for the LLVM developers and I haven’t seen the full compiler output for these assertion hits.
But one thing to remember is that at a certain point you will have to trade compilation speed for fuzzer robustness.
And because a compiler isn’t some daemon, that runs 24/7 and is vulnerable to security attacks, compilation speed should win.
octoploid, of course. I just throw bugs over the fence and they can do whatever they like with them, it’s their tool and their time.
John,
I suspect that many of the crashes occur because of syntax/semantic errors in the reduced code that get the compiler into the unexpected state that causes the segfault/assertion failure (I have not checked). These are very low priority problems and I imagine are not worth fixing (the user fixes the syntax/semantic issue and they go away).
Some thoughts on fuzzing in general:
Isn’t it amazing that such simple fuzzing still finds piles of bugs? I would have thought that those bugs would long have been found because this is such a simple program that was reduced. I guess that means that nobody was looking.
tobi, I think the reducer is exploring corners of the input space that just don’t get looked at very often.
Derek, see #12.
How hard would it be to detect octoploid’s “ICE before error” criteria? I assume it wouldn’t be too hard, given a predicate for that, to set up a search for case that satisfy it. If the ICE is after many errors, you might make the reduction target be how short an *error output* you can generate that still provokes the ICE.
As an aside: why doesn’t c-reduce keep ALL inputs? (How fast would that fill an xTB local drive? If you tgz’ed them?) If you wanted to get fancy, you could keep track of results and provenance via a rats nest of symlinks.
@bcs:
It is pretty easy: just add -Wfatal-errors to the compiler invocation. The compiler then stops at the first error it encounters. And if it isn’t an ICE, just move on.
An argument for fixing “low priority” bugs found by the fuzzer is that unless you do this, the fuzzer quickly becomes useless, since it keep spitting out the same unfixed bugs, which may conceal any higher priority bugs it finds.
@octoploid, that sort of defeats the point. The point is to attempt to mutate the ICE-after-error into an ICE-before-error.
This is based on the assumption that there are real relevant (by whatever definition) bugs where the first time it’s encountered, it will come after an error and that it will be less costly to reduce that case than find it again as a case were it comes first.
Just run octoploid’s method as a second pass, internal to the script you pass C-Reduce. Detect ICE-after-error just as John is now, then throw it at C-Reduce set up to preserve ICE, and store/terminate if it ever hits an ICE-before-error. Since C-Reduce isn’t starting in a good state, it may not hit one, but by preserving same-ICE you can at least explore the space and see if you can get a before-error ICE.
My general approach with C-Reduce has been to try to keep it deterministic, so that files do not need to be saved, but rather can be regenerated easily.
But there are two threats to determinism. First, timeouts in the interestingness test admit a bit of nondeterminism, not sure how often that comes up in practice. Second, I’m not 100% certain that multicore C-Reduce is deterministic, although the algorithm was designed to be.
@Alex, that’s more or less what I was proposing.
@regehr
C-Reduce is deterministic? That seems…. risky to me. It runs the risk that that determinism could inadvertently close off whole sections of the search space. A less risky way to get that might be to use a randomly seeded PRNG and log the seed.
In addition to that risk, if it takes 24 hours to do a reduction to get to a given point in a reduction, should I assume it would take hours to regenerate any random midpoints? If so, I’d think being able to skip that even once or twice a semester would be worth the cost of storage (~$0.03/GiB for HDDs last I checked).
Also, I’m still wondering what a backtracking solution would find. E.g. pick a starting point based on it’s size and how many times you’ve tried it before and churn on it for a set time or number of mutations, then start again with another.
Yes, BFF (And FOE) will watch for new crashes seen during minimization. This was introduced with BFF 2.5 and FOE 2.0:
In particular, look at the bottom arrow labeled “minimizer finds other crashers”
For the full blog posts see and—behind-the-scenes-of-bff-and-foes-crash-recycler.html
WD, thanks for the links!
bcs, I hear you, but the space of programs is so unimaginably vast that I really don’t think probabilistic searching is necessary. Also, the shape of the space tends to ensure that there are many ways to get to the same place, so backtracking isn’t necessarily profitable.
Regarding saving variants generated along the way, as you suggest this is generally totally feasible.
This is a side point,.
Well, “of course”: those trigger the definition of some macros (), so that standard library headers can provide more or less optimizing definitions ^_^. Of course those definitions are supposed to be equivalent (that is, valid optimizations), but not necessarily for fuzzing.
This is really intriguing. Now I’m pondering how to hook something like this up to llvm-stress and bugpoint for backend stuff.
And please do file those clang/llvm bug reports (feel free to CC me on them if you like). I’d love to get all of them cleaned up.
Thanks Jim!
I think adding a quick check in bugpoint to see if it has detected a new crash would be a really nice idea. | http://blog.regehr.org/archives/1284 | CC-MAIN-2016-44 | refinedweb | 2,723 | 68.81 |
Hey, Scripting Guy! We have a lot of users who let unread email messages pile up in their Outlook Inboxes; after awhile, that causes their Inboxes to fill up. How can I write a script users could run that would delete any unread messages that are more than 6 months old?
— ST
Hey, ST. You know, sometimes it’s tough being a Scripting Guy. After all, people don’t come to the Script Center just hoping to pick up a tip or two about system administration scripting; instead, people come to the Script Center hoping to be uplifted and inspired. Even well-known TV personalities like Oprah Winfrey and Dr. Phil don’t bother trying to help people anymore; instead, they simply say, “Listen, you want to be uplifted and inspired? Then read the Hey, Scripting Guy! column every morning.”
Now, as a general rule, the Scripting Guys welcome the responsibility of spreading joy and happiness throughout the world. However, fulfilling their roles as the Happiness Fairies can be difficult at times, especially during a week like this one. What’s so bad about this week? Boy, where do we start? For one thing, the Scripting Guy who writes this column started the week off with a car that was broken and could never be fixed; saw that change to a car that could be fixed simply by replacing a $1.25 part; then ended the week with a car that was broken and could never be fixed.
Meanwhile, Scripting Guy Jean Ross made arrangements to have a few things taken care of, the only stipulation being that everything had to be taken care of by July 3rd. No problem, she was told; everything will be done by July 3rd. When she went in to sign the final contract they told her, “Here you go, ma’am. Everything will be done by July 13th, just like you asked.”
Let’s see, what else …. Well, TechNet is planning to make some changes to the publishing process, changes that will make the Scripting Guys’ work life much … better …. (Apparently TechNet didn’t think it was challenging enough for a two-person team to write, publish, and maintain the entire Script Center. Therefore they decided to up the ante a little.) As for the icing on the cake, we’re also in the midst of performance reviews here at Microsoft. We’re probably not supposed to share this information, but here’s an excerpt from last year’s performance review for the Scripting Guy who writes this column:
Too bad for him that they don’t have a category for consistency. That he has.
But hey, you didn’t come to the Script Center to listen to the Scripting Guys bemoan their fates, did you? Instead, you came here because Oprah Winfrey and Dr. Phil promised that the Scripting Guys would uplift and inspire you. OK, let’s see what we can do about that … uplifting and inspiring … hmmm …
Oh, we know: Suppose we show you a script that can delete all the unread messages in your Outlook Inbox that are more than 6 months old? You know, a script like this one:(“[UnRead] = True”)
For i = colFilteredItems.Count to 1 Step – 1
If DateDiff(“m”, colFilteredItems(i).ReceivedTime, Now) > 6 Then
colFilteredItems(i).
End If
Feel better? Good; after all, your happiness is the only thing that matters to the Scripting Guys. Of course, you’d probably feel even better if you understood how this script works. Very well; let’s see if the two Happiness Fairies can help you with that.
To begin with, we Happiness Fairies start out by defining a constant named olFolderInbox, then assign olFolderInbox the value 6; we’ll use this constant to tell the script which Outlook folder to work with. After defining the constant we create an instance of the Outlook.Application object, then use the GetNamespace method to connect to the MAPI namespace. (A required step, even though the MAPI namespace is the only namespace we can connect to.) Finally, we use the GetDefaultFolder method to bind to the Outlook Inbox:
Set objFolder = objNamespace.GetDefaultFolder(olFolderInbox)
Now we’re ready to roll. With the connection complete we use the following line of code to retrieve a collection of all the items (that is, all the email messages) found in the Inbox folder:
Set colItems = objFolder.Items
That’s a good point: we’re really only interested in unread emails, aren’t we? That’s fine; all we have to do is apply a filter and create a “sub-collection” (named colFilteredItems) that contains only the unread emails found in the Inbox folder:
Set colFilteredItems = colItems.Restrict(“[UnRead] = True”)
We’re not going to discuss email filtering in any detail today; if you’d like to know more about filtering email take a look at our Office Space article on that very subject. About all we’ll say here is that we’re limiting our collection to items where the Unread property is True; as you might expect, if the Unread property is True that means that the message hasn’t been read yet.
Our next step is to loop through the entire collection of unread messages and then delete any emails more than 6 months old. In theory, we could have expanded our filter so that we limited the sub-collection to messages that were both unread and more than 6 months old. We didn’t bother with that simply because we were afraid the filter would become unduly complicated. Instead, we’re going to loop through all the unread messages, checking each one to see if it’s more than 6 months old. If it is, we’ll delete it. If it’s not, we won’t.
Just exactly the way Dr. Phil would do things.
Because we’re going to be deleting items we need to start our loop at the bottom (that is, with the very last email in the collection) and then work our way to the top (the very first email in the collection). That’s why we have a For Next loop that starts with the last item (whose index number can be determined by using the Count property) and works its way towards the first item, the one with the index number 1:
For i = colFilteredItems.Count to 1 Step -1
What’s that? You say you have two questions? Look, we don’t really feel like answering a bunch of – no, sorry. After all, we are the Happiness Fairies; we have a job to do. With that in mind, we’d be thrilled to answer your questions. As to the first question (“Why the Step -1 parameter?”), well, that’s easy: Step -1 is the key to doing a “backwards” loop. Let’s say we have 100 items in our collection. In that case, the counter variable i starts out equal to 100, and the first time through the loop we’ll be working with item 100. When we’re done with that item we need to turn our attention to item 99. How do we get from item 100 to item 99? That’s right: by subtracting 1 from the current value of our counter variable. That’s what Step -1 is for.
As for your other question (“Why does deleting items require us to start at the bottom and work our way up?”), well, take a peek at this Hey, Scripting Guy! column for an explanation.
As we noted, inside the loop we need to take a look at each email and determine whether the message is more than 6 months old. That’s what this line of code is for:
If DateDiff(“m”, colFilteredItems(i).ReceivedTime, Now) > 6 Then
So what are we actually doing with that line of code? Well, we’re using VBScript’s DateDiff function to determine the number of months (the “m” parameter) between the date and time the message was received (colFilteredItems(i).ReceivedTime) and the current date and time (Now). If the number of months is greater than 6 we then use this line of code to delete the message:
colFilteredItems(i).
And if the number of months is not greater than 6 then we don’t do anything at all.
Which, as our performances reviews suggest, is what we do best.
From there we loop around and repeat the process with the next email in the collection. By the time we exit the loop we’ll have deleted all the unread emails that are more than 6 months old.
At any rate, we hope that you found today’s column uplifting and inspiring, ST. If not, here’s something that might help. Remember, no matter how bleak things might look, they could be worse: you could be a Scripting Guy.
Join the conversationAdd Comment
thanks
This is way too complicated for me. I'll just delete them one by one…ho-hum, this will take awhile. Have a great day, script guy.
Chris
I asked the same question as I have over 12000 junk unread but as long as outlook stores them for free who cares, interesting though that you show how easy it is to remedy. | https://blogs.technet.microsoft.com/heyscriptingguy/2007/06/30/how-can-i-delete-unread-emails-that-are-more-than-6-months-old/ | CC-MAIN-2016-40 | refinedweb | 1,541 | 80.72 |
Some veteran IT Pros hear the term ‘Microsoft Clustering’ and their hearts start racing. That’s because once upon a time Microsoft Cluster Services was very difficult and complicated. In Windows Server 2008 it became much easier, and in Windows Server 2012 it is now available in all editions of the product, including Windows Server Standard. Owing to these two factors you are now seeing all sorts of organizations using Failover Clustering that would previously have shied away from it.
The service that we are seeing clustered most frequently in smaller organizations is Hyper-V virtual machines. That is because virtualization is another feature that is really taking off, and the low cost of virtualizing using Hyper-V makes it very attractive to these organizations.
In this article I am going to take you through the process of creating a failover cluster from two virtualization hosts that are connected to a single SAN (storage area network) device. However in Windows Server 2012 these are far from the limits. You can actually cluster up to sixty-four servers together in a single cluster. Once they are joined to the cluster we call them cluster nodes.
Failover Clustering in Windows Server 2012 allows us to create highly available virtual machines using a method called Active-Passive clustering. That means that your virtual machine is active on one cluster node, and the other nodes are only involved when the active node becomes unresponsive, or if a tool that is used to dynamically balance the workloads (such as System Center 2012 with Performance and Resource Optimization (PRO) Tips) initiates a migration.
In addition to using SAN disks for your shared storage, Windows Server 2012 also allows you to use Storage Pools. I explained Storage Pools and showed you how to create them in my article Storage Pools: Dive Right In! I also explained how to create a virtual SAN using Windows Server 2012 in my article iSCSI Storage in Windows Server 2012. For the sake of this article, we will use the simple SAN target that we created together in that article.
Step 1: Enabling Failover Clustering
Failover Clustering is a feature on Windows Server 2012. In order to enable it we will use the Add Roles and Features wizard.
1. From Server Manager click Manage, and then select Add Roles and Features.
2. On the Before you begin page click Next>
3. On the Select installation type page select Role-based or feature-based installation and click Next>
4. On the Select destination server page select the server onto which you will install the role, and click Next>
5. On the Select server roles page click Next>
6. On the Select features page select the checkbox Failover Clustering. A pop-up will appear asking you to confirm that you want to install the MMC console and management tools for Failover Clustering. Click Add Features. Click Next>
7. On the Confirm installation selections page click Install.
NOTE: You could also add the Failover Clustering feature to your server using PowerShell. The script would be:
Install-WindowsFeature -Name Failover-Clustering –IncludeManagementTools
If you want to install it to a remote server, you would use:
Install-WindowsFeature -Name Failover-Clustering –IncludeManagementTools –ComputerName <servername>
That is all that we have to do to enable Failover Clustering in our hosts. Remember though, it does have to be done on each server that will be a member of our cluster.
Step 2: Creating a Failover Cluster
Now that Failover Clustering has been enabled on the servers that we want to join to the cluster, we have to actually create the cluster. This step is easier than it ever was, although you should take care to follow the recommended guidelines. Always run the Validation Tests (all of them!), and allow Failover Cluster Manager to determine the best cluster configuration (Node Majority, Node and Disk Majority, etc…)
NOTE: The following steps have to be performed only once – not on each cluster node.
1. From Server Manager click Tools and select Failover Cluster Manager from the drop-down list.
2. In the details pane under Management click Create Cluster…
3. On the Before you begin page click Next>
4. On the Select Servers page enter the name of each server that you will add to the cluster and click Add. When all of your servers are listed click Next>
5. On the Validation Warning page ensure the Yes. When I click Next, run configuration validation tests, and then return to the process of creating the cluster radio is selected, then click Next>
6. On the Before You Begin page click Next>
7. On the Testing Options page ensure the Run all tests (recommended) radio is selected and then click Next>
8. On the Confirmation page click Next> to begin the validation process.
9. Once the validation process is complete you are prompted to name your cluster and assign an IP address. Do so now, making sure that your IP address is in the same subnet as your nodes.
NOTE: If you are not prompted to provide an IP address it is likely that your nodes have their IP Addresses assigned by DHCP.
10. On the Confirmation page make sure the checkbox Add all eligible storage is selected and click Next>. The cluster will now be created.
11. Click on Finish. In a few seconds your new cluster will appear in the Navigation Pane.
Step 3: Configuring your Failover Cluster
Now that your failover cluster has been created there are a couple of things we are going to verify. The first is in the main cluster screen. Near the top it should say the type of cluster you have.
If you created your cluster with an even number of nodes (and at least two shared drives) then the type should be a node and disk majority. In a Microsoft cluster health is determined when a majority (50% +1) of votes are counted. Every node has a vote. This means that if you have an even number of nodes (say 10) and half of them (5) go offline then your cluster goes down. If you have ten nodes you would have long since taken action, but imagine you have two nodes and one of them goes down… that means your entire cluster would go down. So Failover Clustering uses node and disk majority – it takes the smallest drive shared by all nodes (I usually create a 1GB LUN) and configures it as the Quorum drive – it gives it a vote… so if one of the nodes in your two node cluster goes down, you still have a majority of votes, and your cluster stays on-line.
The next thing that you want to check is your nodes. Expand the Nodes tree in the navigation pane and make sure that all of your nodes are up.
Once this is done you should check your storage. Expand the Storage tree in the navigation pane, and then expand Disks. If you followed my articles you should have two disks – one large one (mine is 140GB) and a small one (mine is 1GB). The smaller disk should be marked as assigned to Disk Witness in Quorum, and the larger disk will be assigned to Available Storage.
Cluster Shared Volumes was introduced in Windows Server 2008R2. It creates a contiguous namespace for your SAN LUNs on all of the nodes in your cluster. In other words, rather than having to ensure that all of your LUNs have the same drive letter on each node, CSVs create a link – a portal if you will – on your C: under the directory C:\ClusterStorage. Each LUN would have its own subdirectory – C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on. However using CSVs means that you are no longer limited to a single VM per LUN, so you will likely need fewer.
CSVs are enabled by default, and all you have to do is right-click on any drive assigned to Available Storage, and click Add to Cluster Shared Volumes. It will only take a second to work.
NOTE: While CSVs create directories on your C drive that is completely navigable, it is never a good idea to use it for anything other than Hyper-V. No other use is supported.
Step 4: Creating a Highly Available Virtual Machine (HAVM)
Virtual machines are no different to Failover Cluster Manager than any other clustered role. As such, that is where we create them!
1. In the navigation pane of Failover Cluster Manager expand your cluster and click Roles.
2. In the Actions Pane click Virtual Machines… and click New Virtual Machine.
3. In the New Virtual Machine screen select the node on which you want to create the new VM and click OK.
The New Virtual Machine Wizard runs just like it would in Hyper-V Manager. The only thing you would do differently here is change the file locations for your VM and VHDX files. In the appropriate places ensure they are stored under C:\ClusterStorage\Volume1.
At this point your highly available virtual machine has been created, and can be failed over without delay!
Step 5: Making an existing virtual machine highly available
In all likelihood you are not starting from the ground up, and you probably have pre-existing virtual machines that you would like to add to the cluster. No problem… However before you go, you need to put the VM’s storage onto shared storage. Because Windows Server 2012 includes Live Storage Migration it is very easy to do:
1. In Hyper-V Manager right-click the virtual machine that you would like to make highly available and click Move…
2. In the Choose Move Type screen select the radio Move the virtual machine’s storage and click Next>
3. In the Choose Options for Moving Storage screen select the radio marked Move all of the virtual machine’s data to a single location and click Next>
4. In the Choose a new location for virtual machine type C:\ClusterStorage\Volume1 into the field. Alternately you could click Browse… and navigate to the shared file location. Then click Next>
5. On the Completing Move Wizard page verify your selections and click Finish.
Remember that moving a running VM’s storage can take a long time. The VHD or VHDX file could theoretically be huge… depending on the size you selected. Be patient, it will just take a few minutes. Once it is done you can continue with the following steps.
6. In Failover Cluster Manager navigate to the Roles tab.
7. In the Actions Pane click Configure Role…
8. In the Select Role screen select Virtual Machine from the list and click Next>. This step can take a few minutes… be patient!
9. In the Select Virtual Machine screen select the virtual machine that you want to make highly available and click Next>
NOTE: A great improvement in Windows Server 2012 is the ability to make a VM highly available regardless of its state. In previous versions you needed to shut down the VM to do this… no more!
10. On the Confirmation screen click Next>
…That’s it! Your VM is now highly available. You can navigate to Nodes and see which server it is running on. You can also right-click on it, click Move, select Live Migration, and click Select Node. Select the node you want to move it to, and you will see it move before your very eyes… without any downtime.
What? There’s a Video??
Yes, We wanted you to read through all of this, but we also wrote it as a reference guide that you can refer to when you try to build it yourself. However to make your life slightly easier, we also created a video for you and posted it online. Check it out!
For Extra Credit!
Now that you have added your virtualization hosts as nodes in a cluster, you will probably be creating more of your VMs on Cluster Shared Volumes than not. In the Hyper-V Settings you can change the default file locations for both your VMs and your VHDX files to C:\ClusterStorage\Volume1. This will prevent your having to enter them each time.
As well, the best way to create your VMs will be in the Failover Cluster Manager and not in Hyper-V Manager. FCM creates your VMs as HAVMs automatically, without your having to perform those extra steps.
Conclusion
Over the last few weeks we have demonstrated how to Create a Storage Pool, perform a Shared Nothing Live Migration, Create an iSCSI Software Target in Windows Server 2012, and finally how to create and configure Failover Clusters in Windows Server 2012. Now that you have all of this knowledge at your fingertips (Or at least the links to remind you of it!) you should be prepared to build your virtualization environment like a pro. Before you forget what we taught you, go ahead and do it. Try it out, make mistakes, and figure out what went wrong so that you can fix it. In due time you will be an expert in all of these topics, and will wonder how you ever lived without them. Good luck, and let us know how it goes for you! | https://blogs.technet.microsoft.com/canitpro/2013/01/09/failover-clustering-lets-spread-the-hyper-v-love-across-hosts/ | CC-MAIN-2017-09 | refinedweb | 2,215 | 71.04 |
04 January 2010 21:18 [Source: ICIS news]
HOUSTON (ICIS news)--A US subsidiary of France-based energy giant Total announced on Monday that it would enter a joint venture (jv) with a subsidiary of US-based Chesapeake Energy, with Total acquiring a stake in a key US natural gas field.
Total E&P USA signed an agreement in which it will pay $800m (€560m) in cash and an additional $1.45bn by funding 60% of Chesapeake's share of drilling and completion expenditures.
For that, Total will receive a 25% interest in ?xml:namespace>
The deal, effective as of 1 October 2009, is subject to regulatory approval and expected to be completed by the end of January, according to a press statement.
($1 = €0.70). | http://www.icis.com/Articles/2010/01/04/9322500/total+enters+jv+with+us-based+chesapeake+buys+into+shale.html | CC-MAIN-2013-20 | refinedweb | 126 | 56.18 |
.
So, let’s take a look at how Docker works and what that means for container security.
To answer the question whether Docker is secure, we’ll first take a look at the key parts of the Docker stack:
There are two key parts to Docker: Docker Engine, which is the runtime, and Docker Hub, which is the official registry of Docker containers. It’s equally important to secure both parts of the system. And to do that, it takes an understanding of what they each consist of, which components need to be secured, and how. Let’s start with Docker Engine.
Docker Engine
Docker Engine hosts and runs containers from the container image file. It also manages networks and storage volumes. There are two key aspects to securing Docker Engine: namespaces and control groups., allowing you to run containers as non-root users. Namespaces are switched off by default in Docker, so need to be activated before you can use them.
Support for control groups, or cgroups, in Docker allows you to set limits for CPU, memory, networking, and block IO. By default containers can use an unlimited amount of system resources, so it’s important to set limits. Otherwise the entire system could be affected by a single hungry container.
Apart from namespaces and control groups, Docker Engine can be further hardened by the use of additional tools like SELinux and AppArmor.
SELinux provides access control for the kernel. It can manage access based on the type of process running in the container, or the level of the process, according to policies you set for the host. Based on this policy, thing should be scanned for vulnerabilities.
For users of private repositories, Docker Hub will scan downloaded container images. It scans a few repositories for free, after which you need to pay for scanning as an add-on.
Docker Hub isn’t the only registry service for Docker containers. Other popular registries include Quay, AWS ECR, and GitLab Container Registry. These tools also have scanning capabilities of their own. Further, Docker Trusted Registry (DTR) can be installed behind your firewall for a fee.
Third-party security tools
While the above security features provide basic protection for Docker Engine and Docker Hub, they lack the power and reach of a dedicated container security tool. A tool like Twistlock can completely secure your Docker stack. It goes beyond any one part, and gives you a holistic view of your entire system.
Docker is an intricate mesh of various moving and static parts. Clearly, plugging in any one of these security tools does not instantly make the entire stack secure. It will take a combination of these approaches to secure Docker at all levels.
So, next time someone asks you if Docker is secure, you should ask them which part of Docker they’re referring to. Then you can explain the various security considerations that affect that layer.. | https://www.infoworld.com/article/3201967/how-to-think-about-docker-security.html?cid=ifw_nlt_infoworld_open_source_2017-06-28 | CC-MAIN-2020-05 | refinedweb | 487 | 64.3 |
I’m trying to build a simple proxy to Amazon S3 that would allow a
client to request a url that gets mapped to a S3 Object by Rails and
streamed back to the client.
For S3 integration, I use the highly recommended gem AWS::S3
(), and to send a file, I simply use the
send_file function.
The concept works BUT the I can’t get the file to be sent as it’s been
downloaded. Even better would be to download the file to memory instead
of the disk, I’m really not familiar with Ruby’s IO Class though… If
anyone has a suggestion…
I first tried this:
download the file entirely then uploads it to the client
send_data AWS::S3::S3Object.value ‘object_name’, ‘bucket_name’
Then I figured I needed to use the streaming feature of AWS::S3
Start a thread that downloads the file in the background
Thread.start do
open(’/tmp/tempfile’, ‘w’) do |file|
AWS::S3::S3Object.stream(‘object_name’, ‘bucket_name’) do |chunk|
file.write chunk
end
end
end
Then send the file as it’s being written.
send_file “/tmp/tempfile”, :stream => true
This doesn’t work, the sent file is often between 0 and 4 bytes… not
cool.
I’m guessing I could put it all together, I dont know how.
Here’s the send_file function that could be overwritten:
def send_file(path, options = {}) #:doc:
raise MissingFile, “Cannot read file #{path}” unless
File.file?(path) and File.readable?(path)
options[:length] ||= File.size(path) options[:filename] ||= File.basename(path) send_file_headers! options @performed_render = false if options[:stream] render :text => file.read } end
end
Please, any idea anyone? | https://www.ruby-forum.com/t/how-to-make-rails-send-a-file-that-is-being-written/86643 | CC-MAIN-2018-51 | refinedweb | 271 | 73.47 |
Answered by:
How to write a command like "pause" to console
People I'm new to C# and I want to ask (if someone could help)
How to convert a code like this (in C++) to C#:
#include<iostream>
#include "stdafx.h"
#include "test.h"
using namespace std;
int main()
{
cout<<"Helllo";
system("pause"); //This one :)
return 0;
}
Question
Answers
Hi
In C# the Code can be written like this using Console Applications :-
using System;
using System.Collections.Generic;
using System.Text;
namespace Pause {
class Program {
static void Main(string[] args)
{
Console.WriteLine("Hello");
Console.ReadLine(); //Pause
}
}
}
Happy Coding
All replies
Hi
In C# the Code can be written like this using Console Applications :-
using System;
using System.Collections.Generic;
using System.Text;
namespace Pause {
class Program {
static void Main(string[] args)
{
Console.WriteLine("Hello");
Console.ReadLine(); //Pause
}
}
}
Happy Coding
The above solution works:
Console.WriteLine("Press any key to continue...");
Console.ReadKey(true);
Although I found if you use it when debugging it produces duplicate output.
The solution when debugging is:
1. Comment out the Console.Writeline and Console.ReadKey lines.
2. Use Console.Write("\n");
- This will produce a 'newline' and prompt "Press any key to continue..."
3. Once debugging is completed, either remove or comment the Console.Write() and uncomment the previous two lines.
I'm not sure why it works this way in debugging mode; therefore, any elaboration would be greatly appreciated.
Hope that helps.
Hi,
the easiest way to replace the
system("pause");
command in c# is to use something like this method I created exactly for that purpose:
/// <summary> /// Writes a message to the console prompting the user to press a certain key in order to exit the current program. /// This procedure intercepts all keys that are pressed, so that nothing is displayed in the Console. By default the /// Enter key is used as the key that has to be pressed. /// </summary> /// <param name="key">The key that the Console will wait for. If this parameter is omitted the Enter key is used.</param> public static void WriteKeyPressForExit(ConsoleKey key = ConsoleKey.Enter) { Console.WriteLine(); Console.WriteLine("Press the {0} key on your keyboard to exit . . .", key); while (Console.ReadKey(intercept: true).Key != key) { } }
However... when invoking the pause command you always get a message in the language of your current operating system... So if you'd like to display the default OS pause method you could also use this:
By the way... for this method you need to reference the System assembly (which is always referenced in new ConsoleApplication-projects in VS)
/// <summary> /// Emulates the pause command in the console by invoking a cmd-process. This method blocks the execution until the user has pressed a button. /// </summary> public static void Pause() { Console.WriteLine(); System.Diagnostics.Process pauseProc = System.Diagnostics.Process.Start(new System.Diagnostics.ProcessStartInfo() { FileName = "cmd", Arguments = "/C pause", UseShellExecute = false }); pauseProc.WaitForExit(); }
This method just invokes a pause command inside a new cmd process... Note the "UseShellExecute = false" statement which causes the process to start inside you current console application instead of openening a new window...
The ultimate go-around would be to invoke the system function directly like in C++... To do that however we need to import the dll the system function resides in which in itself is not that nice... (ok, neither is invoking a cmd process... ^^)???
Because it's a C function which should be called using the Cdecl call convention:
[System.Runtime.InteropServices.DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
- The problem with Console.ReadKey(); or Console.ReadLine(); is that it locks your Console window from responding to any additional `Console.Write` commands. If you want to pause the window, but still be able to write out information to the Console you need to start your work on a new Thread and then call Join on the current thread as shown below.
static void Main(string[] args)
{
MyWork worker = new MyWork();
ThreadStart job = new ThreadStart(worker.StartMyWork);
Thread thread = new Thread(job);
thread.Start();
Thread.CurrentThread.Join();
}
- This is what I was looking for. I noticed that I see this exact sentence whenever I'm in Visual Studio ("Press any key to continue . . ."). I'm developing a C# Console app, but can't figure out why it only seems to work in CTRL+F5 mode. Either way, this is the functionality I'd like to appear in the final product. | https://social.msdn.microsoft.com/Forums/vstudio/en-US/08163199-0a5d-4057-8aa9-3a0a013800c7/how-to-write-a-command-like-pause-to-console?forum=csharpgeneral | CC-MAIN-2017-09 | refinedweb | 733 | 58.48 |
Reactive Programming with RxAndroid in Kotlin: An Introduction
Learn about how Reactive programming is a whole new paradigm using RxJava and RxAndroid in Android with Kotlin.
Version
- Kotlin 1.3, Android 4.4, Android Studio 3
Update note: This tutorial has been updated to Kotlin 1.3, Android 28 (Pie), and Android Studio 3.3.2 by Kyle Jablonski. This tutorial has previously been updated to Kotlin, Android 26 (Oreo), and Android Studio 3.0 Beta 5 by Irina Galata. The original tutorial was written by Artem Kholodnyi.
Reactive programming is not just another API. It’s a whole new programming paradigm concerned with data streams and the propagation of change. You can read the full definition of reactive programming, but you will learn more about being reactive, below.
RxJava is a reactive implementation to bring this concept to the Android platform. Android applications are a perfect place to start your exploration of the reactive world. It’s even easier with RxAndroid, a library that wraps asynchronous UI events to be more RxJava like.
Don’t be scared — I’ll bet you already know the basic concepts of reactive programming, even if you are not aware of it yet!
Note: This tutorial requires good knowledge of Android and Kotlin. To get up to speed, check out our Android Development Tutorials first and return to this tutorial when you’re ready.
In this RxAndroid tutorial for reactive programming, you will learn how to do the following:
- Grasp the concepts of Reactive Programming.
- Define an Observable.
- Turn asynchronous events like button taps and text field content changes into observable constructs.
- Transform and filter observable items.
- Leverage Rx threading in code execution.
- Combine several observables into one stream.
- Turn all your observables into Flowable constructs.
- Use RxJava’s Maybe to add a favorite feature to the app.
I hope you are not lactose intolerant — because you’re going to build a cheese-finding app as you learn how to use RxJava! :]
Getting Started
Download cheesefinder-starter and open it in Android Studio 3.3.2 or above.
You’ll be working in both CheeseActivity.kt and CheeseAdapter.kt. The
CheeseActivity class extends
BaseSearchActivity; take some time to explore
BaseSearchActivity and check out the following features ready for your use:
- showProgress(): A function to show a progress bar…
- hideProgress(): … and a function to hide it.
- showResult(result: List<String>): A function to display a list of cheeses.
- cheeseSearchEngine: A field which is an instance of CheeseSearchEngine. It has a search function which you call when you want to search for cheeses. It accepts a text search query and returns a list of matching cheeses.
Build and run the project on your Android device or emulator. You should see a gloriously empty search screen:
Of course, it’s not going to stay like that forever, you’ll soon begin adding reactive functionality to the app. Before creating your first observable, indulge yourself with a bit of theory first.
What is Reactive Programming?
In imperative programming, an expression is evaluated once and the value is assigned to a variable:
var x = 2
var y = 3
var z = x * y // z is 6

x = 10 // z is still 6
On the other hand, reactive programming is all about responding to value changes.
You have probably done some reactive programming — even if you didn’t realize it at the time.
- Defining cell values in spreadsheets is similar to defining variables in imperative programming.
- Defining cell expressions in spreadsheets is similar to defining and operating on observables in reactive programming.
Take the following spreadsheet that implements the example from above:
The spreadsheet assigns cell B1 with a value of 2, cell B2 with a value of 3 and a third cell, B3, with an expression that multiplies the value of B1 by the value of B2. When the value of either of the components referenced in the expression changes, the change is observed and the expression is re-evaluated automagically in B3:
The idea of reactive programming, to put it simply, is to have components which form a larger picture – which can be observed. And have your program listen to, and consume the changes whenever they happen.
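The spreadsheet behavior above can be sketched in plain code. Here is a minimal, hypothetical "reactive cell" in Java; the Cell class and bind function are invented for illustration and are not part of RxJava. When either input cell changes, the dependent cell recomputes automatically, just like B3 in the spreadsheet:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntSupplier;

// A toy "reactive cell": dependents re-evaluate whenever a source cell changes.
class Cell {
    private int value;
    private final List<Runnable> observers = new ArrayList<>();

    Cell(int value) { this.value = value; }

    int get() { return value; }

    void set(int newValue) {
        value = newValue;
        // Propagate the change: every dependent expression is re-evaluated.
        observers.forEach(Runnable::run);
    }

    void onChange(Runnable observer) { observers.add(observer); }

    // Creates a cell whose value is recomputed when either input changes,
    // like the spreadsheet cell B3 = B1 * B2.
    static Cell bind(Cell a, Cell b, IntSupplier expression) {
        Cell result = new Cell(expression.getAsInt());
        Runnable recompute = () -> result.set(expression.getAsInt());
        a.onChange(recompute);
        b.onChange(recompute);
        return result;
    }
}
```

With this sketch, `Cell.bind(x, y, () -> x.get() * y.get())` behaves like the spreadsheet: setting x to 10 updates z to 30 instead of leaving it at 6.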
Difference Between RxJava and RxKotlin
As you probably know, it’s possible to use Java libraries in Kotlin projects thanks to Kotlin’s language compatibility with Java. If that’s the case, then why was RxKotlin created in the first place? RxKotlin is a Kotlin wrapper around RxJava, which also provides plenty of useful extension functions for reactive programming. Effectively, RxKotlin makes working with RxJava no less reactive, but much more Kotlin-y.
In this article, we’ll focus on using RxJava, since it’s critical to understand the core concepts of this approach. However, everything you will learn applies to RxKotlin as well.
Note: Take a look at the build.gradle file and the project dependencies especially. Except for the UI libraries, it contains the RxKotlin and RxAndroid packages. We don’t need to specify RxJava here explicitly since RxKotlin already contains it.
RxJava Observable Contract
RxJava makes use of the Observer pattern.
In the Observer pattern, you have objects that implement two key RxJava interfaces:
Observable and
Observer. When an
Observable changes state, all
Observer objects subscribed to it are notified.
Among the methods in the
Observable interface is
subscribe(), which an
Observer will call to begin the subscription.
From that point, the
Observer interface has three methods which the
Observable calls as needed:
- onNext(T value) provides a new item of type T to the Observer.
- onComplete() notifies the Observer that the Observable has finished sending items.
- onError(Throwable e) notifies the Observer that the Observable has experienced an error.
As a rule, a well-behaved
Observable emits zero or more items that could be followed by either completion or error.
That sounds complicated, but some marble diagrams may clear things up.
The circle represents an item that has been emitted from the observable and the black block represents a completion or error. Take, for example, a network request observable. The request usually emits a single item (response) and immediately completes.
A mouse movement observable would emit mouse coordinates but will never complete:
Here you can see multiple items that have been emitted but no block showing the mouse has completed or raised an error.
No more items can be emitted after an observable has completed. Here’s an example of a misbehaving observable that violates the Observable contract:
That’s a bad, bad observable because it violates the Observable contract by emitting an item after it signaled completion.
How to Create an Observable
There are many libraries to help you create observables from almost any type of event. However, sometimes you just need to roll your own. Besides, it’s a great way to learn about the Observable pattern and reactive programming!
You’ll create an Observable using
Observable.create(). Here is its signature:
Observable<T> create(ObservableOnSubscribe<T> source)
That’s nice and concise, but what does it mean? What is the “source?” To understand that signature, you need to know what an
ObservableOnSubscribe is. It’s an interface, with this contract:
public interface ObservableOnSubscribe<T> {
    void subscribe(ObservableEmitter<T> e) throws Exception;
}
Like an episode of a J.J. Abrams show like “Lost” or “Westworld,” that answers some questions while inevitably asking more. So the “source” you need to create your
Observable will need to expose
subscribe(), which in turn requires whatever’s calling it to provide an “emitter” as a parameter. What, then, is an emitter?
RxJava’s
Emitter interface is similar to the
Observer one:
public interface Emitter<T> {
    void onNext(T value);
    void onError(Throwable error);
    void onComplete();
}
An
ObservableEmitter, specifically, also provides a means to cancel the subscription.
To visualize this whole situation, think of a water faucet regulating the flow of water. The water pipes are like an
Observable, willing to deliver a flow of water if you have a means of tapping into it. You construct a faucet that can turn on and off, which is like an
ObservableEmitter, and connect it to the water pipes in
Observable.create(). The outcome is a nice fancy faucet. And of course, the faucet is reactive, since once you close it, the stream of water – data – is no longer active. :]
An example will make the situation less abstract and more clear. It’s time to create your first observable!
Observe Button Clicks
Add the following code inside the
CheeseActivity class:
// 1
private fun createButtonClickObservable(): Observable<String> {
    // 2
    return Observable.create { emitter ->
        // 3
        searchButton.setOnClickListener {
            // 4
            emitter.onNext(queryEditText.text.toString())
        }

        // 5
        emitter.setCancellable {
            // 6
            searchButton.setOnClickListener(null)
        }
    }
}
Your imports should look as follows after entering the above code:
import io.reactivex.Observable
import kotlinx.android.synthetic.main.activity_cheeses.*
You’ve imported the correct
Observable class and you’re using the Kotlin Android Extensions to get references to view objects.
Here’s what’s going on in the code above:
- You declare a function that returns an observable that will emit strings.
- You create an observable with Observable.create(), and supply it with a new ObservableOnSubscribe.
- Set up an OnClickListener on searchButton.
- When the click event happens, call onNext on the emitter and pass it the current text value of queryEditText.
- Keeping references can cause memory leaks in Java or Kotlin. It’s a useful habit to remove listeners as soon as they are no longer needed. But what do you call when you are creating your own Observable? For that very reason, ObservableEmitter has setCancellable(). Override cancel(), and your implementation will be called when the Observable is disposed, such as when the Observable is completed or all Observers have unsubscribed from it.
- For OnClickListener, the code that removes the listener is setOnClickListener(null).
Now that you’ve defined your Observable, you need to set up the subscription to it. Before you do, you need to learn about one more interface,
Consumer. It’s a simple way to accept values coming in from an emitter.
public interface Consumer<T> {
    void accept(T t) throws Exception;
}
This interface is handy when you want to set up a simple subscription to an Observable.
The
Observable interface requires several versions of
subscribe(), all with different parameters. For example, you could pass a full
Observer if you like, but then you’d need to implement all the necessary methods.
If all you need out of your subscription is for the observer to respond to values sent to
onNext(), you can use the version of
subscribe() that takes in a single
Consumer (the parameter is even named
onNext, to make the connection clear).
You’ll do exactly that when you subscribe in your activity’s
onStart(). Add the following code to CheeseActivity.kt:
override fun onStart() {
    super.onStart()

    // 1
    val searchTextObservable = createButtonClickObservable()

    searchTextObservable
        // 2
        .subscribe { query ->
            // 3
            showResult(cheeseSearchEngine.search(query))
        }
}
Here’s an explanation of each step:
- First, create an observable by calling the method you just wrote.
- Subscribe to the observable with subscribe(), and supply a simple Consumer.
- Finally, perform the search and show the results.
Build and run the app. Enter some letters and tap the Search button. After a simulated delay (see CheeseSearchEngine), you should see a list of cheeses that match your request:
Sounds yummy! :]
RxJava Threading Model
You’ve had your first taste of reactive programming. There is one problem though: the UI freezes up for a few seconds when the search button is tapped.
You might also notice the following line in Android Monitor:
> 08-24 14:36:34.554 3500-3500/com.raywenderlich.cheesefinder I/Choreographer: Skipped 119 frames! The application may be doing too much work on its main thread.
This happens because
search is executed on the main thread. If
search were to perform a network request, Android would crash the app with a NetworkOnMainThreadException. It’s time to fix that.
One popular myth about RxJava is that it is multi-threaded by default, similar to
AsyncTask. However, if not otherwise specified, RxJava does all the work in the same thread it was called from.
You can change this behavior with the
subscribeOn and
observeOn operators.
subscribeOn is supposed to be called only once in the chain of operators. If it’s not, the first call wins.
subscribeOn specifies the thread on which the observable will be subscribed (i.e. created). If you use observables that emit events from an Android View, you need to make sure subscription is done on the Android UI thread.
On the other hand, it’s okay to call
observeOn as many times as you want in the chain.
observeOn specifies the thread on which the next operators in the chain will be executed. For example:
myObservable
    // observable will be subscribed on i/o thread
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .map { /* this will be called on main thread... */ }
    .doOnNext { /* ...and everything below until next observeOn */ }
    .observeOn(Schedulers.io())
    .subscribe { /* this will be called on i/o thread */ }
The most useful schedulers are:
Schedulers.io(): Suitable for I/O-bound work such as network requests or disk operations.
Schedulers.computation(): Works best with computational tasks like event-loops and processing callbacks.
AndroidSchedulers.mainThread()executes the next operators on the UI thread.
The Map Operator
The map operator applies a function to each item emitted by an observable and returns another observable that emits the results of those function calls. You'll need this to fix the threading issue as well.
If you have an observable called numbers that emits the integers 1 through 5, and you apply map as follows:

numbers.map { number -> number * number }

the result is an observable that emits 1, 4, 9, 16 and 25: the square of each item. That's a handy way to transform multiple items with little code. Let's put it to use!
Modify onStart() in the CheeseActivity class to look like the following:
override fun onStart() {
    super.onStart()
    val searchTextObservable = createButtonClickObservable()
    searchTextObservable
        // 1
        .subscribeOn(AndroidSchedulers.mainThread())
        // 2
        .observeOn(Schedulers.io())
        // 3
        .map { cheeseSearchEngine.search(it) }
        // 4
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe { showResult(it) }
}
Going over the code above:
- First, specify that code down the chain should start on the main thread instead of on the I/O thread. In Android, all code that works with View should execute on the main thread.
- Specify that the next operator should be called on the I/O thread.
- For each search query, you return a list of results.
- Finally, make sure that the results are passed to the list on the main thread.
Build and run your project. Now the UI should be responsive even when a search is in progress.
Show Progress Bar with doOnNext
It’s time to display the progress bar!
For that you’ll need a
doOnNext operator.
doOnNext takes a
Consumer and allows you to do something each time an item is emitted by observable.
In the same CheeseActivity class, modify onStart() to the following:
override fun onStart() {
    super.onStart()
    val searchTextObservable = createButtonClickObservable()
    searchTextObservable
        // 1
        .observeOn(AndroidSchedulers.mainThread())
        // 2
        .doOnNext { showProgress() }
        .observeOn(Schedulers.io())
        .map { cheeseSearchEngine.search(it) }
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe {
            // 3
            hideProgress()
            showResult(it)
        }
}
Taking each numbered comment in turn:
- Ensure that the next operator in the chain will be run on the main thread.
- Add the doOnNext operator so that showProgress() will be called every time a new item is emitted.
- Don't forget to call hideProgress() when you are just about to display a result.
Build and run your project. You should see the progress bar appear when you initiate the search:
Observe Text Changes
What if you want to perform search automatically when the user types some text, just like Google?
First, you need to subscribe to TextView text changes. Add the following function to the CheeseActivity class:
// 1
private fun createTextChangeObservable(): Observable<String> {
    // 2
    val textChangeObservable = Observable.create<String> { emitter ->
        // 3
        val textWatcher = object : TextWatcher {
            override fun afterTextChanged(s: Editable?) = Unit
            override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) = Unit
            // 4
            override fun onTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {
                s?.toString()?.let { emitter.onNext(it) }
            }
        }
        // 5
        queryEditText.addTextChangedListener(textWatcher)
        // 6
        emitter.setCancellable { queryEditText.removeTextChangedListener(textWatcher) }
    }
    // 7
    return textChangeObservable
}
Here’s the play-by-play of each step above:
- Declare a function that will return an observable for text changes.
- Create textChangeObservable with create(), which takes an ObservableOnSubscribe.
- When an observer makes a subscription, the first thing to do is to create a TextWatcher.
- You aren't interested in beforeTextChanged() and afterTextChanged(). When the user types and onTextChanged() triggers, you pass the new text value to an observer.
- Add the watcher to your TextView by calling addTextChangedListener().
- Don't forget to remove your watcher. To do this, call emitter.setCancellable() and overwrite cancel() to call removeTextChangedListener().
- Finally, return the created observable.
To see this observable in action, replace the declaration of searchTextObservable in onStart() of CheeseActivity as follows:
val searchTextObservable = createTextChangeObservable()
Build and run your app. You should see the search kick off when you start typing text in the TextView:
Filter Queries by Length
It doesn’t make sense to search for queries as short as a single letter. To fix this, let’s introduce the powerful
filter operator.
filter passes only those items which satisfy a particular condition.
filter takes in a
Predicate, which is an interface that defines the test that input of a given type needs to pass, with a
boolean result. In this case, the Predicate takes a
String and returns
true if the string’s length is two or more characters.
Replace return textChangeObservable in createTextChangeObservable() with the following code:
return textChangeObservable.filter { it.length >= 2 }
Everything will work exactly the same, except that text queries with length less than 2 won't get sent down the chain.
Run the app; you should see the search kick off only when you type the second character:
Debounce Operator
You don’t want to send a new request to the server every time the query is changed by one symbol.
debounce is one of those operators that shows the real power of the reactive paradigm. Much like the filter operator, debounce filters items emitted by the observable. But the decision on whether an item should be filtered out is made not based on what the item is, but on when the item was emitted.

debounce waits for a specified amount of time after each item emission for another item. If no item happens to be emitted during this wait, the last item is finally emitted:
In createTextChangeObservable(), add the debounce operator just below the filter so that the return statement will look like the following code:
return textChangeObservable
    .filter { it.length >= 2 }
    .debounce(1000, TimeUnit.MILLISECONDS) // add this line
Run the app. You’ll notice that the search begins only when you stop making quick changes:
debounce waits for 1000 milliseconds before emitting the latest query text.
Merge Operator
You started by creating an observable that reacted to button clicks and then implemented an observable that reacts to text field changes. But how do you react to both?
There are a lot of operators to combine observables. The simplest and most useful one is merge. merge takes items from two or more observables and puts them into a single observable:
Change the beginning of onStart() to the following:
val buttonClickStream = createButtonClickObservable()
val textChangeStream = createTextChangeObservable()
val searchTextObservable = Observable.merge<String>(buttonClickStream, textChangeStream)
Run your app. Play with the text field and the search button; the search will kick off either when you finish typing two or more symbols or when you simply press the Search button.
Flowable
With the release of RxJava2, the framework has been totally redesigned from the ground up to solve some problems that were not addressed in the original library. One really important topic addressed in the update is the idea of backpressure.
Backpressure is the concept of an observable emitting items faster than the consumer can handle them. Consider the example of the Twitter firehose, which is constantly emitting tweets as they are added to the Twitter platform. If you were to use observables, which buffer items until there is no more memory available, your app would crash, and consuming the firehose API would not be possible using them. Flowables take this into consideration and let you specify a BackpressureStrategy to tell the flowable how you want the consumer to handle items emitted faster than can be consumed.
Backpressure strategies:
- BUFFER – Handles items the same way as RxJava 1, but you can also add a buffer size.
- DROP – Drops any items that the consumer can't handle.
- ERROR – Throws an error when the downstream can't keep up.
- LATEST – Keeps only the latest item emitted by onNext, overwriting the previous value.
- MISSING – No buffering or dropping during onNext events.
Turning Observables into Flowables
Time to turn the observables above into flowables using this new knowledge of backpressure strategy. First consider the observables you added to your app. You have one observable that emits items when a button is clicked and another from keyboard input. With these two in mind, you can imagine in the first case you can use the LATEST strategy and in the second you can use the BUFFER.
Open CheeseActivity.kt and modify your observables to the following:
val buttonClickStream = createButtonClickObservable()
    .toFlowable(BackpressureStrategy.LATEST) // 1
val textChangeStream = createTextChangeObservable()
    .toFlowable(BackpressureStrategy.BUFFER) // 2
- Convert the button click stream into a flowable using LATEST BackpressureStrategy.
- Convert the text input change stream into a flowable using BUFFER BackpressureStrategy.
Finally, change the merge operator to use Flowable as well:
val searchTextFlowable = Flowable.merge<String>(buttonClickStream, textChangeStream)
Now, change the call to use the new searchTextFlowable value, instead of the previous Observable:
searchTextFlowable
    // 1
    .observeOn(AndroidSchedulers.mainThread())
    // 2
    .doOnNext { showProgress() }
    .observeOn(Schedulers.io())
    .map { cheeseSearchEngine.search(it) }
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe {
        // 3
        hideProgress()
        showResult(it)
    }
Re-run the application and you should see a working app with none of the pitfalls of observables.
Maybe
A Maybe is a computation that emits either a single value, no value or an error. Maybes are good for things such as database updates and deletes. Here you will add a new feature using a Maybe to favorite a type of cheese in the app, and see how a Maybe can complete without emitting a value.
Open the CheeseAdapter class and add the following code in onBindView:
// 1
Maybe.create<Boolean> { emitter ->
    emitter.setCancellable {
        holder.itemView.imageFavorite.setOnClickListener(null)
    }
    holder.itemView.imageFavorite.setOnClickListener {
        emitter.onSuccess((it as CheckableImageView).isChecked) // 2
    }
}.toFlowable().onBackpressureLatest() // 3
    .observeOn(Schedulers.io())
    .map { isChecked ->
        cheese.favorite = if (!isChecked) 1 else 0
        val database = CheeseDatabase.getInstance(holder.itemView.context).cheeseDao()
        database.favoriteCheese(cheese) // 4
        cheese.favorite // 5
    }
    .subscribeOn(AndroidSchedulers.mainThread())
    .subscribe {
        holder.itemView.imageFavorite.isChecked = it == 1 // 6
    }
- Create the Maybe from an action.
- Emit the checked state on success.
- Turn the Maybe into a flowable.
- Perform the update on the Cheeses table.
- Return the result of the operation.
- Use the result from the emission to change the outline to a filled in heart.
Note: It would probably be better to use Maybe with a delete operation, but for example purposes here you favorite a cheese.
RxJava2 & Null
Null is no longer supported in RxJava2. Supplying null will result in a NullPointerException immediately or in a downstream signal. You can read all about this change here.
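If a source can legitimately produce null (a cache miss, an absent user), a common workaround is to wrap the value in a non-null holder so the stream itself never carries null. A minimal sketch, assuming RxJava 2's io.reactivex package; the Optional class and lookup function are illustrative, not part of RxJava:

```kotlin
import io.reactivex.Observable

// RxJava 2 streams may not emit null, so emit a non-null wrapper instead.
data class Optional<out T>(val value: T?)

// Hypothetical lookup that may return null.
fun lookup(id: Int): String? = if (id == 1) "alice" else null

// The stream emits Optional instances, never null itself.
fun findUser(id: Int): Observable<Optional<String>> =
    Observable.fromCallable { Optional(lookup(id)) }
```

Consumers then unwrap value and decide what an absent result means.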
RxJava and Activity/Fragment lifecycle
Remember those setCancellable methods you set up? They won't fire until the observable is unsubscribed.

The Observable.subscribe() call returns a Disposable. Disposable is an interface that has two methods:
public interface Disposable {
    void dispose(); // ends a subscription
    boolean isDisposed(); // returns true if resource is disposed (unsubscribed)
}
Add the following property to CheeseActivity:
private lateinit var disposable: Disposable
In onStart(), assign the returned value to disposable with the following code (only the first line changes):
disposable = searchTextObservable // change this line
    .observeOn(AndroidSchedulers.mainThread())
    .doOnNext { showProgress() }
    .observeOn(Schedulers.io())
    .map { cheeseSearchEngine.search(it) }
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe {
        hideProgress()
        showResult(it)
    }
Since you subscribed to the observable in onStart(), onStop() would be a perfect place to unsubscribe.
Add the following code to CheeseActivity.kt:
override fun onStop() {
    super.onStop()
    if (!disposable.isDisposed) {
        disposable.dispose()
    }
}
And that’s it! Build and run the app. You won’t “observe” any changes yourself, but now the app is successfully avoiding RxJava memory leaks. :]
Where to Go From Here?
You can download the final project from this tutorial here. If you want to challenge yourself a bit more you can swap out this implementation of RxJava and replace it with Room’s RxJava support which you can find more about here.
You’ve learned a lot in this tutorial. But that’s only a glimpse of the RxJava world. For example, there is RxBinding, a library that includes most of the Android View APIs. Using this library, you can create a click observable by just calling
RxView.clicks(viewVariable).
To learn more about RxJava refer to the ReactiveX documentation.
If you have any comments or questions, don’t hesitate to join the discussion below! | https://www.raywenderlich.com/2071847-reactive-programming-with-rxandroid-in-kotlin-an-introduction | CC-MAIN-2021-17 | refinedweb | 4,127 | 57.27 |
Hi Thorsten,

> one question with regards to this topic:
> what would be the advantage of namespaces in Picolisp over
> naming conventions like in Emacs Lisp?

Right. Not much.

> 'gnus-function-name' for all functions in gnus library
> 'dired-function-name' for all functions in dired library etc

Yes. Such conventions make things transparent. The drawback might just be the readability of the longish symbol names. I suggested something like this in my reply to Henrik (on Sep 5th, using the 'dot' as a delimiter):

> A call like
>
> (foo> '+Pckg <arg>)
>
> is in no regard more encapsulating the namespace then
>
> (foo.Pckg <arg>)

Cheers,
- Alex
--
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
Lifecycle components
componentDidMount()
If any state updates happen here, the component will automatically be re-rendered. This is useful for API updates, where the data is not present immediately. Once the data arrives, it will be rendered 💫.
class Cupcake extends React.Component {
  constructor(props) {
    super(props);
    this.state = { flour: 200 };
  }

  componentDidMount() {
    let newFlourMass = delayedApiCall(); // I should learn how to call an API lol
    this.setState({ flour: newFlourMass });
  }

  render() {
    return (
      <div>
        <p>Flour amount: {this.state.flour}</p>
      </div>
    );
  }
}
Additionally, this where most people put their event listeners, so note it down, folks 🗒️.
componentWillUnmount()
Let's say you have a temporary component (like, the congratz message of Duolingo).
After the component goes off the screen (and you saw your eye bags), anything associated with it should be cleaned up 🪥, including event listeners.
You can use this function to achieve this:
class Cupcake extends React.Component {
  constructor(props) {
    super(props);
    this.state = { flour: 200 };
  }

  componentWillUnmount() {
    document.removeEventListener("old-event", eventHandler);
  }

  render() {
    return (
      <div>
        <p>Flour amount: {this.state.flour}</p>
      </div>
    );
  }
}
shouldComponentUpdate()
As we have established before, any state change will cause a UI re-render. If you have multiple states that don't cause any change in the UI, that's just a waste of CPU time and memory.
And your users probably wouldn't like this 'feature' 🥴.
Instead, you can tell React when to re-render the component:
class justTens extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  shouldComponentUpdate() {
    return this.state.count % 10 === 0;
  }

  render() {
    return (
      <div>
        <p>Multiples of 10: {this.state.count}</p>
      </div>
    );
  }
}
Only when shouldComponentUpdate() returns true will the component re-render. Isn't that sweet 🍬?
Inline CSS
Inline styling is a common practice in React. Especially considering that UI components come and go.
The syntax is kinda different though:
<p style={{backgroundColor: "purple", fontSize: 70}}>Life sucks.</p>
Notice the camelCasing? JSX won't accept standard CSS here, so we have to use an object literal (the curly brackets) with key-value pairs.
Also, numbers without units default to px. You need to use a string to pass a number with its unit, e.g.: "7em".
Return components with a condition
Yes, an if statement can do this (and ternaries too, for those people), but this solution is simply elegant:
<div>
  <p>Number of Children Captured: {childCount}</p>
  {childCount === 0 && <h3>Dang it.</h3>}
</div>
If the expression returns true, the JSX is returned.
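The trick relies on JavaScript's short-circuit evaluation: a && b evaluates to b when a is truthy, and to a (here, false) otherwise, and React renders nothing for false. In plain JavaScript:

```javascript
// && returns the right operand when the left is truthy,
// otherwise it returns the left operand itself.
function banner(childCount) {
  return childCount === 0 && "Dang it.";
}

console.log(banner(0)); // "Dang it." (React would render the <h3>)
console.log(banner(3)); // false (React renders nothing for false)
```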
Early render 🕊️
If you are tempted to, you may write your whole app in React.
The problem is, it would take longer for the user to see the site load. Another problem is that JS is not something understandable by Google crawlers (me too), so you smash your SEO opportunities.
Kill two birds with one stone by rendering part of the app on the server (if you use Node).
ReactDOMServer.renderToString(<h1>Render pls.</h1>);
Afterwords
Alright, that's all of React in FreeCodeCamp 🎉. Our next adventure would be Redux, which is a state management library (I have no idea what it is) 👀.
Happy learning everyone!
Follow me on Github!
Also on Twitter!
Monad Transformers
From HaskellWiki
Revision as of 11:59, 6 July 2011
There are currently several packages that implement similar interfaces to monad transformers (besides an additional package with a similar goal but different API named MonadLib):
- transformers: provides the classes MonadTrans and MonadIO, as well as concrete monad transformers such as StateT. The monad State s a is only a type synonym for StateT s Identity a. Thus both State and StateT can be accessed by the same methods like put and get. However, this only works if StateT is the top-most transformer in a monad transformer stack. This package is Haskell 98 and thus can also be used with JHC.
- mtl (Monad Transformer Library) comes in two versions:
- version 1 was the first implementation, containing the classes MonadTrans and MonadIO, concrete monad transformers such as StateT, and multi-parameter type classes with functional dependencies such as MonadState. Monads like State and their transformer counterparts like StateT are distinct types and can be accessed uniformly only through a type class abstraction like MonadState. This version is now obsolete.
- transformers is Haskell 98 and thus more portable, and doesn't tie you to functional dependencies. But because it lacks the monad classes, you'll have to lift operations to the composite monad yourself (examples).
3 How to move from MTL to transformers?
Many packages using MTL can be ported to transformers with only slight modifications.
Module names require the Trans infix, e.g. import Control.Monad.State ... must be replaced by import Control.Monad.Trans.State ....
Since State is only a type synonym, there is no longer a constructor named State. For construction you must use the function state, and instead of matching patterns you must call runState.
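Putting those rules together, a minimal before/after sketch (the tick example is illustrative, not from the packages' documentation):

```haskell
import Control.Monad.Trans.State  -- was: import Control.Monad.State

-- Construct with the 'state' function instead of the old 'State' constructor:
tick :: State Int Int
tick = state (\n -> (n, n + 1))

-- Deconstruct with 'runState' instead of pattern matching on 'State':
main :: IO ()
main = print (runState tick 0)  -- prints (0,1)
```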
4 See also
- Monad Transformers Explained
- Monad Transformers Step by Step (PDF)
- All About Monads
So I had this nifty program on my calculator that I wrote that solves quadratic equations for me. I decided it would be good practice to try to write it in C++. I've actually had 4 different versions, each one (at least in my opinion) getting better than the last. When I went to compile this one, for some reason when Dev C++ gets to the opening brace for the whatnext function it says there is a parse error. I bet I'm doing something stupidly small. Could I get some help? Any other suggestions on making the program more sleek and efficient would also be appreciated.
Source code:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <math.h>

using namespace std;

// declare prototypes
void whatnext (void);
void solver (void);

//global varible declarations
double a;
double b;
double c;
double y;
double e;
double d;

int main (int nNumberofArgs, char* pszArgs[])
{
    // Start of main program loop
    for (;; )
    {
        int chose;
        cout << "Type 1 to solve for an equation and 2 to exit:";
        cin >> chose;
        //Exit or solve
        //solve
        if (chose == 1)
        {
            // ask for variables
            cout << "What does A equal?:";
            cin >> a;
            cout << "What does B equal?:";
            cin >> b;
            cout << "What does C equal?:";
            cin >> c;
            cout << "What does Y equal?:";
            cin >> y;
            //call to decsion function
            whatnext();
        }
        //exit
        else
        {
            cout << "Thank you for using Kurt's Quadsolver" << endl;
            //pause and let them read my beautiful statement
            system("PAUSE");
            return 0;
        }
    }

void whatnext (void)
{
    //preliminary computations
    c -= y;
    d = -(b / (2 * a));
    e = (b * b) - (4 * a * c);
    //decide what to do
    //if there is no solution
    if (e < 0)
    {
        cout << "No Solution. \nSorry.\n";
        system("PAUSE");
        //return to caller
    }
    //call the solving function
    else
    {
        solver();
    }
}

//solver function
void solver(void)
{
    //for one solution
    if (e == 0)
    {
        //declare helper f variable
        double f;
        e = sqrt ( e ) / (2 * a );
        f = d - e;
        //display answers and let them see it
        cout << "Solution 1: \n";
        cout << f;
        cout << "\nYour welcome. \n";
        system("PAUSE");
        //return to calling function
    }
    //two solutions
    else
    {
        e = sqrt( e ) / (2 * a);
        //declare helper f and g variables
        double f;
        double g;
        f = d - e;
        g = d + e;
        //display solutions
        cout << "Solution 1: \n";
        cout << f;
        cout << "\nSolution 2: \n";
        cout << g;
        cout << "\nYour welcome. \n";
        system ("PAUSE");
        //return to calling function
    }
}
<< moderator edit: added code tags: [code][/code] >> | https://www.daniweb.com/programming/software-development/threads/28364/why-won-t-this-compile | CC-MAIN-2021-17 | refinedweb | 386 | 64.54 |
As usual, there are times when we love cops (when they rescue us from problems). However, there are times when we hate them and say "oh god, not a cop again" (when they catch us or fine us).

FxCop is a free static code analysis tool from Microsoft that checks .NET managed code assemblies for conformance to its design guidelines.
Once you install FxCop, go to Start Menu -> All Programs -> "Microsoft FxCop" and then click on FxCop. It will launch an empty window similar to the one shown below. It contains 3 panes: the configuration pane, message pane and properties pane.
Go to the Project menu and click on "Add Targets…". Select the EXE or DLL. Once that is done, click on the Analyze button and your screen will look like below:
To choose what to analyze and which rules to check, load assemblies/exes. There are 2 tabs at the top of the pane (Targets and Rules). You can load more than one assembly into the project by clicking on the "Add Target for Analysis" button in the toolbar or through the menu "Project->Add Targets". The pane shows you all the resources included in the assembly, all the namespaces in the assembly and, for each namespace, all the types. Drilling deeper with the plus sign in front of each type, you can show all the members. It shows the constructors, methods, properties and fields.
Selecting a resource, assembly, type or member of a type shows the following information in the property pane:
Right clicking on a type member, you can select from the "View | IR" popup menu to show the IL code for this member.
There are by default nine groups of rules:
Right-clicking on the rules list displays a popup menu which helps to group the rules in three categories:
Click on the Analyze button in the toolbar. While analyzing, FxCop shows a progress bar, and then fills the message pane with all the messages it found. On top of the pane, you see three buttons – Active, Excluded and Absent.
The Active list shows you all the active messages. These are all the errors and warnings FxCop has found with the last analysis performed. By default, the Active list is displayed.
Selecting a message shows you a summary in the properties pane. It includes the target item which caused the message, a short resolution description and also a help link which shows you a detailed description of the issue and the resolution for it. You can also double click on a message which brings up a dialog box with the same information but differently organized:
The "Message Details" dialog box has five tabs..
This pane has two tabs. The properties tab shows information about the selected assembly, namespace, type, type member, group of rules, rule or message. We have covered that already in detail. The Output tab shows informational, warning and error messages generated by the rules. These messages only appear if the TraceGeneral trace-switch in the FxCop.exe .config file is enabled.
Click on the menu "File | Save Report As" and provide a name and location for the report. The report is stored as an XML document which references an XSLT document – FxCopReport.xsl. Opening the report with your browser uses the referenced XSLT document to render an HTML report. The report allows you to drill down to all messages on the assembly level, type level and type member level. Changed report settings then take effect for all future FxCop projects you create. Try to include the historical view in the generated reports, because it is useful to be able to go back and understand what has been changed on a project over time.
To set up FxCop as an external tool in Visual Studio:
Note: If you have created custom rules and want to apply them to all projects, place the custom rule DLL in c:\Program Files\Microsoft FxCop 1.36\Rules.
The External Tools configuration dialog box is displayed. The following screen shot shows the dialog box after the steps in this procedure have been completed.
Done. Now you just need to run FxCop. Go to Tools -> FxCop to analyze your code, and because we checked "Use Output window" in the External Tools dialog, the results should appear in the Visual Studio output window.
- NAME
- SYNOPSIS
- DESCRIPTION
- FUNCTIONS
- DEPRECATED FUNCTIONS
- COMPATIBILITY
- EXPORT
- DEBUGGING
- LIMITATIONS
- SEE ALSO
- AUTHOR
- BUGS
- SUPPORT
- ACKNOWLEDGEMENTS
NAME
Math::BaseArith - mixed-base number arithmetic (like APL encode/decode)
SYNOPSIS
use Math::BaseArith qw( :all );

encode_base( $value, \@base );
decode_base( \@representation, \@base );

my @yd_ft_in = (0, 3, 12);

# convert 175 inches to 4 yards 2 feet 7 inches
encode_base( 175, \@yd_ft_in )

# convert 4 yards 2 feet 7 inches to 175 inches
decode_base( [4, 2, 7], \@yd_ft_in )
DESCRIPTION
The inspiration for this module is a pair of functions in the APL programming language called encode (a.k.a. "represent" or "antibase)" and decode (a.k.a. base). Their principal use is to convert numbers from one number base to another. Mixed number bases are permitted.
In this perl implementation, the representation of a number in a particular number base consists of a list reference.
FUNCTIONS
In the following description of encode_base and decode_base, function output is being printed by:
sub echo { print '=> ', join ', ', @_ }
encode_base
encode_base converts a value into its representation in the number base given by the radix list. For example:

encode_base( 2, [2, 2, 2, 2] )  => 0 0 1 0
encode_base( 5, [2, 2, 2, 2] )  => 0 1 0 1
encode_base( 13, [2, 2, 2, 2] ) => 1 1 0 1

The following examples illustrate encode_base further, and may clarify the use of encode_base with mixed number bases.
# The representation of 75 in base 4
encode_base( 75, [4, 4, 4, 4] ) => 1 0 2 3

# At least four digits are needed for the full representation
encode_base( 75, [4, 4, 4] ) => 0 2 3

# If fewer elements are in the second argument,
# leading digits do not appear in the representation.
encode_base( 75, [4, 4] ) => 2 3

# If the second argument is a one-element list reference, encode_base
# is identical to modulus (%)
encode_base( 75, [4] ) => 3
encode_base( 76, [4] ) => 0

# The expression encode_base( Q, [0] ) always yields Q as the result
encode_base( 75, [0] ) => 75

# This usage returns quotient and remainder
encode_base( 75, [0, 4] ) => 18 3

# The first quotient (18) is again divided by 4,
# yielding a second quotient and remainder
encode_base( 75, [0, 4, 4] ) => 4 2 3

# The process is repeated again. Since the last quotient
# is less than 4, the result is the same as encode_base(75,[4,4,4,4])
encode_base( 75, [0, 4, 4, 4] ) => 1 0 2 3
Now consider a mixed number base: convert 175 inches into yards, feet, inches.
# 175 inches is 14 feet, 7 inches (quotient and remainder).
encode_base( 175, [0, 12] ) => 14 7

# 14 feet is 4 yards, 2 feet,
encode_base( 14, [0, 3] ) => 4 2

# so 175 inches is 4 yards, 2 feet, 7 inches.
encode_base( 175, [0, 3, 12] ) => 4 2 7
decode_base
decode_base is used to determine the value of the representation of a quantity in some number base. If R is a list representation (perhaps produced by the encode_base function described above) of some quantity Q in a number base defined by the radix list B (i.e., @R = encode_base($Q, @B)), then the expression decode_base(@R, @B) yields $Q:
decode_base( [0, 0, 1, 0], [2, 2, 2, 2] ) => 2
decode_base( [0, 1, 0, 1], [2, 2, 2, 2] ) => 5
decode_base( [0, 3, 14], [16, 16, 16] )   => 62
The length of the representation list must be less than or equal to that of the base list.
decode_base( [1, 1, 1, 1], [2, 2, 2, 2] ) => 15
decode_base( [1, 1, 1, 1], [2] )          => 15
decode_base( [1], [2, 2, 2, 2] )          => 15
decode_base( [1, 1, 1, 1], [2, 2, 2] )    => (void) raises a LENGTH ERROR
As with the encode_base function, mixed number bases can be used:
# Convert 4 yards, 2 feet, 7 inches to inches.
decode_base( [4, 2, 7], [0, 3, 12] ) => 175

# Convert 2 days, 3 hours, 5 minutes, and 27 seconds to seconds
decode_base( [2, 3, 5, 27], [0, 24, 60, 60] ) => 183927

# or to minutes.
decode_base( [2, 3, 5, 27], [0, 24, 60, 60] ) / 60 => 3065.45
The first element of the radix list (second argument) is not used; it is required only to make the lengths match and so can be any value.
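Under the hood, decoding a mixed-radix representation is just Horner's rule, which also shows why the first radix never matters: it is only ever multiplied by a zero accumulator. A standalone sketch of the arithmetic (not the module's actual implementation, and assuming the two lists have equal length):

```perl
use strict;
use warnings;

# Mixed-radix decode by Horner's rule.
sub decode_sketch {
    my ($rep, $base) = @_;
    my $value = 0;
    $value = $value * $base->[$_] + $rep->[$_] for 0 .. $#$rep;
    return $value;
}

print decode_sketch( [4, 2, 7], [0, 3, 12] ), "\n";    # 175
```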
DEPRECATED FUNCTIONS
- encode
- decode

Synonymous with encode_base/decode_base. Imported by default. See COMPATIBILITY for details.
COMPATIBILITY
When this module was originally released on CPAN in 2002, it exported the functions encode and decode by default. These function names, however, are fairly common and so have a high probability of colliding in the global namespace with those from other modules. As of version 1.02, the functions were renamed encode_base and decode_base in order to better distinguish them.
For upward compatibility, encode/decode are provided as aliases for encode_base/decode_base and are still exported by default so scripts that include the module by:
use Math::BaseArith;
will continue to work unchanged. See the EXPORT section for the preferred way to include the module from version 1.02 ownward.
Note that the keyword :old can be used to specify the old function names (encode/decode). The most likely use of this is to exclude them from the namespace so you can then include just one of them. For example, to get decode without encode you can do this:
use Math::BaseArith qw( !:old decode );
But, rather than this approach, consider altering your code to use the new and preferred function names.
EXPORT
As of version 1.02 and above, the preferred way to include this module is by using :all, or specifying you want either encode_base or decode_base:
use Math::BaseArith ':all';
  or
use Math::BaseArith 'encode_base';
  or
use Math::BaseArith 'decode_base';
Do NOT include it without parameters, as that will automatically import the old function names encode/decode.
DEBUGGING
Set the global variable $Math::BaseArith::DEBUG to print debugging information to STDERR.
If set to 1, function names and parameters are printed.
If set to 2, intermediate results are also printed.
LIMITATIONS
The APL encode function allows both arguments to be a list, in which case the function evaluates in dot-product fashion, generating a matrix whose columns are the representation vectors for each value in the value list; i.e. a call such as encode_base([15,31,32,33,75],[4,4,4,4]) would generate the following matrix:

    0 0 0 0 1
    0 1 2 2 0
    3 3 0 0 2
    3 3 0 1 3
This version of encode_base supports only a scalar value as the first argument.
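The matrix behaviour can be emulated by looping the scalar form yourself. Here is a sketch in Python for illustration (not part of this module; the function name only mirrors it, and a leading radix of 0 — APL's "take everything left" — is not handled):

```python
def encode_base(value, radices):
    """Integer -> mixed-radix digit list (scalar value only, as in this module).

    Note: a radix of 0 would divide by zero in this sketch.
    """
    digits = []
    for radix in reversed(radices):
        digits.append(value % radix)
        value //= radix
    digits.reverse()
    return digits

# One column per value, then transpose to recover the APL-style matrix rows.
columns = [encode_base(v, [4, 4, 4, 4]) for v in (15, 31, 32, 33, 75)]
for row in zip(*columns):
    print(*row)
```

Run as-is, this prints the same four rows shown in the matrix above.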
The APL version of decode supports non-integer values. This version doesn't.
SEE ALSO
AUTHOR
PUCKERING, Gary Puckering <jgpuckering@rogers.com>
BUGS
Please report any bugs or feature requests to
bug-math-basearith at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
SUPPORT
You can find documentation for this module with the perldoc command.
perldoc Math::BaseArith
You can also look for information at:
RT: CPAN's request tracker (report bugs here)
CPAN Ratings
Search CPAN
ACKNOWLEDGEMENTS
Kenneth E. Iverson, inventor of APL and author of "A Programming Language", John Wiley & Sons, 1962
This module is free software; you can redistribute it and/or modify it under the same terms as Perl 5. For more details, see the full text of the licenses in the directory LICENSES. This program is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose. | https://metacpan.org/pod/release/PUCKERING/Math-BaseArith-1.04/lib/Math/BaseArith.pm | CC-MAIN-2021-17 | refinedweb | 1,224 | 55.78 |
I've been trying to lock down some security vulnerabilities on my server, and I got to the point of the tmp folders. Both of them store files that are used by other resources; still, reading a little more, I found out that /tmp can hold data that involves the server itself, unlike /var/tmp.
My question is: what are the implications of restricting write access to /tmp and /var/tmp? I already tried locking down /var/tmp, and so far nothing bad has happened.
Is it safe to block /tmp and deny saving files that could damage my server, or could there be some kind of abuse that affects my security? What is the vulnerability if I leave these folders world-writable with 777 permissions on my server (as they come by default)?
Thanks
/tmp and /var/tmp are supposed to be world-writable so that all programs/users can create their temporary files there. The sticky bit ensures that only the owner (and root of course) can move/rename/delete the file (see chmod(1)). Of course an application could still set insecure permissions on files, allowing read- or write-access to the wrong user(s), but that's up to the application and has nothing to do with the permissions on those directories.
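As a quick check, the sticky bit shows up as the trailing `t` in the directory mode, and you can reproduce the same 1777 setup on a directory of your own (the directory name below is made up for the example):

```shell
# /tmp is typically mode 1777: world-writable plus the sticky bit ('t').
ls -ld /tmp

# Reproduce the same setup on a scratch directory (hypothetical name):
mkdir -p /tmp/shared-scratch
chmod 1777 /tmp/shared-scratch
ls -ld /tmp/shared-scratch
```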
/tmp and /var/tmp were historically well-known locations where programs could store temporary files. However, to function properly they have to be world writable (and sticky). This works, but it also opens up a wide variety of potential security issues with one program writing temporary files that another program will read, thus causing the second program to behave in unexpected and undesirable ways.
The first "fix" for this was to have all programs writing temporary files to use a system call to generate random filenames. This sort of works, but it's dependent on (1) the program actually using it; (2) having a secure RNG; and (3) luck.
There's a move on in the Fedora Project to have system services each use an independent private temporary directory accessible only to that service. Each program sees and uses /tmp but they are actually namespaced bind mounts managed by the system. Very clever solution, and one I suspect will start showing up in other Linux distributions soon.
Random album generator in python
Tue 06 November 2012
You may have heard about the "random music album" thing. Basically it goes like this:
- The album cover is the 4th image from a random page of Flickr's interesting pics
- The band name is the title of a random Wikipedia article
- The album name comes from the last 3 or 5 words of a famous quote
This is all well and good, but isn't getting all this data manually, and then making the album cover a bit tedious? Sure it is, so let's see how we can do this in python
Here are some example images generated with this script:
Let's cover our dependencies first. You will need:
- python 2.6 (or similar)
- PIL (Python Imaging Library)
- A Flickr API key
- wget
Not much to ask for is it? So once you've made sure you have all that, let's start by getting the album cover. This is handled by the interestingness part of the API and is a very simple call that will return an XML structure of the photos on that page. Once we have a response we parse the XML and get the elements we need to construct the photo URL. It's a short function and here it is:
    def getAlbumImage():
        page = random.randint(1, 80)
        url = "" % (page)  # (Flickr API URL elided in the original post)
        dom = minidom.parse(urllib.urlopen(url))
        elem = dom.getElementsByTagName('photo')[4]
        farm_id = elem.getAttributeNode('farm').nodeValue
        server_id = elem.getAttributeNode('server').nodeValue
        the_id = elem.getAttributeNode('id').nodeValue
        secret = elem.getAttributeNode('secret').nodeValue
        photo_url = "" % (farm_id, server_id, the_id, secret)  # (URL elided in the original post)
        target_photo = 'band.jpg'
        subprocess.call(["/usr/bin/wget", "-O", target_photo, photo_url])
First we generate a random number which will be the page number we use in constructing the API URL. Once constructed, we use minidom to parse the response and start extracting our data. If you paste the URL into your browser (with your valid API key) you can see the response format. Now that we have all the data we need, we construct our image URL (the format for this is in the docs) and save it to band.jpg using wget. You can of course use something else, but this is just easy here.
Right, onto getting our band name. This is the random Wikipedia article. Luckily Wikipedia has an API also, and doesn't require an API key for this purpose. This is an even shorter function:
    def getBandName():
        random_wiki_url = ""
        dom = minidom.parse(urllib.urlopen(random_wiki_url))
        for line in dom.getElementsByTagName('page'):
            return line.getAttributeNode('title').nodeValue
As above we create the URL, parse the output with minidom and fetch our page title. Done.
Album title is a little trickier. I couldn't find a decent quote page that offered a free, easy to use API, so I decided to be a little more hacky and just parse the HTML itself. Hey, it works, don't judge me. We need a helper class for this called MyHTMLParser that derives from python's HTMLParser class.
    class MyHTMLParser(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.get_data = False
            self.quotes = []

        def handle_starttag(self, tag, attrs):
            if tag == "dt":
                if attrs[0][0] == 'class' and attrs[0][1] == 'quote':
                    self.get_data = True

        def handle_endtag(self, data):
            pass

        def handle_data(self, data):
            if self.get_data:
                self.quotes.append(data)
                self.get_data = False

    def getAlbumTitle():
        random_quote_url = ""
        page = urllib.urlopen(random_quote_url).read()
        parser = MyHTMLParser()
        parser.feed(page)
        num_quotes = len(parser.quotes)
        # randint's upper bound is inclusive, so use num_quotes - 1
        quote = parser.quotes[random.randint(0, num_quotes - 1)].rstrip('.')
        last_set = random.randint(3, 5)
        words = quote.split()
        if last_set > len(words):
            last_set = len(words)
        return " ".join(words[-last_set:])
The class here is used to parse the HTML from, specifically the tag that starts with quote. Once we have that we start capturing the data between that tag and store it in an array. Our getAlbumTitle function will use this data to select a random quote and then get the last 3 or 5 words from it and join them with spaces before returning that new string.
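That last-few-words step is just Python's negative list slicing; a quick illustration (the quote here is only an example):

```python
quote = "The only thing we have to fear is fear itself"
words = quote.split()

# Negative slice indexes count back from the end of the list.
print(" ".join(words[-3:]))  # is fear itself
print(" ".join(words[-5:]))  # to fear is fear itself
```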
So now we have the data that we need, we just need to wrap it all up and generate our final image using PIL. Surprise, surprise, this isn't a big deal either.
    def main():
        band_name = getBandName()
        album_title = getAlbumTitle()
        cover = getAlbumImage()
        from PIL import ImageFont
        from PIL import Image
        from PIL import ImageDraw
        fnt = ImageFont.truetype("/usr/share/fonts/dejavu/DejaVuSans.ttf", 25)
        lineWidth = 20
        image = Image.open("band.jpg")
        imagebg = Image.new('RGBA', image.size, "#000000")  # make an entirely black image
        mask = Image.new('L', image.size, "#000000")        # make a mask that masks out all
        draw = ImageDraw.Draw(image)                        # setup to draw on the main image
        drawmask = ImageDraw.Draw(mask)                     # setup to draw on the mask
        drawmask.line((0, lineWidth, image.size[0], lineWidth),
                      fill="#999999", width=100)            # draw a line on the mask to allow some bg through
        image.paste(imagebg, mask=mask)                     # put the (somewhat) transparent bg on the main
        draw.text((10, 0), band_name, font=fnt, fill="#ffffff")      # add some text to the main
        draw.text((10, 40), album_title, font=fnt, fill="#ffffff")   # add some text to the main
        del draw
        image.save("out.jpg", "JPEG", quality=100)
Let's go over what's happening here. You're welcome to clean it up as an exercise if you wish, or if you think some values (like filenames) need configuring. First we call the previously defined functions to fetch our album data, and then we start the drawing. I use the DejaVuSans.ttf font for this example, but you can use any font you have, or even use different fonts for the title and band name, to make your cover look a bit more pleasing. Once the image we saved from Flickr is open, we start writing our title and band name on the album cover, and save out the result as a JPEG. The code here is commented so I won't go over the details here.
And that's all there is to it. If you want the the script as a whole file, you can get it from this gist | https://www.unlogic.co.uk/2012/11/06/random-album-generator-in-python/ | CC-MAIN-2018-22 | refinedweb | 1,014 | 67.15 |
Commentary: Open source has never been more popular or more under attack, but there's something cloud providers can do to make OSS more secure.
TechRepublic contributing writer Jack Wallen is correct that "Open source software has proved itself, time and time and time again, that it is business-grade for a very long time." Sonatype is also correct that supply chain attacks against popular open source software repositories jumped 650% over the last year. In fact, it's the very popularity of that open source software that makes it a prime target.
Even though President Biden has called for greater focus on the safety and integrity of open source software, we're no closer to knowing how to achieve it. Some larger projects like Kubernetes have the corporate backing necessary to ensure significant investment in securing the software, while others may be heavily used but can be the labor of love of a handful of developers. No federal mandate will magically gift the necessary resources to constantly update these less-moneyed projects.
And yet, there's hope. Cloud vendors and others increasingly incorporate open source software to deliver comprehensive offerings. Customers may be able to look to them to ensure the security of the code they operationalize.
SEE: Security incident response policy (TechRepublic Premium)
Open source under attack
Open source keeps growing in popularity, to the tune of 2.2 trillion open source packages pulled from repositories like npmjs and Maven in 2021, according to Sonatype's study. As software becomes central to how most organizations operate, developers must build with ever-increasing velocity. With over 100 million repositories available on GitHub alone, many of them high in quality, developers turn to open source to get great software fast.
That's the good thing. But not completely.
Sonatype scoured the top 10% of the most popular Java, JavaScript, Python and .NET projects, finding that 29% of them contain at least one known security vulnerability. As the report continues, the old way of exploiting vulnerabilities in open source projects would be to look for publicly accessible, unpatched security holes in open source code. But now, hackers "are taking the initiative and injecting new vulnerabilities into open source projects that feed the global supply chain, and then exploiting those vulnerabilities."
Thus far, Node.js (npm) and Python (PyPI) repositories have been the primary targets. How do attackers infiltrate the upstreams of popular projects? There are a few ways, the most prominent of which is called dependency or namespace confusion.
As the report authors noted: "The novel, highly targeted attack vector allows unwanted or malicious code to be introduced downstream automatically without relying on typosquatting or brandjacking techniques. The technique involves a bad actor determining the names of proprietary (inner source) packages utilized by a company's production application. Equipped with this information, the bad actor then publishes a malicious package using the exact same name and a newer semantic version to a public repository, like npmjs, that does not regulate namespace identity."
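A toy sketch of why this works (hypothetical package name and version numbers, and a deliberately naive resolver — not how any real package manager is implemented): an installer that consults both an internal and a public index and simply prefers the highest semantic version will pick the attacker's package.

```python
# Internal package 'acme-billing' exists only at version 1.2.0;
# an attacker publishes 'acme-billing' 99.0.0 to the public registry.
internal_index = {"acme-billing": ["1.2.0"]}
public_index = {"acme-billing": ["1.2.0", "99.0.0"]}

def naive_resolve(name):
    candidates = internal_index.get(name, []) + public_index.get(name, [])
    # "Newest wins" across all indexes -- the confusion at the heart of the attack.
    return max(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))

print(naive_resolve("acme-billing"))  # 99.0.0 -- the attacker's upload
```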
These and other novel attacks are starting to add up (Figure A).
Figure A
There are at least two difficulties inherent in improving open source security. The first I've mentioned: Not every project maintainer has the resources or know-how to effectively secure her code. On the receiving end, many enterprises aren't quick to patch even known security problems. But that's not to say things are hopeless. Far from it.
I know the pieces fit
It's too soon to call it a trend, but RedMonk analyst Stephen O'Grady has highlighted early indicators of an industry shift away from isolated infrastructure primitives (e.g., compute, storage, etc.) and toward abstracted, integrated workflows. As he stated, "[V]endors are evolving beyond their original areas of core competency, extending their functional base horizontally in order to deliver a more comprehensive, integrated developer experience. From version control to monitoring, databases to build systems, every part of an application development workflow needs to be better and more smoothly integrated."
All this in an effort to make developers' lives easier.
What has made their work harder? In a more recent post he noted, "Where a developer's first–and at times, only–priority might once have been scale, today it's much more likely to be velocity." As noted above, that "need for speed" is pushing developers to embrace open source, just as it's nudging them to embrace cloud. Anything and everything that removes friction so they can build and deploy software more quickly. Often, they're getting that open source delivered to them as managed services, which strips away hardware and software friction, allowing developers to move at maximum speed with a minimum of constraint.
SEE: Vendor management & selection policy (TechRepublic Premium)
But it's not merely a matter of a cloud vendor making, say, Apache Kafka available as a service. No, what's happening, said O'Grady, is the packaging of (in this example) Kafka as part of a larger cloud service: "Instead of providing a layer above base hardware, operating systems or other similar underlying primitives, they abstract away an entire infrastructure stack and provide a higher level, specialized managed function or service."
This brings us back to those supply chain attacks.
If vendors increasingly ship "higher level, specialized managed function[s] or service[s]," they'll also presumably be on the hook for the provenance and security of the component parts of that service. This should lead more cloud providers to invest in the ongoing development, maintenance and security of these component parts, not to mention contractually standing behind those components for customers. A cloud vendor doesn't get to ship OpenSSL, as an example, and then point the finger of blame at some hapless open source maintainer if things go awry. The cloud vendor is on the hook for support.
It's still early, but hopefully this widespread adoption of open source software to deliver higher-order cloud services will, in turn, lead to widespread contributions to the open source projects upon which these services depend. Purely from a security standpoint, it's in the self-interest of the cloud vendors.
Disclosure: I work for MongoDB, but the views expressed herein are mine.
Also see
- Tech companies pledge to help toughen US cybersecurity in White House meeting (TechRepublic)
- How to become a developer: A cheat sheet (TechRepublic)
- Kubernetes: A cheat sheet (free PDF) (TechRepublic)
- If you've always wanted to learn to program with Python, here's an opportunity (TechRepublic Academy)
- A guide to The Open Source Index and GitHub projects checklist (TechRepublic Premium)
- Linux, Android, and more open source tech coverage (TechRepublic on Flipboard) | https://www.techrepublic.com/article/heres-a-fix-for-open-source-supply-chain-attacks/ | CC-MAIN-2021-43 | refinedweb | 1,112 | 51.68 |
Test::MockClass - A module to provide mock classes and mock objects for testing
    # Pass in the class name and version that you want to mock
    use Test::MockClass qw{ClassToMock 1.1};

    # create a MockClass object to handle a specific class
    my $mockClass = Test::MockClass->new('ClassToMock');

    # specify to inherit from a real class, or a mocked class:
    $mockClass->inheritFrom('IO::Socket');

    # make a constructor for the class, can also use 'addMethod' for more control
    $mockClass->defaultConstructor(%classWideDefaults);

    # add a method:
    $mockClass->addMethod('methodname', $coderef);

    # add a simpler method, and specify return values that it will return automatically
    $mockClass->setReturnValues('methodname2', 'always', 3);

    # create an instance of the mocked class:
    my $mockObject = $mockClass->create(%instanceData);

    # set the desired call order for the methods:
    $mockClass->setCallOrder('methodname2', 'methodname', 'methodname');

    # run tests using the mock Class elsewhere:
    # in the class to test:
    sub objectFactory {
        return ClassToMock->new;
    }

    # in your test code:
    assert($testObj->objectFactory->isa("ClassToMock"));

    # get the object Id for the rest of the methods:
    my $objectId = "$mockObject";
    # or
    $objectId = $mockClass->getNextObjectId();

    # verify that the methods were called in the correct order:
    if($mockClass->verifyCallOrder($objectId)) {
        # do something
    }

    # get the order that the methods were called:
    my @calls = $mockClass->getCallOrder($objectId);

    # get the list of arguments passed per call:
    my @argList = $mockClass->getArgumentList($objectId, 'methodname', $callPosition);

    # get the list of accesses made to a particular attribute (hashkey in $mockObject)
    my @accesses = $mockClass->getAttributeAccess($objectId, 'attribute');
EXPORT

Nothing by default.

SEE ALSO

Hook::WrapSub, Tie::Watch, Scalar::Util.
This module provides a simple interface for creating mock classes and mock objects with mock methods for mock purposes, I mean testing purposes. It also provides a simple mechanism for tracking the interactions to the mocked objects. I originally wrote this class to help me test object factory methods, since then, I've added some more features. This module is hopefully going to be the Date::Manip of mock class/object creation, so email me with lots of ideas, everything but the kitchen sink will go in!
This method is called when you use the class. It optionally takes a list of classes to mock:
use Test::MockClass qw{IO::Socket File::Finder DBI};
You can also specify the version numbers for the classes:
use Test::MockClass qw{DBD::mysql 1.1 Apache::Cookie 1.2.1}
This use fools perl into thinking that the class/module is already loaded, so it will override any use statement within the code that you're trying to test.
The Test::MockClass constructor. It has one required argument which is the name of the class to mock. It also optionally takes a version number as a second argument (this version will override any passed to the use statement). It returns a Test::MockClass object, which is the interface for all of the method making and tracking for mock objects created later.
my $mockClass = Test::MockClass->new('ClassToMock', '1.1');
If no version is specified in either the use statement or the call to new, it defaults to -1.
A mocked class needs methods, and this is the most flexible way to create them. It has two required arguments, the first one is the name of the method to mock. The second argument is a coderef to use as the contents of the mocked method. It returns nothing of value. What it does that is valuable is install the method into the symbol table of the mocked class.
    $mockClass->addMethod('new', sub {
        my $proto = shift;
        my $class = ref($proto) || $proto;
        my $self  = {};
        bless($self, $class);
    });

    $mockClass->addMethod('foo', sub { return 'foo'; });
I'm often too lazy, or, er, busy to write my own mocked constructor, especially when the constructor is a simple standard one. For those times I use the defaultConstructor method. This method takes a hashy list as the optional arguments, which it passes to the constructor as class-wide default attributes/values. It installs the constructor in the mocked class as 'new' or whatever was set with $mockClass->constructor() (see that method description later in this document).
$mockClass->defaultConstructor('cat' => 'hat', 'grinch' => 'x-mas');
Of course, this assumes that your objects are based on hashes.
My laziness often extends beyond the simple constructor to the methods of the mocked class themselves. Often I don't feel like writing a whole method when all I need for testing is to have the mocked method return a specific value. For times like this I'm glad I wrote the setReturnValues method. This method takes a variable number of arguments, but the first two are required. The first argument is the name of the method to mock. The second argument specifies what the mocked method will return. Any additional arguments may be used as return values depending on the type of the second argument. The possible values for the second argument are as follows:
This specifies that the method should always return true (1).
    $mockClass->setReturnValues('trueMethod', 'true');
    if($mockObject->trueMethod) {}
This specifies that the method should always return false (0).
    $mockClass->setReturnValues('falseMethod', 'false');
    unless($mockObject->falseMethod) {}
This specifies that the method should always return undef.
    $mockClass->setReturnValues('undefMethod', 'undef');
    if(defined $mockObject->undefMethod) {}
This specifies that the method should always return all of the rest of the arguments to setReturnValues.
    $mockClass->setReturnValues('alwaysFoo', 'always', 'foo');
    $mockClass->setReturnValues('alwaysFooNBar', 'always', 'foo', 'bar');
This specifies that the method should return the remaining arguments one at a time, one per method invocation, until they have all been used; after that it returns undef.
$mockClass->setReturnValues('aFewGoodMen', 'series', 'Abraham', 'Martin', 'John');
This specifies that the method should return the remaining arguments one at a time, one per method invocation; once all have been used it starts over at the beginning.
$mockClass->setReturnValues('boybands', 'cycle', 'BackAlley-Bros', 'OutOfSync', 'OldKidsOverThere');
This specifies that the method should return a random value from the list. Well, as random as perl's srand/rand can get it anyway.
$mockClass->setReturnValues('userInput', 'random', (0..9));
Sometimes it's important to impose some guidelines for behavior on your mocked objects. This method allows you to set the desired call order for your mocked methods, the order that you want them to be called. It takes a variable length list which is the names of the methods in the proper order. This list is then used in comparison with the actual call order made on individual mocked objects.
$mockClass->setCallOrder('new', 'foo', 'bas', 'bar', 'foo');
Objects often do bizarre and unnatural things when you aren't looking, so I wrote this method to track what they did behind the scenes. This method returns the actual method call order for a given object. It takes one required argument which is the object Id for the object you want the call order of. One way to get an object's Id is to simply pass it in stringified:
my @callOrder = $mockClass->getCallOrder("$mockObject");
This method returns an array in list context and an arrayref under scalar context. It returns nothing under void context.
Now we could compare, by hand, the differences between the call order we wanted and the call order we got, but that would be all boring and we've got better things to do. I say we just use the verifyCallOrder method and be done with it. This method takes one required argument which is the object Id of the object we want to verify. It returns true or false depending on whether the methods were called in the correct order or not, respectively.
if($mockClass->verifyCallOrder("$mockObject")) { # do something }
Sometimes you might want to use the Test::MockClass object to actually return mocked objects itself, I'm not sure why, but maybe someone would want it, so for them there is the create method. This method takes a variable sized hashy list which will be used as instance attributes/values. These attributes will override any class-wide defaults set by the defaultConstructor method. The method returns a mock object of the appropriate mocked class. The only caveat with this method is that in order for the attribute/values defaulting-override stuff to work you have to use the defaultConstructor to set up your constructor.
    $mockClass->defaultConstructor('spider-man' => 'ben reilly');
    my $mockObject = $mockClass->create('batman' => 'bruce wayne', 'spider-man' => 'peter parker');
I've found that I often want to know exactly how a method was called on a mock object, when I do I use getArgumentList. This method takes three arguments, two are required and the third is often needed. The first argument is the object Id for the object you want the tracking for, the second argument is the name of the method that you want the arguments from, and the third argument corresponds to the order of call for this method (not to be confused with the call order for all the methods). The method returns an array which is a list of the arguments that were passed into the method. In scalar context it returns a reference to an array. The following example gets the arguments from the second time 'fooMethod' was called.
my @arguments = $mockClass->getArgumentList("$mockObject", 'fooMethod', 1);
If the third argument is not supplied, it returns an array of all of the argument lists.
Sometimes your mock objects are destroyed before you can get their object id. Well in those cases you can get the cached object Id from the Test::MockClass object. This method requires no arguments and returns object Ids suitable for use in any of the other Test::MockClass methods. The method begins with the object id for the first object created, and returns subsequent ones until it runs out, in which case it returns undef, and then starts over.
my $firstObjectId = $mockClass->getNextObjectId();
Sometimes you need to track how the object's attributes are accessed. Maybe someone's breaking your encapsulation, shame on them, or maybe the access is okay. For whatever reason if you want a list of accesses for an object's underlying data structure just use getAttributeAccess method. This method takes a single required argument which is the object id of the object you want the tracking for. It returns a multi dimensional array, the first dimension corresponds to the order of accesses. The second dimension contains the actual tracking information. The first position [0] in this array describes the type of access, either 'store' or 'fetch'. The second position [1] in this array corresponds to the attribute that was accessed, the key of the hash, the index of the array, or nothing for a scalar. The third position in this array is only used when the access was of type 'store', and it contains the new value. In scalar context it returns an array ref.
    my @accesses = $mockClass->getAttributeAccess("$mockObject");
    # @accesses holds array refs, so dereference each element:
    print "breaky\n" if(grep { $_->[0] eq 'store' } @accesses);
A second argument can be supplied which corresponds to the order that the access took place.
Maybe my mock objects are too slow for you, what with all the tracking of interactions and such. Maybe all you need is a mock object and you don't care how it was interacted with. Maybe you have to make millions of mock objects and you just don't have the memory to support tracking. Well fret not my friend, for the noTracking method is here to help you. Just call this method (no arguments required) and all the tracking will be disabled for any subsequent mock objects created. I personally like tracking, so I switch it on by default.
$mockClass->noTracking(); # no more tracking of methodcalls, constructor-calls, attribute-accesses
So you want to track some calls but not others? Fine, use the tracking method to turn tracking back on for any subsequently created mock objects.
$mockClass->tracking(); # now tracking is back on.
You want to use defaultConstructor or create, but you don't want to use 'new' as the name of your constructor? That's fine, just pass in the name of the constructor you want to use/create to the constructor method. Ugh, that's kinda confusing, an example will be simpler.
    $mockClass->constructor('create');     # change from 'new'.
    $mockClass->defaultConstructor();      # installs 'create'.
    my $mockObject = MockClass->create();  # calls 'create' on mocked class.
This method allows your mock class to inherit from other mock classes or real classes. Since it basically just uses perl's inheritance, it's pretty transparent. And yes, it does support multiple inheritance, though you don't have to use it if you don't wanna.
Figure out how to add simple export/import mechanisms for mocked classes. Make Test::MockClass less hash-centric. Stop breaking Tie::Watch's encapsulation. Provide mock objects with an interface to their own tracking. Make tracking and noTracking more fine-grained. Maybe find a safe way to clean up namespaces after the maker object goes out of scope. Write tests for arrayref and scalarref based objects. Write tests for unusual objects (regular expression, typeglob, filehandle, etc.)
Jeremiah Jordan <jjordan@perlreason.com>
Inspired by Test::MockObject by chromatic, and by Test::Unit::Mockup (ruby) by Michael Granger. Both of whom were probably inspired by other people (J-unit, Xunit types maybe?) which all goes back to that sUnit guy. Thanks to Stevan Little for the constructive criticism.
Copyright (c) 2002, 2003, 2004, 2005 perl Reason, LLC. All Rights Reserved.
This module is free software. It may be used, redistributed and/or modified under the terms of the Perl Artistic License (see) or under the GPL. | http://search.cpan.org/~jjordan/Test-MockClass-1.05/lib/Test/MockClass.pm | CC-MAIN-2015-14 | refinedweb | 2,242 | 53.61 |
A resource response.
#include <Wt/Http/Response>
This class defines the HTTP response for a WResource request.
More specifically you can:
You may choose to provide only a partial response. In that case, use createContinuation() to create a continuation object in which you can record information needed by the next request to process the response further.
Add an HTTP header.
Headers may be added only before setting the content mime-type (setMimeType()), and before streaming any data to the out() stream.
Return the continuation, if one was created for this response.
Returns the continuation that was previously created using createContinuation(), or 0 if no continuation was created yet.
Create a continuation object for this response.
A continuation is used to resume sending more data later for this response. There are two possible reasons for this:
A new call to handleRequest() will be made to retrieve more data.
Sets the content length.
If content length is known, use this method to set it. File downloads will see progress bars. If not set, Wt will use chunked transfers.
Always use this method instead of setting the Content-Length header with addHeader().
Headers may be added only before setting the content mime-type (setMimeType()), and before streaming any data to the out() stream.
Set the content mime type.
The content mimetype is used by the browser to correctly interpret the resource.
Sets the response status.
Unless a overriden, 200 OK will be assumed. | https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1Http_1_1Response.html | CC-MAIN-2021-31 | refinedweb | 243 | 60.51 |
Summary: in this tutorial, you’ll learn about the two main data structures of Pandas – the Series and DataFrame.
Pandas Series
The Series is the most basic data structure of pandas.
Assuming you’re already familiar with Python data types, the
Series looks pretty similar to a list in Python. The core difference between a List and a Series is that the Series allows you to use anything you like as the index, instead of restricting on zero-based array indexes.
A Series object contains two “columns”, the first one is the index, the second one contains our data. By default, the index is number based and starts from zero.
import numpy as np import pandas as pd # This is a Series # with number-based indexes example = pd.Series([1,2,3,4,5]) example # Output : 0 1 1 2 2 3 3 4 4 5 dtype: int64
You can retrieve one or multiple items from a
Series using the indexes.
# Retrieving a single value # example[3] # Output 4 # Retrieving multiple values # example[[2,4]] # Output 2 3 4 5 dtype: int64
But you can specify your own index by passing an
index argument.
import numpy as np import pandas as pd # This is another Series # with character-based indexes example = pd.Series([1,2,3,4,5], index=['a', 'b', 'c', 'd', 'e']) example # Output : a 1 b 2 c 3 d 4 e 5 dtype: int64
By specifying a custom index column, you can access items from those indexes.
# Retrieving a single value # example['c'] # Output 3 # Retrieving multiple values # example[[c,e]] # Output c 3 e 5 dtype: int64
You can also perform other statistical operations with a Series, a common one is to get the mean of all values in a Series object.
# Get the means of the values example.mean() # Output 3.0
While not being particularly useful than the ordinary list at first, Series is the base of the next powerful data types of Pandas : the DataFrame.
Summary: Series is a list with customizable indexes, serve as the base of DataFrame. | https://monkeybeanonline.com/pandas-series/ | CC-MAIN-2022-27 | refinedweb | 347 | 51.07 |
Provides functions for performing openGL drawing operations. More...
#include <StelPainter.hpp>
Provides functions for performing openGL drawing operations..
Definition at line 40 of file StelPainter.hpp.
Define the drawing mode when drawing vertex.
Definition at line 55 of file StelPainter.hpp.
Define the drawing mode when drawing polygons.
Definition at line 46 of file StelPainter.hpp.
Draw a disk with a special texturing mode having texture center at center of disk.
The disk is made up of concentric circles with increasing refinement. The number of slices of the outmost circle is (innerFanSlices<<level).
Generate a StelVertexArray for a sphere.
Delete the OpenGL shaders objects.
This method needs to be called once before exit.
Draw a simple circle, 2d viewport coordinates in pixel.
Draws primitives using vertices from the arrays specified by setVertexArray().
Draw a great circle arc between points start and stop..
Draw a line between the 2 points.
Draw a curve defined by a list of points.
The points should be already tesselated to ensure that the path will look smooth. The algorithm take care of cutting the path if it crosses a viewport discontinutiy.
Draw a GL_POINT at the given position.
Draw a rectangle using the current texture at the given projected 2d position.
This method is not thread safe.
Draw a small circle arc between points start and stop with rotation point in rotCenter.. Example: A latitude circle has 0/0/sin(latitude) as rotCenter. If rotCenter is equal to 0,0,0, the method draws a great circle.
Draw the given SphericalRegion.
Draw a square using the current texture at the given projected 2d position.
This method is not thread safe.
Draw a rotated square using the current texture at the given projected 2d position.
This method is not thread safe.
Same as drawSprite2dMode but don't scale according to display device scaling.
Draws the primitives defined in the StelVertexArray.
Draw the string at the given position and angle with the given font.
If the gravity label flag is set, uses drawTextGravity180.
Fill with black around the viewport.
Simulates glEnableClientState, basically you describe what data the ::drawFromArray call has available.
Get the color currently used for drawing.
Get the font metrics for the current font.
Return the instance of projector associated to this painter.
Definition at line 75 of file StelPainter.hpp.
Returns a QOpenGLFunctions object suitable for drawing directly with OpenGL while this StelPainter is active.
This is recommended to be used instead of QOpenGLContext::currentContext()->functions() when a StelPainter is available, and you only need to call a few GL functions directly.
Definition at line 72 of file StelPainter.hpp.
Create the OpenGL shaders programs used by the StelPainter.
This method needs to be called once at init.
Link an opengl program and show a message in case of error or warnings.
Re-implementation of gluCylinder : glu is overridden for non-standard projection.
convenience method that enable and set all the given arrays.
It is equivalent to calling enableClientState and set the array pointer for each arrays.
Enable OpenGL blending.
By default, blending is disabled. The additional parameters specify the blending mode, the default parameters are suitable for "normal" blending operations that you want in most cases. Blending will be automatically disabled when the StelPainter is destroyed.
Set the color to use for subsequent drawing.
use instead of glColorPointer
Definition at line 275 of file StelPainter.hpp.
Set the OpenGL GL_CULL_FACE state, by default face culling is disabled.
Set the font to use for subsequent text drawing.
Enables/disables line smoothing. By default, smoothing is disabled.
Sets the line width. Default is 1.0f.
use instead of glNormalPointer
Definition at line 281 of file StelPainter.hpp.
use instead of glTexCoordPointer
Definition at line 269 of file StelPainter.hpp.
use instead of glVertexPointer
Definition at line 264 of file StelPainter.hpp.
Draw a fisheye texture in a sphere. | http://stellarium.org/doc/0.15/classStelPainter.html | CC-MAIN-2019-13 | refinedweb | 643 | 61.93 |
I am having troubles running glew.
Using the following code:
#include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> int main(int argc, char **argv) { glutInit(&argc, argv); glutInitContextVersion(3, 1); glutInitContextFlags(GLUT_FORWARD_COMPATIBLE); glutInitContextProfile(GLUT_CORE_PROFILE); glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA); glutInitWindowSize(1024, 768); //glutInitWindowPosition(100, 100); glutCreateWindow("First run."); // init glew GLenum glewInitResult; glewExperimental = GL_TRUE; glewInit(); /* if(GLEW_OK != glewInitResult) { std::cerr << "ERROR: " << glewGetErrorString(glewInitResult); return 1; } */ std::cout << "hello"; glClearColor(0,0,0,0); glutMainLoop(); return 0; }
I then compile/link with the following in MinGW:
g++ -c -o glewTest.o glewTest.cpp -I"C:\MinGW\include\GL" -DGLEW_STATIC
and...
g++ -o glewTest.exe glewTest.o -L"C:\MinGW\lib" -lglew32 -lfreeglut -lopengl32 -lglu32 -Wl,--subsystem,windows
or...
g++ -o glewTest.exe glewTest.o -L"C:\MinGW\lib" -lglew32mx -lfreeglut -lopengl32 -lglu32 -Wl,--subsystem,windows
The "g++ -o glewTest.exe glewTest.o -L"C:\MinGW\lib" -lglew32 -lfreeglut -lopengl32 -lglu32 -Wl,--subsystem,windows"
does compile, but when I try to run I get the following error:
"The procedure entry point glewInit@0 could not be located in the dynamic link library glew32.dll"
When I try to compile with "g++ -o glewTest.exe glewTest.o -L"C:\MinGW\lib" -lglew32mx -lfreeglut -lopengl32 -lglu32
-Wl,--subsystem,windows", I get the follow error:
"glewTest.o:glewTest.cpp:(.text+0x15a): undefined reference to `glewInit@0' collect2.exe: error: ld returned 1 exit status"
I am at a loss...what do I do to either get the sys to find the dll, or get it to find the lib ref?
I am fine with either a static or dynamic lib build, but I have already tried the following with the dll version:
Place the DLL in the same folder with the executable. No joy.
Make sure I am using the same version and mem type (32/64bit) of the Lib with the DLL. Did so. Had no bearing.
Make sure I am placing the lib/DLL/Headers in the correct paths. Did so. Also had no bearing.
Make sure I am linking my necessary files correctly in the compile process. Obviously not working.
I have tried sifting through over 40-50 different webpages and they all pretty much say the same thing. Any new ideas? | http://www.gamedev.net/topic/648065-compile-glew-statically-on-win32-with-mingw/ | CC-MAIN-2016-26 | refinedweb | 370 | 59.8 |
I'm very new to programming in general so I'm sure this question is naive.
So, I wrote a simple magic eight ball program and I wanted to move onto an adventure game, but I thought it would be neat to include the eight ball program in the game, like making a console menu where you could select either game.
So, my problem is, I don't know how to put my eightball program in a file, and then run it from the main program file with something like
eightball();
if (response == 1)
{
eightball();
}
You will probably want to create a new class file and then add your code something like this:
namespace MyApplication { class EightBall { public static void eightball () { // Add your code here } } }
Call it from your
Main-function like this:
if (response == 1) { EightBall.eightball(); } | https://codedump.io/share/2thkV3oYK03I/1/running-another-cs-file-within-the-main-programcs-file | CC-MAIN-2017-30 | refinedweb | 138 | 51.89 |
by Tommi West
Text Layout Framework (TLF) text is the new default text type in Adobe Flash Professional CS5. Using TLF text, you can control text at the character level with advanced typography features (such as kerning and leading). You can also display global language characters, including bidirectional and vertical text, with new formatting capabilities and more precise control over the appearance of text in Flash than ever before. In this article, I examine TLF and show you how you can use it to display TLF text objects in a SWF file.
The Text Layout Framework is a library of ActionScript 3 classes. Projects that incorporate TLF text objects require Flash Player 10 (or later) to play. You can create and style TLF text using the Flash authoring environment as well as by using ActionScript code. When using ActionScript 3 to create TLF text objects, you need to use the TLFTextField class. The previous type of text in Flash (now called Classic text) uses the Flash Text Engine and the TextField class to display text in Flash Player 9 and earlier.
This sample project illustrates how to create a Flash file that is populated and styled by an external text document (an XML file). Once you've set this up, you never have to alter the FLA code, and you can directly control the SWF file's content and appearance by editing the external file in a text editor. This enables team members to directly affect the output of the project without updating the FLA file and republishing the SWF file in the Flash authoring environment.
The project displays a few lines of text; the last line contains a mailto link above the loaded image file in Figure 1.
Figure 1. The published SWF file displays the loaded TLF text and image file.
To follow along with this tutorial, download the sample files (ZIP, 290 KB). Uncompress the TLF-Text.zip archive and save the folder to your desktop.
The Sample Files folder contains the following files:
Open the folder and locate the file named tlf.fla. Double-click it to open it in Flash Professional CS5.
The Timeline in the project contains two layers. The top layer, named Actions, contains the ActionScript. The bottom layer, named Background, contains the background graphic. To review the ActionScript code, select Frame 1 of the Actions layer. Choose Window > Actions to open the Actions panel, and view the code in the Script window. At the top of the script, the last four import statements are used to import the classes needed to work with TLF text objects programmatically:
import flashx.textLayout.elements.TextFlow; import flashx.textLayout.container.ContainerController; import flashx.textLayout.conversion.TextConverter; import flashx.textLayout.events.StatusChangeEvent;
You must import these classes at the beginning of the code — before attempting to reference their methods, properties, and events — because the TLF text classes are not included in Flash Player 10; they must be imported at runtime.
Scroll down near the bottom and locate the function named initText. The code looks like this:
function initText():void { textFlow = TextConverter.importToFlow(externalFileContent,TextConverter.TEXT_LAYOUT_FORMAT); textFlow.addEventListener(StatusChangeEvent.INLINE_GRAPHIC_STATUS_CHANGE,statusChangeHandler,false,0,true); textFlow.flowComposer.addController(new ContainerController(textContainer, 350, 200)); textFlow.flowComposer.updateAllControllers(); }
The first line inside the initText function sets the default formatting for the text content:
textFlow = TextConverter.importToFlow(externalFileContent,TextConverter.TEXT_LAYOUT_FORMAT);
In this example, the text content is using the Text Layout Format formatting, rather than the other two options (plain text or HTML formatting).
The second line creates an event listener to update the text flow composer controller once the image has finished loading. It references the last function in the script:
textFlow.addEventListener(StatusChangeEvent.INLINE_GRAPHIC_STATUS_CHANGE,statusChangeHandler,false,0,true);
The third line adds the text to the flow composer controller:
textFlow.flowComposer.addController(new ContainerController(textContainer, 350, 200));
This method is similar to the DisplayObjectContainer.addChild() method, which displays the content on the Stage. The parameters at the end define the height and width of the loaded container area on the Stage.
The last line of the function causes the flow composer to update the text controller:
textFlow.flowComposer.updateAllControllers();
The last function is statusChangeHandler:
function statusChangeHandler(e:Event):void { textFlow.flowComposer.updateAllControllers(); }
This function causes the flow composer to update the text controller again once the image file has loaded.
When you are finished reviewing the code, close the tlf.fla file and quit Flash.
Next, let's preview the project to see how it will appear when it's published online.
Roll your cursor over the linked text to see the rollover effect. Click the link to launch your email client and note that the To field is populated by the mailto link in the SWF file in Figure 2.
Figure 2. When clicked, the email link invokes your email client and composes a new message.
Next, let's review the XML data and formatting properties that are loaded into the SWF file.
In the Sample Files folder, locate the file named tlf.xml. Open it in a text editor such as Adobe Dreamweaver. (You can also use Notepad or Text Edit to open the XML file, if desired.)
The top portion of the XML code defines the default formatting properties of the TextFlow, including the font size, padding, and alignment of the text content:
<TextFlow color="#000000" columnCount="1" fontSize="14" lineBreak="toFit" paddingTop="0" paddingBottom="0" paddingLeft="0" paddingRight="0" paragraphSpaceBefore="0" paragraphSpaceAfter="20" verticalAlign="top" xmlns="">
Note: The
xmlns attribute is the XML namespace declaration. This is required, so be sure not to edit or delete the link to adobe.com in the code above.
Inside the <div> tag below the TextFlow definition are several paragraphs. The first paragraph contains text that uses the default formatting, as applied by the TextFlow tag properties shown above:
<p> <span>This text is formatted by the TextFlow definition.</span> </p>
The second paragraph contains text that is formatted with styles in the surrounding
tag, causing the text to display with a white font color, a larger font size, and a bold font weight. These styles override the default formatting applied in the TextFlow tag:
<p color="#ffffff" fontSize="16" fontWeight="bold"> <span>This text is formatted with unique styles.</span> </p>
The third paragraph contains linked content that includes font color and text decoration styles to affect the normal, active, and hover states. These styles change the link formatting from the default blue underlined text.
<p> <a href="mailto:info@mysite.com"> <linkNormalFormat><TextLayoutFormat color="#990000" textDecoration="none"/></linkNormalFormat> <linkActiveFormat><TextLayoutFormat color="#990000" textDecoration="none"/></linkActiveFormat> <linkHoverFormat><TextLayoutFormat color="#cccccc" textDecoration="underline"/></linkHoverFormat> <span>This link has custom formatting and a rollover effect.</span> </a> </p>
As your cursor hovers over the link, the underline appears, and the linked text turns gray.
The last paragraph contains the path to the image file, named star.png:
<p textAlign="center"> <img source="star.png" height="61" width="61" /> </p>
The <p> tag includes the alignment parameter to align the image file to the center of the Stage.
Now that you've reviewed the sample code, try experimenting by updating the text and formatting properties in the XML file:
Update the XML text file again to change the link. This time you'll change the mailto link to a relative path that references the example HTML page in the sample files folder, like this:
<a href="example.html">
Note: If you update the link property with an absolute path to a live page on the Internet (such as), you'll encounter the error message in Figure 3.
Figure 3. The error states that a local SWF file is attempting to access a file on the Internet.
This error appears because of the Flash Player security that is applied to local SWF files attempting to access remote files. To learn more, read What is Flash Player security for local content? in the Adobe Flash Player help documentation.
To continue testing the link, you can place another HTML file (or an image file) in the local TLF-Text folder and update the link using a relative path (the element's filename). If you refresh the browser again and click the link, you'll see that the new local content loads in the browser as expected.
And if you like, you can save a different image file in the TLF-Text folder (alongside the tlf.swf, tlf.xml, and index.html files) and then update the image source in the last paragraph to reference the filename of the new image file. If you don't update the height and width attributes, the image file you supply will be resized to the dimensions specified in the XML file.
There's one more thing you may notice when developing projects using TLF text. When you publish the SWF file from Flash, you may see an SWZ file appear in the project folder. This file is generated by Flash and is considered a backup file for the TLF text classes, which you can optionally upload to the host server along with the HTML, Scripts folder, SWF file, XML file, and image files. To learn more about the SWZ file that Flash generates (and how to optionally merge it into the code of the SWF file), read the article entitled How to publish SWF files with TLF text on the FlashConf.com site.
As you can see from this simple example, you can use TLF text objects to separate text content, image content, and formatting from the SWF file itself. Without even opening the FLA file, you've updated the contents of the SWF file by editing the external XML file. Making real-time changes without touching the FLA file enables you to reduce your development time, especially when working on projects that are likely to change often.
This is only a small example of what you can achieve. To extend this project further, experiment with creating multiple SWF files with different backgrounds and graphics that load the same XML data. For example, you can create a series of SWF files for a site that look completely different but display the same text, images, and links.
Also check out some of the other capabilities of the Text Layout Framework:
Text Layout Framework offers exciting new opportunities to manipulate text and control its appearance in Flash projects. You can also use TLF text objects when developing applications in Adobe Flash Builder and Adobe AIR.
To learn more about working with TLF text, watch the video tutorial on Adobe TV entitled Creating text with the Text Layout Framework. Also read Introducing the new Adobe ActionScript 3 SDK for Facebook Platform in this issue of the Edge. Also be sure to visit the ActionScript Technology Center and read about TLF text in the Flash glossary to get more information about controlling TLF text with ActionS. | http://www.adobe.com/inspire-archive/november2010/articles/article5/index.html?trackingid=IBEPT | CC-MAIN-2015-14 | refinedweb | 1,813 | 53.61 |
My problem is that i have to create a program to hold a series of weights in containers (elements in a list). The max weight for each container is 20units. I have got this working as this:
containers = [0]
inp = input("Enter a weight, 20 or less, or enter 0 to quit: ")
def findContainers(containers,inp):
for i in range(0,len(containers)):
if inp + containers <= 20:
return i
elif inp + containers[len(containers)-1] > 20:
return -1
def Allocator(containers,inp):
x = 0
while inp != 0:
i = findContainers(containers,inp)
if i == -1:
containers = containers + [inp]
i = len(containers) - 1
else:
containers = containers + inp
x = x + 1
print "Item ", x , ", Weight ", inp ,": Container ", (i + 1)
inp = input("Enter a weight, 20 or less, or enter 0 to quit: ")
Allocator(containers,inp)
However, i need the weights to store themselves in the fullest container that can hold them.
E.g. If i had two contaainers:
1. 17
2. 18
and i wanted to add a weight of 2, whilst the first one could hold it, the weight will go on to the second container, and make it complete.
Thanks in advance for all help | https://www.daniweb.com/programming/software-development/threads/329325/help | CC-MAIN-2016-50 | refinedweb | 194 | 54.46 |
DataFrame, date_range(), slice() in Python Pandas library
Hey there everyone, Today will learn about DataFrame, date_range(), and slice() in Pandas. We all know, Python is a powerful language, that allows us to use a variety of functions and libraries. It becomes a lot easier to work with datasets and analyze them due to libraries like Pandas.
So, let’s get started.
DataFrame in Pandas
DataFrame is a two-dimensional data structure used to represent tabular data. It represents data consisting of rows and columns.
For creating a DataFrame, first, we need to import the Pandas library.
import pandas as pd
Now, we will have a look at different ways of creating DataFrame.
1. Using a ‘.csv’ file :
We can create a DataFrame by importing a ‘.csv’ file using read_csv() function, as shown in the code below:
#reading .csv file to make dataframe df = pd.read_csv('file_location') #displaying the dataframe df
2. Using an excel file :
DataFrame can also be created by importing an excel file, it is similar to using a ‘.csv’ file with just a change in the function name, read_excel()
#reading the excel file to create dataframe df = pd.read_excel('file_location') #display dataframe df
3. Using Dictionary:
We can also create our DataFrame using a dictionary where the key-value pairs of the dictionary will make the rows and columns for our DataFrame respectively.
#creating data using dictionary my_data = { 'date': ['2/10/18','3/11/18','4/12/18'], 'temperature': [31,32,33], 'windspeed': [7,8,9] } #creating dataframe df = pd.DataFrame(my_data) #displaying dtaframe df
OUTPUT:
4.Using a list of tuples :
Here, the list of tuples created would provide us with the values of rows in our DataFrame, and we have to mention the column values explicitly in the pd.DataFrame() as shown in the code below:
#creating data using tuple list my_data = [ ('1/10/18',30,6), ('2/11/18',31,7), ('3/12/18',32,7) ] #creating dataframe df = pd.DataFrame(data=my_data, columns= ['date','temperature','windspeed']) #displaying dataframe df
We can also use a list of dictionary in place of tuples.
OUTPUT:
date_range() in Pandas
The date_range function in Pandas gives a fixed frequency DatetimeIndex.
Syntax : pandas.date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=None, **kwargs).
Let’s try to understand the working of some of the arguments of date_range() with the help of code and their output.
start: Left bound for generating dates.
end: Right bound for generating dates.
freq: Frequency strings can have multiple values, ex:4H
pd.date_range(start ='12-1-2019', end ='12-2-2019', freq ='4H')
OUTPUT:
DatetimeIndex(['2019-12-01 00:00:00', '2019-12-01 04:00:00', '2019-12-01 08:00:00', '2019-12-01 12:00:00', '2019-12-01 16:00:00', '2019-12-01 20:00:00', '2019-12-02 00:00:00'], dtype='datetime64[ns]', freq='4H')
periods: Number of periods to generate.
pd.date_range(start ='12-1-2019', end = '12-10-2019' , periods = 4)
OUTPUT:
DatetimeIndex(['2019-12-01', '2019-12-04', '2019-12-07', '2019-12-10'], dtype='datetime64[ns]', freq=None)
tz: Name of the Time zone for returning localized DatetimeIndex
pd.date_range(start='12/1/2019', periods=4, tz='Asia/Hong_Kong')
OUTPUT:
DatetimeIndex(['2019-12-01 00:00:00+08:00', '2019-12-02 00:00:00+08:00', '2019-12-03 00:00:00+08:00', '2019-12-04 00:00:00+08:00'], dtype='datetime64[ns, Asia/Hong_Kong]', freq='D')
Also, read: Python program to Normalize a Pandas DataFrame Column
slice() in Pandas
str.slice() is used to slice a substring from a string present in the DataFrame. It has the following parameters:
start: Start position for slicing
end: End position for slicing
step: Number of characters to step
Note: “.str” must be added as a prefix before calling this function because it is a string function.
example 1:
we will try to slice the year part(“/18”) from ‘date’ present in the DataFrame ‘df’
start, stop, step = 0, -3, 1 # converting 'date' to string data type df["date"]= df["date"].astype(str) # slicing df["date"]= df["date"].str.slice(start, stop, step) df
OUTPUT:
So, we have successfully sliced the year part from the date.
example 2:
We have this DataFrame
Now, we will try to remove the decimal part from the ‘height’ present in the DataFrame ‘df’.
start, stop, step = 0, -2, 1 # converting 'height' to string data type df["height"]= df["height"].astype(str) # slicing df["height"]= df["height"].str.slice(start, stop, step) df
OUTPUT:
So, we have successfully removed the decimal part from ‘height’. | https://www.codespeedy.com/dataframe-date_range-slice-in-python-pandas-library/ | CC-MAIN-2020-45 | refinedweb | 774 | 64.51 |
Advanced, cross-platform logging for Qt
Introduction
This article presents the classic Qt logging technique and then describes a cross-platform, open source logging library called QsLog. The library is licensed under the BSD license and can be used both in commercial and Free software.
Classic logging - custom message handlers
A capable logging library is an essential debugging aid. It's not always possible to have a debugger attached, especially on mobile devices. Sometimes the bug can't be reproduced in the release version, sometimes GDB can't read the information from the device, and a log message can be useful even after the application has been shipped and is running on the customer's hardware.
Qt doesn’t include a logging library by default, however there’s a simple mechanism that allows one to use a message handler function to process the output from qDebug-like function calls. A message handler is nothing more than a free function that receives a string and a message type parameter. You write qDebug() << “Parameter out of range:” << 1.0 << “-” << 2.0 and the message handler will receive the array of char “Parameter out of range: 1.0 – 2.0″ and the message type, which is one of debug, warning, critical and fatal. Custom handlers can be installed with the qInstallMsgHandler function.
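In outline, the technique looks like this. The handler name and the formatting are of course illustrative, and the signature shown is the Qt 4-era one that qInstallMsgHandler expects:

```cpp
#include <QtGlobal>
#include <cstdio>
#include <cstdlib>

// Illustrative Qt 4-style handler: it receives the already-formatted
// message text and its type, and decides what to do with it.
void myMessageHandler(QtMsgType type, const char *msg)
{
    switch (type) {
    case QtDebugMsg:    std::fprintf(stderr, "Debug: %s\n", msg);    break;
    case QtWarningMsg:  std::fprintf(stderr, "Warning: %s\n", msg);  break;
    case QtCriticalMsg: std::fprintf(stderr, "Critical: %s\n", msg); break;
    case QtFatalMsg:    std::fprintf(stderr, "Fatal: %s\n", msg);
                        std::abort(); // fatal messages must not return
    }
}

int main(int argc, char *argv[])
{
    qInstallMsgHandler(myMessageHandler);
    qDebug("Routed through the custom handler");
    return 0;
}
```

Everything that qDebug, qWarning, qCritical and qFatal produce is funneled through the installed function, which is exactly where the subtleties discussed next come from.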
There are quite a few code samples and tutorials available that demonstrate the message handler technique, including on Nokia Developer. This is a very popular approach, but unfortunately it has some subtle issues and is not so convenient.
Developers who have used other C++ logging libraries will quickly notice that there's no easy way to toggle what messages are shown – there is no logging level support. All the logging calls you've made are executed on each run of the application, which discourages leaving logging messages in potentially interesting, but performance-sensitive spots. Furthermore, the four available message functions - debug, warning, critical and fatal - don't provide fine-grained control over the message type. All non-error messages will be grouped under debug, while most libraries offer multiple debug levels and a trace level.
Perhaps the biggest issue is that when you install a custom message handler, all messages from qDebug / qWarning and so on will be sent to your handler. This means that Qt’s warnings or asserts would also end up in the handler, and that the logger itself could trigger another log call, which in turn could result either in a deadlock or an endless loop.
Finally, a message handler by itself is not thread-safe, and few if any examples take this into consideration.
Advanced logging - QsLog
Message handlers are tricky to get right, and that is precisely why QsLog was created. It was designed as a cross-platform solution that’s easy to add to any application and can understand Qt’s types.
QsLog works on Windows, Linux and Symbian – including the simulator. Most of the library’s code is Qt code, with a few exceptions.
Adding it to a project is really easy; include the QsLog.pri file in your project file and you’re ready to go. Using it is no problem either, just call one of the logging macros like this: QLOG_INFO() << „Hello” << „logging world” << ‚!’; And the really nice thing is that you can log to a file of your choice, or to the Qt Creator debugger / output pane or both. You can even create your own destinations.
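For a qmake project that amounts to a single line; the relative path below is only an assumption about where the QsLog sources were unpacked:

```
# MyApp.pro -- adjust the path to wherever the QsLog sources live
include(../QsLog/QsLog.pri)
```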
Last but not least, you have access to six logging levels, and the active level can be set at runtime. You can write as many trace messages as you like in that tight loop, but they only evaluate to a compare plus a couple of simple function calls if the current logging level is higher than the level requested by the log call.
Step by step example
The header files that are of interest are QsLog and QsLogDest. The latter only has to be included when setting up the logger, QsLog.h is enough when just logging.
#include "QsLog.h"
#include "QsLogDest.h"
#include <QtCore/QCoreApplication>
#include <QDir>
#include <iostream>
The logger is a singleton object that can be accessed through its instance() member function. Calling instance() guarantees that the logger has been created. It is good practice to explicitly set the logging level, but if it is not set it defaults to InfoLevel.
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    // init the logging mechanism
    QsLogging::Logger& logger = QsLogging::Logger::instance();
    logger.setLoggingLevel(QsLogging::TraceLevel);
Destinations are targets for the log messages. We're creating and registering two destinations: the file destination outputs messages to the log.txt file in the application's directory, while the debug destination outputs messages to Qt Creator's output or debug pane.
const QString sLogPath(QDir(a.applicationDirPath()).filePath("log.txt"));
QsLogging::DestinationPtr fileDestination(
QsLogging::DestinationFactory::MakeFileDestination(sLogPath) );
QsLogging::DestinationPtr debugDestination(
QsLogging::DestinationFactory::MakeDebugOutputDestination() );
logger.addDestination(debugDestination.get());
logger.addDestination(fileDestination.get());
Each logging level has an associated logging macro. Macros are used to minimize the performance impact when logging is disabled.
QLOG_INFO() << "Program started";
QLOG_INFO() << "Built with Qt" << QT_VERSION_STR << "running on" << qVersion();
QLOG_TRACE() << "Here's a" << QString("trace") << "message";
QLOG_DEBUG() << "Here's a" << static_cast<int>(QsLogging::DebugLevel) << "message";
QLOG_WARN() << "Uh-oh!";
qDebug() << "This message won't be picked up by the logger";
QLOG_ERROR() << "An error has occurred";
qWarning() << "Neither will this one";
QLOG_FATAL() << "Fatal error!";
const int ret = 0;
std::cout << std::endl << "Press any key...";
std::cin.get();
QLOG_INFO() << "Program exited with return code" << ret;
return ret;
}
After running the example, the log output for multiple destinations should resemble the following snapshot:
Implementation details
QsLog tries to make maximum use of the power of Qt. For instance, it uses the QDebug object for advanced formatting. This is a public object that is also used internally by the qDebug function family. By taking advantage of it, QsLog allows you to log most Qt types: QVector, QHash, QPoint – whatever you want to log – will be nicely formatted into a string.
After all the variables are transformed into a string, they are sent to the destinations registered with the logger. A destination is the target of the log call; currently only two destinations are available – file and debugger. Destinations receive the messages in the order in which they were registered with the logger. Dispatch of each message to the destinations is guarded by a mutex, so you can safely log from multiple threads.
The complete source code for the logger can be downloaded at the QsLog bitbucket repository. Bugs and suggestions can also be submitted at the repository. | http://developer.nokia.com/community/wiki/index.php?title=Advanced,_cross-platform_logging_for_Qt&oldid=174878 | CC-MAIN-2014-10 | refinedweb | 1,110 | 53.61 |
Let’s start by thinking about a very simple example. I’ve recently switched to using Ruby as my language of choice, after a decade as a Perl hacker. Ruby does a lot of things more nicely than Perl, including having proper object syntax, simple everything-is-an-object semantics, a sweet module/mixin scheme and very easy-to-use closures, so I’ve mostly been very happy with the switch. In so far as Ruby is a better Perl, I don’t see myself ever writing a new program in Perl again, except where commercial considerations demand it.
But there are some things that I miss from Perl, and one of them is the ability to say (via use strict) that all variables must be declared before they’re used. When this pragma is in effect (i.e. pretty much always in serious Perl programs), you can’t accidentally refer to the wrong variable — for example, if you use $colour when you meant $color (because you’re a Brit working on a program written in America), the Perl compiler will pull you up short and say:
Global symbol "$colour" requires explicit package name at x.pl line 4.
Execution of x.pl aborted due to compilation errors.
In Ruby, there is no need to declare variables, and indeed no way to declare them even if you want to (i.e. nothing equivalent to Perl’s my $variableName). You just go ahead and use whatever variables you need, and they are yanked into existence, bowl-of-petunias-like, as needed.
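To make the failure mode concrete, here is a minimal sketch (the Paint class and its variables are invented for illustration, not taken from any real program): a misspelled instance variable is silently treated as a brand-new, nil-valued variable.

```ruby
class Paint
  def initialize(color)
    @color = color
  end

  def describe
    # Typo: @colour is a brand-new, never-assigned instance variable, so it
    # is nil. Ruby raises no error; the bug only shows up in the output.
    "a #{@colour} paint"
  end
end

puts Paint.new("red").describe   # => "a  paint" -- the colour quietly vanishes
```

Run the same program with `ruby -w` and you at least get a warning about the uninitialized instance variable; without it, the mistake is completely silent.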
Which means that Ruby programs suffer a lot from $color-vs-$colour bugs. Right?
Not so much, as it turns out.
Problems that are not really problems?
Ruby’s been my language of choice for about four months now — which I admit isn’t very long, but it’s long enough that statistics over that period are not completely meaningless. So far, I’m not aware that I’ve run into a $color-vs-$colour problem even once. Doesn’t mean it’s not happened, of course — maybe it did, the problem was obvious when I saw the program run wrongly, and I fixed it on autopilot without really registering it.
At any rate, if it’s happening at all, it’s not a big deal for me in practice. Which makes me think: is it ever? Could it be that variable-misspellings are a category of bug that we naturally tend to guard against rigorously but that we don’t actually need to worry about?
If this is true, then it’s come as a surprise to me. When I was switching to Ruby, I sent emails to my Ruby-maven friends whining about the lack of variable declaration and foreseeing all kinds of doom arising from it. I’m as surprised as anyone that it’s not happened.
In the mean time, I have saved myself the trouble of typing my $variable (or, since it’s Ruby and we don’t need line-noise characters in variable names, my variable) some hundreds or maybe even thousands of times. Now, typing that is not a particularly heavy burden. If it takes, say, three seconds to type each time, and I’ve omitted six hundred of them in the last few months, that’s 1800 seconds which is half an hour. Long enough to watch an episode of Fawlty Towers, but not long enough to dramatically change my programming lifestyle.
Particularly heavy burdens
But now think about the difference between typing
def qsort(a)
and
public static ArrayList<Integer> qsort(ArrayList<Integer> a)
In other words, explicitly stating your types. We do this for basically the same reason that we declare variables before use: to guard against our own mistakes, and to get them reported to us as quickly and clearly as possible[1]. Now that is, unquestionably, a Good Thing[2]. I am a big fan of what I call the FEFO principle: that programs should Fail Early, Fail Often. When something goes wrong, I want to hear about it straight away: what can be worse than trying to debug a program that ignores an error condition until it pops up later, when all the evidence has gone?
But let’s also be honest that all that public static void main stuff does impose a real burden. A much heavier one than the occasional sprinking of my $variable. What we have here (and this shouldn’t really surprise anyone) is a case of getting a real benefit in return for a real cost. Well! Turns out that you can’t get something for nothing! Who’d have thought?
(Now of course you have your dynamic-typing fundamentalists, who will argue that having type errors diagnosed up front is valueless; and likewise, you have your static-typing fundamentalists, who will argue that all the scaffolding in languages like Java imposes no cost. Since both are clearly talking nonsense, and worse, are impervious to rational argument, let’s just treat them as we would treat that other wacky pair of opposed fundamentalists, “Doctor” Kent Hovind and Richard Dawkins, and ignore them.)
So the question is not “is static or dynamic typing better?” — advocates of both sides will be able to give cogent reasons in support of their position; and opponents of each side will be able to give cogent reasons why the position is wrong. Stage Zero in understanding the problem is simply to recognise that there really are legitimate reasons to adopt either position. The question is more along these lines: given that static typing imposes an overhead on programming, under what circumstances does that overhead pay for itself?
When does the cognitive tax imposed by static typing pay for itself?
And as soon as the question’s phrased in those terms, it gets a lot easier to think clearly about. Because we realise that it all comes down the question of what it costs to have a bug. If I’m writing an e-commerce web-site that sells books, a mistake may mean that I lose an order worth £20. If I write software that runs life-support systems in hospitals, a mistake may mean that someone dies. It seems obvious to me (although I admit that using the word “obvious” is always asking for trouble) that in the former case, it’s better for me to spend my time getting more work done — adding features and whatnot — and risk losing the odd sale to a bug that static typing might have found for me. It also seems obvious that in the latter case I should use every tool available to ensure that the code is correct, and that the cognitive overhead of static typing is a small price to pay. Somewhere in between those extremes lies the crossover point. But where? Those of you who work on banking software might have opinions on this: bugs can potentially have dramatic financial consequences but are unlikely to directly endanger life and limb.
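One way to make that crossover concrete is a back-of-the-envelope expected-value check. Every number below is a made-up assumption, not a measurement; the point is only the shape of the comparison.

```ruby
# Illustrative assumptions only: static typing pays off when the expected
# cost of the type bugs it would have prevented exceeds the annotation
# overhead it imposes.
def static_typing_pays?(overhead_hours, bugs_prevented, cost_per_bug_hours)
  overhead_hours < bugs_prevented * cost_per_bug_hours
end

# Bookshop website: 3 bugs at ~2 hours of lost sales each, 40 hours of
# annotation overhead over the project.
puts static_typing_pays?(40, 3, 2)     # => false

# Life-support system: the same 3 bugs, but each one is catastrophic.
puts static_typing_pays?(40, 3, 500)   # => true
```

The interesting arguments are all about where the real numbers sit, but the structure of the decision is no more complicated than this.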
Safety is expensive
Now here’s the thing: we all know from experience that as organisations grow, they invariably start to accumulate more and more rules, procedures, forms to fill in, and so on. One day, when a company has 200 employees, someone climbs a stepladder to change a lightbulb, falls off the ladder, breaks his collar bone and has to take two weeks paid leave. That costs the company, say, £3000, and someone high up thinks “This must Never Be Allowed To Happen Again!” So before you know it, there’s a set of Stepladder Safety Procedures that have to be followed, and no-one is allowed to use a stepladder at all until they’ve taken the half-day Stepladder Safety Course. Eventually every employee has taken the course (at a cost of 100 person-days, or £30,000) and as a result no-one falls off a stepladder again for a while. (We’re charitably assuming that the course is successful and that the overall rate of stepladder accidents really does decrease as a result of everyone having taken the course.)
Now of course in objective financial terms, this doesn’t pay off until we reach the point where, without the course, ten people would have fallen off their stepladders on office time. But in the meantime the company keeps growing and the hundreds of new employees are also taking this expensive and not-usually-useful course.
For similar reasons, mature companies will often have many, many other procedures that everyone has to follow. Because someone once nicked £50 worth of stationery and It Must Never Be Allowed To Happen Again, all 200 employees spend fifteen minutes every week filing Stationery Requisition Forms (at a total cost of £2000 per week); and so it goes. We all know someone who’s worked in one of these places where you can’t so much as sneeze without filling in a form in triplicate first. Some of you are unfortunate enough to work for such companies.
In some circumstances — some, I say! — all that tedious mucking about with static types is like the Stepladder Safety Course that costs ten times more than it saves.
On the other hand, there are circumstances where safety is important. Deeply important. Where additional layers of procedures, forms, validations and suchlike pay for themselves over and over again.
So here’s my thinking: the older and larger an organisation gets, the more it tends to lean on formal procedures, form-filling and suchlike to buy safety — even if it’s at a disproportionately high price. And because that’s the existing culture of such organisations, they are disproportionately likely to favour static typing, which fits into their SOP of investing extra effort up front to reduce the likelihood of accidents further down the line — however unlikely those accidents and however minor their consequences. And I think this explains the near-universal tendency for big organisations to favour what I am going to suddenly start calling Stepladder-Safety Programming.
If I’m right, it explains a lot. It explains why SSP-friendly languages like Java and C++ are so widely used (it’s because the relatively few organisations that use them are the large ones), but also why there’s always been such a strong guerilla movement that prefers dynamically typed languages such as Perl, Python and Ruby — and indeed the various dialects of Lisp, if you want to go back that far. It also explains why there is such a strong bifurcation between these two camps: static typing is often favoured in environments where anything else is almost literally unthinkable whereas dynamic languages are often found in startups where Get It Working Quickly is the absolute sine qua non, and SSP wouldn’t even be on the radar. That fact that static vs. dynamic typing is embedded in cultures rather than just programming languages makes it much harder for people to cross the line in either direction: it feels like a betrayal, not just a technical decision.
And this of course is complete nonsense.
At bottom, static vs. dynamic is a technical decision, and should be made on technical grounds. Cultural predispositions in one direction or the other are, simply, an impediment to clear thinking.
And finally …
The great technical impediment that prevents us from choosing to adopt either static or dynamic typing on a project-by-project basis based on a mature and disinterested judgement of whether the additional cost is merited in light of the project’s safety issues
Here is my big gripe: the choice of static or dynamic typing is so often dictated by the programming language. If you use Java, you are condemned to name your types for the rest of your days; if you use Ruby, you are condemned never to be able to talk about types — not even when you really want to, as for example when you want to describe the signature of a callback function.
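To illustrate what is missing, here is roughly the best you can do in current Ruby: a hand-rolled runtime check on the callback. The with_comparator helper and the rules it enforces are my own invention, not a language feature.

```ruby
# Hypothetical helper: enforce "callable taking two arguments" by hand,
# because Ruby has no syntax for declaring a callback's signature.
def with_comparator(callback)
  unless callback.respond_to?(:call) && callback.arity == 2
    raise TypeError, "expected a callable taking two arguments"
  end
  [3, 1, 2].sort { |a, b| callback.call(a, b) }
end

puts with_comparator(->(a, b) { a <=> b }).inspect   # => [1, 2, 3]
with_comparator("not a callback") rescue puts "TypeError, as hoped"
```

Note that the check fires at call time rather than compile time, and that every function wanting such a guarantee has to repeat this boilerplate by hand.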
That’s just stupid, isn’t it?
Surely it should be the programmer’s choice rather than the language’s?
There are a few bows in this direction in languages known to me. One of them we’ve already mentioned: Perl’s optional use strict pragma imposes a few static-typing-like limitations on programs. Most notably, it includes use strict 'vars', which requires each variable to be declared before use. That’s pretty weak sauce, but at least a step in the right direction, which is more than Ruby offers. Although the use of use strict is close to ubiquitous in the Perl world, there are exceptions: for example, I notice that the unit-test scaffolding script generated by h2xs does not use strict — presumably on the assumption that test-scripts are easy enough to get right that it’s nice to be allowed to write in the laxer style.
But, really, much much more is needed. There’s no obvious technical reason why Ruby and similar languages shouldn’t be able to talk about the types of objects when the programmer wishes, and non-ambiguous syntax is not difficult to invent. Conversely, would it be possible to relax Java and similar languages so that they don’t have to witter on about types all the time? That might be a more difficult challenge — I don’t know enough about Java compilers or the JVM to comment intelligently — but it seems to be at least a goal worth aspiring to.
Does anyone know of any existing languages where static typing is optional?
In conclusion …
In choosing what typing strategy to use in writing a given system (and therefore, given the current dumb state of the world, in choosing what language to use), we should give some thought to the costs involved in static typing and the risks involved in not using it — both the likelihood of error (which in my experience is often much smaller than we’ve got used to assuming) and the cost if and when such errors do occur. My guess is that a lot of habitual Java/C#/C++ programmers are in the habit of doing Stepladder Safety Programming for essentially cultural rather than technical reasons, and that a large class of programs can be written more quickly and effectively using dynamic typing while admitting little additional possibility of error.
We do need static typing for life-support systems, avionics and Mars missions.
(Although come to think of it, the dynamically typed language par excellence Lisp was indeed used on Mars missions, to very good effect. This was pointed out in a comment by Vijay Kiran on an earlier article: “Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.” I’m not sure what to make of that observation.)
—
Notes
[1] Having your program talk about types can also enable optimisations that aren’t otherwise possible, but those benefits are not as great as people sometimes seem to imply; and since we all know that time spent in CPU is rarely the limiting factor in the kinds of disk- and network-bound programs that most of us work on most of the time, let’s ignore that for now.
Nice post, interesting discussion. Having experience with both Java and Ruby, I’d like to mention the one reason why I like static typing (which wasn’t mentioned in your post): the cool stuff IDEs can do because of it.
Static typing allows reliable autocompletion and near-instantateous automated refactorings.
Sure, I’ve seen autocompletion for Ruby too, but I’ve never seen it work particularly well.
And sure, you could use search&replace to rename a method, but it’s a hassle across multiple files and you might replace text in comments that you didn’t want replaced, etc.
Lisp has optional type declarations, but I think they are intended as more of an optimization than as a safety feature.
Erlang has optional type declarations as a safety feature via the dialyzer tool. I used it on one project and it helped me catch several small bugs, but it also required a significant investment of time and effort.
Finally, Haskell is statically typed but type inference means that you can generally treat it as if it were a dynamically typed language.
Nice post. Some points you omitted:
(a) IDE auto-complete/intellisense in general negates a lot of the verbosity arguments
(b) nearly any bit of commercial code will be read a lot more times than it’s written – notations with explicit or static types make comprehension an order of magnitude easier
(c) It’s extremely rare that minimizing the amount of time needed to write a bit of code is a real-world objective. In my career so far, most Perl scripts that were bashed together ended up needing a migration path to being rewritten as production code (normally in Java/C#)
(d) statically typed notations are easier to build tooling on top of – again see autocomplete/mtellisense/navigation/typed search/refactoring features
(e) optimisation of statically typed notations is often easier (or even just possible) because you know more at compile time.
In the other direction
(f) Typing everything is sometimes quite hard or complex (see generic type hell)
(g) Type inference systems don’t currently have good tooling for debugging large structural types.
Gilad Bracha was a prominent voice from the Java camp who advocated optional typing as a middle way. This has been picked up a little in C# through the dynamic keyword for run-time typing and the var keyword for local type inference.
My own opinion here is that throwing off the shackles of static typing and freestyling all the way is never the right choice for systems. In fact, the only time I’ve found it appropriate is for “augmented-manual” work – e.g. when you have a unique bit of analysis to do and want a /truly/ throwaway bit of code
In case anyone’s as confused as I was by Emma Watson standing in for Richard Dawkins:
Yes, yes, yes! I so agree that _both_, static and dynamic, are needed – and would wish for a project to be able to gradually transition to static typing as it grows and matures (especially for the lower-level parts, like libraries).
The one language which I use much (but do not always enjoy too much), PHP, can actually do this to a limited degree:
function do_something($my_var) {…}
… works. You can force the given variable to be of a certain class:
function do_something(MyClass $my_var) {…}
The interpreter will then throw an exception, which fits nicely with the “die early” principle.
To some extent, PHP does this. You don’t have to declare variables, but you can declare them global or static. You also don’t have to specify types in methods, but you can put in a type hint.
Now, for other reasons than preventing bugs, I have implemented static typing in my objects by adding “annotations” in the class definition, which are parsed via reflection at run time and used for things like validating input data, formatting for the database, etc.
But yeah, I’ve always been torn on the issue, having started (serious programming, anyway) with C and C++ but then switching to PHP, javascript etc.
It would be interesting to design a language that truly supported both methods by choice..
I think you are missing the main points of static typing:
1) It makes reading foreign code easier (thereby helping future maintenance and evolution)
2) It enables automatic refactoring and tool support
For example: “def qsort(a)”
Can I pass a list there? Or an array? Or a set? Or an enumeration? Or all of the above?
When you tackle a code base that you did not write, this kind of question arises all the time and static typing helps tremendously.
The more complex the application, the more beneficial static typing is. Small, low-complexity applications are better off with dynamically typed languages. Just MHO, after 30 years of professional software design/development experience in companies ranging from 1 employee (self) to 3000. Personally, I prefer strongly typed languages because it helps in thinking about the use of the data you are typing, at least for me.
This is one of the things I love about Objective-C; it lets you specify types and have the compiler check them until it makes no sense to do so. Though being dynamic is not much less verbose in this case.
I find the language yields well to my style of thinking.
I can see a big problem with a static / dynamic typing switch in a programming language:
It incurs overhead or leads to a de facto paradigm.
The Perl community’s use of “use strict” for non-trivial programs creates two kinds of overhead.
1) Mental overhead (“Is this a non trivial program?”)
2) Language overhead (“We have to support both dynamic and static typing, even though the default is the one over the other in most cases”)
In all honesty, if a project calls for a specific paradigm or has specific constraints, I rather pick a language that meets the project criteria, and take advantage of the optimizations already incorporated into the language itself.
What I’d rather like is being able to mix and match languages, rather than paradigms. For example, using Java where strong typing is essential for some reason, and using JRuby on top of Java when flexibility or terseness is important, switching from the one to the other where necessary.
A good example would be game engines: a core written in C/++ where the performance matters (graphics, networking, etc.), with a scripting language on top where the additional expressiveness matters (cutscene scripting, AI routines, scripts for events).
It also allows, nay, forces specialization: C coders have to be good at low level, high performance computing, while level designers can focus on creating a narrative in a natural(ish) DSL, and nobody has to work outside of their area of expertise. The risk is, of course, becoming a one trick pony, but that can be overcome.
TL;DR: Use the right tool for the job, and be multi-lingual.
Consider what happens when you want to access a member of an object: a Java IDE can, assuming your code isn’t too messy, look up the type of the object you’re accessing (or the return type of the function you just called) and pop up a list of members. An IDE for a dynamically typed language can’t do that, unless you’re in the habit of adding type annotations to everything (at which point you’re back to the Java way). This means you spend a small but potentially crucial amount of time scrolling back to remember whether a method was called “meters2feet”, “metersToFeet”, “feetFromMeters”, etc. (even in code you wrote five minutes ago, if my own experience is typical). Moreover, since this happens again and again (one second thirty times a day as opposed to one minute two or three times a week), task switching overhead comes into play, and this can have devastating effects.
In my opinion this moves the crossover point closer to the £20-book example than you might think.
Python 3 added function annotations which are a step in this direction of optional static typing.
Phil Toland noted:

Trying to ignore the static-vs-dynamic debate, but one nice thing that can make the “overhead” of static languages a lot nicer is type inference — it’s why I love OCaml so much. You don’t need to specify types, but when things get funky, you can add them when you need to. I’ve yet to delve into dynamic languages myself; definitely on my todo list :)

Yes, I was planning to say something about type inference in the post, before realising I don’t know enough about it to do that without making a fool of myself. The more I think about Haskell, the more sure I become that it’ll be the next language I learn after I’ve got the hang of Scheme.
I am truly horrified to learn (from Thomas Stegmaier) that of all the languages out there, the one that seems to approach my optional-static-typing dream is … PHP? I ask you, what is the world coming to?
Cedric wrote:
In Ruby, and other duck-typing languages, the answer is generally “whatever makes sense”. For people who are used to statically typing everything, it’s hard to appreciate just how well this works and how easy it makes things. It reduces the amount of code that needs to be written by a huge degree.
(Let me be clear here, again, that I am not saying that dynamic typing is therefore always the correct answer. But the answer to this particular question is very much a point in favour of, rather than against, dynamic typing.)
To me, static typing is a lot less about safety than it is about readability.
When I approach a function in a dynamically-typed code, I often have a hard time understand what it can operate on and what it returns – and consequently, a hard time understanding what it does. Statically typed code is far more readable, at least for me.
The only case in which I prefer dynamically typed languages is when they are also weakly typed: if types can be implicitly converted to other types anyway, associating a type with each variable is indeed unnecessary.
> In Ruby, and other duck-typing languages, the answer is generally “whatever makes sense”. For people who are used to statically typing everything, it’s hard to appreciate just how well this work and how easy it makes things. It reduced the amount of code that needs to be written by a huge degree.
You are wrong. Go read about the concept of structural type systems, please.
Not every statically typed language is as moronic as Java, and Java shouldn’t be a measuring stick for static typing. Because it would be a very, very bad one as Java is a bad language with an awful compiler and an utterly terrible type system.
The whole post is, in fact, utterly terrible for that precise reason: the yardstick used for static typing is Java, and Java is garbage. Therefore the conclusion is along the lines of “static typing ok for some cases, but in general it’s no good”. What a surprise.
I think when you talk about static typing, you actually mean “type inference”. For example, ML does type inference. That means you don’t have to declare the type of a variable before using it. However, you can optionally put typing declarations for clarity. Sometimes the ML compiler yells at you if it cannot infer the type at compile time. That’s when you put in some types to clear up any ambiguity. So I think the gist of your article is every language should support type inference to some degree.
Common Lisp uses strong dynamic typing. This means that the type is dynamic but still strictly checked at run time. You can also “optionally” declare the types for additional safety and optimization opportunities.
IMO strong dynamic typing gives you the best of both the static and dynamic worlds.
Mike: you missed the comment about C# having exactly this. Basically they have a new magic type called “dynamic” that allows you to call any method on it, with the risk of having it fail at runtime:
dynamic foo = …;
foo.bar(); foo.baz(); // whatever
I’m not sure if they support a method missing/no such method mechanism like Smalltalk and Ruby.
A professor of mine used to say about static typing mavens: “So what are they trying to prove in the end? That the program is correct? Good luck.”
Having said that, there are a lot of advantages to static types.
Are you aware of the Go approach to this? The language has interfaces like Java, and every object that implements the protocol specified by an interface can be “auto cast” into that interface. Sort of like static duck typing.
@Cedric: your points are unfortunately easy to counter if *overly* verbose typing is used. Reading a hugely verbose grammar takes just as long as parsing through code to infer what it’s doing; it’s really the machine that has the easier time, via static analysis.

Moreover, your qsort example is ridiculous simply for the fact that you are assuming there will be no tests or other documentation. Usually, tackling a code base is difficult because of design issues, not missing type information. If you cannot understand what something is for, typing will not help you much; you’ll still have to see how it’s being used regardless of all the type-related stuff that’s thrown in. It’s a small part of the overall problem.

Bottom line: static typing is useful for machines, and for experienced programmers working in IDEs that offset the typing necessary to decorate everything.

A hybrid approach can get the best of both worlds, but also the worst of both worlds. Active study is necessary to determine which of the two cases will take place; otherwise, you’ll likely just choose something under the assumption that it will suit your developers better than the alternatives.
You should take a look at the work being done on Groovy++. It’s an extension to Groovy that allows you to declare that certain classes or methods require type declarations. Besides the documentation advantage, it provides far better performance than straight Groovy (which allows type info to be specified, but does dynamic lookups, anyway, IIRC.)
I think the reason static typing is so popular is that it is a natural extension of C’s types – In order to be able to use the same operator syntax (+,-,*,/) for both floats and ints, we need to be able to distinguish between their use cases. Since we don’t want to incur a run-time overhead, the compiler uses a variable’s type declaration to tell how to translate operators into low-level instructions and then typing information need not be in the compiled program.
This is the correct solution for sheer performance, but it falls short for the object-oriented paradigm. On the other end of the spectrum, we have Smalltalk-style syntax, where overloading is very flexible. The consequence of this approach is this: the basic elements of meaning are method selectors. Which means that in order to understand what code means, you need to understand what a given method selector is meant to represent, and without formal interface documentation, polymorphic code can be hard to understand.
Huh; looks like WordPress ate the plus-plus at the end of Groovy plus-plus (writing it out so it’s not swallowed again).
[Mike: I fixed that for you.]
In Ruby they often talk about duck typing, if it talks like a duck and quacks like a duck, it most likely is a duck.
So if you specify in the documentation for your method what kind of an interface the passed in object needs to adhere to, any object can be passed in. And following along with this definitely gets easier with time. :)
So in Ruby talking about an object’s type isn’t really a good idea; Rubyists rather ask an object whether it can do what they want.
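A minimal sketch of the duck-typing idea just described (the method and variable names are invented for illustration): the method below never asks what class its argument is, only whether it behaves like an output sink.

```ruby
# Duck typing: log_to never checks the class of `sink`;
# anything that responds to << will do.
def log_to(sink, message)
  sink << "LOG: #{message}\n"
end

buffer = ""                 # a String quacks like an output stream here
log_to(buffer, "hello")
lines = []                  # so does an Array
log_to(lines, "hello")
log_to($stdout, "hello")    # and a real IO, of course
```

Any object that responds to `<<` works, which is exactly the "ask what it can do, not what it is" attitude described above.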
Check out Scala. It’s a language that runs on the JVM.
What I write here might not be 100% correct, since my experience with Scala is limited to having read a few interesting articles about it. But my understanding is that Scala lets the programmer choose when to type things statically and when to type things dynamically. The Scala compiler does type inference and implicit type conversions. provides some examples.
I’m a big fan of how you seem to be able to take what I am feeling and express it in an elegant post.
I have no idea why Python doesn’t have syntax to force a type, or to automatically throw an error when you pass the wrong type to a function. I’m getting sick of having to write type checking code so frequently.
It’s interesting that a couple of commentators raise IDE issues with dynamic languages.
It strikes me that a lot of the problem there is that the major IDEs tend to be based around compiled languages. It’s relatively easy to add in a new compiler backend and a new language syntax file into this well understood structure.
In contrast, look at Smalltalk, where many implementations are based around the idea of editing objects within the Smalltalk runtime itself – there is less separation between the IDE and the running program – the IDE can directly introspect objects themselves, rather than applying rules to text files.
The downside, of course, is that the IDE is locked into a specific runtime.
you probably know that already (and it’s beside the point of your post): if you run your program with “-w” Ruby will print a warning if you attempt to use a variable that has not been assigned to before.
I think the main problem with languages like Java and C# is not so much strong-typing, as type redundancy. This makes it less readable, not more, than dynamic languages, and there’s little you can do about it.
Compare this javascript
[script]
validateResult(result, [
{
condition: (this.returnDate != null && this.returnDate < this.departureDate),
id: 'returnDate-button',
failMsg: "Your return date occurs before your departure date"
},
{
condition: (this.originSelected.code == null),
id: 'originSelect-button',
failMsg: "You have not selected a valid origin city"
}]);
[/script]
with the equivalent in Java
[script]
validateResult(result, new Validation[]{
new Validation((this.returnDate != null && this.returnDate < this.departureDate),
"returnDate-button", "Your return date occurs before your departure date"),
new Validation((this.originSelected.code == null),
"originSelect-button", "You have not selected a valid origin city")
});
[/script]
The redundancies in this case were the unnecessary repetition of the Validation type. Unfortunately, no matter what you do to clean the code, it'll remain less readable because of these types of redundancies.
I really like the idea of type inference, because it gets rid of a lot of this redundancy. For example,
[script]
List names = new ArrayList();
[/script]
becomes
[script]
var names = new ArrayList();
[/script]
With type inference, this could have been shortened to:
[script]
validateResult(result, new Validation[]{
{(this.returnDate != null && this.returnDate < this.departureDate),
"returnDate-button", "Your return date occurs before your departure date"},
{(this.originSelected.code == null),
"originSelect-button", "You have not selected a valid origin city"}
});
[/script]
This still isn't as readable as the Javascript above (where the dynamic typing allows you to create the adhoc structs), but it's still cleaner than the Java. If you wrote this in JavaFX (which borrows heavily from Javascript & Java), it's nearly as readable (and quick to write) as the original Javascript:
[script]
validateResult(result, [
Validation{
condition: (this.returnDate != null && this.returnDate < this.departureDate),
id: 'returnDate-button',
failMsg: "Your return date occurs before your departure date"
},
Validation{
condition: (this.originSelected.code == null),
id: 'originSelect-button',
failMsg: "You have not selected a valid origin city"
}]);
[/script]
Marrying the two seems to be the way to go because its much faster and concise to write, is arguably more traceable than dynamic languages alone, and you've still got the strong typing safety net.
Irony of ironies, I most often come up against accesses of unused variables and type errors in error reporting code that is infrequently executed and poorly maintained.
The stairway metaphor was funny, but very wrong. It makes the assumption that static typing slows you down, and this is simply incorrect. As other commenters pointed out, the few extra keystrokes are irrelevant. There is some extra code “noise” but this is highly debatable – on the flipside, code becomes easier to read, you don’t have to guess what things mean from the context. The only advantage of dynamic-typed languages is their flexibility, remarkably for advanced metaprogramming (MOPs) but this is a completely different discussion.
Your point is valid as long as we only look at the type systems of Ruby and Java, but the world is _much bigger_ than that.
For example, not only do we have languages (like Go and Perl6 and others) that more or less add typing to help the programmer, we also have languages that have advanced type inferencing, starting with Hindley-Milner type inference (SML, OCaml, Clean, Mercury, and Haskell), which add more advanced typing going toward System-F in some cases.
And then you mention enterprise culture versus small shop culture, but there are many small shops which go _faster_, because they are using advanced type systems in Haskell, OCaml, F#, Clean and what not. Testing with a toolkit like QuickCheck (Haskell, Erlang) is like unit and regression testing on steroids.
And then as far as IDEs go, Smalltalk still has the upper hand in tools (refactoring and otherwise), because the IDE is integrated with the language, and this advantage transfers over to similar systems (Lisp, scripting languages, etc.), so even there static types are not the panacea they are made out to be. Javascript has some issues that make static analysis hard, but they haven’t much to do with the lack of static types.
And so forth. There is simply too much to mention.
It is an interesting discussion, even though it is often watered out by ignorance; there is much more to be said than just comparing Ruby and Java.
It looks like that there is much deeper gap elsewhere, which has more to do with how well one can deal with abstract concepts, no matter what type system is used.
Type decls are nice when reading code, and type-checking really help in langs like C where the wrong type can easily cause a crash.
But in any language, you need testing because type decls don’t know when you meant 1 or 2 or other behavior. I do a lot of C/C++/C#/Java, but when I did TDD in Python I found errors quickly and didn’t miss type decls.
Ironically, I found type-inference in C# nice for writing code, but not nice for reading code. Particularly when complete expressions were used to init a “var” decl. It also got in the way of manual refactoring, since type decls are required to declare a function, but what’s the type of that var I’m going to pass into it?
There is a joker in the pack if you try to apply cost/benefit analysis to decide how risk-averse you should be: security. An ‘unimportant’ piece of code could compromise a much bigger system that really matters. This is, of course, one of those broad and ‘wicked’ problems for software development, of which the type-safety issue is merely one small case.
You suggest that static typing, and other practices intended to reduce risk, are more prevalent in older companies. I sometimes wonder how many promising software-based start-ups fail to make it in the long-term, because they lose control of their code base, and especially of the semantics of that code. As some other commentators have noted, explicit typing can help in understanding a large system’s code.
On the other hand, static typing can give only weak assurances of correctness, and only begin to capture a system’s full semantics, which is why it is possible to argue its merits either way. The program-proving techniques you discussed in earlier posts have both higher costs and greater potential benefits, and industry’s choice here has been overwhelmingly single-sided, though one could argue that this decision has been made largely in ignorance (an ignorance not only of the issues but also of there being a choice at all.)
Finally, I am skeptical of the appealing but simplistic notion that, under duck typing, software does the appropriate thing. It may well proceed where a different approach would raise an error, and that may often be appropriate, but ultimately, appropriateness is a semantic issue that is beyond the purview of a language’s execution semantics to determine. Duck typing can certainly lead to elegant solutions, and elegance can sometimes help in being correct, but I wonder how often subtle errors go unnoticed because duck typing has papered over the cracks. Elegance is neither a guarantor nor a substitute for correctness.
Duck Typing is a pretty dangerous practice overall, here is an article that explains why:
As for the Smalltalk IDE: its refactoring capabilities were certainly ahead of its time back then, but they are extremely primitive compared to what Eclipse/IDEA do today. This shouldn’t come as a surprise: Smalltalk is a dynamically typed language, so a lot of automatic refactorings are simply *impossible* to perform with assistance from the developer.
More details here:
Of course, I meant “without assistance from the developer” above.
I actually like dynamic typing, I think it is absolutely great for throwing code together and getting it to work.
The place that static typing shines is when dealing with big ugly interfaces. For example, things like Webkit, or Quartz, or OS\360 TCAM. When groveling over the library specification and trying to figure out how to get the library to do what I want, there is nothing like knowing from the declaration that this call or method invocation takes a gwamp, a wamp and a bamp and returns a samp and a damp. Now, if I can just take the gramp and pamp I have and get the relevant gwamp, I’d be all set. I seem to always be doing this. Maybe I should write a program to help me solve this kind of puzzle like a crossword or anagram helper.
This isn’t really about the language requiring type declarations, but rather having the API documentation specifying what each piece wants and what it will give. (Having this kind of API specification is important even when the coding language doesn’t care about types.)
Pingback: links for 2010-06-06 | GFMorris.com
“On the other hand, static typing can give only weak assurances of correctness, and only begin to capture a system’s full semantics, which is why it is possible to argue its merits either way.”
The whole point of object-oriented programming was to allow the programmer to extend a language’s type system so that it can capture a system’s semantics.
Jeff Dege writes:
That seems like a dangerous delusion to me. The idea that a type-system can capture semantics sounds more like the aspirational goal of a wildly optimistic mid-1970s research project than it does like any actual program I or anyone else has ever seen.
Dear Sirs, an interesting article as always. I have a couple of minor quibbles. Firstly, Dawkins isn’t a fundamentalist, for any useful definition of the word fundamentalist. That Dawkins, always blowing stuff up. The rotter. Secondly this entire topic goes out the window when cpu cycles are on the line, which you dismiss a little quickly in your article. For instance if you need speed (games, simulators, modellers, renderers, art packages, compilers, ie all the good stuff) then this argument is dead before it begins. Static typing and a manly language like C is the way to go. Plus if you turn up to a job which makes this kind of software dragging your hyper lisp, your Spangle 5, Turncoat !pling, Quartz, Futtock DER or any other kind of obscure (ie useless) language you will be gently beaten and forced to relearn a C derivative.
Incidentally, great comments as usual. I’d not heard the phrase duck typing in ages (probably looked at the idea briefly in the past, laughed, and moved on). Shame something that prone to causing errors hasn’t died along with every other rotten idea thought up in a drug fuelled haze during the 1970s. Man. Mike Taylor, did you mean to say that Jeff Diege was dangerously delusional (perhaps) or that oops was? (definitely)
“That seems like a dangerous delusion to me.”
I don’t think I disagree with you, but many of the “rah-rah, OO is going to save the world” articles I read, during the initial popularization of OO, expressed themselves in just such terms.
Jeff Dege clarifies:
OK, Jeff, I get you now — the key word in your original statement was “was”, as in “The whole point of object-oriented programming was to allow the programmer to …”, without claiming that OOP actually did achieve this. Sorry for misunderstanding you the first time around.
Hi, Jubber, thanks for the kind words. I am pleasantly surprised that we got up to 43 comments before the first Dawkins defence, and that even then it was such a polite one :-) No, he doesn’t blow things up; neither does Kent Hovind, though, and I don’t think many people would quibble with the description of him as the f-word.
Efficiency, on the other hand, is very interesting. Let’s admit right up front that statically typed languages tend to be more efficient than dynamic, perhaps realistically by a factor of 2 or so, but let’s push the boat right out and allow a whole order of magnitude. My question would then be: why does this still matter? Moore’s law has been pushing along quite nicely for many decades now, and the result is that even if a game that I write in a dynamic language now is ten times slower than the same game would be in a static language, it’ll nevertheless be a hundred times faster than Quake was when I first played it back in 1995. (Quick maths: 2010-1995 = 15 years; Moore’s law doubles speed every 18 months = 1.5 years, so 15 years gives 15/1.5 = 10 doublings; 2^10 = 1024, call it 1000; throw away a factor of ten for using a dynamic language and I should still be 1000/10 = 100 times faster than Quake.) But we all know this is not the case. Why not? I truly don’t know.
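For what it’s worth, the back-of-envelope sum in that paragraph written out as code (the figures are the ones given above, not new claims):

```ruby
# Mike's Quake arithmetic, spelled out.
years     = 2010 - 1995      # 15 years since Quake
doublings = years / 1.5      # Moore's law: one doubling every 18 months
speedup   = 2 ** doublings   # 1024, call it 1000
penalty   = 10               # generously assume dynamic is 10x slower
speedup / penalty            # ~100x faster than 1995's Quake, in theory
```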
And if you want to feel really bad about this, consider the appalling performance of modern Netscape releases on 3 GHz machines compared with how clicky Netscape 3 used to feel on 33 MHz machines (i.e. once more literally a factor of a hundred). Where is all my speed going? I’m not talking about when a browser’s limited by network bandwidth, which has grown more slowly, but simple CPU-only operations like increasing/reducing font-size or even tab-switching.
Uh oh … I feel a new blog entry coming on :-)
“I don’t think I disagree with you, but many of the “rah-rah, OO is going to save the world” articles I read, during the initial popularization of OO, expressed themselves in just such terms.”
I suppose you refer to the late-80’s and early-90’s, with technology like C++ and Eiffel. Remember that these were already a second-generation OO, and one that already deviated a lot from Smalltalk (that many people consider the gold standard of OO up to this day, and the inventor of OO although there were some precursors like Simula).
In my field (games) software performance has not scaled with hardware performance – the reason for this is obvious. On a BBC Micro (C64 equivalent, American chums) two people could wring every last byte of information and trickery from the 32K machine in a year of coding. By the Playstation era Squaresoft needed 150 people to do 90 percent of the same thing with the PS1. And it took five years of the PS1’s lifecycle before people began to hit its software performance limits. Diminishing returns, larger coding teams, more complex software backbones, far more complex hardware, C++ and garbage collection take care of the rest. You mention Moore’s Law – well that’s a nice gentle curve over time. Come in close and consoles have lifespans, so it’s really big digital jumps every few years. If an interpreted or dynamic language loses me 10 percent or more of what I can draw in the 17 milliseconds I have to play with, that’s 10 percent less polys, less sprites, less everything. An entire sprite layer. My whore won’t look as pretty as the girls the pimp down the road is pushing, even if she’s just as much fun to play with. :-)
Jubber, I can see how the matters you mention could explain a factor of 5 or 10 or even 20. But 100?
“A much heavier one that”
You didn’t really mean that, did you?
(And if, as I hope, you didn’t, then you didn’t mean it on at least one one blog post.)
[Mike: Fixed now — thanks for spotting it!]
Well perhaps you would be 100 times faster than Quake – but what does that really mean? Quake used very low-resolution textures at very low screen resolutions. Plus very low poly models. 16 polys per sub model with 16 by 16 textures on a 320 by 256 screen. (486 era? I may be out by some factor on the version you played) A model in crysis might have thousands of polys, 512 by 512 textures, 1920 by 1080 res with four times the colour depth (101 times the workload just for that resolution jump) – and then you start adding in multiple screen redraws for feedback, lighting, shadows, HDR, then 3D sound, massive worlds, more advanced AI, all sitting on a far larger OS and DX backbone… In other words, 100 times faster running code isn’t nearly fast enough. If I have understood the point you are making.
OK, Jubber, you have a point when it comes to increasingly realistic Quake-like graphics. It is indeed easy to see how the processing needs of that kind of drawing has kept pace with Moore (although isn’t that stuff mostly done in the graphics card now? I’m still not 100% sure what the actual CPU is doing all that time.)
I guess the example of Web browsers exercises me more. Partly that’s because browser responsiveness seems to have gone backwards in the last decade and a half; but maybe more importantly, it’s because I know more or less what it takes to build a browser, and could imagine doing so myself, whereas I know that I couldn’t make an advanced Quake-like game.
About two hours ago, I was head-deep in an enhancement to a PHP site that I’ve somehow inherited.
I had occasion to look into the authenticate() function, which is called from the login page with username and password.
It begins:
function authenticate($loginname, $pass)
{
$username = sqlclean($username);
$pass = text2pass($pass);
[…]
}
The sqlclean() function cleans up data entry fields, to prevent sql injection, etc. text2pass() hashes the password.
If you notice, the variable on which sqlclean() is being called is not the variable that is being passed into the function. In fact, the $loginname variable is not being guarded against sql injection at all.
There is simply no way that a language with static checking would allow this error to get by. And this isn’t the sort of error that is likely to be caught by anything other than the most exhaustive automated testing.
I’ve worked in languages with dynamic typing, and in languages with static typing. I can’t say I’ve found dynamic typing significantly faster, except in a few specific problem areas, where I’ve found the sort of programming errors that static typing catches to be all too common.
@Jeff Dege: search for E_NOTICE
I also believe that explicit static interfaces help in communication. When you have teams which are distributed in space and or time (especially time) the static typing can really help.
Mike, you have so much to learn for someone your age. I’m proud of you for making the effort.
Starting with trivia: the next version of C++, coming directly to a compiler near you, supports type inference as in ML and Haskell. You just say, e.g., “auto i = expr;” and the type of variable i is the type of the expression expr. It has similar syntax for functions and for a few other situations. Microsoft swiped the syntax and pushed it out, in C#, ahead of time.
Next, the true value of annotating type information isn’t safety and it isn’t documentation. It allows you to have the compiler automatically do the right thing based on the type. Most trivially, if you call sort() on a list, you get a list sort, and on an array you get an array sort, both correct and optimized. If your language won’t use the type information you give it for that, it’s a cruel joke. (E.g. C# or Java.)
That’s not to say that improved safety and documentation are minor benefits. That’s also not to say that all those benefits are always worth the effort. In small, easily tested programs — the correct domain of Ruby — it can be a tough sell. The danger is that small programs grow to large programs.
Point up for the static-code-is-easier-to-read argument: I’ve been learning a PHP codebase for about a month now, and it has been by far the biggest waste of time to try to find where random variables like $usr_data are assigned so I can hopefully figure out what it is, and therefore what that function does. Utterly frustrating.
Nathan: I don’t want to be as impolite as you are, but if you are pontificating like that, you should have your facts a bit better together.
Taking your example of the superior performance choices a compiler can make based on type annotations: in a dynamic language, the operation of sorting a list will be just as efficient for different data types as in a static language, with the exception that the decision of which algorithm to pick will be made at runtime, not at compile time. This might result in a slight overhead, but given the complexity of a sort operation, it will be insignificant. So your argument is misinformed.
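A sketch of the runtime decision Martin describes (the method name is invented for illustration): the "which algorithm?" choice is made by inspecting the receiver when the call happens, rather than at compile time.

```ruby
# Pick a sorting strategy at runtime, based on the actual class
# of the collection, instead of at compile time.
def smart_sort(collection)
  case collection
  when Range then collection.to_a   # a Range is already in order
  when Array then collection.sort   # native array sort
  else collection.to_a.sort         # materialise anything else, then sort
  end
end

smart_sort([3, 1, 2])   # => [1, 2, 3]
smart_sort(1..3)        # => [1, 2, 3]
```

The cost of the extra case dispatch is a few instructions per call, which is what makes it negligible next to the cost of the sort itself.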
The point of dynamic typing is that there are always a lot of interesting programs that cannot be expressed within the bounds of a static type system. Languages like C++ or Java get around that to some degree by having casts, but even then you end up with things that are not easily expressible using their respective type system.
Whether that class of programs is large enough and beneficial enough to your problem domain is the interesting question when choosing a dynamically or statically typed language.
Nathan Myers offers:
Well, Nathan, I am impressed that you’re able to be so very patronising in so few words. I hope that talent comes in useful some day.
This is of course absolutely nothing to do with static vs. dynamic typing — it’s a matter of strong vs. weak typing, which is quite orthogonal. Having “the compiler automatically do the right thing based on the type” is of course the very essence of duck-typing, which is about as dynamic as you can get.
Gosh, it would be splendid to have somebody like Nathan Myers working on a team – somebody the younger coders could turn to for advice and gentle assistance can be invaluable. :-)
That aside, I’m interested in the point you make about browsers seeming to get slower over time. Now it’s not my field, I’ve only made a few websites, but some of the same things that apply to games might apply here. Firstly try running an old browser on a pentium 3 – you might find that memory doesn’t match reality. But beyond that, old browsers loaded early-internet text heavy html with no style sheets, php, security stuff or flash animations. GIFs were optimised for post-text loading back then – nobody bothers now. Adverts barely existed back then. Perhaps that accounts for some of it. CSS is significant because I’ve seen loaded pages refresh just after the initial load and juggle elements around to fit the css. At least I assume this is what is happening. This causes a perception of slowness. Chrome caches pages as bitmaps to hdd – they actually redraw more slowly than a full refresh. Very stupid and very slow until we all have ssds. Perhaps a games programmer and web programmer should get together to create the ultimate responsive web browser. How’s your assembler? :D
Seems like Haskell has a Dynamic Typing module for those situations where you really need it, which you don’t.
@Nathan: “Starting with trivia: the next version of C++, coming directly to a compiler near you, supports type inference as in ML and Haskell. ” – just to be picky, this statement is incorrect or at least very imprecise, because C++’s type inference won’t be anything _even close_ to what is supported by Haskell and ML (i.e. a full-blown Hindley-Milner typesystem). That kind of type inference goes much further than simple scenarios like “auto i = expr”, which are now becoming commonplace in popular languages with more conventional static typesystems like C#, C++, JavaFX Script, etc.
It might be worth noting that even Java 6 has some very limited type inference. For examples, look at Collections.emptyMap() or the Google Collections Library. You can write something like:
Map<String, String> map = Collections.emptyMap();
Where the compiler infers the generic type arguments to the emptyMap() function.
@Liam: I’ve never programmed either ML or Haskell, or any other language with strong type inference. However I’m certain that there are situations which cannot be expressed in any type system. For example more or less anything that involves methodMissing/noSuchMethod. Again, whether that is a valid use case in your domain is up to you.
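A minimal sketch of the methodMissing/noSuchMethod idea mentioned above (the class name is invented): an object that accepts any message at all, which is the kind of thing most static type systems cannot easily express.

```ruby
# A recorder that accepts any message via method_missing
# and simply remembers what was called.
class CallRecorder
  attr_reader :calls

  def initialize
    @calls = []
  end

  def method_missing(name, *args)
    @calls << [name, args]   # remember the selector and its arguments
    self                     # return self so calls can be chained
  end

  def respond_to_missing?(_name, _include_private = false)
    true                     # claim to respond to everything
  end
end

r = CallRecorder.new
r.foo(1).bar("x")
r.calls   # => [[:foo, [1]], [:bar, ["x"]]]
```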
There is this famous quote “every reasonably complex system ends up with its own half-assed, incompatible, broken LISP implementation”. Maybe every reasonably complex application ends up with its own cumbersome way of implementing dynamically typed parts…
@Martin Probst: .”
This is incorrect, if I read you correctly. The Collections.sort() methods don’t do any kind of runtime dispatch. The only “dispatch” that happens is compile-time, purely static – only overloading, not polymorphism. And because Collections only contains two sort() methods (only for List), I think you meant Arrays.sort()… check its source code, it’s just quicksort & mergesort algorithms that are replicated many times for all supported array types. No polymorphism at all.
@Osvaldo: my memory tricked me, what I was referring to is Collections.binarySearch(Collection, T), which does a dispatch based on the collection type (two algorithms, one special cased for RandomAccess lists).
However I think you’ll agree my argument holds. There is nothing forcing a dynamic language to choose an inferior algorithm, the only thing potentially slowing it down is the number of choices (= dispatches) you have to make at runtime.
Martin, Mike: Yes, in runtime-typed languages you can dispatch on the type, but only at runtime. As a consequence, you can’t use type information to control code generation. It may not be clear why this matters so much. Indeed, it was not clear to the people designing C++ until Erwin Unruh walked into a meeting holding up a program that generated a list of prime numbers in the error messages emitted by the compiler.
You might think that was just a cheap stunt, but it was a very short step from there to C++ programs being routinely faster and more readable than the equivalent Fortran program, where the equivalent C program is almost always slower and longer. (And, yes, the Matlab program is also slower.)
You can intone “slight overheads” all you want, but I’m talking about multiple orders of magnitude. When programs take weeks to run instead of hours, or a hundred servers to carry the load instead of one, slowness gets hard to distinguish from immorality.
Nathan Myers says:
Not if Martin is right (and I think he is) that the overhead of dynamic dispatch is some small constant number of machine instructions per method call. That is indeed a “slight overhead” — the difference between O(n) and O(n+1), if you like. How can you get that to account for multiple orders of magnitude?
@Mike: “Not if Martin is right (and I think he is) that the overhead of dynamic dispatch is some small constant number of machine instructions per method call. ” – wrong. This “wisdom” comes from the old times – up to early 90’s – when compilers didn’t do any advanced optimization (compared to the current state of the art).
Modern compilers can aggressively inline method calls, even for those that require polymorphic dispatch. This is remarkably true and powerful for dynamic JIT compilers, such as Java’s HotSpot. Inlining, in turn, will gain you not just another few instructions less (to pass parameters, set up a new stack frame etc.), but it gives the optimizer a much bigger chunk of code to perform further optimization like register allocation, loop unrolling, common subexpression elimination and many others. That’s what puts you in the ballpark of orders-of-magnitude faster code. And it’s one of the major reasons why dynamic languages are indeed sluggish when compared to static-typed languages (there are others…).
Very recently though, dynamic languages started to benefit from the same advanced optimizations. The Self system is actually the big precursor, and it was a dynamic language (Smalltalk-family). Now we start to see optimizers like the latest JavaScript JIT compilers in good browsers (i.e. anything except MSIE) that can do these tricks for a dynamic language. But they can’t yet compete with static-typed languages. Let’s say the dynamic stuff is now just 10X slower instead of 100X, that’s already some improvement. ;-)
I think Paul Graham once said that every program with more than a certain level of complexity contains a subset Lisp interpreter. (He made his money doing an online store builder in Lisp that he later sold to Yahoo.) Lisp, of course, requires no type declaration and assumes a dynamic environment.
There are reasons for dynamic dispatch in solving lots of coding problems, and dynamic dispatch doesn’t have to kill performance if it is done right. Apple uses it in Objective-C and there was a good article on its efficient implementation at BBlum’s Weblog-o-mat.
Lisp allows fully dynamic runtime typing, but I also remember that MacLisp was challenged to produce numerical code as good as the PDP-10’s FORTRAN compiler’s back in the 70s. It succeeded by allowing type annotations, doing type inferencing and then doing some good old fashioned optimization. (Each routine had an optimized entry and an interpreter friendly compiled entry, so you could get full performance from the compiled subroutine, but you could still call it from command level.)
Yeah, this is a pretty incoherent comment.
Kaleberg, I believe you’re thinking of Greenspun’s Tenth Rule of Programming: “Any sufficiently complicated C program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp”. (I wish I knew his first nine Rules.)
If you want a dynamic language with optional static types, have a look at Typed Scheme. It’s distributed with Racket (formerly PLT Scheme): racket-lang.org
It has a sound type system, does some inference, makes it easy to mix typed and untyped code and you have access to the whole Racket ecosystem.
I don’t think he had another nine rules. In fact, he isn’t quite sure of when he promulgated his tenth:
Thanks, Vincent. This is the kick I needed to get back up to speed in my grand learning-Scheme adventure. (That’s fallen by the wayside a little over the last few weeks, overtaken by my Tolkien obsession and the Doctor Who blogging, but it will return!)
Pingback: How correct is correct enough? « The Reinvigorated Programmer
[quote]things like the C++ template system, and hilary ensues.[/quote]
Or, in fact, hilarity? Or are we talking about a clinton dropping by?
LOL @iWantToKeepAnon. That is quite a typo you spotted.
Thanks for drawing it to my attention. Now fixed, sadly.
(a bit late, but…)
In Ruby / any of the more-dynamic languages, enforcing types is still possible, just not in the function definitions. You just ensure that whatever object you are passed is either a “.kind_of?(Class)” or “.respond_to?(method_name)”, whichever is most “correct”.
I’d also submit my theory that another reason for SSP in large organizations is that they often do waterfall-style programming. Up-front definitions of classes, methods, and protocols is *essential* if you want to spread the actual production across 100 programmers of skill -1 to 11. In fact, simplify that: it’s all communication protocols. Without them, can you trust 100 people to work together, especially in a business setting where those 100 will probably not talk to each other?
As to the function definitions, I’ve had plans for building a language to address this exact issue for a couple years now, but haven’t had time to dive into language-building. This adds a few wrinkles to my plans, thanks!
The point of static type checking isn’t that types are checked, but that types are checked at compile time.
Checking type inside the function, at run-time, is not equivalent.
Pingback: Tutorial 12b: Too far may not be far enough « Sauropod Vertebra Picture of the Week
Pingback: Closures, finally explained! | The Reinvigorated Programmer
To the author: static typing is optional in Common Lisp (probably in Scheme too, but i’m not sure about Scheme). You can use ruby-like no-types while prototyping, and then specify types for your variables. There are probably as many lisp programming styles as there are lisp programmers, but this style is explained by Peter Seibel in Practical Common Lisp ().
Please, don’t comment with comments like lisp doesn’t have it or lisp is dead. It’s a mathematical concept brought to life in a form of programming language. Nothing to hate, really. Treat it as just another programming language.
Scheme is like Lisp in that the only reason for type declarations are for documentation and possibly as hints for compiler. Of course, the whole point of Scheme was to investigate just what it meant to say something was a “type” or “class”.
Lisp was an important and powerful language since it had no true syntax, just a representation of program data structure which was just lists. It opened a lot of questions about data type and control structure, and since it was so amorphous and malleable, it led to a lot of interesting answers. Scheme was developed to more formally investigate a subclass of the relevant questions, and anyone who has read SICP realizes that it has provided some interesting answers.
I find myself programming in Ada as a language of choice quite often. It’s at the far end of the heavy typing spectrum; I find myself having to add names and explicit declarations for things that C++ would let you omit, and it’s really quite burdensome. (I won’t do C++ because I refuse to work in languages that can’t bounds check arrays, and Java doesn’t feel as powerful … and is a language of necessity.)
My other language is Python, at the other end of the spectrum. I gauge from your comments it’s much like Ruby in the typing department. One of the reasons I choose Ada over Python for a project is speed; part of it may be static typing, but I understand that Python is slower then many of its competitors. But I also find that Python’s typing leads me into bad habits. It’s wonderful to sometimes say [0, “blah”, 3.14] without defining a record, but I ended up with a program where anonymous tuples were flying around, and stuff like
existed. I used a named record extension to clean it up, but Python seems positively averse to letting you specify a type that has these elements and only these elements; even if m is a movie named record, m.atomic_weight = 1 is still a legal assignment and will add an atomic_weight part.
I suppose the point of this ramble is that a statically typed language at its best will get you to make what you’re doing clear in the code, and at its worst a dynamically typed language can fight a clear statement of what’s going on (and certainly any easy way to guarantee it.)
Yes, Python and Ruby are philosophically very similar as regards typing. I do agree it’s awful that these languages (and Perl) give you no way of optionally talking about types when you want to. I love it that they don’t force you to, but why on earth shouldn’t a Ruby method be defined, if you want to, as
?
I'm just getting started with classes and objects, and having a little trouble. I just filled out some of the basic parts of the program, because I figured I'd have trouble with the class/object implementation. I get the error "error C2228: left of '.determineSalary' must have class/struct/union type" in regards to line 23, which I've put a smiley next to. Edit: oops, didn't work out quite right, it's the line that says "companyPayroll.determineSalary();"
I'm not quite sure what the deal is, I've looked through my book, trying to figure it out, I thought it might have something to do with constructors...but I don't understand constructors that well yet either! Thanks guys/gals.
#include <iostream>
#include <iomanip>

using std::cout;
using std::cin;
using std::endl;

class Payroll
{
public:
    void determineSalary();
}; //End class Payroll

int main()
{
    Payroll companyPayroll();
    companyPayroll.determineSalary();
    void setPayroll();
    return 0;
}

void Payroll::determineSalary()
{
    int paycode;
    int hours;
    double salary;

    do
    {
        cout << "Enter paycode (-1 to end):" << endl;
        cin >> paycode;

        switch (paycode)
        {
            case 2:
                cout << "Hourly worker selected\n";
                cout << "Enter the hourly salary: \n";
                cin >> salary;
                cout << "Enter the total hours worked: \n";
                cin >> hours;
                cout << "Worker's pay is: " << salary * hours << "\n";
                break;
        } // End switch
    } while (paycode != -1); // End do
} //End function determineSalary
Communicating with nerdkit via USB to serial on a mac
hello,
I am trying to open a port to communicate with the nerdkit using Xcode on my mac in C++. I can communicate with the kit using terminal by typing;
ls /dev
.... to get a list of all devices and find the device number. i got the number needed (PL2303 .....)
so communicating with the device is,
screen /dev/tty.PL2303-0000101D 115200
or
screen /dev/cu.PL2303-0000101D 115200
... where i think 115200 refers to the baud rate. These two lines communicate with the kit.
The next step for me is to discard terminal completely and have the project file (running the software i need to use (in this case its face tracking software)) communicating with the nerdkit. The nerdkits staff have been putting up with my questions for the best part of six months so i thought i would give them a break on explaining the detail of this process. They told me that i could open the port by treating PL2303 ..... as a file that i open and write the data to.
This can be done by using some part of stty, [-f] i think. However i have no idea how to implement this and the code I have written comes back with errors. i know it is completely wrong but here it is anyway
{
open();
stty[-f /dev/tty.PL2303-0000101D 115200]
}
I get build errors because stty, f, dev, and tty are not declared and an invalid suffix on the integer constant. I think i need to include some sort of a header file at the beginning;
#include "stty.h" maybe?
.... and define stty, f, dev and tty
I havent tried it yet as im at work and forgot my laptop with the project on dammit!.
I am a total noob so try not to be too harsh lol!!
This post shows up pretty high in a Google search for doing this and since it isn't answered yet and I needed to figure out how to do something like this for my own project, here's the answer I came up with. On the AVR side, we'll have a program that reads a line of text from the serial port and sends it to the LCD. The hardware should be wired up as in the guide with both the USB cable and the LCD attached. You'll need the usual set of headers before this.
int main() {
// The usual boilerplate initialization.
lcd_init();
FILE lcd_stream = FDEV_SETUP_STREAM(lcd_putchar, 0, _FDEV_SETUP_WRITE);
uart_init();
FILE uart_stream = FDEV_SETUP_STREAM(uart_putchar, uart_getchar, _FDEV_SETUP_RW);
stdin = stdout = &uart_stream;
// Clear out anything that's waiting.
while(uart_char_is_waiting()) {
uart_read();
}
// Create a buffer for our string...
char command[20];
// and how many characters have been written to it.
uint8_t index = 0;
while(1) {
// See if we've gotten any data.
if(uart_char_is_waiting()) {
while(uart_char_is_waiting()) {
// Read the character and check for a newline.
char character = uart_read();
if(character != '\n') {
// Insert the character at the current position in the buffer.
command[index] = character;
// For debugging purposes, write it on the fourth line of the LCD.
lcd_goto_position(3,index);
lcd_write_data(character);
index++;
// Make sure we aren't writing more than fits in the buffer.
if(index == 20) {
lcd_line_three();
lcd_write_string(PSTR("Buffer overload. "));
index = 0;
}
}
else {
// We have received a newline character.
// Clear the fourth line.
lcd_line_four();
lcd_write_string(PSTR(" "));
lcd_line_three();
int i;
// Write the string we've received.
for(i = 0; i < index; i++) {
lcd_write_data(command[i]);
}
// Reset the index.
index = 0;
}
}
}
}
// We never get here.
return 0;
}
You can fire up screen as usual and start typing into that to verify that the characters you type show up on the fourth line.
On the Mac side, we have a C program. The code should work fine in your C++ program and XCode shouldn't have any trouble with it, but I'm skipping over setting this up in XCode as it's needlessly complex for this example. This program will open the device file, set the baud rate to match the kit, and write "Hello, World!". Drop the following into a file such as main.c
#include <stdio.h>
#include <termios.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>   // for write(), close() and usleep()
int main(int argc, char **argv)
{
int fd;
// Open the device file. Change this to match your device.
printf("open /dev/cu.PL2303-0000103D\n");
fd = open("/dev/cu.PL2303-0000103D", O_RDWR);
if(fd < 0)
{
printf("Error opening port.\n");
return 1;
}
printf("Port opened.\n");
// Change the baud rate of the tty device.
struct termios term;
tcgetattr(fd, &term);
cfsetispeed(&term, B115200);
cfsetospeed(&term, B115200);
tcsetattr(fd, TCSANOW, &term);
// Here's the string we'll send.
char *command = "Hello, World!\n";
int i;
for(i = 0; i < strlen(command); i++)
{
write(fd, &command[i], 1);
// If we don't wait, characters get lost.
usleep(40000);
}
close(fd);
return 0;
}
On the command line, navigate to the directory where you've saved main.c and type:
gcc main.c
This will create a program called a.out which you can run by typing:
./a.out
Connect the kit to your computer and if it's not being powered by the USB port, connect the power. Then run the program. You should see the characters appearing one at a time on line four and then the message should move up to line three.
There is ample room to improve this example, but it should get you going in the right direction.
Note that there's really nothing Mac specific about this other than the name of the device file. The program should work the same under Linux (though I have not tested that).
Hi N3Roaster,
Welcome to the forums, and thanks for the informative post!
The straightforward way of reading characters and sending them to the LCD works fine, but as you mention in your PC-side code, you may need to manually insert a delay because it takes a bit of time to communicate with the LCD. If you're up for a challenge, I will mention that it's possible to implement your serial port handling in an interrupt handler, and implement a data structure so that the LCD communication runs in the main loop "in the background". That's more complicated, but a great learning experience to fight your way through :-)
Mike
Yes, that's one of those opportunities to improve the above code. I've already written an ISR to handle the ADC interrupt and have looked at the datasheet for details on doing the same with serial communications. It looks fairly straightforward. In that case I'm thinking the way to go would involve two character buffers, one for the string being sent, another with a finished string ready for processing in the main routine, just swapping the pointers and setting a flag when new strings are accepted. The particulars would depend on how frequently the computer would be sending a new string and how long whatever processing is being done would take. Another way to take this would be to not just send the text directly to the LCD, but interpret simple commands and (for example) drive pins high or low in response. The PC side code can also be made more sophisticated, for example, by listening for data from the kit which could be used for flow control, status information, &c. It could also be made interactive.
If anyone is still watching this thread why do I get:
||Hello, World!
Where do the bangs come from?
Ralph
When experimenting with something new in programming, it’s always useful to step through the code in a debugger the first time to see what it does. An unfortunate side effect is far slower than normal execution speed, which interferes with timing-sensitive operations. An alternative is to have a logging mechanism that doesn’t slow things down (as much) so we can read the logs afterwards to understand the sequence of events.
Windows has something called Event Tracing for Windows (ETW) that has evolved over the decades. This mechanism is implemented in the Windows kernel and offers dynamic control of what events to log. The mechanism itself was built to be lean, impacting system performance as little as possible while logging. The goal is that it is so fast and efficient that it barely affects timing-sensitive operations. Because one of the primary purposes of ETW is to diagnose system performance issues, it obviously can't be useful if running ETW itself causes severe slowdowns.
ETW infrastructure is exposed to Universal Windows Platform applications via the Windows.Foundation.Diagnostics namespace, with utility classes that sounded simple enough at first glance: we create a logging session, we establish one or more channels within that session, and we log individual activities to a channel.

Trying to see how it works, though, can be overwhelming to the beginner. All I wanted is a timestamp and a text message, and optionally an indicator of importance of the message. The timestamp is automatic in ETW. The text message can be done with LogEvent, and I can pass in a LoggingLevel to signify if it is verbose chatter, informative message, warning, error, or a critical event.
In the UWP sample library there is a logging sample application showcasing use of these logging APIs. The source code looks straightforward, and I was able to compile and run it. The problem came when trying to read this log: as part of its low-overhead goal and powerful complexity, the output of ETW is not a simple log file I can browse through. It is a task-specific ETL file format that requires its own applications to read. Such tools are part of the Windows Performance Toolkit, but fortunately I didn’t have to download and install the whole thing. The Windows Performance Analyzer can be installed by itself from the Windows store.
I opened up the ETL file generated by the sample app and… got no further. I could get a timeline of the application, and I can unfold a long list of events. But while I could get a timestamp for each event, I can't figure out how to retrieve messages. The sample application called LogEvent with a chunk of "Lorem ipsum" text, and I could not figure out how to retrieve it.
Long term I would love to know how to leverage the ETW infrastructure for my own application development and diagnosis. But after spending too much time unable to perform a very basic logging task, I shelved ETW for later and wrote my own simple logger that outputs to a plain text file. | https://newscrewdriver.com/2020/05/21/complexity-of-etw-leaves-a-beginner-lost/ | CC-MAIN-2021-17 | refinedweb | 522 | 59.94 |
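A minimal version of such a plain-text logger - sketched here in Python rather than the C#/UWP code the post is about, with invented file and function names - could be as simple as appending one timestamped, leveled line per event:

```python
import datetime

def log(path, level, message):
    """Append a timestamped, leveled message to a plain text log file."""
    timestamp = datetime.datetime.now().isoformat()
    with open(path, "a") as f:
        f.write(f"{timestamp} [{level}] {message}\n")

# Usage: the log file is trivially readable afterwards with any text editor.
log("app.log", "INFO", "starting up")
log("app.log", "ERROR", "something went wrong")
```

Unlike ETW this adds noticeable I/O overhead per call, but for basic debugging it trades performance for a format that needs no special tooling to read.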
SSL/TLS & Digital Certificates
References & Resources
There is a ton of resource available on this. Here are some of the most useful pages that I've found, the Zytrax pages being of particular note.
- [1] The book "Network Security with OpenSSL" by Pravir Chandraet al.
- [2] Zytrax's "Survival Guide - TLS/SSL and SSL(X.509) Certificates". [Last accessed: 13-Jan-2015]
- [3] Zytrax's "Survival Guide - Encryption, Authentication" . [Last accessed: 13-Jan-2015]
- [4] Command Line Fanatic's "OpenSSL tips and tricks" [Last accessed: 13-Jan-2015]
- [5] PS | Enable's "A Brief Primer on Diginal Certificates and File Types" [Last accessed: 13-Jan-2015]
- [6] Crypto++'s Keys and Formats Guide [Last accessed: 28-Apr-15]
- [7] A Layman's Guide To A Subset Of ASN.1, BER, and DER, by Luca Deri. [Last accessed: 28-Apr-15]
Please note, I'm not an expert with SSL/TLS, just learning myself. A lot of the examples below are creating toy, or test, CAs, certificates, keys etc and would have to be considerably reviewed if you were going to actually run your own CA! The above links are the most informative I found whilst trying to learn about the subject...
What Is SSL/TLS and Why Use It?
The 30,000 Foot View
SSL/TLS is a security protocol that allows two communicating parties to authenticate (verify) each other and then talk privately. This means that if Alice is talking with Bob, she can be sure that she is actually talking to Bob and not some imposter pretending to be him. Likewise Bob can be sure he is talking to the real Alice. As well as allowing them to verify each other it also allows them to talk without being "overheard" by any one else.
When Alice talks to Bob, she will ask Bob for his certificate. She can then verify Bob's certificate by checking if it has been signed by someone she already trusts (usually a "Certificate Authority"). Bob can ask the same of Alice (referred to as "Client Authentication").
The reason to use SSL/TLS is therefore quite clear. For example, when I log onto my bank's server I can be sure that I'm talking to my real bank, not some fraudster, and that the transactions and information I send to and receive from the bank are private and viewable only by me.
So what would happen if Alice was just talking to Bob without any protection? Well, the standard view involves the characters Alice and Bob, who we've met, but also a shady third party, Eve, who is going to try and listen to their conversation...

In the above diagram Alice sends a message in plaintext to Bob. She's not done anything to it; it is in whatever format the document natively exists in. For example, if she's sending a Word document then this is just a plain DOCX file that anyone with Word can open. But because packets on the Ethernet are not private, Eve can easily pick up the document too and read it.
What Alice needs to do is jumble up her document in such a way that Bob will know how to un-jumble it and Eve will not. It is important that Eve cannot (easily) guess at how to un-jumble it either. To do this Alice and Bob encrypt their messages.
Manage Multiple Keys
On *nix systems you private & public keys are generally stored in your home directory
in the
~/.ssh/ directory. You can have multiple key pairs that you use for access
to different services. And, in fact, you should have multiple key pairs as using the same key
to access different services is generally considered to be weak security. Why? If you use
multiple keys and one is compromised, only one of the services you use is compromised. If
the same key were used for multiple services then, all those services could be
compromised by the one key being discovered!
To manage multiple keys on *nix systems use
ssh-agent:
ssh-agent is a key manager for SSH. It holds your keys and certificates in memory, unencrypted, and ready for use by ssh. It saves you from typing a passphrase every time you connect to a server. It runs in the background on your system, separately from ssh, and it usually starts up the first time you run ssh after a reboot.
The SSH agent keeps private keys safe because of what it doesn’t do:
- It doesn’t write any key material to disk.
- It doesn’t allow your private keys to be exported.
Private keys stored in the agent can only be used for one purpose: signing a message.
The following is cribbed and annotated/extended from How To Manage Multiple SSH Key Pairs by Joseph Midura
# Generate a public/private key pair.
# Best practice is to protect the key with a STRONG passphrase. RSA currently recommended key length
# at time of writing is 4096. Also consider using Ed25519 instead (see later section).
# See:
# If you can link with a password manager that generates long, random, strong passphrases even better!
# E.g. for LastPass
#   CLI:
#   Random guide:
ssh-keygen -t rsa -f key-name -b 4096

# Create known hosts file. Joseph Midura's article recommends creating a known hosts file for each
# key. The file stores all hosts you connect to using this profile - easier debugging than putting
# all keys in one known hosts file and jumbling them all up!
touch known_hosts_github    (~/.ssh/known_hosts_github)

# Create / append to config file (~/.ssh/config). This is an example for github...
Host github.com
    Hostname github.com
    User git
    AddKeysToAgent yes            # Specifies whether keys should be automatically added to a running
                                  # ssh-agent(1). If this option is set to yes and a key is loaded from a
                                  # file, the key and its passphrase are added to the agent...
    IgnoreUnknown UseKeychain     # Only for macOS Sierra 10.12.2 or later to load the keys automatically
    UseKeychain yes               # Only for macOS Sierra 10.12.2 or later to load the keys automatically
    IdentityFile ~/.ssh/github_key
    UserKnownHostsFile ~/.ssh/known_hosts_github
                                  # Specifies a file to use for the user host key
                                  # database instead of ~/.ssh/known_hosts.
    IdentitiesOnly yes            # Specifies that ssh should only use the identity keys configured in
                                  # the ssh_config files, even if ssh-agent offers more identities.

# Add keys to ssh agent so you don't have to keep entering your password
eval "$(ssh-agent -s)"
#    ^
#    Generates Bourne shell commands on stdout. Use eval to execute in current shell.
# Outputs something like this:
#   SSH_AUTH_SOCK=/tmp/ssh-FkzuDlePs3bV/agent.75; export SSH_AUTH_SOCK;
#   SSH_AGENT_PID=76; export SSH_AGENT_PID;
#   echo Agent pid 76;

# Add PRIVATE key
ssh-add path-to-prv-key

# Copy PUB key to clip board and paste into relevant service
cat key_file | pbcopy   # or clip.exe in WSL
Keep Your Keys Fresh
Good practice to re-generate your keys at least yearly. Also regenerate to the currently recommended key length and algorithms. For example, 2048 is no long a sufficient key length for RSA, 4096 recommended at the time of writing. Also, could be worth using ED25519 instead of RSA:
The Ed25519 was introduced on OpenSSH version 6.5 ... It's using elliptic curve cryptography that offers a better security with faster performance ...
...
Today, the RSA is the most widely used public-key algorithm for SSH key. But compared to Ed25519, it's slower and even considered not safe if it's generated with the key smaller than 2048-bit length ...
E.g.:
ssh-keygen -o -a 100 -t ed25519 -f ~/.ssh/id_ed25519 -C "john@example.com"
URL, URI Or URN?
Summary...
-----------
URL: Uniform Resource Locator.
       [Scheme]://[Domain]:[Port]/[Path]?[QueryString]#[FragmentId]
     A URL points to something "real", i.e., a resource on a network which can be located using
     the URL.
URN: Uniform Resource Name.
       urn:[namespace identifier]:[namespace specific string]
     "Namespace identifier" is just a string that identifies how the "namespace specific string"
     should be evaluated. It is usually registered with IANA. E.g. isbn:1234567891234.
URI: Uniform Resource Identifier == URLs + URNs. It is a superset of URLs and includes URLs and
     URNs. A URI is just a unique string that identifies something and does not have to have any
     other meaning other than that. I.e., it does not have to "point" to anything real.

URLs both identify objects and tell you how to find them. URIs just identify objects (so are a
superset of URLs), and URNs are just URIs that may persist through time.

More Detail...
---------------
The following StackOverflow thread [] gives many really good explanations. You can read the RFC
here. From the RFC an "identifier" is defined as follows:

    An identifier embodies the information required to distinguish what is being identified
    from all other things within its scope of identification.

So how is a URL different from a URI? The RFC also explains:

    The term "Uniform Resource Locator" (URL) refers to the subset of URIs that, in addition to
    identifying a resource, provide a means of locating the resource by describing its primary
    access mechanism (e.g., its network location).

So, a _locator_ is something that will provide a means of locating the resource. A URL is
therefore an identifier and a locator, whereas a URI is an identifier, but not necessarily a
locator. I.e., URIs uniquely identify things but may not tell you how to find them. URLs are the
subset of URIs that tell you how to find the objects identified.

And what about URNs?

    The term "Uniform Resource Name" (URN) ... refer[s] to both URIs ... which are required to
    remain globally unique and persistent even when the resource ceases to exist or becomes
    unavailable, and to any other URI ...

So URNs are just URIs that may or may not persist even when the resource has ceased to exist.
Kind of a permanent URI which is more heavily regulated, usually by IANA.

So, to summarise, we could say that URLs both identify objects and tell you how to find them.
URIs just identify objects, and URNs are just URIs that may persist through time.
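The [Scheme]://[Domain]:[Port]/[Path]?[QueryString]#[FragmentId] breakdown above can be checked with Python's standard urllib.parse module (the URL itself is a made-up example):

```python
from urllib.parse import urlparse

# Split a URL into its component parts.
u = urlparse("https://jeh-tech.com:443/docs/page?q=ssl#summary")
print(u.scheme)    # https
print(u.hostname)  # jeh-tech.com
print(u.port)      # 443
print(u.path)      # /docs/page
print(u.query)     # q=ssl
print(u.fragment)  # summary
```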
Cookies
From:

    HTTP cookies, or internet cookies, are built specifically for Internet web browsers to
    track, personalize, and save information about each user's session. A "session" just refers
    to the time you spend on a site.

- Sent by servers, stored by browsers. Stored using name/value in browser map.
- If site name exists in browser cookie cache then upon site visit, browser sends cookie to
  server.
- Allows server to add STATE to the STATELESS HTTP protocol.

Two types of cookies:

1. Session cookies. Only whilst in website domain. Not stored to disk.
2. Persistent cookies. Stored to disk. Normally expire after some time. Used for authentication
   and tracking.
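The server-sets / browser-returns name-value exchange above can be sketched with Python's standard http.cookies module (the cookie name and value here are invented for illustration):

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header for a persistent cookie.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600   # persists for an hour, then expires
print(cookie.output())   # e.g. Set-Cookie: session_id=abc123; Max-Age=3600

# Browser side: parse the Cookie header the client sends back on later visits.
received = SimpleCookie("session_id=abc123")
print(received["session_id"].value)      # abc123
```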
Common Website Attack Types / Policies / Terminology
Same Origin Policy (SOP)
------------------------
From:

    [The same-origin policy] was defined many years ago in response to potentially malicious
    cross-domain interactions, such as one website stealing private data from another. It
    generally allows a domain to issue requests to other domains, but not to access the
    responses.

Same origin == same port, URL scheme (HTTP != HTTPS for example) and domain. Cross-origin
restrictions are generally applied to what JS code can access. For e.g. JS code from
jeh-tech.com won't generally be able to access resources on some-other-site.com under the SOP.
SOP is not applied to img, video, script etc tags - it is not generally applied to page
resources. BUT JS won't be able to read the contents of resources loaded from different origins!

Cross Origin Resource Sharing (CORS)
------------------------------------
Sometimes SOP is too restrictive... the solution is CORS, which enables controlled access to
resources located outside of a given domain. From:

    A controlled relaxation of the same-origin policy is possible using cross-origin resource
    sharing (CORS).

Caution! Badly configured CORS can cause security holes! CORS is allowed by the server being
contacted. It returns permission to the browser and the browser then allows the client code to
access these resources.

Normal request:

    GET /data HTTP/1.1
    Host: jeh-tech.com
    Origin :

If jeh-tech.com is to be granted access to "some-resource", the HTTP response would look
something like the following (other responses using wildcards etc. also possible):

    HTTP/1.1 200 OK
    ...
    Access-Control-Allow-Origin:

The site has loads of good examples of security vulnerabilities that can be introduced by badly
configured CORS as well as free labs!

Content Security Policy (CSP)
-----------------------------

Cross Site Scripting (XSS)
--------------------------
See:

From:

    XSS ... allows an attacker to circumvent the same origin policy, which is designed to
    segregate different websites from each other.

    Cross-site scripting vulnerabilities normally allow an attacker to masquerade as a victim
    user, to carry out any actions that the user is able to perform ... If the victim user has
    privileged access ... [then da da daaaa!!] ... Cross-site scripting works by manipulating a
    vulnerable web site so that it returns malicious JavaScript to users ...

Cross Site Request Forgery (CSRF)
---------------------------------
A cross-origin attack.

Server-side request forgery (SSRF)
----------------------------------

XML External Entity (XXE) Injection
-----------------------------------
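One standard defence against the reflected XSS described above is escaping untrusted input before echoing it into a page, so injected markup becomes inert text. A minimal Python illustration (the payload is a toy example, not a real exploit):

```python
from html import escape

# Untrusted input that an attacker hopes will be echoed verbatim into a page.
user_input = '<script>alert("stolen cookies")</script>'

# Escaping converts the markup characters into HTML entities, so the browser
# renders them as text instead of executing them.
print(escape(user_input))
# &lt;script&gt;alert(&quot;stolen cookies&quot;)&lt;/script&gt;
```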
Secure Sockets Layer (SSL) And Transport Layer Security (TLS)
SSL and TLS are protocols that provide secure communications over a computer network or link.

SSL IS NOW DEPRECATED - IT IS SUPERSEDED BY TLS. TLS is based on SSL, developed due to
vulnerabilities in SSLv3. The SSL term is commonly used BUT NOW REFERS TO TLS, generally.

TLS provides:
- Encryption.     I.e. the message contents are "hidden" - just look like random crap.
- Data integrity. I.e. can be sure the message contents have not been changed.
- Authentication. I.e., the message came from the person you think it came from and not some
                  imposter.

Encryption ONLY HIDES the message, but it does not tell you that the message came from the
person you think it did, or that it hasn't been changed:

    ALICE -------[Encrypted Msg]-------> Bob
          [With Bob Pub Key]             [Decrypt with Bob Prv Key]

OR

    ALICE -------[Encrypted Msg]-------> Bad Guy -------[New encrypted msg]-----> Bob
          [With Bob Pub Key]                    [With Bob Pub Key]

To communicate with Bob, Alice encrypts her messages with Bob's public key. No one should
(reasonably) be able to decrypt this message without Bob's private key. Thus, the message
contents are secret. But, as can be seen, nothing prevents a "bad guy" encrypting his own
message with Bob's public key and sending it to Bob, whilst claiming to be Alice. Bob has no way
to know he is actually talking with the real Alice!

To verify the sender and be confident the message didn't change requires SIGNING. I.e., SIGNING
PROVIDES AUTHENTICATION.

Types Of Keys
--------------
1. Symmetric Keys: The same key encrypts and decrypts the message. E.g. like the key to your
   door - it both locks and opens the door.
2. Asymmetric Keys: Two different keys - one encrypts and one decrypts. It would be like having
   one key to lock your front door. Using the same key wouldn't unlock it; you'd need a
   different key. The keys are known as PUBLIC and PRIVATE keys and come as a KEY PAIR.

SSL/TLS use public/private key encryption. Public keys can be made available to the world. But,
because of this, you can't tell whether the public key you have received from your bank really
is the bank's public key and not that of a fraudster. Enter DIGITAL CERTIFICATES.
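The "data integrity" idea above can be illustrated with a toy Python sketch using the standard hashlib and hmac modules. This is illustrative only: real TLS uses negotiated cipher suites, AEAD modes and asymmetric signatures, not this simplified scheme.

```python
import hashlib
import hmac

message = b"Pay Bob 100 pounds"
tampered = b"Pay Eve 100 pounds"

# A plain hash detects changes to the message, *provided* the hash value
# itself can be trusted (Eve could otherwise just recompute it).
digest = hashlib.sha256(message).hexdigest()
assert hashlib.sha256(tampered).hexdigest() != digest

# An HMAC mixes a shared secret key into the hash. Eve cannot produce a valid
# tag for a tampered message without the key - symmetric-key integrity plus
# authentication. Digital signatures do the analogous job with asymmetric keys.
key = b"shared-secret"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.new(key, tampered, hashlib.sha256).hexdigest() != tag
```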
Digital Certificates & The Chain Of Trust
Digital Certificates
--------------------
A passport links a photo and a person. The link is verified by a TRUSTED
AUTHORITY, in this case the passport office. A passport is hard to fake, so
when Alice presents her passport we can match the passport photo with her
face and then infer that Alice is indeed who she says she is.

A digital certificate does the same thing for a PUBLIC KEY. It LINKS A PUBLIC
KEY TO AN ENTITY and, in the same way as a passport, has been VERIFIED
(SIGNED) BY A TRUSTED AUTHORITY, the Certificate Authority (CA). It provides
a method to DISTRIBUTE TRUSTED PUBLIC KEYS.

Obtaining a digital certificate is just like applying for a passport. You
send the appropriate forms, the CA does some checks and sends you back your
keys enclosed in a certificate. The process of asking a CA to verify your
keys is called A CERTIFICATE SIGNING REQUEST (CSR).

What the Digital Certificate (SSL Cert) Looks Like
--------------------------------------------------
+------------------------------+
|  +----------------+          |  } Information describing mysite.com and
|  | SSL Cert       |          |  } the CA. Also the public key of MySite.
|  | MySite.com     |          |  }
|  | Proxy CA Info  |          |  }
|  | MySite PUB KEY |          |  }
|  +----------------+          |  }
|                              |
|  +--------------------+      |  } A hash of the info describing mysite.com
|  | +----------------+ |      |  } and the proxy CA. The hash verifies that
|  | | HASH           | |      |  } the SSL info has not been changed.
|  | +----------------+ |      |  } The encryption of the hash ensures that
|  |                    |      |  } the hash has not been changed or spoofed.
|  |  Encrypted with    |      |  } The encrypted block can be decrypted by
|  |  private key       |      |  } anyone with the *public* key, so it is
|  |  of CA             |      |  } easily verifiable.
|  +--------------------+      |  }
|                              |
|  SSL Certificate             |
+------------------------------+

MySite's certificate contains MySite's public key. This means that anyone
can send MySite private data - they encrypt it with MySite's public key, and
only MySite can decrypt it, using the private key.
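The "hash encrypted with the CA's private key" block in the diagram can be sketched as follows. This reuses toy textbook RSA numbers and a made-up certificate info string; a real CA signs the full SHA-256 digest with proper padding via a vetted library, so treat this purely as an illustration of the idea.

```python
import hashlib

# Toy sketch of how a CA signs certificate info (demo numbers, not secure).
n, e, d = 3233, 17, 2753          # toy RSA: (e, n) public, (d, n) private

def toy_hash(data: bytes) -> int:
    # real CAs use the full SHA-256 digest; we reduce mod n so the toy keys fit
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

cert_info = b"subject=mysite.com; pubkey=..."   # the info block in the diagram

# CA side: sign = encrypt the hash with the CA's PRIVATE key
signature = pow(toy_hash(cert_info), d, n)

# Client side: decrypt the signature with the CA's PUBLIC key and compare
# it with a locally computed hash of the certificate info
assert pow(signature, e, n) == toy_hash(cert_info)   # certificate verifies

# Tampered info produces a different hash, so the same check (almost
# certainly) fails:
print(pow(signature, e, n) == toy_hash(b"subject=evil.com; pubkey=..."))
```

Because only the CA holds `d`, a valid signature proves the CA produced the hash, and a matching hash proves the info was not altered.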
But when Alice accesses MySite, how does she know that the certificate she
receives is actually from MySite, and not an imposter? The answer lies in
the encrypted hash of the certificate info. The CA uses its private key to
encrypt a hash of the certificate information that it has issued. The CA, as
a trusted third party, promises that it has verified that MySite is who it
claims to be.

Because the CA encrypts the hash with its private key, anyone can decrypt it
with the CA's public key. But, for this decryption to work, the encryption
MUST have been done with the CA's private key, so we know, assuming no
compromised keys, that it definitely is the CA that generated the hash.
Then, as long as the decrypted hash matches the client-generated hash of the
certificate info, the client can be sure that the certificate has been
signed by the trusted third party, and so the certificate, and thus MySite,
can be trusted to be who they say they are... nice!

Types Of Certificates
---------------------
1. Domain Validated Certificates (DVC)
   An X.509 digital certificate for TLS where the ID of the applicant has
   been validated by proving some control over a DNS domain. Not as trusted
   as an EVC - it is the LEAST TRUSTED option. The validation process is
   normally fully automated so it is the CHEAPEST. BAD FOR SENSITIVE DATA.
2. Extended Validation Certificates (EVC)
   Used by HTTPS websites and proves the ID of the legal entity that
   controls the domain. MORE EXPENSIVE because it requires verification of
   the requesting entity's ID by the CA (i.e. we used the passport office!).
   Manual processes are required. The level of encryption is the same; it is
   just the degree of trust that differs.

Certificate Restrictions
------------------------
A certificate is normally valid for use on a single fully qualified domain
name (FQDN). I.e. a certificate issued for one FQDN, e.g. jeh-tech.com,
cannot be used on another, e.g. www.jeh-tech.com or blog.jeh-tech.com. From
Wikipedia: [an FQDN specifies its exact location in the DNS hierarchy and so
can] be interpreted only in one way.

Multiple subdomains can be secured using a WILDCARD CERTIFICATE, which would
cover *.jeh-tech.com, for e.g.
NOTE: This ONLY COVERS SUBDOMAINS - it cannot cover totally different
domains. To cover multiple different domains requires a SAN (Subject
Alternative Name) certificate.

Root CAs
--------
Root CAs keep their private keys under numerous layers of security - they
are the "gold standard": a super trusted, uncompromisable source of trust.
We agree to totally trust the root CA, and this trust is built on their
ability to keep their private keys, well, private! This is super important
because if their private keys are compromised then all of the root CA's
certificates are compromised!!

Intermediate CAs & The Chain Of Trust
-------------------------------------
These act like a "proxy" for root CAs. The root CA signs their certificates.
E.g. mysite.com makes a CERTIFICATE SIGNING REQUEST (CSR) to an intermediate
CA (ICA), which signs the cert and returns the SSL cert to mysite.com. It is
signed by the ICA, but another chained certificate is provided: the ICA's
own certificate, which is signed by the root CA - we get A CHAIN OF
CERTIFICATES.

mysite.com ---> Site's SSL certificate AND ICA's certificate  } A chain of certificates, or trust.
                     ^                       ^                } Our SSL cert is signed by the ICA,
                     |                       |                } which vouches for our authenticity.
                [Signed by]                  |                } This certificate is CHAINED to
                     |                       |                } the ICA's own certificate, which
ICA -----------> ICA's Certificate ---------+                 } is vouched for by the root CA, who
                     ^                                        } everyone completely trusts. This
                     |                                        } is the chain. It's like accepting
                [Signed by]                                   } a recommendation from a friend.
                     |
                  Root CA

A browser, for example, will have a list of CA authorities it deems
trustworthy. So when it receives a certificate, it may not trust the proxy,
but as long as it can travel down the chain to find a source it does trust,
it can decide to trust the proxy, as if the party it trusts has
"recommended" the proxy.
This means that the browser has a list of trusted public keys which it can
use to decrypt, and thereby verify, one of the certificates in the
certificate chain it receives. If it can decrypt the hash, and the decrypted
hash matches the locally-generated hash for the cert, it knows:
    a) The hash definitely comes from who it says it's from,
    b) The hash has not been tampered with.
This means it can trust the public key contained in the cert and use that to
verify the next certificate down the chain, and so on, to verify everything.

Commercial vs. Roll-Your-Own
----------------------------
You can create your own certificates and they will be just as secure. The
only difference is that you will have to install your certificate into your
browser's list of trusted certificates manually, as opposed to a commercial
one which should already be, at least indirectly via a chain of trust, in
your browser's trust list.

Certificate Pinning
-------------------
From the OWASP pinning guidance: pinning is the process of associating a
host with its expected X.509 certificate or public key. Once a certificate
or public key is known or seen for a host, the certificate or public key is
associated or "pinned" to the host.

[From the ssh_config man page: If [StrictHostKeyChecking is set to "yes"]
ssh will never automatically add host keys to the ~/.ssh/known_hosts file
and will refuse to connect to a host whose host key has changed. This
provides maximum protection against trojan horse attacks, but can be
troublesome when the /etc/ssh/ssh_known_hosts file is poorly maintained or
connections to new hosts are frequently made. This option forces the user to
manually add all new hosts.]
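The pinning check itself can be sketched like this. The certificate bytes below are dummy placeholders; a real client would pin the fingerprint of the DER-encoded certificate (or public key) it receives during the TLS handshake.

```python
import hashlib

# Sketch of the check a pinning client performs. Dummy bytes stand in for
# the DER-encoded certificate actually received from the server.
def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of the raw certificate bytes."""
    return hashlib.sha256(der_cert).hexdigest()

# Stored when the certificate is first known/seen for the host:
PINNED = fingerprint(b"known-good certificate bytes")

def connection_allowed(received_cert: bytes) -> bool:
    # Connect only if the presented certificate matches the pinned one,
    # regardless of whether it chains to a trusted CA.
    return fingerprint(received_cert) == PINNED

assert connection_allowed(b"known-good certificate bytes")
assert not connection_allowed(b"attacker's substituted certificate")
```

This is the same idea as SSH's known_hosts file quoted above: a changed key/certificate is treated as a possible man-in-the-middle rather than silently accepted.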
Let's Encrypt And Certbot
Let's Encrypt "is a free, automated, and open certificate authority (CA),
run for the public's benefit". It provides digital certificates for free to
enable the prolific use of TLS for websites. To verify that the domain it
issues a certificate for belongs to the person requesting the certificate it
uses the ACME protocol. This basically requires the domain owner to
demonstrate that s/he owns the domain by responding to challenges that the
ACME server issues (see the Let's Encrypt "how it works" docs).

Certbot is a little tool that helps automate the ACME protocol to make it
easier for website owners to generate their own certificates. It requires
SSH access to the server. Certbot can, if you are using modern Linux and
servers like Apache, Nginx etc., do both the obtaining _and_ the installing
of the certificates for you.

At the end of the issuance process you should obtain the following files:
    - The public SSL certificate (certificate.crt)
    - The private key (private.key)
    - Optionally, a list of intermediate SSL certificates or an intermediate
      SSL certificate bundle (chain.pem and/or certificate_and_chain.pem).

Generic Certbot *Nix Installation
---------------------------------
wget <certbot-auto download URL> && \
    sudo mv certbot-auto /usr/local/bin/certbot-auto && \
    sudo chown root /usr/local/bin/certbot-auto && \
    sudo chmod 0755 /usr/local/bin/certbot-auto

Manual Certificate Request
--------------------------
To make the request on any machine other than your webserver you can
perform the steps for domain validation yourself. Use Certbot in MANUAL
MODE. It can do an HTTP or DNS challenge. In the former you upload a
specific file to your website to demonstrate ownership, and in the latter
you add a DNS entry to demonstrate ownership.

Similar to the webroot plugin: "...
If you're running a local webserver for which you have the ability to modify
the content being served, and you'd prefer not to stop the webserver during
the certificate issuance process, you can use the webroot plugin..." -- But,
would this mean you could own the website hosting but not the domain name? I
suppose that situation is unlikely.

Use:
    sudo /usr/local/bin/certbot-auto certonly --manual
    ^^^ Installs dependencies for you, which is why it needs sudo.
        Takes forever!

After the install the manual process is deliciously easy :) It defaults to
HTTP challenges. It will ask you to create 2 files with certain contents
that can be publicly accessed from your website. When it can read back these
challenges it knows you have control of the site and can issue the
certificate. You should see output similar to the following:

    IMPORTANT NOTES:
     - Congratulations! Your certificate and chain have been saved at:
       /etc/letsencrypt/live/
       Your key file has been saved at:
       /etc/letsencrypt/live/
       Your cert will expire on 2020-09-15.
Key Formats
See:
Secure Quick Reliable Login (SQRL)
Attack (-Defense) Trees
One description of the technique: begin with the hazard of interest, e.g.
access to our system, and work down the tree through levels of causes and
events until we meet fundamental events (e.g. a cause/event is an attack).

Another: An attack tree is a tree in which the nodes represent attacks. The
root node of the tree is the global goal of an attacker. Children of a node
are refinements of this goal, and leafs therefore represent attacks that can
no longer be refined ... The purpose of an attack tree is to define and
analyze possible attacks on a system in a structured way. This structure is
expressed in the node hierarchy, allowing one to decompose an abstract
attack or attack goal into a number of more concrete attacks or sub-goals.

A useful free tool is ADTree.

Quantitative analysis of an attack-defense scenario:
    * Standard questions
        * What is the minimal cost of an attack?
        * What is the expected impact of a considered attack?
        * Is special equipment required to attack?
    * Bivariate questions
        * How long does it take to secure a system, when the attacker has a
          limited budget?
        * How does the scenario change if both the attacker and the defender
          are affected by a power outage?

An attack tree only looks at the likelihood of an attack, not the impact. We
tend to only do attack trees for things that would cause significant damage.

For the L, M, H, E domain, the tool combines the scores in a relatively
straightforward way. For nodes with 'AND' children, the node score is the
highest of the child scores. For nodes with 'OR' children, the node score is
the lowest of the child scores. E.g. if a goal can be attained with an 'L'
child OR an 'H' child, then the goal is given an 'L' difficulty (because the
attacker would do the low-difficulty thing). If a goal can be attained with
an 'L' child AND an 'H' child, then the goal is given an 'H' difficulty,
because it's the most difficult thing to achieve. These scores just
propagate up the hierarchy. Hence...
for a goal with OR children, you need to address ALL of the lowest goal
ratings in order to raise the rating of the goal. With AND children you need
to address ANY of the lowest goal ratings in order to raise the rating of
the goal. Hence the attack tree helps you target mitigations at those things
that are most worth doing. I.e. if a goal can be achieved with an 'L' OR an
'M' child, there's little point in adding a mitigation that increases the
'M' to 'H', because the 'L' still dominates the goal... etc.

Attacker objectives have
    - Clear motivation
    - Clear impacts (severities) on company / organisation / stakeholders

To list attacker objectives:
    - Make a list of types of attacker
    - Make a list of objectives
    - Ask: what affects the bottom line? E.g. revenue loss, theft, brand
      damage

Do a tree for each attacker and objective combination. Worry about WHAT, not
how.

Use the tree for:
    - A summary of all attacker objectives
    - A detailed look at attack vectors
    - Analysis of risk based on likelihood
    - A PRIORITISED list of mitigations

What should come out of the attack tree is a list of actions that *need* to
be done, as being necessary to appropriately secure the resource in
question, relative to its position/importance within the system context.
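The AND = highest / OR = lowest propagation rule described above can be sketched as a small recursive function. The tree encoding here is my own for illustration, not ADTree's actual format.

```python
# Propagate L/M/H/E difficulty ratings up an attack tree:
# an OR node takes the EASIEST (lowest) child rating, because the attacker
# picks the easiest route; an AND node takes the HARDEST (highest), because
# every child must be achieved.
ORDER = {"L": 0, "M": 1, "H": 2, "E": 3}

def rate(node):
    """node is either a leaf rating string ("L"/"M"/"H"/"E"),
    or a tuple (op, children) with op in {"AND", "OR"}."""
    if isinstance(node, str):
        return node
    op, children = node
    ratings = [rate(c) for c in children]
    pick = max if op == "AND" else min
    return pick(ratings, key=ORDER.get)

# Goal reachable via an 'L' attack OR an 'H' attack -> rated 'L'
assert rate(("OR", ["L", "H"])) == "L"
# Goal needing an 'L' attack AND an 'H' attack -> rated 'H'
assert rate(("AND", ["L", "H"])) == "H"
```

Running this over a whole tree makes the mitigation-targeting point above concrete: raising one child of an OR node does nothing while a lower-rated sibling remains.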
Coursera Course Notes - Introduction To Cybersecurity Tools & Cyber Attacks By IBM
NIST - Information security is the protection of information systems from
unauthorized access, use, disclosure, disruption, modification, or
destruction in order to provide confidentiality, integrity, and
availability.

CIA Triad (plus authentication):
1. Confidentiality
   I.e., privacy - prevent unauthorized access to data. I.e., only the
   sender and the receiver can understand the message.
2. Authentication
   The sender and receiver want to confirm each other's identity in order to
   securely communicate.
3. Integrity
   Ensure information is accurate and not corrupted/spoofed etc. in any way
   over its entire lifecycle. I.e., a message in transit - has it been
   tampered with?!
4. Availability
   Requires routine maintenance and upgrades.

The protection of information systems from unauthorized access, use,
disclosure, disruption, modification, or destruction in order to provide
CIA - confidentiality, integrity and availability.

Vocab:
    Vulnerability: Flaw, loophole, oversight, or error that can be exploited
                   to violate system security policy.
    Exploit:       Defined way to breach the security of an IT system
                   through a vulnerability.
    Threat:        Event, natural or man-made, able to cause negative impact
                   to a system.
    Risk:          Situation involving exposure to danger.

THREATS ------> NATURAL FACTORS ------> Lightning, hurricane etc
       \------> HUMAN FACTORS   ------> INTERNAL FACTORS ------> Current employees
                               \                         \-----> Former employees
                                ------> EXTERNAL FACTORS ------> Malicious events ------> Hackers/Crackers
                                                                                  ------> Virus/Trojan/Worm

Early cyber security operations:
    * Clipper Chip
      The government was going to try and put chips into all US phones to
      spy.
    * Moonlight Maze
      Attempted to collect passwords from *nix servers. Attackers used lots
      of proxies and tools.
    * Solar Sunrise
      Series of attacks on Department of Defense networks. Exploited a known
      OS vulnerability. Left a backdoor and sniffer in the network! Many
      attacks were launched in this way on other countries and
      organisations.
      The attack was launched by two teenagers in California, not a nation
      state!!
    * Buckshot Yankee
      Buckshot Yankee was categorized as the most significant breach of US
      military computers ever by the Deputy Secretary of Defense, William J.
      Lynn.
    * Desert Storm
      Some of the radars were destroyed or tampered with to show fake
      formations. That is one of the things the US military command used to
      successfully attack some of Saddam Hussein's key military buildings.
    * Bosnia
      In Bosnia there were a lot of cyber operations - things like, for
      example, fake news and fake information delivered to the militaries in
      the field.
    * Stuxnet - delivered into Iran's nuclear plants. The worm is widely
      understood to be a cyberweapon built jointly by the United States and
      Israel in a collaborative effort known as the "Olympic Games".

In 2016 Forbes estimated $400bn losses from cyber attacks: $100bn via cyber
crime, $2.1bn via data loss. See also: IBM X-Force Threat Intelligence
Index.

CYBER SECURITY PROGRAM
 |
 +---- SECURITY PROGRAM: Evaluate, create teams, baselines, risks etc. for
 |                       monitoring and control.
 +---- ASSET MANAGEMENT: Documents, for example, are assets. Classify assets
 |                       and protect them.
 +---- ADMIN CONTROLS:   Policies, procedures, standards, DR, compliance and
 |                       physical security.
 +---- TECH CONTROLS:    Network infrastructure such as firewalls,
                         monitoring, logging etc.

Security challenges:
    * Simple high-level requirements can turn into complex access management
      implementations...
    * Security solutions themselves can be attacked if they expose
      structures as targets.
    * Protection of assets can cause complexity. E.g. a program is easy to
      create for the purpose it is needed for, but making it secure can add
      great complication.
    * Key management is hard.
    * Protectors have to be right all the time; attackers just once!
    * No one likes the hassle of security until it is needed (seat-belt
      philosophy).

- Challenge assumptions - ass/u/me
    - Explicitly list all assumptions, as they are thoughts/ideas that lead
      us to predict (incorrectly) outcomes.
    - Invite all stakeholders.
    - Brainstorm.
    - Look for phrases like "Will always/never...", "Generally the case...".
    - Examine each:
        - Why do I think this is correct?
        - When could this be false?
        - How confident am I that it is valid?
        - If it were invalid, what impact would it have and what is the
          severity?
    - Categorise based on evidence:
        - Solid & well supported
        - Correct, with caveats
        - Unsupported / questionable (doesn't necessarily mean wrong - need
          more data and iterate)
    - If any need more data, gather it and re-do.

- Consider alternatives
    - Failure to consider missing data, which we may be assuming or
      interpolating unconsciously, can lead to bad decisions.
    - Brainstorm - Who/What/Why/Where/When/How
        - Who is involved and affected by the outcome?
        - What is at stake? What could happen? What's the problem? What's
          the desired outcome?
        - Where did this take place? Where are the stakeholders? Where's the
          infrastructure? Does geography make a difference?
        - When did it take place? Does timing make a difference? When are
          the key dates?
        - Why are we doing this? What are the benefits? Key drivers?
          Motives?
        - How will we approach this? Is it feasible? Be detailed and
          specific and think through all alternatives.
    - AVOID becoming engrossed in one explanation - always consider
      alternatives - brainstorm!!
    - Get different perspectives.
    - Consider the opposite of your assumption - the null hypothesis.

- Evaluate data
    - What does normal look like? This is the key to anomaly detection.
    - Establish a baseline for what's normal: web traffic, network sensors,
      endpoint activity, source/dest/volume/velocity.
    - Look out for inconsistent data.
    - Test data against each hypothesis and discard those that don't fit.
- Identify key drivers
    - Technology
    - Regulatory - ISO27001 / GDPR / HIPAA / Cyber Essentials
        - Cyber Essentials is a simple but effective, Government-backed
          scheme that will help you to protect your organisation.
        - ISO/IEC 27001:2013 (also known as ISO27001) is the international
          standard for information security. It sets out the specification
          for an information security management system (ISMS).
            - ISO27001 includes things like screening employees on joining,
              etc., as well as other IT security measures that help address
              social engineering.
            - It provides requirements for establishing, implementing,
              maintaining and continually improving an information security
              management system, preserving the confidentiality, integrity
              and availability of information by applying a risk management
              process.
        - ISO/IEC 27000:2018 INFORMATION TECHNOLOGY
          Overview and vocabulary for information security management;
          designed for any size of organization.
        - ISO/IEC 27001:2013 INFORMATION TECHNOLOGY
          Security techniques - Information security management systems -
          Requirements.
        - ISO/IEC 27002:2013 INFORMATION TECHNOLOGY
          Security techniques - Code of practice for information security
          controls.
    - Society
    - Supply Chain
    - Employees
    - Threat actors. What are their technical capabilities, drivers, etc.?

- Understand context [most important!]
    - The operational environment in which you are working.
    - Consider the PERSPECTIVE of other stakeholders: managers, colleagues,
      clients.
    - Put yourself in other people's shoes...
    - Framing techniques - does it need to be set in a broader context? E.g.
      the slow elevator problem:
          Problem framing: "Elevator too old and slow" ---> "Waiting is boring"
          Solution space:  Replace elevator ---> Reduce perception of wait time.

Terminology:
    Black Hats - Black hats are criminals.
    White Hats - White hats are security researchers or hackers who, when
                 they discover a vulnerability in software, notify the
                 vendor so that the hole can be patched.
    Gray Hats  - Gray hats fall into the middle ground between these two
                 other hacker categories. Gray hats sell or disclose their
                 zero-day vulnerabilities not to criminals, but to
                 governments, i.e., law enforcement agencies, intelligence
                 agencies or militaries.
    Red Hats   - Like white hat hackers, red hat hackers also want to save
                 the world from evil hackers. But they choose extreme and
                 sometimes illegal routes to achieve their goals.
    Green Hats - Newbie, unskilled hackers.
    Hacktivists - The likes of Anonymous, LulzSec or AntiSec. Their mission
                 is largely politically motivated or ideological.
    Insiders   - Insiders are made up of disgruntled employees,
                 whistleblowers or contractors. Oftentimes their mission is
                 payback.
    State Sponsored Hackers - Have potentially unlimited resources to target
                 systems such as national infrastructure, in order to
                 achieve political or military goals.
    Cyber Terrorists - Motivations based on political or paramilitary goals.
    Cyber Criminals - Money motivated.

Some resources:
    The CIS Critical Security Controls are a recommended set of actions for
    cyber defense that provide specific and actionable ways to stop today's
    most pervasive and dangerous attacks.

    ISSA is the community of choice for international cybersecurity
    professionals dedicated to advancing individual growth, managing
    technology risk and protecting critical information and infrastructure.
2019 Ponemon Institute Study on the Cyber Resilient Organization:
    - High-automation organizations are better able to prevent security
      incidents and disruption.
    - A Computer Security Incident Response Plan (CSIRP) is applied
      consistently across the entire enterprise.
    - Communication with senior leaders about the state of cyber resilience
      occurs more frequently.
    - Senior management in high-performing organizations are more likely to
      understand the correlation between cyber resilience and their
      reputation in the marketplace.
    - High performers are more likely to have streamlined their IT
      infrastructure and reduced complexity.

Steps taken to significantly improve cyber resilience:
    - Hiring skilled personnel
    - Visibility into applications and data assets
    - Improved information governance practices
    - Implementation of new technology, including cyber automation tools
      such as artificial intelligence and machine learning
    - Elimination of silo and turf issues
    - Engaging a managed security services provider
    - Training and certification for cybersecurity staff
    - Training for end-users
    - C-level buy-in and support for cybersecurity
    - Board-level reporting on the organization's cyber resilience

High performers are more likely to reduce complexity in their IT
infrastructures.

The eight most effective security technologies:
    - Identity management & authentication
    - Security information & event management
    - Incident response platform
    - Cryptographic technologies
    - Anti-malware solution (AVAM)
        Anti-malware and antivirus definitely aren't the same. Antivirus
        depends more on the signatures of known viruses to detect specific
        viruses or types of viruses, much like a flu shot. In comparison,
        anti-malware uses non-signature-based anomaly detection to catch
        unknowns (malware that may not have been detected before).
    - Intelligence and threat sharing
    - Network traffic surveillance
    - Intrusion detection & prevention

Security threats
    +-------> Human Factors
    |             +-----------> Internal
    |             |                 +----------> Former employees
    |             |                 +----------> Current employees
    |             +-----------> External - Malicious events - hackers -
    |                           viruses, trojans, worms (malware)
    +-------> Natural Factors - Lightning, hurricanes, floods etc.

Examples of tools hackers use:
    SeaDaddy and SeaDuke (CyberBears US Election)
    BlackEnergy 3.0 (Russian hackers)
    Shamoon
    Duqu and Flame
    DarkSeoul
    WannaCry

See X.800: Security architecture for Open Systems Interconnection for CCITT
applications.

Attack Classification
    Passive Attacks
        Eavesdropping
            - Can go undetected for a *long* time.
        Traffic analysis
            - Frequency and size of messages.
            - Attacks confidentiality by inferring data from traffic
              analysis.
    Active Attacks
        Explicit interception and modification:
        - masquerade - impersonation of an authorized user or known user.
        - replay - a copy of a legitimate message is captured and
          re-transmitted. Attacks the integrity of the system's data.
            - e.g. replay the API call to transfer money from account A to
              account B.
        - modification
        - fabrication - a fake message from A to B, but really sent by C.
        - destruction/corruption - destruction/corruption of data and/or
          other resources.
        - denial/interruption of service.
        - disclosure of sensitive information - an attack on the
          confidentiality of a message.

Security services
    - Protect a system resource.
    - Security Analysis
        - Any analysis must take into account likely attacker goals and
          motivation, to put it in the appropriate context and provide input
          for prioritisation of issues and mitigations.
        - Security objectives are a statement of what needs to be protected
          in the system. Categorise them into CIA.
        - Must consider the breadth of confidentiality, integrity and
          availability issues in the context of the associated business
          risk.
        - Split into primary and secondary objectives and note what in the
          CIA they are focussed on.
    - Analysis methodologies
        - ADTrees, STRIDE / Secure Development Lifecycle (SDL)
            - E.g. Microsoft Threat Modeling Tool 2016
        - Threat risk scoring
            - Microsoft DREAD:
                - Damage Potential
                - Reproducibility
                - Exploitability
                - Affected Users
                - Discoverability
            - CVSSv3 - Common Vulnerability Scoring System
                - Has on average 3 levels to score against 8 criteria:
                    - Attack Vector (AV): Network (N), Adjacent (A),
                      Local (L) or Physical (P)
                    - Attack Complexity (AC): Low (L) or High (H)
                    - Privileges Required (PR): None (N), Low (L) or High (H)
                    - User Interaction (UI): None (N) or Required (R)
                    - Scope (S): Unchanged (U) or Changed (C)
                    - Confidentiality Impact (C): None (N), Low (L) or
                      High (H)
                    - Integrity Impact (I): None (N), Low (L) or High (H)
                    - Availability Impact (A): None (N), Low (L) or High (H)
            - See also: OWASP Security Knowledge Framework

    - Security services are intended to counter security attacks by securing
      transfers of data.
    - They implement security policies, which themselves are implemented by
      security mechanisms.
    - The security policy is derived from the business policy. E.g. the
      business policy is "no unauthorized data movements", so a security
      policy may be protocol suppression, the mechanism being disabling FTP,
      for example.
        - The mechanism implements a policy, like ID & authentication,
          digital signatures, access controls, etc.
    - X.800 services:
        - Authentication - I am who I say I am.
        - Access Control - Only authorized people can access the resource.
        - Data Confidentiality - No unauthorized disclosure.
        - Data Integrity - Tamper proof.
        - Non-Repudiation - Can't be denied by one party in the
          communications.
        - Availability

A watering hole attack works by identifying a website that's frequented by
users within a targeted organisation, or even an entire sector, such as
defence, government or healthcare. That website is then compromised to
enable the distribution of malware.
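As a rough illustration of the DREAD scoring listed above: one common convention scores each of the five categories from 0 to 10 and averages them into a single risk number. The scale and the example threat scores here are assumptions for illustration; teams vary in how they score and weight the categories.

```python
# Sketch of DREAD-style risk scoring (assumed 0-10 scale per category,
# simple average - conventions differ between teams).
def dread_score(damage, reproducibility, exploitability, affected_users,
                discoverability):
    """Average the five DREAD category scores into one risk rating."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Hypothetical threat: high damage, trivially reproducible and discoverable.
risk = dread_score(8, 10, 7, 8, 10)
assert risk == 8.6   # higher = riskier; use to rank threats for mitigation
```

The single number is only useful for *ranking* threats against each other, much like the L/M/H/E ratings in the attack-tree section earlier.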
Spectra-class attacks exploit flaws in the interfaces between wireless
cores, in which one core can achieve denial of service (DoS), information
disclosure and even code execution on another core.

MALWARE AND TYPES OF MALWARE
----------------------------
Malware: Short for malicious software. Any software used to disrupt computer
or mobile operations, gather sensitive information, gain access to private
computer systems or display unwanted advertising. Many malware attacks
attempt to remain hidden on the host, using its resources for purposes such
as launching denial of service attacks, hosting illicit data, or accessing
personal or business information.

Virus      = malware that spreads from computer to computer via HUMAN
             INTERACTION. Uses tactics to "hide", like polymorphic code etc.
Worm       = self-replicating malware that does NOT RELY ON HUMAN
             INTERACTION.
Trojan     = malware that damages the system or gives access to the host.
             Spread by posing as a legitimate piece of s/w, e.g. a game.
Spyware    = Tracks and reports host usage, or gets files/browsing info etc.
Adware     = Duh!
RATs       = Remote Access Trojans/Tools - let an attacker gain access to
             and control of the host.
Ransomware = Restricts access to data on the host to only the attackers.
    - Most ransomware does NOT require administrative privileges.
    - End users are often the first line of defense. If they are unable to
      recognize a security event and report issues via effective channels,
      the attack may continue uninterrupted and spread throughout the
      network.
    - Most malware uses DNS to connect to its control server - try DNS
      filtering/monitoring as a countermeasure.
    - Consider disabling Windows Scripting Host. Create the following
      registry value to disable it:
          HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Script Host\Settings\Enabled
      and set the 'Value data' field of Enabled to '0' (that is a zero
      without the quotes).
    - Develop and rehearse an incident response plan:
        - The NIST Incident Response Process contains four major steps:
          1. Preparation
          2.
Detection and Analysis
    - Detection
        - Identify any and all infected systems and those in imminent danger
          of becoming infected.
        - Contain the spread of the infection as soon as possible: isolate
          infected systems.
        - CAUTION: just because only one infected host encrypting files is
          found, it does not mean others have not been affected! Remain
          vigilant.
        - Do not reboot or restart an infected system. The infected system
          should be hibernated and disconnected from the network immediately
          and IT security staff should be notified.
    - Analysis
        - Identifying the specific variant of ransomware in action.
        - Determining how the malware entered the organization (root cause
          analysis).
3. Containment, Eradication, and Recovery
4. Post-Incident Activity

Protection against attacks:
    - Anti-virus software
    - Intrusion prevention systems / intrusion detection systems / unified
      threat management systems
    - Updating / patching - use the latest versions to address security
      holes
    - IAM etc.
    - Training staff
    - Policy

Internet Security Threats
-------------------------
1. Network Mapping (casing the joint!)
    - Find out what services are implemented on the network.
    - Use ping to determine hosts on the network.
    - Port-scanning: try to connect to each port in sequence and see what
      happens - e.g. nmap.
    - Countermeasures:
        - Traffic monitoring and analysis.
        - Use host scanners to record the network structure and alert if
          unexpected computers appear on the network.
        - White list of MAC addresses to be allowed on the network.
2. Packet sniffing
3. IP Spoofing
    - Any IP address can be put in the source address field of the IP
      packet, as can any MAC in the Ethernet frame etc.
    - Countermeasures:
        - Doesn't counter everything, but can use INGRESS FILTERING. Routers
          should not forward outgoing packets with invalid source addresses,
          e.g. a datagram with an IP source address not in the router's
          network. Cannot be applied to every network.
4. [Distributed] Denial of Service (DDoS)
    - Countermeasures:
        - Filter out flooded packets (e.g.
SYN) before they reach the network but throws good out with bad. - Traceback to source of floods. - Neither options very effective short or long term 5. Host Insertions - Generally an INSIDER THREAT - Malicious computer placed on network - Countermeasures: - Network monitoring and profiling. - Host scanning to discover hosts against MAC address inventory. - Maintain inventory of computers on netowkr by MAC address - Missing hosts ok but new unknown hosts are BAD. - Rouge software processes - A malicious program on an authorized computer or good process co-opted for malicious puposes. - Used to gain access to sensitive info (exflitration of sensitive data) - Used to monitor network - Etc. KILL CHAIN ---------- The set of work that needs to be done to compromise a victim. 1. Reconnaissance - Research, identification and selection of targets 2. Weaponization - Pairing remote access malware with exploit into a delivery payload - eg word doc 3. Delivery - Transport weapon to target (e.g. email with doc attachment) 4. Exploitation - Once delivered it is triggered to exploit vulnerable apps or systems. 5. Installation - Weapon installs backdoor on system allowing persistent access 6. Command and Control - Outside server comms with the weapons to provide "hands on keyboard access" inside the target's network. 7. Actions on Objective Social engineering ------------------ Fool victim into voluntarily giving attacker information Toolkits like SET - Social Engineering Toolkit (SET) - (Kali Linux but independently installable and available on GitHub!!) allow attacker to clone websites to create lookalikes. Phishing & Vishing ------------------ Open source campaign tools like "Go Fish" allow you to guage your cybersecurity training program inside your company is something that's adding value to the knowledge of the users, for example. 
X-Force Research ---------------- IBM X-Force Threat Intelligence research reports can help you keep pace with an evolving threat landscape and learn how to protect your networks and data from the latest threats and attack vectors. - More linux malware discovered in 2020 - 56 new malware families. - Ransomeware the biggest threat: 23% of all events X-Force responded to, $123m+ made by attackers! - Citrix flaw used in 25% of all attacks in first 3 months of 2020 MITRE ATT&CK FRAMEWORK ----------------------. Security event manager ---------------------- Security event management (SEM), and the related SIM and SIEM, are computer security disciplines that use data inspection tools to centralize the storage and interpretation of logs or events generated by other software running on a network. Security information management (SIM): Long-term storage as well as analysis and reporting of log data. Security event manager (SEM): Real-time monitoring, correlation of events, notifications and console views. Security information and event management (SIEM): Combines SIM and SEM and provides real-time analysis of security alerts generated by network hardware and applications.[
Self-Signed CA, Server & Client Keys, Using Open SSL
Creating A Test Certificate Authority
The online documentation is good but it still took me a little while to put all the pieces together by reading around different sites and posts on StackOverflow etc, so below are the steps I've used to create my own CA with a self-signed certificate using openssl on Linux.
The following is mostly based on the examples from "Network Security with OpenSSL" by Pravir Chandra et al...
To begin with create the directory structure for the CA. The OpenSSL CA utility expects the CA data to live in its own directory with certain subdirectories and files.
Use the following bash commands to create the CA utility directory and required files. The environment variable YOUR_CA_DIR can be set to any directory of your choosing (which should not exist yet or at least be empty).
mkdir -p $YOUR_CA_DIR cd $YOUR_CA_DIR mkdir -p private # private stores the CA's private key chmod g-rwx,o-rwx private # make directory only accessable by owner mkdir -p certs # certs stores all certificates PA issues touch index.txt # "database" to keep track of issues certificates echo '01' > serial # serial file contains a hex index (of at least 2 digits) used by # OpenSSL to generate unique certificate numbers
Now begin to write our the openssl.cnf file. This is the config file used by the CA. One thing I found was that the certificates generated were made to be valid from tomorrow (based on system date) and I wanted them valid from the day of creation. Hence, below I set a default start date (TODO: re-check this!)
# # Start to write the configuration file that will be used by the OpenSLL command line # tool to obtain information about how to issue certificates. # Note: Set default start date so that self-signed certificate is valid from NOW! DEFAULT_STARTDATE=$(date +'%y%m01000000Z') cat <<EOF >openssl.cnf [ ca ] default_ca = your_test_ca [ your_test_ca ] certificate = $YOUR_CA_DIR/cacert.pem database = $YOUR_CA_DIR/index.txt new_certs_dir = $YOUR_CA_DIR/certs private_key = $YOUR_CA_DIR/private/cakey.pem serial = $YOUR_CA_DIR/serial default_crl_days = 7 default_days = 356 default_md = sha256 default_startdate = $DEFAULT_STARTDATE policy = your_test_ca_policy x509_extensions = certificate_extensions [ your_test_ca_policy ] commonName = supplied stateOrProvinceName = supplied countryName = supplied emailAddress = supplied organizationName = supplied organizationalUnitName = optional [ certificate_extensions ] basicConstraints = CA:false EOF
# # Next add information to the configuration file that will allows us to create a self-signed # root certificate. NOTE: This is only for internal in-house use. We should use a real # CA-issued certificate for production!! cat <<EOF >>openssl.cnf [ req ] default_bits = 2048 default_keyfile = $YOUR_CA_DIR/private/cakey.pem default_md = sha256 default_startdate = $DEFAULT_STARTDATE default_days = 356 prompt = no distinguished_name = root_ca_distinguished_name x509_extensions = root_ca_extensions [ root_ca_distinguished_name ] commonName = Your Mini CA stateOrProvinceName = Hampshire countryName = UK emailAddress = ca@yourdomainname.com organizationName = Your Organisation Name Ltd [ root_ca_extensions ] basicConstraints = CA:true EOF
The next thing to do is to tell the openssl tool where to find the config file that was just created. You can do this by exporting the environment variable OPENSSL_CONF. Another option is to use the -config option to the openssl req command, which will override anything set in OPENSSL_CONF...
OPENSSL_CONF=$YOUR_CA_DIR/openssl.cnf export OPENSSL_CONF
Now it is time to actually create the CA. In this one command we will create the CA's private key and a certificate request which will be used to generate a certificate (which includes the public key) signed using the CA's private key...
NOTE: In the following example I use expect to automate the generation of the CA key set because for me this is just a little test CA which I won't be using to actually publish certificates... for that I'd get a certificate from a real CA! Embedding passwords in a script like this is very insecure, so unless your just playing around, don't do it.
# # Now generate self-signed certificate and generate key pair to go with it... # This will place the file cakey.pem in the CA's "private" directory echo "Creating self-signed certificate with password \"3nigma\"" expect - <<EOF >> ca_debug.txt puts [concat "OPENSSL_CONF =" \$::env(OPENSSL_CONF)] spawn openssl req -x509 -newkey rsa:2048 -out cacert.pem -outform PEM -verbose expect "PEM pass phrase:" send "your-password\r" expect "PEM pass phrase:" send "your-password\r" expect eof EOF if [ ! -f $YOUR_CA_DIR/private/cakey.pem ] || [ ! -f $YOUR_CA_DIR/cacert.pem ]; then echo "### ERROR: Failed to create certificate authority!" exit 1 fi
At this point the CA is fully created and can be used to generate signed certificates. The CA certificate is self-signed however so if you are going to use it with an HTTPS server and want to connect to it with your browser, you will need to be able to tell your browser to accept certificates signed by this new CA. For this, the cacert.pem must be converted into PKCS12 format so that it can be loaded into your browsers trusted root CAs list.
# # This is the certificate for use with web browser... echo "Now attempting to create cacert PFX P12 key for web browser" expect - <<EOF >> ca_debug.txt spawn openssl pkcs12 -export -in cacert.pem -out cacert.p12 -name "MyLittleTestCACertificate" -cacerts -nokeys expect "Enter Export Password:" send "a-password-of-your-choosing\r" expect "Enter Export Password:" send "a-password-of-your-choosing\r" expect eof EOF if [ ! -f cacert.p12 ]; then echo_error "### ERROR: Failed to export CA certificate to PKCS12 format" exit 1 fi
Importing The CA Certificate Into Chrome
The browser I'm using is Chrome, so here is how to load this certificate into Chrome...
- Browse to the location chrome://settings/
- Expand the settings list to show advanced settings
- Navigate down to the "HTTPS/SSL" section a click on "Manage certificates"
- In the resulting pop-up, selected the "Trusted Root Certification Authorities" tab and then click the "Import..." button. The certificate import wizard will pop up. Click "Next >"
- Follow through the rest of the dialog until your certificate has been imported. You will be promted to enter the password you used for the export password. Once imported instead of getting an unknown CA error when navigating to your HTTPS pages you will see the hallowed green padlock.
Generating Private Keys And Signed Certificates From The Test CA
I ended creating a script to automate this as well. It has quite a few shortcomings but remember, this is just a test script... I'm playing around!
The script defines one function called "generate_certificate_and_keys". It's supposed to automate as much of the certificate issuing as possible for me (I only want a client and server certificate/private key in my little test scenario). Given this, the function only lets you set the common name and password for the certificate. It could easily be extended to accept all the certificate information as parameters rather than the hard coded details...
Lets start with the function header with all the initialisation stuff.
function generate_certificate_and_keys { if [ $# -ne 3 ]; then echo_error "### ERROR: Must call with 3 parameters: target, passphrase, cn" return 1 fi : ${YOUR_CA_DIR:=~/my_ca} # Default dir for CA TARGET=$1 PASSPHRASE=$2 CN=$3 DEBUG_FILE=${TARGET}_debug.txt PRV_KEY_FILE=${TARGET}_priv_key.pem CERT_REQ_FILE=${TARGET}_key_request.pem echo "\n\n######################################################" echo " Generating a key set and certificate for $TARGET" echo " - Your_CA_DIR = $YOUR_CA_DIR" echo "######################################################" if [ ! -d $YORU_CA_DIR ]; then echo "### ERROR: The directory $YOUR_CA_DIR does not exist. You must create the" echo " certificate authority first by running ./make_ca.sh" return 1 fi
So, pretty simple so far. The function makes sure the CA exists and accepts 3 arguments, the name of the target we're gerating for, the passphrase for the target's private key, and the common name to input into the certificate.
Because in the CA generation example we used the environment variable OPENSSL_CONF to specify a config file for the certificate generation whilst creating the CA, we must be sure to unset it so that the default OpenSSL configuration will be used.
unset OPENSSL_CONF #< Make sure we're using the default OpenSSL configuration
Now we want to go ahead and create the private key for our target and a certificate request, which will be "sent" to our CA to be signed.
# # Generate two files # 1. ${CERT_REQ_FILE} contains the certificate request # 2. ${PRV_KEY_FILE} contains the private key associated with the public key # embedded in `${TARGET}_key_request.pem` expect - <<EOF > $DEBUG_FILE spawn openssl req -newkey rsa:2048 -keyout $PRV_KEY_FILE -keyform PEM -out $CERT_REQ_FILE -outform PEM expect "PEM pass phrase:" send "$PASSPHRASE\r" expect "PEM pass phrase:" send "$PASSPHRASE\r" expect "Country Name" send "Your Country\r" expect "State or Province Name" send "Your State\r" expect "Locality Name" send "Your Locality\r" expect "Organization Name" send "Your Test $TARGET Request\r" expect "Organizational Unit Name" send "Your Unit Name\r" expect "Common Name" send "$CN\r" expect "Email Address" send "someone@your-company.com\r" expect "A challenge password" send "QWERTY\r" expect "An optional company name" send ".\r" expect eof EOF if [ ! -f $PRIV_KEY_FILE ] || [ ! -f $CERT_REQ_FILE ]; then echo_error "### ERROR: Failed to generate the certificate request for $TARGET properly" return 1 fi
Now that the target has it's own private key and a certificate request (which will contain it's public key), get the CA to signed the certificate with it's private key.
Note that once again, because we are using our own CA we must export OPENSSL_CONF (or use the -config option) to ensure the OpenSSL tools use our CA's specific configuration.
OPENSSL_CONF=${YOUR_CA_DIR}/openssl.cnf export OPENSSL_CONF
Our CA configuration now being in play we can run the certificate request processing. I found that it seemed to generate certificates which were only valid from tomorrow (relative to current system time). I wanted to use them straight away so I forced a specific start date...
expect - <<EOF >> $DEBUG_FILE puts [concat "OPENSSL_CONF =" \$::env(OPENSSL_CONF)] # Set certificate startdate to start of TODAY (otherwise they seem to be dated for tomorrow?) set startdate [clock format [clock seconds] -format %y%m01000000Z] puts [concat "START DATE IS " \$startdate] spawn openssl ca -in ${TARGET}_key_request.pem -startdate "\$startdate" # Pass phrase is for the CA's certficate expect "Enter pass phrase" send "CA-certificates-password (see section on Creating The CA)\r" expect "Sign the certificate?" send "y\r" expect "1 out of 1 certificate requests certified, commit?" send "y\r" expect eof EOF
At this point several things have happened:
- Whilst creating the certificate, in order to generate certificates with a unique serial number, the file serial.txt was consulted. It is a requirement of X509 certificates that they contain a serial number unique to the CA. Therefore, the CA common name and serial number provide a globaly unique identified for the certificate. The hexadecimal number in serial.txt is used as the new certificates serial number. The file is then saved as serial.txt.old and a new serial.txt is written with the next sequential serial number to be used in the next certificate generation. So, assuming no simultaneous/concurrent use of the CA (which I'm not sure is supported anyway), I think I'm safe to find the most recently generated certificate using the text in serial.txt.old - the CA stored the certificate in certs/<serial-number>.pem.
- The CA utility has updated it's index.txt file, adding in information about the certificate that has just been generated.
SERNO=$(cat ${YOUR_CA_DIR}/serial.old) CA_SERNO_FILE=${YOUR_CA_DIR}/certs/${SERNO}.pem cp ${CA_SERNO_FILE} ${TARGET}_certificate.pem if [ $? -ne 0 ]; then echo_ERROR "### ERROR: Failed to access the ${TARGET} certificate" return 1 fi echo "The certificate is now available in ${YOUR_CA_DIR}/certs/${SERNO}.pem and will be copied to ${TARGET}_certificate.pem"
The above snipped copies the newly created certificate from certs/<serial-number>.pem into a more readable directory name and certificate file name.
The next thing I do is to do a quick sanity check to make sure that the target's private key and its new certificate that I copied to ${TARGET}_certificate.pem do indeed belong together. To do this I get the OpenSSL utilities to print out the moduli for the private key and the certificate. If these match then all's good. This method was taken from an article on command line fanataic.
PRIV_KEY_MODULUS=$(openssl rsa -in ${PRV_KEY_FILE} -noout -modulus -passin pass:$PASSPHRASE) CERT_MODULUS=$(openssl x509 -in ${TARGET}_certificate.pem -noout -modulus) if [ "$PRIV_KEY_MODULUS" != "$CERT_MODULUS" ]; then echo "### ERROR: The private key and certificate for ${TARGET} do not appear to be matched" echo " pkey modulus is $PRIV_KEY_MODULUS" echo " cert modulus is $CERT_MODULUS" return 1 fi
The next job is to convert the private key to an RSA private key. I'm doing this because TclHttpd requires the server private key in this format...
expect - <<EOF >> $DEBUG_FILE spawn openssl rsa -in ${PRV_KEY_FILE} -inform PEM -out ${TARGET}_priv_key.rsa expect "Enter pass phrase for ${PRV_KEY_FILE}" send "${PASSPHRASE}\r" expect eof EOF if [ ! -f ${TARGET}_priv_key.rsa ]; then echo_error "### ERROR: Failed to generate ${TARGET}_priv_key.rsa" return 1 fi
And then finally create a PKCS12 certificate that includes both the public and private key for the target. I used this when trying out client authentication. Could install this to Chrome (in the same way as previously described but using the "Personal" tab in the Certificates dialog) and then have the client authenticate with the server
expect - <<EOF >> $DEBUG_FILE spawn openssl pkcs12 -export -inkey ${PRV_KEY_FILE} -in ${TARGET}_certificate.pem -out ${TARGET}_certificate_with_priv_key.p12 -name "Test${TARGET}Key" expect "Enter pass phrase" send "${PASSPHRASE}\r" expect "Enter Export Password:" send "${PASSPHRASE}\r" expect "Enter Export Password:" send "${PASSPHRASE}\r" expect eof EOF if [ ! -f ${TARGET}_certificate_with_priv_key.p12 ]; then echo_error "### ERROR: Failed to generate P12 certificate-with-private-key" return 1 fi
The last thing I did was to copy all of the generated certificates into a location under the CA-root-dir/certs directory... CA-root-dir/certs/<serno>_<target name>/...
# # Now move all these files back into the CA under a directory "full_certs" so that # they can all live in one place that we can then keep track of in SVN (and not clutter # up this directory) CERTS_DEST_DIR=${YOUR_CA_DIR}/certs/${SERNO}_$(echo $TARGET | tr " " "_") CERT_FILES="debug.txt key_request.pem priv_key.pem certificate.pem certificate_with_priv_key.p12 priv_key.rsa" echo "Moving certificates to $CERTS_DEST_DIR" mkdir -p $CERTS_DEST_DIR for file in $CERT_FILES do file=${TARGET}_${file} mv ${file} $CERTS_DEST_DIR if [ $? -ne 0 ]; then echo_error "### ERROR: Failed to move ${file} to $CERTS_DEST_DIR" exit 1 fi done echo_bold "\nAll $TARGET certificates now reside in $CERTS_DEST_DIR" return 0 }
Now to generate my CA automatically and the client and server certificate and private keys that I want for my little test, I just use the following.
#!/bin/bash source gen_keys.sh : ${YOUR_CA_DIR:=../Some-Default-Dir} # Default dir for CA # Create the Certificate Autority with self-signed certificate ./gen_ca.sh if [ $? -ne 0 ]; then exit 1; fi # Generate certificates and private keys for a test server generate_certificate_and_keys "server" "password" "xx.xx.xx.xx" if [ $? -ne 0 ]; then exit 1; fi # Generate certificates and private keys for a test client generate_certificate_and_keys "client" "password" "" if [ $? -ne 0 ]; then exit 1; fi
Note that the common name (CN) used in the above examples, at least for the server, must match the actual domain name of the server. So if your server just has some IP address replace the "xx.xx.xx.xx" with its IP address. If it has a domain name replace with the root domain name.
Keys And Formats
ASN.1, BER and DER
ASN.1 refers to the ITU's Abstract Syntax Notation One: specification available from the ITU website.
The idea of ASN.1 notation is to provide a mechanism to specify complex datatypes in a manner that is independent of how the data is going to be transmitted/represented. In other words, using ASN.1 notation, complex datatypes can be specified in a standard language. Then when we want to send the datatype it can be converted from this standard language to any representation needed, like smoke signals, bytes on a wire, etc. The point is that the application is decoupled from the transfer. The application knows what it wants to do with data and what sort of data it is but doesn't need to concern itself with the inifinite different representations that might be needed when transferring this data to other people on arbirary platforms over arbirary communications channels.
ASN.1 lets one build up complex datatypes from a set of simple types. The character set it uses is very limited and includes only the characters "A-Z", "a-z". "0-9" and a set of punctutation characters such as ";:=,{}<.()[]-". As such it is very portable in the sense that everyone knows how to store and represent an ASCII character! Related to cryptography, this is the format in which some keys are written. It must, for exampe, be used when writing keys using X.509 and PKCS #8 [6].
So, ASN.1 is just an description of a dataset. The actual dataset itself constitues the private/public key, maybe some information about it, possibly encrypted with a password. The dataset and its structure depends on who produced the key set and whether they use their own proprietary format or a standard open format.
BER referes to the Basic Encoding Rules for ASN.1 and gives one or more ways to represent any ASN.1 value as an octet string. [7]
DER refers to the Distinguished Encoding Rules for ASN.1. The rules are a subset of BER, and give exactly one way to represent any ASN.1 value as an octet string. DER is intended for applications in which a unique octet string encoding is needed, as is the case when a digital signature is computed on an ASN.1 value. [7]
Another good resource is the Layman's Guide to a Subset of ASN.1, BER, and DER. It has the following to say:
... a service at one layer may require transfer of certain abstract objects between computers; a lower layer may provide transfer services for strings of ones and zeroes, using encoding rules to transform the abstract objects into such strings ...
... OSI's method of specifying abstract objects is called ASN.1 (Abstract Syntax Notation One, defined in X.208), and one set of rules for representing such objects as strings of ones and zeros is called the BER (Basic Encoding Rules, defined in X.209). ASN.1 is a flexible notation that allows one to define a variety data types, from simple types such as integers and bit strings to structured types such as sets and sequences, as well as complex types defined in terms of others. BER describes how to represent or encode values of each ASN.1 type as a string of eight-bit octets. There is generally more than one way to BER-encode a given value. Another set of rules, called the Distinguished Encoding Rules (DER), which is a subset of BER, gives a unique encoding to each ASN.1 value ...
PEM Files
PEM stands for Privacy-enhanced Electronic Mail.
CER, CRT, DER Files
PKCS Files
OpenSSH
For information, at your command prompt, type man ssh_config. You will probably find your OpenSSH configuration file at ~/.ssh/config.
Partly, the configuration file maps host to private-key file used. The format you will see is something like:
Host 192.168.1.* alias_to_this_address IdentityFile ~/.ssh/your_key_file_name User your-user-name
This is how OpenSSH knows to use the key ~/.ssh/your_key_file_name when you SSH to a host at say 192.168.1.2 as the user "your-user-name". | https://jehtech.com/ssl.html | CC-MAIN-2021-49 | refinedweb | 11,124 | 54.73 |
The asynchronous task model has been much improved in JavaFX 1.2. Not only does it have a simple and elegant API, but several of the asynchronous tasks (e.g. HttpRequest) in the JavaFX API now use this model. In addition, there are a couple of progress UI controls introduced in JavaFX 1.2, and they work well with the new asynchronous model.
The JavaFX package that contains the asynchronous-related classes is javafx.async, and a good class to take a look at first is JavaTaskBase. This class enables you to start a task, check on its progress, and be notified when the task has completed. JavaFX is single threaded, so the task that is started is one that you define in a Java class (that implements the javafx.async.RunnableFuture interface). The Java class can then call functions of your JavaFX classes, demonstrating bi-directional integration between JavaFX and Java.
Take a look at the screenshot of a simple example on which Stephen Chin and I collaborated, that demonstrates the capabilities just described:
Clicking on the Start the Task button starts a new task and adds a ProgressBar into the scene (up to a maximum of eight, the built-in limit on the number of tasks in JavaFX that can be run in parallel). When a task completes, its ProgressBar is removed from the scene. Take a look at the AsyncProgressMain.fx script below to understand how the UI is drawn, and how the tasks are started:
package projavafx.asyncprogress.ui;
import projavafx.asyncprogress.model.*;
import javafx.scene.*;
import javafx.scene.control.*;
import javafx.scene.layout.*;
import javafx.stage.Stage;
var vbox:VBox;
function startTask() {
def progressBar:ProgressIndicator = ProgressBar {
progress: bind taskController.floatProgress
}
def taskController:TaskController = TaskController {
maxProg: 100
onStart:function():Void {
insert progressBar into vbox.content;
}
onDone:function():Void {
delete progressBar from vbox.content;
}
}
taskController.start();
}
Stage {
title: "Async and Progress Example"
scene: Scene {
width: 200
height: 250
content: vbox = VBox {
layoutX: 10
layoutY: 10
spacing: 10
content: [
Button {
text: "Start the task"
action: function():Void {
println("Starting TaskController");
startTask();
}
}
]
}
}
}
Now take a look at the TaskController class that is instantiated in the script above when the button is clicked:
package projavafx.asyncprogress.model;
import projavafx.asyncprogress.model.Ticker;
import javafx.async.RunnableFuture;
import javafx.async.JavaTaskBase;
public class TaskController extends JavaTaskBase, TickerHandler {
public var maxProg:Integer;
public-read protected var floatProgress:Number;
override public function create():RunnableFuture {
maxProgress = maxProg;
return new Ticker(this);
}
override public function onTick(tickNum:Integer):Void {
progress = tickNum;
floatProgress = percentDone / 100.0;
}
}
Note that the TaskController class above extends the JavaTaskBase class mentioned earlier. We'll use its capabilities to start and monitor the task. When the create function of the TaskController instance is called by the JavaFX runtime (as a result of its start function being called) it creates a new instance of the Ticker class that we developed in Java. As shown in the code below, the Ticker class implements the RunnableFuture interface mentioned earlier.
package projavafx.asyncprogress.model;
import com.sun.javafx.functions.Function0;
import javafx.lang.FX;
import javafx.async.RunnableFuture;
public class Ticker implements RunnableFuture {
TickerHandler tickerHandler;
public Ticker (TickerHandler tickerHandler) {
this.tickerHandler = tickerHandler;
}
@Override public void run() {
for (int i = 1; i <= 100; i++) {
if (tickerHandler != null) {
final int tick = i;
FX.deferAction(new Function0<Void>() {
@Override public Void invoke() {
tickerHandler.onTick(tick);
return null;
}
});
}
try {
Thread.sleep(200);
}
catch (InterruptedException te) {}
}
System.out.println("Ticker#run is finished");
}
}
The Ticker class shown above counts to 100, sleeping for a couple hundred milliseconds each iteration. During each iteration it calls the onTick function of the TaskController class shown previously, which implements the TickerHandler interface. Here's the code for that interface:
package projavafx.asyncprogress.model;
public interface TickerHandler {
void onTick(int tickNum);
}
For an excellent explanation (including UML sequence diagrams) of asynchronous tasks in JavaFX, see Baechul's JavaFX 1.2 Async blog post. Also take a look at the Richard Bair and Jasper Potts fxexperience.com post on the subject.
For those interested in JavaFX Mobile, here's a short video clip of this example running on an HTC Diamond phone. You may want to mute the audio, as I didn't mute the microphone when creating this video. ;-)
As I mentioned in the dynamic Timeline values post, I'm trying to encourage you to compile and run the examples in this blog. However, please leave a comment if you'd like a Java Web Start link.
Regards,
Jim Weaver
The example doesn't work at all. I'm using eclipse and everythere are erros!
Posted by: Bruce | October 13, 2009 at 10:10 AM
Hi Jim,
I have read your artical which is pretty much informative.
with a reference of your example i have create one sample application which asynchronously call the webservices.
instread of ticker class ( in your example) i have implement RunnableFuture interface to a class [Control.java] which is responsible for making SOAP based webservices calls.
and i have override the run method in which i am making webservice request and getting back the response in a TickerHandler.
so my main.fx calls taskController.start() to start the thread.
But unfortunately when i click on a button to call webservices it hangs for a which then a response is return.
so eventually i am failed to create asynchronous web service request..
so please do help me how i would achieve this so that my other javafx component wont hang....
Nihar (ntimesc@gmail.com)
Posted by: Nihar | September 05, 2009 at 07:43 AM
Hi Jim ,
I have read your article regarding Asynchronicty in JavaFX. it is really a helpful and informative article.
As of now i am working on a application which extensively making webservices calls to perform ongoing computation i have chose javafx as a user interface platform. after reading your article i have tried to create new thread as and when a new webservice request is made.
i have implemented RunnableFuture in the class which is making webservices request.
so does this approach work fine for ongoing computational webservices with javafx ui ?
Posted by: Nihar | September 05, 2009 at 05:36 AM
I copied and pasted the example code shown into a NetBeans Project, FX 1.2 (Linux). It won't compile. And the block seems to happen at the mixin Inheritance feature, which I suspect, is related to the interface. TaskController insists that only one non-mixin FX class can be extended. FX doesn't like the interface at all. The parsing keeps insisting we left out a semicolon on the interface.
Forgive me for asking this here. But I did buy the first Apress FX book, got excited, then Java dumps Linux. Now Linux is back on board with 1.2 and I'm trying to learn JavaFX vers. 1.2. Each FX version had profound vocubulary changes that requires re-engineering the examples I do find on the web.
Posted by: VictorCharlie | July 03, 2009 at 04:37 PM | http://learnjavafx.typepad.com/weblog/2009/06/background-tasks-in-javafx.html | CC-MAIN-2017-39 | refinedweb | 1,163 | 57.67 |
In this section we will discuss about the Writer class in Java.
java.io.Writer is an abstract class which provides some methods to write characters to the stream. Object of this class can't be created directly. This class is a super class of all the classes which are involved in or to facilitate to write (output) the character streams. Classes descended from this class may overrides the methods of this class but the method write(char[], int, int), flush() and close() must be implemented by the subclasses.
Constructor Detail
Methods Detail
Example
To demonstrate how to use Writer to write into the output stream I am giving a simple example. In this example I have created a Java class named JavaWriterExample.java where I have created an object of Writer using BufferedWriter to write to the output stream and created object of FileWriter to write the stream to the file. Further in the example I have flushed and closed the stream using finally block.
Source Code
JavaWriterExample.java
import java.io.Writer;
import java.io.FileWriter;
import java.io.BufferedWriter;
import java.io.IOException;

public class JavaWriterExample {
    public static void main(String args[]) {
        char[] ch = {'a', 'b', 'c'};
        FileWriter fw = null;
        Writer wr = null;
        try {
            fw = new FileWriter("test.txt");
            wr = new BufferedWriter(fw);
            wr.write("Welcome to Roseindia");
            wr.write(" ");
            wr.write(ch);
            System.out.println("\n File successfully written");
        } catch (IOException ioe) {
            System.out.println(ioe);
        } finally {
            if (wr != null) {
                try {
                    // Flushing and closing the BufferedWriter also flushes
                    // and closes the underlying FileWriter.
                    wr.flush();
                    wr.close();
                } catch (Exception e) {
                    System.out.println(e);
                }
            }
        } // finally closed
    } // main closed
} // class closed
Output
When you compile and execute the above example, the output will be as follows:
Once you see the output above, open the directory whose path you gave when creating the file (say test.txt). You will see that a file with the specified name has been created and contains the text written by the Java program.
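On Java 7 and later, the same program is usually written with try-with-resources, which flushes and closes the writers automatically. The following variant is my addition, not part of the original tutorial:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class JavaWriterExample2 {
    public static void main(String[] args) {
        char[] ch = {'a', 'b', 'c'};
        // The try-with-resources statement closes wr (and the underlying
        // FileWriter) automatically, even if an exception is thrown.
        try (Writer wr = new BufferedWriter(new FileWriter("test.txt"))) {
            wr.write("Welcome to Roseindia");
            wr.write(" ");
            wr.write(ch);
            System.out.println("File successfully written");
        } catch (IOException ioe) {
            System.out.println(ioe);
        }
    }
}
```

This removes the need for the explicit finally block shown above.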
Documentation inconsistencies: textView does not escape HTML
The documentation of
textView says that 'html is automatically escaped'. However,
viewInformation "textView" [ViewUsing id textView] "<strong>strong text</strong>" shows 'strong text' rather than '<strong>strong text</strong>', which is what I would expect. The
textField and
textArea editors do work properly, because HTML is not rendered in input fields.
Also in the documentation of iTasks.UI.Editor.Controls, many functions have 'Supported attributes:' in their documentation without any attributes listed. Does this have any use?
There are more documentation inconsistencies. For example
assign in iTasks.Extensions.User has three
@param fields but only two parameters, and it's unclear to me how the first two
@param fields relate to the first parameter. It would be relatively easy to find functions where the number of
@param fields and the arity don't agree, but then you still miss functions of which the type has changed but not the arity. Perhaps it's possible to use the git history to find commits where a type changed but the documentation didn't, to find all inconsistencies.
MWE for the textView:
import iTasks Start w = startEngine (allTasks [textv, textf, texta]) w where textv = viewInformation "textView" [ViewUsing id textView] "<strong>strong text</strong>" textf = viewInformation "textField" [ViewUsing id textField] "<strong>strong text</strong>" texta = viewInformation "textArea" [ViewUsing id textArea] "<strong>strong text</strong>" | https://gitlab.science.ru.nl/clean-and-itasks/iTasks-SDK/-/issues/188 | CC-MAIN-2021-04 | refinedweb | 227 | 53.92 |
Edge Cases and Limitations
This topic contains information about edge cases of using selectors and selectors API limitations.
Calling Selectors from Node.js Callbacks #
Selectors need access to the test controller to be executed. When called right from the test function, they implicitly obtain the test controller.
However, if you need to call a selector from a Node.js callback that fires during the test run, you have to manually bind it to the test controller.
Use the boundTestRun option for this.
import { http } from 'http'; import { Selector } from 'testcafe'; fixture `My fixture` .page ``; const elementWithId = Selector(id => document.getElementById(id)); test('Title changed', async t => { const boundSelector = elementWithId.with({ boundTestRun: t }); // Performs an HTTP request that changes the article title on the page. // Resolves to a value indicating whether the title has been changed. const match = await new Promise(resolve => { const req = http.request(/* request options */, res => { if(res.statusCode === 200) { boundSelector('article-title').then(titleEl => { resolve(titleEl.textContent === 'New title'); }); } }); req.write(title) req.end(); }); await t.expect(match).ok(); });
This approach only works for Node.js callbacks that fire during the test run. To ensure that the test function does not finish before the callback is executed, suspend the test until the callback fires. You can do this by introducing a promise and synchronously waiting for it to complete as shown in the example above.
Limitations #
You cannot use generators or
async/awaitsyntax within selectors.
Selectors cannot access variables defined in the outer scope in test code. However, you can use arguments to pass data inside the selectors, except for those that are self-invoked. They cannot take any parameters from the outside.
Likewise, the return value is the only way to obtain data from selectors. | https://devexpress.github.io/testcafe/documentation/test-api/selecting-page-elements/selectors/edge-cases-and-limitations.html | CC-MAIN-2020-05 | refinedweb | 289 | 52.76 |
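To see why closures cannot survive, note that selector functions are sent to the browser as strings and re-evaluated there. Here is a plain Node.js sketch of what is lost in that round trip (my illustration; no TestCafe is required to run it):

```javascript
// Client-side functions are serialized via toString(), which keeps the
// identifier `outer` but not the value it was bound to.
const outer = 42;
const fn = () => outer;

const serialized = fn.toString();      // "() => outer"
console.log(serialized);

// Re-evaluating the string in a fresh scope (as the browser would)
// fails, because the closure variable no longer exists there.
try {
    const revived = new Function('return (' + serialized + ');')();
    revived();
} catch (e) {
    console.log(e.name);               // ReferenceError
}
```

This is why data must be passed into selectors as arguments, and out of them as return values.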
Keys
The
keys variable defines Qtile’s key bindings. Individual key bindings are
defined with
libqtile.config.Key as demonstrated in the following
example. Note that you may specify more than one callback function.
from libqtile.config import Key keys = [ # Pressing "Meta + Shift + a". Key(["mod4", "shift"], "a", callback, ...), # Pressing "Control + p". Key(["control"], "p", callback, ...), # Pressing "Meta + Tab". Key(["mod4", "mod1"], "Tab", callback, ...), ]
The above may also be written more concisely with the help of the
libqtile.config.EzKey helper class. The following example is
functionally equivalent to the above:
from libqtile.config import EzKey as Key keys = [ Key("M-S-a", callback, ...), Key("C-p", callback, ...), Key("M-A-<Tab>", callback, ...), ]
The
EzKey modifier keys (i.e.
MASC) can be overwritten through the
EzKey.modifier_keys dictionary. The defaults are:
modifier_keys = { 'M': 'mod4', 'A': 'mod1', 'S': 'shift', 'C': 'control', }
Callbacks can also be configured to work only under certain conditions by using
the
when() method. Currently, two conditions are supported:
from libqtile.config import Key keys = [ # Only trigger callback for a specific layout Key( [mod, 'shift'], "j", lazy.layout.grow().when(layout='verticaltile'), lazy.layout.grow_down().when(layout='columns') ), # Limit action to when the current window is not floating (default True) Key([mod], "f", lazy.window.toggle_fullscreen().when(when_floating=False)) ]
KeyChords
Qtile also allows sequences of keys to trigger callbacks. In Qtile, these
sequences are known as chords and are defined with
libqtile.config.KeyChord. Chords are added to the
keys section of
the config file.
from libqtile.config import Key, KeyChord keys = [ KeyChord([mod], "z", [ Key([], "x", lazy.spawn("xterm")) ]) ]
The above code will launch xterm when the user presses Mod + z, followed by x.
Warning
Users should note that key chords are aborted by pressing <escape>. In the above example, if the user presses Mod + z, any following key presses will still be sent to the currently focussed window. If <escape> has not been pressed, the next press of x will launch xterm.
Modes
Chords can optionally specify a “mode”. When this is done, the mode will remain active until the user presses <escape>. This can be useful for configuring a subset of commands for particular situations (i.e. similar to vim modes).
from libqtile.config import Key, KeyChord keys = [ KeyChord([mod], "z", [ Key([], "g", lazy.layout.grow()), Key([], "s", lazy.layout.shrink()), Key([], "n", lazy.layout.normalize()), Key([], "m", lazy.layout.maximize())], mode="Windows" ) ]
In the above example, pressing Mod + z triggers the “Windows” mode. Users can then resize windows by just pressing g (to grow the window), s to shrink it etc. as many times as needed. To exit the mode, press <escape>.
Note
If using modes, users may also wish to use the Chord widget
(
libqtile.widget.chord.Chord) as this will display the name of the
currently active mode on the bar.
Chains
Chords can also be chained to make even longer sequences.
from libqtile.config import Key, KeyChord keys = [ KeyChord([mod], "z", [ KeyChord([], "x", [ Key([], "c", lazy.spawn("xterm")) ]) ]) ]
Modes can also be added to chains if required. The following example
demonstrates the behaviour when using the
mode argument in chains:
from libqtile.config import Key, KeyChord keys = [ KeyChord([mod], "z", [ KeyChord([], "y", [ KeyChord([], "x", [ Key([], "c", lazy.spawn("xterm")) ], mode="inner") ]) ], mode="outer") ]
After pressing Mod+z y x c, the “inner” mode will remain active. When pressing
<escape>, the “inner” mode is exited. Since the mode in between does not have
mode set, it is also left. Arriving at the “outer” mode (which has this
argument set) stops the “leave” action and “outer” now becomes the active mode.
Note
If you want to bind a custom key to leave the current mode (e.g. Control +
G in addition to
<escape>), you can specify
lazy.ungrab_chord()
as the key action. To leave all modes and return to the root bindings, use
lazy.ungrab_all_chords().
Modifiers
On most systems
mod1 is the Alt key - you can see which modifiers, which are
enclosed in a list, map to which keys on your system by running the
xmodmap
command. This example binds
Alt-k to the “down” command on the current
layout. This command is standard on all the included layouts, and switches to
the next window (where “next” is defined differently in different layouts). The
matching “up” command switches to the previous window.
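The Alt-k binding described above can be written as follows. This is a config fragment of my own, assuming the usual `from libqtile.lazy import lazy` import used elsewhere in these docs:

```python
from libqtile.config import Key
from libqtile.lazy import lazy

keys = [
    # Alt + k: switch to the next window in the current layout
    Key(["mod1"], "k", lazy.layout.down()),
    # Alt + j: the matching "up" command switches to the previous window
    Key(["mod1"], "j", lazy.layout.up()),
]
```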
Modifiers include: “shift”, “lock”, “control”, “mod1”, “mod2”, “mod3”, “mod4”, and “mod5”. They can be used in combination by appending more than one modifier to the list:
Key( ["mod1", "control"], "k", lazy.layout.shuffle_down() )
Special keys
These are the most commonly used special keys. For a complete list, please see
the code.
You can create bindings on them just like for the regular keys. For example
Key(["mod1"], "F4", lazy.window.kill()).
Reference
Key
- class libqtile.config.Key(modifiers: List[str], key: str, *commands, desc: str = '')
Defines a keybinding.
- Parameters
- modifiers:
A list of modifier specifications. Modifier specifications are one of: “shift”, “lock”, “control”, “mod1”, “mod2”, “mod3”, “mod4”, “mod5”.
- key:
A key specification, e.g. “a”, “Tab”, “Return”, “space”.
- commands:
A list of lazy command objects generated with the lazy.lazy helper. If multiple Call objects are specified, they are run in sequence.
- desc:
description to be added to the key binding
KeyChord
- class libqtile.config.KeyChord(modifiers: List[str], key: str, submappings: List[Union[libqtile.config.Key, libqtile.config.KeyChord]], mode: str = '')
Defines a key chord (a vim-like mode).
- Parameters
- modifiers:
A list of modifier specifications. Modifier specifications are one of: “shift”, “lock”, “control”, “mod1”, “mod2”, “mod3”, “mod4”, “mod5”.
- key:
A key specification, e.g. “a”, “Tab”, “Return”, “space”.
- submappings:
A list of Key or KeyChord declarations to bind in this chord.
- mode:
A string with vim like mode name. If it’s set, the chord mode will not be left after a keystroke (except for Esc which always leaves the current chord/mode). | http://docs.qtile.org/en/latest/manual/config/keys.html | CC-MAIN-2021-39 | refinedweb | 975 | 60.61 |
Provided by: manpages-dev_5.02-1_all
NAME
getsid - get session ID
SYNOPSIS
#include <sys/types.h>
#include <unistd.h>

pid_t getsid(pid_t pid);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

getsid():
    _XOPEN_SOURCE >= 500
        || /* Since glibc 2.12: */ _POSIX_C_SOURCE >= 200809L
DESCRIPTION
getsid(0) returns the session ID of the calling process. getsid() returns the session ID of the process with process ID pid. If pid is 0, getsid() returns the session ID of the calling process.
RETURN VALUE
On success, a session ID is returned. On error, (pid_t) -1 will be returned, and errno is set appropriately.
ERRORS
EPERM  A process with process ID pid exists, but it is not in the same
       session as the calling process, and the implementation considers
       this an error.

ESRCH  No process with process ID pid was found.
VERSIONS
This system call is available on Linux since version 2.0.
CONFORMING TO
POSIX.1-2001, POSIX.1-2008, SVr4.
NOTES
Linux does not return EPERM. See credentials(7) for a description of sessions and session IDs.
SEE ALSO
getpgid(2), setsid(2), credentials(7)
COLOPHON
This page is part of release 5.02 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | http://manpages.ubuntu.com/manpages/eoan/man2/getsid.2.html | CC-MAIN-2019-51 | refinedweb | 216 | 68.97 |
Integrating functions in python
Posted February 02, 2013 at 09:00 AM | categories: math, python | tags: | View Comments
Updated February 27, 2013 at 02:54 PM
Problem statement
find the integral of a function f(x) from a to b i.e.
$$\int_a^b f(x) dx$$
In python we use numerical quadrature to achieve this with the scipy.integrate.quad command.
As a specific example, let's integrate
$$y=x^2$$
from x=0 to x=1. You should be able to work out that the answer is 1/3.
from scipy.integrate import quad def integrand(x): return x**2 ans, err = quad(integrand, 0, 1) print ans
0.333333333333
1 double integrals
We use the scipy.integrate.dblquad command
Integrate \(f(x,y)=y sin(x)+x cos(y)\) over
\(\pi <= x <= 2\pi\)
\(0 <= y <= \pi\)
i.e.
\(\int_{x=\pi}^{2\pi}\int_{y=0}^{\pi}y sin(x)+x cos(y)dydx\)
The syntax in dblquad is a bit more complicated than in Matlab. We have to provide callable functions for the range of the y-variable. Here they are constants, so we create lambda functions that return the constants. Also, note that the order of arguments in the integrand is different than in Matlab.
from scipy.integrate import dblquad import numpy as np def integrand(y, x): 'y must be the first argument, and x the second.' return y * np.sin(x) + x * np.cos(y) ans, err = dblquad(integrand, np.pi, 2*np.pi, lambda x: 0, lambda x: np.pi) print ans
-9.86960440109
For triple integrals we use the tplquad command to integrate \(f(x,y,z)=y sin(x)+z cos(x)\) over the region
\(0 <= x <= \pi\)
\(0 <= y <= 1\)
\(-1 <= z <= 1\)
from scipy.integrate import tplquad import numpy as np def integrand(z, y, x): return y * np.sin(x) + z * np.cos(x) ans, err = tplquad(integrand, 0, np.pi, # x limits lambda x: 0, lambda x: 1, # y limits lambda x,y: -1, lambda x,y: 1) # z limits print ans
2.0
2 Summary
scipy.integrate offers the same basic functionality as Matlab does. The syntax differs significantly for these simple examples, but the use of functions for the limits enables freedom to integrate over non-constant limits.
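As a quick extra illustration of non-constant limits (my addition, not part of the original post), integrate f(x, y) = 1 over the triangle 0 <= x <= 1, 0 <= y <= x; the result is the triangle's area, 1/2:

```python
from scipy.integrate import dblquad

def integrand(y, x):
    return 1.0

# The upper y-limit is a function of x, so we pass a callable for it.
ans, err = dblquad(integrand, 0, 1, lambda x: 0, lambda x: x)
print(ans)  # approximately 0.5
```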
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/02/02/Integrating-functions-in-python/ | CC-MAIN-2020-05 | refinedweb | 394 | 67.86 |
First time visiting for their Mother's Day buffet. Excellent food and excellent service. We will for sure be back!
We have been coming here for the Monday night buffet quite often; there are many hot dishes to choose from on the buffet table. On Monday night the food is generally milder to suit everybody's taste. This time I decided to try a regular dinner night... the food was just as good and I could choose from spicy to mild curries. The service was quick since it was a quiet evening; in general a busy night would take a little bit more time for the food to be served, but the drinks would always be promptly brought to the table first.
Had their lunch buffet, which was awesome home-made-style Indian food. They don't try to be fine-dining fancy like all those new Indian places that charge a lot. The Taj Mahal is a great sort of hidden spot even though it's right off MacLeod; you have to drive in the back alley. But I recommend it to anyone looking for good-value home-made-style Indian food.
Taj Mahal is the best Indian food that I ever had. Everything is delicious: Nam, cracker (appetizer), prawn masala, vegetarian dishes, saffron rice, vegetarian rice. So many flavors and spices. I highly recommend this restaurant.
Party of five for dinner - all of the food was top-notch. We had a selection of appetizers and then mains for all. Everyone was very pleased with their choices and complimented the quality, quantity and flavour of the food. We have not ever gone... for the buffet - that is our plan for our next visit.
We went to Taj Mahal for supper on a friend's suggestion. Parking is at the back of the building. You walk in through the back door and go downstairs. At first the restaurant looks a little dated, in need of a reno. That is where it ends; you're not there to eat the decor. We arrived at 6pm and there was only one other group besides the two of us. We were seated right away and ordered drinks and a mixed platter of appies. Just a heads up: the mixed platter feeds 4. Everything was very tasty. Our main dishes were Chicken Chawla, lamb chops, coconut rice and garlic naan. Everything we had was delicious. But next time we go, which we will, we will not order the mixed platter plus a meal; it was just too much food. The Chicken Chawla was tender chicken in a peppery based curry, just as advertised. My husband said the lamb chops were the best! The coconut rice was perfect, as was the garlic naan. We requested another order of naan to take home. We had the leftovers to take home for another meal. If you are going for the decor and fancy small portions, this is not the restaurant for you. If it is great Indian food and service, this is a place to try.
After trying many East Indian restaurants in Calgary, we decided to give this one a shot. The first thing we noticed is the wonderful front on Macleod Trail and what looked like extremely limited parking in the back alley. We then noticed that there is... a larger gravel area shared between the restaurant and a nightclub. Entering the main doors, the lobby allows entry to both places, the nightclub upstairs and the restaurant downstairs. We went down the narrow stairs into the basement and both of us thought, wow, this looks and feels like a basement. Very dim lighting and low ceilings. To add to this ambiance, we were seated in the corner. Water was served in stainless cups. The buffet looked great. There were several salads and chutneys as well as both vegetable and beef samosas. None of the dishes were very spicy but they were flavorful. I discussed the lack of spiciness with one of the men working and he indicated that not everyone enjoys spicy, including those from India. What was more interesting is that none of the dishes were very hot, more like lukewarm. They were all from chafing dishes on a steam table; however, they were not hot. I like the fact that they had some fresh cut fruit pieces, rice pudding AND Jalebi for dessert, unlike so many other places that offer only one item. Overall the food was good but, with so many others to try, it will be quite some time before we return (if ever).
Parking for the Taj is at the back, off of the alley that runs behind the building. A small group of us met here for the lunch buffet and we all went away full and satisfied. Pakoras, bajis, naan breads, chutneys, salads and my favorite Indian... dishes were all to be found. Butter chicken, chicken tandoori, beef curry and a lamb curry to go along with a couple of vegetable dishes made our choices pretty difficult, so I had a little of each. The food was tasty and none too spicy heat-wise. I have had better butter chicken, but overall, the food deserved a solid 4. One of the better Indian restaurants in Calgary... try it.
We stopped in for lunch on a cold, snowy Calgary day. We thoroughly enjoyed the buffet, especially the good selection of vegetarian dishes. A fresh basket of naan bread was provided at our table. The paneer curry was excellent. The vegetable curry was tasty, full of onions and peppers. To finish off, the rice pudding was sublime; I think it had hints of cardamom. Delicious food, good value, fine service. Note: there is a parking lot dedicated to Taj Mahal behind the restaurant on the opposite side of the alley.
Finance::Performance::Calc - Perl extension to calculate linked performance numbers.
use 5.006001;
use Finance::Performance::Calc qw(:all);

print ROR(bmv => 20_000, emv => 21_567.87);

# or, for convenience
Finance::Performance::Calc::return_percentages(1);
This module allows you to calculate performance for number of situations:
Single period performance, given a beginning market value (BMV) and ending market value (EMV) and optional cash flows in between.
Linked periodic performance; i.e, given three consecutive monthly returns, calculate the performance over the quarter.
"ized" performance. Given a rate of return that covers multiple periods, calculate the per-period return.
The formulae are taken from the book "Measuring Investment Performance". Author: David Spaulding. ISBN: 0-7863-1177-0.
NOTE Before using in a production environment, you should independently verify that the results obtained match what you expect. There may be unintentionally made assumptions in this code about timing and precision and what-not that do not match your assumptions with respect to the calcualtions.
my $ror = ROR($bmv, $emv, @flows);
Rate Of Return. Given a beginning market value ($bmv) and an ending market value ($emv), the rate of return is calculated as
($emv - $bmv)/$bmv
If there are intervening cash flows between the beginning market value and the ending market value, they are each represented by a hash ref containing the keys 'mvpcf' (market value prior to cash flow) and 'cf' (cash flow). The return is caculated by determining the rate of return between each cash flow and then linking the returns together. In this case, the beginning market value and the ending market value are treated as zero cash flow events; that is:
EMV is treated as {mvpcf => EMV, cf => 0} BMV is treated as {mvpcf => BMV, cf => 0}
The rate of return is in decimal form (0.02) for a return of two percent (2.0%).
my $ror = linked_ROR(0.02, -0.02, '3.5%');
Given previously calculated multiple rates of return, calculate the overall rate of return. The rates are linked using the algorithm:
o Convert any percentages to decimal (/100).
o Add 1.00 to each rate
o Multiply all the rates together
o Subtract 1.00 from the result
o Convert to percentage (* 100)
The function properly interprets strings with percents signs by dividing by 100 before using the value in any calculation.
The rate of return is in decimal form (0.02 for a return of two percent).
my $ror = ('7.0%',12);
Given a rate of return, and a number of periods, calculate the rate of return for each period. In our example, the 7.0% return is annual. We want to find the monthly return (12 months in one year). The calculation is:
((ROR + 1.0) ** (1.0/numPeriods)) - 1.00;
The rate of return is in decimal form (0.02 for a return of two percent).
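For illustration only, here is the same arithmetic written out in Python. This is my sketch of the linking and per-period formulas above, not part of the Perl module:

```python
def linked_ror(*rates):
    """Link periodic returns: convert percentages, add 1, multiply, subtract 1."""
    result = 1.0
    for r in rates:
        if isinstance(r, str) and r.endswith('%'):
            r = float(r[:-1]) / 100.0      # '3.5%' becomes 0.035
        result *= 1.0 + r
    return result - 1.0

def per_period_ror(ror, periods):
    """Per-period ("ized") return: ((ROR + 1.0) ** (1.0 / periods)) - 1.0."""
    return (ror + 1.0) ** (1.0 / periods) - 1.0

print(linked_ror(0.02, -0.02, '3.5%'))   # about 0.0346
print(per_period_ror(0.07, 12))          # about 0.00565 per month
```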
If this function is called with an argument of 1, all of the rates of return that are returned from the functions will be strings in percentage form; i.e. '7.45%' instead of 0.0745. Setting to 0 turns this off. This eliminates the need for doing the math and adding the percent yourself in a known display circumstance.
If called with arguments, the prior value is returned.
If called with no arguments, the current value is returned.
If this function is called with an argument of 1, the steps in ROR and link_ROR will be printed as they are executed. This does impose a speed penalty. Setting to 0 turns this off.
If called with arguments, the prior value is returned.
If called with no arguments, the current value is returned.
The module does not do any extra special handling with regard to precision. The first example above

  print ROR(20_000, 21_567.87);

happily returns a value of

  0.0783934999999999
on my Linux box. In matters of precision, you have two choices:
1. Use regular Perl scalar floats as arguments and round the final result.

2. Use numeric objects whose behavior can be controlled as arguments. The only requirements for such an object are that addition, subtraction, multiplication and division are overloaded for the object. For example:

  use Math::FixedPrecision;
  print ROR(new Math::FixedPrecision(20_000), new Math::FixedPrecision(21_567.87));
results in

  0.08
Two (and only two) decimal places are returned because the most precise term is the 21_567.87 value, two decimal places.
If you want more precision, ask for it:

  use Math::FixedPrecision;
  print ROR(new Math::FixedPrecision(20_000), new Math::FixedPrecision(21_567.87, 4));
resulting in

  0.0784
Be forewarned that using numeric objects as opposed to native Perl numeric data types can result in loss of speed (see the example script eg/eg.pl in the distribution). YMMV. Test before using in a production scenario.
Matthew O. Persico, <persicom@cpan.org>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.6.1 or, at your option, any later version of Perl 5 you may have available.
Before using in a production environment, you should independently verify that the results obtained match what you expect. I have to the best of my ability checked the results of this module, but I do not present any warranty or guarantee that the module is free from error.
CY22393 Programmable Clock
Warning
Playing around with the clock chip on the Ethernut 3 Board is fun, but also bears the risk of destroying your hardware due to overclocking.
Overview
The CY22393 is a very versatile part. It can create three completely different frequencies from a single clock source like a simple crystal or a fixed clock oscillator. The different output frequencies are generated in three independent PLL circuits, each of which is configured by a programmable 11-bit multiplier and an 8-bit divider. This gives a really large variety of available output frequencies.
The PLL outputs are routed via a programmable cross point switch to additional programmable output dividers. As a special feature the chip offers three input pins, S0, S1 and S2, which can be used to select seven different configurations of PLL1 as well as two different values for the Clock A and B output dividers. S0 and S1 are sampled during powerr-up, while S2 can be used to change the settings while the clocks are running. Using S2 to change the output dividers of Clock A and B is guaranteed to be glitch free.
The latest data sheet should be available at. It is essential to study this document before writing applications, which will modify the default clock settings.
Initial Programming
All clock settings can be programmed into the chip's internal Flash Memory. During power-up, the Flash is transfered to an on-chip RAM, which can be modified via I2C.
The chip is distributed with an empty Flash memory, which, when transfered into RAM during power-up, disables all clocks and puts all clock outputs to three-state mode. For the Ethernut Board this results in a special problem: No clock means no running CPU. No running CPU means, no possibility to modify the clock settings via I2C.
To initially program the Flash memory, a specific programming hardware is required:
- CY3672 Programmer Module
- CY3698 Socket Adapter
It is important to note, that the Flash memory on the chips must be programmed before mounting them on the target board. The Flash is not in-system-programmable.
egnite GmbH offers ready-to-run Ethernut 3 boards with pre-programmed clock chips, using these Jedec files:
- ethernut3.0-rev-e.jed
for Ethernut 3.0 Rev-E
- ethernut3v3.jed
for Ethernut 3.0 Rev-D
The clock setting for the earlier Rev-D Boards is shown below.
On the Ethernut 3 Board the chip is driven by a 25 MHz crystal to generate the reference clock. Four clock outputs are used by default:
The two remaining outputs, XBUF and Clock E, will be available on Ethernut 3.0 Rev-D when mounting R103 and R104, respectively. However, in this case at least R3 must be removed, and Clock E and Clock D should never both be enabled. On newer Rev-E boards these outputs are not connected.
Clock wiring of Ethernut 3.0 Rev-E:
Clock wiring of Ethernut 3.0 Rev-D:
I2C Access
As stated earlier, the chip will copy its Flash memory into internal RAM during power-up. An I2C Interface is available to examine and even modify the RAM contents.
The I2C address of the CY22393 is 0x69 and the following code fragment can be used to query the number of the PLL that is connected to a specified clock.
#include <sys/event.h>
#include <dev/twif.h>

int Cy2239xGetPll(int clk)
{
    int rc = -1;
    u_char loc = 0x0E;
    u_char reg;

    if (clk == 4) {
        rc = 1;
    }
    else if (TwMasterTransact(0x69, &loc, 1, &reg, 1, NUT_WAIT_INFINITE) == 1) {
        rc = (reg >> (2 * clk)) & 0x03;
    }
    return rc;
}
printf("Clock B on 3.0E uses PLL%d\n", Cy2239xGetPll(2));
CY2239X API
An application programming interface is provided for easy access to the clock configuration. Applications using this API must add the following header file include:
#include <dev/cy2239x.h>
u_long fref;

fref = Cy2239xPllGetFreq(CY2239X_REF, 7);
The following sample displays all relevant settings.
/*!
 * Copyright (C) 2005-2006 by egnite Software GmbH. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the copyright holders nor the names of
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY EGNITE SOFTWARE GMBH AND CONTRIBUTORS
 * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL EGNITE
 * SOFTWARE GMBH OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
 * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * For additional information see http://www.ethernut.de/
 */

/*!
 * $Log$
 */

#include <dev/board.h>
#include <dev/debug.h>
#include <dev/cy2239x.h>

#include <stdio.h>
#include <io.h>

#include <sys/version.h>

/* Determine the compiler. */
#if defined(__IMAGECRAFT__)
#if defined(__AVR__)
#define CC_STRING "ICCAVR"
#else
#define CC_STRING "ICC"
#endif
#elif defined(__GNUC__)
#if defined(__AVR__)
#define CC_STRING "AVRGCC"
#elif defined(__arm__)
#define CC_STRING "YAGARTO"
#else
#define CC_STRING "GCC"
#endif
#else
#define CC_STRING "Compiler unknown"
#endif

#if !defined(ETHERNUT3)
#warning Requires Ethernut 3
int main(void)
{
    for (;;);
}
#else

/* Formatting macros. */
#define MEGA(f)   ((f) / 1000000UL)
#define MFRACT(f) ((f) - (MEGA(f) * 1000000UL))

/*
 * Display frequency and status of a specified PLL.
 */
void DumpPll(char *name, int pll, int fctrl)
{
    u_long f = Cy2239xPllGetFreq(pll, fctrl);

    printf("%-6.6s = %3lu.%06lu Mhz ", name, MEGA(f), MFRACT(f));
    if (Cy2239xPllEnable(pll, fctrl, -1)) {
        puts("running");
    } else {
        puts("disabled");
    }
}

/*
 * Display frequency and status of a specified clock output.
 */
void DumpClk(char *name, int clk, int fctrl)
{
    u_long f;
    int pll;
    int dv;

    f = Cy2239xGetFreq(clk, fctrl);
    printf("%-6.6s = %3lu.%06lu Mhz ", name, MEGA(f), MFRACT(f));
    pll = Cy2239xGetPll(clk);
    if (pll > 0) {
        printf("PLL%d", pll);
    } else if (pll == 0) {
        printf("Ref");
    }
    dv = Cy2239xGetDivider(clk, fctrl);
    if (dv) {
        printf("/%d\n", dv);
    } else {
        puts(" off");
    }
}

/*
 * Main application routine.
 */
int main(void)
{
    /* Register the debug UART and assign it to stdout. */
    NutRegisterDevice(&DEV_DEBUG, 0, 0);
    freopen(DEV_DEBUG_NAME, "w", stdout);

    printf("\nPLL Configuration 1.0.0 - Nut/OS %s - " CC_STRING "\n\n", NutVersionString());

#if 0
    /* Shows how to change the associated clock. */
    if (Cy2239xSetPll(CY2239X_CLKB, CY2239X_PLL3)) {
        printf("SetPll failed\n");
    }
#endif

#if 0
    /* Shows how to change the divider value. */
    if (Cy2239xSetDivider(CY2239X_CLKB, 1, 4)) {
        printf("SetDiv failed\n");
    }
#endif

    /* Dump reference and PLL clocks. */
    DumpPll("Ref", CY2239X_REF, 7);
    DumpPll("PLL1.0", CY2239X_PLL1, 3);
    DumpPll("PLL1.1", CY2239X_PLL1, 7);
    DumpPll("PLL2", CY2239X_PLL2, 7);
    DumpPll("PLL3", CY2239X_PLL3, 7);
    putchar('\n');

    /* Dump clock outputs. */
    DumpClk("ClkA.0", CY2239X_CLKA, 3);
    DumpClk("ClkA.1", CY2239X_CLKA, 7);
    DumpClk("ClkB.0", CY2239X_CLKB, 3);
    DumpClk("ClkB.1", CY2239X_CLKB, 7);
    DumpClk("ClkC", CY2239X_CLKC, 7);
    DumpClk("ClkD", CY2239X_CLKD, 7);
    DumpClk("ClkE", CY2239X_CLKE, 7);
    putchar('\n');

    for (;;);
}
#endif /* ETHERNUT3 */
PLL Configuration 1.0.0 - Nut/OS 4.1.9.7 - YAGARTO Ref = 25.000000 Mhz running PLL1.0 = 368.636319 Mhz running PLL1.1 = 368.636319 Mhz running PLL2 = 75.000001 Mhz disabled PLL3 = 75.000001 Mhz disabled ClkA.0 = 0.000000 Mhz Ref off ClkA.1 = 0.000000 Mhz Ref off ClkB.0 = 14.745452 Mhz PLL1/25 ClkB.1 = 73.727263 Mhz PLL1/5 ClkC = 25.000000 Mhz Ref/1 ClkD = 0.000000 Mhz Ref off ClkE = 0.000000 Mhz PLL1 off
Running on Rev-D boards will produce this output:
PLL Configuration 1.0.0 - Nut/OS 4.1.9.7 - YAGARTO Ref = 25.000000 Mhz running PLL1.0 = 147.457622 Mhz running PLL1.1 = 368.636319 Mhz running PLL2 = 100.000001 Mhz running PLL3 = 75.000001 Mhz running ClkA.0 = 25.000000 Mhz PLL2/4 ClkA.1 = 25.000000 Mhz PLL2/4 ClkB.0 = 0.000000 Mhz PLL2 off ClkB.1 = 0.000000 Mhz PLL2 off ClkC = 73.727263 Mhz PLL1/5 ClkD = 0.000000 Mhz PLL1 off ClkE = 0.000000 Mhz PLL1 off
More details about this and all other functions of the CY2239X API are available in the Nut/OS API Documentation. | http://www.ethernut.de/en/hardware/enut3/cy22393.html | CC-MAIN-2022-05 | refinedweb | 1,249 | 68.36 |
Being able to scan a barcode is a convenient way to share bits of data. Whether you’re using QR codes to share contact information or traditional barcodes for product information, being able to scan an image is more convenient than either entering a long code or similar.
Previously I wrote about scanning barcodes using Ionic Framework 1, but with Ionic 2 being all the rage I thought it would be worth revisiting for Angular.
In this guide we’re going to look at the PhoneGap BarcodeScanner plugin. It is capable of scanning the following barcode types on Android:
And it can scan the following barcode types on iOS:
That information was taken right out of the official documentation. Now you’re probably thinking that this is a PhoneGap plugin so it won’t work for Ionic 2. The reality is that PhoneGap based on Apache Cordova just like Ionic Framework, so they can all play nice together.
With the background out of the way, let’s start creating a project that can use this plugin. From your Command Prompt (Windows) or Terminal (Mac and Linux), execute the following:
ionic start BarcodeProject blank --v2 cd BarcodeProject ionic platform add ios ionic platform add android
A few important things to note. First, if you’re not using a Mac you cannot add and build for the iOS platform. Second, you must be using the Ionic CLI that supports Ionic 2 applications.
From within your project, execute the following from your Command Prompt or Terminal:
ionic plugin add phonegap-plugin-barcodescanner
This will install the barcode plugin for use in your project.
From here on out we can start coding. We’ll spend most of our time in the app/pages/home/home.html and app/pages/home/home.js files. One being our UI file and the other being our logic file.
Starting with app/pages/home/home.js, add the following code:
import {Page, Platform, Alert, NavController} from 'ionic-angular'; @Page({ templateUrl: 'build/pages/home/home.html' }) export class HomePage { static get parameters() { return [[Platform], [NavController]]; } constructor(platform, navController) { this.platform = platform; this.navController = navController; } scan() { this.platform.ready().then(() => { cordova.plugins.barcodeScanner.scan((result) => { this.nav.present(Alert.create({ title: "Scan Results", subTitle: result.text, buttons: ["Close"] })); }, (error) => { this.nav.present(Alert.create({ title: "Attention!", subTitle: error, buttons: ["Close"] })); }); }); } }
Let’s break down everything that is happening in the above.
In the
constructor method we are passing and initializing our
platform and
navController objects. Because we’ll be using a native plugin we need to make sure the device is ready first. The
Platform class has a function to check for us if the device is ready.
Inside the
scan method we do that check and once ready we call the
scan function of the actual barcode plugin. If the promise is resolved we will show an alert with the scanned result. If the promise was rejected we will show an alert with the error.
Now let’s take a look at the front end code. We are trying to keep things simple which can easily be reflected in this file. Open app/pages/home/home.html and include the following code:
<ion-navbar *navbar> <ion-title> Home </ion-title> </ion-navbar> <ion-content <button primary (click)="scan()">Scan</button> </ion-content>
We really only have a button that starts the scanner in our UI. Very simple right?
Making use of the barcode scanner plugin in Ionic 2 really wasn’t too different from what was done in the Ionic Framework 1 tutorial I wrote. This is because both versions of the framework use the same plugin. The only difference was in the use of AngularJS and Angular. | https://www.thepolyglotdeveloper.com/2016/02/add-barcode-scanning-functionality-to-your-ionic-2-app/ | CC-MAIN-2018-51 | refinedweb | 619 | 56.55 |
REBOL the "Messaging Language"
FunkflY writes "From the REBOL homepage: 'REBOL (pronounced "REB-ul"), the first in a new breed of Internet messaging languages, today revolutionized the exchange and interpretation of network-based information by allowing programs authored in REBOL/core 2.0 to run on more than 15 popular computing platforms without modification. As a messaging language, REBOL provides seamless network connectivity to Internet protocols such as HTTP, FTP, SMTP, POP, NNTP, Finger, and others.'" A new version came out recently; it's worth checking out.
Check what you piss on before you piss... (Score:1)
Questions:
1. How many posters worked with actual AREXX (not REXX crap but real *AREXX* on an *Amiga*)?
2. How many posters think that Amigans should not be using Linux; that Linux is "too elite"?
3. How many posters have CS Degrees?
4. How many posters feel that only people with CS Degrees should have the right to make programs?
5. How many posters missed the boat entirely?
6. How many posters have their head up their ass?
Answers:
1. Only one, a mere pittance. Another sounded like they had some experience, but I can tell you, REXX and AREXX are two different beasts.
2. Apparently too many. You folks seem to have forgotten that there are a lot of defectors from the Amiga camp who left because, for once, you could get a PC that didn't come with Winblows. I'm one of them, and I can say with certainty that if Linux had not offered what it did, I would have NEVER switched.
For God's sake man, who the hell wants SEGMENTED 32-bit MEMORY? Multipliers that occur in only ONE specific register, and additions in another? Only four primary CPU registers? 36 bit flat address space - now there's a standard that only Real Men Use(tm). Parity bits - hell, let's include some tools from the stone age as well. Oh, and there's my favorite, segmented MEMORY POINTERS from hell in MicroSquish C, despite the known assumption that C pointers are for a FLAT ADDRESS SPACE. (don't even try to double-guess that last one... you can't tell me that pointer arithmetic was meant for a segmented arch.)
Who dropped the ball on that one? Forget that last question - it should be, "Who dropped acid at Intel while designing such a lame chip family as the x86?" I guess that's why people paid $1500 for a computer for years and years...must be all of that wonderful stuff that you get in the box.
And people wondered why the Amiga hung around for so long...get a clue...because it was USABLE. Because it was PROGRAMMABLE. And because it didn't consist of a 8088, 64k memory, tape-drive-with-5 1/4"-floppy piece of crap in 1985. The Amiga 1000 (incidentally released in 1985), considered long obsolete, beat the crap out of PCs, Macs, etc. out of the BOX. Ever wonder why the Mac "got color" real fast? Why the PC soundcard market "magically appeared" overnight? Why Intel did everything in their power to kill the m68k line of chips? (answer to that last question: the 68060, the last chip in the family ever made before Mot pulled the plug, beats the crap out of a pentium at the same clock speed. Gee whiz, I guess having 16x 32-bit generic registers makes a difference...oops, that must be a RISC chip I'm talking about...naw, it couldn't be, after all, we know that RISC was a completely new concept...it must be a rumor that 68k chips had this feature A LONG TIME before RISC showed up)
The one salvation of the x86 arch. is Linux.
Win95 is plain stupid, with its "API flavor of the month" approach and "wonderful Industry backing" (oops, I mean wholesale company buyouts, legal pandering, strong-arm OEM license tactics, astroturf campaigns, lame EULA agreements, big-brother-GUIDs, vendor lock-in, young-programmers-with-no-life-and-burnout, and quasi-sometimes-it-works memory protection)...I think I'll reach for a vomit bucket...
3. Who knows? Who cares? Apparently you ALL have degrees, but if that's the case, then what the hell are you doing here, instead of making money before you lose your job at 40 because of the rampant age discrimination in the industry?
4. Too many, but that's OK, we understand your need for job security. After all, programmers and analysts should be exempt from the kinds of problems you find in every other industry...they're so special, aren't they?
5. Almost all. This is a MESSAGING language, i.e. it should replace AREXX for a reason. If you feel that messaging should only be done with shared memory, semaphores, high-speed switched networks, and lots of C code, then GO SOMEWHERE ELSE AND FORGET YOU READ ALL OF THIS. I don't have several lifetimes to live to write crappily-written C code that looks like my computer puked, just so that I can get program A to send commands to program B and have program B provide feedback to program A...while doing REAL WORK.
6. It's too difficult to determine without pulling several heads out of several asses. That's OK, you can't see me this way as I sneak up on your job and take it.
* * *
A real disappointment, folks. If you can't see the value of a SCRIPTING language that allows you to simul-multi-fucking-taneously control SEVERAL programs, and allow SEVERAL programs to interact that have completely different designs and uses, then I guess you don't have a use for shell scripts, either...or perl. Come to think of it, why bother learning anything new? We should just stick with the wonderful set of tools that we already have. I guess that's why x86 still sells strong, but can't even match the processing power of a Sony Playstation 2 (that's right folks, your average 400Mhz/Voodoo/128Mb RAM PC can't hold a candle to a $299 game console - tells you something about the words PCs SUCK WIND OUT OF THE BOX)
This is just about the last straw. Slashdot used to be an interesting place to hear the news. Now it's just a clique of "3l33t3 P33C33 d00z with 4ttit00dz".
Moral of this mindless rant:
Before you piss on something, check out what you're pissing on. It might just be a live wire...
---
Typical Programmer's Response: "Oh, it's a potential threat, let's bury our heads in the sand! Hey, what's that I feel up my ass, and why does the sand stink?"
Re: Programmers are end users (Score:1)
code. C is very simple, I think. That's what makes it good. Simple is always better so long as simple allows flexibility and power, which it usually does much more so than difficult.

As for CS gurus, it's been my experience working with a number of programmers that programming is an art. CS is theory. Can you honestly deny that 99% of all programming requires no more mathematics than junior high school algebra, if that? The skills involved are more like those of writers and musicians - liberal arts majors and creative artists, not scientists. Programming is not a science, and I hope it never becomes one.

You know what language most physical scientists and mathematicians prefer to test their theories? Basic. Because it is quick and has enough math functionality and graphics to produce demos to prove these theories or illustrate them. These are the people who are *real* scientists, not pseudo-scientists who get CS degrees so they can drive BMW's. (Not to imply that all CS grads are like that, though.) Good scientists look for simplicity even in complexity, and good mathematicians often seek the simplest proofs.

Programmers who hold end users in contempt are at best very mediocre programmers. Most become sysadmins because they really don't enjoy programming. They are good at promoting their own careers and throwing around buzzwords and little else. Often such "CS gurus" will make things unnecessarily difficult or obtuse for their own job security, and that is really bad programming and design. Not to mention what it costs their employers.

These are the kinds of people who don't stay at a shop long enough to finish what they started, leaving a mess for others to fix while they play golf with the boss, blaming others for their mistakes, and taking credit for what others do that the boss likes.

Rebol looks like a very interesting language. It can do a lot more than Internet messaging. That it can do a lot more should be obvious to even a novice from looking at a small bit of code, but perhaps such insight is unavailable to "CS gurus" like yourself. It is a prettier language than Perl (which is ugly as sin) or Python (which is a god-awful mess). Not as powerful, probably. But then, I imagine it can be used to script compiled C and C++ or anything else, and may be an easily extensible language using C. Something to check out.

Believe me, this is nothing like VB or Cobol. There is no inherent conflict between *real* simplicity, which seems to be the foundation of Rebol, and sophisticated uses. While it doesn't look much like C, something tells me that for an interpreted language it may share some of C's flexibility and beauty, which may not be apparent to CS people.

I intend to check it out because it looks like FUN and also because it is a good way to allow non-programmers or novice programmers to communicate if they have Rebol installed at both ends - for example a Web server or page and a home computer connected to it. It's small and may be much better for that than what is now available. So there.
Why would this make me want to give up Dylan? (Score:1)
// (The data definition at the top of this example was garbled in the original post.)
let name = first;
let company = third;
let email = rcurry(element, 4);
for (row in persons)
  format-out("%s of %s is at %s\n", row.name, row.company, row.email);
end;
Why? (Score:2)
But why should people use a "new language" just to do simple little tasks like downloading a webpage or sending mail? There are already powerful scripting tools out there. Tcl, Perl, and Python have been around for a lot longer and they can do all of this and more. Perhaps someone can clue me in about why this tool is any better than the existing ones.
The examples are a bit strange though (Score:2)
Cool idea, but I'll pass (Score:2)
-Philip
I can't believe it! (Score:3)
I emailed Rebol last fall and asked if it would be available for the PalmPilot. Within a day they emailed me and said that they would investigate. It's now at least listed as pending on the site. Did anyone else notice this? You might be geeks but you aren't going to get a functional Perl dist on the PalmPilot. (And CPAN won't help in this arena.) I think it's a niche language for niche applications that looks promising for communications between portable devices and 'regular' systems.
Rebol is something new (as compared to C,C++,Perl). Something cool to learn. Maybe even useful. Why's this group so condescending to 'new' technology? I even saw a gripe about it being Freeware. Geez-oh-whiz! Only here.
Did any of the nay-sayers, at the dawn of Perl, say, "Ohhh, that's junk. What do we need that for? I don't see the application for it"?
Re:Some other notes: (Score:1)
Re:I can't believe it! (Score:1)
of a lot faster than that, now couldn't you?
Has anyone bothered to write & ask if they would consider opening the source (and if they won't open it, why wouldn't they)? It looks to me like it would be trivial to write a clone of their software (especially if it's just sticking text headers in front of data).
Re:Hmm.. C-like? I don't think so... (Score:2)
I've used it a bit in the past and I must say that it really is nothing like C. To me it feels very LISP-like. The heavy use of []. Blocks (which are often just lists of things) being used to store most data, and the everything-is-data-until-executed way of working. Also, Rebol is a hell of a lot more dynamic and loose than C. Any 'word' (or function) can be redefined anywhere, anytime.
Try it out, it's fun. But you won't get very far using a C like programming mentality.
Open Your Mind (tm)
--Simon.
Re:REBOL - A different language (Score:1)
perl -MLWP::Simple -e 'print get($ARGV[0])'
SIMPLE?! Tell that to Joe User!
1. You have to KNOW about CPAN
2. You have to know where NEAREST/ANY CPAN archive is
3. You have to FIND the right module.
4. You have to INSTALL it on your machine
Yeah, simple indeed!
Some of the people here need to go out (of the ivory tower) more....
Just my 2c
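For comparison's sake, the same fetch-a-page job in a language whose standard library ships with the interpreter needs no module hunt at all. A minimal sketch in (modern) Python; the data: URL is just a stand-in so the example runs offline, and any http:// address works the same way:

```python
# Fetch a URL and print its contents using only the standard library.
import urllib.request

def fetch(url):
    # urlopen returns a file-like response; read() yields raw bytes
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")

# A data: URL keeps the demo self-contained; an http:// URL works too.
print(fetch("data:,hello%20world"))  # hello world
```

Joe User still has to install an interpreter either way; the point is only that "simple" depends on what comes in the box.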
Re:I don't think you've thought this through. (Score:2)
That's not a question of being a die-hard or not. The real question is why the f*&% people still invent new syntax. The beauty of Scheme (or Common Lisp, for that matter) is that you learn one basic syntax, and you can construct most any domain-specific syntax in terms of it. Yes you can have infix math in Lisp. Do your homework or something.
All this glitzy networking stuff could be a nice Scheme or Lisp extension, along with the syntax. But noooo, we have to invent a new damn language to have the networking. We'll have Perl for text processing, Lisp for heavy AI, Rebol for networking and Python for, uh, "rapid prototyping" (slow prototyping, anyone?).
Disclaimers:
1. I needed to vent.
2. I think the computing field needs more people that know what they are doing, and "easy" languages only help impostors and give people unfounded hopes (like, hey, I'm a bad-ass algorithm designer. I know Basic).
3. I don't take parentophobes seriously. If you have such problems distinguishing syntax from semantics, then you probably don't have a good grasp of the distinction between content and representation.
There, I feel better now.
Quick summary of the language... (Score:3)
Anyhow, one interesting result of the Forth-like nature is that there are a huge number of datatypes which are not possible in other languages; for example, URLs are actually formal datatypes, not just another string (a malformed URL is a compile-time error).
They've obviously learned from Perl and Python otherwise; it's a nicely dynamic language which seems to be error-tolerant, and has quick, easy syntax for most needs.
I'm reasonably happy with it. It doesn't look as _nice_ as Python, but at least its braces and brackets have a purpose.
I suppose I'll have to build a Lojbanic version. Now that would be interesting. A speakable computer language.
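The "URLs are a formal datatype, not just strings" idea can be approximated in string-based languages by parsing eagerly and failing fast. A rough sketch in Python; the validation rule here (require a scheme and a host) is a simplified assumption, not REBOL's actual grammar, and the error surfaces at run time rather than at REBOL's load time:

```python
# Treat a URL as structured data that is validated up front,
# a rough analogue of REBOL's url! datatype for string-based languages.
from urllib.parse import urlsplit

def as_url(text):
    parts = urlsplit(text)
    # Simplified rule: a URL must carry a scheme and a network location.
    if not parts.scheme or not parts.netloc:
        raise ValueError(f"malformed URL: {text!r}")
    return parts  # named tuple: scheme, netloc, path, query, fragment

u = as_url("http://www.rebol.com/index.html")
print(u.scheme, u.netloc, u.path)  # http www.rebol.com /index.html
```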
Not regex but maybe better (Score:1)
That all being said, I'm not throwing out GCC and scrapping Perl and Java. It does look like a great quicky and one-shot scripter, especially for the network.

The main advantage is a 119k cross-platform distribution with built-in help and documentation and no libraries to track down and install. Beat that with [insert viable language of your choice]!

This is the kinda thing I can give my pointy-haired boss's secretary so she can write a clue filter on his email and pull down his football pool scores.
Re:Not a real language (Score:1)
CSV database parsing (Score:1)
I'm looking for a good language to deal with a reasonably large (nearing 10K lines) CSV formatted database. It's the log database for the DoxPrint system that's running at my college. DoxPrint is a Samba/NetWare gateway that I'm releasing under the GPL By The Time You Read This(TM). Presently, the only way to get efficient information out of it is using contortions of grep, sort, uniq, and wc, or loading the database in Excel and having fun.
Should I try Rebol? Or what? I'd like the next version of DoxPrint to support Web Log Analysis, but I'm unsure which language I should cram to achieve that goal. Suggestions?
Yours Truly
Dan Kaminsky
DoxPara Research
Once you pull the pin, Mr. Grenade is no longer your friend.
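For what it's worth, the kind of per-user tallying asked about above is only a few lines in any language with a CSV reader. A hedged sketch in Python; the column names user and pages are invented, since DoxPrint's actual log format isn't shown in the post:

```python
# Tally pages printed per user from a CSV log.
# The "user" and "pages" column names are hypothetical.
import csv
from collections import defaultdict
from io import StringIO

def pages_per_user(csv_file):
    totals = defaultdict(int)
    for row in csv.DictReader(csv_file):
        totals[row["user"]] += int(row["pages"])
    return dict(totals)

sample = StringIO("user,pages\nalice,3\nbob,5\nalice,2\n")
print(pages_per_user(sample))  # {'alice': 5, 'bob': 5}
```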
Re:Why would this make me give up Python? (Score:1)
The only place I've seen Python code get munged this way is on Slashdot, since you can't use PRE tags, and I was able to whip up a Python script to get around that problem very easily.
Of course, if you need an excuse to limit your toolbox, calling it "whitespace-sensitive" is as good as any.
must try http manipulation (Score:1)
Having a light scripting language across platforms would be very useful for querying websites, extracting useful information programmatically and getting it to run off my networked Palm... (I know the Palm port is pending)

But I agree that as far as a prototyping/quick tool that allows you to use basic web protocols, REBOL may just be the tool for the job.
REBOL thread on c.l.python (Score:4)
Personally, I can't imagine why anyone in 1999 is bothering to release a new language without making the source code available. Haven't we learned better by now?
Re:Check what you piss on before you piss... (Score:5)
Linux is part of the Free Software movement. If you don't like that then you should present clean, concise reasoning why we should never discuss it. You are making HUGE generalizations about Linux and Windows users and you ought to learn how to present a rational argument instead of this silly banter.
Why shouldn't we critique the license under which new software is released? Why should we accept everything developed for Linux with beggar's hands? We shouldn't and we don't (surprise). If that offends you, I'm not sorry and I won't apologize for your "whining programmer" brethren, for it is you who is doing the whining here.
As for the issues: UNIX is based on small utilities which do one thing well. None of the examples on their website showed anything revolutionary or even interesting when compared to BASH. What's the difference between doing it one way and the other? Is it any surprise that we're not very impressed?
Perl is a schizophrenic language. It can look good, it can look bad. It can be object-oriented, it can be iterative, it can be threaded, it can be modularized. It can embed code from other languages, it can communicate with the native OS using native constructs, it can be graphical, it can be ttyable. It is what you want it to be, and yes, it should be compared to REBOL because they are trying to solve the same problems.
But I suppose you'd rather whine about Linux users with your time. What a joke. And as for the Amiga user: get over it. I used to be an Amiga user and I don't expect the Linux community to "care about my feelings" or other such bullshit. If you want to compare something with AREXX then do it. If you want to spout how 31337 you are because you're the "only" one to ever use it, you have no business raking Linux users over the coals for doing the same thing with Perl and C. Shut up and present a post with content.
The wheel is turning but the hamster is dead.
Hmm.. (Score:1)
Stan "Myconid" Brinkerhoff
Some other notes: (Score:3)
An example from their documentation:
; A simple database (the record data was garbled in this post):
persons: [...]
; The template used to print each record:
text: [name "of" company "is at" email]
; The loop that prints the database:
use facts [forskip persons 4 [
    set facts persons
    print text
]]
Would generate the following:
Moe Howard of Three Stooges Ltd. is at moe@stooges.com
... etc.
- Woodie
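The forskip idiom above, walking a flat block in fixed-size strides, carries over to most languages. A sketch in Python; the record data is made up, and three fields per record are assumed from the printed output (the quoted loop actually skips by four, so the real records presumably carry a fourth field):

```python
# Walk a flat list in fixed-size strides, as REBOL's forskip does.
def records(flat, size):
    """Yield consecutive fixed-size slices of a flat list as tuples."""
    for i in range(0, len(flat), size):
        yield tuple(flat[i:i + size])

# Hypothetical data; the original post's database block was lost.
persons = [
    "Moe Howard", "Three Stooges Ltd.", "moe@stooges.com",
    "Larry Fine", "Three Stooges Ltd.", "larry@stooges.com",
]

for name, company, email in records(persons, 3):
    print(f"{name} of {company} is at {email}")
```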
Re:I can't believe it! (Score:1)
no IMAP? (Score:1)
Re:Umm...how do they make money? (Score:1)
> You may not reverse engineer, decompile, or disassemble the Software.
Think of it as a beta test. My guess is they'll sell some kind of REBOL-powered GUI mail product.
Re:Why would this make me give up Python? (Score:1)
Re:Why would this make me give up Python? (Score:2)
Re:Why? - Easier (Score:1)
Re:emulation bakeoff: programming contest (Score:1)
emulation bakeoff: programming contest (Score:2)
slashdot moderators should hold a contest w/ the following rules:
my guess is that there will be people interested in this sort of friendly competition.
Re:Perl Example, REBOL equivalent (Score:1)
tell application 'Netscape' to open ''
Re:I can't believe it! (Score:1)
Later,
Blake.
I speak for PCDocs
Allergic to mainframe culture (Score:1)
netwiz [mailto] opined:
I once tried to maintain a database-backed website whose glue was in Rexx [ibm.com]. Yes, it blew the doors off DOS batch files. Quite a complete little language with some nice features. Unfortunately, I had the same allergic reaction to it that I always have when I try to read FORTRAN. Its design was very much rooted in the IBM mainframe culture, and it gave me the heebie-jeebies. I'm not being facetious; something about its worldview bothered me in a very visceral way.
I'd be interested to hear what people who've worked in mainframe culture think about Rexx. I'm very much a *nix person, and just don't understand that point of view. How did it feel to use something like that on the desktop? What's your reaction to today's unix (or windows) dominated environment (and people like me)?
REBOL at least seems to have lost those awful capital letters, despite its retro 70s-language name. They need a retro-chiq logo [rebol.com] to go with the name, instead of copying wired [hotbot.com]
Re:I don't think you've thought this through. (Score:1)
Programming is NOT for end users! (Score:3)
While this may be true for application programs, which are designed for end users, other areas of computers should not be made "simple" for end users. Take two examples, both from our "favorite" company:
1) Visual Basic - this was an attempt to make programming a graphical system "easy" for lay people and those just getting "started" in programming. The result is horrendous. It may be easy to throw a few things together and come up with a "prototype" program but I'd be hard to convince that VisualBasic is "the" programming language to use for large complex projects.
2) WindowsNT - Well, this is pretty self-explanatory. This server operating system was designed with a "user-friendly" interface to make it easier for "administrators" to configure and maintain the system. What a joke. It's plain stupid to believe that anyone can make system administration "easy." The only way it's easy is if you have an experienced administrator behind the wheel who knows what they are doing. Then you can have a well-oiled operation that does not run into unforeseen problems. However, I would strongly disagree with anyone that says you can take an inexperienced person and have them effectively administer a system just because it has a "user-friendly" interface.
Now, what does this have to do with REBOL? Well, REBOL may be a great language. But whether it makes it "easy" for beginners or not does not influence my decision to use the language at all. In fact, I would hope that end users do not try to "program" in it simply because it's "easy." We would then end up with programs that "sort of" work "sometimes."
If someone wants to get into the computer field, let them go to school and take some classes. It's important to learn the underlying concepts before going off and writing programs for production systems. Failure to do so usually results in more work for true CS gurus who have to fix the resulting problems.
logo, scheme meets tcl (Score:1)
I have trouble seeing REBOL go anywhere, though. It's not open source, making adoption that way rather difficult. And it's up against a lot of competition.
On Windows, people use VBScript and Internet COM components with it.
In the open source world, there are Perl, Python, Tcl/Tk, LUA, GUILE, and a lot of others. Perl in particular has good support for most Internet protocols. For less common languages with some history and interesting features, there are Icon and Logo.
It is nice that their interpreter is small (something they share with Logo and Lua), but that alone doesn't seem sufficient to buy their stuff. And Tcl started off small, too.
To me, the frontiers of scripting languages are in areas like COM/CORBA integration, automatic generation of C/C++/Java interfaces, and entirely new computational paradigms, not variations on an old theme.
Re:Amiga? (Score:1)
Both REXX on OS/2 and ARexx are nice scripting languages, but are very different beasts compared to REBOL, as far as I can see by looking at the guides on the REBOL web site.
I think the basic concept of REBOL is neat, but it seems to still have a long way to go before being a replacement for Python or Perl in general.
Since REBOL seems to be available for a number of platforms, it should not be necessary for you to buy a new Amiga to get it...
My opinion on Rebol (Score:1)
It needs to be capable of supporting multiple users. I also need to be able to restrict access to certain protocols and commands.
Until then, my users won't see it on the system.
I played with it on my workstation though... it's neat. But I can do the same in Perl... over the years I've simplified many of the module calls like rebol.
Hell, I'm sure we can write a rebol code parser in perl.
can Perl fit on a floppy? (Score:1)
So I can fit the Linux, winNT and win98 executables all on one floppy, and my scripts, too, so I can run them on any machine I need to (those are all the OS's we use at work) all from the same floppy, with no installation of the language on the local computer needed? (it stays on the floppy with the scripts)
I think that's cool. Useful, at least.
But I really don't want to learn a new language.. I'm still learning Perl (does anyone ever STOP learning perl?) and Perl is already on the Linux machines.. so:
Is there a Perl for Win98/NT that will fit on a floppy?
My activestate 514 installation on win98 is a total of 15 MB, the bin folder 1.4 MB.
Anyone know the minimum I need? That would be cool to have a Mostly complete Perl in my pocket on a floppy. It would make things easier when I work on Win32 machines...
thanks
-geekd
I don't think you've thought this through. (Score:1)
Disclaimers first: I don't use VB, I hate NT, I've never looked at REBOL. That said, I disagree about as strongly as it's possible to disagree.
The reason I use programming languages which are easier to read and write - for the lay-person, or for the technical professional - is because it's faster. If I felt like it, I could probably sit down and write a CGI script using gcc that would talk to a MySQL server quite nicely. But why? I can write a PHP app to do exactly the same thing in a matter of minutes. It might be fun to try that as an exercise sometime, but I have to make my living this way, and I'd prefer not to have to re-invent the wheel every time I do.
Sorry. The reason people invented computers in the first place was because they wanted to do math problems that were too much of a pain in the ass to do otherwise. Much of the history of the advance of computer technology can be described in terms of people who were willing to expend a hell of a lot of effort so that they (or others) could get to be lazy somewhere down the line.
Anyway - I get the feeling the diehards are just going to hate me for this. So if you disagree with me, fine, go write your own database from the ground up in assembly code. I'm going to use whatever tools are available to me, and then I'm gonna play some Quake.
Re:I don't think you've thought this through. (Score:1)
My beef was that the original rant sounded to me like a rant against readable languages, rather than pro-"knowing what you're doing." Besides which, I'm really hyped on caffeine right now, and it looked like a good target for my extra energy.
Re:Why? - Easier (Score:1)
As for your assumption that I'm just interested in using whatever is "trendy," you are once again wrong. As a computer science teacher at a Swedish University I tend to use a lot of languages that are not trendy at all, such as Lisp, Prolog, etc. I have nothing against the introduction of a new language, and as I said in my original post, I have not evaluated Rebol so I cannot say if it is a "good" language or not. What I was addressing in my initial post was that just because the language appeared simpler did not mean that it was better than what is currently available. The trend that I have been seeing recently on the Internet is that people that have no experience at all with system design, or even programming for that matter, are designing "web applications." This includes everything from simple scripts to attempts at full-blown e-commerce sites. I'm all for creating a new language that can help with these types of applications, but I do NOT think that it is a substitute for the type of experience that is necessary to deal with these situations efficiently.
As for my analogies, I still think they are valid if taken in context. As I said above, the current trend is for non-programmers to attempt to develop Internet applications. So, like I said above, I don't want a non-pilot flying my plane, just like I don't want some graphic designer trying to secure my data on my favorite e-commerce site.
---
Re:Why? - Easier (Score:1)
I've seen enough code from students to state with a clear conscience that yes, just about anyone can program. I'm not saying that they will all come up with the most optimal solution on their own, but if you specifically tell them to "do it this way" and then leave them alone, then 99% of the time they can do it. So I still stick to my original thesis that there is a lot more to being a computer scientist that programming.
---
Re:Why? - Easier (Score:2)
Here is something to think about, does the medical field make it easier for non-doctors to start practicing medicine without a degree? Or why don't they make plane cockpits simpler so that it would be easier to use/learn/figure out for a non-pilot?
---
Re:I can't believe it! (Score:1)
> would investigate. It's now atleast
> listed as pending on the site
Oh, they're investigating. Good.
But if you had the source, you could do
it yourself, or find someone to do it a hell
of a lot faster than that, now couldn't you?
- godel
Re:I can't believe it! (Score:1)
Perl's success comes from designing directly based on the needs of the community, and even owes its existence directly to solving a particular need. Its openness leads even to multiple options of how to interface guts (reasonably well-documents) to arbitrary C, meaning hordes and hordes of useful, free, open modules. Which implement the same sort of thing as REBOL's touted web-fetch-in-one-line and server-in-seven-lines examples.
So in short, I don't happen to see REBOL as solving any of my problems any more effeciently than C and/or Perl and/or Scheme and/or any other existing language. And considering there is no source available to enable understanding and potentially modifying every detail of the language from the ground up, I find little reason for optimism concerning the language's usefulness in the future.
REBOL - A different language (Score:3)
As for the "built-in networking" - I wouldn't make such a fuss out of it. It's true that other languages (perl, for example), don't come with these built-in, but with the excellent distributed module repository - CPAN - it's a non-issue. Fetching a WWW page is as simple as
Getting stuff done ? I haven't yet met a language that can beat perl when it comes to minimizing development time, as the mission of REBOL seems to be different (as it says, a Messaging Language), I doubt it can do that either.
REBOL for Internet Applications instead of Java.. hmm.. maybe
;)
Re:Amiga? (Score:4)
The Amiga OS had lots of good features, though, some of which linux is only catching up to now. A lot of its more esoteric and underused functions were actually quite cool, for the time. e.g. The Envoy networking software, in conjunction with ARexx, had "gateways" that could be used for message passing, scripting, etc, between clusters of Amigas - basically anything that a local arexx program could do, could also be sent through the gateways to other amigas. I also liked the dynamically resizing ramdisk - it was always handy. The support for multiple resolutions and colour depths on different screens simultaneously was also very useful. This was possible due to the custom hardware, though, which really made the amiga what is was.
It's also really easy to program for in C or macro assembler, and its system include files work very well, and were very well written (on average) - I liked taglists for function parameters, exec lists were really useful, etc. etc.
A lot of Linux people are ex-amiga people. Rasterman springs to mind. Imlib's kinda like datatypes, and enlightenment on X obviously borrows from the amiga workbench on intuition.
Amiga OS 5.0 is a different kettle of fish. It's built upon QNX, which is a very good RTOS. It has relatively little to do with Amiga OSes 0 to 3, other than the name, and the fact that some of the original amiga people are designing it "in the spirit of the original amiga" - whatever that's supposed to mean.
Rebol is a pretty easy language to learn, and very easy to read. However, I'd expect the Amiga NG people to include several scriptiing languages- they'll probably include the GNU tools, for a start, at least in the developer release of the OS.
Re:Umm...how do they make money? (Score:1)
Re:CSV database parsing (Score:1)
>last thing we need is to learn a language with
>each application, just because it was hyped on
>slashdot
Then again, should we ignore new languages just
because they're new? Many said the same things when C, Perl, and Java all first came out.
But I agree that PERL is perfect for this job
(Practical Extraction and Reporting Language eh?)
-WW
Re:Sounds too much like COBOL (Score:1)
The history of programming languages has too many *BOL languages pronounced -ball for anyone to complain about pronouncing REBOL 'reeball'. (E.g. COBOL, SNOBOL, SPITBOL,
Re:CSV database parsing (Score:1)
Use C if you want speed. The csv 'parsing'
is trivial.
Perl will probably be fast enough too (especially if you have to do lots of string processing with the results - it will be easier to write), 10K lines of data isn't _that_ much.
__// `Thinking is an exercise to which all too few brains
No toy. (Score:1)
You know, like this Corba stuff? Like that, except you just wrote simple scripts, no idl, no big fat orb, no huge stubs. Just a simple script, which ran on a single (threaded) server and could talk to several 'enabled' applications simulataneously.
It is simple, effective. Non-programmers can learn enough to enable THEIR applications to do what THEY want them to - without having to become middleware demons! Like what tcl is supposed to do, but without having to embed the interpreter in every damn application.
["windoze" lusers are used to fully integrated ide's, this whole idea is a bit mindblowing. Unix hacks think bourne shell and perl is how to integrate disparate applications
I think this is what Rebol is trying to do as well, but perhaps slightly differently. Allowing USERS to write distributed networked applications. Yes USERS, not DEVELOPERS. Wouldn't that be novel? Users being able to write their own meta-applications composed of otherwise disparate and incompatible software. At a higher level of abstraction than, say, perl (which isn't far off 'c with strings').
Perhaps it needs an ipc mechanism (apart from the network protocols and to support application messaging), and skeletons for application integration (so it can become an application messaging system, rather than just networking), and probably freely available source to fit in with the gnu/linux mould (and for all you OSS zealots to love it)
(Amiga's exec, as mentioned in his bio, was doing rock solid, very fast, 32-bit micro-kernel multiprocessing on a floppy-powerd "microcomputer" nearly 15 years ago!)
__// `Thinking is an exercise to which all too few brains
Re:Hoping to get bought out? (Score:1)
We should all start hyping it as the Next Big Thing, just to inflate the price.
Why would this make me give up Python? (Score:2))s of %(company)s is at %(email)s"
# Emulate Rebol "set" feature
def set(attr, val): globals()[attr] = val
# The loop that prints the database:
for person in persons:
map(set, facts, person)
print text % globals()
P.S. Hey, Rob, how about allowing (pre) or (code) tags, eh?
Re:Why would this make me give up Python? (Score:2)
>to run. I'll take a language that isn't whitespace-sensitive, thanks.
There's always a wise guy. Here:
for person in persons: \
map(set, facts, person);\
print text % globals()
Happy now?
Expressionism (Score:1)
The idea matters more than the expression.
The great philosophers would still great even if they spoke another language. As long as a language lets your ideas be expressed in some way, that's all that matters.
Perl Example, REBOL equivalent (Score:2)
To read a web page:
page: read
To view the HTML source code:
print page
Or, you could just write:
print read
which to me a non-perl/non-rebol user seems a lot easier than the perl example
exactly (Score:1)
190K (Score:1)
Not sure if I will use it, but it looks like it's fairly versatile and quite small (190K or 150K, depending on which page you read). Kind of a swiss army knife, but not a powertool. Probably not a replacement for Perl, but if it's really this small, think of how many mod_rebol enabled apache processes you could have going at once.
Re:Amiga? (Score:1)
Then I guess it's not just a coincidence that the Founder and CEO of the rebol company has "design and implementation of the highly acclaimed Amiga multitasking operating system" in his bio [rebol.com].
Re:Why? - Easier (Score:1)
> training, experience and the right kind of
> person.
Maybe anyone can program *really badly*, but to program *well* requires "training, experience and the right kind of person", as you might say. Isn't it slightly arrogant that you, as a computer scientist, take this view?
--
Languages (Score:2)
If we're going to get worked up about languages, why aren't we worrying about Haskell [haskell.org], which doesn't seem to have a licensing scheme and in which Microsoft has taken an altogether too intense interest?
Encourage the Haskell folks to settle on a good license -- PAL, GPL, whatever they can stomach. ANYTHING but a Microsoft hegemony.
Rebol is just another offspring of the LISP family. Haskell is something different and interesting.
--
Amiga? (Score:1)
I dunno. I saw Rexx on OS/2, it's native platform (next to mainframe), and it beat the hell out of DOS batch files. If REBOL's even half as good as the glowing commentary I've heard, I may buy a New Amiga (if it ever arrives) just to play with...
[yah, yah, PERL, Linus, *nix, who-hoo! ]
Re:Check what you piss on before you piss... (Score:1)
*silence*
Thank you.
on one point tho.. yes, the psx2 has more cpu, but i don't see anyone doing vis/sim work on one of these things. oh, yah, that's because my 1.2GB world won't fit into it's local memory. Silly me, I somehow got the idea that 32MB should be enough for _anybody_..
I think you should take your own advice.
what problem does rebol solve??? (Score:1)
-russ
Re:I'll tell you what it's good for... (you didn't (Score:1)
-russ
wait, i don't understand (Score:1)
Re:I'll tell you what it's good for... (Score:1)
ARexx, Rebol, and AmigaOS 5 (Score:1)
It's an extremely versitile language - I use it for everything from mailbox checkers with GUI's/Appicons, to maintaining my web site - if you have a look at, all the news there is maintained by a small set of ARexx scripts. The email notification system is also ran by ARexx using direct SMTP via the virtual tcp: device.
To cap it all, ARexx is also piss easy to learn - certainly much easier (with generally more readable results) than the likes of Perl, and I've not even started going on about how easy it is to control applications via ARexx ports...
As it is, nobody knows what scripting language will be used predominantly for OS 5. Rexx is a possibility, and would be a very good choice if implimented in a similar way to the current Amiga, but the likes of Perl would be more familiar to people outside the Amiga. Rebol looks like a nice language, but to be honest, from what I've used of it, I'm not amazingly keen on it.
With a little luck, OS 5 will be flexible enough to let you use pretty much any scripting language you want. Modularity is meant to be one if the things they are aiming for...
Regards
Tom
Editor of AmiSITE, Owner of the ARexx mailing list
Re:Why? - Easier (Score:1)
FunOne
God Kills.
FunOne
Re: Playstation? (Score:1)
Playstation runs games on SPECIFICALLY built hardware at TV resolution (Ugh), not to mention, when you program for the playstation, u only program for ONE type of hardware, not various types of standards and what not. Besides, when I play games on my PC, even at 1024x768, I dont see LOAD TIMES. I'd love to see a Playstation do OGR, or RC5-64. Then we'd see the diffrence.
PlayStation blows.
N64 & Z64 kick ass.
FunOne
God Kills.
FunOne | https://news.slashdot.org/story/99/05/14/1921231/rebol-the-messaging-language | CC-MAIN-2017-04 | refinedweb | 7,388 | 72.26 |
This is probably pretty simple.. . but I can't figure it out.
I've got multiple SLSB methods that specify "Supports" for their container managed transactions.
I'd like to be able to determine in those methods whether or not a transactional context has been created prior to that method invocation and is active for this method invocation. Is this possible?
There is no portable way to do this, since access to the transaction manager
is not in the spec.
In jboss it is:
import javax.transaction.TransactionManager; TransactionManager tm = (TransactionManager) new InitialContext().lookup("java:/TransactionManager"); int status = tm.getStatus();
Adrian,
Thanks for your help -- I've been wondering for some time how to do this :)
Collin | https://developer.jboss.org/thread/31743 | CC-MAIN-2018-13 | refinedweb | 117 | 52.15 |
This was my solution to a function that should return the first pair of two prime numbers spaced with a gap of g between the limits m, n if these numbers exist otherwise nil.
This is a kata from codewars.com, it passed the preliminary test. But, When I submit it I receive an error message saying that due to an inefficiency in my algorithm it takes too much time(8000ms+).
Can someone point out to me what exactly is slowing down the code, and how it should be optimized?
require 'prime'
def gap(g, m, n)
range = (m..n).to_a
primes = []
range.each { |num| primes.push(num) if Prime.prime?(num)}
primes.each_index do |count|
prv , nxt = primes[count], primes[count+1]
if !nxt.is_a? Integer
return nil
end
if nxt - prv == g
return [prv, nxt]
end
end
end
Try this:
require 'prime' def gap(g, m, n) primes = Prime.each(n).select { |p| p >= m } primes[0..-2].zip(primes[1..-1]).find { |a, b| b - a == g } end gap(2, 1, 1000) #=> [3, 5] gap(6, 1, 1000) #=> [23, 29]
Prime.each(n).select { |p| p >= m } returns you the list of all primes between
m and
n. This has a better performance than building an array with all numbers between
m and
n and the checking each number in this array if it is a prime. It is also worth noting that
Prime.each uses the eratosthenes' sieve algorithm as a default.
primes[0..-2].zip(primes[1..-1]) builds an array of each pair. This is not the most efficient way to iterate over each pair in the
primes array, but I think it read better than dealing with indexes.
This might be another option:
require 'prime' require 'set' def gap(g, m, n) primes = Set.new(Prime.each(n).select { |p| p>= m }) primes.each { |prime| return [prime, prime + g] if primes.include?(prime + g) } end
the second version doesn't build an new array with all pairs, but instead just checks for each prime if
prime + g is included in the
primes array too. I use a
Set, because it improves
include? lookups to
O(1) (whereas
include? on arrays would be
O(n).
I am not sure which version will be faster and it might be worth it to run some benchmarks. The first versions needs more memory, but does less calculations. The performance probably depends on the size of the range. | https://codedump.io/share/IFFsfVLoLURA/1/ruby---possible-inefficient-algorithm | CC-MAIN-2016-50 | refinedweb | 410 | 84.27 |
kolkata, west bengal
delhi, delhi (1502 km from kolkata)
jaipur, rajasthan (1548 km from kolkata)
erode, tamil nadu (1920 km from kolkata)
0.00Get Quote
kolkata
west bengal, india
No. 1, Lindsay Street, New Market Area, Dharmatala, Taltala No. 1, Lindsay Street, New Market Area, Dharmatala, Taltala, Kolkata, West Bengal, India
We are Times Fibre Fill Private Limited Manufacturer, supplier of
kolkata
west bengal, india
Rcmc No. 177010, Jashar Pursura Hoogly, Kolkata, West Bengal, India
We are a Sole Proprietorship Firm, known as the most reputed Manu
kolkata
west bengal, india
811 Sitakundu Baruipur, Seetakundi, Kolkata, West Bengal, India
We are West Bengal kolkata Manufacturer of quilts, woolen quilts
Saree Wholesaler
Manufacturer of bed covers, bed sheets, doormats etc. add color and charm to any interior. A suitable bed linen, doormat or towels represents your personal sense of aesthetics. At our sh
manufacturer of Designer Bed Cover
Manufacturer of Home Textile Products
supplier of Non Woven Carpets
Wholesaler of curtains
import of bedsheet
Air Pillow Dealer
Wholesaler of Bengali Saree
manufacturer of Designer Carpet
Floor Carpet Retailer
Table cloth napkins exporter
Manufacturer Of Bedsheets In Howrah Kolkata, West Beng Kolkata, West Bengal give you more choices.
Is this page helpful?
Average Ratings 4.6 (209 Ratings) | https://www.textileinfomedia.com/kolkata/pillow-covers.htm | CC-MAIN-2019-22 | refinedweb | 206 | 52.7 |
Wrapper development¶
Pure python wrappers¶
Python wrappers aim to be an easy way for wrapping external code. The external code can be a a compiled wrapper. In any other cases, a Python wrapper is the recommended choice.
PythonFunction¶
The
PythonFunction is a
Function which
execution is done in a Python context.
Here is an example of how to use it:
import openturns as ot def compute_point(X): E, F = X Y = E * F return [Y] model = ot.PythonFunction(2, 1, compute_point) out_sample = model([[2, 3], [5, 8]]) model.gradient([2.0, 3.0]).
The evaluation can be done on several points to benefit from vectorization. It receives an input sample and must return an output sample. Here is an example:
import openturns as ot import numpy as np def exec_sample(Xs): # speedup using numpy XsT = np.array(Xs).T return np.atleast_2d(np.multiply(XsT[0], XsT[1])).T model = ot.PythonFunction(2, 1, func_sample=exec_sample)
The output sample can be a Sample, a Python list of list or a
Numpy array of array. This argument is optional. If
func_sample is
not provided and must compute a sample, the
func function is
called several time: once for each point of the sample. On contrary, if
only
func_sample is given and a point must be computed, the point
is inserted in a sample of size 1,:
import openturns as ot import openturns.coupling_tools as ct external_program = 'python external_program.py' def exec(X): # create input file in_file = 'input.py' ct.replace('input_template.py', in_file, ['@E', '@F'], X) # compute ct.execute(external_program + ' ' + in_file) # parse output file Y = ct.get('output.py', tokens=['Z=']) return Y model = ot.PythonFunction(2, 1, exec) out_sample = model([[2, 3], [5, 8]]) print(out_sample)
Some explanations of the code :
replace()replace
@Eand
@Foccurence found in
input_template.pyfile and write the result to
infilefile.
X[0]value will replace
'@E'token and
X[1]will replace
'@F'token.
The external program is launched with
execute(). The input filename is passed by parameters to the program.
get()get the value following
'Z='token in
output.pyfile.
Template file example
input_template.py:
E=@E F=@F
External program example
external_program.py:
#!/usr/bin/env python # get input import sys inFile = sys.argv[1] exec(compile(open(inFile).read(), inFile, 'exec')) # compute Z = F * E # write output h = open('output.py', 'w') h.write('Z=' + str(Z)) h.close
Output file example
output.py:
Z=42.0
More examples¶
The replace
replace() function
can edit file in place. It can format values in anyway. Actually, values
can be of type “string”, if not, they are converted using str() Python
function:
can launch an external code.
The
get_value() function
can deal with several type of output file.
works actually the same way the get_value function do,
but on several parameters:
The
get_regex() function
parses output files. It is provided for backward compatibility:
Y = get_regex('results.out', patterns=['@Y2=(\R)']) # -0.89
Performance considerations¶
Two differents cases can be encounter when wrapping code: the wrapping code is a mathematical formula or it is an external code (an external process).
Symbolic formula¶
A benchmark involving the differents wrapping methods available from has been done using a dummy symbolic symbolic (muParser) function:
model = ot.SymbolicFunction(['x0','x1'], [. | https://openturns.github.io/openturns/1.16/developer_guide/wrapper_development.html | CC-MAIN-2021-31 | refinedweb | 541 | 51.85 |
So i made a script and it works fine but it could be better.
The code will be written below, the purpose of the script is to make a beep sound when the timer runs out (that works) but i also want to show on the screen how many hours minutes and seconds it takes (and possibly days, weeks, months, years)
If you run this script it will make a beep in two hours but what it will show to the user is this:
2 hours, 120 minutes and 7199 seconds
1 hours, 119 minutes and 7198 seconds
So i want to make the script smarter and make it really work as a countdown clock. And make it say this:
2 hours, 0 minutes and 0 seconds
1 hours, 59 minutes and 59 seconds
etc...
like a real countdown clock, what my script just does is show the total time instead of calculating how many minutes of the hour are left and how many seconds of a minute are left.
import os,time,winsound hours=2 minutes=0 sec=0 counter=hours*3600+minutes*60+sec mins=int(counter/60) hr=int(mins/60) while counter > 0: counter-=1 print('%i hours, ' %hr + '%i minutes ' %mins + 'and %s seconds' %counter) mins=int(counter/60) hr=int(mins/60) time.sleep(1) Freq = 1000 # Set Frequency To 2500 Hertz Dur = 500 # Set Duration To 1000 ms == 1 second winsound.Beep(Freq,Dur) | https://www.daniweb.com/programming/software-development/threads/458572/countdown-timer-with-feedback | CC-MAIN-2018-34 | refinedweb | 241 | 61.03 |
Hi,
I am trying to get a python script started at boot and I have tried many methods such as rc.local, .bashrc, systemd and crontab, but unfortunately I’ve not been successful. Here is my code:
#!/usr/bin/env python3
import cayenne.client #Cayenne MQTT Client
from time import sleep
from gpiozero import Button
button=Button(2) # Declaring button pin 2
# Cayenne authentication info. This should be obtained from the Cayenne Dashboard. MQTT_USERNAME = "x" MQTT_PASSWORD = "x" MQTT_CLIENT_ID = "x" client = cayenne.client.CayenneMQTTClient() client.begin(MQTT_USERNAME, MQTT_PASSWORD, MQTT_CLIENT_ID) def send_on(): client.virtualWrite(3, 1, "digital_sensor", "d") #Publish "1" to Cayenne MQTT Broker Channel 3 print("Button pressed\n") def send_off(): client.virtualWrite(3, 0, "digital_sensor", "d") #Publish "0" to Cayenne MQTT Broker Channel 3 print("Button released\n") button.when_pressed=send_on #When button pressed run send_on function button.when_released=send_off #When button released run send_off function while True: client.loop()
After I log in through shh I get the following error:
Traceback (most recent call last):
File “/home/pi/my-files/motor.py”, line 2, in
import cayenne.client #Cayenne MQTT Client
ImportError: No module named ‘cayenne’
All the help will be greatly appropriated.
Thanks | http://community.mydevices.com/t/running-python-at-boot-problems/5788 | CC-MAIN-2018-39 | refinedweb | 195 | 51.85 |
On Twilio's Paste design system team, we’re often curious about who uses our work and how they use it. Besides being generally interesting, this information helps us track the adoption of our system, which helps clarify the business case of our work, while also informing us for our future decision making.
This post will show you how we're tracking Paste's usage and how we use this data to improve how we build our design system.
Looking for answers
With all of the potential value of that usage data hovering over our heads, we decided we’d finally chase down some answers. First stop: our package’s NPM page.
Unsurprisingly, it didn’t provide much information. The weekly download count didn’t help us understand who is using our packages, which parts of our packages are used, and how they are used. It wasn't a very strong signal that could drive our decision-making.
Even Ol’ Reliable, Google, didn’t really have an answer for us:
We realized this wasn’t going to be easy. Most analytics tools are designed for traditional products like websites and desktop apps, not for code and NPM packages. The word “telemetry” was bounced around internally, but the process of building such a system required more time and effort than we initially wanted to budget for.
How we collect data
The fact that you’re reading this blog post means that we found another way: scraping Github. At Twilio, most of the company’s code lives on an Enterprise Github instance with very generous rate limits. This means we could scan through the entirety of the enterprise Github for the information we seek. For the projects living on regular Github, we added the Github organizations to an ancillary crawl list.
Using the excellent Octokit library, we didn’t have to write much code to get a lot accomplished. Here’s how we grab every organization:
async function getAllOrgs() { try { const response = await octokit.paginate('GET /organizations'); return response; } catch (error) { console.error(error); } }
And here’s how we grab every repository under every organization:
async function getAllRelevantReposForOrg(org) { try { const allRepos = await octokit.paginate('GET /orgs/:org/repos', { org, type: 'all', }); return cleanReposResponse(allRepos); } catch (error) { console.error(`[fn 'getAllRelevantReposForOrg']:`, error); } }
The “cleanReposResponse” function trims the response by:
- Only keeping the name, language, and last updated fields from the response
- Removing any repositories that haven’t been updated in a few years
- Keeping only the repositories with code in certain programming languages like Typescript and JavaScript, which are relevant to our system.
At this point we’re very close, but there may still be some repositories in this list that don’t pertain to us. So we then fetch the
package.json files in each repository. Some repositories have several
package.json files, such as monorepos, so we first run a search to find their locations:
const response = await octokit.search.code({ q: `repo:${orgName}/${repo.name}+filename:package.json`, });
Then we get the content of the
package.json files and map them back up to the repository and organization:
async function getPackageJson(owner, repo, packagePath = Endpoints.PACKAGE_JSON) { try { const response = await octokit.repos.getContent({ owner, repo, path: packagePath, }); // De-encode the response let pkg = JSON.parse(Buffer.from(response.data.content, response.data.encoding).toString()); // We only care about some packageJson fields, drop the rest for space return lodash.pick(pkg, AllowedPackageJsonFields); } catch (error) { if (error.response == null || error.response.status === 404) { console.log(`[getPackageJson] Processing: ${owner}/${repo} -- No package.json found.`); } else { console.log(error.response); } } }
We now know:
- which organizations have front-end or Node.js code
- which repositories have a
package.jsonfile
- and all the information contained within their
package.json, such as project name, version, and dependencies
Since all of the Paste Design System packages are namespaced, we can scan the
package.json files to find repositories with the
@twilio-paste/ prefix.
Our first report
The very first report we generated looks something like this:
{ "numberOfOrgs": 10, "numberOfRepos": 20, "orgs": { "cool-org": { "cool-repo": { "root-package.json": { "@twilio-paste/core": "6.0.1", "@twilio-paste/icons": "4.0.1" }, "subdir-package.json": { "@twilio-paste/core": "6.0.1", "@twilio-paste/icons": "4.0.1" }, }, ... }, } }
This report shows us how many organizations and repositories at the company are using Paste, plus which packages and versions they're using. Since this is an exhaustive scan of the entire enterprise Github instance, the report is very accurate. Using this information, we tracked our adoption curve growth from 7 organizations and 11 repositories on March 22, 2020 to 19 organizations and 60 repositories one year later.
For science, we were able to run a slightly modified version of the scan to track the adoption of other component libraries. This helped keep track of how predecessor systems were being deprecated, and which parts were the stickiest.
The version numbers proved really helpful when we discovered a customer-facing bug with the
@twilio-paste/core package prior to version 4.2.4. Using this report, we were able to discover which teams had affected products and support them in fixing the problem. This sparked something of a lightbulb moment for us; not only is reporting a tool to track adoption, but it’s also a tool to empower support. That experience demonstrated that we could provide targeted outreach to Paste consumers when bugs arose, rather than an all too easy to miss company announcement.
Tracking more than adoption
Well, what else? Now that we know which repositories are using Paste, we can also... clone and scan every repository using Paste.
We used NodeGit to clone every relevant repository into a “repos” folder. Then we scanned them with an excellent tool called react-scanner to track what components are imported, how often they’re imported, which props they’re using, and in which files they appear.
Here’s a snippet showing our Anchor component being misused:
{ "importInfo": { "imported": "Anchor", "local": "Anchor", "moduleName": "@twilio-paste/core", "importType": "ImportSpecifier" }, "props": { "onClick": "(Identifier)" }, "propsSpread": true, "location": { "file": "cool-org/cool-repo/packages/ui/src/components/InternalSpaLink.tsx", "start": { "line": 16, "column": 5 } } }
We can see that the Anchor component is only being provided an
onClick in a Typescript (.tsx) file. However, Anchors should have a
href prop.
This information is valuable because it lets us check if our typings could be wrong or if the user added a
// @ts-ignore. More than that, it tells us that someone is trying to use our component in a way we didn’t we didn't design for or document well enough. We can use this information to reach out to the team by opening an issue or by messaging the developers on Slack, striking up a conversation with enough context to be helpful.
A few of the ways this data has changed the way we work
- Dealing with breaking changes in Paste. We’ve even used this information to figure out if we can or should make breaking changes to our API. We can answer, “how widely is this component or prop used?”. And if we do go forward with the breaking change, we can communicate directly with the affected parties.
Assisting teams with onboarding and reducing bounce rates. We can track who our newest adopters of Paste are by seeing when a new repository adds a Paste dependency and reach out offering a helping hand. We can also track who stops using Paste and reach out to ask why.
Helping us refine our roadmap. Remember I mentioned how we modified our code to scan for other component libraries in order to track which parts were particularly sticky? That report, along with our increased communication with teams, has helped us to refine our roadmap and prioritization. We have unprecedented visibility into the most pressing needs of our consumers.
Future plans for reporting
Our reporting tooling has become such a central part of how we work that we’re investing in expanding our tooling. Some ideas we’re working on this year include:
- Implementing a Github bot for automatic workflows. We think it would be a great developer experience for Paste consumers if we could automatically open Github Issues or PRs on affected repositories for breaking changes, with detailed “how-to upgrade” guides.
- Expanding our dataset. Right now, we filter our dataset heavily to be relevant to Paste. However, this tooling has the potential to help other teams at Twilio, so we want to capture data that can be helpful to our colleagues in other teams, helping them improve their workflows the same way ours has.
- Integrating into a Business Intelligence tool to empower more people to dig into the results. Right now, our results are stored as JSON files that we commit to Github. This brought us really far, but is not accessible to non-engineering-minded folks.
If any of this seems interesting to you, we’re hiring! paste [at] twilio [dot] com
Shadi is a Staff UX Engineer working on the Paste design system. He’s invested in using data to build high-quality products – and to describe what high-quality products are. He can be reached at sisber [at] twilio.com.
image processing in notebook server
I have a local notebook server and I'd like to be able to use the image manipulation modules from scipy. They seem to import just fine, but I never get any images, only what appear to be commands.
import pylab
from scipy import misc
A = misc.face()
pylab.imshow(A)
Outputs
AxesImage(80,48;496x384)
Do I need some additional configuration, or a plugin to make it work?
Is this the legacy Sage notebook or the Jupyter notebook?
The Sage notebook. Is it possible to run a server for Jupyter notebooks?
I think it's actually the default in new versions of Sage, or becoming the default--I forget the status. You can start it though with
sage --notebook=jupyter. That doesn't necessarily answer your question, but I don't personally know how to use the legacy notebook. | https://ask.sagemath.org/question/40181/image-processing-in-notebook-server/ | CC-MAIN-2018-05 | refinedweb | 145 | 66.44 |
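For completeness, the behaviour in the original question — seeing only `AxesImage(80,48;496x384)` — is just `imshow` echoing its return value without drawing anything. In a Jupyter notebook, `%matplotlib inline` makes figures render automatically; in a plain script you have to call `show()` or save the figure. A minimal sketch (assumes matplotlib is installed; `scipy.misc.face` has been removed from recent SciPy releases, so a NumPy array stands in for the image here):

```python
import matplotlib
matplotlib.use("Agg")        # headless backend for a script; in Jupyter use %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

A = np.random.rand(48, 64)   # stand-in for scipy.misc.face()
img = plt.imshow(A)
print(type(img).__name__)    # -> AxesImage; this repr is all the question's code ever showed
plt.savefig("face.png")      # or plt.show() in an interactive session
```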
strcpy(buffer, EditBoxForPersonsHandle->Text.c_str());
or declare buffer as an AnsiString and then do the following:
Buffer = EditBoxForPersonsHandle->Text;
your header file looks something like:
class YourProject : TForm
{
public:
// functions
private:
// declare your TEdit control/variable here
>> Handle = testing->GetText();???
You could use testing->Text; and because you are declaring as char[] you can use testing->Text.c_str();
I assume you have 2 forms with TEdits and you want to get the text from an editbox in one form to an editbox in another or something like...
Anyway:
The TEdit you want to read from is declared in the form as "published", so it is accessible from outside. Using Formx->TheEditBoxIWantToRead->Text will work.
If it is a class then declare a TEdit in the private area and a function to set in the public area like:
private:
TEdit *ClassEdit;
void SetTEditBox(TEdit *FormEdit)
{
ClassEdit = FormEdit; // just keep the pointer -- no need to new a TEdit here
}
Note that the class does not own ClassEdit -- the form does -- so don't delete it in your class destructor...
Assign the formedit from your main unit AFTER you create the instance of the class by calling the public function.
George Tokas.
What I am trying to do is get the text from an EditBox when I push a button (let's call it Button1) and store it in a variable.
//------------------------
#include <vcl.h>
#pragma hdrstop
#include "Unit1.h"
//------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TEnterHandleForm *EnterHandleForm;
//------------------------
__fastcall TEnterHandleForm::TEnterHandleForm(TComponent* Owner)
: TForm(Owner)
{
}
//------------------------
void __fastcall TEnterHandleForm::Button1Click(TObject *Sender)
{
// below is the code, the error is now telling me "Lvalue required"
/* not sure if it matters, but when i click on the "EditBox" it takes me to the function for the edit box (so i can code it)
but if there's no code (i.e. I didn't code anything in the edit box) the function just disappears.
*/
char buffer[40];
buffer[0] = '\0';
buffer = EditBoxForPersonsHandle->Text.c_str();
} | https://www.experts-exchange.com/questions/21905995/TEdit-out-of-scope.html | CC-MAIN-2018-09 | refinedweb | 335 | 53.31 |