What is the name of the effect that causes a plane in a dive to go up?
What's the name of the effect that causes the aircraft to automatically pull out of a dive and buffet up due to aerodynamic lift produced at high speeds?
"Pitch stability" ("longitudinal stability")? That's what's tending to hold the angle-of-attack constant at wherever you trimmed it. "Returning to trim speed"? "Excess lift"? "Newton's second law of motion" (F=ma)-- a net force must cause an acceleration? "Net centripal force = mass * velocity squared / radius"? I'm leaning toward the last one but that's an equation not a name--!! I think we're back to "Pitch stability" ("longitudinal stability").
PS "buffet" has a specific meaning in aviation; unless you are trying to suggest that the angle-of-attack is approaching the stall angle-of-attack, causing the pre-stall buffet to occur, you should pick another word. "pitch up" would seem to work fine. Maybe best to wait to actually answer till this is clarified.
An aircraft with positive pitch stability (longitudinal stability) tends to maintain the trim[1] angle-of-attack, regardless of airspeed. At any other angle-of-attack, the "teeter-totter" balancing act between the wing and tail would be off-kilter, creating a pitch torque that would change the angle-of-attack.
At a given angle-of-attack, only one airspeed is compatible with level flight[2]. If the airspeed is too low, lift will be less than weight (or more precisely, less than the component of weight acting parallel to the lift vector, which is relevant when the flight path is aimed steeply upwards or downwards), and this will cause the flight path to curve downwards (in the aircraft's reference frame[3]). If the airspeed is too high, lift will be greater than weight (or more precisely, greater than the component of weight acting parallel to the lift vector), and the flight path will curve upwards (in the aircraft's reference frame).
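To make the "curving upwards" part quantitative: treating the flight path as locally circular with radius $R$, and writing $\gamma$ for the flight-path angle, the net force perpendicular to the flight path supplies the centripetal acceleration, so

$$L - W\cos\gamma = \frac{m V^2}{R}.$$

Whenever lift exceeds the component of weight opposing it, $R$ is finite and the flight path bends toward the lift vector, i.e. upwards.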
Since the angle-of-attack tends to remain constant, it follows that an upward curve of the flight path must be accompanied by a rise of the nose.
This whole dynamic is indeed one step in a "phugoid" oscillation. Other than that, there's no one specific name for it, (except perhaps "pitch stability", or perhaps simply "pitching upwards"!)
We've made one slight oversimplification here. In the second sentence we associated a pitch torque with a change in angle-of-attack. In fact, any change in the aircraft's pitch rotation rate requires a pitch torque. So some pitch torque is in fact being generated as the nose starts to rise. This implies that the angle-of-attack cannot be remaining exactly constant. To explain in more detail exactly what is going on here, is far beyond the scope of the original question. (Hint--the curving nature of the "relative wind" when the flight path is not linear plays a role in altering the apparent "decalage" between the wing and the tail.) The main thing to keep in mind is that the upward curve of the flight path is fundamentally being driven by a force imbalance, not a torque imbalance. There's too much speed, so there's too much lift, so the flight path must curve upwards. Meanwhile, the aircraft's pitch stability dynamics tend to maintain the trimmed angle-of-attack, keeping the nose aimed in the "right" direction in relation to the flight path, so when the flight path starts curving upwards, the nose must rise as well.
Footnotes--
The concept of "trim" is not limited to what happens when the pilot lets go of the controls or exerts no force on the controls. It also could include what happens when the pilot holds the controls in a fixed position.
And to a first approximation, for shallow dive or climb angles, only one airspeed is compatible with linear flight, regardless of whether or not altitude is exactly constant.
Re "in the aircraft's reference frame"-- think about which way the flight path is curving in the instant during a loop when the flight path is aimed straight up or straight down.
That is part of what happens in a phugoid (the “downhill”).
A phugoid or fugoid is an aircraft motion in which the vehicle pitches up and climbs, and then pitches down and descends, accompanied by speeding up and slowing down as it goes "downhill" and "uphill".
Welcome to Av.SE!
The short period mode also causes a plane to pitch up and down, but only for a few seconds, whereas the phugoid (long period mode) can cause the plane to pitch up and down for more than 30 seconds.
Well, since airplanes trim to an AOA, if you force it into a dive by a down elevator input while trimmed for level flight, you've forced it to a lower AOA (higher speed) than it's trimmed for. If you let go, it will pull out because it's trying to "weathervane" (pitch stability is simply a weathervaning tendency in the vertical plane about a specific offset angle determined by trim forces) back to the AOA it is trimmed to.
So the forces causing it to pull out of the dive are trim forces, trim being an opposing force balance system of nose down and nose up pitching moments that achieve equilibrium, or trim, at some angle of the body to the airflow.
Any buffeting that occurs (like in the movies) in the real world will be due to shock waves causing flow separations and turbulence. But for that the dive has to get into the transonic speed range. A high-speed dive that stays within the airplane's certified speed range shouldn't experience any buffeting.
If you push it over and hold it in a dive, while trimmed for the original speed, once you let go, it'll smoothly pull up on its own, seeking to regain its trim AOA (it'll overshoot its trim AOA while doing that and hunt up and down in ever smaller deviations until it's fully regained its trim AOA - the Phugoid Oscillation).
The "weathervaning" effect (return to trim a-o-a) would only account for a few degrees of pitch attitude change, were it not for the fact that the change in a-o-a causes a change in lift coefficient which causes a change in lift force which causes a change the direction of the flight path --
Well it's not a pure weathervane, since the weathervane is pivoting about the CG but is held up by lift forces, so there is that whole interaction. But basically the neutral point wants to trail behind the C of G, like a weathervane, and the trim force balance determines exactly at what offset to the flow that ideal trail position is found. Yaw stability is pretty much the same, minus the supporting lift force, but if you roll 90 deg and fly knife edge like a Pitts S1, with the fuselage now the wing, yaw stability and trim forces become pitch stability and trim forces.
Is the damped oscillation inherent? Can some aircraft and/or configurations be critically damped (or even overdamped)?
I would say airplanes tend to be critically damped or close to it in the stick fixed case, and underdamped in the stick free case (phugoid behaviour). FAR 25 just says dynamic stability must be "heavily damped". The test requirements do specify phugoid limits when displaced a certain amount from trim speed in the stick free case.
What is the name of the effect that causes a plane in a dive to go up?
The best answer is simply "Lift". Lift exists, and exceeds weight (or strictly speaking, exceeds the component of the weight vector that acts parallel to the lift vector), so the flight path curves upward, and the nose rises.
It's called static stability.
A descending or diving aircraft has a gravity component pulling it forward, which increases its forward speed.
Increasing forward speed increases lift by the square of the velocity, causing the flight path to curve upwards.
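For reference, that square-of-velocity dependence comes straight from the standard lift equation,

$$L = \tfrac{1}{2}\,\rho V^2 S C_L,$$

where the air density $\rho$, wing area $S$, and lift coefficient $C_L$ stay roughly constant at the trimmed angle of attack, so lift grows with $V^2$.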
As speed decreases, the process reverses, eventually settling at a velocity where flight remains linear. We then control climbing, descending, or level flight by adding or subtracting thrust with the throttle.
One must be aware that with a tiny tail and the weight too far forward, the wing torque on the center of gravity can overpower the tail and continue to push the nose down; this is your lawn dart.
This is especially true if the wing center of lift moves backwards as velocity increases and/or downwash from the wing helps stall the tail.
"...your lawn dart" refers to William Walker III's answer (currently after in the default sort view).
@PeterMortensen but lawn darts don't have wings. What should that tell you?
It's Called the "(Total) Aerodynamic Force"
The Aerodynamic Force is the net effect of the lift force produced by the aerofoil, and the drag force produced by the same. Because these are integrated, the aerodynamic force tends to be net-tilted back away from the vertical. This force acts on the center of lift (wing) rather than the center of mass of the aircraft as a whole. As such, under certain wing conditions, this force will impart rotation in the pitch axis.
As airspeed increases, both of these forces increase, and their integrated net effect increases. At some point it's enough to overwhelm the natural resistance from the tail and becomes the dominant effect.
Where and if this happens depends entirely on aircraft design. Some craft will NOT recover like this; they'll just lawn-dart without control input (because their wings produce lift in a slightly-forward direction). Some aircraft will do this very aggressively, as a function of deliberate design (it makes them naturally recover from stalls more quickly).
AIUI, this is also why, all things equal, your nose pops up when you add flaps.
This is a nice description of the effect, but the question was about its name.
@Bianfable Its name is the Aerodynamic Force. The rest of it is just explaining why that's its name.
Let us continue this discussion in chat.
(Deleting comments, this answer is being discussed in chat-- )
It might help (when considering drag) to look at the entire aircraft. Also, the "nose popping up when flaps are added" (very noticeable with a 172) is from wing downwash on the tail. Notice a "lawn dart" would actually need a force (from somewhere in the aircraft) to drive the nose down. You are on the right track, though, about wing area playing a role in excessive directional stability. A plane needs to turn to pull out.
Can you add some examples of the two types of aircraft design?
That is always true for airplanes and therefore meaningless.
|
STACK_EXCHANGE
|
Pig Princess posted November 10, 2022:

With the Maxwell rework, as many have already experienced, you need the mouse to use a spell. While doing so you can cast the spell via the primary action and cancel casting via the secondary action; however, players have two options to move and attack: via mouse or keyboard (WASDF). For a player who runs and attacks via keyboard it's fine to do both casting and cancelling via mouse, but if you run with the mouse, let alone attack with it, it's impossible to simultaneously run/attack and be ready to cast a spell, let alone reach that smoothness and convenience. Since mouse attacking was made a thing relatively recently, I thought the following suggestion has a chance to pass.

Make the casting action a third type of action, in addition to primary and secondary, and let players rebind a control to it, including extra buttons on the mouse. This way I, for example, as a player who runs with the mouse, would be able to bind spell casting to the mouse wheel or a button of my choice on the keyboard and cast spells while still being able to run, keeping left-click and right-click for the primary and secondary actions while managing inventory and during the play session in general. If it's absolutely not possible, I suppose I would have to learn how to move with WASD after all with relatively ok-ish smoothness, but if I could keep moving with the mouse while not sacrificing combat efficiency as Maxwell I would be incredibly happy. Since an alternative way to move and even attack is a thing, I think making it a fully functional option is only natural.

If such a change was made, other character-specific interactions like switching Abigail's aggro mode or desummoning her, as well as releasing Wortox's souls, could be treated as the third action and therefore be bound to something (and be configurable). The reason I mention them as well is that when moving with WASD it's much more convenient to use the mouse for those actions than when moving with the mouse, since moving the cursor to the inventory bar and back to its original place not only stops movement, but also takes more time compared to the WASDF variant and thus makes those actions slower to perform. If anyone wanted to keep the current situation as is, they could just bind the third action and the primary action to the same control, or set the third action to no bind.

Edit (from message below): to elaborate more on how it works for mouse movement. I'm selecting the codex with the mouse or a hotkey and choosing the spell with the mouse, but after that I need to click on the screen to actually cast it. The problem is with the second part, where I want to be able to run and cast, not with selecting the codex or spell.
|
OPCFW_CODE
|
Additional Practice: Woof Woof Welcome to Doggo Bark Bark
- Access information from an API using a GET request and use it to update the DOM
- Listen for user events and update the DOM in response
- Send data to an API using a PATCH request
THIS GOOD APPLICATION FOR LOOKING AT DOGS BOW WOW.
WHEN LOOKING AT PUP PUPS USER SHOULD BE ABLE TO:
- CLICK ON DOGS IN THE DOG BAR TO SEE MORE INFO ABOUT THE GOOD PUPPER;
- MORE INFO INCLUDES A DOG PIC, A DOG NAME, AND A DOG BUTTON THAT INDICATES WHETHER IT IS A GOOD DOG OR A BAD DOG;
- CLICK ON GOOD DOG/BAD DOG BUTTON IN ORDER TO TOGGLE PUP GOODNESS;
- CLICK ON "FILTER GOOD DOGS" BUTTON IN ORDER TO JUST SEE GOOD DOGS OR SEE ALL DOGS IN DOG BAR.
STEP 1: VIEW THE DATA
All of the dog data is stored in the `db.json` file. You'll want to access this data using `json-server`. If you don't have `json-server` installed already, install it first with:

$ npm install -g json-server

Then run the server:

$ json-server --watch db.json

This will set up the data on a server using RESTful routes at http://localhost:3000/pups. Go ahead and head to that URL in your browser to view the data. Familiarize yourself with the attributes for each pup. Try going to `/pups/:id` to see an individual pup as well.
STEP 2: ADD PUPS TO DOG BAR
On the page, there is a `div` with the id of `"dog-bar"`. When the page loads, use `fetch` to get all of the pup data from your server. When you have this information, you'll need to add a `span` with the pup's name to the dog bar for each pup.
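A minimal sketch of one way to do this, assuming the json-server from Step 1 is running on port 3000; the helper names here are illustrative rather than required by the lab, and `showPupInfo` is defined in the Step 3 sketch below:

```js
// Render each pup as a clickable span inside div#dog-bar.
const DOG_BAR = document.querySelector('#dog-bar');

function renderDogBar(pups) {
  DOG_BAR.innerHTML = '';
  pups.forEach((pup) => {
    const span = document.createElement('span');
    span.textContent = pup.name;
    span.addEventListener('click', () => showPupInfo(pup)); // see the Step 3 sketch
    DOG_BAR.append(span);
  });
}

// Fetch the full pup list from json-server.
function loadPups() {
  return fetch('http://localhost:3000/pups').then((resp) => resp.json());
}

document.addEventListener('DOMContentLoaded', () => loadPups().then(renderDogBar));
```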
STEP 3: SHOW MORE INFO ABOUT EACH PUP
When a user clicks on a pup's `span` in the `div#dog-bar`, that pup's info (including its `isGoodDog` status) should show up in the `div` with the id of `"dog-info"`. Display the pup's info in the `div` with the following elements:

- an `img` tag with the pup's image url
- an `h2` with the pup's name
- a `button` that says "Good Dog!" or "Bad Dog!" based on whether `isGoodDog` is true or false. Ex:

`<img src="dog_image_url" /> <h2>Mr. Bonkers</h2> <button>Good Dog!</button>`
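A sketch of a matching helper, assuming each pup object exposes `name`, `image`, and `isGoodDog` attributes (check `db.json` for the exact field names); `toggleGoodDog` comes from the Step 4 sketch:

```js
// Show one pup's picture, name, and goodness button inside div#dog-info.
function showPupInfo(pup) {
  const dogInfo = document.querySelector('#dog-info');
  dogInfo.innerHTML = `
    <img src="${pup.image}" />
    <h2>${pup.name}</h2>
    <button>${pup.isGoodDog ? 'Good Dog!' : 'Bad Dog!'}</button>
  `;
  dogInfo
    .querySelector('button')
    .addEventListener('click', () => toggleGoodDog(pup)); // see the Step 4 sketch
}
```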
STEP 4: TOGGLE GOOD DOG
When a user clicks the Good Dog/Bad Dog button, two things should happen:
- The button's text should change from Good to Bad or Bad to Good
- The corresponding pup object in the database should be updated to reflect the new isGoodDog value
You can update a dog by making a `PATCH` request to `/pups/:id` and including the updated `isGoodDog` status in the body of the request.
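A sketch of that request (json-server responds to a PATCH with the updated object, which makes re-rendering straightforward; as before, the attribute and helper names are assumptions):

```js
// Flip isGoodDog on the server, then re-render the info card with the server's response.
function toggleGoodDog(pup) {
  fetch(`http://localhost:3000/pups/${pup.id}`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ isGoodDog: !pup.isGoodDog }),
  })
    .then((resp) => resp.json())
    .then((updatedPup) => showPupInfo(updatedPup));
}
```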
BONUS! STEP 5: FILTER GOOD DOGS
When a user clicks on the Filter Good Dogs button, two things should happen:
- The button's text should change from "Filter good dogs: OFF" to "Filter good dogs: ON", or vice versa.
- If the button now says "ON" (meaning the filter is on), then the Dog Bar should only show pups whose isGoodDog attribute is true. If the filter is off, the Dog Bar should show all pups (like normal).
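One possible approach, building on the Step 2 sketch; the button's id (`#good-dog-filter`) is a guess here, so use whatever id the provided HTML actually gives the filter button:

```js
// Toggle the filter, relabel the button, and re-render the dog bar accordingly.
let filterIsOn = false;

document.querySelector('#good-dog-filter').addEventListener('click', (event) => {
  filterIsOn = !filterIsOn;
  event.target.textContent = `Filter good dogs: ${filterIsOn ? 'ON' : 'OFF'}`;
  loadPups().then((pups) => {
    renderDogBar(filterIsOn ? pups.filter((pup) => pup.isGoodDog) : pups);
  });
});
```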
Missing Defer Attribute in script tag
Add defer to the script tag on line 7.
|
OPCFW_CODE
|
Deep Dive into Writing Back to Axonius (Part 3 – Custom Enrichment)
In last month’s article, we discussed the Data Enrichment fields, how they are used, and how to enable them. That article can be found here: https://support.axonius.com/hc/en-us/community/posts/4409146455191-Deep-Dive-into-Writing-Back-to-Axonius-Part-2-Data-Enrichment-
The month prior to that, the article I wrote covered the difference between tags and custom fields. You can find that article here: https://support.axonius.com/hc/en-us/community/posts/4407260908055-Deep-Dive-into-Writing-Back-to-Axonius-Part-1-Tags-vs-Custom-Data- (If you like it, please leave a message or upvote.)
This month, we are talking about Custom Enrichment. Custom Enrichment is slightly different from Data Enrichment in that we can build out MASSIVE data enrichments for a specific adapter.
- What is Custom Enrichment?
- How is it different from Data Enrichment?
- How is it different from Custom Fields?
- Enrichment Statement
- More Nuance info
What is custom enrichment?
Custom enrichment is a function of the Axonius system where a customer takes custom data, matched either by whether an asset is in a subnet or by whether something in the adapter contains or equals a key word noted in the CSV index, and adds that data to the adapter data set. When the user uploads a CSV with the key field as well as the enrichment outputs, the key fields will connect to the linked fields and provide key attributes that can be queried on or used in enforcements. This is a simple process to scale, and it will give you a large amount of data back on your adapter if you choose to add data that is meaningful to you.
How is it different from Data Enrichment?
Data enrichment has most of the functionality that Custom Enrichment has, except Data Enrichment is meant to update the “network interface fields” among the aggregated fields and does not touch anything in the adapter-level data structures.
How is it different from Custom fields?
While custom fields do touch on many of the same types of attributes that are found in the system, once again, these either create updates to aggregated fields OR create a functional 3rd-party field that can be as dynamic as Custom Enrichment fields but sits as a separate data adapter in the form of Custom Fields. The other key difference between Custom Fields and Custom Enrichment is that Custom Enrichment can be loaded by CSV and can look up a contains, equals, or within-subnet match on the CSV input. All of these things can be done in the query wizard for Custom Fields; HOWEVER, it cannot be done at scale, so each rule is created individually and must be updated as such. Custom Fields can be difficult to scale for large rule sets. The roadmap is projected to release folders; this will allow for much easier organization of these rules.
This feature can be found in the settings icon (top right hand corner, there is a sprocket that says “system settings” if you hover over it) > go to the Global Settings Tab > scroll to the Custom Enrichment (BETA) section.
Axonius documentation does a pretty good job of walking the user through the enrichment statement. If you want to check it out, click here: https://docs.axonius.com/docs/custom-enrichment?highlight=custom%20enrichment
There are a few small nuances to be aware of that are not spelled out in the documentation. First, see below for the enrichment statement. When a customer would like to enrich the JAMF adapter with information found in the type and distribution pages, they may write the following statement:
enrich 'devices' with (kernel,date_annouced,released_date) on (source.Type_and_Distribution in device.jamf_adapter.[OS: Type and Distribution])
In this exercise, the statement breaks down as follows:
- enrich 'devices' -- this indicates that you are enriching the devices table.
- with (kernel,date_annouced,released_date) -- this represents the fields that you want to bring in from the CSV.
- on (source.Type_and_Distribution) -- "source." is the prefix added to the name of the CSV field you are matching with. In our example, Type_and_Distribution is what is on the source CSV that we are trying to use as the adapter key. If you look below, this is where the source is getting the key from.
- in -- this represents a contains match. If you wanted to do an equals in this section, you would use "==", or, for a specific subnet, "in_net". Be mindful that if you use in or in_net, make sure the source field represents the type of lookup that would support this. If you look below at the chart, you will see why we are looking at a contains vs. an equals: in the CSV we want to find anything that contains "OS X 10.0" and match it with source.Type_and_Distribution, which has a nomenclature that looks more like "OS X 10.0.7".
- device.jamf_adapter.[OS: Type and Distribution] -- match the previous information to OS: Type and Distribution on the JAMF adapter. If you wanted to add this to Qualys, this would look like device.qualys_adapter.[OS: Type and Distribution].
In order to add this formula, please go to the top right corner (settings button), then the Global Settings tab, and then scroll down to the section that says Custom Enrichment (BETA). Toggle the setting that says “Enable Custom Enrichment” and then enter your statement in the “enrichment statement” field. In the input, choose the CSV file that you will upload and upload the file. Please make sure to limit the fields to just what you are uploading and do not include a header. UTF-8 is preferred with this upload.
More Nuance info
Sometimes you simply want to add everything that is on the CSV into the enrichment category. We have a fix for you as well. The way you add EVERYTHING on the CSV without having to adjust the names in the formula after “with” would be to add an “*”. In this example, if we wanted to add everything that was brought in on the CSV above, you would put in the following formula:
enrich 'devices' with (*) on (source.Type_and_Distribution in device.jamf_adapter.[OS: Type and Distribution])
As you can see, everything is mostly the same. However, with the asterisk in the formula, the field acts as a wildcard. Do not use the wildcard in any other place, as the logic of the formula will not allow it.
This is what the field will end up looking like with the wildcard enrichments:
- Can it be used on an aggregated field? As mentioned above, no.
- Can it be put on many adapters? You can add the same information to multiple adapters. Once you have added one CSV, click the plus bar below the input and start with your next CSV adapter.
- Is there a limit to the fields? We have not experienced any limit to enrichments; however, you may want to decide what quantity is best for your use case.
Awesome - now I need to test this out in our environment
Question - Is the adapter name the same as the adapter connection label, or is it the overall adapter name?
There are instances when you have multiple instances of the same Adapter such as 3 different Active Directory Adapters. To distinguish between the 3 Adapters, you enter an Adapter connection label. Let me know if you have any further questions.
Thanks - I'm still getting a Wrong query syntax error - I've opened a support case for more help
Len, most often the aggregated adapter connection name is noted like this: active_directory_adapter. This is a bit wonky as it is not labeled in the UI exactly how it is written (and we are working to fix that issue). If you export the field that you want with the specific adapter, it is in the aggregated: adapter connections tab. Below is an example of some of them. For the most part, they are pretty straightforward. I will see about adding a bunch to this article to help out. Let me know if you have any issues and we can troubleshoot together.
This is a helpful example. Now how do I do this with a dynamic data source? i.e. update the csv file on a regular basis automatically.
Michael, currently we are working on this with the enrichment. You can, however, do this completely dynamically with the csv adapter.
|
OPCFW_CODE
|
If you’re wondering why you should contribute to open source, we are here to help you out!
In today’s post from the #TipSeries, we bring you the top 10 reasons to contribute to open source. We hope this article answers all of your questions.
Let’s get started!
Tip 1: Gain confidence as a developer
By contributing to an open-source project, you get suggestions and instant feedback on your programming skills and technical knowledge, which helps you enhance your skills and boost your confidence.
Tip 2: Make your Resume and CV strong
In addition to developing your skills and building your confidence, all your open source contributions are public and show the skills you’ve learned and the projects you’ve worked on. In fact, your open source profile itself can provide you with a powerful portfolio that separates you from other job applicants.
Tip 3: Build your professional network
Creating a strong professional network will help you meet your career goals, learn more about your own or related fields, and help you find a job. Contributing to open source is a very effective way of creating your network. You may also get introduced to key people in the industry, such as the high-profile maintainer of an open-source platform. These networks can help you form career-changing relationships.
Tip 4: Make sure you write a cleaner code
The group you contribute to may ask you to adhere to rules for writing code that is easy to understand. The fact that the code is open to everyone helps you focus on making your code readable.
Tip 5: Learn about the latest tools and technologies
Many students don’t even know about the software frameworks and tools used in industry to make coders more efficient and accurate. Open source contribution helps students learn about new tools and technologies.
Tip 6: Gain recognition
Another incentive to contribute to open source is to gain recognition. Contributors may end up meeting many great employers while making their names known to the coding community.
Tip 7: Upgrade software on which you rely
Lots of open source contributors begin as users of the software to which they contribute. If you find a bug in open-source software that you are using, you might want to look at the source to see if you can patch it on your own. If that’s the case, then contributing the patch back is the best way to make sure your friends (and you yourself, when you upgrade to the next release) will benefit from it.
Tip 8: Meet people with common interests
With warm, welcoming communities, open-source projects keep people coming back for years. By engaging in an open-source platform, many people develop lifelong friendships, whether it involves running into each other at conferences or online chats about burritos late at night.
Tip 9: Find mentors and educate others
Working on a joint project with others means you’ll have to explain how you’re doing the task and ask for input from other people. Teaching and training activities can be a rewarding experience for everyone involved.
Tip 10: It is inspiring to make changes
You don’t have to become a lifelong contributor to enjoy open-source participation. Have you ever spotted a mistake on a website and wished someone would fix it? On an open-source project, you can do exactly that yourself. Open source gives people a sense of control over their lives and how they view the world, and this is gratifying in itself!
|
OPCFW_CODE
|
Is there a workaround for Vim's Netrw :bprev bug?
This didn't get much love on stackoverflow (https://stackoverflow.com/q/48269793/2512141), but I think it's important so I'm reposting here. I think this could be a major barrier to people using Vim's buffers as they're intended, and consequently they resort to tabs to get the file-switching capability they need. Text from the original post follows.
Vim's Netrw file explorer has the following bug: Running the command :e. will open Netrw, but after Netrw closes there is a latent buffer in Vim's buffer list which cannot be traversed with command :bprev. (:bnext works fine.)
This bug is discussed in the following places:
https://www.bountysource.com/issues/45921122-previous-doesn-t-work-with-e-buffer
https://groups.google.com/forum/#!topic/vim_use/zzeQItJQNZI
To replicate this bug, start Vim and run the following commands:
:ls!
:edit ./file1.txt | ls!
:edit ./file2.txt | ls!
:e. #(choose file3.txt in Netrw)
:edit ./file4.txt | ls!
:ls!
At this point, you will see buffers for the files you have opened as well as some buffers with paths, and a buffer with [No Name]. Try to navigate these buffers with
:bnext | ls!
:bprev | ls!
You will find that :bnext successfully loops over the buffers, but :bprev hangs on the Netrw buffer. Is there a workaround for this bug so that buffer navigation with :bnext and :bprev still works?
You should discuss this issue with Charles, the netrw plugin maintainer. He is the best chance to get this fixed.
I like that idea. I have an issue registered on github, https://github.com/vim/vim/issues/2597 , but if it doesn't go anywhere I'll reach out to him.
fwiw to you, I cannot replicate this. version or option difference maybe?
@Mass Interesting... It could be something in my vimrc: https://bitbucket.org/BitPusher16/dotfiles/src/ea0e5ea1cc6d64ddea70a43f6d70e6066a317c15/vimrc?at=master&fileviewer=file-view-default . But I know at least a few other people have had the problem. (See links in my post.) Which step specifically is not replicating? Do you see the [No Name] buffer after running :e. ?
@BitPusher16 correct, I do not see [No Name]. After the last ls! on the first code block I have buffers 1, 2, 4, 5, and 6; 4 is ~/path. I can tell you the difference is the hidden option. I know it's popular but consider removing it until you know the full consequences of the option. Additionally, if you do :bwipe manually you can get rid of the buffer. Possibly you could set up an autocmd to do this for you.
Related: https://vi.stackexchange.com/questions/9170/how-do-i-make-netrw-behave-with-respects-to-cycling-through-buffers-with-bprevi
I've discovered through experimentation that :bprev will resume functioning if I delete the path buffer just prior to the [No Name] buffer. However, this is burdensome.
Instead, I have started using :Explore to open Netrw. This does not create the latent buffer which trips up :bprev, but I am still able to browse my local directory tree and open files for editing.
Maybe this function will help you? I use it in my neovim configuration. With it :bprev works as expected.
ToggleNetrw displays the netrw explorer in the current window (with the command :Ntree) on a specified directory (or the parent directory of the currently opened file) and cleans up the excess buffer to fix the unexpected :bprev behaviour.
A second call of ToggleNetrw hides the explorer and shows the previous buffer.
function! ToggleNetrw(...)
  if &filetype ==# 'netrw'
    " Already browsing: return to the file we were editing before.
    execute 'Rexplore'
  else
    " Use the optional argument as the directory to browse,
    " otherwise fall back to the directory of the current file.
    if a:0 ==# 1
      let path = fnamemodify(expand(a:1), ':p')
    else
      let path = fnamemodify(expand('%'), ':p:h')
    endif
    execute 'Ntree' path
    " Delete the extra directory buffer that would otherwise trip up :bprev.
    let excess_buffer = bufnr(path)
    if excess_buffer != -1
      execute 'bdelete' excess_buffer
    endif
  endif
endf
nnoremap <silent> <Leader>ee :call ToggleNetrw()<CR>
nnoremap <silent> <Leader>ec :call ToggleNetrw('.')<CR>
nnoremap <silent> <Leader>eh :call ToggleNetrw($HOME)<CR>
Welcome to this site @zpah, you could edit your answer to explain your code a little more: that would be useful for future readers so that they can understand faster what your function does.
Yang-Le Wu (yangle) works around the problem by using keybinds to switch buffers. See the commit linked from the issue:
https://github.com/yangle/dotfiles/commit/d2e3962a4f1c42f32ae1fa70fb690665011f1a43
The keybinds call a function which will remove the first netrw buffer, if found, before attempting to do the actual movement.
Without the first buffer sitting in the way, the movement will go to the expected buffer.
So that's one solution, but it does require using certain keybinds. You may want to add some of the other popular keystrokes:
noremap <silent> [b :call ChangeBuffer("prev")<CR>
noremap <silent> ]b :call ChangeBuffer("next")<CR>
noremap <silent> :bp<CR> :call ChangeBuffer("prev")<CR>
noremap <silent> :bn<CR> :call ChangeBuffer("next")<CR>
noremap <silent> :bprev<CR> :call ChangeBuffer("prev")<CR>
noremap <silent> :bnext<CR> :call ChangeBuffer("next")<CR>
But I have had limited success with it. It seems to always destroy the netrw buffer when it's unfocused.
I use :set hidden. I don't want to destroy it, I just don't want to get stuck on it!
My own experiments are a WIP. I started off trying to remove the second buffer, as suggested by BitPusher16. I used this method to detect it, which seems to work even if it was destroyed and then recreated:
buflisted(bufnum) == 0 && getbufvar(bufnum, "netrw_curdir") != ''
But if we do remove netrw2, the next time we focus the netrw1 buffer, it will recreate the second buffer now at the end of the buffer list, and focus it. This moves us out of our place in the buffer list.
If I have time, I may pursue a new strategy:
Still use a function to change buffer (using keybinds as above). Inside the function:
If we start off focused on netrw2, then switch to netrw1 before moving.
If after moving, we are now focused on netrw2, perform another move in the same direction (skip it).
In this case we won't actually need to destroy the second netrw window. We will rather skip it, and act as if netrw1 and 2 are bound together.
If that does work, investigate edge cases:
...Worry about counts like :3bprev or 3[b...
...Worry about multiple open netrw buffers...
...Check if the strategy still works with netrw visible in window panes...
...Worry about cleaning up both netrw windows, if the user deletes one of them...
|
STACK_EXCHANGE
|
from filestack.config import ACCEPTED_SECURITY_TYPES
from filestack.exceptions import SecurityError
import base64
import hashlib
import hmac
import json
def validate(policy):
"""
Validates a policy and its parameters and raises an error if invalid
"""
for param, value in policy.items():
if param not in ACCEPTED_SECURITY_TYPES.keys():
raise SecurityError('Invalid Security Parameter: {}'.format(param))
if type(value) != ACCEPTED_SECURITY_TYPES[param]:
raise SecurityError('Invalid Parameter Data Type for {}, '
'Expecting: {} Received: {}'.format(
param, ACCEPTED_SECURITY_TYPES[param],
type(value)))
def security(policy, app_secret):
"""
Creates a valid signature and policy based on provided app secret and
parameters
```python
from filestack import Client, security
# a policy requires at least an expiry
policy = {'expiry': 56589012, 'call': ['read', 'store', 'pick']}
sec = security(policy, 'APP_SECRET')
client = Client('API_KEY', security=sec)
```
"""
validate(policy)
policy_enc = base64.urlsafe_b64encode(json.dumps(policy).encode('utf-8'))
signature = hmac.new(app_secret.encode('utf-8'),
policy_enc,
hashlib.sha256).hexdigest()
return {'policy': policy_enc, 'signature': signature}
|
STACK_EDU
|
Getting started with Ubuntu: the essentials
A more elegant option is to use Windows 7’s disk-management features to create a third partition alongside your Windows and Ubuntu partitions.
You can use the Libraries feature of Windows 7 to include files stored in folders on this shared partition in your Windows Music, Documents and Video libraries, and then use Symbolic Links – the Ubuntu equivalent of shortcuts – to link to the same folders from within Ubuntu.
To do this, navigate to the folder, right-click on it and select Make Link, then drag the new link to the relevant folder in the Ubuntu file system. Copying a link from a shared Music folder to Ubuntu’s Music Folder (Places | Music), for instance, will ensure that the content appears in your Rhythmbox library.
Sync your browsers
If you’re using Firefox or Chrome on both systems, there’s no need to import anything. Go to Chrome in your Windows system, click on the spanner icon and select Preferences. Click on the Personal Stuff tab and then on the Setup Sync button. Choose which elements you want synced. Now repeat the process on your Ubuntu system.
Everything from the theme you use to installed apps can be synchronised.
The same trick works with Firefox, provided you’ve downloaded and installed either the Firefox 4 beta or the free Firefox Sync add-on. On your Windows PC, sign into the service, copy the sync key the program provides, then go to your Linux system, sign in and enter the sync key.
Migrate your mail and contacts from Outlook
You can Export Outlook data as a PST file using Outlook’s Import/Export wizard, then import it into Evolution, the Ubuntu email client, using the File | Import command.
If you use nested folders to keep your mail and contacts organised, however, things get more complicated, as Evolution will struggle to parse the different folders. One way round this is to download and use a tool called readPST, which takes the Outlook PST data and spits it out in separate mbox files that Evolution can read.
The easiest way, however, is to use the Thunderbird email client as an intermediary. Simply install the latest version on Windows and use the Import wizard to import your email and contacts from Outlook. Then check where Thunderbird stores your mail – it should be in C:users
You can copy the entire contents of this Thunderbird profiles folder to an external hard disk or USB stick, then paste the whole shebang into the folder /home/
Now when you start the Evolution application it will automatically import all your mail and contacts.
Ubuntu’s own Update Manager (which can be found at System | Administration | Update Manager) handles all the updates for your OS and applications, and you can use the Settings tab to select how often it checks for updates and which packages it handles.
When a new version of Ubuntu is released, you’ll get a button in the Update Manager, which you simply click to upgrade. If this doesn’t appear, click on the Settings button, then the Updates tab and make sure that the “Show new distribution releases” option at the bottom is set to Normal Releases.
Be warned, however: updates can take a seriously long time and cripple your system while they’re being installed. If you don’t have a multicore processor or the fastest and most reliable web connection, save the operation for a quiet time of day.
Configure displays for a dual-screen setup
If you have dual monitors, in some cases you can simply go to System | Preferences | Monitors, then hit the Detect Monitors button. Select both monitors from the list and click OK, then ensure that the Mirror option is unchecked and click Apply. If you need to, you can drag the monitor icons so they’re in the correct left and right positions, or you can change the individual display settings for each.
|
OPCFW_CODE
|
const fs = require('fs');
const jsdom = require('jsdom');
const basename = 'HTML/memberDisplay.cfm?memberID=';
const memberId = process.argv[2];
const memberFullName = process.argv[3];
console.log(memberId + ' ' + memberFullName);
if (!memberId || !memberFullName) {
  console.log(
    'You must supply the memberId and member full name in the commandLine'
  );
  // Nothing below can work without both arguments, so stop here.
  process.exit(1);
}
const { JSDOM } = jsdom;
let memberSurNames = [];
let memberGivenNames = [];
let startIndex = -1;
try {
const member_surNames_json = fs
.readFileSync('member_surNames.json')
.toString();
memberSurNames = JSON.parse(member_surNames_json);
const member_givenNames_json = fs
.readFileSync('member_givenNames.json')
.toString();
memberGivenNames = JSON.parse(member_givenNames_json);
} catch (e) {
//file doesn't exist. Ignore. We're just starting out.
}
const html = fs.readFileSync(basename + memberId).toString();
const dom = new JSDOM(html).window.document;
let revisedMemberName = memberFullName;
let nickName = '';
startIndex = memberFullName.indexOf("'");
if (startIndex > 0) {
const endIndex = memberFullName.indexOf("'", startIndex + 1);
if (endIndex > 0) {
nickName = memberFullName.slice(
startIndex + 1,
endIndex //- startIndex
);
}
}
if (nickName !== '') {
revisedMemberName = revisedMemberName.replace(`'${nickName}' `, '');
}
let givenName = '';
let surName = '';
const splitName = revisedMemberName.split(' ');
if (splitName.length > 2) {
const img = dom.querySelector(
'.body2ndLevel > p:nth-child(2) > table:nth-child(2) > tbody:nth-child(1) > tr:nth-child(1) > td:nth-child(1) > table:nth-child(1) > tbody:nth-child(1) > tr:nth-child(1) > td:nth-child(1) > a:nth-child(1) > img:nth-child(1)'
);
const imgSrc = img.getAttribute('src');
startIndex = imgSrc.indexOf('/thumbnails/') + '/thumbnails/'.length;
const surnameLetter = imgSrc.slice(startIndex, startIndex + 1);
  // guessSurnameBasedOnStartLetter is assumed to be defined or imported elsewhere in this project.
  surName = guessSurnameBasedOnStartLetter(memberFullName, surnameLetter);
  givenName = revisedMemberName.replace(` ${surName}`, '');
} else {
givenName = splitName[0];
surName = splitName[1];
}
memberGivenNames.push({
memberId,
givenName,
nickName,
});
memberSurNames.push({
memberId,
surName,
current: true,
});
const namesCell = dom.querySelector('span[style*="175%"]').parentElement;
startIndex =
namesCell.innerHTML.indexOf('Other surnames:') + 'Other surnames:'.length;
const endIndex = namesCell.innerHTML.indexOf('<br>', startIndex);
let otherSurnames = '';
if (
startIndex >= 'Other surnames:'.length &&
endIndex >= 'Other surnames:'.length
) {
otherSurnames = namesCell.innerHTML.slice(startIndex, endIndex).trim();
}
if (otherSurnames != '') {
otherSurnames.split(',').forEach((name) => {
if (name.trim() !== '') {
memberSurNames.push({
memberId,
surname: name.trim(),
current: false,
});
}
});
}
fs.writeFileSync('member_givenNames.json', JSON.stringify(memberGivenNames));
fs.writeFileSync('member_surNames.json', JSON.stringify(memberSurNames));
// console.log(guessSurnameBasedOnStartLetter('Clyde Alexander', 'A'));
// console.log(guessSurnameBasedOnStartLetter('Leticia Van de Putte', 'V'));
|
STACK_EDU
|
Auryn v0.8.0 is the first version to come with a set of Python tools which allow decoding of binary files generated with BinarySpikeMonitor or BinaryStateMonitor. You can find the Python code in the auryn/tools/python directory.
To use the Auryn Python tools, point your Python path to the auryn/tools/python directory, for instance by adding it to your PYTHONPATH environment variable.
Suppose you have an spk file which was written by a BinarySpikeMonitor. For instance, if you run the example sim_coba_binmon, this will by default write the file /tmp/coba.0.e.spk with spikes from the Vogels Abbott benchmark network.
The following code snippet will then plot the spikes from the last 0.1 seconds in the file:
import numpy as np
import pylab as pl
from auryntools import *

filename = "/tmp/coba.0.e.spk"
seconds = 0.1

sf = AurynBinarySpikeFile(filename)
spikes = np.array(sf.get_last(seconds))

pl.scatter(spikes[:,0], spikes[:,1])
pl.xlabel("Time [s]")
pl.ylabel("Neuron ID")
pl.show()
Instead of the get_last() method you could also have used the get_spikes() method to get all spikes, or a temporal range of spikes, from the file.
In a parallel simulation you will typically have multiple spk output files because each rank writes its own file to disk. The Python toolkit provides a simple way to deal with this transparently. Suppose you ran the Vogels Abbott benchmark in parallel using 4 cores (mpirun -n 4 ./sim_coba_binmon). The following code lets you plot the spikes:
import numpy as np
import pylab as pl
from auryntools import *

num_mpi_ranks = 4
seconds = 0.1

filenames = [ "/tmp/coba.%i.e.spk"%i for i in range(num_mpi_ranks) ]
sf = AurynBinarySpikeView(filenames)
spikes = np.array(sf.get_last(seconds))

pl.scatter(spikes[:,0], spikes[:,1])
pl.xlabel("Time [s]")
pl.ylabel("Neuron ID")
pl.show()
The output should look similar to the plot above.
import numpy as np
import pylab as pl
from auryntools import *
from auryntools.stats import *

filename = "/tmp/coba.0.e.spk"

sf = AurynBinarySpikeFile(filename)
spikes = sf.get_spikes()
vogels_plot(spikes)
Suppose you want to know the linear receptive field of a neuron. We will illustrate this on data from one of my published papers http://www.nature.com/ncomms/2015/150421/ncomms7922/full/ncomms7922.html (you can find the simulation code here https://github.com/fzenke/pub2015orchestrated). When you run this simulation with BinarySpikeMonitors (for instance, the development branch of the above repository has that enabled by default), you will end up with multiple spk files for the input and the recurrent network dynamics. Here I will assume that you have done that already and the output files are accessible under the datapath, which in the example is set to my home directory, but should be different in your case.
To get the receptive field of a neuron we are interested in which input neurons were active just before a certain network neuron spiked. Moreover, because the network is plastic, we would like to know how the receptive field changes over time. With the supplied Python toolkit this analysis is straightforward. Here is the code:
#!/usr/bin/python
import numpy as np
import pylab as pl
from auryntools import *

datadir = "/home/zenke/data/sim" # Set this to your local data path
num_mpi_ranks = 4

dim = 64
n_max = dim**2
t_bin = 100e-3
integration_time = 400
neuron_id = 28

outputfile = "%s/rf2.0.e.spk"%datadir
sf = AurynBinarySpikeFile(outputfile)

stimfiles = ["%s/rf2.%i.s.spk"%(datadir,i) for i in range(num_mpi_ranks)]
sfo = AurynBinarySpikeView(stimfiles)

start_times = np.arange(6)*500
for i,t_start in enumerate(start_times):
    t_end = t_start+integration_time
    print("Analyzing %is..%is"%(t_start,t_end))
    spike_times = np.array(sf.get_spike_times(neuron_id, t_start, t_end))
    hist = sfo.time_triggered_histogram( spike_times, time_offset=-t_bin, time_window=t_bin, max_neuron_id=n_max )
    pl.subplot(2,3,i+1)
    pl.title("t=%is"%t_start)
    pl.imshow(hist.reshape((dim,dim)), origin='bottom')

pl.show()
Finally, here is an example of how to read binary state monitor files:
import numpy as np
import pylab as pl
from auryntools import *

# This code snippet assumes that you have run the example simulation
# sim_epsp_binmon with default parameters and adjusted the below
# filename to its output.
filename = "../../build/release/examples/out_epsp.0.bmem"
t_from = 0.2
t_to = 2.5

sf = AurynBinaryStateFile(filename)
mem = np.array(sf.get_data(t_from, t_to))

pl.plot(mem[:,0], mem[:,1])
pl.xlabel("Time [s]")
pl.ylabel("Membrane potential [V]")
pl.show()
|
OPCFW_CODE
|
Figure 22. Example
Some visualization applications require the retention of state from one execution to the next, which as discussed earlier, cannot be supported within the context of pure data flow. Consider, for example, the creation of a plot of data values at a given point while sequencing through a time series. The state of the plot from the prior execution is retrieved. It is updated by appending the new time-step information, and the result is then preserved by resaving the state of the plot for the next execution. Data Explorer provides two sets of tools for preserving state depending on whether the state needs to be preserved over one execution of the network or over multiple executions of the network. The tools for preserving state are GetLocal, SetLocal, GetGlobal, and SetGlobal. The Set tools enable you to save an object (in Data Explorer's cache) for access in a subsequent execution or iteration. The Get tools enable you to retrieve the object saved by the Set tools.
You pair a GetLocal and SetLocal in a visual program by creating an arc from GetLocal's link output parameter to SetLocal's link input parameter. In a visual program a GetLocal typically appears logically above a SetLocal. When GetLocal runs, it checks if an object has been saved in the cache. If no object was saved (as would be the case if SetLocal has not yet run) or the reset parameter to GetLocal is set, GetLocal outputs an initial value that you can set using the initial parameter. Otherwise, GetLocal retrieves the saved object from the cache and outputs it. When SetLocal runs, it saves its input object in the cache and then indicates that its paired GetLocal should simply be scheduled during the next iteration of a loop or the next time an execution is called for. (Note that if GetLocal is inside a macro, it will be executed only if the macro needs to be executed; that is, if the macro's inputs have changed or there is a side effect module in the macro.)
GetGlobal and SetGlobal are paired in the same way as GetLocal and SetLocal. They also save and retrieve items from the cache. The main difference is that GetGlobal and SetGlobal will preserve state over more than one execution of a program. (However, recall that a complete loop takes place within a single execution.)
Using GetGlobal and SetGlobal is comparable to using a static variable in C-language programming. GetLocal and SetLocal are good for saving state inside of a looping construct. Once the loop is terminated, the state is reset for the next execution of the loop. To save state in a program that uses a Sequencer module, you should use GetGlobal and SetGlobal, since each iteration of the Sequencer is a separate execution of the program as described in 4.5, "Iteration using Looping".
Illustrated in Figure 22 is a simple macro that sums the numbers from 1 to N, where N is an input parameter. The start parameter to ForEachN has been set to 1. GetLocal and SetLocal are used to accumulate the sum. Sum is a trivial macro consisting of a Compute where the expression is "a+b." On the first iteration of the loop, GetLocal will output its initial value, which has been set to 0. On subsequent iterations GetLocal will output the accumulated value SetLocal saved during the previous iteration. When the loop terminates the final accumulated value is the output of the macro. This macro is roughly equivalent to the following C-language statements:
b = 0;
for (a=1; a<=10; a++)
    b = b+a;
If the macro were run again, on the first iteration of the loop GetLocal would again output its initial value. (Note that the macro will only run again if the input to the macro changes or the output of the macro has been removed from cache.)

If you replaced the GetLocal and SetLocal in Figure 22 with GetGlobal and SetGlobal it would be equivalent to the following C-language statements:

static int b = 0;
for (a=1; a<=10; a++)
    b = b+a;
While when SetLocal is used, the sum is reset each time the macro is run, if SetGlobal is used, the sum of a previous execution is added to the sum of the current execution. For example, let macro_local be the macro shown in Figure 22 and macro_global be the same macro with SetGlobal and GetGlobal substituted for SetLocal and GetLocal. If the input to both macros is 10 then both macros will output 55 (the sum of numbers 1 to 10) the first time they are run. If an execution takes place without the input to the macros changing then neither macro will run again and the value 55 will be used as the output again. If you change the input to 3 then macro_local will output 6 and macro_global will output 61 (55+6).
Illustrated in Figure 23 is a macro that returns the accumulated volumes of the members of a group and the number of members in the group. ForEachMember is used to iterate through the group. Measure is used to determine the volume of a member and the GetLocal and SetLocal pair on the left side of the macro is used to accumulate the volumes. For illustrative purposes, a loop containing GetLocal, SetLocal, and Increment is used to count the number of members in the group. (Inquire also provides this function, as does the index output of ForEachMember.) Increment is a trivial macro consisting of a Compute where the expression is set to "a+1." The initial values to both GetLocal tools are 0.
Figure 23. Example
Illustrated in Figure 24 is a visual program that preserves the current camera settings for use in the next execution of the program. The initial value of GetGlobal is NULL. The Inquire module checks to see that the output of GetGlobal is a valid camera object. If it's not a camera object, then Route is used to ensure that the Display module is not scheduled to run.

When a new camera is chosen (for example by rotating the object in the Image window) the Display window will show the image using the previous execution's camera settings.
Figure 24. Example
As mentioned previously, in a true data-flow implementation, all modules are pure functions (i.e. their outputs are fully defined by their inputs). Hence, processes are stateless with no side effects. A macro in Data Explorer is considered to be a function, with its outputs being fully defined by its inputs. This is no longer true when a GetGlobal module is added to a macro. GetLocal maintains state information only within one execution of the macro. GetGlobal maintains state information between executions, and therefore the outputs of a macro containing GetGlobal are no longer entirely defined by the inputs. The outputs from macros with state (containing a GetGlobal module) are guaranteed to stay in the cache until the inputs for that macro change. At that point, the results of the previous execution are discarded to make room for the new results. This is equivalent to setting the cache attribute of the macro to cache last for each of the outputs. These cache settings cannot be overwritten by the user. This guarantees coherency when executing macros with state.
|
OPCFW_CODE
|
All good. But maybe not good enough.
I'm still thinking the devices aren't there yet to serve our needs. I have a wishlist, circa 2010, that I don't think is unrealistic. (I recognize I will now sound like Andy Rooney.)
I don't want a backlit screen; I want something that will absorb natural or electric light and reflect electronic ink back to my senses. I am hard-wired for ink on a surface and our children don't yet go to bed reading Harry Potter on a laptop; the sooner we get nearer the experience of ink on paper, the sooner we'll have mass markets for these important devices.
I don't want a reflective screen. I read often in a bright room and don't want a mirror, and I don't want the sunlight in the window behind me to overcome the image in front of me. If nothing else, frost that screen.
I want the Internet on it all the time, just like a battery or an electrical current. I don't want to activate it or launch it. Not anymore. And not when I'm near a wireless router or inside a telecom's footprint. All the time.
I need it to deal with video --- taking it and viewing it --- and for the various stakeholders to deliver. I want a device that will take good video and I will pay very well for feature films, less so for recent releases, even less so for oldies. But I will pay if the device will stream it without glitches. If the producers can't figure that out soon, what happened to the music industry awaits them --- the speedy downloads are coming very, very soon, the last impediment to rampant movie piracy. Enough fighting and dithering.
I want my device to link my creations and preferences through networks. In other words, I want my device to be smart enough to tag everything so the largest number of people consume and share it. In short, a semantic, SEO servant.
I have to keep on top of things and my device should be curating that for me, alerting me when something significant is there to be consumed, and identifying any sudden changes in what others I trust are consuming. I know there is software that might help me do that, but I think that's too much unnecessary work these days. My device should do that for me if I tell it what I need. All the time, not just when I launch a piece of software through the Web.
I want to build trusting relationships and my device can help me find like-interested people and network our associations. But I then want our network to be mined by my device for recommendations and guidance to create a cohort of trusted goods and services. In other words, my device can steer me right.
And I want my device to charge me as I use it, not before I use it, and offer me the newest version as I reach a certain level of wear and tear. In other words, I want a lease-like, built-in obsolescence that will be a virtue and I never want to be stuck with something less-useful than the best device I can exploit.
Do I have a price point for this? Well, let's see, my laptop is worth about $1,500, my smartphone is about $1,000 a year, I pay about $1,500 a year for cable and Internet, and a few hundred more for downloads and a few hundred more on entertainment that could just as easily arrive in my home. I'd replace the entire distribution chain, so there's my price point. Of course, add a couple of these devices so my family can play, too.
|
OPCFW_CODE
|
Week 5: Debugging, More Source Extractor Shenanigans, and Exposure Times
Hello, everyone! Welcome back to my blog. In last week’s blog post, I detailed my first experiences with Source Extractor, a program used by astronomers to extract information from FITS files. I used Source Extractor to obtain the brightnesses (or magnitudes) of my supernova from the 81 FITS files I have access to and sorted each magnitude by date. Then, I went through my files to visually find my supernova, which I matched with the array of magnitudes I got from Source Extractor.
This week, I’ll be going over the first steps in my data processing journey.
A Small Problem:
Now this week was supposed to be about using the camera exposure times to correct my magnitudes (since longer exposure times artificially increase the brightnesses of the stars in these pictures), but I realized I had actually made a mistake last week.
Using Source Extractor is a little more complex than just telling it to extract the magnitudes. You have to specify a few parameters first, such as the zero point and aperture radii. To find the zero point, you have to observe a standard star (star with constant brightness). Then, find the catalog brightness of this star by referencing images from an online database, and simply subtract the two values. The difference is the zero point, which I used to calibrate all the magnitudes of my supernova.
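As a rough sketch of the arithmetic described above (the numbers and variable names are made up, and the exposure-time term anticipates the correction discussed elsewhere in this post, normalizing to a 1-second exposure):

import math

# Hypothetical values, just to show the arithmetic.
catalog_mag_std = 14.20       # standard star's magnitude from the online catalog
instrumental_mag_std = 10.05  # the same star as measured by Source Extractor

zero_point = catalog_mag_std - instrumental_mag_std   # difference of the two values

def calibrate(instrumental_mag, exposure_time_s=1.0):
    # Apply the zero point, plus an optional exposure-time term
    # (one common convention: normalize to a 1-second exposure).
    return instrumental_mag + zero_point + 2.5 * math.log10(exposure_time_s)

supernova_mags = [11.8, 12.1, 12.6]            # instrumental magnitudes per image
calibrated = [calibrate(m, 60.0) for m in supernova_mags]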
Another parameter needed by Source Extractor is the aperture radius, which tells it to measure magnitudes within circular regions of that radius. What this means is that if you specify the minimum aperture radius to be extremely large, you will not log a lot of stars, which is useful if the object you are trying to examine is very large, as you can pinpoint your data to your target much more easily. However, it turns out that I had previously made a mistake when getting my magnitudes: I used an aperture size that was slightly too small, and Source Extractor did not log the magnitude of my supernova for two of my files.
I had to spend a day and a half fixing this, since this required me to start over.
The Rest of the Week:
Since I wasted so much time on this mistake, I didn’t actually get to the part where I use exposure times to correct my magnitudes. I spent another day extracting all my exposure times. The reason it took so long was that the file generated with the extracted exposure times kept showing up in the wrong place for a while, which was frustrating.
Anyways, this week was mostly full of debugging and problem-solving, which is why my progress is less than I would have liked. I suppose that I still did end up achieving half of my stated goal, which was to obtain an array with all the exposure times nicely laid out.
With any luck, my next post will detail the actual corrections I used. Thank you for reading my blog, and see you all next time!
|
OPCFW_CODE
|
Updating Xbox 360 with CD
Simple, I know, but being able to play all available audio formats seems to be something the ONE should be able to handle. On the 360 you can download your CD to your system, an excellent idea. The fact that there is no Rip option is a poor development choice and needs to be corrected. On the ONE you can't do anything but play the music in the app.
It is like MS decided to take steps backward from where the 360 drew a line, and only begrudgingly crawls feet first toward that line as pressure from the gaming community continues to prod the Xbox One updates toward replacing missing features.
Manages to hang on a 2 star rating for managing to be what it says: A functional CD player.
When I had my XBox 360, sometimes when I didn't feel like playing Halo 4 multiplayer with the dead feeling of no music and guns blazing. Stop micro-transactioning your company into oblivion. Still (apparently) incompatible with background play. Likewise missing "rip" (copy) from disc to hard drive. I SUGGEST UPDATING THE APP OR INTEGRATING IT INTO THE EXACT FORMAT AS THE MUSIC PLAYER ON XBOX 360, WITH THE POSSIBILITY OF ADDING A FEW NEW FULL SCREEN VISUALS TO ENJOY!!
I would just instantly go to my personalized music without leaving my game and then get right back to my game and listen while playing my game. CDs play fine. NOT allowing us to burn our own playlist is just low. Both of these features have been available before (both on XBox 360 no less). It is a gaming console, so I expect it to be able to play discs. It surprises me that the 360 has more options for general entertainment. This feature on the XBox One shouldn't even have to be downloaded from the store in the first place.
|
OPCFW_CODE
|
Last weekend I was in Abu Dhabi attending the New York University (Abu Dhabi) Hackathon. Over 50 students from countries across the Arab world, including Palestine, Syria, Egypt and of course the UAE along with team leaders from across the world (yours truly included) came together for one weekend to put their brains and skills into a pressure cooker with some coffee, more than a little nicotine and the fantastic food provided on campus to see what revolutionary technological solutions we could come up with to move the world forward.
The event opened with panel discussions where panels of experts answered questions from the participants. On the panel discussing hacking (i.e. developing, modifying, tinkering, not breaking and entering) for social good, I was joined by Will Pate from Random Hacks of Kindness, Jay Bhalla from the World Bank and David Hutchful from the Grameen Foundation. One of the key discussion items was the importance of field testing and getting to know the users when developing technology for social causes. Since impacting society through technology requires great user adoption across a wide spectrum of society, social technology projects typically require much more effort in the design phase. For one, unlike commercial projects where the customer has a clearly visible vested interest in defining requirements in a structured manner, social projects typically cater to a user base that is unaware of the benefits and therefore still needs mobilizing. In such scenarios, a lot of guesswork goes into designing the initial implementation, much of which is discarded as soon as field testing begins. The consensus on the panel was that the best strategy is to “release early, fail fast, fail cheap and keep getting up until you get it right”.
After the panel discussions we proceeded to individual presentations by many of the team leaders. I particularly liked the sessions on Python development (Mohammed Khatib) and HTML5 (Jeremy Johnstone), since they were immediately usable. David Hutchful’s session on J2ME development for feature phones was also great.
I presented Swara and demonstrated the platform using a local SIM card. Despite the local provider (Etisalat) refusing to forward DTMF tones during the demo (even though I tested it beforehand, this is an instance of the famous Demo SNAFU phenomenon that I will write about one day) the participants enjoyed the session and I got several very well thought out questions from the students, going beyond technology and grasping the idea of community and networking.
Post dinner, we got down to the business of pitching ideas and splitting off into teams. This is always the second most exciting part of any Hackathon (or similar code jam event). This is where everyone goes wild, throwing figurative noodles on the walls of everyone else’s psyche, just to see what sticks. The criteria were the usual suspects, innovation, impact potential and sustainability.
Several ideas were pitched, from the simple social network or media sharing website to crowdsourced police surveillance using smartphones.
Once the ideas were pitched, the team leaders helped refine them and build teams around the most promising themes. Usually, Hackathon pitches are an overcomplicated expression of a relatively simple idea. It's possible to gauge the relative experience of the participants just by hearing the pitches. More experienced participants usually realize that it's far more productive to pitch a concept and leave most of the implementation details vague during the pitch, since as the teams are formed and the team members discover each other's interests and abilities, implementation details change. This is all the more underscored by the fact that it's far easier to alter the pitch to the available skill set than to learn a fresh set of skills in the short time available. The goal is to get to a prototype using whatever is available. Elegance can come later, hopefully with funding.
The pitches at the NYU Hackathon reflected the fact that many of the participants were students, including several sophomores. The underlying ideas were strong, but in several places, the implementation pitched was unnecessarily complex. This was quickly remedied by the team leaders who brought their own experience to the table and helped the teams to simplify the projects down to implementable size.
Coding began in earnest next morning. The team I was working with (Abdul Sartawi from Palestine, Hassan Mousa from the UAE and Saleem Adele from Jordan) was looking to develop a smartphone app that would use the camera in combination with an OCR to enable blind people to read. Refining this further, we expanded the target user base to include illiterate people and added an IVR interface, which we planned to build using the Swara platform.
Qare is a two-part system: one part for content creation and aggregation, and one for dissemination. The collection piece consists of an Android app, which we deployed on Saleem’s Samsung Galaxy tablet. The app is an OCR app that uses the camera to pick up printed text and convert it to digital form. We then use a Text to Speech (TTS) engine to read out the digital text, creating an audio file.
This audio file is in turn uploaded to Qare, using the Swara-based admin portal. Once published, the file becomes available for playback on the web or on the IVR.
We modified the Swara IVR to remove the recording piece and added categories to the main menu. We also re-recorded the prompts in Arabic (Thank you, Sana Odeh ).
Callers use the IVR to navigate to a particular title. Once there, the user can either listen to a short summary, or choose to have the file sent over via MMS.
This sort of service could be used by individuals or organizations to host books either for free or at a nominal price, making a world of knowledge available to people who would otherwise have to spend years learning to read before they could access it.
- Students’ app wins healthy respect (The National)
- Hackathon comes up with useful tools, apps (Khaleej Times)
- تطبيق في العلاج الطبيعي يفوز بالمركز الأول في لقاء المبرمجين العالمي (Al Ittihad)
- Learn more about the Hackathon
- Hackathon Facebook album
|
OPCFW_CODE
|
Merchant of Venice can't buy a City State
I have a Merchant of Venice, I have moved next to a city state, but the only option I'm given is a trade mission. It has trade mission bonuses, but I'd really like a puppet. I've seen a video where the Merchant has an action icon that looks like a trade mission, but has a gold coin on top of it, but I only see the Mission/Move/Do Nothing/Sleep/Embark/extra actions, and the only extra is delete the unit.
The city state is allied and under protection of Greece, but I haven't read about that as a restriction.
Other Venice stuff is working - 2*trade routes, free Merchant, trade mission bonus, and no settlers allowed.
I've never played Venice before, so I'm not sure if there a rule I'm missing or if it is a bug.
I am playing Steam version BNW on a Mac - in case that makes any difference.
Well, I wonder if I accidentally clicked on One City Challenge - how do I check that? - a possible option is to go and capture another civ's city.
Don't edit more questions into your post. "How do I check if one-city challenge is enabled" should be asked as a separate question.
@SergiiZaskaleta Thanks. Did that, and it said I captured the city, but the city info disappeared while keeping the city graphics on the map. The OCC was set as a persistent option from a previous player's game. I started a second game with no special options which also malfunctioned. I needed to find the option and unset it for the next game to work.
The game was set to One City Challenge, but I had not set it to One City Challenge. I know because I restarted another game being careful to do no special options (just Venice), but it still ran a One City Challenge Game.
A previous player had gone into a sub-sub menu (Advanced Setup >> Advanced Game Options) in Set Up Game, and set to One City Challenge. When I did subsequent games with Set Up Game, it kept all of the previous settings, including all the 'hidden' Advanced Game Options.
It seems like a fairly major option to keep persistent in a sub-sub menu! I would consider this a UI bug.
Keep in mind that most players like their game in a certain way, and will 9 times out of 10 prefer to have all their games to have the same specific settings. E.g. I love playing with random personalities, and turn on quick combat. I never turn those options off and it'd be a hassle to do it every time I start a game.
You have to be allies with the city state in order to buy them out. You must remain allies with the city state for 10 turns or you can't buy them.
This is not correct. You may be thinking of Austria.
Am I? Oh....whoops! Silly me!
Thanks for the idea. I checked that but Kyralessa is right about it being Austria
@Richard thank you for confirming what he/she said. I tried at least
|
STACK_EXCHANGE
|
Docker and containerization bring a new way of building and deploying software. The new technology makes development more dynamic, distributed, faster, and more capable of handling failures at every step. However, to reap these benefits, you need a completely different toolset than traditional servers or virtual machines. As you begin your journey with Docker, you may be wondering which are the top Docker tools used today, and how you can leverage them. Let’s take a tour of the top Docker tools, by category.
Docker is the standard container runtime, and you can easily spin up a container locally on your laptop using the Docker CLI. However, to run Docker containers at scale, you need a container orchestrator. This is a management layer for Docker containers and is essential to run containers in production.
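For example, spinning up a container from code instead of the CLI might look roughly like this with the Docker SDK for Python (the image names here are just examples):

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon

# Roughly equivalent to `docker run --rm alpine echo hello`.
output = client.containers.run("alpine", ["echo", "hello"], remove=True)
print(output.decode())

# Keep a container running in the background, like `docker run -d nginx`.
web = client.containers.run("nginx", detach=True)
print(web.short_id)
web.stop()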
Kubernetes is the most popular container orchestrator today and is supported by almost every container vendor. It organizes containers into a collection of pods and has powerful features for deployment, load balancing, security, and more.
Docker’s default orchestration tool, Swarm, is simpler to use than Kubernetes and is well integrated into the Docker workflow. With the rising popularity of Kubernetes, Swarm has now added support for Kubernetes, and has conceded the orchestration throne to Kubernetes.
Mesos has its own container orchestration tool called Marathon. With the dominance of Kubernetes, Marathon is taking a back seat as its parent company, Mesosphere, shifts focus to give users the choice of Kubernetes.
Coming from the AWS stable, Amazon Elastic Container Service (ECS) is one of the early container services. It runs containers inside EC2 instances and has deep integration with the wider AWS platform. AWS has been slow to adopt Kubernetes, but finally jumped on the Kubernetes bandwagon this past year, announcing its EKS service for Kubernetes management.
Google Container Engine (GKE), the container service from Google Cloud, is the most deeply integrated with Kubernetes among the CaaS platforms. It is the first to bring upstream Kubernetes releases into its platform and is a great choice if Kubernetes is your priority.
Azure Kubernetes Service (AKS), Microsoft’s container service, has deep integration with the Azure platform and is taking significant steps to be the best place to manage Kubernetes. Microsoft has hired Brendan Burns, a Kubernetes co-founder, to help with this mission.
Other CaaS Services
There are numerous other CaaS platforms with a focus on simplifying Kubernetes management. Some of them are Pivotal Container Service (PKS), Platform9, Heptio, Kismatic, StackPoint, and Giant Swarm, to name a few.
Security is the first priority when running containers in production. However, there isn’t a single do-it-all tool; instead, you need to use a combination of tools.
Kernel Security Tools
Docker has borrowed core Linux kernel security features like namespaces, cgroups, AppArmor, SELinux, and seccomp. These features provide the first and most foundational layer of security for containers.
Securing network connections is essential for containers. This is achieved by Calico, a tool that creates micro-firewalls around each containerized service and provides granular security controls.
Coming from the house of HashiCorp, creators of the popular infrastructure-as-code tool Terraform, Vault is a secrets management tool for containers. Vault stores and encrypts secret data on physical storage and requires multiple keys to access and read the secrets. Vault simplifies secrets management and makes it more powerful.
In production, containers need to be shielded from outside attacks and internal configuration lapses. This kind of threat detection is done using a proactive security tool like Aqua Security. It is able to track every part of the container stack and leverages machine learning to spot threats at any stage.
Containerized applications are typically based on the microservices architecture. In these systems, networking plays a key role in performance of the applications.
Linkerd provides a service mesh to connect microservices to one another. Its goal is to provide a uniform layer of communication.
Istio provides APIs and operates a layer above Linkerd. Together, they provide a powerful and feature-rich networking solution for containerized applications.
Service discovery, load balancing, and security are important criteria for container networking, and Weave brings all this together in a single package. It secures communication over the network using encryption, isolation, and segmentation. It provides a ‘micro DNS’ at each node and helps make service discovery easy.
Flannel is a Layer 3 overlay network for Kubernetes. Flannel is a powerful tool for connecting hosts within Kubernetes by allocating a subnet for each host. In so doing, it controls how traffic flows between the hosts.
Keeping track of changes and events as they occur is an important part of running containers in production. Fortunately, the Docker ecosystem has a range of monitoring tools to choose from.
Prometheus is by far the most popular monitoring tool for Kubernetes. It focuses on capturing and analyzing time-series data in real time. It can be integrated with other tools like Grafana for visualization.
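As a quick illustration of how applications feed Prometheus, the official Python client can expose a metrics endpoint for a Prometheus server to scrape; the metric name and port below are arbitrary examples:

from prometheus_client import Counter, start_http_server
import time

# Serve a /metrics endpoint on port 8000 for Prometheus to scrape.
start_http_server(8000)

REQUESTS = Counter("demo_requests_total", "Requests handled by this demo process")

while True:
    REQUESTS.inc()   # count one unit of work
    time.sleep(1)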
A vendor tool, Pagerduty has become essential to many DevOps teams that want to be alerted in real-time of downtimes, errors, attacks, and more. Its mature routing system ensures the right people are informed of anything going wrong with the system as soon as it happens.
Datadog is a container runtime monitoring tool that focuses on live reporting of performance data. It can identify parts of a Kubernetes stack automatically and, with its powerful visualizations, makes monitoring Kubernetes simple.
Slack enables integration with other tools and streams events to a live chat stream for the entire team to view. It makes troubleshooting and collaboration among team members faster and simpler.
Logs give you the real picture of what’s happening with your containerized applications and infrastructure. They are vital for managing containers in production.
The Elastic Stack
The Elastic stack is primarily powered by Elasticsearch, the full-text database engine that can query large quantities of unstructured data in real-time. It is bolstered by Kibana, an open source visualization tool. Together, they bring deep visibility into container logs without breaking the bank.
A logging service provider, Sumo Logic takes the pain out of log analysis with easy setup and a maintenance-free logging service. It can capture logs from Kubernetes or any other container tool via API integration.
|
OPCFW_CODE
|
Per-route latency measurement for client route pick
Is your feature request related to a problem? Please describe.
Currently client connects routes to peers based only on the latency between the client and the peer, this doesn't take into account that some routes could be geographically much further from the peer.
For example if I have a route pointing to a resource in western Europe, and I have two peers that can handle this route, one in west Europe and one in west US, and the client is in east US, the fastest route would be to choose the peer in Europe.
Describe the solution you'd like
Calculate latency for each route from each peer, this information could either be communicated peer-to-peer or through management service keeping a cache of peer-route-latency as reported by each routing peer.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
I understand that this feature could be difficult to implement given the peers know nothing about which ports are open and how to check for latency for a given route, but this possibly could be configured by the user per-route.
For example, when creating the route, the user can choose to use TCP 443 to check latency.
I suggest the following approach:
[Management] Add new field to Route to specify method of checking latency (Protocol, IP/Domain, Port).
[Dashboard] Add necessary UI.
[Management] Add necessary APIs.
[Client] ServerRouter to calculate latency based on given latency check settings per route.
[Client] Store and send latency reports with sync requests.
[Management] Receive and store latency reports from Peers.
[Management] Send available latency reports with Route objects in pb.
[Client] clientNetwork to include latency reported by peer in route + p2p latency in route selection.
Notes:
Should the latencies be stored in-memory cache on management-side, or stored in Store?
If I understand correctly, sync requests are only sent in the very initial connection to management or when connection is interrupted and restored, would there be a way for client to send updates to management periodically? (in this case for example, it could be when route latency changes significantly?)
Draft data structure diff:
diff --git a/management/proto/management.proto b/management/proto/management.proto
index fe6a828b..5a0dd74c 100644
--- a/management/proto/management.proto
+++ b/management/proto/management.proto
@@ -59,6 +59,12 @@ message EncryptedMessage {
message SyncRequest {
// Meta data of the peer
PeerSystemMeta meta = 1;
+ repeated LatencyReport latencyReport = 2;
+}
+
+message LatencyReport {
+ string RouteID = 1;
+ float Latency = 2;
}
// SyncResponse represents a state that should be applied to the local peer (e.g. Wiretrustee servers config as well as local peer and remote peers configs)
@@ -351,6 +357,16 @@ message Route {
string NetID = 7;
repeated string Domains = 8;
bool keepRoute = 9;
+ LatencyCheck latencyCheck = 10;
+}
+
+message LatencyCheck {
+ bool Enabled = 1;
+ string Protocol = 2;
+ string Domain = 3;
+ string IP = 4;
+ uint16 Port = 5;
+ float Latency = 6;
}
// DNSConfig represents a dns.Update
diff --git a/route/route.go b/route/route.go
index e23801e6..71bcbe72 100644
--- a/route/route.go
+++ b/route/route.go
@@ -45,10 +45,18 @@ const (
DomainNetwork
)
+const (
+ LatencyICMP LatencyProtocol = "ICMP"
+ LatencyTCP LatencyProtocol = "TCP"
+ LatencyUDP LatencyProtocol = "UDP"
+)
+
type ID string
type NetID string
+type LatencyProtocol string
+
type HAMap map[HAUniqueID][]*Route
// NetworkType route network type
@@ -101,6 +109,15 @@ type Route struct {
Enabled bool
Groups []string `gorm:"serializer:json"`
AccessControlGroups []string `gorm:"serializer:json"`
+ LatencyCheck LatencyCheck
+}
+
+type LatencyCheck struct {
+ Enabled bool
+ Protocol LatencyProtocol
+ Domain string
+ IP netip.Addr
+ Port uint16
}
// EventMeta returns activity event meta related to the route
@@ -125,6 +142,7 @@ func (r *Route) Copy() *Route {
Enabled: r.Enabled,
Groups: slices.Clone(r.Groups),
AccessControlGroups: slices.Clone(r.AccessControlGroups),
+ LatencyCheck: r.LatencyCheck,
}
return route
}
@@ -150,7 +168,8 @@ func (r *Route) IsEqual(other *Route) bool {
other.Enabled == r.Enabled &&
slices.Equal(r.Groups, other.Groups) &&
slices.Equal(r.PeerGroups, other.PeerGroups) &&
- slices.Equal(r.AccessControlGroups, other.AccessControlGroups)
+ slices.Equal(r.AccessControlGroups, other.AccessControlGroups) &&
+ r.LatencyCheck == other.LatencyCheck
}
|
GITHUB_ARCHIVE
|
Novel–Divine Emperor of Death–Divine Emperor of Death
Chapter 1745: Becoming Aware
“Probably, but each of the laws she methods in fit in with all three farming solutions, and she excels in them all. It’s just, she feels a novice to Ice-cubes Legal guidelines and H2o Legal guidelines, although the performance she comprehends these regulations are monstrous.”
She taken her palm directly back to her bosoms, taking a look at him with elevated brows.
[By this time this letter eventually left, she would notice my steps, although i presume this note would already be up to you by the period. Possibly, she actually knows yet still let me mail this, planning I have performed this behind her again. In any case, congratulations, my prince. Great job, Princess s.h.i.+rley. Remember to be well, and pave the path to ascend sooner than later. While she’s disrespectful, she doesn’t suggest hurt. There’s no reason to be concerned about me either, for I am just faring well, becoming potent alongside her. If fate allows us to match, possibly we’ll match in under 2 months at a selected challenge market.]
But usually, he sensed depressing for Ellia.
“I don’t know, but she looked pretty instead of hearing about you from my mouth once i described my unrequited enjoy.”
“Ellia… are you currently rather well?”
Davis glanced at her through an unamused manifestation, “If it was just as elementary as that.”
What was the usage of him transmigrating into this body? Truly the only benefit he experienced was Fallen Paradise, which he still experienced was the greatest benefit as well as a tremendous threat but wasn’t their own strength.
“That pretty much verifies my theory that she’s Ellia’s previous life incarnation. Or else, it will make very little sense in my opinion why Ellia remains to be still living as an alternative to being devoured. Naturally, an overseas spirit wouldn’t be able to command precisely the same physique, not to mention it would degrade Ellia’s heart and soul, but depending on Ellia herself, she’s escalating powerful alongside Myria, which just ends up proving my way of thinking.”
Evelynn, Isabella, s.h.i.+rley, Mo Mingzhi, Esvele, and Freya considered Davis’s term and seeing his students enlarge created these phones filter their eyes.
s.h.i.+rley plus the others discovered his actions and couldn’t assist think about the letter just as before while they turned up beside him.
Davis spoke with utter self-confidence that produced others lightly giggle then again manufactured him reduce his mind in sadness.
Divine Emperor of Death
Isabella couldn’t help but consult Davis, but another melodious tone of voice echoed.
“What does this indicate precisely?”
Davis nodded without being amazed.
“Hehe~ From time to time, I wouldn’t know which of the two will be the one talking to me unless I question you, and determined by their impulse or mood, I will instantly know the answer, although i dare not question, hesitant i can make Myria angry.”
“Actually, who mailed this?” Isabella’s eyeballs were definitely narrowed in frustration, “They dare to phone my emperor a brat? I don’t are convinced it’s Sect Become an expert in Bing Luli or one of her three Forefathers, but it appears as if whoever had written this wants to perish!”
“Does she possess a grudge against me or anything?”
“Myria is extremely ruthless and arrogant and as well wouldn’t allow folks near to her. She doesn’t be afraid to kill folks, specially wicked people today, as well as requires some enjoyment inside it. Often, Ellia would control themselves and enact related activities, but contrarily, she actually is a lot more simple and form, saying that persons will need to have a 2nd prospect because prince Davis presented her that.”
Davis blinked before his lip area couldn’t assist but contour right into a heartened smile. s.h.i.+rley discovered his teeth before she giggled.
|
OPCFW_CODE
|
This file offers the various resources that staff and players have put together for the MUSH. If you would like to add something, please ask a staffer and we can do so!
Dream Chasers' MUSH Web Presence
- Dream Chasers MUSH Tumblr is our blog for various advertisements, posting funny bits of conversation, and the like. If you're on that social network, feel free to follow us. #dream chasers mush is the tag du jour. If the MUSH has significant downtime, we will also post here.
- Dev Diaries were used to advertise the game before we opened. We may post here if the MUSH has significant downtime.
- Our ad. Feel free to post it on other games -- but please ask the admin first and follow any policies! If they want a reciprocal ad, please talk to us first.
LPs, Adaptations, and Soundtracks
Unfortunately, a lot of the themes at Dream Chasers MUSH can be hard to access. Not everyone has time to play a 60 hour JRPG these days. For that matter, for legal reasons, staff will not provide ways to pirate the games. If you want to do that, you're on your own. But, we can provide a few things that may help. This file gives a list of Let's Plays (LP's), soundtracks, and adaptations of our themes that one could find.
We won't vouch for the content -- we've never read/listened/watched all of these through, so there is a chance you may find rough language or worse. We think making them more accessible is worth the risk of that, though. If you find some terrible hate speech or other bad content, please let us know.
These include video and written Let's Plays. We used some of these in our research to make the game; others are ones we stumbled across. We can't vouch for the quality. If you know of a good Let's Play, please send a +request or email to the appbox, and we can add it to this list!
You can also check GameFAQs. Many older games have their scripts there and a FAQ sometimes provides a good plot overview.
- Grandia at http://lparchive.org/Grandia/
- Grandia II at http://lparchive.org/Grandia-II/
- Lunar 2: Eternal Blue, the Sega Saturn version, at http://lparchive.org/Lunar-Eternal-Blue/
- Lunar 2: Eternal Blue Complete, the Playstation version, at https://www.youtube.com/watch?v=Nju44OLilWs&list=PLB77AA1177FAB03E9
- Tales of Zestiria at https://www.youtube.com/watch?v=QFRmnD1hL_o&list=PL28_eRFIuaoLnAmsws8fUkHr3skNvTBLe&index=1
- Wild ARMs: Alter Code F at http://lparchive.org/Wild-Arms-Alter-Code-F/
- Wild ARMs 3 at http://lparchive.org/Wild-Arms-3/
- Xenogears by the Dark Id, something of a legend, at http://lparchive.org/Xenogears-(by-The-Dark-Id)/
- What Does God Need With a Starship, a dramatic and entirely serious (two of these statements are false) Let’s Play of Xenogears. It has tragically never been finished. You can read it online at http://whatdoesgodneedwithastarship.com/
- Tales of Zestiria the X, which is pronounced "the Cross" for reasons that you should ask Ayu Ohseki. This is a very good anime adaptation of Tales of Zestiria. It is available streaming from Funimation at http://www.funimation.com/shows/tales-of-zestiria-the-x/home
- Wild Arms: Million Memories was a mobile game active from 2018-2020, in Japan only. Our own Ettlesby has collected some of the art and assets from the game and made them available to the public at https://drive.google.com/drive/folders/1TR_eZ-S-BaF5FmkgyfL7S_p-vyCMRFzo
If you want to listen to some music from these games, we're here to help! You can find ripped soundtracks of many by searching YouTube. We won't provide links to those, given the ease of doing so.
This provides links to a few particularly great remix soundtracks. If you know of a unique soundtrack, please send a +request or email to the appbox, and we can add it to this list!
- Wild Arms: ARMed and DANGerous is a Wild Arms 1 remix album from OCRemix. It has very good remixes of all the music from WA1, and is better than the WA1 soundtracks released, which often were incomplete. You can listen to and download it at its website: http://armed.ocremix.org/
- Humans + Gears Xenogears ReMixed is a Xenogears remix album from OCRemix. It has two discs. The first is more traditionally instrumental (with a very funny vocal track from Fei's point of view), while the second is more techno/industrial. You can listen to and download it at its website: http://xenogears.ocremix.org/
- Text Ansifier 1.1: A browser-based tool for creating colour gradients and then giving you the entire [ANSI()] string. Made by me, Ark! I hope it helps! Available at: https://luceid.github.io/textansifier/textansifier.html
- Other Color Resources: Our very own Lucia compiled these (and her original text is preserved)! DC has a ton of options for ansi, well beyond the simple codes like hm, hc, g and so forth. It can even take HTML colors directly, such as #ccccff.
- If you're looking to get an HTML code for a color in your brain, https://www.w3schools.com/colors/colors_picker.asp is a great resource.
- Also handy: if you're interested in getting an HTML code to match a color in an existing image, http://html-color-codes.info/colors-from-image/ is a great way to go about it! I only discovered this second link in the last couple weeks, and using it to pull colors straight out of character art and onto the MUSH is a joy.
|
OPCFW_CODE
|
Frequently asked questions
Q1) Is this course on-campus? Is it online? How does the online component work?
Spring: Online (weekly lecture videos, course communication via Slack and Piazza).
Fall: This course is run both on-campus and online.
Students in the Cambridge area can attend the weekly, live lecture held on Harvard's campus. Students taking the course remotely will have access to lecture videos.
Q2) How much time will I need to devote to this course?
This question is difficult to answer as it depends on many factors: how much previous experience you have, how quickly you pick up on technical topics, and most importantly, how much you want to push yourself on your projects.
If you'd still like a rough number, though, I'll defer to the classic college estimate: students should spend roughly 2-3 hours outside of class for every hour in class.
Q3) I have to miss [x] lecture(s). Will this be a problem?
There are no point deductions in this course for missing lectures. Each student is responsible for their own attendance.
For more details regarding missed quizzes or late projects due to absences see the section on grading.
Please also read the Attendance section under Student Responsibilities for more details.
Q4) You recommend using [x] software— is it okay if I use [y] software?
Please see the section Choosing to use other tools, languages or services under Software.
Q5) Can I get feedback on an assignment from my TA/the instructor before I submit?
If office hours are slow, you may request your TA take a look at your assignment pre-submission. Time permitting, your TA may interact with your site and provide first impressions, but they'll do so from the perspective of a user, not a grader.
Examples of the kind of feedback a TA may give pre-submission
- Nothing happened when I requested both a symbol and a number in the password...
- I was a little confused after I hit submit why I didn't get a result; then I realized it was because my word wasn't long enough...maybe make this requirement clear before the user submits?
- It'd be nice if the submit button was more specific... maybe label it as "Calculate total" instead of "Submit" so it reinforces the action the user is taking when they click it
- You might want to add some more spacing between the inputs - it was hard to tell which label was associated with which input
Examples of the kind of feedback TAs should not be expected to give pre-submission
- I looked at your code and noticed you didn't separate your logic and display.
- I looked at your code and suggest you refactor your calculate function so it works like this...
- You're missing xyz requirement...
In short, it's not the teaching team's responsibility to make sure all requirements are adhered to, and you can't challenge point deductions after receiving a grade by saying “But the TA looked at my application and didn't mention that!”
Also, please be respectful of the TAs time and understand that they may deny a request for review if it comes at the last minute pre-deadline or if they're actively helping other students with specific questions. TAs have also been instructed to not do a “pre-submission review” more than once for any given student as it's unfair to other students in the course.
All of this applies to broad requests. We're less picky if you have specific questions about your assignment, e.g. “I'm not sure if it makes more sense to use a slider or dropdown for xyz feature; what do you think?”
This question was first addressed in Piazza (ref) before the A2 submission; I've moved it here to the FAQ as of Thu Apr 13. -sb
|
OPCFW_CODE
|
GraphDistanceMatrix[g] gives the matrix of distances between vertices for the graph g.
GraphDistanceMatrix[g, d] gives the matrix of distances between vertices of maximal distance d in the graph g.
GraphDistanceMatrix[{v->w, ...}, ...] uses rules v->w to specify the graph g.
Details and Options
- GraphDistanceMatrix is also known as the shortest path matrix.
- GraphDistanceMatrix returns a SparseArray object or an ordinary matrix.
- The entries of the distance matrix dij give the shortest distance from vertex vi to vertex vj.
- The diagonal entries dii of the distance matrix are always zero.
- The entry dij is Infinity (∞) if there is no path from vertex vi to vertex vj.
- In GraphDistanceMatrix[g,d], an entry dij will be Infinity if there is no path from vertex vi to vertex vj in d steps or less.
- The vertices vi are assumed to be in the order given by VertexList[g].
- For a weighted graph, the distance is the minimum of the sum of weights along any path from vertex vi to vertex vj.
- The following options can be given:
EdgeWeight | Automatic | weight for each edge
Method | Automatic | method to use
- Possible Method settings include "Dijkstra", "FloydWarshall", and "Johnson".
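The conventions above (zero diagonal, Infinity for unreachable pairs, weighted distances as minimal sums of edge weights) can be illustrated with a small Floyd–Warshall sketch. This is plain Python for illustration only, not Wolfram Language, and the vertex and edge data are made up:

import math

def distance_matrix(n, weighted_edges):
    # Floyd-Warshall sketch: weighted_edges is a list of (i, j, w) triples over
    # vertices 0..n-1; unreachable pairs stay at infinity and the diagonal stays
    # at zero, mirroring the conventions described above.
    d = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, w in weighted_edges:
        d[i][j] = min(d[i][j], w)          # keep the cheapest parallel edge
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

print(distance_matrix(3, [(0, 1, 2.0), (1, 2, 5.0)]))  # d[0][2] == 7.0, d[2][0] == inf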
Examples
GraphDistanceMatrix works with undirected graphs:
Using GraphDistance to compute the same result takes more time:
GraphDistanceMatrix works with large graphs:
When just a single column is needed and the graph is large, using GraphDistance is faster:
For the strongly connected graph, the result is in agreement with VertexEccentricity:
Properties & Relations (5)
Rows and columns of the distance matrix follow the order given by VertexList:
The distance matrix can be found using GraphDistance:
In a connected graph, the VertexEccentricity can be obtained from the distance matrix:
The distance between two vertices belonging to different connected components is Infinity:
Possible Issues (2)
Solve the problem by listing vertices explicitly when calling functions such as Graph:
"BellmanFord" is not a valid Method option:
Use "FloydWarshall", "Johnson", or the default choice of Method instead:
Neat Examples (1)
Wolfram Research (2010), GraphDistanceMatrix, Wolfram Language function, https://reference.wolfram.com/language/ref/GraphDistanceMatrix.html (updated 2015).
Wolfram Language. 2010. "GraphDistanceMatrix." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2015. https://reference.wolfram.com/language/ref/GraphDistanceMatrix.html.
Wolfram Language. (2010). GraphDistanceMatrix. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/GraphDistanceMatrix.html
|
OPCFW_CODE
|
Why is a hash map get/set considered to have O(1) complexity?
Assume we have the following hash map class in Javascript:
class myHash {
constructor() {
this.list = [];
}
hash(input) {
var checksum = 0;
for (var i = 0; i < input.length; i++) {
checksum += input.charCodeAt(i);
}
return checksum;
}
get(input) {
return (this.list[this.hash(input)]);
}
set(input, value) {
this.list[this.hash(input)] = value;
}
}
The hash function has a loop which has a complexity of O(n) and is called during getters and setters. Doesn't this make the complexity of the hash map O(n)?
What's a better hash function that doesn't use a for loop? Also, {} on its own is considered a hash map, isn't it? I'm trying to implement one myself.
@Joshl Your mind is in the right place, but the complexities you're confused about are totally different computations; the lookup is usually constant, while the hash function is not.
@Joshl normally when people talk about the complexity of an array or hash they are talking about how the behavior changes as the number of entries changes. In your class, consider how long get takes when you have one entry and compare that to how long it takes when you have a million entries. On average they will be the same.
When you're performing Big-O analysis you need to be very clear what the variables are. Oftentimes the n is left undefined, or implied, but it's crucial to know what exactly it is.
Let's define n as the number of items in the hash map.
When n is the only variable under consideration then all of the methods are O(1). None of them loops over this.list, and so all operate in constant time with respect to the number of items in the hash map.
But, you object: there's a loop in hash(). How can it be O(1)? Well, what is it looping over? Is it looping over the other items in the map? No. It's looping over input, but input.length is not a variable we're considering.
When people analyze hash map performance they normally ignore the length of the strings being passed in. If we do that, then with respect to n hash map performance is O(1).
If you do care about string lengths then you need to add another variable to the analysis.
Let's define n as the number of items in the hash map.
Let's define k as the length of the string being read/written.
The hash function is O(k) since it loops over the input string in linear time. Therefore, get() and set() are also O(k).
Why don't we care about k normally? Why do people only talk about n? It's because k is a factor when analyzing the hash function's performance, but when we're analyzing how well the hash map performs we don't really care about how quickly the hash function runs. We want to know how well the hash map itself is doing, and none of its code is directly impacted by k. Only hash() is, and hash() is not a part of the hash map, it's merely an input to it.
Yes, the string size (k) does matter. (more precisely, the hash function complexity)
Assume:
Getting an item by array index takes f(n) time
The hash function takes g(k) time
then the complexity is O( f(n)+g(k) ).
We know that g(k) is O(k), and if we assume f(n) is O(1), the complexity becomes O(k)
Furthermore, if we assume the string size k is never greater than a constant c, the complexity becomes O(c), which can be rewritten as O(1).
So in fact, given your implementation, O(1) is the correct bound only if:
Getting an item by array index takes O(1)
The string is no longer than a constant c
Notes
Some hash functions may themselves be O(1), like simply taking the first character or the length.
Whether getting an item by array index really takes O(1) should be checked; for example, in JavaScript a sparse array may take longer to access.
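To make the roles of n and k concrete, here is a small separate-chaining sketch (written in Python rather than JavaScript purely for brevity; the names are illustrative). Hashing a key costs O(k), while resizing keeps the expected bucket length bounded, so get/set work does not grow with the number of items n:

class ChainedHashMap:
    # Separate-chaining sketch: cost per operation is O(k) to hash the key
    # plus O(1) expected bucket work, independent of the number of items n.

    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]
        self.size = 0

    def _hash(self, key):                 # O(k): touches every character once
        h = 0
        for ch in key:
            h = (h * 31 + ord(ch)) & 0xFFFFFFFF
        return h

    def set(self, key, value):
        if self.size > 2 * len(self.buckets):      # keep average bucket length bounded
            self._resize(2 * len(self.buckets))
        bucket = self.buckets[self._hash(key) % len(self.buckets)]
        for entry in bucket:
            if entry[0] == key:
                entry[1] = value
                return
        bucket.append([key, value])
        self.size += 1

    def get(self, key):
        bucket = self.buckets[self._hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        return None

    def _resize(self, capacity):
        old = [entry for bucket in self.buckets for entry in bucket]
        self.buckets = [[] for _ in range(capacity)]
        self.size = 0
        for k, v in old:
            self.set(k, v)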
|
STACK_EXCHANGE
|
Looking for App engine google Freelancers or Jobs?
Need help with App engine google? Hire a freelancer today! Do you specialise in App engine google? Use your App engine google skills and start making money online today! Freelancer is the largest marketplace for jobs in the world. There are currently 17,764 jobs waiting for you to start work on!
...social website application that will be run on GoogleAppEngine.
What I already have versus what the provider...A custom CMS application compatible with GoogleAppEngine, with emphases on user driven content and...expertise/background that I am seeking:
GoogleAppEngine experience with either Java or Python. Use
...will be tasked with building a simple anagram
engine that can be used by and updated by anyone. Users...Users will only be
permitted to add words to this engine and will be able to query for the
I am looking for a GoogleAppEngine developer.
I would like to discuss about a project and work remotely...have experience with googleappengine?
2) Do you have a tool with googleappengine currently online?...
Also, I am not sure if I want to use googleappengine for my web application. I would like to ...
...consist of building a application on the GoogleAppEngine platform. In this way we can focus on building...negotiable).
You are familliar with the GoogleAppEngine Platform and have build on this platform before...Database Knowledge
You are familiar with NO SQL & Google Cloud SQL and know your way around databases.
I have start a app mobile, using jquery mobile + AngularJS, and now i want to creat entities, and...show listview button, i want use : REST + googleappengine + angularjs, to fill all the data on the
...host a static website to the googleappengine,
I have used this app with no sucess : https://gist...the /static/ folder etc.
I am using the googleappengine to deploy the projects : http://i.imgur.com/pHVrFtT.../image/image.jpg , css, etc
And i want to host them to google apps. to have ex : project1.appspot.com/page-2
...for someone who has good experience with GoogleAppEngine. We will be working hand in hand on this project...your typing skills and your experience with GoogleAppEngine. I believe 1000$ will more than cover the...the total time spent on this project. GoogleAppEngine supports java and python. You need to state which
...video application and to integrate it with Googleappengine mobile backend and make it support cross device...and Googleappengine.
- Correct issues in android application
- Integration of Google app...app engine and create a cross device auth and "like / favorit " function
For the right person with the
- log via facebook, twitter or google
- retrieve user information associated with the...free technologies that you have to use:
- GoogleAppEngine JAVA
- SQL CLoud
- JSP / HTML 5/ CSS3 /
Creating GoogleAPPEngine Development.
We have the existing web application...application on WHMCS, now we need to integrate with the GoogleAPP API.
We will use our existing billing system...service from Google Market Place.
Get the APP
GoogleAppEngine (GAE) Expert Needed to develop Generic Framework for a galaxy of WebSites.
Expected...skills (at least one):
- big data modeling on appengine, yet flexible
- API (REST) management, including
...java web based UI to be developed using the GoogleAppEngine Framework to support the following:
Store...should retrieve data from the data store of the appengine. Should be AJAX enabled. Should have search capability
I need someone to combine two GoogleAppEngine projects into a workable development and content management...
I need the AppEngine Admin project, and AppEngine Site Creator (included with this
|
OPCFW_CODE
|
HTTP Archive: new stats
Over the last two months I’ve been coding on the HTTP Archive. I blogged previously about DB enhancements and adding document flush. Much of this work was done in order to add several new metrics. I just finished adding charts for those stats and wanted to explain each one.
Note: In this discussion I want to comment on how these metrics have trended over the last two years. During that time the sample size of URLs has grown from 15K to 300K. In order have a more consistent comparison I look at trends for the Top 1000 websites. In the HTTP Archive GUI you can choose between “All”, “Top 1000″, and “Top 100″. The links to charts below take you straight to the “Top 1000″ set of results.
The Speed Index chart measures rendering speed. Speed Index was invented by Pat Meenan as part of WebPagetest. (WebPagetest is the framework that runs all of the HTTP Archive tests.) It is the average time (in milliseconds) at which visible parts of the page are displayed. (See the Speed Index documentation for more information.) As we move to Web 2.0, with pages that are richer and more dynamic, window.onload is a less accurate representation of the user’s perception of website speed. Speed Index better reflects how quickly the user can see the page’s content. (Note that we’re currently investigating if the September 2012 increase in Speed Index is the result of bandwidth contention caused by the increase to 300K URLs that occurred at the same time.)
The Doc Size chart shows the size of the main HTML document. To my surprise this has only grown ~10% over the last two years. I would have thought that the use of inlining (i.e., data:) and richer pages would have shown a bigger increase, especially across the Top 1000 sites.
I’ve hypothesized that the number of DOM elements in a page has a big impact on performance, so I’m excited to be tracking this in the DOM Elements chart. The number of DOM elements has increased ~16% since May 2011 (when this was added to WebPagetest). Note: Number of DOM elements is not currently available on HTTP Archive Mobile.
The question of whether domain sharding is still a valid optimization comes up frequently. The arguments against it include browsers now do more connections per hostname (from 2 to 6) and adding more domains increases the time spent doing DNS lookups. While I agree with these points, I still see many websites that download a large number of resources from a single domain and would cut their page load time in half if they sharded across two domains. This is a great example of the need for Situational Performance Optimization evangelized by Guy Podjarny. If a site has a small number of resources on one domain, they probably shouldn’t do domain sharding. Whereas if many resources use the same domain, domain sharding is likely a good choice.
To gauge the opportunity for this best practice we need to know how often a single domain is used for a large number of resources. That metric is provided by the Max Reqs on 1 Domain chart. For a given website, the number of requests for each domain is counted. The number of requests on the most-used domain is saved as the value of “max reqs on 1 domain” for that page. The average of these max request counts is shown in the chart. For the Top 1000 websites the value has hovered around 42 for the past two years, even while the total number of requests per page has increased from 82 to 99. This tells me that third party content is a major contributor to the increase in total requests, and there are still many opportunities where domain sharding could be beneficial.
The average number of domains per page is also shown in this chart. That has risen 50%, further suggesting that third party content is a major contributor to page weight.
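Both stats reduce to a simple count over each page's request URLs; a rough sketch (the URLs are hypothetical, and this ignores details such as subdomain grouping that the real HTTP Archive code may handle differently):

from collections import Counter
from urllib.parse import urlsplit

# Hypothetical request URLs recorded for one page.
requests = [
    "https://www.example.com/index.html",
    "https://www.example.com/app.js",
    "https://cdn.example.com/style.css",
    "https://stats.thirdparty.net/beacon.gif",
]

per_domain = Counter(urlsplit(u).hostname for u in requests)
max_reqs_on_1_domain = max(per_domain.values())   # most-used domain's request count
domains_per_page = len(per_domain)                # number of distinct domains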
This chart was previously called “Requests with Caching Headers”. While the presence of caching headers is interesting, a more important performance metric is the number of resources that have a non-zero cache lifetime (AKA, “freshness lifetime” as defined in the HTTP spec RFC 2616). To that end I now calculate a new stat for requests, “expAge”, that is the cache lifetime (in seconds). The Cacheable Resources chart shows the percentage of resources with a non-zero expAge.
This revamp included a few other improvements over the previous calculations:
- It takes the Expires header into consideration. I previously assumed that if someone sent Expires they were likely to also send max-age, but it turns out that 9% of requests have an Expires but do not specify max-age. (Max-age takes precedence if both exist.)
- When the expAge value is based on the Expires date (because max-age is absent), the freshness lifetime is the delta of the Expires date and the Date response header value. For the ~1% of requests that don’t have a Date header, the client’s date value at the time of the request is used.
- The new calculation takes into consideration Cache-Control no-store, no-cache, and must-revalidate, setting expAge to zero if any of those are present.
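Putting those rules together, the expAge calculation looks roughly like the following sketch (illustrative Go, not the HTTP Archive’s actual implementation):

package cachelifetime

import (
	"net/http"
	"strconv"
	"strings"
	"time"
)

// expAge returns the freshness lifetime in seconds implied by a response's headers.
func expAge(h http.Header, requestTime time.Time) int64 {
	cc := strings.ToLower(h.Get("Cache-Control"))
	// no-store, no-cache, and must-revalidate force a lifetime of zero.
	for _, directive := range []string{"no-store", "no-cache", "must-revalidate"} {
		if strings.Contains(cc, directive) {
			return 0
		}
	}
	// max-age takes precedence over Expires when both are present.
	for _, part := range strings.Split(cc, ",") {
		if v, ok := strings.CutPrefix(strings.TrimSpace(part), "max-age="); ok {
			if secs, err := strconv.ParseInt(v, 10, 64); err == nil {
				return secs
			}
		}
	}
	// Otherwise the lifetime is Expires minus the Date header (or minus the request time if Date is absent).
	if expires, err := http.ParseTime(h.Get("Expires")); err == nil {
		base := requestTime
		if date, derr := http.ParseTime(h.Get("Date")); derr == nil {
			base = date
		}
		return int64(expires.Sub(base).Seconds())
	}
	return 0
}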
The Cache Lifetime chart gives a histogram of expAge values for an individual crawl. (See the definition of expAge above.) This chart used to be called “Cache-Control: max-age”, but that was only focused on the max-age value. As described previously, the new expAge calculation takes the Expires header into consideration, as well as other Cache-Control options that override cache lifetime. For the Top 1000 sites on Feb 1 2013, 39% of resources had a cache lifetime of 0. Remembering that top sites are typically better tuned for performance, we’re not surprised that this jumps to 59% across all sites.
The last new chart is Sites hosting HTML on CDN. This shows the percentage of sites that have their main HTML document hosted on a CDN. WebPagetest started tracking this on Oct 1, 2012. The CDNs recorded in the most recent crawl were Google, Cloudflare, Akamai, lxdns.com, Limelight, Level 3, Edgecast, Cotendo CDN, ChinaCache, CDNetworks, Incapsula, Amazon CloudFront, AT&T, Yottaa, NetDNA, Mirror Image, Fastly, Internap, Highwinds, Windows Azure, cubeCDN, Azion, BitGravity, Cachefly, CDN77, Panther, OnApp, Simple CDN, and BO.LT. This is a new feature and I’m sure there are questions about determining and adding CDNs. We’ll follow up on those as they come in. Keep in mind that this is just for the main HTML document.
It’s great to see the HTTP Archive growing both in terms of coverage (number of URLs) and depth of metrics. Make sure to check out the About page to find links to the code, data downloads, FAQ, and discussion group.
|
OPCFW_CODE
|
For the past week or so I've been working on a method of tracking students' logins and logouts. The day before yesterday the MySQL server survived the full day, with about 1000 entries made. So now I need to start work on a web GUI for this system.
Basically I've started this thread because I feel that you will be able to help me add more features to this system, and eventually help me test it by having it running in your school. I don't expect this to happen for a while as currently the system is bespoke to this school's setup. However, because I'm hoping to use this as my final year project for university I will require a fairly large test base.
Currently the system stores the following data:
It logs username, computer name, login time, logout time, reason (timeout or proper logout), and logon server.
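As a rough sketch (the table and column names here are illustrative, not the actual schema), the logging table and a couple of the queries the GUI would need could look like this:

CREATE TABLE login_events (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(64) NOT NULL,
    computer_name VARCHAR(64) NOT NULL,
    login_time DATETIME NOT NULL,
    logout_time DATETIME NULL,              -- NULL while the user is still logged in
    reason ENUM('logout', 'timeout') NULL,
    logon_server VARCHAR(64) NOT NULL
);

-- Users currently logged in
SELECT COUNT(*) FROM login_events WHERE logout_time IS NULL;

-- Distinct users seen during the current day
SELECT COUNT(DISTINCT username) FROM login_events WHERE DATE(login_time) = CURDATE();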
A few ideas I've had for the web GUI so far are:
- Total Number of users currently logged in
- Total Number of users logged in throughout the current day
- 'Quick Search' - This will enable admin to search by username or computer name
- List ICT rooms in the school and show how many users are currently using these rooms
- Being able to 'flag' known trouble-causing users and flag them up on login
- Pretty managerial type graphs to show room/computer usage
In the future I'm hoping to be able to overlay some of this information over a map of the school.
I'd like to hear from you about any other features you feel would be useful in a system like this.
Thank you for taking the time to read this :D
It is sometimes useful to know when a particular user last logged in. Handy if they say "I tried to print the coursework at lunchtime, but the printer didn't work"...
Another report which would be useful would be to see if/how often every PC in the IT Suite and/or Library is in use, so you could see if you need more resources in those areas to cover peak times of day.
It would be nice to see how long each person used the computer for too, without needing to work it out yourself using the login and logout time.
Maybe if it could flag up machines that are on, but not used during the day. That way resources could be moved to better locations.
Originally Posted by kestrel1
I like the idea of that. It may also indicate that there's a fault with that machine.
I'm planning on adding something like this for wireless points later on when it gets to the putting of information onto a map.
Thanks for all your suggestions, keep them coming :D
I started using the database method on here. Does the job well
Will this just log student users or all users? (....method of tracking students logins and logouts). It would be useful if it could track all users including admin.
You can set the script to any OU you like. We log every user on the Domain.
Don't you find that Access starts to complain after a while and you have to create a new database for the script to store information in?
The access DB is working well so far.
How long has it been running for?
Originally Posted by dan400007
Well he has 80,000 records and still going....
I feel my thread has been hijacked a wee bit... trust someone from Rochdale.. :P
My script uses two exes, one for login and one for logout; you can put these in whichever OU you wish!!
Everything is stored in a MySQL database and has been running for a few days now; I think it's at about 5,000 records and queries are still at around 0.001 seconds.
The web GUI is what I'm currently focusing on now. Thank you for your suggestions, I'm adding these into my plan as I go along :D
What happens if they do not logoff/shutdown , but just switch it off?
Will the logged on count not go wrong?
A way round that would be to run at machine startup. Then log that startup time as the 'unclean' logoff time.
That's how PAM on linux deals with it anyway (check out the 'last' command).
|
OPCFW_CODE
|
M: Ask HN: Where to find raspberry Pi zero or alternative under $15 - bedros
I need about five for a diy project at home, but everywhere I look online you can order only one per customer, with $10 shipping per unit
R: zapt02
Unfortunately the CHIP ($9) hasn't shipped for a couple of months while they
revamp their CPU. I have a few and they truly are exceptional hardware and
probably better than the Zero for your needs due to onboard wifi, bluetooth
and 4GB of storage.
[https://getchip.com/pages/chip](https://getchip.com/pages/chip)
Its big brother CHIP Pro ($16) can be ordered in any quantity but you will
need to solder your own headers. Depending on what you want to do it might be
a good fit:
[https://getchip.com/pages/chippro](https://getchip.com/pages/chippro)
Pine also has a $15 board that seems to be shipping:
[https://www.pine64.org/?product=pine-a64-board](https://www.pine64.org/?product=pine-a64-board)
R: jakobegger
I found the CHIP to be a bit unreliable (eg. only one of half a dozen Micro
USB cables I have works for flashing)
Then again, I don't have a lot of experience with these DIY things, maybe
random issues with everything are just to be expected.
R: zapt02
Flashing can be a bit iffy, it being made as a Chrome extension is a weird
choice. I never had any big issues with flashing, and once the device is
running, I find it more stable than any of the Pis that I have, as the Pis
always tend to get a corrupted filesystem within 6-12 months. They can blame
SD all they want, there's something wrong with the chipset.
R: brookish
I have had really good luck with these boards. [http://nanopi.io/nanopi-neo-
air.html](http://nanopi.io/nanopi-neo-air.html)
$20 but they do not need a permanent SD card and they need a heat sink. The
Allwinner CPU's seem to get much hotter than the Raspi Broadcoms. The issues I
have experienced really have been nuances of the Linux distros but very
workable.
R: bedros
they look great, do they have usable I2C, GPIO libs with support for python?
I'm assuming they support some version of ubuntu/debian
R: bigiain
While I understand the Pi Foundations "one per order/customer" rule - I share
your frustration. There's a bunch of ideas I've got where 10 or so Pi Zeros
would be both useful and affordable @ $5 ea or even $10 for the ZeroW, but I
just can't buy them like that...
I wanted to make a "real working" diagram of our standard AWS platform as a
wall chart - with 3 ELB load balancers, 5 autoscaling ec2 instances (3
"active" and two "spares"), and 3 "multi-az" RDS db servers - each represented
by a Pi Zero, with ws2811 led strip running between them representing the
network which lights up animating packet/data flow. It'd have big red
killswitches next to everything, so you can push buttons to kill off bits of
infrastructure and visualise how the platform responds (with the spare ec2
instances autoscaling in to replace dead ones, ELB and RDS traffic auto-
rerouting). And I'd use this to run our "standard" backend, so people could
connect with their phone (with a browser or test app) and "see" their own
network traffic and watch how it still works even if you kill any 2 and up to
8 different parts in the right combination.
I think that'd be a really useful way to demonstrate to non-technical
stakeholders why if they want better response times that "we'll get to that
Monday morning" when something breaks after 5pm on a Friday - I'm going to
charge them _way_ more to support their site if it's running on a $5 or $10
per month VPS than if they spend $120/month or so on AWS to host it.
(Oh, and I second zapt02's recommendation for the NextThing CHIP - I've got
half a dozen of those, and they're working out really well in other projects -
but I've also got another 5 on order and have been waiting several months for
'em...)
|
HACKER_NEWS
|
Birth of the Demonic Sword – Chapter 1993: Immense
Heaven and Earth had treated the tree like any other troublesome existence. They had planned to absorb it into the sky, but the almost harmless nature of the plant had allowed them to let it grow. The rulers’ understanding would improve far more if they removed it once it was in the upper tier.
Noah didn’t accept Sepunia’s request because of her possible influence over King Elbas. He found that aspect amusing, but he wouldn’t dare to take such a big risk for that silly reason.
The issue didn’t have an obvious solution since every path presented problems, so Noah had to rely on his instincts. The mission in the sky needed a helper, and a potential success would make Heaven and Earth lose many followers. The potential gains were immense, so he decided to go on.
The decision didn’t make King Elbas and Noah move to the task immediately. They had something else to do in that region, and Sepunia had already given them an explanation about that.
Noah had attacked, but he didn’t cause any shockwaves or similar events. Even the trunk had seemingly remained undamaged, but he knew how deeply he had hurt it.
“We’ll divide those equally,” King Elbas exclaimed.
According to Sepunia’s words, the tree was one of the many species born from the issues that Heaven and Earth naturally allowed the world to have. They were about to win, so a tree able to feed on their chaotic laws had appeared.
“Don’t even try to trick me,” King Elbas declared.
“You want to cut it, right?” King Elbas asked as his expression darkened.
Noah, King Elbas, and Sepunia flew toward the immense tree. The liquid stage cultivator barely released any aura due to the many restrictions the two experts had placed on her. Still, she could cast a bit of her power and offer further explanations about that majestic magical plant.
No slash followed the vanishing of the high-pitched noises. Everything suddenly fell silent, and the energy accumulated on the Cursed Sword disappeared without creating any repercussions in the environment.
“I would have gone for the branches,” Noah whispered.
“Make sure not to damage it,” King Elbas warned. “And don’t make me come here to take my part. I want half of it.”
Bloodlust naturally came out of his body when his ambition began to empower the Cursed Sword. Noah couldn’t restrain the effects of the blade as he pushed its power past the limits of the eighth rank. His violent thought resonated with the raging energy that came out of the weapon and generated the high-pitched noises that his previous opponent had learnt to fear.
Unluckily for the rulers, Noah and King Elbas had appeared on the scene. The two experts remained awed before the sheer size of the plant. They had never seen such a big living being with their own eyes. The toughness of the trunk was also remarkable. Still, the tree appeared almost completely devoid of defensive measures. It merely grew and affected the sky in its area with its aura.
Noah’s hands began to tremble. The blade remained still, but the sheer power that ran through its structure forced even his immense physical strength to take a step back. Still, he couldn’t possibly let the Cursed Sword go, so he used part of his ambition to empower his body and strengthen it.
“Roots or branches?” Noah asked.
King Elbas lusted after the sheer amount of materials he could obtain after cutting down such an immense magical plant. The tree alone had the potential to surpass what he had won in the bet against the Divine Architect.
Noah knew that a single slash couldn’t possibly be enough there, but he tried anyway. He poured waves of ambition into the blade and let the high-pitched noises intensify so much that even his ears started to bleed. That injury would have made most cultivators release the attack, but Noah only let the process continue.
Noah simply let his ambition empower the Cursed Sword for as long as it needed. The blade had just unlocked a new power, so it was still getting used to that strength. It didn’t know how much of Noah’s law it could take, but it made sure not to hold back.
Noah didn’t hold back and summoned his ambition. He didn’t need to empower his body or his companions there. The Cursed Sword had to do everything on its own, and he felt confident that it could succeed in the task.
“Normally, no,” Noah explained. “But this isn’t a normal magical plant. I’d find it easier to cut entire areas with a single slash.”
Cracks eventually appeared on Noah’s body. He felt injuries forming on his skin due to the intensity of the high-pitched noises, and he smiled at that sight. The Cursed Sword could become incredibly strong, but it had to complete that step to approach that superior state.
“Of course I want to cut it,” Noah snorted while waving the Cursed Sword above him. “This idiot didn’t approach its breakthrough after killing the other cultivator.”
“Your bickering feels quite different when heard up close,” Sepunia commented.
“Can a magical plant fill that spot?” King Elbas continued.
|
OPCFW_CODE
|
Difficulty opening files from explorer with preview pane enabled
ℹ Computer information
PowerToys version: 0.29.0
PowerToy Utility: Preview Pane
Running PowerToys as Admin: yes
Windows build number: [10.0.19041.685]
📝 Provide detailed reproduction steps (if any)
With preview pane enabled, I double click word file to open it from File Explorer.
…
…
✔️ Expected result
File opens with double click
❌ Actual result
Oftentimes, double-clicking is ineffective in opening the document and I have to 'mash' click to open it.
📷 Screenshots
Are there any useful screenshots? WinKey+Shift+S and then just paste them directly into the form
Hi @mgguilin
Thank you for reporting the problem.
Does the problem occur after updating PowerToys to 0.29?
Where are these files physically located? Local disk or somewhere else (network drive, ..)?
Hi Davide,
I apologize for getting back to you so late. The problem persists after
updating to PT 0.29, and the files are stored locally (internal HD).
Matt
@enricogior @crutkas
That unresponsive UI looks similar to https://github.com/microsoft/PowerToys/pull/8926 but this with local files.
@mgguilin
Can you give us more detail?
You have written "often times", but:
Are you able to reproduce this?
Do you have SVG or MD files in the folder that freeze?
Do you have any link to unavailable network shares in This PC or Quick access?
@mgguilin, does this happen with PowerToys disabled for the File Explorer add-ons? We don't interact with Word documents, which is odd.
|
GITHUB_ARCHIVE
|
package jsoncase
// TransformVal transforms every field name of any map[string]interface{} embedded in the interface{} using the transformation function.
func TransformVal(transformation func(string) string) func(json interface{}) interface{} {
var TransformFieldName func(val interface{}) interface{}
TransformFieldName = func(val interface{}) interface{} {
switch val.(type) {
case map[string]interface{}:
realVal := val.(map[string]interface{})
thisVal := make(map[string]interface{}, len(realVal))
for key, value := range realVal {
thisVal[transformation(key)] = TransformFieldName(value)
}
return thisVal
case []interface{}:
realVal := val.([]interface{})
thisVal := make([]interface{}, len(realVal))
for idx, elem := range realVal {
thisVal[idx] = TransformFieldName(elem)
}
return thisVal
case []map[string]interface{}:
realVal := val.([]map[string]interface{})
thisVal := make([]map[string]interface{}, len(realVal))
for idx, elem := range realVal {
thisVal[idx] = TransformFieldName(elem).(map[string]interface{})
}
return thisVal
default:
return val
}
}
return TransformFieldName
}
// TransformMap transforms the field names of every map[string]interface{} embedded in the root map using the transformation function.
func TransformMap(transformation func(string) string) func(json map[string]interface{}) map[string]interface{} {
var TransformFieldName func(json map[string]interface{}) map[string]interface{}
TransformFieldName = func(json map[string]interface{}) map[string]interface{} {
res := make(map[string]interface{}, len(json))
for key, val := range json {
newVal := val
switch val.(type) {
case map[string]interface{}:
realVal := val.(map[string]interface{})
newVal = TransformFieldName(realVal)
case []interface{}:
realVal := val.([]interface{})
thisVal := make([]interface{}, len(realVal))
for idx, elem := range realVal {
switch elem.(type) {
case map[string]interface{}:
thisVal[idx] = TransformFieldName(elem.(map[string]interface{}))
default:
thisVal[idx] = elem
}
}
newVal = thisVal
case []map[string]interface{}:
realVal := val.([]map[string]interface{})
thisVal := make([]map[string]interface{}, len(realVal))
for idx, elem := range realVal {
thisVal[idx] = TransformFieldName(elem)
}
newVal = thisVal
}
res[transformation(key)] = newVal
}
return res
}
return TransformFieldName
}
// ToSnakeVal transforms every field name of any map[string]interface{} embedded in the interface{} to snake_case.
func ToSnakeVal(json interface{}) interface{} {
return TransformVal(toSnakeString)(json)
}
// ToCamelVal transforms every field name of any map[string]interface{} embedded in the interface{} to camelCase.
func ToCamelVal(json interface{}) interface{} {
return TransformVal(toCamelString)(json)
}
// ToPascalVal transforms every field name of any map[string]interface{} embedded in the interface{} to PascalCase.
func ToPascalVal(json interface{}) interface{} {
return TransformVal(toPascalString)(json)
}
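// Example usage (illustrative; assumes the unexported toSnakeString, toCamelString,
// and toPascalString helpers are defined elsewhere in this package):
//
//	var doc interface{}
//	_ = json.Unmarshal([]byte(`{"UserName":"ada","HomeAddress":{"PostCode":"42"}}`), &doc)
//	snake := ToSnakeVal(doc)
//	// snake == map[string]interface{}{"user_name": "ada", "home_address": map[string]interface{}{"post_code": "42"}}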
|
STACK_EDU
|
Structured data-sets are often easy to represent using graphs. The prevalence of massive data-sets in the modern world gives rise to big graphs such as web graphs, social networks, biological networks, and citation graphs. Most of these graphs keep growing continuously and pose two major challenges in their processing: (a) it is infeasible to store them entirely in the memory of a regular server, and (b) even if stored entirely, it is incredibly inefficient to reread the whole graph every time a new query appears. Thus, a natural approach for efficiently processing and analyzing such graphs is reading them as a stream of edge insertions and deletions and maintaining a summary that can be (a) stored in affordable memory (significantly smaller than the input size) and (b) used to detect properties of the original graph. In this thesis, we explore the strengths and limitations of such graph streaming algorithms under three main paradigms: classical or standard streaming, adversarially robust streaming, and streaming verification.
In the classical streaming model, an algorithm needs to process an adversarially chosen input stream using space sublinear in the input size and return a desired output at the end of the stream. Here, we study a collection of fundamental directed graph problems like reachability, acyclicity testing, and topological sorting. Our investigation reveals that while most problems are provably hard for general digraphs, they admit efficient algorithms for the special and widely-studied subclass of tournament graphs. Further, we exhibit certain problems that become drastically easier when the stream elements arrive in random order rather than adversarial order, as well as problems that do not get much easier even under this relaxation. Furthermore, we study the graph coloring problem in this model and design color-efficient algorithms using novel parameterizations and establish complexity separations between different versions of the problem.
The classical streaming setting assumes that the entire input stream is fixed by an adversary before the algorithm reads it. Many randomized algorithms in this setting, however, fail when the stream is extended by an adaptive adversary based on past outputs received. This is the so-called adversarially robust streaming model. We show that graph coloring is significantly harder in the robust setting than in the classical setting, thus establishing the first such separation for a ``natural'' problem. We also design a class of efficient robust coloring algorithms using novel techniques.
In classical streaming, many important problems turn out to be ``intractable'', i.e., provably impossible to solve in sublinear space. It is then natural to consider an enhanced streaming setting where a space-bounded client outsources the computation to a space-unbounded but untrusted cloud service, who replies with the solution and a supporting ``proof'' that the client needs to verify. This is called streaming verification or the annotated streaming model. It allows algorithms or verification schemes for the otherwise intractable problems using both space and proof length sublinear in the input size. We devise efficient schemes that improve upon the state of the art for a variety of fundamental graph problems including triangle counting, maximum matching, topological sorting, maximal independent set, graph connectivity, and shortest paths, as well as for computing frequency-based functions such as distinct items and maximum frequency, which have broad applications in graph streaming. Some of our schemes were conjectured to be impossible, while some others attain smooth and optimal tradeoffs between space and communication costs.
Ghosh, Prantar, "Space-Efficient Algorithms and Verification Schemes for Graph Streams" (2022). Dartmouth College Ph.D Dissertations. 81.
|
OPCFW_CODE
|
In the first phase of the ParlaMint project (July 1 2020 – Sept. 30 2020) parliamentary corpora were compiled for four countries – Bulgaria, Croatia, Poland and Slovenia. The corpora were encoded according to the ParlaMint XML schema, a specialisation of the Parla-CLARIN TEI format and linguistically annotated with the Universal Dependencies and named entities. They are available from the CLARIN.SI repository and through associated concordancer.
The sessions in the corpora were also marked up as belonging to the COVID-19 part of the corpus (Oct 2019 – July 2020) or to its reference subset (2015 – Oct 2019).
With this call we invited proposals to add parliamentary corpora for additional countries to the ParlaMint collection.
Please note that the indicated COVID-19 part timespan (Oct 2019 – July 2020) and reference subset timespan (2015 – Oct 2019) have to be covered, but they can be extended: the COVID-19 part to the end of 2020 and beyond, and the reference subset to before 2015, given that resources and time permit.
ParlaMint Call Results
The proposals of the following applicants were assessed and approved by the ParlaMint Team together with representatives of CLARIN Board of Directors:
| Applicant | Institution | Language |
| --- | --- | --- |
| Paul Rayson | Lancaster University | English |
| Ruben van Heusden | University of Amsterdam – ILPS research group | Dutch |
| Steinþór Steingrímsson | The Árni Magnússon Institute for Icelandic Studies | Icelandic |
| Tomas Krilavičius | Applied Informatics dept., Vytautas Magnus University (Vytauto Didžiojo university) | Lithuanian |
| Barbora Hladká | Charles University | Czech |
| Giulia Venturi | Institute for Computational Linguistics "A. Zampolli" (ILC-CNR) | Italian |
| Çağrı Çöltekin | University of Tübingen | Turkish |
| Costanza Navarretta | University of Copenhagen | Danish |
| Miklós Sebők | Centre for Social Sciences, Budapest, Hungary | Hungarian |
| Giancarlo Luxardo | Praxiling UMR 5267 | French |
| Robers Dargis | Institute of Mathematics and Computer Science, University of Latvia | Latvian |
| Petru Rebeja | Alexandru Ioan Cuza University of Iași | Romanian |
| Jesse de Does | Instituut voor de Nederlandse Taal | Belgian Dutch/French |
The activities envisaged for this call include:
Extension of the ParlaMint model to 6 new countries.
For each country the following specific activities are expected from applicants
- Obtaining data for the COVID-19 and reference parts of the corpus
- Conversion of the data into the ParlaMint format
- Linguistic processing of the corpus with Universal Dependencies, preferably including a suitable NER module
- Producing documentation on the provided corpus [similar to: link to description of corpora]
ParlaMint team provides the following:
- Dedicated guidelines on how to prepare the data [link to PDF]
- The already existing corpora as models and the ParlaMint Schema: (http://hdl.handle.net/11356/1345)
- Upload to the concordancers:
- NoSketch Engine: https://www.clarin.si/noske/parlamint.cgi
- NoSketch Engine (public): https://www.clarin.si/noske/index-en.html
- Kontext: https://www.clarin.si/kontext/
- Upload to Parlameter: https://parlameter.org/
Size of funding / duration per proposal
- Funding: 5,000 Euro per project
- Timing and Duration: December 1, 2020 – March 31, 2021 (4 months)
- Qualifications of the team involved.
- Status of the available and/or accessible parliamentary corpora.
- Diversity of parliaments and languages.
- Potential with respect to the project goals.
- The applications will be assessed by the ParlaMint team together with representatives of CLARIN Board of Directors.
- In case more proposals come in than can be funded, the status quality criterion and the potential with respect to the project goals will play a role.
- The proposer should be affiliated with an institution that is part of a CLARIN consortium in a CLARIN member or observer country. In case the structure of a national consortium is not in place yet, or not specified unambiguously, applicants should check with the National Coordinator whether he/she can support the application.
- Personnel costs, including the relevant indirect and administrative costs, are eligible for funding.
Each participant is responsible for assembling all cost claims relevant for the project. The sum will be paid in one installment after the corpora have been delivered. Thus, the payment is envisaged for March or April 2021.
An expression of interest is expected that outlines the motivation behind the application, the expertise of the team, the status of parliamentary data. The applicant can do so by filling out this application form.
26 October 2020: Call Issued
16 November 2020: Submission deadline
20 November 2020: Results announced
1 December 2020: Projects start
30 March 2021: Delivery of the Converted Corpora
|
OPCFW_CODE
|
While SMS can be used to help enforce proper security on your network, in this context we are talking about properly configuring the product's own security settings. If your SMS 2003 installation were to become compromised it would not be pretty, potentially exposing every machine SMS communicates with on your network. In order to properly configure your SMS environment, you need to understand all of the involved components of a typical SMS system and their corresponding security considerations. We will briefly review the following technologies and how they support the SMS 2003 security environment:
Operating system security
SQL Server security
SMS 2003 runs on the operating system (OS) as well as using its file-sharing capabilities to communicate between SMS sites, component servers, and clients. You should understand accounts, groups, and domains. SMS services and components can use a variety of OS accounts for their security context. For more information, see the Windows integrated help system or any Windows OS security document.
Windows 98 is not considered to be a secure OS; thus, the client side of the SMS security model is not applicable to clients running Windows 98.
SQL Server provides SMS with its site databases. As with any other application connecting to SQL Server you can opt for either integrated security or SQL server security. These days SQL Server security is discouraged but still an option. For more information see SQL Server Books Online (BOL).
A common reason to still use SQL Server Authentication is when clients are running on Windows 95/98.
SMS utilizes WMI for several tasks, including the following:
Performing hardware inventory on client machines
As an interface to the site databases for both the server and the client
Storing configuration data
When a user requests WMI resources, WMI security authenticates the user for both local and remote resources. On Windows NT 4.0, Windows 2000, Windows XP, and Windows Server 2003, a user can specify another credential for remote resources. You can configure local or remote WMI properties via the MMC snap-in, wmimgmt.msc. For more information see http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/snap_wmi_control.mspx.
SMS uses IIS to support its management point, server locator point, and reporting point sites. Thus, it is beneficial to understand basic IIS security. While IIS security is important for SMS regardless of the configuration, when you have advanced security mode–based sites (see the "SMS Security Enhancements" section) the importance is escalated because:
The site server's computer account has administrative privileges on other machines.
The SMS site server manages its local files and registry entries via the Local System account. Any software that runs in the Local System context of IIS has equal access to those same entities.
IIS offers three varying levels of application protection modes: low, medium, and high. SMS 2003 uses the high application mode. With the arrival of IIS 6 came the idea of application pools; basically, application pools allow individual web sites to operate autonomously. Thus, if one site experiences problems, those problems would not affect the entire IIS server. Microsoft recommends using the latest version of IIS available. (At the time of this writing, IIS 6 is the latest production web server.) Disable IIS functions you do not require (this is good practice in general as well), including the usage of the IIS Lockdown tool. For more information, see http://www.microsoft.com/WindowsServer2003/iis/default.mspx.
Network security is beyond the scope of this book, but that does not diminish its importance. You should have a solid understanding of network security concepts to ensure network traffic between SMS sites, systems, and clients are secure. SMS can use encryption, hashing, and signing algorithms to encrypt network traffic.
One of the most overlooked aspects of information system security in general, if not the most overlooked, is physical security. IT professionals (including yours truly) will spend countless hours doing their best to ensure their systems are secure, only to leave the server room unlocked after they're done working. One of the design principles behind SMS is to not assume client machines are physically secure; thus SMS clients have no capability to compromise SMS's security model. Excluding SMS clients, the following SMS entities could compromise the entire SMS security model:
You should ensure that physical access to the preceding nodes is restricted to only those who require it. SMS is a network management product; it only makes sense to restrict who can access it. We will not go into details about physical security but the list that follows contains some of the more common forms of physical security in a data center:
Typical door lock
Badge with "swiper"
|
OPCFW_CODE
|
I am trying to set up VPN using routing and remote access.
I've tried two configurations, one using a single network card, and one using two network cards.
I can connect through my VPN and get assigned an IP address by the DHCP server, but I can't "SEE" anything. By this I mean the client appears completely blind to the office network, this means:
- No ping to any office server including the VPN server (I've stopped ICMP being filtered in and out)
- No DNS resolution (isn't surprising if I can't connect by IP address. I tried accessing the VPN Server Share and a web page hosted on it using the domain name and the IP address, e.g. http://host.com/abc and http://18.104.22.168/abc)
- Can't access any network resources (also not surprising given the above)
I've not had any problems connecting the VPN at all (I thought I had, but this was to do with the test router I was given).
This doesn't appear to be a firewall/router problem as I configured it to allow VPN traffic through and forward to the correct server.
This, I think, is confirmed as the VPN server event logs shows success audits (that is, logon events).
However it does show the following event directly after the success audit logon event occurs (and I don't know if this is normal but wouldn't expect it to be - the VPN client doesn't disconnect).
An account was logged off. Subject: Security ID: [DomainName]\[UserName] Account Name: [UserName] Account Domain: [DomainName] Logon ID: 0x[xxxxxx] Logon Type: 3
This event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. Logon IDs are only unique between reboots on the same computer.
I am not sure if this is of any use, but Wireshark shows me connecting successfully and then a load of encrypted packets, which is what I would suspect:
- There's a PPP LCP/CHAP configuration conversation which is the client connecting.
- I don't see any ICMP on either side when I do a ping to the VPN public interface (I assume this is because it is encapsulated on the GRE packets).
However the following is interesting (which I can't explain):
- Pinging the public interface (A) (of VPN Server) I see increased PPP and GRE traffic
- Same if I ping the private interface (B) (of VPN Server)
- If I ping another server (C) on the network I can see the ICMP packet requests but no replies (I'm not sure if (C) server is replying directly to the VPN IP of the client (D) or not - this does appear the case as I see the traffic in (A))
I can't explain points 1 or 2 - I'd expect to see ICMP, but the problem does appear to be sending traffic to (D). To check this I pinged (D) from (C) and RECEIVED a reply, however I DID NOT see correlating ICMP traffic on (D) (could this have been GRE traffic?). I find this strange, but (C) is definitely pinging the correct machine as it stops responding when I disconnect (D) from the VPN.
Also note, I don't see any traffic on (D) to the VPN network addresses, only to the routers which have the port forwarding set up. Could this be some sort of routing problem?
The VPN server does seem to have a problem pinging the (D)- says NO RESOURCES, PathPing shows it is using interface (A). and this problem only occurs when pinging (D), it pings everything else without problem.
I changed RRAS to single NIC setup and the no resources problem went away - I can seemingly ping the client from all machines, but can't ping anything from the client. I say seemingly as when I ping the client it takes less than 1 ms (bear in mind this is across the Internet and different ISPs) and I don't see any ICMP traffic on the VPN server - plus when I disconnect the client, it is still pingable!?! (As if the VPN server is replying to pings on the client's behalf.) Whilst this is happening the VPN server gets timeouts when it pings the disconnected client. VERY STRANGE!
After leaving it for some time (drinking tea, eating food sort of length) the VPN server has gone back to the No resources problem, and now I have no ping in either direction. Disabling RRAS and enabling it again put me back to where I was just now: I can now ping from LAN to client.
|
OPCFW_CODE
|
Does lodestar follow semver?
Describe the bug
I upgraded @lodestar/types from v1.17.0 to v1.21.0 while working on some stuff for Ultralight and encountered a number of errors of the sort:
npm ERR! src/networks/beacon/ultralightTransport.ts(20,15): error TS2305: Module '"@lodestar/types"' has no exported member 'allForks'.
Just wanted to confirm whether lodestar follows semver with regard to breaking changes (i.e. only makes backwards-incompatible changes on major version updates). Not a big deal if not, but I will be sure to pin my dependencies to specific versions going forward if semver is not normally followed.
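(For reference, pinning here just means using an exact version in package.json instead of a caret range, e.g.:)
"dependencies": {
  "@lodestar/types": "1.17.0"
}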
Expected behavior
Since going from v1.17 to v1.21 wasn't a "major" upgrade as defined by semver, I would have expected no breaking build errors with imports sourced from @lodestar/types, but would have expected a deprecation notice on the allForks type.
Steps to reproduce
No response
Additional context
No response
Operating system
Linux
Lodestar version or commit hash
v1.21
Hi @acolytec3 and thanks for reaching out. We follow semver for lodestar as a product, meaning the CLI interface and how the user interacts with the application, at the application level. The internal packages are designed to be part of the @chainsafe/lodestar project and are versioned relative to the project as a whole. That being said, you bring up a good point that the internal packages can potentially be dependencies of external projects. We have discussed this before and resolved at that time to keep the packages versioned with Lodestar as a whole.
What I will do is commit to you that I will bring this up for discussion again at our standup tomorrow.
I will say though that versioning and publishing packages independently will break our current workflows and release processes. This is why we decided to keep the status quo. It adds significant complexity and at the time there were very few, if any, external dependents. Now that your issue was brought to light it may lend credence to separating them but I cannot commit to us moving that direction immediately, or at all, until we discuss it as a team.
If you would like to join us on standup tomorrow (Tuesday @ 10am EST/2pm GMT) and participate in that discussion please feel free to come to our discord where we announce the standup meeting link that gets generated tomorrow morning.
https://discord.gg/4Zk4Ynne
You can find the team there, and the link to join standup in #lodestar-general
The notes from our discussion on the Aug 27th standup are here: https://github.com/ChainSafe/lodestar/wiki/Lodestar-Planning-&-Standup-Meetings#planning-and-discussions
Now that we have known users for separate packages, it would be good to surface any ideas/tooling that would make this even possible to pursue. We've previously had some bad experiences with it... Lion tried to deprecate it once, but this was also around a time when Lerna was poorly maintained and was taken over by Nrwl in mid-2022.
The current workflows are simple to maintain (biggest benefit) and we could continue doing it this way if we're better at communicating breaking changes via our conventional commits and/or release notes (open to better communication ideas also!). I feel like maintaining what we have is simpler and as long as we make it easy for people to see it in the changelogs that we broke something in a package with a minor bump, we should be ok? I mean do people even really look at release notes for minor bumps?
Last time we dealt with release issues around mid-2022, it was a bit of a nightmare, so I'd be looking for more elegant solutions to further push for independent semver packages.
|
GITHUB_ARCHIVE
|
Debugging CSS Grid Part 2: What the Fr(action)?
In the second part of the Debugging CSS Grid series, we’ll take a look at fr (or fraction) units. Fr units are very useful for sizing grid tracks, and vastly simplify the process of building responsive layouts. But there are one or two unexpected behaviours you may run into if you don’t understand how they work. This article will aim to demystify these.
The fr unit is a new unit, exclusive to Grid. It allows you to size your grid tracks according to a proportion of the available space in the grid container. By using fr units instead of percentages for a flexible layout, we can avoid messy and complicated calc() functions to size our grid tracks. As a simple example, we can create four equal-width columns:
grid-template-columns: repeat(4, 1fr);
The grid takes into account the 20px gap between each column track and distributes the remaining space equally. You can also use it alongside fixed tracks:
grid-template-columns: repeat(3, 200px) 1fr;
This will give us three fixed columns of 200px and a fourth column, sized with the fr unit, which will take up the remaining space.
We can use multiples of the fr unit to create tracks that are proportionally larger or smaller. In this example, the second track will be twice the width, and the fourth track will be three times the width of the first and third tracks.
grid-template-columns: 1fr 2fr 1fr 3fr;
All fr units are not created equal
A common mistake is to assume that all tracks sized with the same number of fr units will be the same size. This is certainly what you would expect if you were using percentages for track sizing, for example. But if we compare the first and last examples above, we can quite clearly see that the 1fr columns in the last example (Fig 03) are not the same size as those in the first example (Fig 01), despite using the same value! The reason for this is that fr units are flexible units. They do not behave as lengths, like pixels, rems, ems and others, which is why they cannot be used in calc() functions. To quote directly from the spec:
Tracks sized with fr units are called “flexible tracks”, as they flex in response to leftover space similar to how flex items fill space in a flex container.
Flexible tracks are resolved last according to Grid’s sizing algorithm. The browser takes into account all of the fixed tracks and column or row gaps, plus the maximum size of any tracks sized using expressions like minmax(), then distributes the remaining space accordingly.
Consider the following example:
grid-template-columns: repeat(3, minmax(20px, 300px)) 1fr;
We have three columns sized with minmax() (with a maximum size of 300px), plus one column of 1fr. If the width of the grid container is less than the sum of the three columns (900px) then the last column’s maximum size will depend on the content. If the track contains no grid item (or the grid item has no content, and nothing else affecting its size, like padding or borders) then it will have a resolved width of 0 – so it will be invisible. It’s only when our grid container is larger than 900px (e.g. for larger viewports) that we will see that 1fr column, which will fill the remaining space in the grid.
Fractions of fractions
You don’t need to distribute all of the available space in a grid. We can also size tracks using values of less than 1fr.
If we have three grid tracks at 0.5fr each, we might expect that they take up half the width of the available space – a fraction of a fraction. But this demo shows what actually happens here.
The tracks with a size of 0.5fr actually behave as if they were 1fr! This might be somewhat surprising if we think of fr tracks in the same way as length-based units (like percentages), but becomes clearer if we think of these as flex items instead.
Understanding the flex factor
The value of the fr unit in the CSS Grid specification is referred to as the flex factor. The value of any fr tracks is computed by this formula:
<flex factor of the track> * <leftover space> / <sum of all flex factors>
The specification explains what happens when a track’s flex factor is less than 1:
If the sum of the flex factors is less than 1, they’ll take up only a corresponding fraction of the leftover space, rather than expanding to fill the entire thing.
Because each of our tracks is 0.5fr, the sum of all our flex factors is greater than 1 – 1.5 to be exact. So our column tracks expand to fill all the available space. However, if we sized each track at 0.2fr, say, then the sum of the flex factors will be 0.6. If we try this out then we can see that each item will take up the equivalent proportion of the available space.
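In code, that last case would look something like this (three tracks whose flex factors sum to 0.6, so only 60% of the leftover space is distributed):
grid-template-columns: repeat(3, 0.2fr);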
Intrinsic and extrinsic sizing
We’ve seen that the size of fr tracks is influenced by the rest of the grid: the sizes of other tracks, and the gap values. This is known as extrinsic sizing – where the size is determined by context. But the size of an fr track is also dependent on its content. If you have three columns of 1fr, and you place an item in one of those columns whose horizontal size is larger than the equally distributed space then that track will grow to accommodate the content, while the others will become smaller to make space. This is intrinsic sizing. (The Intrinsic and Extrinsic sizing specification offers a full explanation.)
In this example we have a grid with three child items, and one of those children contains a really long word:
We can see that the column containing the longer word is larger than the other two tracks, despite being sized with the same unit. (The same thing will happen if you have some content in the grid with its own intrinsic dimensions – e.g. an <img> element with width: 600px in the CSS.)
This is a sensible behaviour and prevents our content from being cut off, or overflowing the container. But it’s not always desirable. If the purpose of our grid is to impose a strict visual layout, then this has the potential to break our layout. If we want to clamp our grid tracks so that they take up an equal proportion of the available space regardless of the size of their content, we can use CSS Grid’s minmax() function. By default, Grid effectively behaves as if 1fr tracks have a minimum size of auto – minmax(auto, 1fr). By supplying a different minimum (e.g. 0), we can prevent our grid tracks expanding to fit the content. You can see this in action in the following example:
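A minimal version of that declaration (the original example is a live demo) is:
grid-template-columns: repeat(3, minmax(0, 1fr));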
Fr units are actually the simplest units to work with in Grid, and for the most part cause much less pain than using percentages and calc() for your grid tracks! Don’t be put off using them! I hope this article can serve as a handy reference if you ever get caught out in some more unusual scenarios.
Best Practices with Grid Layout by Rachel Andrew
Understanding Sizing in CSS Layout by Rachel Andrew
|
OPCFW_CODE
|
from corpus.ProtoFile import Relation
from preprocessing.feature_engineering.datasets import RelationWindow
# TODO test class
class WordFeatureGroup(object):
def __init__(self):
pass
def convert_window(self, window):
"""Converts a RelationWindow object into a list of lists of features, where features are strings.
Args:
window: The RelationWindow object (defined in datasets.py) to use.
Returns:
List of lists of features.
One list of features for each token.
Each list can contain any number of features (including 0).
Each feature is a string.
"""
result = []
assert isinstance(window, RelationWindow)
for rel in window.relations:
assert isinstance(rel, Relation)
result.append([self.wm1(rel), # bag-of-words in M1
self.hm1(rel), # head word of M1
self.wbnull(rel), # when no word in between
self.wbfl(rel), # the only word in between when only one word in between
self.wbf(rel), # first word in between when at least two words in between
self.wbl(rel), # last word in between when at least two words in between
self.wbo(rel), # other words in between except first and last words
self.bm1f(rel), # first word before M1
self.bm1l(rel), # second word before M1
self.am2f(rel), # first word after M2
self.am2l(rel), # second word after M2
])
# print("done")
return result
@staticmethod
def get_words(tokens):
return [token.word for token in tokens]
def wm1(self, link):
# bag-of-words in M1
arg1_tokens = link.get_arg1_tokens()
words = self.get_words(arg1_tokens)
return "wm1={0}".format("_".join(words))
def hm1(self, link):
# head word of M1
arg1_tokens = link.get_arg1_tokens()
words = self.get_words(arg1_tokens)
return "hm1={0}".format(words[-1])
def wm2(self, link):
# bag - of - words in M2
arg2_tokens = link.get_arg2_tokens()
words = self.get_words(arg2_tokens)
return "wm2={0}".format("_".join(words))
def hm2(self, link):
# words.HM2(), # head word of M2
arg2_tokens = link.get_arg2_tokens()
words = self.get_words(arg2_tokens)
return "hm2={0}".format(words[-1])
def hm12(self, link):
# words.HM12(), # combination of HM1 and HM2
arg1_tokens = link.get_arg1_tokens()
arg2_tokens = link.get_arg2_tokens()
words1 = self.get_words(arg1_tokens)
words2 = self.get_words(arg2_tokens)
return "hm12={0}".format(words1[-1] + "_" + words2[-1])
@staticmethod
def wbnull(link):
wb_tokens = link.get_tokens_bet()
return "wbnull={0}".format(bool(wb_tokens))
def wbfl(self, link):
wb_tokens = link.get_tokens_bet()
words = self.get_words(wb_tokens)
if len(words) == 1:
return "wbfl={0}".format("_".join(words))
else:
return "wbfl=null"
def wbf(self, link):
wb_tokens = link.get_tokens_bet()
words = self.get_words(wb_tokens)
if len(words) > 1:
return "wbf={0}".format(words[0])
else:
return "wbf=null"
def wbl(self, link):
wb_tokens = link.get_tokens_bet()
words = self.get_words(wb_tokens)
if len(words) > 1:
return "wbl={0}".format(words[-1])
else:
return "wbl=null"
def wbo(self, link):
wb_tokens = link.get_tokens_bet()
words = self.get_words(wb_tokens)
if len(words) > 1:
return "wbo={0}".format("_".join(words[1:-1]))
else:
return "wbo=null"
def bm1f(self, link):
# "word1 word2 arg1"
# return word2
b_tokens = link.get_b_tokens(2)
words = self.get_words(b_tokens)
try:
b_words = words[-1]
except IndexError:
b_words = "null"
return "bm1f={0}".format(b_words)
def bm1l(self, link):
# "word1 word2 arg1"
# return word1
b_tokens = link.get_b_tokens(2)
words = self.get_words(b_tokens)
try:
b_words = words[-2]
except IndexError:
b_words = "null"
return "bm1l={0}".format(b_words)
def am2f(self, link):
# "arg2 word1 word2"
# return word1
a_tokens = link.get_a_tokens(1)
words = self.get_words(a_tokens)
try:
a_words = words[0]
except IndexError:
a_words = "null"
return "am2f={0}".format(a_words)
def am2l(self, link):
a_tokens = link.get_a_tokens(2)
words = self.get_words(a_tokens)
try:
a_words = words[1]
except IndexError:
a_words = "null"
return "am2l={0}".format(a_words)
|
STACK_EDU
|
In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids our approach includes truncated grids, both with and without boundary grid points. Similar to the implicit data structures proposed in Feuersänger (Dünngitterverfahren für hochdimensionale elliptische partielle Differntialgleichungen. Diploma Thesis, Institut für Numerische Simulation, Universität Bonn, 2005) and Murarasu et al. (Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming. Cambridge University Press, New York, 2011, pp. 25–34) we also define a bijective mapping from the multi-dimensional space of grid points to a contiguous index, such that the grid data can be stored in a simple array without overhead. Our approach is especially well-suited to exploit all levels of current commodity hardware, including cache-levels and vector extensions. Furthermore, this kind of data structure is extremely attractive for today’s real-time applications, as it gives direct access to the hierarchical structure of the grids, while outperforming other common sparse grid structures (hash maps, etc.) which do not match with modern compute platforms that well. For dimensionality d ≤ 10 we achieve good speedups on a 12 core Intel Westmere-EP NUMA platform compared to the results presented in Murarasu et al. (Proceedings of the International Conference on Computational Science—ICCS 2012. Procedia Computer Science, 2012). As we show, this also holds for the results obtained on Nvidia Fermi GPUs, for which we observe speedups over our own CPU implementation of up to 4.5 when dealing with moderate dimensionality. In high-dimensional settings, in the order of tens to hundreds of dimensions, our sparse grid evaluation kernels on the CPU outperform any other known implementation.
Bibliographical noteKAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): UK-C0020
Acknowledgements: This publication is based on work supported by Award No. UK-C0020, madeby King Abdullah University of Science and Technology (KAUST). The second author wouldlike to thank the German Research Foundation (DFG) for financial support of the project withinthe Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart.Special thanks go to Matthias Fischer, who helped with the implementation of the different sparsegrid bases.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
|
OPCFW_CODE
|
package sync
import (
"sync"
)
type Lock struct {
n int
c chan struct{}
mut sync.Mutex
}
func NewLock(n int) *Lock {
return &Lock{
n: n,
c: make(chan struct{}),
}
}
func (l *Lock) claim() {
l.mut.Lock()
l.n++
l.mut.Unlock()
}
// panics if called more times than claim+n
func (l *Lock) release() {
l.mut.Lock()
l.n--
if l.n <= 0 {
close(l.c)
}
l.mut.Unlock()
}
func (l *Lock) wait() <-chan struct{} {
return l.c
}
type Event int
const (
EventRequire = iota
EventChange
)
type Link struct {
t LinkType
testWG *sync.WaitGroup
runWG *sync.WaitGroup
c chan<- Event
done chan struct{}
}
type Layer struct {
runner Runner
matched bool
exists bool
change bool
testWG *sync.WaitGroup
runWG *sync.WaitGroup
c chan Event
done chan struct{}
lock *Lock
}
type Runner interface {
Run()
Skip()
Test() (exists, matched bool)
Links() (links []Link, forTest bool)
}
func NewLayer(lock *Lock, runner Runner) *Layer {
testWG := &sync.WaitGroup{}
testWG.Add(1)
runWG := &sync.WaitGroup{}
runWG.Add(1)
return &Layer{
runner: runner,
testWG: testWG,
runWG: runWG,
c: make(chan Event),
done: make(chan struct{}),
lock: lock,
}
}
type LinkType int
const (
LinkNone LinkType = iota
LinkRequire
LinkContent
LinkVersion
LinkSerial
)
func (l *Layer) Link(t LinkType) Link {
return Link{
t: t,
testWG: l.testWG,
runWG: l.runWG,
c: l.c,
done: l.done,
}
}
func (l *Layer) Wait() {
<-l.done
}
func (l *Layer) Run() {
links, forTest := l.runner.Links()
if forTest {
l.tryAfter(links)
} else {
l.try(links)
}
}
func (l *Layer) send(link Link, ev Event) {
l.lock.claim()
go func() {
select {
case link.c <- ev:
case <-link.done:
l.lock.release()
}
}()
}
func (l *Layer) try(links []Link) {
defer close(l.done)
defer l.runWG.Done()
for _, link := range links {
if link.t == LinkRequire {
link.testWG.Wait()
}
}
l.exists, l.matched = l.runner.Test()
l.testWG.Done()
l.init(links)
l.lock.release()
for {
select {
case ev := <-l.c:
l.trigger(links, ev)
l.lock.release()
case <-l.lock.wait():
if l.change {
for _, link := range links {
if link.t == LinkRequire || link.t == LinkSerial {
link.runWG.Wait()
}
}
l.runner.Run()
} else {
l.runner.Skip()
}
return
}
}
}
// NOTE: EventChange/EventVersion delivery not guaranteed without reverse EventRequire link
func (l *Layer) tryAfter(links []Link) {
defer close(l.done)
defer l.runWG.Done()
for _, link := range links {
if link.t == LinkRequire {
l.send(link, EventRequire)
}
}
l.lock.release()
for _, link := range links {
if link.t == LinkRequire || link.t == LinkSerial {
link.runWG.Wait()
}
}
l.exists, l.matched = l.runner.Test()
l.testWG.Done()
l.init(links)
for {
select {
case ev := <-l.c:
l.trigger(links, ev)
l.lock.release()
case <-l.lock.wait():
if l.change {
l.runner.Run()
}
return
}
}
}
func (l *Layer) trigger(links []Link, ev Event) {
if ev == EventRequire && l.exists ||
ev == EventChange && l.change {
return
}
for _, link := range links {
switch link.t {
case LinkRequire:
l.send(link, EventRequire)
case LinkContent:
l.send(link, EventChange)
}
}
l.exists = true
l.change = true
}
func (l *Layer) init(links []Link) {
if !l.matched {
if l.exists {
panic("invalid state: present but non-matching")
}
for _, link := range links {
switch link.t {
case LinkRequire:
l.send(link, EventRequire)
case LinkContent, LinkVersion:
l.send(link, EventChange)
}
}
l.exists = true
l.change = true
}
}
|
STACK_EDU
|
def parse_scenario(filename):
    """Parses a file with the structure described below, validates the
    contents and returns either a dictionary containing all values required
    to specify a model scenario if the contents are valid, or None if any of
    the contents are invalid."""
    import csv
    # Use a context manager so the file is closed once the rows are read
    with open(filename) as file_object:
        reader = csv.reader(file_object)
        data = list(reader)
    len_of_file = len(data)
    # Initialise a dictionary for the validated values
    result = {}
# Specify the width and height (M) of the square landscape grid
# Check M is a positive integer
if int(data[0][0]) > 0:
m = int(data[0][0])
else:
return None
    # Check each f_grid value is a non-negative integer
    # & each row has a length = M
for row_grid in range(1, m + 1):
for i_f_load in data[row_grid]:
if int(i_f_load) >= 0 and len(data[row_grid]) == m:
pass
else:
return None
# Once checked, put into f_grid and into dict
f_grid = []
for row_grid in range(1, m + 1):
inside_f_grid = []
for i_f_load in data[row_grid]:
inside_f_grid.append(int(i_f_load))
f_grid.append(inside_f_grid)
result['f_grid'] = f_grid
    # Check each h_grid value is a non-negative integer
    # & each row has a length = M
for row_grid in range(m + 1, m * 2 + 1):
for height in data[row_grid]:
if int(height) >= 0 and len(data[row_grid]) == m:
pass
else:
return None
# Once checked, put into h_grid
h_grid = []
for row_grid in range(m + 1, m * 2 + 1):
inside_h_grid = []
for height in data[row_grid]:
inside_h_grid.append(int(height))
h_grid.append(inside_h_grid)
result['h_grid'] = h_grid
count_i_threshold = m * 2 + 1
# Check the ignition threshold is a positive integer, not greater than 8
if 0 < int(data[count_i_threshold][0]) <= 8:
result['i_threshold'] = int(data[count_i_threshold][0])
else:
return None
# Check the wind direction is valid and the wind direction is None if there
# is no wind
count_w_direction = count_i_threshold + 1
wind_directions = ['SW', 'S', 'SE', 'E', 'NE', 'N', 'NW', 'W']
if data[count_w_direction][0] == 'None':
result['w_direction'] = None
elif data[count_w_direction][0].upper() in wind_directions:
result['w_direction'] = data[count_w_direction][0].upper()
else:
return None
# Check the coordinates of the burning cells:
# (a) located on the landscape
    # (b) have non-zero initial fuel load
# (a) By checking that the coordinate is a positive integer and less
# than M. And also the length of a row should be 2
count_burn_seeds = count_w_direction + 1
for row_grid in range(count_burn_seeds, len_of_file):
for cell in data[row_grid]:
if 0 <= int(cell) < m:
pass
else:
return None
# (b) By checking the values in f_grid is bigger than 0
check_burn_seeds = []
for row_grid in range(count_burn_seeds, len_of_file):
check_inside_burn_seeds = []
for f_load in data[row_grid]:
check_inside_burn_seeds.append(int(f_load))
check_burn_seeds.append(tuple(check_inside_burn_seeds))
for f_load in range(len(check_burn_seeds)):
i = check_burn_seeds[f_load][0]
j = check_burn_seeds[f_load][1]
if result['f_grid'][i][j] > 0:
pass
else:
return None
# Once checked put it into the dict
burn_seeds = []
for row_grid in range(count_burn_seeds, len_of_file):
inside_burn_seeds = []
for cell in data[row_grid]:
inside_burn_seeds.append(int(cell))
burn_seeds.append(tuple(inside_burn_seeds))
result['burn_seeds'] = burn_seeds
return result
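# --- Hypothetical usage sketch (not part of the original assignment code) ---
# Builds a small 2x2 scenario file in the layout parse_scenario expects
# (M, then M rows of fuel loads, M rows of heights, the ignition threshold,
# the wind direction, and one burning-cell coordinate per remaining row),
# then parses it back.
if __name__ == "__main__":
    sample = "2\n2,0\n0,3\n1,1\n2,2\n2\nN\n0,0\n"
    with open("sample_scenario.csv", "w") as f:
        f.write(sample)
    print(parse_scenario("sample_scenario.csv"))
    # Expected: {'f_grid': [[2, 0], [0, 3]], 'h_grid': [[1, 1], [2, 2]],
    #            'i_threshold': 2, 'w_direction': 'N', 'burn_seeds': [(0, 0)]}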
|
STACK_EDU
|
- Overall / General: For one thing, the System-Settings area has been filled out a significant amount. Many things are there now that were missing only a few months ago, and what's there seems to work well.
- Notifications: I need to confirm whether it is easy to disable all sound. Going to Player Settings and choosing No Audio Output might give me what I want.
You cannot turn off all the audio notifications in an easy way. I really hate the concept of going through every single program and turning them off. I like my computer to run silently so this is a personal annoyance, but to be honest the previous system under KDE3 was not great, just more usable.
- Startup: Sweet new interface for adding startup programs/scripts. Very nice. Not tested yet myself but this is a cool feature.
Previously I mentioned that KDE4 seemed to only allow binding a single shortcut key sequence in programs, instead of KDE3's well-established two-sequence binding option. I notice that KDE4 may have added, or at least begun adding, options for binding two key sequences again. This is great.
Gwenview 2.xx is pretty much ready for use... for me anyway.
When I tested Gwenview, as of today, 90% of the "shitty" zooming issues (enlarging smaller images) previously noted have been resolved. I don't know exactly where these fixes came from, but they are there. I am not even sure if there are any zoom-related problems left.
I tested enlarging small GIF/PNG/JPG files and all of them look very good. It is difficult to say whether the zoom quality is "at the highest possible" while using VMWare but it sure looks quite acceptable for the time being.
The only unusual thing I noticed was on the enlarging of small PNG files. Gwenview may not have been performing perfect binary interpolation on all the sample test images I tested. Regardless, the enlarging zoom was still of a very acceptable level of quality.
Konqueror seems to have fixed some of its bugs since I last reviewed it, especially a great many of the issues I care about.
- View Configuration: Many of the preferences are now saved, in particular Menu: Settings -> Configure Konqueror -> File Management -> Views.
These were some of the most important for me. There seems to be a bug where the checkbox for enabling "Show Delete in Right Click menu" does not display its check mark, but the feature is still enabled.
- Tabbing: Tabbing still seems very solid, but I cannot remember if this successful preference saving was present in my last review. Regardless, I take this as a good sign of progress.
- View Mode Icon Size: The bugs with inconsistent view-mode icon sizes have been fixed. This time around, I only had one instance where the zooming was off slightly, but after a quick adjustment of the zoom the problem never repeated itself.
- Toolbars: I already mentioned the Toolbars in Konqueror were improved and I am still impressed with them. Nice and usable and still quite configurable. That annoying bug with some element misplacement that occurred with KDE3.5 because of Gwenview integrated image viewing is gone.
The feature for smoother switching profiles within Konsole has been integrated. Very nice.
Yet another sweet looking application I cannot wait to try. This is looking seriously awesome. I have not been able to try this on a mobile device but I will report on it as soon as I do.
This is pretty impressive. I guess it just goes to show that KDE can be even more customizable if you want. I don't even know where to begin with all the options. To be bluntly honest, even just testing this a tiny bit impresses me. I see major potential for smooth and easy eye-candy customization.
There are several things which I don't use but their existence is worth noting.
- Digital Camera integration: I don't use a digital camera so my word is that of a novice but it looks like KDE is providing (or will provide) a pretty good user interface for accessing it. This is a nice thing to have since it promotes inclusion of more users in the future.
- PDA/PIM utilities: KDE 4 looks to have some nice (and presumably working) PDA/PIM integration and software. I don't use a PDA (yet) but doubtless I will when the 2nd generation of the Google Phones come out. I am glad that KDE has these utilities.
When I installed KDE 4.2.x from within Gnome, I ended up with a lot more color themes... and they were good! When I reinstalled KDE 4.2.x from Kubuntu, I was sadly surprised to find it does not ship with some of these themes. I hope that gets fixed because those themes were nice.
There are still some real font issues, but these may be less of a deal breaker than I first experienced. Today in KDE 4.2.2 I set the entire system font size to 7 and then Konqueror and System Settings looked okay. Actually, everything I tested seemed to look just fine after this change. The exception was the Classic-Applications Menu, whose font became quite small but still readable.
I experienced Network Manager problems when using it through VMWare, but I'm not sure how it will perform on an actual system installation. The Knetwork-manager seems to cause a lot of problems for some people, though I have luckily avoided most of these problems through serendipity.
If you are going to use KDE4.x, I would seriously recommend having a different network manager installed and functional. I suggest Gnome's network manager.
Mouse Gestures in Konqueror
Currently I cannot find any way to enable mouse gestures in Konqueror. I conclude that this feature is not available yet in KDE 4.2. This is unfortunate because I use this feature every day. This is not technically a bug, just a missing feature.
The 'media:/' protocol is not present in KDE4 yet. This is known to the KDE team and it is on their list of things to do.
I am pretty set to use KDE 4.2.2 right now, but I will hold off. I don't have any pressing need to switch yet, and the few missing features are not deal killers, but they are important to me.
Previously, the failure of Konqueror saving settings properly and icon size mismatching was the deal killer for me. But now there only are a few missing features left on my list. Good times.
|
OPCFW_CODE
|
Currently, I am a part of a project looking at climate change impacts on the distribution of tree and grass pollens in the US and associations with allergy and asthma related emergency room visits.
As part of that, we are collecting baseline data on symptomatic profiles of patients who are sensitive to tree and grass pollens and are currently undergoing immunotherapy in local clinics.
Our survey is two fold, the first a baseline survey of types of demographics, types of allergies, seasonal sensitivities, general symptoms and lifestyle impacts, the second a three week survey of sleep quality and allergy and asthma related events.
We hope to gather data to see how the ragweed season might impact general health and well being using a coarse raster of predicted pollen distribution.
The survey is being conducted at the University of Michigan Allergy Specialty Clinic and Food Allergy Clinic at Domino’s Farms and will include approximately 50 people.
At least that’s what we hope happens. Yesterday, I had the opportunity to join the Detroit Communities Reducing Energy and Water (use) project, focusing on Parkside, a subsidized housing community in Detroit, MI.
The project aims to help residents make changes to the electrical and plumbing infrastructure of their homes to reduce the energy costs. Residents in poor communities often live in housing that has old, inefficient and sometimes faulty electrical wiring, kitchen appliances and aging or damaged pipes, showers and toilets.
The University of Michigan School of Public Health has a community based participatory research project with the residents of Parkside, the Friends of Parkside, a local advocacy group.
We administered a survey on energy, housing conditions and health to about twenty residents who came to the event. Following the consumption of copious amounts of pizza, the goals of the study were explained to everyone in a group meeting and consent was obtained.
They then moved to another room and took the survey. Many of the residents were elderly, mostly women. All had interesting stories to tell about broken air conditioners, unresponsive maintenance crews, family, friends, kids…. everything you find in these kinds of surveys.
After they were done, they all got some ca$h and were provided with a temperature monitor so that we can better understand what they are experiencing in their homes during these hot summer months. We will then conduct a follow up survey to assess the impact of a home based educational program on energy use and health.
It had been a long time since I was involved in community work and I was grateful to be a part of it. Some people don’t like this kind of work; I really don’t understand what’s not to like about hanging out with survey respondents who feel invested in the project and their communities.
New chapter from myself in a Springer volume: “Access to Health Care in Sub-Saharan Africa: Challenges in a Changing Health Landscape in a Context of Development”
I wrote a chapter for “Health in Ecological Perspectives in the Anthropocene” edited by Watanabe Toru and Watanabe Chiho. I have no idea if they are related. Either way, my chapter “Access to Health Care in Sub-Saharan Africa: Challenges in a Changing Health Landscape in a Context of Development” occupies pages 95-106 in the volume.
Check it out, you can buy the book through Amazon for a cool $109, or just my chapter through the Springer site for $29 or you can simply write me and I’ll give you a synopsis.
Here’s the abstract for the book:
This book focuses on the emerging health issues due to climate change, particularly emphasizing the situation in developing countries. Thanks to recent development in the areas of remote sensing, GIS technology, and downscale modeling of climate, it has now become possible to depict and predict the relationship between environmental factors and health-related event data with a meaningful spatial and temporal scale. The chapters address new aspects of environment-health relationship relevant to this smaller scale analyses, including how considering people’s mobility changes the exposure profile to certain environmental factors, how considering behavioral characteristics is important in predicting diarrhea risks after urban flood, and how small-scale land use patterns will affect the risk of infection by certain parasites, and subtle topography of the land profile. Through the combination of reviews and case studies, the reader would be able to learn how the issues of health and climate/social changes can be addressed using available technology and datasets.
The post-2015 UN agenda has just been put forward, and tremendous efforts have started to develop and establish appropriate indicators to achieve the SDG goals. This book will also serve as a useful guide for creating such an indicator associated with health and planning, in line with the Ecohealth concept, the major tone of this book. With the increasing and pressing needs for adaptation to climate change, as well as societal change, this would be a very timely publication in this trans-disciplinary field.
I have nothing to say, I just want to see if this works
I found this post and wanted to see if it actually works (sometimes the code included in blog posts does not… and indeed, the code in this one did not. I had to make some modifications to get this to work).
Apparently, I can include images, so I’ll include the most popular image on my site:
I can include R code
Which is great, because I do a lot of R work
So here’s some R code. You can see that it is formatted properly:
summary(mtcars)
plot(mtcars$mpg, mtcars$cyl, main="myplot", xlab="mpg", ylab="cyl")
2. I can even include videos (I think), like this horrifying clip from Slithis Survival Kit:
Well, two packages, at least. Having not posted in well… forever… this is a decent move back into the world of blogging (which is far harder in 2018 than it was in, say, 2009.)
I have been working on Shiny based mapping apps recently and found the Zip Radius Package potentially convenient. I even made a map of zip codes and population within 100 miles of 48104.
The fieldRS package provides a convenient way of classifying and mapping remote sensing data, which will be extremely handy when doing the snake project, for example. An open question was how to assess localized risk based on topography and land use. I had no convenient way of assessing this at the time.
While other blog posts will do a much better job of explaining the Data Explorer package in R, it still seemed useful to mention it here.
A huge hurdle to data analysis is data cleaning, and to effectively develop a strategy to efficiently prepare data for analysis, a basic snapshot of the data is helpful.
Enter the Data Explorer package, a set of tools that can provide minimal descriptive information for not much effort at all. With a single command, you can take a raw dataset, and produce a useful report that you can use to start working on your plan of data cleaning attack.
I downloaded a portion of the Social Indicators Survey from Columbia University, and picked a small subset of variables.
Using this small set of code, I produced the report below.
sis_sm <- as.data.frame(with(sis, cbind(sex, race, educ_r, r_age, hispanic, pearn,
Data Profiling Report
The data is 34.8 Kb in size. There are 453 rows and 12 columns (features). Of all 12 columns, 9 are discrete, 3 are continuous, and 0 are all missing. There are 1,245 missing values out of 5,436 data points.
Data Structure (Text)
## 'data.frame': 453 obs. of 12 variables: ## $ sex : Factor w/ 2 levels "1","2": 2 1 2 1 2 2 1 2 2 1 ... ## $ race : Factor w/ 4 levels "1","2","3","4": 3 1 1 2 3 3 3 4 1 4 ... ## $ educ_r : Factor w/ 4 levels "1","2","3","4": 4 4 2 2 2 1 1 4 4 2 ... ## $ r_age : num 40 28 22 24 31 42 36 63 69 24 ... ## $ hispanic: Factor w/ 2 levels "0","1": 2 1 1 1 2 2 2 1 1 1 ... ## $ pearn : num 14400 14400 12000 15000 8000 9600 2400 9600 NA NA ... ## $ assets : num 5000 50000 4000 NA NA 6000 NA 1250 100000 NA ... ## $ poor : Factor w/ 2 levels "0","1": 1 1 1 2 2 2 2 2 2 2 ... ## $ read : Factor w/ 4 levels "1","2","3","4": NA NA NA NA NA NA NA NA NA NA ... ## $ homework: Factor w/ 4 levels "1","2","3","4": NA NA NA NA 4 1 1 NA NA NA ... ## $ black : Factor w/ 2 levels "0","1": 1 1 1 2 1 1 1 1 1 1 ... ## $ police : Factor w/ 2 levels "0","1": 2 2 1 1 2 2 1 NA 2 2 ...
Data Structure (Network Graph)
The following graph shows the distribution of missing values.
Continuous Features (Histogram)
Discrete Features (Bar Chart)
|
OPCFW_CODE
|
Are you a Google Analytics enthusiast?
More SEO Content
Bored With Seo
Posted 08 February 2005 - 08:04 PM
OK I just got a copy of ISEDB weekly, where some SEO had renamed it 'vision based analytics' or some such even more geeky name that means even less to the average man on the street. I mean at least with block link analysis you have half a chance of explaining pages being split into blocks and different weight being given to links from each area or block.
Posted 08 February 2005 - 08:58 PM
That's no good. I still remember Visual Basic for Applications. There can't be another VBA. I'll get confused.
Posted 08 February 2005 - 10:43 PM
You know, I went to a conference consisting mainly of erm....the dark side of the force, and not once did I hear any talk about seo. It was all about business, marketing, relationships and strategy. The whole industry is maturing.
I agree that SEO is dead dull. Hats-off to you guys - I don't know how you manage to answer the same questions over again and stay sane. The interesting stuff, for me anyway, has always been in the marketing strategy. Find the gaps. Build solutions. Make it work. SEO is a tool. It helps us with lead generation. SEO, outside a marketing strategy, is a shot in the dark.
I love it when you see this little light go on when newer people first realise: "Hey, it's not a coding thing! It's a marketing thing!".
Edited by peter_d, 08 February 2005 - 10:52 PM.
Posted 08 February 2005 - 10:55 PM
In fact, just bookmarked it
That is what is fun: strategy. There is more to Marketing and strategy than number one search engines rankings on 10 or even 10,000 phrases. Strategy is about getting from here to there, and how you do it.
Strategy is why great sporting teams (like the Wallabies and NE Patriots) can beat a team full of great players (like the All Blacks ).
The former have stuck by long serving players, and shuffled the fringe players, whereas the latter has chopped and changed so many times it isn't funny.
Strategy is also about having goals. "A number one ranking on Google for widget" isn't a tangible goal. "Increase sales by 30%" is. The two are not the same, and neither do they necessarily correlate.
I prefer dealing with the latter myself.
Posted 08 February 2005 - 11:49 PM
Posted 09 February 2005 - 03:12 AM
Are we just a bunch of challenge seekers and the game isn't fun anymore? Has the SEO landscape changed so much that we aren't as fascinated by it as we once were? Has it become too hard/too easy?
I'd say it's because unless you're one of the handful of people who're associated with already successful folk in the industry, you can't make any money at it. Business folk who need it don't want to know because they've already blown their budget on a web site that's usually a complete joke, and web designers who need educating in it don't want to know because it highlights the inescapable fact that their expensive services are not, in fact, doing their clients any good. Hard to be popular under those circumstances. SEO is something that the web industry would really rather see swept under the carpet. I'm not bored with it, I really enjoy the challenge, but I'm hugely frustrated at having a headful of specialist and very useful information that people don't want to know about.
Posted 09 February 2005 - 07:42 AM
Yes, strategy. And sport metaphors
I know it's off topic, but this reminds me of something Kevin McHale of the Boston Celtics once said before a big game. A reporter asked him what the C's would have to do to beat their rivals, and he looked into the camera with that Herman Munster face of his and said, completely deadpan (I'm paraphrasing)
Now that's strategy.
Posted 09 February 2005 - 10:22 AM
I still believe that SEO is an important tool, because its about making your site the best it can be, for the visitor and by extension the SE. But once you have done that you don't just sit on your laurels, you have to work hard at marketing that site. And that's what keeps it fresh for me.
Posted 09 February 2005 - 06:25 PM
I had high hopes for Vivisimo but it doesn't seem to be too good at actually spidering a site.
Posted 09 February 2005 - 06:48 PM
Posted 09 February 2005 - 07:26 PM
You could be right. I was disappointed that it seemed to have no large index to call on. Not even one large index, let alone several. I gave up on it.
Posted 09 February 2005 - 10:15 PM
However, in true honesty, I must also say that it's still a challenge to rank a client better than where he/she was before we optimized their site. Search engines keep changing and tweaking their algorithms all the time, so we don't have any other choice than to stay closely in touch with the latest technology and the latest SEO / SEM techniques.
There's no question that our profession does have its ups and downs, but in the end, serving and providing the business community with a useful service that will yield them more traffic and better conversion rates does have its share of satisfaction and enjoyment.
Posted 09 February 2005 - 11:45 PM
I think the industry is getting so over-inflated with shamsters and simply do it yourselfers.... (if there is such a word?)
Posted 10 February 2005 - 04:15 AM
A few months after the site first went up, my boss came through and told me that he'd found our company when doing a web search for whatever phrase (in the top 10) but that we didn't come up for this other related phrase and could I make that happen. That started me on some major learning, much of it from Jill, and in time we had 21 out of 25 of our targeted phrases in the top 10 on Google. It was very exciting at the time and I'd look up my phrases every month to make sure they were still there!
About a year ago they started to drop, and I didn't have time to re-optimise so I stopped looking the phrases up as it wasn't fun any more and I'd just started a total re-vamp of the site, making it more usable and easier.
I'm doing no better in the search engines now for those target phrases, but I'm getting bigger enquiries from bigger companies and from all over the world than we ever used to get. We do no advertising or marketing whatever apart from natural search results. We're well aware that this may have to change at any time, but for now my website keeps 19 people fully employed with enquiries and orders.
Rankings are fine, but if people don't like or can't use the site they arrive at, it's a waste of everyone's time.
|
OPCFW_CODE
|
You might work on small projects or large scale developments, you might be the only person involved or you might work with many others, regardless of your situation, having a structured development process is essential.
I work for the University of Leeds as a generalist designer/developer, a recent project of mine has been upgrading our institutional Content Management System to the latest version of the software, a task which had an extremely tight timescale which was met by using a structured development process.
So let’s set the scene: I work within the Central IT Services’ Web Team and am the technical lead for the University of Leeds’ corporate website (www.leeds.ac.uk); this site runs on top of the Jadu Content Management System and was launched in September 2009. We have just performed a major upgrade of the core Jadu CMS which was turned around in a very short space of time, and it’s this upgrade project which is the basis for this post. For those of you wanting to know which apps and tools I use, here’s a quick rundown:
- iMac 21.5″ 3.2GHz Intel Core i3
- MAMP (local Apache/MySQL server)
- GIT (Version Control System)
- Trac (Project Management and Issue Tracking)
Project management tools
There are a whole host of tools available which will allow you to manage your project, both free and paid-for. Within the Web Team, we use the Open Source ‘Trac’ Project, Trac may not be the prettiest piece of software ever written, but it’s extremely flexible through a plug-in architecture and being open source, we’ve moulded it to fit our exact needs as a team. The key features we use are issue tracking, milestones and source code management.
“There are a whole host of tools available which will allow you to manage your project, both free and paid-for.”
As well as using Trac internally within our team we have opened up access to a couple of teams within the institution who work with us on the corporate project. The Central Communications team (Comms), for example, have the ability to assign and manage tickets, see the commit history and timeline and read the wiki, but don’t have access to the source code repositories or source browsing facility through Trac.
local > staging > production (aka develop > debug > deploy)
There are three separate environments which I use to develop for leeds.ac.uk. Firstly, all of the code I write is done on a local server which runs a complete standalone version of the code and database. This allows me to mess around as much as I want without any risk to the live site and database which runs on the production server where the live version of leeds.ac.uk lives.
In between my local development server and the production server is the ’staging’ server, this is an identical environment to the production server (even running on the same physical hardware) and is accessed through a private URL. By running this staging server I can deploy code for the team to test, confident that this is identical to how the site will operate once deployed to the production server.
The deployment is handled through the GIT version control system which is tied to our Trac system where we maintain a central source code repository (even though it’s not strictly needed with GIT, we still maintain a central repo for ‘disaster planning’), code is pushed from Local to Trac, then from Trac to Staging and ultimately from Trac to Live.
How it all tied together
During the upgrade project I worked with our internal project co-ordinator and the Comms team to define a set of milestone dates for the project, such as ‘testing’, ‘pre-launch’, ‘launch’ and ‘post launch’.
We knew the date the site had to launch by was concrete, so worked backwards from there, with the pre-launch milestone set to the start of the week of launch, this contained any issues and tasks that needed to be completed before we could possibly go-live (such as a last minute content-freeze and migration from the currently live site to the upgraded site). Finally, the post-launch milestone allowed me to keep any tasks that weren’t directly related to the launch to be processed afterwards.
“We knew the date the site had to launch by was concrete,
so worked backwards from there”
We had a thorough round of testing in the weeks running up to launch, the task of testing was performed by Comms who were testing against a ‘release candidate’ build on the Staging server. They worked for a week going through every single page on the Release Candidate build and logged tickets for any bugs, whether they be for differences in layout or differences in content. This is a critical step and their eagle eyes spotted a wide range of issues, from missing/out of date content, to the fact that the breadcrumb trail was 2 pixels further to the right than it should be.
These issues were all classified against a simple priority scale:
- Blocker (the site can’t possibly go live until this is fixed)
- Critical (if this isn’t fixed, people may die)
- Major (it’s a big problem, but no-one’s going to die)
- Minor (this needs fixing at some point)
- Trivial (no-one would probably notice except us)
“a ‘critical’ issue may be seen as one which would cause the institution severe confusion or lose it money”
If you were to take into account the business considerations, a ‘critical’ issue may be seen as one which would cause the institution severe confusion or lose it money (such as printing an incorrect emergency phone number). Not all content is created equal, so a content difference on the Press Releases page might be classed as ‘Major’ while a page buried in the bowels of the site with a spelling error might be classed as ‘Minor’ or ‘Trivial’. It’s worth sitting down with the people who are going to be creating most of your tickets and defining what your priorities are, to make sure that not everything is logged as high-priority in an attempt to get it finished quicker.
(Based on this project I may move to remove the Blocker priority, as nearly all critical issues could be considered blockers)
I left the team alone for a week to perform their testing, only stepping in to fix any Blocker/Critical issues which would hamper their testing efforts and being on hand to answer any questions they had. After they were satisfied the entire site had been tested, their access to staging was revoked while I worked through all of the tickets assigned to the ‘Testing’ milestone. This process took 3 days for a couple of hundred issues. Once a reported issue had been resolved, the ticket was re-assigned to the owner for them to recheck the issue and close the ticket if it was fixed, or fail it if it was still present.
Make your tools work for you
I am lucky to work with some extremely talented people, including a colleague who installed and maintains our Trac instance and implements some of the weird and wonderful ideas we have to make it more useful for us. One of the most used tweaks was to use the post-receive hooks of GIT to allow us to modify tickets based on the commit message.
For example, I accept Ticket #356 which requires a code change to resolve. I make the required change to the code and commit this to my local repository with the following command:
git commit -m "Fixes #356 - bug was caused by ... and fixed by ..."
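To make the hook idea concrete, here is a minimal sketch of a Python post-receive script that scans the pushed commits for ticket references. This is my own illustration rather than the Web Team's actual hook (whose code isn't shown here), and the call into Trac is left as a placeholder because it depends on how your Trac instance exposes ticket updates.
#!/usr/bin/env python
# Minimal post-receive sketch: read the "<old> <new> <ref>" lines git passes
# on stdin, list the new commit messages, and collect any "Fixes #NNN" or
# "Refs #NNN" references so a ticket system could be updated.
import re
import subprocess
import sys

TICKET_RE = re.compile(r"(fixes|closes|refs)\s+#(\d+)", re.IGNORECASE)

for line in sys.stdin:
    old, new, ref = line.split()
    if set(old) == {"0"}:
        continue  # newly created ref; skipped here for brevity
    log = subprocess.check_output(
        ["git", "log", "--format=%H %s", "{0}..{1}".format(old, new)]
    ).decode("utf-8")
    for commit in log.splitlines():
        sha, _, message = commit.partition(" ")
        for action, ticket in TICKET_RE.findall(message):
            # Placeholder: the real hook would call Trac's ticket API here
            # to close or comment on ticket #<ticket>.
            print("commit {0}: {1} ticket #{2}".format(sha[:7], action.lower(), ticket))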
I also communicated the progress of the project with the testing team by highlighting which version of the code the staging server was currently running. This linked through to a summary changelog, which gave an outline of what had changed from version to version; clicking through would list precisely what had changed in the code and which tickets were fixed. We projected the Trac roadmap screen in our office with an auto-refresh, which let us watch the testing milestone gradually reach 100% completion as the Comms team signed off the fixed tickets (I’m a sucker for visualising the size and progress of the task). All of this communication regarding the process of the project was passive; it was only available if and when anyone needed to see it.
In conclusion
It doesn’t matter what tools or software you use as long as you’re able to make them fit around the way you work. If a piece of software gets in the way of how you work then you should seriously consider either changing it, or (if possible) changing the way it works to better suit your needs. If you’re working with other people on a project, communication is essential. By keeping everyone involved up to date you can reduce the amount of time needed for meetings and reach that magical milestone: getting sign off.
|
OPCFW_CODE
|
Normal distribution and points
Microsoft system center configuration manager 2007 uses distribution points to store files needed for packages to run on client computers these distribution points function as distribution centers for the files that a package uses, allowing users. Performance based learning and assessment the self-assessment and teacher assessment will count 21 points data can be spread across the normal distribution. A normal distribution is an arrangement of a data set in which most values cluster in the middle of the range and the rest taper off symmetrically toward either end. Normal distribution problems with answers entry to a certain university is determined by a national test the scores on this test are normally distributed with a mean of 500 and a standard deviation of 100. Install configuration manager distribution points to host the content files that you deploy to devices and users create distribution point groups to simplify how you manage distribution points, and how you distribute content to distribution points when you install a new distribution point (by. Normal distribution the normal distribution is the most widely known and used of all distributions because the normal distribution approximates many natural phenomena so well, it has developed into a.
The normal distribution the area under the curve and bounded between two given points on the x-axis is the probability. Display of statistical distribution here are 100 data-points sampled from a normal distribution: this is a sign of a non-normal distribution of the data. Tips for recognizing and transforming non-normal data when data fits a normal distribution an individuals chart shows several data points outside of the. Area from a value (use to compute p from z) value from an area (use to compute z for confidence intervals. Start studying normal distributions learn vocabulary, terms, and more with flashcards, games, and other study tools. The red curve is the standard normal distribution: cumulative distribution function that is, to combine n data points with total precision of n.
Income is one of those data points that follows a power law distribution (ie if x% of the population makes $d per year then 1/n of x% will make nd) and the normal distribution is a poor model. Table entry table entry for z is the area under the standard normal curve to the left of z standard normal probabilities z z00 –34 –33 –32 –31 –30 –29 –28 –27. Finding probabilities for the normal distribution what if we want the probability within 143 standard deviations of the mean. Lesson 5: normal distributions if the points in the q-q plot are essentially in a straight a standard normal distribution has a mean of 0 and a standard.
Use a z-table to find the area between two given points in some normal distribution. We look at some of the basic operations associated with probability distributions the points it assumes you want to for the normal distribution. A guide to how to do calculations involving the standard normal distribution the calculations show the area under the standard normal distribution curve as.
If your statistical sample has a normal distribution (x), then you can use the z-table to find the probability that something will occur within a defined set of parameters.
Chapter 3 – normal distribution density curve: a density curve is an idealized histogram, a mathematical points of the mean, that is, between 85 and 115 3. Numpyrandomnormal¶ numpyrandomnormal (loc=00, scale=10, size=none) ¶ draw random samples from a normal (gaussian) distribution the probability density function of the normal distribution, first derived by de moivre and 200 years later by both gauss and laplace independently , is often called the bell curve because of its. Lesson 2 • the normal distribution 365 a how do the mean weight and the median weight compare b on a copy of the histogram, mark points along the. Distribution plots overview distribution if all the data points fall near the line x contains 100 random numbers generated from a normal distribution with.
Normal distribution calculator finds cumulative normal probabilities and z-scores fast, easy, accurate an online statistical table includes sample problems. 1 exploratory data analysis 13 eda techniques 136 probability distributions 1366 gallery of distributions 13669 lognormal distribution : probability density function. Table 8 percentage points of f distribution: f table 9 values of 2 arcsin standard normal curve areas z 000 001 002 003 004 005 006 007 008 009. We will get a normal distribution if this says that the points of the upshot is that even though the binomial distribution is not exactly normal.
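Since several of the snippets above describe looking up the area under the normal curve between two points in a z-table, a small worked example may help. This is only a sketch, using SciPy (an assumption; any library with a normal CDF would do) and the mean of 500 and standard deviation of 100 from the test-score example quoted above.
# Area under a normal curve between two points -- the quantity a z-table gives.
from scipy.stats import norm

# Probability a test score falls between 400 and 600 (mean 500, sd 100)
p = norm.cdf(600, loc=500, scale=100) - norm.cdf(400, loc=500, scale=100)
print(round(p, 4))  # about 0.6827, i.e. within one standard deviation of the mean

# The equivalent z-score calculation against the standard normal distribution
z_low, z_high = (400 - 500) / 100.0, (600 - 500) / 100.0
print(round(norm.cdf(z_high) - norm.cdf(z_low), 4))  # same result, about 0.6827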
|
OPCFW_CODE
|
Adding custom spell checking to Word is easy. Office applications use lists of properly spelled words stored in simple Unicode text files with a file type of .dic (for dictionary). There is a legally free, GPL licensed medical list of words found at http://www.e-medtools.com/openmedspel.html. It isn’t in Unicode, so we’ve made a Unicode version of this custom dictionary file and packaged it in a zip file which you can download by scrolling to the bottom of this post. Under the terms of GPL licensing, you may download, modify, and redistribute this file only as a free product. GPL means once free, always free and this includes derivative works and any other kinds of enhancements to the original.
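If you would rather do the Unicode conversion yourself instead of downloading our pre-made file, a few lines of Python are enough. This is only a sketch: the input filename and its source encoding are assumptions (adjust them to match the file you actually download from e-medtools.com), and the output is written as UTF-16 text to match the simple Unicode text file format described above.
# Hedged sketch: convert a plain-text word list (one word per line) into a
# Unicode .dic file. SRC and its encoding are assumptions -- change them to
# match the downloaded OpenMedSpel file.
SRC = "openmedspel.txt"                 # hypothetical name of the downloaded list
DST = "en_US_OpenMedSpel100.dic"

with open(SRC, "r", encoding="latin-1") as src:
    words = [line.strip() for line in src if line.strip()]

# Python's "utf-16" codec writes a byte-order mark at the start of the file.
with open(DST, "w", encoding="utf-16", newline="\r\n") as dst:
    dst.write("\n".join(sorted(words)) + "\n")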
This is a simple process of downloading a zip file, extracting it to a specific location, and adding it to Word. If you understand the process, it will take about one or two minutes. It is like many other IT tasks where reading the instructions takes more time than actually doing the task.
You can add custom dictionaries for engineering, foreign words, or whatever you need. Adding a custom dictionary affects all Microsoft Office applications, not just Word.
Windows 7, Internet Explorer 9 and Microsoft Office 2010 were used for the screen captures shown below. These instructions work equally well for Office 2013. Microsoft’s official reference for adding custom dictionaries is found here.
Instructions for Word 2011 on a Mac operating system are found here.
Step 0. Open Word and find the location of your custom dictionaries.
Geek alert: If you are a geek, you can skip this step. The location you need is %appdata%\microsoft\uproof
Since I’m using Office 2010, I select File and then Options. If you have a different version of Word, see the Microsoft KB article.
Figure 1. Word 2010 Options.
Go to Proofing and click the Custom Dictionaries button.
Figure 2. Word 2010 Custom Dictionaries button.
Figure 3. Word 2010 Custom Dictionaries location shown to the right of File path.
Copy the location of your custom dictionaries. On my computer, it is C:\Users\John\AppData\Roaming\Microsoft\UProof as you can see in Figure 3.
Step 1. Download the zip file containing the custom dictionary.
Scroll to the bottom of this post and click the OpenMedSpel100.zip link.
Click Save as and save the zip file to your desktop.
Figure 4. Save zip file dialog box.
Figure 5. Save zip file to desktop.
Step 2. Extract the zip file to the Uproof folder.
Right-click the OpenMedSpel100 zip file and select Extract All.
Figure 6. Extract the zip file.
Specify the location of the Uproof folder found in Step 0. On my machine, this is C:\Users\John\AppData\Roaming\Microsoft\UProof. Note: If you
Figure 7. Specify the location of the Uproof folder found in step 0 and click the Extract button.
Step 3. Open Word.
Go back to Word’s Custom Dictionaries dialog box shown in Figure 3. Click the Add button on the Custom Dictionaries dialog box.
Note: If you see CUSTOM and en_US_OpenMedSpel100 instead of CUSTOM.DIC and en_US_OpenMedSpel100.dic as shown below, it is because you have Windows Explorer configured to hide extensions for known file types. It might be easier to complete these steps if you show file extensions.
Figure 8. Select the en_US_OpenMedSpel100 file and click the Open button.
Figure 9. Click OK and you’re done!
Step 4. Open Word or any other Office application to confirm your change.
Enjoy working with your Office applications without the frustration caused by spell checking false positives.
Figure 10. Microsoft Word before (left) and after (right) adding the custom dictionary.
|
OPCFW_CODE
|
Cache with automatically expiring items
Is there a Cache implementation available in Java, Guava, or another library that can do the following:
Key, Value Cache with automatically expiring items. Modifying the value restarts the expiration timer. Each key/value pair expires separately (has its own timer).
Items can be manually added, for example, cache.put(key, value);
I have seen the Guava LoadingCache but that implementation requires you to implement the load(key) method. The load(key) method is intended to compute a value based on the key by using a database or other resource. Once that value is computed by the load(key) method I believe the LoadingCache sticks the resulting (key, value) pair in the cache.
My implementation requirements differ from the LoadingCache because my keys will remain fixed, but the corresponding values will be slowly updated as I scrape my database. In other words, I don't want to load the entire value at once like the LoadingCache does in its load(key) method - I want to leave the key the same and incrementally update the value Object depending on what I get from the database. So it would appear that this precludes using the LoadingCache since the load(key) method forces you to load the key's corresponding value all at once.
The reason I want to incrementally load the value (for each key) is because it's going to take a long time and I am using AJAX polling to keep the user updated. Therefore loading it all at once is useless. I want to cache these values so I can easily retrieve them with AJAX. I want them to expire because once the user is done visiting the webpage, they are useless.
It's not clear to me why you wouldn't use the Guava LoadingCache and not load the key's corresponding value all at once in the load method, instead having your load method pass the task of the ongoing loading to some other ExecutorService.
@LouisWasserman Ok so then what? In my ExecutorService which would be a thread, what do I call on the LoadingCache to update the key? If I just put the EmptyObject in as the value when I call load(), then I call ExecutorService(EmptyObject), and I update the fields on EmptyObject, will the cache automatically "reset" it's expiration time for that key/value pair?
No, though you could make it do so by calling cache.put after you're done loading, which would trigger a new write as far as expireAfterWrite is concerned.
@LouisWasserman I see. So couldn't I also ignore load() altogether (implement it to return null) and just always use get() and put()? It seems like that would use the timer features of the cache but ignore the load features that I don't want...and get() says it can call load() but why would it, if I already put() stuff there beforehand. Also, if it returns null when I haven't put() yet, that's fine...
It might be simpler to do it with a CacheLoader. You can't return null from the CacheLoader, but you could just not provide a CacheLoader, and use the Cache interface instead of LoadingCache. Also, adding the entry to the Cache with a CacheLoader makes sure you won't accidentally recompute the value more than once.
@LouisWasserman Please confirm: I think what you're saying is that in the load() method I can start the ExecutorService, pass in an Object, let's say SlowlyFilledObj. Then I should return SlowlyFilledObj in the load() method. In the ExecutorService itself, I should update the cache value for SlowlyFilledObj by using put(key, SlowlyFilledObj).
This all seems workable but it's unintuitive that I need to call "get" to cause the cache to start loading for a particular key. You'd think calling "put" would do that. Also feel free to post as answer and I'll mark it.
You can't add things that automatically trigger with put; I'm saying that you have to a) add something to the cache as soon as you want to start loading, so you won't have conflicts; b) that it's easier to trigger the full load automatically in a CacheLoader rather than having to do extra fancy stuff in the cache users.
Following @Louis Wasserman's advice I implemented the following code:
LoadingCache<String, WorkerItem> cache = CacheBuilder.newBuilder()
.concurrencyLevel(level)
.maximumSize(size)
.expireAfterWrite(seconds, TimeUnit.SECONDS)
.removalListener(this)
.build(
new CacheLoader<String, WorkerItem>() {
public WorkerItem load(String key) throws Exception {
WorkerItem workerItem = new MoreSpecificWorkerItem();
workerItem.setTask(key);
Controller.beginWorking(workerItem); //Runs in thread pool
return workerItem;
}
}
);
When the user calls get() on the Cache it will immediately return the WorkerItem and will be populating it in the background. As long as you implement WorkerItem with an isFinished() method or similar, it will be possible to know when it's ready for use.
And I implemented a cleanup service since the cache does not periodically remove expired items. Expired items are simply marked dirty and are removed the next time you access them or the next time cleanup is called.
final Runnable doCleanup = new Runnable() {
public void run() {
LoadingCache<K, V> cache = getCache();
cache.cleanUp();
}
};
ScheduledFuture<?> cleanupHandle = scheduler.scheduleAtFixedRate(doCleanup, 1, 1, TimeUnit.MINUTES);
return cleanupHandle;
JCS supports idle time expiration and manual adds
Looks fairly complex. By default it uses a cache config file, according to the documentation. I can't justify using something like that to my tech lead without good reason. Also, I only need one cache or "region" in JCS terminology. And I only have one server, not distributed.
|
STACK_EXCHANGE
|
The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and connectivity which enables these things to connect, collect and exchange data, creating opportunities for more direct integration of the physical world into computer-based systems, resulting in efficiency improvements, economic benefits, and reduced human exertions.(that's IOT from wiki)
A simple beginners IOT based switching action is done in this project.
Things we need in this project is:
1. NodeMCU (ESP8266) board.
2. 1 LED with a 200-ohm resistor.
3. Arduino IDE and Blynk library
4. Blynk App on your phone.
NodeMCU is an open source IoT platform. It includes firmware which runs on the ESP8266 Wi-Fi SoC from Espressif Systems, and hardware which is based on the ESP-12 module. The term "NodeMCU" by default refers to the firmware rather than the development kits. (source: https://en.wikipedia.org/wiki/NodeMCU)
First try to install NodeMcu board!!
1. Go to: tools\board\board manager\esp8266 (search online)
else if not found
2. Go to File\Preferences, then look for Additional Boards Manager URLs. Paste "http://arduino.esp8266.com/stable/package_esp8266com_index.json" and repeat step 1 to install the esp8266 board from there.
Search for blynk Library or simply download it from:https://github.com/blynkkk/blynk-library
After downloading it:
1. Decompress and paste the folder into the Arduino libraries folder in Documents (e.g. C:\Users\*****\Documents\Arduino\libraries).
2. Start arduino IDE and you can see custom example of BLYNK library at file\examples.
3. Connect the NodeMCU to the PC. Choose the correct board, in my case NodeMCU 1.0 (ESP-12E Module). Select the correct COM port (the one which appears after connecting the NodeMCU).
4. In arduino IDE go to:files\examples\blynk\boards_wifi\esp8266_standalone.
5. 3 things to edit here a-"YourAuthToken", b-"YourNetworkName", c-"YourPassword"
'b' is your wifi network name (it must be connected to the internet) and 'c' is the password set for that wifi network. "YourAuthToken" will be discussed later, after using the Blynk app. After getting the AuthToken, replace it and burn the code to the NodeMCU.
6. Press reset. Watch connection Status on serial monitor.
Jobs with the blynk App:
1. Download blynk App
2. Open the app and log in with FB or another method.
3. Click on New Project. Name the project (e.g. Iotswitch_gg). Choose Device: NodeMCU. Then create.
4. The AuthToken is sent to the email you registered with. Copy the AuthToken and replace it in the code for the NodeMCU. You can also find the AuthToken in the project settings.
5. Follow the steps in the app.
Hope you get it right!!!
KEEP VISITING MY BLOG.
|
OPCFW_CODE
|
Tags: authentication-directory nss c posix linux system-administrators information-technology
1.0 21 Nov 2021 13:15 major bugfix: ## 1.0 - 2021-11-21 ### Added - mysql 8 compatibility ### Fixed - segfault when cannot connect to the database - related to unix_socket value - segfault when a config file is unreadable - happened everywhere listing and also find by name - do not use uninitialized value - unix_socket config field ### Changed - logger buffer value increased from 4k to 10k bytes - do not log when root config is not readable - add newlines to some log lines ### Removed - remove config_db_settings binary
0.94 18 Sep 2021 20:01 minor bugfix: Memory leak in config load thx bassbot123
0.92 05 Oct 2020 10:25 minor feature: Allow running build script from outside script dir . Add support for whitespace in path in build script. . erroneous paths in README.md to ./scripts/. . Release v0.91. . #8 Set client character encoding to UTF-8.
0.91 10 Jul 2020 03:15 minor feature: Add logo . Make logo bigger. . Use tag for image to set dimensions. . Add changelog. . Set env as noninteractive for *debian. . Add link to freshcode page. . centos build. . Handle CentOS. . Travis try Focal Fossa as host. . Update docker package name. . Update priorities. . get docker tests running. . Refresh connection before next query. . Run integration tests on Travis + update Readme. . Travis config warnings. . Remove links from docker-compose file. . Remove mariadb-server from Dockerfiles, keep client only. . Allow to set source config file via env variable. . Rename PRODUCTION variable to RELEASE to follow cmake terms. . Remove superfluous commands in Dockerfiles. . Add support for unix socket. . Check sizes of uid_t and gid_t. . Separate privileges for pwd+grp and sp queries. . Allow custom mariadb/mysql configuration. . Update changelog. . Rewind queries when buffer is too short. . Read root config file only for shadow queries. . Release v0.91.
0.9 12 Feb 2020 20:40 major feature: Initial beta release
Submitted by Ivan Stana
|
OPCFW_CODE
|
This post is a part of the #RPGaDAY series for 2017 by David F. Chapman and RPGBrigade. For more information, see this post at AUTOCRATIK. I'm modifying per suggestions from S. John Ross as well as applying my own interpretations. Comment with your answers or links to your own posts!
Day 13 - Describe a game experience that changed how you play.
This is one of those times where I kinda wished I'd read ahead and done some planning. The first thing to come to mind was actually the story I related for Day 7, though I suppose that didn't necessarily change the way I play directly; it was more a memorable early experience in a long life of gaming. I like to think that I could take away a little something from every game, that the evolution of my playing style is just that -- an evolution -- and it took place as these things do in small increments over a long period of time.
This may be a cheat, but if there have been any quantum leaps in how I play, I like to think one of them happened the first time I read the rules to Apocalypse World. (And I'm also including GMing in this definition of "play.") A lot of gamers talk about how AW "just" codified a lot of techniques that they'd already been using, and there's certainly something to that, but even so, reading that ruleset and seeing all those things laid out was incredibly eye-opening for me. Failing forward, using conflict resolution rolls for entire scenes and not just as a turn-by-turn mechanic, the incredibly abstracted combat and damage system, the whole "play to find out what happens" principle... These are all things that spoke to me right through my little gamer heart. It's strange to even think about it at this point, but before that, I think that I had considered the story that emerges from a game to be something that was wrapped around the rules. The rules were there for you to play a game; a story was what emerged while you were playing it. AW really crystallized for me the idea that the rules can be used for the story, that generating the fiction was the game. Reading that game was me finding something I didn't even know I wanted. I often say that AW started a genuine revolution in game design, but for me personally, it also started a revolution in how I think about RPGs and, yes, how I run and play them.
The reason I call this a possible cheat is because I read the rules to AW long before I had a chance to play it. (I first became aware of the game via this newfangled entertainment form called "podcasts." Speaking of personal revolutions.) So, I imagine that the intent of this question was to ask about a gaming experience, but the answer I came up with was in its most literal form, a "game experience" -- not just an experience I had with a game, but the way I actually experienced a game. Only in this case, I'm talking about game in the sense of a book, a ruleset, and not a game in the sense of a session. (This hobby has a lot of ambiguous terminology, it turns out.) But whatever the case, I'm not losing sleep over it. AW opened me up to an entire new philosophy in both gaming and game design. I truly believe that it changed the face of the hobby itself, and I suppose that statement is legitimately up for debate. But I can state with certainty that it massively changed the hobby for me, and so, sure, I bet that changed the way I play.
|
OPCFW_CODE
|
Certbot needs to be able to find the correct virtual host in your Apache configuration for it to automatically configure SSL. . What is the difference between apache2. You will also need to have the Apache web server installed.
(Recommended) We will modify the unencrypted Virtual Host file to automatically redirect requests to the encrypted Virtual Host. In order to download the software using apt, you will need to add the backports repository to your sources. To use this plugin, type the following: This runs certbot with the --apache plugin, using -dto specify the names for which you’d like the certificate to be valid. When we are finished, we should have a secure SSL configuration. Fortunately, when installed on Debian 10, ufwcomes loaded with app profiles which you can use to tweak your firewall settings We can see the available profiles by typing: You should see a list like this, with the following four profiles near the bottom of the output: You can see the current setting by typing: If you allowed only regular HTTP traffic earlier, your output might look like this: To additionally let in HTTPS traffic, allow the “WWW Full” profile and then delete the redundant “WWW” profile allowance: Your status should look like this now: With your firewall configured to allow HTTPS traffic, you can move on to the next step where we’ll go over how to enable a few modules and configuration files to allow SSL to function properly. 509 cert, so we are using this subcommand.
Debian-based systems have two convenient scripts, a2ensite, meaning “Apache 2 enable site”, and its counterpart, a2dissite, for disabling a site. We have created our key and certificate files under the /etc/ssldirectory. An A record with www. The certbot package we installed takes care of this for us by adding a renew script to /etc/cron. When you have completed these prerequisites, continue below. The first one merely creates the symbolic link as above, the second one removes it. d/apache2 restart HSTS Preloading.
Open your web browser and type by your server’s domain name or IP into the address bar: Because the certificate you created isn’t signed by one of your browser’s trusted certificate authorities, you will likely see a scary looking warning like the one below: This is expected and normal. 4 and newer, and is only for backwards compatibility in configuration files. This tutorial shows how you can set up nginx as a reverse proxy in front of an Apache2 web server on Ubuntu 16. 8-dev rubygems $ sudo a2enmod ssl $ sudo a2enmod headers RHEL/CentOS (needs the Puppet Labs repository enabled, or the EPEL repository): $ sudo yum install httpd httpd-devel mod_ssl ruby-devel rubygems gcc Install Rack/Passenger. Default value: &39;none&39; default_vhost.
Before we go over that, let’s take a look at what is happening in the command we are issuing: 1. This extension allows the browser to send the hostname of the web server during the establishment of the SSL connection, much earlier than the HTTP request itself, which was previously used to identify the requested virtual host among those hosted on the. This script runs twice a day and will automatically renew any certificate that’s within thirty days of expiration. How can reverse proxy propagate X509 client certificate data? · sudo a2enmod sslsudo a2enmod headers.
0/24 - What IPs & bitmasked subnets to adjust requests for RPAF_Header X-Forwarded-For - The header to use for the real IP address. Be sure that you have a virtual host file set up for your domain. I'm a non-technical-but-able-to-read-the-manual website owner. I would like to disable TLS 1. conf of your Apache Web Server. 509 certificate signing request (CSR) management. Click ADVANCED and then the link provided to proceed to your host anyway: You should be taken to your site. This article shows how a reverse proxy can propagate X509 client certificate data to a backend server.
· By default, Apache is configured to run with nobody or daemon. 1 * Cipher selection: ALL:! First, make sure that mod_wsgi is installed on your server. If that’s successful, certbotwill ask how you’d like to configure your HTTPS settings: Se. This is essential when Apache is used as a reverse proxy (or gateway) to avoid by-passing the reverse proxy because of HTTP redirects on the backend servers which stay behind the reverse proxy.
Run '/etc/init. This directive lets Apache adjust the URL in the Location, Content-Location and URI headers on HTTP redirect responses. conf file, to read in the values you've set: At this point, the site and the necessary modules are enabled. How to renew an Apache SSL certificate?
We want to create a new X. Run Apache as a separate User and Group. Sets the MIME content-type sent if the server cannot otherwise determine an appropriate content-type.
Debian 9 'Stretch' server, Debian 8 'Jessie' server, Debian 7 server. Let's Encrypt certificates are only valid for ninety days. 10 on a Debian 9. This can be one of the following values: add.
nginx is known for its stability, rich feature set, simple configuration, and low resource consumption. 4 version, the name of the module should be mod24_ssl. This directive can replace, merge, change or remove HTTP request headers. Use the Certbot tool with the webroot plugin to obtain the SSL certificate files :.
conf is a user-configuration file. If you have further questions about using Certbot, their documentation is a good place to start. We will modify the included SSL Apache Virtual Host file to point to our generated SSL certificates. We will make a few adjustments to our configuration: 1. Check your configuration for syntax errors: If this command doesn't report any syntax errors, restart Apache: This will make the redirect permanent, and your site will only serve traffic over HTTPS. You have configured your Apache server to use strong encryption for client connections.
In Debian, you can set it in /etc/apache2/conf. 4 most certainly does allow authentication directives in containers. Now we just need to modify our Apache configuration to take advantage of these. You can learn how to set up such a user account by following our Initial Server Setup with Debian 10.
In this tutorial, you will use Certbot to obtain a free SSL certificate for Apache on Debian 10 and set up your certificate to renew automatically. load file, an associated. The ssl provider denies access if a connection is not encrypted with SSL. Do this by typing: If everything is successful, you will get a result that looks like this: As long as your output has Syntax OKin it, then your configuration file has no syntax errors and you can safely restart Apache to implement the changes: With that, your self-signed SSL certificate is all set. · Report forwarded to org, Debian Apache Maintainers org>: Bug775129; Package apache2. The action it performs is determined by the first argument. The most classical reverse proxies utilizations are: The reverse proxy reads the initial request, then it initiates a similar ( but new) request to the internal Web applications.
ca-bundle files and in the folder as specified but no matter what i keep getting these errors Sat Jul 27 06:35:00 error. · To disable compression in Apache, typically you just need to disable the module mod_deflate. 27 (Debian) < Connection: Upgrade with curl -vso and still the the * ALPN, offering h2 * ALPN, offering http/1.
If the URL included a query string (e. This tutorial will use /etc/apache2/sites-available/your_domain. Cc: org, org; Subject: testing and review requested for Wheezy update of apache2; From: Antoine Beaupré org> Date: Tue, 11:59:17 -0500; Message-id: < 87fukh7hcq. – Peter Mortensen Dec 27 '16 at 13:01.
0 Etch pt:buster:internet:http:apache Table of Contents. · 5. 10 – DrBeco Jul 21 '15 at 17:29 2 If you have conf-available/ and conf-enabled/, create a file in conf-available/ and use the command a2enconf to enable it. The problem with IIS/Apache is that the proxy request actually sets up a separate HTTPS session between Apache and IIS using the Apache server certificate as the basis for the SSL tunnel. Enabling the module puts the configuration directives in the. The header is modified just before the content handler is run, allowing incoming headers to be modified.
It can be used to decrypt the content signed by the associated SSL key. You're now ready to test your SSL server. Enable the SSL configuration files: sudo a2enconf letsencrypt; sudo a2enconf ssl-params. 0 Wheezy, Debian 6 server. Then restart apache: service apache2 restart: The SSL key file should only be readable by root; the certificate file may be globally readable. SSLCertificateFile directives in '/etc/apache2/sites-available/default-ssl. I have read the Apache documentation for the SSLProtocol directive.
Add permanent to that line, which changes the redirect from a 302 temporary redirect to a 301 permanent redirect: Save and close the file. apache2 Apache HTTP Server apache2-bin Apache HTTP Server (modules and other binary files) apache2-data Apache HTTP Server (common files) apache2-dbg Apache debugging symbols apache2-dev Apache HTTP Server (development headers) apache2-doc Apache HTTP Server (on-site documentation) apache2-ssl-dev Apache HTTP Server (mod_ssl development headers). A fully registered domain name. Enable the HTTP/2 module, which will make your sites faster and more robust: sudo a2enmod http2. In. · How to install and secure the Apache web server on the Debian 10 Linux operating system.
Apparently, apache2. The first step to using Let's Encrypt to obtain an SSL certificate is to install the Certbot software on your server. "X.509" is a public key infrastructure standard that SSL and TLS adhere to for their key and certificate management.
mod_ssl provides a few authentication providers for use with mod_authz_core's Require directive. -x509: This further modifies the previous.
This tutorial will use a separate Apache virtual host file instead of the default configuration file. 0 "Squeeze", Debian 5 server. For security reasons it is recommended to run Apache in its own non. Enable mod_ssl, the Apache SSL module, and mod_headers, which is needed by some of the settings in our SSL snippet, with the a2enmod command: Next, enable your SSL Virtual Host with the a2ensite command: You will also need to enable your ssl-params.
Modify User & Group Directive in httpd. The SSL certificate is publicly shared with anyone requesting the content. As of this writing, Certbot is not available from the Debian software repositories by default. Moreover, this is the only secure way to implement authentication, as containers can be accessed in different ways, allowing your authentication to be circumvented if you're not careful. Open your server block configuration file again: Find the Redirect line we added earlier. This issue is known as the CRIME attack.
|
OPCFW_CODE
|
Once upon a time I worked for Dstl, the Defence Science and Technology Laboratory (and prior to that DERA), which is the R&D arm of the UK Ministry of Defence. As such I see a value for science input into the protection and enhancement of our armed forces. While I can’t tell you exactly what I worked on, my optics background may give you an idea. What I did, was in my opinion, important and necessary, but didn’t involve the development of weapons in any way. I’m not sure how I would have felt about working in an area more directly related to hurting people.
One of the things I was allowed to do was to look through a wide range of classified research work, as well as go on various training courses, related to optical techniques and applications. One of the things that I got vaguely interested in was laser weapons (and note I never worked on these), primarily as they often came up in discussions to do with adaptive optics. There are many problems associated with developing laser weapons – stability of sources on deployment platforms (such as moving vehicles), power scaling and then beam stability on propagation through the atmosphere. These are non-trivial problems and a lot of research effort has gone into solving them, primarily by the US military and contractors.
Such a system, developed by Northrop Grumman, was demonstrated by the US Office of Naval Research this week, destroying a boat engine in sea trials. It doesn't look that impressive, but the technical feat in getting it to work is.
What most interested me about this though is what I learned about laser (or directed energy) weapons during my Dstl reading days, and to some extent before that during my PhD. What I found most interesting is that directed energy weapons which are designed to blind someone are forbidden by the Geneva Convention. However, one is allowed to design systems which can lead to the indirect blinding of someone. So for example you can build a system to destroy binoculars, and if indirectly that leads to the blinding of the observer then that is probably OK. This has always struck me as being wrong, but it also strikes me as odd, as guns to kill, maim etc. are allowed by the Convention.
The mindset of the military is also interesting here. As a PhD student I worked on a topic called electromagnetically induced transparency (EIT), in which a non-transparent medium can be made transparent by application of a laser beam. This is a cool and counter-intuitive effect using quantum coherence, but the military (apparently) inquired of my supervisor before I started if this could be used to obviate the use of laser goggles. So if I developed a laser weapon to blind someone, I could probably be thwarted by my opponent using laser goggles to block the laser. But could you use EIT to make the goggles transparent to the original light? Thoughts like this make me a bit scared of what the military considers acceptable, particularly as this would break the Geneva Convention. But even in fairly conservative areas like laser physics we still, sometimes, have to think about the ethics of what we are doing and what we are developing.
So for any laser scientists out there, defence work holds many opportunities, but you do need to think about the directions in which your research will lead. It’s not always for the good.
|
OPCFW_CODE
|
Posted by Michael Posner on February 15th, 2014
Engineering departments can no longer afford the luxury of all team members being located in the same office. The term global localization is used to describe how a team is split up over multiple geographies but must function as one unified entity. This is true not only for the personnel but also for the tools they use, including FPGA-based prototyping hardware. While every software engineer in the world wishes for a high-performance prototype directly on their desk, this is typically not possible logistically or financially. FPGA-based prototyping systems must therefore support remote access to ensure they can be used from anywhere around the world.
The HAPS series of FPGA-based prototyping systems support remote access via the HAPS Universal Multi-Resource Bus, or HAPS UMRBus for short.
The HAPS UMRBus enables users to remotely access the HAPS system, configure it, monitor it and basically love it from a distance. The HAPS UMRBus enables much more than just remote access! It enables data streaming to and from the system for test case stimuli or debug, advanced use modes such as Hybrid Prototyping and transaction-based validation, and provides a generic API for user capability extensions. The HAPS UMRBus is able to deliver these additional capabilities because it's a very high bandwidth, low latency connection from a host machine to the HAPS system.
The HAPS-70 series offers this high performance HAPS UMRBus and an integrated HAPS UMRBus over a lower performance USB 2.0 standard interface. The recommendation is that if you only need remote connectivity for configuration and monitoring, use the HAPS UMRBus over the USB 2.0 interface. If you need high performance and low latency for Hybrid Prototyping and the other advanced capabilities, utilize the high performance HAPS UMRBus. Great, right………………… Enter global localization…..
Our customers love that HAPS systems can be remotely accessed, as it enables them to utilize the systems 24/7, 365 days a year (HAPS don't even get Christmas off). However, they like to lock them up along with their server hardware or in a data center. Some customers have dedicated hosts serving the HAPS, which enables them to utilize the high performance, low latency HAPS UMRBus and all the advanced capabilities. However, others just want to utilize the remote access via the HAPS UMRBus over USB 2.0, and while they have thousands upon thousands of Ethernet drops available, they rarely have a host which they can plug the USB 2.0 cable into. So what are these users to do?
Enter the Raspberry Pi (see the blog title was not a typo but I bet the engineers already knew that)
To enable our customers to plug the HAPS system directly into an Ethernet hub one of our engineers came up with the great idea to utilize the off-the-shelf Raspberry Pi.
How it works: You buy a Raspberry Pi, USB cable, power supply and SD card; this is going to set you back around $50 (yep, not a typo, $50 and that's usually a top of the range one). You then contact Synopsys HAPS support and we will provide you with a boot image to load on the SD card. The boot image is a standard Raspberry Pi OS with the HAPS remote access utilities, called HAPS Confpro, pre-installed. Next, connect the USB cable between the Raspberry Pi and the HAPS-70 (or HAPS-DX) system. Finally, connect the Raspberry Pi's Ethernet connection into the Ethernet hub/switch and power it up. We recommend assigning a defined IP address to the Raspberry Pi so the HAPS system it's connected to can be easily recognized. That's it, you are ready to access the HAPS system remotely. I personally love this solution as it not only solves the problem but also lends itself to further capability expansion in the future. More on the expansion capabilities in a future blog….
What do you use the Raspberry Pi for?
|
OPCFW_CODE
|
Probably the most detailed analysis of homework so far comes from a 2006 meta-analysis by Duke University psychology professor Harris Cooper, who found evidence of a positive correlation between homework and student achievement, meaning students who did homework performed better in school.
I should sign-up but I don’t know my OEN. Could you give me my OEN? No, but we can easily help you discover it!
Tutors have the chance to mute a college student instantly whenever they act within an inappropriate way. In Severe situations a tutor can evict a student and prohibit him or her from using the chat rooms yet again. Conduct guidelines are detailed in Principles of Carry out.
Developers make use of application programming interfaces (APIs) to control the computer and the operating system. When the operating system encounters these API calls, it takes the desired action, so the developer does not need to understand the details of controlling the hardware.
…the good news is the fact that as A brief university student You need to use the many resources and watch questions in Check with A Tutor. We are able to help you figure out why you ended up as a Temporary Scholar:
I'd some serious difficulties right after running it within the System Layer only and needed to toss away that Variation in the System Layer finally. I’m assuming it did some responsibilities which must have been done previously on whilst developing the OS Layer.
When we use the code, read it as "of type": the declaration above reads as "collection of strings c". The code using generics is safer and clearer. We do away with the extra parentheses and the unsafe cast. The compiler confirms at compile time that the type constraints are not violated at run time.
Our homework team supplies the function A lot ahead of the date of submission. Now you are able to do proofreading immediately after submission.
Plagiarism free: All our work is checked by plagiarism-checking software like Turnitin to ensure you get a non-plagiarised assignment. All our work is original and unique.
operating, then it is the obligation on the operating system making sure that it will not likely make any difficulties in the computer system and that each on the features are operating accurately.
Obvious your cache, and help liberate your Pc to connect with the HH site. To obvious the cache within your browser, Stick to the methods down below.
yup …surly u can…. nevertheless it Value u cash and should be installation approach goes for hourly foundation on the Internet speed…an alternate course of action is to acquire CD of Linux …. in my opinion go the absolutely free way
When you actually need to patch all Home windows files in this type of case this gets very a problem. You usually should patch the OS levels very first also to guarantee you don’t overlook anything at all there. Then afterwards you may patch the remaining files inside the System layer by functioning Windows Update there once again.
For protection, a user cannot use the input-output devices directly. Our operating system mainly performs the read and write operations on any of the files.
|
OPCFW_CODE
|
We just pushed an update to how Bubble handles application languages, which makes handling more than one language much easier.
We added a few things:
You can define a field on the user type that contains the language to apply to the user in the Settings Tab -> Languages. The field should be of type ‘text’ and should contain a value that is part of the list in the languages dropdown (english, french, greek, etc.).
You can access the “current language” in the dropdown, as one of the App Data (like Website home, admin email, etc.). You can use this information in the conditional tab to change the text of some elements, for instance
Elements that are localization sensitive will get adjusted automatically (calendar, date input, map, autocomplete for addresses, etc.). A refresh might be needed as some texts are added server-side.
To change the current user’s language, you just need to modify the value of the field. Again, note that you’ll need to refresh the page for most stuff, as the choice of language is done server-side for performance.
You can overwrite the language in the app URL if you want it to be in a given language. The way you'd do it is, for instance, by adding a lang parameter to the URL.
The Current Language is determined by the following order of priority:
a) if a “lang” parameter in the URL is set, use this
b) if the current user has a value in the relevant field, use this
c) Use the app primary language
Be careful about the value you save (or that you put in the URL): it has to match one of the languages in the dropdown. Something like 'American english' won't be applied as it's not 'english', and so we'll use the next item in the priority list above.
And if some languages are missing, as usual, reach out, we’d love to have more!
Awesome! I’ll start testing right away!
Thanks for this new feature, but multi language with same date format is not pleasant.
Please add “date format” available in tab “conditional” for Text & DateInput.
Awesome! Thanks a lot, should be very useful.
Has any of you implemented this new feature?
Do you have maybe a public app that you can share? It would be really nice to see how it works. I could not manage to implement it yet
@emmanuel How do you change the language if user is logged out?
You can use a lang parameter in the URL (or actually modify the current user’s field, even if the user is logged out, a temp user exists and can be used).
thanks. i observed that i just need to refresh the page, then it works
How does this play together with the localizejs plugin? If I set the language through the page URL, does localizejs pick that up? (I tried but could not make it work)
Or does it only affect localization-sensitive Bubble components?
This is not related to localize (core features are never related to plugins). But you can use the current app language and use it to call localize
Yes, I see. It would be nice if those two would match, though. Localizejs detects the browser language which is nice because the user in most cases doesn’t have to do anything. If Bubble could do the same, it would be perfect. If the user then decided to switch language, that could be handled in Bubble by storing the user’s preferred language and passing that to Localizejs.
This is very great !
Could be useful to have a way to filter on the elements that have some language sensitive logic: Let’s say some day, a new language has to be supported, it would be very handy to access easily the list of elements that need to be updated.
Or the platinum version: to have a dictionary of texts in several languages that could be called, like in the settings tab. But this could come later I guess
Is there a way to handle user language preference for the alerts ?
You mean the native messages (like ‘passwords don’t match’, etc.)?
If so, you can do what is described at the top of the thread, that’s what the feature is about.
|
OPCFW_CODE
|
Is "safes" an acceptable alternative to "makes safe"
Though I know it's uncommon usage (and intentionally so), is the following sentence legitimate?
She safes the dangerous area so it cannot be stumbled upon.
Obviously, modern usage would be "she makes safe", but some research on my part shows that "safes" is an acceptable "third-person singular simple present" form of safe.
Am I correct?
I'm guessing you'll spend more time explaining that you didn't mean "saves" than you gain by replacing "makes safe"
You could use 'safeguards' or 'secures' if you just want one word.
Can you give example sentences (author/date/links if possible) from your research?
Wouldn't "secures" do much the same job?
Why bother with archaisms in industrial contexts? Also, you do not mean stumbled upon, which means to come upon by chance. You mean: so no one falls over junk (objects) on the floor.
@Lambie The OED documents this usage as live up through 2009, and it does not label this obsolete nor archaic as you would allege. Just because their first citation is from 1602 doesn't make it archaic, particularly given their citations from after the 19th century. If you have evidence to the contrary from a citable published resource, I'm sure we'd be interested in seeing it. I bet the OED would, too.
As a native speaker of English I had never heard "safe" used as a verb until reading this Q&A. That's how rare this usage is, and that's what makes this an interesting question. However, based on sas08's answer, I don't think you can "safe" an area, unless that area is one big weapon. ;-)
@Mentalist, it's totally commonplace and ordinary around guns, and, say, in the military. This is the confusing issue with "obscure" words, like, it would "only" be an everyday term to let's say 50 million English speakers.
As people are requesting more context this is for a fantasy setting, and specifically it's to say that a magical trap, while not disabled is no longer able to cause harm. As people are mentioning technical usage, firearms and mechanical, this seems appropriate. Thanks all!
In your specific sentence, I would go with renders safe. Render Safe Procedures are formal written guides to making something safe. They usually apply to something that can go bang. The acronym RSP is often used as a verb meaning to execute RSP on something.
Pretty sure we only use safe as a verb when discussing ordnance or firearms. There might be other domains (operations security maybe?) but by the verb safe we definitely mean operating a safety mechanism designed to keep the weapon from firing/detonating.
The military definition is provided at The Free Dictionary, with citation to the US DOD (PDF).
As applied to weapons and ammunition, the changing from a state of readiness for initiation to a safe condition. Also called de-arming.
@user067531 you may note both examples you found are gun related. As are all the ones I can find on google. Hard to prove it hasn't been used outside those domains... but all I get for "safing" is articles on nukes and aircraft cannons.
For instance:
https://books.google.com/books?id=UDQYAAAAYAAJ&pg=SA2-PA5&lpg=SA2-PA5&dq=safing+a+gun&source=bl&ots=W_9SPHgkkk&sig=ACfU3U0rgRIvyvblYYdFvTNDf-Hb7vJJcg&hl=en&sa=X&ved=2ahUKEwi13JvHk-_jAhXOmuAKHdGCAfIQ6AEwCnoECBQQAQ#v=onepage&q=safing%20a%20gun&f=false
That wouldn’t make it a common usage, but a jargon one.
@user067531 I'm not sure what you mean by jargon. Why would jargon oppose the sense of common?
The other answer shows use with rockets, but perhaps the overlap between military and space organizations resulted in adopting their language.
The common theme to these examples would seem to be that, for any sort of device that is inherently hazardous in its normal operation, and for which there is some standard procedure for putting it into a non-hazardous state, this usage of "safe" is a technical term referring to that procedure. This is how I've always understood it -- even in ordinary usage, it has a connotation of not just making something "safe" in effect, but doing so by following an officially correct procedure.
Safe as a verb is quite uncommon, Wiktionary is one of the very few sources to show a few usage examples:
(transitive) To make something safe.
2007, Rocky Raab, Mike Five Eight: Air War Over Cambodia: Air War Over Cambodia
“It just trails behind the pylon until I land, then Cramer removes it when he safes the rocket pods. No evidence of anything when I taxi back inside the compound.”
2012, Erik Seedhouse, Interplanetary Outpost
One of the most important events after touchdown will be to safe the Dauntless, which will include purging the engines and shutting down the landing systems […]
Not common, but it does get used: John always safes his gun before putting it away. Leslie is far less careful. "Let me just safe my gun" said John. Guns usually come with safety locks, so the usage is quite specific.
It's not uncommon, it's just common only in a technical domain that requires the operation of mechanical safeties.
It is very uncommon, and here it can clearly be seen to be weapon-related or space-vehicle related.
The OED attests this usage with citations from 1602 up through 2009. It belongs to their frequency band 3, which comprises 20% of the non-obsolete terms in the dictionary, and whose "Verbs tend to be either colloquial or technical, e.g. emote, mosey, josh, recapitalize."
@Cascabel I did not vote for closure. I am merely saying that in the civilian world it is not common. That is a fact. But tell me, would you say it for the context provided by the OP? Some kind of factory floor? I doubt it...
@Lambie: One might "safe" a piece of equipment to prevent it from being accidentally started with potentially dangerous consequences. The OP's usage isn't quite right, but that may be because the example is over-simplified.
safeguard (MWD)
Definition of safeguard (Entry 2 of 2)
transitive verb
1 : to provide a safeguard for
2 : to make safe : PROTECT
secure (MWD)
Definition of secure (Entry 2 of 2)
transitive verb
1a : to relieve from exposure to danger : act to make safe against adverse contingencies
secure a supply line from enemy raids
The adjective "safe" can be used as an antonym for either "vulnerable" or "dangerous". The verbs you suggest are only applicable to the first meaning, but the OP's post suggests the second.
You make a good point, I might have copied the wrong part of the definition. Regardless, I think either option works with the sample sentence.
|
STACK_EXCHANGE
|
I was pleased at first, until my computer started freezing. It would play games (i.e. Assassin's Creed, CS:Source) no problem. Max I think I played these games for was for about an hour. However, during normal computer usage, such as internet browsing or using itunes, my computer would freeze. No response from the mouse or keyboard. Sometimes it would make a funny sound, like a buzzing noise.
I tried fixing this by downloading the latest drivers from XFX's site, and I installed the latest driver for my chipset. Once again, I could play Assassin's Creed no problem. As soon as I opened a browser (this time Chrome), my computer froze!
I did some research, and thought that maybe it was a memory issue, and I checked into setting the timings and voltage for my RAM since I hadn't changed them and left them on default. I set the voltage to 2.04, and timings to 6-6-6-18, just like the mfg. suggested. My RAM is here:
The recommended voltage is 2.0, but the closest my BIOS allows is 1.97 or 2.04. Anyway, I changed the timings and voltage, thinking yay my problems are over. But no, they were only just beginning!
Apparently changing memory settings in BIOS requires re-activation of Windows Vista. I tried re-entering my product key, but it said that the key was invalid. The same key on the back of the CD packaging that I've used to install Vista before. Twice. So I call Microsoft, and I need to tell them the "installation ID" number on screen. Well, there wasn't an installation number, so they transfer me to tech support. I tell him my issue, and he transfers me to another number. The transfer didn't work somehow, so I called the number he gave to me, and it was their Anti-Piracy hotline. Which is closed on Sundays. Fantastic.
This all started with a simple graphics card issue....whether it's a compatibility issue or something, I'm not sure. Has anybody experienced something like this before? (not necessarily the whole activation issue, but the freezing). Below are my full specs:
I'm working on the activation, so until then I can't clean out the drivers but I will try that first and let you know how it goes. And yes, you are correct that it freezes during basic usage, not gaming.
you should really do at least a repair or complete reinstallation of windows every time you change mobos, it messes up the windows chipset drivers and whatnot and can cause system instability and random freezes. You have changed pretty much every component on your rig so the windows is getting all freaked out and requires that reactivation cause it thinks it's on a new system. Better to reinstall it altogether....
Also, as far as your lockups where you had the issue to begin with, you said you can play for about an hour, which makes me think you may have a heat issue, maybe the video card is heating up and pushing air into your case, causing instability. Easy, and cheap way to take care of that, try adding a pci slot cooling fan right under your video card to help exhaust hot air it may produce out of the case and see if that helps.
Alright, so I managed to get my Windows activated. I spent an hour on the phone with Microsoft, and my computer was on this whole time, with several restarts, mainly using the command prompt and windows explorer. No freezes.
Then I start up Assassin's Creed, and am able to play for over an hour. At this point my computer has been (mostly) on for over 2 hours, half of which were spent playing a graphics-intensive game. No freezes. This was probably the longest it's gone without freezing since the video card install.
I close out assassin's creed and let the computer sit idle for 5 minutes or so before I open itunes. I play a song, and almost immediately my computer freezes and I hear the strange buzzing sound again.
This morning I was able to watch some videos on youtube for 5-10 minutes, then I tried itunes again. This time it made it 2 and a half minutes into a song before freezing, once again.
Now, before, my computer would freeze sometimes during internet browsing, but always during iTunes. I've been trying to analyze it and have come up with possible problems:
1. My sound card could be incompatible (for some reason) with the video card. But if this is the case, why am I able to play games with video and sound no problem?
2. It could still be an overheating issue I guess. My sound card is right below my video card and probably gets pretty hot. But still, my computer would still freeze during normal internet browsing, no sound.
3. Perhaps a hard drive issue? My HD is almost 4 years old now, has been reformatted probably a dozen times, and is IDE, not SATA. Maybe this is the most likely issue...itunes plays songs directly from the hard drive, internet browsing utilizes some hard drive resources, but playing a game will be mostly off of the CD/DVD, no?
Maybe there's something I'm overlooking. Tonight I'm going to use driver sweeper and remove my sound card to see if that helps.
I got the machine built up and the machine kept locking up. I was stumped. I ran diagnostic tests and memory tests and they all passed. So I assumed video card and had it RMA'ed. I got the card exchanged and installed it. SAME THING!
I then swapped out the memory...SAME THING!
Then swapped out the PSU....SAME THING!
I was then thinking it was my systemboard. So I was about to put an RMA for the systemboard until I started searching google on the XFX 9600 1GB and it seems this is a common problem with this card.
So in my experience with this problem I would say buy a new card... I'm going to.
The above is one good reason to buy a gx motherboard. If your add-in card is broken then you can check it easily with the onboard graphics. Also, you can run another monitor off the onboard and it doesn't slow down your gaming. I didn't realise that until I bought a foxconn 790gx recently, and it's quite a nice thing for the future knowing i can plug in another monitor for free basically.
|
OPCFW_CODE
|
Temporarily disable SELECT for one table (during an update)
We have a dedicated data mart SQL Server with several really big tables that are updated one partition at a time via parametrized SSIS loop. Quite often when clients try to read such a table during an update, they get in between ETL loop steps, so the whole table is locked until all the reads are finished, and that goes on and on for hours.
So how do I:
drop all active SELECT queries for a specific table,
disable all operations on it for everyone except an ETL account,
and make it readable again afterwards?
A very crude temporary solution that I came up with is just disabling all the client accounts during the big update time:
DECLARE @queryDisable NVARCHAR(MAX) = N'';
SELECT @queryDisable += 'ALTER LOGIN ' + QUOTENAME(name) + ' DISABLE;' + CHAR(13) + CHAR(10)
FROM syslogins
WHERE name like '%client%'
PRINT @queryDisable;
EXEC sp_executesql @queryDisable;
But that locks them out of the whole server while they may need to access a different database which is not being updated at the time.
I hope there is more elegant and civilized way than bruteforce DENY/GRANT cycling of SELECT permissions on every account there is.
Edit 2022-03-04: Thank you for your answers, they are duly noted and I will report back on progress, but we're making big releases every 1-2 months so it may take some time.
Sorry for the late entry, but I was testing out the direction that David Browne pointed out — and it works just as needed!
A more intrusive method is to put the database in SINGLE USER mode killing all the other connections.
Turns out that a database Restrict Access option has not only MULTI_USER and SINGLE_USER modes (accessible to everybody/nobody respectively) but also RESTRICTED_USER:
Only members of the db_owner, dbcreator, or sysadmin roles can use the database.
Since our service accounts have those roles they still can perform their ETL duties while clients receive a native MS SQL message that the database is currently in restricted access state. Our managers notified all the clients that this message means the DB is under necessary monthly maintenance and everything will be available as soon as clients receive «Data is updated» email (which is sent automatically as the ETL's last step).
The downside is that changing DB state will force drop all its active connections — no matter what role the user has, so make sure no important queries are going on or wait for them to finish. The upside is that all the other databases are not affected and are available as usual.
GUI path is: Database Properties → Options → State (the last dropdown) → Restrict Access.
You can also do it programmatically:
ALTER DATABASE [Name] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
-- WITH ROLLBACK IMMEDIATE doesn't wait for active queries completion
...
ALTER DATABASE [Name] SET MULTI_USER;
Though while this method works for a monthly updated DB, I'm still working out a way to lock specific tables because the other big-tabled DB is updated constantly and RESTRICTED_USER will essentially make it never available for the clients. So I will try to update the answer later if I'll come up with something on that matter.
In a transaction ALTER the table. You will get an exclusive schema lock for the duration of the transaction. Requiring a transaction in SSIS may be enough, alternatively set RetainSameConnection on the connection manager and handle the transaction and schema lock in a TSQL step.
You can get the schema lock with any alter table, like ALTER TABLE . . . SWITCH, or with SP_RENAME. A cheap and harmless ALTER TABLE for any table that doesn't have any disabled constraints is something like
alter table SomeTable with nocheck check constraint all
Which is a noop, except that it has the side effect of requiring your transaction to hold an SCH-M lock.
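To make that concrete, here is a hedged T-SQL sketch of the approach; the table name and the ETL step in the middle are illustrative placeholders, not part of the original answer:

-- Hold a schema-modification (SCH-M) lock on the table for the duration of the
-- load step, so client SELECTs wait instead of interleaving between loop steps.
BEGIN TRANSACTION;

    -- Harmless ALTER (assuming the table has no disabled constraints) whose only
    -- effect here is that the transaction now holds an SCH-M lock on the table.
    ALTER TABLE dbo.SomeTable WITH NOCHECK CHECK CONSTRAINT ALL;

    -- ... run the partition load / switch steps for dbo.SomeTable here ...

COMMIT TRANSACTION;  -- releases the SCH-M lock so readers can resume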
A more intrusive method is to put the database in SINGLE USER mode killing all the other connections.
sp_recompile is another option
|
STACK_EXCHANGE
|
The purpose of this talk is to examine (i) some of the models commonly used to represent fading, and (ii) the information-theoretic metrics most commonly used to evaluate performance over those models. We raise the question of whether these models and metrics remain meaningful in light of the advances that wireless communication systems have undergone over the last two decades. A number of critical weaknesses are pointed out, and ideas on possible fixes are put forth. Some of the identified weaknesses have to do with models that, over time, have become grossly inadequate; other weaknesses have to do with changes in the operating conditions of modern systems, and others with the coarse and asymptotic nature of some of the most popular performance metrics ("diversity" and "multiplexing").
Angel Lozano is a Professor of Information and Communication Technologies at UPF (Universitat Pompeu Fabra) in Barcelona, Spain. Prof. Lozano received the Telecommunications Engineering degree from UPC (Universitat Politècnica de Catalunya), Spain, in 1992 and Master of Science and Ph.D. degrees in Electrical Engineering from Stanford University in 1994 and 1998, respectively. Concurrently, between 1996 and 1998, he also worked for Rockwell Communication Systems (now Conexant Systems) in San Diego, USA. In 1999 he joined Bell Labs (Lucent Technologies, now Alcatel-Lucent) in Holmdel, USA, where he was a member of the Wireless Communications Research Department until 2008. Between 2005 and 2008 he was also an Adjunct Associate Professor of Electrical Engineering at Columbia University. Prof. Lozano has been a senior member of the IEEE since 1999. He served as associate editor for the IEEE Transactions on Communications between 1999 and 2009, has guest-edited various other IEEE and non-IEEE journal special issues, and is actively involved in committees and conference organization tasks for the IEEE Communications Society. Since 2010, he has been an associate editor for the Journal of Communications & Networks. He has further participated in standardization activities for 3GPP, 3GPP2, IEEE 802.20 and the IETF. Prof. Lozano has authored over 85 technical journal and conference papers, holds 15 patents, and has contributed to several books. His papers have received two awards: the best paper at the 2006 IEEE Int'l Symposium on Spread Spectrum Techniques & Applications, and the Stephen O. Rice prize for the best paper published in the IEEE Transactions on Communications in 2008. He has held visiting appointments at Stanford University, at the University of Minnesota, at the Hebrew University of Jerusalem, and at Universidad Técnica Federico Santa María (Valparaíso, Chile).
|
OPCFW_CODE
|
How to Connect to SFTP server using .key file in c#.net
I have a C# .NET Windows service project, where I am trying to open an SFTP connection to an SFTP server and put a file on the server.
I have SFTP hostname, username and key file (.key file).
I do have a passphrase here.
Please help me with something to use SFTP in C# and .Net
I tried to do it in the below mentioned way :-
using (SshClient sshClient = new SshClient(getKeyConnection(HostName, UserName, Port, "Myprivatekey.key", PassPhrase)))
{
Console.WriteLine("Connecting to server.");
sshClient.OperationTimeout = TimeSpan.FromSeconds(60);
sshClient.Connect();
Console.WriteLine("Is Connected to server " + sshClient.IsConnected);
}
My getKeyConnection method looks like this:
public static ConnectionInfo getKeyConnection(string host, string username, int port, string privateKeyFile,string password)
{
Console.WriteLine("Getting key Connection Info to establish Private Key SFTP");
return new ConnectionInfo(host, port, username, privateKeyObject(username, privateKeyFile,password));
}
My privateKeyObject uses
private static AuthenticationMethod[] privateKeyObject(string username, string publicKeyPath,string password)
{
Console.WriteLine("Private key object method called.");
PrivateKeyFile privateKeyFile = new PrivateKeyFile(publicKeyPath,password);
PrivateKeyAuthenticationMethod privateKeyAuthenticationMethod = new PrivateKeyAuthenticationMethod(username, privateKeyFile);
return new AuthenticationMethod[] { privateKeyAuthenticationMethod };
}
When I try to connect, I get an "invalid private key file" error.
Any idea how we can do this?
We have an X.509 certificate which is signed by an intermediate CA and is installed on our SFTP server, and we have a private key file and a passphrase which I am sending in my authentication method. For SFTP we are using the Renci SSH.NET NuGet package.
SFTP is a subset of HTTPS. Both use TLS for authentication, which is done before the request is sent from client to server. In TLS the server sends a certificate block with possible names and certificates, and then the client checks its stores to see if any certificate matches the list of certificates sent from the server. The key file is the certificate. So all you need to do is load the certificate into the stores on the client. I assume the server already has the certificate in the certificate block.
SSH is a different protocol than SFTP and you should not be using both. SSH is when you are making a secure shell connection and SFTP is when you are making a secure file transfer. Both SSH and SFTP require a Username and Password besides the certificate.
When we generate an X.509 certificate it generates a private key, which needs to be converted into an RSA private key by running the command mentioned below.
openssl rsa -in server.key -out server_new.key
Make sure you open openssl in Administrator mode.
This means your key file should start with -----BEGIN RSA PRIVATE KEY-----
Renci requires the private key to be in an openssl format. That exception is normally the result of an invalid key file format or a missing file. You can use putty gen, openssl tools or the Java keytool utility to convert the key to the proper format.
Note:
"Renci.SshNet.Common.SshException: Invalid private key file" when loading SSH private key from configuration string using SSH.NET
I previously tried the same with PuTTYgen and WinSCP to convert the file to OpenSSL format, but as soon as I load a private key file it prompts that the format is not supported.
I ran a test with a self signed certificate containing a private key, by exporting the certificate to a PFX.
Then used openssl to convert the private key in the pfx to a PEM keystore which can be manipulated by openssl:
"openssl pkcs12 -in test.pfx -out test.pem -nocerts -nodes"
Then run the RSA/EC conversion to convert the key to the correct RSA format the Renci component is expecting:
"openssl ec -aes256 -in test.pem -out key.pem -passout pass:test1234"
Then you place the key.pem file into the directory your private key is expected to be in.
This header should be at the top of the file once it's been converted:
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
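For reference, a minimal hedged SSH.NET sketch of connecting and uploading once the key is in the RSA PEM format described above; the host, port, user, file paths and passphrase below are placeholders, not values from this thread:

using System;
using System.IO;
using Renci.SshNet;

class SftpUploadSketch
{
    static void Main()
    {
        // Placeholder values -- replace with your own host, user, key path and passphrase.
        var keyFile = new PrivateKeyFile(@"C:\keys\key.pem", "test1234");

        using (var sftp = new SftpClient("sftp.example.com", 22, "myuser", keyFile))
        {
            sftp.OperationTimeout = TimeSpan.FromSeconds(60);
            sftp.Connect();
            Console.WriteLine("Connected: " + sftp.IsConnected);

            // Upload a local file to the remote path.
            using (var stream = File.OpenRead(@"C:\data\upload.txt"))
            {
                sftp.UploadFile(stream, "/inbound/upload.txt");
            }

            sftp.Disconnect();
        }
    }
}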
Thanks @charlie, it was done. I just converted my private key into an RSA private key by running a command in openssl.
|
STACK_EXCHANGE
|
#include <vector>
#include "key_input.h"
#include "platform.h"
#define IS_SHIFT_PRESSED() (is_key_pressed(KEY_LEFT_SHIFT) || is_key_pressed(KEY_RIGHT_SHIFT))
namespace Engine {
void key_event_callback( Key key, bool is_pressed );
bool key_states[KEY_MAX_COUNT] = { false };
std::vector<key_event_cb> key_event_callbacks;
Rc_t init_key_input_system( void )
{
platform_add_key_callback(key_event_callback);
init_platform_input_system();
return SUCCESS;
}
void add_key_callback( key_event_cb callback )
{
key_event_callbacks.push_back(callback);
}
bool is_key_pressed( Key key )
{
return key_states[key];
}
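// Map a Key code to its printable ASCII character, honouring the Shift keys.
// Assumes the Key enum values for letters, digits and punctuation match their
// unshifted ASCII codes (the arithmetic below relies on this). Returns -1 for
// keys with no printable mapping.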
char key_to_char( Key key )
{
char c_key = -1;
if ( key >= KEY_A && key <= KEY_Z ) {
if ( IS_SHIFT_PRESSED() ) {
c_key = key;
} else {
c_key = key + 32;
}
} else if ( key >= KEY_0 && key <= KEY_9 ) {
if ( IS_SHIFT_PRESSED() ) {
switch ( key ) {
case KEY_0:
c_key = ')';
break;
case KEY_1:
c_key = '!';
break;
case KEY_2:
c_key = '@';
break;
case KEY_3:
c_key = '#';
break;
case KEY_4:
c_key = '$';
break;
case KEY_5:
c_key = '%';
break;
case KEY_6:
c_key = '^';
break;
case KEY_7:
c_key = '&';
break;
case KEY_8:
c_key = '*';
break;
case KEY_9:
c_key = '(';
break;
default:
LOG_ERROR("Got a value that was not a number");
break;
}
} else {
c_key = key;
}
} else {
switch ( key ) {
case KEY_SPACE:
c_key = key;
break;
case KEY_APOSTROPHE:
c_key = IS_SHIFT_PRESSED() ? '"' : key;
break;
case KEY_COMMA:
c_key = IS_SHIFT_PRESSED() ? '<' : key;
break;
case KEY_MINUS:
c_key = IS_SHIFT_PRESSED() ? '_' : key;
break;
case KEY_PERIOD:
c_key = IS_SHIFT_PRESSED() ? '>' : key;
break;
case KEY_SLASH:
c_key = IS_SHIFT_PRESSED() ? '?' : key;
break;
case KEY_SEMICOLON:
c_key = IS_SHIFT_PRESSED() ? ':' : key;
break;
case KEY_EQUAL:
c_key = IS_SHIFT_PRESSED() ? '+' : key;
break;
case KEY_LEFT_BRACKET:
c_key = IS_SHIFT_PRESSED() ? '{' : key;
break;
case KEY_BACKSLASH:
c_key = IS_SHIFT_PRESSED() ? '|' : key;
break;
case KEY_RIGHT_BRACKET:
c_key = IS_SHIFT_PRESSED() ? '}' : key;
break;
case KEY_GRAVE_ACCENT:
c_key = IS_SHIFT_PRESSED() ? '~' : key;
break;
case KEY_TAB:
c_key = 0x9;
break;
default:
break;
}
}
return c_key;
}
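// Invoked by the platform layer: record the new key state, then fan the event
// out to every registered key_event_cb listener.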
void key_event_callback( Key key, bool is_pressed )
{
key_states[key] = is_pressed;
for ( size_t ii = 0; ii < key_event_callbacks.size(); ii++ ) {
key_event_callbacks[ii](key, is_pressed);
}
}
} // end namespace Engine
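A minimal hedged usage sketch of this module follows. It assumes "key_input.h" declares the Key enum, the key constants and the key_event_cb type used above, and that a KEY_ESCAPE constant exists; none of that is shown in the snippet itself.

#include <cstdio>
#include "key_input.h"

using namespace Engine;   // so the sketch reads the same whether Key lives in Engine or at global scope

// Example listener: print each printable character as it is pressed.
static void on_key( Key key, bool is_pressed )
{
    char c = key_to_char(key);
    if ( is_pressed && c != (char)-1 ) {
        printf("typed: %c\n", c);
    }
}

int main( void )
{
    init_key_input_system();    // hooks the platform key callback
    add_key_callback(on_key);   // register our listener

    while ( !is_key_pressed(KEY_ESCAPE) ) {
        // ... pump the platform event loop here so key events keep arriving ...
    }
    return 0;
}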
|
STACK_EDU
|
Red Hat is hiring freshers from the 2013 and 2014 batches for a Software Engineer position through an online drive in 2015 across India. The Red Hat recruitment drive 2015 is open to 2013 and 2014 pass-outs of BE, BTech and MCA. Interested candidates can apply online. The link to apply online is provided below.
About Red Hat:
Red Hat, Inc. is an American multinational software company providing open-source software products to the enterprise community. Founded in 1993, Red Hat has its corporate headquarters in Raleigh, North Carolina, with satellite offices worldwide. Red Hat has become associated to a large extent with its enterprise operating system Red Hat Enterprise Linux and with the acquisition of open-source enterprise middleware vendor JBoss. Red Hat also offers Red Hat Enterprise Virtualization (RHEV), an enterprise virtualization product. Red Hat provides operating system platforms, middleware, applications, management products, and support, training, and consulting services.Red Hat creates, maintains, and contributes to many free software projects and has also acquired several proprietary software packages and released their source code mostly under the GNU GPL while holding copyright under a single commercial entity and selling user subscriptions. As of June 2013, Red Hat is the largest corporate contributor to Linux.
The Red Hat Storage Engineering team is looking for a Software Engineer. In this role, you will work as part of a team responsible for developing the Linux file system and writing unit test cases for the file system. You'll also participate in the complete software development life cycle from requirement gathering to deployment of the product, conduct system analysis and development, and review and repair legacy code. We'll need you to have an extensive technical background, preferably in client or server architecture and algorithm development. You will also need to be creative and team-oriented, with good interpersonal skills.
Primary job responsibilities
- Design, develop and debug the storage file system
- Participate and contribute in code reviews
- Conduct system analysis and development and review and repair legacy code
- Debug and resolve any field issues
- Document code consistently throughout the development process
Required skills and Eligibility:
- Bachelor's degree in computer science or other technical degree with relevant experience is preferred
- Extensive working knowledge of C, C++, and Makefiles
- Excellent skills in and knowledge of file system, storage, data structures, and algorithms
- Familiarity with operating systems and Linux internals
- Knowledge of free and open source software development concepts and methodologies
- Ability to multi-task while maintaining a high attention to detail
- Good interpersonal skills, with the ability to interact with customers when needed
- Excellent verbal and written communication skills
Locations of work:
Best in industry
How To Apply:
Interested candidates can apply on the below link
Red hat apply Link
|
OPCFW_CODE
|
Filters allow you to create segments in your lists so that you can send targeted communications. You can use filters to send birthday messages or interest-specific product catalogs. In this article we will cover creating a filter, and applying it to a list to send an email.
To set up an email filter, click Contact Admin > Filters, then click Create Filter. Enter the name for your filter, and a description if you need one.
You can create filters based on the following criteria:
Check the checkbox next to the property you want to filter your contacts by. There are options next to the property which allow you to refine your filter.
For example, if I want to create a filter to send a special offer for teddy bears on Valentine's Day, I would choose the Gender field (under Additional Contact Fields > Additional Properties) and set it to male, and refine that by setting the Age field (under Additional Contact Fields > Date Properties) to 'between' 20 and 30. Now, my email would go to young men who might buy teddy bears for their girlfriends on Valentine's Day.
You can set a filter based on an existing custom field. This includes multi-value custom fields, where more than one selection can be used in the filter.
You can create email filters based on contact activity. Click the General Activity tab, then check the checkbox next to the activity you want to filter on, and enter the specific details you want to filter by.
For example, you can create a filter for contacts who have not opened an email from you in the past 14 days. Check Message Activity, then choose 'has not' from the dropdown bar and enter 14 days in the last two fields.
You can also select more than one message to filter contacts. Click the Contact checkbox, select the message activity, then click the text box to open the modal.
The modal displays all messages in the system for both email and SMS. You can filter by specific lists, message type, or messages sent or failed. A total count for each selected message appears on the Add Selected Messages button.
You can create a filter to send emails to contacts in one or more specific lists. You can also set the list filter to send to all your contacts who are not in a specific list (or group of lists).
Use this to create a filter based on a contact's last tracked location. You can filter by country or city.
Once you have created your filter, click the Test button to test your filter. Testing will not apply your filter to any lists; it will check what the results would be if the filter were applied to that list.
Check the checkbox next to a list, and then click the buttons to test the filter against either email or SMS contacts in that list.
You can apply filters to lists during message composition.
On the list options step, check the checkbox next to the list you want to send the message to, then click Segment. Choose the filter you want to apply from the dropdown.
If you haven't created your filters yet, you can create a new filter by clicking Create New and following the steps. You can only create property and date-related filters in this way. Other types of filters must be set up before you start creating your message.
|
OPCFW_CODE
|
Today’s meeting leader is: mconley
- 1 General Topics / Roundtable
- 2 Friends of the Firefox team
- 3 Project Updates
- 3.1 Add-ons / Web Extensions
- 3.2 Developer Tools
- 3.3 Downloads Panel
- 3.4 Fluent
- 3.5 Form Autofill
- 3.6 Desktop Integrations (Installer & Updater)
- 3.7 Lint, Docs and Workflow
- 3.8 macOS Spotlight
- 3.9 New Tab Page
- 3.10 Nimbus / Experiments
- 3.11 NodeJS
- 3.12 Password Manager
- 3.13 PDFs & Printing
- 3.14 Picture-in-Picture
- 3.15 Performance
- 3.16 Performance Tools (aka Firefox Profiler)
- 3.17 Privacy/Security
- 3.18 Search and Navigation
- 3.19 Screenshots
- 3.20 Community
- 4 This week I learned
General Topics / Roundtable
- [bigiri] HTML Fragments Preprocessor
Friends of the Firefox team
- [:mcheang] Please welcome Stephanie Cunnane to her first Firefox Desktop meeting today. She’s our newest team member on the Search Team and started with us on March 21st! 🎉🎉🎉Welcome Stephanie!
Resolved bugs (excluding employees)
Script to find new contributors from bug list
Volunteers that fixed more than one bug
- Claudia Batista [:claubatista]
- Masatoshi Kimura [:emk]
- Mathew Hodson
New contributors (🌟 = first patch)
- Jintao Hu styled Reader Mode code blocks differently from the rest of the text.
- samuraix221 renamed glyph-modal-delete-32.svg to match its actual size (20).
- k88hudson, jhirsch, kpatenio are this month's Firefox / Toolkit::General triagers!
Add-ons / Web Extensions
Addon Manager & about:addons
- Changes to the add-on install flow: as anticipated, in Firefox >= 100 user activation is now required to successfully trigger the add-on installation flows - Bug 1759737
As part of the ongoing ManifestVersion 3 work:
Introduced support for the runtime.onSuspend API event, and more WebExtensions APIs were adapted to support persistent listeners - Bug 1753850, Bug 1748567, Bug 1748557, Bug 1762048, Bug 1748566, Bug 1748563, Bug 1748565, Bug 1748559, Bug 1748546, Bug 1748525, Bug 1748555, Bug 1748549
Requiring granted host Permissions for MV3 content scripts - Bug 1745819
- Fixed optional permission changes not propagated to new processes - Bug 1760526
- Fixed missing “title” property in the bookmarks.onRemoved event details - Bug 1556427
- Fixed the browser.sessions.getRecentlyClosed API when a closed window had a tab with an empty history - Bug 1762326
Added support for creating muted tabs using tabs.create - Bug 1372100
Thanks to kernp25 for contributing this nice enhancement
Support overriding the heuristic that Firefox uses to decide whether a theme is dark or light using the new "theme.properties.color_scheme" and "theme.properties.content_color_scheme" theme properties - Bug 1750932
See also firefox-dev email “PSA: Theming changes on Nightly (for Firefox 100)”
Wartmann fixed an annoying Debugger + React DevTools webextension bug, where you had to click the resume button twice when paused because of a “debugger” statement (bug)
Yury and Alex improved debugging of asm.js/wasm projects (bug) by turning debug code on only when using the Debugger, making console-only usage faster
Julian fixed a bug when using the picker on UA Widgets (e.g. <video> elements)
Storage Inspector wasn't reflecting Cookies being updated in private tabs; this was fixed in Bug 1755220
We landed a few patches that improved Console performance in different scenarios (bug, bug and bug), and we're getting close to landing the virtualization patch (bug). Overall the Console should be _much_ faster in the coming weeks; we'll compile some numbers once everything has landed
Support for the browsingContext.close command landed (bug) which allows users to close a given top-level browsing context (aka tab). The browser testing group still needs to agree on what should happen when the last tab (window) gets closed.
Optional hosts and origins should now be set as command line arguments, and not from preferences anymore (bug). This will raise user awareness when adding these additional hosts and origins that need to be accepted for new WebSocket connections by WebDriver BiDi clients.
Most of the existing WebDriver tests on Android are now enabled (bug), which will prevent regressions on this platform. More tests can be enabled once Marionette supports opening new tabs.
The vendorShortName DTD string is now gone! Slowly but surely, DTDs are melting away from platform.
Only 267 DTD strings now exist. You can track the burndown / transition to Fluent here.
- Dimi fixed credit card records not being passed correctly into GeckoView
- Dimi fixed credit card saving prompts not appearing as expected on GeckoView when trying to update an existing credit card
- Dimi fixed an issue where form autofill would not work correctly when using Fathom heuristics
- Tgiles fixed an issue where form autofill would modify readonly inputs
- Tgiles fixed credit cards not being saved correctly if there is whitespace in the captured credit card expiry string
- Thanks to Dimi and Emilio for solving an issue that was causing a permanent failure in one of our tests
Desktop Integrations (Installer & Updater)
Lint, Docs and Workflow
There are various mentored patches in work/landing to fix ESLint no-unused-vars issues in xpcshell-tests. Thank you to the following who have landed fixes so far:
- Gijs has landed a patch to suggest (via ESLint) using add_setup rather than add_task in mochitests, and updated many existing instances to use add_setup.
Standard8 landed a patchset that did a few things:
Fixed an issue when running with ESLint 8.x which we'll be upgrading to soon.
Completed documentation for ESLint rules where it was missing previously.
Upgraded all the Mozilla rules to use a newer definition format, which also includes a link to the documentation.
- Editors should now be able to link you to the documentation if you need more info e.g. in Atom:
New Tab Page
Nimbus / Experiments
PDFs & Printing
- Lots of great contributions coming in from applicants for our upcoming Picture-in-Picture Outreachy internship project!
- niklas fixed an issue in one branch of the toggle experiment the team is planning on running soon
- Neil made it possible to focus / invoke the Picture-in-Picture toggle via keyboard on <video> elements with the built-in control set.
- Thanks to Mathew Hodson who updated Session Store to use IOUtils rather than a Worker with sync disk access to write sessions to disk! This should reduce the serialization overhead when persisting sessions - especially large sessions.
Performance Tools (aka Firefox Profiler)
- Improve markers in the marker chart panel (#3930)
New redesigned markers in the marker chart panel
- Improve screenshot marker tooltips (#3957)
Now screenshot marker tooltips include image, window size, and description fields.
- Add a marker context menu item for IPC markers, to select the other thread (#3936)
- Move the IPC tracks right below their threads during the initial load (#3968)
- Use orange color for CC markers (#3900)
- Show the duration of the full range in the filter navigator bar (#3964)
“Full Range” button with the total profile duration
- Remove animations from various places for users with prefer-reduced-motion
- [pbz] Landing a patch to enforce iframe sandbox for external protocols, (e.g. zoommtg://): Bug 1735746 - Block external protocol handler with sandbox.
This means sandboxed third-parties can no longer open external applications on Desktop and Mobile
- [:mcheang] Welcome Stephanie! She’s here now, so let’s pass the mic to her for an intro!
- James changed one of the observers we use for some search telemetry to a more efficient one, which also seems to have helped speed up some page load tests on Linux
- Various patches have landed to improve search & new tab's support for live language switching, including correctly switching search engines, preferences, new tab and address bar. Thank you to Greg for taking most of those on.
- [sfoster] Thanks contributor Joaquín Serna for fixing Bug 1705745 - Remove authentication code from screenshots
- [sfoster] :niklas patched Bug 1752734 - Full page screenshots are scaled down without warning, so we'll no longer scale down output images when captured on a > 1 pixel density display. If you want fewer pixels, it's up to you to downsample.
Lots of Outreachy applicants are showing up! Keep your eyes peeled for Bugzilla comments asking to be assigned to good-first-bugs. Respond ASAP to questions from applicants.
good-first-bug in Bugzilla keyword
[lang=js] or [lang=css] [lang=html] in the whiteboard
Set yourself in the Mentor field
This week I learned
[gijs] Someone I spoke to didn’t know this one neat trick: on treeherder, you can filter jobs by test paths! Click the funnel icon on the top right, then in the dropdown on the left select “test path” and put in a path to a directory or file, and click the “add” button. Treeherder will hide all the jobs not running tests there. Useful when looking at treeherder to see when particular breakage was introduced, or which jobs run which tests. The URL will also reflect this selection.
|
OPCFW_CODE
|
After a few years working full time as a Ruby on Rails developer I was given the opportunity to be part of a team to build a React Native app.
As it was my first time dealing with React Native, I had to learn a lot during the process. In particular, there are five things I know now that I wish I’d known at the very beginning. Let me share them with you in case they could help you in starting your React Native journey.
1. Reading docs is great until it’s not
If you try to learn a million things you could get lost and never start building the app you had in mind. That’s why I’d recommend making yourself comfortable with the official docs, going through the Getting Started guide and starting your app right after that. Even if you don’t fully understand everything in that guide.
The sooner you start practising, the better. I’m sure you’ll advance by coming across many challenges that will teach you React Native inside out.
Concepts you definitely need to understand beforehand:
What is a React component?
Basic front-end knowledge.
Concepts you’re most likely to need at the beginning:
Styling. I was working with an amazing front-end dev who tackled this for me, but if you want to make your app look good you need to invest some time learning to style it.
How to write tests (more below).
2. Sync all your tools’ versions with your team
Xcode, Android Studio, even the OS version if you are developing with Macs, should be the same for the entire team. This will prevent conflicts along the process.
3. Choose your testing strategy wisely
To ensure a high-quality app you need to have a test strategy. There are two major end-to-end testing platforms, Appium and Detox. If you want to know more about the differences between them you could read this post.
If you prefer unit testing, Jest is the preferred tool for it. We paired this with React Native Testing Library for rendering our components.
We ran into timeout issues when we were running Detox tests on our CI (Travis), so we only ended up running our end-to-end tests locally when releasing a build and we kept Jest Snapshots for unit testing on Travis.
These are some advantages and disadvantages for both types of testing:
Detox for end-to-end tests:
➕ It will allow you to get the app covered with scenario tests for user journeys. This is a great way to know the functionality remains the same while adding new features.
➕ Setup is fairly simple.
➖ With numerous test scenarios, the suite can end up taking a very long time to run. A solution would be to write longer scenarios covering several journeys.
Jest for unit testing:
➕ Extremely flexible and well-documented, with a vast support network.
➕ Snapshots are an excellent and speedy tool for ensuring a component does not change unexpectedly (see the sketch after this list).
➕ The test suite runs quickly.
➖ The added complexity in tests can take time to accommodate, for example, when you need to mock out other dependencies.
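To make the snapshot idea concrete, here is a minimal sketch of what one of these tests can look like with Jest and React Native Testing Library (assuming the @testing-library/react-native package; the package name has varied across versions). The Greeting component and its props are invented for illustration, not taken from a real app:
// Greeting.test.js (illustrative only)
import React from 'react';
import { Text } from 'react-native';
import { render } from '@testing-library/react-native';

// A made-up component; in a real project you would import one of your own.
const Greeting = ({ name }) => <Text>Hello, {name}!</Text>;

it('renders the greeting without unexpected changes', () => {
  const { toJSON } = render(<Greeting name="Ada" />);
  // The first run records a snapshot; later runs fail if the rendered output changes.
  expect(toJSON()).toMatchSnapshot();
});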
4. Use console.log endlessly
Help yourself by logging as much as you can. It could help you a lot to be sure you understand the way a page is rendered, how many times you visit a certain point of the code, when a function is triggered, what the value of a variable or the state at a given point is, etc.
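As a rough sketch of what I mean (the component and field names are made up), a couple of well-placed logs are often enough to see how often a component renders and with what data:
const ProductList = ({ products }) => {
  // Logs on every render, so unexpected re-renders stand out immediately.
  console.log('ProductList render, product count:', products.length);

  const onSelect = (item) => {
    console.log('Product selected:', item.id);
  };

  // ...render the list and wire up onSelect here...
  return null;
};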
5. Be ready to adapt to changes
As new versions are released some updates might be required. Also, you’ll find many relatively old posts where the syntax is not the latest one (which is the one you should be using). Just be aware of that and update your app periodically to prevent complicated upgrades in the future.
Surround yourself with experts to boost your knowledge. I cannot highlight enough how much it helped me to see developers writing code or to do code reviews (thank you to Andy W and Dave Q). Don’t be afraid to pull the work from other devs and play with it. Trying to change things or to break them could help you get a better understanding of the code flow.
Finally, if you are switching from Ruby on Rails to React Native as I did, prepare yourself for a different way of thinking. Embrace the differences at the beginning and you’ll enjoy them at the end. 😃
Previously from our Engineering Team:
How Ruby if statements can help you write better code by Dave Cocks
How do you solve a problem like caching in Progressive Web Apps? by Dave Quilter
Rails 6: Seeing Action Text in... action by Stephen Giles
A Quick Comment on Git Stash by Karen Fielding
Avoiding N+1 queries in Rails GraphQL APIs by Andy West
|
OPCFW_CODE
|
Different organizations and different groups within an organization have different rules that they use to derive the information they need (e.g., from the raw data that exists in operational systems) to do their analysis.
The purpose of this task is to identify to the lowest level of detail how each element in the data warehouse is derived, including:
· transformation rules for the base-level data warehouse (e.g., the fact tables),
· integration rules for disparate data sets,
· rules for ensuring data quality.
Identify Transformation and Integration Rules
Data from data sources (e.g., operational data, external data) must be converted so that it is suitable for populating the fact and dimension tables in the data warehouse.
The data in the data sources (mainly from operational systems) that is used to populate the data warehouse entities is usually normalized and segmented by operational system. Each operational system may use different code values and/or field lengths for the same data.
This step defines:
· how to decode code values from different operational systems so that they can be made consistent in the data warehouse,
· how to merge data from multiple data sources/operational systems so that the fact and dimension tables can be populated.
The rules on how to transform the source data are stored in the data warehouse as metadata so that the process can be repeated. Transformation processes are generated based on these rules.
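As a simplified, purely illustrative sketch of such a decoding rule (the system names, raw codes, and warehouse codes below are invented), the idea is a lookup that maps each source system's codes onto the single set of codes used by the warehouse:
// Illustrative only: source systems, raw codes, and warehouse codes are invented.
const genderDecodeRules = {
  billing:  { '1': 'M', '2': 'F', '9': 'U' },
  crm:      { 'M': 'M', 'F': 'F', '': 'U' },
  ordering: { 'male': 'M', 'female': 'F', 'unknown': 'U' },
};

function decodeGender(sourceSystem, rawValue) {
  const rule = genderDecodeRules[sourceSystem] || {};
  const decoded = rule[String(rawValue)];
  // Fall back to 'U' (unknown) so bad source data stays visible rather than being dropped.
  return decoded !== undefined ? decoded : 'U';
}

console.log(decodeGender('billing', 1));        // 'M'
console.log(decodeGender('crm', ''));           // 'U'
console.log(decodeGender('ordering', 'oops'));  // 'U'
In practice, mappings like these would themselves be stored as metadata in the warehouse, as described below, rather than hard-coded in a script.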
To ensure/verify the quality of the data in the data warehouse, a map must exist that defines where the data originated, how it was transformed/integrated, and how it was summarized/aggregated. This information is as important as the data used to create the business query results.
This information (i.e., the business rules) must be stored as metadata in the data warehouse. Verify that the business rules are complete and stored correctly (i.e., stored as metadata).
Disparate Data Considerations
Problems are frequently encountered when bringing in data from a variety of sources. Organizations often have poor data quality in operational data sources or poorly understood data, and one of the major challenges in populating the data warehouse is to ensure that the data being used is valid and understood. The most common problems are as follows:
· No integrated or consolidated inventory exists for all of the organization's data. What data is out there in the myriad of systems that typically exist?
· Data being maintained is invalid. The true meaning of data stored in operational data sources may differ from the definitions contained in the data dictionaries. Data fields may have multiple uses.
· Data types are redundant and inconsistent. For example, there may be multiple places where a person's address is stored, and there may be various values stored in these address fields.
· In addition to the previous point, the multiple occurrences of fields often have varying formats and lengths.
· Data elements with similar names across systems may actually contain different data. For example, a field called "Account Start Date" may be the date an account was requested in one system and may be the date an account was approved in another system.
Tips and Hints
A data warehousing implementation will be doomed to failure if end-users are not confident that the data maintained in the warehouse is reliable.
The costs associated with invalid data are often enormous to an organization. Cleaning up poor quality data at the source will often reap more rewards and cost savings than cleaning up data solely for the data warehouse.
|
OPCFW_CODE
|
How to get the remote/peer IP address from an SSL/ssl_st structure in C?
The things I have previously tried are getting the socket fd from SSL_get_wfd and then passing it to getpeername. I also looked at the BIO object/functions, but without any luck. I attempted to look at the OpenSSL implementation in /usr/include/openssl, but again with no luck.
Does anyone know how to get the remote IP address (and port) to which an openSSL socket is connected?
Some context:
socket fd: 64 // the file descriptor doesn't look incorrect (to me)
after getaddress, socklen: 28 // the length of the plausible address also looks correct
sockaddr ptr: 0x7b0b0fcac0, val: 0x0 // the pointer is empty despite being allocated :(
edit: the documentation I based my work on:
https://docs.huihoo.com/doxygen/openssl/1.0.1c/structssl__st.html
What you describe should work and does for me. Do you have the addrlen argument set correctly on input to getpeername? (Note as for all C libraries, headers including those in /usr/include contain only declarations not implementations.)
I do allocate addrlen correctly and it is set to the correct length after getpeername which makes me believe it is called correctly. One fact I did not mention is that I'm hooking this function with Frida (passing js variables to NativeFunctions).
Anyways, I was able to get the host name by using SSL_get_servername.
If addrlen is (wrongly) set to zero before getpeername(), addrlen will be set to the length of the address ('correct') but the address will NOT be stored in addr; see the man page (or POSIX). SSL_get_servername on client is the name specified in SNI as the desired server, which is not always the host you actually connect to, and can be omitted entirely (although on the public web nowadays many servers reject a handshake without SNI).
Frida has nice features related to Sockets.
var address = Socket.peerAddress(fd);
// Assert address not null
console.log(fd, address.ip + ':' + address.port);
View Sockets activity;
Process
.getModuleByName({ linux: 'libc.so', darwin: 'libSystem.B.dylib', windows: 'ws2_32.dll' }[Process.platform])
.enumerateExports().filter(ex => ex.type === 'function' && ['connect', 'recv', 'send', 'read', 'write'].some(prefix => ex.name.indexOf(prefix) === 0))
.forEach(ex => {
Interceptor.attach(ex.address, {
onEnter: function (args) {
var fd = args[0].toInt32();
if (Socket.type(fd) !== 'tcp')
return;
var address = Socket.peerAddress(fd);
if (address === null)
return;
console.log(fd, ex.name, address.ip + ':' + address.port);
}
})
})
Output example
$ frida -Uf com.example.app -l script.js --no-pause
[Android Model-X::com.example.app]->
117 write <IP_ADDRESS>:5242
117 read <IP_ADDRESS>:5242
135 write <IP_ADDRESS>:4244
135 read <IP_ADDRESS>:4244
135 read <IP_ADDRESS>:4244
This is a bit old and the "huihoo" link is dead, but this does come up when searching, as it did for me.
I tested and indeed SSL_get_wfd() and/or SSL_get_rfd() do return the socket descriptor. In my case either returned the same socket index.
At least this is the case when using a socket-based BIO, and of course you must already be connected, accepted, etc., so that a socket has been created for the SSL context.
This would be your first step: verify what you are getting from one or both of these functions. On failure they return -1 (not a valid descriptor).
Assuming you get a socket like I do, then it becomes more of a sockets question (and less an OpenSSL one). You can use getpeername() on the socket to get a "struct sockaddr", which you can in turn use to call getnameinfo(), which will give you a hostname or a numeric string representation of the IP address.
|
STACK_EXCHANGE
|
Foster Parent Dashboard
Version 1.0 06/09/2016
#### Table of Contents
- PROTOTYPE URL
- CONTACT INFO
The application has been developed to meet the submission requirements for the CHHS ADPQ vendor pool selection. The application includes the ability to create an account and log in, and once logged in the user is able to perform a proximity search for a foster care facility or agency. From the dashboard the logged-in user is also able to send and receive messages from a case worker.
The application was developed using best practices for user-centered design and agile development, which are described in further detail within this document. These are all practices that our company employs on a regular basis through our extensive work with public-sector agencies.
- Java 8
- Maven 3.x (must be able to run mvn on the command line)
- Node + NPM
npm install
mvn spring-boot:run
Point a browser to
Login and user info all hard coded to accept anything at the moment.
To keep your SASS files compiling and jslint checking run:
- Supports GET, POST, PUT
- GET and PUT require user to be logged in
- POST only
- GET only
Refer to model objects in Java source for available properties.
- Java 8
- Maven 3.x (must be able to run mvn on the command line)
- MySQL 5.6.x or 5.7.x
Create a new database and user in MySQL for local development. Name does not matter as you will configure that next.
Edit application.yaml and modify the database parameters to match your local development setup. Assuming the port numbers are the same, the only things you will have to change are the database name on the end of the URL and the username and password values.
Then you can run the application.
By default the server runs on port 8090, and you can view the documentation for the endpoints here.
The database schema is currently controlled by Liquibase and its integration with Spring Boot. On startup the application will automatically run any outstanding migrations.
On first startup a new, default user is created with the email of firstname.lastname@example.org. When this happens you will see a log message telling you the user's randomly generated password. This should be copied down if you wish to log in as this test user.
A more comprehensive description of our Technical Approach can be found on our Confluence wiki - Link
In the event that external artifacts are not considered admissible, we have also provided many of the associated documents and images within this repository - Link
A. Assigned a team leader.
John Gordon, Director of Software Development
B. Team Members (and corresponding ADPQ labor categories)
- Product Manager: John Gordon
- Technical Architect: Nick Stuart
- Interaction Designer/User Researcher: Melissa Coleman
- Visual Designer: Christopher Prinn
- Front End Developer: Rachel Charow
- Back End Developer: Joseph Descalzota
- Dev Ops Engineer: Lyle McKarns
- Security Engineer: Chris Davis
- Agile Coach: Alison Schestopol
- Quality Assurance: Carl Swanson
C. User Research
User research and testing included the following:
- Analogous research
- Team ideation meetings
- Initial wireframes reviewed with user - Link
- Updated wireframe reviewed by external testers - [Link](https://confluence.portlandwebworks.com/display/CHHS/Links+to+User+Testing+and+Wireframes)
- Prototypes reviewed by external users - Link
D. Used at least three “human-centered design” techniques or tools
Multiple human-centered design techniques were used in the development of the PoC. These included:
- Creation of wireframes - Link
- Creation of "user stories" - Link
- Creating a Product Backlog - Link
- Sharing findings with the team and incorporation of feedback - Link
- Use of a simple and flexible design style guide - Link
- Usability testing of wireframes - Link
- Usability testing of prototypes - Link
E. Created or used a design style guide
A visual style guide was created by the designer to define styles, colors, fonts, etc. Link
F. Performed usability tests with people
Usability tests were performed at several points in the development process, including:
- Internal testing of initial concepts
- Testing of wireframes - Link
- Testing of working prototypes - Link
G. Used an iterative approach
Our iterative approach consisted of the following steps:
- Set up team collaboration site in Confluence – Link
- Feedback on the PoC sought and incorporated throughout
- Use of Scrum methodology
- On-going grooming of the product backlog
- Development and code reviews completed within a single Sprint
- Sprint Demo for review by Product Owner
H. Responsive Design
The PoC has been developed as mobile-responsive. Quality Assurance testing assured that the PoC matched business requirements:
- Leveraged JIRA plugin test case application called Zephyr - Link
- Regression testing of desktop, mobile, and tablet
- One test case for each user story
- If the test case passes, the story is closed; if it fails, a subtask is created and it is retested
- Fixes not addressed were added to the Backlog for future enhancements
I. Used at least five modern and open-source technologies
Numerous open-source technologies have been utilized. They include:
- HTML/SASS/CSS - front-end layout and styling
- AngularJS 1.5.5 - client site interaction and application logic
- Node/NPM with Bower+Gulp - Manage JS dependencies and SASS/JS build tasks
- Spring Boot with Hibernate / JPA and Jersey - server side logic
- Liquibase - Database schema migration source control
- TravisCI - continuous integration
- MySQL - data storage
J. Deployed the prototype on PaaS
The PoC has been deployed to Google Cloud Container Engine Link. The Container Engine is built on the open source Kubernetes system, providing flexibility to take advantage of on-premises, hybrid, or public cloud infrastructure. Many cloud providers are working to integrate Kubernetes into their platforms such as Red Hat, Microsoft, IBM, OpenStack, and VMware. Kubernetes can also be deployed to Amazon GovCloud. Kubernetes also has a number of other benefits such as the ability to automatically scale based on real-time user demand. Please see the kubernetes (https://github.com/portlandwebworks/chhs-prototype/tree/develop/kubernetes) folder for a functional demo of the code used to provision the prototype environment.
K. Developed automated unit tests for their code
JUnit and EasyMock were utilized to cover unit testing needs while utilizing Spring based design methodologies to help write testable code. First pass integration tests were also established using the following technologies:
This setup allows easy build-out of an automated test suite that would be used as regression-level tests and automated on the integration server.
For integration tests, happy-path testing was conducted on stories CP-21, CP-16, CP-19, and CP-23 using Protractor, Cucumber, and Selenium:
- Cucumber employs base steps using pseudo-human-readable scripts
- Selenium drives automation in browsers
L. Used a continuous integration system
This project is leveraging Travis CI for its build environment. All code pushed to GitHub is automatically run in Travis, and if there are any test failures the team is notified in the project's Slack channel. If there are no test failures, the most recent code is automatically deployed to the Continuous Integration environment. Travis CI also handles deployment of the Docker images to a public repository, and it integrates directly with Kubernetes to release the most recent version of the Docker images.
M. Used configuration management
By utilizing Kubernetes, we are able to deploy and update secrets and application configuration without rebuilding the Docker image and without exposing sensitive data in your project source code.
N. Setup or used continuous monitoring
This project is monitored using Google Stackdriver; the monitoring tools are built into the Google Cloud Platform. Additionally, Stackdriver Logging aggregates and analyzes all of the logs from the deployed containers. The following tests are in place:
- URL Monitoring - Tracking and alerting on the availability of the front end and backend services
- Disk Throughput - Monitoring the disk usage on the Kubernetes nodes. Alerting if throughput is sustained near the maximum
- Cluster CPU - Monitoring the CPU of the Kubernetes cluster
O. Deployed their software in a container
This project is deployed using Docker container technology. This allows the application to be portable between most major cloud providers, as well as providing a consistent environment between development and production.
P. Provided sufficient documentation to install and run their prototype on another machine
The README.md file located in the repository contains complete instructions for deploying and running the prototype on another machine.
Q. Prototype and underlying platforms used to create and run the prototype are openly licensed and free of charge
All of the tools used to create and run the prototype are openly licensed and free of charge and are commonly used by the Portland Webworks development team.
Copyright 2016 Portland Webworks. All rights reserved.
|
OPCFW_CODE
|
Monitoring important log files on multiple linux hosts?
I have a few servers running on AWS and have Nagios/Icinga doing the monitoring of all critical services.
We're trying to figure out the best way to monitor all logs - system, DB, PHP, Apache, etc - on the system so we know about issues (e.g., that Apache reached the max_clients threshold yesterday) immediately via email. We currently only look at logs after a service goes down, not before, which is bad.
I'm new to Linux administration and I've identified the following options after a search online:
Nagios scripts to monitor logs - The problem is most of them check one log file for one specific regex at a time. It's not scalable to install one service for each log file (I don't even know all the log files we have to monitor!)
A service such as logrobot.com - I'm not sure how effective this is though.
Appreciate your advice on what's the best way to monitor all these logs on multiple servers with minimal configuration.
After spending a couple of days searching ("log management solutions"), I discovered just the tools I was looking for. The following three tools are cloud-based logging tools and are easy to set up and configure. They ship system logs and custom logs to their servers, store them, let you search them, and let you set up email/webhook alerts for regex patterns.
Papertrail - the simplest/quickest interface by far (like tail -f on a terminal). Extremely affordable pricing as well. You'll have to spend some time configuring it for custom logging (Apache, MySQL, your application) though. Their log shipper written in Go (in beta as of today) is very memory efficient, and I can deploy the list of log files it has to monitor through a Git repo.
Logentries - also quite simple. Easiest for setting up custom logging through their 'le' daemon. It has quite a few features, and this made it seem bloated compared to Papertrail. Their free plan is quite extensive for startups.
Loggly - Offer everything the other two do, but it was quite complex to go through this. And their free plan doesn't offer alerts.
I don't know how many servers/logs you have to monitor, but there are many solutions out there
small environment
Use rsyslog and a frontend you like (ex. LogAnalyzer http://loganalyzer.adiscon.com/)
bigger environment
We monitor our server logs from more than 300 systems with Beaver as log shipper, Logstash as indexer, and Elasticsearch as backend.
This solution scales up to [insert random number here] hosts ;)
Thanks for this. Do you also get alerts from any of those daemons on critical or other errors? If you could point me to a tutorial on setting these up, would be much obliged.
We store our logs with custom severity in elasticsearch and use a (also custom) django/python webfrontend for monitoring and alerting.
Tutorials for a setup of logstash+elasticsearch should be easily found with a search engine of your choice.
Basically you should not (at least not only) read the logs on the same host but instead use some sort of logserver which would get all the logs of the servers centralised.
I used this setup to be sure the logs aren't altered after they were entered.
Additionally just use logcheck and let it check the logs for you.
Basically it's a check for lines you find acceptable and can be ignored; it only sends you the ones you did not tell logcheck to ignore beforehand.
You can easily install it on every server.
For a graphical version that counts how many severe log entries there are etc., LogZilla is a nice option, though it is not free anymore.
With regards to logrobot.com, there's now a free version of it that does exactly what you need and it can be downloaded here:
http://www.logxray.com/logxray.zip
To use it to address your concerns, you can run logxray this way:
./logxray localhost:emailing /apps/logxray autonda /var/log/messages 60m 'kernel|error|panic|fail' 'timed out' 1 2 -show error_check <EMAIL_ADDRESS>
To monitor multiple logs or specific logs within a specific directory:
./logxray localhost /apps/logxray autoblz /var/log 60m 'panic|error' '.' 1 1 directory_error_watch -ndfoundn
http://www.logXray.com (for more information or documentation on how to use the tool)
|
STACK_EXCHANGE
|
At CodersRank, we love collaborating with other developers who create content. On our Blog, we try to write articles that are interesting and helpful for the developer community. If you have a great idea for a post and would like to write for us, please follow the guidelines below.
What topics do we currently accept?
All articles should be about how to become a programmer or how programmers can improve themselves. More specifically:
- CodersRank use cases. Showcase how you use CodersRank through a tutorial or a walkthrough (think: creative uses of our API or widgets).
- CodersRank comparison articles. Provide an honest review and showcase differences between CodersRank and similar developer platforms.
- Developer tool listicles. Provide a roundup of tools that make developers’ lives easier. Keep the roundup to one category. For example, tools for productivity, automation, scrum, general learning, etc.
- Developer careers. Anything from tips on how to get a better job as a developer, personal stories, or entrepreneurship. Tips for newbies or senior developers are both welcome.
- Developer health & productivity. Gather your favorite hacks for being a healthier, more productive developer.
- Open source. Tools to use, collaboration tips, or similar articles.
- Futurism. Write about your predictions for the software development scene. AI, machine learning, robotics, and other futuristic elements you think will affect developers’ lives in some way.
What are the guest posting guidelines on CodersRank?
All posts must be original and unpublished elsewhere. They should be 1,000-2,500 words in length and should be well-researched and well-written. If you’re referencing stats, only include figures from 2019 or later.
You should follow these guidelines when formatting your posts:
- Title. Use the H1 heading.
- Subheadings. Stick to H2 and H3 subheader levels.
- Paragraphs. Left-aligned, easy-to-read, with bullet points where needed.
- Keyword. Focus on one keyword but stay away from keyword-stuffing. Keep it as natural as possible.
- Google Docs. Write your post with Google Docs.
- Images. Make all images 1360 pixels wide. Remember to include a folder with the image files when submitting your post to us. Only use images that are in Creative Commons or that you have the right to use (unsplash.com, undraw.co or screenshots are OK).
At the end of your article, please include the following assets about yourself:
- A short bio (50-100 words)
- A clear, high-quality profile photo
- A link to your website/GitHub/LinkedIn
What are the benefits of guest posting on CodersRank?
CodersRank is a tool that helps developers like you showcase their experience in various languages. The platform creates a coding profile for you that’s a true representation of your progress as a developer.
You can connect your GitHub, Stack Overflow, or LinkedIn accounts to make your profile more accurate.
Through the platform, our 50,000-strong developer community is hoping to change how tech hiring is done forever.
If you agree that tech hiring can be better, the CodersRank Blog might be the place for you to express those feelings. Feel free to bring an opinion, or a unique take, too (but no obscenity, please).
We are interested in seeing your take on our topics as long as you contribute to the developer community in a meaningful way.
In other words, while it's OK to have an opinion, you should also make sure to give actionable advice in your articles.
Writing for us is also a great opportunity to build your personal brand as a developer in your selected niche or specialty.
You’ll be visible to:
- About 7,000 unique visitors per month
- Up to 10,000 combined followers on social media
- Potentially 50,000 CodersRank users
- Up to 2,000 tech recruiters via our email list
Guest post submission process
- First, submit your pitch. A guest post pitch consists of an article title + description + outline (all the H2 and H3 subheadings).
- Wait for an acceptance email or a rework request. We try to respond to every pitch but please bear with us during busy times. If your pitch is accepted, we’ll send you a confirmation email with a proposed deadline for the piece.
- Write the article.
- Submit the article. Send us your completed article + image files + author assets in one Google Drive folder.
- Share your piece with the world. We’ll be sharing your article far and wide, but you should also show the world the amazing content you created!
Where to submit your pitch
Please send your guest posting pitches to info(at)codersrank.io with the subject line “Guest post pitch.”
Thank you for your interest in guest posting on the CodersRank blog. We look forward to hearing from you!
|
OPCFW_CODE
|
Puts a message on a queue.
register queue_t * q;
register mblk_t * bp;
The putq utility puts the message pointed to by the bp parameter on the message queue pointed to by the q parameter, and then enables that queue. The putq utility queues messages based on message-queuing priority.
The priority classes are:
|type >= QPCTL||High-priority|
|type < QPCTL && band > 0||Priority band|
|type < QPCTL && band == 0||Normal|
When a high-priority message is queued, the putq utility always enables the queue. For a priority-band message, the putq utility is allowed to enable the queue provided the QNOENAB flag is not set. Otherwise, the QWANTR flag is set, indicating that the service procedure is ready to read the queue. When an ordinary message is queued, the putq utility enables the queue if the following condition holds and enabling is not inhibited by the noenable utility: the module has just been pushed, or else no message was queued on the last getq call and no message has been queued since.
The putq utility looks only at the priority band in the first message block of a message. If a high-priority message is passed to the putq utility with a nonzero b_band field value, the b_band field is reset to 0 before the message is placed on the queue. If the message passed to the putq utility has a b_band field value greater than the number of qband structures associated with the queue, the putq utility tries to allocate a new qband structure for each band up to and including the band of the message.
The putq utility should be used in the put procedure for the same queue in which the message is queued. A module should not call the putq utility directly in order to pass messages to a neighboring module. Instead, the putq utility itself can be used as the value of the qi_putp field in the put procedure for either or both of the module qinit structures. Doing so effectively bypasses any put-procedure processing and uses only the module service procedures.
Note: The service procedure must never put a priority message back on its own queue, as this would result in an infinite loop.
|q||Specifies the queue on which to place the message.|
|bp||Specifies the message to put on the queue.|
On successful completion, the putq utility returns a value of 1. Otherwise, it returns a value of 0.
This utility is part of STREAMS Kernel Extensions.
The getq utility.
List of Streams Programming References and Understanding STREAMS Messages in AIX 5L Version 5.1 Communications Programming Concepts.
|
OPCFW_CODE
|
from pydes.core.simulation.model.cloud import SimpleCloud as Cloud
from pydes.core.simulation.model.cloudlet import SimpleCloudlet as Cloudlet
from pydes.core.simulation.model.controller import ControllerResponse
from pydes.core.simulation.model.event import EventType
from pydes.core.simulation.model.event import SimpleEvent as Event
from pydes.core.simulation.model.scope import ActionScope, SystemScope, TaskScope
from pydes.core.utils.logutils import get_logger
# Logging
logger = get_logger(__name__)
class SimpleCloudletCloudSystem:
"""
A system composed by a Cloudlet and a Cloud.
"""
def __init__(self, rndgen, config, metrics):
"""
Create a new system.
:param rndgen: (object) the multi-stream rnd number generator.
:param config: (dictionary) the System configuration.
:param metrics: (SimulationMetrics) the simulation metrics.
"""
# State
self.state = {sys: {tsk: 0 for tsk in TaskScope.concrete()} for sys in SystemScope.subsystems()}
# Metrics
self.metrics = metrics
# Subsystem - Cloudlet
self.cloudlet = Cloudlet(rndgen, config["cloudlet"], self.state[SystemScope.CLOUDLET], self.metrics)
# Subsystem - Cloud
self.cloud = Cloud(rndgen, config["cloud"], self.state[SystemScope.CLOUD], self.metrics)
# ==================================================================================================================
# EVENT SUBMISSION
# * ARRIVAL_TASK_1
# * ARRIVAL_TASK_2
# * COMPLETION_CLOUDLET_TASK_1
# * COMPLETION_CLOUDLET_TASK_2
# * COMPLETION_CLOUD_TASK_1
# * COMPLETION_CLOUD_TASK_2
# ==================================================================================================================
def submit(self, event):
"""
Submit an event to the system.
:param event: (SimpleEvent) the event
:return: ([s],[u]) where
*s* is a list of events to schedule;
*u* is a list of events to unschedule.
"""
response_events_to_schedule = []
response_events_to_unschedule = []
if event.type.act is ActionScope.ARRIVAL:
# Submit the arrival
e_completions_to_schedule, e_completions_to_unschedule = self.submit_arrival(event.type.tsk, event.time)
# Add completions to schedule
response_events_to_schedule.extend(e_completions_to_schedule)
# Add completions to unschedule
response_events_to_unschedule.extend(e_completions_to_unschedule)
elif event.type.act is ActionScope.COMPLETION:
# Submit the completion
self.submit_completion(event.type.tsk, event.type.sys, event.time, event.meta)
else:
raise ValueError("Unrecognized event: {}".format(event))
return response_events_to_schedule, response_events_to_unschedule
def submit_arrival(self, tsk, t_now):
"""
Submit the arrival of a task.
:param tsk: (TaskType) the type of task.
:param t_now: (float) the arrival time.
:return: (s,u) where
*s* is a list of events to schedule;
*u* is a list of events to unschedule;
"""
e_to_schedule = []
e_to_unschedule = []
# Process arrival
controller_response = self.cloudlet.controller.process(tsk)
if controller_response is ControllerResponse.SUBMIT_TO_CLOUDLET:
logger.debug("{} sent to CLOUDLET at {}".format(tsk, t_now))
t_completion = self.cloudlet.submit_arrival(tsk, t_now)
e_completion = Event(
EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk), t_completion, t_arrival=t_now
)
e_to_schedule.append(e_completion)
elif controller_response is ControllerResponse.SUBMIT_TO_CLOUD:
logger.debug("{} sent to CLOUD at {}".format(tsk, t_now))
t_completion = self.cloud.submit_arrival(tsk, t_now)
e_completion = Event(
EventType.of(ActionScope.COMPLETION, SystemScope.CLOUD, tsk), t_completion, t_arrival=t_now
)
e_to_schedule.append(e_completion)
elif controller_response is ControllerResponse.SUBMIT_TO_CLOUDLET_WITH_INTERRUPTION:
tsk_interrupt = TaskScope.TASK_2
logger.debug("{} interrupted in CLOUDLET at {}".format(tsk_interrupt, t_now))
t_completion_1, t_arrival_1 = self.cloudlet.submit_interruption(tsk_interrupt, t_now)
e_completion_to_ignore = Event(
EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk_interrupt), t_completion_1
)
e_to_unschedule.append(e_completion_to_ignore)
logger.debug("{} restarted in CLOUD at {}".format(tsk_interrupt, t_now))
t_completion = self.cloud.submit_arrival(tsk_interrupt, t_now, restart=True)
# TODO check t_arrival=t_arrival_1 or t_now
e_completion = Event(
EventType.of(ActionScope.COMPLETION, SystemScope.CLOUD, tsk_interrupt),
t_completion,
t_arrival=t_now,
switched=True,
)
e_to_schedule.append(e_completion)
logger.debug("{} sent to CLOUDLET at {}".format(tsk, t_now))
t_completion = self.cloudlet.submit_arrival(tsk, t_now)
e_completion = Event(
EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk), t_completion, t_arrival=t_now
)
e_to_schedule.append(e_completion)
else:
raise ValueError("Unrecognized controller response {}".format(controller_response))
"""
if tsk is TaskScope.TASK_1:
if self.state[SystemScope.CLOUDLET][TaskScope.TASK_1] == self.cloudlet.n_servers:
logger.debug("{} sent to CLOUD at {}".format(tsk, t_now))
t_completion = self.cloud.submit_arrival(tsk, t_now)
e_completion = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUD, tsk), t_completion, t_arrival=t_now)
e_to_schedule.append(e_completion)
elif self.state[SystemScope.CLOUDLET][TaskScope.TASK_1] + self.state[SystemScope.CLOUDLET][TaskScope.TASK_2] < self.cloudlet.threshold:
logger.debug("{} sent to CLOUDLET at {}".format(tsk, t_now))
t_completion = self.cloudlet.submit_arrival(tsk, t_now)
e_completion = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk), t_completion, t_arrival=t_now)
e_to_schedule.append(e_completion)
elif self.state[SystemScope.CLOUDLET][TaskScope.TASK_2] > 0:
tsk_interrupt = TaskScope.TASK_2
logger.debug("{} interrupted in CLOUDLET at {}".format(tsk_interrupt, t_now))
t_completion_1, t_arrival_1 = self.cloudlet.submit_interruption(tsk_interrupt, t_now)
e_completion_to_ignore = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk_interrupt), t_completion_1)
e_to_unschedule.append(e_completion_to_ignore)
logger.debug("{} restarted in CLOUD at {}".format(tsk_interrupt, t_now))
t_completion = self.cloud.submit_arrival(tsk_interrupt, t_now, restart=True)
# TODO check t_arrival=t_arrival_1 or t_now
e_completion = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUD, tsk_interrupt), t_completion, t_arrival=t_now, switched=True)
e_to_schedule.append(e_completion)
logger.debug("{} sent to CLOUDLET at {}".format(tsk, t_now))
t_completion = self.cloudlet.submit_arrival(tsk, t_now)
e_completion = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk), t_completion, t_arrival=t_now)
e_to_schedule.append(e_completion)
else:
logger.debug("{} sent to CLOUDLET at {}".format(tsk, t_now))
t_completion = self.cloudlet.submit_arrival(tsk, t_now)
e_completion = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk), t_completion, t_arrival=t_now)
e_to_schedule.append(e_completion)
elif tsk is TaskScope.TASK_2:
if self.state[SystemScope.CLOUDLET][TaskScope.TASK_1] + self.state[SystemScope.CLOUDLET][TaskScope.TASK_2] >= self.cloudlet.threshold:
logger.debug("{} sent to CLOUD at {}".format(tsk, t_now))
t_completion = self.cloud.submit_arrival(tsk, t_now)
e_completion = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUD, tsk), t_completion, t_arrival=t_now)
e_to_schedule.append(e_completion)
else:
logger.debug("{} sent to CLOUDLET at {}".format(tsk, t_now))
t_completion = self.cloudlet.submit_arrival(tsk, t_now)
e_completion = Event(EventType.of(ActionScope.COMPLETION, SystemScope.CLOUDLET, tsk), t_completion, t_arrival=t_now)
e_to_schedule.append(e_completion)
else:
raise ValueError("Unrecognized task type {}".format(tsk))
"""
return e_to_schedule, e_to_unschedule
def submit_completion(self, tsk, scope, t_now, meta):
"""
Submit the completion of a task.
:param tsk: (TaskType) the type of task.
:param scope: (Scope) the scope.
:param t_now: (float) the occurrence time of the event.
:param meta: (dict) metadata associated with the completion event.
:return: None
"""
logger.debug("{} completed in {} at {}".format(tsk, scope, t_now))
# Check correctness
assert self.state[scope][tsk] > 0
# Process event
if scope is SystemScope.CLOUDLET:
self.cloudlet.submit_completion(tsk, t_now, meta.t_arrival)
elif scope is SystemScope.CLOUD:
switched = meta.switched if "switched" in meta.__dict__ else False
self.cloud.submit_completion(tsk, t_now, meta.t_arrival, switched)
else:
raise ValueError("Unrecognized scope {}".format(scope))
# ==================================================================================================================
# OTHER
# ==================================================================================================================
def is_idle(self):
"""
Check whether the system is idle or not.
:return: True, if the system is idle; False, otherwise.
"""
return self.cloudlet.is_idle() and self.cloud.is_idle()
def __str__(self):
"""
String representation.
:return: the string representation.
"""
sb = [
"{attr}={value}".format(attr=attr, value=self.__dict__[attr])
for attr in self.__dict__
if not attr.startswith("__") and not callable(getattr(self, attr))
]
return "System({}:{})".format(id(self), ", ".join(sb))
|
STACK_EDU
|
I'm new here. I recently started to read all the great reviews/articles on this and other sites.
A month ago, when I started my build, I did very little research because overclocking or making a "silent" PC was not an objective... but things change.
So my components are not ideal, but in trying to get the most from what I have I would like some advice...
My machine is a i7-960 @ 3.84GHz 1.2375V. It runs reasonably quiet @ 35.3 dbA and not too hot -- 43.5C idle and 78C on Prime. My background noise in the room is 29.3 dbA and the ambient temp was 25.5C that day. All fans on low.
I'm looking to bring it to 4GHz, but not at the expense of making it more noisy.
I ran a very quick trial earlier today. A first guess of 1.4V looked promising, but Prime brought my temps to 97-98 and I discontinued the test.
Here is my current set up:
i7-960 Bloomfield @ 3.84GHz 1.2375V
CORSAIR DOMINATOR (3 x 2GB) DDR3 1600 TR3X6G1600C8D
ZALMAN CNPS9900A LED
Antec Nine Hundred Two Black Steel ATX Mid Tower Computer Case
Extra Antec Tri-Cool fan on the side
XFX Black Edition XPS-850W-BES 850W
2 x ASUS Black 24X DVD+R
Koutech IO-RCM621 All-in-one USB Card Reader
Intel X25-M 80GB SSD
Logitech G500 10 Buttons Dual-mode Scroll Wheel Wired 5700 dpi Mouse
2 x XFX GeForce 8600GT Video Cards
4 x Western Digital Blue 250GB 7200 RPM in RAID 0+1
Here are my temps
Here are my dbA's at about 1m from the case
After some research I figure I would try the following:
1) Replace the Zalman (37C at the fins) with a Megahalems in a push (Noctua NF-P12) pull (Noctua NF-S12B FLX) configuration.
2) Replace the Antec back fan with a second Noctua NF-S12B FLX
3) Somehow add a small Noctua NF-R8 hovering over the memory modules (45C at the fins, 55C on the mobo left of the memory area)
4) Add a 4PST switch to turn off the HDD and front fans -- most of the time I only need the SSD. BTW, the HDD's run very cool at less than 30C, which is about the case temp.
5) Add Zener Diodes in series with the fans that remain to slow them down and make them quieter.
My main concern is:
Would the Megahalems push/pull configuration be louder than my Zalman with the "quiet" resistor? If it would be any louder I would drop #1, #2 and #3 -- just do #4 and #5 and live with the i7-960 @ 3.84GHz.
Any ideas if I should give it a try?
I want to keep it air cooled, cuz adding water starts a whole new set of challenges
Any other ideas/recommendations?
|
OPCFW_CODE
|
Cutting our webpack build times in half 🕰
The simple fact is our builds were slow, but so was our user experience
The first thing we did was to start measuring our bundles. We use Honeycomb to ingest data over time. In our case, we started recording build times and bundle sizes with some very simple webpack plugins. This allowed us to measure how large our bundles were over time.
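A rough sketch of what such a timing plugin can look like under webpack 4's hook API (this is not our actual plugin, and the reporting call is a placeholder; wire it to whatever metrics store you use):
// Minimal sketch of a build-time reporting plugin.
class BuildTimePlugin {
  apply(compiler) {
    compiler.hooks.done.tap('BuildTimePlugin', (stats) => {
      const durationMs = stats.endTime - stats.startTime;
      // Placeholder: send this to Honeycomb, statsd, a log file, etc.
      console.log(`webpack build took ${durationMs} ms`);
    });
  }
}

module.exports = BuildTimePlugin;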
One of our big problems was duplicate dependencies, both inside 1 singular bundle, but also 1 singular page. We used some techniques I’ve previously blogged about, to understand what is in our bundle.
We noticed a single bundle contained multiple copies of the same dependency. This is due to how npm resolves dependencies. Essentially, if an application depends on react 16.6 but a library has a hard dependency on react 16.8, you could have 2 copies of react. In our case, we had created libraries with pinned dependencies. To put it another way, our libraries were not permissive about which versions of react they required. So our applications would end up with multiple copies of dependencies. This caused many bundles to be twice the size they should be, hurting users and developer productivity alike. We fixed this by making sure our libraries had carets in front of their version numbers. This tells npm that any minor or patch version can be used.
Webpack has a plugin called the DLL plugin. This plugin is designed to allow a common dependency to live in a single file. In our case, almost all our bundles had a copy of react, many had jquery, and quite a few had a large library called High Charts. We DLL’d the most common dependencies, which cut our compile time from 25 minutes to 15. We also saw huge reductions in page size, shaving over 1mb on some of our older pages. This also helps users who come back to our site, as now they get cache hits on react and other dependencies we don’t change often.
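A rough sketch of the DLL setup, with illustrative entry names and paths (not our real build): one config builds the shared vendor bundle and emits a manifest, and the main app config references that manifest.

// webpack.dll.config.js — build the shared vendor bundle once
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { vendor: ['react', 'react-dom', 'jquery', 'highcharts'] },
  output: {
    path: path.resolve(__dirname, 'dll'),
    filename: '[name].dll.js',
    library: '[name]_dll',
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_dll',
      path: path.resolve(__dirname, 'dll', '[name].manifest.json'),
    }),
  ],
};

// In the main webpack config, reference the emitted manifest:
// new webpack.DllReferencePlugin({ context: __dirname, manifest: require('./dll/vendor.manifest.json') })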
Cheap source maps
One thing we’ve found is we have a lot of code. Many packages do not get active development day to day. However, developers were paying the cost of compiling all that code. We found that having source-maps turned on almost doubled the time it took for much of our codebase to compile. Switching over to cheap-eval-source-map cut our build times down from 15 minutes to 7 minutes.
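The change itself is a one-line devtool setting; a minimal sketch, assuming you switch on NODE_ENV (the env check is illustrative):

module.exports = {
  // line-only, eval-based source maps are much faster to generate during development
  devtool: process.env.NODE_ENV === 'production' ? 'source-map' : 'cheap-eval-source-map',
};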
We use yarn workspaces and what’s known as a mono-repo. Essentially our entire codebase is in one git repo, divided into npm packages that all resolve in a single filesystem. In the past developers had to compile the entire codebase when changes were made. We created our own webpack plugin that detects which files have changed and only recompiles the bundles specific to what has changed. Our plugin is a lot like the Hard Source plugin, which you can use to do the same thing. By detecting what has changed, and only recompiling the changed files, we cut many of our builds from 7+ minutes to 30 seconds.
We also got some wins with:
- Using thread loader to multi-thread our babel compiles
- Preventing the node_modules directory from being processed in babel (see the sketch after this list)
- Upgrading to babel 7
- Upgrading to webpack 4
- Setting uglify’s parallel flag
- Setting the site up with a standard polyfill set
- Turning off split chunks in development mode
- Turn off PostCSS in dev
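As a rough illustration of three of the items above (thread-loader, keeping babel out of node_modules, and uglify's parallel flag), here is a hedged webpack 4 sketch; the file patterns and plugin versions are assumptions, not our actual config.

const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/, // don't pay the babel cost for third-party code
        use: [
          'thread-loader', // runs the babel transforms in a worker pool
          'babel-loader',
        ],
      },
    ],
  },
  optimization: {
    minimizer: [new UglifyJsPlugin({ parallel: true })], // minify with multiple processes
  },
};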
Things you can use (but we can’t)
We’ve got a bunch of babel and webpack plugins which prevent us from using these things. However, you may have some luck with them.
Turn on babel disk caching in the babel loader
- This would net us 5 minutes back on clean builds, but we’ve got some plugins to rewrite
- Stop using
- Turn off
- Create a separate runtime chunk
- Use webpack’s cache loader for CSS transforms
Developers are happy ✅
|
OPCFW_CODE
|
nsupdate failing on localhost - Bind 8
I have added a zone test.net via rndc locally and it is working fine. Next, I want to update it locally via nsupdate. My zone configuration is:
zone test.net {type master; file "zones-remote/masters/test.net" ; allow-update{localhost;};};
When I do this,
nsupdate
server localhost
zone sample.test.net
update add sample.test.net 86400 A <IP_ADDRESS>
send
It gives the error "update failed: NOTAUTH"
Checking it via show, prior to send gives,
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; ZONE SECTION:
;test.net. IN SOA
;; UPDATE SECTION:
sample.test.net. 86400 IN A <IP_ADDRESS>
When I try,
nsupdate
server localhost
zone test.net # Actual zone name
update add sample.test.net 86400 A <IP_ADDRESS>
send
then the error "SERVFAIL" appears.
My zone file looks like this,
@ 86400 IN SOA test.net. sampling.gmail.com. (
<PHONE_NUMBER>
3h
1h
1w
30m86400s)
@ 84600 NS ns1.test.net.
@ 84600 IN A <IP_ADDRESS>
ns1 84600 IN A <IP_ADDRESS>
This zone file is correct and it resolves the query against its domain.
Do you have anything in your log file ?
I don't have a nsupdate.log file on my system.
Modifying your original message without a warning is not really good, especially when the error message changes that much...
Depending on your OS, restart your named server (or rndc reload) and take a look at its log file (/var/log/named.log or /var/log/daemon.log for example) and add the logs to your question.
Also add the content of your zone.
It seems that you have misconfigured your zone and it is not seen as an authoritative zone (thus the NOTAUTH) error.
Apologies, I have added a description now. On it.
Np. Got any logs to show ? The NOTAUTH was normal (non existent zone). Without the logs, I'm afraid I have no idea about the servfail :-/
BIND 8 is unsupported. Is there a good reason why you are not using BIND 9?
When you specify zone, you are defining the "origin" for all transactions that follow. The record names that you specify are assumed to be relative to this origin unless a trailing dot is present.
With zone sample.test.net, the record should be @ or sample.test.net..
With zone test.net, the record should be sample or sample.test.net..
The SERVFAIL happens because your requested record mapped out to sample.test.net.test.net., which falls outside of your defined sample.test.net. zone.
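For example, keeping the placeholder from the question, a corrected session using the actual zone and an absolute (dot-terminated) record name would be:

nsupdate
server localhost
zone test.net
update add sample.test.net. 86400 A <IP_ADDRESS>
send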
I'm less certain of why you're getting NOTAUTH for the first request (sample.test.net.sample.test.net. falls within sample.test.net.), but I can't spend a whole lot of time speculating what is going on there when you're running an unsupported version of BIND. Ensure that both your nsupdate client and the server are running supported versions, and update your question if the problem persists.
After giving the update command in this style it is giving the NOTZONE error, while my zone is added in bind and the dig resolves the domain. I have tried this on bind 9.7 as well, but to no effect.
Plus, I am also using a key now.
|
STACK_EXCHANGE
|
SATSAGEN 0.5.0.2 is available
- HackRF One
- RTL-SDR Dongles
- Simple Spectrum Analyzer series like NWT4000, D6 JTGP-1033, Simple Spectrum Analyzer, and so on.
- Video trigger, real-time trigger, and fast-cycle feature
- ADALM-PLUTO custom gain table and Extended linearization table for all devices
- Transmit from raw format files
- I/Q balance panel
- RX/TX converter offset
- Video Filter average option
- Keyboard or mouse wheel moving markers
- Status Display
Waiting for you at https://www.albfer.com/en/2021/01/31/satsagen-0-5/
Let me know
I didn't, but I checked it now
After calibration with two different 50-ohm dummy load terminations: the yellow is the load from my calibration kit, the red one is from eBay. Both are cheap, and both are rated to 6 GHz.
the red curve of the 50-ohm dummy load from eBay is too low, the yellow is more correct according to the directivity value of the coupler.
Can you post the result of the antenna?
OK, after testing as you advised in your picture, the problem is the same: inconsistent results at 5.8 GHz that do not look like the brand's official antenna plots. I think it could be the Pluto SMA cable; it is rated to 3 GHz, and if I pass my hand close to it the trace reacts quite energetically, so there is no way my antennas are not affected by it. It could also be bad SATSAGEN settings on my part; just to be sure I tested my attenuators, directional coupler and the rest, and to me everything looks decent up to 6 GHz. Under 3 GHz all looks good.
After an open calibration, did you run a test with a 50 ohm load? Did you notice linearity over the whole frequency range?
Thank you so much for your answer, it made many things clear to me.
I misunderstood the -40dB calibration; I thought it was the load calibration of a VNA. Where do you document these features? Yeah, it would be cool to be able to check the impedance; I have no idea how important that is, I guess I will discover it soon XD.
When you say short or open, do you mean done with the calibration kit SMA dummies on the directional coupler output, with its coupled port always connected by the cable to RX (which is what I'm doing)? Or do you instead mean closing a loopback (short?) and then leaving it open (open avg) with the SMA cable disconnected? If this last is the right way, I guess the loopback is done from the coupled port, and if so should I put a short or load SMA dummy on the output (where I will connect the antennas) while the calibration is performed?
My couplers are OK, they are Narda or MAG with <1.5db insert loss and 25db of directivity. I have covered from 10Mhz to 8.6Ghz, I have cheap but good attenuators up to 6Ghz.
Thanks to your comment I think I found the cause of the weird readings: I was close to the antennas, and the antennas and the Pluto were on an aluminium table close to an LCD TV. It only happened at 5.8 GHz ISM; at 900 MHz all was fine.
I made a provisional stand to test antennas, I will share how it works compared with the SWR antenna brand's charts.
What is sure is that I will learn a lot faster with your software and this awesome device.
I really went crazy looking for information so I wouldn't need to ask for help, but I couldn't find it. I'm sorry for asking all these questions with my poor English.
Yes, what you are doing is right.
This is open, for example:
Thank you for your comment
We have added the -40 calibration feature to mitigate the imperfect region around -40dB caused by the internal crosstalk of Pluto. The -40 calibration requires adding a good 40dB attenuator in the loopback after the 0dB calibration. SATSAGEN fires an error when it reads an amplitude above the -40 region. Leave the -40 calibration aside for the moment for your purpose of measuring return loss with a coupler.
In my opinion, the problems you noticed should be due to the directional coupler. I suggest you find the best coupler for the characteristic of the DUT, in your case, the antenna. You should choose the coupler with the range frequency nearest your antenna, also this corresponds to the highest directivity value of the coupler.
If the directivity of the coupler is not sufficient relative to the typical return loss of the antenna, the results will be in error.
The coupler directivity should be almost 10dB higher than the typical return loss of the antenna.
In my opinion, you should consider a directional coupler with about 23dB or higher of directivity.
In the picture of your post, the setup of Pluto and coupler is correct. With this setup (open), you can do a 0dB calibration, that it is sufficient to measure the return loss next.
The RX gain and TX power levels should be defined in the range where you use more of the dynamic of Pluto and less internal crosstalk.
I suggest you start with 30 of RX gain and -30 of TX pwr.
I suppose the antenna should be connected directly to the coupler because cables and connectors could affect the readings, but the antenna should be possibly free from interaction with other near objects. I know, it's not easy to do! Moreover, the Pluto without a metal case can be affected by antenna emissions!
In your picture, I see dipole antennas. Is the impedance matching of these antennas well done? If we ever manage to implement the VNA on SATSAGEN, maybe we will measure this too!
Some of the undesired components and images derive from the combination of the nearby RX and TX local oscillators; unfortunately, this is a shortcoming of the Pluto's transceiver, as confirmed by ADI. These undesired components are not present at some span settings because the LO RX runs in sweep-tuned mode and they fall outside the chunks. Keep in mind that this behavior is mainly due to using RX and TX in a loopback configuration with nearby LOs.
Thank you so much, Rolf!!!
that connection string is in uri format, you should enter exactly this below:
if you refer to the connection to the device via IP, you can specify a uri in Settings-> Devices, e.g. IP: 192.168.2.1, as an alternative to choosing a local device.
the new version 0.2.2 of SATSAGEN is available at the link below:
Highlights of v. 0.2.2:
- Extension of analyzed band from 6 GHz to 12 GHz and beyond! (harmonic third mode)
- Multiplier offset RX/TX TSA in order to test amplifiers or multipliers
- Offset between transmitter and receiver in order to test conversion systems, like transverters
- Display and modify SA resolution bandwidth
- Save/Load TSA scans (by memory or files)
- VSWR unit
- Multiple TSA markers
- Multiple SA markers
- Calibration using a directional coupler or bridge, averaging open and short
- Generator/Sweeper Mode
- SA Span value - scale coherency
- Lock Zoom autoscale when TSA or SA restarts
- All knobs can be driven by the mouse wheel
- Pause and step functionality
- Output sweep ramp with external USB D/A
- Backup and restore configuration to and from a file.
- Connection string Override
- Frequency reference setting
See you soon!
|
OPCFW_CODE
|
I’m back after a month-long break of blog posting on Advent of Code! That’s actually not that bad, because I now have a fresh look on the solutions I have implemented. And when I say fresh look, I actually mean a critical look on the implementations. So what’s going to be shown in this blog post is actually a refactored version of what I did on that particular day.
On Day 11, we have to treat some sort of seating system where a seat is vacant or not based on some specific rules. Let’s take a look at the example we are given:
L.LL.LL.LL
LLLLLLL.LL
L.L.L..L..
LLLL.LL.LL
L.LL.LL.LL
L.LLLLL.LL
..L.L.....
LLLLLLLLLL
L.LLLLLL.L
L.LLLLL.LL
In this example, we are told that (.) represents floor space that never changes state, (L) represents an empty seat, and (#) represents an occupied seat. The rules used to determine if a seat is vacant or occupied are the following:
- If a seat is empty (L) and there are no occupied seats adjacent to it, the seat becomes occupied.
- If a seat is occupied (#) and four or more seats adjacent to it are also occupied, the seat becomes empty.
- Otherwise, the seat’s state does not change.
What’s funny about this problem is that, quite surprisingly, these rules eventually converge to a stable state where seats don’t change state anymore. We are thus told to determine how many seats are occupied once the system stabilizes.
If we have to break down this problem, here’s what we might want to do:
- How should we be representing the seating space? A simple list of strings should do the trick. Let's call it previous_seats.
- We know that we have to loop the process of changing the state of some seats. Since we do not know how many iterations are necessary to reach the stable state, we know for sure that we’re going to use a while loop.
- Then, what is the condition to stop looping? Reaching a stable state means that the system is identical from an iteration to the next. This means that we probably will have to declare another variable to represent the seat space after we have applied the rules (let’s call it next_seats).
- This poses the following question: how should I create the contents of next_seats? To answer this question, we can recall what we said on Day 8 about copies. Because we are manipulating a list of strings, which are immutable in Python, we are safe with shallow copies.
- Now, what is the body of our loop? In our loop, we wish to look at the seats in previous_seats and create the contents of next_seats. To do so, we need to:
- Loop through each row of seats, and then loop through each seat in this row;
- For each seat in a row, we need to compute the number of occupied seats;
- Based on this number of occupied seats, we need to update (or not) the state of the current seat.
- Then, when the while loop is over, we can count the number of occupied seats, and we should be getting our result.
With all that, we can build the following solution to Part 1:
def seat_is_empty(seat):
    return seat == "L"

def seat_is_occupied(seat):
    return seat == "#"

def count_neighbours(previous_seats, seat_row, seat_column):
    nb_rows, nb_columns = len(previous_seats), len(previous_seats[0])
    # We create a list of positions adjacent to the current seat position
    adjacent_positions = [(seat_row - 1, seat_column - 1), (seat_row - 1, seat_column),
                          (seat_row - 1, seat_column + 1), (seat_row, seat_column - 1),
                          (seat_row, seat_column + 1), (seat_row + 1, seat_column - 1),
                          (seat_row + 1, seat_column), (seat_row + 1, seat_column + 1)]
    occupied_adjacent_seats_count = 0
    for row_pos, column_pos in adjacent_positions:
        if not (0 <= row_pos < nb_rows and 0 <= column_pos < nb_columns):
            continue  # Invalid coordinates
        target_check = previous_seats[row_pos][column_pos]
        if target_check == "#":
            occupied_adjacent_seats_count += 1
    return occupied_adjacent_seats_count

# Suppose we have parsed the input into a `seats` variable (a list of strings)
nb_rows, nb_columns = len(seats), len(seats[0])
previous_seats = seats
next_seats = None
while previous_seats != next_seats:
    # These lines are here to make sure that the rules are applied correctly:
    # updating `previous_seats` at the end of the loop would cause the loop to stop immediately after one iteration
    if next_seats:
        previous_seats = next_seats
    next_seats = []
    for i in range(nb_rows):
        # We initialize an empty row that we will append to `next_seats`
        row = ""
        for j in range(nb_columns):
            if seat_is_empty(previous_seats[i][j]) and count_neighbours(previous_seats, i, j) == 0:
                row += "#"
            elif seat_is_occupied(previous_seats[i][j]) and count_neighbours(previous_seats, i, j) >= 4:
                row += "L"
            else:
                row += previous_seats[i][j]
        next_seats.append(row)

# When the loop is over, we can count the number of occupied seats.
occupied_seats = [seat for values in next_seats for seat in values].count("#")
print(occupied_seats)
Part 2 says that the rules are slightly modified:
- Instead of adjacent seats, we are now looking at all directions until we see a seat. This means that the algorithm needs to expand from neighbourhood to “first seat seen”.
- Instead of 4 or more adjacent seats, an occupied seat becomes vacant if there are 5 or more occupied seats in its sight.
Other than that, the seating area stabilizes again and we have to count the number of occupied seats when it has reached equilibrium.
Obviously, the second change does not really impact the algorithm we have built for Part 1. The real change here is that, instead of looking at the direct neighbours, we keep on looking in each direction until we find a seat.
We can then adapt what we have done for Part 1 to check for neighbours by adding a loop that continues until we reach a seat or the limits of the seating space. If we keep the rest of the program identical, here’s what we can do for the count_neighbours function:
def count_neighbours(seats, seat_row, seat_column):
    nb_rows, nb_columns = len(seats), len(seats[0])
    count = 0
    # adjacent_positions will allow us to "spread" the neighbourhood in all directions
    adjacent_positions = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for spread_row, spread_col in adjacent_positions:
        # We create `row_test` and `column_test` to reset the starting seat position each time we are
        # spreading in a new direction.
        row_test, column_test = seat_row, seat_column
        # The only way to stop this loop is to:
        # - Reach the borders of the seating space
        # - Reach a seat (vacant or occupied)
        while True:
            row_test, column_test = row_test + spread_row, column_test + spread_col
            if not (0 <= row_test < nb_rows and 0 <= column_test < nb_columns):
                break
            if seats[row_test][column_test] == "L":
                break
            if seats[row_test][column_test] == "#":
                count += 1
                break
    return count
The rest of the program should remain the same except for the rule to change an occupied seat to a vacant seat, so basically changing a 4 into a 5.
Concepts and Difficulties
This problem is actually an example of a quite common class of problems: cellular automata. What’s fascinating with these problems is that there are some configurations that converge to a stable state, just like what we have here. These can also produce really awesome animations if you were to show how the seats get occupied or not. See for example all the results of this Reddit search on /r/adventofcode.
There are again quite a few interesting points to cover in these solutions. First, we are looking at adjacent positions instead of adjacent seats. This actually prevents us from having issues with indices getting out of bounds as we can simply check for their values. Then, we once again have to think about what kind of copies we want to create in our solutions. And finally, we have to juggle through all the rules to produce a solution that takes care of the simultaneous application of all the rules. This is actually a key point: we don’t want to modify the previous state but instead want to create a new one (hence the need for a second variable).
|
OPCFW_CODE
|
Nativité 2017: creating a Facebook Messenger bot
A few days ago I described Nativité, a pastoral Christmas game which I made last year. I’m updating it for 2017, switching from SMS notifications to a Facebook Messenger bot. Here’s the latest game:
Last year, I used Firebase. This was mostly an excuse to try out Firebase. I found Firebase pleasant, but also quickly found its limitations. I wanted to send SMS messages on custom server-side events, and couldn’t figure out how to do this in a secure manner. I ended up making client-side calls to an external server to fulfil this, which is a horrible hack!
It also turns out SMS is expensive. There are four sheep in the game; each sheep moved a minimum of twelve times; and each move sends an SMS to the four girls and to me. I was using AWS SNS to send messages, which at the time was charging around 10c for each message to France. That’s a minimum of 4 × 12 × 5 × 10c = $24 in messaging costs! (Today’s SNS pricing for SMS is much cheaper and more uniform: around 0.64c per message to anywhere. But this is still expensive.)
When I was making Nativité last year, I was in Dedham, deepest rural Essex. Testing SMS required waving my phone around in the garden, freezing, trying to attract the attention of some distant cell tower. I would receive a few dozen test messages at once, and see my AWS bill grow a few dollars. (Since then, I’ve discovered “WiFi calling”, which seems to magically transfer cellular data over WiFi. But I didn’t know about it then!)
If instead I were to use Facebook Messenger, the end-user experience would be more pleasant, my bill would be $0, and I could test anywhere with an internet connection. So I did that.
To make a Facebook bot, I needed to switch out “serverless” for a more standard setup (Heroku for the server-side, Netlify for the client-side, and Pusher for some realtime magic). After reimplementing everything, the new client is hosted at https://nativite-2017.lantreibecq.com/.
A Facebook Messenger bot is a Facebook App
with the Messenger product added to it.
Mine is app
But I don’t think end-users see Facebook Apps directly.
Instead, Facebook Messenger bots communicate via a Facebook Page:
if you own both the app and the page,
you can give the app permission to communicate via the page.
The Page for Nativité is
(it was surprisingly hard to find a free unique name).
Facebook Pages have Messenger accounts which users can send messages to;
here is the Messenger account for
Pages can’t initiate conversations.
Users have to send a message to a Page before it can reply.
I’m using this as a “subscription” mechanism;
anyone sending a message to
TheChristChild is subscribed to all updates.
Facebook Apps have a review process. Before an App/Page can interact with the public, it must have gone through review. An exception to this is a list of “testers” which can be added to the app. Surprisingly, it seems that a user does not have to give permission to be added as a tester. So I added my partner and all her family as testers.
Oh, I also added a little crown to the sheep that’s currently winning. That’s all for this year.
Tagged . All content copyright James Fisher 2017. This post is not associated with my employer.
|
OPCFW_CODE
|
Returning to the topic of PSTs, but only because you can never have enough of these pestilent files, after I wrote about Microsoft’s new Office 365 Import service, I had the chance to chat with some of the companies who are dedicated to tracking PSTs down. These companies build tools to find and fix PSTs lurking in the dark corners of drives scattered around an organization to get the files into the necessary shape to become a candidate for ingestion into Office 365 or on-premises Exchange.
All acknowledge that Microsoft provides customers with a free PST Capture tool. And all agree that the free tool marks the lowest common denominator for what you’d expect in terms of the ability to ferret out hidden PSTs and transform the files. In short, if you pay zero for software, it’s unfair to expect that it will be very sophisticated.
I think this is a fair position. Microsoft has updated the PST Capture tool once since it was acquired, but I get no sense that Microsoft considers the tool to be anything other than something which allows them to tick a box when it comes to reassuring customers that Microsoft can help to eradicate PSTs. Certainly, there hasn’t been any great attention paid to PST Capture since it was originally acquired by Microsoft to support the introduction of archive mailboxes in Exchange 2010. And it never lived up to the “revolutionary” label promised for the tool in 2011.
You can certainly use PST Capture to search for PSTs and there are many articles, most written in the initial flush of enthusiasm after the tool was released (like this example from msexchange.org), to tell you how to use it. But I reckon that you’ll conclude that PST Capture requires a lot of manual attention to meet your needs.
Third party ISVs concentrate on tools that handle edge conditions, speed detection, and add automated workflow to make sure that PSTs can be found, copied, and processed without requiring a huge amount of time from administrative staff and users.
Edge conditions include things like being able to process password-protected PSTs – and not just those that are protected with the easily cracked compressible encryption that is used today. There are still a few PSTs generated by Outlook 2003 that use the “high encryption” method that is much harder to deal with.
Detection means being able to find PSTs no matter where users have squirreled them away. Invariably, this means that you have to deploy an agent onto user PCs to ensure that every drive is examined so that all PSTs can be harvested. Speaking of which, I was fascinated at some of the data about PST collection reported by the vendors. Look at the data shown below that’s taken from a PST acquisition exercise performed in a well-known major company.
The two items that stand out for me are the 922 PSTs uncovered for one user and the 343.4 GB of data found in 314 PSTs for another. It’s fair to ask how these situations might come about as you might not be able to understand how anyone could manage 922 PSTs. The answer lies in personal work practices and a distrust of IT, perhaps because of restrictive mailbox quotas or poor server reliability. Some folks create PSTs to archive items for individual projects, some create PSTs to archive items on a monthly basis, while others have their own weird and wonderful logic for creating new PSTs. The point is that these things happen in the wild and companies probably don’t realize that quite so much data is actually stored in PSTs on user-controlled storage. Scanning tens of thousands of PCs in a large company can uncover hundreds of terabytes of PSTs. All of that data are invisible for the purpose of enterprise search and compliance.
Finding so many PSTs and figuring out what to do with them can take a huge amount of administrator time. Tools that can scan for PSTs, copy them to a central holding area, and then prepare them for further processing by running scans to fix item-level corruption (multiple runs of the SCANPST utility or some proprietary code might be required) can reduce the required time, especially if you can schedule workflow tasks to scan, gather, and fix PSTs on an automatic basis. Add in optional processing such as deduplication of data across a set of PSTs, and you can see how third-party tools justify their license fees.
Taking the example of a company that has uncovered hundreds of terabytes of PST data, it’s likely that a lot of duplicated information exists in those files. Remember that a PST is a personal file, and if a message is sent to 100 users, it might result in 100 separate copies being stored in 100 PSTs. Deduplication is important if you plan to import the PSTs to Office 365 or Exchange on-premises because the last thing you want is to have to process massive chunks of duplicated information, especially if the data is going to be shipped across an Internet connection.
The most important thing that I learned from talking to the ISVs is that they have huge expertise and experience of dealing with the vagaries of PSTs and the many ways that people use these files. That experience ends up in their products. The advice that you can get from an ISV before starting a PST acquisition project will save time (and money) and usually results in a better outcome.
If you’re interested in using the new Office 365 import service and intend to gather user PSTs from near and far within your organization, take the time to go and talk to real experts before starting. I’m sure that the folks at QuadroTech (PST FlightDeck), Nuix (Intelligent Migration), TransVault (Migrator), Sherpa Software, and Archive360 (to name just a few of the ISVs working in this space) will be happy to talk to you.
And perhaps before focusing on the Office 365 Import service as the only way to transfer PST data to Office 365 mailboxes, have a look at what the ISVs can offer in this space. QuadroTech caught my interest when they announced results of tests that showed that their Advanced Ingestion Protocol (AIP) is able to process PST data six times faster than the Office 365 Import service. In addition, their ArchiveShuttle technology was able to do a better job of moving data into Office 365 because fewer "bad items" were dropped.
According to QuadroTech, the Office 365 Import service depends on the New-MailboxImportRequest cmdlet to import PST data, a cmdlet that is only available to on-premises Exchange 2010 and Exchange 2013 servers (and, I believe in dedicated instances of Office 365). The cmdlet is controlled by the Mailbox Replication Service (MRS), but the MRS logs that detail any problems found with PST items when processing are not exposed to administrators by the Office 365 Import service, so you never know if items fail to be ingested unless you compare data before and after. I've asked Microsoft to comment on these claims but have heard nothing back to date.
In any case, the reported deficiencies of the Office 365 Import service appear to be a similar case to the PST Capture Service, which is free to all, but has some shortcomings. If you pay extra to purchase a third-party product that specializes in an area, you get more features and functionality for that investment.
My discussions with ISVs working in this space proved once again that these companies perform an extremely valuable service to the ecosystem by filling gaps left by Microsoft. When you decide about the approach you take to eliminating PSTs, take the time to investigate what is available before making a commitment. You know it makes sense.
Follow Tony @12Knocksinna
|
OPCFW_CODE
|
Senior Capstone Studio is an advanced study of transdisciplinary, collaborative design processes to address real-world problems in sociotechnical innovation provided by clients from industry, business, government, and nonprofit organizations. Activities in this class include: Systems building; project leadership and management, including resource allocation and scheduling; team management; value propositions; project pitches; rapid prototyping.
This studio class uses a distributed model of instruction. Each team works directly with a faculty mentor. During Fall 2022 I advised three teams working on developing digital-physical systems to identify Foreign Object Debris (FOD) in manufacturing processes. Students developed multiple projects in collaboration with Boeing experts. These projects ranged from simulations to camera systems able to identify FOD.
A key aspect of this studio is the emphasis on technical and user-study evaluation. Student teams identify test metrics and provide detailed plans to test their designs. After performing the necessary tasks they provide methods for optimizing their designs.
Throughout the project development across multiple studios students document their projects in a report format. They share their reports with instructors and stakeholders for feedback. We encourage students to write publication ready reports that can be used to file patents and publish in research venues.
The peer-review for this studio happened synchronously and after milestone presentations. Students had the opportunity to attend the presentations and provide feedback to peers. External partners were present during the presentations as well which afforded the presenting teams a rich environment to gain insight about their project directions.
We held a showcase for all the students in the Senior Capstone Studio and Transdisciplinary Fusion Studio I in Fall 2022. The following set of images are from all teams presenting in the showcase.
"Virginia Tech and Boeing University-Industry Collaboration; use of a 4-Tier Sociotechnical Approach to Reduce Ergonomics Related Hazards." Shahabedin Sagheb, Gregory Garrett, and Robert Smith. Institute of Industrial and Systems Engineers Annual Conference & Expo (IISE 2022)Annual Conference. Proceedings: 1-6, 2022.
"Project-based development as a model for transdisciplinary research and education."Shahabedin Sagheb, Katie Walkup, and Robert Smith. Journal of Systemics, Cybernetics and Informatics 20.5 (2022): 17-32.
“Project-based Learning Using the Collaborative Sociotechnical Innovation Model” Shahabedin Sagheb, Amy Arnold, Robert Smith. In 14th Annual Conference on Higher Education Pedagogy).
"Coalitional Curricular Design: Industry-Academic Co-creation of Technical Communication Modules." Katie Walkup, Shahabedin Sagheb, and Robert Smith. In Proceedings of the 40th ACM International Conference on Design of Communication, pp. 140-142. 2022.
|
OPCFW_CODE
|
# Creating a New Geometry / Function Node
###### tags: `Blender`
In addition to a file that defines the function of a node, there are several files that need to be updated to reference that new code as well. This document will create a new geometry node called Stub. The instructions will also reference changes to the stub code for creating a function node.
BKE_node.h contains a mapping of a static number to a human-readable name that will reference the node. A search for GEO_NODE_ or FN_NODE_ (depending on the type of node) in this file will reveal a list of #define statements. At the bottom of this list choose an unused incremented number and assign it a name.
#define GEO_NODE_STUB 1111
**Your Node Source Code File**
This file will contain the logic and layout information for your node. This file will live in the folder appropriate to its type:
For the purposes of this documentation, we will create a geometry node stub called Stub.
The contents of this file will vary based on whether we are coding a geometry node or a function node. The main similarity will be a node registration function that the other files will reference. The layout of the main node file will be covered in another document.
static bNodeType ntype;
geo_node_type_base(&ntype, GEO_NODE_STUB, "Geometry Stub Example", NODE_CLASS_GEOMETRY, 0);
// OTHER Node definitions and function references here
There are a few items to note in this code.
1. The name of the registration function is laid out as follows:
* node_type_geo_ // or node_type_fn_
* stub // name of node
2. Declare a bNodeType variable.
3. The geo_node_type_base() function takes the following parameters:
* The bNodeType variable that was just declared
* The define name from BKE_node.h is used as the 2nd parameter of the function.
* A display name for the node
* A node class (this determines what color the node header is among other things)
* 0. //TODO - leave 0
4. Register the bNodeType variable
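Putting those four items together, the registration function in the node source file might look roughly like this (a sketch only; the exact function name should follow the naming pieces listed above, and the socket/exec callbacks are placeholders):

void register_node_type_geo_stub(void)
{
  static bNodeType ntype;

  geo_node_type_base(&ntype, GEO_NODE_STUB, "Geometry Stub Example", NODE_CLASS_GEOMETRY, 0);
  /* Socket declarations, init/update callbacks and the geometry exec function
   * would be assigned to ntype here. */
  nodeRegisterType(&ntype);
}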
There is a function called registerGeometryNodes() in which to call the registration function defined in the node source file.
In the CMakeLists file go to the section of files in the geometry/nodes/ folder and add the node source file in alphabetical order:
In the file corresponding to the type of node being created, add the function prototype of the node registration function in alphabetical order:
This file contains the node definitions. In alphabetical order add the DefNode line for the new node
DefNode(GeometryNode, GEO_NODE_STUB, 0, "STUB", Stub, "Stub Node", "")
The parameters are as follows:
1. Node Type: GeometryNode or FunctionNode
2. Define Name for the Node
3. An RNA function if the node requires it, set to 0 if this node does not have custom storage
4. Used for Python. Follow the define name minus GEO_NODE
5. Pascal case version of node name used for Python API
6. The menu entry name
7. // TODO (leave blank)
This file contains the definition for the Add Nodes Menu.
Search for the appropriate GeometryNodeCategory to add an entry. The NodeItem name is a combination of the NodeType from the DefNode entry and the Pascal case version of the name.
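For instance, the entry for the stub would combine those two pieces into GeometryNodeStub; a sketch of the item inside the chosen category (the category name here is illustrative) would be:

GeometryNodeCategory("GEO_EXAMPLE", "Example", items=[
    NodeItem("GeometryNodeStub"),
]),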
|
OPCFW_CODE
|
I have been working in IT field throughout the previous six years. I was constantly befuddled about web facilitating, particularly about the sorts of facilitating. It generally used to get me befuddled, which one is better Linux or Windows?
My interest conveyed me to make a pursuit on the upsides and downsides of both the facilitating sorts. What I felt is that the entire world is still encompassed in this exchange. In the event that a wide dominant part bolsters the Windows Hosting, then there is no a lot of individuals who discover the Linux Hosting better. cheap asp.net hosting
Both the stages have distinctive components, favorable circumstances and detriments. So I consider rather getting into the open deliberation it is ideal to examine their favorable circumstances. With the goal that it could help somebody in understanding, which one is appropriate for his/her prerequisite?
I am quick to talk about these stages with all of you, however here I will examine Windows Hosting as it were. I will discuss Linux Hosting in my next article. Here I am with, whatever I accumulated from on the web and other dependable assets.
Points of interest of Windows Hosting:
Similarity with Microsoft applications – Windows is monetarily possessed by Microsoft, so its principle wellspring of force is similarity with Microsoft applications and programming. In view of this it turns out to be simple for website admins to build up the sites speedily and intelligent as well.
Less confused – Windows Hosting depends on Microsoft Windows Server 2003 or 2008 that offers a few components that help you to better deal with your site. These frameworks depend on Microsoft NT in this way they give solid end to end server administration. This is the thing that makes windows facilitating less convoluted.
Enormous power – Windows Hosting gives enhanced security highlights. It really gives imaginative applications that are required for your site and its UI is reasonable for fledglings, as well as for cutting edge clients as well. This UI consolidates web improvement environment where.Net Framework and other Microsoft advances can be conveyed to make dynamic website pages and applications. At any rate the power and elements you get can’t be analyzed.
Colossal support – Besides the similarity with Microsoft items this stage additionally functions admirably with open source innovations, for example, Perl, PHP and MySQL. Support is another huge component due to which Windows facilitating has been turning out to be so well known. The part that is worked from its IIS to SQL server, all are perfect with each other. For example any site that keeps running on UNIX-based framework can undoubtedly be facilitated by a windows-based server, however a site that is running on Windows-based server may not work effectively on UNIX-base framework.
Shockingly Cost Efficient – Windows affiliate facilitating offers an extraordinary favorable position of all in one facilitating arrangement, to the affiliates. That implies an affiliate does not need to have different records to keep a tab on the quantity of clients. Affiliate can deal with all his/her customers through a solitary control board. Along these lines it spares time and cash both. This component makes windows facilitating shockingly cost proficient.
Something else is that windows facilitating has dependably been costlier than Linux as a result of restrictive programming permitting expense charged by Windows. However, until somebody needs his own server, the facilitating can be as reasonable as Linux facilitating.
I know whether we get into specialized talk it will be exceptionally extensive. In any case, I feel these focuses will be useful to clear the photo of points of interest of Windows facilitating, to you.
|
OPCFW_CODE
|
Typecho is a blogging platform developed in PHP that also supports multiple databases (MySQL, PostgreSQL, SQLite). This article will demonstrate the process of deploying Typecho to AWS.
You need to know the required dependencies before deploying:
- EC2 (Amazon Elastic Compute Cloud, EC2 for short)
- RDS (Amazon Relational Database Service, RDS for short)
- LNMP (Linux, Nginx, MySQL, PHP. No separate MySQL installation is required here)
In the AWS console, start an EC2 instance of the Linux system, I chose the image of
The eligible free tier is the AWS overseas regional account free tier, you can sign up through this link to explore over 100 products and start building on AWS with the free tier.
By default, only port 22 is open in the security group. During testing, you can choose to open everything in the security group by default, or add commonly used ports to it.
If there is no problem after review, click Start
Then you need to choose an existing key pair or create a new key pair to connect using ssh, otherwise you can only connect through the AMI built-in password or EC2 Instance Connect.
You can use an existing key pair. I created a new one here: fill in the key pair name, click Download Key Pair, and you will get a key name.pem file.
Click Start again. At this point, the instance we created is being started.
Click to view the instance details and get the public IPv4 DNS to connect.
For example, if the public DNS name of the instance is ec2-a-b-c-d.us-west-2.compute.amazonaws.com and the key pair is my_ec2_private_key.pem, use the following command to connect to the instance via SSH:
ssh -i my_ec2_private_key.pem firstname.lastname@example.org
For more specific practical procedures, please refer to "Teach you how to deploy dynamic websites on the cloud"
I chose to use the LNMP one-click installation package directly
wget http://soft.vpser.net/lnmp/lnmp1.8.tar.gz -cO lnmp1.8.tar.gz && tar zxf lnmp1.8.tar.gz && cd lnmp1.8 && ./install.sh lnmp
The script needs to be executed by the root user, so we need to set the root user's password first
sudo passwd root
Then run su root to switch to the root user, and execute ./install.sh lnmp again.
MySQL was skipped during the installation; since we are going to use RDS, there is no need to install it locally.
Wait for the installation to complete...
After the installation is complete, you can visit http://IP/phpinfo.php to view the PHP information.
The official stable version of Typecho has not been released for a long time, and I am also contributing some code to Typecho recently, so here we will install the code of the development version first.
First use the lnmp vhost add command to create a site:
After the creation is completed, there is a .user.ini file that prohibits cross-directory access by default; it can be removed with the tools/remove_open_basedir_restriction.sh script in the lnmp1.8 directory.
Go to the /home/wwwroot/ty.qq52o.cn directory to download the source code of the development version:
cd /home/wwwroot/ty.qq52o.cn
wget https://github.com/typecho/typecho/releases/download/ci/typecho.zip
unzip typecho.zip
chown -R www:www ./*
In order to access the installer normally, the domain name needs to resolve to the IP of the EC2 instance, so go to the provider where the domain is registered and add a CNAME record for the corresponding domain, with the record value set to the public IPv4 DNS.
After the parsing is successful, you can see the installation interface provided by Typecho.
Click to start the next step. We need to configure the database information, but since MySQL is not installed yet, we can use SQLite to create the site first; a SQLite database file path will be generated by default. Click install.
The next step is to add an administrator account and password.
After clicking Continue Installation, the installation steps are completed.
Default home page
Seeing this, it's not over yet: because we are using SQLite storage, we need to replace it with MySQL storage, so keep reading.
Go to the RDS service of the aws console and create a MySQL engine database
Select the instance configuration in the configuration below, set the account password, click Create database, and wait for the database to be successfully created to obtain the endpoint and port.
Note that the database needs to be in the same VPC security group as EC2.
Since we just installed with SQLite and want to use MySQL, we need to delete the files and reinstall:
cd /home/wwwroot/ty.qq52o.cn
# filename is the SQLite database file generated automatically during the earlier install
rm usr/filename.db config.inc.php
Re-visit the domain name, and the installation interface you saw before will appear again. Enter the endpoint and port just obtained, and the configured account and password:
When we clicked to start the installation, an error was reported:
Sorry, the database cannot be connected, please check the database configuration before proceeding with the installation
This means that the database named typecho does not exist, so we need to create it manually:
# Install the MySQL client
apt install mysql-client-core-5.7
# Connect to the database: replace <endpoint> with the actual RDS endpoint, press Enter, then enter the password
mysql -uadmin -h<endpoint> -p
# Execute
create database typecho;
After the execution succeeds, click Start Installation again and you will see the Create Your Administrator Account page again; fill it in according to the previous steps.
After the installation is successful, you can enjoy the fun brought by Typecho~
Typecho is not only lightweight and efficient; with only 7 database tables and less than 400KB of code, it implements a complete plugin and template mechanism. It also natively supports Markdown syntax, which is easy to read and even easier to write.
Coupled with EC2 + RDS, it can easily deliver the required performance, high availability, and security, even in the face of sudden high traffic.
Get more tutorials: AWS Getting Started Fundamentals Course
|
OPCFW_CODE
|
Solve for the bundle component that makes one as 'well off' as earlier
Suppose we have a (presumably time independent?) utility function $U(x_1,x_2)$ for consumer Rita.
1.
What is Rita's MRS of $x_2$ for $x_1$?
$$MRS_{x_1, x_2} = \frac{MU_{x_1}}{MU_{x_2}} = \frac{\frac{\partial U}{\partial x_1}}{\frac{\partial U}{\partial x_2}}$$
(or in some strange books $-\frac{MU_{x_1}}{MU_{x_2}}$)
Is that right?
2.
If Rita consumed $a$ of $x_1$ and $b$ of $x_2$, what was her MRS?
$$MRS_{x_1, x_2}(a,b) = \frac{MU_{x_1}(a,b)}{MU_{x_2}(a,b)}$$
Is that right?
3.
3.1 If Rita is currently consuming $c$ units of $x_1$, how many units of $x_2$ must she consume in order to leave her just as well off as she was in #2?
3.2 What is the MRS at $(c,x_2)$?
What's the equation to solve here?
$$U(a,b) = U(c,x_2')$$
?
Or
$$\frac{MU_{x_1}(a,b)}{MU_{x_2}(a,b)} = \frac{MU_{x_1}(c,x_2')}{MU_{x_2}(c,x_2')}$$
?
I'm thinking the former because 'well off' seems to call for 'utility' and not 'MRS', and the latter answers 3.2 easily...
Anyway, if the former then for 3.2 I just compute
$$\frac{MU_{x_1}(c,x_2')}{MU_{x_2}(c,x_2')}$$
?
I think your answers to 1 and 2 are correct. For 3.1, your intuition that "as well off as" should be interpreted "indifferent" is correct, and indifferent is about comparing utility levels, not MRS.
Your answer to 3.2 looks correct.
@HerrK. Post as answer?
Converting my comments to an answer:
I think your answers to 1 and 2 are correct. For 3.1, your intuition that "as well off as" should be interpreted as "indifferent" is correct, and indifference is about comparing utility levels, not MRS. Your answer to 3.2 looks correct.
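For concreteness, a quick sketch with a hypothetical Cobb-Douglas utility $U(x_1,x_2)=x_1^{\alpha}x_2^{\beta}$ (not given in the question):
$$MRS_{x_1,x_2}=\frac{\alpha x_1^{\alpha-1}x_2^{\beta}}{\beta x_1^{\alpha}x_2^{\beta-1}}=\frac{\alpha}{\beta}\frac{x_2}{x_1},\qquad MRS_{x_1,x_2}(a,b)=\frac{\alpha}{\beta}\frac{b}{a}.$$
For 3.1, $U(a,b)=U(c,x_2')$ gives $a^{\alpha}b^{\beta}=c^{\alpha}(x_2')^{\beta}$, so $x_2'=b\,(a/c)^{\alpha/\beta}$; for 3.2 the MRS at $(c,x_2')$ is then $\frac{\alpha}{\beta}\frac{x_2'}{c}$.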
|
STACK_EXCHANGE
|
REPLACE INTO sqlite doesn't replace
got a weird problem. I have a sqlite table in my objective-c app:
NSString *sql = @"CREATE TABLE IF NOT EXISTS user_results (id INTEGER PRIMARY KEY ASC AUTOINCREMENT, gameID INTEGER UNIQUE, gameDesc TEXT, result INTEGER)";
Then I execute query:
[[DatabaseController getInstance].resultsDB executeUpdate:@"REPLACE INTO user_results (gameID, gameDesc, result) VALUES (?, ?, ?)",[NSNumber numberWithInt:self.gameID],[test JSONRepresentation],[NSNumber numberWithInt:sumBalls]];
But the problem is that it doesn't replace the row with the same gameID, it just adds one (even though it's UNIQUE), any ideas why would it happen?
P.S. I'm using FMDB to work with sqlite.
Thanks in advance.
Solution: Had to use [NSNumber numberWithInt:self.gameID].integerValue instead of [NSNumber numberWithInt:self.gameID] when sending to sql query.
Has your schema changed, with the UNIQUE constraint added later? Your schema & SQL should work as expected. I just tried this and it works fine:
sqlite3
CREATE TABLE IF NOT EXISTS user_results (id INTEGER PRIMARY KEY ASC AUTOINCREMENT, gameID INTEGER UNIQUE, gameDesc TEXT, result INTEGER);
insert into user_results values (1,1,'hi', 1); --insert 2 test rows
insert into user_results values (2,2,'2', 2);
select * from user_results;
1|1|hi|1
2|2|2|2
Now an insert fails:
insert into user_results values (3,1,'1', 1);
Error: column gameID is not unique
REPLACE INTO does what you expect:
replace into user_results (gameid, result) values (2, 3);
select * from user_results;
1|1|hi|1
3|2||3
It deleted the row with id 2, and replaced it with a new row id 3 and gameid 2. Unless you were expecting it to replace the primary key=2 row? What Sqlite does is delete any prior rows that would cause violation of the unique key, then inserts a new row. See http://www.sqlite.org/lang_conflict.html. Note it didn't add an EXTRA row. It deleted one and added one (in other words, 'replaced' :)
If your replace into SQL included the id column, that would work, here I'm effectively updating the row with id 3. Of course you'd have to figure out the id of the row you wanted to replace...
replace into user_results values (3,3,'2', 2);
select * from user_results;
1|1|hi|1
3|3|2|2
Is the id column something you really care about? Sqlite will create such a column for you anyway.
I actually found an issue.
As I was sending [NSNumber numberWithInt:sumBalls] to executeUpdate, it was not sending the actual sumBalls value.
[NSNumber numberWithInt:self.gameID].integerValue
did the trick. Thanks.
The gameID field is not the PRIMARY KEY, ID is. As you have it, the REPLACE INTO will only work with the ID field. I recommend making the gameID field the primary key to get the result that you are looking for.
NSString *sql = @"CREATE TABLE IF NOT EXISTS user_results (gameID INTEGER PRIMARY KEY, gameDesc TEXT, result INTEGER)";
Nope, gameID is a UNIQUE column already, so REPLACE INTO should work fine and delete any rows that would make it non-unique.
That's true. UNIQUE column should not be overwritten, so instead of INSERT, REPLACE should be called.
|
STACK_EXCHANGE
|
OpenGL Interleaved Index Array
I'm writing a class in C++ that will handle drawing 3D models (ie triangles meshes) and I'm wondering how best to organize my data buffers.
Here's what I know so far:
Using interleaved arrays speeds up the code. It makes use of spatial locality and increases cache hits. You can implement this by organizing vertices into structs where each struct holds the position/normal/texcoord/etc. information for that vertex.
Using indexed arrays decreases memory consumption by storing each distinct vertex/normal/texcoord/etc. only once and then defining faces by references into those arrays, rather than redundantly specifying all of the information for each face. You can also implement this with a struct of indices into lists of vertex attributes.
My question: How should I best make use of both of these? Is it possible to do both? I've heard for both that you should always use them when you can, but haven't found anything concerning using both at once, or one over the other.
My initial solution: I was going to implement a struct that had indices into the data arrays and pass an array of these structs, as well as the data arrays, as VBOs, essentially combining the two.
Am I on the right track? Is there a better way to do this? What should/shouldn't I do? Is there anything I seem to be unaware of that would impact this decision?
How about triangle strips + some order optimisation http://rosario3d.wordpress.com/2011/06/19/triangle-render-order-optimization/ ?
Using interleaved arrays and indexes are not really related. It's possible to do both things at the same time because they don't really have much to do with each other.
Using interleaved arrays or not is just a choice if you want to pack your vertices like VTNVTNVTN... (vertex, texture, normal) in a single buffer, or as VVV, TTT, NNN in separate buffers.
Using indexes is just a decision if you have enough repeated vertices to justify the use of the index buffer. When making this decision it's pretty much irrelevant if you've interleaved the vertices or not.
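To make that concrete, here is a hedged C++ sketch of doing both at once — interleaved attributes in one VBO plus a single index buffer (the struct, function name and attribute locations are illustrative, and a real renderer would create the buffers once rather than on every draw):

#include <cstddef>   // offsetof
#include <vector>
#include <GL/glew.h>

struct Vertex {          // interleaved: position, normal, texcoord for one vertex
    float position[3];
    float normal[3];
    float texcoord[2];
};

void drawMesh(const std::vector<Vertex>& vertices, const std::vector<unsigned int>& indices)
{
    GLuint vbo = 0, ibo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), vertices.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), indices.data(), GL_STATIC_DRAW);

    // One stride and one byte offset per attribute; a single index addresses them all.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texcoord));

    glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);
}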
My initial solution: I was going to implement a struct that had indices into the data arrays and pass an array of these structs, as well as the data arrays, as VBOs, essentially combining the two.
This is illegal. Note that you don't get separate indices per attribute; you only get one index. You can't sample vertex #0 at the same time you sample texcoord #1. The single index that you supply is the index into all of the buffers.
I suspect this is why you were initially confused, because you thought you could have multiple indices.
|
STACK_EXCHANGE
|
Is there a setting within ProjectWise Web Parts/SharePoint that will prompt or automatically check in a document?
I have a project that is using ProjectWise Web Parts from a SharePoint site, and several users are opening documents but not checking them in after they close the document. To their credit, they do not know that the document is not being checked in, nor do they remember if I tell them.
Anyone else have this same issue or have a solution?
PWV8i - SharePoint 2007
A user can automatically check in a document only if he/she is using integrated Office, MicroStation or AutoCAD. But ProjectWise Web Server does not support integration with Office, MicroStation or AutoCAD. So for now there is no way to check in a document automatically.
Do these users (that forget to check-in documents) need to edit documents? Do they intentionally check-out documents to edit them? If check-out is made accidentally and they do not need to edit documents via the web interface - it might be possible to configure security settings so that these users would not have rights to modify documents and therefore would not be able to check them out. And that in turn would eliminate problem of forgetting to check back in documents.
Could you please tell us more how you're using PW Web Server Web Parts in this project? What are the roles of people who are accessing PW content via web interface? In what scenario are they using PW web access?
Thank you for the reply. To answer your question, yes and no. Depending on their access at that time, they may need to edit the documents or just open them read-only. The real problem is I have over 300 users that are not within the company I work for, so rights management can and will be an ongoing task. I would rather not take it to that level.
The company I work for is hosting the project, so all documents reside on our network and the 20+ consultants log in to our network with the PW Client or through the SharePoint website with PW Web Parts installed. If the consultants have PW Explorer they of course receive the check-in document dialog box, but if logged in from the SharePoint site there is no dialog to check in the document. The roles of the people can vary: some may just want to view, but some may want to edit the documents from the SharePoint site. The dilemma I am faced with is constantly reminding the users to always check in the documents, or becoming the PW/SharePoint police and constantly checking for checked-out documents.
I know there is the PW Web View Server but I am under the impression that this requires a separate web address that needs to be sent out to the read-only viewers. If I am wrong please let me know.
We usually handle people that need read-only via the folder permissions but you could remove the "Create/Modify/Delete/Free" in their user settings and they'd be read-only that way.
That doesn't help with people that forget to check in when they really do need to edit. We don't really have a solution for that either, but we usually delegate a person on the project to do the policing rather than try to do it ourselves. Global searches such as "checked out a week ago" and "checked out a month ago" are useful for those doing the policing.
we try to get everyone to use the thick client because of issues like this.
Thanks for your feedback.
Would it help if there were a dedicated web part that could be placed on a web page and would allow the user to see all documents that she/he currently has checked out?
Or would you prefer a solution where, on IE window close, a dialog pops up listing all currently checked-out documents and asks/allows the user to check them back in?
Which of the options would you prefer and why? Or maybe you'd prefer some alternative option?
Could you please tell us more on "we try to get everyone to use the thick client because of issues like this"?
How many users (approximately) would use the web interface but currently are not because of this issue? What are the roles of these users? What would they be using PW web access for?
You said "issues" - what are other issues besides this one?
A popup at IE close would be a great place to start. I am also thinking that if someone is using tabbed browsing and they close out of the site, the popup should appear as well. I like the idea because it would alert the user right away to do something. If a web part is just sitting there showing that documents are checked out but not poking at the users to do something about it, I am positive they would not care until someone forced them to check in the document.
Just read this post and thought of letting you know that we too plan to use the thick client due to the above plus more issues/limitations of the PW-SharePoint web interface.
Now, we have set up a Web View server for all users and need to go for the thick client for users with Edit+ rights, which is an additional burden for IT personnel due to the limited access rights set on each workstation.
And yes, we had plans to deploy the full PW web server for 200+ active users and 500+ web view users, but as of now we will go ahead with the Web View server. That too bothers us because of the Export/Copy Out options available for our master native DWG files, which we need to restrict: at times different engineering teams may give away these native files to multiple contractors across projects without proper control, and we will face version/content mismatch issues when we receive those as-builts upon completion of those projects. So we wanted to restrict access to the native files to the publisher's view/print options only.
Not many of my posts get answered, but I still thought to let you know :)
On the question about copying out the native files: if you wanted to restrict that completely for all files, you could remove the copy-out command from the Web Part menus.
Just a thought.
What kind of editing were you looking to enable via PW Web Server?
If editing using CAD apps - using thick client (PW Explorer) is the recommended way to go. Reason behind this - CAD application integration functionality available in PWE is not planned to be moved to web interface any time soon.
If you'd be looking to enable data exchange with external parties or review scenarios - we'd recommend to use PW Web Server. Especially if review process involves external users (delivering docs for review, downloading review comments or actively reviewing data - creating markup comments).
Please tell us more, what exact issues/limitations were identified?
Also - what platform are you using to run our PW Web and Web View server web parts on? SharePoint (32 or 64bit?) or plain IIS?
|
OPCFW_CODE
|
import {
PropertyDisplay,
PropertyListStyle,
PropertyOverflow,
PropertyOverflowX,
PropertyOverflowY,
PropertyVerticalAlign,
} from './types';
/** style props for layout properties */
export type LayoutStyleProps = {
/** sets `display` property */
display?: PropertyDisplay;
/** sets `height` property */
height?: number | string;
/** sets `list-style` property */
listStyle?: PropertyListStyle;
/** sets `max-height` property */
maxHeight?: number | string;
/** sets `max-width` property */
maxWidth?: number | string;
/** sets `min-height` property */
minHeight?: number | string;
/** sets `min-width` property */
minWidth?: number | string;
/** sets `overflow` property */
overflow?: PropertyOverflow;
/** sets `overflow-x` property */
overflowX?: PropertyOverflowX;
/** sets `overflow-y` property */
overflowY?: PropertyOverflowY;
/** sets `vertical-align` property */
verticalAlign?: PropertyVerticalAlign;
/** sets `width` property */
width?: number | string;
};
const layoutProps = {
display: 'display',
height: 'height',
listStyle: 'listStyle',
maxHeight: 'maxHeight',
maxWidth: 'maxWidth',
minHeight: 'minHeight',
minWidth: 'minWidth',
overflow: 'overflow',
overflowX: 'overflowX',
overflowY: 'overflowY',
verticalAlign: 'verticalAlign',
width: 'width',
};
/**
* A style prop function that takes components props and returns layout styles.
* If no `LayoutStyleProps` are found, it returns an empty object.
*
* @example
* // You'll most likely use `layout` with low-level, styled components
* const BoxExample = () => (
* <Box display="inline-block" height="50%">
* Hello, positions!
* </Box>
* );
*
*/
export function layout<P extends LayoutStyleProps>(props: P) {
const styles = {};
for (const key in props) {
if (key in layoutProps) {
const attr = layoutProps[key as keyof LayoutStyleProps];
const value = props[key];
// @ts-ignore TS doesn't like adding a potentially unknown key to an object, but because we own this object, it's fine.
styles[attr] = value;
}
}
return styles;
}
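A quick usage sketch of the function above (the prop values are arbitrary examples): only keys that appear in layoutProps are copied into the returned style object; any other props on the component are simply ignored.
const styles = layout({
  display: 'inline-block',
  maxWidth: 480,
  overflowX: 'auto',
} as const);
// styles -> { display: 'inline-block', maxWidth: 480, overflowX: 'auto' }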
|
STACK_EDU
|
I am trying to establish a simple LAN connexion between a host (PC) and a client (Oculus Rift), but I can’t seem to get it working.
I have 2 problems:
- the client doesn't find the host session unless the FindSession method is executed at approximately the same time (~2 s) as the CreateSession call on the host side
- when the client happens to find a session, the JoinSession method returns success but doesn’t travel to the server map
Using UE4 4.23 (downloaded from Epic Launcher, it’s not Oculus forked version)
I was first using the Null subsystem with the advanced session plugin, but now need to integrate the AvatarSDK to the project so I must use the Oculus Subsystem.
I first enabled the plugin in the UE4 Editor and setup the Config/DefaultEngine.ini with the following lines:
GearVRAppId=[My Other ID]
Then I opened a pool in my oculus dashboard.
Both the server and the client pass the entitlement check.
From the server, I create a session using the Oculus CreateSession node with the pool id I made from my dashboard, and open a level, let’s say “TargetMap” with the listen option. The CreateSession returns success and I can’t see any problem until that point.
From the client, I use the FindMatchmakingSessions node with the same pool id. First problem is happening here: if the client doesn’t run this method when the host session has just been created, no session will be found (the FindMatchmakingSessions node succeeds but the results array has a length of 0). When the client can find the session, the JoinSession node succeeds but the client doesn’t travel to the server map.
I made sure to build the game to test so that the right NetDriver would be used (it is using the IpNetDriver in editor, and is correctly using the OculusNetDriver when packaged).
I checked the logs from the client and here is what I can find:
LogOnlineSession: OSS: Join session: traveling to 2359010480812061.oculus
LogBlueprintUserMessages: [StartupMap_C_21] Successfully joined session
LogNet: Browse: 2359010480812061.oculus//Game/VirtualRealityBP/Maps/StartupMap
LogTemp: Display: ParseSettings for GameNetDriver
LogTemp: Display: ParseSettings for OculusNetDriver_2147482546
LogTemp: Display: ParseSettings for PendingNetDriver
The Browse command doesn't seem to point to the right map (it is the StartupMap, not the TargetMap). I think the client might have joined while the server was trying to move from StartupMap to TargetMap (if I don't join at that moment, the client can't find any session), but either way it doesn't follow the server to TargetMap.
[What I tried]
I tried doing that exact same setup using the Oculus forked version of UE4, but have the exact same problems.
I am running the server and the client on different computers with packaged versions of the game.
I’m actively following some other posts but can’t find any answer:
On that second link the post is quite old, but the described problem is exactly the same as mine (he doesn't mention that sessions can only be discovered right after the CreateSession node has been executed, though). He fixed his issue by pulling a more recent version of UE4 from the Oculus forked version, but I tried that without success.
If you pass by here with problems too, check this post out, it might answer some of your questions:
Does anyone experience the same issue?
|
OPCFW_CODE
|
Tcl client library for the NATS message broker
With this package you can bring the power of the publish/subscribe mechanism to your Tcl and significantly simplify development of distributed applications.
The package is written in pure Tcl, without any C code, and so will work anywhere with
Tcl 8.6 and Tcllib. If you need to connect to a NATS server using TLS, of course you will need the TclTLS package too. On Windows, these packages are normally included in Tcl installers, and on Linux they should be available from your distribution's package manager.
The package is available in two forms:
1. As a classic Tcl package using pkgIndex.tcl. Download/clone the repository to one of the places listed in your $auto_path, or extend $auto_path with a new folder, e.g.:
lappend auto_path <path>
or using the TCLLIBPATH environment variable.
2. As a Tcl module, where all the implementation is put into a single *.tm file. This improves package loading time. Note that Tcl modules are loaded from different locations than $auto_path. You can check them with the following command:
tcl::tm::path list
and you can extend this list with:
tcl::tm::path add <path>
Both forms can be loaded as:
package require nats
If you are using a "batteries-included" Tcl distribution, like Magicsplat or AndroWish, you might already have the package.
Features
- Publish and receive messages, also with headers (NATS version 2.2+)
- Synchronous and asynchronous requests (optimized: under the hood a single wildcard subscription is used for all requests)
- Queue groups
- Gather multiple responses to a request
- Publishing and consuming messages from JetStream, providing "at least once" or "exactly once" delivery guarantees
- Management of JetStream streams and consumers
- Configuration via a configure method with many options
- Protected connections using TLS
- Automatic reconnection in case of network or server failure
- While the client is trying to reconnect, outgoing messages are buffered in memory and will be flushed as soon as the connection is restored
- Authentication with NATS server using a login+password, an authentication token or a TLS certificate
- Cluster support (including receiving additional server addresses from INFO messages)
- Configurable logging, compatible with the logger package
- (Windows-specific) If the iocp package is available, the client will use it for better TCP socket performance
- Extensive test suite with 140+ unit tests, checking nominal use cases, error handling, timings and the wire protocol ensures that the Tcl client behaves in line with official NATS clients
Look into the examples folder.
Missing features (in comparison to official NATS clients)
- The new authentication mechanism using NKey & JWT.
- WebSocket is not supported. The only available transport is TCP.
|
OPCFW_CODE
|
Predicting Future Climate: What Can We Expect in the Next Decade?
Earth’s climate, weather, and biological systems are changing as a result of anthropogenic greenhouse gas emissions, and they also vary naturally on a variety of timescales with or without human influence. Earth system model simulations that include projected changes in anthropogenic emissions leave no doubt that Earth’s climate will grow steadily warmer through this century, and perhaps beyond if emissions continue at their current rates. Hence, a prediction made today that the climate where you live will be warmer 10 years in the future is very likely to be correct. However, for more detailed forecasts of how climate will change in a specific region over the next 10 years, model simulations must also take into account the actual state of the Earth as it is today in order to accurately predict both forced and natural changes. By combining models, observations, and future emissions projections, it is possible to predict impactful variations in climate up to a decade in advance. In this talk, NCAR scientists Dr. Stephen Yeager and Dr. Isla Simpson will discuss the science of decadal climate prediction and provide examples of how this capability could be useful to society.
About Stephen Yeager
Dr. Stephen Yeager is a project scientist in the Oceanography Section of NCAR’s Climate and Global Dynamics Laboratory where he has worked since 1998. His research focuses on improving our understanding of the role that the ocean plays in modulating Earth’s climate on timescales from seasons to centuries. He co-leads a working group aimed at developing Earth System Prediction applications of the Community Earth System Model developed at NCAR. He also plays leadership roles in international efforts to advance the science of decadal climate prediction organized by the World Climate Research Programme (WCRP) and the International Laboratory for High Resolution Earth System Prediction (iHESP).
Prior to joining NCAR, Dr. Yeager taught high school math and physics for two years on a remote island in Fiji as a Peace Corps volunteer. He received degrees in Physics from Dartmouth College and Brown University, and completed a PhD in Atmospheric and Oceanic Science from the University of Colorado, Boulder. In his free time, he enjoys cycling, hiking, and skiing with his family, and playing mediocre guitar and mandolin.
About Isla Simpson
Dr. Isla Simpson is a scientist in the Climate Analysis Section of the Climate and Global Dynamics Laboratory at NCAR, studying large-scale atmospheric dynamics and its representation in Global Climate Models. Dr. Simpson joined NCAR in 2015 after obtaining a PhD from Imperial College London in 2009, followed by postdoctoral positions at the University of Toronto and Lamont-Doherty Earth Observatory, Columbia University. She works to understand dynamical mechanisms involved in the variability and change of the large-scale atmospheric circulation, and its impacts on regional climate and hydroclimate, using a hierarchy of modeling approaches. The overall aim is to determine the extent to which models can successfully capture the processes of relevance for the real atmosphere and how they can be improved.
|
OPCFW_CODE
|
Category Whitepapers and Guides
Delivering software in a continuous delivery capacity is something that nearly every project strives for. The problem is, not many projects are able to achieve continuous delivery because they don't have confidence in their application's quality, their build pipelines, their branching strategy or, worst case, all of them.
A good indicator as to whether you fall into one of the above categories is to ask yourself: `can I confidently release the master branch right now?`
If your answer is no, then how do we start to break down and resolve these problems?
Building confidence in quality
A recent project I have been working on fell into a few of the above categories. Nearly all of their testing was done on a deployment to a long-living environment, after a merge commit to master, along with a lot of duplicated work throughout their pipeline.
The test strategy shown above was for a simple front-end application that reads data from an external API.
To start, we identified areas of our application that we knew were unloved, or treacherous to develop. Once identified, we put in place appropriate test automation. When writing test automation it is so important that your tests are robust, fast and deterministic.
We pushed as much of our UI automation as possible further down the stack, into the application itself. Ideally you want your application adhering to testing-pyramid principles. Testing that elements have particular classes with tools such as Selenium is both time-costly and of little value; there are better, more appropriate tools for that.
Once our test scaffolding was in place, we started to feel more comfortable refactoring problem areas and reducing complexity.
We isolated our application by stubbing out external services or dependencies where necessary – we didn’t want to be testing services outside our scope. Where possible, we recommend agreeing a contract with your external dependencies and using this to develop against.
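As a minimal sketch of what such a stub can look like (assuming a hypothetical external API that serves articles as JSON; the endpoint, port and data below are invented for illustration), an in-process HTTP server returning canned responses is often enough:
import { createServer } from 'node:http';

// Canned responses that match the contract agreed with the real provider.
const cannedArticles = [{ id: 1, title: 'Hello, stubs' }];

export const stub = createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/articles') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(cannedArticles));
  } else {
    res.statusCode = 404;
    res.end();
  }
});

// In the test setup: stub.listen(4010) and point the app at http://localhost:4010
The application under test is then configured with the stub's base URL, so the suite stays fast and deterministic regardless of the real service's availability.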
We also recommend containerizing your app. Being able to deploy and run the same image of an application locally and on production is incredibly powerful. Long gone are the days of having long living application servers and the phrase of ‘well it works on my machine’.
Start failing fast
Once we had confidence that when our tests all passed then the application could be deployed, we then looked to address where our tests were running.
Having tests run after a merge commit to master is too late in the process. Leaving it this long introduces a risk that someone pushes the release to production button before tests have been run.
We need to run tests earlier in the process.
In the past, to solve this problem you may have adopted complicated branching strategies (dev, test, master) which on paper seem reasonable, but in practice introduce horrendously slow, unnecessary feedback loops and messy merges between multiple branches.
We decided to harness the power of pull request environments instead, to allow our tests to run on short living infrastructure before we merge to Master. With DevOps paradigms such as immutable infrastructure, infrastructure as code and containerisation, deploying a new environment becomes trivial.
This becomes even more powerful if you deploy your pull request environments in the same way as your production site, since you effectively test the deployment itself.
Having pull request environments spun up also caters for any testing requirements, such as exploratory testing or demos, and massively speeds up developer feedback loops.
The end result is much higher confidence in your application's quality on the master branch, which to any project is invaluable.
This is a two-part series, with the next article focusing on how we can start to deliver the master branch to production. Watch this space.
|
OPCFW_CODE
|
Engaging the Research Community towards an Open Science Commons (EGI-Engage)
Over the last decade, the European Grid Infrastructure (EGI) has built a distributed computing and data infrastructure to support over 21,000 researchers from many disciplines with unprecedented data analysis capabilities.
EGI builds on the European and national investments and relies on the expertise of EGI.eu, a non-profit foundation that provides coordination to the EGI Community, including user groups, EGI.eu participants in the EGI Council, and the other collaborating partners. EGI-Engage aims to accelerate the implementation of the Open Science Commons by expanding the capabilities of a European backbone of federated services for compute, storage, data, communication, knowledge and expertise, complementing community-specific capabilities. The mission of EGI-Engage is to accelerate the implementation of the Open Science Commons vision, where researchers from all disciplines have easy and open access to the innovative digital services, data, knowledge and expertise they need for their work.
The Open Science Commons is grounded on three pillars:
- the e-Infrastructure Commons, an ecosystem of key services
- the Open Data Commons, where any researcher can access, use and reuse data
- and the Knowledge Commons, in which communities have shared ownership of knowledge and participate in the co-development of software and are technically supported to exploit state-of-the-art digital services.
In particular, EGI-Engage aims to accelerate the implementation of the Open Science Commons vision. Five objectives were set:
- Ensure the continued coordination of the EGI Community in strategy and policy development, engagement, technical user support and operations of the federated infrastructure in Europe and worldwide.
- Evolve the EGI Solutions, related business models and access policies for different target groups aiming at an increased sustainability of these outside of project funding. The solutions will be offered to large and medium size RIs, small research communities, the long-tail of science, education, industry and SMEs.
- Offer and expand an e-Infrastructure Commons solution
- Prototype an open data platform and contribute to the implementation of the European Big Data Value.
- Promote the adoption of the current EGI services and extend them with new capabilities through user co-development
EGI-Engage will expand the capabilities offered to scientists (e.g. improved cloud or data services) and the spectrum of its user base by engaging with large Research Infrastructures (RIs), the long-tail of science and industry/SMEs. The main engagement instrument will be a network of eight Competence Centers, where National Grid Initiatives (NGIs), user communities, technology and service providers will join forces to collect requirements, integrate community-specific applications into state-of-the-art services, foster interoperability across e-Infrastructures, and evolve services through a user-centric development model. The project will also coordinate the NGI efforts to support the long-tail of science by developing ad hoc access policies and by providing services and resources that will lower barriers and learning curves.
Role of GWDG in the Project
GWDG is involved in the DARIAH Competence Center (DARIAH-CC) within EGI-Engage and will bring in its experience from the DARIAH-DE project as well as from DARIAH-EU, where GWDG is responsible for the e-Infrastructure Competence Centre. With the help of the DARIAH-CC, EGI-Engage will be able to engage with researchers from the Arts & Humanities. In particular, GWDG develops storage and OCR processing services.
EU, Horizon 2020
EGI Engage is an international project involving 43 partners from over 30 countries and more than 70 institutions from Europe, the United States and six countries from the South East Asian region. The project is coordinated by European Grid Infrastructure (EGI), headquartered in Amsterdam.
|
OPCFW_CODE
|
[Bug]: Misleading documentation in regard to authentication configuration
zot version
helm/website
Describe the bug
helm chart version: 0.1.26, helm show all project-zot/zot provides the following output
# Alternatively, the configuration can include authentication and acessControl
# data and we can use mountSecret option for the passwords.
#
# config.json: |-
# {
# "storage": { "rootDirectory": "/var/lib/registry" },
# "http": { "address": "<IP_ADDRESS>", "port": "5000" },
# "auth": { "htpasswd": { "path": "/secret/htpasswd" } },
# "accessControl": {
# "**": {
# "policies": [{
# "users": ["user"],
# "actions": ["read"]
# }],
# "defaultPolicy": []
# },
# "adminPolicy": {
# "users": ["admin"],
# "actions": ["read", "create", "update", "delete"]
# }
# },
# "log": { "level": "debug" }
# }
This seems to be wrong as the accessControl and auth properties need to be nested inside the http, otherwise zot crashes on start with an unknown property error.
On the website https://zotregistry.io/v1.4.3/articles/authn-authz/#example-access-control-configuration the accessControl example reads
"http": {
...
"accessControl": {
"**": {
leading to an unknown property error as well; it seems that the key repositories is missing before the repository-specific access rules.
At https://zotregistry.io/v1.4.3/admin-guide/admin-configuration/#network-configuration the list of possible attributes does not contain accessControl, auth or realm.
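For reference, applying the corrections described above to the helm chart example would give roughly the following shape (a sketch only, based on the reporter's description of nesting auth/accessControl under http and adding a repositories block; the official zot documentation for the targeted release remains authoritative):
{
  "storage": { "rootDirectory": "/var/lib/registry" },
  "http": {
    "address": "0.0.0.0",
    "port": "5000",
    "auth": { "htpasswd": { "path": "/secret/htpasswd" } },
    "accessControl": {
      "repositories": {
        "**": {
          "policies": [{ "users": ["user"], "actions": ["read"] }],
          "defaultPolicy": []
        }
      },
      "adminPolicy": {
        "users": ["admin"],
        "actions": ["read", "create", "update", "delete"]
      }
    }
  },
  "log": { "level": "debug" }
}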
@everflux 1.26 helm chart releases the 2.0.0-rc* releases. There is not an official release for 2.0.0 yet (imminent though!)
Best to look up examples here: https://github.com/project-zot/zot or unpublished docs at: https://github.com/project-zot/project-zot.github.io/tree/main/docs
Thanks for pointing out the option to get more recent documentation - perhaps this could be linked or selected on the website as well? (I like in particular how traefik https://doc.traefik.io/traefik/ and nest https://docs.nestjs.com/ do it on the lower left)
It might be a good idea to label the unofficial releases "-alpha" or something, since "-rc" implies a "release candidate", which in my experience is just a 'git tag' away from being released.
Regardless, the helm chart documentation should be consistent with the targeted release, and even ignoring that, the helm chart output is not consistent with the website either.
Perhaps there is an opportunity to consider generating the same documentation for website and helm chart from the same source.
Hi @everflux, the example in the helm chart has been updated.
https://github.com/project-zot/zot/releases/tag/v2.0.0
^ has now been released. Can we close this?
Let's close this, as the docs have been updated.
|
GITHUB_ARCHIVE
|
Should I store the local time for events instead of UTC?
I am currently storing events of some entities in UTC time, but I am not sure if I should do that in this case. Imagine there's an event at 10pm local time (-4h UTC) and a mobile App fetches "today's events". This could e.g. look like this:
App sends request to fetch all clubs in the near location
After receiving all clubs it sends a request to get all events for today. It therefore sends the local time Sun. 10pm to the server.
The server would convert the local time of the mobile device to UTC Mon. 1am and fetch all events from Monday. But of course that was not what I wanted.
Fetching all events from the clubs and converting them to their local time using their local time offset information is not really a great solution.
So wouldn't it be better to just store all events in local time? In that case the mobile App would send its local time to the server which would be able to query all events from the clubs in local time as well.
This sounds much simpler to me but I am not sure if I overlook something.
So what would I do in this case?
This has been discussed many times on StackOverflow. Please search, especially for "java date best practice".
Yes, storing everything in UTC is probably the best solution.
You don't say how you are "storing" the dates/times, but if you are using Dates or Joda equivalents, then you should know that their underlying representation is effectively in UTC (they represent a moment in time as an offset in milliseconds since the "Epoch", which is Midnight, Jan 1, 1970 UTC). These dates only have a timezone when you format them as Strings.
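The same point can be shown in TypeScript/JavaScript terms, since a Date there is likewise just a count of milliseconds since the epoch in UTC, and a zone only enters the picture when formatting (the zone name below is just an example):
const instant = new Date(Date.UTC(2024, 0, 1, 3, 0, 0)); // one fixed moment in time
console.log(instant.getTime());     // 1704078000000 -- ms since 1970-01-01T00:00:00Z
console.log(instant.toISOString()); // "2024-01-01T03:00:00.000Z"
console.log(instant.toLocaleString('en-US', { timeZone: 'America/Montreal' }));
// "12/31/2023, 10:00:00 PM" -- the same instant, rendered for a different zone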
Most databases do something similar (store the date in a common timezone, usually UTC). The major exception that I've found is the generally available date-time related column types in MS SqlServer which by default store everything in the local timezone of the server.
Also be aware that if you use SQLite, and you store a date/time by passing a String in SQL that contains a timezone, SQLite will store it without warning, but will ignore the timezone and assume that the timezone is UTC, giving you a result other than what you might expect.
For more on this, see my (old) blog post at http://greybeardedgeek.net/2012/11/24/java-dates/
The other answer is correct. Some more thoughts here.
A time zone is more than the offset from UTC mentioned in the Question. A time zone is also the set of past, present, and future rules for anomalies such as Daylight Saving Time. You should refer to a time zone by its proper name: continent, a slash, and a city or region (for example, America/Montreal). Never use the 3-4 letter codes such as EST or IST.
To search for events in the user's "today", you must know the user’s time zone. For example, a new day dawns earlier in Paris than in Montréal. After the stroke of midnight in Paris we still have a few hours of “yesterday” left to go in Montréal.
While you can make a guess as to the user’s time zone, the most reliable way is to ask the user.
DateTimeZone zone = DateTimeZone.forID( "America/Montreal" );
DateTime now = DateTime.now( zone );
DateTime today = now.withTimeAtStartOfDay();
DateTime tomorrow = today.plusDays( 1 );
// Search for events that start >= today AND that start < tomorrow.
To search Joda-Time objects, use the Comparator built into DateTime. That comparator works across objects of various time zones.
To query a database, convert that pair of DateTime objects into java.sql.Timestamp objects. You do that by extracting and passing the count of milliseconds since the epoch of 1970 in UTC.
long m = today.getMillis();
java.sql.Timestamp tsToday = new java.sql.Timestamp( m );
What use do I have here for knowing the user's time zone? I can't search events using his local time since all dates are stored in UTC. At the moment I am converting the local client time to UTC and sending it as an ISO 8601 GMT string to the server. But this would give me the wrong day in some cases, like I'm mentioning in my question. I'm somehow not getting it...
Virtually all databases have date/time functions that convert and consider time zones. You can use these functions to properly search for an item containing a date in a range specified in a particular timezone, even with the data stored "in UTC". As I mentioned, dates only actually have timezones when you format them. If you tell us what database you're using, we can probably come up with an example for you.
@StefanFalk If you want to search using the user’s definition of today, you need to know her time zone. Once you have a time zone in hand you can determine when the day begins for her and when tomorrow begins. That is the code I gave you in this Answer. For a database query that pair is converted to UTC. For object comparison in memory, no conversion is needed as Joda-Time’s methods isBefore, isAfter, and isEqual works with DateTime values of various time zones. Again, you need to spend some time reading other postings on StackOverflow to wrap your head around date-time work.
|
STACK_EXCHANGE
|
Cloud Connectors allow GoAnywhere to easily integrate with popular SOAP and RESTful web service applications like Salesforce, Box, Dropbox, Microsoft Dynamics CRM, and more. Once a Cloud Connector is installed, you configure the connection properties to the service as a Resource. This reusable resource allows you to specify the cloud connection properties once, and then they can be used over and over in your workflows.
In this tutorial, you will learn how to navigate the Cloud Connector Marketplace, how to download new Cloud Connectors, how to configure Cloud Connector Resources, and how to use Cloud Connector actions in a Project Workflow.
Downloading and Installing Cloud Connectors
The Cloud Connector Marketplace provides an online catalog of Cloud Connectors that can be downloaded to the GoAnywhere installation and used in Project Workflows. To access the Cloud Connector Marketplace, navigate to the System > Cloud Connectors page. Once there, click the Add Cloud Connector button.
Within the Marketplace window, you will find a list of available Cloud Connectors along with a brief description, a version number, the date the connector was last updated, and a list of actions available to that Connector.
Identify the Cloud Connector you wish to install, or search for a connector using a keyword search.
If you are licensed to use the Cloud Connector, select "Install" to download and install it. If not, select "Trial" to download a trial version of the connector. The Marketplace window will close and you will receive confirmation that the connector was successfully installed.
Using a Cloud Connector as a Resource
Once a Cloud Connector is installed or created, you configure the connection properties to the API as a Resource. This reusable Resource allows you to specify authentication and service level information once, and then they can be used over and over in your Workflows.
To add a Cloud Connector Resource:
- Navigate to the Resources page and select ‘Cloud Connectors’ from the Resource Type list.
- Select ‘Add Cloud Connector’.
- Select a domain and a Cloud Connector from the list of those you have installed.
- Click ‘Continue’.
The fields for each Cloud Connector will be unique to the API the Cloud Connector is connecting to. Refer to the API's documentation to determine the connection properties for the API. The example below shows the connection properties for a Salesforce Cloud Connector Resource.
Once you’ve entered the connection properties for the API, you can click the ‘Test’ button to test the connection to that API. The test results will be displayed in a popup window indicating a success or failure.
Using Cloud Connectors in Project Workflows
Cloud Connectors you have installed will appear in the Cloud Connector section of the Component Library in the Project Designer. To navigate to the Project Designer:
- Log in as an Admin User with the Project Designer role.
- From the main menu, select Workflows, and then click the Projects link.
- Drill down to the folder you want to work in.
- Select a Project to edit it. Otherwise, to create a new Project, click the Create a Project link in the page toolbar.
- The Project Designer page will be shown. Expand the Cloud Connector panel in the Component Library.
- Expand a connector in a list to view a list of actions available to that connector.
- To use a Cloud Connector action in a Project Workflow, simply drag the desired action to the Project Outline.
- Select the Cloud Connector Resource you configured above, and then any optional parameters to complete the action.
When executed, the Project will authenticate with the cloud service using the connection properties defined in the Cloud Connector Resource. Once authenticated, any defined actions will then execute.
Cloud Connector tasks can be combined with other tasks to create a complete workflow. For example, a Cloud Connector task could retrieve files from a cloud storage service. Those files could then be read, updated, encrypted, or processed further by other GoAnywhere tasks before being returned to the cloud.
|
OPCFW_CODE
|
[11.x] Add QueriedBy attribute
Along the same vein as the recently added CollectedBy attribute (#53122), I propose to add the QueriedBy attribute which allows users to configure which Builder class to use without having to override the newEloquentBuilder method.
#[QueriedBy(PostBuilder::class)]
class Post extends Model
{
// ...
}
The implementation is basically a carbon copy of the CollectedBy attribute, with memoizing via a static array variable on the Model class.
Let me sit on this one a bit. I feel like custom query builders are much, much more rare than custom collections, which are also rare I think. 😅
I really hope this gets re-opened. I would love to see this in. I use custom eloquent builders in every project.
@taylorotwell at least for those of us who have followed @timacdonald for long enough, I'd guess dedicated query builders are way more common than custom collections, based on this post:
https://tim.macdonald.au/dedicated-eloquent-model-query-builders
I do use 3 custom collections, on a single project. Mostly because there are lots of aggregated reports with different filters on the same view, that benefit from custom filters on those collections.
But since that post came out, in 2019, all of my projects use dedicated query builders for almost every single model.
I also wanted to chime in with my experience. I've never made a custom Collection, while I heavily use custom Builders. The primary reason is better static analysis / IDE autocompletion, and to slim down the models by moving the scopes to the builders.
Larastan does support both custom Collections and Builders, but there are a lot more tests, logic, and discussions centered around the custom Builders.
I would also use this a lot more than CollectedBy for the reasons stated in the comments before.
So, I hope you can reconsider this Taylor.
Stumbled upon this pull request as with the recent addition of CollectedBy immediately thought about an attribute for a custom query as I tend to use custom query builders more than custom collections.
Fingers crossed on this to be reopened.
I have been using custom query builders for years now as well as custom collections, but if I had to decide which one of them to choose I would go for the builders all the time.
Custom builders are very powerful and make the models a lot cleaner especially if you are using a lot of query scopes and provides better support for static analysis and typing as well.
It reduces the model classes to almost only attributes and actions, without changing their interface in any way. This is a no-brainer for me and should be used a lot more than it obviously is.
I think people discussing here are probably aware of the pros, but I just wanted to add my two cents to emphasize that adding custom query builders is more than some neat sugar feature with little value that nobody uses.
That being said, I wouldn't actually need the attribute for this; it would just be the logical step since CollectedBy is already there.
While I understand the reasons given against integrating this attribute, I personally find them less convincing, especially considering the benefits custom query builders bring.
We should be moving people in the direction of using custom builders rather than portraying it as something very special, and the attribute could then help with that.
Yes, it just doesn't make sense to include attributes like ScopedBy, CollectedBy and ObservedBy, but omit this one. Out of all these four, QueriedBy would be the one I would use the most.
What are the main arguments against adding this? The maintenance burden? Performance? Wouldn't those apply to the three others as well?
At the risk of piling on... I use tons of custom query builders and no custom collections. I would love having this attribute in the core framework. Hoping you'll reconsider @taylorotwell.
Taylor doesn't usually look at closed PRs, so I think someone needs to send in a new PR and summarize everything that has been added here since its closing.
|
GITHUB_ARCHIVE
|
Then do some browsing, opening/closing assorted tabs, Notepad, Word, Chrome, etc.; after 5 minutes, check the HWMonitor tab and see what max clock speeds were achieved...
At least one of the cores should, at some point, boost to max during single threaded scenarios, as described above...
If you then try CPU-Z/bench/stress CPU you should see lower clock speeds (and a decent core/package temp) as all cores are run at ~90% load; when the test is stopped, clock speeds/temps should both rapidly drop back down into the upper 30s to mid-40s at near 800 MHz or so (true idle condition, when no Windows tasks are repeatedly stepping in and causing clock speeds to ramp up for a quarter second or two, which is normal).
I just put this system together and was confounded by my 3900X temps. Idle temp was high and, frankly, the "CPU" temp didn't really seem to respond to load in any sort of useful way (making a good fan profile was laughable).
I started experimentation and found that the "cpu" temp had really nothing to do with the core loading. It was mostly memory and IO related:
Idle at 59-63 °C, Mem at 1.38 V, 3800 MT/s (XMP)
Idle at 51-55 °C, Mem at 1.25 V, 3800 MT/s (XMP)
Idle at 41-43 °C, Mem at 1.20 V, 2400 MT/s
Idle at 39-41 °C, Mem at 1.10 V, 2400 MT/s
Loaded at 78-81 °C and slowly climbing, Mem at 1.38 V, 3800 MT/s (XMP)
Loaded at 71-80 °C and slowly climbing, Mem at 1.25 V, 3800 MT/s (XMP)
Loaded at 68-71 °C and slowly climbing, Mem at 1.20 V, 2400 MT/s
Loaded at 65-69 °C and slowly climbing, Mem at 1.10 V, 2400 MT/s
My past experience says pushing a silicon chip connected to a metal heatsink results in an increase in temp over time. With this 3900X, the reported CPU temp increases and decreases instantaneously with load. Really, instant on both the up and down slope... crazy.
IMO this could only happen if the CPU temp was either (a) not connected to a heatsink, or (b) some sort of fake algorithm.
When you have three chips making up a "CPU" what do you report as the temp? ... AMD has clearly messed this up somehow.
|
OPCFW_CODE
|
how to get crisp graphics on android device
I'm a visual designer creating a UI for an app on an Android tablet. The resolution of the tablet is 1024x600 with a density of 160. I'm building the UI in Photoshop on a 1024x600, 72 dpi canvas - is this right? I've seen some previews on the device, and the graphics that are super crisp on my monitor are kinda fuzzy on the device. I would have expected them to be even crisper.
The app won't need to support any other devices, so it's not a question of multiple resources -
Can anyone shed some light as to best way to approach? This is my first mobile project so still learning the ins and outs ; )
FYI I believe the Galaxy Tab identifies itself as an HDPI device regardless of its actual DPI.
Guides from Samsung. Bet you're also developing for that?
If your UI is in 1024x600 pixels (in Photoshop), it should be no problem. I think the problem may come from a few other issues:
A density of 160 is called mdpi in Android; make sure you put the pictures in the drawable-mdpi folder.
Are you sure your picture/graphics aren't stretched by the Android layouts? For example, you may have a button at 100x100 px, but in the layout it is defined as "fill_parent" or "120dip"; then it is stretched and not at native density. The native density for 160 dpi is exactly the same as the pixel size (1 dip = 1 px at 160 dpi), so your button should be "100dip" (or simply use absolute size to get rid of this trouble) - see the sketch below.
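A tiny sketch of the dip-to-pixel relationship mentioned above (illustrative only - Android performs this conversion for you):
// px = dp * (dpi / 160); at 160 dpi (mdpi) the factor is 1, so 100 dp == 100 px.
function dpToPx(dp: number, dpi: number): number {
  return Math.round(dp * (dpi / 160));
}
console.log(dpToPx(100, 160)); // 100 - the mdpi tablet in the question, no rescaling
console.log(dpToPx(100, 240)); // 150 - the same layout on an hdpi device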
I think you could do a simple test: make a 1024x600 px image (PNG), set it as the background of the activity, and make the activity full screen without a title bar. It shouldn't have any problem displaying this way.
One last word: I actually do my final layout in Fireworks, as it has better pixel-level control for small UI graphics. But Photoshop should also be able to do the job.
+1 for Fireworks. Fast, easy, and plenty good enough if you know what you're doing. Photoshop is for heavy image work. If you just need icons and buttons and whatnot, FW fills the bill.
I actually work heavily across applications. I start a quick scratch in Illustrator, do production graphics in FW, and bitmap graphics back in PS. But the final output is still generated from FW. I like the interface in Illustrator; I just wonder why it's not like that in FW.
Haha - nope, not Samsung ;) Actually I just said Photoshop to keep it simple. I use FW exclusively - love it. So yeah - that was our theory: that if we worked at the resolution of the device we would be good, and would even get sharper images because of the higher density.
|
STACK_EXCHANGE
|
|
OPCFW_CODE
|