October has just finished, and for the team here at Vonage, it was a busy one. Earlier in the year, we committed to join this year’s Hacktoberfest, which meant that our October would only be about one thing—Open Source.
We’re no strangers to Open Source at Vonage, with our libraries, code snippets, and demos all on GitHub, and many of our team contributing to, or maintaining other work in addition to that.
We thoroughly enjoyed supporting participants throughout October, sharing our knowledge, and chatting about how significant and awesome open source contributions can be.
What Was Hacktoberfest?
Hacktoberfest has become a key event in the Open Source community’s calendar over the years. Participants were encouraged and rewarded for suggesting changes to software repositories, making the repositories more accurate and accessible for everyone to use.
Every person who submitted four successful pull requests during Hacktoberfest will now be rewarded with swag, including a Hacktoberfest t-shirt and Vonage-branded stickers, or have the opportunity to plant a tree.
Additionally, we also offered $5 Open Collective gift cards and what turned out to be our most in-demand bit of swag ever:
Bamboo Vonage Socks.
Our Hacktoberfest In Numbers
We’ll break down this information a bit more below, but if you’re looking for the TL;DR, here’s what Hacktoberfest 2020 looked like for us:
The most crucial part of any Hacktoberfest is the contributions that people make to projects. October saw a 254% increase in PRs (pull requests) opened across our repositories compared to the average of the last 12 months.
The total number of PRs tagged Hacktoberfest was 226, of which 157 were tagged hacktoberfest-accepted and 117 contributions were merged.
Community and education are also at the heart of Hacktoberfest. With many of the participants making their first contributions to Open Source, providing a place to learn, communicate, inspire, and collaborate was vital.
Our Hacktoberfest Tuesdays events took place every Tuesday (obviously) in October, covering Asia, Europe, Israel, and the Americas.
Taking the form of a small virtual conference, attendees could watch inspiring talks from both our team and special guest maintainers as well as have the opportunity to get their questions answered and spend time with our team.
Official and Community Events
In addition to our events, there were several official Hacktoberfest events, and many more community organized events to be a part of. We were all over those as well!
Lorna Mitchell, one of our Senior Developer Advocates, spoke at the first official Hacktoberfest event earlier in the month:
Additionally, our colleagues took to the ‘virtual’ stage in a truly international manner:
- Garann spoke at virtual events in Paraguay, India, and the U.K.
- Ben represented in India and Israel.
- Diana touched areas that we had previously never reached with talks in Pakistan and Nepal as well as India.
We created 47 new education resources spanning tutorials, articles, videos, and talks to support the community during this year’s event.
Each week there was something to dive into that would either get you started, inspire your next contribution or give you food for thought.
In October, one of the highlight articles was Nahrin’s 33 High Impact Open Source Projects Seeking Contributors, which outlined projects making people’s lives better, driving societal change, or tackling environmental issues (amongst many other things).
It’s a fantastic resource if you’re looking to give back through your Open Source contributions. Regardless of your programming language of choice, you will find something interesting to get involved with.
According to our stats, we streamed over 43 hours of Hacktoberfest specific content on Twitch in October. That is huge!
On top of our regular live streams, we committed to streaming every single working weekday in October to support Hacktoberfest on our VonageDevs Twitch channel.
It was great to have so many regular viewers join us throughout the month. We celebrated with a special, chaotic, all-team stream on Friday, 30th October, where we took the time to highlight key moments, superb contributions, fantastic community members, and more.
Contributions Always Welcome
If you contributed a PR, attended an event, hung out in the Discord, chatted with us on a live stream, or enjoyed one of our talks, thank you.
It was a lot of work, but we truly enjoyed being a part of this year’s Hacktoberfest and especially being part of the community alongside you.
It doesn’t end here, though. Open Source is not just for October! You are more than welcome to contribute to our projects at any time, regardless of your background or experience.
Please find out more by checking out our Hacktoberfest 2020 site that includes lots of resources to help you get started.
|
OPCFW_CODE
|
We stick pretty firmly to the Agile SCRUM software development methodology. I won’t go into great detail about what that means, but here’s the way we work:
We work in a series of two-week ‘sprints’. Each of these is a block of time dedicated to working on a set of tasks laid out at the beginning that won’t change in the middle. Each sprint starts with planning and estimation, has a stand-up every day during the sprint, and finishes with demos and a retrospective.
Good planning takes time. Sorry bosses, but it has to be that way or the actual sprint will suffer. During planning, bite-sized tasks are pulled out of a backlog of features, refined until everyone understands what it’s about, estimated and added to the sprint. All the developers working on that product and the product owner do this together. We aim to come up with a realistic list of goals that we hope to achieve by the end of the sprint, and a prioritised list of tasks (or ‘stories’) that enable us to reach those goals, with a clear deliverable for each task.
The whole team is involved in estimating each task. This way, the experience of many is used to cover all bases - the established team members know about legacy code in the application and the new team members might bring new ways to solve a certain problem. A realistic consensus is agreed upon, which is vital for understanding what’s possible to achieve during a sprint. With each sprint down, the estimations of individual tasks and the backlog features overall get more and more accurate, especially if the team is made up of the same developers each time.
We hold a daily stand-up meeting at 10am. It should last no longer than 10 minutes, involves the whole team and everyone has to tell the others: what they did yesterday, what they’re working on today, and anything blocking them.
It’s not a way to keep tabs on people and know whether they’re skiving off or not! The main reason to have it is so that the team has a good understanding of where everyone is at and what it is we’re all working on. More importantly, blockers are caught early on and help is given (in whatever the best form for that is), arranged for a time after the meeting.
Don’t be late for standup!
It’s important during planning to define what the team expects to see as deliverables at the end of the sprint. These are the tangible end results of your hard work, which you then get to demo to the whole team at the end of a successful iteration. Demos are usually around 5 minutes each.
Make sure to get ready for a demo before you give the demo. The law of Sod indicates that your demo will not go well and will take ages of everyone’s time if you don’t have it ready.
It’s important to spend a little time looking back at each sprint, to work out what went well and what needs fixing in our process. We discuss all of this in a regular retrospective meeting involving the whole team. This is a great way to make sure that we’re always thinking about how we can improve our process, and we fix problems with it early.
This isn’t set in stone by any means, but here’s how we generally approach our work during a sprint.
All the tasks we decided on in planning are kept in Asana. Asana keeps a record of all the information about individual tasks, and also what people have been working on and what stage each task is at. We also use a Kanban board, because it gives a really helpful at-a-glance view of what’s going on in the iteration.
Tasks are listed in the relevant project in priority order, but take a pragmatic approach to deciding which task to do next - for example, whether it depends on other tasks being finished first.
When you need something to do, just grab the next available task you can work on from the Asana project that corresponds to the current sprint, by assigning it to yourself and moving it into the ‘In Progress’ part of the project. While you’re at it, move the corresponding card on the Kanban board along too.
Even if you think you know exactly what you’re about to attempt when grabbing a new task off the pile, it’s always a good idea to double-check with whoever created the task in the first place that you’ll be delivering what’s expected.
Talk through the technical approach with others in the team. Even if you think there’s a clear, simple method to take, it’s still worth double-checking with others, as everyone has different experience and knowledge that may lead to something better or simpler. Doing this also helps to ensure there’s not some other work going on that would impact on your ability to carry out the task.
We use git-flow to manage our versioning. Find out more about how we use git flow
Add your initials to the feature name so we know who started it off. For example:
# Git flow plugin:
$ git flow feature start sm/indexing-awesomeness

# Plain old git:
$ git checkout -b feature/sm/indexing-awesomeness develop
Hack away to your heart’s content. (After you created automated tests that prove it all works, of course.)
When you think the feature’s done, keep it in its branch for now while you do the following checks.
Move the Asana story to “waiting for sign off” and move the card on the Kanban board to this heading too. Then get the Product Owner to take a look at a working example so they can give you the thumbs up. They might need to check with other people before signing off (like a client); it’s their responsibility to do this and let you know when the feature is signed off. The working example can be given on your local development machine, theirs perhaps, or a demo server running on Heroku or EC2. Whatever fits best with the feature in question and the circumstances.
Your code should now be reviewed by another developer via a GitHub pull request. This helps to improve the overall quality of our code. Listen to their comments. Don’t be defensive – they’re only talking frankly about good coding practices! Also, don’t forget you’ll be giving your opinions on their code shortly too.
Squash & Rebase your feature against the latest develop branch
# Squash commits into as few as possible (rule of thumb: <5)
# Example squash & rebase:
$ git checkout develop
$ git pull origin develop
$ git rebase -i develop feature/sm/indexing-awesomeness
Create a GitHub pull-request
Now either set up the pull request through GitHub or if you have the ‘hub’ command (which comes with our boxen setup):
$ hub pull-request -b develop -h feature/sm/indexing-awesomeness
Where possible, use the relevant user story as the title, and include a link to all the details relating to that story or specific task.
Make sure the comparison is against the develop branch, not the master
Assign it to someone
This can be someone working on the same project, or another developer who could understand the code and provide you with useful feedback.
Now the code will get reviewed, and you might want to make changes based on the feedback.
Once the code has been okayed the reviewer can close the pull request.
Merge to develop (no fast-forward)
# Git flow plugin:
$ git flow feature finish sm/indexing-awesomeness

# Plain old git:
$ git checkout develop
$ git merge --no-ff feature/sm/indexing-awesomeness
Delete the remote feature branch
$ git push origin :feature/sm/indexing-awesomeness
Even after being signed off and code-reviewed, your feature might not be ok to be deployed to production. Ideally, every story in the iteration should be able to be deployed as soon as it’s signed off; waiting at this stage can cause all sorts of nasty problems when you try to merge and deploy.
If it looks like you’ll need this delay for a particular feature, use your best judgement and discuss with the team what to do. You might build that feature so it’s backwards compatible; you might start with a task to build a demo version of that feature on a demo branch, then once that’s been signed off have another task to make the live version. Or other options.
The process above works for on-going projects. There’s a bunch of setup stuff involved in setting up a new project. Typically these bits will be:
Create projects in Asana for the backlog and first sprint. The Product Owner will do all that in collaboration with developers. They will include setup tasks in the first sprint.
Create a github repo for it.
Initialise the repo for Git Flow. Push the newly created develop branch up to github.
Add the project to Jenkins so that continuous integration is set up from the start. There’s a simple way to do this where all projects are built in the same way, from a jenkins.sh file in the root directory of each project.
Create a Vagrantfile and associated provisioning steps for development on a local VM (or multiple VMs to represent multiple servers). The best way to do this is to re-use the setup from the latest, most-relevant project.
The provisioning steps will grow as dependencies and configuration get added to the project. They will eventually be used to provision the production servers too, so avoid any steps that are too specific to your local VM setup.
Create a Fabric fabfile with the necessary commands defined for common tasks like ‘test’, ‘deploy’, ‘run’. They’re somewhat project-specific, but look to other projects for examples.
Create any other files and directories required by the particular language or framework being used for this project. For example, a requirements.txt file for listing Python dependencies, or a Gemfile for listing ruby dependencies, that sort of thing. Very project-specific.
Once you’ve set it up right, if you add a ‘#’ followed by an Asana task ID to one of your git commit messages, when pushed up to github Asana will add the commit message into your task. Handy!
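As an aside, the ‘#’ + task-ID convention is easy to sanity-check before you push. Here’s a small Python sketch of the pattern the Asana integration presumably matches (the IDs below are made up for illustration):

```python
import re

def asana_task_ids(commit_message: str) -> list:
    """Extract every '#<digits>' task ID from a commit message."""
    return re.findall(r"#(\d+)", commit_message)

print(asana_task_ids("Fix relevance scoring in search, see #1234 and #5678"))
# → ['1234', '5678']
```

If this returns an empty list for a commit you expected to link, the message is missing the ‘#’ or the ID.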
|
OPCFW_CODE
|
The 5-Second Trick For ResMed CPAP Supplies in Deland FL 32720
But don’t worry, you are not alone. At the Alaska Sleep Clinic we get asked about CPAP therapy every day by our patients, and we strive to give the best information possible in helping them, and you, better understand what CPAP therapy is, how it works, and the remarkable results it can have on your life.
The most common CPAP side-effects are mask or pressure related. Some patients will experience claustrophobia with the CPAP mask. Some patients will develop nasal congestion while others may experience rhinitis or a runny nose. Although CPAP side-effects are a nuisance, serious side-effects are very rare. Research has also shown that side-effects are rarely the reason patients stop using CPAP. Simple steps can really help with reducing the side-effects of CPAP. Here are a few tips:
* Make sure the mask you have is fitted correctly. A mask that is too large or too small will be uncomfortable.
* Nasal symptoms frequently respond to heated humidification of the CPAP air. Most CPAP machines come with heated humidifiers but many people do not use them.
* Do not overtighten the mask. This common mistake leads to mask soreness and damage to the skin. If uncomfortable air leaks occur, consider changing to a different mask.
How long after I begin CPAP treatment will I start to notice a difference in my fatigue and energy levels?
CPAP Supplies in Deland Florida
CPAP Rentals in Deland Florida
Deland FL - BingNews Search results
AROUND TOWN: Crafts, photography, Random Fandom and more
at the Garden Club of DeLand, 865 S. Alabama Ave. Scotti is dedicated to the rescue, rehabilitation and release of sick, injured and orphaned wildlife. His main rescue area is Central Florida, and ...
Central Florida snakes are being killed by bloodsucking worms found in pythons
Concern over the parasites began last August after researchers at Stetson University in DeLand found a venomous pygmy rattlesnake ... miles away from the pythons they usually infect in South Florida, ...
Vegetable gardens move to Florida House menu
They challenged the constitutionality of the ordinance but lost in court, with the Florida Supreme Court declining to take up the issue. Backers of the bill, sponsored by Rep. Elizabeth Fetterhoff, ...
Want to grow veggies in your front yard? Florida House ready to weigh in on regulations
Elizabeth Fetterhoff, R-DeLand, was approved at the House State Affairs ... They appealed the ruling to the Florida Supreme Court, which declined to grant review. Ricketts and Carroll faced $50 in ...
Stetson University professor says bloodsucking worms are killing Florida’s rattlesnakes
DeLand, Fla - Dr. Terence Farrell and his students at Stetson ... That fact connects the worms to Burmese pythons, who are native to that region and happen to be an invasive species in Florida.
|
OPCFW_CODE
|
Check if value in controls was changed from the original data on button click
I have a page with different controls (checkboxes, textareas, ddls, etc.). On page load the data is loaded into the controls.
What would be a good approach to check if the data was modified from the original data after the button was clicked? Using C#.
Thanks,
Do you want to know the appropriate event in the life cycle? Appropriate what?
sorry spelling mistake, meant to write: good approach
@Ben you can try with viewstate or hidden input
Store the default content of the controls in HiddenFields. Check the latest content of the controls by comparing against the values in the hidden fields in the button's Click event.
You can use ViewState["Key"]
try with
1. In the load
ViewState["Key"] = texbox.Text;
2. Compare in the post the two values
I have 20 fields, so you saying to add 20 viewstate parameters and check each one on button click if it was changed?
You can inherit from these controls and create custom controls. There you can create property to store your initial value. Later you can compare it to the current value and see if it is changed or not.
If this is a web application I would look at Session variables, ViewState, etc. - I personally prefer Session variables.
If this is a Windows app I would look at Properties; there are a couple of ways you could do it.
Can you provide an example of what type of data you are wanting to hold?
It sounds like you are looking at creating something like a DELTA.
this is a web app, I can use session variables, I have 20 fields, so you saying to check each field and compare to the session variable and see if it was changed?
Session is an Object so you could lookup the Session.Add Method or you could create a HashFields too
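Whatever storage you pick (ViewState, Session, hidden fields), every suggestion in this thread boils down to the same pattern: snapshot the original values on load, then diff the snapshot against the current values on postback. Here is a minimal sketch of that diff, written in Python for brevity (the field names are invented); in ASP.NET you would build both dictionaries from your controls and keep the snapshot in ViewState or Session:

```python
def changed_fields(original: dict, current: dict) -> list:
    """Return the names of fields whose current value differs from the snapshot."""
    return sorted(name for name in original if current.get(name) != original[name])

# On page load, store this snapshot (e.g. in ViewState or Session):
original = {"email": "a@example.com", "newsletter": True, "notes": ""}
# On button click, read the controls back and compare:
current = {"email": "b@example.com", "newsletter": True, "notes": ""}

print(changed_fields(original, current))  # → ['email']
```

With 20 fields this is one loop over the snapshot rather than 20 hand-written comparisons.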
Are you doing this to check for concurrency? If so, I would recommend using the entity data model. It has built in features to check if a field has changed from the original. Here is a quick example how to use it:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
this is using mvc, i am using web app, can this be done in web app project?
When you say "web app", do you mean "web forms"? If so, then yes, Entity Framework is agnostic about MVC or Web Forms. Here is the same example in web forms: http://www.asp.net/web-forms/tutorials/getting-started-with-ef/the-entity-framework-and-aspnet-getting-started-part-1
..edit > this is the web forms tutorial on concurrency: http://www.asp.net/web-forms/tutorials/continuing-with-ef/handling-concurrency-with-the-entity-framework-in-an-asp-net-web-application
|
STACK_EXCHANGE
|
from pygame import Surface

import logging

from albow.core.ResourceUtility import ResourceUtility
from albow.utils import overridable_property
from albow.widgets.ButtonBase import ButtonBase
from albow.widgets.Image import Image


class ImageButton(ButtonBase, Image):
    """
    An ImageButton is a button whose appearance is defined by an image.
    """

    disabledBgImage = overridable_property('disabledBgImage')
    """
    The disabled background image
    """

    enabledBgImage = overridable_property('enabledBgImage')
    """
    The enabled background image
    """

    highlightedBgImage = overridable_property('highlightedBgImage')
    """
    The highlighted background image
    """

    def __init__(self, disabledBgImage: str = None, enabledBgImage: str = None, highlightedBgImage: str = None, **kwds):
        """
        You must as a minimum supply a single image via the `theImage` parameter. Optionally, you can supply
        enabled, disabled, and highlighted images

        Args:
            disabledBgImage:    The image to display when the button is disabled
            enabledBgImage:     The image to display when the button is enabled
            highlightedBgImage: The image to display when the button is highlighted
            **kwds:
        """
        Image.__init__(self, **kwds)

        self.logger = logging.getLogger(__name__)

        self._disabledBgImage = None
        self._enabledBgImage = None
        self._highlightedBgImage = None

        if disabledBgImage is not None:
            self._disabledBgImage = ResourceUtility.get_image(disabledBgImage)
        if enabledBgImage is not None:
            self._enabledBgImage = ResourceUtility.get_image(enabledBgImage)
        if highlightedBgImage is not None:
            self._highlightedBgImage = ResourceUtility.get_image(highlightedBgImage)

    def get_disabledBgImage(self):
        return self._disabledBgImage

    def set_disabledBgImage(self, theNewImage: Surface):
        self._disabledBgImage = theNewImage

    def get_enabledBgImage(self):
        return self._enabledBgImage

    def set_enabledBgImage(self, theNewImage: Surface):
        self._enabledBgImage = theNewImage

    def get_highlightedBgImage(self) -> Surface:
        return self._highlightedBgImage

    def set_highlightedBgImage(self, theNewImage: Surface):
        self._highlightedBgImage = theNewImage

    def get_highlighted(self):
        return self._highlighted

    def set_highlighted(self, theNewValue: bool):
        self._highlighted = theNewValue

    def draw(self, surface: Surface):
        # Pick the background image that matches the current button state,
        # then draw the foreground image (if any) on top of it.
        dbi = self.disabledBgImage
        ebi = self.enabledBgImage
        hbi = self.highlightedBgImage
        if not self.enabled:
            if dbi:
                self.draw_image(surface, dbi)
        elif self.highlighted:
            if hbi:
                self.draw_image(surface, hbi)
            else:
                surface.fill(self.highlight_color)
        else:
            if ebi:
                self.draw_image(surface, ebi)
        fgi = self.image
        if fgi:
            self.draw_image(surface, fgi)
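For readers unfamiliar with albow: `overridable_property`, imported from `albow.utils`, is what routes attribute access through the `get_*`/`set_*` methods above. Its real implementation is not shown here, but a minimal stand-in with the same observable behaviour might look like this (the `Demo` class is purely illustrative):

```python
def overridable_property(name, doc=None):
    """A property that delegates to get_<name>/set_<name>, so subclasses
    can change behaviour simply by overriding those methods."""
    def fget(self):
        return getattr(self, "get_" + name)()
    def fset(self, value):
        getattr(self, "set_" + name)(value)
    return property(fget, fset, doc=doc)


class Demo:
    value = overridable_property("value")

    def __init__(self):
        self._value = 0

    def get_value(self):
        return self._value

    def set_value(self, theNewValue):
        self._value = theNewValue


d = Demo()
d.value = 42
print(d.value)  # → 42
```

The indirection is the point: a plain `property` baked into `ImageButton` could not be overridden per-instance or per-subclass without redefining the property itself, whereas here overriding `get_highlightedBgImage` alone is enough.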
|
STACK_EDU
|
Please stop suggesting to use 777. You're making your file writeable by everyone, which pretty much means you lose all security that the permission system was designed for. If you suggest this, think about the consequences it may have on a poorly configured webserver: it would become incredibly easy to "hack" the website, by overwriting the files. So, don't.
Michael: there's a perfectly viable reason why your script can't create the directory, the user running PHP (that might be different from Apache) simply doesn't have sufficient permissions to do so. Instead of changing the permissions, I think you should solve the underlying problem, meaning your files have the wrong owner, or Apache or PHP is running under the wrong user.
Now, it seems like you have your own server installed. You can determine which user is running PHP by running a simple script that calls the 'whoami' program installed in most linuxes:
If all is right, you should see the username PHP is running under. Depending on your OS, this might be 'www-data', 'nobody', 'http', or any variation. If your website is the only website running, this is easy to change by changing the user Apache runs under. If you have Debian, like I tend to, you can edit the file /etc/apache2/envvars (as root), and change the value for APACHE_RUN_USER. Depending on your OS, this variable might be set in a different configuration file, so if you can't find it in /etc/apache2/envvars, try to search for the variable declaration by using:
$ grep -R "APACHE_RUN_USER=" .
From the directory all apache-config files are in.
If you're not the only one on the server, you might want to consider creating user accounts for every website, and using something like Apache2-MPM-ITK to change the RUN_USER depending on which website is called. Also, make sure that the user the PHP process is running under is the owner of the files, and the directories. You can accomplish that by using chown:
% chown theuser:theuser -R /var/www/website/
If PHP is running with its own user, and is the owner of the files and directories it needs to write in, the permission 700 would be enough. I tend to use 750 for most files myself though, as I generally have multiple users in that group, and they can have reading permissions. So, you can change the permissions:
% chmod 0750 -R /var/www/website/
That should be it. If you having issues, let us know, and please don't ever take up any advice that essentially tells you: if security is bothering you, remove the security.
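The two checks described above - which user am I running as, and what mode does the directory actually have - can also be scripted. Here is a small Python sketch standing in for the PHP whoami trick (the temporary directory is purely for illustration; point it at your real web root instead):

```python
import getpass
import os
import stat
import tempfile

def diagnose(path: str) -> dict:
    """Report the effective user plus the permission bits on path."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return {
        "user": getpass.getuser(),       # who this process runs as
        "mode": oct(mode),               # e.g. '0o750'
        "writable": os.access(path, os.W_OK),
    }

# Illustrative stand-in for /var/www/website/:
website_root = tempfile.mkdtemp()
os.chmod(website_root, 0o750)
print(diagnose(website_root))
```

If "user" is not the owner of the web root, or "writable" is False, that is the underlying problem to fix - not an excuse for 777.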
|
OPCFW_CODE
|
Hi. James from engVid. I was going to try to... A shoe and a book joke, but I didn't think it would go well. But Mr. E is saying to me: "I gotta hand it to you." Right? "You tried." Yeah, I did try. Unfortunately I failed. Today I want to teach you about body parts as verbs, and how certain parts of our body, from our hands to our mouths, to our heads can be used as verbs and have a meaning. Now, before I go any further, I want to say two things. Thank you to Baz and Tomo. Thanks, guys, you made this lesson possible with some of your suggestions. And if you guys have suggestions for me at all for lessons, please, don't hesitate. Go to engVid, www.engvid.com, and just say, you know: "Can you teach this, this, and this?" or "Could you help us with...?" and you might get your name on the board. Now, I'm going to move on to our lesson, but just to point out because you grammar heads out there will say: "He wrote 'gotta', and that's not a word in English." You're right, this is slang. But I'm saying: "You gotta hand it to me", because I'm using one of these body parts as a verb right there: "hand it", it means have got to. "I have got to hand it to you." But in English, we say: "gotta" because it's faster and simpler. Right? So: "I have got to hand it to you" is very formal, "I gotta hand it to you" is very natural. Keep that in mind. If you're writing, write: "I have got to", but if you're speaking, you could say to a Canadian: "I gotta get going now", and they'll understand you have to go. Cool? All right. Moving on. First things we want to talk about, and I tried to do this in order with your body so you will remember the order. "Head", I have a head. I cannot walk like this, it doesn't make sense. I turn my head in the direction I'm going. So, when somebody says: "Where are you heading?" they're saying: "I see your head is going in this direction. To where are you going?" So: "heading" means direction.
"He was heading to his house", that means the direction he was going of his house. "She was heading to the store", she was going in the direction of the store. Number one: "heading". Number two: "eyeball". "To eyeball somebody" is to look at them. Usually used in a negative sense. If someone says to you: "Are you eyeballing me?" It means: "Are you staring at me or looking at me? Because I don't like how you look at me, okay? Stop doing it." Okay? So: "to eyeball someone". Maybe you, you know... Sometimes you've seen women look at other women, and they look them up and down, like: "Look at her." They're eyeballing, because you can see their eyes moving and checking them out. Or guys eyeball each other, like: "Yeah, he thinks he's tough", and they eyeball you. Okay? Number two: "to eyeball". Number three: "neck". I'm not a vampire, I don't... I don't want to bite you and get your blood, but "necking" isn't when two people put their necks together, but "necking" is kissing, but long-time kissing, so it's like you're with your partner: "[Kisses]". "Necking", okay? So that's why I have two lips, because they're kissing and that's why the two people are happy because messing... Messing. [Laughs] Kissing means... "Necking" means long-term kissing or long-time kissing and passionate kissing. Okay? Number four: "mouth off". You can see the mouth is jumping off of a box. Let me finish that box, it doesn't look like a full box, there. So it's jumping off a box. "Mouth off" is to say things, like: "Get out of here. I don't care." It's being rude. Being rude, maybe sometimes using slang towards someone. So, for example, if your dad were to say: "Hey, could you pick up the box?" And you go: "Yo, old man, why don't you pick up the box? You're bigger than me, you should pick up..." You're mouthing off. I would say: "Stop mouthing off. Stop being rude." Okay? Or: "...talking back to me like that". "Mouthing off". 
"Shoulder", "shoulder a burden", that's just one example, but when you shoulder something, like a responsibility, it means you carry it with you. You carry it with you. So if you're shouldering many responsibilities, maybe you are a student, maybe you're trying to learn English, maybe you have a job, maybe you have a fam-... That's a lot of things to put on your shoulders. Because shoulders are used to carry, so you're carrying a lot of these things on your shoulder. Okay? Next one, number six: "armed". Dunh-dunh-dunh-dunh. "Arm", this is your arm. We say: "armed" to mean have a weapon, like a gun. "Pewng." That's a phaser, by the way, from Star Trek. "Pewng, pewng, pewng, plewng." Or a sword. "Ta-ching, ching." Even a knife. You can use a pen as a weapon. In fact, to be honest, if you're armed, you could use words as weapons. It's anything that can hurt someone, we say they were armed. Right? So if you're not very smart, you might not be well-armed in an argument. Sorry, it's funny. Really, it is. But think of "armed" being a weapon, like a gun, or a knife, or a sword. Okay? So, are the... Were the people armed? Did they have weapons? "Elbow", that's this part of your arm, the elbow. See that part? That's your elbow. Okay? Now, I don't know where you are in the world, but Canadians will know this one and Americans, but if you elbow somebody in hockey it means to hit them with your elbow. So: "elbowing" usually means to either hit somebody with this part of your body, or to push your way into a situation. And it means there's physical contact or a little bit of violence, because if I elbow into the room, it means I'm going: "Excuse me. Excuse me. Excuse me", and I use these to get room. Or if I elbow past you, you're standing there and I go: "Excuse me, got to go", and I will hit you with my elbow to make you move. All right? But if you watch hockey, elbows happen all the time. Okay? So: "elbow someone". 
Now, because this is YouTube I'm going to give you the finger, but it's not this finger, it's another finger. This finger here, but I'm not allowed to show it on network TV, or kids... The kids' channel. So: "give someone the finger" is not this finger... Okay, don't use this finger, don't use this finger, don't use this finger, don't use this finger, use this one. But I have to illustrate it like that. It means to tell someone to go away in a very strong way. In fact, you might say it's the F-word. You can go find it out for yourself. But if you go like this: "Hey, you", and I give you the finger, I probably will have to run away because you're going to probably want to hit me back. Okay? So, go figure out what "the finger" is. Number nine: "butt in line". Okay, I don't know if you can see my butt - that's my bum-bum, but you can't see it. So, hang on a second. The things I do for engVid. Okay. Dunh-dunh-dunh. Okay, so, that is my butt. Get a good look. Okay? "To butt in line" means to take this thing and to push your way in line. What do I mean? I mean there is a line... Sorry, give me a second. Told you, all the stuff I do for engVid. There is a line and everyone's lined up nicely, and you're like... Remember "elbow"? You go: "Excuse me", and people go: -"You can't butt in line. Your butt has to go back with everyone else at the end." -"Damn it." Because if you butt in line, you try to get in line when you shouldn't. Don't try that in England - the queue is everything. You do that England, they'll all say: "Excuse me? Right, you can't butt in line."
|
OPCFW_CODE
|
I have an mp4 file I need to link to from my website. I have uploaded it to my web host's server using FTP and added the link to my webpage. But when I click on the image/link, I get a page that says “the page cannot be found”. I’ve tried a number of things with no success. Can anyone tell me why this is happening? This file was provided by the client. I am unable to change its format in any way. I can run it on my computer from the folder it is in, so I know it works. I just can’t seem to get it to work on the webpage.
The website is: http://www.oaknoll.com
The video appears at the bottom of the page. It’s the 50th Anniversary video.
Check the path and filename match exactly. Are you sure the file is not in any sub-folder?
It may be better to embed the video, as a direct link to the file may result in a download rather than playback.
I originally had it in a folder named “video” but since that didn’t seem to work, I tried putting it in the root folder with the webpage files. That didn’t work either. I haven’t tried embedding it as it seems like a large file, but I will try it.
I am looking at my server using Filezilla and the mp4 file appears in both the “video” folder and the “root” folder (the same place the index.asp file is that the video is on, I believe that is considered the root folder). I removed the spaces from the mp4 file name thinking maybe that was causing problems. Still nothing. But the mp4 file appears in both folders on my server when I look at it thru Filezilla. Ugh…
If it is certain the file is there, named the same, and there isn’t a cached page being used, the only other thing I can think of is inadequate permission values.
Hi there sarb,
are you absolutely certain that Oaknoll-TV-Show-Proof is correct?
I’ve taken a screen shot of my filezilla window showing the two files. But I can’t figure out how to insert the picture into this post. Any suggestions?
You can drag and drop into the edit window, or use the upload button on the editor.
You might need to add something like this to your .htaccess file:
AddType video/m4v .m4v
AddType video/ogg .ogv
AddType video/mp4 .mp4
AddType video/webm .webm
Seeing all those “.asp” files I have a strong feeling there isn’t an htaccess file to edit.
I don’t know, but my take from
Add a MIME Type (IIS 7)
You should create MIME types to help clients handle new file name extensions appropriately. If IIS 7 does not recognize the file name extension requested by the client, IIS 7 sends the content as the default MIME type, which is Application. This MIME type signifies that the file contains application data, and usually means that clients cannot process the file.
is that it would give an error message other than
HTTP Error 404 - File or directory not found.
Internet Information Services (IIS)
But maybe not.
In any case, making sure the MIME type is there (and supported) is a good idea.
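Since those “.asp” files point to IIS rather than Apache, the equivalent of the .htaccess lines above is a web.config entry in the site root. A sketch only, assuming IIS 7+ and that you're able to drop a web.config there; note that newer IIS versions may already register video/mp4, and adding a duplicate mimeMap causes a 500 error:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Tell IIS how to serve .mp4 files it doesn't already recognize,
           so it returns the video instead of a 404. -->
      <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
    </staticContent>
  </system.webServer>
</configuration>
```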
ya, no htaccess that i can find…
If you have Google Webmaster Tools installed try:
/Crawl/Fetch as Google
The results are usually quite comprehensive.
Not sure what /Crawl/Fetch is?
|
OPCFW_CODE
|
package cs12b;
import java.util.Arrays;
/*
* Tennessee Philips Ward, 1614708, and Kelsy Lee, 1587641
* Class 12B
* Autocomplete.java
* This program takes information from Term.java and BinarySearchDeluxe.java to find all the terms with matching prefixes
* and the number of terms with said prefix
* Using BinarySearchDeluxe.java methods, you can create a new array of terms that is sorted by prefix order or by
* reverse weight.
* See main for test examples
*/
public class Autocomplete {
    // initializes the data structure from the given array of terms
    private final Term[] aTerm;

    public Autocomplete(Term[] terms) {
        aTerm = terms;
    }

    // returns all terms that start with the given prefix, in descending order of weight
    public Term[] allMatches(String prefix) {
        Arrays.sort(aTerm, Term.byPrefixOrder(prefix.length()));
        Term key = new Term(prefix, 0);
        // firstIndexOf returns -1 when no term matches the prefix
        int first = BinarySearchDeluxe.firstIndexOf(aTerm, key, Term.byPrefixOrder(prefix.length()));
        if (first == -1) {
            return new Term[0]; // no term starts with this prefix
        }
        int last = BinarySearchDeluxe.lastIndexOf(aTerm, key, Term.byPrefixOrder(prefix.length()));
        Term[] matches = Arrays.copyOfRange(aTerm, first, last + 1);
        Arrays.sort(matches, Term.byReverseWeightOrder());
        return matches;
    }

    // returns the number of terms that start with the given prefix
    public int numOfMatches(String prefix) {
        Arrays.sort(aTerm, Term.byPrefixOrder(prefix.length()));
        Term key = new Term(prefix, 0);
        int first = BinarySearchDeluxe.firstIndexOf(aTerm, key, Term.byPrefixOrder(prefix.length()));
        if (first == -1) {
            return 0; // not found at all
        }
        return BinarySearchDeluxe.lastIndexOf(aTerm, key, Term.byPrefixOrder(prefix.length())) - first + 1;
    }
//unit testing makes me cry
public static void main(String[] args) {
Term[] terms = {new Term("oief",1), new Term("dsfsgh", 2), new Term("fsdgherth",3), new Term("fsml", 69), new Term("fskys", 420), new Term("birb", 4)};
System.out.println(Arrays.toString(new Autocomplete(terms).allMatches("fs")));
System.out.println(new Autocomplete(terms).numOfMatches("fs"));
System.out.println(new Autocomplete(terms).numOfMatches("fsz"));
}
}
|
STACK_EDU
|
Modded Minecraft Uses Almost All RAM
I've been putting together my own collection of 1.7.10 mods for playing Minecraft. Recently, I've noticed it uses almost all of my 13 GB of available RAM, even though it is saying it is using 2-2.5 GBs. My guess is that this is because one of my mods is experiencing a memory leak, causing the ballooning in RAM usage. However, I am having trouble finding a way to learn which, since I don't have enough RAM left to start any other processes once Minecraft is up and running. Also, I can't feasibly solve this with trial and error, since I have 112 mods installed.
My question is, how can I pinpoint the mod that is causing this issue? My constraints are that I cannot use trial-and-error and that I cannot start any processes after Minecraft has fully loaded. That would result in a forking error.
Here's a list of my installed mods, a la Pastebin: http://pastebin.com/5Q7LGhs3
112 mods is your problem. The best way to figure out which one is to disable them, one by one, until your RAM usage drops.
You say it's using 13 GB of RAM but also that it's only using 2-2.5 GB of RAM. That makes no sense to me; how are you tracking these numbers?
@Frank Packs like TPPI have over 200 mods and only use 2.5 GBs of RAM. Are you sure there isn't any debugger that could show specific threads or classes that are causing me trouble?
@James Before starting Minecraft, I run top in a Terminal session. This shows how much memory is being used in total, as well as how much individual programs are using. top shows Minecraft jumping between 2 and 3 GB of RAM, which matches what Minecraft itself reports. However, once it has fully started up, my overall RAM usage jumps from 3 GB to nearly 16 GB.
You might be able to gain some clues from console output, but ultimately, trial and error may be your only option. Can you post a list of the mods you are using? (pastebin is fine) We might be able to figure something out based on that. Maybe.
Sure thing: http://pastebin.com/5Q7LGhs3
So far I've found people reporting memory leaks with this particular version of Minecraft Loader. Try removing that and see what happens. I'll keep poking at the list in the mean time.
This particular version of Minecraft Loader has been reported to cause memory leaks. Remove that mod for now, at least until a new version comes out.
|
STACK_EXCHANGE
|
As you may recall, I had enthusiastic expectations and resolutions1 for Dutch Clojure Days. My first Clojure-only conference, my first proper face-to-face with the community. How could I not be excited?
On Saturday 21st at 8:30 am sharp we were at the TQ building reception, greeted by Carlo Sciolla, one of the organisers. A couple of words on the venue: a simple but elegant building, close to Dam Square and right in front of a fascinating flower market. The conference happened on the fourth floor, with a balcony to enjoy the outstanding view on the city, and food and drinks for everybody. My first, huge “thank you, DCD!” goes to the vegetarian option which was palatable for a vegan, but let’s keep the cheering and the hand-clapping for the end.
Eleven speakers were waiting for us. Vijay Kiran set the stage and the playful mood of the day, leaving soon room to Alex Yakushev. “Embrace the JVM” was a talk to treasure. Observability, performance profiling, memory inspection. I am by no means a JVM expert, however the tools Alex showed us will definitely help me get a better understanding of the machinery behind Clojure.
Simon Belak was up next talking about transducers and statistical analysis. This was probably the hardest one for me. I haven’t found a way to appreciate the value of transducers yet, and statistical analysis is not my strongest skill. But I still appreciated the concept of sketch algorithms and I will hunt histograms pretty soon.
Srihari Sriraman with “Practical Generative Testing Patterns” blew my mind and,
if you fancy ratings and such, was the highlight of the day. We all know
test.check is good, but the approach of Srihari to automation, seeding
relevant data and testing plausible behaviours left me eager to grab my keyboard
and implement something similar.
After lunch we were treated to one more talk before the lightning sessions. Wilker Lúcio explained the beauty and ease of use of GraphQL, an interesting alternative to REST for better APIs.
The lightning talks kicked off with some magical REPL-debugging from Valentin.
scope-capture looked promising, and I can only hope for an
integration with CIDER. Dr Roland Kay reminded us of the usefulness of
clojure.spec, although if I had to base my opinion of clojure.spec on his talk,
it roughly looked like the type-system Clojure is missing. No trolling intended.
Thomas van der Veen hit the MQTT broker pedal, mixing Java and Clojure, but I am
still not sure I got the purpose of the experiment aside from the sake of
learning. Ray McDermott closed the lightning sessions with an amazing
browser-driven, multi-user REPL he is devising which can make live
pair-programming scattered around the world a breeze.
The last three talks reflected experiences of using Clojure for business. Josh Glover, Philip Mates and Pierre-Yves Ritschard shared with us the journeys of their companies and projects and how designing, developing, and testing have only improved since their move to our beloved language.
Drinks followed before a bit of REPL-driven comedy courtesy of Ray McDermott. Suffice it to say we sang the Clojure version of Bowie’s “Rebel Rebel” aptly entitled “REPL REPL”. If you weren’t there, well, you don’t know what you missed.
Dutch Clojure Days left me with the impression that the Clojure community is alive and hard-working, and its heart is in the right place. Ideas flourish, projects boom, boundaries get stretched. We can only be thankful to the DCD staff for being able to set up such a pleasant event, asking us only to join them to share our passion.
|
OPCFW_CODE
|
unity-design team mailing list archive
Privacy setting for intranet lenses
So after a general flamefest about the amazon lens we ended up with a
privacy flag for lenses, where the user can request that they don't get
results from the internet. Right now, writing a lens to the spec, it
won't respect the privacy flag at all, however it is possible to query
the status of the flag and implement privacy in the lens, I expect this
will be documented at some point but it is fairly clear how to do it in
the source of existing lenses. This is fine, I don't like it much, but
it works and keeps people happy about not sending "termi" to the
datacentre in Amazon's hollowed out volcano lair.
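For what it's worth, the lookup itself is simple; the decision logic can be kept testable by abstracting the settings read away. A sketch only — the schema and key names mentioned below (`com.canonical.Unity.Lenses` / `remote-content-search`) are my assumption from reading existing lens sources, so verify them against your Unity version:

```python
def remote_search_enabled(get_setting):
    """Return True when the user allows remote/online results.

    `get_setting` abstracts the GSettings read, which in a real lens would
    be something like (assumption, check your Unity version):
        Gio.Settings.new("com.canonical.Unity.Lenses").get_string("remote-content-search")
    Keeping it as a callable lets the decision logic be tested without a
    running desktop session.
    """
    return get_setting() != "none"


# A lens's search handler would consult this before dispatching any query
# to a remote backend -- intranet or internet alike.
```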
I want to write a lens to search intranet applications. Stuff like
OpenERP, vtiger, Peoplesoft, SAP, Sage accounts etc. Things people have
for their business as intranet web applications. You type in a customer
reference number or invoice number and get all the stuff relating to
them turn up in the lens, click a thing and you go straight to that bit
of data on your intranet site. The lens would need some configuration to
point it to your local server, and would need some authentication. I
have this kind of working, in a hard coded demo way, it needs a tidy up,
but fine, I can do that.
The problem I have is with the privacy flag. Should I respect it or not?
If my system doesn't respect that flag then "OMG EVIL!!!11!!" I am
sending local searches to the company system which might be logged and
people might be searching for their CV that they are sending to a rival
employer or all kinds of dastardly scenarios that can be manufactured to
prove this is a gross violation of employee privacy and all my fault.
The other possibility is that I do respect the flag. This means that to
turn on my lens you need to turn on the Amazon search stuff, so I have
Amazon product searches as a dependency of my lens working, plus my lens
positively encourages you to type in customer references and stuff that
really I don't want to search for on Amazon. I don't want my lens to
only work if you send your customer reference searches over to Amazon
because that would be "OMG EVIL!!!11!!" and all my fault.
I am coming to the conclusion that I can't release this because I am
evil either way. The privacy flag is having a chilling effect.
I work at http://libertus.co.uk
|
OPCFW_CODE
|
My free registry cleaner list is one of the more popular software lists on my site. With so many scam registry tools out there, no wonder so many look for a true freeware program to solve their Windows Registry woes.
But are there really problems that build up in the registry that need fixing? Are registry cleaners the solution to most of my computer issues? The answers to those questions might surprise you.
"Do I need to run a registry cleaner on a regular basis?"
No. In case you missed that, the answer again is: No.
Contrary to the online advertising pitches, the bad information from your neighbor, and perhaps your own belief prior to this moment, registry cleaning is NOT a computer maintenance task. I cannot be clearer on this topic.
Years ago, registry cleaners were more often, and more correctly, referred to as registry repair programs because that's what they do - they repair certain kinds of issues in the Windows Registry that cause a very short list of computer problems.
"If I don't need to clean my registry every day/week/month/year then what do I need a registry cleaner for?"
Registry cleaners can be useful tools to solve certain kinds of problems in the registry, like those created when a program doesn't uninstall correctly or a malware infection isn't cleaned up properly.
Interestingly, the most useful parts of modern registry cleaners are some of their features that have nothing to do with the registry at all. Registry cleaners have morphed into overall "system cleaners" of sorts, removing not only the unused registry key here and there, but also MRU lists, temporary files, browser download histories, and more.
And while we're talking about it: no, you also don't need to regularly clean out those other areas of your computer either. While that data might take up space, it's not often a lot, nor does it usually cause any problems by simply existing.
"How do I know if the problem I'm having with my computer can be solved by a registry cleaner?"
Chances are it can't be.
It's starting to sound like I hate registry cleaners, doesn't it? Not true. I just don't want you to get the slightest impression that registry cleaning is a panacea for your computer's ills.
The only relatively common problem that registry cleaners are good at solving are error messages at Windows startup about missing files. Even in that case, using a registry cleaner is just one of many useful troubleshooting steps to try in that situation.
A registry cleaner will not fix a computer startup problem. A registry cleaner will not fix a Blue Screen of Death. And, ironically, a registry cleaner will not fix any issue that Windows actually reports as a registry issue like registry corruption, a missing registry, etc.
"My favorite registry cleaner says it fixes LOTS of problems in the registry. Is that not true?"
Most registry cleaners "do" a lot of "stuff" in the registry, but I'd argue that most of that "doing" is fixing problems that simply don't exist.
The long list of issues that your registry cleaner will show you, and then impressively delete in just a few seconds, are mostly registry keys that point to files or other items no longer on your computer. That fact alone does not indicate a problem. You could fill the Windows Registry with all sorts of unnecessary extra information and you'd never know.
The documentation with most legitimate registry cleaners will admit that the value of removing these entries is simply a "smaller registry." However, many stretch the truth a bit in just the next sentence, saying that a smaller registry means a faster computer. In fact, speeding up your computer is often one of the highlighted benefits of regularly running a registry cleaner.
As far as I'm aware, however, there's no evidence that a smaller registry has any positive effect on computer performance. While I suppose a drastic decrease in registry size could have a small impact on how fast Windows does certain things, the small amount of unnecessary data a registry cleaner will remove has but a tiny impact on your registry's size.
"But cleaning my registry speeds up my computer, right?"
Wrong. See the last few paragraphs in the previous question.
"OK, maybe registry cleaning is overrated. But what's wrong with running one every day/week/month/year, just in case?"
A few reasons come to mind:
- Letting an automated tool remove registry keys, especially ones not really causing problems, is risky.
- It's a waste of your time.
- It's a waste of your computer's resources.
Actually, I'd go beyond overrated and say unnecessary. Why would you want to do any sort of maintenance that's unnecessary?
"Are commercial registry cleaners better than free ones? Do you recommend anything other than free cleaners?"
I have yet to find a commercial registry cleaner that comes close to the features, safety, and speed of any of the top several freeware registry cleaners in my list.
In most cases, you get what you pay for. In the case of registry cleaners, however, it seems that free is best.
"CCLEANER ISN'T FREE!!!"
Yes it is. (I have actually gotten emails with the above statement in ALL CAPS!)
CCleaner, in case you don't know, is the registry cleaner that I most frequently recommend and I can assure you that it's 100% free.
Unfortunately, one or more not-so-free programs masquerade as CCleaner, often in large banner advertisements on some websites, tricking at least some people into downloading their program. After finding lots of “problems” and maybe infecting your computer with some malware, it demands that you pay to fix them.
The poor victim then searches for more about CCleaner, finds me, and... well, here we are.
Just be sure you're downloading CCleaner here, direct from Piriform, the only maker of the software.
|
OPCFW_CODE
|
Date format in Spring Batch Excel
I'm trying to read an Excel file with Spring Batch and Spring Batch Excel, and cells in date format are read in a format different from the one in the file.
In my file dates are in DD/MM/YYYY and when Spring/POI are reading data org.apache.poi.ss.usermodel.DataFormatter is used and in performDateFormatting method the parameter dateFormat has a pattern of M/d/yy.
Is there a way to force the date pattern when reading ?
My rowmapper configuration is
<bean id="caricaAnagraficheReader" class="org.springframework.batch.extensions.excel.poi.PoiItemReader" scope="step">
<property name="resource" value="file:#{batchParameters.genericBatchParameters.allegatoNomeCompleto}" />
<property name="linesToSkip" value="1" />
<property name="rowMapper">
<bean class="it.blue.batch.portali.components.CaricaAnagraficheRowMapper" />
</property>
</bean>
Thank you in advance
It will use the date-format from the cell if it is available (if no format it set and only type date it will fallback to the default JDK locale for formatting). You could try to explicitly set the format for the date-cells in your excel sheet. Everything currently is delegated to Apache POI and we allow for some hooks (I'm the author of the excel readers). We could provide an option to set a Locale to be used for reading. If you know what the format is for reading, and apparently the writing is important, you could map it to another format whilst writing as well.
We created the Excel file to be filled in, so we know which format to expect. Unfortunately users have to fill in the values themselves and can make mistakes. So basically I can add some additional date-format checks in the rowmapper and eventually use the optional Locale when it's implemented. Sounds good?
I don't understand your comment at all? If there is an explicit pattern in excel it will be used (according to the Apache POI documentation) if there isn't it will use the date format as available from the system local (or rather the JVM default). Another option is, which we are pondering on, is make the API we have more JDBC like and return the actual type (a Date in this case instead of a String). The Apache POI implementation does allow for a date-format to be set, so we could investigate that as well (you can set the Locale to use and/or date format), so we could investigate that.
Before running the batch job (not sure how you are launching things), try doing LocaleUtil.setUserLocale(your-preffered-locale). This should set the Locale to use for Apache POI, see if that makes a difference. If not it means there is formatting and the date is formatted accordingly to the format as defined in excel. The only way to get around that would be to create another API exposing the actual datatypes (like numeric, date etc.) akin to the JDBC stuff. Which would be quite a refactoring to do, but it would offer more flexibility I guess.
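Until a Locale/format option exists in the reader, the "additional checks in the rowmapper" idea above can be implemented by re-parsing the string POI hands back. A sketch only — it assumes the cell value really arrives in POI's default M/d/yy form as described in the question, and `DateCellNormalizer` is a hypothetical helper name:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateCellNormalizer {

    // What DataFormatter produced for us (assumption: POI's default M/d/yy pattern).
    private static final DateTimeFormatter POI_DEFAULT = DateTimeFormatter.ofPattern("M/d/yy");

    // What the sheet/template actually promises its users.
    private static final DateTimeFormatter TARGET = DateTimeFormatter.ofPattern("dd/MM/yyyy");

    /**
     * Re-parse the M/d/yy string and render it as dd/MM/yyyy.
     * Throws DateTimeParseException on input that doesn't match,
     * which is a useful signal in a rowmapper validation step.
     */
    public static String normalize(String poiFormatted) {
        return LocalDate.parse(poiFormatted, POI_DEFAULT).format(TARGET);
    }
}
```

Calling it from the rowmapper keeps the rest of the batch job working against the expected dd/MM/yyyy representation.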
|
STACK_EXCHANGE
|
Feature: Allow user to configure the Operation polling interval
Right now Operations poll every 0.5 seconds. Unfortunately for some APIs the GetOperation rpc call counts against the user's quota. In the case of Speech#startRecognition, when the user uploads a large amount of audio, use of operation.on('complete', ...) can result in hundreds of API calls, decimating the user's quota (which isn't that big).
google-cloud-node should allow the user to configure the polling interval, or even use exponential backoff.
For some APIs, like Speech, we can solve this by upgrading to the latest gax-nodejs and using its LRO implementation, which relies on exponential backoff for polling by default, and already supports configurable backoff settings.
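For illustration, exponential-backoff polling can be sketched as follows. This is a generic sketch, not gax's actual implementation; the parameter names simply mirror the gax-style settings discussed in this thread:

```javascript
// Generic polling loop with exponential backoff (sketch only).
// check(cb) must call cb(err, complete, result).
function pollWithBackoff(check, settings, done) {
  let delay = settings.initialRetryDelayMillis;
  const started = Date.now();
  function attempt() {
    check((err, complete, result) => {
      if (err) return done(err);
      if (complete) return done(null, result);
      if (Date.now() - started > settings.totalTimeoutMillis) {
        return done(new Error('polling timed out'));
      }
      setTimeout(attempt, delay);
      // Grow the delay, capped at maxRetryDelayMillis
      delay = Math.min(delay * settings.retryDelayMultiplier,
                       settings.maxRetryDelayMillis);
    });
  }
  attempt();
}

// Example: the operation completes on the 4th check,
// so only 3 waits happen (100, 130, then 169 ms).
let calls = 0;
pollWithBackoff(
  cb => cb(null, ++calls === 4, 'transcript'),
  {
    initialRetryDelayMillis: 100,
    retryDelayMultiplier: 1.3,
    maxRetryDelayMillis: 60000,
    totalTimeoutMillis: 600000
  },
  (err, result) => {
    if (err) throw err;
    console.log(result); // → transcript
  }
);
```

The point of the multiplier is that a long-running operation costs O(log) polls instead of a fixed-interval O(duration / 0.5s) burst against the quota.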
Thank you for opening this issue.
I was just about to do it myself, having spent all day today finding the problem and bringing attention to it on the google speech discussion thread.
Could I get a timeline of when this will be fixed? I'm currently working on something time-sensitive and I'm debating between waiting for this to be fixed or switching to the python library (where this doesn't seem to be a problem).
@kbyatnal What version of @google-cloud/speech are you using?
@jmdobry
0.5.0
Are you reading the file into memory yourself or just passing the file path string to startRecognition?
I'm passing it the File object (from Google Cloud Storage/Bucket).
I actually saw the discussion thread regarding this topic and have implemented the temporary workaround. It seems to be working for now.
Right, but if you're using 0.5.0 then the operation polling should actually be using exponential backoff with a multiplier of 1.3, and that's still eating up your quota?
I don't think the exponential multiplier is in place for this version.
I made 3 requests to the API yesterday before I started getting quota errors, so I checked my console and saw that it had registered 2000+ hits for the API.
Try this:
return speech.api.Speech({
  config: speechConfig,
  audio: {
    uri: 'gs://YOUR_BUCKET_NAME/YOUR_FILE_NAME'
  }
}, {
  longrunning: {
    // Try fiddling with the first two numbers
    initialRetryDelayMillis: 100, // This is the initial delay
    retryDelayMultiplier: 1.3, // This is the multiplier that increases the delay over time
    maxRetryDelayMillis: 60000,
    initialRpcTimeoutMillis: null,
    rpcTimeoutMultiplier: null,
    maxRpcTimeoutMillis: null,
    totalTimeoutMillis: 600000
  }
}, (err, operation, apiResponse) => {
  if (err) {
    return cb(err);
  }
  operation.on('complete', function (response) {
    console.log(response.results[0].alternatives[0]);
  });
});
@kbyatnal it might help to double-check the version of google-gax being used. If you do a full re-install of @google-cloud/speech, it'll pull down 0.10.4, which has the expo logic built in:
$ npm cache clean
$ npm install --save @google-cloud/speech
$ npm ls google-gax
play-1481768017 /Users/stephen/dev/play/play-1481768017
└─┬ @google-cloud/speech
  └── google-gax@0.10.4
@stephenplusplus
Thanks for the suggestion! I updated to 0.10.4, but now I'm getting the following error while running the code:
/Users/kbyatnal/Desktop/nodejs_tutorial/node_modules/google-gax/lib/longrunning.js:273
var previousMetadataBytes = Buffer.from ? Buffer.from("") : new Buffer("");
                                          ^
TypeError: this is not a typed array.
Here is my code:
const speechConfig = {
  encoding: "LINEAR16",
  sampleRate: 8000
};
return speech.startRecognition(file, speechConfig, (err, operation, apiResponse) => {
  if (err) {
    return cb(err);
  }
  console.log("waiting");
  operation.on("error", (err) => { return console.log(err); });
  operation.on("complete", (transcript) => {
    console.log(transcript);
  });
});
The file is a Google Cloud Storage File object.
cc @jmuk
cc @landrito
Sorry for that. From the error message, it looks like protobuf.js decodes the data as a typed array while our code assumes it's a Node.js Buffer. 😞
@landrito -- please look into this, and please try using Speech API with your patch before sending PRs.
I'll look into it . Sorry about that!
Seems like the Speech system tests in this repo would have caught that, no?
@kbyatnal Which version of node are you using?
https://github.com/googleapis/gax-nodejs/pull/90 should solve this problem!
It appears I was using a method in Node that does not work for node versions >4.0.x but <4.5.x and does not work for node versions >5.0.x and <5.10.x.
@jmdobry I suspect this is why it slipped through the system tests.
@landrito
I'm on node version 5.5.0 right now.
@kbyatnal I just published google-gax 0.10.5 which should fix the issue!
@landrito
Awesome, that part seems to be fixed now!
I guess this brings us back to the original issue. I just tried calling the async method in the API with a 1 minute raw audio file and I got the "quota exceeded issue". I checked my API console and that one request resulted in a quota usage of 486 requests, which leads me to believe that the exponential backoff isn't working. This is with the latest google-gax version 0.10.5.
@kbyatnal Can you try calling the async method on google-gax version 0.10.6. I pushed a bug-fix that should solve the exponential backoff problem.
The quota exceeded issue is resolved with 0.10.6.
@kbyatnal, just want to make sure you try 0.10.6 before closing this issue.
Let's call it resolved, but please let us know if anything is not functioning as expected.
|
GITHUB_ARCHIVE
|
Hacking is the process of using a computer to manipulate another computer or computer system in an unauthorized fashion.
An event in which the total rewarded bitcoins per confirmed block halves, happening every 210,000 blocks mined.
The maximum amount that an ICO will raise. If a hard cap is reached, no more funds will be collected.
Hard Fork is a type of protocol change that validates all previously invalid transactions and invalidates all previously valid transactions. This type of fork requires all nodes and users to upgrade to the latest version of the forked protocol software. In a hard fork, a single cryptocurrency permanently splits into two, resulting in one blockchain that follows the old protocol and the other that follows the newest protocol. Some examples are Bitcoin and Bitcoin Cash, or Ethereum and Ethereum Classic.
The act of performing a hash function on input data of arbitrary size, with an output of fixed length that looks random and from which no data can be recovered without a cipher. An important property of a hash is that the output of hashing a particular document will always be the same when using the same algorithm.
Any function used to map data of arbitrary size to data of a fixed size.
A unit of measurement for the amount of computing power being consumed by the network to continuously operate. The Hash Rate of a computer may be measured in kH/s, MH/s, GH/s, TH/s, PH/s or EH/s depending on the hashes per second being produced.
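As a concrete illustration of the hashing entries above (a Python standard-library sketch; the inputs are arbitrary): the output length is fixed, the same input always yields the same digest, and a one-character change produces an unrelated-looking digest.

```python
import hashlib

print(hashlib.sha256(b"hello").hexdigest())   # 64 hex chars: fixed-length output
print(hashlib.sha256(b"hello").hexdigest())   # identical: same input, same hash
print(hashlib.sha256(b"hello!").hexdigest())  # one changed byte, unrelated digest
```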
Hidden cap is an unknown limit to the amount of money a team elects to receive from investors in its Initial Coin Offering (ICO). The purpose of a hidden cap is to even the playing field by letting smaller investors put in money, without the large investors forming an accurate understanding of the total cap and adjusting their investment as a result.
A wallet that uses Hierarchical Deterministic (HD) protocol to support the generation of crypto-wallets from a single master seed using 12 mnemonic phrases.
A type of passive investment strategy where you hold an investment for a long period of time, regardless of any changes in the price or markets. The term first became famous due to a typo made in a bitcoin forum, and the term is now commonly expanded to stand for “Hold On for Dear Life”.
A hosted crypto wallet is a digital wallet in which your private keys are stored for you. In exchange, the wallet provider takes care of the backup and security of your funds.
The online storage of private keys allowing for quicker access to cryptocurrencies.
A hybrid PoW/PoS allows for both Proof-of-Stake and Proof-of-Work as consensus distribution algorithms on the network. This approach aims to bring together the security of PoW consensus and the governance and energy efficiency of PoS.
Hyperledger is an umbrella project of open-source blockchains and blockchain-related tools started by the Linux Foundation in 2015 to support the collaborative development of blockchain-based distributed ledgers.
|
OPCFW_CODE
|
Collect miniapps/common/*.o in a library file [lib-extras-dev]
The purpose of this PR is to package the object files in miniapps/common within a library file that user applications can link with if needed. These object files contain convenience classes and functions that don't need to be part of libmfem but may be useful for developers who use existing miniapps as starting points for building their own applications.
This PR adds a common header file called mfem-extras.hpp and build targets to create libmfem-extras.a.
Changes will also be needed in the CMake files but I wanted to get a little feedback on the general idea first.
PR: https://github.com/mfem/mfem/pull/864
Author: @mlstowell
Editor: @tzanio
Reviewers: @tzanio + @rcarson3
Assignment: 12/15/19
Approval: ⌛ due 1/05/20
Merge: ⌛ due 1/12/20
I'm not really convinced we need another library... at least not at the top level.
@v-dobrev what do you think?
So currently when CMake is used and make install is called, the miniapps directory stays within the build directory. @tzanio, @v-dobrev, and @mlstowell, with this PR would it make sense to have a miniapps directory, with this library and the necessary headers, included in the install directory?
This has been discussed offline and will be introduced in a future PR.
Hi, @rcarson3 ,
The intention is to make the libmfem-extras library available to outside developers in a manner similar to how libmfem is provided. The new library (and its headers) doesn't need to be in the same location as libmfem but it should be installed somewhere convenient. I know @tzanio has stronger feelings about where it belongs than I do so I'll defer to him.
Thanks,
Mark
The new libmfem-extras.a does not have a lot of code, so it seems simpler (for users) to merge it into libmfem.a. Naturally, the header mfem-extras.hpp will also be included by mfem.hpp.
Also, currently the additional code is in the namespace mfem::miniapps -- it seems more appropriate to use mfem::extras or, alternatively, rename the header to mfem-miniapps.hpp.
@v-dobrev ,
There is certainly an argument to be made for simply putting these things in libmfem.a (@tzanio made the same point). Part of the reason for keeping them separate is that these things don't generally add to the functionality of MFEM as a finite element library; they mainly provide convenient wrappers which can be handy for application developers. We may add various convenience classes to libmfem-extras.a over time and we probably don't want these to bloat libmfem.a. Another way to think about it is that libmfem.a provides hard-core finite element code in full detail, whereas libmfem-extras.a offers a handful of shortcuts for commonly constructed objects or similar pieces of code (like VisualizeField).
Using the namespace mfem::extras does make more sense, good point!
I made a few comments on the code, but I am still conflicted about the role of this library.
If the aim is to help the folks that will be looking at the miniapps, then I think both the header and the library should be there. It will also be useful to use more descriptive names, e.g.
miniapps/common ➡️ miniapps/utilities
miniapps/common/*_extras.* ➡️ miniapps/utilities/utilities.
mfem-extras.hpp ➡️ miniapps/mfem-utilities.hpp
miniapps/libmfem-extras.a ➡️ miniapps/libmfem-utilities.a
(instead of utilities, other naming options could be utils, helpers, app-kit, or just kit)
Considering utilities, or something derived therefrom, versus extras I came across something that made me chuckle.
According to Merriam-Webster:
utility (adjective): ... serving primarily for utility rather than beauty.
extra (noun): ... an attractive addition or accessory.
Now which do you think I prefer? Seriously, I do prefer "extras" because I think it makes it plain that these are not strictly necessary. People can make full use of MFEM without ever looking at the "extras" and they don't even need to link with them if they don't want to.
This could be easier to discuss in person, but I think my confusion comes from the association of the "extras" with the "miniapps".
If we want people to think of these as "extra tools/utilities not included in the main mfem library", then the code should not be in the miniapps/common directory and we should consider instead adding a new top-level directory, extras for it.
If we want to keep the code in miniapps/common as is, then the mixing of "extras" + "miniapps" is just confusing for me. Picking a name to indicate that these are application-building tools will be more clear.
@tzanio , that does make a lot of sense.
@tzanio and I just spoke at length about the naming and placement of this new library and I think I better understand his position. One of the main goals is to avoid confusing our users. With that in mind we'll keep this PR's changes confined to the miniapps subdirectories. This means moving mfem-extras.hpp into either miniapps or miniapps/common. The new library will be created in the same location i.e. either miniapps or miniapps/common. Which location should be chosen?
The other issue is naming. I think we agree that the names should be of the form mfem-something.hpp and libmfem-something.a. What should we choose in place of mfem-something?
mfem-app-kit
mfem-common
mfem-extras
mfem-miniapps
mfem-utilities
... other suggestions?
I'd like to hear from @v-dobrev but here are my personal preferences...
For the location (in order of preference)
miniapps/common
miniapps
miniapps/common renamed as miniapps/extras or miniapps/utilities
For the name (in order of preference)
mfem-common in miniapps/common
mfem-miniapps in miniapps
mfem-extras in miniapps or in miniapps/common renamed as miniapps/extras
mfem-utilities in miniapps or in miniapps/common renamed as miniapps/utilities
mfem-app-kit in miniapps
Sorry for chiming in here, but do we want to consider future developments here? I'm thinking of our discussion regarding the navier stokes library and its directory in the repository as well as in the installation directory.
This is blocking toys-dev, which I think is otherwise ready for review... can we maybe meet to resolve it?
Any afternoon next week (I'm on vacation this week) would be fine with me.
After discussion in person, we agreed with @mlstowell on the library name miniapps/common/libmfem-common.so/a and the header name miniapps/common/mfem-common.h
Merged in next for testing ...
Looking at the current state, I do not like that there is a separate library being installed. This makes it more difficult for users to link and harder for developers to maintain.
If we really need to have separate libraries then we need a technical meeting to discuss the best way to do this. For example, I think such additional libraries should be optional. Also, I think it is much more natural for users to have the extra library files and headers next to the main mfem library and headers, respectively.
@mlstowell -- I spoke with @v-dobrev and one possible compromise is to keep things as they are now, except revert the makefile so that it does not build or install the mfem-common library (this will also resolve the regression test errors).
Are you OK with this change?
If not, maybe the three of us can meet tomorrow afternoon to discuss this?
Hi, @tzanio,
I don't think I understand the proposal. If the makefile doesn't build the library then this PR would only be adding a header file. I'm also curious about the problem with the regression tests. We should probably meet.
I'm sorry this has become such a hassle. I really thought this was a trivial suggestion and I cannot believe we have spent so much time debating it.
Best wishes,
Mark
Mark, I updated the build system as we discussed. Let me know if something does not look right.
It looks good to me. Thanks, @v-dobrev !
The following error was detected by the regression testing:
/usr/bin/ld: mesh_extras.o: relocation R_X86_64_32S against symbol `_ZTVSt15basic_streambufIcSt11char_traitsIcEE@@GLIBCXX_3.4' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: fem_extras.o: relocation R_X86_64_32S against undefined symbol `_ZTVN4mfem18FiniteElementSpaceE' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: pfem_extras.o: relocation R_X86_64_32S against symbol `_ZTVN4mfem6common13H1_ParFESpaceE' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
make[1]: *** [libmfem-common.so.4.0.1] Error 1
make: *** [miniapps/common] Error 2
make: *** Waiting for unfinished jobs....
See the mystash log for details.
The shared library issue on linux should be fixed now.
|
GITHUB_ARCHIVE
|
Design guidelines for open in new tab - Hyperlink vs Image
I am designing a website which has lots of 3rd-party web service dependencies. Hence users need to constantly switch to those sites for validation, for example OAuth services which will redirect back to our site.
What would be the better design for such a case?
- A text hyperlink URL, which states the description and the domain name to be redirected to.
- Using the logo of the redirecting site.
- Any other solution?
Any similar design example would be helpful.
Neither a text URL nor the logo of the target indicates that the link will open in a new tab - what exactly are you hoping to convey to your users?
There will be a clear indication text saying users need to authenticate with a 3rd-party app. It's like OAuth sign-in, where we get redirected back after the user completes their action on those sites.
Add an icon next to the label to let users know they will be redirected to another website.
I would expect to see a mix of solutions depending on what the link does.
Using logos
Works if the logo is known or easy to remember. A Google or Facebook login flow, for example, will benefit from showing [ {Logo} {name} ]. People will likely not be surprised if hitting this link opens a new window/tab, or redirects to the service represented.
Downside: if the logo is unknown, users might be surprised nonetheless.
Hyperlink opening new page or tab
As already explained by others, it's common to add an icon next to your link indicating that it opens a new page/tab, typically a box with an arrow. Whether or not you should open a new tab is a different question that has been asked before. As always: it depends.
Oauth links
Bit of an outlier. It's not really a hyperlink in the standard sense, since it starts a necessary user flow. Typically you'd see a text on click that says: "We are redirecting you to service xyz for authentication", after which you end up in the flow. At the end you would show a text saying: "Success (or not). You are now being redirected back to product ABC."
Personally I would:
Stick to clear hyperlinks for the majority of cases and only use logos where they're expected: in navigation and on login pages.
Open links on the same page. There are various reasons why this is easier, but most importantly: most users have more difficulty using tabs than using the back button, and it's harder on mobile. But that highly depends on product type and users! Here's a decent read on this.
Take extra care in supporting the back button so users don't lose their (form) data if they do happen to hit the link.
Take the time to test these options and understand my users. I usually work in B2B, which is different from B2C, and therefore these things vary.
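To make the icon-next-to-the-link convention concrete, here is a minimal markup sketch (the URL, label, and class name are made up for the example):

```html
<!-- target="_blank" opens a new tab; rel="noopener noreferrer" keeps the
     new page from scripting the opener. The arrow glyph is the visual cue;
     the visually-hidden text announces the behaviour to screen readers. -->
<a href="https://auth.example.com/authorize" target="_blank" rel="noopener noreferrer">
  Continue with ExampleAuth
  <span aria-hidden="true">↗</span>
  <span class="visually-hidden">(opens in a new tab)</span>
</a>
```

The hidden-text span matters because an icon alone conveys nothing to assistive technology.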
It is not so important whether you use hyperlinks or images; what matters is that you clearly indicate you are taking the user to a new tab, and that you will return the user to the previous tab once the operation is finished.
It is about establishing a clear visual indicator and implementing consistent behaviour; your users will become used to it.
Here are some examples from a previous question you can reference: How to indicate this button/link opens a new tab
|
STACK_EXCHANGE
|
Who am I?
- Using the 'Select Category' screen, remove everything apart from People in order to generate a list of people and celebrities.
- Once you've printed them out and cut them up, each person in the room should pick out a card randomly and, using double-sided sellotape, stick it to the forehead of someone else in the room without them seeing the card you are attaching to them.
- Once everyone in the room has a card attached to their forehead, the game can begin.
- With everyone sitting in a circle and play moving anti-clockwise or clockwise, each player is given a chance to ask a question which can only have a Yes or No answer.
- If the answer is No, they must wait their turn to ask another question and play moves on to the person next to them.
- This continues until someone correctly guesses the celebrity or person they have on their own forehead.
You could of course play this with any of the categories on this site but people is the most common one.
- Like the 'Who Am I' game above, remove everything apart from People from the list of categories in order to generate cards for people and celebrities.
- Cut up the cards, fold them and put them all in a hat/bowl.
- Have a stopwatch handy or use the timer on this website. Typically each person should be given 30 seconds, but this can be increased.
- Everyone should sit in a circle and one person (Player 1) should be identified to start the game.
- Player 1 pulls out a card from the hat/bowl and the countdown starts.
- The player has 30 seconds to try and get the person on their left to guess which name they have in their hands, but without saying the actual name or obvious clues such as initials etc., e.g. "This character was in Star Trek", "He had a vicious pinch and funny ears", "Spock!", "Correct"
- After the name is guessed, the correctly guessed card is put aside to be counted at the end of the countdown
- The player keeps pulling out people from the hat/bowl and giving clues until the time is up.
- Once time is up, the number of correctly guessed people is counted and that number is given as a score to both guesser and clue giver.
- The hat/bowl is then passed to the next person in the circle and so on until everyone in the circle has had a go guessing and giving clues.
- The one with the most correct guesses wins - you may need a tiebreaker!
Again, you could play this with other categories but people is the easiest.
- Generate some cards (any category will do), cut them, fold them and put them in a hat/bowl
- One person at a time takes a card from the hat/bowl
- The others must then guess what is written on the card by asking questions which can only have "Yes" or "No" as an answer, e.g. "Is it a movie?", "No", "Is it a person?", "Yes", "Is the person male?" etc.
- This should continue until someone guesses the card correctly.
- Optionally, you can punish people who guess wildly by having a rule where, if someone makes an incorrect guess, the round is over and it is that person's go to take a card next. This way players should not guess unless they know the correct answer.
- A correct guess earns a point, winner is the person with most points at the end.
- Generate some cards, cut them, fold them and put them in a hat/bowl
- Someone should have a stopwatch or countdown timer (if you are a Deluxe member, you can use the countdown from this website).
- Players take it in turns to take 3 cards from the hat/bowl
- The player then has a minute (or fixed amount of time) to act out all 3 cards in succession before the time runs out.
- If they successfully do all 3 in the allocated time, they go through to the next round and the rest of the players in the current round continue.
- If they fail to act out all 3, they are out, but can continue to guess the Charades of others.
- Optionally, you could time each person to see how fast they can have 3 Charades guessed. The winner is the person who does 3 within the shortest time.
|
OPCFW_CODE
|
M: VW will stop firing 'hail cannons' after farmers protest - edh649
https://amp.ft.com/content/3b377aa8-a64d-11e8-8ecf-a7ae1beff35b?__twitter_impression=true
R: kpil
How about they build the fscking cars so they would survive being outdoors? I certainly don't have navy-grade anti-hail cannons at home.
I never really understood why cars have the outer finish of an indoor piece of art. My car has scratches from my _fingernails_ near the door handles. What's up with that? Should I wear white gloves when I approach the car?
R: athenot
Scratches and dents are 2 different things.
All modern cars have paint which is then covered in clear coat (which is then
covered in wax). The wax layer can easily be scratched but it also easily
buffs out. The clear coat layer is VERY strong and will put up with quite a
bit of abuse; you can make that coat even thicker if you so wish, with
aftermarket products.
Hail doesn't directly damage the clear coat or the paint, it dents the actual
metal itself. To make a car that resists those heavy hail storms would require
quite a bit of extra weight, and overall this problem is cheaper to solve via
insurance than by building a bullet-proof carapace that will add weight and
rarely ever get used.
R: thatcat
It would only be a lot of weight if you use steel. Remember the 90s Saturns with plastic doors that would just pop back after being dented? The problem is that the deformation properties of steel are used to absorb impact as a safety feature during a collision.
R: Kurtz79
"In response, VW said it would install netting above the cars to protect them
from hailstorms in the future."
It seems a comparatively much more reasonable and non-invasive low-tech
solution...
R: stefan_
Or, even cheaper, you just dismantle the "hail cannons", since there is no
scientific evidence of their effectiveness or even a hypothesis on why their
working principle would have anything to do with hail. So since they do
nothing, have never done anything, you clearly didn't need them in the first
place.
R: lenkite
To the HN meteorologists/physicists: do hail cannons even work? It seems strange that producing shockwaves would prevent hail from falling. I have experienced hailstorms with lots of thunder and lightning.
R: fyrabanks
nice try, financial times subscription department
R: qmarchi
De-paywalled rather than linking to another source.
[https://outline.com/yBhsWR](https://outline.com/yBhsWR)
R: lawlessone
Do they even work?
R: toddmorey
Articles say the science is unproven. Thunder notably creates a similar shockwave. It's in the category of "who knows", but cheap enough to attempt anyway.
R: dx034
Or they needed it for insurance. The finished cars will likely be insured and
maybe the insurer forced them to show reasonable effort to reduce damage.
R: pvaldes
> My car has scratches from my fingernails near the door handles
Professional car painting is expensive precisely to avoid things like that. There are also a lot of super-cheap brands, of course, but in the end you get what you pay for.
R: Jaruzel
FT Article is paywalled. Try this link:
[http://uk.businessinsider.com/volkswagen-hail-cannons-
mexico...](http://uk.businessinsider.com/volkswagen-hail-cannons-mexico-
farmers-draught-2018-8)
R: ghshephard
...Or click on "web" in the HN comments.
R: Jaruzel
Which is just a google search. Some people don't like doing that.
R: pc86
I don't even know what "not liking" Google searches is supposed to mean.
R: ljcn
Some people avoid using Google.
R: deweller
> Scientists say there is no way to prove if these cannons really work, but
> farmers say it is cheaper to try the cannons than to buy hail insurance
I don't even...
R: mcguire
Firing shockwaves every 6 seconds?
|
HACKER_NEWS
|
What two statements are true about properly overridden hashCode() and equals() methods?
A. hashCode() doesn't have to be overridden if equals() is.
B. equals() doesn't have to be overridden if hashCode() is.
C. hashCode() can always return the same value, regardless of the object that invoked it.
D. If two different objects that are not meaningfully equivalent both invoke hashCode(), then hashCode() can't return the same value for both invocations.
E. equals() can be true even if it's comparing different objects.
The answers given are C and E. My question is regarding option E. Once equals() and hashCode() are properly overridden [this is mentioned in the question], how come two different objects can return true when equals() is called? Can anyone give me an example?
Thomas, while your example is perfectly syntactical, do you think that class B's equals method is properly overridden? It is checking for instanceof A??? I agree that in a perfect world, if you call the equals method on two different objects, they might give true. But hey, we are living in an ideal world when it comes to SCJP: equals() is overridden perfectly. I like your example.
Howdy -- I'm just adding a bit here on how and why two different objects might be considered equal. Two different objects can be considered equal if YOU (the programmer of the class) decide that they should be. The purpose of .equals() is to allow a programmer to decide when and if two objects can be considered "meaningfully" equivalent. (And this becomes especially crucial when the object is being used as, say, a key in a HashMap or as an element in a HashSet). Remember that == cares only if the bit patterns in two different reference variables are identical. Which means, of course, that two different references are referring to the very same object on the heap. With .equals(), on the other hand, you might want to know that while two references refer to two different objects, those objects are considered "equal". The classic example (but with a gotcha of its own) is class String. If the user types in "Boulder" as the answer to a question, and you want to compare that to a list of cities in Colorado (perhaps in a String[ ] array), that would mean you have two different String objects that you want to test for equality: the "Boulder" that the user typed in (which you pulled from a JTextField) and the "Boulder" in your String[ ] of cities. If there were no .equals(), it would be MUCH harder to do that comparison! And as far as you're concerned, there is NO meaningful difference between "Boulder" entered by the user and "Boulder" in your String[ ] array. What makes Strings a bad example, though, is that you can be fooled by the String constant pool into thinking that you can use == to find out if two different String objects are equal. That works in some situations (although not consistently across all VMs) because references to what you THINK are two different String objects can be redirected in such a way that both references point to the same String object in the String constant pool.
But that is a special case that applies ONLY to String objects, and only because String objects are immutable and there is a constant pool for them. (Bottom line: don't rely on == to compare Strings!) So, what other situations might warrant two different objects being equal? What about the wrappers? Besides the static utility methods, a wrapper INSTANCE has only one purpose in life: to wrap a primitive value. There is no meaningful difference between an Integer object with the value of '3' and another Integer object with the value of '3'. As far as any code should be concerned, they're the same. (Although you can always use == if you really DO want to know that two different -- but equal -- objects are really two distinct objects.) That's why the .equals() method has been overridden in the wrapper classes, so that '2' and '2' are the same. (Although only when wrapped by the same class type. Two Integer instances with '2' will be considered equal but an Integer and a Long with '2' will NOT be considered equal. Java assumes that if you meant for a Long and an Integer to be equal, you wouldn't have used two different wrapper class types). Now imagine you use a class as a key into a HashMap. The key object has been put(key, object) into the HashMap. And now it comes time to get the object out by supplying the key -- you might get the key value from somewhere else, say, the user selected an ID number from a list or typed it in and you use their input to construct another instance of whatever your key class is. You've got two different objects now representing the key -- the one you originally used to PUT something in the HashMap, and the one you now have to use to GET the object. You want to know that if the key (let's say it's some kind of ID number) you used is 54678 (wrapped in an Integer), that if you make an Integer with that same number, the HashMap won't say, "I have never seen that key / number before in my life. Nope, there is no 54678 in THIS collection..." 
No, you want the HashMap to say to you, "Yes, I do have an object in here with a key that exactly matches the one you supplied, 54678, even though they are two different objects. Doesn't matter, your key tested positive for .equals() with a key I have in the collection." Most classes in Java do NOT have overridden .equals() methods, so most simply use == inside (that's what the inherited one from class Object does). But the wrapper and String classes override .equals() to provide meaningful equivalence, and you can too. Just don't forget the contract -- if you have two objects that are considered equal, they MUST have the same hashcode as well, so be sure to do your equals() test on the same instance variables that you used to calculate your hashcode, in such a way that if these values change, and the hashcode values change, the equals() method will also reflect the change. So you don't have to use ALL instance variables to calculate your hashcode, just the ones that matter for equivalency. But you CAN have two different instances of a class return the same hashcode, yet still not be equal for .equals(). That simply means your hashcode algorithm may be less efficient than it could be. You could have a class, for example, where you override the hashcode method to: return 42; This means that all objects of that class will always have the same hashcode. This is legal and valid! Valid because it does not violate 'the contract'. If two objects are equal using .equals(), they will certainly have the same hashcode, and that's the contract. The contract does not say that if two objects are NOT equal they MUST have different hashcodes. The contract does NOT say that if two hashcodes are equal, the objects MUST be equal. The contract DOES say that if two hashcodes are not equal, the objects MUST NOT be equal. Should you override your .equals() method?
If you ever care about meaningful equivalency, or if you ever want your object to be used in a collection that uses hashing, when two different objects might be used for putting and getting something out of the collection. Which means you will very likely, in the real world, want to override equals(), which means you must also override the hashcode method. cheers, Kathy (who believes that Ben and Jerry's chocolate fudge brownie will NEVER be meaningfully equivalent to any other flavor and/or brand of ice cream)
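For readers more at home in Python, the identical contract shows up as `__eq__` and `__hash__`, with a dict playing the role of the HashMap. A minimal sketch with a hypothetical `CityKey` class; the always-42 hashcode is exactly the legal-but-inefficient case described above:

```python
class CityKey:
    """Hypothetical key class: equality based on one field, like an overridden .equals()."""
    def __init__(self, ident):
        self.ident = ident

    def __eq__(self, other):
        # Meaningful equivalence: compare the same field used by __hash__.
        return isinstance(other, CityKey) and self.ident == other.ident

    def __hash__(self):
        # Legal but inefficient: every instance hashes to 42.
        # Equal objects share a hashcode, which is all the contract demands.
        return 42

lookup = {CityKey(54678): "order history"}        # put(key, object)
assert CityKey(54678) in lookup                    # a *different* but equal key works
assert lookup[CityKey(54678)] == "order history"   # get(key) succeeds
assert CityKey(54678) is not next(iter(lookup))    # really two distinct objects
```

The dict still retrieves the right entry because, after matching hashcodes, it falls back on `__eq__`, just as a HashMap falls back on .equals().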
Originally posted by Kathy Sierra: The contract does not say that if two objects are NOT equal they MUST have different hashcodes.
And it's a good thing it doesn't! The hashCode method returns an int. But there are more possible Strings than there are ints. By the way, I added a hashCode method to my example for Bill!
Client certificate add token should have an expiry
When running lxc config trust add --name foo, a token is created with a corresponding operation. That operation remains running until a client consumes the token via lxc remote add my-rmt <token>. If the client forgets to consume it, the operation will be left running indefinitely which is not ideal, especially since it relates to granting access to the remote.
To avoid the problem, tokens should have an expiry after which they become unusable and the operation is closed.
Agree with premise. But also worth noting the tokens only remain until lxd is restarted next, not indefinitely.
Right, I've actually seen more concerns for the other way around. That is, LXD restarting and the remote join token no longer being valid.
For cluster join, I'd probably be happy with say a 3h default expiry, documented in doc/cluster.md as realistically, you're not going to be joining a cluster a week later.
For remote add, I don't think those should expire at all unless directly requested by the user.
I also think we should have both kinds persist across LXD restarts. For cluster, that may help with LXD crashes and for remote add, well, this would be a necessity for the default of no expiry.
We should be able to achieve this pretty easily with just an extra tokens DB table to hold them.
We'd effectively have LXD spawn Token type operations on startup from what's saved in the DB.
IMHO remote add token should expire too, people forget about things all the time. I myself found an unused token left over from a previous test. How about 24h? Or even 7d?
We should be able to achieve this pretty easily with just an extra tokens DB table to hold them.
We'd effectively have LXD spawn Token type operations on startup from what's saved in the DB.
I'm thinking perhaps just use a dedicated table for the tokens concept without linking it to (and respawning) operations. Would be more inline with other entity types (instances, warnings etc) especially if they are to cease being transient like operations are.
Originally we didn't go that route because we wanted tokens to be back ported to 4.0 LTS without a schema change. But if we are introducing a schema change it would be more consistent to just use the table as authoritative truth.
I'm thinking perhaps just use a dedicated table for the tokens concept without linking it to (and respawning) operations. Would be more inline with other entity types (instances, warnings etc) especially if they are to cease being transient like operations are.
Yeah, but the fact that tokens are operations is now API... That's how we list and revoke them from the CLI.
I'm not necessarily opposed to a full fledged /1.0/tokens type API, but we're going to have to keep backward compatibility at least for a little while.
IMHO remote add token should expire too, people forget about things all the time. I myself found an unused token left over from a previous test. How about 24h? Or even 7d?
It depends on what you're doing with them. The requests I've had are from people who do cloud-like hosting and that's the creds that are issued to individual users. Having those expire at all is problematic.
Well, to them, the token is no worse than a password or client certificate.
I'm not fundamentally opposed to having a configurable default expiry, but when unset, I would expect them not to expire, to keep the current behavior. Again, I think it's fine for cluster join tokens to expire by default as those have no reason to be long lived and also are far more dangerous than remote join tokens (well, at least compared to a remote join token that's restricted to a project).
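The behaviour being discussed (per-kind default lifetimes, no expiry unless configured, and creation times persisted so tokens survive restarts) could be sketched like this; the class name and the 3h default are illustrative, not LXD code:

```python
import time

NO_EXPIRY = None  # default for `remote add` tokens: never expire

class JoinToken:
    """Sketch of a join token with an optional expiry."""
    def __init__(self, secret, lifetime=NO_EXPIRY):
        self.secret = secret
        self.created_at = time.time()   # persisted in the DB, so it survives restarts
        self.lifetime = lifetime        # seconds, or None for no expiry

    def is_valid(self, now=None):
        if self.lifetime is None:
            return True
        now = time.time() if now is None else now
        return now - self.created_at <= self.lifetime

cluster = JoinToken("abc", lifetime=3 * 3600)       # cluster join: 3h default
remote = JoinToken("xyz")                            # remote add: no expiry
assert remote.is_valid(now=time.time() + 10**9)      # still valid far in the future
assert not cluster.is_valid(now=time.time() + 4 * 3600)  # expired after 3h
```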
The image format only matters for disk space.
In memory all textures are converted to the format the GPU supports.
I do not think the image format influences visible seams.
Why are there visible seams?
The described problem with the seams has a quite simple reason.
Usually only the texture pixels inside the UV-face (including the edges) are used to draw the rendered face.
But the texture is not used directly. Before it is used as a texture it is processed by the OpenGL mipmap filter. The default is to blur/anti-alias the texture. A side effect of this is that the color of a texture pixel is mixed with the color of its surrounding pixels.
On the edges the surrounding pixels are not all part of the UV-face. Therefore the rendered face can get color information from outside the UV-face (e.g. from a black border).
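The bleed described above is easy to reproduce without OpenGL: any averaging filter mixes an edge texel with its neighbours, so black texels just outside the UV face darken the rendered edge. A toy one-dimensional sketch in Python:

```python
def box_filter(texels, i):
    """Average a texel with its two neighbours, like a blur/mipmap step."""
    return sum(texels[i - 1:i + 2]) / 3

# White UV face (255) surrounded by a black border (0) in the texture atlas.
row = [0, 255, 255, 255, 0]

inside_face = box_filter(row, 2)   # neighbours are all white: stays white
face_edge   = box_filter(row, 1)   # one neighbour is the black border

assert inside_face == 255
assert face_edge == 170            # (0 + 255 + 255) / 3: a visibly darker seam
```

This is also why a bake margin / "Bleed" of a few pixels helps: it replaces the border with copies of the edge colour, so the average no longer darkens.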
Just want to provide some background information. I think it is important to know about that. Micah your solution is absolutely fine.
I think it is worth a resource.
You might want to change the title of the thread as it does not apply only to characters (edit top post - advanced mode). It applies to any object with textures. E.g. “Avoid seams in textured objects”.
.Jpeg and mip mapping!??
So mipmaps in BGE are calculated in real time, rather than being saved beforehand?
I'm not very experienced in the Blender game engine or game engines in general, but the game engine I previously worked with required that you store the mipmaps within the texture itself beforehand, e.g. as a .DDS texture.
So I'm wondering if you know whether there's actually a benefit in BGE to using an image format that stores mipmaps within it?
(Also, for those reading this who don't know exactly what I am talking about, read this and look at the images)
If you want a completely seamless texture, the best way is to make another UV layer and then unwrap the texture again, but avoiding the original seams. Now bake your original texture on to a new UV sheet and set the border in bake to about 12. This will leave a border of pixels around each UV island which are the same as the neighboring pixels. You can also go back and clean up your original seams by using a blur or clone tool where they don’t match well.
With “Texture Paint” you set the margin with the “Bleed” parameter:
<T> to toggle tool options
I recommend at least a margin of 4. You might want to check the texture at various detail levels. Be aware the “seam” effect increases when zoomed out (small model) rather than zoomed in (large model).
This is what I do for my baking. After creating seams, UV unwrap with a margin of 0.08. Go into the UV editor, select the border of each island, and scale it up so that the edges aren’t packed too closely. Bake the ambient occlusion or other map with a margin set to 5 px. For ambient occlusion I also turn on Normalized so ugly artifacts and noise will be less visible.
After baking apply the bake image with shadeless material to check for seams … if they still show up. Easy fix! Go back to your UVW and scale your outer edges back in to where they were before! Fixed!
Additional notes :
Also, before baking I would add a subdiv modifier to make sure the edges of my model’s tessellation don’t show up. However, this will require you to model carefully, as a subdiv modifier may change the shape and outline of your model a lot if not done correctly.
Does anyone know, how to modify for example the Ghost's Canister Rifle, so it uses the Drakken laser as a weapon effect?
If one of you guys played XCOM, the weapon should fire like one of the weapons presented there. So...uh...a short beam:
Because it would complement my Ghost quite well, here's a picture:
(Credits for the dark Ghost texture go to the user KORroy; I modified it a little bit.) ;)
So, 358 views by now and nobody seems to be interested in providing help/tips (as tiny as it may be). I don't want to be mean or to put anyone here down; I'm simply disappointed that none of the more experienced users bothers to share some knowledge.
Consider it necro-posting or anything like that, I don't care. But it seems you have to be an awesome modeller/animator to get some recognition. >:(
Use a projectile-based attack: Weapon > Launch Missile > Damage. The Launch Missile projectile unit needs to use the "Laser Drill Tripod Bigger Attack Beam" model. Keep in mind there may be some resizing needed; I can give more assistance if you need any. Make sure to use at least the WoL campaign dependencies.
OK, as it turns out, to make beam weapons work properly, you must go through some complex actor and effects work, which I myself have not figured out yet, I am sure I could if I were to try, but it has never been on my list of things to figure out because I am sorta lazy that way. Sorry, but you are going to need someone more learned of the data editor than me.
But, if you don't want to go through complex steps, you can duplicate archons weapon and change it's shockwave beam model to what you want (drakken or colossus or some other neat beam, whichever you prefer).
Also, this really belongs in Data; it being in the Art Assets subforum made me think that this topic was a showcase of a custom-modeled laser impact effect. That probably explains the lack of solutions and relevant posters.
Hm, since it's a graphic effect related matter, I thought it would be right. :D
Well, I got my problems with duplication. As I started with the Editor, my thoughts went to Warcraft 3. Copy unit, edit it > voila!
But nope: editing actors and customizing skills is very difficult, and creating units from scratch is almost impossible.
The Galaxy Editor dwarfs the editor from WC3, which was already very powerful. But it's understandable that SC2 will never have the flourishing modding community WC3 had, unless Blizzard releases these goddamn Art Tools.
The art assets forum is really for new art assets. You could say this 'might' belong in art development, but even then not really, as that is again more for new stuff. This is purely a data editing thing and really should be in Development > Data.
Price M42 limited
A comprehensive tool for determining the levels of support, resistance, reversals in relation to trading sessions. The indicator's thesis can be expressed as follows:
Following the thesis, the indicator detects levels and processes data separately for three trading sessions: Asian (including Pacific), European and American.
The User Manual of the indicator is located on the discussion page of the full version.
The indicator contains the following subsystems, which can be used either independently or in any combination:
- Period Marks: Candles of trading sessions - group and display the data of each trading session in the form of a Japanese candle. Candles can be used for pattern analysis or to visually identify the signal source.
- Price Marshes: Support and Resistance Levels - displays support/resistance levels in the form of a gradient field of horizontal lines, the color of which depends on the trading sessions in which this level occurred. The color density is directly related to the number of confirmations of this level. The periods of the level sources can be selected from the range "M1" to "W1".
- Proximate Missions: Nearest targets - displays the value of the two nearest levels from the top and bottom to the right of the last bar. Levels are filtered by the current trading sessions (levels of inactive trading sessions are not displayed) from the Price Marshes data.
- Periods Meters: Timers of trading sessions - displays the status of trading sessions and the time before their start/end.
- Predator Mask - controls the display of sessions in the above tools, allowing you to view data only for selected (or excluding selected) trading sessions.
The indicator uses the only input parameter Analysis depth (days). Other settings and mode changes are made via the screen interface of the indicator.
- Analysis depth (days) - depth of analysis in days.
The indicator uses the following parameters of trading sessions:
|Trading session|Session time (GMT)|Color on the chart|
|---|---|---|
|Asian + Pacific|9.00 pm - 9.00 am|Green|
|European|6.00 am - 4.00 pm|Blue|
|American|12.00 am - 10.00 pm|Red|
- Time shift detecting ... - defines the offset of the server time.
- [ Symbol ] [ Period ]: history loading ... - additional history is loaded when changing the indicator modes. If this message does not disappear within a reasonable time, reduce the value of the "Analysis depth (days)" input parameter.
- Session candles for [Period] period are not provided - impossible to build candles on the selected chart period.
- Untrusted price area! - prices are unknown: the price is near the boundaries of the analyzed range. Readings are unreliable.
- All modes are now disabled by default.
What is datatype of FILE?
What is the data type of FILE in C or in other language?
Is it an integer or structure or having no particular data type?
Why not open up stdio.h in notepad
I don't understand why this question is being downvoted. I don't know the answer either and would like to learn. And doing man stdio is not an answer. You can answer 90% of the questions on stackoverflow with man xxx.
@AndreasGrapentin None of the versions of man stdio I looked at had any significant information about what FILE looks like internally. (Which is reasonable, given that it's an implementation detail, but still.)
+1, I wonder if it would be OK to forward-declare struct FILE; in a header file (to reduce namespace pollution by not including <stdio.h>) and I think it would work on most systems, but the standard don't require it. Definitely a valid question. But what do you mean with “or in other language”? This probably depends on the language and you only tagged it C and C++…
@mafso Actually, that's an interesting question on its own. Is it permissible to, say, declare an array of FILEs? If so, is there anything useful you can do with them?
@duskwuff Your question has nothing to do with mafso's. Yes you can declare an array of FILEs and yes that is useful.
@JimBalter Well, one of the major functional differences between having a full definition of the FILE structure in <stdio.h> and having it forward-declared is that you need the full definition to declare a FILE (or an array of them) yourself. So there's the connection.
Look here What is an opaque value?
@duskwuff Uh, if you're going declare or define something as a FILE ... a single one or an array of them, you need to include stdio.h. So there's no connection at all between your question and mafso's. The answer to mafso's is that such forward declarations are not permitted by the standard.
Correction: Actually, you cannot and would not declare a FILE or an array of FILEs ... only pointers to FILEs or an array of pointers to FILEs. In all cases you must include stdio.h
@JimBalter: I agree, that you usually don't declare arrays of FILE, but after having a look at the standard it's pretty clear: C99 7.19.1/C11 7.21.1, p.2 The types declared are […]; FILE which is an object type. So, it seems to be possible to declare arrays of FILEs.
@mafso Possible (if you include stdio.h) but pointless ... there's nothing you can do with a FILE that you allocate yourself.
@duskwuff which is really all the information you really need, unless you are a libc core or OS kernel developer :)
This question should be reopened; it is not a duplicate of https://stackoverflow.com/questions/3854113/what-is-an-opaque-value-in-c . The question asks about the datatype of FILE, not what an opaque value is. It happens that FILE is an opaque value. That question is RELATED but not a DUPLICATE.
It is what is typically termed an opaque data type, meaning it's typically declared as a simple structure, and then internally in the OS libraries the FILE pointer is cast to the actual data-type of the data-structure that the OS will use to access data from a file. A lot of these details are system-specific though, so depending on the OS, the definition may differ.
Your suggested implementation of an opaque data type is UB in C ... you can't cast "a simple structure" (whatever that is) to "the actual data-type". And that's not how it's usually implemented ... see duskwuff's answer for an actual example.
This is implementation dependent, but opaque datatypes are used all the time by libraries... You are using a pointer to an opaque type, and you pass that pointer around but never dereference it directly in your code... Since it's a pointer, you can cast it to any type necessary
Again, the C standard does not sanction such casts ... they are UB. The way to implement an opaque pointer is to use partial declarations ... forward declaration of a struct type without defining it. The definition is internal to the implementation. No cast is involved.
P.S. Deduplicator already posted a link to how to do this: http://stackoverflow.com/questions/3854113/what-is-an-opaque-value
Thank you for the link, and I feel like the link describes what I'm saying, but I guess there are some semantic differences where you feel I'm not exactly explaning things correctly. I'm curious, why is pointer casting considered UB?
Read the standard ... you cannot cast a pointer to one struct type to a pointer to another struct type and then dereference it unless one of the structs is the first member of the other. The correct solution is given by paxdiablo at the link, and it is clearly not what you describe because there's no "simple structure" and no casting ... it's the same struct xyzzy in both the caller and the implementation, but only the implementation has access to the members of the struct.
I did look at paxdiablo's answer, and I still believe that this is not UB. The reason is that according to the standard, you can cast from T* to U* and back to T*, and provided the alignment is correct, the standard specifically states this is not undefined behavior. So the library creates an internal T*, passes the user from the library function an opaque type U* through a cast... The user does not dereference this pointer, but passes it back to the library in another call, at which point the library recasts it to the original internally defined non-opaque T*... That is not UB.
"you can cast from T* to U* and back to T*" -- but that's not what you're talking about doing. paxdiablo's method is correct, there's no casting, and I've already wasted my time talking to you. Goodbye.
Developers working on data integration projects are often required to load numerous database tables in a particular sequence, with parts of the load process carried out in parallel to reduce load times. Ideally such load routines should be configurable—so that, for example, a data warehouse can be reloaded or refreshed with new data—and it should be possible to restart a failed load routine once the reason for the failure has been addressed.
To handle these requirements, the 11.1.1.5 release of Oracle Data Integrator 11g introduces load plans. Load plans—building on the interfaces, packages, procedures, and scenarios already present in Oracle Data Integrator projects—provide the ability to create hierarchical data integration processes that enable conditional execution, parallel execution of integration tasks, and plan restartability after a failure.

Creating Your First Load Plan
So how do load plans work, and how do they differ from packages, the traditional way to sequence integration steps in Oracle Data Integrator 11g? To find out, let’s work through a scenario in which data is sourced from the OE (Order Entry) sample schema that comes with most Oracle Database releases and is loaded into product and customer dimension tables as well as an ORDERS fact table in another schema. If you want to try this new feature yourself, download and install Oracle Data Integrator 11.1.1.5 (or later), access a database with the OE sample schema installed, and download and install the load plan project files. Follow the instructions in the zip file for installing the load plan project files.
In the initial version of this article’s Oracle Data Integrator project, a package loads each table in turn via a set of interfaces. Now let’s enhance this load routine, so that (1) the two dimension tables are loaded in parallel before the fact table and (2) the user has the option to load just the fact table, skipping the dimension table load.
To do this, follow these steps:
With Oracle Data Integrator’s Studio integrated development environment (IDE) open, click the Designer navigator tab and navigate to the Load Plans and Scenarios pane. At the right of the pane header, select New Load Plan.
The load plan editor opens on the right-hand side of the screen. Ensure that the Definition tab is selected, and then enter the following details:
Name : OELoadPlan
Description : Load Plan to load customer, product, and order data
Click Save to save the load plan’s initial definition.
To add a new first step to the plan that will run a procedure called Trunc Error Table to truncate the error table, first select the Steps tab in the left column. Then click the add step button (the green plus [+] sign), select Serial Step from the menu, and rename it Initialize.
To add the Trunc Error Table procedure to the step, locate it on the Projects panel and drag and drop it on top of the new Initialize step. Load plans run only scenarios, the compiled form of procedures, and other Oracle Data Integrator integration objects, so when you drop the step, the load plan editor automatically creates the scenario and adds it to the load plan for you.
Figure 1: Adding a Case step to the load plan
The Case Step wizard launches. Click Lookup Variable, select the variable to use in the Case step—LoadOrdersOnly in this case—and then click Finish.
A Case step is accompanied by one or more When steps that test for individual values and an Else step that covers all other values. Here’s how to add a When step that loads just the fact table when this variable value is set to 1: With Case Step selected, click the add step button and select When Step from the menu. Then on the Step Properties panel, enter and select the following values:
Name : When Value = 1
Operator : Equals (=)
Value : 1
Then go back to Case Step and click the add step button to add an Else step to it. Finally, click When Step, Else Step, and the add step button to add a new Serial step to each one—ready for you to start adding project interfaces to each of the new steps.
The first interface you’ll add is for loading just the fact table. To do this, drag and drop the Pop.Fact_Orders interface onto the Serial step under the When Value = 1 step.
For full loads handled by the Else step, you first want to load the two dimension tables in parallel and then load the fact table. To load the two dimensions in parallel, click the add step button to add a new Parallel step under the Serial step under the Else step and then drag and drop the Pop.Dim_Products and Pop.Dim_Customers interfaces onto this new Parallel step.
Then click back on Serial step under the Else step, click the add step button to add a new Serial step under it, and then drop the Pop.Fact_Orders interface onto it. Once complete, your load plan should look like the one in Figure 2.
Figure 2: The initial load plan
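As a rough illustration of the ordering this plan encodes (not of how ODI actually runs scenarios), the case split, parallel dimension loads, and serial fact load could be sketched in Python:

```python
from concurrent.futures import ThreadPoolExecutor

completed = []  # records the order in which steps finish

def run(step):
    # Stand-in for executing an ODI scenario.
    completed.append(step)

def execute_load_plan(load_orders_only):
    run("Trunc Error Table")                      # Initialize (serial)
    if load_orders_only == 1:                     # Case / When Value = 1
        run("Pop.Fact_Orders")                    # fact table only
    else:                                         # Else: full load
        with ThreadPoolExecutor() as pool:        # dimensions in parallel
            pool.map(run, ["Pop.Dim_Products", "Pop.Dim_Customers"])
        run("Pop.Fact_Orders")                    # fact table last (serial)

execute_load_plan(load_orders_only=0)
assert completed[0] == "Trunc Error Table"
assert set(completed[1:3]) == {"Pop.Dim_Products", "Pop.Dim_Customers"}
assert completed[-1] == "Pop.Fact_Orders"
```

The key property mirrored here is that the fact load cannot start until both parallel dimension loads have finished, regardless of which one finishes first.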
Now that you’ve created the basic load plan, let’s test it out. Click Save to save your load plan details, and ensure that you have a standalone agent running (because you cannot use the built-in agent that comes with Oracle Data Integrator’s Studio to run load plans). Click the Execute button at the top of the load plan editor, enter 0 as the startup value of the LoadOrdersOnly variable to trigger a full load, and then switch to the Load Plan Executions pane within the Operator navigator to see the outcome of the load plan run.
Double-click the load plan run under the Agent folder on the Load Plan Executions pane. A window opens, showing the actual steps that were executed by this load plan run. In this case, because you passed 0 as the variable value when executing the load plan, the Else part of the plan executed and performed a full load. If you executed the load plan again but this time passed 1 as the LoadOrdersOnly variable value, you would see the When part executed instead.

Exceptions and Plan Restartability
So far you’ve seen the conditional execution part of load plans in action, but what about exceptions and restartability?
Let’s continue this scenario by considering how you might handle a situation in which the load plan tries to process rows for the ORDERS fact table but those orders reference product dimension IDs that don’t exist, a common scenario for data warehouse developers.
To simulate this situation, let’s first disable the constraint on the OE.ORDER_ITEMS table that stops you from entering invalid product ID values into the PROD_ID column. (You might want to back up your OE schema before doing this, so that you can restore it to its original values afterward.)
ALTER TABLE order_items
DISABLE CONSTRAINT order_items_
Now let’s add new values into the ORDERS and ORDER_ITEMS tables that reference a PROD_ID that doesn’t exist in the OE.PRODUCT_DESCRIPTIONS table:
INSERT INTO orders
INSERT INTO order_items
INSERT INTO order_items
Now execute the load plan again, passing 1 as the LoadOrdersOnly value to load just the fact table. This time the load plan fails at the step where it tries to load the fact table, because the product key lookup fails and Oracle Database raises an error when the load plan subsequently tries to insert a NULL value into the OE_TARGET.FACT_ORDERS.PROD_ID column, which has a NOT NULL constraint on it, as shown in Figure 3.
Figure 3: The load plan showing the error caused by an invalid product ID
To deal with this type of data issue, you need to do two things:
Create an exception with an Exception step that, in turn, runs an Oracle Data Integrator procedure that moves any such rows out of the OE.ORDER_ITEMS table into an error table in the OE_TARGET schema.
Associate this exception with the scenarios in the load plan that load the data warehouse fact table, so that when you try to restart the failed load plan, it will complete successfully.
With the load plan open in the Designer navigator, click the Exceptions tab, click the add step button, and select Exception Step from the menu.
Double-click the new Exception step to rename it, and call it Load Order Exception. To add the Oracle Data Integrator procedure that moves the rows to the error table, drag and drop the Move Offending Items procedure from the Projects pane onto the new Load Order Exception step, so that it is added as a scenario to the load plan, as shown in Figure 4.
Figure 4: Defining the Exception step
Now locate the steps in your load plan that run the scenarios that load the FACT_ORDERS table—in Figure 2, these are steps 6 and 13—and change their restart type in the Property Inspector from Restart from New Session to Restart from Failed Step.
Navigate in turn to each of these step’s parent steps—in Figure 2, these are steps 5 and 12—and in the Property Inspector, change the Exception step value to Load Order Exception, the exception you defined in the previous step.
Choosing these settings ensures that in the event of an error, the Move Offending Items procedure will run to remove the erroneous rows and the load plan can be restarted at this point, skipping all the previous steps.
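Restarting from a failed step boils down to persisting which steps have already succeeded and skipping them on the next run. A toy sketch of that idea (illustrative only, not ODI internals):

```python
def run_plan(steps, done, fail_on=None):
    """Run steps in order, skipping any already in `done` (the persisted log)."""
    for step in steps:
        if step in done:
            continue                  # already succeeded in a previous run
        if step == fail_on:
            raise RuntimeError(f"step failed: {step}")
        done.add(step)                # persist success before moving on

steps = ["Dim_Products", "Dim_Customers", "Fact_Orders"]
done = set()

try:
    run_plan(steps, done, fail_on="Fact_Orders")   # first run: fact load fails
except RuntimeError:
    pass
assert done == {"Dim_Products", "Dim_Customers"}   # dimension loads survived

run_plan(steps, done)                              # restart: reruns only the failed step
assert done == set(steps)
```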
Figure 5: The restarted load plan, rerunning only the failed step
Load plans in Oracle Data Integrator 11g give you the ability to define data warehouse and other data integration load routines that enable conditional execution and support exceptions and restartability. Available in the 11.1.1.5 release and with additional features in the 11.1.1.6 release, load plans build on the existing interface, procedure, and package features in Oracle Data Integrator and provide a new way to orchestrate and manage your data loading routines.
Photography by Igor Ovsyannykov, Unsplash
import unittest
import pygame
from src.murus_gallicus.ui import UIRender
from src.murus_gallicus.constants import SPQR_RED, CELTIC_GREEN, ICON_PATH
class TestUI(unittest.TestCase):
"""Class of Unit Tests to check bugs in the UI Class."""
image_path = ICON_PATH
def test_if_ui_instance_well_initialized(self):
"""Test if the attributes of UIRender are well initialized."""
ui = UIRender(TestUI.image_path)
self.assertTrue(ui.run)
self.assertIsInstance(ui.clock, type(pygame.time.Clock()))
self.assertEqual(ui.game_mode, "UNKNOWN")
self.assertEqual(ui.bottom_player_color, 0)
self.assertEqual(ui.top_player_color, 0)
def test_if_bottom_color_player_well_set(self):
"""Test if the UI has the right player colors when set_bottom_player_color() are called."""
ui = UIRender(TestUI.image_path)
ui.set_bottom_player_color(CELTIC_GREEN)
self.assertEqual(ui.bottom_player_color, CELTIC_GREEN)
self.assertEqual(ui.top_player_color, SPQR_RED)
ui.set_bottom_player_color(SPQR_RED)
self.assertEqual(ui.bottom_player_color, SPQR_RED)
self.assertEqual(ui.top_player_color, CELTIC_GREEN)
def test_if_row_col_well_retrieved_from_mouse_pos(self):
"""Test if the row/column of the board_grid are well deduced from the mouse position on the screen."""
ui = UIRender(TestUI.image_path)
row, col = ui.get_row_col_from_mouse((10,25))
self.assertEqual(row, 0)
self.assertEqual(col, 0)
if __name__ == '__main__':
unittest.main()
Managing Backups in Home Assistant
Today I'm going to talk about an important aspect of home automation that is often overlooked: backups.
Many of us don't think about this practice, but we must not neglect it.
In this article I will show you how to back up Home Assistant very easily to any private external storage.
By any storage I mean your NAS, your PC, your phone, or your tablet.
In addition, and thanks to an add-on, I will also show you how to automatically generate full or partial backups of Home Assistant.
Prerequisites:
- Home Assistant OS installed
- An external device with Syncthing installed
- HACS installed on Home Assistant
For those who don't know it, the principle is very simple. Syncthing is open-source software that synchronizes folders or files end to end. It can be installed on a PC (Windows, Mac, Linux, Docker), on a NAS (Docker, TrueNAS, Synology), or on a mobile device; in other words, it runs on practically any platform.
No excuses not to use it.
There is no server/client distinction: each system where Syncthing is installed is a peer, so you configure the sending and receiving of each folder/file on each device. Very easy to set up, Syncthing is accessible to everyone and lets you do without proprietary clouds like Google Drive, Amazon S3, etc.
In my example I installed Syncthing on Home Assistant OS (HAOS) and on an OpenMediaVault NAS with Docker.
Installing Syncthing in HAOS
Let's start by adding Poeschl's repository to the add-on manager. Go to: Settings > Add-ons > Add-on Store > Menu > Repositories.
- Paste the repository URL: https://github.com/Poeschl/Hassio-Addons and click Add.
- Then refresh the page and you will see the repository: Poeschl Home Assistant Add-ons. This repository contains a lot of additional modules, such as Asterisk, MPD, rsync, etc.
- Click on Syncthing and install it; you don't have to change the default settings.
- Once installed, just click on Open Web UI, and that's it.
Let’s go to the NAS settings:
Note: setting up a login and password to access the web interface is not mandatory in Home Assistant, because Home Assistant itself is already protected by a password. Syncthing will nevertheless warn you:
"The Syncthing admin interface is configured to allow remote access without a password. This can easily allow an intruder to read and modify any file on your computer. Please set a username and password in the Configuration window."
Let's start by adding a device, in my case the NAS. Click Add Remote Device at the bottom right; you will need the identifier of your device. Nothing could be simpler: go to the Syncthing server installed on your NAS at http://[nas-ip]:8384, click the Actions tab at the top right, and select Show ID. Copy the ID, paste it into the Syncthing instance on Home Assistant, enter a friendly name (for me, NAS), and confirm. As a last step, return to the NAS's Syncthing and authorize the device in the yellow banner.
Then add a share; in my case I want to share the /backup folder. Enter a share name and, in the folder path field, enter /backup. As a last step, go to Advanced > Folder Type > Send Only, then click Save.
Go to the NAS's Syncthing and accept the share. Once accepted, go to the shared folder that bears the previously registered name and click Edit. Go to Advanced > Folder Type > Receive Only, and finish by clicking Save.
And there you go: your /backup folder, with all your backups, will be synchronized with your NAS.
Auto Backup (HACS)
Now that you have learned how to easily synchronize your backups externally using Syncthing, let's move on to creating automatic backups in Home Assistant. For that I chose to install Auto Backup, a module available in HACS. Click on the link below to easily install Auto Backup through HACS.
Then, once installed, click on the link below to add auto_backup as a device in HA.
Once the Auto Backup integration is installed, it exposes the following services:
as well as the following events:
With this module you will be able to:
- Use more advanced and configurable service calls.
- Exclude add-ons/folders from a backup.
- Automatically delete backups after an individually specified amount of time.
- Download backups to a specified directory after completion (for example a USB drive).
- Use add-on names instead of slugs.
- Monitor the status of your backups with a dedicated sensor.
- Receive events when backups are started/created/failed/deleted.
- Use generational backup schemes.
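By way of illustration, a minimal automation calling one of these services could look like the sketch below. Assumptions: the auto_backup.backup_full service name and its keep_days option follow the Auto Backup documentation at the time of writing; the schedule and backup name are hypothetical examples, so adjust them to the version you install.

```yaml
# Sketch for configuration.yaml (hypothetical schedule):
# take a full backup every night at 03:00 and let Auto Backup
# purge each backup automatically after 7 days via keep_days.
automation:
  - alias: "Nightly full backup"
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: auto_backup.backup_full
        data:
          name: "Nightly {{ now().strftime('%Y-%m-%d') }}"
          keep_days: 7
```

Combined with the Syncthing share described above, each nightly backup lands in /backup and is then mirrored to the NAS.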
Example of Blueprint integration for generating backups:
Blueprint integration example for generating a notification:
Auto Backup also has full documentation available; just click on this link. You will easily find examples of automations and notifications in Blueprint form.
This is in my opinion an essential tutorial, so I did not provide screenshots; with a minimum of Home Assistant knowledge you should be able to manage, at least I hope so. If Syncthing doesn't suit you, you can always integrate a Google Drive-compatible module instead; that tutorial is written by Juanmi, a Home Assistant specialist. Finally, do not hesitate to contact me on the forum or by leaving a comment, and I will answer you quickly 😉.
Did Solar Impulse fly in commercial airspace, and did it have some kind of type 23 certification?
I read that the amazing Solar Impulse circumnavigated the globe back in 2015, accumulating around 500 flight hours. It flew at a speed of 70 km/h and a cruising altitude of 28,000 ft. My questions are below.
Did it fly in commercial airspace? If yes, did it have some kind of type 23 certification for small aircraft?
I have searched for information about its systems, such as the flight control system and autoflight, but found nothing. What kind of systems were used in Solar Impulse? Did it have systems typical of small aircraft such as a Dassault Falcon or a Pilatus?
Maybe someone has detailed information about Solar Impulse's systems (flight control, autoflight, etc.)?
Thank you
I wouldn't call an ATR a small aircraft. A bizjet such as a Dassault Falcon is smaller, not to mention GA aircraft.
You'd be amazed to learn how far you can get with an experimental certification if you create enough publicity about the flight you are trying to accomplish.
@PeterKämpf So the experimental certificate allows the aircraft to fly in busy commercial airspace?
There isn't really such a thing as commercial airspace.
@ZahiAzmi: Yes, all you need to do is to apply for a special permission and get that accepted. Publicity helps with that acceptance. A lot.
You are asking two completely different questions here: 1. about the certification and 2./3. about the flight control and autopilot system. You should have asked two independent questions for this.
I really cannot imagine it being anything else but "experimental aircraft"
You don’t need any type of special permission to fly a properly equipped experimental aircraft in airspace normally frequented by commercial traffic. In the US, the Classes of Airspace and even the non-military airways are open to any pilot and/or aircraft properly certificated (current license, medical, airworthiness, registration, etc.) regardless of if they are commercial, private, GA, etc. I fly out of four different airports in and under a very busy Class B. There are experimental aircraft that fly out of those same airports all the time. I fly right over the top of the B airport at TPA,
To answer the question presented in the title, I looked to Wikipedia for details on the flight(s) of Solar Impulse. According to Wikipedia, Solar Impulse 2 (HB-SIB) circumnavigated the Earth’s Northern Hemisphere in 17 legs/flights. Of those 17 legs, 14 legs had maximum altitudes above 18,000 feet MSL. In the U.S., Class A airspace is between 18,000 and 60,000 feet MSL. It is not considered “commercial” airspace. It is accessible to any aircraft with a Mode C transponder and ADS-B, including experimental aircraft. And even this requirement can be waived with prior approval.
91.135 Operations in Class A airspace.
Except as provided in paragraph (d) of this section, each person operating an aircraft in Class A airspace must conduct that operation under instrument flight rules (IFR) and in compliance with the following:
(a) Clearance. Operations may be conducted only under an ATC clearance received prior to entering the airspace.
(b) Communications. Unless otherwise authorized by ATC, each aircraft operating in Class A airspace must be equipped with a two-way radio capable of communicating with ATC on a frequency assigned by ATC. Each pilot must maintain two-way radio communications with ATC while operating in Class A airspace.
(c) Equipment requirements. Unless otherwise authorized by ATC, no person may operate an aircraft within Class A airspace unless that aircraft is equipped with the applicable equipment specified in §91.215, and after January 1, 2020, §91.225.
(d) ATC authorizations. An operator may deviate from any provision of this section under the provisions of an ATC authorization issued by the ATC facility having jurisdiction of the airspace concerned. In the case of an inoperative transponder, ATC may immediately approve an operation within a Class A airspace area allowing flight to continue, if desired, to the airport of ultimate destination, including any intermediate stops, or to proceed to a place where suitable repairs can be made, or both. Requests for deviation from any provision of this section must be submitted in writing, at least 4 days before the proposed operation. ATC may authorize a deviation on a continuing basis or for an individual flight.
For further details about the aircraft, its custom autopilot and flight controls, and the actual flight itself, visit https://aroundtheworld.solarimpulse.com/adventure .
For the Solar Impulse system (question 3):
The photovoltaic cell is made from two layers of silicon (a semiconductor material):
- a layer doped with boron, which has fewer electrons than silicon; this zone is therefore positively doped (the P zone);
- a layer doped with phosphorus, which has more electrons than silicon; this zone is therefore negatively doped (the N zone).
When a photon of light (the smallest indivisible quantum of energy associated with electromagnetic waves, ranging from radio waves to gamma rays) arrives, its energy frees an electron from a silicon atom, modifying the electrical charges. This is called the photovoltaic effect. The positive charges then move to the P zone and the negatively charged electrons to the N zone. A difference in electric potential, that is to say an electric voltage, is thus created.
On the Solar Impulse, there are exactly 17,248 silicon solar cells on a surface of 269.5 m², capable of capturing up to 340 kWh of solar energy per day. The energy created by these cells is stored in four high-performance lithium polymer batteries, located in nacelles isolated from the aircraft. The only drawback is the weight: they alone represent a quarter of the total weight of the aircraft.
These batteries power motors that have an average power of 15 hp over 24 hours, comparable to a small motorcycle, with a maximum power of 70 hp. The four motors are fixed under the wings, each with a two-blade propeller four meters in diameter. The total efficiency of this propulsion set is 94%, which makes it a record for energy efficiency. The speed of the Solar Impulse varies between 36 km/h and 140 km/h, which is comparable to the speed of a car.
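Using only the figures quoted above, a quick back-of-the-envelope check (an illustrative calculation, not official Solar Impulse data) shows that the quoted daily solar harvest covers the average propulsion demand:

```python
# Back-of-the-envelope energy budget from the figures in the text.
HP_TO_KW = 0.7457                      # one mechanical horsepower in kilowatts

avg_power_kw = 15 * HP_TO_KW           # 15 hp average motor power over 24 h
daily_demand_kwh = avg_power_kw * 24   # energy the motors draw in a full day
daily_harvest_kwh = 340                # quoted solar capture on a good day

print(f"average demand: {daily_demand_kwh:.0f} kWh/day")
print(f"solar harvest:  {daily_harvest_kwh} kWh/day")
print(f"margin:         {daily_harvest_kwh - daily_demand_kwh:.0f} kWh/day")
```

With these numbers, the roughly 340 kWh harvested on a good day exceeds the roughly 268 kWh the motors draw on average, leaving a margin for climbing and battery losses.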
For the flight, I think there were no special rules... see this:

Let us look at what happens during a 24-hour flight. For the solar panels, the day begins late and ends early: enough light to sustain flight is limited to just 10 hours per day! It's a race against time...

6am: The sun has just risen. The airplane is on the runway. Its batteries, charged by the sun on the previous day, are nearly full. So it can take off using that stored energy...

6am-6pm: It gains altitude. The four motors turn at maximum power as the plane must create lift (the force that allows the plane to climb) to reach a thinner layer of air where there are fewer clouds. Despite the steady energy consumption, the batteries are charging.

Nearing 6pm: It reaches 9000 meters (29,500 feet), its maximum altitude. The sun's rays fade. The motors are throttled down and the plane starts to glide down to an altitude of 1500 meters (5000 feet), which takes about 4 hours, during which time it consumes almost no electricity...

Nearing 10pm: At 1500 meters (5000 feet) of altitude, the pilot powers up the motors again, but this time they take their energy from the batteries. The airplane flies like this until daybreak, when solar energy again feeds power to the motors and recharges the batteries. A new cycle begins.
To determine the flight path that Solar Impulse will take, many factors must be considered: the weather, air traffic areas, the height of the terrain to be flown over, and the performance of the airplane. A team of engineers and meteorologists based at the Mission Control Center (MCC) in Monaco determines the best route for the plane and then prepares the flight plan. Once the route is chosen, overflight and landing clearances must be negotiated for each country.
The numbers in the picture are the energy efficiencies.
http://solarimpulsemmktpe.e-monsite.com/pages/i-le-fonctionnement-et-la-conception.html
For more information: https://aroundtheworld.solarimpulse.com/adventure/technical-challenge-1
How does this answer the question about type 23 certification?
It is an answer to question 3.
I am so sorry for the unclear question; by system, I mean "typical" aircraft systems such as flight controls or autoflight.
Ah, sorry, I only read the title and the beginning of the question body. As stated, the question is way too broad though.
Question not clear
@L'aviateur Sorry, my bad.
Yeah, but is my question useless for you?
So I added some info about the flight.
See what's in bold.
The ability to "schedule" rules now requires setting up two separate rules: one rule to turn the sensor on and a separate rule to turn it off. A byproduct of this setup is that it makes the sunrise/sunset feature practically useless, unless you want something to happen specifically between sunset and sunrise or vice versa. For instance, if I wanted to turn my Christmas lights on at sunset and then turn them off 4 hours after sunset, I would not be able to do this without manually setting it up each day.
The following Popular Rule is a direct excerpt from the automation section of the portal.
Turn on a light for 15 minutes every hour between sunset and midnight.
There is no way to actually make the system perform that task in the current configuration.
Agreed! This needs to be addressed prior to full implementation. There are many shortcomings with the new portal. Let's fix them not ignore them.
I agree some of the limits with setting rules in the new version require a closer look. I like the sunrise/sunset options, however, once a rule is created there is no way to view the times you set like the old version. If you select edit there is no way to see the settings for the rule you created. It seems like it wants you to re-create the rule not modify it. Hope this is corrected before the old version is retired.
They have not improved it at all. I just got the system installed and it "works" exactly as described here - to turn a light on for 15 minutes every hour at night will literally require 24 rules, none of them editable if you screw something up.
Hi Amez, zembot, gvjones920 & aorons -
Thanks for reaching out with feedback on the new web portal & rules functionality. We appreciate the input. You're correct - On/Off for lights has to be automated with two rules. We have already begun work on adding on/off in a single rule & will be implementing this soon.
It is very important to me to have the ability to adjust dim levels of individual light, duration, repeat, etc etc. in one rule. This is what I would use in the old portal and would expect to have in the new portal.
I have ongoing issues with my rule for when the system is disarmed as well.
I changed the times of day for getting a text when the system is disarmed, and it will work once or twice after I save the changes, but then the next day it won't send the text that the system was disarmed.
I have tried using the old portal and it's no good there either; it's very frustrating given that this issue started only after making the changes in the "new" portal.
Not sure why this issue only impacts some rules and not others; I can change when the lights turn on and off with no issues.
My journey into game development 🙂 If you are in a similar situation: the links in the article lead to blogs that I wrote during these events; maybe you can relate to them and get something out of it.
I started in 2017 with ‘Find the Gnome’ and was not ready for it. I created this game while working full-time at another job and having a family to attend to. It burned me out for 1.5 years. But it was not a failure; it was a journey. I learned so much that I don't think I could have learned it any other way than through this experience.
In December 2019 I officially started my own company. And started freelancing as a business software developer and spending a small but dedicated amount of time on my game development.
Working on my own title ‘Manage the Universe’ taught me that I needed to get better at crafting games before making my dream game.
I am currently working part-time on ‘Find the Gnome 2’, a successor to an earlier game. I am open to side projects as a freelance game dev and business software dev.
In November of 2017, I started creating my first game ‘Find the Gnome’.
Experimenting while building Find the Gnome, with questions like ‘Can I earn a base income if I create 3 games in 2 years?’ or ‘Can I use my previous experience in business software development to amplify a start in game development?’.
Release of Find the Gnome, a game built in 500 hours over 7 months, alongside a 40-hour workweek and a wife and 2 kids to attend to.
I could not cope well with the stress of building ‘Find the Gnome’, the dramatic launch, and the amount of energy it had drained.
My first game jam, the GlobalGameJam 2019 in Zwolle (external website), was really inspiring to me.
A lot of reality checks got me thinking. Seeing all those people try, at the game jam but also on YouTube, was a really depressing sight at first. I deemed myself not worthy.
Thankfully I got inspired again by successes at my ‘normal’ job, thanks to mentoring and personal growth. I finally saw the light and realized that I am a professional software developer with capabilities that are worth money, and that people want me to fix things for them. I wrote a few blogs about agile that were (in hindsight) try-outs in finding my passion.
But reality is hard; it is incredibly hard finding time and mental peace. I prioritize family life, and thus getting game dev from hobby to profession in my free time seemed farther away than ever.
In November 2019 a few things happened. My wife and I looked at each other and said: let's take things into our own hands. So that's when I quit my job.
Starting a game dev company is hard, especially if you are from the outside like me and don’t have specific artistic qualities. I tried a few other concepts before settling down on the current path.
Joined the GlobalGameJam 2020 Breda for a 48-hour game jam / hackathon.
Started my own company as a freelancer, doing part-time work on GameFeelings, but spending most hours (32) as a freelance C# .NET backend/full-stack software developer.
During the first half of 2020 I completed one consultancy assignment in business software development that paid for the remainder of 2020. Furthermore, I took on two freelance game dev opportunities.
In the 3rd quarter of 2020 I worked on my own title ‘Manage the Universe’. This was sort of the dream game I had hoped to create. However, the sheer scale brought me into disarray. Also, I tried really hard to create good art, but that was a mountain to climb.
This combination of issues led to me picking up my software consultancy work again and re-evaluating what I liked about game development. Then, in December 2021, I decided I needed to give ‘Find the Gnome’ an update: the scale was smaller, my ideas were much more refined (after all these years thinking about what had gone wrong and what could have been done better), and the art style was more accessible to me.
I started by salvaging what was left of Find the Gnome, but quickly realized I had to create all the assets again. The art was too inconsistent; there was no coherency. And I wanted to do low poly because I liked that style.
During the year, however, I discovered that having other people work for me was actually very much worth the money. So instead of creating all this art myself, I had a concept artist work out what I wanted and then used this to instruct modellers to build my game models. This proved to be the silver bullet for my earlier issues with game development: better quality work than I could have done myself, without having to make it all myself and learn that craft.
With more money funneling into this project, and with artists spending tens to hundreds of hours on it, I realized that I had a duty to make the best of this game. Not only for me (and the money), but also to honor the effort others put into the game. So that is when I decided to make ‘Find the Gnome 2’ a separate game and give it the spotlight it deserved.
Snort is processing VOIP/SIP media packets
Hopefully someone can help here. We have Snort running on the LAN interface; I see this is recommended across the Internet, but it does contradict BMeeks' quick setup guide.
We also run HAProxy, and I see that with Snort on LAN none of the HAProxy traffic is inspected, so I will move it to WAN. I do, however, like seeing which local host is being targeted, so is running on both WAN and LAN perhaps desirable?
That aside, before I start processing all traffic, we have an issue with the SIP preprocessor / Stream5 preprocessor.
I have created an alias for our VOIP servers and a port list alias for the signalling ports (5060, 5061, 5080, 5081), and applied these aliases in the appropriate locations in the Variables tab.
I've reviewed the auto generated snort.conf file and see under the SIP preprocessor ignore_call_channel is set.
This should tell Snort via the Stream5 API not to inspect the UDP traffic containing the voice/video data.
Despite this being enabled, when a call is active we see the CPU usage increase from 1% at idle to around 20-30%, and this increases with each additional call. This suggests the UDP traffic carrying voice data is still being inspected.
I've tried adding our UDP media ports 16384:32768 to the ports alias so that Snort knows which ports the media is sent on, but after doing this Snort will not start on the LAN interface. Also, a bit of googling suggests this should not be necessary. I also see a SIP_Proxy_Ports entry in the Variables tab, whose defaults should cover our needs, but when reviewing the snort.conf file I cannot see any reference to this entry, nor does adding an alias to it make any changes to snort.conf.
I thought this might be the cause, perhaps a bug, so in the advanced settings pass-through I manually added the line 'portvar SIP_PROXY_PORTS [5060:5080,16384:32768]'. While Snort did start without issue, it made no difference: still high CPU usage during a call.
Can anyone shed any light on what I'm doing wrong here? Anymore info required let me know.
Running pfSense 2.2.4-RELEASE (amd64) and Snort PKG 188.8.131.52
Just tested this on a fresh pfSense 2.3.2 install with Snort 184.108.40.206_14 and the issue persists, so hasn't been fixed in the latest releases, assuming it is a bug of course and not something I am missing.
Just to update: I have used a BPF file to bypass Snort on the media ports to the VOIP hosts.
This has resolved the CPU issue, although it is a workaround rather than a fix, so I would still appreciate any input.
To achieve this I created /etc/snort.bpf with the following contents
not (host 10.0.200.161 and udp portrange 16384-32768)
and added the following line to the advanced configuration pass-through
config bpf_file: /etc/snort.bpf
saved the configuration and restarted snort. Now calls do not hog the CPU.
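As a side note, the same bypass approach generalizes. For example, a hypothetical filter that excludes RTP media for a whole subnet instead of a single host (the subnet below is an assumed example, untested against this setup) would read:

```
not (net 10.0.200.0/24 and udp portrange 16384-32768)
```

As with the single-host version, the expression goes in /etc/snort.bpf and is referenced via the bpf_file pass-through line.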
kubernetes v1.19.5 container runtime docker-containerd error deploy coredns
Description
With Kubernetes v1.19.5 and the docker container runtime, coredns latest deploys successfully, but with the docker-containerd container runtime v1.4.3, deploying coredns latest fails.
Steps to reproduce the issue:
Deploy coredns on Kubernetes v1.19.5 with the docker-containerd container runtime.
Describe the results you received:
coredns [Dec 16, 2020 12:54:40 PM GMT+7] E1216 05:54:40.324876 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://<IP_ADDRESS>:443/api/v1/services?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:55:08 PM GMT+7] E1216 05:55:08.036263 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://<IP_ADDRESS>:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:55:14 PM GMT+7] E1216 05:55:14.948263 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://<IP_ADDRESS>:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:55:21 PM GMT+7] E1216 05:55:21.540372 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://<IP_ADDRESS>:443/api/v1/services?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:55:52 PM GMT+7] E1216 05:55:52.133305 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://<IP_ADDRESS>:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:56:03 PM GMT+7] E1216 05:56:03.716440 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://<IP_ADDRESS>:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:56:18 PM GMT+7] E1216 05:56:18.694104 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://<IP_ADDRESS>:443/api/v1/services?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:56:45 PM GMT+7] E1216 05:56:45.316567 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://<IP_ADDRESS>:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:56:51 PM GMT+7] E1216 05:56:51.972765 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://<IP_ADDRESS>:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
coredns [Dec 16, 2020 12:56:51 PM GMT+7] E1216 05:56:51.972838 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://<IP_ADDRESS>:443/api/v1/services?limit=500&resourceVersion=0": dial tcp <IP_ADDRESS>:443: connect: no route to host
Describe the results you expected:
Kubernetes v1.19.5 with container runtime docker:
coredns  .:53
coredns  [INFO] plugin/reload: Running configuration MD5 = f68047850b236e33a043cf64188722fd
coredns  CoreDNS-1.8.0
coredns  linux/amd64, go1.15.3, 054c9ae
Output of containerd --version:
core@manager-01 ~ $ containerd --version
containerd github.com/containerd/containerd 1.4.3 6806845b4f638417933f68721e289c9aeda456b1
Any other relevant information:
core@manager-01 ~ $ sudo kubectl get nodes -o wide
NAME                    STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                             KERNEL-VERSION   CONTAINER-RUNTIME
manager-01.bnpb.go.id   Ready    <none>   10m   v1.19.5   <IP_ADDRESS>   <none>        Flatcar Container Linux by Kinvolk 2605.9.0 (Oklo)   5.4.81-flatcar   containerd://1.4.3
Likely to be CNI misconfiguration, not a containerd bug
@AkhiroSuda ,
OK, I will check the CNI config and redeploy again tomorrow.
Thank you for the hint.
This has been fixed; it was not an issue with coredns but with authorization on the kube-apiserver secure port 6443.
Table of contents
- Why is TCP and UDP important?
- How does TCP provide security?
- Why do servers use well known TCP or UDP ports for network communications?
- Is TCP and UDP secure?
- Do I need both TCP and UDP?
- Which layer is most important with respect to network security?
- Which is more important TCP or UDP?
- What is the importance of UDP?
- Why is TCP important?
- How does TCP provide security?
- Why TCP is secure?
- Is TCP IP protocol secure?
- What is TCP in cyber security?
- Do Web servers use TCP or UDP?
- What ports do servers use?
- What are the well-known TCP and/or UDP port numbers for a given collection of common applications?
- Is UDP secured?
- Is TCP protocol secure?
- What provides security to the UDP?
- Can https be over UDP?
Related Questions
Why is TCP and UDP important?
Application programs that rely on reliability, such as file transfer, email, and web browsing, use TCP. Video conferencing, live streaming, and online gaming are some of the applications where UDP is used.
How does TCP provide security?
Any data that is communicated using TCP without encryption can be read by anyone who intercepts it. TCP itself cannot prevent an unauthorized attack on a TCP connection. In TCP, peer entities are identified only by their source IP addresses and port numbers.
Why do servers use well known TCP or UDP ports for network communications?
The well-known ports are a range of port numbers 0-1023 reserved for common TCP/IP applications. By using well-known ports, client applications are able to locate remotely installed server applications.
Is TCP and UDP secure?
The difference between TCP and UDP lies in the fact that TCP is a stateful protocol: it requires acknowledgement of each segment, making it more reliable. The UDP protocol is stateless, meaning it sends datagrams but does not know whether the client receives them. Neither protocol provides encryption or authentication by itself.
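To make the stateless point concrete, here is a minimal sketch using Python's standard socket module: a datagram is sent with no handshake and no acknowledgement, so the sender never learns whether it arrived (the loopback address and payload are arbitrary examples):

```python
import socket

# A receiving UDP socket bound to a free port on the loopback interface.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
port = recv_sock.getsockname()[1]

# A sending UDP socket: no connection setup, just fire a datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))

# The receiver gets the datagram, but no acknowledgement flows back.
data, addr = recv_sock.recvfrom(1024)
print(data)  # b'hello'

send_sock.close()
recv_sock.close()
```

A TCP exchange of the same payload would instead require connect()/accept() (the three-way handshake) before any data could flow, which is exactly the stateful behaviour described above.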
Do I need both TCP and UDP?
Classic DNS over UDP limits a message to 512 bytes, so UDP responses are kept short. Any application needing to transmit data larger than that must fall back to TCP. For these reasons, DNS uses both UDP and TCP, for example.
Which layer is most important with respect to network security?
In the network layer, the Internet Protocol Security (IPsec) framework is widely used to ensure security.
Which is more important TCP or UDP?
Compared to TCP, UDP has the benefits of speed, simplicity, and efficiency. TCP allows packet retransmission, whereas UDP does not: a lost datagram cannot be retransmitted by UDP (User Datagram Protocol) itself. HTTP, HTTPS, FTP, SMTP, and Telnet all make use of TCP.
What is the importance of UDP?
UDP (User Datagram Protocol) is a protocol for sending and receiving datagrams from one host to another, and it has been used for decades to provide low-latency, loss-tolerant connections between programs. Transmissions are sped up because data can be transferred without waiting for the recipient to acknowledge it.
Why is TCP important?
As one of the main components of the Internet, TCP is vital as it establishes the rules and standard procedures used to communicate information. This enables uniform data transmission no matter where, what hardware or software is involved. The internet as it currently exists depends on the services provided by TCP/IP.
Why is TCP considered secure?
Because TCP ensures that all segments are received in order and retransmits any lost segments, it provides a level of reliability greater than UDP, which offers no such guarantee: UDP segments can get lost on poor connections or arrive out of order. This is reliability, however, not security in the cryptographic sense.
Is TCP IP protocol secure?
TCP/IP itself does not encrypt data sent over the network. Access control for Internet Ports (DACinet, in the AIX® operating system) is user-based and is dedicated to controlling communication between hosts using TCP ports.
What is TCP in cyber security?
The Transmission Control Protocol (TCP) is a standard that lets system programs and computing devices communicate over a network. It is a data and message transport protocol designed to ensure the reliable delivery of data and packets across networks.
Do Web servers use TCP or UDP?
A web server accepts requests through the HTTP (and HTTPS) protocol using TCP. When people do not specify whether they mean UDP, TCP, or something else, they usually mean TCP.
What ports do servers use?
Server resources are assigned port numbers by the Internet Assigned Numbers Authority (IANA). For example, SMTP servers use port 25 and web servers use port 80. Ports 1024-49151 are registered ports: they can be registered with the IANA for services and should be considered semi-reserved.
What are the well-known TCP and/or UDP port numbers for a given collection of common applications?
Well-known TCP/UDP ports 0 to 1023:
- Port 21 (TCP): FTP protocol (control) - port for FTP commands and flow control
- Port 22 (TCP, UDP): SSH (Secure Shell) - used for secure logins, file transfers (scp, sftp) and port forwarding
- Port 23 (TCP, UDP): Telnet protocol - unencrypted text communication, remote login service
Is UDP secured?
UDP by itself is not secured: it provides no encryption or authentication. TCP and UDP are both transport options that serve the same purpose of putting your data in motion; they differ only in how they do it. Security has to be added on top of either one, for example with DTLS over UDP.
Is TCP protocol secure?
TCP cannot protect segment data against eavesdropping. Applications transfer data streams over TCP, and any data communicated without encryption can be accessed by anyone. TCP also cannot prevent an unauthorized attack on a connection.
What provides security to the UDP?
Datagram Transport Layer Security (DTLS) provides privacy for communications over UDP and other datagram protocols. With it, client/server applications can communicate without eavesdropping, unauthorized access, or tampering with messages.
Can https be over UDP?
HTTPS can run over a reliable stream transport protocol such as TCP, or over a stream control protocol such as SCTP. UDP, an unreliable datagram protocol, is not expected to carry HTTPS ("unreliable" is not its official name, but it's a good way to think of what it means).
How to update state of a listview located in the previous screen
I am currently developing an sqflite application with Flutter and trying to set the state of my list after performing a Navigator.pop operation from another screen, but it doesn't load the new state unless I do a hot restart. Useful snippets from my code are below. I can also share the whole code if it helps. How can I set the state of the ListView without restarting my app?
// these methods are on my first screen that I display my listview
void getNoteList() {
final notesFutureList = db.getNotes();
notesFutureList.then((data) {
setState(() {
notes = data;
});
});
}
@override
void initState() {
  super.initState(); // call the superclass implementation first
  getNoteList();
}
// This method is on my adding screen attached to the button which performs the saving.
Future<void> saveNote(BuildContext context) async {
await db.insertNote(Note(
header: myControllerHeader.text,
detail: myControllerDetail.text)); // inserted to db.
Navigator.pop(context);
// Navigator.push(context, MaterialPageRoute(builder: (context) => const NoteListScreen(),));
// if a use another push like this it's working. But doesn't look like a good way.
}
You can try to pass a setState callback down, but I guess a cleaner solution would be using state management
Already tried setState method. But didn't work out
If you don't want to go with state management, there are several posts that explain how to use setState: https://stackoverflow.com/questions/48481590/how-to-set-update-state-of-statefulwidget-from-other-statefulwidget-in-flutter
You can make the function async where you call Navigator.push(). Wait for the route to return, and then call setState. Like this:
navigateToDataEntryScreen() async {
  await Navigator.pushNamed(context, "DataAddingScreen");
  setState(() {});
}
Or
navigateToDataEntryScreen() async {
  await Navigator.pushNamed(context, "DataAddingScreen");
  getNoteList();
}
getNoteList() seems to solve the problem. But could you kindly explain how that happened? Does the navigateToDataEntryScreen() method complete when returning to the main screen, which displays the list elements?
Sure Furkan. So basically what happened is: you went to the next screen to add data to the database, so you need to reload the data back into memory. We used the await keyword, which waits until the screen pops back. Once the screen is popped back, the next line is executed, which is the getNoteList() function in this case. This function reloads the data and updates the state. Hope that answers your question.
Someone is trying to say do not use it because it does not work properly.
Is this false information?
That person doesn't know what s/he's talking about
Removing every 5th frame is a decimation operation, not a deinterlacing operation.
If it says source type "film" in DGIndex, and it's 100%, you just use "force film" and you don't have to do anything else, and it will be perfect. This means it was a "soft telecine" source.
But on other DVDs, or if it's not 100% film, you can delete every 5th frame straight up if you want to. The problem is with edits and cadence breaks: the wrong frame can be deleted if you always delete every constant 5th frame.
An IVTC is NOT a deinterlacing operation. TIVTC and Decomb do have deinterlacers built into them for fields that can't be matched properly. That can be switched for about any other deinterlacer if preferred.
Besides being a semi-literate moron, he's an ignorant fool. Guys like that don't want to be taught better. They only want confirmation for their biases. I don't see why people like that are deserving of patience, politeness or respect.
Of course, this is a small world we travel in so he might show up here to spout some more nonsense.
Yeah, I'm not even gonna bother arguing with them. All they want to say is that they know what they're doing, and they give examples saying they did this in VapourSynth.
Yes, it is possible in VapourSynth, with ESRGAN (AI-based training). The model used was specifically trained for American Dad by PRAGMA, and that trained model should be usable for other types of cartoons with similar drawing styles. (You wouldn't be able to apply that model to, say, an anime-style source.)
You can see other examples here
Actually, that probably is PRAGMA. My detective skills say the avatar is the same in that screenshot.
I think the same model was used as this one
You can ask him to post the script. Some pre-processing is typically done, because these GAN methods tend to amplify noise and artifacts. There are other types of models too: denoising models, dehaloing models, etc.
It's fine poison. It's not a big deal to me who, how or what they did. I was just wanting to clear up whether or not the IVTC was as crap as they were claiming. Thanks for clearing up my original question you two.
The TIVTC suite is my go-to nowadays. If, after stepping through the video, I see a pattern, I try:
(rate can be 23.976, 24, 25, etc.)
On some "made for TV" stuff like Legends of Tomorrow, Dolly Parton's Coat of Many Colors, etc., the 5th frame is a full frame, so I just use TDecimate only.
After a test encode, if I don't like the motion, or if I didn't see a pattern and it's interlaced, I just deinterlace and let it be 60fps like so:
Star Trek: The Next Generation DVDs, Doctor Who 2005 Region 1 DVDs, etc., I do this to as well. I may have a blend here and there, but overall there aren't too many "jaggies", it usually stays in sync, the motion is smooth, they're fast, most progressive frames are left alone, and it works under Wine in Linux via AviSynth. I've used srestore, QTGMC, etc. They are fine tools, but they can give odd results at times for mixed material. For noise I may use hqdn3d(2,2,3,3), give or take the strength of the first 2 numbers. If down the road I hate the way it looks, I can re-encode from my master, but so far I have yet to be overly distracted by the final encode. The reason I listed both Telecide and TFM is that there are times one will get it perfect while the other will mess it up. Examples are A-X-L or Captain Marvel on DVD: in some scenes TFM messes up the field match no matter how I tweak it, while Telecide does it right. I usually try Telecide first nowadays.
Why can't a reference variable of type A, which contains a reference to an object of type B, access member functions of class B?
Why can't objA access methodB(), since it contains a reference to an object of class B?
class A
{
    public void methodA()
    {
        .........
    }
}
class B : A
{
    public void methodB()
    {
        ..........
    }
}
now
A obj1= new B();
this throws an error:
obj1.methodB();
Why? obj1 contains a reference to an object of type B, but it still can't access its member functions.
Let's say ClassA was Fruit and ClassB was Apple. MethodA is EatFruit and MethodB is RemoveAppleCore. Does it make sense to call RemoveAppleCore on any fruit (remember, it might be a strawberry or some other fruit)?
If you assign to a type A then the compiler doesn't know that it's a B. It could be just an A or even a C!
https://stackoverflow.com/questions/2662369/covariance-and-contravariance-real-world-example might read that. As a complete answer on a question
You declared the variable of type A:
A obj1
And A has no method called methodB. If you want the variable to be of type B, declare it as such:
B obj1= new B();
(This would of course mean that you can't store any other implementations of A in that variable, only B.)
Or if you don't want to change the variable then you'd need to cast the variable:
(obj1 as B).methodB();
(This would of course fail if obj1 ever contains an implementation of A that isn't a B.)
Basically, when you declare the variable as type A, the compiler treats it as type A. There is no guarantee at any point after declaration that the variable contains an instance of B. It could contain an instance of anything that implements A.
When you put those two lines nicely together, you will think "of course I can execute methodB() on that object! It is a B, obviously!".
But now consider this:
private void DoSomethingWithAnA(A obj1)
{
obj1.MethodB();
}
Why would this work? In this method, you only know that you receive an object A, nobody will assume that you have to call it with an object B. After all, if your method wanted to do something with a B, it should have asked for a B, not an A!
Of course, I can call the method with a B, this would not be a problem:
DoSomethingWithAnA(new B());
But that doesn't mean that DoSomethingWithAnA all of a sudden will do something with a B, it just does something with an A.
If you also want to do something B-specific, you can, however:
private void DoSomething(A obj1)
{
obj1.MethodA();
if (obj1 is B)
{
((B)obj1).MethodB();
}
}
This method would do something with a B if you pass it a B. However, it first needs to check if the A you send is actually also a B, and then it casts the A to a B. On the resulting B it can then call MethodB().
That's because of the restricted view you get of a derived class through its base class. If you instantiate the object polymorphically, like you do, your obj1 exposes only the things defined in the parent class. This works because we know the derived class MUST have those definitions too (fields, properties, methods, etc.).
So that's why you can't explicitly call obj1.methodB(), but you can call methodA() and get the derived class implementation.
Examples:
A objA = new A();
objA.methodA(); //calling method implementation of class A
objA.methodB(); //compile time error -> class A doesn't have that method
A objA = new B();
objA.methodA(); //calling method implementation of class B
objA.methodB(); //compile time error -> not visible through the restricted parent-class view
B objB = new B();
objB.methodA(); //calling method implementation of class B
objB.methodB(); //calling method implementation of class B
Imagine that you also have classes C, D, and so on; if you instantiate them with the restricted view, you can call the common method (defined in the parent class) on all of them (the children).
Example
List<A> list = new List<A>() { objB, objC, objD };
foreach(A obj in list){
obj.MethodA();
}
Jul 06 2021 01:45 PM
My workbook has 2 tabs, Stores and Final.
This is what I'd like to do:
If Parent_ID (from column B of Stores) is 9, insert the text Southern into Column B of Final
If Parent_ID (from Stores) is 21 insert the text Northern into Column B of Final
If Parent_ID (from Stores) is 1 insert the text HO into Column B of Final
This is the formula I created:
What am I missing?
Jul 07 2021 10:15 AM
After I posted my question, I realized it is a lot more complicated than I thought. Here are the details. I have attached the spreadsheet. However; I understand if this is too much work for you and you would rather not work on this any further. I do appreciate your original response but I asked the wrong question, sorry.
I am trying to populate the Region column in the myFinl_Data tab
I need 3 regions, Northern, Southern, Home
I have a Store_Id that rolls up to a Parent_ID in the Stores tab
The Store_IDs 2,3,4,7,8,11,15,16,17,18,19,22,29,31 need to be in the Northern region, which is Parent_ID 21
The Store_IDs 5,6,10,12,13,14,20,23,24,25,28,27,28,30,32 need to be in the Southern region, which is Parent_ID 9
The Store_ID 1 needs to be in the Home region, which is Parent_ID 0
The Store_IDs 9,21 also need to be in the Home region, which is Parent_ID 1
Parent_ID 21 = Northern: if city = Baltimore, Raleigh, Washington, or Wilmington, enter Northern
Parent_ID 9 = Southern: if city = Philadelphia, New York, or Jersey City, enter Southern
Note: Jersey City appears under Parent_IDs 9, 0, and 1. The regional offices (Parent_ID 1) are supposed to roll up into the Home region (Parent_ID 0)
My thought was to change the name of the Jersey City stores with Parent_IDs 0 or 1 to JerseyCity M, to avoid conflict with the other Jersey City store
0,1 = JerseyCity M, region Home
Jul 09 2021 01:47 AM
Perhaps that will be a bit easier if we convert the data into structured tables. Let's name them
Data and Regions
Convert formula to use structured references and apply some formatting
=IFERROR(
    INDEX( Regions[Region],
        MATCH(
            INDEX( Stores[Parent_ID],
                MATCH( [@[Store_ID]], Stores[Store_ID], 0 ) ),
            Regions[Parent ID], 0 ) ),
    "not defined" )
The inner MATCH returns the position of the current Store_ID in the table Stores.
Taking that position, INDEX returns the value of Parent_ID in the table Stores.
The next MATCH finds the position of the returned Parent_ID in the table Regions.
The outer INDEX returns the Region value in the Regions table for the record number returned by the previous MATCH.
Finally, we wrap it in IFERROR, which returns some text if nothing was found, i.e. if the inner formulae returned an error.
Please check in another two sheets in attached file.
Thank you both for your tips and hints.
My current assumption goes with "old graphics cards"; above I mistakenly called them "screen drivers". In fact, I used the NVidia Quadro 2200 with a modern driver meant for the Quadro 4000. I admit I only had the graphics card idea after posting here, hence my edit above.
I should have been more specific: the thing I've observed happened on two completely different systems (on the second system I had not even installed most of my FF add-ons yet, apart from Adblock Plus, Click&Clean, and perhaps New Tab Override, I don't remember), so there is a chance the culprit is among those.
Next time I install it on a new system, I'll leave out those add-ons for some days to check. It's only now that I see that with those add-ons, my FF was not a fresh installation; too bad.
It has nothing, or very little, to do with FF system load; on the other system it occurred almost instantly. It always (on both systems) comes with some sort of a shadow in the "menu" corner (top left corner) of FF; in other words, from that shadow I can already see my input will not be processed (on condition that I see that shadow first, of course, but it's always there beforehand, and it has nothing to do with the keyboard?).
This brings me to the fact that the mouse and keyboard/keyboard driver were the same too, so that is another possibility indeed; my saying "totally another system" was wrong, obviously, so there are lots of possible reasons, I realize now.
So I've disabled the "Filter Keys" function; I never knew what it was good for anyway and never used it in any way, but it was indeed on. CPUBalance seems interesting, but does not work with XP. I have installed Cyberfox and will try it, first without add-ons, then with the more important ones, then with the rest of them. But the question remains: if Cyberfox was FF but without the memory problems, why doesn't everybody use Cyberfox? Correction: Cyberfox 52 is incompatible with XP, 45 seems to be compatible, in-between: doubtful. I will report on my FF trial, as well as the CPUBalance trial, after having bought a new PC.
As for the "old graphics cards" and old computers: I have sent back the second-hand PC for probable motherboard problems, and should instead buy a modern, new i7 (6700 or 7700, probably the 6700 if I can get it for less now that the 7700 is new; I'm looking for sales).
It was an i5, and it was far from running at the speed I had expected, ditto for the 4 GB graphics card: ridiculous! So it seems that when buying something new today, an i5 is not good, let alone an i3.
Also, 8 GB of memory will suffice for most uses, it seems. With that 16 GB machine I often checked the memory load, and it was never higher than 35-40 percent, while at the same time, as said, speed was much too low for my wishes. In other words, you'll need 16 GB when you run several heavy programs concurrently, but if not, you don't need it, and above all, plenty of memory will not replace processor power. (This may be obvious, but it was good to have seen it with my own eyes.)
Old graphics cards are not good either: Office 2016 was preinstalled, but the graphics card, even with its modern driver - or because of the combination old card-new driver, but the old driver would not have been compatible with Windows 10 anyway -, did not correctly display the very last line in these Office applications: you typed without seeing what you typed.
So with your old PC, you know exactly what to expect from it, but with a new one, if it's an i3/i5, you risk being quite let down; I would not have expected an i5 to be that lame with quite simple (office) software, not to speak of video editing or such things.
But my problem with FF does not seem to be that widespread, so I'll have to check any "old" component with a new computer, incl. keyboard drivers etc.
This last element is probably even the culprit, since on forums people complain about incompatibilities between Cherry keyboard drivers and newer Windows versions, and as I said, on that other system the problem occurred five times as often as on my old one. And then, this never occurs in any program other than FF...
How to create 3D illustrations in Sketch
A short tutorial with download file and CSS excursion
Some days ago I wrote a Medium article on What I learned about UX from drinking tea and designed some visuals for it, stating at the very end:
“Visuals created in Sketch, heavily inspired by Peter Tarka. I wasn’t really aware you could achieve such 3D effects with Sketch but after a few experiments with shadow layers it worked out quite well. Ping me for the Sketch file, if you want to take a closer look.”
Many readers asked me about that file, telling me that they had probably underestimated Sketch. So I thought it would be a good idea to make a little tutorial out of this. I also liked the idea of digging even deeper by putting an example into code, and of course I am also making the Sketch file available to you here. So enjoy, and happy sketching!
To make this tutorial a little bit easier and clearer, we will remove a few elements from our example file. So let's focus on the browser interface in its plastic look and the two spheres floating on top:
A lot of people wrote that they would rather use something like Photoshop for such a design and that Sketch would not even occur to them here. Fair enough, considering that we've been designing flat UIs for years now. A way of thinking that is slowly fading away with the newly discovered neumorphism trends.
Two simple tricks
To recreate this haptic look and feeling, all you need are two simple tricks that should work regardless of your Sketch version.
Multiple shadow layers
The first trick is to apply more than one shadow, especially inner shadows. Something that seems a little bit strange to Photoshop users, although of course the feature is there too ("Bevel and Emboss").
Taking a closer look at the control buttons of the browser in the example, you can see that each consists of five different layers: one fill, one drop shadow, and three inner shadows. Most remarkable here are the two top layers: the incident light is half covered by another fill layer, thus creating the rounding effect.
Blurred light elements
For the second trick, let's take a closer look at the spheres. In order to achieve the light reflections, it is not enough to rely on the shadow effects of the single elements. We have to do a little handwork here and place some blurred fill layers as separate elements where we want them to be, masking them on the ground layer. By the way, it doesn't matter if you use the separate blur option of the effect palette or the blur amount in the shadow/inner shadow section; it will give the same results.
In our example you can see three separate light layers, unmasked and outlined on the right side: two white layers for the incident light and another reddish light reflecting from the smaller sphere onto the large one.
How to do this in CSS
What you can sketch, you can code, right?
Basically it is very easy to put all this into code. The only thing that was actually new to me (since you rarely do it) is that in CSS, just like in Sketch, you can apply multiple shadows on a single element. This simplifies things a lot without nesting several divs or using pseudo elements.
For example, the five layers of the red control button from above are translated as follows:
box-shadow:
  inset -10px 10px 20px 0 #77BCF7,
  inset 20px -20px 20px 0 rgba(255,255,255,0.5),
  inset 20px -20px 20px 0 rgba(0,0,0,0.2),
  10px 10px 20px 0 rgba(0,0,0,0.4);
Unlike in Sketch, you can't use unfilled shadow layers for the second trick with the blurred elements. So it's best to simply use filter: blur() here. Note that you need to apply overflow: hidden to the parent element to mask the light reflections like in Sketch.
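As a minimal sketch of the blurred-light trick in CSS (the class names and values here are illustrative assumptions, not taken from the original file):

```css
.sphere {
  position: relative;
  overflow: hidden;    /* masks the light layers, like in Sketch */
  border-radius: 50%;
}

.sphere .light {
  position: absolute;
  border-radius: 50%;
  background: rgba(255, 255, 255, 0.6);
  filter: blur(20px);  /* the blurred fill layer from the second trick */
}
```

Each `.light` element plays the role of one of the separate blurred fill layers from the Sketch file, clipped by the parent sphere.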
For more complicated shapes, it is certainly a good idea to switch directly to SVGs. The rounded edges can also be created directly in the SVG, as inner shadows can’t really be applied here. Also use filter: drop-shadow() instead of box-shadow to handle SVG shapes.
If you want to dive deeper go ahead and download the Sketch file of the original article to which I have added the example artboard from above:
Machine Translation (MT) is a key process in natural language processing. This tool helps to translate one language into another with high accuracy. This post will focus on high-level arguments around machine translation only; you can find more details on Machine Learning Basics here.
What is Machine Translation?
Machine translation (MT) is an automated translation process in which a computer application translates a natural language text into another, such as translating English into Spanish. In the translation process, the meaning of the source text must be preserved in the destination, i.e. target, language. It sounds simple, but beneath the surface it is far more complex.
A translator interprets and analyses all the keywords or symbols in the text. It also understands how each word affects another. For creating such a complex system, it requires expertise in grammar, sentence structure, coding, AI techniques, semantics, etc. The biggest role is played by locals who are familiar with the geographical regions.
Into the Limelight
It's difficult to ignore GT (Google Translate) when the talk is about language translation. GT has been around for decades. Sadly, despite ongoing development and technology enhancements, GT still faces many challenges.
One of the critical issues was: what happens if you move to a remote area, or to an unfamiliar country with no internet connection, and you forgot to download the native language pack beforehand? As you know, image-to-text translation models are inaccurate and slow. What will you do?
No worries! Advances in AI services have brought various products developed to give users correct solutions. A content intelligence solution provider named ABBYY has improved its TextGrabber application for iOS devices with significant updates, and it has emerged as a powerful alternative to Google Translate.
Some More Insights
In May 2018, the social media giant Facebook added 24 new languages to its platform to improve customer interaction. Facebook has leveraged artificial intelligence to enhance its neural machine translation models; among the new translation pairs are Serbian and Belarusian to English, in Europe and other countries. In a report, Facebook revealed that more than six billion translations are performed on its platform every day.
In the same month, the company open-sourced PyTorch 1.0 and some of its AI tools, including its neural machine translation tooling, for developers and AI experts.
The application stands out for its real-time translation function. It uses a smartphone camera to capture and translate text immediately, and the best part is that it works both online and offline. It can translate text of any color on any kind of background, and you don't need to download any language package to translate image text in offline mode.
How does it work?
A machine translation model renders text from one natural language into another. Various MT models have been developed to effectively drive translation-based applications. To carry out this task, experts rely on several powerful approaches for building these models: Rule-Based, Statistical, Neural, Hybrid, or Example-Based MT.
The field is very vast and it is not possible to cover every model in a single article, so I'm only going to cover the Rule-Based and Statistical MT approaches. To cover the basics through advanced areas of artificial intelligence and its associated technologies, you can join the Artificial Intelligence Course, which covers critical topics like ML, deep learning with TensorFlow, etc.
The Rule-Based Machine Translation (RBMT) model follows the grammatical and syntactical rules of a language. To get a correct translation of a phrase, the application requires a linguistic dictionary for both languages, including a proper set of sentence-formation rules for each. RBMT is popular among professionals because it can give better-quality translations for language pairs with differing word orders.
Statistical Machine Translation (SMT) is built on the concept of probabilities. For each chunk of the source phrase, there are various possible target chunks, each with a probability of being the correct translation. The application chooses the chunk with the highest statistical probability. Since SMT is less resource-intensive than RBMT and can be applied to multiple languages, it is getting more and more attention from the developer community and professionals.
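As a toy sketch of the SMT idea described above (the phrase table and its probabilities are invented purely for illustration, not taken from any real system): for each source chunk, we simply pick the candidate target chunk with the highest probability.

```python
# Invented phrase table: source chunk -> candidate translations, each
# paired with a made-up probability of being the correct translation.
phrase_table = {
    "the house": [("la casa", 0.8), ("el casa", 0.2)],
    "is red":    [("es roja", 0.7), ("es rojo", 0.3)],
}

def translate(chunks):
    """For each chunk, choose the target with the highest probability."""
    return " ".join(
        max(phrase_table[chunk], key=lambda cand: cand[1])[0]
        for chunk in chunks
    )

print(translate(["the house", "is red"]))  # -> la casa es roja
```

A real SMT system would also weigh a target-language model and chunk reordering; this sketch keeps only the translation-probability part described in the paragraph above.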
Final Words to Take Home
With such incredible features and services, the MT technology market is expected to reach 983.3 million USD by the end of 2022. The growing market of cloud computing has led this technology to be offered as Machine Translation Software as a Service (MTSaaS) on various platforms. Thus, anyone can predict how this technology has emerged as an advanced tool hastening the future of human-made translation applications.
Points to Note:
All credits, if any, remain with the original contributor. The guest author has covered the basics around Machine Translation only. Machine Learning is all about data, computing power, and algorithms to look for information. How machines can do more than just translation is covered in Generative Adversarial Networks, a family of artificial neural networks.
Feedback & Further Question
Do you have any questions about Supervised Learning or Machine Learning? Leave a comment or ask your question via email. I will try my best to answer it.
======================== This is a Guest Post =================================
Danish Wadhwa governs digital content to build good relationships for enterprises and individuals. Danish is an SME in digital marketing, cloud computing, and web design, and offers other valuable IT services for organizations. His efforts enhance companies' standing by delivering solutions to their business problems.
“Thank you all for spending your time reading this post. Please share your feedback / comments / critiques / agreements or disagreements. For more details about posts, subjects, and relevance, please read the disclaimer.”
==========================================================================
[DOCS]
Description
Hey ,
I found some headings on the home page that could be improved. The changes would make them more grammatically correct and make them sound better.
Kindly review the issue and, if you find it okay, please assign it to me as part of GSSoC 2023
Screenshots
The 'What Grabtern Do for their Students' heading can simply be changed to 'What Grabtern does for its students'.
The 'Why to be Mentor at Grabtern' can be replaced by 'Why Mentor At Grabtern'
Additional information
No response
Are you up for redesigning or improving the UI of Home page...
Ideation - https://mentro.tech/ ( Animations)
https://unstop.com/mentor for more. It would make it as level 2
@anmode, sure, I'm up for it. Kindly give me more details, like whether or not the animations will be provided from your end, and also whether you want a complete redesign of the current UI or just some minor enhancements based on the websites you pinned as ideation.
Also, if you want, I could finish this issue and create a separate issue for the tasks you mentioned and work on that. Your call.
If you are up for complete changes in design, that is ok. We can start with a Figma design. Could you give me some time? I'll confirm with you tomorrow. Thank you
@anmode
Yeah, sounds good. Figma works for me. No worries, take your time.
@anmode
Till then, if it's okay with you, you could assign me the above issue and I'll work on it in the meantime.
Sure!
@anmode I have linked a PR that will close this issue. Can you please review it? If you find it okay, please add labels and merge it.
@SargamPuram We got guidelines that one-line or two-line code changes will not get any levels. And as discussed, 'Why to be Mentor at Grabtern' is ok as it is... yeah, but your first one is ok!
Would you like to work on the nav bar? I just created an issue; it would be a nice contribution.
@anmode that's completely understandable. I too wish to adhere to the community guidelines. Just let me know what I should do next: should I close this issue, or something else?
Also, I would love to work on the responsiveness of the nav bar, but I just checked and it has already been assigned to someone.
I've been thinking of adding a Student Testimonial section or page, or overall improving the UI of the page. Just let me know your thoughts about it.
@anmode also, if nothing else is possible, I am ok with the current PR getting merged without levels; just the gssoc and enhancements labels are fine too. I can change it back to 'Why to be mentor at grabtern?' if that's what you are looking for.
I'll work on making some more significant contributions in the future so that they'll pass the level check.
It is ok! Please revert the second statement you changed and I'll merge it. And if you want to work further and are familiar with GitHub automation, you can create bots.
Or you can create a login card. I hope no one is working on that. The login card is like Gmail: when you click on your profile, a card comes up. So create a card component that can be used anywhere.
@anmode
Just reverted it back!
Can you tell me the use case of the bot you are looking for? Do you want the usual one which displays 'Thanks for raising the issue' on the issue thread, and comments on PRs, merges, etc.?
Also, about the login card: I think I get what you are saying, but can you elaborate on where else the card component could be used? And do the contents of the card have to be just login and my profile?
Also, thank you for being so supportive, and for suggesting various issues where I can contribute.
Is this what you are talking about? Similar to Udemy.
Tomorrow I'll attach the UI showing how the card looks.
And for the bot, yes.
A fully automated bot for greeting first-time contributors.
There are numerous cases, and it should greet them differently:
First-time contributor
Not new
Collaborator creating an issue
Author/owner creating an issue
The PR bot should behave differently too.
You can start by simply greeting them; then we will move to the different cases. Thank you
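For reference, a minimal greet bot along these lines could be a GitHub Actions workflow (a sketch, assuming the repo uses Actions; `actions/first-interaction` is one off-the-shelf action for the first-time-contributor case, and the greeting messages here are placeholders):

```yaml
name: Greet first-time contributors
on:
  issues:
    types: [opened]
  pull_request:
    types: [opened]

jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          issue-message: "Thanks for raising your first issue! 🎉"
          pr-message: "Thanks for opening your first PR! 🎉"
```

The other cases (collaborator vs. author, returning contributor) would need a custom script or a `github-script` step that checks the event's author association.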
Okay, then looking forward to the UI design.
Till then I'll try to create a simple greet bot as you've mentioned, since I've had only limited experience with GitHub automation. But I will still try to create a useful bot.
@SargamPuram Thanks for your patience and appreciation ✌🏻❤️. Will close this PR now, and if you have any ideas or suggestions, just ping us on Discord.
|
GITHUB_ARCHIVE
|
How to upload multiple files in Perl?
I need to upload multiple files using Perl CGI.
I used the form type
enctype="multipart/form-data"
and also set
multiple='multiple' on the file input.
I just need to know what we should write on the server side.
Can anybody tell me how to upload multiple files using Perl?
Are you asking how to include files in an HTTP request, or how to receive them in a CGI script?
The unedited post says 'need to know what should we do write at server side'
possible duplicate of http://stackoverflow.com/questions/3448117/can-perls-cgi-pm-process-firefoxs-input-type-file-multiple-form-fields
The following piece of code is enough; it uploads the files present in the params to the /storage location:
use strict;
use warnings;
use CGI;

my $cgi = CGI->new;
my @files      = $cgi->param('multi_files[]');   # client-side file names
my @io_handles = $cgi->upload('multi_files[]');  # matching upload filehandles

my $file_temp_path = "/storage";
foreach my $upload (@files) {
    print "Filename: $upload<br>";
    # NB: sanitize $upload before using it in a path on a real server
    my $upload_file = shift @io_handles;
    open my $out, '>', "$file_temp_path/$upload"
        or do { print "File Open Error"; next };
    binmode $out;
    print {$out} $_ while <$upload_file>;
    close $out;
}
print "Files Upload done";
Thank you, this saved me a lot of problems.
Something like this should handle multiple file uploads:
my @fhs = $Cgi->upload('files');
foreach my $fh (@fhs) {
if (defined $fh) {
my $type = $Cgi->uploadInfo($fh)->{'Content-Type'};
next unless ($type eq "image/jpeg");
my $io_handle = $fh->handle;
open (OUTFILE, '>>', '/var/directory/'.$fh) or die "Cannot open: $!";
binmode OUTFILE;
while (my $bytesread = $io_handle->read(my $buffer, 1024)) {
print OUTFILE $buffer;
}
close (OUTFILE);
}
}
Of course, 'files' is the name of the file upload field in the form.
On the server side, you first retrieve the file handle like this:
use CGI;
my $q = CGI->new();
my $myfh = $q->upload('field_name');
Now you have a filehandle to the temporary storage where the file was uploaded.
The uploaded file name can be obtained using the param() method.
$filename = $q->param('field_name');
and the temporary file can be directly accessed via:
$filename = $query->param('uploaded_file');
$tmpfilename = $query->tmpFileName($filename);
I highly recommend giving the CGI.pm docs a good solid read, a couple of times. While not trivial, it's all rather straightforward.
|
STACK_EXCHANGE
|
I want to discuss the work that goes into engineering a data pipeline. At work, I frequently do ELT - extract data from a source, load it into BigQuery, and transform it with SQL queries that I compose with dbt.
Over the last three years, I have worked with dozens of different data providers. I have noticed that the work required to extract and load data depends crucially on choices the data provider makes. The source file's format alone makes the difference between an afternoon task and a project spanning several months. I believe the following points are important to bear in mind when providing data to third parties.
Keep The Schema Close To The Data
Keep the schema close to the data. A schema contains rich, condensed information about the data. Anyone working with schemaless data must at some point invest compute and time to recover that information. The more schemaless data there is, the more time and compute needs to be invested.
Assume that when loading data into a data warehouse, most engineers do not care about the contents of the data. Any company larger than a startup employs both analysts and engineers. Analysts will work with the data once it is loaded. And engineers do the plumbing; they load the data. Engineers do not want to figure out column names, data types, column descriptions, or table descriptions. They want this information provided in a machine-readable format.
Don't make the engineer call support and ask for schema information. Don't make them manually assemble a schema. Provide it. The schema's format does not matter in that regard. Let it be JSON, YAML, a txt-file. It does not matter, as long as the format is consistent and machine-readable. Make the schema impossible to miss.
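As a sketch of what "machine-readable and close to the data" can mean in practice, here is a tiny schema file and the mechanical use an engineer makes of it (field names and types are invented for illustration):

```python
import json

# A minimal machine-readable schema shipped alongside the data files.
# The fields below are made up for this example.
SCHEMA_JSON = """
[
  {"name": "order_id",   "type": "STRING",    "description": "Unique order identifier"},
  {"name": "amount_eur", "type": "NUMERIC",   "description": "Order value in euros"},
  {"name": "created_at", "type": "TIMESTAMP", "description": "Order creation time (UTC)"}
]
"""

schema = json.loads(SCHEMA_JSON)

# The loading engineer can now derive everything mechanically,
# e.g. a column list for a CREATE TABLE or a warehouse load job.
columns = [f["name"] for f in schema]
print(columns)  # ['order_id', 'amount_eur', 'created_at']
```

Nothing here is clever; that is the point. The engineer never has to guess a column name or call support.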
Use A Reasonable File Format
When I made measurements in the lab during my studies, we used computers from the early 2000s to record measurements. Those machines were programmed by students working at the lab, and controlled Arduinos that performed the measurements. Results were stored as plain .txt files, containing column names, and the data. The files contained some KBs worth of data.
Today, in 2023, my code runs in the cloud, where I transfer TBs of data. A .txt file is not a reasonable way to transfer large amounts of data. The Avro file format is reasonable, as is the Parquet file format. I find those great because they keep the schema close to the data and compress it.
Compress The Data
Bandwidth and storage cost money. It is reasonable to compress data for that reason. The easy way to compress data leverages a reasonable file format. Sometimes one needs to transfer plain txt files. Most RDBMSs support gzip as the standard compression method for plain txt files. While not exactly cutting-edge technology, gzip offers a reasonable trade-off between file size and speed of compression/decompression.
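Since gzip ships with virtually every language's standard library, the round trip is nearly a one-liner; a minimal sketch in Python (the CSV payload is invented):

```python
import gzip

# A made-up, highly repetitive CSV extract; real extracts are much larger.
payload = b"order_id,amount_eur\n" + b"42,19.99\n" * 10_000

# Level 6 is gzip's default trade-off between size and speed.
compressed = gzip.compress(payload, compresslevel=6)

print(f"{len(payload)} -> {len(compressed)} bytes")

# Compression is lossless: decompressing recovers the exact bytes.
assert gzip.decompress(compressed) == payload
```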
Transfer Large Amounts Of Data In Batches
I understand that it is intriguing for data suppliers to leverage REST APIs to provide data to data customers. The API is already there for some other application. And all one has to do is strap authentication and monetization on top.
This approach is fine for transferring partial data and small datasets. But it is deficient when transferring large datasets. The overhead required to transfer an entire dataset via small HTTP requests can be avoided by transferring data in batches.
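The batching idea can be sketched with a tiny helper: ship fixed-size chunks instead of one request per record (`batched` is an illustrative name here, not an API from any particular library):

```python
from itertools import islice

def batched(rows, batch_size):
    """Yield lists of at most batch_size rows each (illustrative helper)."""
    it = iter(rows)
    while chunk := list(islice(it, batch_size)):
        yield chunk

# One file/request per batch instead of one per row.
print([len(b) for b in batched(range(10), 4)])  # [4, 4, 2]
```

The per-request overhead (connection setup, headers, auth) is then paid once per batch rather than once per record.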
|
OPCFW_CODE
|
phpinfo(); not working
tip at fixer.com
Fri Oct 26 20:37:26 PDT 2007
I have FreeBSD-6.2 and I'm having trouble setting up Apache, MySQL and
PHP for my business. I can't get phpinfo(); to display. If I run php
-i, I get several pages of information, which means PHP is working. Out
of the three different types of PHP, php-html isn't operating. My browser
is where I do 100 percent of my work. Both PHP CLI and PHP CGI seem to
work. If it can't display phpinfo();, it won't run my custom-designed
PHP software. I did successfully install it a few years ago with
FreeBSD-4.8 and older versions of PHP. The install disk for FreeBSD-4.8
became defective and I had to re-order. They only had FreeBSD-6.2. This
is where the trouble starts.
The 'install file' from www.php.net says:
In case of setting up the server and PHP on your own, you have two
choices for the method of connecting PHP to the server. For many
servers PHP has a direct module interface (also called SAPI). These
servers include Apache, Microsoft Internet Information Server, Netscape
and iPlanet servers. Many other servers have support for ISAPI, the
Microsoft module interface (OmniHTTPd for example).
Reading the above, I think my 'sapi module' wasn't installed.
Using the 'find' program, I searched for sapi, SAPI, isapi, nsapi, ISAPI
and NSAPI. I received a response for isapi,
sapi and nsapi. 'sapi' was located at '/port/php-4.4/sapi'. Both isapi
and nsapi are located under sapi. But 'sapi' was just a directory; I
was expecting to find some sort of program.
For Apache, MySQL and PHP, I have all the files. When installing PHP, it
automatically updates httpd.conf; I always double-check to make sure.
/usr/local/libexec/apache has about 35 modules, including 'libphp4.so'.
The main php binary is located at /usr/local/bin/php. Testing with
telnet confirms PHP is working.
I did several re-installs of FreeBSD and different combinations of
Apache, MySQL and PHP. If I use FreeBSD's apache-1.3 and mysql-5.1 and
php4 from www.php.net it will work, but only once. I tried to improve
it, but couldn't. How many times can I reboot to make one order? This
is the version I'm working on.
- does the 'sapi module' have a different name, so I can find it with
the 'find program'?
- if the 'sapi module' is missing, how do I replace it?
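For what it's worth, the Apache 1.3 module hookup usually comes down to a few httpd.conf lines like these (a sketch based on the FreeBSD port layout mentioned above; worth checking that all of them survived the install):

```apache
LoadModule php4_module libexec/apache/libphp4.so
AddModule  mod_php4.c
AddType    application/x-httpd-php .php
DirectoryIndex index.php index.html
```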
uname -a: FreeBSD localhost 6.2-RELEASE FreeBSD 6.2-RELEASE #0: Fri
Oct 5 15:08:51 UTC root@:/usr/src/sys/compile/HAP i386
Thanks in advance,
More information about the freebsd-questions
|
OPCFW_CODE
|
BenHayat1p said 5 years, 6 months ago:
I think one way to learn SL, is looking at these demos and figure out what the [experts] at C1 have done.
Looking at Control Explorer demo, I see two classes as part of the project: ControlNode and DemoTemplate.
Were these classes made particularly for these demos or are classes something we need to add to our apps?
Second question, is about "generic.xaml". Again is this used as part of using your Treeview component or is this for the demo only?
C1_MaxM7p said 5 years, 6 months ago:
ControlNode and DemoTemplate were made particularly for the explorer app, but that doesn't mean you can't learn from them. ControlNode is an example of a specialized control. It shows how you can change the behavior and look of a Silverlight control. ControlNode inherits from C1TreeViewItem, which allows it to attach some data to the tree and also specialize the behavior in a small way: it expands and collapses when you click on the tree header.
The style of ControlNode is defined in generic.xaml. If you look into generic.xaml you will see a Style with ControlNode as its TargetType, but without a key. This style is applied to all ControlNodes by the Silverlight runtime. So generic.xaml is the place where you put default styles for your controls. In the current Silverlight release it's not possible to set default styles for controls from other assemblies. Therefore, if you want to style our controls or Microsoft's, you must inherit from them or set the Style property manually.
There is a generic.xaml stored as a resource in the C1 Silverlight assembly, which contains the default styles for all of our controls. We will probably release this in the future. It's much easier to style a control starting from a complete template.
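For illustration, a keyless default style in generic.xaml looks roughly like this (a sketch; the namespace and setter are invented, not the actual C1 template):

```xml
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:ControlExplorer">
    <!-- No x:Key: the runtime applies this as the default style
         for every ControlNode in the assembly. -->
    <Style TargetType="local:ControlNode">
        <Setter Property="FontWeight" Value="Bold"/>
    </Style>
</ResourceDictionary>
```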
C1_BernardoC17p said 5 years, 6 months ago:
The ControlNode and DemoTemplate classes were created specifically for this demo.
ControlNode is a class that derives from TreeViewItem and adds properties and methods that identify the specific control that this node represents. This is a convenient way to use the TreeView control. The alternative would be to use the Tag property to attach custom information to plain TreeViewItem objects.
DemoTemplate is a generic control holder class. It uses reflection to instantiate controls and to expose some of their properties for demo purposes. This class makes it easy for us to add controls to the demo or to change the way they are demonstrated. If you inspect the demo project, you will notice an XML file that contains the list of controls that should be demonstrated and how they should be demonstrated.
The 'generic.xaml' file contains Styles that are applied to controls. In this sample, it defines the Style for the custom TreeViewItem class. The nodes in the sample show plus/minus images instead of the default collapse/expand icons. This is just one way to customize the look of your applications.
You must be logged in to reply to this topic.
|
OPCFW_CODE
|
Summing attribute values of overlapping polyline portions in QGIS 3.0.3
Note:
This is basically the same question that I asked before in Summing attribute values of overlapping polyline portions in ArcGIS Desktop?, but this time, I'm trying to solve the same problem using QGIS. The reason is, I need to be able to teach/explain the solution to users in a developing country who cannot afford ArcGIS. I'm not an expert in QGIS either. So here is the question:
I have polylines of 80+ bus routes operating in a city. The polylines have attribute values for the number of passengers, number of trips, etc. Many of the polylines overlap at certain areas of the road network.
For example in the image above, the Green Route has 1,000 passengers and travels 20 trips a day while the Orange Route has 500 passengers and 10 trips a day. I'd like to have a final shapefile with data on 1,500 passengers and 30 trips (overlapping areas), but also data on the 1,000 passengers with 20 trips and 500 passengers with 10 trips (non-overlapping areas). This is just a simple example with two routes but I have 80+ in all that overlap in many locations with each other.
What I have done so far:
Step 1: Run "Merge vector layers" to combine all the individual polylines into one shapefile.
What I want to do next:
Step 2: I want to cut/split the overlapping features exactly where the overlaps occur.
Step 3: I want to fuse/merge/join (whatever the correct term is in QGIS) all overlapping line segments, AND add up their numbers of passengers and trips into the resulting feature. 1,500 passengers and 30 trips in the above example.
Step 4: Lastly, I want to, somehow be able to find out which routes whose values have been added together. In the above example, a column saying "Green, Orange" would be nice, but doesn't matter if I get "GreenOrange".
What I have tried so far for Steps 2, 3 and 4:
Multipart to singleparts
Tried to split all overlapping segments, but the splits did not occur exactly where the overlaps start and end.
Line-polygon intersection
Created a polygon that covers entire area to try to get all overlapping segments split. Splits did not occur exactly where the overlaps start and end.
Aggregate
Seems to be able to sum up the numbers of passengers and trips, and also able to concatenate the Route Name, but the result is only one feature. In the example, I need to have 5 features as the result.
Join attributes by location
Referring to Merging overlapping lines into one line. QGIS. Requires an Input Layer and a Join Layer. I only have one layer since Step 1, so this can't be it, unless I'm doing it wrong from Step 1.
Dissolve
This just returns the separated lines into one, back to Step 1. I was hoping that it dissolve the overlapping segments.
Collect geometries
Same as above.
EDIT:
Sample Data:
Shp file, Shx file, Dbf file
Result of Step 2: v.clean
Shp file, Shx file, Dbf file
Result of Step 3: Aggregate
Shp file, Shx file, Dbf file
Final Result, Step 4: Delete Duplicate Geometries
Shp file, Shx file, Dbf file
In the above image I selected the overlapping line segments. This is how the map looks:
Two line segments North of the loop are supposed to overlap, but they are now missing. Only a tiny chunk is left.
Any chance of a simple sample dataset? It would make it much easier for everyone who wants to help you since it saves us all having to create something from scratch.
@Spacedman Please see attached sample data. I noticed that Step 2 didn't actually work so I had to use ArcGIS to prepare this sample data.
This is just a follow-up on your Aggregate strategy for the Steps 3/4.
(1) Aggregate
Create three new fields by the Field calculator.
total_trip by sum("No_of_Trip", group_by:= geom_to_wkt($geometry))
total_pax by sum("No_of_Pax", group_by:= geom_to_wkt($geometry))
routes by concatenate(to_string("Route_No"), group_by:= geom_to_wkt($geometry), concatenator:=', ')
You will obtain table below:
(2) Delete duplicate geometries
Rows 5 and 6 are overlapping. Delete one of them by running the Delete duplicate geometries tool (in QGIS Processing Toolbox | QGIS geoalgorithms | Vector general tools).
It will return a new layer Cleaned (below), without row 6 of previous data (above).
[Edit] as per request for the Step-2:
I would suggest GRASS v.clean with break option as the cleaning tool. It can be found in QGIS processing toolbox | GRASS GIS7 | Vector.
It is similar to QGIS Explode lines tool, but unlike Explode lines which breaks all segments of the polyline, v.clean cuts the line only at intersecting nodes.
It does not specifically search for duplicates, so it may not be perfect.
Thank you so much! Can you please also advise me on Step 2? "Multipart to singleparts" or "Line-polygon intersection" did not split the features neatly where the overlaps start and end. I had to use ArcGIS for this, but would like to know how to do it in QGIS.
@DXV I recommend v.clean to work on your Step 2. It may not be ideal, but it usually gives me satisfactory output. Let me know if it does not give you what you need.
Thanks again Kazuhito! I ran v.clean + break, then proceeded with the rest of the steps. I noticed that the lines are split into multiple shorter segments, so I now have 12 features instead of 5. I do not mind this as they still contain the desired values. However, a big portion of the overlapping segment (north of the loop) has also been deleted in the final result, so this will be an issue when I work with the complete data set. Any idea why this happened? I appreciate the help.
@DXV Unfortunately I could not reproduce the situation you've seen (missing loop segment) with my quick test on your data. Could you post a screenshot or upload the problematic files?
This is what I did this morning. First I lost the problematic files (temporary files), so I repeated the process, and I was surprised to get the desired results! I thought the problem was solved, but I repeated it one more time and this time got the same problem as yesterday. I made sure to follow exactly the same procedures, but got different results. I added the files for Steps 2-4 with screenshots. Please take a look.
@DXV Thanks for the update. I see you have done pretty well. But I have to wonder why you had to start with 6 features, while your original data would have been just two (Route_No: 1, 3). Just use your original data; then you would not see this error, because the original features are contiguous and v.clean will try to keep the geometry as much as possible.
Sorry Kazuhito, that was the old result of Step 2 from yesterday before I learned about v.clean. I added the correct sample data with 2 features. The result of Step 2 (v.clean) using this data is 19 features as shown.
@DXV You are right, sorry. I had not noticed it when I processed the previous sample, but v.clean - break has broken Sample_step1 into 19 features, as you have shown. As v.clean - snap produced 6 features, your shapefile may have tiny gaps between these segments. Geometry checker has not detected any errors in your file, though. The only thing I noticed about the segment (the one you lost) was that its line direction was reversed compared with the other duplicated segment... but I do not know whether that had anything to do with your problem.
Thanks for the input Kazuhito! So I created another Route 1 to replace the old one. I made sure that this time the overlapping portion with Route 3 is in the exact same location (by tracing in ArcGIS). I thought that this might solve any geometry alignment issues and therefore give me the desired final result. Unfortunately there are still missing segments, but now at different locations. I think I'll just stick with ArcGIS for now, and maybe return to this discussion later. Thank you again for your time. I appreciate it!
|
STACK_EXCHANGE
|
|Phone #:||+ 32 2 650 44 71|
|Fax #:||+ 32 2 650 44 75|
|Main Field:||Development Economics|
|Second Field:||Political Economics and Collective Decisions|
Gani Aldashev is Professor of Economics at the Université libre de Bruxelles, a member of the European Center for Advanced Research in Economics and Statistics (ECARES) and research affiliate of the Centre for Research in the Economics of Development (CRED) at the University of Namur.
He received a B.A. in International Economics from the American University of Paris, and an M.A. and a Ph.D. in Economics from Bocconi University (Milan). His main research interests are in development economics, political economics, and economic history.
“When NGOs Go Global: Competition on International Markets for Development Donations” (with T. Verdier), Journal of International Economics, 2009, 79(2): 198-210.
“Goodwill Bazaar: NGO Competition and Giving to Development” (with T. Verdier), Journal of Development Economics, 2010, 91(1): 48-63.
“Political Information Acquisition for Social Exchange”, Quarterly Journal of Political Science, 2010, 5(1): 1-25 (lead article).
“Legal Reform in the Presence of a Living Custom: An Economic Approach” (with J-Ph Platteau and Z. Wahhaj), Proceedings of the National Academy of Sciences of the United States, 2011, 108(S4): 21320-21325.
“Follies Subdued: Informational Efficiency under Adaptive Expectations and Confirmatory Bias” (with T. Carletti and S. Righi), Journal of Economic Behavior & Organization, 2011, 80(1): 110-121.
“Using the Law to Change the Custom” (with J-Ph Platteau, I. Chaara, and Z. Wahhaj), Journal of Development Economics, 2012, 97(2): 182–200.
“Modern Law as a Magnet to Reform Unfair Customs” (with J-Ph Platteau, I. Chaara, and Z. Wahhaj), Economic Development and Cultural Change, 2012, 60(4): 795-828.
“Deadly anchor: Gender bias under Russian colonization of Kazakhstan” (with C. Guirkinger), Explorations in Economic History, 2012, 49(4): 399-422.
“Religion, Culture, and Development” (with J.-Ph. Platteau), in Handbook of the Economics of Art and Culture, vol.2, eds. V. Ginsburgh and D. Throsby, Elsevier, 2014, pp. 587-631.
“Brothers in Alms? Coordination between Nonprofits on Markets for Charitable Giving” (with M. Marini and T. Verdier), Journal of Public Economics, 2014, 117(1): 182-200.
“Watchdogs of the Invisible Hand: NGO Monitoring and Industry Equilibrium” (with T. Verdier and M. Limardi), Journal of Development Economics, 2015, 116(1): 28–42.
“Voter Turnout and Political Rents”, Journal of Public Economic Theory (Special Issue on Governance and Political Economy), 2015, 17(4): 528-552.
“Clans and Ploughs: Traditional institutions and the production decisions of Kazakhs under Russian colonization” (with C. Guirkinger), Journal of Economic History, 2016, 76(1): 76-108.
“Colonization and changing social structure: Evidence from Kazakhstan” (with C. Guirkinger), Journal of Development Economics, special issue on “Economic History and Development”, 2017, 127(1): 413-430.
“Endogenous Enforcement Institutions” (with G. Zanarone), Journal of Development Economics, 2017, 128(1): 49–64.
“Assignment Procedure Biases in Randomized Policy Experiments” (with G. Kirchsteiger and A. Sebald), Economic Journal, 2017, 127(1): 873–895.
“Invalid Ballots and Electoral Competition” (with G. Mastrobuoni), Political Science: Research and Methods, forthcoming.
“Small is Beautiful: Motivational Misallocation in the Nonprofit Sector” (with E. Jaimovich and T. Verdier), Journal of European Economic Association, forthcoming.
Topics in International Trade and Sustainable Development
ECARES, Université libre de Bruxelles
CRED, University of Namur
|
OPCFW_CODE
|
Unreadable font on 4K screen
A 4K screen set to native resolution (3840x2160) with 100% font scaling has nearly unreadable fonts.
On my 4K monitor if I set font scaling to 100% then the font and everything else becomes small, as expected. However the font becomes nearly unreadable. It looks like the resolution being sent doesn't match the resolution of the screen. If I screenshot the screen content and zoom in I see perfect fonts.
I tried setting Nvidia display scaling to "No scaling", but no change. I've connected the screen via HDMI and running 3840x2160 at 30hz and 60Hz as forced setting ("Customize..." button in Nvidia Control Panel). 30Hz because apparently DisplayPort is required for 60Hz, but sometimes 60Hz seems to work too.
I have an Asus PB287Q 4K monitor, but I had the same problem on a Phillips Brilliance 288p 4K monitor. Nvidia GTX 560 graphics card. I also have a second monitor at 1920x1080 connected, but it's the same result without it connected.
Anyone know how to fix this?
Screenshot sample 1:
Picture of same area (ignore distortions from resizing image of pixel grid):
That screenshot (not the photo) doesn't look like High DPI at all. Are you sure the software side is properly set up?
Ah, nevermind my earlier comment, just noticed you set it to 100% scaling intentionally. Your graphics card may not be able to keep up. You could try with a cheap modern graphics card to check if it can drive the display properly.
The 560 specs say it isn't even going to do 4K for one monitor. You could set the res to 1/4 of the monitor's resolution and get better interpolation, and that likely means only one monitor too.
Upgraded graphics card + connected via DisplayPort and everything works fine.
Upgraded graphics card to one that supports 4K + used DisplayPort to connect. Works fine now.
Don't use a DVI-to-HDMI adapter at resolutions above 1920x1200.
The DVI port on your video card will attempt to use dual-link signalling, which is incompatible with HDMI.¹ Dual-link DVI carries pairs of pixels on different pins, but is designed to degrade "gracefully" in case of e.g. a wrong cable, resulting in half of those pixels being lost. (In contrast, HDMI achieves these resolutions with a faster pixel clock.)
Your best option is to use DisplayPort 1.2+ or HDMI 2.0+, whichever your setup supports (if any). Many older video cards just don't support 4K past 30Hz.
¹ Source: Personal experience plugging a Dell P2415Q into a GeForce 8600GT.
|
STACK_EXCHANGE
|
additional installer option to wipe all disks before install
Problem:
During our internal automation testing it is possible that different disks were used for the Harvester install.
When the disk is changed, the older installation is left intact, which can cause unexpected results during reboots.
To ensure this does not happen, we need to be able to allow wiping of disks via the installer.
Solution:
This PR adds an additional option in the Harvester config, wipeDisks, which can be passed via a config URL or the kernel argument harvester.install.wipe_disks=true, and results in the harv-install script wiping all disks on the node.
Related Issue:
https://github.com/harvester/harvester/issues/2066
https://github.com/harvester/harvester/issues/2781
https://github.com/harvester/harvester/issues/4527
Test plan:
cc @m-ildefons
It's the partprobe:
# cat /proc/partitions
major minor #blocks name
7 0 581632 loop0
253 0 125829120 vda
253 16<PHONE_NUMBER> vdb
253 17 1024 vdb1
253 18 51200 vdb2
253 19 8388608 vdb3
253 20 15728640 vdb4
253 21<PHONE_NUMBER> vdb5
253 22<PHONE_NUMBER> vdb6
253 32 209715200 vdc
253 33 1024 vdc1
253 34 65536 vdc2
253 35 4194304 vdc3
253 36 8388608 vdc4
253 37 197063680 vdc5
253 48 20971520 vdd
11 0 5863104 sr0
# lsblk -d -n -J -o NAME,TYPE | jq -r '.blockdevices[] | select(.type == "disk") | .name'
vda
vdb
vdc
vdd
# sgdisk -Z /dev/vda
Creating new GPT entries in memory.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
# sgdisk -Z /dev/vdb
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
# sgdisk -Z /dev/vdc
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
# sgdisk -Z /dev/vdd
Creating new GPT entries in memory.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
# partprobe -s
/dev/vdd: msdos partitions
/dev/vdb: msdos partitions
Error: Can't have a partition outside the disk!
/dev/vdc: msdos partitions
/dev/vda: msdos partitions
# echo $?
1
Hi @tserong,
Could you check the vdb partitions?
It seems to be the root cause of the issue.
In this case /dev/vdb is a 4TiB disk, and I don't know why it purports to have msdos partitions what with having just been zapped... But that'll be where the error is coming from.
# fdisk -l /dev/vdb
Disk /dev/vdb: 4 TiB, 4398046511104 bytes, 8589934592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
# parted /dev/vdb print
Error: /dev/vdb: unrecognised disk label
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 4398GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
# cat /proc/partitions
major minor #blocks name
7 0 581632 loop0
253 0 125829120 vda
253 16 4294967296 vdb
253 32 209715200 vdc
253 48 20971520 vdd
11 0 5863104 sr0
There's nothing there. What am I missing?
How about vdc?
vdc is the same (no partitions). I did a little more digging, turns out the partprobe of all devices is picking up the CD-ROM:
# for dev in vda vdb vdc vdd sr0 ; do echo $dev ; partprobe -s /dev/$dev ; done
vda
/dev/vda: msdos partitions
vdb
/dev/vdb: msdos partitions
vdc
/dev/vdc: msdos partitions
vdd
/dev/vdd: msdos partitions
sr0
Error: Can't have a partition outside the disk!
So, I've got a couple of ideas:
Move the partprobe inside the above loop (so we only call partprobe -s /dev/$disk for each disk wiped), or,
Keep the existing partprobe but ignore the return value (so change it to partprobe -s || :)
Nice catch, sr0 cannot show a partition table.
If you just run partprobe (w/o display partition), does it still hit error?
Yup:
# partprobe
Error: Can't have a partition outside the disk!
# echo $?
1
One other thought - as I mentioned in https://github.com/harvester/harvester/issues/4527#issuecomment-1948077812:
Thinking about this further though, some care must be taken before wiping everything that looks like a disk that's attached to a given host. Imagine you had a system that could see a bunch of random disks on a SAN. Wiping all of these might not be a good idea.
I still think this PR is fine, but when we document the harvester.install.wipe_disks=true option, we're going to need to say something like "DO NOT USE THIS UNLESS YOU REALLY REALLY MEAN IT"
@mergifyio backport v1.2
|
GITHUB_ARCHIVE
|
The Blender submitter is an add-on that allows you to configure and submit jobs to Conductor.
This tutorial will get you up and running. For a detailed description of the plugin, please check the Blender reference page.
If you haven't already done so, download the Companion App.
- In the Companion app, open the Plugins page and install the Blender add-on.
Load the Conductor Render Submitter add-on
- Relaunch Blender and navigate to Edit->Preferences->Add-ons.
- Click the "Refresh" button to confirm that you are accessing the most recent version of the "Render: Conductor Render Submitter" add-on.
- Proceed to enable the "Render: Conductor Render Submitter" add-on.
- Switch over to the "Rendering" workspace, where you'll find the "Conductor Render Submitter" panel.
- Open the scene you want to render.
- Within the Conductor Render Submitter panel, click on Connect. This connects to your account on Conductor and you may be asked to sign in. It fetches the list of projects, packages, and instance types available to you. You'll notice the respective dropdown menus become populated.
You have the option to customize the default job title, which will be displayed on the Conductor dashboard upon job submission.
For faster and more efficient rendering, especially with Cycles, a GPU instance type is highly recommended. If you're working with Eevee or Redshift, a GPU instance type is essential, as these renderers depend entirely on GPU power.
Choose a suitable machine from the Machine Type drop-down menu.
Select the Blender version for your job, which might differ from your local version. Note that this choice affects available features, renders, and add-ons.
In the Render Software section, choose your preferred rendering software. By default, Blender's built-in renderer, Cycles, is selected.
Adjust render settings like resolution X & Y, resolution percentage, camera selection, and sample count specifically for your job submission. These customizations are exclusive to your submission and won't alter your original Blender scene.
Initial Steps with Scout Frames
Begin your rendering process by utilizing scout frames to check render quality without processing the entire sequence.
Set "fml:3" to render the first, middle, and last frames (e.g., frames 1, 51, 100 for a 100-frame range), and keep the chunk size at 1.
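The first/middle/last arithmetic is simple enough to sketch. The helper below is illustrative only (the function name is mine, not part of the add-on) and uses a rounding convention chosen to match the 1/51/100 example:

```python
def fml_frames(first, last):
    """Return first, middle, and last frame numbers for an "fml:3" scout render."""
    middle = (first + last + 1) // 2  # rounds up, matching the 1/51/100 example
    return [first, middle, last]

print(fml_frames(1, 100))  # [1, 51, 100]
```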
Full Rendering Setup
After verifying quality with scout frames:
- Deactivate Scout Frames to prepare for full rendering.
- Choose Chunk Size based on scene complexity:
- For complex scenes, select a chunk size of 1-5.
- For simpler scenes, a chunk size of 10-20 is recommended.
Custom Frame Range
You have the option to override the Blender scene's frame range.
Enable Custom Range if needed. Specify the frame range, such as "1-100", using sequences, individual numbers, or ranges with steps (e.g., "1,7,10-20,30-60x3,1001").
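The frame-range notation can be illustrated with a small parser. This is a sketch of the syntax as described above, not Conductor's actual implementation; the function name and lack of error handling are my own:

```python
def parse_frame_spec(spec):
    """Expand a frame spec like "1,7,10-20,30-60x3,1001" into frame numbers."""
    frames = []
    for token in spec.split(","):
        token = token.strip()
        step = 1
        if "x" in token:                      # "30-60x3": range with a step
            token, step = token.split("x")
            step = int(step)
        if "-" in token:                      # "10-20": inclusive range
            start, end = (int(n) for n in token.split("-"))
            frames.extend(range(start, end + 1, step))
        else:                                 # "7": a single frame
            frames.append(int(token))
    return frames

print(parse_frame_spec("1,7,10-12,30-40x3"))
# [1, 7, 10, 11, 12, 30, 33, 36, 39]
```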
Select add-ons compatible with your chosen Blender version, noting that changing the Blender version updates the list of available add-ons and their versions. Choose the most suitable add-on and version for your project.
Click the Preview Script button to check your submission details in an updating JSON script. Make sure to verify the "upload_paths" to ensure all your assets are properly uploaded. Regularly reviewing this can confirm the accuracy and completeness of your submission.
Validate and Submit
Hit the Submit button to dispatch your job to Conductor. We first perform an in-depth review of your scene to identify any issues that could cause rendering failures.
If any critical problems are found that could jeopardize the submission, errors will be flagged in the submission dialog, pausing the process. More commonly, you will see warnings or informational alerts rather than outright errors. These notices are outlined in a dialog window, giving you a rundown of all the detected issues. Despite these warnings, you can still choose to continue with your submission.
Example of a successful submission:
Download finished files
As tasks finish, you can download your images via the command-line tools or open the Downloader page in the Companion app.
Be sure to visit the Blender reference page for more info.
|
OPCFW_CODE
|
The 5 Basic Types of Data Science Interview Questions
Data science interviews are notoriously complex, but most of what they throw at you will fall into one of these categories.
By Roger Huang, Springboard.
This is an excerpt of Springboard’s guide to data science interviews.
"Always explain the thought process behind your choices and the assumptions that guide them."
Data science interviews are daunting, complicated gauntlets for many. But despite the ways they're evolving, the technical portion of the typical data science interview tends to be pretty predictable. The questions most candidates face usually cover behavior, mathematics, statistics, coding, and scenarios. However they differ in their particulars, those questions may be easier to answer if you can identify which bucket each one falls into. Here's a breakdown, and what you can do to prepare.
1. BEHAVIORAL QUESTIONS
Similar to any other interview, these questions are meant to test for your soft skills and see if you fit in culturally with the company.
Example: What have you liked and disliked about your previous position?
The intent here is to identify whether the role you're interviewing for suits your personality and temperament, and to understand why you're moving on from a previous position.
Don't overthink it or imagine that the key here is really any different from any other type of interview: Just understand the role well, avoid talking about issues you've had in the past with specific people, and be professional when describing what you disliked and why. A data science role may call for an analytical mind, but hiring managers still want to hear what makes you passionate.
2. MATHEMATICS QUESTIONS
Data scientist roles where you're expected not only to implement algorithms but also tweak them for specific purposes will usually come with mathematical questions.
Example: How does the linear regression algorithm determine what the best coefficient values are?
The point is to see how deeply you understand linear regression, which is critical because in many data science roles you won’t just work with algorithms in a black box; you’ll actually put them into action. This category of question tests how much you know about what's actually happening beneath the surface.
So this is one of those "show your work" moments. Trace out every step of your thinking and write down the equations. As you’re writing out the solution, describe your thought process so the interviewer can see your mathematical logic at work.
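For the linear-regression example above, one concrete derivation to trace is the closed-form least-squares solution for a single feature. A minimal sketch (the function name is mine):

```python
def fit_simple_ols(xs, ys):
    """Closed-form least squares for y = a + b*x.

    Minimizing sum((y - a - b*x)^2) and setting the partial derivatives
    to zero gives b = cov(x, y) / var(x) and a = mean(y) - b * mean(x).
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    a = my - b * mx
    return a, b

a, b = fit_simple_ols([0, 1, 2, 3], [1, 3, 5, 7])  # data lies on y = 1 + 2x
print(a, b)  # 1.0 2.0
```

Walking an interviewer through exactly this derivation, rather than just naming the formula, is what "show your work" looks like in practice.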
3. STATISTICS QUESTIONS
It goes without saying that a strong grasp of statistics is important for solving different data science problems. Chances are you’ll be tested on your ability to reason statistically and your knowledge of statistical theory.
Example: What is the difference between Type I error and Type II error?
Proving your mettle requires showing you understand the fundamentals of statistics. But more than that, interviewers also want to see whether you're capable of using the technical language and logic of statistics to grapple with ideas you may not often approach that way—and still communicate them clearly. So be no-nonsense in your response. Use the relevant statistical knowledge to arrive at your answer, but be as direct as possible about whatever you're asked to define.
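As a quick aid for the Type I/Type II question, the definitions can be pinned down in a few lines (an illustrative helper, not from any library):

```python
def error_type(null_is_true, rejected_null):
    """Classify a hypothesis-test outcome.

    Type I: rejecting a true null hypothesis (false positive).
    Type II: failing to reject a false null hypothesis (false negative).
    """
    if null_is_true and rejected_null:
        return "Type I"
    if not null_is_true and not rejected_null:
        return "Type II"
    return "correct"

print(error_type(True, True))    # Type I
print(error_type(False, False))  # Type II
```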
4. CODING QUESTIONS
A big part of most data science roles is programming to implement algorithms at scale. These questions are similar to the ones candidates face in software engineering interviews; they're meant to test your experience with the technical tools a company uses and your overall knowledge of programming theory.
Example: Develop a K Nearest Neighbors algorithm from scratch.
Showing you can write out the thinking behind an algorithm and implement it efficiently under time constraints is a great way to demonstrate your engineering skills. This kind of question is usually posed to data scientists who know both the algorithms and their technical implementation, or to data engineers who are given some context on what the algorithm is.
In any event, this type of question tests your understanding of matrix computation and how to deal with vectors and matrices. So start by going through a sample set of inputs and outputs, and manually work out the answer. As you do, keep an eye on time/space complexity.
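A minimal from-scratch K Nearest Neighbors classifier along these lines might look like the following. This is a sketch for interview practice, not production code: it is O(n log n) per query and does no input validation.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (vector, label) pairs; distance is Euclidean.
    """
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5), k=3))  # a
print(knn_predict(train, (5.5, 5.0), k=3))  # b
```

Working through a tiny input like this by hand first, then noting the time/space complexity, covers exactly what the question is probing.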
5. SCENARIO QUESTIONS
Last but not least, scenario questions are designed to test your experience and knowledge in different fields of data science, to find out the practical limits of your abilities. Demonstrate your applied knowledge as thoroughly as you can, and you’ll come off well in any case analysis.
Example: If you were a data scientist at a web company that sells shoes, how would you build a system that recommends shoes to visitors?
This question is meant to see how you envision your work delivering products or services from end to end. Scenario questions don’t test for knowledge in every field; they're meant to explore a product's life cycle from beginning to delivery and see what limits the candidate might have at each stage of that process. But these questions also evaluate holistic knowledge—for instance, what it takes to manage a team to deliver a final product—to determine how candidates perform in team situations.
Here, too, the usual job-interview advice applies: Be honest about where you can add a lot of value, but don’t be shy about where you expect to get a little bit of help from your teammates. Try to relate how your technical knowledge can help with business outcomes, and always explain the thought process behind your choices and the assumptions that guide them. And don’t hesitate to ask questions that can help you suss out an interviewer's intentions so you can better tailor your answers.
Data science interviews can be tricky straddling acts—you're challenged to program and come up with technical algorithms on the spot, but you're also measured by much the same criteria for nontechnical roles. Your statistical and mathematical knowledge will be tested, as will your ability to lead a team, communicate, persuade, and influence.
So instead of trying to prepare for every imaginable question, prepare for these five types of question. You can't anticipate every question that's thrown at you, but you can pretty accurately forecast what a hiring manager's needs and expectations might be—then set yourself up to meet them.
Bio: Roger Huang works in Growth at Springboard. He broke into a career in data by analyzing $700 million worth of sales for a major pharmaceutical company. Now he writes content that compiles insights from Springboard's network of data experts to help others do the same.
This post originally appeared on Fast Company. Reposted with permission.
- 21 Must-Know Data Science Interview Questions and Answers
- 10 Tips to Improve your Data Science Interview
- The Secret to a Perfect Data Science Interview
|
OPCFW_CODE
|
Big bertha says: default passwords
Contribute to the default password list. Manufacturer: Product: Revision: Protocol: Access: User ID: Password:
80+ Best Free Hacking Tutorials
Free Kindle Books
Hacker Test: Level 2
Shush.se - Watch TV Shows and Documentaries Online
Watch TV Shows and Documentaries Online for free in high definition.
Application skeletons in wxPython
Counting Sort Algorithm in Java - The Code Master
Counting Sort Algorithm in Java
Certified Ethical Hacking - News
Dive Into HTML5
Electronic Projects For Beginners
I made a guide for those people who are still starting with their electronics hobby. I started connecting wires, batteries, bulbs, buzzers and motors ...
» Cool case mods Computers. Review: New and old computers....
Computers. Review. New and old computers.
10 Principles for Keeping Your Programming Code Clean
RFID Zapper Destroys RFID Tags
Radio Frequency Identification, or RFID, has some pretty incredible capabilities, but this hacker is apparently not a fan. His gun just looks plain
Solarized - Ethan Schoonover
Solarized is a sixteen color palette (eight monotones, eight accent colors) designed for use with terminal and gui applications. It has several unique properties. I designed this colorscheme with both precise lightness relationships and a refined set of hues based o...
Apollo's Coding for GOOD
presented by Apollo and GOOD
Lazy Foo' Productions
Extreme Programming Rules
The rules of Extreme Programming (XP)
C programming.com - Learn C and C++ Programming
DEFCON 15: Teaching Hacking at College
Learn to Create Mobile Games with this In-Depth Unreal Developer's...
How to hack a remote system using Metasploit and Armitage...
Open a Padlock with an Aluminum Can
Most of us who've had school lockers or rental storage units know that lots of people trust inexpensive padlocks to secure their belongings. Tactical studies weblog ITS Tactical proves that this trust is a false sense of security by opening the two most...
DEFCON 16: Nmap: Scanning the Internet
8 ways to be a better programmer in 6 minutes.
Remember how we started talking about becoming a better developer in 6 months?
CMSC 311 - Class Notes
Top 10 Hardware Boosting Hacks
With great hardware comes great opportunity. Thanks to the internet and clever hacking communities, there are plenty of ways to boost the capabilities of your everyday gadgets.
The Ultimate Nintendo DS ROM Hacking Guide!
Raspberry Pi Tor proxy lets you take anonymity with you
Pictures from a developer's life
How To Crack A Wi-Fi Network's WPA Password With Reaver
Your Wi-Fi network is your convenient wireless gateway to the internet, and since you're not keen on sharing your connection with any old hoolig...
The Web's best free stuff
Guide To (Mostly) Harmless Hacking
IWS is an online resource that aims to stimulate debate about a range of subjects from information security to information operations and e-commerce.
10+ regular expressions for efficient web development
A video game that teaches how to program in Java
The Command Line Crash Course Controlling Your Computer...
NeoGaf's Free software list. - NeoGAF
Building a cluster of virtual linux machines
Some lesser-known truths about programming
My experience as a programmer and as webmaster of Mises.org has taught me a few things about writing software. Here are some things that people might find surprising about writing code: A programmer spends about 10-20% of his time writing code, and m...
The guide to implementing 2D platformers
My Favorite Smallware
I'm interested in and write about a wide variety of topics - economics, psychology, marketing, music, etc. I prefer writing long articles to short posts and don't update very often.
How To Become A Hacker
fogus: 10 Technical Papers Every Programmer Should Read...
Pseudo-random ramblings from Fogus.
Aircrack-ng is an 802.11 WEP and WPA/WPA2-PSK key cracking program.
Secret Hack Codes for Android Mobile Phones
50 Places You Can Learn to Code (for Free) Online
If you're curious about learning a programming language, you're in luck: there's no shortage of resources for learning how to code online.
Become a Programmer, Motherfucker
If you don't know how to code, then you can learn even if you think you can't. Thousands of people have learned programming from these fine books:
Gravity Points · CodePen
Killer Game Programming in Java
is for people who already know the basics of Java. For example, students who've finished an 'Introduction to Java' course. The aim is to teach reusable techniques which can be pieced together to make lots of different, fun games. For example, how to ...
Tangible Media Group
Invent Your Own Computer Games with Python
Invent Your Own Computer Games with Python is a free ebook programming tutorial for the Python programming language. Learn how to program by making fun games!
Great Mistakes in Technical Leadership - reprint « Mroodles...
RavenDB - 2nd generation document database
Knockout : Home
XML Serialization of Complex .NET Objects
Free Online Computer Science and Programming Books, Textbooks,...
Free online computer science, engineering and programming books, ebooks, texts, textbooks, lecture notes, documentations and references.
Top 50 Free Open Source Classes on Computer Science : Comtechtor...
HowStuffWorks "The Basics of C Programming"
A computer program is the key to the digital city: If you know the language, you can get a computer to do almost anything you want. Learn how to write computer programs in C.
Hacking - Beginning txt
The Evolution of a Programmer
If you enjoyed this, you might like:
FTIR Multitouch and Display Device - Experiments with Processing,...
Johnny Chung Lee - Human Computer Interaction Research
curriculum vitae, publications, patents, recognition
How to be a Programmer: A Short, Comprehensive, and Personal...
Copyright 2002, 2003 Robert L. Read
|
OPCFW_CODE
|
[deliver] deliver init could recognize when it is run in non-interactive mode and handle accordingly
The lane run_deliver_init runs `bundle exec fastlane deliver init` via `sh()`. But unlike other actions or functionality, it doesn't recognize that it is running in non-interactive mode, so it doesn't properly handle the 2FA flow and just dies.
λ bundle exec fastlane run_deliver_init
[✔] 🚀
[18:11:35]: ------------------------------
[18:11:35]: --- Step: default_platform ---
[18:11:35]: ------------------------------
[18:11:35]: Driving the lane 'ios run_deliver_init' 🚀
[18:11:35]: -----------------------------------------------
[18:11:35]: --- Step: bundle exec fastlane deliver init ---
[18:11:35]: -----------------------------------------------
[18:11:35]: $ bundle exec fastlane deliver init
[18:11:43]: ▸ [18:11:43]: Login to App Store Connect<EMAIL_ADDRESS>
[18:11:49]: ▸ Two Factor Authentication for account<EMAIL_ADDRESS>is enabled
[18:11:49]: ▸ Your session cookie has been expired.
Run normally this looks like this.
λ fastlane deliver init
[✔] 🚀
[18:14:11]: Login to App Store Connect<EMAIL_ADDRESS>
Two Factor Authentication for account<EMAIL_ADDRESS>is enabled
Your session cookie has been expired.
Please enter the 6 digit code:
Workaround
Use an app specific password, those don't need 2FA.
If there is no password supplied at all, then the error message recognizes non-interactive mode:
λ bundle exec fastlane run_deliver_init
[✔] 🚀
[18:31:05]: ------------------------------
[18:31:05]: --- Step: default_platform ---
[18:31:05]: ------------------------------
[18:31:05]: Driving the lane 'ios run_deliver_init' 🚀
[18:31:05]: -----------------------------------------------
[18:31:05]: --- Step: bundle exec fastlane deliver init ---
[18:31:05]: -----------------------------------------------
[18:31:05]: $ bundle exec fastlane deliver init
[18:31:13]: ▸ [18:31:13]: Login to App Store Connect<EMAIL_ADDRESS>
[18:31:13]: ▸ -------------------------------------------------------------------------------------
[18:31:13]: ▸ Please provide your Apple Developer Program account credentials
[18:31:13]: ▸ You can also pass the password using the `FASTLANE_PASSWORD` environment variable
[18:31:13]: ▸ -------------------------------------------------------------------------------------
[18:31:13]: ▸ Looking for related GitHub issues on fastlane/fastlane...
[18:31:14]: ▸ Found no similar issues. To create a new issue, please visit:
[18:31:14]: ▸ https://github.com/fastlane/fastlane/issues/new
[18:31:14]: ▸ Run `fastlane env` to append the fastlane environment to your issue
[18:31:14]: ▸ C:/Projects/Fastlane/win-fastlane/credentials_manager/lib/credentials_manager/account_manager.rb:135:in `ask_for_login': [!] Missing password for user<EMAIL_ADDRESS>and running in non-interactive shell (RuntimeError)
The necessary logic is implemented as UI.interactive?:
https://github.com/fastlane/fastlane/blob/cd88359fa9b6772f91a31a290f8f3c790a716dc0/fastlane_core/lib/fastlane_core/ui/implementations/shell.rb#L120-L125
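fastlane's check lives in Ruby (`UI.interactive?`, linked above). The general pattern is sketched here in Python purely for illustration; the function name and fallback behavior are my own assumptions, not fastlane's API:

```python
import os
import sys

def ask_for_password(env=None):
    """Resolve a password, prompting only when the shell is interactive."""
    env = os.environ if env is None else env
    password = env.get("FASTLANE_PASSWORD")
    if password:
        return password
    if sys.stdin.isatty():  # interactive shell: safe to prompt the user
        return input("Password: ")
    raise RuntimeError("Missing password and running in non-interactive shell")

print(ask_for_password({"FASTLANE_PASSWORD": "example"}))  # example
```

The point of the issue is that `deliver init`, when shelled out to via `sh()`, skips this kind of check and tries to prompt anyway.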
|
GITHUB_ARCHIVE
|
It has always been a big challenge to efficiently support continuous development and integration for ML in production. These days, Data Science and ML are becoming basic ingredients to solve complex real-world problems and deliver tangible value. In general, this is what we have:
- Large datasets
- Inexpensive on-demand compute resources
- ML accelerators in cloud
- Rapid advances in different ML research fields (such as computer vision, natural language processing, and recommendation systems)
However, we are missing the ability to automate and monitor all steps of ML system construction. In short, the real challenge is not to build an ML model, but rather to create an integrated ML system and continuously operate it in production. Ultimately, ML code is just a part of a real-world ML ecosystem, and there are innumerable complicated steps that surround and support this ecosystem: configuration, automation, data collection, data verification, testing/debugging, resource management, model analysis, process/metadata management, serving infrastructure, and monitoring.
Before implementing any ML use-cases it is useful to consider the following:
- Is this really a problem that requires ML? Is there not a way to tackle it with traditional tools and algorithms?
- Design and implement evaluation tools, to properly track if you are moving in the right direction.
- Try to use ML as a helping hand as opposed to a complex necessity.
So, all-in-all a well-defined ML flow can be represented in three phases:
Phase 1: The first pipeline
- Keep the model simple and think carefully about the right infrastructure. This means defining the correct method of moving data to the learning algorithm, as well as implementing well-managed model integration and versioning.
- Have a test infrastructure that is independent of the model. This should include tests to verify that data is successfully fed into the algorithm, that the model is successfully output from the algorithm, and that statistical metrics of the data in the pipeline are the same as those of data outside the pipeline.
- Usually the problems that machine learning is trying to solve are not completely new. There generally exists some existing system for ranking, or classifying, or whatever problem you are trying to solve. This means that there are a bunch of rules and heuristics. A heuristic is a series of approximate steps to help you model the data. These same heuristics can give you an edge when applying machine learning. Try to turn heuristics into useful data. The transition to a machine learned system will be smoother: heuristics may contain a lot of the intuition about the system you don’t want to throw away.
- Now comes the monitoring part. Depending on the use-case, it is possible that performance may decrease after a day, a week, or perhaps longer. It makes sense to have an alert monitoring system watching and triggering retraining continuously.
- Use an appropriate evaluation metric for your model. For example, know when to use an ROC curve vs when to use accuracy.
- Watch for salient failures, which provide exceptionally useful information to the ML algorithm.
- Often, one may not have properly quantified the true objective. Or perhaps the objective may change as the project advances. Further, different team members may have different understandings of the objectives. In fact, there is often no “true” objective. So, train on the simple ML objective, and add a “policy layer” on top, which allows one to add additional logic and rank ML models as needed.
- Using simple pipelines make debugging easier.
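To make the evaluation-metric bullet above concrete, here is a minimal sketch showing how accuracy can look great on imbalanced data while ROC AUC exposes a model with no ranking power (helper names are mine; AUC is computed via its rank-statistic interpretation):

```python
def accuracy(labels, scores, threshold=0.5):
    """Fraction of examples whose thresholded score matches the label."""
    preds = [s >= threshold for s in scores]
    return sum(p == bool(y) for p, y in zip(preds, labels)) / len(labels)

def roc_auc(labels, scores):
    """AUC as the probability a random positive outranks a random negative."""
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Imbalanced data: a constant score gets 90% accuracy by always
# predicting "negative", yet carries zero ranking power (AUC = 0.5).
labels = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
scores = [0.1] * 10
print(accuracy(labels, scores))  # 0.9
print(roc_auc(labels, scores))   # 0.5
```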
In the first phase of the lifecycle of a machine learning system, the important aspect is to push the training data into the learning model, get any metrics of interest evaluated, and create a serving infrastructure that can be built upon. After, that Phase 2 begins.
Phase 2: Feature Engineering
In the second phase, there is a lot of low-hanging fruit. Feature combination and tweaking can generate improvements, and a rise in performance is generally easy to visualize.
- Be sure to employ model versioning as the model is trained and upgraded.
- As ML models train, they try to find the lowest value of the loss function, which in theory should minimize error. However, this function may be complex, and one can end up stuck in a different local minimum with each run. This can make it hard to determine whether a change to the system is meaningful or not. By creating a model without deep, complex features, you can get an excellent baseline performance. After the baseline, more esoteric approaches can be tried and tested, such as combining features to make more complex ones.
- Explore features that generalize across different data contexts.
- Specific feature use may result in better optimization. The reason being that, with a lot of data, it is simpler to learn many simple features than a few complex features. Regularization can come in handy to eliminate features that apply to only a few examples.
- Apply transformations to combine and modify existing features to create new features in human-understandable ways.
- It is important to understand that the number of feature weights that can be learned in a linear model is roughly proportional to the amount of data available. The key is to scale the number of features and their respective complexities to the size of the data.
- Features that are no longer required should be discarded.
- One should apply human analysis to the system. This requires calculating the delta difference between models, and being aware of any changes when new data (or a new user) is introduced to a model in production.
- New features can be created from patterns observed in measurable quantities (metrics). Hence, it is a good idea to have an interface to visualize training and performance.
- Quantifying undesirable observed behaviour can help in analyzing the properties of the system which are not captured by the existing loss function.
- It is not always true that short-term behaviour is an indication of long-term behaviour. Models sometimes need to be frequently tuned.
- Study the test-train skew. This is the difference between performance during training and performance during testing/serving. The reason for this skew can be:
- A discrepancy due to differences in data handling in training and testing/serving.
- A change in the data between these steps.
- The presence of feedback loops between the model and your training algorithm.
One solution is to monitor training and testing explicitly so that any change in system/data does not introduce unnoticed skew.
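One way to sketch such monitoring is to compare per-feature statistics between training and serving data and alert on large shifts. A minimal illustration, with the threshold, names, and z-like heuristic all being assumptions rather than any standard tool:

```python
def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def skew_alert(train_values, serving_values, z_threshold=3.0):
    """Flag a feature whose serving-time mean drifts from its training mean.

    Compares the shift in means against the training standard deviation;
    a z-like score above the threshold suggests train/serving skew.
    """
    shift = abs(mean(serving_values) - mean(train_values))
    scale = std(train_values) or 1.0  # avoid dividing by zero
    return shift / scale > z_threshold

train_feature = [10, 11, 9, 10, 12, 10, 9, 11]
print(skew_alert(train_feature, [10, 11, 10, 9]))   # False: same distribution
print(skew_alert(train_feature, [55, 60, 58, 57]))  # True: serving data drifted
```

In a real pipeline the same comparison would run on logged serving data continuously and feed the alerting system mentioned in Phase 1.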
Phase 3: Optimization refinement and complex models
There will be certain indicators that suggest the end of Phase 2. One may observe that monthly gains start to diminish. There will be trade-offs between metrics: a rise in some experiments and a fall in others. This is where one notices the need for model sophistication, as gains become harder to achieve.
- Have a better look at the objective. If unaligned objectives are an issue, don’t waste time on new features. As stated before, if product goals are not covered by existing algorithmic objectives, one needs to change either the objectives or the product goals.
- Keep ensembles simple: each model should either be an ensemble (only accounting for the input of other models) or a base model (taking many features), but never both.
- Looking for qualitative new sources of information can be useful, rather than refining existing signals once performance plateaus.
- When dealing with content, one may be interested in predicting popularity (e.g. the number of clicks a post on social media receives). In training a model, one may add features that would allow the system to personalize (features representing how interested a user is), diversify (features quantifying whether the current social media post is similar to other posts liked by a user), and measure relevance (measuring the appropriateness of a query result). However, one may find that these features are weighted less heavily by the ML system than expected. This doesn't mean that diversity, personalization, or relevance aren't valuable.
With all these steps in mind, it is clear that one cannot go about implementing simple ML code. One needs a sophisticated ML architecture to address the complications and improvisations that come with developing an ML environment.
As can be seen in the above diagram, the pipeline includes the following stages:
- Source control
- Test and build services
- Deployment services
- Model registry
- Feature store
- ML metadata store
- ML pipeline orchestrator
Let's take an example task: churn prediction, where the goal is to predict how many people will leave a given workplace using various parameters. The idea is to implement CI/CD integration for deployment, with Kubernetes used as the environment supporting the various processes involved in the integration.
Once the model is deployed, the following things need to be kept in mind:
Evaluation: measuring the quality of predictions (offline evaluation, online evaluation, evaluating using business tools, and evaluating using statistical tools)
Monitoring: tracking quality over time
Management: improve deployed model with feedback → redeployment
In conclusion, automation and monitoring are important for all steps of ML system construction. A well-engineered ML solution won't simply make the development process easier; it will also make the system coherent and resilient.
Heat and dry weather can drive snakes into basements and other cool and moist areas of houses and other manmade structures, where their presence is sometimes detected by skins they shed.
The frequency of shedding ranges from more than once a month to once or twice per year. It varies, depending on the species and age of snakes, as well as access to food because that impacts growth rate.
For many people, the only thing worse than finding a snake skin in their house is finding the snake. It might make some folks rest easier if they could tell if the snake that left the skin was venomous.
After finding a "shed" recently, I did a little research and learned that you can identify a snake by its skin.
Identifying snakes by their sheds is a process of elimination based on size, remnants of color patterns and features of scales (similar to fish scales) and other factors.
The first thing most people want to know when they find a shed is if it came from a venomous snake. The only two venomous species native to Wilkes County are copperheads and timber rattlesnakes.
The shed isn’t from a rattlesnake if it obviously has a complete tail because a rattler’s shed ends abruptly near the tail since it doesn’t cover the rattles. If it’s not obvious how much of that end of the shed is missing, it can be hard to use this as an identifier.
The shed I found had a complete tail so it wasn’t from a timber rattler. They tend to stay clear of residential areas, so there wouldn’t likely be one near our house.
We’ve found several copperheads around railroad ties in our yard, which I think is because they wait for mice running along the ties. This is why you should be careful stepping over logs in the woods.
Sheds cover more than just the top of the scales, so a shed’s length can be a good bit longer than the snake it came from, plus scale size in relation to body size varies with species. In addition, snake skins are stretched during ecdysis and a shed could be from a hatchling or juvenile that will grow much larger.
The shed I found was about 2 ½ feet long, just a little more than a copperhead’s average length.
Timber rattlers average about 3 feet long and can grow considerably longer. A recent Wilkes Journal-Patriot issue with news from 1979 said a 4.8-foot-long timber rattler that had just been killed on the Little Brushies was unusually long.
Copperheads and rattlesnakes have wider bodies than non-venomous species found in Wilkes. Our two venomous species also have a single row of scales under their tails while our non-venomous snakes have a double row of scales.
Some snake species have slight keels on their dorsal scales (those on the top and sides of the body) and some are smooth. A keel is a ridge running lengthwise down the center of the scales, like the keel of a boat.
Rattlesnakes, copperheads and water snakes have distinct keels. Rat snakes have slight keels, but racers don’t have them.
Patterns on a snake’s body are often preserved in its shed, but without color. These are most visible in newly-shed skin.
If you find a shed with an intact head and its shape is triangular or you see a small pit between the eye and nostril holes, it came from a copperhead or rattlesnake.
After installing Unity for the first time, I received this access denied error. I've uninstalled and reinstalled at least 3 times and tried different builds, with the same result. I'm using Windows 10 and I'm running it as admin.
If anyone knows how to fix this, I would be thankful.
Weirdly enough, this kind of error happened to me simply because I have a Windows Explorer opened on a folder I was trying to move. My fix was simply to close that window.
An access denied error generally means that folder/file is opened somewhere else and made a “lock” on it, preventing other apps from deleting/moving that file/folder. So check what other programs you have running, and make sure they don’t have the folder/file in question opened.
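One way to narrow this down is to test whether the file can actually be moved. Here is a hypothetical helper (not part of Unity, just a diagnostic sketch) that reports whether the OS lets you relocate a file; on Windows, a False result for an existing file usually means another process holds a lock:

```python
import os
import tempfile

def can_move(src, dst):
    """Try to move a file; return False if the OS refuses,
    e.g. because another process holds a lock on it (Windows)."""
    try:
        os.replace(src, dst)
        return True
    except OSError:
        return False

# demo: a file nothing else has open can be moved freely
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "scene.unity")
with open(src, "w") as f:
    f.write("stub")
print(can_move(src, os.path.join(tmp, "moved.unity")))  # True
```

If this returns False while Unity is closed, some other process (sync client, editor, antivirus) is the likely culprit.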
The cause of my file access denied issues was actually very simple: Google Drive. Even though I hadn’t told it to include my project files in the file sync, it was still including them anyway. And the very annoying fact was if a file was syncing, access to it was locked on my PC. I disconnected my Drive and all the errors went away. I’ll just have to stick with the web interface, I guess.
Closed visual studio and problem was solved
After many hours of research I am back with the solution. I found this post:
I found packages unityeditor-cloud-hub-0.0.1.tgz and unity-editor-home-0.0.7.tgz in C:\Program Files\Unity\Editor\Data\Resources\Packages. Create folders node_modules\unityeditor-cloud-hub and node_modules\unity-editor-home in C:\Users%user_name%\AppData\Roaming\Unity\Packages. Extract dist and package.json from unityeditor-cloud-hub-0.0.1.tgz into unityeditor-cloud-hub, dist and package.json from unity-editor-home-0.0.7.tgz into unity-editor-home
and because it was a bit old, I'm guessing it didn't solve the problem completely. Inside your install location (C:\Program Files\Unity\Editor\Data\Resources\Packages) you may also find "unityeditor-collab-history-0.4.10.tgz" and "unityeditor-collab-toolbar-0.4.12.tgz"; you must also create folders for those in the AppData folder and extract them to their respective folders.
Here is an image showing the 3 layers of my AppData folder and an image of my install location Packages folder.
Image Link (http://i.imgur.com/fvV9rcw.png)
Use Resource Monitor → Associated Handles, search for the filename, find the process that uses it and right click, end process.
Close out of Visual Studio or whatever you use to code. It’s basically the same as when you move a file in Windows Explorer and it says cannot move/copy because the file or the folder is open in another app.
What is a File Server?
File servers, simply put, are network components that store data files for easy access by other network components in a client-server model. Their main function is to facilitate the transfer of data files from one place to another without relying on physical devices like USB flash drives or optical media. A user who wishes to access a particular data file merely has to log into the file server, navigate the file system, and download the file to their machine. It is essentially a data store that resides on a network.
File servers often use a special network protocol to transfer data files back and forth; a common one in use is FTP, or File Transfer Protocol. It differs from the more common web protocol, HTTP, in that it uses two connections between a client and the server. The first is the control connection, over which authentication and session information is exchanged. The second connection is used for the actual data transfer.
File Server Backup Systems
There are a number of methods with which the contents of a file server can be preserved from harm. Most commonly, the file server itself has a backup mechanism akin to a system restore function on a standard personal computer. However, a bigger network, where the file server plays a more pivotal role would require a more sophisticated set of policies.
A file server backup system is a software application that, once configured, performs backup functions for file servers. There are a number of commercial applications available; these vary greatly, depending on the functions they perform. Therefore it behoves a user to examine all their available options before settling for any particular one.
If the base operating system is Linux, it is possible to create a shell script to automate the backup process. Alternatively, if the file server is leased from a service provider, those companies sometimes provide backup services as part of their package. However, cost becomes a factor in this particular scenario, and all things considered, it may be simpler to use dedicated storage that is separate from the file server.
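On Linux, that automation can be as small as a shell function wrapping tar. The sketch below is illustrative only (paths and the naming scheme are assumptions, and a real policy would add rotation and off-host copies):

```shell
#!/bin/sh
# Minimal file-server backup sketch: archive a source directory
# into a timestamped tarball under a backup directory.
backup() {
  src="$1"
  dest="$2"
  stamp="$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$dest"
  archive="$dest/files-$stamp.tar.gz"
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
  printf '%s\n' "$archive"
}

# demo against throwaway directories
demo_src="$(mktemp -d)"
echo "hello" > "$demo_src/a.txt"
demo_dest="$(mktemp -d)"
backup "$demo_src" "$demo_dest"
```

Scheduled from cron (for example, nightly), this gives a basic restore point; per the advice above, the destination should ideally live on storage separate from the file server itself.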
Why Backup a File Server?
File servers are often vital components of a network. They form part of the backbone and a failure of the file server can result in significant losses. This is mainly due to the fact that most of the other network components use the file server to access data files.
Backing up a file server is an important part of data security and consistency. For example, a website could use a file server to store their images and downloads. If a file server goes down for any reason, all the files linked to it will no longer be available.
Next Generation Backups
As time has progressed, cloud computing has become the new technological revolution, and in terms of backups, the cloud can have a significant impact. Clouds can be used to back up file servers while keeping the data within the same network. If a file server fails, users will not have to wait for the data to be restored; it can be loaded directly from a secondary source in the cloud, which essentially serves as a backup.
Registering a root-scoped service worker in Nuxt3 not working with nitro header configuration
Environment
Operating System: Darwin
Node Version: v16.16.0
Nuxt Version: 3.1.1
Nitro Version: 2.1.1
Package Manager: <EMAIL_ADDRESS>
Builder: vite
User Config: typescript, nitro
Runtime Modules: -
Build Modules: -
Reproduction
A reproduction of this issue can be found here: https://github.com/eliaSchenker/nuxt-webworker
The reproduction contains a simple service worker script, the plugin registering the service worker and a sample page with basic service worker interactions (sending messages, making a request, updating, etc.)
Describe the bug
I am trying to register a service worker in Nuxt. I am importing the url of the service worker as mentioned in the vite-docs Vite Features - Web Workers like so:
import MyWorker from '~/worker/serviceworker.ts?worker&url'
This results in the following url when running a dev build (yarn dev):
/_nuxt/worker/serviceworker.ts?type=module&worker_file
And this url when running a preview of the node build (yarn build)
/_nuxt/serviceworker-4409edbe.js
Because this url is not located in the root directory of the project, when trying to register the service worker with a scope of / the following error message is printed:
Service worker registration failed DOMException:
Failed to register a ServiceWorker for scope ('http://localhost:3000/') with script ('http://localhost:3000/_nuxt/serviceworker-4409edbe.js'):
The path of the provided scope ('/') is not under the max scope allowed ('/_nuxt/').
Adjust the scope, move the Service Worker script, or use the Service-Worker-Allowed HTTP header to allow the scope.
So fair enough, we need to set the header to configure the correct scope. This is where the issues start.
The core of the problem is in setting this header for the service workers script file. After some research I found that Nitro has a way of setting headers for certain files using the routeRules. I've tried to implement this by setting the required header for all files in the worker directory (location of serviceworker.ts):
nitro: {
routeRules: {
'/worker/**': { headers: {'Service-Worker-Allowed': '/'} },
}
}
Similarly, I've tried moving the worker to the assets directory and adding this rule for all files in the assets directory (as mentioned in the Nitro docs):
nitro: {
routeRules: {
'/assets/**': { headers: {'Service-Worker-Allowed': '/'} },
}
}
From what I've noticed, the routeRules do not seem to behave very well with Nuxt: they appear to work only for the server API, middleware, and pages, while all rules for assets are simply ignored.
Is this an issue with Nitro's routeRules not working correctly with Nuxt? If so, is there another way to configure file-specific (or global) headers in Nuxt or modify the incoming requests in another way?
Additional context
Possible Workaround
I almost found a workaround for this problem, which is unfortunately invalidated due to the way vite bundles the files: By creating a server endpoint in server/api and fetching the service workers compiled file and outputting it (therefore the contents of the server endpoint and the service worker script are the same). Because the content of the service worker script is now located in a server endpoint, I could set the headers using Nitro. This solution only works when running the application locally though, as the url of the script differs (and also contains a hash) when building for production. Because the url changes it is not possible to fetch the file.
Logs
No response
This should be possible for assets - but note that routeRules applies on the final route of the outputted file not on the location of the source files. But it obviously won't work as you don't know the path in advance. So you would likely need to add a directory for service worker exports:
export default defineNuxtConfig({
vite: {
build: {
rollupOptions: {
output: {
chunkFileNames (chunkInfo) {
if (chunkInfo.name.includes('workers')) { return '_nuxt/workers/[name].[hash].js' }
return '_nuxt/[name].[hash].js'
}
}
}
}
}
})
Then you could set the routeRules accordingly:
nitro: {
routeRules: {
'/_nuxt/workers/**': { headers: {'Service-Worker-Allowed': '/'} },
}
}
Note: I've not tested this but I think it should work.
@danielroe
Thanks for your quick response!
I've tried the configuration option you provided, and as it seems, the serviceworker.ts file (or the folder path) is never passed through the function and therefore the path is never altered.
I have tried to console.log all the chunkInfo.name values, and the workers/serviceworker.ts path is not listed, so the serviceworker script is never passed through the function. The script is currently located in the workers folder in the assets directory (though I have also tried placing the workers folder outside of the assets directory, with the same results).
On that note: This should still work when run on dev, even if the path is not replaced, right? Because, even if I have the serviceworker on /_nuxt/workers/serviceworker.ts?type=module&worker_file and the routeRules set to
nitro: {
routeRules: {
'/_nuxt/workers/**': { headers: {'Service-Worker-Allowed': '/'} },
}
}
the header is still not added, even though all the paths seem to match. Is something wrong with my configuration or is this an issue with Nitro?
@danielroe Any update on this?
@eliaSchenker I've successfully integrated a custom service worker to deal with push notifications with this Vite-PWA library - just out with the Nuxt3 extension and supported by Nuxt Labs.
https://vite-pwa-org.netlify.app/frameworks/nuxt.html
Also, check this for a simple integration with your own service worker:
https://vite-pwa-org.netlify.app/guide/service-worker-without-pwa-capabilities.html
(More details):
https://vite-pwa-org.netlify.app/guide/inject-manifest.html
This isn't really an issue with Nuxt. The issue is that the service worker integration is emitting the file as an asset - it's not quite the same kind of thing as normal runtime code.
This should do the trick for you:
// https://nuxt.com/docs/api/configuration/nuxt-config
export default defineNuxtConfig({
vite: {
plugins: [{
name: 'vite-plugin-serviceworker',
generateBundle (_options, bundle) {
for (const file in bundle) {
if (file.includes('serviceworker')) {
bundle[file].fileName = bundle[file].fileName.replace(/.*\//, '_nuxt/workers/')
}
}
}
}],
},
routeRules: {
'/_nuxt/workers/**': { headers: { 'Service-Worker-Allowed': '/' } },
}
})
spanner: Enhance session pool behavior
Change defaults for maxSessions and blockIfExhausted
Added tracking of leaked sessions
Changed closeAsync to close leaked sessions as well.
Changed closeAsync to close pool maintenance worker as well.
Added a SessionPoolStressTester
Changed Spanner#closeAsync to be blocking Spanner#close.
@tomayles Please take a look
Coverage decreased (-0.006%) to 80.954% when pulling c9d56df6bf80c5893e0920d584141bde3876a933 on vkedia:session-pool-exception into 9623994f58199c79ee5b9f99ad0ff6d7fb69bd84 on GoogleCloudPlatform:master.
Closed by mistake.
Coverage decreased (-0.009%) to 80.952% when pulling ae8860fbf8054af3122f9288e972b88227bd1b09 on vkedia:session-pool-exception into 9623994f58199c79ee5b9f99ad0ff6d7fb69bd84 on GoogleCloudPlatform:master.
Coverage decreased (-0.009%) to 80.952% when pulling 531f73ccf8002d04522d2abec792764628cf5af8 on vkedia:session-pool-exception into 9623994f58199c79ee5b9f99ad0ff6d7fb69bd84 on GoogleCloudPlatform:master.
Thanks for updating the javadocs. I'm still a little uneasy about having blocking RPC calls in the shutdown path though.
Thanks for your comments. It is almost essential that this method be allowed to finish and delete all the sessions, otherwise it can lead to sessions lying around on the backend for up to an hour. Since there is a hard limit on how many sessions one can have on the backend, this can lead to running out of sessions, which will cause requests to fail. That is the reason I removed the async version. You almost always want to block for this method to finish.
On Mon, May 1, 2017 at 3:47 PM, Daniel Compton<EMAIL_ADDRESS>wrote:
Thanks for updating the javadocs. I'm still a little uneasy about having
blocking RPC calls in the shutdown path though.
Coverage decreased (-0.009%) to 80.952% when pulling 1d867bf018a6fdf7253b995895325dee76618870 on vkedia:session-pool-exception into 9623994f58199c79ee5b9f99ad0ff6d7fb69bd84 on GoogleCloudPlatform:master.
@tomayles @vam-google Please take a look again.
Changes Unknown when pulling 2b385eb480c1e6186b17898a6878ec975250f864 on vkedia:session-pool-exception into ** on GoogleCloudPlatform:master**.
Changes Unknown when pulling 710701b11eb409e93cb5f00f19fc0556920d9e3d on vkedia:session-pool-exception into ** on GoogleCloudPlatform:master**.
<?php
namespace Zeero\Zcli\Commands;
use Zeero\Core\Env;
use Zeero\Zcli\Command;
/**
* Command for Security purposes
*
* @author carlos bumba carlosbumbanio@gmail.com
*/
class SecurityCommand extends Command
{
public static $_arguments = ['key'];
public function __construct()
{
parent::__construct();
}
public function _initialize()
{
echo "generate a new APP_KEY: \n\n";
echo "try security:key generate \n";
}
/**
* generate an application key
*
* @return void
*/
public function key()
{
if ($this->input_value == 'generate') {
$key = bin2hex(openssl_random_pseudo_bytes(20));
Env::replace('APP_KEY', $key);
echo "app key generated";
}
}
}
I'd like to propose an approach, combining knowledge and techniques from James Bach and others in the context-driven community, which may help depending on the application under test and your workplace culture. It sometimes works for me, and I hope that it may work for someone else.
1. James Bach's Heuristic Test Strategy Model
Available here: http://www.satisfice.com/tools/satisfice-tsm-4p.pdf
As it says on the front page: "The immediate purpose of this model is to remind testers of what to think about when they are creating tests."
I like to build from the Quality Criteria categories, although I think you could begin building from any of the categories, depending on the type of model you want to build.
2. Mind Mapping
There's a lot of talk in the context-driven testing community right now about mind mapping, and I have to admit I was originally dismissive of the idea. I had always associated mind mapping with high school social studies teachers for some reason, or, at best, highly paid advertising executives, and wondered if they really had a practical purpose in the 'real world'. I've since learned, of course, there is no such thing as the 'real world', so I started looking into it. There are a few free mind mapping tools out there: http://en.wikipedia.org/wiki/List_of_mind_mapping_software
In these examples I'll be using FreeMind
You can read more about using mind mapping in testing here: http://www.bettertesting.co.uk/content/?p=956
3. Select Criteria
Select the appropriate criteria for your project and map it out (I use the word 'Functionality' instead of 'Capability'):
4. Flesh Out
Start fleshing out using the other categories to generate ideas.
5. Post up in a public area
When you think you've finished, or you've run out of ideas, print out, and post up somewhere visible. This lets PMs and developers see what your model of the application is, and they can point out areas that you may have missed.
6. Create Test Charters
If you're unfamiliar with test charters, or session-based test management, read these:
With a little tweaking, you've almost automatically generated your test charters from your mind map. As you run along the lowest levels of hierarchy, create your test charters from them. I find that usually each lowest level generates 1 to 3 test charters. In the example above, some example test charters may be:
- Create admin user; ensure has admin rights as described in document X, and can create, edit, and delete users
- Check the claims made by the sales team are present in the application
- Test the Proin at ligula libero
- Explore the Quisque quis libero urna using Internet explorer 6
The mind map lets you visualise the testing process and can be used to report how testing is going. You can use it to manage the testing process by assigning testers to a branch, letting them take ownership of all sub branches. This lets you quickly assign work without having to go one by one through requirements and specifications and manually assigning individual tasks. You could colour branches that you're currently working on, and shade in branches that have been completed. At a glance, you can see what has been done, what is being worked on, and what is left to do.
I hope this illustrates a good way to structure, manage, and report on exploratory testing. The main advantages are that it is quick to do: no lengthy documents with headings and paragraphs of text; and has many emergent properties which also aid the management of the testing process.
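The charter-generation step above can even be sketched in code: walk the lowest level of the map and emit a charter per leaf. This is a hypothetical sketch in which nested dicts stand in for a FreeMind export, and the charter wording is illustrative:

```python
def charters(node, path=()):
    """Yield one 'Explore <leaf> (<path>)' charter per leaf of a
    mind map represented as nested dicts (a leaf is an empty dict)."""
    for name, children in node.items():
        if children:
            yield from charters(children, path + (name,))
        else:
            yield f"Explore {name} ({' > '.join(path)})"

mind_map = {
    "Functionality": {
        "User admin": {"Create user": {}, "Delete user": {}},
        "Reporting": {"Export CSV": {}},
    }
}

for c in charters(mind_map):
    print(c)
```

In practice you would still tweak each generated line into a proper charter, but the leaf-walk captures the "1 to 3 charters per lowest level" pattern described above.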
Discover the Best Python Classes in Chicago
Python continues to top online lists of the most popular programming languages in the world. This object-oriented language offers simplicity, scalability, and versatility to tech professionals across every industry, from data science to web development.
If you want to learn Python in Chicago, you might find the number of options confusing. You can choose one of countless tutorials, bootcamps, or certificate programs. Consider first that you should base your selection on how and where you plan to use the skills and knowledge you gain from training.
Read on to learn more about the best Python classes in Chicago.
Best Python Classes & Schools in Chicago
- ONLC Training Centers - ONLC Training Centers offers live online courses with optional access to their on-site computer labs for those who need them. In Chicago, they provide multiple training options, including Python Programming Level 1: Intro for Non-Programmers. This beginner-level course is ideal for development pros who need to learn Python. Applicants with no programming experience are also welcome. There are no prerequisites. ONLC also offers intermediate courses in related topics like Power BI and Tableau data visualization.
- Sprintzeal Americas, Inc. - Many of Sprintzeal’s classes help participants prepare for project management, marketing, and web development certification tests. The CompTIA PenTest+ Certification class prepares attendees for CompTIA PenTest+, a popular certification for professionals in cybersecurity roles. While not every Python pro starts here, the course is open to students at all levels.
Online Python Schools & Classes
Online live learning is one of the most popular options for busy professionals and those with family obligations. Virtual training is interactive, offering a hands-on experience like in-person coursework. Students can attend from the comfort of home or office, ask questions in real-time, and get answers from expert instructors who can even control the screen—with permission, of course. Check out live online Python classes you can take from Chicago or anywhere.
- Noble Desktop - Python is one of many topics available from Noble Desktop, which offers bootcamps and certificate programs in-person and live online. Their Data Science Certificate program takes Python beginners from novices to data analysis or science pros in four weeks full-time or twenty weeks part-time. They also offer Python web development classes and shorter Python programming classes and bootcamps.
- Practical Programming - Practical Programming offers multiple Python classes, too, including beginner-level and intermediate. For example, their FinTech Bootcamp covers Python, emphasizing data analysis and visualization. There are no prerequisites for this course.
- NYC Career Centers - You can take in-person classes from NYC Career Centers in New York but live online from anywhere, including Chicago. Their Python Machine Learning Bootcamp is open to all levels, but applicants should be comfortable with Python data science libraries before enrollment.
Chicago Industries That Use Python
Some of the top industries in Chicago that use Python include information technology (IT), life sciences, and BFSI (banking, financial services, and insurance). While these sectors require Python nationwide, their importance in Chicago is even higher.
Today the greater Chicago metropolitan area is home to some of the biggest FinTech employers in the nation. Chicago biotech and pharmaceutical companies include Abbott Laboratories, Amgen, and Takeda Pharmaceutical Company, to name a few. Python and Django development companies based in Chicago include BuildThis, Foxbox Digital, and Brightlab.
Other key industries in Chicago use Python, too. Manufacturing, transportation, healthcare, and energy are industries that benefit from Python—from web development and Python data science to cybersecurity and artificial intelligence, including machine learning.
Python Jobs & Salaries in Chicago
Python professionals in the Chicago metro area do quite well salary-wise. For example, Data Analysts earn an average annual salary of about $73,000, comparable to the average U.S. salary.
While Data Scientists typically make around $118,000 annually here, Python Developers in Chicago earn about $144,000 annually—an impressive 25% higher than the national average for comparable positions. And Machine Learning Engineers here make an average annual salary of $168,000, eight percent higher than the national average.
These high-salary positions may require multiple programming languages and years of experience. Many Python professionals begin training through immersive bootcamps or certificate programs to gain entry-level jobs before advancing. When looking for Python positions in Chicago, remember to include titles like Software Engineer, DevOps Developer, or Build Release Engineer, as well as those listed above.
Python Corporate Training
You can book corporate Python training for your Chicago organization through CourseHorse. A live online Python Fundamentals class is available at the beginner level. Training is available live online through CourseHorse or at a state-of-the-art Midtown Manhattan location with workstations and equipment provided.
You can also get private group training tailored to your specific needs. If you want your team to have more scheduling flexibility, you can also purchase discounted vouchers for public enrollment classes. Call or click today for a free consultation.
Using LINQ to SQL in ASP.NET MVC2 project
Well, I am new to this ORM stuff. We have to create a large project, and I have read about LINQ to SQL. Will it be appropriate to use in a high-risk project? I found no problem with it personally, but the thing is that there will be no going back once we've started, so I need some feedback from the ORM gurus here at MSDN. Will Entity Framework be better? (I am in doubt about LINQ to SQL because I have read and heard negative feedback here and there.)
I will be using MVC2 as the framework, so please give feedback about LINQ to SQL in this regard.
Q2) Also, I am a fan of stored procedures, as they are precompiled and speed things up, and I have never worked without them. I know that LINQ to SQL supports stored procedures, but will it be feasible to give them up, seeing the beautiful data access layer generated with little effort? We are also in need of rapid development.
Q3) If some changes to some fields are required in the database, how will the changes be accommodated in the LINQ to SQL data access layer?
See my answer here: http://stackoverflow.com/questions/2701952/dump-linq-to-sql-now-that-entity-framework-4-0-has-been-released/2702016#2702016 -- it's almost the same question, and my answer also applies here.
When it comes to Linq-to-Sql vs Entity Framework, I strongly suggest using Entity Framework. With the release of .NET 4.0 and VS2010, Microsoft added soooo much goodness in Entity Framework (EF) 4.0. Let me just mention a few points: POCO and N-Tier support (this means that you can have a separate library with your simple entity classes and EF will still be aware of them), lazy loading, SQL query optimizations... Also, you can let EF generate your entities (and you have the option to modify the T4 generation template), or you can create them by hand if you need more control. And if your app will indeed be large, with EF 4 you can now separate your layers quite nicely (you can create your mocks for testing, etc.). I'm not a web developer, so I cannot give you any hints on MVC2 on this matter.
q2-q3) In EF you can have precompiled queries, if you observe later on that query performance is not quite what you need; these will speed things up quite a bit. If you plan to use EF and you make a few changes to your database, you can easily update your model with a click.
I know I babbled too much on EF and not Linq to sql :), but hey...I believe this suits way better on your needs and you should definitely check it out for this project. Also, I don't know how much Microsoft will add features / invest in LinqToSql in the future.
Cheers,
EF4 still is a two-layered approach which does carry quite a bit more overhead than Linq-to-SQL. For simple scenarios with SQL Server only, I would still tend to use Linq-to-SQL. For enterprise apps and more complex scenarios, then go with EF4.
OK, precompiled queries: that certainly catches my attention.
May I ask why you made 2 registrations?
|
STACK_EXCHANGE
|
deprecation notice: as npm has scaled, the registry architecture has gradually migrated towards a complex distributed architecture, of which npm-registry-couchapp is only a small part. FOSS is an important part of npm, and over time we plan on exposing more APIs, and better documenting the existing API.
npm-registry-couchapp is still a core part of our functionality, but all new registry features are now added to the micro-services that now make up npm. For this reason, we will not be accepting any pull requests, or making any changes to this codebase going forward.
For issues with the npmjs.com website, please open an issue on the npm/www repo. For issues wih the registry service (for example, slow package downloads, or inability to publish a package), see the npm/registry repo.
The design doc for The npm Registry CouchApp
You need CouchDB version 1.4.0 or higher. 1.5.0 or higher is best.
Once you have CouchDB installed, create a new database:
curl -X PUT http://localhost:5984/registry
You'll need the following entries added to your CouchDB configuration:
[couch_httpd_auth]
public_fields = appdotnet, avatar, avatarMedium, avatarLarge, date, email, fields, freenode, fullname, github, homepage, name, roles, twitter, type, _id, _rev
users_db_public = true

[httpd]
secure_rewrites = false

[couchdb]
delayed_commits = false
Clone the repository if you haven't already, and cd into it:
git clone git://github.com/npm/npm-registry-couchapp
cd npm-registry-couchapp
Now install the stuff:
npm install
Sync the ddoc to
npm start \ --npm-registry-couchapp:couch=http://admin:password@localhost:5984/registry
Next, make sure that views are loaded:
npm run load \ --npm-registry-couchapp:couch=http://admin:password@localhost:5984/registry
And finally, copy the ddoc from
npm run copy \ --npm-registry-couchapp:couch=http://admin:password@localhost:5984/registry
Of course, you can avoid the command-line flag by setting it in your ~/.npmrc file:
The leading _ prevents any other packages from seeing the setting (with a password) in their environment when npm runs scripts for those other packages.
Replicating the Registry
To replicate the registry without attachments, you can point your CouchDB replicator at https://skimdb.npmjs.com/registry. Note that attachments for public packages will still be loaded from the public location, but anything you publish into your private registry will stay private.
Using the registry with the npm client
With the setup so far, you can point the npm client at the registry by putting this in your ~/.npmrc file:
registry = http://localhost:5984/registry/_design/app/_rewrite
You can also set the npm registry config property like:
npm config set \ registry=http://localhost:5984/registry/_design/app/_rewrite
Or you can simply override the registry config on each call:
npm \ --registry=http://localhost:5984/registry/_design/app/_rewrite \ install <package>
Optional: top-of-host urls
To be snazzier, add a vhost config:
[vhosts] registry.mydomain.com:5984 = /registry/_design/app/_rewrite
registry.mydomain.com is the hostname where you're running CouchDB, and 5984 is the port that CouchDB is running on. If you're running on port 80, then omit the port altogether.
Then for example you can reference the repository like so:
npm config set registry http://registry.mydomain.com:5984
|
OPCFW_CODE
|
The starting point
To replace the previously failed photos I took a new one where you could see the top sides of the tank, all clearly and neatly primed. Everything looked good and well-covered, so I didn't have to patch anything up and cause new delays or problems.
Paint paint paint
One early evening the eager painter was released on her model. Very surprisingly she chose blue as her first paint, instead of her favourite, orange.
As a fascinating detail, she was pretty careful while painting, instead of just swishing around with the paintbrush. She also held the model very nicely, instead of the full-hand grab that we had been somehow expecting. Perhaps all the watching over me had given her ideas, or the explanation is somewhere else. Still, it was plenty of fun to watch :)
After one session a decent amount of untouched grey was still visible. Considering its painted parts the tank looked like either a paintball target, a piece of urban art or something you'd see in a neon-camouflaged fighting unit in Seoul or Tokyo - completely without all the scifi associations, though. Or it just looks like someone's first ever Warhammer 40,000 vehicle :P
Painting, round 2
After a number of evenings my Project Assistant wanted to get back to painting her tank. The table was quickly double-covered and the beast was released on her model.
Her painting process was very careful and it looked like she was paying quite a bit of attention to it. The carefulness especially surprised us. When something like twenty minutes of work had elapsed she declared: "This is now ready!" Good, it was done then. I asked if she wanted decals and such on her tank, just like daddy's models. Her response was an immediate "yes!" - I was very pleased with that.
Photoless finishing
I spent a brief moment on two afternoons: on the first I applied a gloss varnish to all the places that were going to get a decal on them. On the second I asked "which of these options would you prefer?" and did as requested. I did try to explain what the point was and why I did things the way I did with the decals, but it didn't seem to be that interesting at this point. No surprises there.
So I put on a handful of number 32s and greater-than signs, whose meaning I've never cared enough about to actually google. These ended up on the sides of the turret and the skirts. On top of the gun's barrel, on the front face of the laser box, I put a flame sign, whatever its point was. There were a bunch of other decals on the small sheet, such as stripes and slightly different greater-than signs, but as none of them were mentioned in the instructions, I threw them away.
To finish the model up and protect it a bit from the expected playing around with, I applied a healthy layer of matt varnish all around. At this point I didn't take any photos, but left the model curing in peace over the night.
|
OPCFW_CODE
|
- (Link now to a download for v8.0); the author's homepage appears to be dead.
Welcome once again. In previous serial # tutorials I've shown you how to quickly crack programs by finding the error message and then backtracing through the disassembly to reverse the necessary (yet all too often) single jump. In some cases it's been a requirement to trace the protection_routine CALL and patch that, but in either case the idea has been simple, and any casual cracker with a modicum of intelligence could easily beat 60% of today's software.
Although I've done it in many tutorials, patching some instructions to circumvent a serial # protection is not "real" reverse engineering. I've chosen APP LAUNCHER because it looks as if patching would be a fairly cumbersome process (a lot of jumps to reverse), so without further ado let's whip out W32Dasm and hunt down the following code.
:00411A4B CALL 00427050 <-- Call great protection
:00411A50 TEST EAX,EAX <-- Test EAX.
:00411A52 JZ 00411AC9 <-- Jump_bad_registration_code.
Simply reversing this JZ (or an equivalent patch, say NOPs) ensures that any code returns the good_guy message box, but there is a snag: the program uses a registry key to store the good code, and on subsequent restarts verification fails. The result is that 00427050 will have to be traced/patched regardless. You should see that EAX must be returned !=0, and as we've seen countless times, bad guy (i.e. EAX=0) will probably be achieved using XOR EAX,EAX.
As I indicated earlier, patching this function would be a painful and unprofessional process, so I'll highlight the important pieces of code instead (note that we won't be needing SoftICE).
:0042709E CMP DWORD PTR [EAX-08], 15 <-- Check length
of [EAX-08] for 15h.
:004270A2 JZ 004270C9 <-- Jump_good_else_XOR_EAX_@004270B5.
This length check ought to be elementary, 15h = 21 decimal. So it seems our good code must be of length 21 or we'll get thrown out. Lets follow the code further.
:004270C9 CMP BYTE PTR [EAX], 41 <-- Check 1st digit.
:004270CC JZ 004270F3 <-- Jump_good_else_XOR_EAX_@004270DF.
:004270F3 CMP BYTE PTR [EAX+01], 4C <-- Check 2nd digit.
:004270F7 JZ 0042711E <-- Jump_good_else_XOR_EAX_@0042710A.
:0042711E CMP BYTE PTR [EAX+0C], 21 <-- Check 13th digit.
:00427122 JZ 00427149 <-- Jump_good_else_XOR_EAX_@00427135.
:00427149 MOV DL, BYTE PTR [EAX+02] <-- DL moved to 3rd digit.
:0042714C MOV CL, 2D <-- CL moved to 2D (hyphen).
:0042714E CMP DL,CL <-- Compare.
:00427150 JZ 00427177 <-- Jump_good_else_XOR_EAX_@00427163.
:00427177 CMP BYTE PTR [EAX+0B], CL <-- Check 12th digit.
:0042717A JZ 004271A1 <-- Jump_good_else_XOR_EAX_@0042718D
:004271A1 CMP BYTE PTR [EAX+10], CL <-- Check 17th digit.
:004271A4 JZ 004271CB <-- Jump_good_else_XOR_EAX_@004271B7.
:004271CB CMP BYTE PTR [EAX+05], 78 <-- Check 6th digit.
:004271CF JZ 004271F6 <-- Jump_good_else_XOR_EAX_@004271E2.
O.K., let's take a breather. As you can see there are 7 checks here (in addition to the length check we saw earlier). Although they are fairly trivial checks, a crack which patched some instructions and then said "insert serial # blah" to pass them would indeed be frowned upon. So from this code let's build up an initial code matrix that would pass these checks.
AL-zzxzzzzz-!zzz-zzzz - where z is currently unknown.
Let's continue onwards.
:004271F6 MOV ESI, 03 <-- ESI will now be used as
a loop control variable.
:00427201 MOV AL, BYTE PTR [ESI+EAX] <-- AL points at 4th digit.
:00427204 PUSH EAX <-- Stack it.
:00427205 CALL 0042C220 <-- You're going to see this call a few times.
:0042720D TEST EAX,EAX <-- Test EAX.
:0042720F JZ 0042734C <-- Bad_jump.
:00427215 INC ESI <-- Increase loop.
:00427216 CMP ESI, 04 <-- Check ESI for 4.
:00427219 JLE 004271FD <-- Loop again.
The first loop in our protection executes twice (ESI has to reach 5); it obviously checks digits 4 and 5 of our code, the magic being worked beneath CALL 0042C220, which must return EAX non-zero. You need only trace this call once to work out the pass criteria (in fact you may even be able to see it from the disassembly String Reference *smile*); remember that I'm offering 2 grade 'A's to anyone working this section out. Be sure also to remember 0042C220, as you are going to see it again for sure.
We move further on into the verification code, once again look how ESI is used to access various positions of our code.
:0042721B MOV ESI, 06 <-- 7th digit now.
:00427224 MOV DL, BYTE PTR [ESI+ECX] <-- DL now holds 7th digit.
:00427228 CALL 0042C220 <-- Needless to say we've seen this before.
:00427230 TEST EAX,EAX <-- Ditto and this.
:00427232 JZ 0042734C <-- Ditto and this as well.
:00427238 ADD ESI, 04 <-- Add 4 to ESI i.e. shift along the code.
:0042723B CMP ESI, 0A <-- Check ESI for 0Ah 10dec.
:0042723E JLE 00427220 <-- Loop.
This loop executes twice and requires gentle analysis: the first pass checks the 7th digit, whereas the 2nd pass (after ADD ESI, 04) will check the 11th - you already know what 0042C220 desires, so fixing your input should be fairly easy. Rather than lead you through the code completely, 1 more section (virtually identical to the 2 you've already seen) checks 2 more positions of the code.
The next snippet now checks position 8 of the code, again using ESI as the pointer.
:00427265 MOV ESI, 07 <-- Prepare to point at 8th digit.
:0042726E MOV AL, BYTE PTR [ESI+EDX] <-- Use AL.
:00427272 CALL 0042C390 <-- Another magic function.
:0042727A TEST EAX,EAX <-- Which must return EAX!=0.
:00427283 CMP ESI, 09 <-- So we are going to check positions 8,9 & 10.
:00427286 JLE 0042734C <-- Loop_a_bit_more.
This code style ought to be fairly familiar by now, however 0042C390 works some slightly different magic, yet unfortunately the String Reference would seem to give away the surprise. Remember this function as you step because the program checks other positions with exactly the same criteria. At the end of this there are yet more checks.
:004272E4 MOV DL, BYTE PTR [EAX+03] <-- 4th digit.
:004272E7 MOV CL, BYTE PTR [EAX+0D] <-- 14th digit.
:004272EA CMP DL, CL <-- Compare.
:004272EC JZ 00427313 <-- Jump_good.
In similar fashion, the 8th & 15th digits must be the same, as must the 11th & 16th. The final check involves the last 4 digits, which should be the only ones that are still unknown. This is the relevant code.
:00427398 CALL 004356B0 <-- Here's_where_the_magic's_done.
:004273AC CMP EDI, 0F <-- Check *something* against 0Fh.
:004273BB JZ 004273D6 <-- Jump_and_MOV_EAX_1_good_guy.
This last function and check, which you ought to trace, is trickier; the real magic is done a level or so below this call, but the simpler answer is that the last 4 numbers' ASCII values -30h must add up to F (or 15 dec). So 2346 would work as a combination (i.e. 2 + 3 + 4 + 6 = 0F). Thus our final result looks as follows.
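Pulling the recovered constraints together, here is a sketch of the validator in Python. Note the pass criteria of CALL 0042C220 and CALL 0042C390 are never spelled out above, so the digit and letter checks below are assumptions for illustration only, and the sample serial is valid only under those assumptions.

```python
# Sketch of the serial checks walked through above.
# ASSUMPTIONS: 0042C220 is modeled as a digit check and 0042C390 as a
# letter check; trace the real calls to confirm their actual criteria.

def check_serial(code: str) -> bool:
    if len(code) != 0x15:                        # CMP [EAX-08], 15h -> length 21
        return False
    if code[0] != 'A' or code[1] != 'L':         # CMP [EAX], 41h / [EAX+01], 4Ch
        return False
    if code[12] != '!':                          # CMP [EAX+0C], 21h (13th digit)
        return False
    if any(code[i] != '-' for i in (2, 11, 16)): # hyphens at 3rd, 12th, 17th
        return False
    if code[5] != 'x':                           # CMP [EAX+05], 78h (6th digit)
        return False
    if not all(code[i].isdigit() for i in (3, 4, 6, 10)):  # assumed 0042C220
        return False
    if not all(code[i].isalpha() for i in (7, 8, 9)):      # assumed 0042C390
        return False
    # 4th==14th, 8th==15th, 11th==16th
    if code[3] != code[13] or code[7] != code[14] or code[10] != code[15]:
        return False
    # last four digits: sum of (ASCII - 30h) must equal 0Fh
    return sum(ord(c) - 0x30 for c in code[17:21]) == 0x0F

print(check_serial("AL-23x4BCD5-!2B5-2346"))  # True (fits AL-zzxzzzzz-!zzz-zzzz)
```

The sample serial fills the unknown z positions of the matrix with values that pass the assumed checks; any other filler obeying the same rules works equally well.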
My verdict: this isn't a particularly bad scheme and I can't see any warez cracker wanting to patch it. I fear, unfortunately, that the lack of any real algorithm will result in a generic serial # appearing on one of those lists I so detest.
|
OPCFW_CODE
|
April 30th, 2006
I had been to Bangalore to attend the MSAPP expo. Incidentally, the same day also hosted the Imagine Cup National Finals. So, on the whole there were 20 teams for MSAPP and about 12 teams for Imagine Cup. Judges arrived at 12 PM and went through all the projects. The best projects received a hefty sum as a cash prize. At the end of the day, the best projects selected would be showcased before the whole student and judge audience.
MSAPP – Microsoft Academic Project Program
The best one in MSAPP was a project called iTrust. You can read more about it from Praveen‘s blog. His team walked away with Rs.75000/-. The runner up was an innovative project named “Computer for Blind”. Based on the Braille system, they developed a simple keyboard, which can be used by the visually challenged to type out text, as we do on a normal keyboard. They developed the whole circuit and plugged it to the CPU thro’ the serial port. Once the typing is done, a simple voice command saves the file and another voice command retrieves the file. They also made a simpler version of braille printer. Lots of people advised them to patent their idea. Hope they do it soon.
Three teams were selected for the Imagine Cup National Finals, to present their ideas to the whole audience. This time, the judges selected the 2 best projects. The ideas of all 3 selected teams were brilliant, but only 2 could make it to the International Finals. Btw, the international finals are happening at Agra by August. The two selected projects were about a whole new computing experience for the visually challenged and for people affected by cerebral palsy. Two inventions really touched me deep. The first was the "mouse" they had developed for cerebral palsy patients: they had designed it in such a way that the affected people need not grip it. The second was the navigator for the visually impaired. Moving from left to right on the screen, there is a change in sound that tells the person his position on the desktop. Similar things happen when he moves from top to bottom. It also announces the name of an icon when the mouse is over it. One task the team did before the audience was that they closed the laptop and cleared the recycle bin blind-folded.
Teachings of the Masters
Coming back to the title of the post, what had I learnt from all these teams?
1. The people with the idea should not piss off in the middle of the project. They must be as involved as others.
2. The full team should be working on every aspect of the project with the burning desire of winning.
3. The team should have decent marketing skills to market their idea.
4. And last but never the least, the team picker must ensure that all the above happens with minimum effort.
As I was returning to the room where I was staying, all these things hit me. I knew the mistakes I had made. I picked the wrong team member for MSAPP. My buddy Moyeen was the one who helped me all along to get it into MSAPP as part of the top 20 projects. Thanks a lot dude! I owe you a lot.
Over the past 2 1/2 years, I have had some amazing experiences and will continue to have them. I will surely share them over time. Bye!
|
OPCFW_CODE
|
What's the probable cause for extremely low inbound traffic and high outbound traffic?
Yesterday our Digital Ocean server encountered something that looked like an attack. The outbound traffic suddenly increased to 700Mbps, while the inbound traffic stayed at about 0.1Mbps, and didn't increase even once. The traffic lasted for several minutes until Digital Ocean cut our server off the network assuming we're performing a DoS (which is reasonable).
I have two assumptions: either someone hacked into our server (after the attack I realised my colleague had enabled SSH login with password) or there's some kind of an attack that I don't know about.
Can anyone clear this situation up for me? If there indeed is a kind of DoS which traffic looks like that, please educate me.
https://serverfault.com/questions/218005/how-do-i-deal-with-a-compromised-server
If you are running VestaCP, please make sure to look at this DigitalOcean page.
@Sevvlor oh god. I had no idea my colleague had installed this thing on our server. Thanks.
Also @JonasWielicki thanks for the link, it'll prove itself useful someday.
One likely possibility is an amplification attack. If you are running an open recursive DNS resolver (there are other protocols you can do this with though), for example, you can receive a very small UDP packet that has a spoofed IP address. Your server then generates a large response and sends it to the victim, thinking that it's a legitimate request.
Another possibility is that someone was exfiltrating data off your network. If someone got into your server and was offloading every byte they could find, it would look like that as well.
There's no way to know which one it was without doing an investigation, and hoping that whatever did happen left evidence. If it's the latter (exfiltration) then they probably cleared their tracks as best they could.
Thanks. I'm in correspondence with DO; hopefully they'll have an idea of what was going on. According to my investigation, it's likely that someone gained access to our server via SSH. I'm accepting your answer as it's the most precise in answering my question, although the other answers are also very useful.
@KrzysztofKraszewski Unless your colleague is/was using a really braindead password, SSH would NOT seem like a likely candidate to me. Remote brute-forcing is very slow and noisy.
If the server was compromised, an amplification attack seems very unlikely. Why bother with such a trivial attack when you've rooted the server? And braindead passwords are remarkably common.
@PhilFrost The point of me mentioning the amplification attack was that it's possible the OP is running something else that's just being used in that way and that the server has not been compromised. DNS is the most common, but there's also MOTD and other weird old protocols that can be abused in this way. It is one possible solution that fits the weird traffic pattern.
memcached is a particularly dramatic recent amplification attack
I agree an amplification attack is a good guess given the original question, but now that the OP has added in the comments "It looks like someone accessed our server and used it to perform an attack." I would guess it was not an amplification attack.
I would rule out an amplification attack (unless OP was running an improperly configured memcached instance) as the amplification factors for common protocols don't bridge the gap from 0.1Mbps to 700Mbps. Additionally, if they were exfiltrating data off your server (rsync-ing or such), at 700Mbps outgoing there'd be significant protocol overhead that'd be registered as incoming traffic too, and I don't think (guesstimate here, math needed) 100Kbps is enough for that. Therefore I'd lean towards a compromised node in a DoS botnet.
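That gap argument can be put into rough numbers. The amplification factors below are approximate figures from public advisories, not measurements from this incident:

```python
# Back-of-envelope: can reflection/amplification explain ~0.1 Mbps inbound
# producing ~700 Mbps outbound? Factors are approximate published figures.
inbound_mbps, outbound_mbps = 0.1, 700
required = outbound_mbps / inbound_mbps          # ~7000x amplification needed

factors = {"DNS": 54, "NTP": 557, "memcached": 51000}
for proto, factor in factors.items():
    verdict = "plausible" if factor >= required else "falls short"
    print(f"{proto}: ~{factor}x -> {verdict}")
```

Only memcached-class amplification clears the required ratio, which matches the caveat about an improperly configured memcached instance.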
As it turns out, the reason was an exploit for software called VestaCP, which was installed on the server without my knowledge. See Savvior's comment under my question for details.
Thanks for the answer anyway, I learned something thanks to all of you.
I agree with the possibility of an amplification attack. The simplest way to handle this is to use DigitalOcean's free cloud firewall.
Only allow SSH, HTTP,and HTTPS inbound. If possible, only allow SSH from your trusted IPs.
You can do this using the firewall on your VM, DO's solution is just easier.
Thanks for the tip, I'll spend some time securing our servers (as I should a while ago).
You should ask Digital Ocean. They don't shut off servers just for high outbound traffic: that would shut down most servers. For example, a webserver hosting something popular.
Rather, they shut down your server because the nature of your traffic looked malicious. As such, they probably have some idea what it was.
Otherwise you'll have to investigate yourself. Perhaps if the host is still running it's still attempting to send traffic which is being dropped by Digital Ocean. In that case you'd be able to observe it with a packet dump. Or you may be able to find clues in the system logs. It could be any of a million things unfortunately, so speculating on the underlying cause absent such an investigation is futile.
Check out my comment under Mike M's answer. It looks like someone accessed our server and used it to perform an attack. Thank you for your answer.
|
STACK_EXCHANGE
|
by Tyler Denton, Sales Engineer, Rockset
There are two major problems with distributed data systems. The second is out-of-order messages, the first is duplicate messages, the third is off-by-one errors, and the first is duplicate messages.
This joke inspired Rockset to confront the data duplication issue through a process we call deduplication.
As data systems become more complex and the number of systems in a stack increases, data deduplication becomes more challenging, because duplication can occur in a multitude of ways. Whenever another distributed data system is added to the stack, organizations become wary of the operational tax on their engineering team. This blog post discusses data duplication, how it plagues teams adopting real-time analytics, and the deduplication solutions Rockset provides to resolve the duplication issue.
Rockset addresses the issue of data duplication in a simple way, and helps to free teams from the complexities of deduplication, which include untangling where duplication is occurring, setting up and managing extract, transform, load (ETL) jobs, and attempting to solve duplication at query time.
In distributed systems, messages are passed back and forth between many workers, and it’s common for messages to be generated two or more times. A system may create a duplicate message because:
- A confirmation was not sent.
- The message was replicated before it was sent.
- The message confirmation comes after a timeout.
- Messages are delivered out of order and must be resent.
The message can be received multiple times with the same information by the time it arrives at a database management system. Therefore, your system must ensure that duplicate records aren’t created. Duplicate records can be costly and take up memory unnecessarily. These duplicated messages must be consolidated into a single message.
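The defensive pattern this implies, where the receiver itself discards messages it has already processed, can be sketched in a few lines (the message shape with an "id" field is hypothetical):

```python
# Minimal idempotent consumer: drop messages whose id was already processed.
# The message shape (an "id" plus a "payload") is made up for illustration.
def consume(messages):
    seen, processed = set(), []
    for msg in messages:
        if msg["id"] in seen:          # duplicate delivery (retry, resend, replay)
            continue
        seen.add(msg["id"])
        processed.append(msg["payload"])
    return processed

# the same message delivered twice is applied only once
print(consume([{"id": 1, "payload": "a"},
               {"id": 1, "payload": "a"},
               {"id": 2, "payload": "b"}]))   # ['a', 'b']
```

In a real system the `seen` set itself becomes a problem: it grows without bound, which is why the methods below bound it with windows or push the work elsewhere.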
Before Rockset, there were three general deduplication methods:
- Stop duplication before it happens.
- Stop duplication during ETL jobs.
- Stop duplication at query time.
Kafka was one of the first systems to create a solution for duplication. Kafka guarantees that a message is delivered once and only once. However, if the problem occurs upstream from Kafka, the system will see these messages as non-duplicates and deliver the duplicate messages with different timestamps. Therefore, exactly-once semantics do not always solve duplication issues and can negatively impact downstream workloads.
Some platforms attempt to stop duplication before it happens. This seems ideal, but this method requires difficult and costly work to identify the location and causes of the duplication.
Duplication is commonly caused by any of the following:
- A switch or router.
- A failing consumer or worker.
- A problem with gRPC connections.
- Too much traffic.
- A window size that is too small for packets.
Note: Keep in mind this is not an exhaustive list.
This deduplication approach requires in-depth knowledge of the system network, as well as the hardware and framework(s). It is very rare, even for a full-stack developer, to understand the intricacies of all the layers of the OSI model and its implementation at a company. The data storage, access to data pipelines, data transformation, and application internals in an organization of any substantial size are all beyond the scope of a single individual. As a result, there are specialized job titles in organizations. The ability to troubleshoot and identify all locations for duplicated messages requires in-depth knowledge that is simply unreasonable for an individual to have, or even a cross-functional team. Although the cost and expertise requirements are very high, this approach offers the greatest reward.
Stream-processing ETL jobs are another deduplication method. ETL jobs come with additional overhead to manage, require additional computing costs, are potential failure points with added complexity, and introduce latency to a system potentially needing high throughput. This involves deduplication during data stream consumption. The consumption outlets might include creating a compacted topic and/or introducing an ETL job with a common batch processing tool (e.g., Fivetran, Airflow, and Matillion).
In order for deduplication to be effective using the stream-processing ETL jobs method, you must ensure the ETL jobs run throughout your system. Since data duplication can apply anywhere in a distributed system, ensuring architectures deduplicate in all places messages are passed is paramount.
Stream processors can have an active processing window (open for a specific time) where duplicate messages can be detected and compacted, and out-of-order messages can be reordered. Messages can be duplicated if they are received outside the processing window. Furthermore, these stream processors must be maintained and can take considerable compute resources and operational overhead.
Note: Messages received outside of the active processing window can be duplicated. We do not recommend solving deduplication issues using this method alone.
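The caveat in the note can be seen in a toy windowed deduplicator; the timestamps and window size here are made up for illustration:

```python
# Toy windowed dedup: ids are only remembered for `window` time units after
# they were last seen, so a duplicate arriving after expiry is NOT caught.
def windowed_dedup(events, window):
    last_seen, out = {}, []
    for ts, msg_id in events:
        prev = last_seen.get(msg_id)
        if prev is not None and ts - prev <= window:
            continue                   # duplicate inside the window: dropped
        last_seen[msg_id] = ts
        out.append((ts, msg_id))
    return out

events = [(0, "m1"), (5, "m1"), (30, "m1")]   # same message delivered three times
print(windowed_dedup(events, window=10))       # [(0, 'm1'), (30, 'm1')]
```

The duplicate at t=5 falls inside the window and is dropped, but the one at t=30 arrives after the id has expired and slips through as a new record.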
Another deduplication method is to attempt to solve it at query time. However, this increases the complexity of your query, which is risky because query errors could be generated.
For example, if your solution tracks messages using timestamps, and the duplicate messages are delayed by one second (instead of 50 milliseconds), the timestamp on the duplicate messages will not match your query syntax causing an error to be thrown.
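A query-time workaround typically means collapsing events to the latest state per key at read time. A minimal sketch, with hypothetical field names:

```python
# Query-time dedup sketch: keep only the latest event per key by timestamp.
# Field names are hypothetical; real queries must also tolerate delayed
# duplicates whose timestamps differ (the pitfall described above).
def latest_per_key(events):
    latest = {}
    for e in events:
        cur = latest.get(e["key"])
        if cur is None or e["ts"] > cur["ts"]:
            latest[e["key"]] = e
    return latest

events = [
    {"key": "user42", "ts": 100, "value": "v1"},
    {"key": "user42", "ts": 101, "value": "v1"},  # delayed duplicate, new timestamp
]
print(latest_per_key(events)["user42"]["value"])  # v1
```

Even in this tiny case, every duplicate row is still stored and scanned; the deduplication cost is paid on every query rather than once at ingest.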
Rockset solves the duplication problem through unique SQL-based transformations at ingest time.
Rockset is a mutable database and allows for duplicate messages to be merged at ingest time. This system frees teams from the many cumbersome deduplication options covered earlier.
Each document has a unique identifier called
_id that acts like a primary key. Users can specify this identifier at ingest time (e.g. during updates) using SQL-based transformations. When a new document arrives with the same
_id, the duplicate message merges into the existing record. This offers users a simple solution to the duplication problem.
When you bring data into Rockset, you can build your own complex _id key using SQL transformations that:
- Identify a single key.
- Identify a composite key.
- Extract data from multiple keys.
Rockset is fully mutable without an active window. As long as you specify messages with _id or identify _id within the document you are updating or inserting, incoming duplicate messages will be deduplicated and merged together into a single document.
Other analytics databases store data in fixed data structures, which require compaction, resharding and rebalancing. Any time there is a change to existing data, a major overhaul of the storage structure is required. Many data systems have active windows to avoid overhauls to the storage structure. As a result, if you map _id to a record outside the active window, that record will fail. In contrast, Rockset users have a lot of data mobility and can update any record in Rockset at any time.
While we've spoken about the operational challenges with data deduplication in other systems, there's also a compute-spend element. Attempting deduplication at query time, or using ETL jobs can be computationally expensive for many use cases.
Rockset can handle data changes, and it supports inserts, updates and deletes that benefit end users. Here’s an anonymous story of one of the users that I’ve worked closely with on their real-time analytics use case.
A customer had a massive amount of data changes that created duplicate entries within their data warehouse. Every database change resulted in a new record, although the customer only wanted the current state of the data.
If the customer wanted to put this data into a data warehouse that cannot map _id, the customer would've had to cycle through the multiple events stored in their database. This includes running a base query followed by additional event queries to get to the latest value state. This process is extremely computationally expensive and time consuming.
Rockset provided a more efficient deduplication solution to their problem. Rockset maps _id so only the latest states of all records are stored, and all incoming events are deduplicated. Therefore the customer only needed to query the latest state. Thanks to this functionality, Rockset enabled this customer to reduce both the compute required, as well as the query processing time, efficiently delivering sub-second queries.
|
OPCFW_CODE
|
- I’m a senior C++ software developer with a passion for working on complex high-performance and low-latency systems, with rich experience in Analysis, Design, Development and Testing of Network, Web, Cryptographic and Desktop Security software and systems, in Linux as well as Windows.
- Possess strong knowledge in Object-Oriented Analysis/Design, Analytics, and Decision Making.
- Fluent knowledge of Project Management and SDLC best practices; an excellent team player and contributor.
- I have more than 7 years of experience in security software development, working on small to big projects as a lead software developer/designer and as a manager of a software development team.
Languages: C, C++, VC++, C#.Net, QT.
Scripting: Shell Scripting, Python.
Frameworks: MFC, .Net.
DBMS: MYSQL, MS SQL Server, SQLite.
Systems: Windows, Linux, Systems Programming.
Concepts: Data Structures, COM, OOPS, OOAD, Sockets, Multithreading, IPC.
Others: UML, Design Patterns, Visual Studio, Project Management, MS Office.
Testing: Manual, LDRA Tool Set. Test Management.
Networking: TCP/IP, UDP, FTP, SFTP, SSL and HTTP.
Sr. Engineer Software Development
- Working on Test Automation & Measurements domain.
- Development of native applications for automated tests and measurements.
- Design & Development of CORBA, TCL, MDF, and ECU interfaces.
- Design & Development of template class libraries w.r.t. ASAM ODS standards.
- Design & Development of template factory classes for database access using ODS standards.
- Design & Development of CLI implementation for Web and WPF applications.
- C++11, C++ 98, VS2013/2015, Design Patterns, Open Data Standard, RDBMS, Real Time DB.
Project Team Lead
- Introduced Agile and Test Driven Development for major projects.
- Lead designer for Network & USB based Secure Open Source Solutions.
- Managed File Transfer
- Designed system for a managed file transfer application to synchronize logs across domain.
- Project Management for resource, estimation, tasks and risk engineering.
- Designed & developed network security & cryptographic implements for end to end security.
- Developed server side secure data vault mechanism to save encrypted data for analytics.
- Using MS Project, SDLC, C++, VC++, C#.Net, MFC, Win32, Cryptography, Design Patterns, Visual Studio, and SQL Server 2012.
- Requirement analysis & documentation on Test Center of Excellence.
- Designing of methodologies for setting up a Test Center of Excellence.
- Project analysis, formulation of project management workbook with SQA methodology.
- Drafting of QRs and other relative requirement document.
- Designed entire architecture for a secured server design & server-app communication.
- Requirement Engineering, design modeling, task engineering and testing.
- Android OS customization at firmware level.
- Design & Development of Secure Android app & communication module using GCM.
- Using Project Management, SDLC, MDM, Linux, Native C++, Python, Secure VOIP.
- Designed architecture for enterprise level cross platform volume encryption system.
- Module Design & Development implementing Design Patterns for container-user mapping, display update, database synchronization, run time verifications etc.
- Developing DLP & DRM module for various access levels of data and escrow module using PKCS.
- Data Tuning & Analytics for centralized backup, monitoring & processing.
- Using C, C++, VC++, C#, Python, Visual Studio, Design Patterns, QT, Python, Bash, PKCS, WinDDK, and MYSQL, SQL Server for Linux and Windows.
- Designed next generation architecture for microprocessor based Data Diode.
- Development of secure data communication using Infrared Sensors.
- Developing advanced UART transmission module for faster data transfer.
- Embedded application modules design and development for automated user experience.
- Using C, Linux, Python, Bash, MYSQL, IPC, Multi-threading.
- Secure USB Drive and Advanced USB Protection Suite
- Software architecture designer along with feasibility study, requirement modeling.
- Developed multi-threaded VC++ application to interact with USB DLL & controllers.
- Worked with USB controllers on making a custom USB, with advanced storage features.
- Using VC++, C#, MFC, Design Patterns, SQL Server, Visual Studio.
- Worked on File System Filter Driver, for Secure Data transmission in Windows.
- Developed a file signature module to identify and isolate various file types and malware control.
- Using C++, Visual Studio, UML, Project Management Methodologies.
- Secure Shredder & File Encryption System
- Developed C application for Shredding, Encryption & Decryption of files/folders.
- Developed Windows Explorer shell extension module for handling context mode calling.
- Using C, VC++, Win32, C#, Design Patterns.
- USB Desktop Security Suite and USB Active Directory Security Suite
- Developed Multi-threaded Windows Service in VC++ working in cohesion with UAC.
- Designed, developed and implemented windows synchronization application / tool in C++.
- Developed various middle tier Network Objects in VC++ using MFC and the Active Template Library framework, in which the Objects communicated via DCOM/RPC to a server-side multi-threaded service.
- Using C, C++, VC++, C#, SQL Server, MFC, Win32, IPC, SQLITE, Visual Studio, Linux IPC.
- Developed multi-threaded Client Server application using Unix System Programming.
- Developed libraries in C & C++ to communicate with various hardware peripherals.
- Using C, C++, Python, MYSQL, Linux Systems, IPC, Sockets & Multi-threading.
- Encrypted Volume Creator & Full Disk Encryption
- Customized an Open Source C language framework and hardened the source.
- Developed GUI using Win32 and Escrow mechanism for password and recovery using C, VC++, C#, Win32, DDK.
|
OPCFW_CODE
|
Aberdeen's conclusion was based on a survey. However, many people can reach the same conclusion through a less methodological approach: data quantities, formats, and locations are growing massively. Users keep data on PCs and on removable devices (e.g. Disk On Keys and CDs) as well as on multiple servers in multiple locations.
The data protection approach of the DoD's Orange Book from the beginning of the '90s is no longer realistic. The Aberdeen Group's basic findings can be summarized in three bullets:
- Critical and sensitive data should be identified and protected; it is probably not feasible to protect all other data.
- The trend is towards encrypting critical data wherever it is located, including end users' devices and removables.
- Key management is becoming complex, so organizations are moving gradually from manual management to automatic management such as Public Key Infrastructure (PKI).
The trend of encrypting data is the link between the Aberdeen Group research and this post's title.
Microsoft's environment was known as relatively less secure than competing environments (mainly due to the binding between infrastructure and applications). However, a few years ago the company decided to improve the security of its infrastructure solutions by building trusted environments. The company's ability to execute is well known (some of us still remember MSN as an alternative to the Web, the "the internet is a fad" attitude until 1995, and the impressive upside-down change towards internet solutions), so the missing encryption capabilities in the Vista Home editions (including Premium) can hardly be explained. Microsoft has already developed an encryption solution, which is part of the enterprise editions, so technical issues are not the obstacle to a built-in encryption component. Many necessary as well as unnecessary security dialog boxes are part of the Vista Home editions (the high granularity of the security levels prevents elimination of the unnecessary dialog boxes without exposing your system to additional threats), so it seems that a lack of emphasis on security issues is not the reason for this missing component either. To me it looks like an unsuccessful marketing decision.
The third-party Omnipass solution (at least on my Lenovo mobile computer) is not fully integrated with the system, and problem determination involving both the OEM and the vendor is difficult. As an experienced IT professional I managed to circumvent an unsolved problem. Could non-IT-professional home users bypass such problems?
Many Home edition users expose their systems to the Web. Web access may expose them to security threats, including access to the data on their own PCs. As most of these users are less aware of security threats than corporate users, critical data (e.g. passwords, bank account details, credit card details, etc.) may be stolen. These users may again blame Microsoft for its non-secured systems.
The Aberdeen Group's report points at encryption as a means of reducing threats.
I do think that Home Vista users should help Microsoft overcome this erroneous decision by asking the company to include built-in encryption support in the next Service Pack for the Vista Home editions.
|
OPCFW_CODE
|
[Clamav-announce] ClamAV® blog: ClamAV Signature Interface maintenance is now complete! New Main.cvd!
Joel Esler (jesler)
jesler at cisco.com
Wed Mar 16 23:24:37 EDT 2016
ClamAV Signature Interface maintenance is now complete! New Main.cvd!
Our ClamAV Signature Interface maintenance is now complete. While we apologize for the delay, the rollout of the new Signature Interface inside of ClamAV will result in several new features for the community, and I wanted to tell you about some of them:
First, the first new “main.cvd” in about two years. This main.cvd has been completely re-written from scratch, and while the function of the “main” is largely the same, it’s been rewritten to not only enforce order to the signatures, but naming convention as well. For example:
W97M.Ethan.AK-1 has moved to Doc.Trojan.Ethan
Worm.Padowor.A-zippwd has moved to Win.Worm.Padowor
Adware.Smshoax has moved to Win.Adware.Smshoax
Re-naming of the signatures may affect a local user’s whitelist. If you have excluded certain signatures in the past that are now firing, we ask that you both submit the file to us for false positive remediation (if you believe it to be a false positive), and rename the signature whitelist on your side.
This new main is 109 MB in size and contains 4 million signatures for ClamAV. Now that the main.cvd has been rewritten, it is easier for us to create diffs, which means we can upgrade the main more often and keep the "daily.cvd" smaller.
Second, we now have the ability to offer different types of CVDs. For instance, we now have the ability to distribute 3rd party signatures that are officially signed by ClamAV, but updated through the ClamAV global mirror network. If we wanted to separate out “policy” type signatures from the daily.cvd into their own cvd, we can now do that.
Third, while we have not removed some of the older signature formats, we did convert those older signatures to the newer formats to empty those older “cvd”s out.
“db" signatures were consolidated into “ndb" signatures
“zmd" and “rmd" archive signatures we moved to the “cdb" container signature format
These formats are not new, they simply have never been published before. This includes other formats such as “hsb", “msb", “sfp", and “crb". The older formats are supported for now, we are simply no longer publishing them.
Fourth, newer features, like the ability to write signatures based on the SHA256 of a file have been added to the system, and we can now publish that type of detection.
We’d like to thank you for your patience.
|
OPCFW_CODE
|
The incoming request has too many parameters. The server supports a maximum of 2100 parameters.?
The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.
code:
query = query.Where(x => !rejected.Contains(x.ApplicationId) && x.Id > minvalue);
here rejected is a list of int type.
It works fine when I have up to 700 records in the rejected list, but after that it starts giving me the SQL 2100-parameters error.
I suppose each record has 3 fields... 700 × 3 = 2,100
It looks like there is more to the query that you are building. Can you show the rest as I'm guessing you might be using that collection more than once (maybe 3 times?). Also how do you populate rejected? If it's based on values in the DB then you can use those instead of a list of ids to filter rejected items.
How many items do you have in the rejected collection? The maximum number of parameters supported by a SQL query is 2100, so you need to check whether the rejected collection has more than 2100 items. If it does, break it down into chunks of 2100 items and execute a separate query for each chunk.
rejected in your case is a list that has more than 2100 items. If Rejected is a db table then don't call ToList on it.
@ChetanRanpariya Separate queries wouldn't work here since the OP is doing NOT contains. That means that each query you break it down into would include all the desired records, plus some of the ones filtered from other queries, resulting in more records than what is in the entire table. That method would work if it was just contains.
Even if you make it work for up to 2100 items, it is bad and non scalable design anyway. Do not use linq and Contains in this case, use different approach. TVP is the best option here.
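To see why chunking a NOT IN filter is trickier than chunking IN, here is a minimal Python sketch (plain lists stand in for the table and the rejected ids; the names are illustrative, not taken from the original code). Unioning the per-chunk results over-counts, but intersecting them reproduces the full filter, because a row survives NOT IN exactly when it survives every chunk's filter:

```python
def not_in_chunked(rows, rejected, chunk_size):
    # Split the rejected ids into chunks and run one "query" per chunk,
    # then intersect the results: a row survives the full NOT IN filter
    # iff it survives every per-chunk NOT IN filter.
    chunks = [rejected[i:i + chunk_size]
              for i in range(0, len(rejected), chunk_size)]
    surviving = set(rows)
    for chunk in chunks:
        surviving &= {r for r in rows if r not in chunk}
    return surviving

rows = list(range(20))           # stand-in for the table's ids
rejected = [1, 3, 5, 7, 11, 13]  # stand-in for the rejected list
full = {r for r in rows if r not in rejected}
assert not_in_chunked(rows, rejected, 2) == full
```

In a real database each per-chunk filter would be its own parameterized query, so no single query exceeds the parameter limit; the TVP or staging-table approaches suggested above avoid parameters entirely and scale better.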
Can you create a staging table that will iterate thru and store ApplicationID, then have your query based on the staging table?
@ChetanRanpariya I have tried this. I made two chunks of the rejected list; the first chunk worked well, but on the second chunk I got the same error.
@NicolaLepetit No... it has only one field.
@FlorimMaxhuni No, rejected is a list where I am only keeping Ids, and I have only 1000 items in this list.
@AntonínLejsek Actually I want to make it as fast as I can, because I have bulk data to process here.
@juharr query contains filtered records from the table and is queryable; rejected is the list where I am only storing Ids, used only once, and it has only 1000 records.
If you have only 1000 items in the rejected list then you should never get this error. Can you share the code of the chunk approach?
@ChetanRanpariya i have edit the post please check the code now
What is the value of rejected.Count in if (rejected.Count > 0)? And what is the value of totalUnusedRecords?
@ChetanRanpariya In the current case the value of totalUnusedRecords is 1837 and rejected.Count is 1000.
|
STACK_EXCHANGE
|
Last year and the year before, I spent a lot of time on a project called AutoPoC, which I presented at both BSides London last year and SecuriTay this year. At the end of my second talk, I said I might release the AutoPoC framework and Sandbox Spy, a project I was working on.
This short blog post explains what each tool does and overviews the use/reason for the release. The backbone of both projects leverages Thinkst's CanaryTokens project; during the AutoPoC research, they were nice enough to give me access to their paid API; however, the open source version on git will work just as well if you want to recreate your own instance of the project.
HoneyPoC and AutoPoC are two combined projects that were created to investigate how easy it is to poison different data feeds and whether there is integrity in parsing data and passing it to different parties.
The secondary objective was to identify what range of people run things directly from GitHub; the preliminary findings from the original HoneyPoC project were that folks will run anything blindly, it appeared, but as I automated the project more, it became apparent that different geographic locations had a deeper interest in different types of CVEs and software vulnerabilities.
Therefore I am releasing the underlying framework that AutoPoC is built upon so that defensive teams can learn from how the binaries are structured, look at how a disinformation campaign may affect their internal landscape and get a better understanding of how I automated misinformation with CVE proofs of concept.
Caveat/Disclaimer: While I'm releasing AutoPoC, the framework on its own is harmless, as it requires some prerequisites to build the automated backend. The outputted code, however, is technically malware, so be careful what you do with it; it's for educational purposes only, and I'm not liable if you use it for crime or other chaos.
The framework and its code can be found here https://github.com/ZephrFish/AutoHoneyPoC
In addition to the framework, I also built a project called SandboxSpy, which is detailed below.
Initially, an idea to profile sandboxes, the code is written to take environmental variables and send them back in a Base32 string over HTTP to an endpoint.
The project was born off the back of the data analysis performed for the AutoPoC project, where different types of analytics were observed on each analysis platform that profiled and signatured the AutoPoC binaries.
The primary goal is to understand if we're in a sandbox or not based on the path and domain/username.
The repo itself consists of two main factors:
- SandBoxSpy.go - This is the main tool; there was an initial binary in the repository, but I've removed this and kept the GO source code for folks to read.
- decoder.go - Takes the Base32 string and decodes it; there is a compiled binary version that will simply decode base32; although there is nothing malicious in this binary, I still recommend you compile it yourself for peace of mind.
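As a rough illustration of the encode/decode round trip described above, here is a Python sketch (the real tools are written in Go, and the particular environment variables chosen here are an assumption, not SandboxSpy's actual field list):

```python
import base64
import os

def encode_env(keys=("USERNAME", "USERDOMAIN", "COMPUTERNAME")):
    # Join selected environment variables into one string and Base32-encode
    # it, as a stand-in for what the Go tool sends back over HTTP.
    blob = "|".join(f"{k}={os.environ.get(k, '')}" for k in keys)
    return base64.b32encode(blob.encode()).decode()

def decode_env(token):
    # Mirror of decoder.go's job: reverse the Base32 encoding.
    return base64.b32decode(token.encode()).decode()
```

Base32 keeps the payload safely inside the character set allowed in URLs and DNS labels, which is presumably why the project uses it rather than raw bytes.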
Enjoy folks, it's a project I created because I was bored one evening and it grew arms and legs!
|
OPCFW_CODE
|
portable mysql for development
I've been developing an application on my desktop that uses MySQL. However, I travel to and from different locations all the time and I'd like to develop on my laptop.
Is there a way I can continue to develop without leaving on my desktop turned on and connecting in remotely?
XAMPP has a portable version for download. You can even run it from a memory stick.
Why don't you try XAMPP instead? I think they have a portable version of it. That way, you just need to copy the whole XAMPP folder to carry it with you. I believe the whole thing is stored inside it, including the database files.
This depends on what you really want to do.
If you just care about the table structure, your build process probably already has some .sql files that get executed to set up a new database. Just install MySQL on your laptop, and set it up from the same .sql files. If your build process doesn't have the complete database schema in some file (version controlled, of course), that is the first step. At that point, you can make DB schema changes while disconnected, and just have to diff those .sql files to see what has changed.
If you only care about getting schema + data from the desktop to the laptop when you go offline, install MySQL on the laptop, and then follow any of the standard backup/restore procedures, backing up your desktop, and restoring on your laptop. Then each session on your laptop, you'll have the latest data and schemas from the desktop. You could reverse the process if you change the schemas or add data you care about while on the laptop.
If you are intending to sync up the data and the structure between your laptop and desktop, you might look at setting up replication. Both MySQL servers would keep logs, and when they contact each other they process the logs to reconcile differences.
And, if you don't want to bother with installing and maintaining a second instance of the database on the laptop, you might want to think about abstracting the database layer. Most languages have bindings for an in-memory database like SQLite or Hypersonic or the like. If you aren't doing incredibly complex stuff with the database that would lead to lots of vendor specific hacks, it should be easy to support one of the in-memory databases just to do some development on the laptop and have a database available. Even if you are doing complicated things, if you are using a framework, many of them support an in-memory database out of the box, if only to have something available for demos and example code.
So, it really depends on exactly what you need - anything from full two-way automated syncing to some manual, ad-hoc syncing, to just needing a database, any database available for the program to run.
Can't you just install MySQL on your laptop?
Clearly you won't be able to connect to your desktop if it's not on.
I realize that but If I make changes to the tables I'd want that reflected on my desktop. I use SVN so how would I incorporate this?
I'm running MySQL (and pg) on a netbook. No problem with tables with under about 1 million rows.
You could try WAMP server that includes, Apache, MySQL and PHP all in one package
|
STACK_EXCHANGE
|
There are things on Usenet that you want to download regularly. Doing so is a time-consuming chore that'd be better accomplished through automation. This guide aims to show you how.
The problem with Usenet is that, even with the requisite utilities, you still find yourself manually extracting RAR files, applying PAR2 files to regenerate missing chunks, and then disposing of all the compressed/encoded files after extracting your media file. Not to mention seeking out and downloading every episode of everything you want to download. It's not for the faint of heart.
Here's where it gets awesome, though. There's a free, open-source application called SABnzbd+, available for every platform, that does all that for you. Even awesomer, it can monitor RSS feeds and watch for user-defined strings in the filenames to facilitate the automatic downloading, unpacking, repairing, renaming and moving of files into your media library with zero intervention on the user's part. After setting up SABnzbd, the content you want to download is magically downloaded FOR you, with no intervention on your part. This is the future, and it is AWESOME.
To get started with your magical new life of automatic content delivery, you first need a Usenet account. And, you're probably going to want a 'premium' account, meaning that you'll have to spend some money every month. There are many different options when choosing premium Usenet providers, but I recommend Giganews. They even have a free trial, allowing you to see how awesome this whole thing can be. You can sign up for your free trial by clicking the nifty banner below. (We'll supposedly get referral credit or something if you end up being a paying customer.)
The next thing you need to do is install SABnzbd on a computer in your household. On Mac/Windows it's a super-easy installer, and it runs using a web interface rather than a GUI. Upon installation you'll need to specify the username/password for your Giganews (or other Usenet provider) in the Config tab.
The next stop is giving SABnzbd one or more RSS feeds to monitor looking for things to download. There are many different options for sites that provide RSS feeds of nzb files. A quick Google search can help you find one that has the type of content you're looking for. Once you add a feed, you can enter in names/words in filenames to either 'accept' or 'reject.' SABnzbd will then periodically check the rss feed, and when it finds an nzb that matches your rules, it queues it for download.
You then configure the Folders option to specify where you want finished downloads to end up. That's really all there is to it. Now your computer will periodically check any configured RSS feeds for things it should download, and when it finds something, it just does. And then it decompresses, repairs (if necessary), and then gets rid of the compressed stuff. No muss, no fuss. Set it and forget it.
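The accept/reject matching described above can be pictured in a few lines of Python (a hypothetical simplification — SABnzbd's real RSS filters also support regular expressions, categories, and priorities):

```python
def wanted(filename, accept, reject):
    # Reject rules win; otherwise any accept substring match queues the file.
    name = filename.lower()
    if any(term.lower() in name for term in reject):
        return False
    return any(term.lower() in name for term in accept)

# e.g. queue 720p releases of a show, but skip French dubs
assert wanted("Some.Show.S01E02.720p.nzb", ["some.show"], ["french"])
assert not wanted("Some.Show.S01E03.FRENCH.720p.nzb", ["some.show"], ["french"])
```

Making reject rules take precedence mirrors how you would typically configure the feed: a broad accept term plus a handful of narrow exclusions.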
An average 360meg file downloads in about 2 and a half minutes. But you don't care how fast it is because it'll just be there waiting for you automatically.
An added perk is the SABnzbd Firefox extension, which gives you a constant indicator of things that are downloading, right in your browser's status bar -- and also the ability to click on any nzb file from any nzb search engine and have SABnzbd automagically start downloading it, even if you're surfing from a different computer than the one SABnzbd is running on. Very awesome.
UPDATE: I've now written an app for Android phones that will allow you to queue nzb files on your SABnzbd installation: NZBdroid
Few things have excited me as much recently as the OLPC project. Since last we talked about it, I've donated another to the efforts. This means that I'll have two of them to play with, which I figure is essential to seeing how the mesh networking functions. I also figure that when the nerds begin doing really awesome things with them, people who are kicking themselves for not having the foresight to have gotten in on the ground floor may suddenly be willing to pay considerably more for them than those of us who donated did. This, of course, comes after the joy of knowing that I've helped two kids get one.
Before I get into trying again to explain just how awesome the project is -- despite all the negative attention the press, Intel, Microsoft, John Dvorak, and Digg commenters have been lavishing upon it -- I've a couple links for you.
First up, from the BBC: A child's view of the $100 laptop. In that story a virtually computer-illiterate 9-year-old in the UK gets his hands on an OLPC and talks about all the things he was able to do with it WITHOUT HELP from an adult. And how fun and creativity-inspiring it is, despite the abundance of video gaming systems he owns. Now imagine a 9-year-old kid who has never had an electronic device before, and now suddenly has one that can help bolster creativity in many, many ways. Remember how cool it was when you first exchanged instant messages with people from all over the world via your computer? Now imagine being able to do that from your bug-laden tent, and being able to get skills and contacts that might be able to get you out of the dirt into a job in a more technological world at some point in the future. I just don't see how people can bad-mouth the amazing thing the OLPC people are doing.
Next: for those of you that've donated to the project and are anxiously awaiting the shipment of yours, I learned that the OLPC is maintaining a delivery estimate on their site without prominently linking to it. Click on over to Give One Get One Shipping Information to see when you can expect to get yours. (My first one is in batch two; my second is in batch 3.)
On to my excitement. Remember in Orson Scott Card's novel "Ender's Game," the futuristic space-school where the students' textbooks and assignments were all on digital tablet-thingies, with which they could communicate amongst their peers via text chat and email whilst working on said assignments from their living areas? And how they could play learning games to help them unlock what's inside themselves while having fun? I would have killed to have something like that when I was in grade school and actively wished for such a thing a little later in life. Now, thanks to the work of many individuals who came together under the OLPC initiative, kids in some of the worst parts of the world are going to have EXACTLY THAT -- minus the zero-gravity and attacking aliens, natch.
If you want to help one of these kids have something nice in their otherwise unpleasant-seeming (to this westerner, anyway) life, you can still donate to the project over at laptopgiving.org. If you donate before December 31st, you'll be able to get one yourself. This helps in two ways: the Give One portion puts one of these laptops into a kid's hands, while the Get One part helps increase the production quantities. This helps make things easier and cheaper for the manufacturer, which means it's good for everyone involved. $200 of the $399 cost is tax-deductible (here in America, anyway), and I'm confident that post-Dec. 31st you'll be able to recoup the rest by selling it to some other nerd -- assuming, of course, that you won't find a tiny, low-power, uber-portable eBook reader/word processor/email/drawing/web-surfing machine useful yourself. (If you're outside of America, you will require a US postal address to get one. If you're otherwise interested in participating but don't have such an address, drop me a line and I bet we could work something out.)
|
OPCFW_CODE
|
I listen to some voice mail messages over the web. Firefox downloads the message as a “.wav” file, and then invokes amarok to play it.
What I have been noticing, is that amarok seems to cut the message short.
As an additional test, today after playing a message on amarok, I tried playing the same message with kaffeine. There was significantly more to the message when played on kaffeine. The particular message ended with the phone number I should call back (if I wanted to). None of that ending sentence showed in the amarok playback.
Is there a setting that would fix this problem? Or is it an amarok bug?
That was already unselected. I think I had unselected that a few days ago to see if it would help.
As another experiment, I just reselected that, then changed the fadeout duration to the minimum (400 ms). That didn’t help either. Then I again unselected fadeout. Still no noticeable effect of that change.
I would play a regular MP3 song. Press start & stop and make sure sound stops at once. Then play a whole song through and see if you are losing the last part of your MP3 files. What version of Amarok are you using? Perhaps it is a problem with that version.
So I am using Amarok version 2.4, but it is a pre-release version and I never had any problems with 2.3 either. I am thinking you will want to try a different player, even kaffeine if it works fine for you as you say.
I managed to find an mp3 file online. It played that to the end.
The “.wav” file stops at about 15 seconds from the end according to the play indicator. Playing voice mail from a different location (different voice mail software, still a “.wav” file), it typically stops at around 11 seconds before the end.
I’ll see if I can work out how to configure it to use kaffeine in future, at least for “.wav” files.
lame prefix-of-music-file_01.wav song.mp3 - Fixed 128kbs stereo encoding
lame -h prefix-of-music-file_01.wav song.mp3 - High quality
lame -f prefix-of-music-file_01.wav song.mp3 - Fast and low quality
lame -b 112 prefix-of-music-file_01.wav song.mp3 - Encode at a bit rate of 112 kbs
Just open YaST / Software Management and search on lame. If it is not there, you must add the packman repository located at:
|
OPCFW_CODE
|
Recently I have been working on database queries. Because the amount of data in one table was too large, the program process kept getting stuck; SQL optimization became urgent, and the index took the stage!
Cases in which an Oracle query will not use an index:
1. There are no restrictions on the returned rows, that is, no WHERE clause.
2. No condition is placed on the leading column of the index.
For example: with a three-column composite index created on the columns id-name-time, a condition on the name column alone cannot use this index, because name is not the leading column of the index.
3. There are conditions on the leading column of the index, but using the following constructs in the conditional expression invalidates the index and causes a full table scan:
(1) The WHERE clause applies a function or expression to a field, which makes the engine abandon the index and perform a full table scan.
(2) Querying a field for NULL invalidates the index, causing a full table query.
Solution: NULL causes a lot of trouble in SQL syntax; it is best to make indexed columns NOT NULL. For IS NULL, a composite index can be built with nvl(field, 0); after analysing the table and index, IS NULL queries can use the index again, though the efficiency is not guaranteed. IS NOT NULL will never use the index. In general, do not run IS NULL queries on tables with large amounts of data.
(3) Inequality operators (<>, !=) in the query condition prevent index use and cause a full table scan.
Solution: by changing the not-equal operator into OR conditions, the index can be used and the full table scan avoided.
For example, change column<>10 into column<10 or column>10, and the index can be used.
(4) There is a condition on the leading column of the index, but LIKE is used with a value starting with '%', or the value is a bind variable.
For example: where city like '%Dalian%'.
select * from citys where name like '%Dalian' (does not use the index)
select * from citys where name like 'Dalian%' (uses the index)
Solution: first, try to avoid fuzzy queries. If they are necessary, do not use fully fuzzy patterns; prefer right-side fuzziness, i.e. like '…%', which can use the index. A left-fuzzy pattern like '%…' cannot use the index directly, but with reverse() plus a function-based index it can be turned into like '…%'.
Fully fuzzy queries cannot be optimized; if you must use them, a search engine is recommended.
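The right-fuzzy versus left-fuzzy difference is easy to reproduce outside Oracle as well. Here is a small SQLite sketch (SQLite's LIKE optimization needs case_sensitive_like for a plain index, but the resulting plan difference mirrors the Oracle behaviour described above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA case_sensitive_like = ON")  # lets a plain index serve LIKE 'x%'
con.execute("CREATE TABLE citys (name TEXT)")
con.execute("CREATE INDEX idx_citys_name ON citys(name)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the access path in their last column.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

prefix = plan("SELECT * FROM citys WHERE name LIKE 'Dalian%'")  # index range search
suffix = plan("SELECT * FROM citys WHERE name LIKE '%Dalian'")  # scan
assert "SEARCH" in prefix
assert "SCAN" in suffix
```

The prefix pattern is rewritten internally into a range condition (name >= 'Dalian' AND name < 'Dalia…'), which is exactly why only right-side fuzziness can ride the index.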
4. Improper use of OR causes a full table scan.
Reason: when the WHERE clause compares two conditions with OR, one indexed and one not, the unindexed condition forces a full table scan.
For example, in where A=:1 or B=:2, with an index on A but none on B, evaluating B=:2 falls back to a full table scan.
5. Composite indexes
When sorting, sort according to the order of the columns in the composite index, even if only one indexed column needs sorting; otherwise sort performance will be poor.
For example:
create index skip1 on emp5(job,empno,date); select job,empno from emp5 where
job='manager' and empno='10' order by job,empno,date desc;
In fact only the records matching job='manager' and empno='10' need to be sorted by date descending, but writing just order by date desc performs poorly.
6. In UPDATE statements, if only one or two fields change, update just those fields instead of all of them; otherwise frequent calls cause significant performance overhead and generate a large volume of logs.
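A minimal sketch of this rule (the table and column names are made up for illustration):

```sql
-- Touch only the columns that actually changed:
update orders
   set status = 'SHIPPED',
       shipped_at = sysdate
 where order_id = :1;
-- rather than re-assigning every column of the row.
```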
7. For JOINs across multiple large tables, paginate first and then JOIN; otherwise the logical reads will be very high and performance poor.
8. select count(*) from table;
An unconditional count like this causes a full table scan and usually has no business meaning; it should be eliminated.
9. Use bind variables in WHERE conditions, e.g. where column = :1; do not hard-code literals such as where column = 'aaa', which forces the statement to be re-parsed on every execution, wasting CPU and memory.
10. Avoid the IN operator; it can make the database perform a full table scan.
11. NOT IN: the index will not be used with NOT IN.
Recommended: replace it with NOT EXISTS, or with an outer join plus an IS NULL test.
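A sketch of both rewrites, using the classic emp/dept demo tables as stand-ins:

```sql
-- Instead of:
select * from emp e
 where e.deptno not in (select d.deptno from dept d);

-- use NOT EXISTS:
select * from emp e
 where not exists (select 1 from dept d where d.deptno = e.deptno);

-- or an outer join plus an IS NULL test:
select e.*
  from emp e
  left join dept d on d.deptno = e.deptno
 where d.deptno is null;
```

Note that NOT IN and the two rewrites only agree when the subquery column (dept.deptno here) cannot be NULL; a NULL in the NOT IN list makes the whole predicate return no rows.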
12. The > and < operators (greater-than and less-than)
These operators generally need no adjustment; with an index they will use an index range scan. In some cases they can still be tuned. Suppose a table has 1,000,000 records and a numeric field A, with 300,000 records where A=0, 300,000 where A=1, 390,000 where A=2, and 10,000 where A=3.
Then A>2 and A>=3 behave very differently: with A>2, Oracle first locates the index entries for 2 and scans past them, while with A>=3 it goes directly to the first index entry equal to 3.
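The rewrite itself is tiny (hypothetical table t, numeric column a distributed as above):

```sql
select * from t where a >= 3;  -- jumps straight to the index entries for 3
-- instead of
select * from t where a > 2;   -- first locates the entries for 2, then scans past them
```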
13. The UNION operator
UNION filters out duplicate rows after combining its inputs, so the combined result set is sorted and de-duplicated before the results are returned. In most real applications no duplicates are produced; the most common case is a UNION of a current table and a history table, for example:
select * from gc_dfys
union
select * from ls_jg_dfys
At run time this SQL fetches the rows of both tables, sorts them in the sort area to remove duplicates, and finally returns the result set; with large tables the sort may spill to disk.
Recommended: replace UNION with UNION ALL, which simply concatenates the two result sets and returns them.
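Under that recommendation, the current/history example above becomes:

```sql
select * from gc_dfys
union all
select * from ls_jg_dfys;
```

UNION ALL skips the de-duplicating sort entirely, which is safe whenever the two inputs cannot overlap.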
14. The order of WHERE conditions affects performance
select * from zl_yhjbqk where dy_dj = '1KV and below' and xh_bz = 1
select * from zl_yhjbqk where xh_bz = 1 and dy_dj = '1KV and below'
In both statements neither dy_dj nor xh_bz is indexed, so a full table scan is performed. The condition dy_dj = '1KV and below' matches 99% of the rows, while xh_bz = 1 matches only 0.5%.
With the first SQL, 99% of the rows are compared against both dy_dj and xh_bz; with the second, only 0.5% of the rows are compared against both, which is why the second statement's CPU usage is markedly lower than the first's.
15. The effect of table order in the FROM clause
The order of the tables in the FROM clause affects SQL execution performance. Without indexes, and without Oracle having statistics from analyzing the tables, Oracle joins the tables in the order they appear in the FROM clause, so a poor table order can produce a data cross product that consumes a great deal of server resources. (Note: if the tables have been analyzed, Oracle automatically joins the small tables first and then links in the large table.)
When joining tables, using the table with the larger amount of data as the main table is more efficient.
16. Oracle's parser processes the table names in the FROM clause from right to left, so the table written last in the FROM clause (the base, or driving, table) is processed first.
When the FROM clause contains more than one table, choose the table with the fewest records as the base table and put it last.
Quick ways to pick the base (driving) table:
(1) Use indexes in the WHERE clause wherever possible.
(2) The join should be driven by the side that returns fewer rows.
(3) If the WHERE clause contains a selective condition, such as where id = 1, place the most selective part at the end of the expression.
(4) If only one of the tables has an index, the indexed table is usually used as the base table.
When a task only needs to be done once, the graphical or ncurses interface is usually the best solution. If a task needs to be done repeatedly, it might be easier to use the YaST command line interface. Custom scripts can also use this interface for automating tasks.
View a list of all module names available on your system with yast -l or yast --list. To display the available options of a module, enter yast module_name help. If a module does not have a command line mode, a message informs you of this.
To display help for a module's command options, enter yast module_name command help. To set the option value, enter yast module_name command option=value.
Some modules do not support the command line mode because command line tools with the same functionality already exist. The modules concerned and the command line tools available are:
sw_single provides package management and system update functionality. Use rug instead of YaST in your scripts. Refer to Section 9.1, Update from the Command Line with rug.
online_update_setup configures automatic updating of your system. This can be configured with cron.
With inst_suse_register, register your SUSE Linux Enterprise. For more information about the registration, see Section 8.3.4, Registering SUSE Linux Enterprise.
hwinfo provides information about the hardware of your system. The command hwinfo does the same.
These modules control or configure AppArmor. AppArmor has its own command line tools.
The YaST commands for user management, unlike traditional commands, consider the configured authentication method and the default user management settings of your system when creating, modifying, or removing users. For example, you do not need to create a home directory or copy skel files during or after adding a user. If you enter the username and password, all other settings are made automatically according to the default configuration. The functionality provided by the command line is the same as in the graphical interface.
The YaST module users is used for user management. To display the command options, enter yast users help.
To add multiple users, create a /tmp/users.txt file with a list of users to add. Enter one username per line and use the following script:
Example 8-2 Adding Multiple Users
#!/bin/bash
#
# adds new user, the password is same as username
#
for i in `cat /tmp/users.txt`; do
  yast users add username=$i password=$i
done
Similarly to adding, you can delete users defined in /tmp/users.txt:
Example 8-3 Removing Multiple Users
#!/bin/bash
#
# the home directory will not be deleted;
# to delete homes, use option delete_home
#
for i in `cat /tmp/users.txt`; do
  yast users delete username=$i
done
Network and firewall configuration commands are often wanted in scripts. Use yast lan for network configuration and yast firewall for firewall configuration.
To display the YaST network card configuration options, enter yast lan help. To display the YaST firewall configuration options, enter yast firewall help. The network and firewall configurations made with YaST are persistent; after a reboot, it is not necessary to execute the scripts again.
To display a configuration summary for the network, use yast lan list. The first item in the output of Example 8-4 is a device ID. To get more information about the configuration of the device, use yast lan show id=<number>. In this example, the correct command is yast lan show id=0.
Example 8-4 Sample Output of yast lan list
0 Digital DECchip 21142/43, DHCP
The command line interface of the YaST firewall configuration is a fast and easy way to enable or disable services, ports, or protocols. To display allowed services, ports, and protocols, use yast firewall services show. For examples of how to enable a service or port, use yast firewall services help. To enable masquerading, enter yast firewall masquerade enable.
Computer science is just a fascinating subject. I enjoy the theoretical side of computer science and also the fact that computing has real applications. I'm a lecturer in computer science and I specialise in bioinformatics, or the application of computing to solve problems in the analysis of biological data.
Working in computing is akin to solving puzzles. When the task I'm trying to compute actually works, it's a great feeling. When I've decided on the data structures, chosen how to break down the problem into reasonable functions, thought about the efficiency, written some code and put it all together, then it's an exciting creation. When it doesn't work, it's a challenge that can stump you and your colleagues for days. When it's correct it can be beautiful.
I chose to study science subjects for A-level even though they were unpopular choices for girls at my school. I was the only girl in my maths, physics and chemistry A-level classes, and one of only two girls in my computing A-level class (though we were only five in total). It isn't easy being different as a teenager, but science A-levels were the right choice.
Science was just more fun than the other subjects. The puzzle solving in mathematics appealed to me so much that maths at university was an obvious choice, with some computing as well to keep my career options open. However, at uni, computing quickly became more interesting. Learning to program was hard but fun. It is still hard but fun, 20 years later.
It's easy to see that being technically literate is important in today's society. Most people need to manage email, backup a computer, speak to their broadband supplier, puzzle over a broken printer, browse the internet and interact with social media. That kind of IT knowledge is necessary, ever-changing and has drastic consequences when it goes wrong. It makes fools of all of us. That's not the kind of knowledge taught in computer science degrees. The techniques involved in programming, in problem solving, in algorithms and computational thinking, are all fundamental and span many decades of thought.
I often find myself fascinated and inspired by the history of computing and the people who have made contributions. Computing is a young field, with the first electromechanical computer built in the 1930s. Some of the pioneers are still alive today. However, the ideas that underpin the field have often been around longer. Computational ideas have appeared and reappeared, sometimes so far ahead of their time that they went unrecognised and had to be rediscovered.
Ada Lovelace documented the use of variables, loops, tracing computation and profiling the usage of values for efficiency gains way back in 1843. She also imagined aspects of artificial intelligence and the future capability of computing. How far can computing go? What will we be able to compute in the future? Universal Turing machines are a model that describes what is possible for a computer to compute. Lambda calculus is another equivalent but different model of computation.
General recursive functions are yet another representation of the same scope of computable expressions. I find it amazing that these three models were created independently at approximately the same time in history (Turing, Church, Gödel), and are three different ways to inspect the same questions: What can a computer produce? What is computing capable of? Where will it take us?
What would I advise for someone starting out? Study whichever subject you want to study, regardless of society's gender norms or feeling different. Don't give up or leave it because you feel out of place. Computing is a diverse subject, and there are many niches that fit many different people. It's not just about the fast moving social media world, or the practical business oriented IT world. It's also about long term elegant ideas. Whatever you prefer, there's a niche for you.
Amanda studied for a degree in mathematics and computation at the University of Oxford, and an MSc in Artificial Intelligence at the University of Edinburgh. She had great fun building Lego robots at Edinburgh and thoroughly enjoyed the weirdness of Prolog for building grammars for languages. She then took a job at Canon Research Centre Europe, looking at ways to talk to photocopiers and retrieve captioned images from large image collections.
Canon had a bunch of great programmers from whom she learned about C++, software engineering and how to make practical programs in a functional language. After a couple of years Amanda started a PhD in Bioinformatics at Aberystwyth University. She fell in love with the seaside town of Aberystwyth and the subject matter of bioinformatics and is now a lecturer at the University, still enjoying both.
I want to take a moment to think about Qur'an-only perspectives...
The first time I encountered this issue was in two meta posts by a former user: SE Islam is not really Pluralistic and Why SE Islam is doomed to fail. As I see it, the user was grumpy that their Qur'an-only posts were perceived as Truthy and were consequently poorly received. The user reacted inappropriately, basically yelling at people.
A recent post by a new user was described as "anti-hadith polemic"; the new user also reacted inappropriately, possibly due to lack of familiarity with the site.
So basically: How can we accommodate Qur'an-only contributions?
1. Qur'an-only perspectives are officially on topic as far as I can tell:
The first two paragraphs of the (current) on-topic page state:
Islam Stack Exchange is for experts in Islam, students of knowledge, and those interested in Islam on an academic level. For the purposes of this site, "Islam" includes all groups that identify themselves as Muslim; do expect to see answers from multiple points of view unless a certain perspective is explicitly requested in the question.
Respect other people's beliefs, and don't get into arguments about whether any particular group is "right" or "wrong"; we are all here to learn together.
And it's essential we play fair. In 2012, Aarthi wrote:
Islam.SE is not a game show. We are not here to play an elimination round of "Are these beliefs true in real Islam?" For the purpose of this site, assume that each sect's hadith are valid, and that each sect's accepted imams are reliable. This is the only way a group with such diverse cultures and beliefs can get along and do something productive. If you must, choose to focus on the fact that you all follow the teachings of your Prophet. You share that in common.
2. The standard of evidence in Qur'an-only Islam differs. As far as I can tell, Qur'an-only Islam has two sources of information:
- the Qur'an, and
- deductions using "God-given logical reasoning".
This is what constitutes evidence in Qur'an-only Islam. (Barring ignorance on my part.)
The impression I get is that if something is not in the Qur'an or deducible from the Qur'an, then God intended for it to be that way, and thus is not part of Islam. Consequently:
- Clash 1: References, which are essential to a good answer at Islam.SE, are ill-suited to Qur'an-only Islam.
3. The flip side of the "anti-hadith polemic" coin is "pro-hadith polemic".
As far as I can tell, Islam.SE has a majority of people who take hadith seriously. What may be perceived as "anti-hadith polemic" by the majority of users, may be perceived as normal and unbiased to a practitioner of Qur'an-only Islam.
- Clash 2: Qur'an-only perspectives may be interpreted as anti-hadith perspectives, and thus biased and Truthy to the majority of users.
- Clash 3: Pro-hadith perspectives may be perceived as biased and Truthy to Qur'an-only users.
I guess this is how we ended up with the two grumpy meta posts above.
4. Qur'an-only Islam doesn't seem to have experts.
Throughout StackExchange, we seek expert answers. In practice, we don't really care who writes the answers if they're at the expert level, but ordinarily it takes an expert to write an expert-level answer. Qur'an-only Islam doesn't seem to have "experts" (I asked about it here: Which scholars are respected by Quranists?).
- Clash 4: Islam.SE seeks expert answers, whereas Qur'an-only Islam respects "God-given logic" over "experts".
Windows 8 Pre-Orders Start; $39.99 Download Option on October 26th
by Ryan Smith on October 15, 2012 12:10 PM EST
With Windows 8 officially launching in under two weeks, Microsoft and its retail partners have finally begun taking pre-orders for Windows 8. As with prior Windows pre-order promotions, several retailers are participating, including a number of brick & mortar retailers along with e-tailers such as Newegg, Amazon, and even Microsoft’s own online store.
Microsoft will essentially be handling the launch of Windows 8 in two phases: pre-order and launch. The pre-order phase is primarily geared towards buyers looking for boxed copies of Windows and with delivery on the 26th; unsurprisingly these boxed copies are priced notably higher than Microsoft’s download options. As for buyers looking to take advantage of Microsoft’s previously announced $39.99 download offer, that promotion will not begin until the launch on the 26th when Windows 8 actually ships. On that note, as previously announced both the boxed and download copies will be offered with promotional pricing, with Microsoft and its partners selling the upgrades at a significant discount until January 31, 2013.
Windows 8 SKUs

| SKU | Windows 8 Upgrade | Windows 7/Vista/XP Upgrade | Full Version | Price |
|---|---|---|---|---|
| Windows 8 Pro Pack | X | - | - | $69 |
| Windows 8 Pro Upgrade (Boxed) | - | X | - | $69 |
| Windows 8 Pro Upgrade (Download) | - | X | - | $39 |
| Windows 8 (Core) OEM | - | - | X | $99 |
| Windows 8 Professional OEM | - | - | X | $139 |
For buyers looking for physical copies, retailers are taking pre-orders for both upgrade and full editions of Windows 8. For Windows XP/Vista/7 users Microsoft is offering a single upgrade package, the Windows 8 Pro Upgrade, which has a list price of $99 but is being offered at $69 for the life of the promotion. Meanwhile the download version that will be made available on the 26th will have a $39 promotional price, putting a $30 premium on boxed copies.
As for Windows 8 (core) users – primarily those who buy computers with Windows 8 pre-installed – Microsoft is offering the Windows 8 Pro Pack upgrade for upgrading a Windows 8 (core) installation to Windows 8 Pro. Like the Win7 upgrade, this too is being offered at a promotional price of $69 with a list price of $99.
Finally, full versions of both Windows 8 (core) and Windows 8 Professional are also being offered for pre-order, but only in OEM form at this time. There isn't a publicly announced discount on these, so the list price of $99 for Windows 8 (core) and $139 for Windows 8 Professional should be the final price, which also closely matches the price for OEM copies of Windows 7. We haven't seen retail full versions of Windows 8 appear for sale yet, and while there are rumors going around that Windows 8 will be OEM-only, it has not been confirmed by Microsoft.
kmmatney - Monday, October 15, 2012
I guess I don't need any of that... I already use Pandora for music (or just use my own music collection). My experience with Metro apps so far has been pretty bad - they just don't seem to be designed for a large 24" monitor. It seems like so much space is wasted on the screen.
Spivonious - Monday, October 15, 2012
Or Windows ID, Skydrive integration, Facebook/Flickr/LinkedIn/Twitter/whatever else integration, the Share and Search charms (really exciting possibilities with those), hardware-accelerated graphics, DPI-independent graphics (on Metro), improved Task Manager, File History, the list goes on and on.
Anyone who reads Anandtech who doesn't like Windows 8 and doesn't feel the upgrade is worth $40 hasn't used it. Windows 7 is still in there. Why not get the new stuff too?
Old_Fogie_Late_Bloomer - Monday, October 15, 2012
"Anyone who reads Anandtech who doesn't like Windows 8 and doesn't feel the upgrade is worth $40 hasn't used it."
I hear this a lot. It's flat-out untrue, though.
andrewaggb - Monday, October 15, 2012
"I hear this a lot. It's flat-out untrue, though."
Why is it untrue? In my experience so far running rtm it's been true.
Just like windows 7, but faster. And with better multi-monitor support (in desktop), better taskmgr, lots of little things that are better and a few that are probably worse.
Metro with mouse/keyboard isn't great, and on multiple monitors it's plain stupid, but I just don't use those parts and stay in the desktop the whole time.
Old_Fogie_Late_Bloomer - Monday, October 15, 2012
What I quoted is untrue because I do not like Windows 8 and I have, in fact, tried it--on three different computers, no less.
It only takes one example to prove a blanket statement like that wrong, but setting that technicality aside, I very much doubt that I am the only person who a) has tried Windows 8 and b) didn't like it. If you like it, great, but not everyone does.
imaheadcase - Monday, October 15, 2012
All those things you mentioned are available for Win 7.
If you think upgrading is worth it for integration or for things you are too lazy to install, then by all means get it.
But MS also included items in every other OS version that people found interesting, but not worthy in the long run.
B3an - Monday, October 15, 2012
So you'd rather use bloated software to do all the tasks that Win 8 does much more efficiently while STILL using less RAM and resources than Win 7? And you can't get anything like Storage Spaces on Win 7.
It's worth it for the performance improvements alone. There's also better security and a built-in antivirus now, which again is better than most bloated AV software and doesn't slow the system down at all.
Go back to using DOS.
kmmatney - Monday, October 15, 2012
Drive Bender works in Windows 7, and it gives similar features to Storage Spaces. You can already get MSE antivirus for free in Windows 7. So far I haven't noticed any less RAM or resource usage compared to Windows 7 while using the preview versions.
They will sell a lot of copies because new systems will be forced to use it.
What are the improvements in security? The only increased security measure I've heard of is that the drivers for anti-malware software start sooner compared to Win 7, and it has better root-kit protection...
Belard - Tuesday, October 16, 2012
A - The memory usage is not much different... the savings done to Win8 are lost with the operation of the bolted-on shell called Metro.
B - What bloat? Start menu uses far less memory than the Start Page. It easy to tweak out the Services to reduce processes.
C - You've been answered about Storage Spaces (weeee... yawn, fart)
D - What performance improvements other than its start-up? Whoopdee-doo, with my SSD - WIN7 boots in about 20seconds. But since its so stable, it just goes to SLEEP... takes about 2-3 seconds to wake up. I rarely actually need to reboot my computer.
E - Your rant on AV is hilarious... DOS? Pretty much all OSes are "DOS"; they've just dropped the "D". For example, an Apple II or Commodore 64 is not a DOS system, as they are functional without any disks... blah.
1 - Metro *IS* ugly on the large screen, fine for phones.
2 - METRO is inefficient and useless since its just a launcher - its live tiles are useless. METRO should be a strip along the left side (2 columns - like WP7) , then its LIVE tiles would have made sense.
3 - Hiding of buttons that are "still there" is a sign of brain-damaged craziness. Then at the same time - its a "touch based" OS in which the "Start" button is hidden but use keyboard short-cuts shows how Frackin Stupid MS is.
4 - Win8 has two modes people have to keep track of, METRO and DESKTOP.... wow, avg. Joe won't have fun with that.
5 - Charms are stupid... again, most of those things were on the START menu and were NOT HIDDEN. Using Win8 - I'd get charms when I DIDN'T want it, then at times- can't quite get the charms to slide out when I (*&#$(&#@ do.
These are POINTLESS changes that didn't do anything to actually improve the usage of the computer. Windows7 actually did a lot of little improvements over Vista which didn't really function any differently than WinXP.
The Metro-izing of the Desktop mode is plain U G L Y! Desktop in Win8 Preview 8440 was rather slick improved version of Win7... I liked how it look. In RTM - its flat 80s style ugly crap, what idiot comes up with this?! MS pretty much gave everyone "WIN 7 BASIC" mode... nothing more.
Are there some good things in Win8? YES!
But they are not worth it... No doubt about that.
mcnabney - Tuesday, October 16, 2012
Shouldn't you be calling Storage Spaces by its original name, Drive Extender? You know, the feature that made WHS what it was, which was then promptly ripped out because it was 'too hard to create' under a 64-bit OS?
How to replace the default web api model validation with Fluent Validation
[Validator(typeof(ProductDetailsRequestDTO))]
public class ProductDetailsRequestDTO
{
public int ArticleGroup { get; set; }
public DateTime ProducedAt { get; set; }
}
public class ProductDetailsRequestDTOValidator : AbstractValidator<ProductDetailsRequestDTO>
{
public ProductDetailsRequestDTOValidator()
{
RuleFor(r => r.ArticleGroup).NotEmpty().WithMessage("custom message");
RuleFor(r => r.ProducedAt).NotEmpty().WithMessage("custom message");
}
}
// FluentValidation setup
config.Services.Add(typeof(System.Web.Http.Validation.ModelValidatorProvider), new FluentValidationModelValidatorProvider());
How does FluentValidation find the validators I created, and why does my model always validate as true?
Maybe because both of those types have default values which aren't empty. So int is always instantiated with 0. It isn't a nullable type. Same with DateTime. Have you tried saying .MoreThan(0) or equivalent.
MoreThan does not exist in my intellisense. From the documentation: NotEmpty Validator
Description: Ensures that the specified property is not null, an empty string or whitespace (or the default value for value types, eg 0 for int)
Your Model have 2 properties, they are integer and DateTime types, they can't empty, default of integer is 0 and DateTime is MinValue of Date
@trungtin1710 Have you read my comment from docu? It says "eg 0 for int" is from NotEmpty()
@Pascal Sorry for this, now I can see your comment about this document
It does not even work for an empty string, so my SETUP is wrong. I also REPLACED the provider, and the f... documentation of FluentValidation says nothing about how to configure it.
@Pascal - You're right that the rules are correct and you interpreted them correctly. I've run a quick test explicitly newing up a validator and running your DTO through it and it does return errors for both the rules so the issue lies in config. Not sure what exactly is wrong with that but that kinda rules out the implementation of your rules themselves as being a culprit.
It was but a typo. Your model is decorated with the wrong type name. Instead of decorating it like so:
[Validator(typeof(ProductDetailsRequestDTO))]
public class ProductDetailsRequestDTO
{
public int ArticleGroup { get; set; }
public DateTime ProducedAt { get; set; }
}
Decorate it with the type of your validator:
[Validator(typeof(ProductDetailsRequestDTOValidator))]
public class ProductDetailsRequestDTO
{
public int ArticleGroup { get; set; }
public DateTime ProducedAt { get; set; }
}
mojoPortal 188.8.131.52 is now available on our download page.
In one sense this is a minor upgrade with a few improvements but this is also a significant release in that it marks a change in our target framework from .NET 4 to .NET 4.5. Since mojoPortal is now compiled against the .NET 4.5 framework, any code that depends on it must also be compiled against the newer framework, so we have also released corresponding compatibility updates for all of our add on products.
In terms of web hosting, you can host either .NET 4 or .NET 4.5 compiled code under .NET 4.5 hosting and most hosts should probably be updated to .NET 4.5 by now. .NET 4.5 is really still the .NET 4 framework with some additions, it is considered an in place upgrade of the framework, so it may not be easy to know for sure whether your hosting has been updated to .NET 4.5 or if it still is running an older .NET 4 framework.
If you do get errors about the framework version after deploying the upgrade, we have a separate download with replacement Web.config and bin folder which has a version of the dlls compiled for .NET 4. Using these files should get things working again if you are still using .NET 4.
We have a similar package of replacement files that should in theory also support .NET 3.5 but at this point we are not officially supporting .NET 3.5 any longer and we hope very few people download that package.
Changing our target framework to .NET 4.5 allows us to start moving forward with the use of some of the great new things that the .NET team has been working on such as Web API. In fact this release has a new plugin system that allows features to plugin their own Web API Routes, and we implemented a small custom web api in our forum feature to handle moderation before sending posts to the forum subscribers as discussed below.
This release includes a new per forum setting for Roles That Can Post. We've had several requests for making read only forums and also for limiting posting by role so that only premium members could post. This is useful for example if you are selling site membership levels with Site Membership Pro.
It is now possible also to configure the forums such that notification email is not sent until a moderator approves the post for sending to the list. We really needed that on this site because we have our forums configured to include the post message in the notification email, and we had several occasions where someone would register late at night and post a few spammy forum posts that got emailed to our subscribers; this in turn hurts the reputation of our email system.

Note that the settings for this are per forum, and you must add the moderator emails in the settings so that the moderator gets notified of the new post. The moderator will see new link buttons on the post for either sending the post to the list or marking it as sent so that it does not get sent to the list. There is also a setting to allow users that are marked as "Trusted" in the user management page to have their posts sent to the list without moderation, so that you can avoid delays for posts from your active trusted community members.
There are also new settings per forum that control whether new posts get included in the google site map for the forums by default or whether new posts get a NOINDEX meta tag added by default. On this site we've found that forums can be a mixed bag in terms of SEO value, some threads may have worthy content and others may not. You can always edit a thread after the fact to set whether it is included in the google site map or whether it gets a NOINDEX meta, these new forum level settings just control the default for new posts.
This release also includes upgrades to CKEditor 4.4.3 (from 4.3.4) and TinyMCE 4.1.2 (from 4.0.21). We've also included the new moono-dark skin for CKEditor which you can enable by adding this in user.config:
<add key="CKEditor:Skin" value="moono-dark" />
If you have a dark site skin you might like that better than the default editor skin.
|
OPCFW_CODE
|
Disappearing Windows controls VS2010
After moving and rearranging controls on a WinForm, invoking a Build and/or Rebuild All command produces the following error message:
"An error occurred while processing this command. Could not load file
or assembly 'LoLock, Version=<IP_ADDRESS>, Culture=neutral,
PublicKeyToken=null' or one of its dependencies. The system cannot
find the file specified."
At that point all the controls disappear from the Designer and from the executing form as well. I've scoured the designer.cs file and run diffs against a previous working version and cannot find anything amiss.
This has happened to me on several occasions and appears to be random.
Any clues ??
What's LoLock? Is that an external assembly that provides controls used by your application?
I've had the same problem... exactly... the same error followed by the disappearance of most of the controls. The controls that are missing in the designer are my custom controls. The change I made before the error and the disappearance was to add a constructor to each of the controls' derived classes (i.e. my part of the control). So far, I've noted that the Controls.Add(...) call is missing for each of the hundred or so controls that have disappeared (from the automatically generated Form.designer.cs file). This is the one point that seems to differ from your situation if you are running a diff on the designer.cs file between pre and post failure. Mine definitely has missing Add()s.
So far, my solution is to manually add back the Add() methods to the generated file. However, it would obviously help if there was some way to get visual studio to see this problem and add the controls back automatically. However, I can't think of any way that VS could know, at this point, which controls to add to which parent control.
For example, before the error I had the following group box defined in my designer.cs file:
//
// groupBox10
//
this.groupBox10.Controls.Add(this.checkBox_FincaDescription_ForRent);
this.groupBox10.Controls.Add(this.checkBox_FincaDescription_ForSale);
this.groupBox10.Location = new System.Drawing.Point(883, 67);
this.groupBox10.Name = "groupBox10";
this.groupBox10.Size = new System.Drawing.Size(310, 76);
this.groupBox10.TabIndex = 9;
this.groupBox10.TabStop = false;
this.groupBox10.Text = "Property Type";
After the FAIL I have the following code, which was generated as a result of either the error or simply the designer's failure to manage my custom controls:
//
// groupBox10
//
this.groupBox10.Location = new System.Drawing.Point(883, 67);
this.groupBox10.Name = "groupBox10";
this.groupBox10.Size = new System.Drawing.Size(310, 76);
this.groupBox10.TabIndex = 9;
this.groupBox10.TabStop = false;
this.groupBox10.Text = "Property Type";
This is a massive FAIL for me as I have so many fields to manually correct (although luckily only a few group boxes and a good backup). I have read of so many people having this same problem from 2005 on, I can't believe it hasn't been addressed.
I have just experienced the same issue using Visual Studio 2012. You are right: That is a MASSIVE fail...
I also experienced this with a user control.
I received an exception for each control that had the Add method removed from the designer.
Surprisingly, I had a couple of panels, and the Add code for the children of those panels remained intact.
I only had to implement Add for those panels and a few controls that were not in containers, which is fortunate because there were over 100 controls.
An error was introduced in the constructor of the user control, and I believe that this contributed to the chain of events resulting in the corrupt designer file.
|
STACK_EXCHANGE
|
module UnderarmourApi
  # `UnderarmourApi::Config` manages the configuration options for the
  # UnderArmour API wrapper. This is a good place to refer to your client ID
  # and client secret. These should be referenced from your secrets file as
  # environment variables, such as ENV['UA_API_KEY'].
  class Config
    CLIENT_KEYS = [:client_id, :client_secret, :access_token]
    attr_accessor(*CLIENT_KEYS)

    # Creates a new instance of `UnderarmourApi::Config`.
    def initialize(client_keys = {})
      # Silently ignore the argument unless it is a Hash...
      return unless client_keys.is_a? Hash
      # ...whose keys are all correctly named.
      return unless valid_key_names? client_keys.keys

      client_keys.each do |key, value|
        self.send("#{key}=", value)
      end
    end

    # Returns the configured credentials as a Hash keyed by CLIENT_KEYS.
    def client_keys
      CLIENT_KEYS.inject({}) do |keys, key|
        keys[key] = send(key)
        keys
      end
    end

    # True if every supplied key name is one of CLIENT_KEYS.
    def valid_key_names?(key_names)
      key_names.all? { |key| CLIENT_KEYS.include? key }
    end
  end
end
|
STACK_EDU
|
Landing Commits with Autoland¶
MozReview provides an easy way to land commits to another repository through a service called Autoland. Autoland can send your commits to the repository of record when your reviews have been granted. In addition, Autoland can be used to send commits to Try if you are developing within mozilla-central (e.g. Gecko or Firefox).
Sending Commits to Try¶
If you are working on Gecko, Firefox, or anything else within mozilla-central, and if you have at least L1 SCM access, you can send your commits to the Try service at any time. On the Reviews view of any review request there is an Automation menu. If you have L1 access or greater, the top option, “Trigger a Try Build”, will be enabled for you. Note that it doesn’t matter which review request in a given series you are on; all commits in the current series will be sent to Try.
This option will open a dialog prompting for a Try string, with an expandable panel for the TryChooser Syntax Builder tool. Once a build is started, results will be visible under the commits table in all review requests in the series, with links to Mercurial and Treeherder.
Once your commits have been reviewed, if you have L3 SCM access you can use Autoland to push them to the repository of record. For Gecko/Firefox, this is mozilla-inbound. As with Try builds, the Autoland option is in the Automation menu, as “Land Commits”.
For the Autoland option to be enabled, the current user must have L3 access and the following must be true for every commit in the series:
- The commit has been reviewed by someone with L3 access, or
- The commit has been submitted (pushed to MozReview) by someone with L3 access.
If these conditions have been met, the option will be enabled.
Clicking it will prompt the user to confirm the commit message(s), which
will be rewritten to reflect the actual reviews given,
r=reviewer. Review strings requesting reviews,
r?reviewer, will be stripped out. In the case that the
commit message is not correct, the author will have to push up an
amended commit or land directly.
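The rewrite described above can be sketched in a few lines of Python. This is only a rough illustration of the behavior, not Autoland's actual code; the function name and the sample commit message are invented for the example:

```python
import re

def rewrite_commit_message(message, granted_reviewers):
    """Replace each r?reviewer request whose review was granted with
    r=reviewer, and strip requests that were never granted."""
    def fix(match):
        whitespace, reviewer = match.group(1), match.group(2)
        if reviewer in granted_reviewers:
            return whitespace + "r=" + reviewer
        return ""
    return re.sub(r"(\s*)r\?(\w+)", fix, message).strip()

print(rewrite_commit_message("Bug 1 - Fix crash; r?gps r?smacleod", {"gps"}))
# -> Bug 1 - Fix crash; r=gps
```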
As with Try landings, the entire commit series will be landed regardless of the commit on which the “Land Commits” action is triggered. Results, with links to Mercurial and Treeherder, will be posted to the review requests as soon as the commit has landed. If for some reason Autoland cannot land the commits due to a transient error, e.g. due to the tree being closed, it will retry until it is successful.
|
OPCFW_CODE
|
02-07-2007 08:44 AM
You might get these kinds of queries often, but this is a bit different. We have an old Tru64 box running "Digital UNIX V4.0F (Rev. 1229); Wed Jul 9 19:15:20 PDT 2003" with the DUV40FB18AS0007-20020102 kit installed, and it has been up for 703 days. Now, on the HP website there is a link for "HP Tru64 UNIX V4.0F PK8". Ours is, I think, at PK1. My question: can I apply the same patch here without going to PK8? If not, is there an easy solution for me?
The prerequisite says that PK8 is required.
Thanks in Advance.
02-07-2007 12:16 PM
If there were no problems after installing PK7 (especially with the NHD3 patches), then you can go ahead, download PK8, and install it.
02-07-2007 11:47 PM
Your choices are to:
1. Go to PK8 (a good idea in any case) and then install the available DST patch.
2. Make the necessary changes manually, i.e., edit the timezone source files and recompile the timezone definitions with "zic". I believe there are other threads here with details on this.
02-08-2007 07:18 AM
Thanks for your time. I am not in favor of doing PK8 because we can't afford a failure on this one. Also, I am not good at the compilation stuff. Is it possible for me to change the date & time manually on the system, since this box is going to retire in another 6 months? Is that a good choice?
many many thanks
03-08-2007 07:35 AM
This makes time calculations much simpler: because the GMT/UTC time does not use DST, the timescale is continuous and each moment in time can be uniquely identified, even when the local time "falls back" in DST->Standard transition and one hour is "doubled" according to local time. When a file is created during that doubled hour, the system can still tell you whether the file was created during the DST or the Standard "version" of the hour.
All the timestamps in the filesystem are internally stored in Unix time_t format, for example.
If you change the system's idea of local time according to the new rules without changing the timezone information, you'll skew the system's idea of GMT/UTC time by one hour. When the old rules say it's time to transition to DST, you need to change the system time again.
After this, the system's idea of UTC time will again agree with the true UTC. You'll also have introduced *two* breaks to the system's UTC timescale, which normally has *never* any breaks at all. One of the breaks will be "spring forward" and the other "fall back" type. So there will be non-unique timestamps and files created up to one hour "in the future".
So, if you cannot get official patches for your OS version, I'd recommend you edit the timezone definition and use the "zic" command to make the changes available to the system.
This is why the "zic" command is included into the OS: so you won't be dependent on the vendor patches if the DST rules change.
In some countries, the government used to decide the times of DST transitions independently for each year. Unix admins in these countries have dealt with that, by editing the timezone definitions whenever necessary.
|
OPCFW_CODE
|
Protocols and Communications Utilities Under XMSF
There are a number of network protocol and network communications
frameworks managed under the XMSF project. These include no fewer than
three Distributed Interactive Simulation (DIS) implementations, and a
framework that turns text XML into a more compact, more easily parsed
binary format:
- DIS-XML: The recommended
Java DIS framework
- XMLPG: C++ and Java DIS
- DIS-Java: Legacy Java DIS
- XSBC: Extensible Schema-Based
Compression, a binary XML format
Why three? Over the years we've developed them to meet different needs.
A quick description of each of them is below.
DIS-XML is a more modern implementation and the preferred Java DIS framework.
The objective of DIS-XML is to introduce another format in which to
represent PDUs. Where DIS-Java had two formats--the Java object format
and the IEEE-1278 binary format--DIS-XML introduces a third way to
represent PDUs, namely XML format. This means that PDUs can be read
from the wire in binary format, turned into Java objects, and then
written out in XML format. An XML representation of PDUs opens up the
DIS world to all the tools used in the XML world, including web services,
archiving, XML database tools, and so on.
While DIS-Java used hand-written code, DIS-XML is created using a mix
of automatically generated code and some hand-written code. Sun's Java
API for XML Binding (JAXB) is used along with an XML schema written to
describe the DIS protocol to automatically generate Java language
classes that describe the PDU. For example, this is a fragment from the
schema that describes DIS:
<xsd:complexType name="Vector3Float">
  <xsd:annotation>
    <xsd:documentation>Denotes 32-bit floating-point</xsd:documentation>
  </xsd:annotation>
  <xsd:attribute name="x" type="xsd:float"/>
  <xsd:attribute name="y" type="xsd:float"/>
  <xsd:attribute name="z" type="xsd:float"/>
</xsd:complexType>
Sun's JAXB tool will read this schema description and turn it into a
Java class that has a class name of "Vector3Float", class comments as
described by the xsd:documentation tag, and three instance variables
declared as floats with the names "x", "y", and "z".
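As a rough illustration of what such generated code provides, here is a Python analogue of the generated class (JAXB itself emits Java, of course, and the zero defaults below are an assumption for the sketch):

```python
from dataclasses import dataclass

@dataclass
class Vector3Float:
    """Denotes 32-bit floating-point (mirrors the schema's
    xsd:documentation text above)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

v = Vector3Float(x=1.0, y=2.0, z=3.0)
```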
While most of the work of writing the class is done for you, it is
still required that the programmer write code to write the PDUs into
IEEE binary format, and read them from IEEE binary format into object
format. JAXB can automatically read and write Java objects to an XML
format compatible with the XML schema, so no programmer effort is
required to implement this feature. About twenty of the most used PDUs
have been fully implemented. About fifty PDUs have been described in
the XML schema file, but not all of those have had IEEE binary
serialization/deserialization methods written for them.
DIS-XML is easier to maintain than DIS-Java, and has better memory
performance: fewer objects that must be GC'd are generated.
This makes it a better choice for real time applications. The ability
to read and write XML also makes it the preferred solution for XML-centric applications.
The features of DIS-XML include:
- The ability to read and write PDUs in XML format
- A simple networking framework
- An implementation that includes about twenty PDUs, and XML schema
descriptions for about 30 more
- A "slider application" that allows the user to rotate an X3D box
window using Yumetech's Xj3D libraries
- Ability to send and receive packets to Xj3D implementations
- A rudimentary JUnit test framework
- A modified BSD open source license
- An XML schema that describes much of the DIS protocol
- Some code that can be used in MATLAB to read and write PDUs
- An XMPP (jabber chat) bridge that passes XML-ified DIS PDUs
- Simple sender and receiver programs
- An ant build file
A page that describes how to download DIS-XML is here. This includes
instructions for downloading a pre-built distribution with jar files,
and instructions for checking out the latest code from CVS.
The Protocol Generator (XMLPG) is a third
implementation of DIS. This implementation arose from a need for a C++
implementation of DIS for the Delta3D
game engine produced by NPS. A hand-written C++ implementation
would have had all the same problems as the hand-written Java
implementation, namely the volume of code that would need to be
maintained. Ideally we would have used the XML
schema from DIS-XML plus a C++ tool equivalent to JAXB to generate the
C++ classes. Unfortunately, all the C++ schema-to-code implementations
I looked at generated C++ code that was a horror show, even by C++ standards.
The solution was to come up with what is essentially an XML template
language that describes PDUs. This custom XML file is parsed into what
amounts to an abstract syntax tree (AST), and that AST can then be used
to generate source code in any language. I've implemented C++ and Java
output.
The process looks like this:
+----------+      +-----+      +------------------+
| XML file |----->| AST |--+-->| Java source file |
+----------+      +-----+  |   +------------------+
                           |   +------------------+
                           +-->| C++ source file  |
                               +------------------+
The programmer writes an XML file that describes the protocol. This is
converted by some code into an AST, and that AST can then be used to
generate Java or C++ source code. Note that the concept is not limited to just
DIS; you can write many network protocols by simply writing the correct
XML file and then generating C++ and Java code.
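The template idea can be sketched in a few lines of Python. The element and attribute names below are invented for the illustration (XMLPG's real file format differs), and the parsed element tree stands in for the AST:

```python
import xml.etree.ElementTree as ET

# Hypothetical protocol description; XMLPG's real XML format differs.
DESCRIPTION = """
<class name="Vector3Float">
  <attribute name="x" type="float"/>
  <attribute name="y" type="float"/>
  <attribute name="z" type="float"/>
</class>
"""

def generate_java(xml_text):
    """Parse the description into an element tree (our stand-in for the
    AST), then walk it to emit Java source. A second emitter walking the
    same tree could target C++ instead."""
    root = ET.fromstring(xml_text)
    lines = ["public class %s {" % root.get("name")]
    for attr in root.findall("attribute"):
        lines.append("    public %s %s;" % (attr.get("type"), attr.get("name")))
    lines.append("}")
    return "\n".join(lines)

print(generate_java(DESCRIPTION))
```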
Why not use the XML schema file rather than the special-purpose XML
file described above? It turns out that intelligently parsing all the
special cases in XML schema is a pain. There's a reason JAXB is big and
complex. Rather than deal with the complexity of XML schema I simply
punted and came up with a very simple XML file format that did what I needed.
The XMLPG implementation of DIS features:
- A C++ implementation of the DIS protocol
- A Java implementation of the DIS protocol
- A BSD open source license
- Simple example code to read and write PDUs (both Java and C++)
- Makes use of the HawkNL package on the C++ side for networking
- The ten or so most popular DIS PDUs are implemented
If you want to use C++ DIS, XMLPG is your only choice. It's the only
open source C++ DIS implementation I've found. The Java implementation
generated by XMLPG is reasonably OK, has a smaller footprint in
terms of the supporting jar files needed, and is reasonably efficient
in terms of memory use. But it also gives up the ability to read and
write XML, and is unlikely to be maintained as rigorously as DIS-XML.
You can find out how to download a pre-compiled version of XMLPG that
includes the generated DIS code, plus instructions for downloading the
latest from CVS, here.
DIS-Java is the original: a hand-coded implementation of the DIS
(IEEE-1278) protocol written in the Java programming language. Full
source code and javadocs are provided with a modified BSD open source
license. This is from a conceptual standpoint perhaps the easiest
distribution to understand for a new programmer, because it is Just
Source Code, but it also has some well-known defects. It features:
- An implementation of most of the DIS-1995 PDUs.
- DIS Enumeration classes that include many of the "magic numbers"
to map arbitrary values to semantic meanings in DIS PDU fields. These
enumeration values were generated from JDBE web pages, which are now
several years old. You should not expect the DIS enumeration values to
include the latest enumeration values.
- A simple framework for reading and writing PDUs to and from the wire
- Rudimentary unit tests
- Simple (and somewhat elderly) AWT GUI applications for sending and receiving PDUs
- A quaternion implementation. There are standard quaternion
implementations released with J2SE these days in the javax.vecmath
package, so these probably aren't very useful.
- An ant build file
- Some simple implementations of unsigned numbers in Java
As you can guess, this code is somewhat elderly and isn't really
maintained any more. While conceptually simple it's also a pain to
maintain all the hand-written code. Also, it tends to produce a lot of
objects for garbage collection to handle, which makes it not ideal for
real time applications. Most of the getter methods for PDU fields are
written to return a copy of the field object, rather than the object
itself; this is nifty from a programming perspective, because
encapsulation is not violated, but it also churns a lot of memory.
The download page for DIS-Java is here.
This includes links for
downloading a pre-built DIS-Java implementation and instructions for
downloading the latest code from CVS.
Extensible Schema-Based Compression (XSBC) is a system for converting
text XML files into a more compact and more easily parsed binary
format. It is similar to several of the candidates in the W3C's
Efficient XML Interchange (EXI) working group. The overall objective is
to create XML files that are smaller than text files, ideally smaller
when gzipped than a text XML file when gzipped, and is faster to parse
and bind to data than text XML. This last criterion is not always
immediately obvious. When a piece of XML like this is parsed:
x="1.0" y="2.0" z="3.0"/>
The parser must extract the text representation of 1.0 from the XML,
and then convert it to an IEEE floating point representation and
associate it with a programming language variable. This process is
called "data binding" With XSBC and most other EXI formats, the value
1.0 is already in IEEE floating point format, and the bits can simply
be referenced by the name.
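A small Python sketch using the standard struct module (not XSBC's actual wire format) makes the difference concrete:

```python
import struct

# Text XML: the parser must convert the string "1.0" into a float
# (the "data binding" step).
text_value = float("1.0")

# Binary format: the value already sits on the wire as 4 IEEE 754 bytes,
# so binding is just reinterpreting those bytes, with no string parsing.
packed = struct.pack(">f", 1.0)           # 4 bytes
(binary_value,) = struct.unpack(">f", packed)

assert len(packed) == 4
assert text_value == binary_value == 1.0
```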
Instructions for downloading a pre-built version of XSBC, or checking
out the latest version from anonymous CVS, can be found here.
|
OPCFW_CODE
|
Project 2: Simulation of uncoupled circadian oscillators (due Friday Feb 23rd at midnight)
This week we will get the infrastructure working. Ultimately, we will be simulating the mammalian circadian clock as a network of oscillators that signal each other, but this week we will be simulating the oscillators without any signaling. We will write the code we need to run a simulation, detect events happening in that simulation, and compute simple summary statistics about those events. Next week, we will update the code to incorporate signaling and then collect statistics about the performance (both timing and program output).
- Read Stephanie's Reference Guide and the links within it. This week, you can ignore the signaling and the VRC.
- On NSCC, download the code tarball with curl. Then expand the tarball:
- curl -O http://cs.colby.edu/courses/S18/cs336/projects/proj02/proj02.tar
- tar -xf proj02.tar
- Main programs:
- sim_slow.c: Runs a single simulation of 5 uncoupled oscillators, outputting the results (for every time step) to a file that can later be displayed by disp_clocks.
- disp_phase.c: A program Stephanie wrote to graphically display the results of a simulation. It reads in a file output by sim_slow.
- dump_phase.c: A program that Stephanie wrote to make it easy to read the contents of the binary file output by sim_slow.
- sim_events.c: A program that will run a simulation and detect each time that an oscillator's phase crosses CT6. It prints out the index of the time step at which each event occurred.
- sim_stats.c: A program that will run a simulation and detect each time that an oscillator's phase crosses CT6. This version prints out the standard deviation of those event times for each cycle.
- Data files: Stephanie developed a file format that contains the information that allows us to view and visualize the movement of every oscillator in a simulation. It is a binary file format, so don't try to read it with Xemacs.
- Support routines:
- my_timing.c and my_timing.h: contains a function that allows us to time code in seconds with precision up to milliseconds.
- phase_io.c and phase_io.h: contains file IO routines for the slow simulation and disp_clocks.
- my_math.h: mathematical functions. You should be able to supply my_math.c from your project 1 code.
- phase_support.c and phase_support.h: the heart of the simulation. Routines for simulating and computing statistics go here. You will be adding code to these files.
As you can see, there are many top-level programs. They begin with more IO and less computation (good for debugging) and end with more computation and less IO (good for running lots of simulations and not getting overwhelmed with output). We also introduce code to time the computations. Our goal as we parallelize this code in the future will be to speed it up dramatically.
- Write runPhaseSimulation (in phase_support.c). It should use the Forward Euler method to solve the set of differential equations that model Nx oscillators without any signals. See Stephanie's guide for more information about the model and method and the comments in the header file for more information about parameters and return values.
- Test it by running sim_slow (which I have supplied). You can visualize the output of sim_slow using disp_phase or dump_phase. When you are debugging, please use lots and lots of print statements.
If you would like to compare it to the output of my code, examine the following .phs files. They run the simulations with 5 and 100 oscillators, respectively. To generate the files, I ran sim_slow like this:
./sim_slow uncoupled_5.phs 5
./sim_slow uncoupled_100.phs 100
You should generate files with your implementation and then use disp_phase and/or dump_phase to compare my files to yours, e.g. ./disp_phase uncoupled_5.phs
- Write runPhaseSimulationAndDetectEvents (in phase_support.c). The new feature in this version is that it finds the timestep at which the phase (which you will need to put into units of circadian hours) crosses 6 and is increasing (e.g. if the phase is 5.97 at time step 21 and 6.02 at time step 22 then the event happened at time step 22).
- Test it with sim_events.
I ran sim_events with the following command and output:
./sim_events 2 5
Oscillator 0: 61 300
Oscillator 1: 61 302
Oscillator 2: 61 303
Oscillator 3: 61 304
Oscillator 4: 62 305
Ran in 0.001009 seconds
- Write runPhaseSimulationFindEventStats (in phase_support.c). The new feature in this version is that it computes the standard deviation of the event times for each cycle. Before computing the standard deviation, transform the event time step index into a time (in hours). This way, the statistical output (the standard deviation) is in units that are easy to interpret.
- Test it with sim_stats.
I ran sim_stats with the following command and output:
./sim_stats 2 5
0.044722 0.192354
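Although the assignment is in C, the routines above can be sketched in Python to illustrate the math. The model below (each phase advancing at 24/period circadian hours per hour, wrapped mod 24, with an event on each upward crossing of CT6) is a simplification of Stephanie's model, and the function names are only loose analogues of the C routines:

```python
import statistics

def simulate_events(periods, dt=0.01, t_end=48.0, event_phase=6.0):
    """Forward Euler for uncoupled phase oscillators: each phase advances
    at 24/period circadian hours per hour and wraps mod 24. Returns, for
    each oscillator, the list of time step indices at which its phase
    crossed `event_phase` (CT6) while increasing."""
    phases = [0.0] * len(periods)
    events = [[] for _ in periods]
    for step in range(1, int(t_end / dt) + 1):
        for i, period in enumerate(periods):
            prev = phases[i]
            cur = prev + dt * (24.0 / period)   # one Euler step
            if prev < event_phase <= cur:        # upward crossing of CT6
                events[i].append(step)
            phases[i] = cur % 24.0
    return events

def event_time_stddevs(events, dt):
    """Standard deviation (in hours) of the event times across
    oscillators, computed separately for each cycle."""
    n_cycles = min(len(e) for e in events)
    return [statistics.pstdev(e[c] * dt for e in events)
            for c in range(n_cycles)]
```

For example, `simulate_events([23.0, 24.0, 25.0])` records two CT6 crossings per oscillator over 48 hours, and `event_time_stddevs` converts the step indices to hours before computing the per-cycle spread, matching the units described above.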
Ideas for Extensions
- Write a main program (and any necessary support functions) that uses the complex order parameter to determine how synchronized the oscillators are. Plot its length over time to show how the synchrony changes. Since these oscillators are not signaling each other, but they are all starting at the same phase, we should see them drift out of sync.
- Explore the effect of the distribution of periods on how quickly the oscillators desynchronize (using either the event-based or order-parameter-based measure). Can you predict the relationship mathematically?
- Use valgrind to demonstrate that you have no memory leaks.
- Add more command line arguments to sim_slow, sim_events, and/or sim_stats to control the number of oscillators, the distribution of intrinsic periods, and/or the time to run the simulation. In your project report, demonstrate the feature (e.g. show how the output changes in response to input and comment on whether or not this output makes sense).
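For the first extension, the magnitude of the complex order parameter can be sketched as follows (a minimal illustration assuming phases are given in circadian hours on a 24-hour cycle):

```python
import cmath

def order_parameter(phases_hours):
    """Magnitude of the complex order parameter for phases given in
    circadian hours (0-24): 1.0 when fully synchronized, near 0 when
    the oscillators are spread evenly around the cycle."""
    z = sum(cmath.exp(2j * cmath.pi * p / 24.0) for p in phases_hours)
    return abs(z / len(phases_hours))

assert abs(order_parameter([6.0, 6.0, 6.0]) - 1.0) < 1e-9  # in sync
assert order_parameter([0.0, 8.0, 16.0]) < 1e-9            # evenly spread
```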
Writeup and Handin
- Create a file named README.txt. Describe the code files (the purpose of each file) along with how to compile and run the code.
- Create a second file for your project report; it should use a format that allows you to include images or tables (e.g. .docx or .pdf). This report should demonstrate that you have a working simulation and include the answers to the memory-drawing exercises.
- You should hand in all code necessary to run your solutions. Place all necessary .h, .c, and Makefile files in the proj02 directory along with the README file. Stephanie will probably want to compile and run the code. It should be possible to do so without looking for any more files. Tar/zip up the directory and put it in your Private folder on Courses/CS336.
|
OPCFW_CODE
|
"""
Tests for Hooks
"""
from unittest import TestCase
from os.path import join, dirname
from nose.tools import eq_, ok_
from confab.definitions import Settings
from confab.data import DataLoader
from confab.hooks import Hook, ScopeAndHooks, HookRegistry
class TestHooks(TestCase):
def setUp(self):
self.settings = Settings()
self.settings.environmentdefs = {
"environment": ["host"],
}
self.settings.roledefs = {
"role": ["host"],
}
self.settings.componentdefs = {
"role": ["component"],
}
self.component = self.settings.for_env("environment").components().next()
def test_add_remove_hook(self):
"""
Load additional configuration data via hook.
* Test adding new hook
* Test removing hook
"""
def test_hook(host):
return {'data': {'num_cores': 4}}
testhook = Hook(test_hook)
local_hooks = HookRegistry()
local_hooks.add_hook('host', testhook)
ok_(testhook in local_hooks._hooks['host'])
local_hooks.remove_hook('host', testhook)
ok_(testhook not in local_hooks._hooks['host'])
def test_hook_override_data(self):
"""
Test that data loaded via hook overwrites data loaded via config.
"""
def test_hook(role):
return {'data': {'role': 'role2'}}
with ScopeAndHooks(('host', Hook(test_hook))):
loader = DataLoader(join(dirname(__file__), 'data/order'))
eq_(loader(self.component)['data'],
{'default': 'default',
'component': 'component',
'role': 'role2',
'environment': 'environment',
'host': 'host'})
def test_data_override_hook(self):
"""
Test that data loaded via hook will be overwritten by data loaded later via config.
"""
def test_hook(role):
return {'data': {'environment': 'environment2'}}
with ScopeAndHooks(('role', Hook(test_hook))):
loader = DataLoader(join(dirname(__file__), 'data/order'))
eq_(loader(self.component)['data'],
{'default': 'default',
'component': 'component',
'role': 'role',
'environment': 'environment',
'host': 'host'})
def test_hook_load_order(self):
"""
Test that hooks overwrite each other based on order they are defined.
"""
def test_hook1(host):
return {'data': {'host': 'host1'}}
def test_hook2(host):
return {'data': {'host': 'host2'}}
with ScopeAndHooks(('host', Hook(test_hook1)), ('host', Hook(test_hook2))):
loader = DataLoader(join(dirname(__file__), 'data/order'))
eq_(loader(self.component)['data'],
{'default': 'default',
'component': 'component',
'role': 'role',
'environment': 'environment',
'host': 'host2'})
def test_filter_func(self):
"""
Test that hooks only run if the filter_func returns true
"""
def test_hook1(host):
return {'data': {'host': 'host1'}}
def test_hook2(host):
return {'data': {'host': 'host2'}}
with ScopeAndHooks(('host', Hook(test_hook1)),
('host', Hook(test_hook2, lambda componentdef: False))):
loader = DataLoader(join(dirname(__file__), 'data/order'))
eq_(loader(self.component)['data'],
{'default': 'default',
'component': 'component',
'role': 'role',
'environment': 'environment',
'host': 'host1'})
|
STACK_EDU
|
Frameworks are an intrinsic part of web development, web applications, and related technology. Web developers leverage the best web development frameworks to create rich, compatible web apps and websites.
With so many web development frameworks available, the choice can confuse even experienced developers. You need to think carefully about selecting one, since future website work depends on the chosen framework. Given the growing need to find the best software for the job, we have put together a list of essential front-end development tools. Check out the list below for the best front-end development tools and frameworks.
It’s a first-rate code editor that comes with well-designed features with an ultra-speedy user interface. Sublime text lies in the program with a vast array of keyboard shortcuts. It can perform simultaneous editing by making quiet navigation files.
It’s Google’s built-in chrome developer tool that is available for both Safari and Chrome. It allows developers to access the internals of web apps. On top of it, there are network tools that help to optimize loading flows, giving timelines and understanding what the browser is doing at the moment.
It’s a nightmare of every developer that when new project work comes, you get screwed up. But with rolling over with the project with GitHub, you have a great way to move ahead. The repository hosting service of GitHub comes with a rich open-source development community that provides components for bug tracking, task management, and wikis for every project.
AngularJS is an open-source web application framework and a toolbox for the front-end developer. Developed by Google, AngularJS extends your application's HTML syntax; the result is an expressive environment that is quick to create with HTML. It comes with data binding and other invaluable features in a front-end kit.
If you are tired of typing the same styles for every container, it's time to notice the patterns that emerge as you build a front-end application. UI frameworks attempt to solve this problem by abstracting the common elements into reusable modules for new applications. Bootstrap is the most widely used such framework; its tools normalize stylesheets and build modal objects, and it can dramatically cut down the amount of code required to build a project.
Sass is a time-saving web tool and probably the most popular CSS preprocessor; at 8+ years old, it defined the genre of modern CSS preprocessors. Plain CSS is usually not DRY, but Sass's combination of nesting, variables, and mixins renders simple CSS when compiled while keeping your source more readable and DRY.
React, developed at Facebook, is one of the simplest frameworks to learn. It was created to fix code-maintainability issues caused by the constant addition of features to the application. It’s an open-source framework that stands out because of its virtual document object model (DOM), making it an ideal choice when you anticipate high traffic and need a stable platform to handle it.
Angular is the one TypeScript-based framework in this list. Launched in 2016 and developed by Google, it bridges the gap between conventional concepts and the increasing demands of modern technology. The framework’s standout feature is two-way data binding, meaning there is real-time synchronization between the view and the model. A few of the companies that use Angular are Blender, Forbes, Xbox, and BMW.
Vue is one of the most popular front-end frameworks and is simple to use. It removes much of the complexity that Angular developers face. It is small in size and offers significant benefits such as a virtual DOM, a component-based architecture, and two-way binding. Vue is versatile: it handles both simple and dynamic processes with ease and optimizes app performance to tackle complexity. Its primary users include 9GAG, Alibaba, and Xiaomi.
This framework, developed in 2011, is component-based and offers two-way data binding similar to Angular. It helps build complex mobile and web applications seamlessly using modern-day technologies. It is considered the toughest framework on this list to learn because of its rigid and conventional structure.
Backbone.js is one of the most accessible frameworks and allows you to develop single-page applications with ease. It is based on the MVC architecture, which allows the implementation of component logic. Besides this, the Backbone.js ecosystem includes tools like Marionette, Handlebars, Thorax, and more.
Above were a few recommendations, based on our research, for the best front-end tools and frameworks for developing everything from MVPs to large-scale projects. Ultimately, choose the framework that suits the nature of your project. Once you implement the tools and framework, test how your websites and web apps render across multiple devices. Choose us to hire front-end developers and build a website or application based on your needs.
|
OPCFW_CODE
|
EWGT2016 Call for Papers
The EURO Working Group on Transportation 2016 Meeting will offer keynote plenary talks; workshops; tutorials; and oral presentations from academics, industry experts, students, and representatives of government agencies to discuss recent methodological developments and application challenges in traffic, transportation, and logistics systems. In conjunction with the motivation of the EURO Working Group on Transportation, the main theme for the conference has been specified as the "Simulation and Optimization of Traffic and Transportation Systems".
In conjunction with the theme of EWGT2016, the leading scientists who have confirmed keynote talks are:
- Peter Wagner, German Aerospace Centre (DLR) and TU Berlin, Germany
- Jaume Barceló, UPC-Barcelona Tech, Spain
- Michel Gendreau, CIRRELT and MAGI, École Polytechnique, Montréal, Canada
- Mohamed Abdel-Aty, University of Central Florida, US
More on keynote talks is available at Keynote Speakers.
As in the previous editions of the series, the topics of interest include, but are not limited to, the following:
- Air transport operations
- Applications of vehicular networks, including ITS
- Advanced communication technologies (V2V, V2I, V2X ..) and automated transportation systems
- Dynamic network modeling and optimization
- Dynamic fleet management
- Simulation and optimization of traffic, transportation, and logistics systems
- Meta-heuristic methods in optimization
- High performance computing in traffic simulation
- Vehicle routing
- Emergency management
- Human factors, travel behavior and choice modeling
- Decision analysis
- Transportation economics
- Modeling, control, and management of traffic and transportation systems
- Supply chain management
- Transportation systems planning and operations
- Innovative and multi-modal transport
- Impact assessments of transportation networks on safety, efficiency, and environment
- Energy efficiency and emission reduction
- Land use and transport interactions
- Sustainability in transportation planning and traffic engineering
- Big data analytics for travel data
- Planning, design, and operation of transportation networks
- Smart cities and smart mobility
Participants interested in contributing to EWGT2016 with a paper should submit an extended abstract of at most 4 pages before February 15, 2016 (extended from the original deadline of January 20, 2016) via EasyChair using the EWGT2016 template. See Submissions for more.
Selected papers of the conference will be invited to be reconsidered for publication in the theme-based special issues of the journals, including IEEE Intelligent Transportation Systems Magazine, Transportation Research Part C: Emerging Technologies and Accident Analysis & Prevention. See Journal Special Issues for more.
Participants interested in taking advantage of early-bird registration should pay and register before June 30, 2016 via the EWGT2016 Online Registration System. See Registration for more on registration deadlines and fees.
Follow us @EWGT2016 to get prompt updates!
|
OPCFW_CODE
|
One vacant position for a full-time Senior Scientist (m/f/d) at the Chair of Cyber-Physical-Systems in the Department Product Engineering
Reference number: 2211WPD
One position for a full-time Senior Scientist (m/f/d) at the Chair of Cyber-Physical-Systems in the Department Product Engineering, starting 01.01.2023 or at the earliest possible date. The initial contract will be limited to one year with the option of extension to a permanent position.
Salary Group B1 according to Uni-KV, minimum monthly salary excluding special payments (SZ): € 4.061,50 for 40 hours per week (14 times a year).
The group's research topics are autonomous systems, machine and deep learning, embedded smart sensing systems and computational models. The senior scientist will work on one of these topics (or combinations of them), where a focus will be developed jointly based on the experience of the candidate. The researcher will be further engaged in teaching (e.g., student supervision), project management and funding applications.
What we offer:
We offer a research position in fascinating fields with the opportunity to develop own ideas and implement them independently. Further, the researcher is part of a young and newly formed team, learns and assumes leadership responsibilities with coaching sessions, and receives targeted career guidance for a successful scientific career.
Required qualifications:
Degree in computer science, physics, telematics, electrical engineering, mechanics, robotics or mathematics with a PhD. Experience in at least one of the topics of machine learning, neural networks, robot learning or learning sensor systems.
Willingness and ability to co-supervise scientific work in research including related publication activities. Programming experience in one of the languages C, C++, C#, JAVA, Matlab, Python or similar. Ability to work in a team, sociability, self-motivation and reliability in a growing team are expected.
Desired additional qualifications:
Scientific experience demonstrated by patents and publications in international conferences and journals on machine learning, neural networks, robotics or sensing. Experience in obtaining external funding and in collaborations with industrial partners. Good English skills and willingness to travel for research and to give technical presentations.
A complete application includes (1) a detailed curriculum vitae, (2) a letter of motivation with a reference to the desired field of research and teaching from the above-mentioned topics, (3) two letters of recommendation, (4) the PhD thesis as a PDF file, (5) all relevant certificates of prior education for bachelor's, master's and PhD studies, (6) name, email and phone number of two additional references to contact, and (7) previously published or submitted publications or patents as links or PDF files.
Application deadline: 12.12.2022
The Montanuniversitaet Leoben intends to increase the number of women on its faculty and therefore specifically invites applications by women. Among equally qualified applicants, women will receive preferential consideration.
For the application please use the online form on the homepage:
https://www.unileoben.ac.at/jobs
Contact Information
|
OPCFW_CODE
|
M: Goism - Use Go instead of Emacs Lisp inside Emacs - demiol
https://github.com/Quasilyte/goism
R: Tenobrus
I haven't taken an incredibly close look at this, but it seems like a pretty
bad idea. Elisp is definitely not a great language, and I'd like an
alternative as much as the next Emacs user. But I feel pretty strongly that
any alternative has to be a Lisp, or very close to one. Code-is-data/data-is-code
is very important for the more "config-file" aspects of configuring
Emacs. Being able to use and write DSLs to succinctly encode exactly how you
want aspects of the editor to behave is a critical strength. I've tried
systems that were configurable in Python and other good-but-non-Lisp
languages, and it's always much more annoying, because the language is
(purposefully) limited w.r.t metaprogramming and possible DSLs. That's a good
thing when optimizing for maintainability by others and obviousness, but not
so much when optimizing for maximum personal customizability.
Go seems like quite possibly the polar opposite of this, going as far as
possible in the "keep it simple and understandable by literally everyone by
cutting out many techniques for metaprogramming and code reuse" direction.
Which perhaps is the point, but if so it seems like that point misses much of
the draw of Emacs?
While this is definitely impressive technically, I think Guile Emacs is a much
more plausible option.
R: quasilyte
I used Emacs Lisp for scripting tasks like code and data generation. It is
great to have the ability to evaluate a form right at the spot where you want
the results to be inserted. This kind of code does not require AST
manipulations or macros.
Also, some of my projects that became bigger than 1000 LoC could benefit from
static typing and (subjectively) better tooling.
By the way, I think extending Emacs in Racket would be great; I just do not
have an idea of how to implement that integration smoothly.
R: kkylin
You may already know about Guile-Emacs, but in case not, take a look at
[https://www.emacswiki.org/emacs/GuileEmacs](https://www.emacswiki.org/emacs/GuileEmacs)
. Guile is not Racket, or more precisely Racket is no longer exactly Scheme,
but they are closer to each other than to elisp.
R: busterarm
The idea of using a language without Map/Reduce/Filter as a substitute for a
Lisp, in an editor built around Lisp...seems vaguely antithetical to me.
R: quasilyte
You can call map/reduce/filter from Go code: `xs := lisp.Mapcar(f, ys)`.
Mapconcat is already used inside runtime implementation:
[https://github.com/Quasilyte/goism/blob/master/src/emacs/rt/...](https://github.com/Quasilyte/goism/blob/master/src/emacs/rt/builtin.go)
(Print and Println functions).
R: 43224gg252
Why did you get downvoted for this?
R: josteink
I agree Emacs Lisp is pretty inferior as far as lisps go, but IMO this seems
pretty misguided.
Technically impressive, I'm sure, but will it be around for another 30 years?
If someone writes a module using this, will I be able to rely on that module
keeping on working for the years to come?
R: bigdubs
Why wouldn't that be the case? The golang authors have been pretty strong on
backwards compatibility so far (even though it is admittedly a young
language).
R: kornish
Is josteink talking about Go or Goism? Probably the latter.
R: justinmk
Neovim has a go client[1] for nvim's RPC API. Vim doesn't have bytecode to
speak of, so of course there's no transpiler step. But it removes the friction
of integrating between nvim <-> go, and that is "useful when it's useful". In
particular it enabled a new UI[2] to be built in go.
[1] [https://github.com/neovim/go-client](https://github.com/neovim/go-client)
[2] [https://github.com/dzhou121/gonvim](https://github.com/dzhou121/gonvim)
R: ajarmst
Dear God, no. No. Please, just give me Guile. Please. We've been so good. So
patient. Guile.
R: flavio81
What a sad idea.
Seriously, is it that hard to learn Emacs Lisp? Even if one is using Go or Rust
(etc) at work, any programmer worth his salt should at least already be
familiar with Lisp syntax.
It is one of the easiest languages to learn!!
R: quasilyte
I love Emacs Lisp.
Emacs has really good support for it which continues to improve over time.
But.. I love more than one language (and more than one Lisp for sure). Will
you try to persuade me that I am wrong in that regard?
R: gkya
Emacs only having elisp allows me to fix the third-party code that I have in my
config, and simplifies all the things. Though emacs already has C too now; there
is module support. I guess it could be possible to use Rust via that interface
too, and maybe Go. But better to keep these to a minimum, because elisp is one
of the things that makes emacs great.
R: jlarocco
Another case where it'd be nice to have a "Why are we doing this?" section in
the README.
If it's just a demo then it's neat. Interesting that it can be done.
On the other hand, if it's a real push to get people scripting Emacs with Go,
then I don't see the point at all. It's solving a problem people don't really
have.
R: hk__2
> It's solving a problem people don't really have.
Isn't "I want to script Emacs but I don't like LISP" a problem to solve?
R: ue_
A better thing to answer would be why people prefer less powerful languages to
the more powerful, and what can be done about it? Framed that way, this seems
like an XY problem.
R: jchw
>why people prefer less powerful languages to the more powerful
Clearly, power is the only measure one should consider when picking a
programming language. And Lisp surely has more power than Go. I'm going to
guess 36.1% more power, to be exact.
>and what can be done about it
We could always start performing eugenics to get rid of them.
... Okay, I apologize for being an ass. But I hope my points aren't lost; the
way you're phrasing things makes it feel like you're bitter that anyone would
consider using something that's not Lisp.
R: ue_
>I'm going to guess 36.1% more power, to be exact.
Where's this figure from?
>We could always start performing eugenics to get rid of them.
I meant more like what we could do to improve the more powerful languages to
make them more appealing.
I'm less bitter than I am perplexed about the choice to use Go over Lisp.
R: jchw
It was sarcasm. Obviously one can't objectively measure the power of two
languages. I assume you mean it has more powerful metaprogramming than Go,
which is what everyone else is saying.
But I find the rhetoric quite closed-minded, because I doubt any of the people
replying here who've just found out about this and are calling it a bad idea
have actually tried it yet. Clearly, there are advantages to Lisp that you
would lose if you tried to do something in Go. But nobody is acknowledging the
reverse, that there may also be advantages in Go to Lisp that are unforeseen,
even for this specific task.
All in all this is a disappointing development. I don't think it would go over
the same in other text editor communities, because people seem very defensive
about the use of languages in place of Lisp where they might not be so
defensive about the use of languages in place of say, VimScript.
R: lngnmn
...and write all the types (without generics) - no, thank you!
Sarcasm aside, Lisp is as much as possible well-suited for the job of text and
AST processing, Emacs is one of the Lisp's "killer apps" and the second best
"case study" after classic old-school AI code (PAIP).
R: fithisux
Congratulations, keep up the good work.
R: testcross
How does it compare to something like
[https://github.com/janestreet/ecaml](https://github.com/janestreet/ecaml) ?
R: quasilyte
I see three main approaches for the tasks projects like ecaml and goism try to
solve:
1\. Use a plugin system (ecaml)
2\. Transcompile to a target language (goism and emscripten-like platforms)
3\. Embed another VM inside Emacs and call its eval
There are many differences between these approaches and I am not sure one of
them is objectively better as a general solution.
For the end users, all of these approaches can deliver a good level of
integration (they require different sets of tricks to achieve that).
R: kmicklas
Replacing a 50s language with a 60s language! Amazing!
R: solidsnack9000
Being able to script the editor in something other than Lisp seems good to me
(despite the objections of others in this thread). After all, many people are
just scripting settings and stuff for themselves. Might as well not make them
jump through hoops to do so.
I wonder about the implementation strategy -- why compile to Emacs LISP
instead of doing a plugin (FFI) or RPC style setup?
R: wcummings
What happens if I hover over a goism function and hit M-.? Do I get dumped
into the go source?
R: quasilyte
Currently, no. Hope I get your question right..
Name mangling scheme preserves fully qualified package path. All goism sources
live inside GOPATH (1), so nothing stops us from implementing a jump to Go
definition.
For given `goism-foo/bar.baz` Emacs symbol, Go definition can be found in
`GOPATH/src/foo/bar/` package. Exact location can be found by using existing
Go tools (simple grep-like solution can work, too).
(1) It can change in future; see
[https://news.ycombinator.com/item?id=13368846](https://news.ycombinator.com/item?id=13368846)
and even more relevant:
[https://github.com/golang/go/issues/17271](https://github.com/golang/go/issues/17271)
R: dreamcompiler
Why?
R: ww520
Is it possible to have a language transpile/compile to elisp?
R: quasilyte
It is possible to emit Emacs Lisp instead of bytecode/lapcode. This was the
first code generator target actually.
Easier to debug, simpler to trust (for the end user) and not that hard to
generate.
The problem is that it is harder to implement some features of Go in terms of
Emacs Lisp without going down to the virtual machine level. Best examples are
arbitrary return statements (can be emulated by throw/catch) and goto.
R: ruricolist
Could you implement arbitrary return with cl-block and cl-return-from?
R: quasilyte
With minor Emacs Lisp compiler patch (addition of %return, %goto and %label
intrinsics), it is now possible to output Lisp that is optimal.
Possible implementation (about 20 lines of code):
[https://github.com/Quasilyte/goism/issues/57](https://github.com/Quasilyte/goism/issues/57)
Not sure if "defadvice" around "byte-compile-form" is acceptable for all
users.
|
HACKER_NEWS
|
"""
DeleteStack
Deletes a cloudformation stack by its name
"""
from botocore.exceptions import WaiterError
from cloudwedge.utils.logger import get_logger
from cloudwedge.utils.sts import get_spoke_session
LOGGER = get_logger('DeleteStack')
# Setup boto3 session
SESSION = None
CLIENT_FORMATION = None
class DeleteStack():
    def __init__(self, target_account_id=None, event=None):
        self.target_account_id = target_account_id
        # Pull values off event
        self.stack_owner = event['stack']['stackOwner']
        self.stack_name = event['stack']['stackName']

    def run(self):
        """Run"""
        global SESSION
        global CLIENT_FORMATION

        if not SESSION:
            SESSION = get_spoke_session(self.target_account_id)

        if not CLIENT_FORMATION:
            CLIENT_FORMATION = SESSION.client('cloudformation')

        # Delete the stack, catch the result to know if it finished
        stack_deleted = self._delete_stack()

        output = {
            'inProgress': not stack_deleted,
            'stackOwner': self.stack_owner,
            'stackName': self.stack_name,
            'targetAccountId': self.target_account_id,
            'waitAttempts': 0
        }

        return output

    def _delete_stack(self) -> bool:
        """Delete stack and do a quick check if it is deleted or not"""
        try:
            LOGGER.info(f'Attempting to delete stack: {self.stack_name}')

            CLIENT_FORMATION.delete_stack(StackName=self.stack_name)

            # Wait just a little, it could have been a speedy delivery
            try:
                CLIENT_FORMATION.get_waiter('stack_delete_complete').wait(
                    StackName=self.stack_name,
                    WaiterConfig={
                        'Delay': 1,
                        'MaxAttempts': 1
                    }
                )
                LOGGER.info('Stack delete is completed')
                return True
            except WaiterError:
                LOGGER.info('Stack delete is still in progress, even after a little wait')
                return False
        except Exception as err:
            LOGGER.error(f'Error deleting stack: {err}')
            raise err
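The `run()` output above is shaped for a caller that re-checks the stack until the delete finishes (the `waitAttempts` counter suggests a retry loop in the calling workflow). A minimal sketch of such a loop follows; the helper name, the `check_stack_gone` callback, and the retry limit are all illustrative assumptions, not part of this module:

```python
# Hypothetical polling helper: the function name, the `check_stack_gone`
# callback, and the retry limit are illustrative assumptions, not part of
# the original module.
def poll_until_deleted(state, check_stack_gone, max_attempts=5):
    """Re-check a DeleteStack run() output until the stack is gone.

    `state` is the dict returned by DeleteStack.run(); `check_stack_gone`
    is a callable returning True once the stack no longer exists.
    """
    while state['inProgress'] and state['waitAttempts'] < max_attempts:
        state['waitAttempts'] += 1
        if check_stack_gone(state['stackName']):
            state['inProgress'] = False
    return state


# Simulate a stack that disappears on the third check.
calls = {'count': 0}

def fake_check(stack_name):
    calls['count'] += 1
    return calls['count'] >= 3

result = poll_until_deleted(
    {'inProgress': True, 'stackName': 'demo', 'waitAttempts': 0},
    fake_check,
)
print(result)  # {'inProgress': False, 'stackName': 'demo', 'waitAttempts': 3}
```

Capping the attempts keeps a stuck delete from looping forever, mirroring why `waitAttempts` is carried in the output dict.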
|
STACK_EDU
|
Create an application on a different web site in IIS on a different machine
We are building an ASP.NET system which is used to publish .NET-based web services.
It has a simple logic: we have many kinds of web service templates under the App_Data folder.
For example,our application url is : http://localhost/serviceManager
And this is the folder of our application:
App_Data
ws1.zip
ws2.zip
Bin
modules
service
web.config
Note: When this system is first visited, we will check whether the
"service" directory is an "application" in IIS. If not, we will make it one.
Through our system, the user fills in some required information step by step (especially the service name, for example "test"), and we gather it for later use.
Then we create a folder named test under the service directory, find the corresponding web service template (for example ws1.zip), extract its content to the test folder, and modify the web.config file according to the information we got.
At last we set test as an application in IIS.
Now the folders of our application will change like this:
App_Data
ws1.zip
ws2.zip
Bin
modules
service
test
App_Data
service.asmx
web.config(filled with gathered information)
web.config
And a new web service is deployed; we can access it using:
http://localhost/serviceManager/service/test/service.asmx (note: service.asmx exists in every template)
This is what we can do so far.
However the client have further requirements now:
1. Separate the manager system from the created services.
The client wants to expose the created services to the internet and keep our manager system on the intranet only; this is for security.
As you can see, the created services are put under our manager application. We need to deploy them on a different port (in IIS, a different port means a different web site, doesn't it?).
For example,our application url does not change: http://localhost/serviceManager
But the created service will change to something like this: http://localhost:8888/service/test/service.asmx
2. Implement clustering (especially for the created services).
Since the created services are mass-oriented, the client requires a cluster for performance reasons.
However, we have not found anything like WebLogic for IIS clustering. So we think the only way is to make a copy of the service once it is created and put it on another server (the cluster node) with the same port and virtual context name.
If so, we need to create an "application" in IIS on a different machine, as in the topic of my post.
We have no idea now; can anyone give us some suggestions?
BTW, we have to support IIS6 - IIS7.5.
If you solved your own problem you should post an answer; this will get you rep points and possibly a new SO badge.
@DourHighArch: In fact we do not have a successful and complete solution. But as you said, I can post what we have done.
I am the asker of this question.
We have not found a perfect solution yet, but we have done some work.
Our ASP.NET application has been packaged as a Windows installer. When the application is installed, some necessary jobs are done:
1 Create two web sites on different ports, like 8000 (for the service manager) and 8001 (for the created services).
2 Make the root directory of the 8001 web site a Windows shared folder.
3 Deploy an IIS application named IISManager. This application is a web service which will turn a folder under a web site root directory into an IIS application.
Now when a user accesses http://ip:8000/servicemanager and creates a service according to a template, we will do the following things:
1 Copy the extracted and configured folder to the different cluster nodes via the Windows shared folders.
2 Call each node's IISManager service in turn to make the folders from step 1 into IIS applications.
If those steps succeed, we can access the newly created service:
http://ip:8001/servicename/service.asmx
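The two copy-and-register steps above can be sketched as a small fan-out routine. This is purely illustrative: the function, the `create_app` callback standing in for the IISManager web service call, and the share paths are all assumed names, not the actual implementation:

```python
import shutil
from pathlib import Path


def deploy_service(service_dir, nodes, create_app, copy_tree=shutil.copytree):
    """Fan a configured service folder out to every cluster node, then ask
    each node's IISManager web service to register it as an IIS application.

    `nodes` maps a node name to that node's shared-folder root.
    `create_app(node, service_name)` should call the node's IISManager
    endpoint and return True on success. Both callables are injectable so
    the flow can be exercised without real shares or IIS.
    """
    service_name = Path(service_dir).name
    results = {}
    for node, share_root in nodes.items():
        # Step 1: copy the extracted, configured folder to the node's share.
        copy_tree(service_dir, str(Path(share_root) / service_name))
        # Step 2: register the copied folder as an IIS application.
        results[node] = create_app(node, service_name)
    return results


# Exercise the flow with fake callables (no real IIS involved).
recorded = []
out = deploy_service(
    '/tmp/extracted/test',
    {'node1': '/shares/node1', 'node2': '/shares/node2'},
    create_app=lambda node, name: recorded.append((node, name)) or True,
    copy_tree=lambda src, dst: recorded.append(('copy', dst)),
)
```

Having `create_app` report per-node success also gives a natural place to retry the flaky IIS application creation mentioned in problem 1.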
That's what we have done now.
But we still have a lot of problems:
1 The IISManager does not work every time; the IIS application creation sometimes fails.
2 The created services on the different nodes are completely separate applications, so it will be hard to share session state between them.
I hope someone has a better solution.
|
STACK_EXCHANGE
|