draw an angled square using sine and cosine
This is my first time posting on a forum, but I'll just jump in and ask. I am trying to draw a rectangle with x, y, width, height, and angle. I do not want to create a Graphics2D object and use transforms; I'm thinking that's an inefficient way to go about it. Instead, I am trying to draw a rotated square using a for loop that iterates across the square's width, drawing a line of the square's height at each iteration. My understanding of trig is really lacking, so my current code draws a funky triangle. If there is another question like this with an answer, sorry about the duplicate. If you have any pointers on my code, I would love some corrections.
Edit: Sorry about the lack of a question. I need to know how to use sine and cosine to draw a square or rectangle rotated about its top-left corner: use sin and cos of the angle to get the coordinates (x1, y1), then use sin and cos of the angle plus 90 degrees to get the coordinates (x2, y2), with the counter variable going from left to right, drawing lines from top to bottom that change with the angle.
for (int s = 0; s < objWidth; s++){
int x1 = (int)(s*Math.cos(Math.toRadians(objAngle)));
int y1 = (int)(s*Math.sin(Math.toRadians(objAngle)));
int x2 = (int)((objWidth-s)*Math.cos(Math.toRadians(objAngle+90)));
int y2 = (int)((objHeight+s)*Math.sin(Math.toRadians(objAngle+90)));
b.setColor(new Color((int)gArray[s]));
b.drawLine(objX+x1, objY+y1, objX+x2, objY+y2);
}
I suppose he just needs to rotate the rectangle, but doesn't know why his solution is not working.
Aah a rotation. I figured OP meant a skewed rectangle of sorts. Rotation does make more sense, I guess.
@Andrew Hmm I think the question is sufficiently clear (given that OP has abandoned alternate means and wants to do it on OP's own): "How do I draw an angled square?" It's just that "angled square" is up for interpretation. :)
It is called the Rotation matrix.
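In matrix terms (writing θ for the angle), rotating a point (x, y) about the origin gives:

```latex
x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta
```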
If your lines have the following coordinates before rotation:
line 1: (0, 0) - (0, height)
line 2: (1, 0) - (1, height)
...
line width: (width, 0) - (width, height)
Then applying the rotation matrix transform will help you:
for (int s = 0; s < objWidth; s++){
int x1 = (int)(s * Math.cos(Math.toRadians(objAngle)));
int y1 = (int)(s * Math.sin(Math.toRadians(objAngle)));
int x2 = (int)(s * Math.cos(Math.toRadians(objAngle)) - objHeight * Math.sin(Math.toRadians(objAngle)));
int y2 = (int)(s * Math.sin(Math.toRadians(objAngle)) + objHeight * Math.cos(Math.toRadians(objAngle)));
//the rest of the code
}
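If it helps, here is a compilable sketch of the same idea. The class and method names are my own; `objX`, `objY`, and the Graphics object `b` from the original post are assumed to exist where you draw:

```java
public class RotatedRect {
    // One vertical slice of a rectangle rotated by objAngle degrees
    // about its top-left corner. (x1, y1) walks along the rotated top
    // edge; (x2, y2) is that point pushed down the rotated height via
    // the rotation matrix. Add the pivot (objX, objY) before drawing,
    // e.g. b.drawLine(objX + x1, objY + y1, objX + x2, objY + y2).
    static int[] rotatedSlice(int s, int objHeight, double objAngle) {
        double rad = Math.toRadians(objAngle);
        int x1 = (int) Math.round(s * Math.cos(rad));
        int y1 = (int) Math.round(s * Math.sin(rad));
        int x2 = (int) Math.round(s * Math.cos(rad) - objHeight * Math.sin(rad));
        int y2 = (int) Math.round(s * Math.sin(rad) + objHeight * Math.cos(rad));
        return new int[] { x1, y1, x2, y2 };
    }

    public static void main(String[] args) {
        // At 0 degrees each slice is a plain vertical line.
        int[] flat = rotatedSlice(3, 10, 0);
        System.out.println(flat[0] + "," + flat[1] + "," + flat[2] + "," + flat[3]);
        // At 90 degrees the rectangle lies along the y axis.
        int[] turned = rotatedSlice(5, 10, 90);
        System.out.println(turned[0] + "," + turned[1] + "," + turned[2] + "," + turned[3]);
    }
}
```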
Hope I didn't make any mistakes.
This fixed my rotation, thank you very much for taking the time to answer my question. I had the sine and cosine functions being used improperly: I was not subtracting height*sin(angle) from s*cos(angle) for x2, and was not adding height*cos(angle) to s*sin(angle) for y2. Thanks again.
Do you mean like a "rhombus"? http://en.wikipedia.org/wiki/Rhombus (only standing, so to speak)
If so, you can just draw four lines, the horizontal ones differing in x by an amount of xdiff = height*tan(objAngle).
So that your rhombus will be made up by lines with points as
p1 = (objX,objY) (lower left corner)
p2 = (objX+xdiff,objY+height) (upper left corner)
p3 = (objX+xdiff+width,objY+height) (upper right corner)
p4 = (objX+xdiff+width,objY) (lower right corner)
and you will draw lines from p1 to p2 to p3 to p4 and back again to p1.
Or did you have some other shape in mind?
I don't think I need that, but it looks interesting to look into. Thanks for taking the time to put your answer in.
|
STACK_EXCHANGE
|
With Paul Brandlin, Marketing Director at Dermaclara
How Dermaclara increased ROAS by up to 50% with FERMÀT
KEY NOTABLE METRICS
Decrease in CPR
Outperforms Other Partner Platforms
Learn how Dermaclara partnered with FERMÀT to share the story behind their anti-aging, stretch mark-removing medical-grade silicone patches.
Dermaclara offers a natural solution for scar removal and stretch mark treatment through their 100% medical-grade silicone patches.
Their team partnered with FERMÀT to create compelling, shoppable videos to lift their conversion rate and encourage more organic traffic. Today, they've seen gains like a 25-50% increase in ROAS and a significant decrease in CAC.
"When we were introduced to FERMÀT, we knew it was a no-brainer. To be able to tell our story in a shoppable video was incredible for us." -Paul Brandlin, Marketing Director at Dermaclara
Dermaclara is a D2C brand with strong roots in eCom. The team knows the importance of sharing the brand's story to engage customers and drive sales – and they've seen success through short, shoppable ad videos. They wanted to build on that success by:
Sharing shoppable ads on social media
Making organic social media posts and stories shoppable
Integrating those shoppable posts with their online Shopify-based presence
Enter FERMÀT – which offered precisely what Dermaclara was looking for.
Two weeks into their partnership with FERMÀT, an A/B test showed that FERMÀT won out against its competition in sending direct traffic to the Dermaclara website. That same test yielded an increased ROAS of 25-50% and decreased cost per acquisition by $5.
Today, Dermaclara monitors their return on ad spend with FERMÀT and scales their budget accordingly – ensuring their ROAS continues to hold steady thanks to FERMÀT's results:
Ease of use – The Dermaclara team can post shoppable videos to social media with a single click and expanded from Instagram to Facebook and TikTok as a result, saving their agile teams both time and money.
Simple management – Dermaclara's social media team has found it easier than ever to drive organic traffic from social media because of how easily FERMÀT can integrate into organic posts and stories.
According to Paul Brandlin, Marketing Director at Dermaclara, their partnership with FERMÀT has done more than drive traffic – it's also driven novel, creative ideas.
"We've already been working with FERMÀT to tell more segmented stories through more proofs of concept on various platforms. We're going to continue to use them in every feasible way."
These results have empowered Dermaclara to explore new, innovative visual storytelling solutions. As Paul says, FERMÀT has shown them how to best tell their brand's story on every possible platform.
|
OPCFW_CODE
|
// Classy.js contains class-based conveniences
Event.observe(document, 'dom:loaded', function() {
var Classy = { }; // Setup the namespace
Object.extend(Classy, {
// Validator: Simple client-side form validation.
//
// Usage: Add the class name "required" to any required fields.
// When the user attempts to submit the form, any required
// fields left blank will have the class name "err" added to
// them, which you can style appropriately.
Validator: {
// TODO: Use event delegation for this.
findForms: function() {
Classy.Validator.forms = $$('.required').pluck('form').uniq();
Classy.Validator.forms.invoke('observe', 'submit', Classy.Validator.performCheck);
},
performCheck: function(event) {
function isBlank(element) { return $F(element).blank(); }
var form = event.element();
var requiredInputs = form.select('.required');
if ( requiredInputs.any(isBlank) ) {
requiredInputs.invoke('addClassName', 'err');
event.stop();
}
}
},
// Confirmable: Simple action confirmation controls
//
// Usage: Add the class name "confirmable" to any link or form that needs confirmation.
// When the user attempts to click/submit the "confirmable" element, the element's title
// attribute will be used as a confirmation message. If the user clicks "Cancel", the
// event will be stopped. Otherwise, it will be allowed to continue.
//
// Note: This script requires event_delegations.js and form_submit_bubbler.js
Confirmable: {
setup: function() {
var body = $$('body')[0];
body.delegators('form:submitted', { 'form.confirmable': Classy.Confirmable.submitted });
body.delegators('click', { 'a.confirmable': Classy.Confirmable.clicked });
},
submitted: function(event) {
var element = event.element();
var message = element.readAttribute('title');
if ( !confirm(message) ) { event.stop(); event.memo['originalEvent'].stop(); }
},
clicked: function(event) {
var element = event.element();
var message = element.readAttribute('title');
if ( !confirm(message) ) { event.stop(); }
}
},
// Defaultable: Simple default text behaviors
//
// Usage: Add the class name "defaultify" to text fields or textareas. When a user
// focuses on the element, the default text will be cleared, allowing for user input. When the element
// loses focus, if the user has entered text, the text will remain. Otherwise, the
// default text will be restored.
Defaultable: Class.create({
initialize: function(element) {
this.element = $(element);
this.value = $F(element);
this.setupBehaviors();
},
setupBehaviors: function() {
this.element.observe('focus', this.clear.bindAsEventListener(this));
this.element.observe('blur', this.restore.bindAsEventListener(this));
},
clear: function(event) {
var value = $F(this.element);
if ( value.blank() || (value == this.value) ) {
this.element.removeClassName('hasContent');
this.element.clear();
}
},
restore: function(event) {
var value = $F(this.element);
if ( value.blank() ) {
this.element.removeClassName('hasContent');
this.element.value = this.value;
}
else { this.element.addClassName('hasContent'); }
}
})
});
Classy.Validator.findForms();
Classy.Confirmable.setup();
$$('.highlightify').invoke('highlight');
$$('.fadify').invoke('fade');
$$('.pulsate').invoke('pulsate');
$$('textarea.defaultify').each(function(element) { new Classy.Defaultable(element); });
$$('input[type=text].defaultify').each(function(element) { new Classy.Defaultable(element); });
});
|
STACK_EDU
|
Stuck on getting proxy to work
I'm using JavaScript API 3.24 for reference. I want my application to automatically allow users to access secured services based on background authentication via a token (this way they don't need to login). I have reviewed the documentation and many questions here, but I just can't seem to get it to work. Sorry for the length of this post, but I want to explain fully what I have tried.
I've secured one geoprocessing service called Habitat Management. It is located in the HabitatManagement folder, the service is titled HabitatManagement, and it contains a geoprocessing task called HabitatManagement. Essentially, the url for the secured service looks like this:
https://www.mydomain.com/myServer/rest/services/HabitatManagement/HabitatManagement/GPServer/HabitatManagement
The actual web mapping application resides inside of a .NET application. The URL of the .NET application is this:
https://www.mydomain.com/appName
The actual web map url is this:
https://www.mydomain.com/appName/HabitatMap/HabitatJSMap
I have my proxy config file set up like so:
<ProxyConfig allowedReferers="*"
mustMatch="false"
logFile="proxyLog.txt"
logLevel="Warning">
<serverUrls>
<serverUrl url="https://www.mydomain.com/myServer/rest/services/HabitatManagement/HabitatManagement/GPServer/HabitatManagement"
username="username"
password="password"
matchAll="true"/>
</serverUrls>
</ProxyConfig>
I have the proxy JS code like this:
urlUtils.addProxyRule({
urlPrefix: "https://www.mydomain.com",
proxyUrl: "https://www.mydomain.com/appName/proxy/proxy.ashx"
});
If I go to the proxy page following this URL:
https://www.mydomain.com/appName/proxy/proxy.ashx
It works as expected; it says: Config File: "OK" Log File: "OK"
When I test this URL, it shows the login screen for my REST services, even though the token should be passing the username and password so I can see the services, right? That's issue #1 that I don't understand.
https://www.mydomain.com/appName/proxy/proxy.ashx?https://www.mydomain.com/myServer/rest/services/HabitatManagement/HabitatManagement/GPServer/HabitatManagment
Additionally, when I go to the web map and try to access the service, it still makes me log in with this message:
Please sign in to access the item on https://www.mydomain.com/myServer
(HabitatManagement/HabitatManagement)
If I login, things work fine - it appears the token is generated and appended to the URL as it should be.
I have also tried changing my serverUrl to something very general, with no luck. Ideally, the serverUrl would be more specific than this, since some of our services don't require authentication and thus don't need to run through a proxy. This also just takes me to the login page for my REST services.
<serverUrl url="https://www.mydomain.com"
username="username"
password="password"
matchAll="true"/>
I'm not sure if my proxy is located in the wrong spot in my .NET application, if my referrer URL is messed up, or if my serverUrls are messed up. How do I access my secured services using the proxy?
Well... I finally got it working. I still don't know what I did to get it to work, but on the 500th try it did. The only things I think I changed between the last try and this one were updating the path to the proxy log file in the proxy's web.config file, and fixing my one GP service that still had a URL ending in /uploads/upload to be more general, ending in /GPServer like the other one (see original post). I hadn't changed it earlier because I wasn't testing submissions with that service; I was only testing the habitat management one. Maybe that's what fixed it? I also republished my app, obviously.
Here's the final code:
JS
urlUtils.addProxyRule({
urlPrefix: "https://www.mydomain.com",
proxyUrl: "/appName/proxy/proxy.ashx"
});
proxy.config
<ProxyConfig allowedReferers="https://www.mydomain.com/*"
mustMatch="true"
logFile="proxyLog.txt"
logLevel="Warning">
<serverUrls>
<serverUrl url="https://www.mydomain.com/myServer/rest/services/HabitatManagement/HabitatManagement/GPServer"
username="username"
password="password"
matchAll="true"/>
<serverUrl url="https://www.mydomain.com/myServer/rest/services/HabitatMonitoring/HabitatData/MapServer"
username="username"
password="password"
matchAll="true" />
<serverUrl url="https://www.mydomain.com/myServer/rest/services/HabitatMonitoring/HabitatClassification/GPServer/"
username="username"
password="password"
matchAll="true" />
</serverUrls>
</ProxyConfig>
|
STACK_EXCHANGE
|
Considering that the website concept is yet to catch on in India, and most people use connections metered by the number of hours used, a slow-opening website could lose considerable potential customers by putting them off.
Our DDoS protection for shared hosting customers at our US data center, plus basic support for our India data center, includes automated and manual detection of DDoS attacks and their mitigation with a defined set of filtering rules and IP disabling.
We use 120 GB SSD caching on all our India web hosting servers to store a second copy of your website or application, loading it from cache rather than from the hard drive, a time-consuming process. Our SSD-cached servers have measurably boosted clients' website loading times.
We provide Google's PageSpeed module on our Linux shared web hosting servers to speed up your site, improving bandwidth usage and web page latency by applying best practices for optimizing websites without changing existing content. We use Google's PageSpeed module to load your site faster.
CPU resource usage management is essential for even distribution of resources among all clients on a server. Better server-level performance is achieved by allotting sufficient RAM and CPU resources to each client, with restrictions added in the code itself.
Serial Advanced Technology Attachment (SATA) is a standard technology for connecting a hard drive or SSD to the rest of the computer. It offers many advantages like faster data transfer,
hot swapping, reduced cable size and cost.
Double Data Rate Type Three Error Correcting Code Random Access Memory (DDR3 ECC RAM) is a modern RAM with a high-bandwidth 'double data rate' interface. ECC memory can detect and correct the most common kinds of internal data corruption.
Redundant Array of Independent Disks (RAID) is a technology that provides increased storage functions and reliability via redundancy of multiple disk drive components. A RAID 10 storage setup gives the best performance, availability and redundancy.
We support the latest programming languages and their different versions, including PHP 5.5.x, Java (Tomcat), SSI, Apache handlers, CGI, FastCGI and HTML; and programming modules including Curl, CPAN, GD Library, ImageMagick, IonCube Loader, Zend, MIME, PDO_MySQL, Mcrypt and more.
Web hosting provides you access to edit and control your databases with a pre-installed phpMyAdmin version. The number of MySQL databases and their sizes vary with each shared hosting plan.
Our cron job runs every night to keep a file-by-file backup of all your website data, safeguarding it from being lost or destroyed. For added security, you can also maintain your own backup of your data.
With custom DNS name servers, you can give specific names like ns1.xyz.com to your DNS servers using your control panel. This gives you complete control over your domain names and your branding. You can also use it to redirect email to third-party servers.
File Transfer Protocol (FTP) is required to manage and transfer files or folders from your local computer to a remote server. Our file transfer protocol works with most popular FTP clients like FileZilla, CuteFTP, SmartFTP and more.
|
OPCFW_CODE
|
Create and start a SharePoint Designer 2013 site workflow
Learn how you can create a site workflow in SharePoint Designer 2013 and then start and run it in SharePoint 2013.
A site workflow is a workflow that is not bound to a particular SharePoint list or library, but that you can associate with and run on a particular SharePoint site.
You can use SharePoint Designer 2013 to create list workflows, reusable workflows, and site workflows.
Here, you will use SharePoint Designer 2013 to create a site workflow that can be used to write a simple text message to the workflow history list.
Before you begin, ensure you have gone through the tutorial to create a simple workflow in SharePoint Designer 2013, so that you know how to add actions to and publish a workflow.
To create a site workflow in SharePoint Designer 2013
- In SharePoint Designer 2013, open the site on which you want to create a site workflow.
- In the left Navigation pane, click Workflows.
- Click Workflows > New > Site Workflow.
- On the Create Site Workflow dialog box, type MySiteWF in the Name text box, select SharePoint 2013 Workflow from the Platform Type drop-down list box, and click OK.
- Add a Log to History List action to the workflow.
- Click this message in the sentence for the workflow action, and then type the text This is a message from the Site workflow. in the text box that appears.
- Add a Go to a stage action in the Transition to stage section of Stage 1, and then select to go to End of Workflow.
- Publish the workflow to SharePoint.
To start a site workflow in SharePoint 2013
- In SharePoint 2013, navigate to the site to which you published the workflow and click Site Contents.
- On the Site Contents page, click Site Workflows.
- On the Workflows page under Start a New Workflow, click the name of the site workflow you created earlier.
- Wait a couple of seconds, and then refresh the page. The site workflow should be listed in the My Running Workflows list.
- Wait a couple of minutes, and then refresh the page again. The site workflow should be listed in the My Completed Workflows list.
- Click the text Completed or Stage 1 to go to the history list for the workflow.
Still unclear? Watch the SharePoint 2013 video tutorial below.
- Create a simple SharePoint Designer 2013 workflow
- Create a SharePoint Designer 2013 reusable workflow
- Associate a SharePoint 2013 workflow with a document library
- Pass a value to a SharePoint Designer 2013 site workflow
- SharePoint Designer 2013 workflow to start another workflow
|
OPCFW_CODE
|
import React from 'react';
import stackManager from './StackManager';
import './style.css';
export default class Stack extends React.Component {
constructor(props){
super(props);
this.state = {
renderLater:true,
stackStep: 0,
displayAnimationFrame: stackManager.showAnimationFrame
};
this.renderLater = this.renderLater.bind(this);
this.decrementStackStep = this.decrementStackStep.bind(this);
this.incrementStackStep = this.incrementStackStep.bind(this);
this.handleShowAllSteps = this.handleShowAllSteps.bind(this);
this.toggleAnimationFrame = this.toggleAnimationFrame.bind(this);
this.getStepValue = this.getStepValue.bind(this);
stackManager.hookCallback(this.renderLater);
this.renderedSteps = [];
}
componentWillReceiveProps(nextProps){
this.setState({
renderLater:true
})
}
renderLater(){
this.setState({
renderLater:false
})
}
getStepValue(isIncrement = true){
let stepValue;
if(isIncrement){
stepValue = this.state.stackStep < stackManager.getOrder() ? this.state.stackStep + 1 : this.state.stackStep;
} else {
stepValue = this.state.stackStep > 1 ? this.state.stackStep - 1 : 1;
}
if(this.renderedSteps && this.renderedSteps.length > 0) {
const lastStep = this.renderedSteps[this.renderedSteps.length - 1]
const firstStep = this.renderedSteps[0]
while(this.renderedSteps.indexOf(stepValue) === -1){
if(isIncrement){
if(stepValue > lastStep){
return lastStep;
}
stepValue++
}else{
if(stepValue < firstStep){
return firstStep;
}
stepValue--
}
}
return stepValue;
}
return stepValue;
}
incrementStackStep(){
const stepValue = this.getStepValue();
this.setState({
stackStep: stepValue
})
}
decrementStackStep(){
const stepValue = this.getStepValue(false);
this.setState({
stackStep: stepValue
})
}
handleShowAllSteps(){
this.setState({
stackStep: 0
})
}
clearAllSteps(){
stackManager.clearStacks(true);
}
toggleAnimationFrame(){
this.setState({
displayAnimationFrame: stackManager.toggleAnimationFrame()
})
}
componentDidMount(){
stackManager.trigger();
}
renderStackItem(stackObject, includeSpace, index){
const message = stackObject.message;
const type = "[" + stackObject.type + "]";
const order = stackObject.order + ". ";
const styleObject = {};
if(stackObject.type === 'event'){
styleObject.color = "grey";
}
let spaceUI = null;
if(includeSpace){
spaceUI = <div style={{marginBottom:"8px"}}></div>
}
return (
<li key={index}>
{spaceUI}
<div className="stack-item" style={styleObject}>
<div className="stack-item-order">{order}</div>
<div className="stack-item-type">{type}</div>
<div className="stack-item-message">{message}</div>
</div>
</li>
)
}
renderStack(stack, stackStep){
let prevOrderNumber;
return stack.map((stackObject, stackIndex) => {
if(stackStep === 0){
this.renderedSteps.push(stackObject.order);
}
if(stackStep > 0 && stackStep < stackObject.order){
return null;
}
let includeSpace = false;
if(prevOrderNumber === undefined){
prevOrderNumber = stackObject.order;
}
if(stackObject.order == prevOrderNumber + 1) {
includeSpace = false;
prevOrderNumber = stackObject.order;
} else if(stackObject.order > prevOrderNumber + 1 ){
includeSpace = true;
prevOrderNumber = undefined;
}
return this.renderStackItem(stackObject, includeSpace, stackIndex)
})
}
shouldRenderId(id, identifier, displayAnimationFrame){
if(Array.isArray(identifier)){
const copy = identifier.slice();
displayAnimationFrame && copy.push('frame');
if(copy.indexOf(id) === -1){
return false
}else{
return true;
}
} else if(typeof identifier === 'string'){
if(displayAnimationFrame){
if(id === 'frame'){
return true
}
}
return id === identifier
}
}
renderStacks(stacks, stackStep, identifier, displayAnimationFrame){
const stackIds = Object.keys(stacks);
(stackStep === 0) && (this.renderedSteps = []);
return stackIds.map((id, index)=>{
const shouldRender = this.shouldRenderId(id, this.props.identifier, displayAnimationFrame);
if(!shouldRender){
return;
}
const stack = stacks[id];
const ui = this.renderStack(stack, stackStep)
return (
<div key={index} className="stack-container-item">
<h4>{id}</h4>
<ul>{ui}</ul>
</div>);
});
}
render(){
let ui = null;
const {renderLater, stackStep, displayAnimationFrame} = this.state;
const animationFrameButtonName = displayAnimationFrame ? 'Hide': 'Show'
if(!renderLater){
const stacks = stackManager.getStacks();
ui = this.renderStacks(stacks, stackStep, this.props.identifier, displayAnimationFrame);
}
return (<div className="stack-container">
<div className="stack-controller-container">
<button className="stack-controller-prev" onClick={this.decrementStackStep}><</button>
<div>
<button className="stack-controller-showall" onClick={this.handleShowAllSteps}>Show All steps</button>
<button className="stack-controller-clear" onClick={this.clearAllSteps}>Clear</button>
<button className="stack-controller-animation" onClick={this.toggleAnimationFrame}>{animationFrameButtonName} Animation Frame</button>
</div>
<button className="stack-controller-next" onClick={this.incrementStackStep}>></button>
</div>
<div className="stack-message-container">
{ui}
</div>
</div>)
}
}
|
STACK_EDU
|
Field Definitions: PM Copy Project Form
The following is a list of field descriptions for the PM Copy Project form. Many of the descriptions include links to other topics that provide additional information about or related to the topic.
Enter a new project number. This is the project that will be created by the copy. This cannot be an existing project number. Press F4 to see a list of existing projects.
Custom fields on the source project in PM Projects, PM Project Phases, and/or PM Cost Types Detail will be copied to the new project.
Enter the contract for the new project, up to 9 characters (including dashes). Initially defaults the code entered for the destination project, which may be overridden.
You may specify an existing contract, if desired, as long as the contract’s status is ‘Open’ (1) or ‘Pending’ (0). If an existing contract is specified, remaining contract fields are disabled and will default the information specified for the existing contract. Additionally, only contract items that exist for the source contract but not for the destination contract will be copied.
If custom fields exist for the source contract and/or contract items, they will be copied to the new contract/contract items.
Enter the retainage percent for the contract.
- This will be applied to all of the items that are created on the new contract.
- This will also be used as the default retainage percentage (More field).
Change Order Header
Change Order Items
This field is only accessible if the Change Order Header box is checked.
Check this box to copy change order items from the source project to the new project.
Leave this box unchecked if not copying change order items to the new project. (Note: If unchecked, the Change Order Detail checkbox is disabled.)
Change Order Detail
Submittals - 6.5
Submittal Items - 6.5
Check this box to copy item retainage from the source contract items to the destination contract items.
Leave this box unchecked if not copying item retainage from the source contract items to the destination contract items. The destination contract items will default the retainage percent specified for the destination contract (above).
Check this box to copy the departments from the source contract items to the new contract items. When checked, a warning displays (in red) to the right indicating that you are copying source departments.
Leave this box unchecked if you are not copying item departments from the source contract. All items added for the new contract will automatically be assigned the department specified for the destination contract above.
JB Bill Groups
Check this box to copy the reviewer setup from the source project to the destination project.
Leave this box unchecked if you do not want to copy the reviewer setup from the source project to the destination project. Reviewers will need to be set up manually for the new project in PM Projects (Reviewers tab).
Check this box to copy role assignments set up using the Roles tab on the PM Projects form.
You might want to check this box if you use the WF module Process Workflow feature. More
Checking this box enables the Job Phase Roles checkbox, allowing you to copy roles at the job phase level.
Job Phase Roles
Check this box if the projection codes set up on the source project should be created for the new project.
Projection codes are created and maintained using the PM Projection Codes form, and they only apply to projections that are created using the PM Cost Projections form.
Copy the projection codes to an existing project
You can also copy the projection codes from one project to another using the Copy Projection Codes form.
|
OPCFW_CODE
|
External criteria validate the results of clustering based on predefined structure of the data provided by an external source. The most well-known example of structural information is labels for the data provided by experts (called true classes). The main task of this approach is to determine a statistical measure for the similarity or dissimilarity between the obtained clusters and the labels [1][2]. According to the methods incorporated in the external criteria, they can be divided into three types: pairwise measures, entropy-based measures and matching-based measures [3].
As mentioned previously, the four outcomes for evaluating classification guesses are true positive, true negative, false positive and false negative. These terms are used in the terminology of external cluster validity, especially with pairwise measures, but with slightly different meanings to enable the evaluation of clusters in the same manner as classification [3].
In this thesis, we use various external cluster validity indices to determine differences between a reference of behaviour for items in temporal data and the clusters of items at each time point. The method is discussed in more detail in chapter three and implemented in chapter four for public goods games and chapter six for stock market data. The criteria used in the thesis are listed below:
This coefficient is a pairwise measure representing the degree of similarity between clusterings. Each cluster is treated as a mathematical set, and the coefficient value is calculated by dividing the cardinality of the intersection of the resultant cluster with the prior cluster by the cardinality of the union between them [4]:

J = TP / (TP + FP + FN)
With a perfect clustering, when false positives and false negatives equal zero, the Jaccard coefficient value equals 1. This measure ignores the true negatives and focuses only on the true positives to evaluate the quality of the clusters [3].
The Rand statistic measures the fraction of true positives and true negatives over all point pairs; it is defined as

Rand = (TP + TN) / (N(N − 1)/2)

where N is the total number of instances in the data set. This measure is similar to the Jaccard coefficient, so its value also equals 1 for a perfect clustering [3].
FM defines precision and recall values for the produced clusters [5]:

FM = sqrt(prec × recall)

where prec = TP/(TP + FP) and recall = TP/(TP + FN). For a perfect clustering this measure equals 1 too [3].
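As an illustrative sketch (class and method names are mine, not from the thesis), the pairwise measures above can be computed directly from the four pair counts:

```java
public class PairwiseMeasures {
    // Jaccard: intersection over union of agreeing pairs; ignores TN.
    static double jaccard(long tp, long fp, long fn) {
        return (double) tp / (tp + fp + fn);
    }

    // Rand: fraction of agreeing pairs (TP + TN) over all N(N-1)/2 pairs.
    static double rand(long tp, long tn, long n) {
        return (tp + tn) / (n * (n - 1) / 2.0);
    }

    // Fowlkes-Mallows: geometric mean of pairwise precision and recall.
    static double fowlkesMallows(long tp, long fp, long fn) {
        double prec = (double) tp / (tp + fp);
        double recall = (double) tp / (tp + fn);
        return Math.sqrt(prec * recall);
    }

    public static void main(String[] args) {
        // A perfect clustering of N = 10 items (e.g. one cluster of 5 plus
        // 5 singletons, matching the ground truth): 10 TP pairs, 35 TN pairs.
        System.out.println(jaccard(10, 0, 0));        // 1.0
        System.out.println(rand(10, 35, 10));         // 1.0
        System.out.println(fowlkesMallows(10, 0, 0)); // 1.0
    }
}
```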
This index is based on a contingency table, an r × k matrix, where r is the number of produced clusters and k is the number of externally provided clusters. Each element of this matrix contains the number of agreed instances between any two clusters of the externally provided and produced clusterings. As introduced by Meilă [6], this index calculates mutual information and entropy between the previously provided and produced clusters, derived from the contingency table, where C is the produced clustering, T is the ground-truth clustering, H(C) is the entropy of C and H(T) is the entropy of T [3].
Halkidi, M., Batistakis, Y. and Vazirgiannis, M. (2002) 'Cluster validity methods: part I', ACM SIGMOD Record, 31(2), pp. 40–45.
Rendón, E. and Abundez, I. (2011) 'Internal versus external cluster validation indexes', International Journal of Computers and Communications, 5(1), pp. 27–34.
Vendramin, L., Campello, R. J. and Hruschka, E. R. (2010) 'Relative clustering validity criteria: a comparative overview', Statistical Analysis and Data Mining, 4(3), pp. 209–235. doi: 10.1002/sam.
Fowlkes, E. and Mallows, C. (1983) 'A method for comparing two hierarchical clusterings', Journal of the American …, 78(383), pp. 553–569.
Meilă, M. (2007) 'Comparing clusterings—an information based distance', Journal of Multivariate Analysis, 98(5), pp. 873–895. doi: 10.1016/j.jmva.2006.11.013.
Before we begin
To get started, create a new page in the CMS. See Pages and content to learn more.
Creating and editing content
Click the button Add block and select the type of block you want to create from the block selector. The content block types are also searchable as soon as the selector opens; to clear your search, click the link Clear.
You'll see the content block type you selected appear in your main content area titled Untitled Content block. If you forget the content block type you've added, hover on the block type icon to view a tooltip.
Most blocks are directly editable from within the page, giving you easy access to adding and editing blocks without switching page views.
To edit, select the content block or click the Expand button shown as a dropdown arrow icon.
Enter your content as required, provide a title for the block and choose whether you want the title to be shown on the page or not via the Displayed checkbox field.
Reordering content blocks
To change the order of content blocks in a page, simply click and hold anywhere on the block, then drag and release to reorder the item.
When you release, the blocks will automatically save their new positions; however, you may need to publish the page to see the new order.
Adding blocks between existing blocks
An Add block button shown as a bar and plus icon can be activated by moving your cursor between blocks. You can select anywhere on the bar to open a block selector which will allow you to add a block between existing blocks.
Block state indicators
States are shown on the block type icon as coloured indicators. Each represents the current workflow status your content block is in.
Blue - The block has unsaved changes.
Orange full - The block state is Draft and not publicly visible.
Orange outline - The block is Modified, meaning it has been published but has some additional draft changes.
No state - The block is published.
The More options dropdown shown as an ellipsis icon provides further editing functionality for individual blocks, including access to editing content, custom settings, saving, publishing and archiving.
Editing existing blocks
When viewing a page, you can select the content block or select Content from the More options dropdown.
Your developer may choose to add custom CSS classes allowing you to add theming to the front-end of specific blocks. Custom CSS classes can be added by selecting Settings from the More options dropdown.
In Settings your developer may also choose to add Style Variants to allow for different stylistic changes to adjust the appearance of content blocks. See Style variants for more information.
Saving and publishing content blocks
Pages with content blocks allow you to perform actions like publish at a page level, but you also have access to similar actions on individual blocks. This allows blocks to be managed and edited by multiple CMS authors, with some blocks remaining in draft while others get published. It gives you more flexibility over individual blocks, for example when a block has user permissions.
To save or publish an individual content block select the More options dropdown. To save or publish the whole page select from the Action toolbar of the CMS. See Saving changes and publishing for more information.
To Archive a block select Archive in the More options dropdown. See Archiving for more information.
Hibernate foreign key with a part of composite primary key
I have to work with Hibernate and I am not very sure how to solve this problem, I have 2 tables with a 1..n relationship like this:
-------
TABLE_A
-------
col_b (pk)
col_c (pk)
[other fields]
-------
TABLE_B
-------
col_a (pk)
col_b (pk) (fk TABLE_A.col_b)
col_c (fk TABLE_A.col_c)
[other fields]
How can I manage this with Hibernate?
I do not have any idea how to declare a foreign key that would contain a part of primary key.
My database schema is generated from the Hibernate model.
If you want to fight hibernate... go ahead and use composite keys. If you want a much easier life, ensure each table has a single primary key column. Trust me, it's not worth the fight... it's a battle which will cost you many hours in the future
What do you mean by "part of primary key"? Composite primary keys in Hibernate are quite simple, but you shouldn't try to complicate that by trying to use only a part of it.
Basically you need a composite key class. There is an answer here with all the basic details: http://stackoverflow.com/questions/3585034/how-to-map-a-composite-key-with-hibernate
There's a fairly thorough answer here: http://stackoverflow.com/a/3588400/3166303
For one-to-many mappings you don't require a composite key. It is so much easier to manage this situation without using a composite key.
@LanceJava In this case I need to use composite keys because of readability.
@leeor This answer does not help at all in my case. It is only about how to define a composite key.
https://stackoverflow.com/questions/47890019/hibernate-model Could you please suggest on this query.
I have found two solutions to this problem.
The first one is rather a workaround and is not as neat as the second one.
Define the primary key of the B entity as a composite key containing col_a, col_b, and col_c, and define what was supposed to be the primary key in the first place as a unique constraint. The disadvantage is that the column col_c is not conceptually a part of the primary key.
@Entity
class A {
    @Id
    private int b;
    @Id
    private int c;
}

@Entity
@Table(uniqueConstraints = { @UniqueConstraint(columnNames = { "a", "b" }) })
class B {
    @Id
    private int a;
    @Id
    @ManyToOne(optional = false)
    @JoinColumns(value = {
        @JoinColumn(name = "b", referencedColumnName = "b"),
        @JoinColumn(name = "c", referencedColumnName = "c") })
    private A entityA;
}
The second uses @EmbeddedId and @MapsId annotations and does exactly what I wanted to be done at the very beginning.
@Entity
class A {
    @Id
    private int b;
    @Id
    private int c;
}

@Embeddable
class BKey {
    private int a;
    private int b;
}

@Entity
class B {
    @EmbeddedId
    private BKey primaryKey;

    @MapsId("b")
    @ManyToOne(optional = false)
    @JoinColumns(value = {
        @JoinColumn(name = "b", referencedColumnName = "b"),
        @JoinColumn(name = "c", referencedColumnName = "c") })
    private A entityA;
}
To get the above code to work you also need to make all of the above classes implement Serializable.
I tried this way and I'm getting Unable to find column reference in the @MapsId mapping: c. Any idea?. TIA
Is it a runtime or compile error? Could you post your code somewhere so that I can have a look at it?
Is it possible to have MapsId for two attrributes? For example, if there was an attribute 'd' which is also part of both keys (same as b)?
@schoenk Maybe try with another @MapsId("d") annotation, so that you will have two annotations in the end: @MapsId("b") and @MapsId("d").
Unfortunately it is not possible to have multiple @MapsId annotations on the same attribute. But I managed to solve it for my use case by using @JoinFormula instead of @JoinColumn for the columns that are used in the primary key as well as the foreign key.
Jagger's second solution, with @EmbeddedId and @MapsId, was my first reaction too. The following is another way based on his second solution, but without using @MapsId.
@Entity
class A {
    @Id
    private int b;
    @Id
    private int c;
}

@Embeddable
class BKey {
    private int a;
    private int b;
}

@Entity
class B {
    @EmbeddedId
    private BKey primaryKey;

    @ManyToOne(optional = false)
    @JoinColumns(value = {
        @JoinColumn(name = "b", referencedColumnName = "b", insertable = false, updatable = false),
        @JoinColumn(name = "c", referencedColumnName = "c") })
    private A entityA;
}
This won't work and ends with an exception: org.hibernate.AnnotationException: Mixing insertable and non insertable columns in a property is not allowed. There is a Hibernate bug report for it.
Discussion in 'Archives' started by Ray Jay, Mar 20, 2011.
Both can be used, but don't use them with a hyphen.
No it is used as in "a team that stacks Spikes", a "Spikes-stacking team"
The verb form would be "the team is designed to stack Spikes"
In this example, both oriented and based are verbs, not nouns.
Also, is it color or colour?
Edit: Yes...either way it's not a noun.
"oriented" isn't a verb as such. It's a verbal adjective, as is "based" (i.e. to base => based).
Color is the standard in American English.
I'm making Spiker a generally used term to describe a Pokemon meant to be used to set up Spikes and/or Toxic Spikes.
I'm justifying it by saying that we allow spinner too, so yeah.
I just really don't like saying "Spikes user" all the time
Is immunes an accepted term in analyses (as in something like "here is a list of several resists and immunes to a certain Pokemon's STAB: ...")? I know resists is fine, but immunes didn't seem to pop up in any analyses when I did a quick Google search.
Edit: Thanks sandshrewz.
Use immunities instead? o.o
Is it "setup sweepers," "set-up sweepers," or are both correct? It seems like I see both being used, sometimes, such as in the NU Ditto analysis, in the same article.
As set-up is the adjective, it probably should be set-up sweepers.
But that seems to contradict this rule from the OP:
I'm a bit confused which takes precedence, especially since both forms are present in analyses on site.
Hmmm I've always followed what R_D had posted there long ago, (I seriously get confused each time myself), I'll edit the OP to make it so that set-up is the adjective. That's what I've gone by and we'll fix it to make it so.
if it makes any sense at all, how i understand (understood) it:
a setup sweeper refers to a sweeper that carries a setup move / that CAN potentially set up (with a boosting move, substitute, w/e) - for example dd gyarados, qd volcarona, sd terrakion, etc, in general
while a set-up sweeper means a sweeper that has already set up during a battle - for example, a gyarados at +1/+1
edit: i guess therefore the (probably very very general but, well, in my experience) rule is that the correct one to use is almost always 'setup sweeper' -- unless you're actually talking about the mon actually being fully set-up / boosted. say, idk, something like 'unaware quagsire can switch into a variety of set-up sweepers with impunity' (stfu no one get on my case for the actual veracity of this)
edit 2: might it help / be clearer to think of 'set-up' as a synonym for 'boosted' -- but more general? as in it'd cover setting up with substitute, and probably some other things i don't know / can't think of off the top of my head rn (rain dance kingdra / sunny day venusaur?)
o yeah ok that makes sense, the OP was correct then. In R_D's example he means an already set-up Gyarados, not a Gyarados that can set up.
If someone could be ever so kind and list out all the possible ways to use setup, that'd be super awesome and I'll just c/p that into the OP.
to clarify, it's 'sleep inducer' not 'sleep-inducer' - right?
<Oglemi> also sirn hyphens are to be used with adjectives
<Oglemi> so a sleep inducer
<Oglemi> is two words
<Oglemi> if it was a sleep-inducing Pokemon
<Oglemi> then it'd be hyphenated
<RayJay> sleep-inducer could be a compound noun though
* macle has quit (Ping timeout)
<Oglemi> Two words brought together as a compound may be written separately, written as one word, or connected by hyphens.
<Oglemi> it says we get to choose
<RayJay> exactly my point
<Oglemi> i personally like without the hyphen
So I'm going to say sleep inducer is unhyphenated like we decided subpar was.
How does this work with sandstorm and hail? I know moves and abilities are supposed to be capitalized, so it's Rain Dance, Drizzle, rain, Sunny Day, Drought, sun, Sand Stream, and Snow Warning, but these last two seem a little unclear because Sandstorm and Hail are both moves as well as weather conditions. Should this just be on a case by case basis or should one always be used over the other?
Unless you're talking about the move Sandstorm or the move Hail, it should always be lowercase sandstorm and hail
OK I've seen this enough for me to become peeved by it lol
You are on a team. Your Pokemon are on a team.
You are never in a team, that's just not how it works, unless you're British, but we speak American here >:(
OK that is all
This was updated earlier today with some points about using the word "crux" in analyses. Saying something like "X is the crux of the set" is largely useless, adds nothing to an analysis that a reader wouldn't already know, and is completely unoriginal. Most of the time, crux is the wrong word choice.
Just to clarify, base should come before the numbers when talking about stats, such as "Blissey has a high base 255 HP", not "Blissey has a high 255 base HP", right?
yes the former is correct
Can we update that in? I've seen a lot of writers do it over the past few weeks and it's not really a rule explained in the standards
I'd actually like to officially dispute this. Putting the number in the middle is splitting the term "base Attack" unnecessarily in a way that introduces ambiguity. I vastly prefer keeping the entire term intact to better differentiate between Attack and base Attack at a glance.
Honestly, maybe the answer is to capitalize the "base" and make Base Attack / Speed / whatever else an actual capitalized term. It seems like it would make sense.
We tend to only capitalize the things that the game capitalizes (Base Power, the stats, names, etc.). Base (stat) is not a coined term within the game. The only things that we do capitalize that aren't explicitly in the game are EVs and IVs.
I don't think any one is more correct than the other, but the "base x stat" wording is how it's been written since I've been a part of the GP team. I'd rather not change it at this point lest we have to go back through and change it in nearly every analysis ever written (since it tends to pop up in each analysis). Both sound just as correct to me in any case. I'd rather we stick to what we've been doing at this point.
Simulated Annealing is a general-purpose optimization method used to solve large-scale combinatorial problems by simulating the process of heating and cooling of metal to achieve freedom from defects. Finding a nice arrangement of a graph is a combinatorial problem that can be reduced to assigning costs to graph configurations and finding the minimum cost configuration. The sample assigns cost to a graph configuration by evaluating different aesthetic criteria chosen by the user such as distance between nodes, length of links and the number of link crossings.
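The cost-minimization loop described above can be sketched as follows. This is an illustrative simulated-annealing skeleton: the cost terms, constants and cooling schedule are assumptions for demonstration, not the sample's actual code.

```python
import math
import random

def layout_cost(pos, links, min_dist=30.0, ideal_len=60.0):
    # Aesthetic cost: penalize node pairs that sit too close together,
    # and links whose length deviates from an ideal length.
    cost = 0.0
    nodes = list(pos)
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(pos[nodes[i]], pos[nodes[j]])
            if d < min_dist:
                cost += (min_dist - d) ** 2
    for a, b in links:
        cost += (math.dist(pos[a], pos[b]) - ideal_len) ** 2
    return cost

def anneal(pos, links, t0=100.0, cooling=0.95, steps=2000, seed=1):
    rng = random.Random(seed)
    t, cur = t0, layout_cost(pos, links)
    for _ in range(steps):
        n = rng.choice(list(pos))
        old = pos[n]
        # Propose a random move whose size shrinks with the temperature.
        pos[n] = (old[0] + rng.uniform(-t, t), old[1] + rng.uniform(-t, t))
        new = layout_cost(pos, links)
        # Accept improvements always; accept worse configurations with a
        # probability that decreases as the system "cools".
        if new < cur or rng.random() < math.exp((cur - new) / max(t, 1e-9)):
            cur = new
        else:
            pos[n] = old
        t *= cooling
    return pos, cur
```

Starting all nodes at the origin and annealing spreads them out, driving the cost toward a minimum-cost configuration.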
CompositeLayout partitions the diagram into several subgraphs and applies the algorithm specified via the SubgraphLayout property on each part. If the part is a tree, it is arranged using the algorithm specified via the SubtreeLayout property, which is set to a radial TreeLayout instance by default. Finally the algorithm specified via MasterLayout is applied on the graph that represents the overall partition. By running the polynomial-complexity layout algorithms on small subgraphs, CompositeLayout is able to process a large graph much faster than if a layout algorithm is applied on the whole graph. CompositeLayout can run on custom partitions specified as lists of nodes, or automatically partition the diagram via two automatic methods based on graph connectivity and graph path lengths. Set the PartitionMethod and CustomPartition properties to specify how the diagram should be partitioned.
This algorithm arranges simple flowcharts consisting of decision boxes with up to three outgoing links per node and activity boxes with a single outgoing link per node. The nodes are arranged in columns and rows, whose distance depends on the HorizontalPadding and VerticalPadding property values. When links share the same row or column, they are placed at a distance specified via LinkPadding.
The layout arranges nodes recursively starting from StartNode. If StartNode is not specified, the algorithm selects the root of the deepest branch of the graph's spanning tree as start node.
Nodes at the lowest level are arranged directly in a circle around their parent. At the upper level, the already arranged nodes form branches that are arranged in a circle around the new parent node. The algorithm is repeated recursively until the highest level is reached. If the nodes in the tree have a uniform number of children, the end result has a fractal-like appearance (subsets of the graph look like scaled-down copies of the whole graph).
You can choose which node should be displayed at the center of the topmost circle by setting the Fractal property. If it is not specified, the algorithm automatically selects one that leads to a more balanced distribution of nodes.
This layout algorithm can be used to arrange process diagrams in which nodes representing activities are placed in swimlanes representing resources. The index of the resource allocated to an activity should be assigned to the corresponding node's LayoutTraits[SwimlaneLayoutTraits.Lane].
By default, the algorithm works with the diagram's LaneGrid, but its SwimlaneGrid property can be set to any class that implements ISwimlaneGrid. This allows applying the layout to a custom-drawn grid rendered through the DrawBackground event, or one composed of locked background nodes.
The Spring-Embedder algorithm produces layouts with a uniform distribution of nodes by simulating a physical system in which nodes repulse each other and the links between them act as confining springs. Nodes are moved around in an iterative process: the forces that act on a node are calculated taking into account the positions of surrounding nodes and links from the previous iteration.
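A single Spring-Embedder iteration can be sketched like this, assuming positions are stored as a dict of (x, y) tuples; the force constants and function name are illustrative, not the actual SpringLayout parameters:

```python
import math

def spring_step(pos, links, k=50.0, repulse=2000.0, dt=0.05):
    # One iteration: every node repulses every other node (inverse-square
    # force), while links act as springs pulling toward rest length k.
    force = {n: [0.0, 0.0] for n in pos}
    nodes = list(pos)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-6
            f = repulse / (d * d)            # repulsive force magnitude
            fx, fy = f * dx / d, f * dy / d
            force[a][0] -= fx; force[a][1] -= fy
            force[b][0] += fx; force[b][1] += fy
    for a, b in links:
        dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
        d = math.hypot(dx, dy) or 1e-6
        f = d - k                            # spring force toward rest length
        fx, fy = f * dx / d, f * dy / d
        force[a][0] += fx; force[a][1] += fy
        force[b][0] -= fx; force[b][1] -= fy
    # Move every node a small step along its accumulated force.
    return {n: (pos[n][0] + dt * force[n][0], pos[n][1] + dt * force[n][1])
            for n in pos}
```

Iterating this step repeatedly lets two linked nodes settle near the springs' rest length, which is exactly the uniform-distance behaviour described above.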
This sample demonstrates animated 3D SpringLayout. The nodes are displayed inside a DiagramView3D control. Drag with the mouse to pan the camera. Hold CTRL while dragging to rotate the camera. Use the mouse wheel to move the camera back and forth.
As long as the graph is a tree, the algorithm can find the root node automatically. You can override it by setting the Root property, e.g. in order to arrange only a selected branch of the tree. The distance between the root and the second level and between all subsequent levels can be set through the LevelDistance property. The distance between adjacent nodes in the same level is set through NodeDistance.
Treemaps represent hierarchies by nesting child nodes within their parents, where the areas of leaf nodes are proportional to their Weight values. Unlike other layout algorithms, TreemapLayout expects hierarchies to be defined via grouping or containment (see the AttachTo method and ContainerNode class), and will ignore any links in the diagram.
The diagram area covered by the topmost nodes in a hierarchy is specified via the LayoutArea property. By default, the layout tries to keep the ratio of node sides as close as possible to one. However this could make it hard to distinguish separate levels of the hierarchy. To alleviate that, set Squarify to false, and child nodes will be arranged either as a row or a column inside their parent node, alternating directions for each level. The drawback is that when Weight ratios differ greatly or nodes contain many children, some nodes could end up with very narrow rectangles.
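The non-squarified row/column arrangement described above can be sketched as a slice-and-dice pass over one level of the hierarchy (the function name and rectangle representation are assumptions; recursing into children would alternate the split direction per level):

```python
def slice_and_dice(weights, rect, horizontal=True):
    # Split `rect` = (x, y, w, h) into one strip per weight, each strip's
    # area proportional to its weight. A recursive treemap would call this
    # again on each child rectangle with `horizontal` flipped.
    x, y, w, h = rect
    total = sum(weights)
    rects = []
    offset = 0.0
    for wt in weights:
        frac = wt / total
        if horizontal:                 # side-by-side columns
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:                          # stacked rows
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects
```

This also shows the drawback mentioned above: when weight ratios differ greatly, some strips become very narrow rectangles.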
m.seaman at infracaninophile.co.uk
Sat Aug 16 09:15:42 PDT 2003
On Fri, Aug 15, 2003 at 08:33:13PM -0400, David Banning wrote:
> I have some menu server java applets I would like to run on my fbsd server
> but I have no idea where to get started. Some initial inquiries on Google
> have not brought any luck.
> Is there any sites that deal with this, or does anyone have a suggestion?
JavaScript -- more properly described as ECMAScript nowadays -- is a language superficially similar to Java but that is interpreted within a web browser.
Unfortunately each different brand of browser has its own idea of
what ECMAScript should be, and that makes it quite tricky to write a
web page that works reasonably in any browser. See the ECMA
specification for an attempt at providing a standard. ECMAScript is
either embedded directly into a web page or is served up in a separate
file (traditionally with a .js suffix) referenced from the page in
question. Or in other words, just slap it into the documents
directory of your webserver alongside the .html files.
Java on the other hand is a recent addition to the C-like language
group which has the distinction of running in a virtual machine.
This, together with the inherent object-orientation of the language
and Sun's fanatical dedication to preserving the language standards
means that *compiled* Java class files can be run unmodified on any
platform that supports Java.
Java is a general purpose language, and standalone Java applications
are certainly available. However, most people will run into Java in
the context of "applets". An applet is a mini Java application that can
be downloaded via a web browser and run in a limited "sandbox" context
on the local machine. See for instance
http://gregegan.customer.netspace.net.au/APPLETS/Applets.html for an
interesting selection of mathematics oriented examples.
However, what I suspect you have are the converse of that: java
servlets. These are mini-applications that run as part of a web
*server*. These are conceptually similar to other dynamic web
languages, like PHP, ASP or some sorts of embedded mod_perl stuff, but
the scope is larger: as well as the dynamic .jsp pages (which are
internally converted to java code and compiled into Java servlets on
the fly) there are also pre-compiled Java classes of various types.
In order to serve such "webapps" to the net in general, you will need
a Java servlet container. That's a webserver written in Java with all
the necessary internal wiring to be able to load up the webapp object
structure. There are several available in ports: the various
jakarta-tomcat versions (www/jakarta-tomcat*) and Jetty (www/jetty).
But wait! There's more. The webapp servlet stuff corresponds roughly
to the middle (logic) tier of a 3-tier application. There's an
equivalent setup "Enterprise Java Beans" which (very roughly)
corresponds to the 3rd (data) layer in a 3-tier application -- the
'Java Bean' is often an object abstraction for accessing an underlying
RDBMS, but it's not limited to that. [The web browser and web server
make up the 1st (presentation) layer in this concept.] See the
java/jboss3 port for a
freely available EJB server -- an alternative to the default J2EE
stuff that Sun supplies, but which isn't actually available on FreeBSD
as far as I know.
Dr Matthew J Seaman MA, D.Phil. 26 The Paddocks
PGP: http://www.infracaninophile.co.uk/pgpkey Marlow
Tel: +44 1628 476614 Bucks., SL7 1TH UK
It’s been a little more than one year since we started this blog. During this time the platform we’ve been building since 2013 has continued to evolve, getting richer features as releases were made.
We’ve signed customers in many different fields, and their feedback on our technology has always been positive: they thank us for helping them solve hard problems so they can focus on their value creation.
In the meantime, the landscape of platforms for the Internet of Things has evolved, with some solutions gaining momentum and being used for many different use cases. The more we looked at most of those solutions, the more we felt we were providing a better way of doing things. So in mid-2015 we started to think about how we could open up our platform to a larger audience without cannibalizing our business. Our conclusion was that the best way to grow awareness of our technology was to change our distribution model and embrace open source for our core components.
So here we are: after lots of effort on cleaning, packaging, documenting and legal work, we’re thrilled to announce that the core of our offering, our platform for sensor data, is becoming open source. This won’t change much about the way we do business, as we will continue to provide support contracts to our customers so they can deploy our platform without worrying about how to get help should they need it. We also continue to offer hosted solutions so you don’t have to deploy it yourself. What will change is the number of people who will get their hands on what we’ve created, and that gets us really excited, as many of the problems they’ve been struggling to solve with other time series databases are easily addressed by our approach.
In this opening process we had to make a few changes; the major one was renaming the language we created to manipulate sensor data. That language was initially called Einstein because we thought Einstein was the person best capable of making sense of the spacetime continuum (our storage layer is still called continuum). But we realized that Einstein was a global trademark, so we had to find another name in order not to infringe on that trademark. As our platform was called Warp, and now Warp 10, in reference to the top speed achievable, we renamed our language to WarpScript, which we tend to like since it finds its roots in PostScript in many respects. We still kept the ‘.mc2’ file extension as a reminder of the Einstein days. So don’t be surprised if you read references to Einstein here or there; we’re trying to move on, but occasionally our brains will make those odd references.
The project home is now www.warp10.io, and the code lies on Cityzen Data’s github account. We’ve also set up a Google Group and a twitter account @warp10io, so you have plenty of ways to reach out to us and the community we expect to build.
So on behalf of the team at Cityzen Data, who have been working really hard these last few weeks, I wish you a very warm welcome to the Warp 10 universe, hoping you take our technology where no one has taken it before!
Mathias Herberts - cofounder & CTO - @herberts
Naho's Opening Statement
Good evening everyone, it's time for this week's Vocaran.
To those who went to NicoNico Chou Kaigi 2, otsukaresamadeshita!
Seems like there's a lot of people who bought CDs from VoM@s, looked at 3D models during the event, enjoyed the music and dancing, also,
I believe that there are a number of you who are experiencing pain from walking too much.
During The 4th NicoNico Gakkai (B) Symposium - Poster Session, they displayed a poster titled 「VOCALOID Ishiki Chousa」; it is currently being uploaded to NicoNico Seiga.
To those who did not attend Chou Kaigi, this is your chance to read it!
It contains some ve~ry interesting facts in it. (The link to the poster is in the uploader comments)
With that, let's maintain the hype from Chou Kaigi and view this week's rankings.
As usual, we'll be starting from the 30th place.
30(14↓) : 【初音ミク
29(6↓) : 【リンちゃ
28(43↑) : 【IA】チ
27(New!!) : 【結月ゆか
26(New!!) : 【初音ミク
25(27↑) : 【GUMI
24(32↑) : 【初音ミク
23(17↓) : 『初音ミク
22(83↑) : 【IA・初
21(5↓) : 【鏡音レン
Pick Up : 【巡音ルカ
20(25↑) : 【崖の上で
19(19→) : 【鏡音リン
18(70↑) : 【初音ミク
17(28↑) : 【IA】夜
16(3↓) : 【ミク&レ
15(New!!) : 【初音ミク
14(50↑) : 【初音ミク
13(New!!) : 【鏡音リン
12(New!!) : 【GUMI
11(1↓) : 【GUMI
10(New!!) : ( IA^
9(4↓) : 【初音ミク
8(New!!) : ┗|∵|┓
7(2↓) : 【初音ミク
6(12↑) : 【IA】ロ
5(New!!) : 【IA】ヤ
4(New!!) : 【鏡音リン
This week in history (Weekly Vocaran #239)
5 :【初音ミク】アストロトルーパー【オリジナルPV】 (Astro Trooper)
4 : 【初音ミクDark】外見と内面【オリジナル曲PV付】 (Sotomi to Naimen)
3 : 【GUMI】チェックメイト【オリジナルPV】 (Checkmate)
2 : 【初音ミク】ラブソングを殺さないで【オリジナルPV】 (Love Song o Korosanaide)
1 : 【鏡音レン】東京テディベア【カバーアレンジ】 (Tokyo Teddybear)
3(New!!) : 「キズ」
2(New!!) : 【初音ミク
1(New!!) : 【IA】ア
Naho's Closing Statement
And that was this week's first place~
New info has been released during NicoNico Chou Kaigi 2.
A three-man DB, a library which can do chorus or unisons,
「VOCALOID3 Library ZOLA PROJECT」
Developed by Yamaha, it will go on sale in June.
Lately there's a lot of projects surrounding VOCALOID community,
Didn't think that the game world would even influence the making of VOCALOID libraries...
Well, let's have a look at the rest of the rankings.
Nami's Opening Statement
Good afternoon everyone, it's time for UTAran!
Otsukare to those who went to NicoNico Chou Party 2!
Lively Teto-san, and also Ritsu-san who showed off during her solo performance!
It was amazing!
Let's keep the tension and have a look at this week's top 10!
10(New!!) : 【ゆっくり
9(9→) : 【UTAU
8(New!!) : 忘れんぼ症
7(New!!) : 【UTAU
6(New!!) : 【雪歌ユフ
5(New!!) : 【重音テト
4(New!!) : 【健音テイ
3(New!!) : 【雪歌ユフ
2(New!!) : 【蒼77
1(New!!) : 重音テトオ
Nami's Closing Statement
And that was the Utaran!
Let's move on to the rest of the rankings,
Take a look at this week's top 10!
Ending Song : 【GUMI
Introduction: Aquarium LED Light Controller Based on Raspberry Pi
Having a fish tank is good both as a hobby and as a profession.
One of the key components of a fish tank is the proper lighting system (among others).
There's much literature on the internet about this theme, from the very basic on-off timer switch to sophisticated sunrise-sunset-season controllers.
Step 1: Starting
Neon lights are commonly used, but they only permit on-off lighting of an aquarium.
If one wants sunrise-sunset control, LED lighting is a must.
LED strips are easily controllable through PWM (pulse-width-modulation) signals, can be bought with up to IP67 protection...
There are a lot of aquarium lighting systems which control the amount of illumination by varying the LED strip's power. Even the sunrise-sunset-season controllers use this, but in a time-controlled way (they ramp the power up from zero to max and then down to zero…).
All these systems consist of the following three components: a PSU, a controller and the LED strip. The only thing that I could not find is a true light-controlling system which measures the actual light intensity and controls the LED strip's power based on the illuminance.
This "only" needs the use of a light sensor.
This was the point when the idea of the "Aquarium LED light controller with RPi" was born.
Step 2: True Light Control
The starting points of an easily usable, programmable light controller were the following:
- Full light control with PWM capable of 0-100% control
- Exact light measurement (no LDR or other inaccurate techniques)
- Easily programmable through PC or smart phone
- Can be used with almost every LED lighting setup or instead of existing controllers
- Accurate timing (not suffering from RTCs' long term time shift)
The first 4 points would point to a microcontroller-based system, but the 5th point needs an internet connection and the NTP protocol. The Raspberry Pi was chosen because of the need for rapid, simple development and easy upgradeability.
Step 3: Main Components
Firstly the light sensor was chosen. The BH1750-based light sensor module communicates over the I2C bus, measures across a very wide range with high accuracy, and is cheap.
Secondly the PWM signal - which comes out of the RPi - must be amplified to drive the LED strip. A simple "noname" 3 channel LED amplifier / repeater was chosen.
And lastly - the Raspberry Pi - can be any of its sub types (model A-B-B+-B2-B3-zero...). Actually the very first edition of RPi model B (with 256 MB RAM) was used, because it was at hand with a 4 GB SD card.
The controlling software runs on it, with its own graphical user interface, so the development took place on the RPi; no other programming device was needed.
Since no one wants a monitor next to a fish tank, the RPi needs to be accessed remotely. Its actual screen can be viewed over a VNC connection from either a PC or a smartphone.
Step 4: Hardware
The following components are needed for the light control system:
- Raspberry Pi
- BH1750 light sensor
- LED amplifier
- thermal switch
- 12V power supply
Fuses protect the 12V line (each segment of the LED strip is individually fused, so if one segment develops a short circuit, it will not short out the whole power supply).
The thermal switch is normally closed and is glued (with thermal grease) to the LED strip's heat sink. A KSD9700-type (50°C) switch was used: if the heat sink's temperature rises above 50°C, the switch opens the circuit to protect the LED strip from overheating, and it stays open until the temperature falls below 35°C.
This feature is purely for safety; under normal conditions it never operates (opens) at all.
The 12V power supply is a common 12V/6A type.
Step 5: Software
As mentioned, the software runs on the Raspberry Pi.
The main concept was to easily and rapidly develop the controlling software.
Since the initial setup of the aquarium light system happens only occasionally (1-3 times), the appearance of the software is minimalistic: only the main information is shown, and the input elements are all buttons, to be smartphone friendly.
The software is written in Python and uses Tkinter for the graphical user interface.
The program starts after the Pi boots and reads its settings from a config file. In order to keep exact time, the Pi needs an internet connection (the actual time comes from the Network Time Protocol).
Remote connection happens over the VNC (Virtual Network Computing) protocol via x11vnc: no encryption, no password, just simple remote display and control.
The light control has 9 time steps, each step has its illuminance setpoint. The intermediate points (between two setpoints) are calculated with linear interpolation.
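The interpolation between two adjacent setpoints can be sketched like this (function and variable names are my own, not taken from the project's code):

```python
def illuminance_setpoint(now, t0, lux0, t1, lux1):
    """Linearly interpolate the target illuminance between two time
    setpoints. Times are minutes since midnight; outside the interval
    the nearer setpoint's value is held."""
    if now <= t0:
        return lux0
    if now >= t1:
        return lux1
    frac = (now - t0) / (t1 - t0)
    return lux0 + frac * (lux1 - lux0)

# halfway between an 08:00 (0 lux) and a 10:00 (10000 lux) setpoint
print(illuminance_setpoint(9 * 60, 8 * 60, 0, 10 * 60, 10000))  # 5000.0
```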
There are 2 additional time switches, which can control e.g. a pump, CO2 feed, etc.
The software is what you see; everything is obvious, with all the necessary information shown. The base resolution is set to 1024x768 to fit phone screens.
The control scheme is a simple integrating controller with a deadband. The power level control has 1000 steps, so 0.1% is the smallest change in the output, which is more than fine.
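The integrating controller with a deadband can be sketched in a few lines (the names, the deadband value and the single-step increment are illustrative, not the project's actual tuning):

```python
def control_step(power, measured_lux, setpoint_lux, deadband_lux=50):
    """One step of a simple integrating controller with a deadband.

    `power` is the PWM output in 0.1% units (0..1000). If the measured
    illuminance is inside the deadband, nothing changes; otherwise the
    output is nudged one step toward the setpoint and clamped to range.
    """
    error = setpoint_lux - measured_lux
    if abs(error) <= deadband_lux:
        return power                      # close enough: hold the output
    step = 1 if error > 0 else -1         # integrate slowly toward target
    return max(0, min(1000, power + step))

print(control_step(500, 9000, 10000))   # 501: too dark, nudge up
print(control_step(500, 9990, 10000))   # 500: inside the deadband, hold
```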
The Raspberry Pi has only one hardware PWM output, which is used in this project. The maximum PWM frequency (with 1000-step resolution) is 9600 Hz, but real-world experiments (done with an oscilloscope) showed that anything above 1000 Hz is pointless (the LED strip simply is not fast enough), so 960 Hz was actually set.
When the actual time equals a time setpoint, the program stores the actual power level in the configuration file.
If the light sensor fails, a blinking red LED indicates this, but the light control continues with the previously saved power level data.
Step 6: Arrangement, Setup
The light sensor is a bare PCB module (as can be seen), which proved very difficult to make waterproof, so in practice the light level at the water surface is measured.
Some additional notes: formerly the aquarium had neon lights switched on for a 10-hour interval.
The light level at the surface was 15800 lux, so the luminous exposure was roughly 158000 lux·h daily.
The LED lighting must produce approximately the same luminous exposure as the neon lights, with sunrise-sunset control.
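Luminous exposure is simply illuminance multiplied by duration; a toy check of the figures above:

```python
def luminous_exposure(lux, hours):
    """Luminous exposure in lux-hours: illuminance times duration."""
    return lux * hours

# the old neon setup: 15800 lux held for 10 hours
print(luminous_exposure(15800, 10))  # 158000
```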
The actually used pattern was taken from:
To sum it up: the aquarium really looks different with sunrise-sunset control in use; nature wakes up in the morning, and the whole thing looks better.
Step 7: Code, Setup
Raspbian Jessie was used as the operating system, so first this has to be downloaded:
After writing the Jessie image to the RPi's SD card, connect an HDMI monitor, a keyboard, a mouse and an ethernet cable, plug the SD card into the Raspberry Pi, then switch it on.
The following steps are needed to set up the light controller:
- Start a Terminal window
To enable the I2C interface on the Pi, type:
sudo raspi-config
Then go to the menu: Advanced / I2C / Yes
Then exit from raspi-config.
A few modifications are needed in the config.txt file, so type:
sudo sed -i 's/^#hdmi_force_hotplug=1/hdmi_force_hotplug=1/' /boot/config.txt
sudo sed -i 's/^#hdmi_group=1/hdmi_group=2/' /boot/config.txt
sudo sed -i 's/^#hdmi_mode=1/hdmi_mode=16/' /boot/config.txt
Then copy the 'Aquarium_LED_light_control.py', the 'LED_PWM_wiringpi.py' and the 'Light_control.ini' files to /home/pi.
(The 'Aquarium_LED_light_control.py' is the main program, while 'LED_PWM_wiringpi.py' is for testing: testing can be done by directly modifying the power output while watching the change in illuminance. The 'Light_control.ini' file is necessary to run the main program.)
Then type the following:
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get install -y python-dev python-pip
sudo apt-get install -y python-smbus i2c-tools
sudo pip install wiringpi2
sudo apt-get install -y python-tk
sudo apt-get install x11vnc -y
mkdir -p /home/pi/.config/autostart
cd /home/pi/.config/autostart
cat > x11vnc.desktop <<'EOF'
[Desktop Entry]
Encoding=UTF-8
Type=Application
Name=X11VNC
Comment=
Exec=x11vnc -forever -display :0
StartupNotify=false
Terminal=false
Hidden=false
EOF
cat > lxterminal.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=LEDaqua
Exec=lxterminal -e 'sudo python /home/pi/Aquarium_LED_light_control_wiringpi_fullscreen.py'
StartupNotify=false
Hidden=false
EOF
After a reboot (sudo reboot), the light controller is ready to use, with or without a monitor, and can be accessed through VNC.
A cookbook is the fundamental unit of configuration and policy distribution.
What is a Cookbook?
A cookbook is the chef-client's unit of configuration and policy distribution which defines a scenario and contains everything that is required to support it:
The chef-client uses Ruby as its reference language for its cookbooks and recipes.
What is Chef-solo?
Chef-solo is an open source version of the chef-client that allows using cookbooks with nodes without requiring access to a Chef server. chef-solo runs locally and requires that a cookbook (and any of its dependencies) be on the same physical disk as the node.
This box lets you include legacy Chef cookbooks or recipes in your Application Catalog within ElasticBox. It downloads a Chef cookbook's configuration files to be used by chef-solo. It is intended to customize a cookbook with its own default.rb and metadata.rb files, which define the recipe's attributes and its required dependencies.
CHEF_COOKBOOK_NAME: Name of the cookbook and its directory
CHEF_DEFAULT_RB: the attributes file to be configured and saved into the cookbook folder (CHEF_COOKBOOK_NAME). For each cookbook, attributes in the default.rb file are loaded first. Where the cookbook attributes take precedence over the default attributes, chef-solo applies the new settings and values when it runs on the instance.
CHEF_METADATA_RB: Every cookbook requires a small amount of metadata stored in a file called metadata.rb that lives at the top of CHEF_COOKBOOK_NAME’s directory. The contents of the metadata.rb file provide hints to chef-solo so that the cookbook is deployed correctly.
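As an illustration only (the cookbook name, attribute and dependency are made up; the layout follows standard Chef conventions), the two files might look like:

```ruby
# metadata.rb -- lives at the top of the CHEF_COOKBOOK_NAME directory
name    'my_cookbook'   # hypothetical cookbook name
version '0.1.0'
depends 'apt'           # hypothetical dependency

# default.rb -- cookbook attributes, loaded first for this cookbook
default['my_cookbook']['port'] = 8080   # hypothetical attribute
```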
Accelerate Cloud Adoption
As you move your applications to the cloud, Chef makes your adoption path not just smooth, but fast. Migrate your workloads quickly, consistently, and at a pace that suits your needs.
Manage Multiple Cloud Environments
Take control of all your cloud environments. Chef is cloud agnostic, which means you’re free to pick the cloud providers that meet your requirements, based on features and cost.
Manage Both Data Center and Cloud Environments
Chef lets you manage all your environments. Manage Windows, Linux, AIX, and Solaris servers, whether in the cloud or on premises.
Maintain High Availability
Keep the Chef Server API available even in case of partial network or hardware failure. The Chef server can operate in a high availability configuration that provides automated load balancing and failover for stateful components in the system architecture.
This open source software is free to use.
An instance executing this box will use bash scripting to download and save the configuration of a Cookbook. Box events handle the chef-cookbook lifecycle on the instance as follows:
pre_configure event script: creates the cookbook's folder, then downloads and configures the default.rb and metadata.rb files defined in the box variables.
The box supports deploying to these linux distributions:
No support is offered for this open source software.
FreshPorts - new ports, applications Fifty Best OSX Podcasts For 2020. Latest was Windows 7 Ultimate X86 X64 Fully Activated 32 64 Bit Torrent. Listen online, no signup necessary. Manualzz provides technical documentation library and question & answer platform. It's a community-based project which helps to repair anything. Generate your app dmgs. Contribute to LinusU/node-appdmg development by creating an account on GitHub. Frugal and native macOS Syncthing application bundle - syncthing/syncthing-macos Mac application to spoof your iOS device location. - Schlaubischlump/LocationSimulator
10.6 to 10.14 users: There is not currently a binary installer, and you will need to follow the source install instructions instead. 10.5 users: Download the installer disk image: Fink 0.9.0 Binary Installer (PowerPC) - 13635047 bytes Fink…
6 days ago First make sure the versions of your installed Xcode and Command Line Tools Line Tools from the tab "Downloads" in the preferences window of Xcode.app. Mac OS X, 10.6, Snow Leopard, darwin 10 Xcode 8.3.3 ¶. 23 Dec 2017 Download cuDNN 7.0.4 for CUDA 9.0 on MacOS (login to Nvidia developer account) cd ~/Downloads mv Xcode.app Xcode-8.3.3.app sudo mv -b16/aa0333dd3019491ca4f6ddbe78cdb6d0/jdk-8u152-macosx-x64.dmg. osx_image: xcode8.3, Xcode 8.3.3, 8E3004b, macOS 10.12, 1.8.0_112-b16 addon is the simplest, fastest and most reliable way to install dependencies. In practical terms, if your Mac build requires Java 8 and below, use xcode9.3 (or iOS 10.3. iOS 11.0. iOS 11.1. iOS 11.2. iOS 11.3. iOS 11.4. iOS 12.0. iOS 12.1. 13 Dec 2016 Xcode is a great addition for those who are looking for a reliable tool for creating applications for Mac OS as well as the iOS for iPhones and 2017年9月27日 例子:下载Xcode 8.3.3版本操作流程 ###Xcode10.1https://download.developer.apple.com/Developer_Tools/ IOS 官网下载Xcode的dmg文件.
A shell script to build fancy DMGs. Contribute to andreyvit/create-dmg development by creating an account on GitHub.
Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/usr/lib/libgcc_s.1.dylib Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/usr/lib/crt1.3.1.o Checksums of Mac OSX installer DMGs. Contribute to notpeter/apple-installer-checksums development by creating an account on GitHub. If you have Xcode 4.3 or newer, you should install the Command Line Tools for Xcode from within Xcode's Download preferences.
Go to the Mac App store. Find Xcode. Press download. Download Xcode DMG 11.3.1 Final Installer [Build 11C504] | Direct Link | App Store Link DMG File [Build: 9A235]; Download Xcode 8.3.3 .DMG File [Build: Xcode is an integrated development environment (IDE) for macOS containing a suite of Xcode could be downloaded on the Mac App Store. Jump up to: from preferences -> downloads; ^ "macOS 10.14 Mojave can't open Xcode 8.3.3 8 Apr 2012 You can't download previous version of Xcode from Mac App Store. But if you have registered an Apple Developer account, Apple allows you 25 Apr 2019 You can download Xcode 8.3.3 from Apple Developer downloads. The latest release for them can be downloaded from the Mac App Store. Stop the Xcode 7.3.1.dmg download on Google Chrome's Downloads page or in ://adcdownload.apple.com/Developer_Tools/Xcode_7.3.1/Xcode_7.3.1.dmg'
Contribute to lukemelia/callback-ios development by creating an account on GitHub. Fully Automated Ansible role for the iOS Developer to provision macOS for iOS Development and iOS Continuous Integration. - Xcteq/iOS-Dev-Ansible The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering. - Owasp/owasp-mstg
11 Aug 2018 Xcode is the software development suite for Mac that allows developers to build apps for MacOS, iOS, tvOS, and watchOS. The vast majority of
You need Xcode SDK to develop apps on Mac OS X. Here is a tutorial that guides you on how to install Xcode (App Development kit) on your Windows 10, 8/8.1 and 7 Operating system for iOS SDK. Create customized mobile apps with synchronized text and audio. Bake your perfect setup. Contribute to mikemackintosh/bakery development by creating an account on GitHub. Two letters better than MetaX. Contribute to griff/metaz development by creating an account on GitHub.
How to use community snippets and add personal snippets
Would it be possible to add documentation on how to use snippets with the configuration?
Specifically I would like to add personal snippets that support markdown content and mkdocs extensions.
I would also like to evaluate community snippets such as rafamadriz/friendly-snippets but it is unclear how these can be added. Simply adding friendly snippets as a plugin in plugin.fnl does not include these snippets.
I have tried adding a snippets directory in the .config/nvim/ directory and copied markdown snippets from vim-snippets which seem to appear in the TAB completion list, although they do not expand.
In fnl/config/plugin/cmp.fnl several sources for autocompletion are defined, although it is unclear what these sources specifically refer to for buffer, vsnip and luasnip:
(def- cmp-src-menu-items
{:buffer "buff"
:conjure "conj"
:nvim_lsp "lsp"
:vsnip "vsnp"
:luasnip "lsnp"})
The LuaSnip docs discuss loaders, so I'm curious to understand whether these loaders are used or relevant to the Fennel configuration.
What are the supported approaches for snippets in the current configuration, or how could it be extended to support custom snippets?
Thank you
Hey John how's it going?
I was reading the docs you sent, and as I understand it, friendly-snippets will show as a VSCode type of snippet, so in the menu items it should appear as :vsnip as in your example, but in my tests it appears as :luasnip, as shown in my test config:
So what I did to configure this was the following:
First I changed the LuaSnip lines in nvim/fnl/config/plugin.fnl
; snippets
:L3MON4D3/LuaSnip {:requires [:rafamadriz/friendly-snippets
:saadparwaiz1/cmp_luasnip]
:mod :lua-snip}
And created the following file nvim/fnl/config/plugin/lua-snip.fnl with the following contents:
(module config.plugin.lua-snip
{autoload {vscode luasnip.loaders.from_vscode}})
(vscode.lazy_load)
I did some tests and they are expanding as intended in my tests, let me know if adding the lazy_load fix this for you as well.
Yes, the VSCode JSON friendly snippets are working with the configuration described above. Thank you.
I would also like to have a ~/.config/nvim/snippets/ directory with language specific snippets that I have created for myself, e.g ~/.config/nvim/snippets/markdown.json
I updated the fnl/config/plugin/lua-snip.fnl file to pass a path using :paths, however this doesn't seem to load:
(module config.plugin.lua-snip
{autoload {vscode luasnip.loaders.from_vscode}})
;; (vscode.lazy_load)
(vscode.lazy_load
{:paths ["./snippets"]})
I have tried a few variations around the original lua code but without success
-- load snippets from path/of/your/nvim/config/my-cool-snippets
require("luasnip.loaders.from_vscode").lazy_load({ paths = { "./my-cool-snippets" } })
Looking at the .local/state/nvim/luasnip.log it seems I need a package.json file.
ERROR | vscode-loader: Tried reading package /home/practicalli/projects/practicalli/neovim-config-redux/snippets/package.json, but it does not exist
So for local snippets I need to create a package.json file that lists each file in that directory that represents a file of snippets, similar to that of the friendly snippets package
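For reference, a minimal package.json of that shape might look like this (the name and file list are hypothetical; the structure follows the VSCode snippet-package convention that LuaSnip's from_vscode loader reads):

```json
{
  "name": "my-snippets",
  "contributes": {
    "snippets": [
      { "language": "markdown", "path": "./markdown.json" }
    ]
  }
}
```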
Maybe I should create a practicalli/neovim-snippets project and add that to my plugins config :)
NSF SHF 1514372:
Automating Robot Programming Through Constraint Solving and Motion Planning
The project aims to develop a high-level programming framework, called Robosynth, for personal robots. Here, rather than writing low-level code that defines how a robot must perform a task, the user of the robot writes a specification that defines what is to be accomplished. Given this specification and a model of the robot's environment, Robosynth automatically synthesizes a program that can be executed on the robot. So long as the environment behaves according to the assumed model, all executions of this program are guaranteed to satisfy the user-defined requirements. This approach and its derivatives can make robot programming accessible to a vast untapped body of inexperienced programmers.
The technical highlights of the project are the specification language using which users interact with Robosynth, and the algorithms that Robosynth uses for automatic code synthesis. These algorithms simultaneously reason about a logical task level that is concerned with the high-level goals of the robot, as well as a continuous motion level concerned with navigating and manipulating a physical space. At the task level, Robosynth leverages recent methods for analyzing complex systems of logical constraints, for example SMT-solving and symbolic solution of graph games. Motion-level reasoning is performed using sampling-based motion planning techniques.
- N. T. Dantam, S. Chaudhuri, and L. E. Kavraki, “The Task Motion Kit,” Robotics and Automation Magazine, vol. 25, no. 3, pp. 61–70, Sep. 2018.
- Y. Wang, S. Chaudhuri, and L. E. Kavraki, “Bounded Policy Synthesis for POMDPs with Safe-Reachability Objectives,” in Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, Stockholm, Sweden, 2018, pp. 238–246.
- N. T. Dantam, Z. K. Kingston, S. Chaudhuri, and L. E. Kavraki, “An Incremental Constraint-Based Framework for Task and Motion Planning,” International Journal of Robotics Research, 2018.
- S. Butler, M. Moll, and L. E. Kavraki, “A General Algorithm for Time-Optimal Trajectory Generation Subject to Minimum and Maximum Constraints,” in Workshop on the Algorithmic Foundations of Robotics, 2016.
- N. T. Dantam, K. Bøndergaard, M. A. Johansson, T. Furuholm, and L. E. Kavraki, “Unix Philosophy and the Real World: Control Software for Humanoid Robots,” Frontiers in Robotics and Artificial Intelligence, vol. 3, Mar. 2016.
- N. T. Dantam, Z. K. Kingston, S. Chaudhuri, and L. E. Kavraki, “Incremental Task and Motion Planning: A Constraint-Based Approach,” in Robotics: Science and Systems, 2016.
- Y. Wang, N. T. Dantam, S. Chaudhuri, and L. E. Kavraki, “Task and Motion Policy Synthesis as Liveness Games,” in International Conference on Automated Planning and Scheduling, 2016, pp. 536–540.
These forums are endless, and I have really been learning a lot. This is my first post however.
I have a good understanding at low level of computers. I've done some hard drive recovery for friends. Changed out mobos and stuff. Little things compared to you very intelligent people. But I'm learning. Gotta start somewhere. Doing my homework on my first total build. Also have a question about memory that I am running the forums for answers.
I will try to be short and to the point. I currently am running an HP product, Asus A8N-LA mobo, with and AMD Athlon X2 64 bit processor. It has on board Nvidia graphics. I had bought and installed a new Nvidia 9500 GT video card 1GB. Along with Windows 7 Ultimate 64 bit, and a couple more sticks of memory (up to 4GB now). I have been running it for over a month with no problems.
I got braver and pulled the heat sink off the Nvidia chipset for the PCIe to snoop around. I put it back correctly, but the graphics card now doesn't work. Is it possible that when the pressure was relieved, the chip cracked? I didn't do anything else to it. No, I didn't reapply any thermal paste to it (dummy me). I don't have another system to try the graphics card in; my other one is an old P4 and it doesn't have PCIe (the relic). I am back to running VGA on the onboard graphics.
I'm sorry, I didn't clarify that. I didn't pull the card itself apart, just the heatsink on the chip on the motherboard above it. Being an old motherboard, I thought it may have gotten brittle from heat over time, and when I pulled the heat sink off, the release in pressure may have helped it crack. I don't have another card to test in the slot, or another slot in another system to test the card (I'm a little outdated). I was under the impression the Nvidia chip on the motherboard above it operated that PCIe slot, so if the chip was bad, the slot wouldn't be any good. I'm not real knowledgeable about the components of the motherboard itself yet.
I'll have to pull it apart and check the soldered connections. Hadn't thought of pulling it apart. Any other ideas would be helpful. Thanks very much.
Pulled the mobo today and looked it over front and back with a bright light and a large magnifying glass. Nothing out of the ordinary: no dark spots indicating a short, no cracks the eye can see. And still no PCIe. Guess I'll muddle through with onboard VGA instead of DVI. Thanks again.
My Compaq has the same board, and I've been running this HP for years. About 3 months ago I put a PCIe card in it too, and this morning when I went to power it on, no video. The PCIe slot on this one quit working too. I am starting to think maybe it's something with these Asus A8N-LA mainboards and the use of PCIe, because like I said, I have only been using the slot for a few months and it's already gone out...
Good point. I think PCIe was still pretty new when this mobo came out, so Asus may not have gotten the bugs out of it. Now that these cards are so popular, I wonder how many others using the older boards run into the problem. I didn't think it was something I did to it.
Thanks very much. I lived without High Def this long I can live awhile longer.
RGB Color Codes Chart: RGB color picker, RGB color codes chart, RGB color space, RGB color format and calculation, RGB color table.

15 amazing tools for online collaboration: A team of designers does not always work in the same office; you work in distributed groups, some of you may be working from home, and clients can be based all over the world. This is where collaboration tools come in: they make it easier and faster for designers to get feedback and approve artwork in a professional manner, and they come in all sorts of forms, from free Android apps to Chrome extensions. Some are created specifically for designers, some serve as a concept-crafting whiteboard often with tools to make simple annotations, and some are all-in-one web apps that include an element of project management. Here we gather together some of the best available online tools to allow designers to take part in collaborative work in real time.
CodeIgniter Library: 77 Free Scripts, Addons, Tutorials and Videos (Razorlight Media). CodeIgniter is the brainchild of Ellis Labs and one of the more popular PHP frameworks available. It's gained a reputation as a lean, mean, easy-to-learn framework that anyone comfortable with PHP can get up and running with in a few days. CodeIgniter is fixin' to blow up even more with the release of ExpressionEngine 2.0, currently in Beta, which is built on top of the CodeIgniter framework. Sounds great and all, but ExpressionEngine ain't cheap ($299 for the commercial version), so while you sit around wondering if ExpressionEngine is a CMS you want to get into, why not get familiar with its CodeIgniter foundation using these 77 totally free scripts, addons and tutorials? Free/Open Source CodeIgniter Scripts: CodeIgniter is a great, open source PHP framework for building web applications. BackEndPro: a control panel for developers written in PHP for the CodeIgniter framework.
CSS Cascading Style Sheets (CSS) are a stylesheet language used to describe the presentation of a document written in HTML or XML (including XML dialects like SVG or XHTML). CSS describes how elements should be rendered on screen, on paper, in speech, or on other media. CSS is one of the core languages of the open web and has a standardized W3C specification. Developed in levels, CSS1 is now obsolete, CSS2.1 is a recommendation, and CSS3, now split into smaller modules, is progressing on the standardization track. CSS Reference An exhaustive reference for seasoned Web developers describing every property and concept of CSS. CSS Tutorial A step-by-step introduction to help complete beginners get started.
Big Huge Labs Tons of fun stuff... Give one of our toys a spin! Lolcat Generator, Hockneyizer, Billboard, Pop Art Poster, Trading Card, Jigsaw, Map Maker, Mosaic Maker, Photobooth, Bead Art, Motivator, Color Palette Generator, Framer, Mat, FX, Magazine Cover, Movie Poster, Wallpaper, Badge Maker, Cube, CD Cover, Gift Center, Pocket Album, Calendar, I know, right? It's a lot to take in. Go slow.
Cody - Free HTML/CSS/JS resources A minimal and responsive newsletter form with the addition of some subtle CSS3 animations to enrich the user experience. Browser support ie Chrome Firefox Safari Opera 9+ It’s always challenging to push a user to subscribe to your website newsletter. The real key is where you position the call-to-action form IMO. Then there’s the UI and UX of the form itself. When the user decides to subscribe, we need to make sure the process is smooth and simple.
Shorthand properties: shorthand properties are CSS properties that let you set the values of several other CSS properties simultaneously.

An Introduction To Graphical Effects in CSS (Adobe Dreamweaver Team Blog): Over the past couple of years, CSS has gotten a set of new properties that allow us to create quite advanced graphical effects right in the browsers, using a few lines of code, and without having to resort to graphics editors to achieve those effects. If you are like me, then that sounds like music to your ears. I don't do well in graphics editors and would choose a code path over a graphical editor path any day. CSS now allows us to do more graphical work right in our text editors.

Tiny editable jQuery Bootstrap spreadsheet from MindMup: This tiny (3KB, < 120 lines) jQuery plugin turns any table into an editable spreadsheet. Key features: no magic (works on a normal HTML table, so you can plug it into any web table and apply any JS function to calculate values); supports validation and change events (so you can warn about invalid input or prevent invalid changes); uses standard DOM focus for selection (so it does not interrupt scrolling or tabbing outside the table); input automatically copies underlying table cell styling; native copy/paste support; does not force any styling (so you can style it any way you want, using normal CSS); works well with Bootstrap; depends only on jQuery. Tested in Chrome 32, Firefox 26, IE 10, Safari 7, Android Chrome and iOS 7 Safari.
Responsive Images: If you're just changing resolutions, use srcset. If you're implementing responsive images (different images in HTML for different situations) and all you are doing is switching between different versions of the same image (the vast majority of usage), all you need is the srcset attribute on the <img>. Gaze upon this easy syntax: It's not just the syntax that is easy, it does a better job than <picture> with <source>s with explicit media attributes (we'll cover why in a moment). Plus it has the opportunity to be much better in the future with browser settings and browser improvements. I've screencasted about this before, but it clicked better watching Mat Marquis's talk at An Event Apart Austin and with Jason Grigsby's post.
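A minimal example of the syntax being described (file names and widths are invented for illustration; the `w` descriptors and `sizes` attribute follow the standard HTML responsive-images spec):

```html
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 600px"
     alt="A responsive photo">
```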
class Book:
    list_of_reviews = []

    def __init__(self, info_about_author, year_of_publication, genre, publisher):
        self.info_about_author = info_about_author
        self.year_of_publication = year_of_publication
        self.genre = genre
        self.publisher = publisher

    def __str__(self):
        author = 'Author: ' + self.info_about_author
        year = 'Year of publication: ' + self.year_of_publication
        genre = 'Genre: ' + self.genre
        publisher = 'Name of publisher: ' + self.publisher
        reviews = 'Reviews about book: ' + '\n' + '\n'.join(self.list_of_reviews)
        return author + '\n' + year + '\n' + genre + '\n' + publisher + '\n' + reviews

    def __repr__(self):
        return self.__str__()

    def __eq__(self, other):
        if str(self) is str(other):
            return True
        else:
            return False

    def __ne__(self, other):
        if repr(self) is not repr(other):
            return True
        else:
            return False


class Book_review:
    def __init__(self, review):
        self.review = review

    def add_review(self, obj):  # method of adding a review to a Book object's review list
        obj.list_of_reviews.append(self.review)


book1 = Book('Pete Barker', '2007', 'Historical genre', 'Hamish Hamilton')
book2 = Book('Aldous Huxley', '1932', 'Science fiction', 'Harper Collins')
review1 = Book_review('Brave New World explores the negatives of an ostensibly successful world in which everyone appears to be content and satisfied, with excessive carnal pleasures; yet really, this stability is only achieved by sacrificing freedom in its true sense and the idea of personal responsibility.')
review2 = Book_review("It is true that this book is a complex read and I must confess that some parts I did not understand; however, the novel's meaning has left a deep impression on me. It's certainly a book I won't forget, and I would recommend it to readers aged fourteen and over as the ideas presented are complex, and Huxley writes in a very adult-like manner, with exceedingly complicated sentences and very complex vocabulary.")

review1.add_review(book2)  # should only add reviews to this object
review2.add_review(book2)

print(str(book1))
print(str(book2))
print(repr(book1))
print(repr(book2))
print(book1 == book2)
print(book2 != book2)
Answer # 1
Change the class like this:
class Book:
    def __init__(self, info_about_author, year_of_publication, genre, publisher):
        self.list_of_reviews = []
        self.info_about_author = info_about_author
        self.year_of_publication = year_of_publication
        self.genre = genre
        self.publisher = publisher
        ...
Now why did this happen. There is a class, and there is an object. For example you have a class
Book, but there are objects
The way you described the property:
class Book:
    list_of_reviews = ...
is the creation of a property on the class. And since the class is common to all objects, the property list_of_reviews was accordingly common to all books.
This feature can be bypassed by changing the object reference of the class property of the object itself:
class Book:
    list_of_reviews = None

    def __init__(self, info_about_author, year_of_publication, genre, publisher):
        self.list_of_reviews = ...
Then each new object will have its own attribute.
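A minimal, self-contained sketch (hypothetical class names) of the difference between a class attribute and an instance attribute:

```python
class SharedBook:
    list_of_reviews = []          # class attribute: one list shared by every instance

class OwnBook:
    def __init__(self):
        self.list_of_reviews = [] # instance attribute: a fresh list per object

a, b = SharedBook(), SharedBook()
a.list_of_reviews.append("review")
print(b.list_of_reviews)          # ['review'] -- b sees a's review

c, d = OwnBook(), OwnBook()
c.list_of_reviews.append("review")
print(d.list_of_reviews)          # [] -- d's list is untouched
```

This is exactly why both reviews ended up on every book: `append` mutates the one shared list on the class.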
UPD. There is a way to simplify the creation of a class with a constructor, field annotations, a comparison mechanism, and a string representation method for objects through dataclasses.
For annotating with built-in generic types such as list[str] you need Python 3.9 and above; for earlier versions it can be replaced with typing.List[str].
Rewrote the code:
from dataclasses import dataclass, field


@dataclass
class Book:
    info_about_author: str
    year_of_publication: str
    genre: str
    publisher: str
    list_of_reviews: list[str] = field(default_factory=list, repr=False)

    def print_info(self):
        print(self)


book1 = Book('Pete Barker', '2007', 'Historical genre', 'Hamish Hamilton')
print(book1)
# Book(info_about_author='Pete Barker', year_of_publication='2007', genre='Historical genre', publisher='Hamish Hamilton')
book1.print_info()
# Book(info_about_author='Pete Barker', year_of_publication='2007', genre='Historical genre', publisher='Hamish Hamilton')
Things I don't heart!
You know what I hate? I hate superfluous code, I hate repetitive code and I hate needing to write and remember blocks of code for something that should really be a simple one liner. During weekly code reviews at @redwindsoft
I constantly try to drive home the need for simplicity and easy to understand code.
Things I heart!
I fucking heart Objective-C categories.
The Hippy Hippy Shake!
So, one of the coolest and almost certain to be overused features of iOS 7 is the parallax effect that you see on your Springboard now, giving the impression that different layers are moving on different planes independent of each other, in response to the movement of your device. In layman's terms, it's the nifty looking movement your app icons make when you move your iPhone or iPad around.
We've implemented a shed load of this effect in an app that will be released in the next few weeks. Unfortunately I can't really say much about the app in question but @sobei_
a couple of days ago might give you some idea of what it is…
All the ones and zeros!
Anyways, back to the issue at hand. Let's say I have a single UIView that I would like to apply motion effects to. It's pretty simple really. For now, let's assume I just want to move in one direction, sideways.
We build a UIInterpolatingMotionEffect, give it a direction, minimum and maximum value, and then add it to the view.
// Define two offsets, these are how much your view will move.
// Bigger = more movement. There's no one "right" value, experiment and have fun.
// But be aware that for backgrounds and full-screen images, the bigger you go, the more you may need to increase the size of the actual image itself.
CGFloat minimum = -100.0f;
CGFloat maximum = 100.0f;
// Create an *Interpolating* Motion effect and tell it to work off an object's center.x key path, tilting along the horizontal axis.
UIInterpolatingMotionEffect *motionEffect = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
// set the minimum and maximum relative values using our offsets
motionEffect.minimumRelativeValue = @(minimum);
motionEffect.maximumRelativeValue = @(maximum);
// finally, add the effect to the desired view
[view addMotionEffect:motionEffect];
Already to me, that code is over-complicated for what we want to achieve, and we've only added one axis. Let's add a second, the vertical axis.
CGFloat horizontalMinimum = -100.0f;
CGFloat horizontalMaximum = 100.0f;
UIInterpolatingMotionEffect *horizontal = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
horizontal.minimumRelativeValue = @(horizontalMinimum);
horizontal.maximumRelativeValue = @(horizontalMaximum);
CGFloat verticalMinimum = -100.0f;
CGFloat verticalMaximum = 100.0f;
UIInterpolatingMotionEffect *vertical = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.y" type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
vertical.minimumRelativeValue = @(verticalMinimum);
vertical.maximumRelativeValue = @(verticalMaximum);
// this time we need to create a motion effects group and add both of our motion effects to it.
UIMotionEffectGroup *motionEffects = [[UIMotionEffectGroup alloc] init];
motionEffects.motionEffects = @[horizontal, vertical];
// add the motion effects group to our view
[view addMotionEffect:motionEffects];
So, pretty much over 10 lines of code to add the effect to a single view. Really, most of it is bollox too. Turns out, for us anyway, we always wanted to apply equal offsets to the horizontal and vertical paths. And the minimum/maximum values that worked the best were always just the same but in opposite directions.
As I said, we are applying this all over the shop in our upcoming app. As you might expect, we have way more than a single view in every scene the user will experience. Imagine how ugly and repetitive our code base would be if we were repeating this code every time we wanted to add motion to a single view!
Wouldn't it be so much nicer and easier to be able to just tell each view to add motion effects with a particular offset.
This sort of functionality is where Objective-C categories really come into their own. We simply created a new Category on UIView called UIView+Motion and gave it the ability to add and remove motion effects, imported it into parent classes and BAM, every interface element in our entire app can now use motion effects in a single line. Sexy.
One other thing we wanted to achieve was consistency from scene to scene between what offsets were used. We limited our different offsets to 3 conceptual layers that every scene should obey, and here are the values that worked really well for us:
#define BACKGROUND_LAYER -25.0
#define CONTENT_LAYER 75.0
#define INTERFACE_LAYER 25.0
So now each view, when set up, can easily and consistently apply the correct motion effects for its layer. For example (the method name below is a guess, since the category's actual header isn't shown here):
[self.backgroundView addMotionEffectsWithOffset:BACKGROUND_LAYER];
You can download the Category here, (complete with top secret project code name in the header comments): UIView+Motion.zip
using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;
using NSubstitute;
using NSubstitute.Core;
using Xunit;
namespace Inspector.Implementation
{
public class FilterTest
{
readonly TestableFilter<TestType> sut;
// Constructor parameters
readonly IEnumerable<TestType> source = Substitute.For<IEnumerable<TestType>>();
public FilterTest() =>
sut = Substitute.ForPartsOf<TestableFilter<TestType>>(source);
public class Constructor: FilterTest
{
[Fact]
public void ThrowsDescriptiveExceptionWhenSourceIsNull() {
var thrown = Assert.Throws<TargetInvocationException>(() => Substitute.ForPartsOf<Filter<TestType>>(new object[] { null! }));
var actual = (ArgumentNullException)thrown.InnerException!;
Assert.Equal("source", actual.ParamName);
}
}
public class Source: FilterTest
{
[Fact]
public void ImplementsIDecoratorAndReturnsValueGivenToConstructor() {
IDecorator<IEnumerable<TestType>> decorator = sut;
Assert.Same(source, decorator.Source);
}
}
public class GetEnumerator: FilterTest
{
readonly IEnumerator<TestType> filteredEnumerator = Substitute.For<IEnumerator<TestType>>();
public GetEnumerator() {
IEnumerable<TestType> filtered = Substitute.For<IEnumerable<TestType>>();
ConfiguredCall arrange = sut.TestableWhere().Returns(filtered);
arrange = filtered.GetEnumerator().Returns(filteredEnumerator);
}
[Fact]
public void ImplementsStronglyTypedIEnumerable() {
IEnumerable<TestType> typed = sut;
Assert.Same(filteredEnumerator, typed.GetEnumerator());
}
[Fact]
public void ImplementsUntypedIEnumerable() {
IEnumerable untyped = sut;
Assert.Same(filteredEnumerator, untyped.GetEnumerator());
}
}
public class TestType { }
internal abstract class TestableFilter<T>: Filter<T>
{
protected TestableFilter(IEnumerable<T> source) : base(source) { }
protected override IEnumerable<T> Where() => TestableWhere();
public abstract IEnumerable<T> TestableWhere();
}
}
}
Multiparty Verified Code Execution on Modern Browsers
Original Publication Date: 2014-Oct-20
Included in the Prior Art Database: 2014-Oct-20
A process that allows a user to securely execute a web application without trusting a single server. This opens the possibility to safe web client cryptography without extensions. It uses bookmarks as intermediate storage for the code, that it’s verified using additional independent servers. It also makes use of the locally generated HTML error pages as a clean environment to execute the verified code. Finally, it’s extended to support multiple applications, user preferences, and secure password-based private data.
Pablo Guerrero - firstname.lastname@example.org - http://greenentropy.com - October 2014
Motivation and previous work
The idea of verifying the code that is executed in the browser has been discussed for a long time. It's important, because it could open the door to implementing secure client applications that don't need to trust any single server, while keeping the benefits of the modern web.
The typical way this problem is solved is by installing a browser extension that is able to verify signed pages before running them, using the public key of a trusted source. This has the problem of installing the browser extension, which is inconvenient in the best case, or impossible on some systems or devices that are not under the user's control (enterprise computers, mobile devices and tablets, public computers, ...).
A similar scenario is browser-specific apps, which are also signed and delivered by the browser vendors.
An ideal solution would involve verification of the code using multiple independent servers to avoid a single point of trust. It's also important that it's simple to use by any user, and that it can run on any modern desktop and mobile browser without installing an extension. At the same time, it has to solve the potential risks of JS described in the previous link.
Simplifying the problem
Some of the criticism of JS comes from the malleability of the language and the lack of primitives needed for cryptography, but this is no longer true. All modern browsers (including IE11) have access to window.crypto.getRandomValues, which is the most basic primitive needed for cryptography.
Also, there are well written and tested JS crypto libraries for the most common algorithms. And, if you don't trust them, it's possible to use trusted implementations written in C/C++ using Emscripten.
The malleability-of-the-language problem can be reduced to a much simpler one. Right now it's standard practice to render web pages directly on the client using only JS. This can easily be done in such a way that the data used for rendering cannot secretly execute JS from the DOM side.
At the same time, standard technologies such as AJAX, CORS, data URLs, ... allow loading extra assets as plain text before they are used in the page. This makes it possible to verify the assets by simple inspection or using previously computed hashes that are stored directly in the JS code. This can be automated and standardized by libraries.
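The pinned-hash check described above is just a digest comparison. A sketch of the idea in Python (the asset name and the `verify_asset` helper are illustrative, not part of the paper; the pinned value is the SHA-256 of the sample content):

```python
import hashlib

# Hashes that would be computed in advance and shipped inside the verified
# bootstrap code. The entry below is the SHA-256 digest of b"hello world".
PINNED = {
    "app.js": "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9",
}

def verify_asset(name, content):
    """Compare the SHA-256 of a fetched asset against its pinned hash."""
    return hashlib.sha256(content).hexdigest() == PINNED.get(name)

print(verify_asset("app.js", b"hello world"))  # True
print(verify_asset("app.js", b"tampered!"))    # False
```

In the browser the same comparison would run over text fetched via AJAX/CORS before the asset is ever evaluated.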
These methods allow us to reduce the problem to a much simpler one. If you are able to execute some verified JS code in a clean environment (known JS and DOM state), using the previous standard techniques you can be sure that the JS...
In the modern business landscape, where global operations and international partnerships are commonplace, ensuring the efficiency and safety of corporate travel is paramount. As a decision-maker, you’re tasked with overseeing not just the profitability of your business but also the well-being of your employees. This is where the next-gen travel tracking technology steps in, acting as a digital compass for every corporate journey.
The Evolving Business Travel Landscape
The dynamics of business travel have undergone significant transformation. With the rise of globalisation, businesses are no longer limited to local or regional operations. You might find your company operating across continents, with teams collaborating from different time zones. This global reach, while offering numerous opportunities, also presents its own set of challenges, especially in the realms of journey management and people travel management [1].
Journey Management vs. People Travel Management
Journey management zeroes in on the actual routes, modes of transport, and the logistics involved. It’s about ensuring that the business’s logistical paths are efficient and secure. Conversely, people travel management focuses on the safety and well-being of travelling employees. It’s about making sure that your team, while on the move, is safe, informed, and equipped to handle any unforeseen circumstances [2].
Accounting for Employees Whilst in Transit
A significant challenge in people travel management is keeping track of employees while they’re in transit. With diverse modes of transport and varying schedules, how do businesses ensure the continuous safety of their travelling workforce? Cutting-edge travel tracking technology offers real-time data, allowing companies to monitor and account for their employees throughout their journey [3]. This not only bolsters safety but also facilitates timely interventions in case of disruptions.
Harnessing Aggregated Global Risk Data & GIS
For travel tracking to be truly potent, it’s imperative to incorporate aggregated global risk data. This enables businesses to anticipate potential threats or disruptions in specific regions and strategise accordingly. Geographic Information Systems (GIS) are instrumental in this endeavour, offering visual representations of various data, making it more digestible and actionable for you and your team [4].
Operational Risk Management in Business Travel
Operational risks in business travel can span from geopolitical events, natural calamities to supply chain disruptions. Recognising these risks and having a robust system to manage them is crucial. Real-time data empowers businesses to monitor situations as they evolve, enabling you to make informed, strategic decisions [5].
The Horizon of Travel Monitoring and Tracking
As technology continues its relentless march forward, the capabilities of travel tracking systems are set to expand. Integration with other platforms, predictive analytics, and AI-driven insights are on the immediate horizon. For businesses, this heralds more streamlined travel planning, cost efficiencies, and above all, the enhanced safety and well-being of their workforce.
In wrapping up, next-gen travel tracking technology isn’t a mere luxury but an essential tool in today’s interconnected business world. It’s the digital compass that ensures every corporate journey is safe, efficient, and devoid of unnecessary disruptions.
[1] Towards Mobility Data Science (Vision Paper)
[2] Potential destination prediction for low predictability individuals based on knowledge graph
[3] Context-aware multi-head self-attentional neural network model for next location prediction
[4] Geo-Adaptive Deep Spatio-Temporal predictive modeling for human mobility
[5] The Road to 2030: Phocuswright
In case you are going through this type of scenario, you don't have to be concerned. I'm here to help you with Python homework.
The language is organized around a small group of core concepts and a natural, consistent syntax. This makes the language very simple to learn, remember and use.
Also, for those who require the whole Python course to be taught, we can easily teach you Python from scratch to advanced level.
Udacity is not an accredited university and we do not confer traditional degrees. Udacity Nanodegree programs represent collaborations with our industry partners who help us develop our content and who hire many of our program graduates.
Hope to hear from you shortly... Bear in mind that we are here to give you comfort and satisfaction by completing your Python homework and assignments properly!
Read text from a file, normalizing whitespace and stripping HTML markup. We have seen that functions help to make our work reusable and readable. They
Several of your previous answers have not been well-received, and you're in danger of being blocked from answering.
Python is along the same lines as Ruby. It is also an object-oriented programming language. A major focus of Python is code readability. Any Python programmer can finish a piece of code within a few lines rather than writing large classes. Besides the object-oriented paradigm, Python supports procedural style, functional programming, and so on. It offers automatic memory management, which makes it a developers' choice. Python doesn't cover everything: its focus is limited, but it works well when it comes to being extensible.
Very helpful. Has great teachers with plenty of experience who spend adequate time with their students in order for the students to achieve the grades they aspire to.
That's it! We will do your Python assignment and deliver it to you within your deadline. Just relax and carry on with other work that you have, and leave this assignment to us. It's our job to get it done for you.
We will revert to you in the least possible turnaround time and discuss all the requirements, like the platform on which you want your Python code to be executed, the expected outcome, and the time that you have to get this assignment done.
If you require help with a Python assignment, you may find several companies that provide online Python homework help. You may pick the best company by keeping the following factors in mind:
In cryptography, a Caesar cipher is a very simple encryption technique in which each letter of the plain text is replaced by a letter some fixed number of positions down the alphabet. For example, with a shift of 3, A would be replaced by D, B would become E, and so on. The method is named after Julius Caesar, who used it to communicate with his generals. ROT-13 ("rotate by thirteen places") is a commonly used example of a Caesar cipher where the shift is 13. In Python, the key for ROT-13 can be represented by means of the following dictionary:
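The text breaks off before showing the dictionary; a sketch of what such a ROT-13 key dictionary could look like, built from the alphabet rather than typed out by hand:

```python
import string

lower = string.ascii_lowercase
upper = string.ascii_uppercase

# Each letter maps to the letter 13 positions further along, wrapping around.
key = {c: lower[(i + 13) % 26] for i, c in enumerate(lower)}
key.update({c: upper[(i + 13) % 26] for i, c in enumerate(upper)})

def rot13(text):
    # Non-letters (spaces, punctuation) pass through unchanged.
    return "".join(key.get(ch, ch) for ch in text)

print(rot13("Hello"))         # Uryyb
print(rot13(rot13("Hello")))  # Hello -- applying ROT-13 twice is the identity
```

Because the shift is exactly half the alphabet, the same dictionary both encrypts and decrypts.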
/**
 * Image data
 *
 * @module Data
 */
/**
 * Original image management
 *
 * @class original
 */
export default (function () {
/**
 * ImageData of the original image
 *
 * @property originalData
 * @private
 * @type {ImageData}
 */
let originalData;
/**
 * Set the ImageData
 *
 * @method setOriginalData
 * @public
 * @param {ImageData} imageData The ImageData of the original image.
 */
function setOriginalData( imageData ) {
originalData = imageData;
}
/**
 * Element that displays the original image width
 *
 * @property widthElem
 * @private
 * @type {jQuery}
 */
let widthElem;
/**
 * Set the element that displays the original image width
 *
 * @method setWidthElem
 * @public
 * @param {jQuery} target A jQuery object (or a selector to wrap).
 */
function setWidthElem( target ) {
target = (target instanceof jQuery) ? target : $( target );
widthElem = target;
}
/**
 * Element that displays the original image height
 *
 * @property heightElem
 * @private
 * @type {jQuery}
 */
let heightElem;
/**
 * Set the element that displays the original image height
 *
 * @method setHeightElem
 * @public
 * @param {jQuery} target A jQuery object (or a selector to wrap).
 */
function setHeightElem( target ) {
    // coerce a selector into a jQuery object, matching setWidthElem
    target = (target instanceof jQuery) ? target : $( target );
    heightElem = target;
}
/**
 * Get the ImageData
 *
 * @method getOriginalData
 * @public
 * @return {ImageData} The ImageData of the original image.
 */
function getOriginalData() {
return originalData;
}
/**
 * Display the ImageData width
 *
 * @method setOriginalWidth
 * @public
 * @param {Number} canvasWidth The canvas width.
 */
function setOriginalWidth( canvasWidth ) {
widthElem.val( canvasWidth );
}
/**
 * Display the ImageData height
 *
 * @method setOriginalHeight
 * @public
 * @param {Number} canvasHeight The canvas height.
 */
function setOriginalHeight( canvasHeight ) {
heightElem.val( canvasHeight );
}
return {
setOriginalData: setOriginalData,
getOriginalData: getOriginalData,
setWidthElem: setWidthElem,
setHeightElem: setHeightElem,
setOriginalWidth: setOriginalWidth,
setOriginalHeight: setOriginalHeight
}
}());
- Modifying Empire to Evade Windows Defender :: Mike Gualtieri
- Setting Game Launch Options - Performance Issues
- Listeners | PowerShell Empire
Jump back to the listeners menu with listeners, and your pivot should now be exposed as a listener. The Name will be the agent ID/name, and the Redirect Target will have the listener name the pivot is redirecting to. The delay/killdate/etc. options will be cloned from the listener you're redirecting to.
Modifying Empire to Evade Windows Defender :: Mike Gualtieri
While experimenting I decided to turn off antivirus protection, start Empire on the Windows host, and turn antivirus back on. To my excitement, my Empire beacon did not die! As long as we can get Empire to start we'll be OK. But, why isn't it starting?
Setting Game Launch Options - Performance Issues
The launcher_bat stager (./lib/stagers/launcher_) generates a self-deleting .bat file that executes a one-liner stage0 launcher for an Empire agent. The base64-encoded (-enc *) version of the one-liner is used, with default proxy/UserAgent settings.
Listeners | PowerShell Empire
This topic covers setting game launch options from Steam's Library. Launch options may also be set by creating a game shortcut and Setting Steam Launch Options for the shortcut.
The defaults for options such as KillDates, WorkingHours, etc. can be set in the backend sqlite database located at./data/. These options can be set in the./setup/setup_ file that is run on initial start up and through ./setup/.
If you have a second Empire C2 server that you want to easily be able to pass sessions to, complete the relevant Host and Staging Key information, and then set the listener type to foreign. This prevents the listener from actually being started on your C2 server. You can now use the listener's alias to inject or spawn additional agents as desired. There's more on this in the Session Passing section.
The dll stager (./lib/stagers/) generates a reflectively-injectable MSF- that loads up runtime into a process and executes a download-cradle to stage an Empire agent. are the key to running Empire in a process that's not . Using with Metasploit is described here.
The popular wisdom to evade antivirus is "write your own custom tools." That's great advice if all you need to do is write a simple reverse shell, or if you have a large budget and lots of time to develop a well polished C2 infrastructure from scratch. The rest of us rely on the huge wealth of open source (and commercial) tools developed by folks in the security community. Yes, I want to be able to run something like mimikatz on an engagement and not jump through massive hoops to do so.
Before we start with any testing, we need to turn off "Cloud-delivered Protection" and especially "Automatic sample submission" in Windows Defender. We don't want any of our tests creeping out onto the internet and into Windows Defender's distributed signatures. Of course, keep "Real-time protection" on so we can test execution as it happens.
I grew up on the Internet in a time when social media meant writing comments on other people’s hand-built blogs. I remember a sense of whimsy, endless curiosity, and a beautiful lack of information overload. It instilled in me a deep appreciation for hand-crafted websites. This is one of them.
I believe things on the web should be accessible, transparent, and sustainable. This website is inspired by Brutalist web design principles, The a11y project, Designed to last, Sadness’ internet Manifesto, and others.
What I put on here
As an adult with hobbies, a career, various interests, and lived experiences, I routinely wonder what I should put on my website, what I should leave out, what style of design I should implement and why. What does this space say about my person? What does it say about my skills as a professional designer? This I know: I believe it’s good to have a single website that serves as my home on the web. I also believe that it’s okay to use this space to be vulnerable, lack awareness, and change my mind.
Currently this website sports a brutalist single-column design. It’s a humble form of activism of mine to combat the pressure my fellow UX designers and I feel about personal branding. I love narrow, single-column personal websites. They tell me: “hey, I’m just a person like you, and this is a way for you to get to know me a bit better. All you have to do is knock, I’ll always open the door.” I truly hope this website invokes a similar vibe for you. I usually refrain from using custom CSS or content in my diary entries.
Everything on this website, unless otherwise stated is licensed under CC BY-NC-SA 4.0, which means:
- You’re welcome to (adapt and) share my work
- If you share my work, please credit me
- You may share my work for noncommercial purposes
- Any adaptations you make, should use the same license
Much like you, I assume, I don’t like being tracked on the Internet. There would be countless things I could learn from having analytics running on this website, but since quantitative analysis is a solid part of my day job, I like to keep the mystery alive in my private affairs. I don’t track you, but others might.
- Reinstate digital garden
- Add way to respond to a page
- Auto deploy on submodule update
- Add progressive blur to top of page
- Obsidian post templates
- Apple shortcuts for IndieWeb notes
- Apple shortcuts photos
- Auto-send webmentions
- Backlinks from posts to notes
- Add library
- Apple shortcuts for IndieWeb likes
- Apple shortcuts for IndieWeb replies
- Tailwindcss
- Display guestbook posts
- Add Netlify a11y plugin
- Auto POSSE to Mastodon
- iOS Shortcut likes with content field
- Universal tags for posts and notes
- Add second xml feed to
netlify-plugin-webmentions (don’t know for sure that I want this)
- Yellow Lab Tools gives this site a 100/100 global score
- This website is a proud member of the 250kb club
This site was last deployed on Sunday, September 24, 2023.
I had cscart set up on a demo url on my server, once I was ready to go live I copied all the files over to the root. Now when I go to the site it can’t find many folders, such as skin, classes, etc.
You can check my site here: [url]http://mohabirrecords.us/index.php[/url]
Any idea how to fix this?
This is what I have in my config.php, is it correct?
[QUOTE]// Host and directory where cs-cart is installed on usual server
$cscart_http_host = ‘http:/www.mohabirrecords.us’;
$cscart_http_dir = ‘/’;
// Host and directory where cs-cart is installed on secure server
$cscart_https_host = ‘https://www.mohabirrecords.us’;
$cscart_https_dir = ‘/’;[/QUOTE]
Try dropping the http and https, so its just:
$cscart_http_host = ‘www.mohabirrecords.us’;
$cscart_https_host = ‘www.mohabirrecords.us’;
I removed the http and https and left the cscart_http_dir setting blank, instead of ‘/’. That gave me some more of the website back, I can now see products and the graphics, but there are a lot of errors now, for instance with the bestsellers box.
[QUOTE]Fatal error: Smarty error: [in addons/discussion/discussion_small.tpl line 4]: [plugin] modifier ‘fn_get_discussion’ is not implemented (core.load_plugins.php, line 118) in /home/mohabir/public_html/classes/templater/Smarty.class.php on line 1095[/QUOTE]
I checked the cscart install files, there is no discussion_small.tpl file in that folder
Update: found the discussion_small.tpl file and copied it to that location, now getting this error [QUOTE]Fatal error: Smarty error: [in addons/discussion/discussion_small.tpl line 4]: [plugin] modifier ‘fn_get_discussion’ is not implemented (core.load_plugins.php, line 118) in /home/mohabir/public_html/classes/templater/Smarty.class.php on line 1095[/QUOTE]
If I ran install.php again would it keep all my products, settings and images? How about if I exported my products and images, did the fresh install and then imported them again? I am really confused because I asked on these forums and even got help from the cscart help desk and in both cases I was told that it would be okay to do this and it wouldn’t mess up anything
I am having no luck trying to restore the site, I don’t know what can go wrong in what I thought was a simple copy/paste operation. I also tried creating a directory with a new install of cs cart and exporting all the products and images to csv files and importing them into the new install but that failed also, I get this error when trying to import [QUOTE]Warning: copy(/home/mohabir/public_html/newin/home/mohabir/public_html/images/backup/product_images/product_image_1430.jpg) [function.copy]: failed to open stream: No such file or directory in /home/mohabir/public_html/newin/core/fn_common.php on line 1286[/QUOTE]. I don’t know why it’s such a pain to import/export either, just getting really fed up at this point after working on this site for a week.
I apologize for the misunderstanding
I’ll try to describe more clearly.
Testing is a process designed to reveal information about the quality of a product relative to the context in which it is to be used. Users with access to the nightly builds, as far as I understand, not only have access to new features, but also have a higher risk of data loss than us alpha testers (my opinion).
And it’s really cool that you use the nightly versions of Anytype at the most comfortable level; you’ll be able to provide quality feedback and improve Anytype.
Thanks again for creating amazing projects that inspire others!
That’s the result of using inline sets. I use inline sets to show an actor’s movies/shows and vice versa. So, there’s a customized movie/show set on every actor/director’s page and a list of the full cast on the movie/show’s page.
Hmm, at the moment I’m doing all this manually, mainly due to 2 reasons. The first being that I’m waiting on the release of Anytype’s API later this year before doing any sort of automation. Secondly, a lot of this is carefully picked based on my taste (e.g. the posters, profile pictures and cover images), so I have to manually select them, but I hope I can automate the scores, ratings and other data in the future.
Well that makes it even more impressive then. Gotta step up my game haha.
I do really love how similar your approach is to my own when it comes to a movie database, including directors, actors and more connected information.
Edit: Btw fanart.tv has a lot of movie posters, logos and more artwork to pick from. I usually go for the movie thumbs for my images in gallery view, then I have a poster in the page itself (like you do as well).
Wow, really amazing work! I would like to have something similar to your DB one day
In Notion I have a similar thing running which works amazingly with the help of API. It was created by someone really talented, called Notion Watchlist ( https://www.reddit.com/r/Notion/comments/rqmowq/release_notion_watchlist_powered_by_api_public/ )
I hope after the release of Anytype’s API you or someone else will create something similar and share it with the community (not only for media, but also for games, books, etc.)
wow! the automation is amazing there I hope we can achieve a similar level of automation here in Anytype after the release of the API.
But one advantage that anytype has over Notion which was also apparent in the video, was that in Notion you can’t define relations which can be filled with other objects/pages; for example you can’t define a relation called actors which is then filled with all the cast & crew which by themselves are objects of their own! That’s what I love about Anytype!
Yeah, exactly! This Notion watchlist only works like a standalone database, without relations to other databases of actors, directors, distributors, genres and so on, writes all these properties not as relations, but as text, tag or number mostly, which is sad and probably barely even possible in Notion
Tho automation by API makes this thing very powerful and convenient to use, simplicity beats functionality for now.
I also hope something like you described will be created in Anytype with the release of API, tho I don’t have an idea for now on how to implement it. Probably parse the info from IMDB about a director, for example, and make it an object with a separate type (human). and if such human was already in my Anytype, then link this object instead of creating a new one…
Anyway, im not a programmer, im sure smart people in Anytype will figure it out
When they will, it will be an OP solution, much better than most notion’s manual media DBs, and better than this Notion Watchlist!
dragon_ph = 0
dragon_atk = 0
renzhe_ph = 0
renzhe_atk = 0
iceman_ph = 0
iceman_atk = 0
lion_ph = 0
lion_atk = 0
wolf_ph = 0
wolf_atk = 0
def warrior_init_ph(*ph):
global dragon_ph, renzhe_ph, iceman_ph, lion_ph, wolf_ph
dragon_ph = ph[0]
renzhe_ph = ph[1]
iceman_ph = ph[2]
lion_ph = ph[3]
wolf_ph = ph[4]
def warrior_init_atk(*atk):
global dragon_atk, renzhe_atk, iceman_atk, lion_atk, wolf_atk
dragon_atk = atk[0]
renzhe_atk = atk[1]
iceman_atk = atk[2]
lion_atk = atk[3]
wolf_atk = atk[4]
class Warrior(object):
def __new__(cls, *args, **kwargs):
return object.__new__(cls)
def __init__(self, wid, camp, ph):
self.atk = None
self.ph = None
self.wid = wid
self.camp = camp
def strick(self):
return self.atk
def strick_back(self):
        return self.atk // 2
def move(self):
pass
def bear(self, atk):
self.ph -= atk
        # fallen in battle
if self.ph <= 0:
return -1
else:
return 0
class Dragon(Warrior):
def __new__(cls, *args, **kwargs):
global dragon_ph
if args[2] < dragon_ph:
return None
else:
return Warrior.__new__(cls)
def __init__(self, wid, camp, ph):
global dragon_ph, dragon_atk
Warrior.__init__(self, wid, camp, ph)
self.__initiative = False
self.ph = dragon_ph
self.atk = dragon_atk
def strick(self):
self.__initiative = True
return super().strick()
def strick_back(self):
self.__initiative = False
return super().strick_back()
    def bear(self, atk):
        ret = Warrior.bear(self, atk)
        # fallen in battle
        if ret == -1:
            pass
        else:
            # cheer
            self.cheer()
        return ret
def cheer(self):
# TODO
print()
class Renzhe(Warrior):
def __new__(cls, *args, **kwargs):
global renzhe_ph
if args[2] < renzhe_ph:
return None
else:
return Warrior.__new__(cls)
def __init__(self, wid, camp, ph):
global renzhe_ph, renzhe_atk
        Warrior.__init__(self, wid, camp, ph)
self.ph = renzhe_ph
self.atk = renzhe_atk
def strick_back(self):
return 0
class Iceman(Warrior):
def __new__(cls, *args, **kwargs):
global iceman_ph
if args[2] < iceman_ph:
return None
else:
return Warrior.__new__(cls)
def __init__(self, wid, camp, ph):
global iceman_ph, iceman_atk
        Warrior.__init__(self, wid, camp, ph)
self.__step = 0
self.ph = iceman_ph
self.atk = iceman_atk
def move(self):
self.__step += 1
        # After every two moves, HP decreases by 10 and attack increases by 20
        if self.__step % 2 == 0:
            # If HP is 10 or less, only reduce it to 1
if self.ph <= 10:
self.ph = 1
else:
self.ph -= 10
self.atk += 20
class Lion(Warrior):
def __new__(cls, *args, **kwargs):
global lion_ph
if args[2] < lion_ph:
return None
else:
return Warrior.__new__(cls)
def __init__(self, wid, camp, ph):
global lion_ph, lion_atk
        Warrior.__init__(self, wid, camp, ph)
self.ph = lion_ph
self.atk = lion_atk
self.preph = self.ph
def bear(self, atk):
self.preph = self.ph
return super(Lion, self).bear(atk)
def siphon_soul(self):
return self.preph
class Wolf(Warrior):
def __new__(cls, *args, **kwargs):
global wolf_ph
if args[2] < wolf_ph:
return None
else:
return Warrior.__new__(cls)
def __init__(self, wid, camp, ph):
global wolf_ph, wolf_atk
        Warrior.__init__(self, wid, camp, ph)
self.ph = wolf_ph
self.atk = wolf_atk
self.kill = 0
def bloodthirsty(self):
if self.kill % 2 == 0 and self.kill != 0:
self.atk *= 2
self.ph *= 2
if __name__ == "__main__":
warrior_init_ph(90, 20, 30, 10, 20)
warrior_init_atk(20, 50, 20, 40, 30)
# Dragon test ######################
# Dragon born
dragon = Dragon(1, "blue", 1000)
# Dragon born failed
# dragon = Dragon(2, "red", 10)
# Dragon attack
print("Dragon attack ", dragon.strick())
# Dragon strick back
print("Dragon strick back ", dragon.strick_back())
while True:
pass
In general there are two types of printers: GDI printers, which expect the host computer to render each page and send it over as a bitmap, and printers that accept a character stream containing text and escape sequences.
GDI printers will need a fast USB connection to work at a decent speed, probably USB 3 at a minimum to handle a high-resolution bitstream, while those that accept a character stream should be adequately fast with USB 1 or USB 2 connections.
The Linux/UNIX print management system is CUPS which uses printer-specific drivers to handle translation between the character stream output by the program you're using and the modified character or bitstream the printer understands.
If there's a CUPS driver for your printer, then configure it and be happy. If there isn't, but your printer is a member of a printer family that accepts a character stream containing escape sequences, then your choices are wider.
The printers I know of that form families are Epson printers that accept 'Esc/P' control sequences and HP LaserJet printers, which understand HPLJ control sequences. In both cases each newer printer tends to add more control codes, but apart from this they are backward compatible with earlier models. This widens your options: if you configure CUPS to use an earlier model within these families when CUPS doesn't have a driver for the printer you have, the printer will still work perfectly, but can't handle the latest printer twiddles; if you don't use those, then you've lost nothing. If you REALLY, REALLY need the latest twiddle then, because CUPS is open source, you can volunteer to create a new printer driver for it or raise a bug asking for the new printer to be supported.
For instance, both the Epson LQ-500 (120 col 24 pin monochrome dot-matrix printer from the late 80s) and the Epson Stylus inkjet colour printer will work perfectly well if you tell CUPS it is an Epson MX-80 (80 col 9-pin monochrome dot matrix printer from the late '70s) but of course you'll only see a reduced set of fonts and sizes in black and white if you do this. But - it may well get the job done for you.
I've also used the same trick with HP LaserJet printers: an LJ5 works perfectly well if it's connected to a print driver intended for an LJ2.
How do I know this? Because I've been there and done it many times. I'm a satisfied owner of an Epson LQ-500 and an HP LJ5: the LQ-500 for printing labels and the LJ5 for everything else. Oh yes, I also had an Epson Stylus 850c until it used up its internal head cleaning ribbon and clogged itself solid.
I think it's not just difficult but impossible to distinguish borders. Please implement border options.
Marco Smulders > You can add an image visual and set an action to it. That way you won't need extra button visuals.
This method does not support states (default state, on hover, on press).
That's why I would like custom images on buttons. Or at least add more options like a home/house icon etc.
This idea is a duplicate of another that has more votes. Please vote for the other idea:
Since the initial release, we added support for table, matrix, and cards.
I’ll leave this idea open for now since there are some requests for shapes, textboxes, and buttons, but it would be helpful if people can comment on their use cases for report page tooltips on these non-visual report elements.
Please also add this for shapes, textboxes and buttons.
Use case: in an infographic report I have some text, images, or custom visuals. When I hover over these objects I want to be able to display detail data (another report page) about that object.
If you could put a tooltip on a shape, then you could make a completely transparent object and place it over anything you want.
Thanks for the feedback and your vocal support about it! This is under discussion, so please keep voting this up and providing the feedback/examples of why this is so important to you and your organization.
I don't need spellchecking personally, please add an option to disable it. This is very annoying, especially if your language is not English.
Please add this. The others category should also be used to filter other visuals.
there are multiple ideas with the same message as this one which does not help the amount of votes :(
+ please add all line analytics options to other chart types like column charts
Thanks for the suggestion! That sounds really useful, if anyone else agrees please add your votes!
Yes, please add. I would also like to be able to rotate through slicer values, i.e. a 1-page report with a slicer for different locations, where the location changes automatically over time.
This is in our backlog. Please vote to receive updates as we make progress.
There are other registered ideas that are almost the same as this one. This uncontrolled idea-submitting option is not efficient. I think this is a really important feature, but because of the inefficient registration it does not get the votes it deserves (now it's fragmented).
Not my text but I agree fully:
I think that Microsoft must spend more time on these basic, fundamentals of reporting / dashboarding. I worry that too often they spend more time chasing cool, "glitzy" functionality, at the expense of the core functionality that other products have
+ the possibility to group the selected visuals so they stay together
Please add. For me this would work by being able to make a selection (rectangle), plus the possibility to add or remove visuals from the selection by using some key like Alt.
Also, I would like to be able to adjust the position of the selected visuals per pixel by using the arrow keys.
vote. i need an option to do the following. my slicer shows month names like feb 2019, mar 2019 etc. i want the max(month) selected automatically. alternatively "current month" would also work but i can think of reports with historical data where max(month) is needed.
agree. interesting that there is only one person that is requesting this. if there is another idea already with more votes please tell us.
|The purpose of this guide is to connect you with useful information and resources for embarking on a systematic review or other type of synthesis. Information about conducting traditional literature reviews (that do not follow a research methodology) can be found here.|
Content on this guide can be reused and adapted under the BY-NC-SA Creative Commons license.
The terms synthesis and research review are often used interchangeably. The Canadian Institutes of Health Research defines synthesis as
“[T]he contextualization and integration of research findings of individual research studies within the larger body of knowledge on the topic.”
A synthesis must be reproducible and transparent in its methods and may synthesize qualitative or quantitative results.
Systematic reviews were the first type of synthesis to appear in the health care literature, back in the 1970s, and their main objective was to synthesize quantitative research studies. Limitations of traditional (quantitative) systematic reviews and meta-analyses led to the adaptation of syntheses to include a broader range of approaches.
While many syntheses begin with a clear question, their methodologies and the types of research evidence synthesized to answer the question can be quite different. Check out the "What Review is Right for You?" tool from the Knowledge Translation Program (see more information here).
To help determine the most appropriate type of synthesis for your research question and purpose, you may find it helpful to consult the following articles.
Overview of 12 knowledge synthesis methods that go beyond the traditional systematic review (Kastner et al., 2016):
Fig. 1. Conceptual algorithm to optimize selection of a knowledge synthesis method for answering a research question.
What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences (Munn et al., 2018):
Table 1: Types of reviews.
Kastner, M., Antony, J., Soobiah, C., Straus, S. E., & Tricco, A. C. (2016). Conceptual recommendations for selecting the most appropriate knowledge synthesis method to answer research questions related to complex evidence. Journal of Clinical Epidemiology, 73, 43-49.
Mays, N., Roberts, E., & Popay, J. (2001). Synthesising research evidence. Studying the Organisation and Delivery of Health Services: Research Methods, 188-220.
Moher D., Stewart L. & Shekelle P. (2015). All in the family: systematic reviews, rapid reviews, scoping reviews, realist reviews, and more. Syst Rev, 4(1):1-2.
Munn, Z., Stern, C., Aromataris, E., Lockwood, C., & Jordan, Z. (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC medical research methodology, 18(1), 5.
Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: exploring review types and associated information retrieval requirements. Health Information & Libraries Journal, 36(3), 202-222.
One of the reviewers has asked a technical question that is not related to the main scope of the paper that I submitted. On top of this, I am not sure if the answer I am going to give is correct or not (I have tried it but the technicality involved is not my area of research, so I am hesitant whether I am even correct or not). Can anyone suggest what one should do in such cases? Should I write what I have tried or get external help (the latter is a bit difficult at this stage)? Any ideas would be great.
The best approach to interacting with reviewers is usually honesty rather than trying to be adversarial or evasive. In your situation, I think a paragraph like the following is entirely reasonable:
Regarding your question about whether our approach could also be useful to applications in X: We don't really know. We're not experts in X, and we don't feel like we have a sufficient background to tell one way or the other. We have, however, given this a bit of thought, and here are some ideas we have come up with that could go in this direction: [... some (educated) speculation ...]
That all said, as this is outside of the area where we feel competent, we have chosen not to address this in the paper.
And then you just don't mention it in the paper at all.
For these types of questions, along with the "what do you think of" questions, the best course of action is to avoid giving answers you're not sure of, as this might give the wrong impression to the referees and even call into question the reliability of your other results. Don't forget that you are addressing experts in your field. If you are absolutely sure that the question is outside the scope of your paper, then you have nothing to worry about, and the effort in your answer will not be in answering the question itself but rather in explaining why this question is outside the scope of your paper, along with the references, if any, justifying your claim.
However, things are different if you are unsure whether the question is outside the scope of your paper. That's why I suggest you give the question more time, especially if you were given a long time to accomplish revisions. If you find out that the question is not outside the scope of the paper but is rather hard, then you can respond to the referees by mentioning that due to the complexity of the question, you have decided to leave it for your future research. Once again, the claim about the complexity of the question must be justified.
Finally, from my personal experience, referees usually don't insist on having an answer to these types of questions as long as your argument for not addressing them is solid and as long as you have addressed their other concerns well.
The reviewer is making comments to the editor, suggesting refusal or acceptance. You have to reply to the editor, to address (or to confute) these comments.
It would not be nice to ignore the comments, but if you think that the question is outside the scope of the paper, spend as little time as possible on it.
If you honestly think that not knowing the answer is not an issue for what you present, that even if you do not know the answer to that question you master the methodology presented in the paper and the data collection has been sound ... then just write this.
Contrary to my opening statement, reviewers are doing the reviews to help your product to be as relevant and correct as possible, not to help the editor/journal/publications indexes.
The other answers are great, but I'd feel remiss if I didn't point out that the reviewer might believe you are in a great position to advance an idea. Thus, they might be offering you a license to publish a wild guess in your discussion section.
Of course, such discussion would require caveats, and creates the risk of kicking up trouble in the next round of review. It's not the safest course.
Ok. So you think you know which MCSE path you would like to pursue? Let's take a look at what to expect from the exams.
What Kind of Questions can I expect?
This is the #1 question I get asked regarding Microsoft exams. I often see people on blogs or forums tiptoe around this question in an effort to avoid violating Microsoft's non-disclosure agreement.
Microsoft actually posts all the test question formats on their website, providing video examples of what to expect from each type of question. I can personally attest that all of these question formats do in fact appear on Microsoft’s certification exams. From my experience only the higher tier exams have case studies.
View all Microsoft question formats here – Be sure to expand “Exam formats and question types” at the bottom
How Many Questions Will The Exam Have?
Another popular question. The number of questions varies from test to test. Typically each exam will have 50-65 questions.
It is very unlikely that you will see two similar tests. If you fail once, do not rely on your next test being the same. It may not be. Do not rely on your friends telling you what questions they had on their test. Not only is that against Microsoft's agreement, but you likely won't receive the same test they did.
To improve the integrity of its tests, Microsoft resets its question pools. I do not know the frequency of these resets.
Unlike Cisco exams, Microsoft allows for a "review" period where you can look over your questions and make corrections. This is a huge advantage, as later questions or answer options may contain hints that help you solve questions you completed earlier in the test.
Unless you have a great deal of hands-on experience, expect A LOT of study time. As you will see in the next section, "How to Prepare", it is recommended that you use multiple resources and have a strong mix of videos, reading, and labs.
If you are fortunate enough to work for a company that offers paid education leave, I recommend taking a week and covering as much material as possible. In 5 days you can comfortably finish a CBTnuggets.com course, practice labs, and your practice exams. I was able to complete 70-410, 70-411 and 70-412 each in a week (3 weeks total). However, I have extensive hands-on experience and probably crammed 60+ hours of studying (CBT Nuggets videos, my self-made study guides, TechNet and Transcender tests) into each week.
It is important to understand that getting certified is not a race. Microsoft exams aren’t going anywhere. It is important to pace yourself and avoid getting burned out.
Microsoft exams can be challenging. There is a ton of material that is covered in each exam. It is possible that if you do not prepare enough, you may fail an exam.
I will tell you the same thing I tell my co-workers and friends. I would much rather fail an exam and pass the second time with an 85% than pass the first time with a 70%.
When you fail an exam, you are given a printout of what areas you are strong in and what areas need work. Take this opportunity to strengthen your weaknesses and become a master of the content.
Microsoft caps exam retakes at 5 per year. If you fail an exam more than 5 times, you will require special permission to retake it. Unfortunately, unless you have a retake voucher, each do-over requires you to pay.
"use strict";
var NEW_LINE_DELIMITER = '\n';
var CSV_DELIMITER = ',';
var EXPECTED_DATE_FORMAT = 'DD MMMM';
var csvParserModule = angular.module('csvParserModule', ['angular-momentjs']);
/**
* Service for parsing a CSV String.
 * The CSV String should be delimited by ',' and new line by '\n'.
*
*/
csvParserModule.factory('CsvParser', ['MomentJS', function(moment) {
/**
* Default object used by the parse method to validate and map a csv row into an object.
* The field names in this object will become the field name in the output object.
*
 * The validateFn will be executed on the field value, and an exception will be thrown if it returns false.
 * The mapFn will be executed if the validation passes, and the return value will become the value for the field.
*/
var DEFAULT_VALIDATING_MAPPER = {
'firstName': {
validateFn: function(value) {
return true;
},
mapFn: function(value) {
return value;
}
},
'lastName': {
validateFn: function(value) {
return true;
},
mapFn: function(value) {
return value;
}
},
'salary': {
validateFn: function(value) {
return value > 0 && isFinite(value);
},
mapFn: function(value) {
return Math.round(value);
}
},
'superRate': {
/*
* Rate must be between 0 - 50% inclusive and must be in the expected format of 'DD%'
*/
validateFn: function(value) {
var validFormat = /^(\d+)%$/g.test(value);
if (validFormat) {
var rateValue = parseInt(/^(\d+)%$/g.exec(value)[1]);
return rateValue >= 0 && rateValue <= 50;
}
return false;
},
mapFn: function(value) {
                return parseInt(/^(\d+)%$/g.exec(value)[1]);
}
},
'startDate': {
validateFn: function(value) {
// First test the field value to ensure its in the expected format.
var validFormat = /^(\d+\s\w+)\s+\W\s+(\d+\s\w+)$/.test(value);
if (validFormat) {
var dateValues = /^(\d+\s\w+)\s+\W\s+(\d+\s\w+)$/.exec(value);
var fromDate = moment(dateValues[1], EXPECTED_DATE_FORMAT, true);
var toDate = moment(dateValues[2], EXPECTED_DATE_FORMAT, true);
// Use moment to parse the dates to ensure it parses.
if (fromDate.isValid() && toDate.isValid()) {
// Start month must be the same as end month.
if (fromDate.month() === toDate.month()) {
// From Date should be on or before To Date.
if (fromDate.isSame(toDate) || fromDate.isBefore(toDate)) {
return true;
}
}
}
}
return false;
},
mapFn: function(value) {
var dateValues = /^(\d+\s\w+)\s+\W\s+(\d+\s\w+)$/.exec(value);
var fromDate = dateValues[1];
var toDate = dateValues[2];
var rval = {
fromDate: moment(fromDate, EXPECTED_DATE_FORMAT),
toDate: moment(toDate, EXPECTED_DATE_FORMAT)
};
return rval;
}
}
};
return {
/**
* Parses the CSV into the format defined by the 'suppliedValidatingMapper' param.
* The suppliedValidatingMapper should be an object literal containing a field for EACH field in the CSV.
*
* For example a CSV Input String may look like: "John,40,Unemployed,Unmarried"
 * The suppliedValidatingMapper should look something like below:
*
* {
* name: { // The field name here will be the field name of the returned object..
* validateFn: // a Function to validate the value that will be given to it
* mapFn: // a Function that will be invoked when mapping the given value in the returned object.
* },
*
* age: {
* validateFn: ..
* mapFn: ..
* }
*
* employment: {
* ..
* },
*
 * maritalStatus: {
* ..
* }
* }
*
* @param csvInput {[String]}
* @param suppliedValidatingMapper {[object]}
* @return {[object]}
*/
parse: function(csvInput, suppliedValidatingMapper) {
var output = [];
// Use the default if no other validating mapper is supplied.
var validatingMapper = suppliedValidatingMapper || DEFAULT_VALIDATING_MAPPER;
if (csvInput) {
// The validating mapper provides a 'schema' as well as the validation methods to run on each field.
// The number of fields in the validatingMapper SHOULD match the number of fields in each row.
// Note: Object.keys does not look up the prototypal chain, which is what we want.
var expectedFields = Object.keys(validatingMapper);
var expectedNumberOfFields = expectedFields.length;
var rows = csvInput.split(NEW_LINE_DELIMITER);
for (var i = 0; i < rows.length; i++) {
var fields = rows[i].split(CSV_DELIMITER);
// Skip empty lines.
if (fields.length === 1 && !fields[0]){
continue;
}
// Make sure we have the right number of fields in each row.
                if (fields.length !== expectedNumberOfFields) {
throw "Unexpected number of fields in row - " + i;
}
// Run through the validation provided in the validatingMapper for each field.
                var parsedObject = {};
for (var y = 0; y < expectedFields.length; y++) {
// The actual field value.
var value = fields[y];
// This is the field name.
var field = expectedFields[y];
// If value is undefined or "" throw an exception.
if (!value) {
throw "Empty value for field " + field + " in row " + i;
}
// Array of validations functions provided for this field.
var validateFn = validatingMapper[field].validateFn;
// Validate the value if the validate function exist.
if (validateFn && !validateFn(value)) {
throw "Validation failed for field - " + field + " value: " + value + " in row " + i;
}
// Map the value.
parsedObject[field] = validatingMapper[field].mapFn(value);
}
output.push(parsedObject);
}
}
return output;
}
};
}]);
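The validate-then-map pattern used by the service above is language-agnostic. As a rough sketch (not the Angular service itself; field names and helper names are illustrative), the same idea in Python looks like this — one `validate`/`map` pair per CSV column, plus the row-length check:

```python
import math

# Hypothetical mapper mirroring the JS structure: one entry per CSV column.
MAPPER = {
    "firstName": {"validate": lambda v: bool(v), "map": str},
    "salary":    {"validate": lambda v: float(v) > 0 and math.isfinite(float(v)),
                  "map": lambda v: round(float(v))},
    "superRate": {"validate": lambda v: v.endswith("%") and 0 <= int(v[:-1]) <= 50,
                  "map": lambda v: int(v[:-1])},
}

def parse_csv(text, mapper=MAPPER):
    fields = list(mapper)            # insertion order defines the column order
    out = []
    for i, row in enumerate(text.split("\n")):
        if not row:                  # skip empty lines
            continue
        values = row.split(",")
        if len(values) != len(fields):
            raise ValueError(f"Unexpected number of fields in row {i}")
        record = {}
        for name, value in zip(fields, values):
            if not mapper[name]["validate"](value):
                raise ValueError(f"Validation failed for {name}={value!r} in row {i}")
            record[name] = mapper[name]["map"](value)
        out.append(record)
    return out
```

For example, `parse_csv("John,60050,9%")` yields `[{"firstName": "John", "salary": 60050, "superRate": 9}]`, while a negative salary raises `ValueError`.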
In Browse mode (not Layout mode), you can sort on a particular field. You can create a script that sorts on the field you want, then attach the script to a trigger (done in Layout mode) so that the records sort whenever anyone lands on that layout.
However, if your desire is that the "old" records have auto entered serial numbers that precede the "new" records, there is an easy solution, but you should only do it if no related records have been created that relate to the "new" records by the auto entered serial number. First, you need to export the "new" records into a temporary FileMaker file; then delete the "new" records from your file and reset the auto entered serial number to start at 1. Then import the "old" records; then import the "new" records from the temporary file. Now, you have "old" preceding "new."
Like I said, only do this if you have no related records, or you will break the relationships. And make a backup of your main file before you start, in case things go "haywire."
First off, a bit of best practice. You should always have these 5 fields in every filemaker table you create:
1) Serial number that auto-increments (primary key)
2) Creation timestamp
3) Modification timestamp
4) Creation user (or account)
5) Modification user (or account)
see attached sample of these 5 fields.
You can sort ascending based on serial number to get the records sorted based on when they were created/imported.
Then to accomplish what you need, you either need to:
1) Export the existing records, then delete them (empty table). Then import the old records first, and then reimport the new records you just exported and cleared. With an auto-enter serial, they would now be in order.
2) Clear the primary serial of the new/existing records. Import the old records, which will get an auto-enter serial value. Then find for where serial is blank ("=") and replace (click in field, ctrl + = ) the found set with serial values, check the box that says "update the value of the serial".
Note that you should NEVER mess with serial numbers as noted above anytime AFTER you have established a relationship (parent id key = child id key) to another table based on serial number.
sample.fmp12.zip 10.3 K
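The five-field pattern above is not FileMaker-specific. Here is a rough equivalent in plain SQL (run via Python's sqlite3; the table, field, and trigger names are illustrative — in FileMaker, auto-enter options play these roles natively):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contact (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,      -- 1) auto-incrementing serial
    created     TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    -- 2) creation timestamp
    modified    TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    -- 3) modification timestamp
    created_by  TEXT DEFAULT 'demo_user',               -- 4) creation account
    modified_by TEXT DEFAULT 'demo_user',               -- 5) modification account
    name        TEXT
);
-- Keep the modification timestamp current on every update.
CREATE TRIGGER contact_touch AFTER UPDATE ON contact
BEGIN
    UPDATE contact SET modified = CURRENT_TIMESTAMP WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO contact (name) VALUES ('Alice')")
conn.execute("INSERT INTO contact (name) VALUES ('Bob')")
# Sorting ascending on the serial reproduces creation order.
rows = conn.execute("SELECT id, name FROM contact ORDER BY id").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob')]
```

Sorting on `id` here is the SQL analogue of the "sort ascending on the serial number" advice above.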
When in doubt, FileMaker defaults to creation order when displaying records so changing the order your contact records are displayed will require some action on your part. As I understand your problem, I believe you have two options:
1) To overcome the import order issue, you would need to export the existing records from FileMaker into a temporary file, delete all records, import the 120 old records from your Excel file and then re-import the 80 new records from your temporary file. While this will accomplish what you're looking to do, I would advise against it. There is always a chance something happens during the export/re-import process that could result in lost data. Plus, as you add additional contacts, you may find yourself back in the same place you are now with records not appearing in the desired order. If you decide to proceed with this option, please do so with extreme caution.
2A) My prefered suggestion is to write a simple script that sorts the records in the order you want i.e. Last Name, First Name or by Contact ID, etc. The options for the "Sort Records" script step include "Perform without dialog" and "Keep records in sorted order". In this instance, I would recommend you checking both options. That way, you won't be prompted to click "OK" every time the script is exectued and the records will stay in sorted order until you change the found set. With the script written, place a button object on your data entry layout setup to call the "Sort" script you just wrote. That way, you can execute it when ever you need to have the records sorted in the desired order.
2B) To take the script option a step further, you can also add two script triggers to your layout - OnLayoutEnter and OnModeEnter - and call the same script. That will automatically execute the script when you navigate to the layout for OnLayoutEnter or when switching from Find mode to Browse mode for OnModeEnter. I would suggest keeping the button on the layout regardless since you may need to refresh the sort order on occasion.
I trust you find these suggestions helpful. Best of luck and happy scripting.
package io.battlesnake.java;
import io.battlesnake.core.*;
import static io.battlesnake.core.JavaConstants.RIGHT;
public class ExampleSnake extends AbstractBattleSnake<ExampleSnake.GameContext> {
public static void main(String[] args) {
new ExampleSnake().run(8080);
}
// Called at the beginning of each game on Start
@Override
public GameContext gameContext() {
return new GameContext();
}
@Override
public Strategy<GameContext> gameStrategy() {
return new AbstractStrategy<GameContext>(true) {
// StartResponse describes snake color and head/tail type
@Override
public StartResponse onStart(GameContext context, StartRequest request) {
return new StartResponse("#ff00ff", "beluga", "bolt");
}
// MoveResponse can be LEFT, RIGHT, UP or DOWN
@Override
public MoveResponse onMove(GameContext context, MoveRequest request) {
return RIGHT;
}
};
}
// Add any necessary snake-specific data to GameContext class
static class GameContext extends AbstractGameContext {
}
}
Development Approaches with Entity Framework
Entity Framework supports three different approaches for developing an application.
In the database-first development approach, we generate the context and entities for the existing database using the EDM wizard integrated with Visual Studio or executing the Entity Framework commands.
Entity Framework 6 supports the database-first approach extensively.
We use this approach when we do not have an existing database for our application. In the Code-First approach, we start writing the entities and context class and then create the database from these classes using the migration command.
Developers who follow Domain-Driven Design principles will prefer this approach: begin by coding the domain classes first and then generate the database required to persist the data.
In the Model-First approach, we create entities, relationships, and the inheritance hierarchy directly in the visual designer integrated with Visual Studio, and then generate the entities, the context class, and the database script from the visual model.
EF6 includes limited support for this approach. Entity Framework Core does not support this approach.
Choosing the development approach for our application
We use the following flow chart to decide the right approach to develop the application using the Entity Framework.
As the above figure shows, if we already have an existing application with domain classes, we can use the code-first approach, because we can create a database from our existing classes. If we have an existing database, we can create an EDM from it with the database-first approach. If we have neither an existing database nor domain classes, we can design our own DB model on the visual designer and go with the model-first approach.
Persistence in Entity Framework
There are two scenarios for saving an entity to the database using Entity Framework: the connected scenario and the disconnected scenario.
In the connected scenario, the same instance of the context class (derived from DbContext) is used for both retrieving and saving the entities. The context keeps track of all the entities during its lifetime. This is useful in Windows applications with a local database or a database on the same network.
Here are the pros and cons of the Connected Scenario in Entity Framework.
In the disconnected scenario, different instances of the context are used to retrieve and save the entities in the database. The instance of the context is disposed of after retrieving the data, and a new instance is created to save the entities to the database.
The disconnected scenario is more complex because an instance of the context doesn't track the entities, so we need to set an appropriate state on each entity before saving them with SaveChanges(). In the above figure, the application retrieves the entity graph using Context 1 and then performs some CUD (Create, Update, and Delete) operations on it using Context 2. Context 2 doesn't know which operations have been performed on the entity graph in this scenario.
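As a minimal sketch of the disconnected pattern (the Blog entity and BloggingContext here are invented for illustration; only Entry(), EntityState, and SaveChanges() come from Entity Framework):

```csharp
Blog blog;

// Context 1: retrieve the entity, then dispose the context
using (var db = new BloggingContext())
{
    blog = db.Blogs.First();
}

// The entity is modified while no context is tracking it
blog.Title = "Updated title";

// Context 2: attach with an explicit state so EF knows
// which operation to perform, then save
using (var db = new BloggingContext())
{
    db.Entry(blog).State = EntityState.Modified;
    db.SaveChanges();
}
```

Setting the state explicitly is what replaces the change tracking that the connected scenario gets for free.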
This scenario is useful in Web applications or applications with a remote database.
After copying file logs from target to development computer, list available file logs for import into Simulation Data Inspector
slrealtime.fileLogList() lists the log files available for import
into the Simulation Data Inspector after you have copied the files from the target computer
into the development computer's applications folder tree under the current folder.
Use the slrealtime.fileLogList workflow when working with a
standalone target computer that does not connect to Simulink Real-Time. If working with a
target computer that connects to Simulink Real-Time on the development computer, use the
function or the Import File Log button in Simulink Real-Time instead.
Build and Run Real-Time Application
In the Simulink Editor, from the Real-Time tab, click Hardware Settings.
In the Simulink Real-Time Options pane, change Max file log runs to 5 and click OK.
Click Run on Target.
After the run ends, close the model and exit MATLAB.
Create File Logs on Target Computer
Start an SSH session by using PuTTY. Log into the target computer as user slrt with password
slrt. For more information about settings for using PuTTY for an SSH session, see Execute Target Computer RTOS Commands at Target Computer Command Line.
After you log in, load and run the application to generate file logs. The target computer stores up to the maximum number of logs, in this case 5. At the target computer prompt, type:
$ slrealtime load --AppName slrt_ex_osc
$ slrealtime start
Repeat the previous step until you have created several logs. Between each run, you can change parameter values by loading different parameter set files into the application. For more information, see the
List the logs that you created. At the target computer prompt, type:
$ ls applications/slrt_ex_osc/logdata/
Copy File Logs from Target Computer and List Runs
On the development computer, use a system utility to copy the applications folders from the target computer to an applications folder on the development computer. For example, on a Windows® computer, you can use
pscp (a PuTTY utility) or FileZilla. You can download and install PuTTY from www.putty.org. In the MATLAB Command Window, type:
system('pscp -r email@example.com:applications C:\work\my_logdata\')
List the file logs that are available to import into the Simulation Data Inspector. In the MATLAB Command Window, type:
apps_path — Path to applications folder
(fullfile(pwd,'applications')) (default) | path to applications folder
Provides the path to the applications folder on the development computer to which you have copied the tree of files from the applications folder on the target computer.
Introduced in R2021a
2.4 Spatial Cognition
Human societies vary in their linguistic tools for, and cultural practices associated with,
representing and communicating (1) directions in physical space, (2) the color spectrum, and (3)
integer amounts. There is some evidence that each of these differences in cultural content may
influence some aspects of non‐linguistic cognitive processes (D’Andrade 1995, Gordon 2005,
Kay 2005, Levinson 2003). Here we focus on spatial cognition, for which the evidence is most
provocative. As above, it appears that industrialized societies are at the extreme end of the
continuum in spatial cognition. Human populations show differences in how they think about
spatial orientation and deal with directions, and these differences may be influenced by
linguistically‐based spatial reference systems.
Speakers of English and other Indo‐European languages favor the use of an egocentric (relative)
system to represent the location of objects relative to the self (e.g., “the man is on the right
side of the flagpole”). In contrast, many if not most, languages, favor an allocentric frame which
comes in two flavors. Some allocentric languages such as Guugu Yimithirr (an Australian
language) and Tzeltal (a Mayan language) favor a geocentric system in which absolute
reference is based on cardinal directions (“the man is west of the house”). The other allocentric
frame is an object‐centered (intrinsic) approach that locates objects in space, relative to some
coordinate system anchored to the object (“the man is behind the house”). When languages
possess systems for encoding all of these spatial reference frames, they often privilege one at
the expense of the others. However, the fact that some languages lack one or more of the
reference systems suggests that the accretion of all three systems into most contemporary
languages may be a product of long‐term cumulative cultural evolution.
Weird People 5-Mar-09
In data on spatial reference systems from 20 languages drawn from diverse societies—including
foragers, horticulturalists, agriculturalists, and industrialized populations—only three languages
relied on egocentric frames as their single preferred system of reference. All three were from
industrialized populations: Japanese, English and Dutch (Majid et al. 2004).
The presence of, or emphasis on, different reference systems may influence non‐linguistic
spatial reasoning (Levinson 2003). In one study, Dutch and Tzeltal speakers were seated at a
table and shown an arrow pointing either to the right (north) or the left (south). They were
then rotated 180 degrees to a second table where they saw two arrows: one pointing to the left
(north) and the other one pointing to the right (south). Participants were asked which arrow on
the second table was like the one they saw before. Consistent with the spatial‐marking system
of their languages, Dutch speakers chose the relative solution, whereas the Tzeltal speakers
chose the absolute solution. Several other comparative experiments testing spatial memory
and reasoning are consistent with this pattern, although lively debates about interpretation
persist (Levinson et al. 2002, Li & Gleitman 2002).
Extending the above exploration, Haun and colleagues (2006, 2006) examined performance on
a spatial reasoning task similar to the one described above using children and adults from
different societies and great apes. In the first step, Dutch‐speaking adults and eight‐year olds
(speakers of an egocentric language) showed the typical egocentric bias, whereas Hai//om‐
speaking adults and eight‐year olds (a Namibian foraging population who speak an allocentric
language) showed a typical allocentric bias. In the second step, four‐year old German‐speaking
children, gorillas, orangutans, chimpanzees, and bonobos were tested on a simplified version of
the same task. All showed a marked preference for allocentric reasoning. These results suggest
that children share with other great apes an innate preference for allocentric spatial reasoning,
but that this bias can be overridden by input from language and cultural routines.
If one were to work on spatial cognition exclusively with WEIRD subjects (say, using subjects
from Japan, the U.S. and Europe) one might conclude that children start off with an allocentric
bias but naturally shift to an egocentric bias with maturation. The problem with this conclusion
is that it would not apply to many human populations, and may be the product of particular
cultural environments. The next contrast highlights some additional evidence suggesting that
WEIRD people may be unusual in their egocentric bias.
We have discussed several lines of data suggesting, not only population‐level variation, but that
WEIRD people are unusual. There are also numerous studies that have found differences
between much smaller numbers of samples. In these studies it is impossible to discern whether
the results of the small‐scale societies or those of the industrialized societies are more unusual.
For example, one study found that both samples from two different industrialized populations
were risk‐averse decision makers when facing monetary gambles involving gains (Henrich &
McElreath 2002), while both samples of small‐scale societies were risk‐prone. It might be that
such risk‐aversion is a local phenomenon. Similarly, extensive inter‐temporal choice
experiments using a panel method of data collection indicate that the Tsimane, an Amazonian
population of forager‐horticulturalists, discount the future 10 times more steeply than WEIRD
people (Godoy et al. 2004).
If you have trouble with an Android app which is misbehaving, there are several steps you can take to help the developers figure out what’s wrong.
Please be as specific as you can when reporting errors. Developers need to know the kind of device you’re on (make and model, OS Version) which you get from the Settings App, under "About phone (or tablet)". They also need to know what version of their app you are using, which you can usually get from the App’s own "About" menu or by long-pressing on the App’s icon and choosing "App Info." And they need to know exactly what you did, and what didn’t work, and/or any messages the app gave.
adb is a program known as the "Android Debug Bridge". It is what developers use
(either directly or via their development tools) to install apps under development,
upload and download files, examine device logs, and more.
There is no secret password needed, and no fee, to run adb.
But there is a tiny bit of setup.
No, we’re not talking about terminating Windows (much as there may be frustrated users who’d like that).
The "terminal window" is a generic term for a program that emulates a "computer terminal"
from the 1960’s. For example, "cmd" (cmd.exe) or PowerShell on Windows.
On macOS, the program is actually called Terminal.app, and is hidden away in /Applications/Utilities.
On Linux and BSD, there are usually several terminal-type programs, with names like
kterm, and stranger names like rxvt (the last letters stand for virtual terminal).
While you could download the full Android SDK, it’s pretty large and you probably won’t use
most of it.
Therefore, most package management systems include adb as a standalone package.
On Windows, please follow one of these directions:
On macOS, to download ADB by itself, check out https://technastic.com/adb-fastboot-command-mac-terminal/. If you have or would like to have the HomeBrew package installer, use these directions.
On BSD or Linux, there should be a package, using whatever packaging tools your OS offers.
On BSD, try pkg_add adb (works for me).
On Linux, use apt, or whatever your distribution offers.
Developer Mode on device
In the Settings app on your device, there will be an
About section at the bottom. Tap it.
Go to the bottom of the resulting page, and you should see a section called Build Number.
Some device manufacturers feel they have to
improve things pointlessly; for example,
on Huawei "EMUI" systems,
Build Number is near the top of the page instead.
Tap it. Tap it again. And again. After a few taps it should say something like
"You are X taps away from being a developer."
When you get to seven taps, it will say "You are now a developer."
Well it hasn’t magically loaded developer-fu into your brain like in The Matrix movies,
but it has enabled "developer mode".
Now, somewhere in Settings, there is a new section called Developer Options.
And in there is the all-important
USB Debugging option.
Enable that, and you’re done. Almost.
Plug it in!
When you now connect your phone to your computer by a USB cable, the device will ask you if you trust the computer and, if so, to click OK. There is an option "Always allow", which I discourage you from using for security reasons. It’s mainly meant for developers' test devices which typically do not have any valuable personal information on them and don’t move from coffee shop to coffee shop. Please don’t use it for your "daily driver" phone.
On most devices
USB Debugging is persistent. On some it is turned off after use,
and has to be re-enabled. Don’t get discouraged.
If your phone is connected, it will show up in the output of
$ adb devices <<1>>
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached
A4N4C19511003333	unauthorized

$ adb devices <<2>>
List of devices attached
A4N4C19511003333	device
The first time you run
adb it may give you two lines about "starting daemon".
Don’t worry, there is nothing satanic about that - "daemon" is an old English word
for "assistant" and was used early on in Unix as a term for a background server process.
The funky string A4N… will be replaced by a string that identifies your phone, and typically includes
the manufacturer, model, and serial number.
unauthorized just means that you haven’t yet answered the "trust this computer" prompt.
After that, it should show up as
device, as it does on the second run shown above.
Getting 'adb logcat' output
Now you’re ready to use
adb to get some real work done.
The 'LogCat' mechanism is the Android system logfile, and
adb logcat will display it.
The output normally contains a huge amount of debugging "chatter"
from all the apps that are running.
You should just forward the adb logcat output to your developers without
trying to pare it down.
Give the command
adb logcat in a terminal window now, if you have adb installed
and your device is connected and has "USB Debugging" enabled.
Looks like a load of nonsense, eh? But it makes sense to
Android developers who are used to staring at this stuff.
Type CTRL/C (^C) in the terminal window to cancel or kill the running command.
When you want to grab an
adb logcat output to save, you could do something like this:
Make sure the app is not running (use a process killer, or swipe it away from the running list - the square button at the bottom of modern Android), because sometimes problems start at the very beginning of the app.
Also close any other apps that you can close without inconvenience.
Redirect adb logcat into a file, like
adb logcat > someapp-someprob-somedate.txt (with the obvious substitutions in the filename).
Start the app
Quickly perform the steps in the app that show the problem.
When done, type CTRL/C (^C) in the terminal window.
Send the text file to the developer.
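Putting those steps together in a terminal session (the filename is just an example; this assumes adb is on your PATH and the device is authorized):

```shell
# Optional: clear the old log so the capture starts fresh
adb logcat -c

# Capture into a file; now start the app and reproduce the problem
adb logcat > someapp-someprob-somedate.txt

# When done, press CTRL/C here, then send the .txt file to the developer
```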
ADB screen record
Android has its own built-in screen recorder, just waiting to be invoked via ADB.
adb shell screenrecord /sdcard/sample.mp4
# Do the things you want to show
^C
adb pull /sdcard/sample.mp4
# Move sample.mp4 to a safe location, send it to the developer,
# edit it and publish on Vimeo/Youtube, whatever.
# When *sure* it's no longer needed, free up disk space on phone:
adb shell rm /sdcard/sample.mp4
Other things to do with adb
Copy files to/from the phone. To get a listing of what’s on the phone:
$ adb ls /sdcard
000041f8 00001000 5d6eec8d Music
000041f8 00001000 495c7806 Podcasts
000041f8 00001000 5a009d78 Ringtones
000041f8 00001000 495c7806 Alarms
000041f8 00001000 5a05d184 Notifications
000041f8 00002000 60116392 Pictures
000041f8 00001000 6052a1ee Movies
000041f8 00001000 601f3f70 Download
000041f8 00001000 5d6eec7b Android
000041f8 00001000 5a009d46 Mobile Systems
000041f8 00001000 5b5f2cc5 panoramas
000041f8 00001000 5f5a3e26 Documents
000041f8 00001000 5f73c83c Signal
000081b0 000262dd 608ac281 id.mp4
...
$ adb ls /sdcard/Pictures
000041f8 00002000 5fd9d5be Screenshots
000041f8 00001000 5a009d48 Photoshop Express
000081b0 0053e0af 5b9d0fd1 IMG_20180915_095737.jpg
000041f8 00001000 5f315357 PIVO
000041f8 00005000 60897348 .thumbnails
000041f8 00001000 5fe4b0a3 PhotosEditor
000041f8 00001000 60123e4b Instagram
$
The output is primitive compared to a regular
ls -l; the mode, size, and timestamp fields are left as raw hex!
In the first column, 41 in the third byte indicates a directory, and
81 indicates a file.
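Those raw hex fields are just standard Unix file-mode values. As an illustration (this helper is mine, not part of adb), the mode column can be decoded with Python's stat module:

```python
import stat

def kind(mode_hex: str) -> str:
    """Classify the first (mode) column of `adb ls` output."""
    mode = int(mode_hex, 16)
    if stat.S_ISDIR(mode):
        return "directory"
    if stat.S_ISREG(mode):
        return "file"
    return "other"

print(kind("000041f8"))  # -> directory (e.g. Music)
print(kind("000081b0"))  # -> file (e.g. id.mp4)
```

The "41" and "81" rules above are just the S_IFDIR and S_IFREG bits showing through the hex.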
$ adb pull /sdcard/Pictures/IMG_20180915_095737.jpg
/sdcard/Pictures/IMG_20180915_095737.jpg: 1 file pulled...
$
To go the other way - load files onto your phone - use adb push.
For more reading
See the official documentation, someplace on https://developer.android.com.
"Step 1. Basic streaming chart"
from numpy import asarray, cumprod, exp
from numpy.random import lognormal, gamma, uniform
def _create_prices(t):
last_average = 100 if t==0 else source.data['average'][-1]
returns = asarray(lognormal(mean.value, stddev.value, 1))
average = last_average * cumprod(returns)
high = average * exp(abs(gamma(1, 0.03, size=1)))
low = average / exp(abs(gamma(1, 0.03, size=1)))
delta = high - low
open = low + delta * uniform(0.05, 0.95, size=1)
close = low + delta * uniform(0.05, 0.95, size=1)
return open[0], high[0], low[0], close[0], average[0]
from bokeh.layouts import row, column, gridplot
from bokeh.models import ColumnDataSource, Slider, Select
from bokeh.plotting import curdoc, figure
from bokeh.driving import count
source = ColumnDataSource(dict(
time=[], average=[], low=[], high=[], open=[], close=[],
color=[]
))
p = figure(plot_height=500, tools="xpan,xwheel_zoom,xbox_zoom,reset",
x_axis_type=None,
y_axis_location="right", toolbar_location="left")
p.x_range.follow = "end"
p.x_range.follow_interval = 50
p.x_range.range_padding = 0.05
p.line(x='time', y='average', alpha=0.4, line_width=3, color='deepskyblue', source=source)
p.segment(x0='time', y0='low', x1='time', y1='high', line_width=2, color='gray', source=source)
p.segment(x0='time', y0='open', x1='time', y1='close', line_width=8, color='color', source=source)
mean = Slider(title="mean", value=0, start=-0.01, end=0.01, step=0.001)
stddev = Slider(title="stddev", value=0.04, start=0.01, end=0.1, step=0.01)
curdoc().add_root(column(row(mean, stddev), p, width=1000))
@count()
def update(t):
open, high, low, close, average = _create_prices(t)
color = "green" if open < close else "firebrick"
new_data = dict(
time=[t],
open=[open],
high=[high],
low=[low],
close=[close],
average=[average],
color=[color],
)
source.stream(new_data, 300)
curdoc().add_periodic_callback(update, 100)
curdoc().title = "Streaming Stock Chart"
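Because this script builds its document with curdoc() and registers a periodic callback, it is a Bokeh server application; running it as a plain Python script will show nothing. Assuming it is saved as app.py, it can be launched with:

```shell
bokeh serve --show app.py
```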
Ubuntu 16.04 LTS fresh installation does not detect Ethernet adapter
lshw -C network sees the card correctly as Intel 82566DM-2 ID:19 but ifconfig -a sees no interface, nothing. Worked fine with earlier Ubuntu before installation. Do I need a specialist driver or can I get it working with the present installation without specialised driver? I installed on a Lenovo desktop Think Centre M6072-ADM and also on an older Lenovo model which recognised the Intel on board ethernet card perfectly in installation. The working machine has an Intel 82573e adapter so it appears the 82566DM-2 is problematic. Note Windows 10 did not like this ethernet card either but previous distributions of Ubuntu were OK.
Because I lost the wired connection, I plugged in my Asus N12 USB wireless card, which worked immediately with 16.04. With earlier releases I had to install a driver manually, but once running its performance was fantastic, reporting 240MB/sec, and it would run for months without going down.
The same N12 USB wifi will now only show an average of 54MB/sec, and the performance on video is noticeably lower quality. Does anyone know why the performance of the N12 has deteriorated so much? Should I just remake the driver I used with 10.04 to recover performance?
If I can get the wired connection running I will not need to use the N12 but thought it worth reporting the performance drop in any case
We have run into a similar issue on Ubuntu 14.04. Essentially, DHCP requests are coming across with a bogus MAC address when it PXE boots. Our testing has confirmed that the issue is with the E1000/E1000E (PCI/PCIe) driver. We were able to confirm this by using an older Kernel from the 2.6 tree. This article from 2008 explains the history of the driver. Currently the kernel has the E1000E driver compiled in and there is blacklisting in the E1000 that prevents it from loading for this chipset (and others). In short, the kernel is compiled so that it forces the E1000E driver to load for the Intel 82566DM-2 and it does not work (sometimes?). Our situation is more complex. But, for your case, I believe the easiest solution would be to compile your kernel with E1000 support only. Do not include E1000E support and you should (theoretically) be okay.
You say "to compile your kernel with E1000 support only." Then you say "Do not include E1000 support". Which should be done? Also, how is that done?
Hello John. Thanks for the additional information, most appreciated.
Typo: It should have said "Do not include E1000E support". I edited to correct this.
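In kernel-configuration terms, the suggestion above corresponds roughly to these Kconfig settings (a sketch; the options live under Device Drivers > Network device support in menuconfig):

```
CONFIG_E1000=y
# CONFIG_E1000E is not set
```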
What is HTML?
HTML (Hypertext Markup Language) is a markup language used to build and structure the components of a website; the name can be understood simply as "hypertext markup language". People often use HTML to divide content into paragraphs, headings, blockquotes, links, and so on.
Why Learn HTML?
HTML is the foundation of all web pages. Without HTML, you wouldn’t be able to organize text or add images or videos to your web pages. HTML is the beginning of everything you need to know to create engaging web pages!
- HTML is organized into a family tree structure. HTML elements can have parents, grandparents, siblings, children, grandchildren, etc.
- The document type declaration `<!DOCTYPE html>` is required as the first line of an HTML document. The doctype declaration is an instruction to the browser about what type of document to expect and which version of HTML is being used; in this case it's HTML5.
- The `<html>` element, the root of an HTML document, should be added after the `<!DOCTYPE html>` declaration. All content/structure for an HTML document should be contained between the opening and closing `<html>` tags.
- The `<head>` element contains general information about an HTML page that isn't displayed on the page itself. This information is called metadata and includes things like the title of the HTML document and links to stylesheets.
- The `<title>` element contains text that defines the title of an HTML document. The title is displayed in the browser's title bar or in the tab in which the HTML page is displayed. The `<title>` element can only be contained inside a document's `<head>` element.
- The `<body>` element represents the content of an HTML document. Content inside the `<body>` tags is rendered in web browsers. Note: there can be only one `<body>` element in a document.
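Putting those structural elements together, a minimal HTML5 document (the title and text here are just placeholders) looks like this:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <p>Hello World!</p>
  </body>
</html>
```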
Structure of a tag in HTML that you should know
An HTML element is a piece of content in an HTML document and uses the following syntax: opening tag + content + closing tag. In the code provided:
- `<p>` is the opening tag.
- `Hello World!` is the content. The content of an HTML element is the information between the opening and closing tags of an element.
- `</p>` is the closing tag. An HTML closing tag denotes the end of an HTML element. Its syntax is a left angle bracket `<` followed by a forward slash `/`, then the element name, and a right angle bracket `>` to close.
HTML tag attributes
Most tags in HTML have associated attributes; `class` and `id` are the most common. In addition, each tag has attributes of its own: for example, an `<a>` tag has `target`, and an `<input>` has `placeholder`. Knowing these attributes is part of using each tag correctly.
All the most commonly used HTML tags
- HTML can use six different levels of heading elements. The heading elements are ordered from the highest level `<h1>` to the lowest level `<h6>`.
- The `<h1>` tag is often used for the largest title of the website. Note that a web page should have at most one `<h1>` tag, because this affects SEO; using more than one `<h1>` is not good.
- The `<div>` element is used as a container that divides an HTML document into sections and is short for "division". `<div>` elements can contain flow content such as headings, paragraphs, links, images, etc.
- The `<p>` paragraph element contains and displays a block of text.
- The `<span>` element is an inline container for text and can be used to group text for styling purposes. However, as `<span>` is a generic container for separating pieces of text from a larger body of text, its use should be avoided if a more semantic element is available.
- The `<strong>` element highlights important, serious, or urgent text; browsers will normally render this highlighted text in bold by default.
- The `<a>` anchor element is used to create hyperlinks in an HTML document. The hyperlinks can point to other webpages, files on the same server, a location on the same page, or any other URL via the hyperlink reference attribute `href`, which determines the location the anchor element points to.
- The `<em>` emphasis element emphasizes text; browsers will usually italicize the emphasized text by default.
The list tags come in two commonly used combinations: `<ul>` with `<li>`, and `<ol>` with `<li>`. `ul` means unordered list: when used, items display with round or square bullets by default (adjustable with CSS, which we will talk about later). `ol` means ordered list: items are numbered 1, 2, ... like a table of contents.
- The `<video>` element embeds a media player for video playback. The `src` attribute contains the URL of the video. Adding the `controls` attribute displays video controls in the media player.
- Note: the content inside the opening and closing tags is shown as a fallback in browsers that don't support the element.
- The `<iframe>` tag is used to embed another website or video into our website, as with the video below.
- True to its name, the `<audio>` tag is used to insert music into our website; `src` takes the file path, and `controls` displays tools such as the play and pause buttons.
- The `<br>` line break element creates a line break in text and is especially useful where a division of text is required, like in a postal address. The line break element requires only an opening tag and must not have a closing tag.
- The HTML image element `<img>` embeds images in documents. The `src` attribute contains the image URL and is mandatory. `<img>` is an empty element, meaning it should not have a closing tag.
- In HTML, the `<table>` element is used to represent a two-dimensional table made of rows and columns.
- The table head element, `<thead>`, defines the headings of table columns encapsulated in table rows.
- The table row element, `<tr>`, is used to add rows to a table before adding table data and table headings.
- The table heading element, `<th>`, is used to add titles to rows and columns of a table and must be enclosed in a table row element, `<tr>`.
- The table body element, `<tbody>`, is a semantic element that contains all table data other than table heading and table footer content. If used, `<tbody>` will contain all table row `<tr>` elements, and indicates that those `<tr>` elements make up the body of the table. A `<table>` cannot have both `<tbody>` and `<tr>` as direct children.
- The table footer element, `<tfoot>`, uses table rows to give footer content or to summarize content at the end of a table.
- The `colspan` attribute on a table header `<th>` or table data `<td>` element indicates how many columns that particular cell should span within the table. The `colspan` value is set to 1 by default and will take any positive integer between 1 and 1000.
- Similarly, the `rowspan` attribute on a table header or table data element indicates how many rows that particular cell should span within the table. The `rowspan` value is set to 1 by default and will take any positive integer up to 65534.
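As a small sketch combining the table elements above (the data is invented), note how `colspan` lets the "Score" heading span two columns:

```html
<table>
  <thead>
    <tr>
      <th>Name</th>
      <th colspan="2">Score</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Alice</td>
      <td>10</td>
      <td>12</td>
    </tr>
    <tr>
      <td>Bob</td>
      <td>8</td>
      <td>9</td>
    </tr>
  </tbody>
</table>
```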
- The `<form>` tag is often used when you want to send data to the server, such as for login, registration, or updating information by submitting a form. The form tag has several attributes you need to know:
- The `action` attribute is given the path or file that handles the form's information when the user presses the submit button.
- The `name` attribute helps distinguish between forms if you use multiple forms on a web page.
- The `autocomplete` attribute has a default value of `on`, which means the browser displays suggestions: when you put the cursor in a login or registration box, it suggests an email, for example. If you want to turn it off, set it to `off`.
- The `enctype` attribute defines the type of data that will be sent to the server, such as text, HTML, or images.
- The `<input>` element is used to render a variety of input fields on a webpage, including text fields, checkboxes, buttons, etc. `<input>` elements have a `type` attribute that determines how they are rendered on a page.
- `<input>` elements can support text input by setting the attribute `type="text"`. This renders a single-row input field that users can type text inside.
- The HTML `<input>` element can have the attribute `type="password"`, which renders a single-row input field that censors the text the user types into it. It is used for sensitive information.
- `<input>` elements can be given a `type="radio"` attribute that renders a single radio button. Multiple radio buttons on a related topic are given the same `name` attribute value. Only a single option can be chosen from a group of radio buttons.
- A `type="checkbox"` attribute renders a single checkbox item. To create a group of checkboxes related to the same topic, they should all use the same `name` attribute. Since they are checkboxes, multiple ones can be selected for the same topic.
- HTML input elements can be of type `number`. These input fields allow the user to enter only numbers and a few special characters.
- A slider can be created by using the `type="range"` attribute on an HTML `input` element. The range slider acts as a selector between a minimum and a maximum value, set with the `min` and `max` attributes respectively. The slider can be adjusted to move in different steps or increments using the `step` attribute. The range slider is meant to act more as a visual widget for adjusting between two values, where the relative position is important but the precise value is not, for example adjusting the volume level of an application.
As an inline tag, this tag is often used with input and textarea so that when the user clicks on it, it will automatically point to the input or textarea to focus on them through the for attribute in the label.
- The label's for attribute points to the id of the input, so the input needs an id corresponding to the value in the label's for.
- By default, clicking on the <label> will focus the field of the related input.
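A sketch of the for/id pairing just described (the id value is illustrative):

```html
<label for="email">Email address</label>
<input type="text" id="email" name="email">
<!-- clicking the label text now focuses the input whose id matches "email" -->
```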
- As a block tag, <select> displays as a dropdown, allowing you to choose one option from a drop-down list. Each option inside it carries a corresponding value attribute. The select tag also has a name attribute, like input tags, to distinguish the field when it is used in a form.
- The value attribute of the selected option is sent as a key-value pair when the form is submitted.
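The select/option/value relationship above can be sketched like this (names and values are illustrative):

```html
<select name="city">
  <option value="hanoi">Hanoi</option>
  <option value="tokyo">Tokyo</option>
</select>
<!-- on submit, the chosen option's value is sent as a key-value pair,
     e.g. city=hanoi or city=tokyo -->
```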
- As an inline tag, <textarea> is much like the input tag and shares many of its attributes; the difference is that it allows the user to enter a lot of content across multiple lines, which input cannot.
- <fieldset> is the tag to use when you want to group fields like input, button, and textarea together, so you can disable the whole group at once instead of disabling each one.
- As an inline tag, <button> is used in a form when the user clicks to submit the entered data or clear it. The button tag has a type attribute with 3 values: submit to submit data, reset to clear all data entered in the form (quite dangerous), and finally button, which has no effect on its own when clicked.
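The three button types can be sketched in one small form (the action URL and field name are illustrative):

```html
<form action="/submit" method="post">
  <input type="text" name="comment">
  <button type="submit">Send</button>  <!-- submits the form data -->
  <button type="reset">Clear</button>  <!-- wipes everything typed so far -->
  <button type="button">Help</button>  <!-- does nothing unless scripted -->
</form>
```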
Difference between input type submit and button type submit
- Button tags can use pseudo-elements like ::after, but an input of type submit can't. So when working with forms, most people use the button tag because it can be customized more.
- The semantic tags you may see when you inspect your blog's code or other websites, such as header, footer, nav, aside, article, main, and section, make our code structure clearer and more coherent. If you use div tags everywhere instead of these semantic tags it will still work, but the structure is harder to read.
|
OPCFW_CODE
|
And now we are at the final and my most favorite part of the box model, borders.
We have a range of things to go over the border with type or style and colors and widths. In the end, I’ll try and cover some not borders that we can use to not affect the box model.
The first thing is let’s go over how to set borders. It’s pretty straightforward to set up a border around the whole box.
border: 1px solid black; is my most typical setting when I'm putting on borders. The "1px" sets the width of the border, "solid" sets the style, and "black" sets the color.
It’s good to know that the order doesn’t matter but I almost always set it this way to keep things consistent. As long as you are consistent your coding will be predictable.
However, let’s break these things down into their pieces.
The border width's default setting is
medium. However, you can also set it to
thin or
thick. Browsers don't have any rule to follow for how many pixels wide each keyword is, only that thin must be thinner than medium and medium thinner than thick.
Of course, you can set the width with a px, ch, rem, or em. But not percent.
If you are using the property on its own, you can set each border width individually
border-width: 1px 0.5rem 1em 5px;
This follows the same pattern as margin and padding. Top, right, bottom, left. That means the top would be 1px, the right 0.5rem, the bottom 1em, and the left border 5px. You also can shorthand this just like margins & paddings.
border-width: 1px; /* 1px border all around */
border-width: 1px 5px; /* top/bottom border 1px, right/left border 5px */
border-width: 1px 5px 10px; /* top border 1px, left/right border 5px, bottom border 10px */
You can set the width to 0 to prevent it from taking space or displaying. However, that isn't the reason why we don't see borders on all our stuff all the time. Remember, the width already defaults to medium.
Border style is what determines whether a border is displayed at all. By default, it is set to
none on most elements; a few, like tables, have it set to
solid by the browser's default stylesheet. With it set to none, the border takes up no space and does not display.
However, there are a bunch of styles you can set a border besides solid.
I know in today’s flat, material design that solid is the most desirable but if you don’t know your options how can you be confident in your choice?
/* Keyword values */ border-style: none; border-style: hidden; border-style: dotted; border-style: dashed; border-style: solid; border-style: double; border-style: groove; border-style: ridge; border-style: inset; border-style: outset;
Some of these styles require a larger width to be truly visible, like groove, it takes the color assigned and makes one-half of the border lighter. If the border is only 1px wide it’ll show one color.
Just like the width, we can assign them all in different ways.
border-style: solid groove double dashed;
This is probably my favorite property of borders. By default, the border color is the color of the text, i.e.
currentColor. And I love this because most of the time when I have a border around a button, keeping it the same color as the text inside is exactly what I want. So to set that explicitly we can simply say border-color: currentColor;.
Or we can set a color.
border-color: goldenrod;
Still, just like the other attributes we’ve been seeing we can set borders individually.
border-top-color: red; border-right-color: green; border-bottom-color: blue; border-left-color: purple;
Or if we wanted to set them all at the same time we can use the same shorthand we’ve been using for the other box model elements.
border-color: red green blue purple; /* top: red, right: green, bottom: blue, left: purple */
Keep in mind with colors though, if you don’t have a width or a style you won’t see it.
Last for the border color is a very useful keyword: transparent.
This has solved many problems with hover styles on buttons that get a border on hover but wouldn't have one otherwise.
Without it, the content either surrounding the item or inside the item gets adjusted to accommodate the new space the border takes up. With
transparent we can reserve this space without rendering the border in the unhovered state.
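One way to apply that transparent trick on a hover border (the class name is illustrative):

```css
.btn {
  /* reserve the border's space up front so nothing shifts on hover */
  border: 2px solid transparent;
}
.btn:hover {
  /* only the color changes; the layout already accounts for the 2px */
  border-color: goldenrod;
}
```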
This is essential to how we make arrows as well.
Arrows with borders
Arrows are a fun exploit of the box model that we can use to our advantage. To be honest, they are triangles, but they are super useful for creating nav items or comic-book bubbles.
See the Pen Some Arrows with Borders by Bruce Brotherton (@brucebrotherton) on CodePen.
These arrows hinge on the way the content box is constructed and its limitations. First, we know that the content box can't be smaller than zero, so we put no content in the element we turn into the arrow. Next, we make sure there is no padding; there isn't, because we didn't add any in the case above. This ensures that the inside of the content box is 0. Then we add the styles for the border. Since borders can push outside the width of the box, they are what give the arrow its shape. Elements' corners have an angle to them, similar to how a cabinet's mitered corners do. Understanding this is important for manipulating a border into an arrow. This JS fiddle shows what these corners look like and illustrates them as they get smaller.
As you can see, the content continues to squeeze in to accommodate the size while the borders stay the same width, eventually ending up with 4 triangles.
And this second JS fiddle shows what happens when we set the border-bottom-width to zero: we now have three triangles. Then we make the left and right borders transparent and voilà, we have an arrow pointing down.
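The downward arrow just described can be sketched as (the class name is illustrative):

```css
.arrow-down {
  width: 0;           /* no content box */
  height: 0;
  border: 20px solid; /* four corner triangles, colored with currentColor */
  border-bottom-width: 0;          /* drop the bottom triangle entirely */
  border-left-color: transparent;  /* hide the left triangle */
  border-right-color: transparent; /* hide the right triangle */
  border-top-color: black;         /* the remaining triangle points down */
}
```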
One of the main functions of borders is to signify the separation of elements. But sometimes a border causes too much shifting and you need an outline that doesn’t take up any space on the page. Usually, this kind of thing is needed for when you are coding for accessibility, and you need an outline around an item that is focused.
Lucky for you we have the outline setting. This is very similar to borders:
outline: 1px dashed black;
and that will set a dashed outline around an element without pushing anything in any direction. There is one thing the outline has that I wish the border did:
outline-offset, which lets you draw the outline farther from (or closer to) the element's edge based on the value you give it.
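A common accessibility-style use of those two properties together (the selector is illustrative):

```css
button:focus {
  outline: 1px dashed black; /* drawn outside the box model, no layout shift */
  outline-offset: 4px;       /* push the outline 4px away from the edge */
}
```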
Second, we have box-shadow. I know this is usually a blurry drop shadow to show something sitting above or below a piece of the page. However, did you know there is another part to the box-shadow, its spread? This lets us extend the shadow outside our element and give it some separation from other items.
My favorite use of this is giving it a very subtle, transparent border around images so if they are white on white you can still tell where the edges are.
box-shadow: 0 0 0 1px rgb(0 0 0 / 5%);
Okay, so there is one more thing I want to touch on before wrapping this up: border-radius. It doesn't affect the box model, but it makes a huge impact visually. This property allows you to have curved edges on your boxes, or full circles for that matter. Just make sure you have
position: relative; overflow: hidden; on the element in case you put something inside it that has a background color. If you are looking for a fast and fun way to make some organic-looking shapes using border-radius, I would use Mirko and Nils' border radius generator. However, like my dropdown menu example, I like to make elements just a little rounded and pill-shaped to add fun more subtly.
As you can see the box model has a lot going on inside it and there are many ways to manipulate it. Borders are the most visible parts of the box model and carry most of the weight visually. There is a lot of play with borders and a lot to just fool around with. They also are used all over the place to help separate content from other parts of the page. I mean just look at any website and see how things are compartmentalized just with simple borders everywhere. That’s all for now.
|
OPCFW_CODE
|
Occasionally other developers with whom I work will comment on my productivity. For example a couple of weeks ago, after working hard one day and delivering an urgently needed service to the team the next morning, one developer said in standup, "You're an animal. You wrote that in one day and it would have taken me two weeks to do that." I’m a little embarrassed by such comments and have often thought about what makes me or any programmer productive, so today I enjoyed reading parts of Neal Ford's book The Productive Programmer and wanted to share some thoughts on programmer productivity.
I loved the foreword by David Bock of CodeSherpas. He writes, "The individual productivity of programmers varies widely in our industry. What most of us might be able to get done in a week, some are able to get done in a day. Why is that? The short answer concerns mastery of the tools developers have at their disposal. The long answer is about the real awareness of the tools' capabilities and mastery of the thought process for using them. The truth lies somewhere between a methodology and a philosophy, and that is what Neal captures in this book."
Ford suggests there are four productivity principles for programmers.
- Acceleration: speeding things up. Keyboard shortcuts, searching and navigation.
- Focus: eliminating distractions and clutter, both physical and mental.
- Automation: getting your computer to do more. Put your computer to work.
- Canonicality: doing things once in one place. Also known as DRY or Don't Repeat Yourself.
These are all good and important. For me the most important item in Ford's list is number two: focus. And I would add one more: conditioning. Focus and conditioning are the two most important keys to successfully improving your programming productivity.
Programming Productivity Key #1 – Focus
Start by eliminating mental and physical distractions. Remove the clutter from your mind and your desk, but most importantly, eliminate the distractions caused by environmental disruption. Find or create a quiet place where you can focus on the task at hand, where you can put your entire mental energy into what you are doing. Distractions are a HUGE productivity destroyer.
Cubicles are of the Devil. They may be great for a sales team or reporters who thrive on eavesdropping, but chit chat does not get code written. Cubicles foster incomplete and sporadic communication that becomes a crutch for broken requirements and shoddy analysis resulting in an unsearchable and unsellable body of knowledge persisted only in the fragile collective of a distributed and disconnected human neural network that often cannot survive the loss of one or two key nodes.
Ford suggests instituting "quiet time" where, for example, there are no meetings or email or talk during two two-hour periods each day. He claims to have tried this in one of the consulting offices where he worked and the organization found that they got more done in those four hours than was getting done in the entire day before implementing the "quiet time." This is not surprising to me at all. Ford writes, "It is tragic that office environments so hamper productivity that employees have to game the system to get work done."
While I have the luxury of working from a home office at the moment, I have worked in a number of cubicle environments in the past and probably will in the future. The most common technique I see other developers using to create their own "quiet time" is a very expensive pair of noise-canceling headphones piping in whatever tunes or white noise that developer finds most conducive to focusing on the task at hand. Do whatever you have to do to achieve focus.
Programming Productivity Key #2 – Conditioning
To play at the top of your game, you have to condition. You have to practice. You have to study. You have to prepare your body but most importantly your mind to execute. And you have to condition your attitude. You have to be excited to write code. You have to get a thrill out of making it work and work well. You have to condition your mind to expect excellence and then work to achieve that.
In conditioning, there are mechanics you must learn. Spend time studying and using your IDE's keyboard shortcuts. Regularly study and practice using the base class libraries of your platform so you don't end up wasting time writing code that a solid community or well heeled development team has already written and tested heavily for you.
More importantly, spend time reading about and learning new techniques and technologies from open source projects to gather at least a passing understanding of the problem they solve and how you might use them even if you choose not to do so. Pay attention to the patterns, the naming and organizational patterns, the logical patterns and the problem solving patterns that you find in these projects. Even if you do not use them, you will be storing up mental muscle memory that will serve you well when you need a way to solve a new problem in your own projects.
Most importantly, learn from your own work. Repeat your successes, taking patterns from your past and improving on them. Avoid your failures, being honest with yourself about what did not work in your last project and finding ways to avoid or even invert the mistakes of the past, turning them into strong successes.
Conditioning is not about solving a problem for the specific project you're working on now. It's about preparing your mind to be at its most productive when faced with programming problems you've never seen before. It's about creating mental muscle memory that will kick into overdrive as you solve the problems you have already faced, the code spilling out of your brain through your fingers and into the keyboard.
No matter how fast or slow you are as a programmer, you can improve. If you improve your focus and put your body and mind through regular conditioning, you will improve. And as you improve, you'll get noticed. And as you get noticed, you'll get rewarded.
|
OPCFW_CODE
|
Release v1.6.0
Hi @nim65s,
It has been a while since we last released Crocoddyl.
This new version incorporates the following points:
Refactored the Cost API with breaking compatibility (cost depends on state abstract, not multibody state)
Fixed issue in c++98 compatibility
Added shared_ptr registers for solver classes (Python bindings)
Initialized previously missed data in SolverQP (it was not actually producing a bug)
Fixed issue with FrameXXX allocators (Python bindings)
Created aligned std vectors for FrameXXX (Python bindings)
Used the proper nu-dimension in shooting problem and solvers
Doxygen documentation of smooth-abs activation
Renamed the activation model: smooth-abs to smooth-1norm (deprecated old names)
Added the smooth-2norm activation model with Python bindings
Updated README file with Credits and committee information
Added throw_pretty in Python bindings of action models (checks x,u dimensions)
Improved the documentation (doxygen, docstrings) and fixed grammar in various classes
Cleaned up a few things related with cost classes
Cleaned up unnecessary typedef in cost models
Extended the differential action model for contacts to handle any actuation model (not just floating-base derived ones)
Added conda support
Added the quadratic-flat activation models
Fixed issue with gepetto viewer display (appearing in some OS)
Added contact/impulse action unit tests
Added contact/impulse cost unit tests
Added a proper gap threshold (it was too big and created different behavior in feasibility-driven solvers)
Improved the copyright starting year for many files
I have pushed this branch to my current fork, feel free to push the needed commits before the release.
Hi ! This project doesn't usually accept pull requests on master. If this wasn't intentional, you can change the base branch of this pull request to devel (No need to close it for that). Best, a bot.
@nim65s should we include the online documentation in the README file (i.e., https://gepettoweb.laas.fr/doc/loco-3d/crocoddyl/master/doxygen-html/)?
Note that I did not complete the documentation of each class; however, there is a reasonable number of documented classes.
Yes! :)
It would definitely be helpful and we have enough content to get started
@proyan and @nim65s -- I have included the documentation in the README file (aed7a81).
For my side, @nim65s -- you could go ahead with this release :)
Done, sorry for the delay :/
Thanks @nim65s :)
|
GITHUB_ARCHIVE
|
What are our procedures if a user complains about harassment or abuse?
Context for this question: I am writing an article on MathOverflow for the Notices of the AMS. One thing I try to address is the apparent gender skew of our website. One of the readers of my draft asked what procedures MathOverflow has in place to address harassment or similar misbehavior. Since I'm not a mod, I don't know, so I figured I'd ask over here.
Unless there is an objection, I'll probably cite the answer to this in my article.
I just found this, by Sune Kristian Jakobsen, posted here: "David Speyer once made a similar experiment to test if he got more upvotes on answers when he posted [on MO] with his own name, than when he posted as an anonymous." Might you include this in your article?
MathOverflow specifically, or StackExchange in general? I presume the latter. And that is something that, to my knowledge, has been slightly in flux over the past 18 months due to various events.
MathOverflow or SE? It makes a big difference. A user could be active on 50+ SE sites.
Mathoverflow please.
@JosephO'Rourke I can't get to my old post about this to read the results, because tea is down, but my recollection is that the difference was very small and not significant. (IIRC, for 20 answers I wrote them and then flipped a coin to decide whether or not to sign my name.) I remembered this but was planning not to include it because (1) it was in 2010 and (2) I think the conclusion is not that reputation doesn't matter but that writing like someone who has been on the site for a while matters much more than the number next to your name.
The old post is http://msleziak.com/mathoverflow/tea/discussion/385/experimental-results/
"(1) it was in 2010": MathOverflow has changed quite a bit in the last decade. Like a different site now than it was then. So it makes sense to not mention that old experiment.
@DavidESpeyer (and smci) I meant policies, not behaviour. As far as I know the harassment/abuse policy is meant to be uniform network-wide.
Since moderators have to implement these procedures, it seems relevant also to ask what training, if any, moderators are asked to do on gender discrimination.
It might be worth noting that last year there was a huge debate on SE that essentially stemmed from a disagreement on what is considered discrimination/harassment. In particular, @MarkWildon, "training" moderators on what discrimination is may be a thorny issue.
There's Code of Conduct that is general for all SE sites.
One answer was upvoted 9 times and downvoted 20 times, then deleted; the ongoing comments were moved to chat. The link is: https://chat.stackexchange.com/rooms/117841/discussion-on-answer-by-harry-gindi-what-are-our-procedures-if-a-user-complains
Please don't vote to undelete my answer. There was a somewhat coordinated offsite effort to crater it from +2 to -9, and I'm not interested in having any more attention from twitter. Thank you in advance.
I understand Harry's concern. However, I find it quite worrying that there has been a "coordinated offsite effort" to downvote a completely polite answer, especially because the starting point was given by some user inside MO. Online shaming aimed at shutting up dissent is typical of Twitter, and I am afraid that this kind of disruptive and intimidating strategy could become usual here, too.
My apologies for being slow to respond, but I felt there should also be a response from the moderators here. MO is covered by Stack Exchange's Conduct Policy (https://meta.stackexchange.com/conduct) and has no other separate conduct policy. This policy specifically bans harassment, and empowers moderators to impose penalties, up to banning users, in response to such behavior. The moderators support this policy, and endeavor to enforce it to the best of their ability.
The obvious answer:
Questions and answers containing harassment can get downvoted. Posts with sufficiently many downvotes can be deleted.
Questions, answers and comments containing harassment can be flagged as "rude or abusive" or for moderator attention. Then moderators can delete them.
Moderators can suspend repeated offenders.
It may be worth adding that, unlike most other "community" websites, MathOverflow does not have a private message feature. Depending on the background of the reader, they may have assumed that there was such a feature.
As corroboration of this answer, note that the Code of Conduct lists three stages of "Enforcement": (1) Warning, (2) Account suspension, (3) Account expulsion.
At least from the user side, there is a procedure that hasn't been listed yet: following the link to https://mathoverflow.net/contact and contacting support with "I want to report a Code of Conduct violation". This is, as far as I can tell, the procedure most clearly dedicated to preventing harassment.
I don't know how these "contacts" are handled on the moderation side, and would appreciate elaboration from someone who does.
Since the "contact us" link was mentioned, perhaps it is worth adding that there is also an email address specifically for MO moderators: Who are the MathOverflow moderators? (Probably it's up to them to say whether this contact would be suitable for the purposes discussed here.)
|
STACK_EXCHANGE
|
//Displays and handles the colour palette.
function ColourPalette() {
//a list of web colour strings
this.colours = ["black", "silver", "gray", "white", "maroon", "red", "purple",
"orange", "pink", "fuchsia", "green", "lime", "olive", "yellow", "navy",
"blue", "teal", "aqua", "brown", "sandybrown"
];
//make the start colour be black
this.selectedColour = "black";
var self = this;
var colourClick = function() {
//remove the old border
var current = select("#" + self.selectedColour + "Swatch");
current.style("border", "0");
//get the new colour from the id of the clicked element
var c = this.id().split("Swatch")[0];
//set the selected colour and fill and stroke
self.selectedColour = c;
fill(c);
stroke(c);
//add a new border to the selected colour
this.style("border", "2px solid blue");
};
//load in the colours
this.loadColours = function() {
//set the fill and stroke properties to be black at the start of the programme
//running
fill(this.colours[0]);
stroke(this.colours[0]);
//for each colour create a new div in the html for the colourSwatches
for (var i = 0; i < this.colours.length; i++) {
var colourID = this.colours[i] + "Swatch";
//using p5.dom add the swatch to the palette and set its background colour
//to be the colour value.
var colourSwatch = createDiv();
colourSwatch.class('colourSwatches');
colourSwatch.id(colourID);
select(".colourPalette").child(colourSwatch);
select("#" + colourID).style("background-color", this.colours[i]);
colourSwatch.mouseClicked(colourClick);
}
//select() returns the first matching element, so this puts the selected
//border on the first (black) swatch at startup
select(".colourSwatches").style("border", "2px solid blue");
};
//call the loadColours function now it is declared
this.loadColours();
}
|
STACK_EDU
|
/* eslint-disable no-plusplus */
import React from 'react';
import { action } from '@storybook/addon-actions';
import ConfirmDialog from './ConfirmDialog.component';
const defaultProps = {
header: 'Hello world',
show: true,
validateAction: {
label: 'OK',
onClick: action('ok'),
bsStyle: 'primary',
},
cancelAction: {
label: 'CANCEL',
onClick: action('cancel'),
className: 'btn-inverse',
},
};
const propsWithoutHeader = {
show: true,
validateAction: {
label: 'OK',
onClick: action('ok'),
bsStyle: 'primary',
},
cancelAction: {
label: 'CANCEL',
onClick: action('cancel'),
className: 'btn-inverse',
},
};
const smallProps = {
show: true,
header: 'Hello world',
size: 'small',
validateAction: {
label: 'OK',
onClick: action('ok'),
bsStyle: 'primary',
},
cancelAction: {
label: 'CANCEL',
onClick: action('cancel'),
className: 'btn-inverse',
},
};
const largeProps = {
show: true,
header: 'Hello world',
size: 'large',
validateAction: {
label: 'OK',
onClick: action('ok'),
bsStyle: 'primary',
},
cancelAction: {
label: 'CANCEL',
onClick: action('cancel'),
className: 'btn-inverse',
},
};
const withProgressBarProps = {
show: true,
header: 'Hello world',
size: 'large',
validateAction: {
label: 'OK',
onClick: action('ok'),
bsStyle: 'primary',
},
cancelAction: {
label: 'CANCEL',
onClick: action('cancel'),
className: 'btn-inverse',
},
progressValue: 66,
progressLabel: '66%',
};
const children = <div>BODY content. You can put whatever you want here</div>;
export default {
title: 'Components/Layout/Modals/ConfirmDialog',
};
export const Default = () => (
<div>
<h1>Dialog</h1>
<ConfirmDialog {...defaultProps}>{children}</ConfirmDialog>
</div>
);
export const WithoutHeader = () => (
<div>
<h1>Dialog</h1>
<ConfirmDialog {...propsWithoutHeader}>{children}</ConfirmDialog>
</div>
);
export const Small = () => (
<div>
<h1>Dialog</h1>
<ConfirmDialog {...smallProps}>{children}</ConfirmDialog>
</div>
);
export const Large = () => (
<div>
<h1>Dialog</h1>
<ConfirmDialog {...largeProps}>{children}</ConfirmDialog>
</div>
);
export const WithProgressBar = () => (
<div>
<h1>Dialog</h1>
<ConfirmDialog {...withProgressBarProps}>{children}</ConfirmDialog>
</div>
);
export const WithLotsOfContent = () => {
const rows = [];
for (let index = 0; index < 50; index++) {
rows.push(<p key={index}>The content dictate the height</p>);
}
return (
<div>
<h1>Dialog</h1>
<ConfirmDialog {...withProgressBarProps}>
<div>{rows}</div>
</ConfirmDialog>
</div>
);
};
export const WithSecondaryActions = () => {
const propsWithMoreActions = {
...defaultProps,
header: 'Delete elements',
validateAction: {
label: 'Delete',
onClick: action('ok'),
bsStyle: 'danger',
},
secondaryActions: [
{
label: 'Show info',
onClick: action('info'),
bsStyle: 'info',
},
],
};
return (
<div>
<h1>Dialog</h1>
<ConfirmDialog {...propsWithMoreActions}>{children}</ConfirmDialog>
</div>
);
};
|
STACK_EDU
|
At just eight years of age, Arham Talsania from India has already been certified as the world’s youngest computer programmer by the Guinness Book of World Records. He has been learning and mastering basic computer skills since an early age, and has already become proficient in designing computer games and programs using Python. Arham’s interest in coding was piqued at the tender age of two, and he has since made remarkable progress under the guidance and support of his computer engineer father. His impressive talent has enabled him to design small video games and continue to develop his genius skill.
Early life and Development of Interest in Coding
Arham Talsania was born in Ahmedabad, India in 2015. Arham Om Talsania's family comprises his father, Om Talsania, a software engineer, and his mother, a homemaker. His passion for computer programming was instilled in him by his father, whom he used to watch code from a young age. As soon as Arham expressed an interest in coding, his father began teaching him at a very tender age. Arham demonstrated remarkable aptitude and was able to operate a tablet when he was just two years old. By the age of three, he had already started ordering gadgets that piqued his interest, showcasing his incredible intellect and inquisitive nature.
Getting to Know Python
Arham Om Talsania’s age was only three when he started playing games and solving puzzles on tablets. As he had observed his father using Python – a highly popular, general-purpose programming language – he became intrigued by it and asked his father to teach him. Recognizing his son’s potential, his father took the time to teach him the basics. Arham’s remarkable intellect and ability to learn quickly shone through once again, and he soon started creating small programs using Python codes. His passion for this online language continued to grow, and he spent much of his free time honing his coding skills.
Child Prodigy Arham's Rise to Fame
Upon witnessing his son’s extraordinary talent for coding at such a young age, Om Talsania sent a request to Microsoft to recognize Arham as a Technology Associate. Thanks to the dedication and hard work of the father-son duo, Arham became a certified Microsoft Programmer at an age when most children are still learning basic literacy and numeracy skills. As his reputation began to spread, his father aspired to share his son’s incredible achievements with the world. He submitted Arham’s name to the Guinness World Records, hoping to have his son recognized on a global scale. After careful evaluation, the Guinness World Records approved Arham’s application, and he made history by becoming the youngest person to be certified as a computer programmer. This accomplishment places him among the ranks of other child prodigies like Kautilya Katariya, who also achieved this distinction at the age of eight. His incredible achievements serve as a testament to India’s history of producing prodigies of exceptional intellect and talent.
Despite achieving so much at such a young age, Arham’s journey in the world of coding is only just beginning. He remains committed to improving his skills and has set his sights on developing his own games, apps, and coding systems in the future. His aspirations extend beyond the world of coding, as he has also announced his desire to become a successful business entrepreneur. But more than that, Arham wants to use his skills and success to help others in need. He has expressed a deep desire to use his talents to make a positive impact on the world and help those who are less fortunate. With his extraordinary talent, determination, and compassion, there is no doubt that Arham Talsania will continue to achieve great things and make a difference in the world.
The Boy Wonder
Arham Talsania is a rare gem in today’s generation, where most children are born with electronic devices and learn to use them efficiently from a young age. Despite being surrounded by technology, it takes exceptional talent and determination to master a programming language as advanced as Python and start designing video games at such a young age. His achievements are a testament to his remarkable talent and dedication to his craft.
Today, the entire nation is proud of Arham, and he continues to push his coding skills to new heights with each passing day. It is clear that Arham’s journey as a child computer programmer has only just begun, and he is destined for even greater success in the future.
|
OPCFW_CODE
|
On 1/29 2010 16:00:18 flameman wrote:

> > For _this_ kind of magic you have nearly the whole 128MB. If you want
> > to stay with kernel 1 boot, then you have 1.6MB.
>
> umm let me understand this part: do you mean by jtag, just to shortcut
> things directly (jtag download byte directly into flash)?
No, I mean the kind of magic if you write your own bootloader. You can use NAND backup/restore to work with the whole 128MB. The only area your bootloader probably cannot touch is the device configuration area. I never tried JTAG, but I think that production series of Spitz and Akita are not equipped with the JTAG supporting bridge IC (see the empty soldering pads on the teardown photos).

> i am supporting a ppc405GP board which has all the firmware in flash
> ... so i had to think about "Recovering a bricked board, using OCD
> Jtag programmer"

You cannot brick Spitz and Akita. PROM boot should always work.

> anybody has ever thought anything similar for akita ? Akita should
> have a jtag on his back, and jtag should be directly connected to
> PXA270 ... so it should be as standard as a pretty ARM databook
> should explain.

No, it is not. There is just a serial IOPORT and a few other wires. I am not sure what the purpose of the connector under the battery cover is.

> what i have not understand is:
> 1) is the flash area partitioned with slot0=1.6MB,
> slot1=the rest/whatever, and we can't repartition it into a whole
> 128MB slot, because if we did then we would lose the nandutils etc.
> that slot1 contains, so we would lose a pretty procedure to re-flash
> if things go bad?

No. Slots are not partitions. Just the first partition contains slots. There should be:

- NAND configuration block
- NAND bootloader
- slot for kernel 1
- slot for kernel 2 (emergency aboot)

> 2) could we bypass it, just using jtag? 1.6MB is a pain ... while i
> wish i could have 8MB (or more) to handle a good (and nicer)
> bootloader + first aid kit (fsck, badblocks, RTC tools ... etc etc ...

No, only a new bootloader could bypass the slot1 size.

> everything should be pretty useful for a pretty UNIX net install
> /maintenance/support)

You are not forced to use the slot1 kernel as your primary kernel. You can use kexec (e.g. kexecboot) and boot anything you want.
> about my kexec-netboot, here is a shot-proof of concept
>
> (i need this bootloader, cause i net download the first aid kit into
> so called "early ram rootfs", also i like this bootloader cause i can
> test newkernel easier)

No, you don't need a custom bootloader. You can make the kernel in slot1 netboot. It would be much less work than programming the bootloader. In fact, I think that nobody was able to implement a custom bootloader yet. There are some proprietary things you would need to reverse-engineer before being able to implement it:

- The way PROM boot passes control to NAND boot.
- Completely initializing the hardware, including things not initialized by the current kernel (e.g. LCD phase settings).

-- Stanislav Brabec http://www.penguin.cz/~utx/zaurus

_______________________________________________
Zaurus-devel mailing list
Zaurusemail@example.com
http://lists.linuxtogo.org/cgi-bin/mailman/listinfo/zaurus-devel
|
OPCFW_CODE
|
I don't have a 100MB reserved partition on this system.
Also, I found this other KB article: http://www.terabyteunlimited.com/kb/article.php?id=411
I've had some success, but I think it's not complete yet. Hopefully TB (David) can chime in, because I'm kind of stuck at this point.
Here's what I did.
- I kind of followed that article: copied the \Boot directory, bootsect.bak, bootmgr, ntldr, and ntdetect from the ACER-1 (XP) partition to the Win7 partition.
- I then set up a new boot item with just the Windows 7 partition in it, and used BING BCD Edit to delete the "Earlier version of Windows" item.
- I can now boot directly into Windows 7 from the BING boot menu, and I *think* that the Windows 7 partition is no longer depending on anything on the XP partition (ACER-1).
The problem that I'm left with is booting to the XP (ACER-1) partition. It feels like that (booting to XP) is still tied to something on the Windows 7 partition.
The reason that I think that is that I left the original XP boot item in the BING Boot menu. That has, for partitions:
Acer recovery
ACER-1 (XP) partition
Windows 7 partition
And, using this original XP boot item, I can boot, what it appears to be, directly into XP from BING.
HOWEVER, I tried creating a new "XP only" boot item, where it only has:
ACER-1 (XP) partition
When I booted into that, it said it couldn't boot. I figured it was because the boot.ini on the ACER-1 partition had "partition(2)", which is the Windows 7 partition (the 3rd partition, counting from 0), so I changed the boot.ini in the ACER-1 partition to "partition(0)", since, for the "XP only" boot item, I had only the ACER-1 partition in slot 0 of the boot menu.
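(For reference, a stock boot.ini uses ARC paths along these lines; the disk and partition numbers below are illustrative, not taken from this system. In an unmodified Windows install, partition() is 1-based, though boot managers that hide or remap partitions can change which number is effectively correct.)

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect
```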
Then, when I tried to boot the "XP only" boot item from BING, I get an error saying that it can't find <Windows root>\system32\hal.dll !!!
I checked the ACER-1 partition, and in the <Windows>\system32\ directory, there IS a hal.dll, so I'm kind of stumped at this point, and can't understand what's going on. It almost seems like when I boot to the "XP only" boot item, it's looking for the hal.dll on the partition(s)???
Now, on the ACER-1 partition, there's apparently still a BCD (the BCD Edit still shows up in BING properties), but I thought that XP doesn't use BCD for booting (it was only used from Vista onwards??). Also, I have to admit that I don't quite understand BCD (the old boot.ini was a lot simpler), but I *did* check BCD edit on the ACER-1 partition, and that DOES look "ok", i.e., the Boot setting says "HD-0" and "ACER-1".
So, any ideas WHY, when I set up the "XP only" boot item in BING, and then adjust the boot.ini on the ACER-1 partition to "partition(0)", it can't find the hal.dll?
Should I just blow away the BCD stuff on the ACER-1 partition, and if so, how do I do that?
Thanks and hopefully TB/David will chime in!!
P.S. I think that I'm OK for now, in that I can boot XP and Win7, but I don't like the way that this is working, i.e., it looks like if I change the relative positions of the first 3 partitions (Acer recovery, followed by ACER-1, followed by Windows 7), something would break.
|
OPCFW_CODE
|
I thought, "Damn the torpedoes, at least this will make a good write-up," [and] asked the team to crank up all of the knobs to 10 on the things I thought were essential and leave out everything else.
These are the course-wide materials, plus the first part of Chapter One, where we look at what it means to write programs.
"I am seriously thankful; you helped me and gave me assignment assistance of top quality. I will contact you again if I face any problem in future. Thanks for the help"
Abstraction is an emphasis on the idea, qualities and properties rather than the particulars (a suppression of detail). The importance of abstraction is derived from its ability to hide irrelevant details and from the use of names to reference objects.
Really fast and good service... really impressed, will definitely recommend friends to use it... delivered before time
When I edit an imported module and reimport it, the changes don't show up. Why does this happen?¶
Our professionals will gladly share their knowledge and help you with programming homework. Keep up with the world's latest programming trends. Programming
Our services are available to students all over the world, at any degree program, and at any task level. When you need the highest quality programming homework help, and the most secure service, Assignment Expert is your best choice.
Programming homework doesn't have to be the worst experience of your academic life! Use our expert programming solutions, and you will get your work done according to the high standards you require.
It places the emphasis on the similarities between objects. Thus, it helps to manage complexity by collecting individuals into groups and providing a representative which can be used to specify any individual of the group.
Programmers respect their own work by always striving for high quality and seeking the best design for the solution at hand through refactoring.
The abstract property named LogPrefix is an important one. It enforces and ensures a value for LogPrefix (LogPrefix is used to obtain the detail of the source class where the exception occurred) for every subclass, before they invoke a method to log an error.
Sudden Sun Death Syndrome (SSDS) is a very real concern which we should be raising awareness of. 156 billion suns die each year before they're just 1 billion years old.
Certain aspects of XP have changed since the publication of Extreme Programming Refactored; specifically, XP now accommodates modifications to the practices as long as the required objectives are still met.
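As an illustrative sketch of this abstract-property pattern: the base class forces every subclass to supply a LogPrefix value before it can log anything. Only the name LogPrefix comes from the text; the class names and C++ rendering here are hypothetical.

```cpp
#include <string>

// Hypothetical sketch: an abstract base class that requires every
// subclass to provide a LogPrefix identifying the source class
// where the exception occurred.
class ErrorLogger {
public:
    virtual ~ErrorLogger() = default;

    // Every error line is prefixed with the subclass-supplied LogPrefix.
    std::string LogError(const std::string& message) const {
        return "[" + LogPrefix() + "] " + message;
    }

protected:
    // Abstract "property": subclasses cannot be instantiated without it.
    virtual std::string LogPrefix() const = 0;
};

class NetworkService : public ErrorLogger {
protected:
    std::string LogPrefix() const override { return "NetworkService"; }
};
```

Because LogPrefix() is pure virtual, forgetting to provide it is a compile-time error rather than a missing prefix at runtime.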
|
OPCFW_CODE
|
Include and compile error
Hi,
When compiling perfetto on my system, the following error was raised:
../thirdparty/perfetto-v16.1/perfetto.cc: In function 'std::string
perfetto::base::Uint64ToHexStringNoPrefix(uint64_t)':
../thirdparty/perfetto-v16.1/perfetto.cc:4754:52: error: expected ')' before 'PRIx64'
4754 | auto final_size = snprintf(&buf[0], max_size, "%" PRIx64 "", number);
| ~ ^~~~~~~
| )
In file included from ../thirdparty/perfetto-v16.1/perfetto.cc:27:
../thirdparty/perfetto-v16.1/perfetto.cc: In function 'protozero::{anonymous}::ParseFieldResult protozero::
{anonymous}::ParseOneField(const uint8_t*, const uint8_t*)':
../thirdparty/perfetto-v16.1/perfetto.cc:8653:38: error: expected ')' before 'PRIu32'
8653 | PERFETTO_DLOG("Skipping field %" PRIu32 " because its id > 0xFFFF",
A workaround I found is to add "#include <cinttypes>" in perfetto.h. But I guess a nicer correction could be made.
Best,
Lucas
Hi, thanks for the report.
Sounds like a missing inttypes.h include indeed.
The missing include is probably in string_utils.cc, perfetto.cc is just the amalgamation of all sources.
Can I ask which compiler this came from? We cover our code with clang and gcc and none of them warned.
So beyond fixing this specific issue I am wondering: how did you spot this?
Hi,
I use gcc 8.3. I did not do anything special to spot this problem. The code I am working on now uses perfetto, and I just bumped into this when compiling it.
This is probably because gcc8 has some slightly different headers that leak fewer includes, and this IWYU (include what you use) issue went unnoticed on other compilers. It didn't show up on ci.perfetto.dev, which uses gcc-7.
Unfortunately we can't cover all possible toolchains, so every now and then this will happen.
Will send a patch soon for this, but I will have no way to check if there are other issues. For the amalgamated SDK (perfetto.h/cc), the patch will likely take effect in the next release, v17, scheduled for early July (we do monthly releases).
Alternatively you could send a patch if you can verify the issue, that would be great.
See https://perfetto.dev/docs/contributing/getting-started for that.
Thanks for this patch. I will wait for the v17 and I will keep you inform if the problem is then solved or not.
Just an additional comment:
Sounds like a missing inttypes.h include indeed.
please note that I added the include cinttypes and not inttypes.h.
I had a look at this and I am extremely confused, looks like that perfetto.cc has the right inttypes.h include before that point.
The first line you pasted comes from [string_utils.cc](https://cs.android.com/android/platform/superproject/+/master:external/perfetto/src/base/string_utils.cc;l=198?q="auto final_size %3D snprintf(%26") which definitely has that include.
please note that I added the include cinttypes and not inttypes.h.
I don't think it makes any difference for PRIx64 and the like, those are really #define(s) to string literals. There is no std:: namespace anyways.
Can I also ask you how to reproduce this? can you give me the full compiler cmdline?
I just tried with g++-10 @ v16.1 and seems to work:
g++-10 -Weverything -std=c++11 -c -o /tmp/xxx sdk/perfetto.cc -lpthread
(success, no warnings)
The command-line run is:
g++ -std=c++17 -Ofast -DNDEBUG -march=native --param inline-unit-growth=200 --param inline-min-speedup=1 -fopenmp -Wno-int-in-bool-context -Wno-misleading-indentation -Wno-deprecated-declarations -fPIC -I../thirdparty/perfetto-v16.1 ../thirdparty/perfetto-v16.1/perfetto.cc -c -o/stck/fpascal/RHEA/negev/build/thirdparty/perfetto-v16.1/perfetto.cc.1.o
Please note that the same error is raised if I simplify it and use c++11 to look like your command (-Weverything is not known by my g++)
g++ -std=c++11 ../thirdparty/perfetto-v16.1/perfetto.cc -c -o/stck/fpascal/RHEA/negev/build/thirdparty/perfetto-v16.1/perfetto.cc.1.o
Hmm weird, I just installed g++-8 and still can't repro:
$ g++-8 -std=c++11 perfetto.cc -c -o/tmp/perfetto.cc.1.o # works
I'm starting to suspect this is more related to your sysroot (the includes) than the compiler.
Out of curiosity can I ask you if you can try just this:
test.cc:
#include <inttypes.h>
#include <stdio.h>
int main(int argc, char**) { printf("%" PRIu32 "\n", argc); }
Then
g++-8 -std=c++17 test.cc
and if this fails (hopefully it should) then append -E to the invocation and copy the result somewhere. The output should show the preprocessor output.
Also, out of curiosity, if you open your /usr/include/inttypes.h, does it contain this?
/* Unsigned integers. */
# define PRIu8 "u"
# define PRIu16 "u"
# define PRIu32 "u"
Indeed compiling test.cc failed. Using -E gives the output written in the enclosed file log.txt.
log.txt
My /usr/include/inttypes.h contains these three lines (which is expected now, since adding #include <cinttypes> helped, isn't it?)
Hmm that's really not expected and seems like a toolchain bug.
My /usr/include/inttypes.h contains these three lines
Which lines? Unfortunately I cannot tell, because -E only shows the preprocessor "output" so it doesn't show any define.
Out of curiosity, if you build with gcc instead (rather than g++) does it work?
inttypes.h is still expected to work in C++; AFAIK the difference between inttypes.h and cinttypes should be just the std:: namespacing for functions (but this is a #define)
Ooooh, this might be related to __STDC_FORMAT_MACROS? Out of curiosity, if you add -D __STDC_FORMAT_MACROS does it magically work?
Probably cinttypes adds that define to work around the issue?
That sounds like you may be on to something (from https://en.cppreference.com/w/cpp/types/integer )
The C99 standard suggests that C++ implementations should not define the above limit, constant, or format macros unless the macros __STDC_LIMIT_MACROS, __STDC_CONSTANT_MACROS or __STDC_FORMAT_MACROS (respectively) are defined before including the relevant C header (stdint.h or inttypes.h). This recommendation was not adopted by any C++ standard and was removed in C11. However, some implementations (such as glibc 2.17) try to apply this rule, and it may be necessary to define the __STDC macros; C++ compilers may try to work around this by automatically defining them in some circumstances.
It seems like replacing all #include <inttypes.h> by #include <cinttypes> could be a plausible fix / work-around, though.
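A minimal sketch of the define-first workaround discussed in this thread (the helper function name is hypothetical): on toolchains that still honor the old C99 recommendation, __STDC_FORMAT_MACROS must be defined before the *first* include of <inttypes.h> for the PRI* macros to be visible in C++.

```cpp
// Define the guard before the first <inttypes.h> include, so that
// glibc-style headers expose PRIu32/PRIx64 even under the (since
// withdrawn) C99 recommendation mentioned above.
#define __STDC_FORMAT_MACROS 1
#include <inttypes.h>
#include <cstdio>
#include <string>

std::string format_u32(uint32_t n) {
    char buf[16];
    // PRIu32 expands to a string literal (e.g. "u") spliced into the format.
    std::snprintf(buf, sizeof buf, "%" PRIu32, n);
    return std::string(buf);
}
```

Switching to <cinttypes> sidesteps the guard entirely, which is why that include fixed the build.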
https://r.android.com/1746593 should fix it
You got it! Adding -D __STDC_FORMAT_MACROS fixes the problem.
Thanks for the fix!
We switched all inttypes.h includes to cinttypes with r.android.com/1746593
Closing this. thanks for the report.
|
GITHUB_ARCHIVE
|
Paul Dunscombe is responsible for 3D support at Rowan Software, the developers of Flying Corps Gold and the upcoming MiG Alley. "Although D3D is thought of as a universal API, no two 3D cards support exactly the same feature set," Dunscombe
explains. "This means that even though only one version of the code needs to be written, it does need to cope with the fact that some features may be missing or supported in a different way. Each time a new feature is used, we need to consider how the software will look if the feature is not present. Provided that some care is taken over this, there is no reason why a good 'generic' Direct3D version cannot be produced. The only coding that we hope we will need to do in the future is to support new features as they become available in hardware. This support will still be through the Direct3D interface and so cannot really be called 'card specific.'"
[Caption: Fighter Duel included almost no land in its scenery in an effort to keep frame rates up; with 3D card support, Fighter Duel 2 has no such restrictions.]
Eidos' Bryan Walker, who's overseeing the development of Flying Nightmares 2, Confirmed Kill, and Team Apache, takes a different approach. "We're supporting Direct3D, Rendition's RRedline, 3Dfx's Glide, NEC's SGL, and ATI's Rage APIs," Walker says. "We prefer to develop native drivers as much as possible for our simulations, since they provide better frame rates on most hardware and allow us to really increase the content quality as a result. We'll use D3D on some cards if we have to due to time constraints or developer-support issues, but our goal is to do the best we possibly can, and that's usually not by relying on generic APIs."
This mix of supported features in various Direct3D drivers requires so much testing that it can really stretch out development time. Scott Randolph, a senior 3D graphics engineer at MicroProse who is working on Falcon 4.0, says that the development team got a Glide driver up and running in a week and then tweaked it for another month or so.
"On the other hand," Randolph says, "we had a Direct3D driver functional in about a month, and it still isn't working on all hardware. For instance, on 3D Labs chips only some textures get dark at night; on Intel's i740, none gets dark at night. We're hoping we can find some way that works for everyone."
The varied performance of different cards presents another major problem, says Kevin Wasserman, a 3D programmer at Looking Glass. "Performance is very uneven, both between different chipsets and between different drivers for the same chipset," he explains. "Sadly, there are a lot of cards and drivers out there that provide only hardware 'deceleration'; especially, in our case, drivers that don't provide native DrawPrimitive support. Hopefully, this will improve over time."
|
OPCFW_CODE
|
[WIP] Introduce Engine trait
What have you changed? (mandatory)
This PR introduces an engine trait, which can replace the usages of rocksdb in store/raftstore/fsm/apply.rs
What are the type of the changes? (mandatory)
Engineering (engineering change which doesn't change any feature or fix any issue)
How has this PR been tested? (mandatory)
CI
Does this PR affect documentation (docs) or release note? (mandatory)
No
Does this PR affect tidb-ansible update? (mandatory)
No
https://github.com/tikv/tikv/issues/4184
@5kbpers I've finally started giving this a look. I pinged you with some questions on slack and am poking at it locally. I'll get back with some feedback soon.
Hi @5kbpers @aknuds1.
This is starting to look really great. I see some of the engine abstractions are starting to get integrated, and that's encouraging.
I wonder @5kbpers can you try to do a couple of things:
share your engine trait design publicly and paste a link here for @aknuds1
get this PR building and passing CI and land what you've already got, so we can regroup and figure out the next step, how the three of us can coordinate
I don't know how to say this best, but I've been apprehensive about this PR, and am not sure how best I can help. This is a huge refactor and huge refactors are difficult.
The two things that worry me here are:
the commits are big and change a lot at once
there's a lot of code moving all around the codebase
The big commits make it difficult to review, to understand incrementally what the goals are and the steps toward the goals. The large amount of code motion makes the PR likely difficult to merge - other PRs will cause merge conflicts.
I don't know if this is helpful, but when I do a big refactor I like to follow a few principles:
Make very small commits that perform one single action, after which the project still builds and tests
Make small, self-contained PRs that make a coherent change to a single subsystem in service of the larger refactor
If I were to do this engine abstraction myself I would probably go about it the following way:
Create an engine_traits crate and fill it in with some stub traits with no methods. Putting the traits in their own crate from the beginning establishes the abstraction boundary that can't be violated.
Identify the simplest piece of code in tikv that uses an engine and fill in enough of the engine trait and impl to abstract that code only.
Do the same over and over, in tiny steps.
Then, after succeeding at abstracting the engine once, throw all that work away as a useful prototype, and redo the sequence as a series of small but sensible PRs that can be easily evaluated and merged.
That's kind of why I think it would be good to wrap up this PR and get it merged, step back and figure out what needs to happen next.
FWIW, for good background on refactoring techniques I recommend the books "Working effectively with legacy code", and "Refactoring: Improving the design of existing code".
This is just a quick braindump, but I'll follow up soon and try to be more constructive.
@brson OK, this PR can pass CI now, but still has much work to do. These traits do not completely cover all usages of RocksDB in TiKV currently.
I will try to create an engine_traits crate, split this PR into smaller PRs, and land them.
And I will share a doc about current engine trait design and the usages of RocksDB in TiKV a little later. @aknuds1
/run-all-tests
/run-all-tests
@aknuds1
I very much want you to work together with us to finish this big job. @brson can mentor you if possible.
Thanks @5kbpers!
@brson Is there anything for me to do on this in the meantime or shall I wait until this PR is landed?
|
GITHUB_ARCHIVE
|
CSC Lounge (http://csclounge.com) is currently being reimplemented.
Major feedback we have gotten revolves around the need for the UI and UX to be more modern, colourful and intuitive, and we are working to meet those needs. But it's not only the UI that we are working on; we are re-implementing the whole system with the Scala-based Lift web framework. The major reason we chose Scala is that it is more cost-effective and flexible; besides, it's what the cool chaps use for their systems nowadays (ask Twitter and Foursquare). One of the modules we worked on (and which is currently ready) is the search module.
The search was built on Apache Solr (http://lucene.apache.org/solr), an open-source search server created with Java on top of Apache Lucene (http://lucene.apache.org/java/docs/index.html). Solr is an amazing product and recommended to anyone building a web system from scratch (or who already has a running web system) and wants to add search functionality. It works by allowing developers to import indexes, which are persisted to disk as documents containing fields that are accessible to search. Besides just searching, the results can be returned highlighted or transformed in whatever way the developer wants. Results can also be returned in several formats including XML (the default), JSON, PHP, Ruby and Python. Most of these features are actually base features of Lucene (which currently powers Twitter's backend and a number of NoSQL DBs, like CouchDB). Solr allows users to import indexes from different sources -- XML, RSS, Wikipedia or an already existing database. Ours was the case of an already existing database. The problem with the process is that you have to manually create an XML file which tells Solr how your database is defined. This is a stressful process, as you have to switch between the XML editor and your database and, in between, decide what you want to index. God help you if you have a complex database system which requires nested Solr fields.
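For illustration, the hand-written XML in question is a DataImportHandler data-config.xml along these lines (the table, column, and connection details below are made up):

```xml
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/lounge"
              user="solr" password="secret"/>
  <document>
    <!-- Parent entity: one Solr document per database row -->
    <entity name="post" query="SELECT id, title, body FROM posts">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
      <field column="body" name="body"/>
      <!-- Nested child entity joined on the parent's primary key -->
      <entity name="comment"
              query="SELECT text FROM comments WHERE post_id = '${post.id}'">
        <field column="text" name="comment_text"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```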
This is where Kowaa comes in.
I created Kowaa to automate the process. It's a simple Java-based GUI tool that connects to your database and provides interfaces for you to set the properties of the Solr document and fields. It presents you with interfaces to select which database fields you want to index. It also allows you to create nested fields. It then spits out the XML you need in a directory you specify. You can then customise it by setting transformers (like HTML strippers) and other processors. I searched the internet for a tool to do this and I didn't find any, so I created one.
I used it with Microsoft SQL Server, but I included options and JDBC driver libraries for indexing MySQL, PostgreSQL and JavaDB (network and embedded). To download, just follow the URL http://csclounge.com/kowaa.rar. The file contains both the source code and the build, so you can customise the code as you want. It was built for internal use, but we figured it would be needed, so we are giving it all away for free! So if you are looking to set up Solr on your site with an already existing database, you might want to take advantage of this.
After downloading, extract to any directory of your choice and run by invoking the following command on the command line:
$ java -jar path_to_extract/dist/Kowaa.jar
What does Kowaa mean?
Kowaa is the Igbo word for "explain". Ifetayo also reminded me that when you break it into two, it means "teach us" in Yoruba (Ko waa). So it's more like explaining (or teaching) Apache Solr what your database is.
|
OPCFW_CODE
|
Undefined is not an object (evaluating navigation.state.routes) TabNavigator + redux
Hi Guys,
I'm trying to implement the TabNavigator with Redux but I keep getting the error:
undefined is not an object (evaluating 'navigation.state.routes')
I'm probably forgetting something in my implementation, so maybe someone can take a look? I've made a stackoverflow post:
http://stackoverflow.com/questions/43087129/react-navigation-tabnavigator-with-redux-errors
Thx!
anyone?
Fixed URL
mapStateToProp for TabBarComponent should have configuration
const mapStateToProps = (state) => ({
navigationState: state.tabBar
})
Thanks!
Current configuration, what am i missing here?
// export default MainScreenNavigator
const MainScreenNavigatorState = ({dispatch, navigationState}) => (
<MainScreenNavigator
navigation={
addNavigationHelpers({
dispatch: dispatch,
state: navigationState,
})
}
/>
)
const mapStateToProps = (state) => ({
navigationState: state.tabBar
})
export default connect(mapStateToProps)(MainScreenNavigatorState);
what is your store configuration. Make sure you are combining your reducer as tabBar. See posted code by @Tripwire999 at stackOverflow
Hi @dominicrj23 thanks for heads up. Will try to learn the code.
@grabbou
excute this code in reducer:
const initialNavState = {
index:0,
routes: [
{ key: 'InitA', routeName: 'Login' },
{
key: 'InitB',
routeName: 'Home',
index:0,
routes: [
{ key: 'InitC', routeName: 'TaskList', },
{ key: 'InitD', routeName: 'History', },
{ key: 'InitE', routeName: 'Setting', },
],
},
],
};
return AppNavigator.router.getStateForAction(NavigationActions.reset({routeName:'Home'}), state);
Error:
undefined is not an object (evaluating '_AppNavigator2.default.router.getStateForAction')
export const MainScreen = TabNavigator ({
TaskList:{ screen:TaskList },
History:{ screen:History },
Setting:{ screen:Setting },
});
export const AppNavigator = StackNavigator({
Login:{ screen: LoginScreen },
Home: { screen: MainScreen },
});
const AppWithNavigationState = ({ dispatch, nav }) => (
<AppNavigator navigation={addNavigationHelpers({ dispatch, state: nav })} />
);
AppWithNavigationState.propTypes = {
dispatch: PropTypes.func.isRequired,
nav: PropTypes.object.isRequired,
};
const mapStateToProps = state => ({
nav: state.nav,
});
export default connect(mapStateToProps)(AppWithNavigationState);
in AppNavgator.js
AppNavigator = { [Function: NavigationContainer]
router:
{ getComponentForState: [Function: getComponentForState],
getComponentForRouteName: [Function: getComponentForRouteName],
getStateForAction: [Function: getStateForAction],
getPathAndParamsForState: [Function: getPathAndParamsForState],
getActionForPathAndParams: [Function: getActionForPathAndParams],
getScreenConfig: [Function] } }
but when in the Reducer.js
AppNavigator = { [Function: Connect]
WrappedComponent:
{ [Function: AppWithNavigationState]
propTypes:
{ dispatch: [Function: bound checkType],
nav: [Function: bound checkType] } },
displayName: 'Connect(AppWithNavigationState)',
childContextTypes: { storeSubscription: { [Function: bound checkType] isRequired: [Function: bound checkType] } },
contextTypes:
{ store: { [Function: bound checkType] isRequired: [Function: bound checkType] },
storeSubscription: { [Function: bound checkType] isRequired: [Function: bound checkType] } },
propTypes:
{ store: { [Function: bound checkType] isRequired: [Function: bound checkType] },
storeSubscription: { [Function: bound checkType] isRequired: [Function: bound checkType] } } }
OK, the answer is to use
import { AppNavigator }
not import AppNavigator
Check that the name of your nav reducer in combineReducers is the same as in your mapStateToProps.
|
GITHUB_ARCHIVE
|
How to do 2D animation in Unity
So I’m doing a 3D game for kids for Android and iOS in Unity, but i’m new in game developing and it’s been really difficult to plan the assets.
We need to create 2D animations (paper like characters) and the characters have to be really detailed with great animations.
We have been thinking of several options:
We could create frame-by-frame animations, but our designer says there have to be at least 24 images per second (because the game runs at 24 fps); with this the app will be very big.
Another option is to create 2D models in Blender and animate them there, but it's a lot of work and could take a lot of time.
The last option is to have the pieces of the model and animate them through code, but it's a lot of work and I believe the quality of the animations would be low.
What's the best way to create 2D animations in Unity?
2 Solutions Collected From the Internet About “How to do 2D animation in Unity”
Have you explored the 2D sprite engines that are available in Unity? Whoever said “Unity isn’t really an engine designed to work with 2D stuff” is talking guff. I have just started working on a hobby 2D game and am using a Unity plugin called Orthello (see WyrmTale website for info). It handles sprite sheets, animations, collision detection and more without you having to write loads of code to do this. The learning curve is a bit steep and the examples on their website aren’t the best but I found replicating the sample solutions that come with the download the best way to get something working.
There’s also a similar tool called Sprite Manager 2 but you have to pay for that (I think). Check out the asset store for more information.
I would be really interested to hear if Orthello is what you’re looking for and how you find working with it – please let me know via http://markp3rry.wordpress.com if you can.
Just because the app runs at 24fps doesn’t mean you can’t display each animation frame for more than one frame of the main loop. It might not be smooth, but then again, looking at the sprite sheets of games like Super Street Fighter, it doesn’t look like they’re anywhere close to 24fps (the sprite sheet for Dhalsim in SF3 Alpha is a 210kb .gif file on my computer, and there are fewer than 252 frames of animation on it). Likewise, the total storage space taken up by every character sprite in Dustforce is a mere 7mb, though those sprites are just 192×192, maybe too low-res for you. They do look like paper though. I doubt that anything involving Blender would take longer than hand animating; Blender does key frames for you.
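To put rough numbers on the storage concern in the question, here is a back-of-the-envelope estimate in Python; the resolution, bytes per pixel, and compression ratio are all assumptions for illustration, not measurements from any actual game.

```python
# Rough storage estimate for frame-by-frame 2D animation.
# All parameter values below are assumptions for illustration.

def sprite_storage_bytes(frames: int, width: int, height: int,
                         bytes_per_pixel: int = 4,
                         compression_ratio: float = 0.1) -> int:
    """Estimate the on-disk size of a sprite sheet after typical lossless compression."""
    raw = frames * width * height * bytes_per_pixel
    return int(raw * compression_ratio)

# One second of animation at 24 fps with 192x192 frames (the Dustforce-style size):
size = sprite_storage_bytes(frames=24, width=192, height=192)
print(f"{size / 1024:.0f} KiB")  # well under a megabyte per second of animation
```

Even granting generous assumptions, per-second storage is in the hundreds of kilobytes, not megabytes, which is why sprite-sheet games stay small.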
|
OPCFW_CODE
|
import { Controller } from './controller'
import {
ApplyAction,
CalculateReward,
GenerateActions,
Playerwise,
StateIsTerminal,
DefaultGameRules
} from './entities'
/**
* The `Macao` class represents a Monte Carlo tree search that can be easily
* created. It provides, through its [[getAction]] and [[getActionSync]] methods, the results of running
* the algorithm.
*
* ```javascript
* const funcs = {
* generateActions,
* applyAction,
* stateIsTerminal,
* calculateReward,
* }
*
* const config = {
* duration: 30,
* explorationParam: 1.414
* }
*
* const macao = new Macao(funcs, config);
*
* const action = macao.getActionSync(state);
* ```
* @param State Generic Type representing a game state object.
* @param Action Generic Type representing an action in the game.
*/
export class Macao<State extends Playerwise, Action> {
/**
* @hidden
* @internal
*/
private controller_: Controller<State, Action>
/**
* Creates an instance of Macao.
*
* ```javascript
* const funcs = {
* generateActions,
* applyAction,
* stateIsTerminal,
* calculateReward,
* }
*
* const config = {
* duration: 30,
* explorationParam: 1.414
* }
*
* const macao = new Macao(funcs, config);
* ```
* @param {object} funcs - Contains all the functions implementing the game's rules.
* @param {GenerateActions<State, Action>} funcs.generateActions
* @param {ApplyAction<State, Action>} funcs.applyAction
* @param {StateIsTerminal<State>} funcs.stateIsTerminal
* @param {CalculateReward<State>} funcs.calculateReward
* @param {object} config Configuration options
* @param {number} config.duration Run time of the algorithm, in milliseconds.
* @param {number | undefined} config.explorationParam The exploration parameter constant.
* Used in [UCT](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search). Defaults to 1.414.
* @param {number | undefined} config.fpuParam The First play urgency parameter. Used to encourage
* early exploitation. Defaults to `Infinity`.
* See [Exploration exploitation in Go:
* UCT for Monte-Carlo Go](https://hal.archives-ouvertes.fr/hal-00115330/document)
* @param {number | undefined} config.decayingParam The multiplier by which to decay the reward
* in the backpropagation phase. Defaults to 1.
* @param {number | undefined} config.transpoTable The number of buckets in the Transposition Hash Table.
*/
constructor(
funcs: {
generateActions: GenerateActions<State, Action>
applyAction: ApplyAction<State, Action>
stateIsTerminal: StateIsTerminal<State>
calculateReward: CalculateReward<State>
},
config: {
duration: number
explorationParam?: number
fpuParam?: number
decayingParam?: number
/**
* The number of buckets in the Transposition Hash table
*/
transpoTable?: number
}
) {
const gameRules = new DefaultGameRules(funcs)
this.controller_ = new Controller(gameRules, config)
}
/**
* Runs the Monte Carlo Tree search algorithm asynchronously and returns a promise
* that resolves to the estimated best action given the current state of the game.
*
* ```javascript
* macao.getAction(state)
* .then(action => {
* // Do stuff with the action
* });
* ```
* or
*
* ```javascript
* const someAsyncFunction = async() => {
* const action = await macao.getAction(state);
* }
* ```
* @param {State} state Object representing the game state.
* @param {number | undefined} duration Run time of the algorithm, in milliseconds.
* @returns {Promise<Action>}
*/
getAction(state: State, duration?: number): Promise<Action> {
return this.controller_.getAction(state, duration)
}
/**
* Runs the Monte Carlo Tree search algorithm synchronously and returns the estimated
* best action given the current state of the game. This method will block the event
* loop and is suitable if you only wish to run the algorithm for a very short amount
* of time (we suggest 33 milliseconds or less) or if you are running the algorithm in
* an application that does not have a UI. For other applications, you should use
* `Macao.getAction()` instead.
*
* ```javascript
* const action = macao.getActionSync(state);
* ```
* @param {State} state Object representing the game state.
* @param {number | undefined} duration Run time of the algorithm, in milliseconds.
* @returns {Action}
*/
getActionSync(state: State, duration?: number): Action {
return this.controller_.getActionSync(state, duration)
}
}
|
STACK_EDU
|
Migrate from VS2005 to VS 2010 directly
Our project is currently developed in C#2 , VS2005.
We were thinking of migrating to VS2008 and C#3.
Do you think it might be a better idea to move directly to VS2010 instead?
We do not plan to release the new version till the end of next year.
Is there any advantage in moving from vs05 to vs08 and then moving to vs10?
thanks!
Well this post implies you can, but certain features of certain projects might get broken:
When you upgrade certain solutions from VS2005 to VS2010, the solution explorer layout can become broken. Some files move up the filter hierarchy. In our large solution, a hundred or so files ended up in the root of solution explorer.
It only seems to affect solutions where the solution explorer filter nesting is more than one deep, the files are not compiled (like headers), and they're excluded from the build in some configs.
Though an answer indicates it was fixed:
We have verified that the header file now gets placed under subfolder rather than directly the header filter. The fix should be available in the next public release of VS2010 (Beta2).
To answer your other point. One advantage of going via VS 2008 is that you can make that migration now (assuming you don't want to risk beta software) and start using the features of C# 3 straight away.
+1 Migration for C# projects will always be easy, be it VS2005 to VS2008 or VS2010. But if you have any VC++ projects, I would prefer migrating to VS2008 and then to VS2010 (after, say, SP1 is out) because Microsoft has a habit of having a lot of documented and undocumented breaking changes on the unmanaged side.
No VC++ projects. But the solution does have something like 30 projects. I think I'll give it a shot and see how it goes.
thanks!
Visual Studio 2010 Beta 2 comes with a "Go-Live" license, so if you are ok dealing with beta software, then why not? I have tried it at work myself, while the other developers continue on 2008, but I have to be careful with the project files, to not check in changes, etc.... I don't use it all the time, yet, because it's a memory hog, but other than that performance is a lot better.
There are also a lot of features that are worth the upgrade. The text editor is in WPF now and scales nicely with a ctrl-click and I find I use it a lot. There are a lot of new addins being built to integrate with the UI because the new framework for the code editor exposes a new addin model that is much easier to develop against.
Being able to split windows across multiple monitors in a more flexible way is great.
If you go for the "Ultimate" versions, there are a ton of new architecture and modeling tools and tools for exploring code. I love the ability to generate a sequence diagram from some method and use that while I am reading some unfamiliar code. Works great.
The list goes on really, I have barely scratched the surface, so yeah move on if you want to learn how to use the new stuff, and no one is stopping you, go for it.
Nice listing of additional features of VS2010, but you don't answer his main question "Is there any advantage in moving from vs05 to vs08 and then moving to vs10?"
|
STACK_EXCHANGE
|
Ruby on Rails - In what file do I add the following: require 'forecast_io'?
Thanks for your help,
I believe this is very simple but I can't figure it out. Following the instructions at https://github.com/darkskyapp/forecast-ruby - it tells me to not forget to add require 'forecast_io' - what file do I put this in?
I've run a scaffold to set up a simple lat and lng, following this guide:
https://campushippo.com/lessons/an-easy-way-to-implement-weather-forecasting-in-rails-9d10403 but keep on getting method errors. So I think we are using different versions of Ruby & Rails, and/or I'm placing this code in the wrong file, or wrong place. My question isn't about method errors, but just where to place this.
I'm unsure whether or not to place it in /config/application.rb; create and place the code in /config/forecast_io.rb; or to create and place it in /config/initializers/forecast_io.rb; or if it's supposed to go somewhere else entirely.
I've looked at the api docs, the ruby wrapper read me, and also have read other tutorials (they won't let me post more links, otherwise I would list them) - one is from hackpsu.westarate that is kind of different with using sinatra, went through the Treehouse tutorial on Rails scaffolding, and began their tutorial on creating an API to better understand REST, and have looked at other rails projects to see how they incorporate external API's, so I'm in the thick of it, and am banging my head against the wall because it seems so simple, but I'm not seeing it yet.
Thanks for your help!
David
Since the answer is contained in your question, I recommend you drop it: config/initializers/forecast_io.rb, as in your second link.
Are you saying I should change the question? Or that I had the answer in the question?
The problem with your question is that you don't understand how the boot process (initializers, to be specific) works in Rails, and yet you present a link in your question which, if you follow it exactly as it states, will give you a working app. So the question makes no sense to me.
You're right, I don't understand how boot process and initializers work. I've thrown myself in and am trying to learn as fast as possible.
It helped me to learn that placing a gem in my Gemfile already requires it by default so I don't have to type 'require etc...'
Thanks for the input, and my apologies for the question.
http://guides.rubyonrails.org/initialization.html
In Rails, placing a gem in your Gemfile already requires the gem by default, unless explicitly stated to ignore.
If you're still bothered about it though, you could add the require option to your gem as such:
gem 'forecast_io', require: 'forecast_io'
Now, for the rest of the configurations mentioned, you can create a file in your initializers and add the following:
#config/initializers/forecast_io.rb
ForecastIO.configure do |configuration|
configuration.api_key = 'this-is-your-api-key'
end
|
STACK_EXCHANGE
|
What is symplectic geometry?
EDIT: Much thanks for answers. As was pointed out, the question as it stands is a little too broad. Nevertheless, I don't want to delete it, because I think that such introduction-style questions can be answered without writing a book, rather something more like an introduction to a book and fits here. Moreover, commenters have linked to great resources, and this question might help someone else. I made a follow up strictly narrower question instead.
First some background, so that you know where I came from. But the question in the title stands as it is, if you want to answer without appealing to what is below, please do.
I am currently learning about Lie groups. One of the first things that I've seen are the classical groups, and the classical group that I want to talk about today is the symplectic group $\mathrm{Sp}(n,\mathbb{F})$.
The definition of $\mathrm{Sp}(n,\mathbb{F})$ I am familiar with is as follows:
Let $\omega$ be a skew-symmetric bilinear form on $\mathbb{F}^{2n}$, which is unique up to change of basis. It is given by the formula $$\omega(\mathbf{x},\mathbf{y}) = \sum_{i=1}^n{x_iy_{i+n}-y_ix_{i+n}}$$
Why is this symplectic form important?
We can then write out the definition
$$\mathrm{Sp}(n,\mathbb{F}) = \left\{ A: \mathbb{F}^{2n} \to \mathbb{F}^{2n} \mid \omega(A\mathbf{x},A\mathbf{y}) = \omega(\mathbf{x},\mathbf{y}) \text{ for all } \mathbf{x,y} \in \mathbb{F}^{2n}\right\}$$
I can see the analogue of $O(n,\mathbb{F})$. We also have some bilinear form that needs to be preserved, namely the inner product $\langle \cdot,\cdot\rangle$. But more importantly, elements of $O(n,\mathbb{F})$ are really easy to visualize, because I intuitively know what a rigid transformation is. So the important question for me is
How to visualize symplectic transformations?
And I tried to research this question, and I stumbled upon the topic of symplectic linear spaces and symplectic manifolds. A symplectic vector space is defined analogous to Euclidean vector space, but the inner product is again substituted by symplectic form.
What is a symplectic vector space, intuitively?
I saw that the intuition behind these things should be that $\mathbb{R}^{2n}$ should be treated as a space of positions and velocities, a phase space. And I don't understand it. But I feel that physical intuition would be really helpful.
What is the connection of classical mechanics with symplectic geometry?
I don't know classical mechanics, sadly, so a quick mathematical rundown would be appreciated.
All the questions that I've asked above could be summarized to one question:
What is symplectic geometry?
While I think this is an excellent question in the abstract, it really falls under the 'too broad' category. From the help center, on what types of questions to avoid asking: "Your questions should be reasonably scoped. If you can imagine an entire book that answers your question, you’re asking too much." There are multiple books that try to answer this question, so...
A very nice book to study is Arnol'd's "Mathematical Methods of Classical Mechanics", where he goes from the formalism of Newtonian mechanics to the Lagrangian formulation to Hamiltonian mechanics. The last one is a (somewhat) synonym for "symplectic geometry"; most of the research topics of the latter are motivated by the former.
I'd suggest Ana Cannas da Silva's notes: https://people.math.ethz.ch/~acannas/Papers/lsg.pdf
The best treatment I've seen on this is McDuff/Salamon.
@StevenStadnicki I don't imagine an entire book that answers my question, rather an entire introduction to a book. I think these introduction-style questions should have introduction-style answers, therefore drastically bounding their scope. Take the algebraic geometry one for example (with, to be honest, I was somewhat inspired by).
Quick "fake" answer: In classical mechanics one usually describes a particle measuring its position $q_1, \dots, q_n$ and momentum $p_1, \dots, p_n$. To describe how these change one needs to introduce a "Hamiltonian", i.e. a function measuring the energy of the system.
For a particle of mass $m$ moving in the ordinary space $\mathbb R^n$ it is:
$$H(q, p) = \frac{p_1^2 + \dots + p_n^2}{2m} + V(q)$$
where $V\colon \mathbb R^n\to\mathbb R$ is the "potential energy" of the particle. Then one solves a system of ODEs:
$$\begin{cases} \dot p_i = -\frac{\partial H}{\partial q_i} \\ \dot q_i = \frac{\partial H}{\partial p_i} \end{cases}$$
For example if you plug $n=1$ and $V(q) = kq^2/2$, you will get an ordinary harmonic oscillator $q(t)=A\cos(\omega t+\phi)$, $\omega^2=k/m$. (Similarly you get an expression for the momentum $p$).
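As a sanity check on the harmonic-oscillator claim above, one can integrate Hamilton's equations numerically and compare with $q(t)=A\cos(\omega t+\phi)$. The Python sketch below uses arbitrary illustrative values $m=1$, $k=4$, $A=1$, $\phi=0$.

```python
import math

# Numerically integrating Hamilton's equations for the 1-D harmonic oscillator,
# H(q, p) = p^2/(2m) + k q^2/2, and checking against q(t) = A cos(w t + phi).
# Parameter values are arbitrary choices for illustration.
m, k = 1.0, 4.0
w = math.sqrt(k / m)

def step(q, p, dt):
    # dot p = -dH/dq = -k q ;  dot q = dH/dp = p/m   (symplectic Euler)
    p -= k * q * dt
    q += (p / m) * dt
    return q, p

q, p = 1.0, 0.0            # A = 1, phi = 0
dt, t = 1e-4, 0.0
while t < 1.0:
    q, p = step(q, p, dt)
    t += dt
print(abs(q - math.cos(w * t)))  # small: numerical and analytic solutions agree
```

The small residual confirms that the ODE system above really does reproduce the closed-form oscillation.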
Now let's generalize. One starts with a configuration space that is a manifold $M$, used to measure the position of the particle. Local coordinates are our $q_1, \dots, q_n$.
Then one introduces the phase space $P=T^*M$ on which the local coordinates are $q_1, \dots, q_n, p_1, \dots, p_n$. The motion of the particle can be described by a path on $P$, which measures not only the position but the momentum as well. We do this by introducing a function $H\colon P\to \mathbb R$ and we try to find a vector field on $P$ such that:
$$i_X\omega=-dH,$$
where $\omega = dp_1 \wedge dq_1 + \dots + dp_n\wedge dq_n$ in local coordinates. (It is not obvious that it is globally defined). This (not incidentally) looks similar to the expression $\omega(\textbf x, \textbf y)$ you have written down in the question.
The point is that the whole dynamics is in fact encoded in the symplectic 2-form $\omega$. (If you have a Hamiltonian describing a particle, just find a vector field and solve an ODE to get the path).
Generalizing even further let's think about a symplectic manifold $(P, \omega)$ where $\omega$ is a distinguished 2-form with 'nice' properties (it's assumed to be closed and nondegenerate). In particular this gives some topological restrictions on $P$ – for example $P$ needs to be even-dimensional and orientable, with $\omega\wedge \dots\wedge \omega$ acting as a volume form.
Obviously one can organize such manifolds into a category and ask the usual questions – can we characterize them up to an isomorphism? (Called 'symplectomorphism'; strongly related to 'canonical transformations' of physics). Can we introduce any invariants? (Apparently there are no local ones as every symplectic manifold locally looks like $\mathbb R^{2n}$ with the symplectic form from your question).
As we can do classical mechanics on such manifolds, can we 'quantize' them and do quantum mechanics?
We have a nice additional structure; how does it interact with a Riemannian metric or a complex structure? (This leads to the Kähler geometry and Calabi-Yau manifolds of string theory.)
... and similar questions seem to be so ubiquitous that I'd risk to say: every modern differential geometer needs to learn symplectic geometry.
Full answer: This is too broad a subject to describe fully here. But it's definitely worth studying. I recommend:
Cohn's post,
Webster's post,
Cannas da Silva's notes,
Meinrenken's notes,
Butterfield's On Symplectic Reduction in Classical Mechanics,
Arnold's Mathematical Methods of Classical Mechanics,
Abraham and Marsden's Foundations of Mechanics,
McDuff and Salamon's Introduction to Symplectic Topology.
A "fake" answer is the answer I was looking for. I think that generally an introduction-style question should have an introduction-style answer, like with the algebraic geometry one. And I'm glad I put it that way and that it was not closed yet. :)
only slightly related but the answers are helpful in terms of classical mechanics: What does “symplectic” mean in reference to numerical integrators, and does SciPy's odeint use them?
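To make the integrator connection in the link above concrete, here is a minimal Python comparison; the step size and duration are arbitrary choices. For $H = (q^2+p^2)/2$, explicit Euler steadily inflates the energy, while the symplectic Euler variant keeps it bounded near its initial value.

```python
# Explicit Euler vs. symplectic Euler on the harmonic oscillator H = (q^2 + p^2)/2.
# Step size and number of steps are arbitrary illustrative choices.

def explicit_euler(q, p, dt):
    return q + p * dt, p - q * dt

def symplectic_euler(q, p, dt):
    p = p - q * dt        # update p first...
    return q + p * dt, p  # ...then q using the *new* p

H = lambda q, p: 0.5 * (q * q + p * p)

qe, pe = 1.0, 0.0
qs, ps = 1.0, 0.0
for _ in range(10_000):
    qe, pe = explicit_euler(qe, pe, 0.01)
    qs, ps = symplectic_euler(qs, ps, 0.01)
print(H(qe, pe), H(qs, ps))  # explicit Euler's energy has grown; symplectic stays near 0.5
```

This bounded-energy behavior is exactly what "symplectic" buys you in numerical integration: the discrete map preserves the symplectic form, so it exactly conserves a slightly perturbed Hamiltonian.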
|
STACK_EXCHANGE
|
Compile and execute variable declarations, assignment statements and basic I/O.
You should write a driver program that takes a text file as a command line argument and then
processes it using your compiler.
The compiler should parse the input for a sequence of statements and generate a code tree
representing those statements.
The code tree is then used to generate machine code instructions to be executed later.
One of the main additions to the compiler is the processing of statements.
These should be grouped into statement sequences.
Your parser should have rules (routines) for parsing statement sequences and generic statements.
For now, the statement sequence routine should parse as long as there is input.
The generic statement routine should be able to determine, from the current token, what type of
statement is to be parsed.
This is usually done through the recognition of keywords.
Anything else should be an identifier representing the start of an assignment statement (or function
call, but that’s a different project).
Statements are often terminated with semicolons, though this is not universally true.
However, all of the statements implemented in this assignment should be terminated with semicolons.
It’s often better to process the semicolon with each statement rather than do catch-all semicolon
handling after any type of statement.
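The statement-sequence and keyword dispatch described above, with per-statement semicolon handling, can be sketched as follows. This is Python for brevity (the assignment itself targets C/C++); expressions are reduced to single tokens, and read's concrete parenthesized syntax is an assumption.

```python
# Sketch of recursive-descent statement parsing with keyword dispatch.
# Each handler consumes its own trailing semicolon, per the advice above.

def parse_statement_sequence(tokens):
    stmts = []
    while tokens:                      # "parse as long as there is input"
        stmts.append(parse_statement(tokens))
    return ("seq", stmts)

def parse_statement(tokens):
    tok = tokens[0]
    if tok == "print":
        return parse_print(tokens)
    if tok == "read":
        return parse_read(tokens)
    if tok == "int4":
        return parse_declaration(tokens)
    return parse_assignment(tokens)    # any other identifier starts an assignment

def expect(tokens, want):
    assert tokens.pop(0) == want, f"expected {want!r}"

def parse_declaration(tokens):         # int4 name ;
    expect(tokens, "int4")
    name = tokens.pop(0)
    expect(tokens, ";")
    return ("declare", name)

def parse_read(tokens):                # read ( name ) ; -- syntax assumed here
    expect(tokens, "read"); expect(tokens, "(")
    name = tokens.pop(0)
    expect(tokens, ")"); expect(tokens, ";")
    return ("read", name)

def parse_print(tokens):               # print ( expr, ... ) ; -- exprs are single tokens here
    expect(tokens, "print"); expect(tokens, "(")
    args = [tokens.pop(0)]
    while tokens[0] == ",":
        tokens.pop(0)
        args.append(tokens.pop(0))
    expect(tokens, ")"); expect(tokens, ";")
    return ("print", args)

def parse_assignment(tokens):          # name <- expr ;
    name = tokens.pop(0)
    expect(tokens, "<-")
    value = tokens.pop(0)
    expect(tokens, ";")
    return ("assign", name, value)

tree = parse_statement_sequence(
    ["int4", "x", ";", "x", "<-", "5", ";", "print", "(", "x", ")", ";"])
print(tree)
```

A real implementation would of course call a full expression parser where this sketch pops a single token.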
The statements that should be handled in this assignment are print(), read(), variable declaration
statements for int4 data types, and assignment statements.
Arithmetic expressions will be expanded into something much larger, which for lack of a more
descriptive term, will just be called expressions.
The parsing for an expression will ultimately go through logical expressions, into relational
expressions, and then into arithmetic expressions. For now, your expression parsing routine should
call your arithmetic expression parsing routine. Relational and logical expressions will be added later.
An important change to your code tree creation is that each node involved in an expression should
have a value type associated with it.
One way to think of this value type is the type “returned” from a particular operation.
For example, if an addition node has two children who each have a value type of int4, then the
resulting value type for the addition should also be an int4.
For this assignment there should be three node value types: a null type, an int4 type, and a string constant type.
At the very end of the arithmetic expression parsing, where integer constants are parsed, you
should make changes to handle string constants as well.
The value type of the node generated will be a string constant.
The node generated should contain either the value of the string constant or a reference of some
kind to the location where the value is located.
(See string table, below.)
There are two basic ways to implement strings. Both use arrays of characters.
One terminates the array with a special value, often a zero byte.
The other keeps track of the number of characters used in the array. We will use the null-terminated approach.
Your compiler should have a string table associated with the compiled program at run-time.
This table should contain all of the string constants discovered by the parser during compilation.
The string table should be created and initialized when the compiler starts.
The values in the table can be inserted either during the creation of the code tree or at machine
code generation time.
There are no particular advantages of one approach over the other.
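One way to sketch the string table (in Python rather than the C/C++ the assignment calls for): pack null-terminated constants into a single byte buffer and hand out offsets into it. Sharing duplicate constants, as done here, is a design choice, not a stated requirement.

```python
# Sketch of a run-time string table: null-terminated constants packed into one
# byte buffer; the code tree / code generator stores offsets, not the strings.

class StringTable:
    def __init__(self):
        self.data = bytearray()
        self.offsets = {}              # constant -> offset, to share duplicates

    def intern(self, s: str) -> int:
        """Insert a string constant (once) and return its offset in the buffer."""
        if s not in self.offsets:
            self.offsets[s] = len(self.data)
            self.data += s.encode("ascii") + b"\x00"   # null terminated
        return self.offsets[s]

table = StringTable()
a = table.intern("hello")
b = table.intern("world")
print(a, b, table.intern("hello"))   # 0 6 0 -- "hello" is shared, same offset again
```

Whether offsets are recorded during code tree creation or at code generation time, the buffer itself only needs to exist before the generated code runs.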
You will need some helper functions, written in C/C++, to avoid many of the associated headaches.
These routines will be called by the generated machine code as needed. They are:
A routine that should take a single four-byte integer and print it to the standard output. There
should be no return type.
A routine that should take a character pointer and print it to the standard output as a null terminated
string. There should be no return type.
A routine that takes no input and returns a 4-byte integer. The value should be read from the standard input.
In addition, if you don’t already have them, you should create functions that
Take a four-byte integer and append it to the end of the program array as four bytes in little-endian order.
The same thing, but for eight-byte integers. This will be useful for placing pointers in the program array.
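The two append helpers can be sketched with Python's struct module; `emit_int4` and `emit_int8` are illustrative names, not prescribed ones.

```python
import struct

# Sketch of the two append helpers: emit a 4-byte integer and an 8-byte
# integer (e.g. a pointer) into the program array in little-endian order.

program = bytearray()

def emit_int4(value: int) -> None:
    program.extend(struct.pack("<i", value))    # "<" = little-endian, "i" = 4-byte int

def emit_int8(value: int) -> None:
    program.extend(struct.pack("<q", value))    # "q" = 8-byte int, for pointers

emit_int4(0x01020304)
print(program.hex())   # 04030201 -- least significant byte first
```

In the real C/C++ helpers the same effect comes from writing the integer byte by byte, shifting right 8 bits between writes.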
The print statement has the following syntax:
print ( expr, expr, … ) ;
i.e., a comma-separated list of expressions. The code tree node for this statement should have
children for each of the expressions.
The code generation for this statement consists of generating code for each expression in turn and
then calling the appropriate helper function according to the value type of the expression.
The read statement takes a single operand (or argument). The code tree node for this statement may
have 0 or 1 children depending on how you want to store the variable reference. Pick an approach
and use it consistently.
The code generation for this statement consists of calling the appropriate helper function and then
moving the resulting value into the storage location associated with the variable. Currently, only
4-byte integers can be read.
The assignment statement has the following syntax:
variable <- expr ;
The code tree node for this statement should have two children, one for the variable and one for the expression.
The code generation for this statement consists of generating the code for the expression and then taking the resulting value and storing it in the location associated with the variable, which can be found in the symbol table (see below).
Variables and Declarations
Currently only int4 variables can be declared. The declaration statement has the following syntax:
int4 variable_name ;
An entry in the symbol table should be created for each variable if one does not already exist.
If an entry does exist, then a duplicate variable error should be returned.
The entry should be created by the code tree generation routine at the time that the declaration statement is processed.
It is not necessary to create the storage for the variable during code tree processing, but this can be done if desired.
In expressions, when encountering a variable name, the symbol table should be checked.
If the name does not exist in the symbol table, then an undeclared variable, or non-existent symbol, error should be returned.
In variable declaration, it is tempting to not generate a code tree node. However, one is needed.
The code generation for this statement consists of creating storage for the integer if it has not already been done during the code tree stage.
Then (and this is the real reason for the node), generate code to set the variable to have an initial value of 0.
Symbol Table
A symbol table keeps track of every symbol used in the program and any information needed to assist in the implementation of that symbol.
A symbol should have a name and a symbol type.
For this assignment, the type should be either a null type or a variable type. Future types include that of function.
Each symbol should also contain information about the location of the symbol.
As this could take place in several different ways, each symbol should have a location type.
These types include memory, register, and stack.
The symbol should also contain additional information needed to support the location type, i.e. memory address, register label, stack offset, etc.
The symbol should also contain information about the data type associated with the symbol.
There should be two routines associated with the symbol table.
One is for searching for a symbol by name and symbol type.
The other is for inserting a symbol into the symbol table.
Both routines should somehow return a location for the symbol in question, either the one found (or not) or the one inserted.
NOTE: Be careful as to how the location of the symbol is returned and how the symbol table is implemented.
For example, it would be really easy to push the symbols into a C++ vector and return an address to each symbol as it is pushed into the symbol table.
Unfortunately, as the vector automatically resizes itself, it creates larger storage and copies old values into the new array.
Pointers to symbols in the old array will then no longer be valid.
Possible solutions involve fixed-size symbol tables or locations that don't involve pointers.
Integer Storage
You should create storage for the integer variables referenced in the symbol table.
This can be done at the time the symbol is created, or later in code generation.
The memory for the variables can be created dynamically, variable by variable, or as an offset into a larger block of memory specifically for variables.
For what it's worth, the latter approach more closely mirrors what typically happens in practice.
Code Tree
You should have a node type for a statement sequence. The children of this node will be the individual statements.
It is tempting to have a node type for a generic statement, but this is not needed.
You should have node types for the various statement types: print, read, assignment, and variable declaration.
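With the pointer-invalidation note in mind, here is a sketch of search and insert routines that return an index rather than a pointer, so entries stay valid when the underlying array grows. It is Python rather than the assignment's C/C++, and the field names are illustrative.

```python
# Sketch of a symbol table whose search/insert routines return an *index*
# into the table, which stays valid even if the backing array is reallocated.

from dataclasses import dataclass

@dataclass
class Symbol:
    name: str
    sym_type: str              # "null" or "variable" (functions come later)
    loc_type: str = "memory"   # memory / register / stack
    location: int = 0          # memory address, register label, or stack offset
    value_type: str = "int4"

symbols: list[Symbol] = []

def find(name: str, sym_type: str) -> int:
    """Return the symbol's index, or -1 if not found."""
    for i, s in enumerate(symbols):
        if s.name == name and s.sym_type == sym_type:
            return i
    return -1

def insert(sym: Symbol) -> int:
    """Append the symbol and return its index."""
    symbols.append(sym)
    return len(symbols) - 1

i = insert(Symbol("x", "variable"))
print(i, find("x", "variable"), find("y", "variable"))   # 0 0 -1
```

The same index scheme works in C++ with a `std::vector<Symbol>`: hand out positions, not `&symbols[i]`.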
|
OPCFW_CODE
|
Canon 600D Images
I've been playing with my new 600D and one thing is really starting to annoy me. When I view an image on the canon's screen, the image is clear and bright. The same is true also if I'm controlling the Canon via my Tablet (Slate 7) using provided software. However, when I view the images or play videos back using computer software (OS X and Win8) I've noticed the images are significantly darker than how they appeared on the Canon's display. Does anyone know why this is so, and how to get a true representation of the output file on the display?
edit
If you shoot in Raw + L mode, you can see the camera raw file (.cr2) which has a lot more brightness to it. Files are like 20M though
Camera LCDs lie like politicians!
Looooool. Love the politics reference
Different camera, exact same everything else. :)
You can't, you can see why with a simple experiment:
Walk into a very well lit room, set the camera to aperture priority (Av) and select reasonable exposure values (for example, f/5.6, ISO 400); also set the camera to capture raw files.
Turn off all the lights so that the room is fairly dark, take a picture of one of the walls (if you don't have a tripod it will come out blurry, that's ok), then look at it on the display - it will look very bright.
Turn on all the lights, take the same picture again, look at it on the camera LCD - it will look less bright.
Load the images into a raw processor (like Lightroom, Adobe Camera Raw, Darktable or RawTherapee), adjust the white balance using the same point, and look at the pictures side by side - they will have almost the same brightness!
That shows us that the way we see the image on a screen changes depending on the environment, so, even if we make two screens output the exact same image (and that isn't easy to do) they will look different to us.
And I'm intentionally not going into screen calibration and color calibration because that's a very complex thing and the question is only about brightness.
so, if you want the images to look brighter on the computer you choices are:
Take lots and lots of pictures, learn how the camera behaves and how you can expect the picture to look on the computer; also learn to use the histogram, then compensate based on your knowledge and not the LCD image.
Whenever you view images on the computer, turn off the lights first :-)
How does this change when you're recording video? And great answer thank you
You don't even need to turn off the lights when viewing an image on your computer. Just place the same relatively dark image on a white background and then a black background and compare.
In general, most camera LCDs are much higher contrast and much higher saturation than a general purpose computer monitor. They are tweaked this way because it makes the images look more vibrant on a small and low resolution display, but without more careful adjustment, they would look very artificial on a larger, higher resolution display at the same over-saturated and overly vibrant levels.
The difference between camera displays and desktop displays is just something that comes with the territory and is to be expected. It isn't necessarily a bad thing. The camera display gives you a somewhat accurate quick picture of the detail of the image that is quickly and easily visible and is generally more directly backlit than your computer monitor (allowing for more vibrance in general.) To get colors to be accurate and believable at larger size however, you still need to do some color work in post.
It also does help to have a good color quality monitor with calibration as desktop monitors often have notoriously inaccurate and poor color, particularly on the cheaper end. There is a good reason why people that need accurate color end up spending $600 on the low end and $2000+ on the high end for a 24 inch monitor for working with color. Generally either the contrast or brightness just isn't as high on most cheap consumer monitors. Newer LED monitors do allow for pretty strong contrast and brightness, but a lot of consumers still have TN panel LCD monitors. (LED monitors have their own issues as well, but that's beyond scope for here.)
So basically, it comes down to the fact that the purpose and design of the preview screen is very different from the purpose and design of your computer monitor. Additionally, as Nir points out, our perception of color is different based on context even if we have the exact same output from the monitor (though in 99% of cases, we don't.)
It's also worth pointing out that both are reasonably true representations of the output. The most true you could get would be to bring the black and white points well within the range of whatever display you are on so that you can see all the detail that was actually captured, but in practice, the bigger concern should be making the image look good and you should adjust the curves and black/white points such that detail is visible on the screens or prints that you will be using for presentation. The final image matters far more than what was actually captured.
You'd think Apple would have got this right with their Retina display. My designer observed the same problem with the images on his MacBook Pro that I have on my Win8 tablet.
@LukeMadhanga - the Retina display is S-IPS, but it still needs to be calibrated to get accurate color. Also, camera back-panel LCDs don't have particularly accurate color; rather, they are normally tuned to (over)show shadow detail.
Servlets bring Java to the server side. After applets and applications made it to the client desktop, it is time to bring Java to the server.
You might wonder what servlets are. A servlet is the server-side counterpart of an applet:
a Java program loaded by the web server that handles client requests.
This kind of functionality is much like CGI.
But servlets are platform independent, persistent, more secure, can use all of the Java features, and are much easier to integrate with applets.
The Java servlet API is available in the Java Servlet Development Kit (JSDK) from JavaSoft, and will be included as a standard part of JDK 1.2.
A downside to this glorious story is the availability of web servers that support servlets: not every web server does. JavaSoft has built a real Java web server that supports servlets, and there are extensions for web server software from Microsoft and Netscape that add servlet support to those servers. A very good extension is JRun from Live Software, which can be found at http://www.livesoftware.com/products/jrun/.
JBuilder provides a Servlet Wizard that makes it very easy to create servlets. The wizard makes a class that extends the HttpServlet class and can be used to build servlets. The wizard also generates an example HTML page that calls the servlet. So let's start and make a servlet.
Go to File | New to open the New.. dialog. You can see an icon for the Servlet wizard:
The first section is concerned with the name of your servlet and the package you want to build it in.
Now let's take a look at the second section, named Servlet style. If the first option, Implement SingleThreadModel, is checked, JBuilder adds code to your servlet that instructs the web server to let the servlet handle only one request at a time. So when a user requests the servlet while some other user is still using it, the requesting user has to wait until the other user is finished. In the real world you probably won't use this option much. We leave it unchecked.
The next checkbox asks us if JBuilder needs to generate an example HTML page that calls our servlet. Check this box so we can take a look at what JBuilder generates.
The third section of the first wizard screen is labeled Servlet methods. Here we can tell JBuilder which methods we want to override from the extended HttpServlet class. The doXXX() methods are used to customize the behaviour of the servlet according to the corresponding HTTP command: GET, POST, PUT or DELETE. When you want to use data from an HTML form, you need to override doPost(). And that is what we want to do right now, so check this checkbox.
The service() method is more generic. You can use it to see which HTTP command was used and act accordingly; this method is always executed when the servlet is called. When you check this option, JBuilder generates a different example HTML page than when you have checked one of the other options. This HTML page has an .shtml extension, and the servlet is called by a <SERVLET> tag instead of an HTML form. The extension tells the web server it needs to look for the <SERVLET> tag inside this HTML page and replace the tag with the output of the servlet. This way you can build HTML template files with <SERVLET> tags that are dynamically replaced when the user requests the page.
Click on Next to go to the second screen of the wizard:
On this screen we can add servlet parameters. If we haven't checked the service() method on the previous page, adding parameters on this screen will result in JBuilder adding parameters to our form in the HTML page, and code in the servlet to handle these parameters. Let's add a parameter that asks the user for his or her name in the HTML page, and insert the following values for the fields. If you have checked the service() method checkbox on the previous page, JBuilder will instead insert a <PARAM> tag with the correct values into the HTML page with the .shtml extension.
We are now ready with the wizard, and by clicking Finish we let JBuilder generate the HTML page and the servlet source code. At the end of this process the following two files are added to our project: Servlet1.html and Servlet1.java.
If we open the Servlet1.java file, we can see what JBuilder has already done for us. Notice the inclusion of all the necessary import statements at the top, the fact that Servlet1 extends HttpServlet, and three methods:
The init method is called only once in the lifetime of a servlet: when it is loaded by the web server. This method is very convenient for initializing variables that require a lot of resources or time. For example, a connection to a database is best set up in the init method, to avoid the connection time on every request. The default implementation of this method calls init from the HttpServlet class.
The doPost method is the method where it all happens.
This method is called every time a user submits the form that will activate the servlet.
The parameters of this method, request and response, are the handles to the server variables, form parameters and the output to the web browser.
The request object has a lot of very useful methods to read server variables, like protocol, version and much more.
This object also contains all the data that came with a request by the user.
The response object lets you set the content-type of your return message, error codes and more.
You can see that JBuilder already added code that will read in the parameter Name from the form and assign it to the variable pmUserName.
The next line sets the content-type of the response, so the browser will know it is getting HTML back.
Next an output stream to the browser is opened. We can use this stream to write to the browser window. JBuilder already added the following five lines that will show an empty HTML page in the browser with the title Servlet1.
And finally there is the inevitable getServletInfo method. It returns information about your servlet, which you have to provide; you can customize this information by returning a string of your own. Your web server can invoke this method, which makes it easier to administer the different servlets.
You have seen that a servlet is very straightforward and isn't that difficult to grasp. Now it is time to add some functionality of our own to this servlet. We want to add a property to the servlet that counts the number of connections to the servlet, and return an HTML page with the number of connections, the contents of the userName variable, the servlet information and a little static information.
To add the connection counter variable go to the first line after your class definition and insert the following line:
private int connectionCounter;
Next, add the following line to the init method:
connectionCounter = 0;
This will set the counter to 0 when the servlet is loaded by the web server.
return "Example servlet by Hubert A. Klein Ikkink for drbob42.com";
You can change this, of course, into anything you want as long as it is a string.
out.println("<body>");
Press Enter to insert a new line and add the following code:
out.println("<H1>Example servlet</H1>"); // HTML Heading1 format
out.println("Your IP address: " + request.getRemoteAddr() + "<BR>");
out.println("Your user name (value of form field): " + pmUserName + "<BR>"); // pmUserName variable
connectionCounter++; // raise counter by 1
out.println("# connections: " + String.valueOf(connectionCounter)); // return counter value
out.println("<HR>"); // HTML horizontal rule
out.println("<I>Output generated by: " + getServletInfo() + "</I>"); // servlet info
And we are ready! You can compile the code, and if everything went well there are no errors. If you get errors telling you JBuilder can't find the javax.servlet packages, you need to add these packages to your project's classpath in the project properties. There are two ways to achieve this:
<FORM Action=http://localhost/servlet/article.servlets.Servlet1 Method=POST> ... </FORM>
Copy the HTML page to your web server. You can now open a browser and enter the URL of the HTML page. Clicking on the submit button will activate the servlet and you will see the output in your browser:
Output from servlet
You will notice that if you return to the HTML page and submit another value, the counter will be increased. This shows that the servlet isn't closed down when the output is returned, but stays in memory.
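That stay-in-memory behaviour is the key difference from classic CGI, which spawns a fresh process per request. Since the article's full Java source is linked rather than inlined, here is the same idea sketched in Python with the standard library's HTTP server - a handler class whose class-level counter survives across requests (class and variable names are mine, not from the article):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CounterServlet(BaseHTTPRequestHandler):
    # class-level state persists between requests, like a loaded servlet's field
    connection_counter = 0

    def do_GET(self):
        type(self).connection_counter += 1
        body = ("<html><head><title>Servlet1</title></head><body>"
                "# connections: %d</body></html>"
                % type(self).connection_counter).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet
```

Start it with HTTPServer(("127.0.0.1", 8000), CounterServlet).serve_forever() and reload the page a few times: the count keeps climbing, exactly like the servlet's connectionCounter.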
As soon as the web server where this article resides supports servlets, you will be able to see this servlet live; until then you have to build and run it yourself to see how it works. Have fun!
Here is the complete source code and the HTML file to activate the servlet.
I recently finished reading…
C# 3.0 in a Nutshell: A Desktop Quick Reference (In a Nutshell (O'Reilly))
by Joseph Albahari, Ben Albahari
Read more about this title...
My reason for reading this book was to understand what new language features C# 3.0 brings to the table. The book explains the C# language right from the very beginning up until now, so some of it was a great refresher and some of it was quite boring. It covers not only the new language features but also several areas of the framework class libraries and the new libraries that came with the .NET 3.5 stack.
I was hoping the coverage of the new language features would show different usages of things like extension methods and lambdas, to get a sense of what I like and don't like. So far I'm not really a fan of the new query comprehension syntax. It's too SQL-ish for me... I prefer working directly against methods. But I might have to give it some time... enough ranting, here's some of what I've learned.
There’s tonnes, and tonnes of discussion on this topic right now. I love how it’s so much cleaner, and less verbose then anonymous delegates, but I see potential room for abuse. IMHO seeing lambda’s tossed around all over your code base is no better then over using anonymous delegates. I love the new ideas that lambda’s bring though…
"A lambda expression is an unnamed method written in place of a delegate instance. The compiler immediately converts the lambda expression to either:
- A delegate instance.
- An expression tree, of type
Expression<T>, representing the code inside the lambda expression in a traversable object model. This allows the lambda expression to be interpreted later at runtime…" - C# 3.0 in a Nutshell
I love the idea of building up an expression tree of delegates that chain together to solve an equation. One of the ideas I'm working on is understanding how to leverage an expression tree of lambdas to solve trivial mathematical equations, then possibly traversing the structure with a visitor to build out a display-friendly version of the equation.
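To make that idea concrete, here is a rough sketch (in Python rather than C#, with names I made up) of an expression tree whose nodes both evaluate themselves and render a display-friendly string - the render() walk plays the role of the visitor:

```python
import operator

class Leaf:
    """A constant value at the bottom of the tree."""
    def __init__(self, value):
        self.value = value
    def eval(self):
        return self.value
    def render(self):
        return str(self.value)

class Node:
    """A binary operation node: op computes, symbol is for display."""
    def __init__(self, op, symbol, left, right):
        self.op, self.symbol, self.left, self.right = op, symbol, left, right
    def eval(self):
        return self.op(self.left.eval(), self.right.eval())
    def render(self):
        return "(%s %s %s)" % (self.left.render(), self.symbol, self.right.render())

# (2 + 3) * 4
tree = Node(operator.mul, "*", Node(operator.add, "+", Leaf(2), Leaf(3)), Leaf(4))
print(tree.eval())    # 20
print(tree.render())  # ((2 + 3) * 4)
```

The real C# Expression<T> model is far richer, but the evaluate-versus-pretty-print split is the same trick.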
"The benefits of WPF are as follows:
- It supports sophisticated graphics, such as arbitrary transformations, 3D rendering, and true transparency.
- Its primary measurement unit is not pixel-based, so applications display correctly in any DPI
- It has extensive dynamic layout support, which means you can localize any application without danger of elements overlapping.
- Rendering uses DirectX and is fast, taking good advantage of graphics hardware acceleration.
- User interfaces can be described declaratively in XAML files that can be maintained independently of the "code-behind" files - this helps to separate appearance from functionality." - C# 3.0 in a Nutshell
"WCF is the communication infrastructure new to Framework 3.0. WCF is flexible and configurable enough to make both of its predecessors - Remoting and (.ASMX) Web Services - mostly redundant." - C# 3.0 in a Nutshell
"If you’re dealing with data that’s originated from or destined for an XML file, XmlConvert (the System.Xml namespace) provides the most suitable methods for formatting and parsing. The methods in XmlConvert handle the nuances of XML formatting without needing special format strings." - C# 3.0 in a Nutshell
"The compiler, upon parsing the yield return statement, writes ‘behind the scenes,’ a hidden nested enumerator class, and then refactors GetEnumerator to instantiate and return that class. Iterators are powerful and simple" - C# 3.0 in a Nutshell
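The closest analogue I know outside C# is Python's generators, where the language likewise writes the hidden enumerator for you the moment a function contains yield:

```python
def countdown(n):
    # calling countdown() doesn't run the body; it returns a generator
    # object - the "hidden nested enumerator" the C# compiler writes for you
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
```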
"Its underlying hashtable works by converting each element's key into an integer hashcode - a pseudo unique value - and then applying an algorithm to convert the hashcode into a hash key. This hash key is used internally to determine which 'bucket' an entry belongs to. If the bucket contains more than one value, a linear search is performed on the bucket. A hashtable typically starts out maintaining a 1:1 ratio of buckets to values, meaning that each bucket contains only one value. However, as more items are added to the hashtable, the load factor dynamically increases, in a manner designed to optimize insertion and retrieval performance as well as memory requirements." - C# 3.0 in a Nutshell
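The bucket-plus-linear-search behaviour described there is easy to model. A toy sketch of the chaining idea (not how .NET's Dictionary is actually implemented internally):

```python
class Bucketed:
    """Toy hashtable: hash the key to pick a bucket, then search that bucket linearly."""
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key: append to the chain

    def get(self, key):
        for k, v in self._bucket(key):   # linear search within the bucket
            if k == key:
                return v
        raise KeyError(key)

table = Bucketed(nbuckets=2)  # tiny bucket count to force collisions
for i in range(6):
    table.set("key%d" % i, i)
print(table.get("key4"))  # 4
```

With only two buckets the chains get long and lookups degrade toward linear scans, which is exactly why real implementations grow the bucket count as the load factor rises.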
"The data contract serializer is the newest and the most versatile of the three serialization engines and is used by WCF. The serializer is particularly strong in two scenarios:
- When exchanging information through standards-compliant messaging protocols
- When you need high-version tolerance plus the option of preserving object references."
C# 3.0 in a Nutshell
"A Mutex is like a C# lock, but it can work across multiple processes. In other words, Mutex can be computer-wide as well as application-wide" - C# 3.0 in a Nutshell
"A Semaphore is like a nightclub: it has a certain capacity, enforced by a bouncer. Once it's full, no more people can enter and a queue builds up outside. Then, for each person that leaves, one person enters from the head of the queue." - C# 3.0 in a Nutshell
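Python's threading.Semaphore behaves exactly like that nightclub, which makes the analogy easy to check:

```python
import threading

club = threading.Semaphore(2)             # capacity 2, enforced by the bouncer
assert club.acquire(blocking=False)       # first person enters
assert club.acquire(blocking=False)       # second person enters
assert not club.acquire(blocking=False)   # full - the third waits outside
club.release()                            # one person leaves...
assert club.acquire(blocking=False)       # ...so one gets in from the queue
```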
"The Thread class provides GetData and SetData methods for storing nontransient isolated data in ‘slots’ whose values persist between method calls." - C# 3.0 in a Nutshell
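Python's equivalent of those per-thread "slots" is threading.local: each thread sees only its own copy of any attribute stored there. A quick sketch:

```python
import threading

slot = threading.local()   # per-thread attribute storage, like GetData/SetData slots
results = {}

def worker(name, value):
    slot.data = value          # visible only to this thread
    results[name] = slot.data  # read back this thread's own value

threads = [threading.Thread(target=worker, args=("t%d" % i, i * 10)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results.items()))  # [('t0', 0), ('t1', 10), ('t2', 20)]
```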
"Ghost folders" showing up in Windows share
I have an Active Directory domain setup with Samba 4.2. This domain uses roaming profiles for all users, which are stored on a "Profiles" share on the Samba server. Furthermore, it uses folder redirection for several profile components, including AppData, Documents and Pictures. The redirects are stored on a separate "Redirects" share on the Samba server. All these settings are implemented using group policies, and at first glance appear to work just fine.
However, I logged in one user too early in the setup process (for testing purposes) and decided that I wanted to reinitialize his redirects. I therefore deleted the corresponding user folder ("eric") on the Redirects share. Server side, this all worked out. All physical files were deleted, and using smbclient to list the share shows no sign of the folder.
$smbclient //server/Redirects -Ueric -c 'ls'
Domain=[TRAMSTRAAT] OS=[Windows 6.1] Server=[Samba 4.2.0]
. D 0 Sat Apr 4 14:16:30 2015
.. D 0 Mon Mar 30 04:16:10 2015
peter D 0 Mon Mar 30 16:52:02 2015
hennie D 0 Mon Mar 30 04:46:26 2015
Administrator D 0 Mon Mar 30 04:58:34 2015
johnny D 0 Mon Mar 30 04:48:38 2015
On nearly all other Windows clients, the folder has disappeared too. However, on the Windows machine that created the redirects folder, the folder still appears to exist! (I can't post images due to reputation restrictions.)
This folder only contains an empty AppData folder. Trying to delete the AppData folder, or the containing folder, produces an error that says that the requested action can only be completed when the computer is connected to the network. Which it is.
This is messing immensely with the profile for this user of course, not allowing him to use any of the redirected locations.
I have tried to look up any synchronization or caching and resetting it, but to be frank, I wouldn't even know where to begin. Does anybody know how to let Windows know this folder does not exist anymore?
For future reference:
Okay, I seem to have resolved the problem by deleting the local copy of the roaming user profile, manually deleting the local user folder (since deleting the user profile via advanced system settings gave the error that the folder was not empty), deleting the profile server-side, and logging in as the user again. This appears to have set up the profile completely anew, including all the redirects. I suspect that this also reset some synchronisation settings so that the folders on the Redirects share could actually be created, instead of just using the local cached version that couldn't be updated.
import datetime
import time


class MessageHandler():
    """
    Standardize messages to pass between systems
    """
    def __init__(self):
        self.messages = {}

    def newId(self, id):
        if id not in self.messages:
            self.messages[id] = []

    def addMessage(self, id, msg):
        """add a message for an id, skipping duplicates"""
        self.newId(id)
        msg = str(msg)
        if msg not in self.messages[id]:
            self.messages[id].append(msg)

    def serviceIncidentCreated(self, id, service, baseurl, error=None):
        """
        message for initial service notification
        """
        url = '<a href="%s%s">%s</a>' % (baseurl, service["service_url"], service["name"])
        if error is None:
            self.addMessage(id, 'Incident created for %s' % url)
        else:
            self.addMessage(id, 'Failed to create incident for %s with error: "%s"' % (url, error))

    def incidentCreated(self, id, incident):
        """
        message for new PagerDuty Incident
        """
        incidenturl = '<a href="%s">%s</a>' % (incident["html_url"], incident["id"])
        serviceurl = '<a href="%s">%s</a>' % (incident["service"]["html_url"], incident["service"]["name"])
        self.addMessage(id, "Incident %s created for %s" % (incidenturl, serviceurl))

    def incidentAssigned(self, id, incident):
        """
        message for a PagerDuty Incident assignment
        """
        incidenturl = '<a href="%s">%s</a>' % (incident["html_url"], incident["id"])
        userurl = '<a href="%s">%s</a>' % (incident["assigned_to_user"]["html_url"], incident["assigned_to_user"]["name"])
        self.addMessage(id, "Incident %s assigned to %s" % (incidenturl, userurl))

    def incidentStatusChange(self, id, incident):
        """
        message for PagerDuty Incident status changes
        """
        updated = self.getPagerDutyTime(incident["created_on"])
        incidenturl = '<a href="%s">%s</a>' % (incident["html_url"], incident["id"])
        userurl = '<a href="%s">%s</a>' % (incident["last_status_change_by"]["html_url"], incident["last_status_change_by"]["name"])
        message = "Incident %s %s by %s at %s" % (incidenturl, incident["status"], userurl, updated)
        self.addMessage(id, message)

    def incidentLogs(self, id, logs, baseurl):
        """
        reformat incident log data
        """
        for d in logs:
            # time the log message was added
            created = self.getPagerDutyTime(d["created_at"])
            # type of log message
            logtype = str(d["type"])
            # url to the PD user
            try:
                agenturl = '<a href="%s%s">%s</a>' % (baseurl, d["agent"]["user_url"], d["agent"]["name"])
            except (KeyError, TypeError):
                agenturl = None
            # method by which log was added
            try:
                updatemethod = d["channel"]["type"]
            except (KeyError, TypeError):
                updatemethod = None
            # formatted log message
            msg = None
            if logtype == "annotate":
                msg = '"%s" - %s at %s' % (d['channel']['summary'], agenturl, created)
            # notification messages
            elif logtype == "notify":
                notif = d["notification"]
                msg = 'Notification to %s via %s was a %s at %s' % (notif["address"], notif["type"], notif["status"], created)
            # incident acknowledgment
            elif logtype == "acknowledge":
                msg = "Acknowledged by %s via %s at %s" % (agenturl, updatemethod, created)
            # incident unacknowledgment
            elif logtype == "unacknowledge":
                msg = "Unacknowledged due to %s at %s" % (updatemethod, created)
            elif logtype == "resolve":
                msg = "Resolved by %s via %s at %s" % (agenturl, updatemethod, created)
            elif logtype == "assign":
                url = '<a href="%s%s">%s</a>' % (baseurl, d["assigned_user"]["user_url"], d["assigned_user"]["name"])
                msg = 'Assigned to %s at %s' % (url, created)
            else:
                print("no message created for %s: %s" % (logtype, d))
            if msg is not None:
                self.addMessage(id, msg)

    def serviceInMaintenance(self, id, action, svc, mw, baseurl):
        """
        message with PagerDuty Maintenance Window details
        """
        serviceurl = '<a href="%s%s">%s</a>' % (baseurl, svc["service_url"], svc["name"])
        windowurl = '<a href="%s/maintenance_windows#/show/%s">%s</a>' % (baseurl, mw["id"], mw["description"])
        starting = self.getLocalTime(mw["start_time"])
        ending = self.getLocalTime(mw["end_time"])
        msg = "%s due to %s maintenance window %s starting: %s ending: %s" % (action, serviceurl, windowurl, starting, ending)
        self.addMessage(id, msg)

    def serviceIsDisabled(self, id, action, service, baseurl):
        """
        message for PagerDuty Service disabled
        """
        serviceurl = '<a href="%s%s">%s</a>' % (baseurl, service["service_url"], service["name"])
        msg = "%s because %s service is disabled" % (action, serviceurl)
        self.addMessage(id, msg)

    def serviceNotFound(self, id, key):
        """
        message for a missing PagerDuty Service
        """
        self.addMessage(id, "PagerDuty Service not found with KEY: %s" % key)

    def getOffset(self):
        """return the local UTC offset in seconds, DST-aware"""
        loc = time.localtime()
        if loc.tm_isdst == 0:
            return time.timezone
        else:
            return time.altzone

    def utcToLocal(self, dt):
        """
        convert UTC time to local time
        """
        return dt - datetime.timedelta(seconds=self.getOffset())

    def localToUtc(self, dt):
        """
        convert local time to UTC
        """
        return dt + datetime.timedelta(seconds=self.getOffset())

    def getPagerDutyTime(self, ts):
        """
        return PagerDuty UTC time as local time
        """
        return self.utcToLocal(datetime.datetime.strptime(ts, '%Y-%m-%dT%H:%M:%SZ'))

    def getLocalTime(self, ts):
        """parse an ISO timestamp, dropping its trailing UTC-offset suffix"""
        sub = ts[:-6]
        return datetime.datetime.strptime(sub, '%Y-%m-%dT%H:%M:%S')

    def getZenossTime(self, ts):
        """parse a Zenoss timestamp in either of its two formats"""
        if '.' in ts:
            ts = ts[:-4]
            dt = datetime.datetime.strptime(ts, '%Y/%m/%d %H:%M:%S')
        else:
            dt = datetime.datetime.strptime(ts, '%Y-%m-%d %H:%M:%S')
        return dt

    def getTimestamp(self, dt):
        """
        return seconds given a datetime object
        """
        return time.mktime(dt.timetuple())
I really should be going to bed instead of blogging, but…
Unlike (apparently) every other 'blogger in the world, I’m a Brown on the color test:
BROWN is a credible and stable color. Reminiscent of fine wood, rich leather, and wistful melancholy, brown is the color of academia. Most likely, you are a logical and pragmatic individual who is ruled more by your head than your heart. You have an inquisitive mind and an insatiable curiosity. Browns are great problem solvers. They gather all of the facts before coming to a timely and informed decision. You are intrigued easily and always find new ways to challenge your mind. Brown is an impartial and neutral color. Most likely, you know the difference between fact and opinion, and are open to many points of view.
(“Pretty accurate, actually”, he said modestly)
Want a free issue of the Nature with the article on sequencing Chromosome 21? Go here.
Damn! I’d been meaning to blog the Linux Virus thing for a couple of days, but kept forgetting. Flutterby beat me to it.
From Eatonweb, it sounds as if Blogger has been having some issues of late. If you’ve got CGI access on your server, but lack the time or knowledge to roll your own tools, Poor Man’s Blogger might be worth a look.
'Blogged so it will one day be in my database system: www.e-cell.org, a project to simulate a living cell in silico.
Also for the forthcoming database: the RNA webring, web sites relating to RNA research.
Some interesting and (to me) paradoxical results linking mothers’ eating practices and weight of their female children. In a nutshell, mothers who are trying to diet tend to restrict their daughters’ feeding too, which appears to lead to a lack of self-regulation when presented with an unregulated feeding opportunity, and subsequent “heaviness”. Leaving aside all the societal baggage that this story could raise, I just think it’s fascinating how much our parents shape us, even when they don’t mean to. The older I get, the more I see of my mother and (especially) my father in my mannerisms and reactions. I’m also much less freaked out about that than I would have thought I’d be.
Finally, I’ll pass along something from the DCLUG mailing list, from Brett McCoy, who credits his wife Amy for the following:
Blame Microsoft*(to the tune of “Blame Canada”)*
Times have changed
Our systems are getting worse
They won’t obey the users
And just make them want to curse
Should we blame the Internet?
Or blame society?
Or should we blame a great big monopoly?
With the registry we despise
And all their security lies
We need to break up the company
For causing agony!
(Enter Bill Gates)
Don’t blame me
'Cuz Windows Sucks
You should have stuck with DOS or Mac or better yet,
(Enter Windows user)
Well, my Pentium II
Running Windows 98
Is slower than my Amiga from 1988!
Well, blame Microsoft
It seems that everything’s gone wrong
Ever since Microsoft came along
They cannot make a real OS anyway
(Enter Mrs. Gates)
My son always said “640K’s enough!”, maybe less
Now I need 75 freaking megs just to run DNS!
Should we blame the hardware?
Should we blame the code?
Or the people who allow it to explode?
With a virus like ILOVEYOU
And that b*tch Melissa, too!
Shame on Microsoft!
This machine I will shoot
If I have to reboot
A simple breeze
Makes the screen saver freeze
Their help desk cannot help, of course
If only we had gone with open soooooooource!
See y’all tomorrow…
Hi guys for whoever wants to prepare PSM / Professional Scrum Master 1 Certification Preparation.
Materials to Learn
Some useful materials and reference to learn for it.
You can find Scrum Guide from: https://scrumguides.org/scrum-guide.html
During my preparation for the PSM, I copied it manually to my computer and highlighted the Scrum Guide 2017. I'm providing this in case it helps you with reading / learning.
I also found the Scrum Guide Reordered (2017) by Stefan Wolpers, https://age-of-product.com/scrum-guide-reordered/, which also helped me prepare. You can get the PDF file sent to your email automatically by subscribing to him.
Important: By the way, the Scrum Guide 2020 has been out since 18 Nov 2020, so please also check the current version. I did the certification before the Scrum Guide 2020 was out.
Scrum Open Assessments
You can also take an open assessment from scrum.org:
- PSM 1 Open Assessment: https://www.scrum.org/open-assessments
- PSPO 1 Open Assessment: https://www.scrum.org/open-assessments/product-owner-open
- PSD 1 Open Assessment: https://www.scrum.org/open-assessments/scrum-developer-open
Another open assessment I found, with pretty neat and clean questions to train yourself, is from Mikhail Lapshin's blog: https://mlapshin.com/index.php/scrum-quizzes/
Some suggested reading
- PSM Suggested Reading List and Topic Areas
- Ways to Learn about Scrum
- A useful article by my colleague: https://www.ru-rocker.com/2017/02/19/preparing-your-professional-scrum-master-1-psm1-certification/
The PSM 1 itself
For the PSM 1 itself some important things I experienced:
- The time - 60 minutes - is not much for 80 questions. You need to finish each question in about 45 seconds: about 40 questions in 30 minutes, or about 14 questions every 10 minutes. The faster you go, the more time you have left at the end to recheck the questions you were unsure about.
- During my exam, the exam environment provided a feature to bookmark questions, so use it wisely to mark the ones you want to recheck later.
- During my exam, I could not select text with my mouse, so the Google Translate add-on in my browser did not work. So have Google Translate ready, and be prepared to retype, since copying, pasting, and marking text is NOT easily possible in the limited exam time. They also suggest some Chrome add-ons that I did not try, since I was not sure whether they translate the whole page or just some text.
- Since it is only 60 minutes for the 80 questions, ensure your internet connection and enough battery for your laptop; e.g. have your phone's tethering ready in case your internet provider suddenly goes down. If you are using a desktop, make sure you won't have a power cut.
End word, I’m wishing you good luck. God bless you! 😊
Within the last few weeks, quite a few of my friends have lost their data. All their precious photos, videos, and other documents have vanished, forever. But here's the thing: you don't have to deal with this anymore. There are so many ways to take backups nowadays that losing data is pretty much inexcusable and should not happen in the first place. Here's what I do on all of my systems/servers so I don't have to worry about backups... ever.
- Clone your hard drive daily. On my main computer, my entire hard drive (really two RAIDed SSDs) is cloned (not backed up!) every... single... day (I use the awesome Carbon Copy Cloner and schedule the clone). If my drive ever crashes, I couldn't care less, as I have an exact clone. I can boot into it, easily restore it, and be up and running within minutes (as everything I do at home is on RAIDed SSDs).
- Wanna play devil's advocate? What if that clone fails? Don't worry: luckily, all of my computers are running exactly the same copy of OS X (read on for how I do this). If the clone fails, I have multiple backups on my other computers. And even if all my computers suddenly vanished, I'd still have a backup, as the clone is distributed to all of my servers in pieces every week, so no one can assemble it without getting into all my servers.
- Don't store local documents. All the files I use on a daily basis are stored in Dropbox/Google Drive/BitTorrent Sync (which I'm trying out on my servers). Any change I make gets synced up to the "cloud" and stays there (and is even revisioned through a few programs I wrote). And with BitTorrent Sync, my files are encrypted on my own servers, which is a huge plus; I really don't want Google or Dropbox to have my personal data.
- If you have multiple computers, sync parts of the OS. I personally have a program that sniffs out changed files on my "master" computer and syncs those changes to one of my file servers. After that is done, the next time I log in to any of my other computers, it'll pull down those changes and switch out the files. I do this with my applications folder, my home directory, and a few other directories to keep everything in sync.
- This is WAY easier to do on a Mac, as most of the applications data always reside in one of the library folders and the app itself... if you're on Windows, good luck with that.
- For servers, back them up at least daily! All of my servers are backed up to Amazon S3 and then put in glacier storage a few days later... this backup process happens every six hours for me and it backs up everything I need (SVN/Git repos, MySQL databases, web directories, config files, etc). And as I take full advantage of my servers, I virtualize most of them, which gives me the opportunity to easily back up all VMs as well. I do that every two weeks... I keep it to a minimum as these backups are quite large.
- A majority of my servers are also RAID mirrored, so if something does happen, I couldn't care less. And if all my disks magically failed, that's the power of virtualizing: my most important VMs will automatically switch over to another node seamlessly.
- And even with all these backups, I still do manual backups. I sometimes use my trusty home NAS and back up to that when I feel the need to. I have a private VPN spooled up so I can always go into my home system and access my files in a secure manner... this is usually where I keep my most important files; however, all my files get backed up to my servers one way or another. The most important ones are just encrypted and split up across various servers. :)
So that's how I do it.... it takes a while to stick to it, but after everything is set up, it makes things much easier. I do all of this because a good chunk of my data is super important. What's your backup strategy? Feel free to tweet me anytime.
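The six-hourly server backup described above can be sketched in a few lines of shell. This is a minimal, hypothetical version using temporary stand-in directories (the real pipeline also dumps MySQL and ships the archives to S3/Glacier, which is omitted here):

```shell
# One backup cycle: archive a source tree into a timestamped tarball.
src=$(mktemp -d)
dest=$(mktemp -d)
echo "server config" > "$src/app.conf"             # stand-in for configs, repos, web roots
stamp=$(date +%Y%m%d-%H%M%S)
tar -czf "$dest/backup-$stamp.tar.gz" -C "$src" .  # one archive per run
# Scheduling it every six hours is one crontab line, e.g.:
#   0 */6 * * * /usr/local/bin/backup.sh
tar -tzf "$dest/backup-$stamp.tar.gz" | grep -q app.conf && echo "backup ok"
```

In a real script you would point src at the directories you care about and add a step that uploads the archive off-site; a backup that lives only on the machine it protects isn't much of a backup.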
|
OPCFW_CODE
|
How to install a minimal Debian system from USB/CD. In this tutorial I'll show you how to install and set up a minimal Debian system from a CD or USB. First we need to download the ISO file: you can download the 32-bit version here and the 64-bit version here. Next you need to put the ISO onto a USB or CD. If you're on Windows, download UNetbootin here; it's pretty easy to use so I won't explain it, but if you can't work out how to use it, post your problems in the comments and I'll try to help you. If you're on Linux or BSD you can do it with the dd command.
dd if=/path/to/the/debian.iso of=/dev/sdb bs=1M
If your USB is in a different place, put that instead of sdb. Here's a screenshot of what it looks like putting the ISO onto a USB with dd; this is dd on FreeBSD, but it's the same on Linux. One quick thing I want to add: I won't be covering how to install over wireless, though I'll cover how to set up wicd after the install.
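Before running dd against a real device, it's worth sanity-checking the copy. Here is a safe-to-run sketch that uses an ordinary file in place of /dev/sdb (the paths are stand-ins; writing to a real device needs root and destroys its contents, so double-check the device name first, e.g. with lsblk on Linux):

```shell
# Demonstrate the dd invocation and verify the copy, using plain files.
workdir=$(mktemp -d)
head -c 1048576 /dev/urandom > "$workdir/debian.iso"   # 1 MiB stand-in ISO
dd if="$workdir/debian.iso" of="$workdir/sdb.img" bs=1M 2>/dev/null
sync
# If the checksums match, the copy is byte-for-byte identical.
a=$(cksum < "$workdir/debian.iso")
b=$(cksum < "$workdir/sdb.img")
[ "$a" = "$b" ] && echo "image verified"
```

The same checksum comparison works against the real device (cksum < /dev/sdb, reading back only as many bytes as the ISO is long), which catches a bad USB stick before you waste a boot attempt.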
Putting Debian onto a CD. From Windows you can use ImgBurn, which you can get here. For Linux, BSD or Mac you can use Brasero, or you can use the command line. Now for the install: on the first screen you come to, press enter on install; the netinstall disk only has one install option.
Select your language, there is not really anything that needs explaining here.
Set your host name. If you don't know what a host name is, you can set it to anything; for a desktop it will just be your PC's name. If you're setting up a server for a site or anything like that, set it as your domain name or whatever you want the server name to be. The next option is to set your domain name: if this is a server, put your domain name in; if it is a desktop, just leave it blank.
Next set your root password. Try to set it to something secure and unique; don't use the same password for everything.
Next setup your user account and then set that accounts password.
Next select whether you want Debian to use your full hard drive or not, then set up your partitions; I recommend the middle option. If you let Debian set up your partitions, you might want to change the file system type from ext3 to ext4. To do this, just press enter on the partition, then press enter on the "use as" part and select ext4, like in the images below.
Now select what repos you want to use to download updates and programs; try to pick a mirror as close to you as you can.
Now setup a http proxy if you want to use one.
Now select what you want to be installed. This is where we make it minimal: make sure you unselect everything, like in the image below. We will install Xorg and a desktop environment or window manager afterwards. After you have unselected everything, select continue.
When the grub dialog pops up asking if you want to install grub to the master boot record, select yes. After that is done, select continue and your PC will reboot into your Debian install; make sure you remove the CD/USB so it doesn't just boot back into the installer. That's the install all done, so now it is time to set up Xorg. After the reboot, log in as root and run the following commands to make sure everything is up to date.
apt-get update
apt-get upgrade -y
Now it's time to set up Xorg. Type in the following command:
apt-get install xorg
Next type startx to check that Xorg is working; it should start and just give you a terminal. Type exit. Now it is time to install a window manager or desktop environment. For Gnome:
apt-get install gnome
For Xfce:
apt-get install xfce4
For Kde:
apt-get install kde
For Fluxbox:
apt-get install fluxbox
Now log out, log in as your normal user, and type startx; you should go into the window manager/desktop environment you installed earlier. That's everything done. Below I'll add some useful applications you might want to install. To install things you just type apt-get install packagename, but if you really don't want to use the command line you can install Debian's GUI package manager with the following command. (To install things you need to be root; you can do this by typing su and then entering your root password.)
apt-get install synaptic
For wireless and easy network management, install wicd with the following command:
apt-get install wicd
A good browser:
apt-get install opera
|
OPCFW_CODE
|
How to convert a byte array to an input stream in C#

In C#, one line is all that is needed: wrap the array in a MemoryStream, which is a readable Stream, e.g. var stream = new MemoryStream(byteData);. Going the other way, you can read a stream back into a byte array with a BinaryReader, e.g. byte[] content = new BinaryReader(file.InputStream).ReadBytes(file.ContentLength);. If the length isn't known in advance (no Content-Length available), read incrementally into a buffer instead; knowing how big the array needs to be up front lets you allocate it once.

In Java, the equivalent is ByteArrayInputStream: InputStream is = new ByteArrayInputStream(decodedBytes);. For the reverse conversion, read the stream incrementally, or use a utility such as Apache Commons IOUtils.toByteArray(is), or the Guava library.

If the bytes represent text, you can also go through a string: Encoding.UTF8.GetString(byteData) in C#, or new String(bytes, "UTF-8") and getBytes() in Java. Don't do this for binary data, though; converting your bytes to char and back can destroy them.
|
OPCFW_CODE
|
XR Subraces Mark 3 Apr 9, 2013 5:28:06 GMT
Post by FunkySwerve on Apr 9, 2013 5:28:06 GMT
giving EWF instead of both SF/ESF conc would do it, but that's a lot of feats. Also fits for staff monks.
I know WS/EWS was in the initial subby, however that allows the entire spec progression of feats, right?
You could also just give it a good special, potentially one that fits staffy. That would potentially offset the spiker's lead of 20% blud, +2 cha, Human (+1 skill/lvl and more flex, such as no LSA tumble and monk splash). Going from Human to HO cost 1 skill/lvl, way more than the SF/ESF conc made up for if you are counting skill points.
I had suggested 3 before:
+1 die to FW/SW if cast on self.
EV gets a contingency like effect 1/rest
+10 to dispel checks
Immunity to Area Mord (was this considered OP to begin with?)
I'm trying in earnest to come up with reasonable ideas. Some people put a lot of time into these subs, trying to respect that!
Here are the SM attributes that most of the builds that might use this class wouldn't have
Qstaff (staff monk shares this)
high dodge rate, due to conceal
The specials I noted above try to fit this, as would something like "epic dodge", but that's way OP.
You're thinking in the right direction, but ideally it'll fit the HOM sub as well as the quasi. We *COULD* swap to Half-Eldritch Giant, but I'd really prefer to avoid that, and they seem similar in nature. Armor Skin would also fit the sub, but accomplishes neither of the other two goals. To be clear, there are three basic goals:
1) Fit the characteristics to the sub
2) Fit the characteristics to the quasi/class
3) End up being the best set of characteristics for the quasi/class
Not necessarily in that order. Probably the reverse of that order. But 1 and 2 are pretty close in importance, waaaaaaaaaay back behind #3.
BTW, Ibixians get "fast and +2 vs KD checks" as a special, and that's great. If you'd given them "intim 4" as a special I'd have said that was silly as well. For almost all of the subbies, skills are given to meet reqs, not as a special.
You place too much importance on columns. And labels. And labeled columns. Everything is a special. Hell, the Slinger subby had 63 listen in the skill column at one point. Isn't that special? Church lady sure as hell thinks so. And I bet someone's going to use that Intimidate to make some bizarre build anyway.
|
OPCFW_CODE
|
Workspaces, Groups, Categories
I am reflecting on categories (the issue, eh?), and their enlargement
to manage workspaces and groups. I might be damn wrong on some of the
following statements, please point it out to me!
- Categories and objects
Categories form a hierarchical ('tree' structure), at the base of which are objects.
A category can have n child categories, but every category has only one parent.
At the base of this tree there are objects, which, differently from categories, can belong to multiple categories. A categorized object can belong to multiple categories, but categories can't. So at the lowest level, the object base of the categories pyramid, and for one level only, this becomes a 'net' structure.
There is a difference between the terms 'is child of' (a category of a single category) and 'belongs to' (an object to one-or-more categories).
Workspaces are categories and objects at the same time! And workspaces are a different type of object, because they themselves contain other objects that cannot be contained in any other workspace.
And I think they are the only objects that can contain other objects.
Image galleries are objects, but the single images in them are not. You can categorize an image gallery, but not its images.
So if you consider a ws as a category, it can be child of only one category, and as a category it stays at the top of the category tree. If you consider the workspace as an object, it could belong to multiple categories, which clashes with the previous statement. In fact it is a category at the top of the category tree, and an object of this same category.
Can you categorize categories? (joke)
Or, more realistically, think of different category types. Maybe this is the way to go... Maybe think of categories whose type allows them to be child of multiple parents, and whose objects are in reality object containers like the categories themselves, but with a different logic and structure inside them.
Then, moreover, any object belonging to a ws (e.g. a resource) has only one parent, the workspace it belongs to. For this reason, I think, categories have been used up to now: the easy grouping of a workspace's resources.
Also, ws resources can't belong to multiple workspaces, but as objects they belong to the workspace's category and should be able to belong to any other category, because we want to be able to freely categorize these too (e.g. a wiki page belongs to its ws category but may also belong to other categories).
For me, categories are useful to group the objects of a workspace, and that's all. I'd even find a different mechanism to do this, and be free to categorize workspaces and their objects as usual.
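The tree-versus-net distinction above can be sketched with a filesystem analogy (all paths hypothetical): directories stand in for categories, so each has exactly one parent, while hard links let one object file "belong to" several categories at once.

```shell
# Categories as directories (one parent each = 'tree'); objects as files
# linked into several categories (= 'net' at the lowest level only).
root=$(mktemp -d)
mkdir -p "$root/objects" "$root/cat/docs" "$root/cat/images"
echo "an image gallery" > "$root/objects/gallery1"
ln "$root/objects/gallery1" "$root/cat/docs/gallery1"    # gallery1 belongs to 'docs'
ln "$root/objects/gallery1" "$root/cat/images/gallery1"  # ...and also to 'images'
[ -f "$root/cat/docs/gallery1" ] && [ -f "$root/cat/images/gallery1" ] \
  && echo "object in two categories"
```

The directory tree can never express a category with two parents, which is exactly the restriction the post describes; only the leaf objects escape it.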
- Another example is permission groups.
Groups don't form a 'tree' structure; theirs is a 'net' structure. A group can contain n groups and be contained in n groups. There is not the 'one-parent-many-child' relation here.
The RolePerm stuff introduced de facto a new group type: the object permission template group. No user should be assigned to this group. Maybe it can be handy to include in it some other groups of this same type, the object permission template group type, but...
So now we have
- '(tiki-wide) Permission Groups' type of groups, that can hold multiple other
Permission Groups' and/or users and be included in multiple groups.
This is 'net' structure.
- 'ObjectPermission Templates' type of groups, that will not contain any
user, but may (?) be used to contain multiple other
'ObjectPermission Groups templates'
This also is a 'net' type of structure
What I miss is a 'UsersGroups' type of groups, that can contain only
users. It's something that exists 'de facto', but it would be better
if stated and clear somehow. That would be handy. This is also a 'net'
type of structure
All these 'net' structures don't fall into the 'tree' structure of categories. Categories are more fit to handle the 'level' managing of permission groups, because that is a 'tree' structure.
> > With respect to 1.6.3, I see an immediate need to find the
capability to control the addition of groups (only ws and children's
group) for objectperm tiki_p_admin_ws, and allow only adding user to
predefined ws groups (no adding new groups) for objectperm
> > Xavi told me this can be done at the group level, maybe through user
> Sure. If you create user levels then you are able to administer
permissions independently for each of them.
> > pingus
Last wiki comments
|
OPCFW_CODE
|
Uploading large files is a constant headache for developers. Pains like latency, speed, timeouts and interruptions, especially over mobile devices, are difficult to avoid. However, over the years it has become increasingly important that your application be able to handle large files.
For images, a free image tool that reduces large image files without sacrificing quality can help; such a tool will automatically convert your GIF into a PNG format and compress the image to make it easier to upload. Free image hosting services such as ImgBB let you upload pictures, photos, and big image files, and offer integration solutions for posting images to forums. For documents, you can upload files and folders to Google Drive to view, share, and edit them; note that an uploaded file takes up space in your Drive even if you upload it to a folder owned by someone else. For programmatic uploads, Azure Blob Storage also supports large blobs, including pausing and resuming.
In ASP.NET, all it takes to overcome the framework's upload size limit is a few modifications to the project's web.config file: to enable large file uploads you change the values of the attributes that control the maximum request size on the relevant configuration elements. In WordPress, the default maximum upload file size is often as low as 2 MB, which is why large images trigger a maximum-upload-size error. In PHP, uploading a file from a web form is straightforward, but you can't rely on users to shrink their files first: you can ask them to resize their images before uploading, but let's face it, they won't.
Another approach is to upload files through a web service from a Windows Forms application. This does not rely on the ASP.NET file uploader control and gives the developer the opportunity to upload files programmatically and without user intervention.
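As a sketch of those web.config modifications for classic ASP.NET (the values below are illustrative, not from this article; note the mismatched units: maxRequestLength is in kilobytes and applies to the ASP.NET runtime, while maxAllowedContentLength is in bytes and applies to IIS 7+ request filtering):

```xml
<configuration>
  <system.web>
    <!-- Allow requests up to ~1 GB (value in KB) and give them more time -->
    <httpRuntime maxRequestLength="1048576" executionTimeout="3600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- IIS 7+ limit, in bytes; must be raised to match -->
        <requestLimits maxAllowedContentLength="1073741824" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```

Both limits must be raised together on IIS 7 or later; whichever is lower wins, and the failure mode (404.13 vs. a request-length exception) differs depending on which one rejects the upload.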
If you keep running into browser size limits when transferring a really big file, there are free and paid services for emailing or hosting large files; with pCloud Transfer, for example, you can send large files to anyone, no registration needed.
For WordPress specifically, be aware that if you upload files at their original size, they will be available to download for anyone who visits your site, even if you only embed a smaller version into your gallery or blog, unless you want to give away your digital negatives for free.
The simplest way to upload images on Imgur is to click the "New post" button at the top of the page, which can be found in the header on all Imgur pages. From there you can click on "browse" to find images stored on your device.
|
OPCFW_CODE
|
Configuring KOTS RBAC
This topic describes role-based access control (RBAC) for Replicated KOTS in existing cluster installations. It includes information about how to change the default cluster-scoped RBAC permissions granted to KOTS.
About Cluster-scoped RBAC
When a user installs your application in an existing cluster, Kubernetes RBAC resources are created to allow KOTS to install and manage the application.
By default, a ClusterRole and a ClusterRoleBinding are created that grant KOTS access to all resources across all namespaces in the cluster: the ClusterRole has a single rule with apiGroups: ["*"], resources: ["*"], and verbs: ["*"], and the ClusterRoleBinding binds it to the KOTS ServiceAccount (a subject of kind: ServiceAccount).
Alternatively, if your application does not require access to resources across all namespaces in the cluster, then you can enable namespace-scoped RBAC for KOTS. For information, see About Namespace-scoped RBAC below.
About Namespace-scoped RBAC
Rather than using the default cluster-scoped RBAC, you can configure your application so that the RBAC permissions granted to KOTS are limited to a target namespace or namespaces.
Namespace-scoped RBAC is supported for applications that use Kubernetes Operators or multiple namespaces. During application installation, if there are additionalNamespaces specified in the Application custom resource manifest file, then Roles and RoleBindings are created to grant KOTS access to resources in all specified namespaces.
By default, for namespace-scoped installations, a Role and a RoleBinding are created that grant KOTS permissions to all resources in a target namespace: the Role has a single rule with apiGroups: ["*"], resources: ["*"], and verbs: ["*"], and the RoleBinding binds it to the KOTS ServiceAccount (a subject of kind: ServiceAccount).
For information about how to enable namespace-scoped RBAC for your application, see Enable Namespace-scoped RBAC below.
Enable Namespace-scoped RBAC
To enable namespace-scoped RBAC permissions for KOTS, specify one of the following options in the Application custom resource manifest file:
supportMinimalRBACPrivileges: Set to true to make namespace-scoped RBAC optional for existing cluster installations. When true, cluster-scoped RBAC is used by default, and users must pass the --use-minimal-rbac flag with the installation or upgrade command to use namespace-scoped RBAC.
requireMinimalRBACPrivileges: Set to true to require that all installations to existing clusters use namespace-scoped access. When true, all installations use namespace-scoped RBAC automatically, and users do not need to pass the --use-minimal-rbac flag.
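As a sketch, an Application custom resource that sets requireMinimalRBACPrivileges would look like this (the metadata name and title are placeholders, not values from this document):

```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: my-application   # placeholder
spec:
  title: My Application
  # Require namespace-scoped Roles/RoleBindings for every
  # existing cluster installation of this application:
  requireMinimalRBACPrivileges: true
```

Swapping the field for supportMinimalRBACPrivileges: true instead would keep cluster-scoped RBAC as the default and leave the choice to the user via --use-minimal-rbac.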
For information about limitations that apply to using namespace-scoped access, see Limitations below.
Limitations
The following limitations apply when using the requireMinimalRBACPrivileges or supportMinimalRBACPrivileges options to enable namespace-scoped RBAC for KOTS:
Existing clusters only: The requireMinimalRBACPrivileges and supportMinimalRBACPrivileges options apply only to installations in existing clusters.
Preflight checks: When namespace-scoped access is enabled, preflight checks cannot read resources outside the namespace where KOTS is installed. The preflight checks continue to function, but return less data. For more information, see Define KOTS Preflight Checks.
Velero namespace access: Namespace-scoped RBAC does not grant access to the namespace where Velero is installed in the cluster. Velero is a requirement for configuring backup and restore with snapshots.
To set up snapshots when KOTS has namespace-scoped access, users can run the kubectl kots velero ensure-permissions command. This command creates additional Roles and RoleBindings to allow the necessary cross-namespace access. For more information, see velero ensure-permissions in the kots CLI documentation.
For more information about snapshots, see About Backup and Restore.
Air Gap Installations: For air gap installations, the requireMinimalRBACPrivileges and supportMinimalRBACPrivileges flags are supported only in automated, or headless, installations. In headless installations, the user passes all the required information to install both KOTS and the application with the kots install command. In non-headless installations, the user provides information to install the application through the admin console UI after KOTS is installed.
In non-headless installations in air gap environments, KOTS does not have access to the application's .airgap package during installation. This means that KOTS does not have the information required to determine whether namespace-scoped access is needed, so it defaults to the more permissive, default cluster-scoped RBAC policy.
For more information about how to do headless installations in air gap environments, see Installing in an Air Gap Environment in Using Automation to Install in an Existing Cluster.
Changing RBAC permissions for installed instances: The RBAC permissions for KOTS are set during its initial installation. KOTS runs using the assumed identity and cannot change its own authorization. When you update your application to add or remove the requireMinimalRBACPrivileges or supportMinimalRBACPrivileges flags in the Application custom resource, the RBAC permissions for KOTS are affected only for new installations. Existing KOTS installations continue to run with their current RBAC permissions.
To expand the scope of RBAC for KOTS from namespace-scoped to cluster-scoped in new installations, Replicated recommends that you include a preflight check to ensure the permission is available in the cluster.
|
OPCFW_CODE
|
How I write case studies for my product design portfolio
The end of last week was a tough one at work. My company is laying off 10% of its employees, roughly 8,000 people. Luckily I was not part of that 10%, but it's still an awful thing to experience. Apart from offering my help, I keep trying to think of other ways I can support my colleagues. This post is a small attempt at that.
Many designers have been affected and I imagine that updating their portfolios will be an important task for many in the near future. Updating portfolios is always an arduous task, especially if you haven’t done it for a while, so I thought I’d help.
The following is a framework I created for myself when I did my last website redesign in 2021. It was the first time I wrote proper case studies and I did a lot of research on how to do them right. I’m proud of the two I wrote1 and I’ve received a lot of great feedback on them from industry professionals. I’ve also shared this framework with some of my design mentees working on their portfolios and it’s been helpful for them so hopefully it’ll help you too.
Please let me know if you use it and mostly if you have any feedback or ideas on how to improve it. If you have any question, drop me a comment here, reach out through LinkedIn or maybe book a call with me and we’ll chat about it. Good luck!
A good portfolio is not just a collection of things you made. It is a collection of how you think.
Story over Process
One common issue I see with case studies is that people over index on their processes. The case studies end up becoming an intro lesson on tools or the design process instead of telling the story of a particular project that highlights the designer’s experiences, skills, and strengths. So my advice is to focus on the story you’d like to tell and let that be the driver of your writing.
Insights over Process
This second principle reinforces the first one with a nuance. While the story is the driver, your own insights, learnings or takeaways from it are one of the most important parts of a case study. Being part of a design project is not particularly special for a potential recruiter but what you learn from it is. Tell stories of, for example, how you came up with an unexpected solution, situations or people that made you change your mind, or ideas that didn’t work and why.
You over Industry standards
Another common mistake is the notion that you shouldn’t deviate from apparent industry standards or ways of doing things. But that’s not the case. There could be good reasons to start a project by doing a high-fidelity prototype instead of extensive research. You might not do a design system for a project, and that’s ok too. The double diamond process is not a requirement for every design project. Highlight your approach, explain your thinking, talk about the impact of your decisions, and help people get to know you as a professional designer.
Before you start writing, ask yourself:
- What’s the story I’d like to tell?
- Why am I telling this particular story?
Here’s an example from the case study I’m currently writing…
What’s the story I’d like to tell?
Creating spaces to connect and discuss what we love most about our products with the goal of uniting internal teams through a shared definition of quality.
Why am I telling this particular story?
- Shows how I put my values of compassion and joy into practice
- Highlights my workshop design and facilitation skills
- It’s about design creating impact at scale by providing guidance to make difficult product decisions.
As you write your case study, keep these questions in mind:
- Could someone with zero context understand this?
- Is this adding value or noise to the story?
- Did I learn anything from this that I could highlight?
- Am I using my own words?
- Create a new document in a writing app so you can focus on the content
- Answer the story questions at the beginning of the document to keep them top of mind
- Select the most important sections from below and add them to your document
- Start with the bare minimum, you can add more later
- Write the content for those sections
- Use your own words, plain language, and avoid jargon
- Do your best to not edit as you write
- Avoid acronyms unless the acronym is widely known in the industry
- Balance the use of “I” and “we”. While it’s important to highlight when you collaborated with others, recruiters want to understand the value that you personally added to the project.
- Go back to the list of sections and check if there’s anything else you could add that would be interesting to tell.
- Once you have a first draft, start editing.
- Aim for short sentences and paragraphs.
- Write titles for each of your major sections that summarise the content in one short sentence
- So instead of saying “Impact” or “Results”, I’d say “The color contrast violations dropped an order of magnitude.” I have to thank César Astudillo for this nugget of wisdom.
- Read it all and ask yourself: “Have I told the story I wanted?” If not, keep editing.
- Move your content to wherever you want to publish it
- Don’t overthinking it, it can be as simple as a Notion page
- Include media (images, prototypes, videos…) to illustrate the case study
- Publish before you think it’s ready. You can always iterate and add more information in the future. Once it’s public you’ll have an even stronger incentive to finish it.
- Ask colleagues, friends, and family for feedback before you send it to recruiters.
Here’s an unnecessarily extensive list of possible sections you could add to your case study. My intention is that this will serve you as inspiration on what to write about. You definitely don’t need to add them all, not even most. Choose only those that fit your project and story.
- Start and finish date
- Your role in the project
- What you did
- Team and their roles
- Other teams involved
- Overview of the design process
Understand the Problem
- How did this project get started; or
- “How might we…”
- Problem statement
- Target user / Persona
- Needs (user, business, technical…)
- Current experience
- Pain points
- Customer insights
- Jobs To Be Done
- Definition of success
- Success metrics
- Out of scope
Ideate the Solution
- Design principles
- Information Architecture
- Key insights
- Key design decisions
- Design details
- Iteration ideas
- Pivots and why
- Ideas that didn’t work and why
- Ideas shot down
- Aspects skipped to deliver on time
- Design debt
- Metrics / Data
- User feedback
- Internal feedback
- Media coverage
- Workflow insights
- What would you do differently next time
- Next steps for the project
For confidentiality reasons my case studies are password protected. If you’re interested in using them as reference to write your own, please reach out to me in private and I’ll share the password.↩︎
|
OPCFW_CODE
|
Portable Shell Programming
The point of a standard isn't even whether it's ideal or particularly sensible,
it's that a compliant program produces consistent results on compliant platforms.
Diverging in the name of whatever benefit just means that one has to work harder
to produce a portable program. -- Richard L. Hamilton, in comp.unix.questions
2016-07-12 (see recent changes)
- Peter Seebach: "Beginning Portable Shell Scripting: From Novice to Professional" (Apress)
2. POSIX / SUS (Single UNIX Specification), and implementations
Shell and Utilities are covered particularly by these volumes:
SUS XCU / POSIX.2 / IEEE 1003.2
Understanding of the "Base Definitions" is assumed for understanding "Shell and Utilities".
(From the FAQ:
"readers should be familiar with it before using the other parts,
[...] reduces duplication [...] and ensures consistent use of terminology.")
The "XSI" option describes extensions which are required on UNIX implementations.
See also "Utilities" for the built-ins which are not special.
Deep links seem to work. However, please register online at the Open Group if you access their documents. Andrew Josey (Open Group) explains why:
"We do prefer folks to register to access the specification if they can,
so we can track interest in the specification -- which then helps us to
justify making it available."
(toc & keyword search Open Group Base Specifications Issue 7),
UNIX V7, Commands and Utilities V5 (list of certified systems)
POSIX.2:2008, IEEE Std 1003.2-2008,
ISO/IEC/IEEE 9945:2009 (or :2008),
The Austin Group Common Specifications,
2008 TC1 (Technical Corrigendum) Draft 2
- "Shell and Utilities" / "Shell Command Language"
explains the language itself in detail
(deep links toc and
- "Shell and Utilities" / "Utilities" / "sh"
in contrast describes options, environment and command line editing
- The important difference between SUSv4 and POSIX: XCurses is not part of POSIX.
(toc & keyword search Open Group Base Specifications Issue 6),
Open Group Base Specifications Issue 6, Shell and Utilities (XCU), Issue 6
UNIX 03, Commands and Utilities V4 (list of certified systems)
POSIX.1:2001 (includes POSIX.2:2001), equivalent to IEEE Std 1003.1-2001 / -2004 (including 1003.2-2001/-2004),
ISO/IEC 9945-2:2003 (/Cor 1:2004),
SUS and POSIX have been merged with this release.
2004 includes two technical corrigenda
- Navigation in v3 works the same way as described above for v4.
Comparison of SUSv3 and POSIX, e.g. the table "Commands".
The important difference: XCurses is not part of POSIX.
Commands and Utilities, Issue 5
Commands and Utilities V3
(list of certified systems)
related: POSIX.2:1990/92 , equivalent to IEEE Std 1003.1-1990/92 (and 1003.2(/.2a)-1992),
related: ISO/IEC 9945-1:1996
Commands and Utilities, Issue 4, Version 2,
Commands and Utilities V2
(list of certified systems)
related: ISO/IEC 9945-2:1993
Try "Commands and Utilities, Issue 4, Version 2" via
(access is restricted at the time of this writing)
- How to test conformance?
A lite version of the "Open Group's full VSC Commands and Utilities
Test Suite" exists; see the entry "VSC lite Test Suite" or "VSC-PCTS2003", respectively.
The complete suite is also available after you have registered as a developer
of standard implementations or an Open Source project.
- The Wikipedia entry lists
some background and certified operating systems.
- ksh88, ksh93:
See packages notes and changes,
and a current ksh93 package on
for the following files.
The naming is confusing at first, here's what they mean:
- COMPATIBILITY, "ksh88 vs. ksh 93"
- OBSOLETE, "ksh features that are obsolete in ksh93"
- RELEASE88, "Changelog from ksh88 to ksh93"
- RELEASE, "Changelog for ksh93"
- RELEASE93: an older variant of RELEASE
In Usenet, you can even find an archived version of the changelog for ksh88, RELEASEa (local copy).
- POSIX notes for bash
(local snapshots for my convenience), better see the bash distribution: bash/CWRU/POSIX.NOTES.
bash knows a POSIX mode. But keep in mind that it only switches
behaviour where bash features would collide with POSIX. Apart from that,
bash-specific features are not affected.
- dash is a modern
Almquist shell variant.
It's aiming at a POSIX-only feature set.
The Linux distribution Ubuntu has switched to dash as system shell with release 6.10.
lists issues about the migration from bash to dash.
The release goals for the Debian distribution Lenny also picked up this plan, but the
move was postponed.
- Free- and NetBSD have an Almquist shell descendant as /bin/sh,
also aiming at POSIX.
Their manual pages are online, hosted by freebsd.org
(and there are even manual pages of numerous other Unix variants).
- pdksh -
the Public Domain Korn Shell.
"mostly finished to make it fully compatible with both POSIX
and AT&T ksh (when the two don't conflict)"
Interesting files are
NOTES ("lists of known bugs in pdksh, at&t ksh, and posix"),
PROJECTS ("Things to be done in pdksh") and the two logs
At the time of this writing, pdksh isn't actively maintained anymore.
- posh - debian policy compliant ordinary variant of pdksh
This pdksh based shell aims directly at Debian policy compliance,
that is, SUS with an exception for echo -n. Numerous features have been removed
(e.g. type, ulimit, time, job control builtins, some XSI kill and test
extensions, rsh-functionality, aliases, some shell options). So this shell
is probably the implementation which is nearest to "POSIX only, and without XSI".
- mksh - MirBSD variant of pdksh
This project calls itself a heir of pdksh and could be considered
a continuation of pdksh maintenance.
3. Traditional portability, the unix landscape
- The autoconf documentation contains a chapter about
portable shell programming. (The "#! /" issue in earlier releases likely is a myth, though.)
Keep in mind that autoconf strictly aims at maximizing portability for
install scripts; thus earlier versions even suggested avoiding all
extensions introduced after Version 7.
Traditional portability nowadays rather means
SVR2 (or even SVR3).
- Paul Jarc's list of
suspicious or nonportable constructs in shell programming.
- Gunnar Ritter's
Heirloom Bourne shell,
a port of the OpenSolaris(/SVR4-like) Bourne Shell
- Gunnar Ritter's
Heirloom Toolchest, a modern reimplementation of the traditional
toolchest providing several variants of each tool (SVR4, SVR4.2MP, SUSV2,
SUSV3 and 4BSD).
"Migrating from the System V Shell to the POSIX Shell"
covers some main points, also concerning forward compatibility.
- Stéphane Chazelas mentions issues
of numerous utilities
- Mark Hobley wrote checkbashism
which tries to detect non-portable constructs.
- Vidar Holen created ShellCheck, a Haskell program
which analyzes script snippets heuristically and tries to point out wrong usage and portability issues.
He even has a website www.shellcheck.net where you can paste
snippets into a form and get the results immediately.
- D.J.Bernstein's UNIX
portability notes (commands, system/library calls and headers)
- The autobook documentation contains notes about portability,
but there are several problematic statements.
- The traditional Bourne shell family
(Gunnar Ritters's implementation above being a modern member of this family),
and some more and some less well-known shells
are also listed here (though the latter is really rather for fun).
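The kind of constructs these checkers look for can be illustrated with a toy detector. This is a rough sketch only: it matches a handful of well-known bashisms with regular expressions, whereas real tools such as checkbashisms and ShellCheck parse shell syntax properly and cover far more cases.

```python
import re

# A few constructs that are valid in bash but not guaranteed by POSIX sh.
# Illustrative list only -- not remotely exhaustive.
BASHISMS = [
    (r'\[\[', "'[[ ... ]]' test; POSIX sh only has '[' / 'test'"),
    (r'^\s*function\s+\w+', "'function' keyword; write 'name() { ... }'"),
    (r'\becho\s+-e\b', "'echo -e'; behaviour is unspecified, use printf"),
    (r'&>', "'&>' redirection; write '>file 2>&1'"),
]

def find_bashisms(script):
    """Yield (line_number, message) for each line matching a known bashism."""
    for lineno, line in enumerate(script.splitlines(), start=1):
        for pattern, message in BASHISMS:
            if re.search(pattern, line):
                yield lineno, message

script = '#!/bin/sh\nif [[ -n "$1" ]]; then\n  echo -e "hi"\nfi\n'
hits = list(find_bashisms(script))
# Flags line 2 ('[[') and line 3 ('echo -e'); the shebang and 'fi' pass.
```

A line-by-line regex scan like this cannot understand quoting or here-documents, which is exactly why the tools above do full parsing.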
|
OPCFW_CODE
|
Render camera preview on a Texture with target GL_TEXTURE_2D
I'm trying to render the camera preview on an OpenGL texture with target GL_TEXTURE_2D. I'm very well aware of SurfaceTexture, but I cannot use it because it only works with GL_TEXTURE_EXTERNAL_OES. In the documentation of SurfaceTexture, it's written:
Each time the texture is bound it must be bound to the GL_TEXTURE_EXTERNAL_OES target rather than the GL_TEXTURE_2D target
I cannot use GL_TEXTURE_EXTERNAL_OES because I'd have to make a lot of changes in my existing code.
Is there a way to achieve this which is fast too?
The only way I can come up with is to listen to the SurfaceTexture and when new frame comes you just redraw it on a GL_TEXTURE_2D target.
When coming to the OpenGL ES field, things can get really complicated. Let me try to explain this to you with my limited experience. Some pseudocode below.
If you want to render the camera data to an off-screen texture, you'll need an off-screen framebuffer. Some functions you may need:
// Generate a colour texture, a framebuffer and a depth renderbuffer.
GLES20.glGenTextures(1, textures, 0);
GLES20.glGenFramebuffers(1, frames, 0);
GLES20.glGenRenderbuffers(1, frameRender, 0);
// Allocate storage for the colour texture.
GLES20.glActiveTexture(ActiveTexture);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        width, height, 0, GLES20.GL_RGBA,
        GLES20.GL_UNSIGNED_BYTE, directIntBuffer);
// Allocate a 16-bit depth buffer of the same size.
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER,
        frameRender[0]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER,
        GLES20.GL_DEPTH_COMPONENT16, width, height);
GLES20.glViewport(0, 0, width, height);
// Attach the texture as colour and the renderbuffer as depth.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frames[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D,
        textures[0], 0);
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER,
        frameRender[0]);
Compile your vertex and fragment shaders and link your program.
Prepare your vertex, textureCoordinates and drawList buffers.
Bind your SurfaceTexture from camera to a certain GL_TEXTURE
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, cameraSurfaceTextureHandle);
GLES20.glUseProgram(yourProgram), link all the above together, and GLES20.glDrawElements
Can you provide me with some code ? I'm pretty new to OpenGL. Just the OpenGL function calls.
If you look at https://github.com/google/grafika/blob/master/src/com/android/grafika/RecordFBOActivity.java, the RECMETHOD_FBO path shows the render-to-texture part. Most of the camera-related activities show the SurfaceTexture part.
Btw, the preview I'm getting is distorted. Here's a small video -
http://tosc.in/omerjerk/SCR_20150318_023219.mp4
Can anyone please comment as to what might be going wrong here ?
What is directIntBuffer here? Is it filled with pixels or empty with the proper size? Also, what is in the initialization, and what is done in the onDrawFrame method?
I can relate it to this line in grafika: // Create texture storage.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
Of all the provided answers, your parameters were the ones which worked. Thanks!
In part 4, I thought you need the target to be the GL_TEXTURE_EXTERNAL_OES type not TEXTURE_2D since SurfaceTexture uses a GL_TEXTURE_EXTERNAL_OES. It actually works if it's TEXTURE_2D?
|
STACK_EXCHANGE
|
Real time domain verification through DV cert or DNS check?
I want to add an extra layer of security/validation into our app/network. I plan to do some mTLS in which we will provide customers with a certificate to use in the TLS handshake. However, I want the customer to pass proof of domain ownership when they initiate the request. After successfully passing the checks (including some extra onboarding things) they can perform the mTLS to get data.
I am thinking of requiring admins to upload a DV certificate which they will send to initiate a request. Another option (or an addition to the DV cert) is to check their DNS for a TXT record so we can verify the domain. So if stackexchange.com joins our network, I want to ensure the real stackexchange is asking for data.
Would it be too cumbersome for an admin to provide their DV certificate when onboarding to our app? Would DNS check be the best option that I can check on each initiation request?
A while back I worked on a project that validated emails by querying the mailbox; the DNS part of that request proved unpredictable, sometimes taking up to 2 seconds, so that is a concern to take into account.
You should look at DANE and DNS TLSA records...
A very good recent introduction on it: https://indico.dns-oarc.net/event/43/contributions/928/attachments/901/1648/dane-overview-shumon.pdf ; in short it allows to publish in the DNS, properly, either a certificate, or a key, or a CA certificate. That allows to tie a specific name to a specific certificate/chain of certificates.
Far better than any TXT records. And this is used in the SMTP world.
omg this is awesome. I think this is exactly what I need. Reading about it now ...
so the admin would store their public certificate we provide them as a TLSA record. When they initiate the request we check if that record exists, pull the certificate and verify?
Typically a client connects to some endpoint using TLS and gets a certificate. For the given name being contacted, TLSA records would specify which certificate is to be expected (either specific certificate or any certificate under a CA given in the TLSA record), and hence the connection is aborted if no match. This even allows to work (there are multiple modes defined) completely outside of the current WebPKI.
This is cool. For our app I would like to verify the RP with a cert we provide them from our private CA. We can use TLSA and they add our private CA fingerprint. In order to verify they need to have an https server like my-app-verify.example.com (returns the cert we provide) and calling that will either be successful or aborted. Is that correct?
The problem with this is that the RPs' domains will have to be DNSSEC-signed, which isn't guaranteed and I can't ask them to enable it. I like the certificate within DNS; perhaps I can add a hashed cert to a TXT record and perform some checks on that?
"I can add a hashed cert to a TXT field" Why reinvent what TLSA records do already?
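To make the TLSA record format concrete: per RFC 6698 a record carries four fields: certificate usage (e.g. 3 = DANE-EE, 2 = trust-anchor assertion), selector (0 = full certificate, 1 = SubjectPublicKeyInfo), matching type (0 = exact, 1 = SHA-256, 2 = SHA-512), and the certificate association data. The matching step itself is just a digest comparison. A rough sketch, assuming the DER-encoded certificate bytes are already in hand (fetching the peer certificate and doing a DNSSEC-validated TLSA lookup for `_443._tcp.<host>` is the hard part and is omitted here):

```python
import hashlib

def tlsa_association_data(cert_der: bytes, matching_type: int = 1) -> str:
    """Certificate association data for a TLSA record (RFC 6698).

    matching_type: 0 = exact contents, 1 = SHA-256 digest, 2 = SHA-512 digest.
    """
    if matching_type == 0:
        return cert_der.hex()
    if matching_type == 1:
        return hashlib.sha256(cert_der).hexdigest()
    if matching_type == 2:
        return hashlib.sha512(cert_der).hexdigest()
    raise ValueError(f"unknown matching type: {matching_type}")

def tlsa_matches(presented_der: bytes, record_data: str,
                 matching_type: int = 1) -> bool:
    """Check a presented certificate against a TLSA record's data field."""
    return tlsa_association_data(presented_der, matching_type) == record_data.lower()

# Stand-in bytes; a real check would use the peer's DER certificate and a
# DNSSEC-validated TLSA lookup.
fake_cert = b"not-a-real-certificate"
record = hashlib.sha256(fake_cert).hexdigest()
assert tlsa_matches(fake_cert, record)
assert not tlsa_matches(b"tampered", record)
```

With selector 1 the digest would be taken over the SubjectPublicKeyInfo rather than the whole certificate, which keeps the record valid across reissues of the same key.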
|
STACK_EXCHANGE
|
Front-end vs Back-end Website Development
Websites comprise two parts: the front end, which users see, and the back end, which is the background code that supports the front end. Both front-end and back-end development are required so that websites run effectively, and their differences can sometimes be confusing to understand.
In short, the colours, layout, and fonts that users interact with on websites are created by front-end developers, and back-end developers build the intangible framework that enables websites to operate correctly. Building a website would require both front-end and back-end coding to work closely together. In this article, let us share more about the differences and skills required between front-end and back-end development.
Front-end development primarily focuses on the user-facing aspects of websites, ensuring easy navigation and interaction. This involves front-end developers employing a range of programming languages and design tools to build intuitive website layouts, styles, and features like drop-down menus.
In addition to these languages, proficiency in various libraries and frameworks is essential. Libraries like jQuery and frameworks like Bootstrap, AngularJS, and EmberJS play distinct roles. While libraries streamline code for efficiency, frameworks ensure consistent display across different devices. Front-end developers frequently utilise code editing tools like Notepad or Eclipse and may incorporate graphic design software like Photoshop or Sketch.
A background in web design, programming, computer science, or graphic design is often beneficial for front-end developers. Their role demands a blend of technical skills and creativity to ensure website interfaces are aesthetically pleasing and functionally robust. Collaboration with user experience analysts, designers, and back-end developers is also key to the front-end development process, ensuring a cohesive and user-friendly end product.
Front-end Developer Skills
- HTML: A web page’s general content and structure are implemented using HTML, the industry standard.
- CSS: Using CSS, front-end developers create a webpage’s colours, style, layout, and fonts. CSS developers may employ CSS preprocessors like SASS or LESS to hasten the development process.
- Communication Skills: Front-end developers benefit from having strong communication skills when working on online projects with clients and back-end developers. It would be necessary for developers to explain design challenges to stakeholders.
- Creativity: Websites with inventive front-end developers have better visual appeal and usability. They contribute to a site’s aesthetic appeal and usability beyond simple functioning.
Back-end developers specialise in the server side of websites, managing the intricate processes that form the backbone of a site’s architecture and functionality. Their work, crucial to enabling the front-end user experience, involves crafting the website’s underlying functionality, databases, and application programming interfaces (APIs).
The back end comprises an application, a server, and a database. Operating behind the scenes, these elements are typically unseen by end-users but are fundamental to a website’s operation. Back-end developers must deeply understand server-side programming languages such as Java, Python, and Ruby, which are essential for developing web applications.
Back-end developers use tools like SQL Server and Oracle to store, manage, and manipulate data. Their skill set often includes proficiency in version control software, PHP frameworks, and the ability to troubleshoot systems and applications. Collaboration with front-end developers, management, and business stakeholders is key for understanding project goals and ensuring cohesive website development.
A computer science, programming, or web development background is typically necessary for back-end developers. Some may advance to higher-paying roles like software engineers with additional training, education, or certifications. Essential skills for back-end developers encompass a range of technical competencies crucial to the behind-the-scenes aspects of web development.
Back-end Developer Skills
- Python: Python programmers construct data structures and algorithms used to build websites. Additionally, they use Python frameworks and modules like Flask, Django, and NumPy.
- Java: This programming language was developed for compatibility with different platforms and is used by back-end developers to build apps.
- Ruby: Ruby is a back-end programming language that is open-source and free. It allows back-end developers to develop new applications swiftly. Ruby comes with everything a website needs to function; therefore, full-stack developers frequently utilise it as it performs well on high-traffic websites.
- Problem-solving skills: Back-end development frequently calls for problem-solving abilities to resolve technical issues arising during a website's establishment. These tasks may include debugging and testing back-end systems and apps.
- Communication skills: For back-end engineers to successfully finish projects, good communication skills are necessary. They may sometimes be required to communicate complex technical challenges to non-technical stakeholders.
Difference between Front-end and Back-end
Front-end and back-end development are essential for website development. Front-end development focuses on the website's visual elements, the components visitors can see and interact with. Back-end development covers the structure, infrastructure, data, and logic of a website. Together, they produce engaging, aesthetically pleasing websites.
Strong coding skills are required for both kinds of developers. The visual side of a website is brought to life by front-end developers using computer languages. Technical, artistic, and communication skills are needed for its development. For proper website operation, back-end developers use server-side programming languages. The table below contrasts front-end and back-end developers:
| | Front-end developers | Back-end developers |
| --- | --- | --- |
| What do they do? | With front-end programming languages, front-end developers concentrate on designing the outside appearance of websites. Front-end development produces a website's visual elements, such as fonts, colours, layout, and images. | Back-end developers use server-side programming languages to design the framework or logic of a website. They create computer code that instructs a website on how to activate front-end programming languages. |
| Languages | HTML, CSS | Python, Java, Ruby, PHP |
| Frameworks & Libraries | jQuery, AngularJS, SASS, Bootstrap, EmberJS | Django, Laravel, Spring, Zend, Symfony, CakePHP |
| Who do they work with? | Back-end developers, clients, management, business stakeholders | Front-end developers, management, business stakeholders |
In summing up the exploration of front-end versus back-end development in the context of digital marketing, it's clear that both play unique and equally vital roles in creating and operating a website. Front-end development shapes everything users see and interact with.
Back-end development, on the other hand, is the powerhouse behind the scenes. It involves server-side programming with Java, Python, and Ruby to ensure the website functions smoothly. Back-end developers handle the website's architecture, databases, and server integration, ensuring that the user's requests are processed and responded to efficiently.
At ARCC, we specialise in website design and development services in Singapore. Our team focuses on delivering effective, high-quality web solutions for B2B businesses. If you’re seeking a reliable partner for your web development needs, reach out to us for efficient and professional service.
|
OPCFW_CODE
|
from collections import deque
import numpy as np
import torch
class RolloutBuffer:
def __init__(self, buffer_size, state_shape, action_shape, device):
self._p = 0
self.buffer_size = buffer_size
self.states = torch.empty(
(buffer_size + 1, *state_shape), dtype=torch.float, device=device)
self.actions = torch.empty(
(buffer_size, *action_shape), dtype=torch.float, device=device)
self.rewards = torch.empty(
(buffer_size, 1), dtype=torch.float, device=device)
self.dones = torch.empty(
(buffer_size + 1, 1), dtype=torch.float, device=device)
self.log_pis = torch.empty(
(buffer_size, 1), dtype=torch.float, device=device)
def reset(self, state):
self.states[-1].copy_(torch.from_numpy(state))
self.dones[-1] = 0
def append(self, next_state, action, reward, done, log_pi):
if self._p == 0:
self.states[0].copy_(self.states[-1])
self.dones[0].copy_(self.dones[-1])
self.states[self._p + 1].copy_(torch.from_numpy(next_state))
self.actions[self._p].copy_(torch.from_numpy(action))
self.rewards[self._p] = float(reward)
self.dones[self._p + 1] = float(done)
self.log_pis[self._p] = float(log_pi)
self._p = (self._p + 1) % self.buffer_size
class NStepBuffer:
def __init__(self, gamma=0.99, nstep=3):
self.discounts = [gamma ** i for i in range(nstep)]
self.nstep = nstep
self.states = deque(maxlen=self.nstep)
self.actions = deque(maxlen=self.nstep)
self.rewards = deque(maxlen=self.nstep)
def append(self, state, action, reward):
self.states.append(state)
self.actions.append(action)
self.rewards.append(reward)
def get(self):
assert len(self.rewards) > 0
state = self.states.popleft()
action = self.actions.popleft()
reward = self.nstep_reward()
return state, action, reward
def nstep_reward(self):
reward = np.sum([r * d for r, d in zip(self.rewards, self.discounts)])
self.rewards.popleft()
return reward
def is_empty(self):
return len(self.rewards) == 0
def is_full(self):
return len(self.rewards) == self.nstep
def __len__(self):
return len(self.rewards)
class _ReplayBuffer:
def __init__(self, buffer_size, state_shape, action_shape, device,
gamma, nstep):
self._p = 0
self._n = 0
self.buffer_size = buffer_size
self.state_shape = state_shape
self.action_shape = action_shape
self.device = device
self.gamma = gamma
self.nstep = nstep
self.actions = torch.empty(
(buffer_size, *action_shape), dtype=torch.float, device=device)
self.rewards = torch.empty(
(buffer_size, 1), dtype=torch.float, device=device)
self.dones = torch.empty(
(buffer_size, 1), dtype=torch.float, device=device)
if nstep != 1:
self.nstep_buffer = NStepBuffer(gamma, nstep)
def append(self, state, action, reward, done, next_state,
episode_done=None):
if self.nstep != 1:
self.nstep_buffer.append(state, action, reward)
if self.nstep_buffer.is_full():
state, action, reward = self.nstep_buffer.get()
self._append(state, action, reward, done, next_state)
if done or episode_done:
while not self.nstep_buffer.is_empty():
state, action, reward = self.nstep_buffer.get()
self._append(state, action, reward, done, next_state)
else:
self._append(state, action, reward, done, next_state)
def _append(self, state, action, reward, done, next_state):
self.actions[self._p].copy_(torch.from_numpy(action))
self.rewards[self._p] = float(reward)
self.dones[self._p] = float(done)
self._p = (self._p + 1) % self.buffer_size
self._n = min(self._n + 1, self.buffer_size)
class StateReplayBuffer(_ReplayBuffer):
def __init__(self, buffer_size, state_shape, action_shape, device,
gamma, nstep):
super().__init__(
buffer_size, state_shape, action_shape, device, gamma, nstep)
self.states = torch.empty(
(buffer_size, *state_shape), dtype=torch.float, device=device)
self.next_states = torch.empty(
(buffer_size, *state_shape), dtype=torch.float, device=device)
def _append(self, state, action, reward, done, next_state):
self.states[self._p].copy_(torch.from_numpy(state))
self.next_states[self._p].copy_(torch.from_numpy(next_state))
super()._append(None, action, reward, done, None)
def sample(self, batch_size):
idxes = np.random.randint(low=0, high=self._n, size=batch_size)
return (
self.states[idxes],
self.actions[idxes],
self.rewards[idxes],
self.dones[idxes],
self.next_states[idxes]
)
class PixelReplayBuffer(_ReplayBuffer):
def __init__(self, buffer_size, state_shape, action_shape, device,
gamma, nstep):
super().__init__(
buffer_size, state_shape, action_shape, device, gamma, nstep)
self.states = []
self.next_states = []
def _append(self, state, action, reward, done, next_state):
self.states.append(state)
self.next_states.append(next_state)
num_excess = len(self.states) - self.buffer_size
if num_excess > 0:
del self.states[:num_excess]
del self.next_states[:num_excess]
super()._append(None, action, reward, done, None)
def sample(self, batch_size):
idxes = np.random.randint(low=0, high=self._n, size=batch_size)
states = np.empty((batch_size, *self.state_shape), dtype=np.uint8)
next_states = np.empty((batch_size, *self.state_shape), dtype=np.uint8)
# Correct indices for lists of states and next_states.
bias = -self._p if self._n == self.buffer_size else 0
state_idxes = np.mod(idxes+bias, self.buffer_size)
# Convert LazyFrames into np.array.
for i, idx in enumerate(state_idxes):
states[i, ...] = self.states[idx]
next_states[i, ...] = self.next_states[idx]
return (
torch.tensor(states, dtype=torch.uint8, device=self.device),
self.actions[idxes],
self.rewards[idxes],
self.dones[idxes],
torch.tensor(next_states, dtype=torch.uint8, device=self.device)
)
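As a quick sanity check of the n-step machinery above: `NStepBuffer.nstep_reward` computes the discounted sum of the buffered rewards, sum of gamma^i * r_i. Recomputing it by hand, standalone and without importing the classes above:

```python
# Recompute the discounted n-step return that NStepBuffer.nstep_reward
# produces for gamma = 0.99, nstep = 3 and buffered rewards 1.0, 2.0, 3.0.
gamma = 0.99
rewards = [1.0, 2.0, 3.0]
discounts = [gamma ** i for i in range(3)]          # [1.0, 0.99, 0.9801]
nstep_return = sum(r * d for r, d in zip(rewards, discounts))
# 1.0 + 0.99 * 2.0 + 0.9801 * 3.0 = 5.9203
assert abs(nstep_return - 5.9203) < 1e-9
```

Note that after a `get()` the oldest reward is popped, so when the buffer is flushed at episode end the remaining returns are computed over progressively fewer terms, which `zip` handles automatically.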
|
STACK_EDU
|
ACADO from MATLAB
This interface brings the ACADO Integrators and algorithms for direct optimal control, model predictive control and parameter estimation to MATLAB. It uses the ACADO Toolkit C++ code base and implements a thin layer to communicate with this code base.
Three available standard interfaces:
ACADO code generation from MATLAB:
Link your models to ACADO:
Getting started with the MATLAB interface
To install and use the MATLAB interface you need to have a recent MATLAB version and a C++ compiler installed. Follow these steps to get you started in a few minutes.
Step 1 - Installing a compiler
Step 2 - Configuring MATLAB
Once a compiler is installed it needs to be linked to MATLAB. Open MATLAB (a recent version of MATLAB is required) and run in command window:
>> mex -setup;
Please choose your compiler for building external interface (MEX) files: Would you like mex to locate installed compilers [y]/n?
Type “y” and hit enter.
I'm a LINUX / Mac user
MATLAB shows you a list of installed compilers. Enter the number corresponding to the GCC compiler (in this case 1) and hit enter.
The options files available for mex are: 1: /software/matlab/20XX/bin/gccopts.sh : Template Options file for building gcc MEX-files 2: /software/matlab/20XX/bin/mexopts.sh : Template Options file for building MEX-files via [...] 0: Exit with no changes Enter the number of the compiler (0-2):
I'm a windows user
MATLAB shows you a list of installed compilers. Enter the number corresponding to the Visual C++ compiler (in this case 2) and hit enter.
Select a compiler: Lcc-win32 C X.Y.Z in C:\PROGRA~1\MATLAB\R20XX\sys\lcc Microsoft Visual C++ 20XX [...] in C:\Program Files... None Compiler:
Confirm the result by writing “y” and hitting enter:
Please verify your choices: Compiler: Microsoft Visual C++ 20XX [...] Location: C:\Program Files\Microsoft Visual Studio X.Y Are these correct [y]/n?
Step 3 - Building the ACADO interface
Please download the toolkit code. Our suggestion is to always clone the stable branch:
git clone https://github.com/acado/acado.git -b stable ACADOtoolkit
If for any reason you cannot download the code using GIT, or you do not want to use GIT (this is not encouraged!), you can download the code in a zip archive.
Those archives are automatically updated after each successfully compiled and tested commit we push to the GIT repository.
Please note you do not need to build ACADO at this stage, you just need to download it. We will refer to the main ACADO folder (ACADOtoolkit) as <ACADO_ROOT>. Open Matlab in this directory.
Navigate to the MATLAB installation directory by running:
You are now ready to compile the ACADO interface. This compilation will take several minutes but needs to be done only once. Run “make” in your command window:
make clean all;
You will see:
and after a while when the compilation is finished:
ACADO successfully compiled. Needed to compile XYZ file(s). If you need to restart Matlab, run this make file again to set all paths or run savepath in your console to save the current search path for future sessions.
ACADO has now been compiled. As the text indicated, every time you restart MATLAB you need to run "make" again to set all paths. When running "make" again no new files need to be compiled and the process will only take a few seconds. However, it is easier to save your paths for future MATLAB sessions. Do so by running "savepath" in your command window (this step is optional).
Step 4 - Running your first example
We will now run the OCP getting started example:
The file getting_started.m contains the ACADO syntax to setup and execute a simple Optimal Control Problem. Run “getting_started” in your terminal to test the execution:
You should see a report similar to the following one:
[......] 1: KKT tolerance = 2.016e-001 objective value = 6.4478e-001 2: KKT tolerance = 2.074e+000 objective value = 4.3516e-001 3: KKT tolerance = 1.484e-001 objective value = -2.3787e+000 4: KKT tolerance = 9.130e-002 objective value = -2.3441e+000 5: KKT tolerance = 1.035e-001 objective value = -2.4338e+000 6: KKT tolerance = 5.587e-002 objective value = -2.5326e+000 7: KKT tolerance = 2.741e-002 objective value = -2.5766e+000 8: KKT tolerance = 1.839e-002 objective value = -2.5959e+000 9: KKT tolerance = 1.543e-002 objective value = -2.6105e+000 10: KKT tolerance = 1.494e-002 objective value = -2.6258e+000 11: KKT tolerance = 5.624e-003 objective value = -2.6404e+000 12: KKT tolerance = 1.584e-004 objective value = -2.6456e+000 13: KKT tolerance = 1.214e-008 objective value = -2.6456e+000 convergence achieved.
A graph will be drawn with the results, which are stored in the variable 'out'. You're done!
Would you like to read more? Download the user manual.
|
OPCFW_CODE
|
This article discusses the process of choosing a new React-based editor library for PR TIMES’s press release editor.
At PR TIMES, press release publishing is one of the core services that we provide, therefore, we strive to provide the best possible editing experience to our users so that they can express their ideas better.
However, our current editing page is built on legacy code that has not been well maintained over the past several years. Because of this, it is really hard to introduce new features or even fix small bugs. Therefore, we decided to rewrite the whole editing experience in React and also replace the core editor library with a better one, which is the main reason for this article.
PR TIMES’s Current Editor
Before diving into how we plan to implement the new editor, let's take a step back and look at the problems that the current editor is facing.
The current editor is composed of 2 parts:
- Smarty template engine code
New Editor’s Requirements
Usually, it is not an easy task when it comes to choosing a library or framework for whatever project you are working on. However, it’s much easier to pick a library based on your project’s actual requirements.
Therefore, before choosing which editor library to use, I have laid out the requirements for our new editor as below:
- able to produce HTML as output
- integrates well with React
- able to implement all the current editor’s functionalities (ex: image, table, basic editing, etc…)
- able to process the already published press release data
- highly customizable and composable
- supports collaborative editing (we want to enable this feature in the future release)
- have great Japanese support on small screen devices
- Nice to have:
- have great TypeScript support
- framework agnostic
- have great community
- well documented
React-based Editors’ Research
After defining our requirements, I started to do some research on several editor libraries to see which one matches our requirements the most.
Here is the list of libraries that I have considered.
One thing to note is that there are many other great React-based editor libraries out there, such as TinyMCE. However, some of them might require extra costs or come with other drawbacks, so I suggest you also check them out to see whether they fit your project's requirements or not.
This comparison is solely based on my own experience, hence, there might be some biased opinions. I’m open to corrections and suggestions as well.
I used Slate about one or two years ago, and have only recently started using Tiptap as my main editor. For CKEditor, I only went through its documentation as my reference.
Also, I have excluded the long-term factors from the comparison. That's because:
- they are all popular libraries
- they are either supported by a huge community of users or by the company that develops them, so I do not think they will go anywhere soon.
The – sign means the answer is somewhere between Yes and No.
|Japanese input support (no bugs)|Yes|Yes|No|Yes|
|Current functionalities available|Yes|Yes|Yes|Yes|
|Can process saved data|Yes|Yes|Yes|Yes|
|Customization & composability|Yes|–|Yes|–|
The above table tells us that while Tiptap is the only editor that has all the requirements checked, the other editors are not far off.
- Slate: Aside from having one No for Japanese input support, Slate is the one that comes closest to Tiptap. I feel like Tiptap and Slate are almost the same, especially in their plugin/extension-based concept. Both are also inspired by ProseMirror. However, I have to give it to Tiptap for its high customizability (from what I have seen so far).
- Quill: Just like the section in the Slate documentation explaining why its author decided to build Slate after trying Quill and other editors, I also feel there is a limit to how customizable Quill can be. This is likely due mainly to differences in the underlying data model.
- CKEditor v5: From its documentation, I get the impression that it's harder to integrate with React in general, compared to the others. The same goes for customization and composability.
As you can see, it’s sometimes difficult to make a decision based on the comparison table alone. Therefore, we also need to consider other factors as well.
Those factors could be:
- GitHub’s issues
- community’s size and support (forum, discord groups…)
- how good the documentation and examples are
- how well the library is maintained
- and so on…
As you would expect from the comparison table above, we chose Tiptap as the library to build our new editor with. While the main reason is that it checks all the requirement boxes that we have, it is also well balanced with regard to the other factors discussed above.
Here is my list of things that I like about Tiptap:
- Tiptap is a headless editor. That means we have full control over how we want the editor to look and behave. As the documentation puts it: "Tiptap gives you full control about every single aspect of your text editor experience."
- great TypeScript support since the library itself uses TypeScript internally
- great documentation explaining core concepts with clear examples
- built on top of ProseMirror (supported by The New York Times, Atlassian, Asana, etc.), a well-written, reliable, and very powerful editor toolkit
- more importantly, it has a great extension system
Since most of these editor libraries are pretty good, it's fairly hard to say that one is better than another. So, what I suggest is that you decide on your main requirements first, then start trying to build a simple editor component with a few of the top libraries. Along with that, reading the documentation is also a great way to check out the small differences between those libraries.
As with frameworks, there are many editor libraries out there to choose from, which makes it difficult for developers to decide which library to use. So, I hope that this article can be a reference for your decision if you ever face the same problem.
I will also write about the experience of actually implementing the editor itself after the release of the project. So, stay tuned for that also!
|
OPCFW_CODE
|
Hah! I looked into getting WalkerWireless when I first arrived in Hamilton in 2000. They've been claiming that they will extend coverage to Hamilton in the near future for as long as I've been here. Don't hold your breath. -- JohnMcPherson
They do own 3G frequencies now, and are partnered with Vodafone, so they're a bit more than all talk and nothing but 802.11b -- CraigBox
Good product, lousy tech service. The company I work for set up a data logging box in a supermarket in Auckland. It took several phone calls even to find out if what we wanted was possible, and we had to explain every time that while the company is in Hamilton, the box needing the connection is in Auckland. When we realised their software did not support auto-connecting on startup, we had to contact their support again; their only solution was to buy an external PPPoE router ($150). Not once did they suggest adding another network card and setting up XP/Linux to do the PPPoE routing. A $150 solution vs a $15 one that worked fine. -- SamMcKoy
There have been reports of high latency on Woosh wireless, which makes it unsuitable for online games or anything else that requires low latency, such as voice over IP.
In June 2004, Woosh was replaced for the 3 regions it had won the tender for in the Project Probe broadband initiative, when it became obvious that they would be unable to fulfill the contract.
apt-get install pppoe
user "firstname.lastname@example.org"
# if you're using an interface other than eth0 substitute it below
pty "/usr/sbin/pppoe -I eth0 -T 80 -m 1452"
noipdefault
# Comment out if you already have the correct default route installed
defaultroute
hide-password
lcp-echo-interval 60
lcp-echo-failure 3
# Override any connect script that may have been set in /etc/ppp/options.
connect /bin/true
noauth
persist
demand
mtu 1492
"email@example.com" * "foopassword"
And you should be away.
Run the pppoeconf tool (either as root, or use sudo)
If in doubt accept all the default options. Remember when asked for your username enter something of the form:
pppoeconf will set up the file /etc/ppp/peers/dsl-provider and edit /etc/ppp/chap-secrets
If you make a mistake when selecting options, there is no way to go back and correct the selection. To correct any errors, first finish using pppoeconf, then remove the file /etc/ppp/peers/dsl-provider before running pppoeconf again.
For this to work you almost certainly need a recent 2.6 kernel with ipw either available as a module or compiled into the kernel. This works out of the box with Ubuntu Hoary, just plug it in and you will see in dmesg the ipw driver loaded. It'll allocate you a usb tty:
Jun 16 10:34:49 localhost kernel: ohci_hcd 0000:00:02.1: wakeup
Jun 16 10:34:49 localhost kernel: usb 2-3: new full speed USB device using ohci_hcd and address 5
Jun 16 10:34:50 localhost kernel: usb 2-3: configuration #1 chosen from 2 choices
Jun 16 10:34:50 localhost kernel: ipwtty 2-3:1.0: IPWireless converter converter detected
Jun 16 10:34:50 localhost kernel: usb 2-3: IPWireless converter converter now attached to ttyUSB0
Create a ppp peer for woosh, by creating the file /etc/ppp/peers/woosh:
noipdefault
/dev/ttyUSB0 115200
defaultroute
usepeerdns
lcp-echo-interval 60
lcp-echo-failure 3
connect "/usr/sbin/chat -v -f /etc/chatscripts/woosh"
noauth
persist
mtu 1400
user "firstname.lastname@example.org"
maxfail 0
deflate 15
TIMEOUT 30
ABORT "NO CARRIER"
ABORT "BUSY"
ECHO ON
SAY "Dialling w00sh...\n"
"" \rAT
"OK-+++\c-OK" ATH0
OK ATZ
OK AT+CGDCONT=1,"PPP","woosh.co.nz","email@example.com,foobar",0,0
OK ATD*99#
SAY "Waiting up to 30 seconds for connection ... "
CONNECT ""
SAY "Connected..."
Note the double handling of the username/password, both in the chat script and in chap. This is almost certainly unnecessary but seems to simulate the Windows ipw software configuring the modem. You can probably get away with putting garbage in the chat script.
firstname.lastname@example.org * pass *
|
OPCFW_CODE
|
M: Shadow Brokers exploits are patched or inactive on supported Windows platforms - alpb
https://blogs.technet.microsoft.com/msrc/2017/04/14/protecting-customers-and-evaluating-risk/
R: hexadecimated
Nice to see that these weren't zero day exploits after all, despite the claims
being spread over Twitter.
Looks like some amateur security researchers forgot to patch their test VMs.
R: celticninja
Do you have a source for that? Looks like MS have released a number of patches
for these exploits and so have other software vendors so I'm not sure what
your claim is based on.
R: hexadecimated
The exploits were released yesterday and the linked article says they have all
been patched.
R: pinpeliponni
I noticed Microsoft has been very careful not to mention NSA.
R: stordoff
Why would they? The immediate concern is whether or not the exploits are still
a risk, not determining the origin. Any future use of them is likely to be
groups other than NSA at this point anyway.
If/when Microsoft do call out the NSA, I imagine it'll a) be filtered through
their press/PR teams and b) be after they've had time to verify the source (it
seems overwhelmingly likely to be NSA-originated, but I'd guess MS will do
their own investigation and not just take it at face value).
R: toyg
MS will never "call out" anybody, in particular nobody in the US government -
one of the few entities on the planet who can make Redmond lives materially
harder. MS and authorities have a long history of peaceful collaboration and
there is no reason to believe this state of things will change anytime soon.
R: nthcolumn
So NSA knew 90 days ago and gave MS the heads up. They patched hurriedly eg.
14th March for EternalBlue - but didn't say anything to their customers
re:patches must go ASAP (many large corporations have to phase them - i.e not
at all at once on First Tuesday) so many companies are currently still
vulnerable and probably won't hit them until next Tuesday after the Bank
Holiday. What a mess.
R: fulafel
Corporations don't have to delay critical security patches, they just elect to
do so based on some motives that compete with security.
R: archvile
"Some motive" I think would equate to fear of breaking mission critical or
legacy applications. In a high-stakes environment, I'd imagine functionality
would win out initially over security, until everything has time to be tested
properly.
R: 1ris
I just don't believe shadowbrokers just burned all their 0days. I assume they
only release the cheapest exploits and either sell or keep the rest. E.g.
russia now has several NSA exploits.
R: amq
Those '0days' are not worth much after they were patched by MS in March.
R: noja
in other words "Not all"
R: pluma
> Of the three remaining exploits [...] none reproduces on supported
> platforms, which means that customers running Windows 7 and more recent
> versions of Windows or Exchange 2010 and newer versions of Exchange are not
> at risk.
So none of the exploits should be a problem if you're on somewhat recent
versions of Windows and Exchange (as applicable). If you're still on Windows
Vista, XP, 2000 or NT, you likely have bigger problems already.
R: 21
> somewhat recent versions of Windows
Windows 7 was released in 2009, eight years ago. I wouldn't call that
"somewhat recent"
R: pluma
I see you haven't worked in the public sector or non-tech enterprises.
R: justinjlynn
> Customers still running prior versions of these products are encouraged to
> upgrade to a supported offering.
That's a very polite way to say "fuck you, pay me".
R: stordoff
Windows 7 released in 2009, and MS will keep issuing security patches until
2020. They also responded to this on Friday/Saturday of Easter weekend.
There's not a whole lot you can blame MS for here, except for the bugs
existing in the first place (which is all but inevitable given the size of the
codebase and the amount of scrutiny under which various groups put it).
R: Markoff
the problem is not really Windows, but hardware abandoned by manufacturers, my
mother has perfectly good/sufficient computer for her needs running Win7,
which can't be upgraded any further (wanted to upgrade from Vista to W10 or
W8, to find the most recent I can get is W7) because of video drivers not
supported anymore and there is not really workaround, so it would require
buying new video card, which in the end means I might as well just buy for her
new Android tablet an get rid off PC or might as well, just install there
Linux in the end
R: dsp1234
_because of video drivers not supported anymore and there is not really
workaround, so it would require buying new video card_
A cheap 1GB video card can be had for $25.[0][1] And it is currently supported
for the Windows 10 platform.[2]
Which is not to say that Linux and/or an Android tablet wouldn't be the best
solution, just that the purchase and installation of a new video card is maybe
not as expensive as would seem.
[0] -
[https://www.newegg.com/Product/Product.aspx?Item=N82E1681413...](https://www.newegg.com/Product/Product.aspx?Item=N82E16814130880)
[1] -
[https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N...](https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204093&IsNodeId=1)
[2] -
[http://www.nvidia.com/download/driverResults.aspx/112596/en-...](http://www.nvidia.com/download/driverResults.aspx/112596/en-
us)
R: Markoff
i am aware it's not so expensive, but compared to current value of computer is
also not negligible amount anymore, thus i would not mind spending more money
and have something more suitable for her needs, since she doesn't really need
computer, though i would probably first try some up to date Linux distro
R: kikigaki
so they are now patched, shadowbrokers have found plenty of new vulns since
then. loads of linux vulns too, makes up a big part of the internet, which
makes it potentially even more scary.
R: partycoder
One thing is collaborating with law enforcement, another thing is
collaborating on mass surveillance. Microsoft collaborated setting up mass
surveillance (PRISM).
One thing is a bug, another one is a backdoor. Are these good faith bugs or
willful backdoors? Most likely they're bugs, but it is hard to know.
If I was Microsoft and I wanted to willfully plant a backdoor, I would take
precautions to be able to get away with it if caught. Because security
researchers can analyze them, and foreign governments have Windows source
code, leaving intentional bugs as the only choice.
Now, the reasons I am suspicious of Microsoft:
\- PRISM. Which is unequivocally mass surveillance.
\- The Flame malware was able to install itself via Windows Update:
[http://www.computerworld.com/article/2503916/malware-
vulnera...](http://www.computerworld.com/article/2503916/malware-
vulnerabilities/researchers-reveal-how-flame-fakes-windows-update.html) . The
means by which the Flame authors achieved this are easier to explain if they
received help from Microsoft.
\- When Windows NT SP5 was released, the build accidentally came with
debugging symbols (i.e: variable names were visible in binaries). A researcher
found a variable called "_NSAKEY" containing a key which could be used to
forge signatures.
[https://en.wikipedia.org/wiki/NSAKEY](https://en.wikipedia.org/wiki/NSAKEY).
Microsoft's explanation was that it wasn't related to the NSA, and that NSA in
that context meant something else.
R: hexadecimated
This idea that Microsoft is deliberately introducing bugs into its software so
nation states can exploit them is so absurd, it really is tinfoil hat
conspiracy theory ludicrousness.
R: kuschku
It doesn't have to be known by Microsoft management.
It's enough if the NSA has people working at MS on their payroll.
R: hexadecimated
That's just an insinuation of conspiracy with no evidence whatsoever behind
it. The more believable alternative is that developers simply make mistakes
now and then.
R: MichaelGG
But having state sponsored employees is so obvious, effective, and cost
efficient it seems odd to assume it's not being done. The US found a bunch of
Russian spies a while back.
But true, it's not right to assume any particular vuln is from spies.
R: kuschku
Correct. But you have to assume the spies have vulns in there, either
intentionally added, or, if they found vulns, they simply reported them to
their agency, instead of their employer.
|
HACKER_NEWS
|
Logic overhaul
Description
This is a major change! It fundamentally rewrites the core logic for tracking imports and rearranges the arguments to the import tracking functionality. The high-level gist is:
Rather than capturing imports during the processing of an import_module, the imports are computed after importing the target module by recursively inspecting the bytecode for all modules stemming from the target.
The tracking no longer needs to launch subprocesses to perform recursion because it does not rely on the diff in sys.modules
It's way faster!
But why?
Ok, the old way was working pretty well, so why refactor it all? The obvious answer is speed, but the less obvious answer is actually the correct one: the old implementation was not answering the right question. The old implementation answered the question
What modules are brought into sys.modules between starting the import of <target> and concluding the import of <target>?
Instead, what we really want to know is:
If we stripped away all code not required for <target>, what modules would we need to have installed for the import of <target> to work?
The difference here comes down to whether you count siblings of nested dependencies. This is much easier to describe with an example:
deep_siblings/
├── __init__.py
├── blocks
│ ├── __init__.py
│ ├── bar_type
│ │ ├── __init__.py
│ │ └── bar.py # imports alog
│ └── foo_type
│ ├── __init__.py
│ └── foo.py # imports yaml
└── workflows
├── __init__.py
└── foo_type
├── __init__.py
└── foo.py # imports ..blocks.foo_type.foo
In this example, under the old implementation, workflows.foo_type.foo would depend on both alog and yaml because the ..blocks portion of the import requires that all of the dependencies of blocks be brought into sys.modules. This, however, voids the value proposition of finding separable import sets. Under the new implementation, workflows.foo_type.foo only depends on yaml because it imports blocks.foo_type.foo from the deepest point where the only requirement is yaml.
Testing
This PR also refactors all of the tests since it's now more practical to test the import logic by calling track_module directly rather than simulating a call to main. With the exception of a few tests that have been removed due to their irrelevance with the removal of side_effects_modules (that went away!), all tests have been maintained and updated. There are several where the results change, but in each case the change is actually an improvement on a known corner-case, with one specific exception: LazyModule (more on that below).
What breaks in the API?
The most important functional difference is that track_module no longer tracks lazily imported modules. This means that anything wrapped in LazyModule or imported using importlib.import_module is invisible!
The side_effects_modules argument is gone. This was a hack to work around the fact that there were some modules that, when trapped by a DeferredModule, would cause the overall import to fail. With the refactor, this is unnecessary, as the import proceeds exactly as normal with no interference.
The output with track_import_stacks is different. It no longer attempts to look like stack traces, but it is actually more useful. Now, instead of a partially useful stack trace, it's a list of lists where each entry is a stack of module imports that causes the given dependency allocation. This fixes #31!
By default, import_module stops looking for imports at the boundary of the target module's parent library. This means that if a third party module transitively imports another third party module, it won't be allocated to the target unless full_depth=True is given
@DoraAgali @gkumbhat The last open question with this is whether to fully remove LazyModule. I'm inclined to remove it since it was only added to support "hiding" imports in cross-cutting modules and is not used elsewhere. With this refactor, that hiding should not be necessary anymore as long as imports in those cross-cutting modules are managed correctly.
|
GITHUB_ARCHIVE
|
application: main upload rights for xserver-xorg-video-geode
q-funk at ubuntu.com
Thu Sep 3 16:44:57 BST 2009
On Thu, Sep 3, 2009 at 6:22 PM, Daniel Holbach<daniel.holbach at ubuntu.com> wrote:
> Am Dienstag, den 25.08.2009, 12:41 +0300 schrieb Martin-Éric Racine:
>> upload rights to main (xserver-xorg-video-geode)
>> upload rights to universe (cups-pdf, upgrade-system)
>> My wiki page is at:
> A few questions:
> * I must admit I'm a bit confused. In the topic you mention
> xserver-xorg-video-geode, above it's three questions already and
> on https://wiki.ubuntu.com/MartinEricRacine/MOTUApplication you
> say that you apply for MOTU plus the packages above. What is
> accurate now?
After talking to several Ubuntu developers, we came to the conclusion
that mere upload rights for specific packages would be pointlessly
limiting, as I have been maintaining packages at Debian since 2003,
and that MOTU would be better for me. However, one of my pet packages
is in main, so I'd need separate upload rights for that one.
> * You mention Byzantine bureaucracy. Which examples do you have
> and how you attempt to improve the situation there?
The distinction between Canonical and Ubuntu is often blurry and
getting a straight answer about who handles what is often challenging.
Having to duplicate existing information and remix it gets tedious
too. One improvement I'd feel necessary is to merge the templates
used for several processes. For instance, the information I provided
to become an Ubuntu member is nearly identical to the one asked now
for this MOTU Application. Why should we need a separate template?
> * How do you think we can get better at the ratio of bugs that are
> dealt with?
In some cases, such as the kernel team and X team, recruiting more
full-time developers on the Canonical side would be necessary.
Otherwise, people end up providing pictures of kernel crashes for
nothing, since nobody will ever get around to investigating them.
> * There's a couple of bugs that were not dealt with at
> https://bugs.launchpad.net/ubuntu/+source/cups-pdf and
> https://bugs.launchpad.net/ubuntu/+source/planner - what do you
> think would make the status quo better?
I haven't been involved with Planner in ages so I won't comment on this one.
As for CUPS-PDF, there have been mainly two issues:
1) Persistent user requests to turn a printer driver into a GUI tool
that allows selecting where to save the PDF file. I've had to mark
these as WONTFIX and explain how this wishlist bug simply doesn't fit
the mandate of a printer driver. Anyhow, as Till Kampetter repeatedly
pointed out in response, both GTK2 and Qt have built-in PDF export
functionality, which pretty much makes the request moot.
2) AppArmor issues. I haven't found enough documentation on AppArmor
to intervene. Still, it has to be said that AppArmor is poorly
documented. However, I've been in constant contact with Martin Pitt,
who is the main person responsible for AppArmor issues in Ubuntu. He
has been very helpful in solving AppArmor-specific issues and
> * Which packages apart from the ones above did you work on in
I've been involved in CUPS and in [i|my|a]spell dictionaries for
Estonian, Latvian and Russian, on and off. I also regularly file bugs
and attach patches for everyone else's packages, whenever the issue
seems obvious enough that I can fix it myself.
> * I noticed that you are active in Debian as well. Are you
> pursuing Debian Developer membership too? How's that coming on?
I tried ages ago, but I have found that I lack the motivation to go through NM.
More information about the Motu-council
|
OPCFW_CODE
|
Due to the Big Data Movement in recent years, lately I’ve been exploring statistical analysis and machine learning through books, online courses and online tutorials. One book I found intriguing is “Exploring Everyday Things with R and Ruby”, by Sau Sheong Chang. Sau Sheong’s book inspired me to apply its teachings to something that I enjoy every Sunday morning: the NFL.
Sau Sheong, in his experiments, uses Ruby as a tool to simulate, extract, or transform data into an input format that can be used in R for analysis. For those unfamiliar with R, it is a widely used programming language and software environment for statistical computing. In my experiment, by using the paradigm described in the book, I analyzed various NFL statistics from Yahoo Sports to determine how the number of running plays versus passing plays affects the outcome of the game for a given team.
I decided to focus on just 4 teams: San Francisco 49ers, New Orleans Saints, Detroit Lions, and the Minnesota Vikings. San Francisco and New Orleans have powerful all-around offenses with San Francisco having the edge in the running game and New Orleans having the edge in the passing game. Detroit’s passing offense ranks highly, but their running game is lacking. Minnesota’s offense is the opposite; their running offense is arguably the best, but their passing offense is horrible. We will see how the resulting analysis differ for each of these teams.
The NFL section of Yahoo Sports has a unique box score URL for each weekly game that shows details of the plays and the progression of scores. For each of the four teams, I gathered the URLs and put them in a per-team text file. I created a Ruby script that scrapes the URLs and outputs two types of CSV files. The rows in the first CSV file contain the plays of each drive during the game (plus more information for Part 2 of this blog). The rows in the second CSV file contain the scoring progression throughout the game. We only care about the final score for this post, so only the last row will be used.
With the collection of CSV files, I had to determine how to utilize the inputs in R. Determining the correlation between a team's play choices and the game outcome was my final goal. My initial thought was to calculate the correlation between the binary outcome of a win or loss and the number of running plays per game, but it seemed wrong to compare a classification value to an integer value. After much pondering, it made more sense to take the correlation between the win margin (team score minus opponent score) and the delta between the number of running plays and passing plays (running plays minus passing plays) in a game. In R, reading in a CSV input file creates a matrix-like object called a dataframe to represent the data. Once I had the dataframe objects at my disposal, it was very easy to perform matrix operations to massage the data and calculate the correlation between the win margin and the delta of the running plays. This blog isn't a tutorial on R, so if you're interested, take a look at the R code via the link at the end of the blog.
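The analysis itself is done in R, but the core computation, correlating win margin with the run/pass delta, can be sketched in plain Python. The per-game numbers below are made up for illustration and are not the scraped Yahoo Sports data:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-game counts and scores (one entry per game)
run_plays  = [32, 28, 35, 22, 30]
pass_plays = [25, 33, 24, 36, 27]
team_pts   = [27, 17, 31, 13, 24]
opp_pts    = [20, 24, 14, 27, 21]

# Win margin vs. how run-heavy the play calling was
win_margin = [t - o for t, o in zip(team_pts, opp_pts)]
play_delta = [r - p for r, p in zip(run_plays, pass_plays)]

print(round(pearson(play_delta, win_margin), 3))
```

A value near +1 would suggest the team tends to win by more when it runs more than it passes, which is the kind of signal the R analysis is looking for.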
Minnesota, the strong running and weak passing team, had a strong correlation.
Detroit, the weak running and strong passing team, had a weaker correlation. San Francisco and New Orleans were somewhere in the middle. The results correctly matched intuition: the stronger the running game is compared to the passing game, the more the team's use of the running game influences the final outcome.
In part 2 of this blog, I’ll try to dig deeper and hopefully discover other trends in the data.
|
OPCFW_CODE
|
using System;
using KSCheep.CodeAnalysis.Binding;
using KSCheep.CodeAnalysis.Syntax;
namespace KSCheep.CodeAnalysis
{
/// <summary>
/// Evaluates a syntax tree
/// </summary>
internal sealed class Evaluator
{
private readonly BoundExpression _root;
/// <summary>
/// To construct the evaluator, we pass in the root syntax node of the syntax tree we wish to evaluate
/// </summary>
public Evaluator(BoundExpression inRoot) => _root = inRoot;
/// <summary>
/// Evaluate an expression starting from the root node of its syntax tree
/// </summary>
public int Evaluate() => EvaluateExpression(_root);
/// <summary>
/// Recursively evaluate a single bound expression node
/// </summary>
private int EvaluateExpression(BoundExpression inNode)
{
// Evaluate a literal expression. Just returns the literal's value
if (inNode is BoundLiteralExpression number)
{
return (int)number.Value;
}
// Evaluate a unary expression. Just returns the negative value of operand if operator is '-'
if (inNode is BoundUnaryExpression unary)
{
var operandExpression = EvaluateExpression(unary.Operand);
switch (unary.OperatorKind)
{
case BoundUnaryOperatorKind.Identity: return operandExpression;
case BoundUnaryOperatorKind.Negation: return -operandExpression;
default: throw new Exception("Unexpected unary operator " + unary.OperatorKind);
}
}
// Evaluate the binary expression. Gets the left and right expression, identify the operator token between them, and perform operation
if (inNode is BoundBinaryExpression binary)
{
var leftExpression = EvaluateExpression(binary.Left);
var rightExpression = EvaluateExpression(binary.Right);
switch(binary.OperatorKind)
{
case BoundBinaryOperatorKind.Addition: return leftExpression + rightExpression;
case BoundBinaryOperatorKind.Subtraction: return leftExpression - rightExpression;
case BoundBinaryOperatorKind.Multiplication: return leftExpression * rightExpression;
case BoundBinaryOperatorKind.Division: return leftExpression / rightExpression;
default: throw new Exception("Unexpected binary operator " + binary.OperatorKind);
}
}
// Evaluate a parenthesised expression. We only evaluate the "expression" within the parenthesis
//if (inNode is ParenthesisedExpressionSyntax parenthesisedExpression)
//{
// return EvaluateExpression(parenthesisedExpression.Expression);
//}
throw new Exception("Unexpected node " + inNode.Kind);
}
}
}
|
STACK_EDU
|
The TFS Database Import Service provides a way for Team Foundation Server (TFS) customers to complete a high-fidelity migration into Visual Studio Team Services (VSTS). It works by importing a collection in its entirety into a brand new VSTS account.
The Import Service has been used by a large number of customers, small to large, to successfully migrate to VSTS. This includes using it internally at Microsoft to move older TFS instances into VSTS, helping us achieve our larger goal of using one engineering system (more formally called the 1ES effort). As you can imagine, there are lots of exceptionally large collections within Microsoft. We thought we could share our experiences from a recent migration of a collection that was around 10.5 TB in total size. Yep, that's TB for terabytes 😊.
Below are some statistics on the size of the collection we imported. It’s truly a big one!
| Metric | Value |
| --- | --- |
| Full database size | 10,578 GB |
| Database metadata size | 583 GB |
| Number of users | 2,800 |
| Work item count | 19.2 million |
| Test run count | 1.3 million |
| Build count | 28.7 thousand |
Migrating the collection, including all the data within it, took ~92 hours. Please note that migration time may differ based on data size, content type, network speed, etc.
For most companies and teams, performance of the service is a crucial factor in making the change. I am glad to share that in our testing after migrating this large collection, work items, builds, and other information were accessible in similar or less time than on the on-premises collection.
What if I need help?
Please reach out to the migration team at Microsoft (firstname.lastname@example.org) to discuss or plan your migration, or if you have any questions.
Tips & Tricks:
If your on-premises SQL Server does not have an inbound and outbound internet connection, you can host the SQL database on an Azure-hosted SQL virtual machine (VM). It's recommended to host the VM in the same region where you plan to host your VSTS account; the data transfer rate within the same region will be much better.
If you need to transfer the collection backup to the Azure-hosted VM, plan to divide the backup into multiple files (I divided the backup into 24 files) and use the power of parallelism to transfer them.
Below are details to speed-up the process:
- If your collection is not hosted on a server that allows inbound internet traffic, you will have to restore the collection on a server or virtual machine with inbound/outbound internet access enabled. To reduce the overhead of network latency, it's recommended that you host the server/virtual machine in the same region as your planned VSTS account. Please refer to the migration guide for the setup options.
- Collection database backup: divide the collection backup into multiple files (16 to 24 files) and use the MAXTRANSFERSIZE hint. This approach reduced the total backup duration by 75% for our collection database.
BACKUP DATABASE [Tfs_DefaultCollection] TO
DISK = N'H:\MSSQL\BAK\Tfs_DefaultCollection_1.bak',
DISK = N'H:\MSSQL\BAK\Tfs_DefaultCollection_2.bak',
DISK = N'H:\MSSQL\BAK\Tfs_DefaultCollection_3.bak',
DISK = N'H:\MSSQL\BAK\Tfs_DefaultCollection_4.bak',
DISK = N'H:\MSSQL\BAK\Tfs_DefaultCollection_5.bak',
-- files 6 through 22 continue the same pattern
DISK = N'H:\MSSQL\BAK\Tfs_DefaultCollection_23.bak',
DISK = N'H:\MSSQL\BAK\Tfs_DefaultCollection_24.bak'
WITH NOFORMAT, NOINIT, NAME = N'Tfs_DefaultCollection-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, MAXTRANSFERSIZE = 4194304, STATS = 5
- Use AzCopy to transfer the backup to Azure blob storage or a VM: the AzCopy utility gives optimal performance when copying database files to an Azure blob or VM.
- Initiate multiple instances of AzCopy simultaneously: running multiple instances of AzCopy in parallel (one per file) speeds up uploading/downloading files to/from blob storage. You will have to use the /Z switch, which gives each instance its own journal folder, to run parallel instances of AzCopy.
Example: AzCopy /Source:H:\MSSQL\BAK\ /Dest:https://<storage account>.blob.core.windows.net/tfsMigration /SourceKey:"<SAS>" /pattern:"File.bak" /S /z:<unique folder for azcopy journal file>
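The parallel-instance tip above can be sketched as a small script. This is a dry-run sketch, not an official AzCopy workflow: it only prints one AzCopy command line per backup file (each with its own /Z journal folder so the instances don't collide), and the storage account, SAS token, paths, and file count are all placeholders.

```shell
#!/bin/sh
# Dry-run sketch: print one AzCopy command line per backup file so each
# upload can be launched as a separate parallel instance. Storage account,
# SAS token, paths, and file count below are placeholders, not real values.
SRC='H:\MSSQL\BAK'
DEST='https://<storage account>.blob.core.windows.net/tfsMigration'
i=1
while [ "$i" -le 4 ]; do   # 4 shown here; scale to your 16-24 backup files
  # /Z gives each instance its own journal folder, required for parallel runs
  printf 'AzCopy /Source:%s /Dest:%s /SourceKey:"<SAS>" /Pattern:"Tfs_DefaultCollection_%d.bak" /S /Z:journal_%d\n' \
    "$SRC" "$DEST" "$i" "$i"
  i=$((i + 1))
done
```

Once the placeholders are filled in with real values, the printed commands can each be launched in their own console (or backgrounded with `&` and a final `wait`) to run the uploads in parallel.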
In addition to small collections, you can migrate large collections to VSTS and take advantage of the hosted service.
Special thanks to Rogan Ferguson for review & contribution.