Great looking speaker with amazing sound quality (SoundTouch 300); no complaints with the hardware. The app, on the other hand, is just not worthy of the technology it's attached to. For some reason I can no longer use my Pandora with it; it keeps asking me to re-enter the login details, and when I do it simply tells me I can't log in under this account as it's already logged in.
5/6/2017 - app seems to be better the last month or so. I hope this means they're improving their programming skills. 3/1/2017 - I use this app because I love the speakers, but the app is a pain to update. It is the only app update for which I consistently need to babysit, test, and reboot my Android. And then it often can't find the speakers on the network after an update. Bose, please hire a better team to build your app!!
The latest update really messed up the app in my opinion. It's slower, sluggish, and harder to navigate. I love my Bose systems, but you gotta get the app down, especially when it's the only controller you provide us with! I thought they would finally bring YouTube to the app like Apple, but still nothing.
Much better than when I originally downloaded it, but improvements are still needed. I want the ability to adjust volume without having the actual app open, like Pandora and other apps. Also, I'd like the ability to play music from my computer. Perhaps it exists but I am unaware of it. Overall an excellent, affordable system.
OK for what it already does, but navigating presets is bad. Presets should not be hidden away in some menu in a far corner of the app; they should be the first thing I see. They are presets, they are what you use most, so hiding them is not very logical. It would also be great to somehow control the volume without having to reopen the app. Bluetooth volume control works fine because the volumes are in sync, but when streaming internet radio, to lower the volume you have to wait a while for the app to load. A widget or notification controls would be awesome.
Presets are a nightmare to configure and use. The layout is no better than average; it looks like a 3rd grader designed it. Spotify support is extremely limited, with no access to albums, for example. Bose, please get your act together and come up with something decent!
- There’s now a Quick Tour to introduce you to our new design
- We also added troubleshooting steps if you experience an offline speaker (we hear you and will have additional connectivity enhancements in the upcoming releases)
- There’s also a new Feedback area of the app that lets you share your thoughts with Bose engineers
- Finally, we tuned the app to include some major performance and stability improvements
Free music player for SoundCloud gives you instant access to millions of songs!
Free Music - Best free music player & app, search free songs mp3 from Soundcloud
<?php

/**
 * This file is part of slick/mail package
 *
 * For the full copyright and license information, please view the LICENSE
 * file that was distributed with this source code.
 */

namespace Slick\Mail\Tests;

use PHPUnit_Framework_MockObject_MockObject as MockObject;
use PHPUnit_Framework_TestCase as TestCase;
use Slick\Mail\Header\AddressListInterface;
use Slick\Mail\Header\HeaderInterface;
use Slick\Mail\Message;
use Slick\Mail\MessageBody;
use Slick\Mail\MessageInterface;

/**
 * E-Mail Message Test case
 *
 * @package Slick\Mail\Tests
 * @author  Filipe Silva <silvam.filipe@gmail.com>
 */
class MessageTest extends TestCase
{
    /**
     * A message created without a sender is invalid
     *
     * @test
     * @return Message
     */
    public function invalidMessage()
    {
        $message = new Message();
        $this->assertFalse($message->isValid());
        return $message;
    }

    /**
     * A message always has a Date header with its creation date
     *
     * @param Message $message
     *
     * @depends invalidMessage
     * @test
     */
    public function dateHeader(Message $message)
    {
        $headers = $message->getHeaders();
        $this->assertTrue(array_key_exists('Date', $headers));
    }

    /**
     * Sets the sender address correctly
     *
     * @test
     * @return Message
     */
    public function addFromAddress()
    {
        $message = new Message();
        $message->setFrom('john.doe@example.com', 'John Doe');
        $this->assertEquals(
            'John Doe <john.doe@example.com>',
            $message->getFromAddressList()
        );
        return $message;
    }

    /**
     * The From list is an address list
     *
     * @param Message $message
     *
     * @test
     * @depends addFromAddress
     */
    public function fromIsAnAddressList(Message $message)
    {
        $from = $message->getHeaders()['From'];
        $this->assertInstanceOf(AddressListInterface::class, $from);
    }

    /**
     * Use the addTo() method to add multiple addresses
     *
     * @param Message $message
     *
     * @test
     * @depends addFromAddress
     * @return Message
     */
    public function addToAddressList(Message $message)
    {
        $message->addTo('jane.doe@example.com', 'Jane Doe')
            ->addTo('joane.smith@test.com');
        $this->assertEquals(
            'Jane Doe <jane.doe@example.com>,joane.smith@test.com',
            $message->getToAddressList()
        );
        return $message;
    }

    /**
     * Should override the message To header
     *
     * @param Message $message
     *
     * @test
     * @depends addToAddressList
     *
     * @return Message
     */
    public function setToAddressList(Message $message)
    {
        $message->setTo('jane.doe@example.com', 'Jane Doe')
            ->addTo('elvin@example.com');
        $this->assertEquals(
            'Jane Doe <jane.doe@example.com>,elvin@example.com',
            $message->getToAddressList()
        );
        return $message;
    }

    /**
     * Set the subject header
     *
     * @param Message $message
     *
     * @test
     * @depends setToAddressList
     *
     * @return Message
     */
    public function setSubjectHeader(Message $message)
    {
        $message->setSubject('Hello Friends!');
        $this->assertEquals('Hello Friends!', $message->getSubject());
        return $message;
    }

    /**
     * The subject is a header
     *
     * @param Message $message
     *
     * @test
     * @depends setSubjectHeader
     */
    public function subjectIsAnHeader(Message $message)
    {
        $subject = $message->getHeaders()['Subject'];
        $this->assertInstanceOf(HeaderInterface::class, $subject);
    }

    /**
     * Set the message body
     *
     * @param Message $message
     *
     * @test
     * @depends setSubjectHeader
     *
     * @return Message
     */
    public function setBody(Message $message)
    {
        $body = new MessageBody('simple.mail.twig');
        $message->setBody($body);
        $this->assertEquals(
            'This is a simple e-mail message.',
            $message->getBodyText()
        );
        return $message;
    }

    /**
     * When adding a body the setMailMessage() method should be called
     * with the message as argument.
     *
     * @param Message $message
     *
     * @test
     * @depends setSubjectHeader
     */
    public function settingBodyNeedsToSetMessage(Message $message)
    {
        /** @var MessageBody|MockObject $body */
        $body = $this->getMockBuilder(MessageBody::class)
            ->setConstructorArgs(['simple.mail.twig'])
            ->setMethods(['setMailMessage'])
            ->getMock();
        $body->expects($this->once())
            ->method('setMailMessage')
            ->with($this->isInstanceOf(MessageInterface::class))
            ->willReturn($body);
        $message->setBody($body);
    }

    /**
     * Should add a header for the Cc address list
     *
     * @param Message $message
     *
     * @test
     * @depends setBody
     */
    public function addCcToMessage(Message $message)
    {
        $message->addCc('john.cc@example.com', 'John');
        $header = $message->getHeaders()['Cc'];
        $this->assertEquals(
            'Cc: John <john.cc@example.com>',
            (string) $header
        );
    }

    /**
     * Should add a header for the Bcc address list
     *
     * @param Message $message
     *
     * @test
     * @depends setBody
     */
    public function addBccToMessage(Message $message)
    {
        $message->addBcc('john.cc@example.com', 'John');
        $header = $message->getHeaders()['Bcc'];
        $this->assertEquals(
            'Bcc: John <john.cc@example.com>',
            (string) $header
        );
    }

    /**
     * Should add a header for the Reply-To address list
     *
     * @param Message $message
     *
     * @test
     * @depends setBody
     */
    public function addReplyToMessage(Message $message)
    {
        $message->addReplyTo('john.cc@example.com', 'John');
        $header = $message->getHeaders()['Reply-To'];
        $this->assertEquals(
            'Reply-To: John <john.cc@example.com>',
            (string) $header
        );
    }
}
In the image below, when the switch is closed (and when it's opened again), the video has a severe flicker. It only does this when the switch is changing states. While the load is on, the video returns to normal. I've been trying for a week to figure out how to get rid of it. Here is some scope data that may help someone more experienced than I am. Any help would be greatly appreciated.
Can you give the 4A device its own pair of wires from the battery?
It has its own positive lead coming from the battery, but it shares a ground near the camera. (I realize that is different from what the wiring diagram shows.)
Your picture only shows a single connection for the video, so presumably it's using the common earth. That suggests that the two wires carrying the video signal may not be close together. You might try taking the video and earth to the camera via a shielded twisted pair cable (with the shield grounded at one end) and see if that reduces the amount of EMF picked up on the video channel.
They are in their stock housing coming out of the camera, then they are apart for about half an inch as they go into a cat5 breakout board, then they are in a twisted cat5 pair for 8ft or so, then the reverse happens as they go into the LCD screen at the other end.
I have an idea of what might be happening. If thought of like a Y connection in a water pipe, when the high-amp load is turned on, all of the water (current) flows into it, temporarily depriving the camera and causing the brief loss of video. Once the motor's on, the current flow stabilizes and the video is clear again while the motor is powered. Is that what's happening? If so, how do you make sure the camera will always have the current and voltage it needs to operate smoothly? Use an inductor?
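A quick back-of-envelope check of the shared-ground theory raised earlier in the thread (every number below is an assumption for illustration, not a measurement from this setup): a single 24 AWG Cat5 conductor runs roughly 0.084 ohms per metre, so an 8 ft shared ground return carrying a hypothetical 20 A motor inrush would momentarily shift the camera's ground reference by several volts, far more than a 1 Vp-p composite video signal can tolerate.

```python
# Back-of-envelope: ground-reference shift on a shared return during inrush.
# All numbers are illustrative assumptions, not measurements from this thread.
RESISTANCE_PER_M = 0.084     # ohms/m, typical single 24 AWG Cat5 conductor
LENGTH_M = 8 * 0.3048        # the 8 ft run, converted to metres
INRUSH_A = 20.0              # assumed momentary motor inrush current

# V = I * R along the shared ground conductor
ground_shift = INRUSH_A * RESISTANCE_PER_M * LENGTH_M
print(f"ground shift during inrush: {ground_shift:.2f} V")
```

Even if the real inrush is only a few amps, the shift lands in the same order of magnitude as the video signal itself, which is consistent with flicker appearing only while the switch changes state.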
If you provide the 4A load with its own supply (with common earth) does this affect the problem?
David82: They are in their stock housing coming out of the camera, then they are apart for about half an inch as they go into a cat5 breakout board, then they are in a twisted cat5 pair for 8ft or so, then the reverse happens as they go into the LCD screen at the other end.
Wait - you're not using impedance matching baluns, similar to these:
...also - video shield/signal "ground" should be kept separate from chassis/power ground, IIRC...
PeterH: If you provide the 4A load with its own supply (with common earth) does this affect the problem?
It's still the same. That was one of the first things I tried.
cr0sh: Wait - you're not using impedance matching baluns
What specifically do those do besides provide an easy breakout for video? I don't have any issues with video over longer distances (only tested 50 ft). Just when the switch is cycled.
cr0sh: ...also - video shield/signal "ground" should be kept separate from chassis/power ground, IIRC...
I can't. All 8 ports of the cat5 breakout board are being used. Consolidating all the grounds together was what enabled this to work over a cat5 cable in the first place.
What size wire is used between the 12v battery and the 4a load, and how long is that wire?
zoomkat: What size wire is used between the 12v battery and the 4a load, and how long is that wire?
It is just whatever thickness is usually used in PC power supplies. Then it goes into a solid copper cat5 cable and back to PC power supply wire. From there it is soldered to the thicker pigtail wires that came with the 4-amp motor.
Then it goes into a solid copper cat5 cable and back to pc
Cat5 cables have 8 conductors; how many of these did you use for the +/- power wires? You also seem to have missed the "how long" part of the question, especially for the cat5 section.
Just one is ground, one is the 12V+ that powers the camera, one is power for the 4-amp load, one is +video, and the other four go to two bi-directional motors which, by the way, have no effect at all on the video when used.
These graphs are before and after adding a cap across ground and load+. I also tried other cameras, and powering those other cameras with a completely independent power supply (same results). Maybe the cap increases the current draw because the power supply has to suddenly charge the cap and power the load.
For the final time, how long is the cat5 cable run? :roll_eyes:
8ft like it says in the wiring illustration and again a few replies back.
Is this a repost of the same problem you asked about in 'How do you make the current draw from a device gradual instead of instant?', and again in 'How do I prevent video interference caused by electric motors', and also in 'Simple video interference problem. I'll pay to have it resolved.'? The people following this thread are in danger of wasting a lot of time and effort coming up with suggestions that have already been made.
I'm trying to provide the information necessary for someone experienced enough to actually explain why the problem exists and exactly, with certainty, what will solve it.
That sounds like a 'yes'.
The trouble I had was that I connected the libraries in CodeBlocks in the wrong order (it seems it is necessary to include first 'libmpfr.a', then 'libgmpxx.a', and finally 'libgmp.a').
Probably the most natural choice is to carry out operations with only the information we have, according to the precision of the arguments. So far MPFR C++ follows this approach.
No offense, but anyone who stopped and read this post when they needed an answer to their question would end up wasting 10 minutes of their time. All that needs to be said is 'Be specific, respectful and patient.'
Eigen includes an old version of MPFR C++. The Eigen guys promised to give me access to the Eigen source base so I can update the MPFR C++ part, but I didn't get one.
I don't think anyone would bite your head off for asking the same question that was answered two months ago and is now thirty pages back. It's another thing when you post a question that was answered two hours ago and is still on the first page.
Sorry for the n00b question. I'm used to scripted languages, and C++ feels like deep water even just getting compiling/linking right.
It might seem very easy to get input with gets(), but this can lead to trouble. The fgets() function is a safer option, but that too may cause its own problems, as I explain in this article.
The current design of MPFR C++ doesn't permit an 'mpreal' number to be used as a global variable. Generally, global variables are considered a bad solution in C++.
I just ran into the same situation as Anonymous. I use CodeBlocks and I'm also a beginner in C++ (just switching from Ruby for this one project).
If ∘ is a binary operator and the language has the compound assignment operator ∘=, then the statements a = a ∘ b and a ∘= b are equivalent.
The C language provides a number of standard data types. Sometimes it is useful to treat one type as another. In this lesson I demonstrate the hows and whys of 'type-casting'.
I want to ask you something. I tried the example code that includes your wrapper, and once I move any variable out of main(), I get an assertion error in init2.c (about precision, I guess).
No. If anybody "owns C++," it must be the ISO. AT&T gave the rights to the C++ manual that I wrote to the ISO. The ISO C++ Standard is copyrighted by ISO. Compiler vendors do not pay royalties to me or to AT&T for C++, and ISO standards are specifications intended for royalty-free use by everyone (once they have paid the ISO or a national standards committee for their copy of the standard). The individual compilers are owned by their respective vendors/suppliers.
Open-ended questions are usually perceived as open-ended time sinks. The people most likely to be able to give you a useful answer are also the busiest people (if only because they take on the most work themselves).
Re: [PATCH] regulator: core: resolve supply voltage deferral silently
From: Mark Brown
Date: Fri Sep 03 2021 - 07:11:00 EST
On Thu, Sep 02, 2021 at 03:41:02PM -0700, Brian Norris wrote:
> On Thu, Sep 2, 2021 at 10:06 AM Mark Brown <broonie@xxxxxxxxxx> wrote:
> > On Wed, Sep 01, 2021 at 01:06:28PM -0700, Brian Norris wrote:
> > > Take a look at the commit this "Fixes":
> > > 21e39809fd7c ("regulator: vctrl: Avoid lockdep warning in enable/disable ops")
> > That driver change is at most tangentially related to the code that's
> > being updated,
> It introduced another case where we hit a spurious error log. And
> below, you admit that you didn't understand what this is fixing
> without that pointer. I guess we disagree.
The point is the "another" bit - by just picking a random commit you
will cause people to think that an issue was introduced by that commit
which in turn means that people will for example use the presence of
that commit as a guide to backporting. They may not backport things far
enough since the random commit isn't there, or backport things too far
if the actual issue was introduced later which can be even worse as it
can introduce breakage where it wasn't before.
In terms of not understanding the issue here is that the patch didn't
pass the smell test, it was your explanation that helped here not the
pointing at a driver change that lacks obvious relevance. I really
don't know what the reader is supposed to infer about the change from
> > That's definitely an abuse of the API, the hardware design is pretty
> > much a gross hack anyway as far as I remember. As Chen-Yu says I'd
> > only expect this to be possible in the case where the supply is in
> > bypass mode and hasn't got its own parent. In any case I can see why
> > it's happening now...
> Well the hardware exists, the driver exists, and it all worked OK
> until somewhat recently (and now it works again, thanks to Chen-Yu).
> What should we do here, then? Just leave the "abuse" in place?
I don't think anyone came up with anything more tasteful to do with that
hardware, like I say the hardware is itself very hacky.
New ECORR signal models
I'm starting this PR, again, so we can take a look at how things are progressing. It still needs tests and some cleanup related to Issue #90. Some of the new code is pretty cool, IMHO. To deal with ECORR in the N matrix we need to make it play nice with the standard N "vector" (i.e. the diagonal part of a diagonal matrix) that we use for EFAC, and EQUAD. In order to do this I've created three special data types: ndarray_alt, csc_matrix_alt, and ShermanMorrison. They all have their own __add__ methods so that they know how to add to one another (note that csc_matrix_alt cannot be added to ShermanMorrison at the moment but I don't know if you would ever want to do that anyway). They also have their own solve methods that have the same API for all so that the Likelihood doesn't need to know which kind of N you are using.
Here is how it works: the EFAC and EQUAD signals return ndarray_alt arrays (basically numpy arrays with special __add__ and solve methods), ECORR (sparse) signals return csc_matrix_alt arrays (which, again, are a special type of scipy.sparse.csc_matrix with __add__ and solve methods), and ECORR (Sherman-Morrison) signals return a ShermanMorrison array. This way, when computing the total N in SignalCollection using the sum function, we return the correct type. For example, ndarray_alt + csc_matrix_alt returns a csc_matrix_alt type array on which we can then call solve in the likelihood. The same is true for ndarray_alt + ShermanMorrison, which returns a ShermanMorrison type array. I have tested all of this and it works smoothly, but there are still a few more checks to do.
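For intuition, the rank-one Sherman-Morrison identity that the ShermanMorrison type exploits can be sketched as a standalone function (a generic textbook version, not this PR's actual class, which handles one such update per observing epoch):

```python
import numpy as np

def sherman_morrison_solve(d, u, x):
    """Solve (D + u u^T) y = x with D = diag(d), using the
    Sherman-Morrison identity instead of forming a dense inverse.

    Generic rank-one sketch for intuition only.
    """
    dinv_x = x / d    # D^{-1} x, cheap because D is diagonal
    dinv_u = u / d    # D^{-1} u
    # (D + u u^T)^{-1} x
    #   = D^{-1} x - D^{-1} u (u^T D^{-1} x) / (1 + u^T D^{-1} u)
    return dinv_x - dinv_u * (u @ dinv_x) / (1.0 + u @ dinv_u)
```

The whole solve costs O(n) instead of the O(n^3) of a dense factorization, which is why the Sherman-Morrison path is so much faster than the sparse one in the timings below.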
Here is how to use the solve method (for any of the three types mentioned above):
To solve Y^T N^{-1} X, where X and Y can be either vectors or matrices of the correct shape you can do N.solve(X, left_array=Y).
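Concretely, the shared __add__/solve pattern can be sketched like this (a minimal, hypothetical DiagonalNoise class standing in for ndarray_alt; the PR's real classes are richer and cover the sparse and Sherman-Morrison cases too):

```python
import numpy as np

class DiagonalNoise:
    """Minimal sketch of an ndarray_alt-style diagonal N matrix.

    Stores only the diagonal of N, knows how to add itself to another
    DiagonalNoise, and exposes the Y^T N^{-1} X solve API described above.
    """

    def __init__(self, diag):
        self.diag = np.asarray(diag, dtype=float)

    def __add__(self, other):
        # Summing two diagonal noise matrices just sums their diagonals.
        return DiagonalNoise(self.diag + other.diag)

    def solve(self, x, left_array=None, logdet=False):
        # N^{-1} x is element-wise division for a diagonal matrix;
        # the transpose trick broadcasts correctly for 2-D x as well.
        ninv_x = x / self.diag if x.ndim == 1 else (x.T / self.diag).T
        res = ninv_x if left_array is None else left_array.T @ ninv_x
        if logdet:
            return res, np.sum(np.log(self.diag))
        return res
```

Because every N type exposes the same solve(X, left_array=Y, logdet=...) signature, the likelihood never has to branch on which representation the sum over signals happened to produce.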
Lastly, in terms of the ECORR signal factories themselves I have defined EcorrKernelNoise which implements the sparse matrix version and have sub-classed that to create EcorrKernelNoiseSM which implements the Sherman-Morrison version. I would still like to clean these up a bit and make them easier to subclass but they are working now.
Timings (model m is EFAC + ECORR(sherman morrison) + timing model, model m2 is the same but with ECORR (sparse)) for J1909-3744:
T = m.get_basis()
N1 = m.get_ndiag(params)
N2 = m2.get_ndiag(params)
%timeit -n5 m.get_ndiag(params)
%timeit -n5 m2.get_ndiag(params)
%timeit -n5 N1.solve(psr.residuals, left_array=T, logdet=True)
%timeit -n5 N2.solve(psr.residuals, left_array=T, logdet=True)
%timeit -n5 N1.solve(T, left_array=T)
%timeit -n5 N2.solve(T, left_array=T)
returns :
5 loops, best of 3: 159 µs per loop
5 loops, best of 3: 99.7 ms per loop
5 loops, best of 3: 10.5 ms per loop
5 loops, best of 3: 29.9 ms per loop
5 loops, best of 3: 86 ms per loop
5 loops, best of 3: 96.6 ms per loop
So the solve method is fairly similar for the two methods but constructing N for the sparse case is very slow...
Ok, I've added a new type for ECORR: BlockMatrix. It is similar to ShermanMorrison but takes a list of blocks (i.e., the small arrays that make up the block diagonal) and solves systems by just inverting the individual blocks. This way it is generic enough to handle blocks that cannot be written as the outer product of two vectors (a requirement for ShermanMorrison). Oddly enough, it is actually faster than the sparse matrix version overall.
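The BlockMatrix idea reduces to solving each small dense block independently. A standalone sketch (a plain function with assumed names, not the PR's actual class):

```python
import numpy as np

def block_solve(blocks, x, left_array=None):
    """Solve N^{-1} x for block-diagonal N given as a list of square
    blocks along the diagonal; rows of x line up with the blocks in order.

    Optionally returns left_array^T N^{-1} x, matching the solve API above.
    """
    out = np.empty_like(x, dtype=float)
    start = 0
    for blk in blocks:
        stop = start + blk.shape[0]
        # Each block is solved on its own slice of x; no global factorization.
        out[start:stop] = np.linalg.solve(blk, x[start:stop])
        start = stop
    return out if left_array is None else left_array.T @ out
```

Since the blocks are tiny (one per observing epoch), the per-block solves are cheap, which is consistent with this approach beating the global sparse factorization.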
Overall it still works, but I'm not quite happy with how I've organized this. Right now there are three separate class factories for the three different methods of doing ECORR. This may not be the right thing to do, as the signals are identical; it is just the construction method that changes. Doing things this way avoids having a bunch of if statements in the class, which we would need if we included the method as an argument, but maybe that is preferable in this case.
Thoughts?
I think this is ready to merge.
Ok, now I think we are good. After talking with @vallis, we've decided it is cleaner to just have one class factory with a method argument. It does involve a few internal methods and some if-statements but it is much clearer overall and makes more sense since these are not different signals, merely different methods of constructing the same signal.
So the signal is initialized as follows
# method can be sherman-morrison, block, or sparse
ec = EcorrKernelNoise(log10_ecorr=Uniform(-10, -5), selection=selection, method='sherman-morrison')
# initialize efac (ecorr cannot really act on its own so we always need efac even if fixed to 1)
ef = MeasurementNoise(efac=Uniform(0.5,1.5), selection=selection)
s = ef + ec
model = s(psr)
# then the signal collection knows how to handle block-diagonal N-matrices for any of the three methods
N = model.get_ndiag(params)
# matrix operations are done with the following
# T^T N^{-1} T
TNT = N.solve(T, left_array=T)
# T^T N^{-1} \delta t
TNr = N.solve(psr.residuals, left_array=T)
# \delta t^T N^{-1} \delta_t
rNr = N.solve(psr.residuals, left_array=psr.residuals)
# the solve method also takes a boolean logdet option to return the log-determinant
Ok, for whatever reason there is some black magic happening when using .. math:: in the travis tests. For now I have just removed an equation from the docstrings. We can deal with this more when we finalize the documentation.
Merging now.
(Welcome to Movies for the Resistance, a weekly column intended to showcase films with particular pertinence for 2017. One of the fundamental purposes of art in general, and movies in particular, is to serve as a spiritual armory: bringing hope, timely lessons and shared experiences when times are dark. They can move us to positive political action, lend insight to the inexplicable, and sometimes just give us a moment to remember that we’re not alone. I’m hoping to embrace as many genres and subjects as possible here: nothing is out of bounds and the plan is to vary the content as much as I can from week to week. But all of them are chosen for the same basic purpose: to support, comfort and inspire as we enter a troubling new phase in our nation’s history. We’ll showcase a new film every Tuesday.)
So yeah. Roy Moore.
For most of the rational parts of the country, the recent eruption of sexual scandals has provoked a combination of horror and soul-searching. In Alabama, however, the defiant campaign of gibbering troglodyte Roy Moore stands out not only for its sheer ugliness, but for the way his evangelical constituents have cheerfully excused his monstrosity because reasons. We’re all accustomed to the casual hypocrisy of the religious right – going back to the televangelist scandals of the 1980s at the very least – but their lack of contrition or self-reflection in the light of objective, established human evil is shocking even for them.
The movies – and indeed art of any kind – always made an easy target for the religious right. Fundamentalism thrives on blind acceptance, and creativity of any sort involves questions, challenging conventions, and looking at the world in new ways. Hollywood’s decadence and moral failings provide an easy straw man to distract from their own horrific failings, and every few years some new movie draws their ire for increasingly ridiculous reasons. (Disney’s recent live-action Beauty and the Beast, with its not-so-subtle homosexual subtext, is only the most recent example.)
Monty Python’s The Life of Brian is an archetypical example of their misplaced piety and refusal to acknowledge objective facts. The film produced an uproar when it was first released in 1979 – with the now-familiar pattern of boycotts and pious finger-pointing – and more than any other target of fundamentalist ire, it lays bare the self-serving double standard of Moore and his ilk.
Most cinematic targets of the Christian right either evince some kind of “amoral” subject matter such as sex or violence, or else treat religious topics in a manner that they disapprove of (The Last Temptation of Christ comes immediately to mind). The Life of Brian dabbles in that – in the typically irreverent Python style – but its target largely lies elsewhere. It very clearly and carefully separates Jesus Christ from the equation. The film opens with Christ preaching on the Mount and our hapless hero Brian (Graham Chapman) asking his mother to attend. They make it exceedingly clear that they are NOT attacking or mocking Christ, and that their narrative has nothing to do with his teachings. As Eric Idle said at the time, you can’t mock Christ’s philosophy. It’s good moral philosophy.
Their targets are his followers, or more strictly, the ones who let their fanaticism get in the way of adhering to that philosophy. Funny how that ticks off actual fanatics something fierce.
The movie really kicks into high gear when Brian finds himself the most reluctant of messiahs: thrust more or less against his will into a position of moral authority and haltingly attempting to guide his creepily eager followers in some kind of positive direction. He fails, miserably, but his efforts form the core of the movie’s humor and some of its funniest sequences.
Brian draws inspiration from Jesus, but he’s not especially good at implementing it, less because of his own good intentions than because those listening aren’t really interested. They have their own agenda, and are wrapped up in their own needs, which comes into conflict with the message being delivered.
That, the film assures us, is an all-too-human failing. Fanaticism certainly predates Christ, and the savagery that accompanies it remains a core part of our condition. The notion of moral wrongdoing – of perceived transgression against The Good – ironically becomes a means of committing great evil. Here, too, the Pythons’ instincts strike home, not only in Brian’s miserable failure to imbue his followers with some basic decency, but in the self-apparent barbarism that he’s fighting against. (Witness the execution scene, in which a bloodthirsty crowd really really REALLY wants to stone a man to death, but needs some flimsy excuse in order to make it acceptable.)
Religion isn’t the only sacred cow the Pythons skewer here, of course. Political power – always an easy target – receives a thorough savaging, from Michael Palin’s hysterically inept Pontius Pilate to the People’s Front of Judea (seemingly evoked every time the left starts squabbling amongst themselves).
During these sequences, the film moves to more traditional comedy, pointing out how inept the people in charge are and why we really can’t depend on them when we need them. But they’d targeted such figures before without engendering much controversy (well, no more than normal).
It was the addition of religion that set the fundies off: the perception that Python was mocking something sacred and therefore deserved public lambasting. They missed the obvious truth that they were engaging in the same bloodlust as the film’s bearded women at the stoning, but were they capable of such insight, they never would have spoken up at all. In truth, it’s the Pythons themselves that they disapprove of, and possibly the very notion of comedy: the iconoclasm, the need to punch upward, the general mocking of authority figures, and our own human proclivity for foolishness. An overt religious satire simply gave them the barest precepts of moral justification to cling to, and from that oyster they simmered a stew that they continue to greedily devour.
They’ve long since abandoned any pretext that they need to do more. They simply aren’t capable of looking the Gorgon in the face. They require a fig leaf, any fig leaf, to prevent themselves from acknowledging their own monstrosity, and now even that flimsy justification has begun to fray. So the Roy Moores of the world become the REAL victims in their minds, and the targets of his monstrous appetites suffer as a result. Christ’s teachings are actively abandoned in the name of Christ, and thus do his most vocal followers actively embody the evil they claim to decry.
Somewhere, Brian Cohen is shaking his head and wondering how it got so far.
The Pythons are smart enough to realize that there’s no easy solution to such idiocy, of course. Pointing out the hypocrisy of the religious right convinces them of nothing, save perhaps to double down and insist that, oh say, a child molesting monster is actually a righteous follower of Christ. Their only real answer is a gentle admonishment to look at our own hypocrisies and try to do better.
The film closes with Idle’s “Always Look on the Bright Side of Life,” conducted as Brian and a gaggle of fellows await death on the cross. It’s a lovely bit of absurdism, but also a reminder – quite serious – to take joy when and where you can. Especially if it means laughing at those who hate you, or who claim some flimsy moral superiority over you. “At least you’re not one of them,” it tells us. If that’s not a reason to keep punching, nothing is.
Autonomous Motion Planning Strategies in Opportunistic Navigation Environments
Abed AlRahman Al Makdah, Joshua Morales, and Zak (Zaher) M. Kassas; University of California, Riverside
Future autonomous vehicles (AVs) will be deployed in poorly known environments with minimal human interaction, where they will be tasked to complete a certain mission. In order to achieve this mission, they need to learn the environment sufficiently well to possess situational awareness. For accurate navigation, AVs need to be equipped with accurate navigation systems, such as Global Navigation Satellite Systems (GNSS). However, GNSS will not meet the demands of future autonomous systems, since GNSS signals are very weak or unavailable in indoor environments and in deep urban canyons, and are susceptible to jamming and interference. Hence, AVs need additional sources of information to exploit in order to possess situational awareness. This gives rise to the need to equip future AVs with sensors able to look for additional sources of information within the unknown environment (e.g., landmarks, radio signals), so that they map these landmarks/signals while localizing themselves within that environment. This process is known in the literature as simultaneous localization and mapping (SLAM). The case where the AV uses radio signals (e.g., signals from cellular towers or digital video broadcasting) to extract spatial and temporal information is referred to as radio SLAM.
In the traditional SLAM problem, AVs move passively while collecting information to achieve situational awareness, without acting deliberately to collect this information. Extensive research has been done on developing SLAM algorithms that produce more accurate maps. Recently, researchers have been focusing on finding motion planning strategies for autonomous systems that maximize information gathering. This type of SLAM is called active SLAM. In active SLAM, the AV is actively controlled to follow a planned strategy that maximizes information gathering, which leads to more accurate situational awareness with less uncertainty.
In practice, the main interest when deploying an AV in a stochastic environment is to perform a specified mission. However, to complete this mission, sufficient information about the environment needs to be collected. So, for the AV to complete its main mission in a stochastic environment, it has to pursue a low-level mission that corresponds to gathering information about, and mapping, this environment.
This paper addresses the problem of finding optimal motion planning strategies that optimize the performance of completing the main objective assigned to an AV deployed in a stochastic environment. The main objective cannot be achieved without gathering sufficient information about this environment. So, this work investigates how to trade off the two objectives (i.e., the main objective and the low-level objective) in order to find an optimal trajectory that optimizes the performance of the main objective.
Finding a trajectory that optimizes multiple objectives is a typical optimal control problem, in which no single solution exists that optimizes every objective simultaneously. Optimal solutions change depending on how the trade-off between the two objectives is weighted. In SLAM, however, the optimal motion planning problem is more complex, and the standard optimal control framework cannot be applied directly. This complexity stems from the fact that the AV's observations and dynamics are stochastic: the AV's states and the environment's states are estimated after applying a motion command, and then a new optimized input is planned based on the updated states and the updated map. The motion planning strategy is therefore directly influenced by the accuracy of the estimated states, making estimation and planning coupled.
Several frameworks have been applied to active SLAM in the literature. Some authors used a Model Predictive Control (MPC) approach for active SLAM. Others used multi-step look-ahead (receding horizon) planning for active SLAM in stochastic environments, using radio signals as a source of information. A greedy motion planning optimization approach has been used for optimizing situational awareness in environments with radio signals as landmarks. Recent works consider continuous-space planning approaches under uncertainty; in some of these, the initial optimal trajectory is assumed to be known and optimization is performed locally around that initial trajectory, so convergence of those approaches is not guaranteed outside the region around the initial trajectory. Several more approaches are surveyed in the literature. In this work, a multi-step look-ahead (receding horizon) strategy is adopted.
Consider an AV deployed in a stochastic environment with radio signals as the source of information. The AV has minimal a priori knowledge about its own states and the environment's, and it is assigned a specific objective: to move from its initial position to a specified final destination. This paper studies three motion planning strategies that optimize the cost function corresponding to the main objective, which, for the sake of simplicity, is taken to be the norm of the error between the estimated states of the AV and the states of the target. The first strategy minimizes only the cost function corresponding to the main objective. An optimal solution is computed using a traditional optimal control approach, assuming that the AV's estimated states coincide with its true states. This strategy results in a catastrophic failure of the system: the uncertainty increases at an enormous rate and the observer diverges. The second strategy adds a constraint to the first, bounding the uncertainty of the estimated states. In this strategy, a multi-step look-ahead (receding horizon) approach is used to solve for the optimal solution that minimizes the cost function subject to this constraint, and the system converges. In the third strategy, two cost functions are optimized simultaneously: (i) a cost function corresponding to the main objective, and (ii) a cost function corresponding to the low-level objective (learning the environment). A multi-step look-ahead (receding horizon) approach is again used to solve for the optimal solution that minimizes a convex combination of both cost functions. Like the second strategy, this strategy results in convergence of the system. However, given that the second strategy resulted in
a different optimal solution than the third strategy, the natural question is: are these the best solutions that can be obtained? This work addresses the following questions: 1) What value of the uncertainty bound in the second strategy's constraint yields the best solution? 2) Does changing this value adaptively as the AV gathers more information result in a better solution? 3) What values of the weights in the convex combination of the third strategy's cost functions yield the best solution? 4) Does changing these weights adaptively as the AV reduces the uncertainty of its situational awareness result in a better solution? 5) Once the best solutions under the second and third strategies are found, which one provides better overall performance, or do both result in similar performance?
This work will address these questions in a simulation environment, and the resulting optimal motion planning strategies will be studied analytically and in simulation.
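As a toy illustration of the trade-off studied here, consider the third strategy in a one-dimensional setting: a receding-horizon planner that, at each step, searches over short control sequences and minimizes a convex combination of a distance-to-goal cost and an estimation-uncertainty cost, where a beacon (standing in for a radio signal landmark) reduces the estimate's variance when the vehicle is nearby. All dynamics, noise models, and numbers below are invented for illustration; this is not the formulation used in the paper.

```python
# Receding-horizon (multi-step look-ahead) sketch: minimize a convex
# combination of goal-tracking cost and uncertainty cost. All models here
# are invented for illustration only.
from itertools import product

def propagate(P, r):
    """Variance grows with motion (process noise) and shrinks when the
    range measurement to the beacon is informative (small range r)."""
    return (P + 0.1) / (1.0 + 1.0 / (1.0 + r))

def plan_step(x, P, goal, beacon, alpha, horizon=3, controls=(-1.0, 0.0, 1.0)):
    """Return the first control of the cheapest control sequence over the horizon."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(controls, repeat=horizon):
        xs, Ps, cost = x, P, 0.0
        for u in seq:
            xs += u                              # simple integrator dynamics
            Ps = propagate(Ps, abs(xs - beacon))
            # convex combination of the two objectives
            cost += alpha * abs(xs - goal) + (1 - alpha) * Ps
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

def run(alpha, steps=20):
    """Simulate the closed loop: plan, apply the first control, re-estimate."""
    x, P = 0.0, 1.0                  # initial state estimate and variance
    goal, beacon = 10.0, 3.0
    for _ in range(steps):
        u = plan_step(x, P, goal, beacon, alpha)
        x += u
        P = propagate(P, abs(x - beacon))
    return x, P
```

With the weight entirely on the goal (alpha = 1), the planner drives straight to the goal and the variance settles at a relatively high value; with a tiny goal weight, it instead loiters at the beacon, keeping the uncertainty low at the cost of never reaching the goal. This mirrors the weight-selection trade-off the paper's questions 3 and 4 ask about.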
S. Thrun, W. Burgard, and D. Fox, Probabilistic robotics. MIT press, 2005.
H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: part i,”
IEEE robotics & automation magazine, vol. 13, no. 2, pp. 99–110, 2006.
C. Leung, S. Huang, N. Kwok, and G. Dissanayake, “Planning under uncertainty using model predictive control for information gathering,” Robotics and Autonomous Systems, vol. 54, no. 11, pp. 898–910, 2006.
C. Leung, S. Huang, and G. Dissanayake, “Active slam using model predictive control and attractor based exploration,” in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pp. 5026–5031, IEEE, 2006.
Z. M. Kassas, “Collaborative opportunistic navigation [student research highlight],” IEEE Aerospace and Electronic Systems Magazine, vol. 28, no. 6, pp. 38–41, 2013.
Z. M. Kassas and T. E. Humphreys, “Receding horizon trajectory optimization in op- portunistic navigation environments,” IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 2, pp. 866–877, 2015.
Z. M. Kassas, A. Arapostathis, and T. E. Humphreys, “Greedy motion planning for simultaneous signal landscape mapping and receiver localization,” IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 2, pp. 247–258, 2015.
Z. Kassas and T. E. Humphreys, “Motion planning for optimal information gathering in opportunistic navigation systems,” in AIAA Guidance, Navigation, and Control (GNC) Conference, p. 4551, 2013.
J. Van Den Berg, S. Patil, and R. Alterovitz, “Motion planning under uncertainty using iterative local optimization in belief space,” The International Journal of Robotics Research, vol. 31, no. 11, pp. 1263–1278, 2012.
V. Indelman, L. Carlone, and F. Dellaert, “Planning in the continuous domain: A generalized belief space approach for autonomous navigation in unknown environments,” The International Journal of Robotics Research, vol. 34, no. 7, pp. 849–882, 2015.
C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, “Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,” IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016.
|
OPCFW_CODE
|
It's been a few years since I graduated from college. Helpful hint: if you take notes by hand during class, which can help you retain more information and improve your memory, type them up later that day. Doing this gives you an automatic mini study session, and you'll have a chance to read through things while they're fresh in your mind. There's nothing worse than looking over notes you took a week ago and not having a clue what you meant when you wrote them.
Organize a study group. A wonderful thing about college is that you will not be in classes for a large chunk of the day. Find some classmates with similar schedules and create study groups. You can combine notes and have an all-encompassing study guide.
As is true in other fields, success in academia requires the development of specific skills that improve one's effectiveness and efficiency. Working hard at your studies is pointless if the right concepts are not studied and if the student does not take advantage of the available study time. That is why it is important for every college student to develop a sound set of study skills. The right study skills enable students to focus on the right material when they're studying, and they ensure that students use their time efficiently so that they get the most out of their studies and do not end up allowing their studies to crowd out other important activities and disciplines. Developing good study skills is how college students can study smartly, and there's no shortage of resources available for helping one develop the appropriate study strategies.
Pre-read or skim a textbook chapter before class to try to get the basic ideas. This will let you create a structure for your learning that will increase your concentration, your comprehension, your interest, and your long-term memory.
Your professors and TAs are there to help! Ask them questions about the material and the exam so that you're prepared when exam time arrives. After you're finished with each quiz, identify the questions you answered correctly and incorrectly. Then dedicate most of your study time to the ones you answered incorrectly.
Plan your time effectively: you don't always have to pull an all-nighter before your homework is due or before exams. Studying a few hours every day is much more effective and far less painful. Realize that you won't lose friends, respect, or a "good time" just because you're studying.
Realistically, we might only study for three classes a day even if we enroll in five classes, and that is fine because it forces us to space out our studying. We may also need to spend more time on a particular subject on a given day. Carrying around textbooks for all our classes would be exhausting, too.
Immediate review is crucial to retention. Unless you review within 24 hours after lecture, or at least before the next lecture, retention will drop sharply and you'll be relearning rather than reviewing. Consider this firsthand advice from students like you to begin building your ideal study space.
Make a note of examples that you haven't been able to do in the set exercises and ask for help with those the next time you have class. Go over these questions again before the test. These habits are all problematic because they unconsciously prevent us from studying more or from using effective study tips.
|
OPCFW_CODE
|
How to fix a wiggling closet railing?
The upper railing for the closet in my daughters room wiggles. I went up into the attic to see why, and it seems like it's attached predominantly into the drywall ceiling - in some places with anchors, and in some places without anchors (?!).
(Sorry for the image quality, was balancing at a strange angle and trying to get enough light into the area).
What can I do to fix this situation? I'm sort of inclined to just unscrew the railing entirely, lay a 2x4 on top of the drywall and then screw the railing back in (e.g. so then screws go through the drywall into the 2x4). Do I need to support this 2x4 in some other way, by connecting to the joist (or is it a rafter)?
If there's not too much weight, your idea should work, but if weight or moving weight (hanging clothes) is expected, then the 2x4 should be anchored to the rafters/trusses.
This is actually for the doors to the closet, so I don't think there should be too much weight.
With the movement of the doors, you will need to anchor the 2x4 so it will be solid. The doors' movement will make the holes in the drywall bigger over time, so the 2x4 needs to be mounted to the trusses.
@crip659 That makes perfect sense, thank you.
Since you have access to the back of the drywall, installing a backer board of some sort is definitely the way to go. A 2x4 is fine - actually more than you probably need. But the board needs to itself attach to at least two other boards. That could be screws between boards, or some sort of metal bracket connecting the boards and screwed into both of them.
Thank you. Does it seem fair if I lay the initial 2x4 directly on top of the drywall, and then put two smaller sections of 2x4 on top of it - which would be screwed into the joists at the side, and the 2x4 underneath it? Forming a sort of fat H. If that explanation isn't coming across, let me know and I'll try to make a drawing. :)
That's absolutely fine. But best if the new main ("initial") 2x4 is butted up against an existing board on one end (which looks easy based on the pictures) - that way the next layer keeps it from pushing away from the drywall but doesn't have to deal with sliding around too.
Oh, you're talking about the shorter 2x4 that goes from one joist to another? Butt up against that, and maybe repeat the process at the other end?
|
STACK_EXCHANGE
|
Operation timed out - connect(2) (Errno::ETIMEDOUT)

I am trying to deploy my Rails application on Heroku. Running heroku run rake db:migrate attaches to the terminal and then fails with:

Operation timed out - connect(2) (Errno::ETIMEDOUT)

I tried heroku run heroku pg:reset DATABASE, and it still didn't work. The trace points into Ruby's net/http:

/opt/ruby-enterprise-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:560:in `initialize'
/opt/ruby-enterprise-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:560:in `open'
/opt/ruby-enterprise-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:560:in `connect'
/opt/ruby-enterprise-1.8.7-2011.03/lib/ruby/1.8/timeout.rb:53:in `timeout'
/opt/ruby-enterprise-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:553:in `do_start'
/opt/ruby-enterprise-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:542:in `start'

The same exception shows up in other Ruby tools that open HTTP connections, for example during bundle install ("Please try running bundle install again.") and in fastlane's pilot upload when logging in to iTunes Connect. I'm a little confused as to why we're seeing this exception.

Answers and follow-ups:

- Errno::ETIMEDOUT is a low-level TCP connect timeout, so it is usually a network problem between you and the remote host rather than a bug in the tool. After switching to a different network, everything worked as expected for us.
- The `heroku` gem has been deprecated and replaced with the Heroku Toolbelt; install it from https://toolbelt.heroku.com/standalone instead of uninstalling and reinstalling the gem.
- You can also check the Heroku status page when things like this happen, but I don't see any current issues up there right now.
- A fix for "Errno::ETIMEDOUT: Operation timed out - connect(2)" when installing gems is collected in this gist: https://gist.github.com/xarimanx/1798774
- For bundler, the exception should be wrapped in Faraday::Error::TimeoutError (duplicate of #407).
- In one deployment script, sleep_while_not_running checked instance.status, which says "running" long before the server is fully baked; fixing that and restarting the server made everything work fine.
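Since Errno::ETIMEDOUT is usually a transient network failure, a small retry wrapper can paper over a flaky connection. This is an illustrative sketch only — the helper name and its parameters are my own and not part of the heroku CLI, bundler, or fastlane:

```ruby
# Retry a block a few times when the TCP connection times out.
# Illustrative workaround for transient Errno::ETIMEDOUT failures;
# `attempts` and `delay` are hypothetical parameters, not a real API.
def with_timeout_retries(attempts: 3, delay: 0)
  tries = 0
  begin
    yield
  rescue Errno::ETIMEDOUT
    tries += 1
    raise if tries >= attempts   # give up after the last attempt
    sleep(delay)                 # back off before trying again
    retry
  end
end

# Example usage (any code that opens a connection can go in the block):
#   body = with_timeout_retries(attempts: 3, delay: 2) do
#     Net::HTTP.get(URI("https://api.heroku.com/"))
#   end
```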
|
OPCFW_CODE
|
Error When Trying to Create a New Android Project in Eclipse
I've just downloaded ADT Bundle for Windows 64, which comes with Eclipse + ADT plugin, Android SDK Tools and so on... when I try to create a new Android project, right after the screen where I choose the Blank Activity template, the following error message is displayed:
Unsupported template dependency: Upgrade your Android Eclipse plugin
This same screen informs me that the template depends on the Android Support Library, and right below there are two buttons, one for Install/Upgrade and the other for Check Again. Neither of those buttons works. I've already done the necessary updates in Help/Check for Updates and Help/Install New Software in Eclipse. I've also done the necessary updates in the SDK Manager. None of those updates fixed the problem. Can someone help me with this?
thanks in advance.
What version of Android SDK Tools do you have?
Please, upload the screenshot of the problem..
my Android SDK version is 22.2.0 and my Eclipse version is 4.2.1
You have to add more repositories to your Eclipse.
I am not sure, but try adding these repositories before updating:
https://dl-ssl.google.com/android/eclipse
UPDATED
Download from http://dl-ssl.google.com/android/repository/tools_r22-windows.zip, unzip it and replace the /tools folder with this one to downgrade the sdk tools. There's a bug in version 22.
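The manual downgrade described above can be scripted roughly like this; the SDK path is a placeholder for wherever your ADT bundle lives, and the download URL is the one given in the answer:

```shell
# Roll the Android SDK Tools back from 22.x to r22.
# The SDK_DIR path is a placeholder - adjust it to your install location.
SDK_DIR="$HOME/adt-bundle-windows-x86_64/sdk"
cd "$SDK_DIR"
mv tools tools-22.x-backup   # keep the buggy version around, just in case
curl -L -O http://dl-ssl.google.com/android/repository/tools_r22-windows.zip
unzip tools_r22-windows.zip  # assumes the archive unpacks to a tools/ folder
```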
As I said for the user Bay, I installed all the repositories available for download in Eclipse. I also installed all the tools and extras and some of the APIs that I want to work with in SDK Manager. The problem still persists.
link
Here you can download older versions of Eclipse
Since you asked, @ThorMy, see my updated answer.
Augustus, the problem was solved when I updated the Android SDK Tools. When I posted this question, Android SDK Tools version was 22.2, now, it is 22.2.1. I also had to update Eclipse plugins. Thanks a lot for your assistance.
You may follow these steps (in Eclipse): File >> New >> Project >> Android >> Android Application Project.
If that doesn't help, follow these steps again and, when you choose Android, you may see a "?" icon on the lower-left side.
If that still doesn't help, try older Eclipse versions, and verify that you are downloading the x64 one. Also, try an older ADT Plugin archive from the developer.android.com site.
These should be helpful for you.
Firstly, I ran the Eclipse x64 version, because my Windows is 64-bit. Then I ran the Eclipse x32 version. It didn't work either way. In both cases, I installed all the repositories available for download in Eclipse. I also installed all the tools and extras and some of the APIs that I want to work with in the SDK Manager. It didn't fix the problem. I don't know what else to try.
How can I download older versions of Eclipse or the Android SDK Manager?
For Eclipse older download: http://www.eclipse.org/downloads/packages/release/juno/sr2
For ADT-Plugin older ones: http://developer.android.com/tools/sdk/tools-notes.html
Bay, the problem was solved when I updated the Android SDK Tools. When I posted this question, Android SDK Tools version was 22.2, now, it is 22.2.1. I also had to update Eclipse plugins. Thanks a lot for your assistance.
Okay Thor, you're welcome.
Bay. Now, I noticed that the emulators are not running at all. Did you have the same problem? What now? Can you help me again?
Let me revise my last post. It loads, and I can see the screen with the icons, but when I try some options it says: "Launcher has stopped".
Maybe your emulator's RAM is too low; you can increase your virtual device's RAM. I know the emulator works slowly. I connect my 4.2.2 phone to my Eclipse, and when I click RUN it starts up on my phone directly.
Android Virtual Device Manager (click that icon at the top of your Eclipse) >> choose the device you created before >> under memory options >> set RAM to 512 or 1024, whichever you have.
If you have a mobile phone, activate debugging in its options menu. Then go to your Eclipse project >> right click >> Run As >> Run Configurations >> click on the Target tab.
|
STACK_EXCHANGE
|
I am a dishonest debater.
I remember being around 7 years old and arguing with my brother. Some adult had told us in the past not to run when crossing the road, and I agreed. My brother, a little older and a little more rebellious, still wanted to run across the road. One day, he did, and I confronted him.
"Don't run across the road" I (proudly) stated.
"Why not?" he (reasonably) asked.
At this point I was in trouble. I had no idea why it was a bad idea (at least in my best recollection). I ended up spinning a convoluted argument about how he might run, suddenly realize in the middle of the road that there was a car coming, trip, and then get hit by a car.
It wasn't my proudest moment in debate, and I can tell you that I didn't convince him. I wish I had learned something all those years ago, but I don't think I have.
The thing is, I frequently catch myself doing this while arguing for something I truly believe is true. The debate could be about religion, health care policy, or even just about what the best pizza topping is. Often I care deeply about the outcome of this debate at a policy level, and I truly want to convince the person on the other side of the debate, or at least open their mind.
When I'm at my best, I listen carefully to their arguments, and try to genuinely communicate that I'm listening and to genuinely understand their point. That's the only way to build common ground between positions. Then I try to give my position as carefully as possible, without embellishing or exaggerating my argument just for rhetorical effect. This means that they hear the minimal core of my argument, which I hope is as convincing as possible. For example, in a religious debate I might talk about historical Jesus rather than telling someone it's important they believe in every miracle Jesus worked, today. This is sometimes a bad idea, since some arguments are only convincing when presented in their strongest form, but it's the only way I want to live. I'm happy with incremental change because I'm aware I could be wrong.
When I'm at my worst, though, I embellish. I use rhetoric. I raise my voice, and I subtly cast aspersions on people's characters to make them think it's obvious that my side is correct. I almost do this as a reflex, and sometimes I'm barely aware that I'm doing it. I often don't realize that my tone of voice is implying that what I'm saying is obvious. And worst of all, I make leaps of logic to try to keep my argument together.
The problem is that against a poor debater, or someone not really interested in debate, this often works. They'll agree with me out of general respect for me, or because of the pressure to conform, or just pretend to agree to end the argument. And I go off feeling like I've done something worthwhile.
But there are two huge problems with this approach. One, it means I'm "educating" people with bad arguments. If a skilled debater argues with them and they bring up the same (invalid) points, they'll get shot down and feel stupid. They'll probably end up less convinced of what I wanted in the first place! Two, when I'm against a skilled debater, they'll lose respect for me and my ideas because they'll recognize I'm not arguing in good faith.
So, I'm going to try to change. Premise one: I believe in the ideas I'm arguing for. Premise two: if I'm wrong, I want to learn that and change my mind. So the best conclusion I see is that I become more disciplined about presenting only the strong core of my argument. My mission should be to find common ground with opposing debaters so we can drill down to premises, axioms, studies, and morals. Once we're there, we can really start to open each others' minds and hopefully advance the cause of truth just a little bit.
|
OPCFW_CODE
|
I don't have much to add besides what is already in the conversation I had with Bruce some time ago (to which he already linked in an earlier message), but I thought I might mention a few things about where my rough prototype came from, in case it might be useful to those new to the discussion.
Mainly, I would suggest that before a conversation about UI specs can be truly productive, it needs to be decided (at least by those developing the UI) whether the UI will be organized around macros or types (as has already been mentioned a few times in this conversation). Certainly there are a number of smaller configuration options that one can quibble about, but the macro/type distinction seems crucial to me because it dictates the fundamental visual structure of the interface and the fundamental logic behind the code.
Personally, I developed my rudimentary tool under the assumptions: 1) that
users don’t want to know or care about macros or any other underlying
structure of CSL files, and 2) that they think about styles predominately in
terms of item type. My tool obviously reflects that.
I also thought, and still think, that it would be unnecessarily difficult to
maintain a type-based UI that creates CSL files that were heavily macro
oriented, so I was content to use more explicit type definitions in the CSL
files. But I don’t think this is a compromise in creating good CSL files.
While using macros for creators and dates and other important exceptions is
entirely necessary (and good practice), I don’t believe that the best CSL
files are organized fundamentally around macros and are light on
explicit type definitions. While more macros and fewer type definitions get
you more generic functionality from a CSL file, I think they make
maintaining and extending the CSL much more difficult, especially with a
tool, and especially if that tool presents a type-based UI.
This philosophy grows out of my belief that CSL files are most useful and
powerful when easily editable by a number of people who will be correcting
the styles to work with rare item types and non-standardized sources over
time. It is most commonly in the edge cases, I think, when people get
frustrated with citation software that doesn’t allow small formatting
adjustments either in common or rare citation cases. I created the tool
largely with that audience in mind. Macros, as the term itself implies, are
best at generic behavior, and there certainly is a lot of that in a given
style. But to enable users to easily make small and incremental changes
means that editing a style will tend toward explicit type definitions and
away from macros. So it was my conclusion in groping towards a prototype
that the advantages of explicit type definitions far outweigh the coding
complexity to maintain macros.
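To make the contrast concrete, here is a hypothetical pair of CSL fragments (the element names follow the CSL schema, but the style itself is invented): a macro that handles titles generically across types, versus an explicit per-type layout that a user can tweak for one item type without touching the others.

```xml
<!-- Macro-oriented: generic behavior shared across all item types -->
<macro name="title">
  <choose>
    <if type="book">
      <text variable="title" font-style="italic"/>
    </if>
    <else>
      <text variable="title" quotes="true"/>
    </else>
  </choose>
</macro>

<!-- Type-oriented: explicit per-type layout, easy to adjust one case -->
<layout>
  <choose>
    <if type="book">
      <group delimiter=". ">
        <names variable="author"/>
        <text variable="title" font-style="italic"/>
        <text variable="publisher"/>
      </group>
    </if>
    <else-if type="article-journal">
      <group delimiter=". ">
        <names variable="author"/>
        <text variable="title" quotes="true"/>
        <text variable="container-title" font-style="italic"/>
      </group>
    </else-if>
  </choose>
</layout>
```

The macro version is more compact, but a change for one item type means editing logic shared by all types; the explicit version duplicates structure, which is exactly what makes small, type-local edits easy to expose in a type-based UI.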
The above is true of course only if the UI is type-based, as mine is. But
one could create a macro oriented UI that eliminates some if not much of the
coding complexity I have just alluded to. So I return to my initial point
that the type vs. macro approach in the UI is a fundamental driver of how
things will unfold both in the UI and the code.
View Issue Details
ID: 0018318
Project: MediaMonkey for Android
Category: Synchronization
View Status: public
Date Submitted: 2021-09-19 15:45
Last Update: 2021-09-20 11:11
Summary: 0018318: Duplicate sync profile / problems with 2.0 (regression)
Description: Upon attempting to sync with 2.0, using an existing profile from 1.x, a duplicate profile was created.
New Profile: Google Pixel 2 XL - Internal shared storage
Old profile: was renamed to ""
The sync operation then:
- synced tracks from a playlist (Mix 3) that was checked on the auto-sync list
- did not delete unselected tracks!
- did not create the playlist on the device!
I suspect that the problem stems from the fact that a duplicate profile was created on the initial sync. This may be because the IP address of the MM5 machine changed.
Log ID (from a second sync operation--after a duplicate profile had already been created): EI89CL16AC
Tags: No tags attached.
Fixed in build:
Further testing reveals that duplicate servers are created every time MMA attempts to authenticate ("An unknown portable device is attempting to sync with the MediaMonkey Library for the first time.... Do you want to grant it access?..."). This appears to be related to MM5 presenting the authentication dialog first when the MMA user clicks 'Select sync server' > <SyncServerName>, and then a second time when the MMA user attempts to choose which 'Server content' to sync.
What ends up happening is that some config settings (e.g. track deletion settings) are then saved with the initially created profile (Google Pixel 2 XL - Internal shared storage) and others (e.g. sync list) are associated with the second profile ("").
This bug makes it impossible to properly sync MMA 2.
Retested on a clean install of MMA (after having deleted sync profiles in MM5) and the issue still occurs.
So it is currently not possible to set up and initiate synchronization with MMW5 from MMA.
The workaround is to:
1 Initiate configuration via MMA
2 In MMW5, delete the "" profile, and configure the server content to sync (this way the sync list is associated with the correct profile and a new one isn't created)
3 In MMA initiate the sync
I can't replicate.
I have 20+ devices in that test build and not a single issue with 1010.
Tested upgrade, Clean/New profile MM5.
Note that the sync profile is stored within MM5.DB, so if you deleted the profile in MM5 (or didn't use the MM5.DB/settings previously used for the same device) then new profile creation is expected.
EDIT: Based on Rusty's video there are evidently two profiles created, so I think MMA erroneously sends two different sync keys (profile IDs).
The first one with key '9ec46b44-eee2-368c-8104-61ada296bf73.0.63d061ae-3381-48fe-bf9d-6fd9d637e2cb'
based on MMA's request:
/DeviceID=9ec46b44-eee2-368c-8104-61ada296bf73.0.63d061ae-3381-48fe-bf9d-6fd9d637e2cb_DeviceName=Google Pixel 2 XL - Internal shared storage_ItemID=DeviceConfig.xml
And the second (empty) profile with empty name based on MMA's request:
So the problem is in the request 'SyncItems:DeviceID:null' that should be 'SyncItems:DeviceID:9ec46b44-eee2-368c-8104-61ada296bf73.0.63d061ae-3381-48fe-bf9d-6fd9d637e2cb'
Paypal IPN and Golang integration on GAE
I am writing a listener for handling Paypal IPN messages and responses.
Per PayPal's IPN requirements, the listener has to post the values received from PayPal back in the same order, with a new parameter "cmd=_notify-validate" inserted at the front of the value list.
Your listener HTTP POSTs the complete, unaltered message back to PayPal.
Note: This message must contain the same fields, in the same order, as the original IPN from PayPal, all preceded by cmd=_notify-validate. Further, this message must use the same encoding as the original.
However, Go's url.Values type is implemented as a map, so the order of the values is not guaranteed to be the same each time it is iterated.
...When iterating over a map with a range loop, the iteration order is not specified and is not guaranteed to be the same from one iteration to the next"
And when url.Values' Encode method is called, the result is sorted by key:
Encode encodes the values into “URL encoded” form ("bar=baz&foo=quux") sorted by key.
The listener is running on GAE thus I use "appengine/urlfetch"'s PostForm function which takes url.Values as the second parameter
c := appengine.NewContext(r)
client := urlfetch.Client(c)
resp, err := client.PostForm("https://www.sandbox.paypal.com/cgi-bin/webscr", r.Form)
As url.Values is a map, the order of its values is not guaranteed. How can I pass the parameter values back to PayPal in the same order they were received from the IPN, using the GAE urlfetch service?
Use Post instead of PostForm. You can probably use the body from the request:
var buf bytes.Buffer
buf.WriteString("cmd=_notify-validate&")
io.Copy(&buf, r.Body)
client.Post("http://localhost", "application/x-www-form-urlencoded", &buf)
This works but after the io.Copy(&buf, r.Body) line, I tried to run r.ParseForm(). It had no error since I have the error handler for it. However, I got no value in r.Form. Seems like io.Copy did something to r.Body? And I can't run parseForm() before io.Copy(&buf, r.Body) because it altered the body somehow. Do you have any idea?
Or do I have to copy the r.Body by value for running io.Copy(&buf, r.Body) separately?
Yeah sorry, consuming the body with io.Copy will make it not available for ParseForm. (and vice-versa) You could replace the Body with a TeeReader: http://golang.org/pkg/io/#TeeReader. So instead of io.Copy, do r.Body = io.TeeReader(r.Body, &buf) and then call: r.ParseForm().
Oh my. Now I encounter http: invalid Read on closed Body after calling client.Post(.., .., tt). How can I keep the body open after calling io.TeeReader()? (I used a new variable tt := io.TeeReader(r.Body, &buf) because TeeReader returns Reader that does not implement closer)
Sorry didn't notice that. You'll have to make your own TeeReader that supports closing. This actually isn't so hard because the code for TeeReader is simple: http://golang.org/src/pkg/io/io.go?s=15125:15166#L465. Here's an untested example: http://play.golang.org/p/-0qsJrkw_x. Then you'd use r.Body = TeeReadCloser(r.Body, &buf).
Intuitive Surgical Careers
Network Applications Engineer
Primary Location: United States-California-US-CA-Sunnyvale
Requisition ID: 182112
Who is Intuitive Surgical? The numbers tell an amazing story. Learn more about our company. Joining Intuitive Surgical means joining a team dedicated to using technology to benefit patients by improving surgical efficacy and decreasing surgical invasiveness, with patient safety as our highest priority.
Primary Function of Position:
We are seeking passionate software developers to help build applications that enable the transfer of data between IOT educational products and cloud infrastructures. Furthermore, this developer will partner with web application developers to extract meaning from the profuse amount of educational data for our customers so they may better their performance.
Your core role will be to work with architectural leads to define, build and deploy functional communications pipelines that move large amounts of data across our infrastructure and to work closely with front end developers to display relevant and actionable information to our customers. This is a great opportunity to create an extremely groundbreaking product that positively impacts the lives of people.
As part of the design team, immediate responsibilities include:
• Build data infrastructure in collaboration with a lead engineer:
o Develop functional asynchronous data pipelines that have high throughput and are resilient to failure using the latest software technologies
o Develop SQL queries and stored procedures that store data in an efficient manner
o Develop applications that interface with 3rd party Restful APIs
o Develop Restful APIs for use by 3rd parties
o Verification test all manners of software related to data transfer, Restful APIs, and Web UI
• Build applications to help people understand and visualize data coming from our next generation low latency streaming protocol from our robot. These tools will enable the acquisition of rich data from the robot for use by research partners, internal engineers, UI and software developers, and many others in the company.
• Create rich web and mobile applications to replay, annotate and interact with synchronized surgical video, audio, system events and rendered robot motion
We have a wide spectrum of work to do and are looking for the right person over a precise set of skills.
The candidate will have some combination of following skills/abilities:
• Educational background in a Math/Statistics, Software, Electrical, or Computer Engineering related program, with 2 to 5 years of industry working experience
• Enthusiasm and passion for current IOT software technology
• Interest in data analytics and a desire to derive meaning from reams of data
• Experience in statistical modeling in the context of data discovery, data analysis, and machine learning
• Technologies and Languages:
• Visualization Frameworks and tools
o d3.js
o three.js
o Highcharts
o more generally, SVG, HTML5 Canvas, WebGL, etc.
• Other web frameworks and tools
o jQuery
o AngularJS
• Backend experience
o Apache
o Flask
o Basic Linux administration familiarity
o AWS or other cloud based deployment experience
• Expertise in visually representing complex information in an actionable and impactful way.
• Ability to work in an ambiguous, dynamic, and often fast-paced environment.
• Comfortable working in a collaborative / team-based approach.
• Ability to lead cross-functional/business projects.
We will consider for employment qualified applicants with arrest and conviction records in accordance with fair chance laws.
The user specifies any two of these three quantities and the program derives the third. A description of each calculation, written in English, is generated and may be copied into the user's documents. Interactive help is available. The program provides methods that are appropriate for matched and independent t-tests, survival analysis, matched and unmatched studies of dichotomous events, the Mantel-Haenszel test, and linear regression. It can plot graphs of any two of these variables while holding the third constant.
Power? What Power?
Linear or logarithmic axes may be used and multiple curves can be plotted on each graph. Graphs may be copied and pasted into other documents or programs for further editing. Next we will outline three approaches: A, B and C. In approach A, we want to plan a fairly definitive study and have plenty of time and funding.
What power should we use? What size difference are we looking for? While we haven't yet studied people with hypertension, other studies give us a rough idea of the average number of receptors per platelet. How large a difference would we care about? Let's say that our budget or patience only lets us do a study with 11 subjects in each group.
How much information can we obtain? Is such a study worth doing?
With a small study, we know we are going to have to make do with a moderate amount of power. In that case, what's the point of doing the experiment? We want more power than that, but know we can't have a huge amount of power without a large sample size. This sample size analysis has helped us figure out what we can hope to learn given the sample size we already chose.
Now we can decide whether the experiment is even worth doing. Different people would decide this differently. How can all three approaches be correct? If you specify exactly what power you want, and how large an effect you want to detect, StatMate can tell you exactly how many subjects you need. But generally, you won't be sure about what power you want or are willing to accept or how large an effect you want to detect.
Therefore, you can justify almost any sample size. It depends on how large an effect you want to find, how sure you want to be to find it (power), and how willing you are to mistakenly find a significant difference (alpha). So there is no one right answer. It depends on why you are looking for a difference and on the cost, hassle and risk of doing the experiment. Graph the relationship between N and power: StatMate does not create graphs itself, but if you own a copy of GraphPad Prism version 4 you can plot the tradeoff table. Each curve is for a different power, and shows the relationship between the sample size you could choose for each group (X) and the difference you would then detect as "significant" (Y).
As you go from left to right, the curves go down. This makes sense — if you use more subjects collect more data , then you'll be able to reliably detect smaller differences. Each curve is for a different power. If you choose a higher power, the curve shifts to the right. This also makes sense — if you want more power to have less chance of missing a real difference , then you'll need more subjects.
We'll choose the sample size chosen in approach B. In step 3 of StatMate, each value is a link. The screen shot below shows the first two of four sections of the report: a reiteration of your choices, and a detailed interpretation. The report then shows the entire table of tradeoffs which you have already seen and a discussion of when it makes sense to design studies with unequal sample sizes. You may view the entire report for this example as a pdf file.
You can send the entire report to Word with the click of a button (Windows only), or via copy and paste. Using StatMate is entirely self-explanatory, and this example discusses the logic behind power analysis more than the mechanics of using StatMate. Learn the basic concepts of statistical power.
We will continue analyzing the experiment discussed in the sample size example (Clinical Science). We determined the number of alpha2-adrenergic receptors on platelets of people with and without hypertension.
Here are the results (controls vs. hypertensives). The data were analyzed with an unpaired t test in Prism. Because the mean receptor number was almost the same in the two groups, the P value is very high. These data provide no evidence that the mean receptor number differs between the two groups. One approach is to interpret the confidence interval. Enter the results of the study.
Note that you do not need to enter the mean of the two groups. Mean values are not needed for power calculations. You need only enter the size and variability of the groups. StatMate shows us the power of the study (given the sample sizes and standard deviations you entered) to detect various hypothetical differences (delta). The screen shot above shows the first two of three sections of the report: a reiteration of your choices, and a detailed interpretation. The report then shows the entire table of tradeoffs which you have already seen. You can export the report to Word by clicking one button (Windows only) or via copy and paste.
Calculate sample size: How many subjects (data points) do you need? Naturally, the answer is "it depends". It depends on how large a difference you are looking for, how much your data vary, and on how willing you are to risk mistakenly finding a difference by chance or mistakenly missing a real difference. StatMate helps you see the tradeoffs, so you can pick an appropriate sample size for your experiment. Calculate power: Just because a study reaches the conclusion that the results are "not statistically significant" doesn't mean that the treatment was ineffective.
StatMate calculates the power of a test to detect various hypothetical differences. Your sample size and power wizard. Why Choose StatMate? StatMate shows you the tradeoffs Some programs ask how much statistical power you desire and how large an effect you are looking for and then tell you what sample size you should use.
What about power? Introduction It is easiest to understand sample size calculations in the context of an example.
For this example, we want to compute sample size for a new study. Later, we'll go through an example of determining the power of a completed experiment. What is your experimental design? In this example, we plan to compare the mean of two groups using an unpaired t test. Step 3: View tradeoff of sample size and power Some programs would ask you at this point how much statistical power you desire and how large an effect size you are looking for.
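The arithmetic behind this tradeoff can be sketched with the usual normal approximation. This is an illustration only, not StatMate's exact algorithm (which works from the t distribution), and the effect sizes below are made-up examples:

```python
# Normal-approximation sketch of sample size for comparing two means
# with an unpaired t test: n per group = 2 * ((z_alpha + z_power) / d)^2,
# where d is the effect size in standard-deviation units (Cohen's d).
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects per group needed to detect `effect_size`."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_power = z(power)           # desired power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for d in (0.25, 0.5, 1.0):
    print(f"d={d}: about {n_per_group(d):.0f} subjects per group")
```

Smaller differences and higher power both push the required n up quickly, which is exactly the tradeoff table StatMate displays.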
Approach A. That is a lot of subjects.
Approach B shows an approach that justifies fewer subjects. Approach B. In this approach, we want a smaller sample size, and are willing to make compromises for it.
I honestly don't know how much it applies to desktops, but certainly in other form factors, you can't rely on TDP as an indicator of the size of power supply needed. TDP tells you how much heat the heatsink needs to be able to dissipate. If you exceed the TDP for a few seconds, there's enough thermal mass that you won't exceed the maximum junction temperature of the chip, as long as you keep your average power (on the scale of seconds or minutes) below the TDP. Indeed, many devices like laptops, tablets, and cell phones rely on this for maximum performance; if you load a web page, the CPU will ramp up well above the sustained TDP in order to render the page, relying on the fact that the CPU will be nearly idle or off while you read the page after it has rendered, so the average power will be below the TDP.
Instead, there is a different parameter - sometimes referred to as EDP. This tells you the power that the power supply needs to be able to provide, more on the scale of milliseconds (so a long enough timescale that the capacitors on the board can't keep up with the power demands, but not so long that the TDP starts to limit you). EDP is always higher than the TDP.
As I said, I don't build my own computers though (I just design them ;-)), so I have no idea if that's something that has worked its way into choosing a power supply yet. It doesn't seem like it, from some googling.
Other things that can make a difference are inrush current and power factor. I have a desktop that has about a 950W power supply. Typical power draw is on the order of 150 watts or so. So you'd think that a quality UPS with a 900 VA rating should be plenty - unfortunately, when powering it on or waking it, either the inrush current or the power factor as it charges all the inductors and capacitors in the power supply exceeds the capacity of the UPS about one in every 10 times. In that case, the UPS lights up its "Overload" light and shuts off, which rather negates the whole point of a UPS. Additionally, because turning on a light switch with a lot of CFLs often causes enough disturbance to the power to trip the UPS, and the UPS activating causes it to wake the computer, it happened that turning on a light switch would often cause my computer to be immediately disconnected from power, as it would hit the overload condition.
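To see why the VA rating, rather than the wattage, is what matters here, a rough sketch helps. The 0.5 power factor and 6x surge multiplier below are illustrative assumptions, not measurements of the hardware described:

```python
# Back-of-envelope: a UPS trips on apparent power (VA), not just real
# power (W). Apparent power = real power / power factor, so a poor power
# factor during inrush can overload a UPS even at modest wattage.
def apparent_power_va(real_watts, power_factor):
    """Apparent power the UPS must supply, in volt-amperes."""
    return real_watts / power_factor

UPS_RATING_VA = 900

steady = apparent_power_va(150, 0.95)    # typical draw: well within rating
surge = apparent_power_va(150 * 6, 0.5)  # hypothetical wake-up inrush
print(f"steady ~ {steady:.0f} VA, surge ~ {surge:.0f} VA, rating = {UPS_RATING_VA} VA")
```

Under these assumed numbers the steady load is a small fraction of the rating, while a brief surge can exceed it severalfold, matching the intermittent "Overload" behavior.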
So I moved the UPS to my TV and network connections, and put my computer back on a surge protector. Everything is much happier now.
Fill blanks with adjacent fields with conditions
I want to fill blanks (row-wise) with the adjacent fields' text if they are the same, or with "X" otherwise. Kindly suggest an AWK/sed solution. (Additional requirement: the distance spanned by the empty fields matters, computed from the header record; it should be < 100. Otherwise fill with "X" even if the adjacent fields match.)
An example of blank fields filled with "X" even after matching: Line0, $612-$822.
Input(tab-separated)
ID 577 592 598 600 612 650 700 822 825 830 840 870
Line0 A A A A
Line1 B B NA B
Line2 B A A A
Explanation of exceptional empty fields
Exceptional intervals are Columns ID-600 to ID-822 because the distance is greater than 100
Expected output
ID 577 592 598 600 612 650 700 822 825 830 840 870
Line0 A A A A X X X X A A A A
Line1 B B B B X X X X NA X X B
Line2 B X X A X X X X A A A A
Welcome to the site. To help contributors understand your problem, please add some more information on your input and desired output. Is your input file tab-separated (or is a combination of tabs, spaces, etc. possible)? Is the lower example the result of what you have tried, or an example of what the output should look like?
@AdminBee, guided. Basically, if Adjacent fields in a row are the same, fill empty fields with Adjacent value i.e A---A will be AAAAA., thanks.
Not sure I understood your request correctly, but how about
awk -F"\t" '
NR > 1 {i = 2
while (i<=NF) {if (!$i) {while (!$(++i)) ;
for (j=LAST+1; j<i; j++) $j = ($LAST == $i)?$LAST:"X"
}
LAST = i++
}
}
1
' OFS="\t" file
ID s577 s592 s598 s600 s612 s650 s700 s822 s825 s830 s840 s870
line0 A A A A A A A A A A A A
line1 B B B B X X X X NA X X B
line2 B X X A A A A A A A A A
Annotated version on request:
awk -F"\t" '
NR > 1 {i = 2 # Don't work on the header line
while (i<=NF) {if (!$i) {while (!$(++i)) ; # check every field if empty and
# increment i while seeing empty
# fields; i now holds next non-
# empty field
for (j=LAST+1; j<i; j++) $j = ($LAST == $i)?$LAST:"X"
# fill the empty fields with "X"
# or last non-empty field's value
# depending on actual and last
# fields' values being equal or not
}
LAST = i++ # retain last non-empty field
}
}
1 # default action: print
' OFS="\t" file
you nailed it. Please provide some explanations about the script! I just had to add a dummy column at the end of the file so that the last blank can be computed correctly.
Is there a "last blank"? Not obvious from your sample... And if there were, should it become "X" or the last non-empty field's value?
@glen-viet: added comments for explanation.
Succinct explanations, many thanks @RudiC. I hope I can use it to handle important conditions I forgot to put in the question, i.e. if the distance of the blanks is more than 100, replace with X even if the adjacent fields match (header modified to integers). The "last blank" is not an issue; I suppose it was an input problem on my end, not the script.
Really sorry to bother you again! Please correct the follow-up script trying to account for the distance between headers.
awk -F"\t" '
NR>0 {i = 2
while (i<=NF) {if (!$i) {while (!$(++i)&&((i-LAST)<100)) ;
for (j=LAST+1; j<i; j++) $j = ($LAST == $i)?$LAST:"X"
}
LAST = i++
}
}
1
' OFS="\t" INPUTAWK.txt > inputAWKFilled.txt
Try (untested) to adapt the "conditional assignment": $j = (($LAST == $i) && ((LAST-i) < 100))?$LAST:"X"
Not successful, I suppose another variable is required to track the headers but I have no idea how to introduce!
Add an (abbreviated) example.
edited the original post and added an example of expected output. I hope it's ok! .Thank you.
Make it (i-LAST) in lieu of (LAST-i).
No change! This is the script I am using: awk -F"\t" ' NR > 0 {i = 2 while (i<=NF) {if (!$i) {while (!$(++i)) ; for (j=LAST+1; j<i; j++) $j = (($LAST == $i) && (((i-LAST)) < 100))?$LAST:"X" } LAST = i++ } } 1 ' OFS="\t" INPUTAWK.txt > inputAWKFilled.txt
Can check header record using head -1 INPUTAWK.txt | sed 's/* delimiter*/\n/ g' |nl > inputAWKColumns.txt but dont know how to incorporate into the script
NR>0 is a non-condition as it will always be true. I'm not sure what your request is, now. Please specify in detail. The (i-LAST) condition worked when I tested it with your sample (reduced to 10 empty columns)
I have added an image to illustrate, briefly; if the interval of empty fields is greater than 100 based on header difference;it should be filled with "X". In my sample for example, all fields in columns between ID-600 and ID-825.
Code using but not successful .Thanks again.```
awk -F"\t" '
NR > 1 {i = 2
while (i<=NF) {if (!$i) {while (!$(++i)) ;
for (j=LAST+1; j<i; j++) $j = (($LAST == $i) && ((i - LAST) < 100))?$LAST:"X"
}
LAST = i++
}
}
1
' OFS="\t" unix.txt > unixFilled2.txt
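Putting the pieces of this thread together, here is an untested sketch that stores the header values from the first record and applies the 100-unit distance rule to the header values themselves (rather than to field indices). The sample file, the per-row LAST reset, and the bounded inner loop are my additions, not from the accepted script:

```shell
# Hypothetical two-line sample: numeric headers, one row with an empty run
# whose bounding headers are 200 apart (so it must be filled with "X").
printf 'ID\t10\t20\t200\t210\nLine0\tA\t\t\tA\n' > input.txt

awk -F'\t' '
NR == 1 { for (k = 1; k <= NF; k++) h[k] = $k }   # remember header values
NR > 1  { LAST = 1; i = 2                         # reset per row
          while (i <= NF) {
            if ($i == "") {
              while (++i <= NF && $i == "") ;     # skip the empty run; stops at
                                                  # NF+1, so trailing blanks need
                                                  # no dummy column
              for (j = LAST + 1; j < i; j++)      # copy the value only when the
                                                  # neighbours match AND their
                                                  # headers are < 100 apart
                $j = ($LAST == $i && h[i] - h[LAST] < 100) ? $LAST : "X"
            }
            LAST = i++
          }
        }
1' OFS='\t' input.txt > filled.txt

# Second line of filled.txt: Line0  A  X  X  A  (210 - 10 = 200 >= 100)
cat filled.txt
```

The only new state is the h[] array of header values; the fill logic is otherwise the same shape as RudiC's script above.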
I’ve just released an updated version of my package for estuary monitoring data, SWMPr, available on CRAN. I’ve made several additions to the package since its initial release – nothing too crazy, but enough to warrant another push to CRAN and a blog post. I’ve been pretty bad about regular updates, but I’ve added a few features to make some of the functions easier to use, in addition to some new functions for plotting SWMP data. I’ll start with a brief overview of the package, then describe some of the major changes since the last release (2.0.0). As always, please keep a close watch on the GitHub repository for progress on the development version of the package.
What is SWMPr? SWMPr is an R package for estuary monitoring data from the National Estuarine Research Reserve System (NERRS). NERRS is a collection of reserve programs located at 28 estuaries in the United States. The System-Wide Monitoring Program (SWMP) was established by NERRS in 1995 as a long-term monitoring program to collect water quality, nutrient, and weather data at over 140 stations (more info here). To date, over 58 million records have been collected and are available online through the Centralized Data Management Office (CDMO). The SWMPr package provides a bridge between R and the data provided by SWMP (which explains the super clever name). The package is meant to augment existing CDMO services and to provide more generic features for working with water quality time series. The initial release included functions to import SWMP data from the CDMO directly into R, functions for data organization, and some basic analysis functions. The original release also included functions for estimating rates of ecosystem primary production using the open-water method.
# installing and loading the package install.packages('SWMPr') library(SWMPr)
What’s new in 2.1? A full list of everything that’s changed can be viewed here. Not all these changes are interesting (bugs mostly), but they are worth viewing if you care about the nitty gritty. The most noteworthy changes include the following.
- The overplot function can be used to plot multiple variables with identical scaling on the y-axis. I think this is generally discouraged under sound plotting theory (see the rants here), but overplotting is an often-requested feature regardless of popular opinion. I had to use base graphics to write this function since it's not possible with ggplot. I actually borrowed most of the code from a colleague at NERRS, shouts to the Grand Bay office. To illustrate ease of use...
# import data and do some initial clean up data(apacpwq) dat <- qaqc(apacpwq) # a truly heinous plot overplot(dat, select = c('depth', 'do_mgl', 'ph', 'turb'), subset = c('2013-01-01 0:0', '2013-02-01 0:0'), lwd = 2)
- The qaqc function now has more flexible filtering of QAQC data flags by using regular expression matching, rather than searching by integer flags as in the previous version. What this means is that observations can be filtered with greater control over which flags and errors are removed. This example shows how to remove flags using the old method as integer flags and using the new method. The second example will keep all flags that are annotated with the 'CSM' comment code (meaning check the metadata). The value of this approach is that not all integer flags are coded the same, i.e., QAQC flags with the same integer may not always have the same error code. The user may not want to remove all flags of a single type if only certain error codes are important.
# import data data(apadbwq) dat <- apadbwq # retain only '0' and '-1' flags, as in the older version newdat <- qaqc(dat, qaqc_keep = c('0', '-1')) # retain observations with the 'CSM' error code newdat <- qaqc(dat, qaqc_keep = 'CSM')
- Several of the data import functions were limited in the total number of records that could be requested from the CDMO. I made some dirty looping hacks so that most of these rate limitations, although technically still imposed, can be ignored when making large data requests to the CDMO. Previously, the all_params_dtrng functions were limited to 100 records in a single request - not very useful for time series taken every 15 minutes. The new version lets you download any number of records using these functions, although be warned that the data request can take a long time for larger requests. As before, your computer's IP address must be registered with the CDMO to use these functions.
Although it's now theoretically possible to retrieve all the SWMP data with the above functions, using the import_local function is still much, much easier. The main advantage of this function is that local data can be imported into R, which allows the user to import large amounts of data from a single request. The new release of SWMPr makes this process even easier by allowing data to be imported directly from the compressed, zipped data folder returned from the CDMO data request. The syntax is the same, but the full path including the .zip file extension must be included. As before, this function is designed to be used with data from the zip downloads feature of the CDMO.
# this is the path for the downloaded data files, zipped folder path <- 'C:/this/is/my/data/path.zip' # import the data dat <- import_local(path, 'apaebmet')
- A nice feature in R documentation that I recently discovered is the ability to search for functions by ‘concept’ or ‘alias’ tags. I’ve described the functions in SWMPr as being in one of three categories based on their intended use in the data workflow: retrieve, organize, and analyze. The new version of SWMPr uses these categories as search terms for finding the help files for each function. The package includes additional functions not in these categories but they are mostly intended as helpers for the primary functions. As always, consult the manual for full documentation.
help.search(package = 'SWMPr', 'retrieve') help.search(package = 'SWMPr', 'organize') help.search(package = 'SWMPr', 'analyze')
- Finally, I've added several default methods to existing SWMPr functions to make them easier to use outside of the normal SWMPr workflow. For example, combining time series with different time steps is a common challenge prior to data analysis. The comb function achieves this task for SWMP data, although using the previous release of the package on generic data was rather clunky. The new default method makes it easier to combine data objects with a generic format (data frames), provided a few additional arguments are supplied so the function knows how to handle the information. Default methods were also added for the
I guarantee there are some bugs in this new release and I gladly welcome bug reports on the issues tab of the development repo. Ideas for additional features can also be posted. Please check out our SWMPrats web page for other SWMP-related analysis tools.
First of all I just gotta say: kudos on the awesome forums!
Now to my issue:
I have a European SCPH-50004, v9 I believe, and I understand that this version is known for being silent. I've been playing quite a bit and never even thought about the sounds it makes, until yesterday when I got my Network Adapter. Without the Network Adapter it is silent enough and doesn't bother me at all, but as soon as I plug in the adapter the fan will constantly run at max speed. I tried to unplug EVERYTHING external (controllers, memory cards, composite cable, disc) except (of course) the power cord and the Network Adapter (without HDD or even a network cable plugged in), and as soon as I start the console it's like the PS2 checks if there's a Network Adapter present, and when there is, it's like "Woho, I found a network adapter, let's set the fan's speed to max and never slow it down!". I've opened up the case and there was basically no dust; however, that shouldn't be the source of the problem anyway, as it's rather silent without the adapter plugged in.
Basically, when I start with the Network Adapter the fan will start on full speed and not slow down even if the console is cool and/or idling.
I have never used one of these adapters before, so I have no idea how it is supposed to impact the fan, and unfortunately I don't have any other console available to test it on. But it seems weird that this version is supposed to be silent, and I can't seem to find anyone with the same issue when searching the net.
* PAL SCPH-50004/v9, not chipped or anything. Should be 100% original.
* PAL Network Adapter, manual says SCPH-10350 EE.
* Fan: NMB 2406ML-09W-B40 7V 0.29A. On the plastic it's written PBT-GF30-FR. Can't seem to find any info on whether it's original or not, but it seems to fit nicely etc.
Other info (may or may not be relevant):
I bought the console from a friend, who bought it used from Gamestop. When I opened the case, one of the case's screws was missing and I noticed the casing of the DVD reader had been opened. Probably (or hopefully) by Gamestop; not sure if it was just for maintenance or if there have been some internal changes, but nothing inside looks weird.
The Network Adapter I also bought used; even though the former owner claimed he never used it, the case was broken. However, the sticker covering the RJ45 socket was still there and everything looked brand new.
I tried to play SSX 3 over the network using OPL, and it seemed to freeze rather frequently. However, I suspect my home network or OPL compatibility rather than the adapter itself.
In summary, I'm looking for answers to the following questions:
1.) Is this behavior expected? What is your experience with the Network Adapter and fan noise/speed?
2.) Is there any way to lower the fan noise/speed, other than using a quieter 3rd-party fan?
3.) If the answer to question 1 is "no": is my fan original for v9, or for another version of the console?
|
OPCFW_CODE
|
import test from 'ava'
import randomFrog from './'

// Build the direct imgur URL for a post, using the album cover for albums
function getImageURL (imageData) {
  if (!imageData) {
    return
  }
  // strip any query string from the file extension before appending it
  return `https://imgur.com/${imageData.is_album ? imageData.album_cover : imageData.hash}${imageData.ext.replace(/\?.*/, '')}`
}

const imgurRegEx = /^https?:\/\/(\w+\.)?imgur\.com\/[a-zA-Z0-9]+(\.[a-zA-Z]{3})?(#[a-zA-Z]*)?$/

test('get random', async t => {
  const result = await randomFrog()
  t.regex(getImageURL(result), imgurRegEx)
})

test.cb('use callback', t => {
  t.plan(2)
  randomFrog((err, result) => {
    t.falsy(err)
    t.regex(getImageURL(result), imgurRegEx)
    t.end()
  })
})

test.cb('use callback and different subreddit', t => {
  t.plan(2)
  randomFrog('aww', (err, result) => {
    t.falsy(err)
    t.regex(getImageURL(result), imgurRegEx)
    t.end()
  })
})

test('get more random', async t => {
  const result1 = await randomFrog()
  t.regex(getImageURL(result1), imgurRegEx)
  const result2 = await randomFrog()
  t.regex(getImageURL(result2), imgurRegEx)
  const result3 = await randomFrog()
  t.regex(getImageURL(result3), imgurRegEx)
  const result4 = await randomFrog()
  t.regex(getImageURL(result4), imgurRegEx)
})

test('different subreddit', async t => {
  const result1 = await randomFrog('aww')
  t.regex(getImageURL(result1), imgurRegEx)
  const result2 = await randomFrog('aww')
  t.regex(getImageURL(result2), imgurRegEx)
  const result3 = await randomFrog('aww')
  t.regex(getImageURL(result3), imgurRegEx)
  const result4 = await randomFrog('aww')
  t.regex(getImageURL(result4), imgurRegEx)
})

test('invalid subreddit', async t => {
  // a nonsense subreddit name should yield no image data
  const result1 = await randomFrog('23rkljr2klj3')
  t.falsy(getImageURL(result1))
  // falsy or non-string arguments are expected to fall back to the default
  const result2 = await randomFrog('')
  t.regex(getImageURL(result2), imgurRegEx)
  const result3 = await randomFrog({})
  t.regex(getImageURL(result3), imgurRegEx)
  const result4 = await randomFrog(false)
  t.regex(getImageURL(result4), imgurRegEx)
})
|
STACK_EDU
|
Older versions of Apache had all the configuration information in a single file, such as /etc/httpd.conf. In Apache 2 on SUSE, the configuration is modularized, and there is a set of configuration files under the directory /etc/apache2/. The main configuration file /etc/apache2/httpd.conf references (and describes in comments) the other configuration files in the same directory. In each case, a directive starting with the key word Include is used to pull in the content of the subsidiary configuration files.
As with other server daemons, when you make a change in the configuration files, you must either restart or reload the server using rcapache2 restart or rcapache2 reload.
There is also an option rcapache2 configtest that allows you to check whether the syntax of the configuration files is acceptable. This is equivalent to the command apache2ctl -t or apache2ctl configtest. Running any of these commands should show output like this:
# apache2ctl configtest
Syntax OK
However, if there is a problem, you will see something like the following:
# apache2ctl configtest
Syntax error on line 19 of /etc/apache2/listen.conf: Invalid command 'Listeen', perhaps misspelled or defined by a module not included in the server configuration
We have made a "typo" in listen.conf, replacing the Listen directive by Listeen.
In the Apache configuration files, a hash character (#) introduces a comment, which is not read by Apache. Directives in the configuration files are often written starting with a capital letter, but they are actually case-insensitive. However, the arguments to the directives may well be case-sensitive. Lines that are too long can be continued on the next line with a backslash character (\).
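To illustrate these conventions, here is a short hypothetical configuration fragment (the file paths are examples only):

```
# This is a comment and is ignored by Apache.
Include /etc/apache2/listen.conf

# Directives are case-insensitive, so "errorlog" works the same as "ErrorLog":
errorlog /var/log/apache2/error_log

# An overly long line can be continued with a trailing backslash:
LogFormat "%h %l %u %t \"%r\" %>s %b" \
    common
```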
Looking more closely at the configuration files, the active lines in the file /etc/apache2/httpd.conf are mostly Include directives, pulling in other configuration files. Exceptions to this are the global ErrorLog directive (defining where Apache should log error information) and a few other global settings, including the section that denies access to the entire filesystem (important for security reasons).
The configuration file is broken down into three main sections (comments in the file show where each one starts), but most of the content in each section is provided by Include directives:
■ Global Environment: This section contains directives that affect the overall operation of the Apache web server.
■ Main Server: This section provides configuration options dedicated to the operation of the primary web site.
■ Virtual Host: Here you can apply the same configuration options available in the main server configuration section to any virtual host.
Virtual hosts are discussed later in the chapter.
|
OPCFW_CODE
|
Causal Neural Mechanisms for Decision Making: Putting Rules into Context
In nature, different rules apply in different contexts. For example, how we act in a classroom is very different from how we behave when we are with our friends. Likewise, a self-driving car would need to know that in some scenarios cautiously turning on a red light to allow an ambulance to pass may be required. However, before we can understand how to program machines, we first need to understand how natural systems, like the brain, learn rules in different contexts. There are at least two possibilities. One is computationally inefficient but easy to implement: every rule is learned separately in each relevant context. The other is that a context highlights the relevant rules that apply to it. This is a much more elegant and efficient way to achieve rule learning, and machine learning could benefit from understanding how the brain achieves this.
A motivated student has the opportunity to work within a synergistic partnership between a lab in Newcastle and a lab in Durham. The Newcastle lab has built a computational model relevant to this work and studies rule learning in primates and people. The Durham lab works with rodents on learning in different contexts. The studentship will include designing a task based on the computational model, one that is likely to be learnable by both primates and rodents. In Durham, the task would then be tested while rodents work on it with and without impairment to their hippocampal or frontal system. The Newcastle work will involve primates working on the task while an innovative approach for modulating the brain using sonic waves is used to test the role of the frontal and hippocampal systems in rule or context learning.
The potential outcome is information in two species that are crucial for translating neuroscientific findings to humans, and it could lead to better machine learning models. The student will also receive unique skills training that is in high demand in the UK and by funders like the BBSRC. They will have an opportunity to sculpt their ideal PhD work around the topic and will be supported by a company partner (Brain Box Inc.) and a prominent figure in neuroscience who will periodically join the student supervision and support meetings remotely. The outcome could not only advance scientific knowledge of the neurobiological mechanisms underlying crucial aspects of human intelligence but also show how such mechanisms might be implemented in artificial systems.
HOW TO APPLY
Applications should be made by emailing [Email Address Removed] with a CV (including contact details of at least two academic (or other relevant) referees), and a covering letter – clearly stating your first choice project, and optionally 2nd and 3rd ranked projects, as well as including whatever additional information you feel is pertinent to your application; you may wish to indicate, for example, why you are particularly interested in the selected project(s) and at the selected University. Applications not meeting these criteria will be rejected.
In addition to the CV and covering letter, please email a completed copy of the Additional Details Form (Word document) to [Email Address Removed]. A blank copy of this form can be found at: https://www.nld-dtp.org.uk/how-apply.
Informal enquiries may be made to [Email Address Removed]
This is a 4 year BBSRC CASE studentship under the Newcastle-Liverpool-Durham DTP. The successful applicant will receive research costs, tuition fees and stipend (£15,009 for 2019-20). The PhD will start in October 2020. Applicants should have, or be expecting to receive, a 2.1 Hons degree (or equivalent) in a relevant subject. EU candidates must have been resident in the UK for 3 years in order to receive full support. Please note, there are 2 stages to the application process.
Neural mechanisms for complex combinatorial binding: Computationally and neurobiologically informed hypotheses (in press) Philosophical Transactions of the Royal Society, Biological Sciences
Changes in presynaptic calcium signalling accompany age-related deficits in hippocampal LTP and cognitive impairment. (2019) Aging Cell 18(5): e13008
The NMDA receptor antagonist MK-801 fails to impair long-term recognition memory in mice when the state-dependency of memory is controlled. (2019) Neurobiology of Learning and Memory 161: 57-62
An Open Resource for Non-human Primate Imaging. (2018) Neuron 100, 61-74.
Artificial grammar learning in vascular and progressive non-fluent aphasias. (2017) Neuropsychologia, 104, 201-213
Sequence learning comparably modulates neuronal nested oscillations in human and monkey auditory cortex (2017). PLoS Biology e2000219.
Different forms of hierarchical effective connectivity in primate fronto-temporal pathways. (2015) Nature Communications 6, DOI: 10.1038/ncomms7000. Open Access
Episodic-Like Memory for What-Where-Which Occasion is Selectively Impaired in the 3xTgAD Mouse Model of Alzheimer’s Disease. (2013) Journal of Alzheimer's Disease 33(3): 681-698
Orthogonal representation of sound dimensions in the primate midbrain. (2011) Nature Neuroscience, 14(4): 423-5.
|
OPCFW_CODE
|
A quadrilateral $ABCD$ is formed from four distinct points (called the vertices), no three of which are collinear, and from the segments $AB$, $CB$, $CD$, and $DA$ (called the sides), which have no intersections except at those endpoints labeled by the same letter. The notation for this quadrilateral is not unique; e.g., quadrilateral $ABCD$ = quadrilateral $CBAD$.
Two vertices that are endpoints of a side are called adjacent; otherwise the two vertices are called opposite. The remaining pair of segments $AC$ and $BD$ formed from the four points are called diagonals of the quadrilateral; they may or may not intersect at some fifth point. If $X$, $Y$, $Z$ are the vertices of quadrilateral $ABCD$ such that $Y$ is adjacent to both $X$ and $Z$, then angle $XYZ$ is called an angle of the quadrilateral; if $W$ is the fourth vertex, then angle $XWZ$ and angle $XYZ$ are called opposite angles.
The quadrilaterals of main interest are the convex ones. By definition, they are the quadrilaterals such that each pair of opposite sides, e.g., $AB$ and $CD$, has the property that $CD$ is contained in one of the half-planes bounded by the line through $A$ and $B$, and $AB$ is contained in one of the half-planes bounded by the line through $C$ and $D$.
a) Using Pasch's theorem, prove that if one pair of opposite sides has this property, then so does the other pair of opposite sides.
b) Prove, using the crossbar theorem, that the following are equivalent:
- The quadrilateral is convex.
- Each vertex of the quadrilateral lies in the interior of the opposite angle.
- The diagonals of the quadrilateral meet.
I can’t seem to make sense of a) at all. For b) I approached it in this way - I made three separate proofs:
- If the quadrilateral is convex, then the diagonals of the quadrilateral meet.
Proof: Assume quadrilateral $ABCD$ is a convex quadrilateral. We have to prove that segment $AC$ and segment $BD$ have a point in common. By the definition of a convex quadrilateral, $C$ is in the interior of angle $DAB$. Hence, ray $AC$ intersects segment $BD$ at some point $E$ (by the crossbar theorem). Therefore, $E$ is the required intersection point of the diagonals $AC$ and $BD$.
- If the diagonals of the quadrilateral meet, then each vertex of the quadrilateral lies in the interior of the opposite angle.
- If each vertex of the quadrilateral lies in the interior of the opposite angle, then the quadrilateral is convex.
I’m also confused about the proofs for 2 and 3. Theorems and axioms that might be helpful:
Pasch’s Theorem: If $A$, $B$, and $C$ are distinct points and $l$ is any line intersecting $AB$ in a point between $A$ and $B$, then $l$ also intersects either $AC$, or $BC$. If $C$ does not lie on $l$, then $l$ does not intersect both $AC$ and $BC$.
Interior of an angle: Given an angle $\angle CAB$, define a point $D$ to be in the interior of $\angle CAB$ if $D$ is on the same side of ray $AC$ as $B$ and if $D$ is also on the same side of ray $AB$ as $C$. Thus, the interior of an angle is the intersection of two half-planes.
Crossbar Theorem: If ray $AD$ is between ray $AC$ and ray $AB$, then ray $AD$ intersects segment $BC$.
Any help at all would be much appreciated.
|
OPCFW_CODE
|
amino acid frame triplet side chain
Hello, thanks for open sourcing your project.
I was wondering: if I train ScanNet from scratch, should I modify line 170 to use the nitrogen atom of the current amino acid (atom_coordinate) instead of that of the previous amino acid, i.e. atom_coordinates[l-1]? I noticed in your comment that it was due to a previous issue. So, just to confirm: if I retrain the network, my understanding is that I should replace lines 169-175 and lines 182-188, is that correct?
https://github.com/jertubiana/ScanNet/blob/77d851037ef564b8d10aa0e23f063eb1c9243ea6/preprocessing/protein_frames.py#L166-L188
Hi,
Thanks for your interest in our research.
Regarding protein_frames.py, please leave it as it is; the change was already made.
However, if you retrain, you can comment l427 of file preprocessing/protein_chemistry.py and uncomment l428.
This is a bug regarding histidine parsing that I only noticed recently.
Best regards,
Jérôme
Thanks for your reply! Though I am still confused about lines 169-171 in protein_frames.py.
For example, for a glycine (residue index l), in lines 169-171 you are calculating its virtual side-chain CoM using the glycine's C and Cα atom coordinates and the nitrogen atom of its preceding residue (l-1), if I understand correctly. In your paper, the side-chain CoM was defined using the Cα, C, and N atom coordinates of the residue itself. So why use the nitrogen from the preceding residue, atom_coordinates[l-1][atom_ids[l-1].index(17)], rather than the glycine's own nitrogen atom, e.g. atom_coordinate[atom.index(17)]?
Hi, sorry for the belated reply and the confusion.
After double checking the situation, you are correct.
You should indeed comment out lines 169-171. I thought I had already made the correction but actually had not.
Thanks for pointing out the issue, and best of luck for your future research.
Best,
Jérôme
|
GITHUB_ARCHIVE
|
The Vccorelib141xvd.dll trojan has been known to duplicate itself into several files across several locations on the same PC. Each duplicate is given a new name to further lessen the chances of a complete detection and removal. Going by the file name alone, Vccorelib141xvd.dll appears to imitate a Microsoft Visual C++ runtime library; the legitimate system file that looks most similar is Vccorlib140.dll.
Once your download is complete, run the YouCam downloader to complete the installation.Click here to restart the download if it didn’t automatically start. You’ll also be required to give your system a name; pick out a username for yourself, and come up with a password. You can also choose to encrypt your home directory to keep files relatively safe from prying eyes. However, an encrypted home directory slows systems down.
No-Fuss Methods For Dll Clarified
Anyway, a regular developer will rarely use those IDs. So basically I noticed that some applications are not prepared to handle ICL files properly for all cases.
It blocks any malicious ActiveX and provides security to your PC. Once in the Resource Monitor, click on ‘CPU’ Now select ‘Associated handles’. Type name of the file in the dialogue box and press Enter. From the results, right-click on each process and select ‘End Task’. Press Windows + R, type “taskmgr” in the dialogue box and press Enter.
The Options For Simple Secrets In Missing Dll Files
Icons embedded into executable files generally represent or are directly used by an application. Windows also uses icons to display the application in its taskbar for example, or in Windows Explorer shell lists. I modified a running exe with resource editor and i could see the changes only after i restarted the exe. If you want to change the icon while your program is running i don’t think it’s possible. This function uses the ICONDIR and ICONIMAGE structures to replace an icon resource identified by the bundle ID and the language ID .
- Open the icon library containing the icons you wish to export.
- Intel does not intend to offer updates to mitigate any security vulnerability in this Software, whether currently known or discovered in the future.
- A bad application installation may corrupt one or more files, causing the .dll errors.
If the version of protection system supports Windows 7, but you have problems with application launch nevertheless, please contact our support group. Only for companies and public authorities Microsoft offers a paid extension of support until January 2023. The program is only available for companies that use Windows 7 Professional and Windows 7 Enterprise . The respective clientele receives the offer for ‘Windows 7 Extended Security Updates ’ only if there is a volume license agreement I had reported in the blog post Wow!
Painless Programs In Missing Dll Files – An Intro
If for some reason your Windows Update stops working or you are working on computer not connected to internet, then you may miss some important critical updates & patches. To help to update Windows 7 offline here is WSUS Offline Update utility. With WSUS Offline Update, you can update different versions of Windows including Windows 7, 8, 8.1, 10 or the various newer server versions. For Office, 2007, 2010, 2013 and 2016 are supported. Choose how Windows can install updates according to your needs.
By downloading and using this program, you declare to have read and agree with the End User License Agreement for our software. To create a user account, please fill up the form below. We will send a validation message to the email address you indicate. Meeting policies are used to control the features that are available to meeting participants for meetings that are scheduled by users in your organization. This is a per-user policy and applies before a meeting starts. Allow the Outlook Add-in setting to control whether Teams meetings can be scheduled from within Outlook or not. Also, users can access Teams Add-In from outlook home.
|
OPCFW_CODE
|
In this assignment, you will solve a regression problem in two ways: using the closed-form least-squares solution, and using gradient descent.
Part I: Linear regression using closed-form solution (10 points)
[5 pts] Write a function [w] = lr_solve_closed(X_train, y_train) that computes the closed-form least-squares solution to linear regression, using the Moore-Penrose inverse, as derived in class. Use the Matlab function pinv. The body of this function only requires one line of code.
- X_train is an NxD feature matrix with N samples and D feature dimensions. N is the number of samples in the training set.
- y_train is an Nx1 vector containing the labels for the training set. The i-th sample in y_train should correspond to the i-th row in X_train.
- w is a Dx1 vector of weights (one per feature dimension).
[5 pts] Also write a function [y_pred] = lr_predict(X_test, w) that uses the weights computed above, to predict a label for a new test sample.
- X_test is an MxD feature matrix with M samples and D feature dimensions. M is the number of samples in the test set.
- w is a Dx1 vector of weights.
- y_pred is the predicted Mx1 vector of labels for the test set.
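As an illustrative sketch of Part I (shown here in Python/NumPy; the assignment itself asks for Matlab, where pinv plays the same role as np.linalg.pinv):

```python
import numpy as np

def lr_solve_closed(X_train, y_train):
    # Closed-form least squares via the Moore-Penrose pseudoinverse:
    # w = pinv(X) @ y, a single line as in the assignment
    return np.linalg.pinv(X_train) @ y_train

def lr_predict(X_test, w):
    # Predicted labels are just the linear model output X @ w
    return X_test @ w
```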
Part II: Linear regression using gradient descent (20 points)
Now implement the gradient descent solution, in a function [w] = lr_solve_gd(X_train, y_train, iters, eta).
Inputs: same as for lr_solve_closed, plus:
- iters, the number of iterations to run gradient descent for, and
- eta, the learning rate to use in the weight update.
Outputs: same as for lr_solve_closed.
- [5 pts] First, initialize the weights in some way (use either random values or all zeros).
- [10 pts] Second, repeat the following iters times. In each iteration, first compute the loss function gradient using all training data points. To do this, you need to use lr_predict.m.
- [5 pts] Then, adjust the weights in the direction opposite to the gradient.
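The steps above can be sketched as follows (Python/NumPy for illustration of the update rule, not a reference implementation; the gradient shown is that of the mean squared error):

```python
import numpy as np

def lr_predict(X, w):
    # Linear model predictions
    return X @ w

def lr_solve_gd(X_train, y_train, iters, eta):
    N, D = X_train.shape
    w = np.zeros(D)  # initialize all weights to zero
    for _ in range(iters):
        residual = lr_predict(X_train, w) - y_train
        grad = (2.0 / N) * (X_train.T @ residual)  # gradient of the MSE loss
        w = w - eta * grad                         # step opposite to the gradient
    return w
```

With a small enough learning rate, the iterates approach the same solution as the closed-form method.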
Part III: Testing on the Wine Quality dataset (20 points)
You will use the Wine Quality dataset. Use only the red wine data. The goal is to find the quality score of some wine based on its attributes. Include your code in a script regression.m. In a file report.pdf or report.docx, report the L2 error (described below) for the closed-form and gradient descent solutions.
- [10 pts] First, download the winequality-red.csv file, load it in Matlab (e.g. using dlmread) and divide the data into a training and test set using approximately 50% for training. Standardize the data, by computing the mean and standard deviation for each feature dimension using the train set only, then subtracting the mean and dividing by the stdev for each feature and each sample. Append a 1 for each feature vector, which will correspond to the bias that our model learns.
- [5 pts] Find the direct closed-form solution and evaluate the accuracy on the test set, by computing the L2 distance between the predicted vector y_pred and the ground-truth vector y_test. Print the L2 error in your script, with an appropriate description for what is being printed; use fprintf. Include it in your report.
- [5 pts] Now compute and evaluate the gradient descent solution. Use 50 iterations, and experiment with the following values for the learning rate: 10.^(-6:-1). Evaluate the L2 distance between predicted and ground-truth test labels as above. Print the errors for each learning rate and include them in your report.
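The standardization and bias-append step can be sketched like this (Python/NumPy for illustration; the standardize helper name is not part of the assignment). The key point is that the mean and standard deviation come from the training split only:

```python
import numpy as np

def standardize(X_train, X_test):
    # Compute statistics on the training split only, then apply to both splits
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0)
    X_train_s = (X_train - mu) / sd
    X_test_s = (X_test - mu) / sd
    # Append a constant-1 column corresponding to the learned bias
    X_train_s = np.hstack([X_train_s, np.ones((X_train_s.shape[0], 1))])
    X_test_s = np.hstack([X_test_s, np.ones((X_test_s.shape[0], 1))])
    return X_train_s, X_test_s
```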
Submission: Please include the following files:
|
OPCFW_CODE
|
A Physics Problem in Calculus.
In our class, we encountered a problem that goes something like this: "A ball is thrown vertically upward with ...". Since the motion of the object is rectilinear and is free fall, we all agree that the acceleration $a(t)$ has magnitude 32 feet per second squared. However, we are confused about the sign of $a(t)$, that is, whether it is positive or negative.
Now, various references state that if we let the upward direction be positive then $a$ is negative, and if we let downward be the positive direction, then $a$ is positive. The problem with their claim is that they did not explain well how they arrived at that conclusion.
My question now is that, why is the acceleration $a$ negative if we choose the upward direction to be positive. Note: I need a simple but comprehensive answer. Thanks in advance.
If the ball is moving up (call that positive), then intuition says that gravity pulls it back down (negative). Thus, regardless of the initial velocity, the acceleration due to gravity is down (negative). If, for whatever reason, you say that that up is negative, and you throw a ball up, then gravity counteracts this by pulling in the positive direction. In either case, gravity will pull the object back towards Earth.
You have thrown the ball upwards now you're not applying any force to it anymore, so the only force that applies to the object is the gravitational force which works in the negative direction of the velocity. Hence a is negative by convention.
I agree with your intuition too @ElliotG. However, I also want to know the answer to my question using a frame of reference. Thanks for the comment.
@kingW3 I agree that I am not applying any force to it anymore. Thanks for your comment. But can you explain a bit further when you say that gravity works in the negative direction of velocity? I am thinking what if the ball is going down.
Since it's a free fall, the acceleration is :
$$\vec{a}(t) = \vec{g}$$
Since it is rectilinear you get :
$$a(t) = \vec{a}(t)\cdot\vec{z} = \vec{g}\cdot\vec{z}$$
So if $\vec{g}$ and $\vec{z}$ point in the same direction, i.e. $\vec{z}$ is downward, you get $a(t) = g$.
And if $\vec{g}$ and $\vec{z}$ point in opposite directions, i.e. if $\vec{z}$ is upward (as $\vec{g}$ is always downward), you get $a(t) = - g$.
It is more natural to take $\vec{z}$ upward because you will have positive $z(t)$ when the ball is up.
@N.F.Taussig didn't know which word to use, I am not a native english speaker. Thanks for pointing it out
Note that $\vec{g}$ itself is already an acceleration, not a force.
I will try to answer my own question, but correct me if I am wrong. This answer builds on @Yves's answer and on Serway's book, which states that the negative sign in $a$ simply means that the acceleration is in the negative direction.
Clearly, if we set the upward direction to be positive, the gravitational force that acts on the object is in the negative direction. Using Newton's law $F=ma$, we have $-F=ma$. Now, since mass is a positive scalar, in order for the force to be in the negative direction, $a$ must be in the negative direction; that is, $a$ must be negative. Note that the negative sign in $a$ means the acceleration is downward. Thanks for all your answers and comments.
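The convention can also be checked numerically. Here is a minimal sketch with upward taken as positive, so $a = -32$ (the initial speed of 64 ft/s is just an illustrative value):

```python
g = 32.0    # ft/s^2, magnitude of gravitational acceleration
v0 = 64.0   # ft/s, initial upward speed (illustrative value)
a = -g      # upward is positive, so gravity contributes a negative acceleration

def velocity(t):
    return v0 + a * t                 # v(t) = v0 - g*t

def height(t):
    return v0 * t + 0.5 * a * t * t   # y(t) = v0*t - (1/2)*g*t^2

# The ball rises while v > 0, peaks when v = 0 (at t = v0/g = 2 s),
# and returns to the ground at t = 2*v0/g = 4 s.
```

The single negative sign on $a$ reproduces the whole story: deceleration on the way up, the peak, and the fall back down.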
You can model this on a linear one-dimensional space, the "height" $x$ of the particle. Different forces are acting on the ball: on the one hand gravitation, which is directed toward the ground; on the other hand, in the beginning a force is applied toward the top as the ball is thrown in that direction. You can derive the equations of motion through Newton's formula $F(t)=m\ddot{x}(t)$ and the superposition of forces; the resulting force is the sum of the individual forces.
I agree crankk that we can model the problem on 1D space. However, I think that the only force that acts on the object is just g. Is it right? Thanks
Yes okay it is probably more convenient to model the initial force through initial conditions, then for all positive $t$ the only force acting is the gravitation.
Yes @crankk however, I am still puzzled why $a(t)$ must be negative.
Note that forces etc. are vectors: $\vec{F}=m\vec{a}$. The acceleration has not only a magnitude but a direction as well (it is the second derivative of the height, or the first derivative of the velocity). The force clearly acts towards the ground, hence in the negative direction.
I'm downvoting this because it doesn't even attempt to answer the question.
Acceleration is defined as the derivative of the velocity, i.e. $a(t)=\frac{dv(t)}{dt}$. When the ball is going upward, the speed of the ball decreases and thus the acceleration is negative.
thank you for your answer. However, acceleration is the change in velocity no just velocity over change in time right? Correct me if I am wrong. Thanks again
I am sorry, but I could not completely understand your question; still, I would like to add my comments. Yes, the acceleration is the change of velocity. Due to the gravitational force, the speed of the rising object decreases, which means that the acceleration is negative. Actually, as pointed out by @Crazy, it is a convention: we can say that the acceleration is 10 m/s² downwards or -10 m/s² upwards.
The acceleration due to gravity is always downward. The convention is so.
Case I:Downward is the positive direction.
Let us examine the case where the ball is falling vertically from the sky. Normally, we take the direction of the initial motion of the ball to be positive; this is common in physics. Since the ball is accelerating downward due to gravity, and this occurs in the direction of the initial motion, the acceleration is positive, because we have chosen downward as the positive direction. Now suppose an air resistance force acts on the ball. Air resistance opposes the motion of the object, so the air resistance must be negative!
The differential equation for an object undergoing acceleration can be modeled as follows (we assume that the mass is constant):
$$m\frac{dv}{dt}=F_R$$
Where $F_R$ is a resultant force.
$$m\frac{dv}{dt}=F+kv$$
In our case
$F$ will be the $mg$
$k$ is the proportionality of the force
$kv$, or sometimes $kv^2$, is used to model the resistance acting on the object. We know that the resistance opposes the direction of motion of the object, so we write
$$m\frac{dv}{dt}=F-kv$$
Case II Upward as the positive direction!
Again consider the object falling downwards. The weight of the object acts downwards, so we model our differential equation as
$$m\frac{dv}{dt}=-F+kv$$
What do we mean by negative and positive acceleration? Accelerations are vectors, and vectors are made up of magnitude and direction!
Can you see why Case II is negative compared to Case I?
I always think of $$ma=\text{(upward forces)}+\text{(downward forces)}$$
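The sign bookkeeping in Case I can be checked numerically. The sketch below (illustrative values m = 1 kg, g = 10 m/s², k = 0.5 kg/s, not from the original post) integrates $m\,dv/dt = mg - kv$ with downward positive and $v$ the speed of a falling object, and confirms the speed settles at the terminal value $mg/k$:

```python
# Case I: downward positive, object released from rest.
# m dv/dt = m*g - k*v  ->  terminal speed v_t = m*g/k
m, g, k = 1.0, 10.0, 0.5   # illustrative values only
dt = 0.001                 # time step for forward-Euler integration
v = 0.0                    # initial speed (released from rest)

for _ in range(40000):     # integrate 40 s, many time constants m/k = 2 s
    a = (m * g - k * v) / m
    v += a * dt

v_terminal = m * g / k
print(v, v_terminal)       # v ends within 1e-6 of v_terminal = 20.0
```

If you repeat this with the Case II signs (upward positive, $m\,dv/dt = -mg + kv$ with $v$ the speed), the speed converges to the same magnitude; only the assigned direction flips.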
I thought the questioner asked for more explanation?
I hope this makes things clearer. I think the OP doesn't quite get the positive and negative force directions in the equation.
|
STACK_EXCHANGE
|
using Microsoft.Xna.Framework.Graphics;
using Nez.Sprites;
using Nez.Textures;
namespace Nez.Samples
{
public class Cyclone : Component, IUpdatable
{
Collider _collider;
private string owner;
private int destroy = 300; // lifetime countdown in update ticks before self-destructing
public Cyclone(string name)
{
owner = name;
}
public override void OnAddedToEntity()
{
_collider = Entity.GetComponent<Collider>();
Debug.WarnIf(_collider == null, "Cyclone has no Collider. Cyclone requires a Collider!");
}
public override void OnRemovedFromEntity()
{
}
void IUpdatable.Update()
{
CollisionResult collisionResult;
var platformerScene = Entity.Scene as PlatformerScene;
// fetch anything that we might collide with at our new position
var neighbors = Physics.BoxcastBroadphaseExcludingSelf(_collider, _collider.CollidesWithLayers);
foreach (var neighbor in neighbors)
{
// if the neighbor collider is of the same entity, ignore it
if (neighbor.Entity == Entity)
{
continue;
}
if (_collider.CollidesWith(neighbor, out collisionResult))
{
if (neighbor.Entity.Name.Equals("projectile"))
{
string name = "owner";
if (neighbor.Entity.GetComponent<BulletProjectileController>() != null)
{
name = neighbor.Entity.GetComponent<BulletProjectileController>().Name;
}
else if (neighbor.Entity.GetComponent<BouncingBullet>() != null)
{
name = neighbor.Entity.GetComponent<BouncingBullet>().Name;
}
if (!owner.Equals(name))
{
platformerScene.soundEffects[4].CreateInstance().Play();
Entity.Destroy();
neighbor.Entity.Destroy();
}
}
}
}
// Core.Schedule(3f, timer => destroy = true);
destroy -= 1;
if (destroy <= 0)
{
platformerScene.soundEffects[4].CreateInstance().Play();
Entity.Destroy();
}
}
}
}
|
STACK_EDU
|
How to use PUNIAS?
PUNIAS can be used directly to analyze experimental data from an Agilent Technologies, Bruker / Veeco, Asylum Research, JPK, NT-MDT or Novascan facility.
In the case of data from an Asylum Research facility generated by using the Save to Memory option, these files are quite big, since they can contain the information of several hundred trials; the data therefore need first to be exported as an ASCII file by means of an export macro.
The first time you launch the software, the conditions-of-use window is displayed:
You need to read the conditions of use and agree with them in order to move further and be able to use the software.
If you accidentally click the "refuse" button, you won’t be able to use the software anymore. When you try to launch the software, the following window will be displayed:
In order to fix this problem, you will need to locate the configuration file for PUNIAS, named "punias.ini", which has been saved in the software folder, and erase it.
This file contains all the configuration data the program needs (type of analysis, type of zoom, display of raw curves, display of markers…), and each time the user changes one of these configuration options through the software, this file is rewritten, saving the new options. Such a file has the advantage of preserving the configuration the user prefers, so that he doesn't need to, for example, choose his usual type of analysis each time he starts up the software.
Once the configuration file has been erased, you will again be able to go through the conditions-of-use window.
After accepting the conditions of use, the user interface will look like this:
You then need to choose which kind of analysis you want to perform:
PUNIAS will then open (one after the other) all the curves to be analyzed. After plotting the curve, as force versus extension or deformation, the computer determines the best possible baseline for the corresponding curve, and adds it to the graph. In the same way, the unfolding peaks or markers for the different calculations are determined by an algorithm and plotted on the graph. With all of these results being plotted on the graph along with the experimental curve, the user can very quickly visualize the results the computer finds, and validate them by saving the results and moving to the next file.
In case the user erroneously proceeds to the next file, he or she may go back (maximum 5 files) and re-analyze them by using the Home key. In case the user disagrees with the results of the algorithm, it is possible for the user to place a different baseline. The choice of a new baseline will discard all the unfolding peaks or markers found previously and generate a new set of results corresponding to the parameters characterized by the value of the new baseline. Once the user agrees with the setting of the baseline, it is then possible to add or discard peaks or modify the position of the markers. All the manual interactions with the software are done through the mouse and the keyboard using the arrow-keys and programmed keys.
A convenient way to obtain information on any part of the software is to use the contextual help:
The data is serialized immediately after each file is analyzed to avoid data loss.
A result data file (extension .pic) is generated, with the name of the file followed by different information depending on the type of analysis used.
All the data obtained at the end of the analysis can then be opened by any software that reads ASCII files, to plot histograms and perform further analysis.
Also, ASCII files can be created (if the Save ASCII Files option is activated) for each file opened, to simplify plotting the files with spreadsheet software. These files are saved in a folder named PlotableData and contain four columns (piezo position for extension, extension curve, piezo position for retraction, retraction curve), with the data expressed in nanometers. In the case of nano-indentation, additional ASCII files are saved in a folder named nanoIndent, containing four columns (deformation for extension and retraction in nanometers, and force in piconewtons).
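As a rough sketch of working with these exported files outside a spreadsheet (the exact delimiter and header layout of the PlotableData files are assumptions here, not stated in the manual), a four-column ASCII file can be loaded with a few lines of Python:

```python
# Sketch: read a four-column, whitespace-separated ASCII file such as those
# described for the PlotableData folder (piezo position / extension curve /
# piezo position / retraction curve, all in nanometers).
# Assumption: columns are whitespace-separated; non-numeric lines are headers.
def read_four_columns(path):
    cols = [[], [], [], []]
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip blank lines or headers with a different shape
            try:
                values = [float(p) for p in parts]
            except ValueError:
                continue  # skip non-numeric header rows
            for col, value in zip(cols, values):
                col.append(value)
    return cols
```

The returned lists can then be passed directly to a plotting library to overlay the extension and retraction curves.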
For any questions, suggestions, corrections, and requests for information please send a mail to: firstname.lastname@example.org.
|
OPCFW_CODE
|
How to save a variable on the stack or in memory
I am working to enable and disable a cheat for some game with a press of a button and got it working at some point.
The problem I am facing is that while the button is kept pressed the cheat works fine, but once I let go of that button the cheat is no longer active.
The question is how to make it save the value of 0x02 once the button is pressed.
Code Cave address 0x015F3800
015F3800 push r8
015F3802 movabs r8, 0x20219d1c4 <-- Buttons Address; movabs moves the 64-bit absolute address into r8
015F380C mov r8,[r8]
015F380F cmp r8, 0x02 <-- r8 = 0 if nothing is pressed and when L3 is pressed r8 = 2
015F3813 jnz 0x015F3823 <-- ZF is set to 1 if L3 is pressed (and once I get hit, HP goes to full); otherwise it is clear (0)
015F3815 mov [rbx+0x1e68], ax <-- AX has max HP, modified code
015F381C pop r8
015F381E jmp 0xA161BB <-- return address; if L3 is pressed the rest of the code below will be ignored
015F3823 mov [rbx+0x1e68], cx <-- Original code; if L3 is not pressed the code continues as normal
015F382A pop r8
015F382C jmp 0xA161BB <-- return address
I'm not sure if this is the right place to post my question, so please feel free to delete it if it violates your forum rules
I have been learning amd64 assembly for the last few months until I got stuck.
off-topic: If you used rax, you could use movabs rax, [0x20219d1c4] for the mov rax, moffs64 load encoding with a 64-bit absolute address. https://www.felixcloutier.com/x86/MOV.html. Also you could pop after the cmp, before the jnz
Not clear what you're asking. Does this code run every game update cycle? If you need something to survive between runs of this code to record if the cheat button has ever been pressed, you need some static storage somewhere. You can't use the stack for the same reason a non-static local variable in C can't record anything across function calls. Some storage in a page near your code-cave, using a RIP-relative addressing mode, would make sense if I'm understanding you correctly. Don't put it within 2k of your code, that will cause self-modifying-code pipeline nukes.
@Peter Cordes It's just a button toggle to turn things on and off. I also tried some of what you mentioned in your comment, like movabs rax, [0x20219d1c4]; it looks totally legit, but the debugger I am using doesn't like it, I get a crash, and it changed it to mov eax, [0x20219d1c4]. That makes me wonder why it accepted my code in the first place, unless my code is a big mess.
I also get a crash when I try the pop after the cmp, before the jnz, but after the jnz it works fine. I have managed to save a byte somewhere in the eboot once the player-HP functions get executed; it may help later. I never imagined it was going to be that hard and quite challenging.
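Peter Cordes' point about static storage versus the stack can be illustrated in any language. This Python sketch (hypothetical names, not from the game code) mimics a per-frame update routine: a local variable is recreated on every call, like a stack slot, so it forgets the latch; module-level ("static") storage remembers that the button was ever pressed:

```python
# "Static" storage survives between calls; a local (stack-like) variable does not.
cheat_latched = False  # module-level state, analogous to a byte of static storage
                       # reachable from the code cave via a RIP-relative address

def update_with_local(button_pressed):
    latched = False           # recreated every call, like a stack slot
    if button_pressed:
        latched = True
    return latched            # True only on frames where the button is held

def update_with_static(button_pressed):
    global cheat_latched
    if button_pressed:
        cheat_latched = True  # written once, persists across all later calls
    return cheat_latched      # stays True after the button was pressed once
```

Simulating four frames where the button is held only on the second one, the local version reports the cheat active only on that frame, while the static version keeps it latched on from then on.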
|
STACK_EXCHANGE
|
Multiple variables are separated by commas. Data type of the value returned by the Function procedure; may be Byte, Boolean, Integer, Long, Currency, Single, Double, Decimal (not currently supported), Date, String (except fixed-length), Object, Variant, or any user-defined type. Any group of statements to be executed within the Function procedure. Return value of the Function. Indicates that an argument is not required.
If used, all subsequent arguments in arglist must also be optional and declared by using the Optional keyword. Optional can't be used for any argument if ParamArray is used. ByVal Optional. Indicates that the argument is passed by value. ByRef Optional. Indicates that the argument is passed by reference.
ByRef is the default in Visual Basic. ParamArray Optional. Used only as the last argument in arglist to indicate that the final argument is an Optional array of Variant elements. The ParamArray keyword allows you to provide an arbitrary number of arguments. Name of the variable representing the argument; follows standard variable naming conventions.
Data type of the argument passed to the procedure; may be Byte, Boolean, Integer, Long, Currency, Single, Double, Decimal (not currently supported), Date, String (variable-length only), Object, Variant, or a specific object type. If the parameter is not Optional, a user-defined type may also be specified. Any constant or constant expression.
Valid for Optional parameters only. If the type is an Object, an explicit default value can only be Nothing. Remarks: If not explicitly specified by using Public, Private, or Friend, Function procedures are public by default. If Static isn't used, the value of local variables is not preserved between calls. The Friend keyword can only be used in class modules.

This method is based on a curious behaviour of XL97: assume that an open workbook contains a function called "Myfunc".
But if you call it with the function wizard, it displays the information of After further experimenting, I've noticed also that you must declare the VBA functions "Private" in order to remove them from the default "user-defined" category; otherwise, they would appear twice in the wizard. This method is just a funny work-around. However, if the alias name e. For example, you cannot name the function ABC, as it also refers to a cell in an Excel worksheet.
If you do this, Excel would give preference to the in-built function. You can use an underscore if you want to separate words. The function name is followed by some arguments in parenthesis. These are the arguments that our function would need from the user.
In our example, there is only one argument — CellRef. If you have more than one arguments, you can specify those in the same parenthesis — separated by a comma. We will see later in this tutorial on how to use multiple arguments in a user-defined function.
This would tell VBA that the result of the formula would be of the String data type. While I can use a numeric data type here such as Long or Double , doing that would limit the range of numbers it can return. Hence I have kept the function output data type as String.
The second line of the code — the one in green that starts with an apostrophe — is a comment. When reading the code, VBA ignores this line.
You can use this to add a description or a detail about the code. This is the variable where we store the value of the length of the string that is being analyzed by the formula. The fourth line declares the variable Result as a String data type. This is the variable where we will extract the numbers from the alphanumeric string. Sixth, seventh, and eighth lines are the part of the For Next loop.
The loop runs for as many times as many characters are there in the input argument. The second last line of the code assigns the value of the result to the function. The last line of the code is End Function. This is a mandatory line of code that tells VBA that the function code ends here. The above code explains the different parts of a typical custom function created in VBA. In the following sections, we will deep dive into these elements and also see the different ways to execute the VBA function in Excel.
Arguments in a User Defined Function in VBA In the above examples, where we created a user-defined function to get the numeric part from an alphanumeric string GetNumeric , the function was designed to take one single argument. In this section, I will cover how to create functions that take no argument to the ones that take multiple arguments required as well as optional arguments. These functions are not dependent on any input arguments. You can create such similar function in VBA as well. Below is the code that will give you the name of the file.
If the file has been saved, it returns the name with the file extension; else it simply gives the name. The above has one issue though. Normally, a function refreshes whenever there is a change in the input arguments. To make the formula recalculate whenever there is a change in the worksheet, you need to add a line of code to it.
Name End Function Now, if you change the workbook name, this function would update whenever there is any change in the worksheet or when you reopen this workbook. Creating a Function in VBA with One Argument In one of the sections above, we have already seen how to create a function that takes only one argument the GetNumeric function covered above. Function created with the below code would convert the referenced text into uppercase. Now we already have a function for it in Excel, and this function is just to show you how it works.
It then assigns the value to the function ConvertToUpperCase. Volatile part here. As soon as the argument changes, the function would automatically update.
It takes two arguments — the cell reference that has the text string, and the delimiter. Note that for each argument, you can specify a data type. When you use the above function in the worksheet, you need to give the cell reference that has the text as the first argument and the delimiter character s in double quotes as the second argument. This position is then used to extract all the characters before the delimiter using the LEFT function.
Finally, it assigns the result to the function. This formula is far from perfect. For example, if you enter a delimiter that is not found in the text, it would give an error.
If you enter the text from which you want to extract the part before the delimiter directly into the function, it would give you an error. Go ahead, try it. If you want the function to have the flexibility to accept direct text input or cell references from the user, you need to remove the data type declaration. This ends up making the argument a Variant data type, which can take any type of argument and process it.
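The extract-before-the-delimiter logic described above can be sketched language-agnostically; here it is in Python rather than VBA (the VBA version uses InStr and LEFT, for which str.find and slicing are the equivalents), with the missing-delimiter case handled explicitly instead of raising an error, addressing the robustness concern above:

```python
# Sketch of the "text before the delimiter" helper described in the tutorial.
# Assumption: when the delimiter is absent we return the input unchanged,
# rather than erroring out like the fragile VBA version.
def text_before_delimiter(text, delimiter):
    pos = text.find(delimiter)   # equivalent of VBA's InStr (0-based here)
    if pos == -1:
        return text              # delimiter not found: no error, return as-is
    return text[:pos]            # equivalent of VBA's LEFT(text, pos)
```

The same fallback idea can be carried back into the VBA function by testing the result of InStr before calling LEFT.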
An optional argument, as the name suggests, is optional to specify. But optional arguments are not useless: they allow you to choose from a range of options. Remember that the optional arguments must always come after all the required arguments. As a general rule, the Insert Function dialog displays the description of the function that is currently selected in the Select a function list box. If you are using Excel or 97 you won't notice much difference, and I'll mention anything that's significant. Advantage 2: You can, slightly, simplify your formulas whenever you're using VBA functions that aren't stored in the same Excel workbook you're working in. To save the user having to enter two arguments, we could let them enter just the Length and have the function use that value twice. Instead, you can use the Array function in VBA. There's not an application. I may cover conditional formatting in future Excel tutorials. Friend Optional. Name of the Function; follows standard variable naming conventions. List of variables representing arguments that are passed to the Function procedure. This applies even if, as in the case above, you're assigning a description for a single argument.
If it's not, you can always open a user form and go to town. A Private scope means that the function is available only in the module in which it exists. When you use the above function in the worksheet, you need to give the cell reference that has the text as the first argument and the delimiter character(s) in double quotes as the second argument. The above steps would insert the function in the worksheet. Here in the UK fuel is sold in litres.
Step 3: The Squared function receives the argument of If you use an earlier version of Excel, argument descriptions are not displayed.
If your procedure refers to an undeclared variable that has the same name as another procedure, constant, or variable, it is assumed that your procedure refers to that module-level name. This word tells VBA that our code is a function and not a subroutine. A Function procedure is VBA code that performs calculations and returns a value or an array of values. Public is the default scope of VBA functions. Method 2: Using The Application.Run method; this is the macro to run Squared in the case above.
Go to the File tab and click on Save As. You can make this more robust by taking only specific values as arguments and showing an error in the rest of the cases, as shown in the below code.
As I understand the method, each UDF has to be "registered over" a different function from the library.
This is a mandatory line of code that tells VBA that the function code ends here. Find out how to do this in the tutorial Build an Excel Add-In. Once you've entered the appropriate name, click on the Options button located on the right side of the Macro dialog. This may change in the future.
Alternatively you can create your own add-in containing one or more functions that you can install into Excel just like a commercial add-in. The main exception to this rule are VBA functions that are defined as Private. Scenario 2: When declaring arguments for an Excel VBA Function, you can also use the As keyword and Type statement for purposes of determining its data type. You can also use the variant data type here.
You need to place the below code in a module in the VB Editor.
Using the Private keyword is useful when you create VBA Function procedures that aren't designed to be used in formulas. But I don't want to see a number, I want to see "Thursday". Functions are organized in different categories, such as Financial, Text and Logical.
Indicates that the Function procedure's local variables are preserved between calls.
|
OPCFW_CODE
|
Created attachment 608259 [details]
spec file patch
OLPC doesn't ship linux-firmware at the moment. This is because of its ever-growing size, the fact that we run on a small disk, and the fact that none of the hardware that we expect to be used with our platform requires something from linux-firmware.
The one exception there is the wifi card, but we have other packages for those firmwares (libertas-usb8388-firmware, libertas-sd8686-firmware).
It's time to revisit this now that we are adding a 3rd generation of marvell wifi hardware into the mix, the sd8787 on the upcoming XO-4 laptop. I presume that adding a new libertas-sd8787-firmware package would be frowned upon.
I read the discussion in bug #828050 and I'm encouraged by the acceptance of subpackages. For F18 I propose that we do something similar: we convert the libertas-usb8388-firmware and libertas-sd8686-firmware packages into subpackages of linux-firmware, orphan those old packages, and we add a new subpackage for the sd8787.
How does this sound? Here is a spec file patch that implements it.
I would actually like to see the linux-firmware package broken down into some general smaller chunks, as it's getting quite sizable and there's a lot in it that is unrelated to the platforms it's used on.
Not sure the exact best way to break it down but I was thinking somewhere along the lines of:
-storage (contains FCP/FCoE/iSCSI/SCSI HBA)
-wifi (this is pretty large, some are also single platform)
-network (some wired eth need firmware such as Broadcom chips)
-arm (there's some wifi for example that is ARM only)
-usb (could be used on all platforms)
There could be sub-packages of some of the above too, for single wifi drivers, or possibly a further breakdown of something like storage into storage-iscsi etc.
You could then have linux-firmware as an overall meta package that will provide the same functionality to pull all firmware in so there's no random breakage similar to what xorg-x11-drivers provides.
I'm really not thrilled at all with splitting linux-firmware up into a larger number of sub-categories. In theory it sounds good, but it leads to a couple of complications:
1) All the firmware will still be installed by default via comps or whatever because if we don't do that people's machines/devices will just not work.
2) It makes the spec ugly for little gain.
About the only win it gets us is that people can uninstall various subpackages if they know their machine doesn't need it, or they can use kickstart to only install specific sets.
The comparison to xorg is misleading, because xorg-x11-drivers is purely a meta-package that pulls in other actual packages, not subpackages. That avoids the complication of #2 above.
The only reason iwlwifi was absorbed as subpackages was because they were already existing packages and someone complained when I tried to just consume them entirely.
Let me think about this a bit more. Feel free to add further convincing arguments if you have them.
I agree with your concerns about the sub-category idea.
But going back to the issue in question, what OLPC is facing here is very similar to the iwlwifi case. So please consider the original request independently.
(In reply to comment #3)
> But going back to the issue in question, what OLPC is facing here is very
> similar to the iwlwifi case. So please consider the original request
Applied to F18/master. You'll need to EOL the separate packages once this is available in the F18 repos.
Apologies for the delay here.
linux-firmware-20120720-0.3.git7560108.fc18 has been submitted as an update for Fedora 18.
* should fix your issue,
* was pushed to the Fedora 18 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing linux-firmware-20120720-0.3.git7560108.fc18'
as soon as you are able to, then reboot.
Please go to the following url:
then log in and leave karma (feedback).
linux-firmware-20120925-0.1.git236367d.fc18 has been submitted as an update for Fedora 18.
|
OPCFW_CODE
|
I am developing a GUI where the user is supposed to connect to a WiFi network.
At one point, the user is shown all the available networks in a "Vertical List" and he can choose one of them to connect to. This list is supposed to be refreshed once in a while. I have implemented a timer which calls the function "updateAwailableNetworks()". This function is supposed to save the information about the available networks in 3 arrays: one for the names, one for the signal strengths and one for whether a password is needed or not. All the information about the networks is meant to be saved in the global variable called "WiFiNetworks", where there is also information about the current network and other things.
Before integrating it into the platform, I have built dummy functions which update the information about the available networks. When there is an update, I set the new properties of the vertical list, such as NoOfItems, and force a reload by invoking VerticalList.InvalidateItems().
The problem is that when I scroll the list up and down, some items disappear. Some are replaced by the default "WiFi Name". I do not understand why this happens.
After debugging, I think there is a problem with the global variable, which does not seem to update itself in all the places where I call it. However, I do not know how to solve this issue.
Attached you find the project.
Thank you very much for the support.
the problem is caused by the automatically reclaimed autoobject Settings_WiFi::WiFiNetworks. Thus the data stored in the autoobject is lost. This is expected behavior: if an autoobject is not needed (referenced) anymore by other objects, Embedded Wizard releases the autoobject automatically, thus saving memory. Please see the documentation Lifetime of an autoobject.
To solve the issue, add a variable to your Application::Application class, declare the variable with a type corresponding to the class of the autoobject (or even with object) and initialize the variable with the autoobject. The variable will prevent the autoobject from being discarded - it will retain a permanent reference to the autoobject. The following screenshot demonstrates the modification within your application:
I hope it helps you further.
Thanks for the reply. It seems to solve the issue.
However, I have now another issue.
I have implemented three main screens, called channels, commands and settings. I have implemented a navigation using Dialogs with horizontal swipe between these main screens. I want to navigate also within these screens. For example: when I am at Settings I can go into the specific settings.
For this purpose I have used Modals. So, if you press "WiFi settings" a modal is started. The reason why I chose Modals is that when I implemented it using Dialogs, it somehow did not disable the screen in the back and I could still interact with the previous screens (the Settings screen in this case). I could press "WiFi settings" many times, even though I was already inside "WiFi settings".
My main problem now is that when I am inside the WiFi settings, I can still swipe left and go to the "Commands" screen. This should not be allowed. So I have 2 questions:
1. How do I make sure that I can not swipe when I am inside the WiFi settings?
2. Is it preferable to use modals or dialogs? (In case of Dialogs, how do I make sure that the user can not interact with the previous screens?)
Thank you very much.
Attached you find the project.https://ask.embedded-wizard.de/?qa=blob&qa_blobid=4145558495861140828
the concept of a modal state is in fact older. Compared with it, the Dialogs offer more features and are more flexible. Therefore I would recommend using Dialogs in new projects. On the other hand, managing the modal state is simpler, so it may be better for very simple application cases.
Since both approaches are different, mixing them will not work as expected. If you already have experience with Dialogs, I would use this technique only and ignore the modal state completely. Concerning your issue, please try the following:
1. Present the Settings_Wifi component as a dialog. Don't use Add() and BeginModal(). For this purpose modify the method Settings::Settings.onWiFiPress as follows:
2. In the method Settings_WiFi::Settings_WiFi.onBack adapt the code to dismiss the dialog in the correct context:
The problem in your preceding implementation was that you tried to present the WiFi dialog in the context of the Settings dialog. In this way, the Settings dialog was still able to handle user inputs. The chapter Take a closer look at the Dialog functionality explains the special behavior of nested dialogs. Since you don't need such sophisticated behavior, just present all dialogs within the root object (GetRoot() is used here).
Also please note the additional IsCurrentDialog() if-conditions. These protect the application from possible race conditions. They are not obligatory. I recommend them :o) Please see: Identify the active Dialogs and avoid race conditions.
I need your advice once more, since the project is now getting a bit more sophisticated.

Above I said that I had trouble with the dialogs not deactivating the other dialogs below them, and you said that I need to present a dialog using GetRoot(), so that it is presented within the root object. And it indeed solved the issue.

Now I have the same project, but I have arranged it a bit differently, since I need some things which should always be on top of everything else, such as a status bar. What I did was divide my UI components into Level_0 and Level_1. Level_0 is on top of Level_1. I need to switch dialogs if I swipe in the Level_1 component. Then, within Level_1, I have the Settings section. When I press the WiFi button it should present a new Dialog, which should disable the Level_0 and Level_1 components from taking user inputs. Following your suggestion, I present the Settings_WiFi dialog under the root context. However, I can still interact with the things which are below it. I have attached the project and have made the Settings_WiFi background transparent so that you can see when the interaction with the screens below takes place.
How do I solve this issue?
Step 1: In the method Settings::Settings.onWiFiPress replace GetRoot() by Owner. In this way the new dialog will be presented in the context of the same owner component as the actual Dialog. The result is thus similar to presenting all dialogs in the context of the root object:
Step 2: In the method Settings_WiFi::Settings_WiFi.onBack also change GetRoot() to Owner. Also note, I have commented out the IsCurrentDialog() condition. I address this again in Step 3:
Step 3: The condition IsCurrentDialog() does not work anymore. To recognize a dialog as the active one, the dialog and all its owners have to be dialogs too. The Level_1 component, however, is not presented as a Dialog. Trying to present it as a Dialog would, however, put it over the Level_0 component. What can you do?
Option 1: Instead of IsCurrentDialog() use IsActiveDialog( false ). This tests the status of the dialog in a less global context. As long as you present all dialogs within Level_1, it should avoid possible race conditions (if any). For example:
Option 2: Don't use IsCurrentDialog() and avoid race conditions in another way.
Option 3: Present the GUI component Level_1 by using PresentDialog(). Then, however, Level_1 will overlap the Level_0 component. See your other thread Is it possible to set up a view that will always come on top of everything (like a status bar)? concerning the possible approaches of how to retain a GUI component on top of other components.
Thanks for the reply. It seems to solve the issue. I need to make sóme adjustments, but the first trial I made looks good.
There is one other thing which I have noticed, and it is a little strange. As you have seen in the project, there is a keyboard icon on the bottom right which is supposed to open and close the keyboard. I have implemented a short animation and it looks good.
The behavior I want is that when the keyboard is opened, the part of the screen behind the keyboard has to stop getting user inputs. In order to achieve that, I set up a touch handler as the bottom-most object of the keyboard component, which is supposed to do nothing but prevent any touch events from penetrating through the keyboard. And it indeed was OK, until I by chance did a double click right above the keyboard, which is indeed the area of the keyboard component. You can see this in the picture attached.
I have set the thouch handler of the keyboard so that it does not retarget anything, however, the double click does this strange behavior.
Do you know where this comes from? Is there another way to disable the user inputs on the back of a certain component?
I suppose the handler in the background is configured to react to single taps only. As soon as you double tap on it, the handler resigns and other handlers willing to react to the event will process it. Please see the section Configure the filter condition, and note especially the usage of the two properties MinStrikeCount and MaxStrikeCount. Set MaxStrikeCount to a value greater than the maximum number of expected taps; if you set it to 10, the handler will process all tap sequences with 1..10 taps.
Another possible approach is to present the keyboard as a Dialog too, and so prevent other dialogs from being able to react to the touch event.
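The MinStrikeCount/MaxStrikeCount idea can be sketched generically (plain Python for illustration only; this is not the Embedded Wizard API): a handler consumes a tap sequence only when its strike count falls inside the configured range, otherwise the event falls through to handlers behind it.

```python
# Illustrative model of strike-count filtering (not Embedded Wizard code).
class TapFilter:
    def __init__(self, min_strike_count=1, max_strike_count=1):
        self.min_strike_count = min_strike_count
        self.max_strike_count = max_strike_count

    def accepts(self, strike_count):
        # The handler consumes the sequence only inside its configured range.
        return self.min_strike_count <= strike_count <= self.max_strike_count

blocking = TapFilter()                 # default: single taps only
wide = TapFilter(max_strike_count=10)  # handles 1..10 taps

print(blocking.accepts(2))  # False -> a double tap falls through the keyboard
print(wide.accepts(2))      # True  -> the keyboard swallows the double tap
```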
# -*- coding: utf-8 -*-
import mock
import pytest
import staticconf.testing
from boto.exception import BotoServerError
from mycroft.backend.sqs_wrapper import SQSWrapper
from mycroft.models.aws_connections import get_boto_creds
from tests.data.mock_config import MOCK_CONFIG
""" Create a few fake objects (fake connection to SQS, fake SQS queue) for use
in tests below. Rely on duck typing for the code under test to call these
methods.
"""
# TODO: Update this test to use an inmemory SQS rather than rely on duck typing
# Do away with all Fake* classes below.
class FakeConn(object):
    queue = None

    def set_queue(self, queue):
        self.queue = queue

    def get_queue(self, queue_name):
        return self.queue


class FakeQueue(object):
    id = "a"
    msgs = None
    exception = False

    def set_messages(self, msgs):
        self.msgs = msgs

    def get_messages(self, num_messages, wait_time_seconds):
        if self.exception:
            raise BotoServerError(503, "test")
        return [] if self.msgs is None else self.msgs

    def set_message_class(self, class_type):
        self.class_type = class_type

    def delete_message(self, msg):
        self.deleted_msg = msg

    def delete_message_batch(self, msgs):
        self.deleted_msgs = msgs

    def clear(self):
        self.msgs = None

    def get_attributes(self):
        return {'MessageRetentionPeriod': 4 * 24 * 3600}


@pytest.yield_fixture
def get_mock_boto():
    with staticconf.testing.MockConfiguration(MOCK_CONFIG):
        with mock.patch('boto.sqs.connect_to_region') as mock_boto:
            yield mock_boto


def get_test_creds():
    with staticconf.testing.MockConfiguration(MOCK_CONFIG):
        return get_boto_creds()


def test_constructor_throws_on_queue_being_none(get_mock_boto):
    mock_obj = get_mock_boto
    fake_conn = FakeConn()
    mock_obj.return_value = fake_conn
    with pytest.raises(ValueError):
        SQSWrapper("test")
    mock_obj.assert_called_once_with('us-west-2', **get_test_creds())


def test_set_class_type_works_fine(get_mock_boto):
    mock_obj = get_mock_boto
    fake_conn = FakeConn()
    fake_conn.queue = FakeQueue()
    mock_obj.return_value = fake_conn
    SQSWrapper("test", object)
    mock_obj.assert_called_once_with('us-west-2', **get_test_creds())
    assert fake_conn.queue.class_type == object


def test_init_variables_initialized_from_config_correctly(get_mock_boto):
    mock_obj = get_mock_boto
    fake_conn = FakeConn()
    fake_queue = FakeQueue()
    fake_conn.set_queue(fake_queue)
    mock_obj.return_value = fake_conn
    sqs = SQSWrapper("testq", None)
    assert sqs._num_messages_to_fetch == 2, \
        "expected num_msgs_per_fetch to be 2 from config"
    assert sqs._wait_time_secs == 20, \
        "expected wait_time_secs to be 20 from config"
    assert sqs._queue is not None, \
        "queue should not be None at end of constructor"
    mock_obj.assert_called_once_with('us-west-2', **get_test_creds())


def test_get_messages_successful_case_no_errors(get_mock_boto):
    fake_conn = FakeConn()
    fake_queue = FakeQueue()
    fake_conn.set_queue(fake_queue)
    fake_queue.set_messages([object()])
    mock_obj = get_mock_boto
    mock_obj.return_value = fake_conn
    sqs = SQSWrapper("some-queue", None)
    ret = sqs.get_messages_from_queue()
    assert len(ret) == 1, "Expected 1 message from sqs queue"
    mock_obj.assert_called_once_with('us-west-2', **get_test_creds())


def test_get_messages_exception_thrown_out(get_mock_boto):
    fake_conn = FakeConn()
    fake_queue = FakeQueue()
    fake_conn.set_queue(fake_queue)
    fake_queue.exception = True  # set it to throw boto error
    mock_obj = get_mock_boto
    mock_obj.return_value = fake_conn
    sqs = SQSWrapper("some-queue", None)
    with pytest.raises(BotoServerError):
        sqs.get_messages_from_queue()
    mock_obj.assert_called_once_with('us-west-2', **get_test_creds())


def test_delete_msg_no_exceptions(get_mock_boto):
    fake_conn = FakeConn()
    fake_queue = FakeQueue()
    fake_conn.set_queue(fake_queue)
    mock_obj = get_mock_boto
    mock_obj.return_value = fake_conn
    sqs = SQSWrapper("some-queue")
    test_msg = object()
    sqs.delete_message_from_queue(test_msg)
    mock_obj.assert_called_once_with('us-west-2', **get_test_creds())
    assert fake_queue.deleted_msg == test_msg


def test_delete_msg_batch(get_mock_boto):
    fake_conn = FakeConn()
    fake_queue = FakeQueue()
    fake_conn.set_queue(fake_queue)
    mock_obj = get_mock_boto
    mock_obj.return_value = fake_conn
    sqs = SQSWrapper("some-queue")
    test_msgs = [object()]
    sqs.delete_message_batch_from_queue(test_msgs)
    mock_obj.assert_called_once_with('us-west-2', **get_test_creds())
    assert fake_queue.deleted_msgs == test_msgs


def test_get_name(get_mock_boto):
    fake_conn = FakeConn()
    fake_queue = FakeQueue()
    fake_conn.set_queue(fake_queue)
    mock_obj = get_mock_boto
    mock_obj.return_value = fake_conn
    sqs = SQSWrapper("some-queue")
    assert sqs.get_queue_name() == "some-queue"


def test_clear(get_mock_boto):
    fake_conn = FakeConn()
    fake_queue = FakeQueue()
    fake_queue.set_messages([object()])
    fake_conn.set_queue(fake_queue)
    mock_obj = get_mock_boto
    mock_obj.return_value = fake_conn
    sqs = SQSWrapper("some-queue")
    sqs.clear()
    assert fake_queue.msgs is None
What is SQL Server Express used for?
SQL Server Express is the free edition of Microsoft’s acclaimed SQL Server relational database management system. It is used for small-scale applications and development, and is ideal for desktop, web, and mobile applications. While it does have some restrictions and limitations, it is the exact same database engine and contains many of the features that are included in the paid-for editions.
Microsoft’s SQL Server Express grew out of MSDE, the original Microsoft SQL Server Data Engine (aka Microsoft Desktop Engine). Microsoft wanted an alternative to Microsoft Access that would give software developers and vendors a defined upgrade path to the paid-for SQL Server Standard and Enterprise editions.
Who uses Microsoft SQL Server Express?
SQL Server Express is generally aimed at applications used by small businesses; these may be standard desktop applications, web applications, or data sources for mobile apps. It doesn’t include some of the more advanced features enjoyed by SQL Server Standard or SQL Server Enterprise, but the average small business doesn’t have the luxury of a SQL Server database administrator (DBA), or a developer who would use these features, anyway.
Many independent software vendors (ISVs) will embed SQL Server Express with their software, as it is also free to distribute. In fact, Microsoft has created an even more lightweight version called SQL Server Express LocalDB, targeted at developers and ISVs, which runs “in-process” with applications rather than as a separate “service”.
SQL Server Express is also seen as a good starting point for people who want to learn about SQL Server.
SQL Server Express has some deliberate limitations and restrictions. It is limited to a maximum database size of 10GB and can only use up to 1GB of memory and the lesser of one CPU socket or four CPU cores. However, for many small business applications these numbers are more than sufficient. If the business grows to the point where more resources are required, then the paid-for edition, SQL Server Standard, will be a good and necessary business investment. Please see this article for more details of the limitations of SQL Server Express.
Can applications that use SQL Server Express be used by multiple people?
Desktop applications can connect to SQL Server Express databases saved on another computer. However, every application is different and some applications that are designed to only be used by one person may not give options for changing the location of the database.
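As a sketch of what that looks like in practice (the machine, instance, and database names below are made up for illustration), an application that supports a remote database usually just points its connection string at the other computer's Express instance:

```python
# Hypothetical example: building a connection string for a SQL Server
# Express instance hosted on another machine. "OFFICE-PC", "SQLEXPRESS"
# and "OrdersDB" are invented names for illustration.
def express_conn_str(server, database, instance="SQLEXPRESS"):
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server}\\{instance};"
        f"DATABASE={database};"
        "Trusted_Connection=yes;"
    )

print(express_conn_str("OFFICE-PC", "OrdersDB"))
```

With a library such as pyodbc installed, `pyodbc.connect(express_conn_str(...))` would open the connection; many applications instead expose the server name in a settings dialog.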
Where the database can be shared, it is advisable to keep the SQL Server Express database somewhere that is resilient and secure, can be backed up, and is always available. Traditionally this may have been a “server” on the business’s premises, but increasingly businesses are looking to cloud-based solutions for their applications.
Can you use SQL Server Express “in the cloud”?
Cloud computing is fast becoming the norm for modern business applications, with many being built to run in the cloud either as web applications (built into a web site) or for mobile apps. However desktop applications really need to be “close” to their SQL Server Express database to perform properly. If the database is hosted in the cloud but the application remains on the desktop there is a good chance performance will be poor and there’s a risk of databases becoming corrupted.
The way round this is to run the desktop application in the cloud as well. This can easily be done by using a hosted remote desktop service (hosted desktop), previously known as terminal services. Here both the desktop application and the database sit on servers in the provider’s data centre and the users simply remote-control them. From the user’s point of view it will look and feel no different from running on their local PC.
Find out more
For a more in-depth look at hosting SQL Server Express based applications please read this article or simply contact one of the technical sales team at Your Office Anywhere for a no obligation chat about your needs.
""" Class for handling serialized backup/recovery requests. """
import logging
import json
import threading
import backup_exceptions
import backup_recovery_helper
import cassandra_backup
import zookeeper_backup
from backup_recovery_constants import StorageTypes
class BackupService():
  """ Backup service class. """

  # The key to use to figure out the type of request sent.
  REQUEST_TYPE_TAG = "type"

  # The key to use to lookup the backup name.
  BACKUP_NAME_TAG = "backup_name"

  # Google Cloud Storage bucket tag name.
  BUCKET_NAME_TAG = "bucket_name"

  # Google Cloud Storage object tag name.
  OBJECT_NAME_TAG = "object_name"

  # The storage infrastructure used for backups.
  STORAGE = 'storage'

  # AppScale components that can currently be backed up.
  SUPPORTED_COMPONENTS = ['cassandra', 'zookeeper']

  def __init__(self):
    """ Constructor function for the backup service. """
    log_format = logging.Formatter('%(asctime)s %(levelname)s %(filename)s: '
                                   '%(lineno)s %(message)s')
    logging.getLogger().handlers[0].setFormatter(log_format)
    self.__cassandra_backup_lock = threading.Lock()
    self.__zookeeper_backup_lock = threading.Lock()

  @classmethod
  def bad_request(cls, reason):
    """ Returns the default bad request json string.

    Args:
      reason: The reason the request is bad.
    Returns:
      The default message to return on a bad request.
    """
    return json.dumps({'success': False, 'reason': reason})

  def remote_request(self, request_data):
    """ Handles remote requests with serialized JSON.

    Args:
      request_data: A str, the serialized JSON request.
    Returns:
      A str, serialized JSON.
    """
    try:
      request = json.loads(request_data)
      logging.info("Request received: {0}".format(request))
    except (TypeError, ValueError) as error:
      logging.exception(error)
      return self.bad_request(
        "Unable to parse request. Exception: {0}".format(error))

    request_type = request[self.REQUEST_TYPE_TAG]
    if request_type.split('_')[0] not in self.SUPPORTED_COMPONENTS:
      return self.bad_request("Unsupported request type '{0}'".format(request))

    storage = request[self.STORAGE]
    if storage not in StorageTypes().get_storage_types():
      return self.bad_request("Unsupported storage type '{0}'".format(storage))

    path = request[self.OBJECT_NAME_TAG]
    if not path:
      return self.bad_request("Missing path argument.")

    if request_type == "cassandra_backup":
      return self.do_cassandra_backup(storage, path)
    elif request_type == "cassandra_restore":
      return self.do_cassandra_restore(storage, path)
    elif request_type == "zookeeper_backup":
      return self.do_zookeeper_backup(storage, path)
    elif request_type == "zookeeper_restore":
      return self.do_zookeeper_restore(storage, path)
    elif request_type == "app_backup":
      return self.do_app_backup(storage, path)
    elif request_type == "app_restore":
      return self.do_app_restore(storage, path)

  def do_app_backup(self, storage, path):
    """ Top level function for doing source code backups.

    Args:
      storage: A str, one of the StorageTypes class members.
      path: A str, the name of the backup file to be created.
    Returns:
      A JSON string to return to the client.
    """
    if not backup_recovery_helper.app_backup(storage, path):
      return self.bad_request("Source code backup failed!")
    logging.info("Successful source code backup!")
    return json.dumps({'success': True, 'reason': ''})

  def do_app_restore(self, storage, path):
    """ Top level function for restoring source code.

    Args:
      storage: A str, one of the StorageTypes class members.
      path: A str, the name of the backup file to be created.
    Returns:
      A JSON string to return to the client.
    """
    if not backup_recovery_helper.app_restore(storage, path):
      return self.bad_request("Source code restore failed!")
    logging.info("Successful source code restore!")
    return json.dumps({'success': True, 'reason': ''})

  def do_cassandra_backup(self, storage, path):
    """ Top level function for doing Cassandra backups.

    Args:
      storage: A str, one of the StorageTypes class members.
      path: A str, the name of the backup file to be created.
    Returns:
      A JSON string to return to the client.
    """
    success = True
    reason = "success"
    try:
      logging.info("Acquiring lock for db backup.")
      self.__cassandra_backup_lock.acquire(True)
      logging.info("Got the lock for db backup.")
      if not cassandra_backup.backup_data(storage, path):
        return self.bad_request("DB backup failed!")
      else:
        logging.info("Successful db backup!")
    except backup_exceptions.BRException as exception:
      logging.error("Unable to complete db backup: {0}".format(exception))
      success = False
      reason = str(exception)
    finally:
      self.__cassandra_backup_lock.release()
    return json.dumps({'success': success, 'reason': reason})

  def do_cassandra_restore(self, storage, path):
    """ Top level function for doing Cassandra restores.

    Args:
      storage: A str, one of the StorageTypes class members.
      path: A str, the name of the backup file to be created.
    Returns:
      A JSON string to return to the client.
    """
    success = True
    reason = "success"
    try:
      logging.info("Acquiring lock for db restore.")
      self.__cassandra_backup_lock.acquire(True)
      logging.info("Got the lock for db restore.")
      if not cassandra_backup.restore_data(storage, path):
        return self.bad_request("DB restore failed!")
      else:
        logging.info("Successful db restore!")
    except backup_exceptions.BRException as exception:
      logging.error("Unable to complete db restore: {0}".format(exception))
      success = False
      reason = str(exception)
    finally:
      self.__cassandra_backup_lock.release()
    return json.dumps({'success': success, 'reason': reason})

  def do_zookeeper_backup(self, storage, path):
    """ Top level function for doing Zookeeper backups.

    Args:
      storage: A str, one of the StorageTypes class members.
      path: A str, the name of the backup file to be created.
    Returns:
      A JSON string to return to the client.
    """
    success = True
    reason = "success"
    try:
      logging.info("Acquiring lock for zk backup.")
      self.__zookeeper_backup_lock.acquire(True)
      logging.info("Got the lock for zk backup.")
      if not zookeeper_backup.backup_data(storage, path):
        return self.bad_request("ZK backup failed!")
      else:
        logging.info("Successful zk backup!")
    except backup_exceptions.BRException as exception:
      logging.error("Unable to complete zk backup: {0}".format(exception))
      success = False
      reason = str(exception)
    finally:
      self.__zookeeper_backup_lock.release()
      logging.info("Released lock for zk backup.")
    return json.dumps({'success': success, 'reason': reason})

  def do_zookeeper_restore(self, storage, path):
    """ Top level function for doing Zookeeper restores.

    Args:
      storage: A str, one of the StorageTypes class members.
      path: A str, the name of the backup file to be created.
    Returns:
      A JSON string to return to the client.
    """
    success = True
    reason = "success"
    try:
      logging.info("Acquiring lock for zk restore.")
      self.__zookeeper_backup_lock.acquire(True)
      logging.info("Got the lock for zk restore.")
      if not zookeeper_backup.restore_data(storage, path):
        return self.bad_request("ZK restore failed!")
      else:
        logging.info("Successful zk restore!")
    except backup_exceptions.BRException as exception:
      logging.error("Unable to complete zk restore: {0}".format(exception))
      success = False
      reason = str(exception)
    finally:
      self.__zookeeper_backup_lock.release()
      logging.info("Released lock for zk restore.")
    return json.dumps({'success': success, 'reason': reason})
Our new home has a Wood/Oil combination furnace. This means that we can burn oil on one side, and start a wood fire on the other. This is great for us because we have 100 acres of woodland out behind the house. If I can stay on top of it, we will never lack for home heating. Eventually we are hoping to be able to harvest our own fire wood from our own property. This will significantly reduce the cost of heating our home.
One of the problems with doing this is the length of time wood needs to be dried for. Based on all of my reading, wood should be cut into 16 inch lengths (bucked) and left for at least one year (longer is better) to slowly dry out. As you can imagine, this takes up a lot of space: leaving it to dry for a year means having a stack you are burning, a stack you are drying, and a stack you are building up. That’s three stacks at 6 cords (768 ft³) each, for a total of 18 cords (2,304 ft³), all needing to be stored and sheltered.
That’s a lot of wood.
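The storage arithmetic above is easy to check, taking a full cord at its standard 128 cubic feet:

```python
# A full cord of stacked firewood is defined as 128 cubic feet (4 x 4 x 8 ft).
CORD_FT3 = 128

stack = 6 * CORD_FT3   # one year's supply: burning, drying, or building up
total = 3 * stack      # all three stacks together

print(stack)  # 768
print(total)  # 2304
```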
Currently, we are using two buildings for storing wood: the stable portion of the barn, and a shed that is attached to the garage. I don’t mind using the shed, but am very disappointed that we are taking up a large portion of the barn. Further, this only accounts for a single year’s worth of wood: what do we do with the other two years’ worth?
One technique I have seen for storing that much wood is to stack it in such a way that it shelters itself. Effectively, you build a stack, and then shingle it with some of the wood you have cut and split. This keeps the wood dry, and uses no extra material.
One interesting technique that I came across for doing this is the Holz Meite or Holz Hausen.¹ Due to its cylindrical shape, the Holz Meite minimizes the surface area that is exposed to rain, and minimizes the footprint. By stacking the wood in a tight block, as tall as it is wide, you find a balance between stability and minimum footprint (you want short and squat, not tall and liable to fall over).
My idea is to construct a square Holz Meite. This will give me the nice dense storage of a Holz Meite with the strength² and tidiness³ of nice square joins. Once I have completed the peaked roof, I will lay a tarp over the entire stack, and place another layer of logs over that. This should keep the tarp from flying away.
This leads to an ongoing annual strategy for dealing with wood:
- Spring: start obtaining 6 cords of wood. This can be done either through buying or cutting my own.
- Summer: make a new stack of wood for long term storage and seasoning (Holz Meite)
- Fall: take old stack and move it to the storage shed
- Winter: empty the storage shed by heating the house
Supply chains can fall apart and prices can go through the roof; but hopefully, with a hundred acres of forest, we can become self-sufficient in our heating. It will take a lot of work, but with the expense of purchasing fuel for the house, a couple of days in the woods sounds like a small price to pay.
¹ While many North American sites state that this is an ancient German technique, Germans seem to have never heard of it. ↩
² I know that round is often stronger than square; however, it is damn near impossible to get perfectly round by hand. Making reasonably straight sides is possible. ↩
³ The Wife makes fun of me for my insistence on being so fussy about tidiness, but I know (without proof) that there is a correlation between strength and tidiness. ↩
module BBLib
  def self.title_case(str, first_only: true)
    str = str.to_s unless str.is_a?(String)
    ignoreables = %w(a an the on upon and but or in with to)
    regx = /\s+|\-|\_|(?<=[\w\d])\.(?=[\w\d])|(?<=\W|^)\"(?=\w|$)|(?<=\W|^)\'(?=\w|$)|\(|\)|\[|\]|\{|\}|\#/
    spacing = str.scan(regx).to_a
    words = str.split(regx).map do |word|
      if ignoreables.include?(word.downcase)
        word.downcase
      elsif first_only
        word.to_s.slice(0, 1).to_s.upcase + word.to_s[1..-1].to_s
      else
        word.capitalize
      end
    end
    # Always cap the first word
    words[0] = words.first.to_s.slice(0, 1).to_s.upcase + words.first.to_s[1..-1].to_s
    combined = words.interleave(spacing).join
    # Upcase single letters between dots (e.g. initials such as "u.s.a.")
    combined.scan(/(?<=\.)\w(?=\.)/).each { |part| combined.sub!(".#{part}.", ".#{part}.".upcase) }
    combined
  end

  def self.start_case(str, first_only: false)
    regx = /[[:space:]]+|\-|\_|\"|\'|\(|\)|\[|\]|\{|\}|\#/
    spacing = str.scan(regx).to_a
    words = str.split(regx).map do |word|
      if first_only
        word[0] = word[0].upcase
        word
      else
        word.capitalize
      end
    end
    words.interleave(spacing).join
  end

  def self.camel_case(str, style = :lower)
    regx = /[[:space:]]+|[^[[:alnum:]]]+/
    words = str.split(regx).map(&:capitalize)
    words[0].downcase! if style == :lower
    words.join
  end

  def self.delimited_case(str, delimiter = '_')
    # NOTE: the original pattern used \#{delimiter}, which escapes the '#'
    # and so never interpolated the delimiter; Regexp.escape interpolates
    # it safely even when it contains regex metacharacters.
    regx = /[[:space:]]+|[^\w\d]+|#{Regexp.escape(delimiter)}+/
    str.split(regx).join(delimiter)
  end

  def self.snake_case(str)
    BBLib.delimited_case(str, '_')
  end

  def self.method_case(str)
    str.gsub(/(?<=[^^])([A-Z])(?=[^A-Z\s])/, '_\1').gsub(/[\s\_]+/, '_').snake_case.downcase
  end

  def self.class_case(str)
    str.gsub(/(?<=[^^])([A-Z])/, ' \1').gsub(/\s+/, ' ').title_case.gsub(/\s+|\_/, '')
  end

  def self.spinal_case(str)
    BBLib.delimited_case str, '-'
  end

  def self.train_case(str)
    BBLib.spinal_case(BBLib.start_case(str))
  end
end

class String
  def title_case(first_only: false)
    BBLib.title_case self, first_only: first_only
  end

  def start_case(first_only: false)
    BBLib.start_case self, first_only: first_only
  end

  def camel_case(style = :lower)
    BBLib.camel_case self, style
  end

  def delimited_case(delimiter = '_')
    BBLib.delimited_case self, delimiter
  end

  def snake_case
    BBLib.snake_case self
  end

  def method_case
    BBLib.method_case(self)
  end

  def class_case
    BBLib.class_case(self)
  end

  def spinal_case
    BBLib.spinal_case self
  end

  def train_case
    BBLib.train_case self
  end
end
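For comparison, the regex trick behind method_case (insert an underscore before each interior capital, collapse whitespace and underscores, then lowercase) translates almost directly to Python. This is my own illustrative sketch, not part of BBLib:

```python
import re

# Rough Python analogue of BBLib's method_case: underscore before interior
# capitals, normalize separators, lowercase the result.
def method_case(text):
    text = re.sub(r'(?<=.)([A-Z])(?=[^A-Z\s])', r'_\1', text)
    text = re.sub(r'[\s_]+', '_', text)
    return text.lower()

print(method_case("SomeClassName"))  # some_class_name
print(method_case("HTTPServer"))     # http_server
```

Note how the lookahead `[^A-Z\s]` keeps runs of capitals like "HTTP" together, splitting only where a new lowercase word begins.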
Project has migrated to GitHub!
New documentation available.
1 - Introduction
2 - Presentation of GUI
3 - Sftp-who
4 - Sftp-state
5 - Config
6 - Log
7 - Remote Connection
8 - Parameters
9 - About
10 - Configuring MySecureShell
11 - Wizard
12 - Expert
Once installed, MySecureShell must be configured. Here we look at the configuration GUI written in Java. To run the graphical tool, you must have Java installed beforehand; download it from the Sun site [Java http://java.sun.com]. We recommend Sun's Java rather than "Microsoft Java" if you are running on a "Microsoft Windows" architecture.
Since version 1.8 of the graphical tool, it is no longer necessary to use the user "root".
Instead, add an administrator user to MySecureShell (via the IsAdmin tag) and launch the GUI as this user.
There are two ways to start the graphical tool:
- The first applies if your operating system supports file extension associations. In this case, simply double-click the file "sftp-mss.jar" and the MySecureShell graphical tool will start.
- The second is to launch the jar file from the command line in a terminal. To do so, run the following command:
java -jar sftp-mss.jar
Now that the graphical tool is launched, let us examine it. It consists of several tabs, as follows:
We shall see in detail each of these elements.
This tab lets you see who is connected and lets you disconnect any user.
When users are connected, you can see exactly what they are doing on the server. When multiple users are connected, you need precise control over disconnections, so a small menu is provided.
First, select the users you want to act on using the check boxes. Then choose the option that suits you:
- "Do nothing": the default choice, which has no effect on the boxes you have checked.
- "Show selected": displays only the users whose boxes have been checked.
- "Show only unselected": inverts the display of the selection, showing only the users who have not been checked.
Several actions are then possible, depending on the selection made above:
The button "Refresh" refreshes the list of users currently connected to the server.
The button "Refresh every seconds" has the same function as "Refresh", but updates the list every second.
The button "Kill selected customers" disconnects the users once they are selected.
Sftp-State can enable or disable the SFTP server and check its status.
If the server is enabled, its status is shown as "ONLINE !".
If the server is disabled, its status is shown as "OFFLINE !".
The button "Close server" disables the SFTP server. This button only appears if the server is "ONLINE".
The button "Open server" enables the SFTP server. This button only appears if the server is "OFFLINE".
Finally, the checkbox "Keep users connected" keeps the currently connected users until the end of their session if you choose to disable the SFTP server.
This tab allows you to configure the server. There are 2 kinds of configuration:
- The wizard configuration mode, designed for beginners or those with little knowledge.
- The expert configuration mode, for people wanting to make more precise settings.
We will see below ([#conf Chapter 2]) how to perform the configuration.
Here you can see everything that happens and has happened on your server. These are logs!
Several colors exist in the logs that allow better visualization of actions performed.
The part containing "Search" will help you to look for very specific data in the logs (action equivalent to "grep").
For example, by typing "guest" you get all the log information about that user.
You can also export a selection of the logs: select the log lines you want and click the button "Copy selection". You can then paste the selection wherever you like.
With this tab you will be able to remotely administer your server MySecureShell.
Before you connect remotely, make sure the server configuration grants you the right to administer the server.
The directive "IsAdmin" must be set to a valid value for the user you will connect with remotely.
To set the directive "IsAdmin" to "True", either edit the configuration file by hand or use the Java GUI locally (on the computer where MySecureShell is installed). You can then use the "Remote" tab.
To connect remotely, it's very simple. Simply complete these fields:
- Host: IP address or hostname of the MySecureShell server, optionally followed by the port (default: 22).
- Login: an identifier with administrator rights (the directive "IsAdmin").
- Password: the password of this identifier.
Then click the button "Connect". Once connected, the Host, Login and Password fields become disabled and the button "Connect" changes to "Disconnect". The red tab indicator also turns green if the operation is successful.
You will also see a small white rectangle in which the connection status appears.
This tab sets the location of certain files and the language of the GUI.
- Language : choose the language of the GUI here.
- Configuration by Default : choose whether wizard mode or expert mode is the default configuration mode.
Finally, click on the button "Save" to save your settings.
Some information and links to MySecureShell.
Remember that the forum is the best way to find the solution to your problem!
MySecureShell is configured in the "Config" tab. We will look here at the 2 possible modes.
The wizard mode is a simplified configuration: the possible options are reduced to the basics, which allows an easy configuration with a maximum of preconfigured security.
In this menu you can make several choices, and settings are suggested to you for each. Click "Next" after each step, then "Finish" when you complete the configuration. We will look at the various options in turn.
- Home by default : This is the directory where users go online once.
- Download maximum speed : the maximum downstream throughput the server will supply.
- Upload maximum speed : the maximum upstream throughput the server will supply.
- List of users / existing groups : select the users you want to register with MySecureShell.
- Create new user : Create a user for MySecureShell.
- Delete : Deletes the account MySecureShell for the user.
- Select the administrator : selects the MySecureShell administrator, who will have all administrative rights over MySecureShell.
- Hide files without rights : automatically hides files from users who do not have rights to them.
- Maximum connection : This is the total maximum number of simultaneous connections.
- Maximum connection by user : Maximum number of connections per concurrent user.
The last choice is "Go to expert mode". You can now go to expert mode. You will be asked whether you want expert mode to be the default or not. You can change your choice later in the "Parameters" tab.
The MySecureShell configuration is divided into 2 parts: first-level and second-level tags.
Pay close attention to these distinctions!
First-level tags identify a user, a group, etc.
Second-level tags serve as directives restricted to a first-level tag.
There is one first-level directive that is ESSENTIAL to the proper functioning of MySecureShell, called "Default". It is created automatically when you install MySecureShell for the first time. Never delete it, because it governs the default rules of your MySecureShell server!
To add a first-level tag, just click the button "Add Tag". To delete one, click "Delete Tag".
The second-level tags appear in the central area of the window. They correspond to directives such as "Home", "Download", "Upload", etc.
We have created several categories to better identify them.
We will create a directive for an example user called "guest". If you want a complete system account, including a "home" directory like your other system accounts, you must create it yourself before configuring MySecureShell; otherwise a basic account without a "Home" will be created.
If a system account already exists for the user name you enter in MySecureShell, it will be used for the connection.
Click "Add Tag", choose the type of first-level tag (here "User") and enter "guest" in "Tag value".
Then click "Add". The directive is created. When you select it, gray cells appear above: a cell appears gray when its value is already defined in the "Default" directive or when no value is defined. You can of course change them.
Now that you know everything you need to make a configuration, you can save your own by clicking "Save". You can test it by connecting to the MySecureShell server you have just configured. If you encounter any problems, look at the "Logs" tab; that is what it is for ;-).
You can see the current configuration of your MySecureShell server by clicking the "See config" button.
We recommend you consult the tag reference to understand what each tag represents.
|
OPCFW_CODE
|
Two weeks after publishing the ONLY review seen to date of the updated Shimano R8100-P (& R9200-P) firmware 4.2.0, I’m struggling to get a resolution from Shimano Australia regarding the issues I encountered. The data presented clearly indicated there’s a problem with the power meter I purchased. Their initial reply was very generic and indicated they had not reviewed the issues in any way. Here’s a breakdown of their reply and how it may or may not apply to the testing I perform.
SUBSCRIBE to GPLama YouTube Channel ➡️ https://bit.ly/3aUph9W ⬅️
GPLama Review of Firmware 4.2.0 for the R8100-P Power Meter: https://youtu.be/1KmpgWVgDs4
Firmware Update to Improve the Power Meter Accuracy (Shimano Official): https://bit.ly/49lRWRX
GPLama Shimano R9200-P Review: https://youtu.be/2xuAPFWL6LM
GPLama Stages R9200 Dual Review: https://youtu.be/H1YxXidWZc4
GPLama Stages R8100 Dual Review: https://youtu.be/yNv0bdyhS2Y
Keith Wakeham Shimano Meter Analysis Video: https://youtu.be/Zwt4Bx_FGHU
GPLama Shimano Power Meter Report (2019): https://bit.ly/3Ct5sS4
Shimano R8100-P Technical Specifications:
● Shimano Hollowtech II Crankset.
● Type: Dual Sided Meter.
● Power Accuracy ±2.0%
● ANT+ [Bicycle Power]: Power, Cadence, Left/Right Power Balance, Torque Effectiveness, Pedal Smoothness, Battery Level.
● Bluetooth ® LE [Cycling Power]: Power, Cadence, Left/Right Power Balance, Battery Level.
● ANT+ / Bluetooth ® LE [Force Vector]: Power, Cadence, Left/Right Power Balance, Pedalling Efficiency, Force vector, Battery level. *Only on supported head units.
● Active Temperature Compensation.
● Battery: Internal / Rechargeable / 300h+ runtime.
● USB charging (Same charge cable as 12spd Di2)
● Weight: 172.5mm 790g (Measured in the Lama Lab)
● E-Tube iOS & Android™ Management App / Configuration / Firmware Updates.
● Other: Status LED. Control Button (Status/Zero Offset).
#Cycling #Shimano #PowerMeter
|
OPCFW_CODE
|
Francois Imberton, SAP Product Management, provided an ASUG webcast on SAP Dashboards BI 4.0 Feature Pack 3.
The usual disclaimer applies that things are subject to change.
Figure 1, Source: SAP
BI 4.0 offers a new panel to create queries inside the Dashboard Designer. This panel is being enhanced for additional data sources such as BW. Figure 1 shows that you can access BW via BEx queries from the Query Panel. You can also access third-party OLAP via universes, such as SQL Server Analysis Services (SSAS) and Essbase. The semantic layer will also offer connectivity to SAP ERP classic InfoSets and ABAP functions.
Figure 1 also shows there will be new ways to visualize hierarchies. This is shown as part of the demo of this webcast.
Long term, Francois said they want to simplify authoring to reduce dependency on Excel. Excel (as part of Dashboards) will never go away, but it introduces overhead in building and maintaining a solution; this is to be improved through better binding.
Figure 2, Source: SAP
Figure 2 shows the Query Panel in Feature Pack 3. It enables you to access hierarchies defined in OLAP source. You can select hierarchies as result objects or conditions – it can be level based member selection or value based. Figure 2 shows how it looks in Design Time.
Figure 3, Source: SAP
Figure 3 shows the run time of how the hierarchies look using the tree view prompt selector or the hierarchical table.
In the hierarchical table, using BEx / ECC data sources, the component can act as a Report to Report Interface (RRI) sender to a BEx Web report, an ECC transaction, or trigger another action, as long as the row select fits the context. So you can jump from a dashboard table to a BEx web or an ERP transaction.
Francois said functionality like this allows BI to “close the loop” between BI and business processes.
DEMO of Dashboards 4.0 Feature Pack 3
Figure 4, Source SAP
Figure 4 shows a demo of Dashboards 4.0 FP3 in preview mode
This contains two query prompt components. One is based on a hierarchy based on BEx/BW.
Figure 5, Source: SAP
Figure 5 shows the navigation of the hierarchy and the resulting change in the chart below.
Figure 6, Source: SAP Free Characteristics
Figure 6 shows the selection of Free Characteristics from the BEx Query inside the Dashboard Preview mode and the ability to select multiple values.
On the left of Figure 6 is the hierarchical table where the response time in navigating is fast due to multiple levels of caching.
Figure 7: Source: SAP
Figure 7 shows how the hierarchical table acts like a spreadsheet and when you select a row, it acts as a selector and refreshes the information (chart) below.
Figure 8, Source: SAP
Figure 8 shows a hierarchy example against SQL Server Analysis Services (SSAS)
Figure 9, Design Time BI 4.0 Dashboard Feature Pack 3, Source: SAP
Figure 9 shows the design time of feature pack 3
On the left Query 1 and 2 are based on BEx Queries while Query 3 is based on SSAS.
Figure 10, Source: SAP
Figure 10 shows the Edit Query within Dashboards 4.0 FP3 based on a BEx query based on Flex.
Francois said that Figure 10 shows how similar this is to Web Intelligence on top of BEx.
Francois said they wanted to provide a “unified experience” regardless of the data source.
Figure 11, Source: SAP
Figure 11 shows the member selector that defines the levels you want to select from BW. You can define selection based on values or go into levels and retrieve specific levels of the hierarchy.
Figure 12, Prompts
Figure 12 shows the Prompts; the first one comes from the BEx Query and the second one was defined within the Query Panel. Also notice in the prompt summary there is a dependency (blue arrow). First answer the root and then get a list of values for the dependent prompts.
Figure 13 Preview of Query Result, Source: SAP
Figure 13 shows the preview of query results, so you can see how the results will look; earlier tools such as QaaWS did not have this feature.
Dashboards 4.0 Feature Pack 3 BW Connectivity in More Details
Figure 14, Source: SAP
Figure 14 covers BW Connectivity, comparing BEx Connector with Xcelsius 2008.
The new connector provides consistency across the clients, and consistency across the platform: with FP 3 you no longer need BI Java.
It also provides “interoperability” between dashboards and other clients. The new infrastructure uses new dashboard services and platform services (including monitoring).
Before, you had to publish to BW; this limitation is now gone.
To bind data/components to the data, you do not need to go to Excel anymore; you can bind to the query. This simplifies the dashboard authoring and maintenance.
You still need to bind to Excel, however, to define the Chart title.
Figure 16, Source: SAP
Figure 16 shows limitations in BI 4.0 FP 3
BEx variable dependencies are handled at design time – and not at run time
Dashboard audience is casual; BEx variable dependency features work better in Analysis
Other Dashboard FP3 Enhancements
Figure 17, Source: SAP
The upgrade from Adobe Flex 2 to Flex 4 – for customers it has no impact unless you are using add-ons provided by partners
These partner add-ons need to be updated to Flex 4.
If you don’t upgrade add-ons these will not work with Dashboard FP 3.
The upgrade was done at the request of partners to take advantage of Flex 4 features.
When Excel formulas are used over large ranges of data, performance has been improved.
Combination charts will have the capability to define alerts with multiple series.
New waterfall chart preview is shown in Figure 17.
Question & Answer
Q: What is the planned release of FP 3?
A: The plan is Q1 2012 – plans are subject to change
Q: Will hierarchical demo work with MSAS cubes?
A: Yes – shown in 3rd tab of the demo
Q: Does the new version support bookmark functionality like Web Application Designer (WAD) does? The user can drill down by hemisphere and region, and when they bookmark it, it saves those variables so they do not have to go through the drilldown/variable process again?
A: Not offered out of the box; it is about personalization.
Dashboard designer has a local scenario specific to a laptop – you can persist values to retrieve view state, but it will not be shareable between users.
An alternative is to persist in an XML file on the server – but that is custom work.
This request is on the backlog; it has not been discussed post FP 3.
Q: What is the minimum version of NW Portal that FP 3 supports?
A: In FP 3, to integrate with NW Portal you use the Portal Integration Kit with BI Launchpad – you would need to follow up with that group
Q: Is there an XCelsius to Dashboard migration plan?
A: There is no migration – no features are being removed.
Q: Does the hierarchy table support BPC hierarchies?
A: Post FP 3, there is a plan to provide universe connectivity on top of EPM cubes. The EPM suite's extended analytics package does provide connectivity from Dashboards to EPM, but not with hierarchies as shown here.
Q: Are SWF and XLF combined?
A: They are – when you publish now, you publish one entity; physically they are two files but they appear as one.
Q: Does the FP 3 Dashboard come with BI 4.0 FP 3?
A: Yes, it is part of the core release – it is the same release. Dashboards is in sync and included in that release.
I thank SAP’s Francois Imberton for this ASUG webcast. We have two more upcoming BI 4.0 Feature Pack 3 webcasts for ASUG Members:
December 1st: SAP Integration in SAP BusinessObjects BI 4
December 7th: BusinessObjects Analysis Feature Pack 3 Updates
|
OPCFW_CODE
|
"""API for QAOA, which includes Quantum Approximate Optimization Algorithm,
Quantum Alternating Operator Ansatz, and any layered ansatzes."""
from numpy import mod
from openfermion import QubitOperator
from openfermion.utils import count_qubits
from zquantum.core.circuit import Circuit, Qubit, Gate
from zquantum.core.evolution import time_evolution, time_evolution_derivatives
from qeopenfermion import convert_qubitop_to_dict, convert_dict_to_qubitop
from forestopenfermion import qubitop_to_pyquilpauli
def build_farhi_qaoa_circuit_template(hamiltonian):
    """Constructs a circuit template for a QAOA ansatz.
    Args:
        hamiltonian (openfermion.QubitOperator): the Hamiltonian for which
            the ground state is sought
    Returns:
        dict: dictionary describing the ansatz
    """
n_qubits = count_qubits(hamiltonian)
diffusion_op = QubitOperator()
for i in range(n_qubits):
diffusion_op += QubitOperator((i, 'X'))
    ansatz = {'ansatz_type': 'QAOA',
'ansatz_module': 'zquantum.qaoa.ansatz',
'ansatz_func' : 'build_qaoa_circuit',
'ansatz_grad_func': 'build_qaoa_circuit_grads',
'supports_simple_shift_rule': False,
'ansatz_kwargs' : {
'hamiltonians': [
convert_qubitop_to_dict(hamiltonian),
convert_qubitop_to_dict(diffusion_op)]
},
'n_params': [2]
}
    return ansatz
def build_qaoa_circuit(params, hamiltonians):
    """Generates a circuit for QAOA. This covers not only the QAOA proposed by
    Farhi et al., but also any general ansatz in which alternating layers of time
    evolution under two different Hamiltonians H1 and H2 are applied.
Args:
hamiltonians (list):
A list of dict or zquantum.core.qubitoperator.QubitOperator objects representing Hamiltonians
H1, H2, ..., Hk which forms one layer of the ansatz
exp(-i Hk tk) ... exp(-i H2 t2) exp(-i H1 t1)
            For example, in the case of the QAOA proposed by Farhi et al., the list is
            [H1, H2], where
            H1 is the Hamiltonian for which the ground state is sought, and
            H2 is the Hamiltonian whose time evolution acts as a diffuser
            in the search space.
params (numpy.ndarray):
A list of sets of parameters. Each parameter in a set specifies the time
duration of evolution under each of the Hamiltonians H1, H2, ... Hk.
Returns:
zquantum.core.circuit.Circuit: the ansatz circuit
"""
    if mod(len(params), len(hamiltonians)) != 0:
        # Warn but continue: the last, incomplete layer is still built.
        import warnings
        warnings.warn('There are {} input parameters and {} Hamiltonians. Since {} '
                      'does not divide {}, the last layer will be incomplete.'.format(
                          len(params), len(hamiltonians),
                          len(hamiltonians), len(params)))
# Convert qubit operators from dicts to QubitOperator objects, if needed
for index, hamiltonian in enumerate(hamiltonians):
if isinstance(hamiltonian, dict):
hamiltonians[index] = convert_dict_to_qubitop(hamiltonian)
output = Circuit()
    # Start with a layer of Hadamard gates
n_qubits = count_qubits(hamiltonians[0])
qubits = [Qubit(qubit_index) for qubit_index in range(n_qubits)]
output.qubits = qubits
for qubit_index in range(n_qubits):
output.gates.append(Gate('H', (qubits[qubit_index],)))
# Add time evolution layers
for i in range(params.shape[0]):
hamiltonian_index = mod(i, len(hamiltonians))
current_hamiltonian = qubitop_to_pyquilpauli(hamiltonians[hamiltonian_index])
output += time_evolution(current_hamiltonian, params[i])
return output
def build_qaoa_circuit_grads(params, hamiltonians):
""" Generates gradient circuits and corresponding factors for the QAOA ansatz
defined in the function build_qaoa_circuit.
Args:
hamiltonians (list):
A list of dict or zquantum.core.qubitoperator.QubitOperator objects representing Hamiltonians
H1, H2, ..., Hk which forms one layer of the ansatz
exp(-i Hk tk) ... exp(-i H2 t2) exp(-i H1 t1)
            For example, in the case of the QAOA proposed by Farhi et al., the list is
            [H1, H2], where
            H1 is the Hamiltonian for which the ground state is sought, and
            H2 is the Hamiltonian whose time evolution acts as a diffuser
            in the search space.
params (numpy.ndarray):
A list of sets of parameters. Each parameter in a set specifies the time
duration of evolution under each of the Hamiltonians H1, H2, ... Hk.
    Returns:
        gradient_circuits (list of lists of zquantum.core.circuit.Circuit): the
            gradient circuits
        circuit_factors (list of lists of floats): combination coefficients for the
            expectation values of the list of circuits.
"""
    if mod(len(params), len(hamiltonians)) != 0:
        # Warn but continue: the last, incomplete layer is still built.
        import warnings
        warnings.warn('There are {} input parameters and {} Hamiltonians. Since {} '
                      'does not divide {}, the last layer will be incomplete.'.format(
                          len(params), len(hamiltonians),
                          len(hamiltonians), len(params)))
# Convert qubit operators from dicts to QubitOperator objects, if needed
for index, hamiltonian in enumerate(hamiltonians):
if isinstance(hamiltonian, dict):
hamiltonians[index] = convert_dict_to_qubitop(hamiltonian)
hadamard_layer = Circuit()
    # Start with a layer of Hadamard gates
n_qubits = count_qubits(hamiltonians[0])
qubits = [Qubit(qubit_index) for qubit_index in range(n_qubits)]
hadamard_layer.qubits = qubits
for qubit_index in range(n_qubits):
hadamard_layer.gates.append(Gate('H', (qubits[qubit_index],)))
# Add time evolution layers
gradient_circuits = []
circuit_factors = []
for index1 in range(params.shape[0]):
hamiltonian_index1 = mod(index1, len(hamiltonians))
current_hamiltonian = qubitop_to_pyquilpauli(hamiltonians[hamiltonian_index1])
derivative_circuits_for_index1, factors = time_evolution_derivatives(
current_hamiltonian,
params[index1])
param_circuits = []
for derivative_circuit in derivative_circuits_for_index1:
output_circuit = Circuit()
output_circuit.qubits = qubits
output_circuit += hadamard_layer
for index2 in range(params.shape[0]):
hamiltonian_index2 = mod(index2, len(hamiltonians))
if index2 == index1:
output_circuit += derivative_circuit
else:
current_hamiltonian = qubitop_to_pyquilpauli(hamiltonians[hamiltonian_index2])
output_circuit += time_evolution(current_hamiltonian, params[index2])
param_circuits.append(output_circuit)
circuit_factors.append(factors)
gradient_circuits.append(param_circuits)
return gradient_circuits, circuit_factors
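The alternating-layer structure implemented above can be sanity-checked on a toy instance with dense matrices. The sketch below is a self-contained illustration in plain NumPy/SciPy (it does not use the zquantum objects above, and the 2-qubit cost and mixer Hamiltonians are assumptions chosen for the example):

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit Paulis
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy 2-qubit instance: cost Hamiltonian H1 = Z x Z, mixer H2 = X x I + I x X
H_cost = np.kron(Z, Z)
H_mix = np.kron(X, I2) + np.kron(I2, X)

def qaoa_state(params):
    """Apply alternating evolutions exp(-i H2 t2) exp(-i H1 t1) ... to |++>."""
    state = np.full(4, 0.5, dtype=complex)  # |++>, the Hadamard layer's output
    hams = [H_cost, H_mix]
    for i, t in enumerate(params):
        # Cycle through the Hamiltonians, as build_qaoa_circuit does with mod()
        state = expm(-1j * t * hams[i % len(hams)]) @ state
    return state

psi = qaoa_state([0.4, 0.7])  # one layer: gamma = 0.4, beta = 0.7
energy = float(np.real(psi.conj() @ (H_cost @ psi)))
```

Since both evolutions are unitary, the state stays normalized, and the cost expectation is bounded by the extreme eigenvalues of H1 (here ±1); minimizing it over (gamma, beta) is the QAOA outer loop.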
|
STACK_EDU
|
Citing sources in the text
Under the Harvard System, sources are cited in short notes in brackets in the text, and a corresponding full reference is included in a list of references at the end of the work.
Every time the ideas, facts or opinions of another are used in a piece of work this must be acknowledged with a full reference. Whether a source is quoted directly, indirectly, paraphrased or summarised, it must be acknowledged. To do otherwise is plagiarism.
Notes in the text should include:
- The name of the author - surname only e.g. 'Brown'
- The date of the source - full year e.g. '1987'.
- The specific page reference if necessary - e.g. 'p.23' or 'pp.56-78'.
If the name of the author is not known, use 'Anon' in its place. When making reference to a chapter or article that appears in a book made up of contributions from many authors, cite the name of the author of the actual article; details of the editor of the volume will appear in the list of references.
If, in the course of your work, you cite two authors with the same surname, you will need to check that the dates of their books or articles are different. If you have two Browns, each of whom wrote their piece in 1984, you will need to add their initials (e.g. F. Brown 1984, Z. Brown 1984) to distinguish them.
If the date is uncertain try to give an approximate date that is as accurate as possible. This should appear in square brackets preceded by 'ca.' e.g. [ca.1990]. If you are citing several works written in the same year, you will need to distinguish between them by writing Brown 1984a, Brown 1984b etc.
There should be no comma between the name of the author and the date, but the date and page reference should be separated with commas. Page references should be as specific as possible. Do not use vague abbreviations like 'f' or 'ff'. Note that there is no space between 'p.' and the page number.
The particular content of the parenthetical note will depend on the nature of the sentence in which the reference appears. In some instances all three pieces of information will be included, in others only the date and the page reference, or even the page reference alone.
In this example, the first sentence has a full citation, as the author's name does not appear in the text, unlike the second citation, which contains only the date and page reference.
It has long been argued (Butcher 1990, pp.78-90) that pork and leek sausages are better than pork and apple. However, Neville (2005, pp.56-98) has recently produced evidence to the contrary.
The relative merits of different varieties of sausage is a topic that has been fiercely debated in recent years (Butcher 1990, pp.78-90; Neville 2005, pp.56-98).
|
OPCFW_CODE
|
Image source - MIT
Ever since Google gained “Quantum Supremacy”, i.e it's Sycamore Quantum Computer solved in 200 seconds what would take conventional supercomputers 10,000 years to solve, the clock has been ticking on when quantum computing will be able to crack the encryption algorithms on which depend modern-day necessities, like instant messaging (WhatsApp), online shopping (Amazon), web browsing (google), online banking and the topic of my interest - Bitcoin.
Is Quantum Computing a threat to cryptocurrencies?
The short answer is “Yes”.
However, that does not mean that all is lost and you should sell your cryptos and accept that your money must be stored in ever-depreciating fiat form. There are ways to protect cryptocurrencies from this threat and the crypto community is hard at work to tackle it before it becomes a real problem.
Before we get into the details, I’d like to make 2 points:
1. A practically useful quantum computer is still many years away.
2. If a practically useful quantum computer was a reality today, we would have bigger things to worry about than bitcoin. Rogue nations would be able to intercept sensitive military secrets like nuclear codes, access controls of critical domestic infrastructure that bring life to a standstill in an enemy country and cause havoc in international finance. Bitcoin security will be the least of our problems.
Now let's get going.
We start with an Idiot’s Guide to Quantum Computing
Conventional Computers use “bits” which can have one of two values, 0 or 1. Quantum Computers use something called “qubits”. Qubits can simultaneously be 0 or 1. Remember Schrödinger’s cat? The cat is locked in a box. You can’t know if it is dead or alive till you open the box. So during the time that it is locked in a box, it is simultaneously dead and alive. (I’m not pretending to understand this. But I like the cat story).
=> So while conventional computers can perform 1 calculation at a time, qubits (which exist in multiple states at a time) enable quantum computers to perform multiple calculations at a time.
What does this have to do with cryptocurrencies?
Among quantum computing’s many use cases is the ability to break encryption. Cryptocurrencies use encryption to sign transactions and this signature is vulnerable to quantum computing.
Let me explain using Anna and George's love story
Let’s say Anna wants to send a message to George. But she knows that the nosy Aunt Sally will intercept the message and read it before passing it on to George. Anna decides to use a key to scramble the message but the problem is that she will need to send the same key to George to unscramble the message and that key too can be intercepted by Aunt Sally. The solution is that instead of using the same key to encode and decode, she can use a public-private key pair that are related to each other.
George has a private key that he keeps to himself and a public key that he broadcasts to everyone. Anna can encrypt her message using George’s public key and the message can be decrypted only by the person who has the private key i.e George. Aunt Sally will have to find someone else to spy on.
Also, when George receives a message, how can he be sure that the message has actually come from Anna and not Aunt Sally impersonating Anna? If Anna signs the message with her own private key that only she knows, George can verify the signature with Anna’s publicly broadcast public key.
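This sign-with-the-private-key, verify-with-the-public-key pattern can be sketched with textbook RSA and deliberately tiny numbers (an illustrative assumption: Bitcoin actually uses ECDSA over the secp256k1 curve, and these parameters offer no real security):

```python
# Toy textbook-RSA signature scheme. Illustration only: the primes are tiny
# and Bitcoin uses ECDSA, not RSA, but the sign/verify roles are the same.
p, q = 61, 53
n = p * q                       # modulus, part of Anna's public key
phi = (p - 1) * (q - 1)
e = 17                          # public exponent: Anna broadcasts (n, e)
d = pow(e, -1, phi)             # private exponent: Anna keeps d secret

message = 65                    # a message encoded as an integer < n
signature = pow(message, d, n)  # Anna signs with her private key
# George verifies with Anna's public key; Aunt Sally can't forge this
# without d, which she cannot feasibly derive from (n, e) classically.
verified = pow(signature, e, n) == message
```

The whole quantum threat, in this toy picture, is that deriving `d` from the public `(n, e)` stops being infeasible.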
Coming to cryptocurrencies, if Anna wants to send 1BTC to George, she will send the BTC to George’s public address and only George who has the corresponding private key can receive the BTC. Also, Anna will need to sign the transaction with her private key which will be used to verify that the BTC was actually sent by Anna and not someone pretending to be her.
What exactly in the Bitcoin network is vulnerable to quantum computing?
The short answer is “Bitcoin’s digital signature system”.
Bitcoin uses Elliptical Curve Digital Signature Algorithm (ECDSA) which creates these public-private key pairs (used by Anna and George above). The relationship between the public-private key pairs is such that if you know the public key, you can verify that the transaction was approved by the holder of the private key, but you cannot derive the private key.
Actually, a more accurate statement would be that it is not feasible to derive the private key from the public key using conventional computers. It would take billions of years. However, in 1994, the mathematician Peter Shor published Shor’s Algorithm using which someone with a sufficiently powerful quantum computer can break the cryptography behind the public-private key pair and derive the private key from the public key much faster.
Who is vulnerable to quantum attacks?
Crypto security experts are constantly reminding us- “Not your keys, Not your coins”. This means that if someone else has your private keys, she has your coins.
The private key however is under threat only if the public key is exposed. If you don’t reveal your public key, then the private key is quantum-safe. The vulnerability happens when the public key is exposed which can happen in the following 3 situations:
1. P2PK addresses - When one transfers BTC to someone, the receiver’s public key is not exposed; what is exposed is the hash of the receiver’s public key (P2PKH - Pay to Public Key Hash). Therefore the receiving address is not under threat. However, in the first year of bitcoin operations, the public key of the receiving address was exposed (P2PK - Pay to Public Key).
=> If those funds have not been moved from the P2PK addresses then they’re vulnerable to a quantum attack. Further, some people have forgotten their private keys and cannot move coins even if they wanted to. The first to develop a usable quantum computer may be able to derive the corresponding private keys and keep those coins for themselves.
2. Reused Addresses - When one transfers BTC from one's wallet, the sender’s public key becomes visible on the blockchain and is vulnerable to a quantum attack. Therefore, one must never reuse an address. Today it is standard practice to create a new public key every time one makes a transfer. Every half-decent crypto wallet does this automatically.
=> As long as users adopt good practices like not reusing public keys, this is not really a problem.
3. Unprocessed Transactions - Once you broadcast your transaction, there is a window when your public key is exposed but the transaction is not confirmed. Your transaction would be sitting in the mempool till it makes it to a block. During that time, if a quantum computer can derive the private key, it can send your coins to the hacker’s address instead of the address you intended to send it to. To ensure that their malicious transaction makes it to the block and not your legitimate transaction, they would pay a higher transaction fee.
=> According to the paper Quantum attacks on Bitcoin, and how to protect against them, quantum computers could break the signature scheme in under 10 minutes (the average time to create a block) as early as 2027. This would make every transaction henceforth vulnerable. This represents the biggest quantum threat to Bitcoin.
If quantum computing already exists then why isn’t anyone using it to steal BTC?
Quantum Computers of roughly 4,000 qubits will be required to break the Bitcoin code. The most powerful Quantum Computers currently existing operate with only about 50 qubits.
Does that mean that once Quantum Computing has advanced to the level where it has 4,000 qubits then that will be the end of cryptocurrencies?
No- The solution is to use quantum-resistant algorithms that cannot be broken even by quantum computing. There are several options available and their pros and cons are being evaluated by the crypto community. Also, while quantum computing is likely to break several of current encryption algorithms, it is also capable of creating hack-proof replacements.
Can someone equipped with a Quantum Computer launch a 51% attack?
To remind the reader, a 51% attack is when someone gains control over 51% of the hashing power and hence controls the blockchain. That someone can control which transactions make it to the blockchain, get all of the block-rewards and transaction fees, approve double-spends, and do other manipulations.
To create the next bitcoin block, one needs to solve a SHA-256 Proof-of-Work hashing problem. There is no algorithm, conventional or quantum, that can reverse engineer SHA-256. So the only way to solve it is “brute force”, which is just a fancy way of saying that you have to keep guessing till you arrive at the right answer. SHA-256 is considered quantum-resistant because the most efficient theoretical implementations of quantum computing are less efficient than ASIC miners (specialized classical computers used to mine bitcoin).
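A minimal sketch of that brute-force guessing, with a toy difficulty target and made-up block data (assumptions for illustration; real Bitcoin headers and difficulty encoding are more involved):

```python
import hashlib

def mine(block_data: bytes, difficulty_prefix: str = "000") -> int:
    """Guess nonces until SHA-256(block_data + nonce) starts with the required
    prefix. There is no known shortcut: each guess is an independent hash."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce
        nonce += 1

nonce = mine(b"example block header")
proof = hashlib.sha256(b"example block header" + str(nonce).encode()).hexdigest()
```

Each extra hex zero in the prefix multiplies the expected work by 16, which is how difficulty scales; Grover’s algorithm would only roughly square-root that work, which is why SHA-256 mining is not where the quantum threat lies.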
It is possible that with time quantum computers will be able to solve SHA-256 hashing problems faster than conventional computers. If that happens, more miners will upgrade and adopt quantum computing, just as they upgraded from CPUs to GPUs to FPGAs (Field-Programmable Gate Arrays) to ASICs (Application-Specific Integrated Circuits) in the past.
Bitcoin’s proof-of-work consensus mechanism based on SHA-256 algorithm is quantum-resistant but the Elliptical Curve Digital Signature (ECDSA) is not. Cryptocurrencies must migrate to a quantum-resistant signature before quantum computers become a reality. The crypto community is working on alternatives.
|
OPCFW_CODE
|
get relative url via mixin
Ziggy version
1.4.6
Laravel version
6.20
Description
I used to be able to generate a relative url via a mixin for a Vue component:
export default {
methods: {
route: (name, params, absolute = false) => route(name, params, absolute, Ziggy)
}
}
But this no longer works -- at least on a new branch that I am developing. Isn't this the correct way to override route() and make the url relative? I don't know whether something changed between versions of Ziggy. I'm at 1.4.6 now. Using Vue 2. I'm not getting an error, but I'm using route() to do an axios call in created(). This is new, so I'm wondering whether that could be a problem; i.e., the mixin's being ignored if the route method is called in created()? I've tried changing the method name, but that just seems to break everything.
Ziggy call and context
const {
data: {
users,
areas,
counties,
schools,
statuses
}
} = await axios.get(this.route('api.project_create_data'))
Ziggy configuration
nothing relevant
Route definition
none
Sure @bakerkretzmar. I've tried so many variations today, but here's what was working on a previous branch. I don't know exactly why I did it this way, but it did work. ;-)
in bootstrap.js
[....]
// load Ziggy resources for utilizing Laravel named routes in js
import { ZiggyVue } from 'ziggy-js/dist/vue'
window.ZiggyVue = ZiggyVue
import { Ziggy } from '@/ziggy'
window.Ziggy = Ziggy
import ZiggyRoute from '@/components/mixins/ZiggyRoute'
Vue.mixin(ZiggyRoute)
[....]
ZiggyRoute mixin
export default {
methods: {
route: (name, params, absolute = false) => route(name, params, absolute, Ziggy)
}
}
app.js
[all kinds of other imports]
Vue.use(ZiggyVue, Ziggy)
created() {
    this.getCreateData()
},
methods: {
    async getCreateData() {
        const {
            data: {
                users,
                areas,
                counties,
                schools,
                statuses
            }
        } = await axios.get(this.route('api.project_create_data'))
    }
}
For cypress testing, I use a valet-env.php file for several testing domains
In the current config, the 'APP_URL' is 'http://widea-editor.test'
The url in my generated Ziggy file is 'http://wisconsin-idea.test', which is my "normal" local dev domain.
axios.get(this.route('api.project_create_data')) should make a request to
http://widea-editor.test/ProjectCreateData
instead, it makes a request to
http://wisconsin-idea.test/ProjectCreateData, which gives me the wrong set of users to test with.
So it seems as if the mixin is being ignored.
If I change the name of the method from route to zroute in the mixin, I get an error that route is undefined in the mixin. Which doesn't really make sense to me.
To add to the above, if I try to do the same sort of thing from a Pinia store, I get the same result.
Pinia store
import route from 'ziggy-js'
import { Ziggy } from '@/ziggy'
window.route = (name, params, absolute = false, config = Ziggy) => route(name, params, absolute, config)
[....]
actions: {
async loadCreateData() {
try {
const {
data: {
users,
areas,
counties,
schools,
statuses
}
} = await axios.get(route('api.project_create_data'))
}
Thanks, that's really helpful! A few things:
- your Vue.mixin(ZiggyRoute) is probably being overwritten by Vue.use(ZiggyVue, Ziggy) because that one gets called later and also registers a mixin method called route; my guess is that's why your default absolute = false is being ignored (assuming app.js is doing require('./bootstrap.js') at the top)
- that alone would cause the wrong URL issue you're seeing in tests, but if you wanted to you could also generate a different ziggy.js file for testing that has the correct testing URL, and then you could use absolute routes
- not sure what's causing 5.
Do you have the ziggy-js npm package installed or are you aliasing ziggy-js in your mix file? Are you also using the Blade @routes directive anywhere?
Try this (remove everything else):
// app.js
import route from 'ziggy-js'
import { Ziggy } from '@/ziggy'
Vue.mixin({
methods: {
route: (name, params, absolute = false, config = Ziggy) => route(name, params, absolute, config)
}
})
Ok, that worked! I wasn't aware of the possible conflict with Vue.use(ZiggyVue, Ziggy). That really helps me understand what's going on here.
I almost went with generating multiple ziggy.js files a while back, but ultimately decided that if it worked, using a relative url is "easier" (as long as I get the ziggy setup stuff and the mixin right!).
I am using both the composer and ziggy-js packages together. I do not use @routes anywhere. I include the artisan generate command as a script in package.json to generate the ziggy file either on its own or as part of npm run dev.
I still need to check whether the problem persists in the Pinia store file. The only difference is that the mixin is replaced with
window.route = (name, params, absolute = false, config = Ziggy) => route(name, params, absolute, config)
Okay great! I'll close this now but let me know if you have any other questions. Running php artisan ziggy:generate from an npm script as part of your build is a great idea, I don't think I've seen that before but I love it.
You might already know this but you don't need both the npm and composer packages if your whole app is in one repo. You can alias ziggy or ziggy-js to the compiled javascript files using Mix, there's an example in our readme. The end result is the same though, the way you're doing it isn't wrong.
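For reference, that Mix alias setup looks roughly like the following; the vendor path here is illustrative and may differ between Ziggy versions, so check the readme for the exact one:

```js
// webpack.mix.js
const path = require('path');
const mix = require('laravel-mix');

mix.webpackConfig({
    resolve: {
        alias: {
            // resolve imports of 'ziggy-js' to the JS shipped with the
            // composer package, so the separate npm package isn't needed
            'ziggy-js': path.resolve('vendor/tightenco/ziggy/dist/js'),
        },
    },
});
```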
I'm not super familiar with Pinia but the only thing there that could be an issue is that the imported route function and your override are both called route. If you're running into problems, try something like this:
import baseRoute from 'ziggy-js'
import { Ziggy } from '@/ziggy'
const route = (name, params, absolute = false, config = Ziggy) => baseRoute(name, params, absolute, config)
// now use route() as normal, it will always be your new function and not Ziggy's
Thanks Jacob!
One thing I’m wondering about: how do you generate the ziggy routes file without the artisan command (which I’m guessing doesn’t exist in the npm package)?
No problem! You can't, it's part of the composer package. The npm package is basically for apps that have completely separate SPA frontends, potentially in a different repo, but you still need to generate the ziggy.js file in the Laravel app and then copy it to the javascript app or load it remotely or something.
@bakerkretzmar Just following regarding using Ziggy in Pinia store files.
Here's what works:
import route from 'ziggy-js'
import { Ziggy } from '@/ziggy'
const zroute = (name, params, absolute, config = Ziggy) => route(name, params, absolute, config);
and in an action/method:
myconstant = await axios.get(zroute('api.project_create_data'))
In other words, the constant has to be something other than "route", and that's what has to be called in a Pinia action. Otherwise webpack will choke on it.
Identifier 'route' has already been declared.
|
GITHUB_ARCHIVE
|
Android Stripe: get all cards of a user
I'm just using Stripe, and I have tried all the samples.
How do I get all the cards of a user? I have a user with a userid in my app. So far I just create a card object and get a token.
How can I save that card somewhere, and how do I get those cards back later?
I read all of the samples and websites but that is not clear.
Is there a way to later delete one of my user's cards from the Android app?
Or update it, etc.?
Any way to make a default card or select a card from my cards to pay with?
Thanks
Stripe makes it easy to store card details and charge them later. You just need to attach the card details to a customer object. Once you’ve done that, you can charge the customer at any time.
Securely collecting payment information
Basically, for each user you have to create a customer and add the cards that belong to that user to it. You can find customers under the Customers tab in the Stripe dashboard once you create one. Stripe provides three methods for tokenizing your customer's payment information over HTTPS. So first, you need to collect the user's card details using one of these methods and send the token to your backend, where it can be used to create a customer. Instead of saving the token, save the customer ID in your backend DB; that ID can then be used to charge the customer, and the charge will be made against the customer's default card.
P.S.
Tokenization ensures that no sensitive card data ever needs to touch your server so your integration can operate in a PCI compliant way. If any card data were to pass through or be stored on your server, you would be responsible for any PCI DSS guidelines and audits that are required.
List all cards
Once you have created a customer, you can list the cards belonging to that customer.
Delete a card
You can delete cards from a customer. If you delete a card that is currently the default source, the most recently added source will become the new default. If you delete the card that is the last remaining source on the customer, the default_source attribute will become null.
Update card / Set default card
You can retrieve a customer using the Customer::retrieve({CUSTOMER_ID}) API, fetch the card that you want to update with $card = $customer->sources->retrieve({CARD_ID}), and then update it. Setting the default_source property on the customer object to a card ID makes that card the default.
Links to refer
https://stripe.com/docs/charges#saving-credit-card-details-for-later
https://support.stripe.com/questions/can-i-save-a-card-and-charge-it-later
https://stripe.com/blog/multiple-cards
|
STACK_EXCHANGE
|
Thursday, October 23, 2008
Comic talent John Cleese has no shortage of credits, fans or accomplishments.
At 61, his successful creative career has included stints as the irascible straight man of “Monty Python” and creator of “Fawlty Towers” and its crabby, self-sabotaging hotel manager; he has also cowritten two psychology books and is a professor at Cornell University. It's hard to imagine much broader appeal. So when the BBC approached Cleese to host The Human Face, the documentary project let him infuse his unique style into his first love.
“Psychology is my primary interest and The Human Face is really about psychology,” he says. “There are other things that I would like to do and this could be a kind of steppingstone to those projects.”
Sure, it's tough to imagine a face as recognizable as Cleese's needing a steppingstone. Recognition is, after all, what The Human Face is all about. The four-part series, airing on The Learning Channel Aug. 26 and 27 and available from Winstar on VHS ($24.98) and DVD (double disc set, $34.98) Aug. 28, examines the history, biology and physiology of the human face as well as its deepest role in society: silent communication.
As if to emphasize the point, Elizabeth Hurley (“The best fun was gossiping with Elizabeth Hurley,” Cleese confides), Pierce Brosnan, Candice Bergen and Joan Collins lend a little high-profile support to the series. Python fans will also recognize familiar faces like Michael Palin and Prunella Scales.
The first episode, “Face to Face,” examines the physical structure of faces and the roles of voluntary and involuntary expressions.
“I've wanted to play poker for a long time but I never have. If I was going to play poker, I would go up to San Francisco and talk to Paul Ekman,” Cleese says. “He's the one in the series who talks about the fact that there are 18 different kinds of smiles. He teaches how to tell [when people are] lying. He teaches ‘microexpressions.' He claims that he can teach you those expressions in 40 minutes.”
Ekman's ideas got a lot of attention during the Clinton investigation, when television news programs caught on to his methods and applied them to then-President Clinton's public addresses about the Monica Lewinsky scandal. News magazine programs seized on the high brow furrow that Ekman identifies as a sign of lying.
“When people are lying they are exhibiting behaviors,” Cleese says, though experts agree those behaviors mirror nervousness, not necessarily lying. The episode also looks at the role of facial expressions in communicating not only strong emotions like fear and anger, but more subtle cues like apology and empathy.
The series examines social and cultural perceptions of faces on levels as deep as human genetics and as shallow as a stranger's glance across a crowded room. Just doing the show changed Cleese. “Especially my reaction to people who have such unusual faces, we might almost say deformed,” he says.
“I realized from an evolutionary point of view it is the genes. We perceive it as a sign of disease. The first evolutionary thought is to keep away. The next thing is for mating, choosing a partner. It's a natural unconditioned response to recoil when you meet someone [unappealing].”
Episode two, “Beauty,” looks at humanity's visual prejudices and the benefits of a pleasant visage. “It's about two things,” Cleese explains. “There is a mathematical basis for what is an attractive face, the idea used to be called the ‘Golden Proportion.' This is the idea of Stephen Marquardt. He started to use it to do reconstructive surgery.”
But while Marquardt was looking for an objective standard of beauty, much of beauty is, as the cliché says, in the eye of the beholder.
“The second part of the program is that beautiful people are imbued with all kinds of qualities that they don't really have,” Cleese says. “A lot of the people who have become celebrities are not famous because they have discovered penicillin or won the Civil War. Now it's because they are number two or three in a sitcom and often those people don't have that much to talk about.” Which may account for the number of celebrities who go into politics, at least in the United States.
“We pay far too much attention to people's faces when we try to figure out what kind of a person they are,” Cleese says, citing myths like a person whose eyes are too close together being untrustworthy. “Everybody will believe this. They will look at the picture and everyone will agree on this. There is no reason to believe this person is untrustworthy.”
The remaining two segments, “Here's Looking at You” and “Fame,” delve further into expressions and perceptions.
Not everything the team examined made it to the finished product. Cleese notes with some regret that segments on a dating service and the way pets perceive human expressions landed on the cutting room floor.
A dating service is an example of how everyone searches for the Golden Proportion, a genetic and social ideal, a trophy of sorts. “When the men first come into the dating agency, they go for the best lookers,” Cleese says. “The more intelligent ones will come back in a few days and say ‘I need someone more intelligent.’”
So by now, is there anyone who doesn't recognize John Cleese? Maybe.
“The stuff that got cut out of the portion on expressions is about Darwin. He was quite sure that his dog grinned on certain occasions,” Cleese says. “I was interested in this because I have a cat that they said doesn't recognize me. I didn't believe that.”
|
OPCFW_CODE
|
I’m using a SAML enterprise connection. I noticed that user profiles for users that have logged in via this connection don’t have the right user names. Instead of the user’s name, the name in the profile is populated with an email address.
The SAML connection has the following field mappings:
I’ve looked at the connection’s debug logs, and I see that the
http://schemas.microsoft.com/identity/claims/displayname field does have the expected user name. Auth0’s docs say that the profile should be updated automatically whenever a user logs in through the connection, but I’m not seeing the user name getting updated to match.
Any help would be appreciated! Thanks.
Welcome to the Auth0 Community! Great to have you on board.
I will run through an overview of my process implementing your use case, so you can check it against yours to see if you missed anything. At first glance your mapping prefix URLs are all different, which is unexpected; usually they will all be consistent, but let's review.
The key thing here is to take a look at what is in your SAML response. Do a login against your SAML connection with the browser dev tools open to see the network calls. There will be a POST against the ACS URL configured on your IdP, like the one below:
As you can see the SAML response is within the payload. If you copy this into https://samltool.io/ you’ll be able to see the attributes in the SAML response clearly.
In my test you can see what my IdP is configured to send, I included an additional full name attribute for illustration purposes.
As you can see my IdP is sending the attributes without any prefixed url’s so I can map this to my Auth0 property as is within the connection:
The attributes have come across as expected:
I hope this helps you. Let us know if you need anything further.
Hi, thanks for your response!
Unfortunately I’m not able to use the browser console to view the SAML response directly, as these are users outside my organization who are logging in to my application using their own IdP. Instead, I have been using Debug Mode on the connection, and looking at connection logs as described here: Troubleshoot SAML Configurations
- I enabled debug mode on the SAML connection
- Found a Success Login log entry
- Looked at the original_profile property on the log entry
- Within the original_profile value, I see there is a mapping from "http://schemas.microsoft.com/identity/claims/displayname" to the expected display name
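One more thing worth double-checking alongside that is the mappings JSON on the connection itself. If I remember the shape correctly (the Auth0 SAML connection docs show the canonical format, so treat this as illustrative), an entry mapping the Auth0 profile attribute to that claim would look something like:

```json
{
  "name": "http://schemas.microsoft.com/identity/claims/displayname"
}
```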
Thanks for your patience on this.
You can absolutely use that method as well to access the attributes instead of the SAML response directly.
If you can check one thing for me, go to your SAML connection in Auth0 (Authentication > Enterprise > SAML > Select the connection)
Scroll down and check the field “Sync user profile attributes at each login” is toggled on and then hit “Save Changes” as below:
It could be that this is switched off.
Hi @SaqibHussain , thank you for your continued support!
I just checked the connection settings, and I see that
Sync user profile attributes at each login is currently enabled on the connection.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.
|
OPCFW_CODE
|
sqoop incremental lastmodified with merge-key
I am trying to import the updated records into HDFS. Earlier I had three part files in HDFS. When I run the tool below, it gives an error that a merge key is required.
sqoop import --table employee --username training --password training --connect jdbc:mysql://localhost/loudacre --incremental lastmodified --check-column modified --last-value '2016-08-22 19:02:41'
However, when I used the --merge-key option, I see only one part file with all the records in it.
Though a very old post, I'm adding points for those with same question.
@Deepak_Spark_Beginner: If the error says 'merge key is required', it means that the --target-dir is already present. Since you did not specify this in the Sqoop command, the default value, i.e. the table name 'employee', is picked. Meaning, a directory named employee is already present.
Next, when you specify a merge key during an incremental import, the following sequence of actions happens:
All rows which satisfy the last-value criteria are imported to a temporary location. This execution runs map-only jobs, which can be seen in the logs as follows:
18/10/30 13:27:52 INFO mapreduce.Job: map 0% reduce 0%
18/10/30 13:27:58 INFO mapreduce.Job: map 67% reduce 0%
18/10/30 13:27:59 INFO mapreduce.Job: map 100% reduce 0%
Next, a merge action takes place between your original target directory 'employee' and the temporary target directory, wherein the column specified in --merge-key is used to update the existing data. This runs mappers and a reducer, which can be seen in the logs as follows:
18/10/30 13:28:08 INFO mapreduce.Job: map 0% reduce 0%
18/10/30 13:28:14 INFO mapreduce.Job: map 17% reduce 0%
18/10/30 13:28:16 INFO mapreduce.Job: map 100% reduce 0%
18/10/30 13:28:20 INFO mapreduce.Job: map 100% reduce 100%
A reducer job is required for the obvious reason that key matching has to scan the complete dataset. You can also verify this from the output file name, which will contain an 'r' denoting reducer output.
This is the reason why you see only one file in your output.
You commented, "I mean just to get the modifed field from database and update HDFS using lastmodified". Well, this is exactly what happened: the execution fetched the modified fields from the database and updated them in HDFS. The only difference is that the output is merged into one single file.
This is expected I guess. Generally, merge tool is run after an incremental import with the --incremental lastmodified mode.
When merging the datasets, it is assumed that there is a unique primary key value in each record (mentioned as --merge-key).
The merge tool will run mapreduce job to merge all datasets into one, taking the newest available rows for each --merge-key.
Check sqoop docs for more details.
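Putting the pieces together, a full invocation with a merge key might look like the following; the merge-key column id and the explicit --target-dir are assumptions for illustration, so substitute your table's primary key and directory:

```shell
sqoop import \
  --connect jdbc:mysql://localhost/loudacre \
  --username training --password training \
  --table employee \
  --target-dir /user/training/employee \
  --incremental lastmodified \
  --check-column modified \
  --last-value '2016-08-22 19:02:41' \
  --merge-key id
```

At the end of the run, Sqoop logs the new last-value to record for the next incremental import.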
Is there a way to avoid this? I mean, just to get the modified fields from the database and update HDFS using lastmodified.
|
STACK_EXCHANGE
|
A platform-specific client (backup agent) is necessary for each platform to be backed up. The base client can back up and archive files and restores them if required. The term platform is used here to mean the various operating systems and the file systems that they support. Furthermore, some base clients offer HSM for selected file systems.
The backup of file systems takes place at file level as standard. This means that each changed file is completely retransferred to the server and entered there in the metadata database. By using backup at volume level and at block level it is possible to change the granularity of the objects to be backed up.
When backup is performed at volume level, a whole volume is backed up as an individual object on the backup server. We can visualize this as the output of the Unix command 'dd' being sent to the backup server. Although this has the disadvantage that free areas, on which no data at all has been saved, are also backed up, only very few metadata database operations are necessary on the backup server and on the client side it is not necessary to spend a long time comparing which files have changed since the last backup. As a result, backup and restore operations can sometimes be performed more quickly at volume level than they can at file level. This is particularly true when restoring large file systems with a large number of small files.
Backup on block level optimizes backup for members of the external sales force, who only connect up to the company network now and then by means of a laptop via a dial-up line. In this situation the performance bottleneck is the low transmission capacity of modem or ISDN connections. If only one bit of a large file is changed, the whole file must once again be forced down the dial-up connection. When backing up on block level the backup client additionally keeps a local copy of every file backed up. If a file has changed, it can establish which parts of the file have changed. The backup client sends only the changed data fragments (blocks) to the backup server. This can then reconstruct the complete file. As is the case for backup on file level, each file backed up is entered in the metadata database. Thus, when backing up on block level the quantity of data to be transmitted is reduced at the cost of storage space on the local hard disk.
In addition to the standard client for file systems, most network backup systems provide special clients for various applications. For example, there are special clients for MS Exchange or Lotus Domino that make it possible to back up and restore individual documents. We will discuss the backup of file systems and NAS servers (Section 7.9) and databases (Section 7.10) in more detail later on.
Use the following table of contents to navigate to chapter excerpts or click here to view Network Backup in its entirety.
|ABOUT THE BOOK:|
|Storage networks will become a basic technology like databases or local area networks. According to market research, 70% of external storage devices will be connected via storage networks in 2003. The authors have hands-on experience of network storage hardware and software, they teach customers about concrete network storage products, they understand the concepts behind storage networks, and show customers how storage networks address their business needs. This book explains how to use storage networks to fix malfunctioning business processes, covering the technologies as well as applications -- a hot topic that will become increasingly important in the coming years.Purchase the book from Wiley Publishing|
|ABOUT THE AUTHOR:|
|Authors Ulf Troppens and Rainer Erkens are both employed at IBM TotalStorage Interoperability Center in Mainz, Germany a testing, development and demonstration laboratory for storage products and storage networks. Both authors work at the interface between technology and customers. Wolfgang Müller is currently working as a software architect in the Storage Software Development Department at IBM in Mainz, Germany, where the focus is on software development projects supporting open standards such as SMI-S/CIM/WBEM and IEEE 1244.|
|
OPCFW_CODE
|
Powershell command to hide user from exchange address lists
I'm trying to write powershell script which hides user from exchange lists.
I was able to find following command:
Set-Mailbox -Identity [user id here] -HiddenFromAddressListsEnabled $true
And it doesn't give me an error message, and when I run the command twice, I get following warning:
WARNING: The command completed successfully but no settings of '[user id here]' have been modified.
Which probably means that the command did actually work.
but when I go to Exchange Management Console, and open user profile, "hide user from exchange address lists" check box is off.
What could be the reason?
Try these links and see if they are helpful: - http://vnucleus.com/2011/07/hide-disabled-users-from-exchange-2007-gal-via-powershell/ - http://forums.msexchange.org/m_1800498374/mpage_1/key_/tm.htm#1800498374
I use this as a daily scheduled task to hide users disabled in AD from the Global Address List
$mailboxes = get-user | where {$_.UserAccountControl -like '*AccountDisabled*' -and $_.RecipientType -eq 'UserMailbox' } | get-mailbox | where {$_.HiddenFromAddressListsEnabled -eq $false}
foreach ($mailbox in $mailboxes) { Set-Mailbox -HiddenFromAddressListsEnabled $true -Identity $mailbox }
You can use the following script, just replace DOMAIN with the name of your domain. When executed it will prompt you for a userlogin then hide that user's account from the address lists.
$name=Read-Host "Enter login name of user to hide"
Set-Mailbox -Identity DOMAIN\$name -HiddenFromAddressListsEnabled $true
Brian.
This is exactly what I'm using, and it sets the value, however when I go to Exchange Management Console, and open user profile, "hide user from exchange address lists" check box is off.
I was getting the exact same error, however I solved it by running $false first and then $true.
You will have to pass one of the valid Identity values like DN, domain\user etc to the Set-Mailbox cmdlet. Currently you are not passing anything.
manojlds, I put User ID in inequality signs and it wasn't displayed. Actually this is not the issue, and valid ID was provided. Sorry for that
"WARNING: The command completed successfully but no settings of '[user id here]' have been modified."
This warning means the setting was already set like what you want it to be. So it didn't change anything for that object.
For Office 365 or hybrid Exchange users: using Internet Explorer or Edge, go to the Exchange admin center, choose hybrid, then setup, and choose the right button for hybrid or Exchange Online.
To connect:
Connect-EXOPSSession
To see the relevant mailboxes:
Get-Mailbox -Filter {ExchangeUserAccountControl -eq 'AccountDisabled' -and RecipientType -eq 'UserMailbox' -and RecipientTypeDetails -ne 'SharedMailbox'}
To block based on the above idea of 0KB size:
Get-Mailbox -Filter {ExchangeUserAccountControl -eq 'AccountDisabled' -and RecipientTypeDetails -ne 'SharedMailbox' -and RecipientType -eq 'UserMailbox'} | Set-Mailbox -MaxReceiveSize 0KB -HiddenFromAddressListsEnabled $true
|
STACK_EXCHANGE
|
#
# wave1d.py
# Simple library for 1-d wave in wave package
#
# Sparisoma Viridi | https://butiran.github.io
#
# 20210206
# 1009 Try creating class [1].
# 1012 Learn naming convention [2].
# 1035 Test it and ok but not full wave.
# 1048 Finally it shows full wave, k = wavelength --> wavenumber.
# 1123 Move it to wave package.
# 1137 Change pi to _pi as in Latex, especially for _lambda.
# 1146 Add reference [3] for lambda --> _lambda.
#
# References
# 1. -, "Python Classes and Objects", W3Schools, url https://www
# .w3schools.com/python/python_classes.asp [20210206].
# 2. Jasmine Finer, "How to Write Beautiful Python Code With PEP 8",
# Real Python, 2019, url https://realpython.com/python-pep8/
# [20210206].
# 3. -, "Python Lambda", W3Schools, url https://www.w3schools.com
# /python/python_lambda.asp [20210206].
#
# Import necessary libraries
from numpy import pi, sin
# Define a travelling wave
class TravellingWave:
def __init__(self, amplitude, period, wavelength, phase_constant):
self.amplitude = amplitude
self.period = period
self.wavelength = wavelength
self.phase_constant = phase_constant
self.frequency = 1 / period
self.angular_frequency = 2 * pi / period
self.wavenumber = 2 * pi / wavelength
def displacement(self, position, time):
A = self.amplitude
_omega = self.angular_frequency
k = self.wavenumber
_phi_0 = self.phase_constant
x = position
t = time
y = A * sin(k * x - _omega * t + _phi_0)
return y
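A quick sanity check of the class can look like this; the class body is repeated in condensed form so the snippet runs on its own, and the values are chosen so the expected displacement is easy to verify by hand (k = 2*pi/4, so y at x=1, t=0 is A*sin(pi/2) = A):

```python
from numpy import pi, sin

# Condensed copy of the TravellingWave class above, for a standalone check
class TravellingWave:
    def __init__(self, amplitude, period, wavelength, phase_constant):
        self.amplitude = amplitude
        self.angular_frequency = 2 * pi / period
        self.wavenumber = 2 * pi / wavelength
        self.phase_constant = phase_constant

    def displacement(self, position, time):
        # y = A sin(k x - omega t + phi_0)
        return self.amplitude * sin(
            self.wavenumber * position
            - self.angular_frequency * time
            + self.phase_constant)

wave = TravellingWave(amplitude=2.0, period=1.0, wavelength=4.0, phase_constant=0.0)
print(wave.displacement(1.0, 0.0))  # → 2.0
```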
|
STACK_EDU
|
Container crash when I delete 1 Monitor
⚠️ Please verify that this question has NOT been raised before.
[X] I checked and didn't find similar issue
🛡️ Security Policy
[X] I agree to have read this project Security Policy
📝 Describe your problem
When I try to delete one of my monitors, it just gets stuck deleting and then the container crashes.
After that I create a new container and restore from the backup.json file. When I delete the same monitor again, it gets stuck deleting
and just crashes again.
📝 Error Message(s) or Log
==> Performing startup jobs and maintenance tasks
==> Starting application with user 0 group 0
Welcome to Uptime Kuma
Your Node.js version: 18.20.2
2024-06-15T07:16:05Z [SERVER] INFO: Welcome to Uptime Kuma
2024-06-15T07:16:05Z [SERVER] INFO: Node Env: production
2024-06-15T07:16:05Z [SERVER] INFO: Inside Container: true
2024-06-15T07:16:05Z [SERVER] INFO: Importing Node libraries
2024-06-15T07:16:05Z [SERVER] INFO: Importing 3rd-party libraries
2024-06-15T07:16:08Z [SERVER] INFO: Creating express and socket.io instance
2024-06-15T07:16:08Z [SERVER] INFO: Server Type: HTTP
2024-06-15T07:16:08Z [SERVER] INFO: Importing this project modules
2024-06-15T07:16:08Z [NOTIFICATION] INFO: Prepare Notification Providers
2024-06-15T07:16:08Z [SERVER] INFO: Version: 1.23.13
2024-06-15T07:16:09Z [DB] INFO: Data Dir: ./data/
2024-06-15T07:16:09Z [SERVER] INFO: Connecting to the Database
2024-06-15T07:17:01Z [SERVER] INFO: Shutdown requested
2024-06-15T07:17:01Z [SERVER] INFO: Called signal: SIGTERM
Trace: Error [ERR_SERVER_NOT_RUNNING]: Server is not running.
at new NodeError (node:internal/errors:405:5)
at Server.close (node:net:2161:12)
at Object.onceWrapper (node:events:631:28)
at Server.emit (node:events:529:35)
at emitCloseNT (node:net:2221:8)
at process.processTicksAndRejections (node:internal/process/task_queues:81:21) {
code: 'ERR_SERVER_NOT_RUNNING'
}
at process.unexpectedErrorHandler (/app/server/server.js:1905:13)
at process.emit (node:events:517:28)
at emit (node:internal/process/promises:149:20)
at processPromiseRejections (node:internal/process/promises:283:27)
at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
🐻 Uptime-Kuma Version
1.23.13
💻 Operating System and Arch
UBUNTU 22.04
🌐 Browser
Mozilla
🖥️ Deployment Environment
Runtime: Docker
Database: Not known; I just use a stack in Portainer
Filesystem used to store the database on: NFS mount in Ubuntu
number of monitors: 31
services:
uptimekuma:
image: louislam/uptime-kuma:1
container_name: uptimekuma
ports:
- 3001:3001
volumes:
- /mnt/nfs/uptimekuma_1/:/app/data
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: unless-stopped
If you use slow disks (such as an NFS mount), that is a known problem. Please also be aware of the db-corruption issues for said storage documented in the wiki.
Deleting a monitor is, in v1, a somewhat expensive operation time-wise. This may trigger the health check to restart your container, depending on your retention and disk speed.
Since v2's architecture stores heartbeats in an aggregated fashion, this likely resolves your issue.
See #4500 for the things that need to happen before v2.0 can be published.
All this NFS related issues should be closed with the hint towards "NFS is not supported".
Not sure about the plan with NFS in v2, but for v1 just close and move on. Saves a lot of time.
FYI: I ran into this issue not running on NFS and running on a fast disk. It was a monitor with a ton of history, so that is probably the cause.
Since v2s architecture we do store heartbeats in an aggregated fashion. This likely resolves your issue
Consider upgrading, that should resolve this issue => I am not going to reopen ^^
https://github.com/louislam/uptime-kuma/releases/tag/2.0.0-beta.0
Done! Thanks for the help @CommanderStorm!
|
GITHUB_ARCHIVE
|
New Raspberry Pi Pico with own microchip launched
The Raspberry Pi Foundation has announced a new Raspberry Pi board. And it's tiny this time! Say hello to your next DIY project with the Raspberry Pi Pico!
The already small existing Raspberry Pi boards seem huge compared to this tiny board, which is more or less the size of a finger.
The best thing? The Pico is already available! You can buy it for roughly $4 from many retailers. Or you can get the latest edition of HackSpace magazine, which includes a free Raspberry Pi Pico.
Raspberry Pi's first own microchip
Something very new in this little board is the microchip, the RP2040, with RP standing for Raspberry Pi; understandably, the Raspberry Pi Foundation is very proud of this. The microchip is based on an Arm Cortex-M0+ core. We believe this chip lays the groundwork for future (and also larger) Raspberry Pi editions with their own microchips.
Other micro-board producers, such as Adafruit and Arduino, have also announced their own boards with the RP2040 chip.
What about Performance?
Don't expect to run Windows 10 on this board; those times have not (yet) come. In fact, the Pico doesn't run a full operating system like Raspberry Pi OS (previously known as Raspbian) at all: it is a microcontroller board, programmed directly in MicroPython or C/C++. But for working with sensors, it is perfect.
- Dual-core Arm Cortex-M0+ @ 133MHz
- 264KB (remember kilobytes?) of on-chip RAM
- Support for up to 16MB of off-chip Flash memory via dedicated QSPI bus
- DMA controller
- Interpolator and integer divider peripherals
- 30 GPIO pins, 4 of which can be used as analogue inputs
- 2 × UARTs, 2 × SPI controllers, and 2 × I2C controllers
- 16 × PWM channels
- 1 × USB 1.1 controller and PHY, with host and device support
- 8 × Raspberry Pi Programmable I/O (PIO) state machines
- USB mass-storage boot mode with UF2 support, for drag-and-drop programming
A quick first look at retailers and prices
A quick overview of retailers (partnered with Raspberry Pi) and their prices for the Pico.
|Country||Retailer||Price||Shipping|
| Country | Retailer | Price | Shipping |
| --- | --- | --- | --- |
| Australia | Core Electronics | AUD 5.75 | + AUD 6.95 |
| Austria | Welectron | EUR 4.20 | + EUR 3.95 |
| Austria | BerryBase | EUR 4.13 | + EUR 5.90 |
| Belgium | MC Hobby | EUR 4.08 | unknown |
| Canada | BuyaPi | CAD 5.25 | + CAD 12.00 |
| Canada | CanaKit | CAD 5.25 | + CAD 12.95 |
| France | Kubii | EUR 4.20 | + EUR 3.90 |
| France | elektorstore | EUR 4.95 | + EUR 7.95 |
| Germany | Welectron | EUR 4.20 | + EUR 3.95 |
| Germany | Reichelt | EUR 3.95 | + EUR 5.95 |
| Germany | BerryBase | EUR 4.10 | + EUR 4.95 |
| Germany | Rasppishop | EUR 4.49 | + EUR 4.90 |
| Germany | Funk24 | EUR 4.10 | + EUR 4.50 |
| Netherlands | Raspberry Store | EUR 4.50 | + EUR 4.43 |
| Netherlands | elektorstore | EUR 4.95 | + EUR 4.95 |
| Netherlands | Kiwi Electronics | EUR 4.50 | + EUR 2.50 |
| Sweden | electro:kit | SEK 49.00 | + SEK 23.20 (+ taxes) |
| Switzerland | Pi-Shop | CHF 4.75 | + CHF 6.90 |
| Switzerland | BerryBase | CHF 4.75 | + CHF 9.50 |
| United Kingdom | Okdo | GBP 3.71 | + GBP 2.99 |
| United Kingdom | Pimoroni | GBP 3.60 | + GBP 2.50 |
| United Kingdom | The Pi Hut | GBP 3.60 | + GBP 2.99 |
| United Kingdom | sb components | GBP 3.59 | + GBP 3.10 |
| United States | adafruit | USD 5.00 | unknown (+ taxes possible) |
| United States | MakerBright | USD 4.00 | + USD 4.75 (+ taxes possible) |
| United States | Vilros | USD 4.00 | + USD 7.95 (+ taxes possible) |
| United States | sparkfun | USD 4.00 | + USD 7.77 (+ taxes possible) |
| United States | PiShop | USD 4.00 | + USD 8.95 (+ taxes possible) |
Who has the greatest opportunity with Hadoop? Yahoo? Google? Facebook? I believe they already cash in on this technology. The biggest revenue opportunity is for database companies, particularly for the market leader, Oracle. I will explain why below.
There are a few bloggers covering what actually happens, but none of them foresee the huge monetization future of this technology. As Shevek Mankin, the CTO of KharmaSphere, says: it is easy to set up the Hadoop cluster and put the data in, but how do you take the data out? This is the crux of MapReduce technology: it is not, in spite of contrary claims, mature enough to conquer the enterprise.
Basically all Hadoop applications collect huge streams of data, classify them via MapReduce and place them in a structured database using some form of intelligence.
Open source tools like Oozie, a workflow system for managing Hadoop jobs including HDFS, are nice, but what is the business model? This is all the more frustrating as one can see the billions and billions of dollars in revenues and valuations at social sites like Facebook (recently valued at $24 billion), Netflix, Yahoo and Google. What about the enterprise, where most of the wealth in our society is created?
Cloudera and KharmaSphere want to sell supported Hadoop distributions and developer tools. The market is limited by the minimal sales coverage these companies have in enterprise settings.
So here is the other way around. IBM plans its own supported Hadoop distribution and has presented at the Summit a do-it-yourself analytic tool based on Hadoop. It has "an insight engine, for allowing ad-hoc business insights for business users – at web scale. It allows access to embedded unstructured data, previously unavailable to analyze".
The most puzzling and conspicuous thing was the de-facto absence of Oracle at the Hadoop Summit 2010. If anyone from Oracle attended, it was probably in stealth mode :-)
Assume Oracle can productize Hadoop-based analytics at web scale; it can then sell the add-on to all of its enterprise database users. Oracle, according to Gartner (2009):
- is #1 in worldwide RDBMS software market share among the top five vendors
- holds more market share than its four closest competitors combined
- is #1 in total software revenue for Linux and Unix, with 74.3 per cent and 60.7 per cent market share respectively
Assuming $24B per year in total Oracle revenues, can you imagine having a Hadoop product to complement the existing $10B-a-year database income alone? Note this is a yearly amount; the installed database base from the last five years should be at least $40B. Assuming a 1% attach rate, they could sell web-scale Hadoop analytic tools for $500 million per year, growing exponentially to $5 billion if the attach rate is 10%. What if the attach rate is 20%?
At that level, it would be the biggest money making product using the Hadoop technology, outside social networking industry.
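For concreteness, the attach-rate arithmetic can be spelled out. Note the $50B installed base used here is an assumption implied by the article's own "1% attach rate → $500 million" figure, not a number from Gartner:

```python
# Installed base implied by the article's own figures ($500M at a 1% attach rate).
installed_base = 50e9

for attach_rate in (0.01, 0.10, 0.20):
    yearly_revenue = installed_base * attach_rate
    print(f"{attach_rate:.0%} attach rate -> ${yearly_revenue / 1e9:.1f}B per year")
```

The revenue scales linearly with the attach rate, which is why a move from 1% to 10% turns a respectable add-on business into a multi-billion-dollar product line.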
There is simply no other product, IMO, in Oracle's portfolio that can provide this growth. Oracle has a Grid Engine team, recently acquired with Sun, whose Sun Grid Engine was integrated with Hadoop in December 2009. A significant chunk of Oracle's Hadoop know-how comes from the Sun merger.
The first step is not engineering, but customer research among their corporate database customers to determine the minimum set of features customers need and are enchanted with. Making the product wanted through astute customer research has not been the focus of the Hadoop Summit developers so far.
1. Hadoop Tutorial; http://thecloudtutorial.com/hadoop-tutorial.html
2. IBM BigSheets : http://www.slideshare.net/ydn/1-ibm-disruptiveapplicationwithhadoophadoopsummit2010
3. Oracle Grid Engine: http://www.oracle.com/us/products/tools/oracle-grid-engine-075549.html
4. Ahrono Associates : http://ahrono.com
How should I move the story forward with the support cast while hiding a plot twist regarding the MC from the reader?
I am finishing an arc of my novel, and right at the end a sudden development happens to the MC that is partially obscured from the reader. They can guess at what it is, but it is a sudden change to the MC's outlook and demeanor.
I'd like to move the story forward and start the next arc some time later with the support cast, showing some changes to the world and then later return to MC and reveal his change, to cause a huge impact (tentatively).
This development will come as a surprise to the support cast because they are ignorant of it. My intention is to get the reader to share that surprise alongside the support cast during the big reveal.
But to do so, the MC has to go a few chapters without showing up. Readers might complain about his whereabouts.
If I don't do the short timeskip to the support cast and instead let the narrative follow the MC, the reader will already know everything by the time of the reveal.
How should I structure my narrative so that the reader can be surprised together with the support cast during "the reveal" (assuming they won't guess it - they probably will)?
I'll make one example using a wuxia story (mine is not wuxia):
Shen Ling is wounded and tired. He was separated from his companions, and ended up seeking shelter in this tomb he fled to alone during the evacuation of the kingdom due to demon attacks. Then he finds an ancient scroll in the tomb depths. Chapter ends.
Fast forward a few months, the gang is settled in another kingdom that is going to host a martial arts tournament soon. Warriors from all around will gather to fight, including some from the demon kingdom that made the gang refugees. We follow their antics, how they settled in this new home and how they miss MC. But for dozens of pages, MC is nowhere to be seen.
The demons want to get the princess, because demons, princess, plot. They harass the gang and set up a trap. A mysterious contender who is in town for the MA tournament rescues them and vanishes before they can give their thanks (or ask who he is >.> ).
The mysterious contender and the gang + princess meet several times. The stranger seems interested in the princess, but he irritates her almost to the point of breaking into a fight. The demons' plans are repeatedly thwarted by the contender's intervention, even though this is not clear to the gang at the time.
MA tournament comes, and both princess and contender move forward as they win their fights. Contender defeats demon representative, and faces the princess in the final round.
During the fight it is revealed that the contender is actually the MC, that he spent months trapped in the tomb studying the awesome MA techs in the scroll, and that he learned of the demons' plans. But he had to achieve goal X without anyone suspecting his identity (he is rather infamous; if people knew who he was from the beginning, goal X would be unattainable). Since the gang was closely monitored at all times, he couldn't reveal himself to them either.
I think the princess still beats him out of sheer anger (angry heroines add her girl power to damage), but our example ends here.
The reveal is both that the contender is the MC in disguise (this is rather obvious and cliché, but it's fine) and the other hidden reveal, goal X. I think it would be impossible for the narrative to follow the MC without giving away goal X.
What writing techniques can I use to create this kind of effect? How can I keep the reader interested when the MC is MIA for a long time (both in-universe and in number of pages)?
Hang a lantern on it.
If possible, I would revise your story to make the MC's transformation secret but brief. In fact you can take a chapter where the rest of the group is trying to find the MC, and succeeds. Then the MC conceals from them what happened, makes up an excuse: "I don't remember; the last thing I remember is eating some berries, then I must have passed out. I did not recognize where I woke up..."
The reader can suspect something happened, but not what happened. So can the other characters.
From your Wuxia example (which I know is not your story), that MC could have found the scroll which revealed many secrets to him, but instead of disappearing could have just been reading and practicing this new art in private, intentionally keeping it a secret and making excuses for his absences.
This won't work if the narrator is privy to the thoughts of the MC, of course. If that has been the case earlier in the book, you will be breaking an implicit contract with the reader, that the MC cannot lie to them. In that case, the only way to pull this off is if the MC doesn't even know the change has happened to them. That can be done with magic powers, for example, a person could become more powerful without realizing it. But in THAT case, you would need hints so the reader suspects this has happened, and is expecting the greater power to manifest itself sooner or later, otherwise the greater power will seem like a deus ex machina.
Thanks. In my novel the reader knows what MC is thinking.
I'll answer this last question:
'What writing techniques can I use to create this kind of effect? How can I keep the reader interested when the MC is MIA for a long time (both in-universe and in number of pages)?'
Quite simply: have multiple main characters. Make your current MC not the only agent of conflict/the plot. Make it so you have another compelling character whose journey you're following while your main character is 'out of commission'.
Because otherwise, yeah. You're going to bore the reader to death.
Removing excess key from Inside of box
I'd like to try an effect where corner keys for a decorative box show on both outside and inside (typical approach is the cut does not go all the way through the corner with the key only showing on the outside of the box).
So it's easy to make the cut depth needed and fashion a key piece big enough, removing the outside waste from the keys as usual, but given that the bottom of the box could be close to a key, even within an inch, I can't come up with a straightforward way to remove the key waste on the inside. Even using a delicate sawing technique it seems like one could all too easily damage the bottom and/or sides and/or corners.
I have thought about pre-shaping the key with a corner before glueing in, so that just sanding is needed, but not sure what is best way to go.
Any thoughts?
Terminology 101: these aren't splines (no matter how commonly woodworkers today refer to them as such). These are keys, see previous Answer. Splines run along a joint, keys go across it.
Yeah...you are right. Kinda like dados and grooves...for whatever reason everyone calls them all dados. I blame it on the coarsening of our culture.
I can't come up with a straightforward way to remove the key waste on the inside. Even using a delicate sawing technique it seems like one could all too easily damage the bottom and/or sides and/or corners.
Sawing may be one of the better methods to remove the majority of the excess, although it does come with some danger. A great many operations in woodwork pose a certain amount of risk of doing some damage to the wood (including sanding); this is an aspect of David Pye's "workmanship of risk".
I was thinking I would probably try doing this first by notching the key with a sharp knife, then by doing some careful work with one or more chisels, but the size of the box may make this impossible unless using skews and furthermore there's always a chance of marring adjacent surfaces when paring flush even in ideal circumstances.
So back to sawing: a quality flush-cutting saw with a flexible blade is ideal for this kind of work. An option is to tape one side of a small pull saw, and in a pinch you can also use a hacksaw blade. These two options will leave more wood proud of the surface, which will require more effort to clean up.
Cleanup
Once you have removed all or almost all of the excess, you will be left with some cleanup. Sanding into a corner is tricky to say the least. Scraping is a better option in this situation, the work can be tedious but it is what it is.
A small bullnose plane may be the ideal tool to deal with the remainder and make the keys perfectly flush, but not everyone has one of these or can afford one, so careful scraping may be the preferred solution.
I have thought about pre-shaping the key with a corner before glueing in, so that just sanding is needed, but not sure what is best way to go.
I think this is actually a viable option, and actually one of the better ones if you can figure out how to do it accurately and repeatably.
Accurately aligning the cutouts in the keys to the inside of the corner may turn out to be the most critical aspect of this method. Small blocks inside the corners to clamp against (covered in tape or well waxed to protect them from glue) might prove useful.
If you want the box finished inside as well as out pre-finishing the inside of the box* would make cleanup of any glue squeeze-out easier (glue doesn't bond to finished wood). I would use either paste wax or a light coat of shellac for this myself, but spray lacquer could also be perfectly suitable. See this recent Q&A for why you might want to avoid using any oil-based finish here.
If you prefer to leave the inside of the box bare protect adjacent surfaces using tape. Packing tape is great for this as it's wide and very thin, and costs a lot less than equivalent painters' tape.
*If you pre-finish the box pieces before assembly be very careful not to get any finish on joint surfaces.
If you make the key oversize, it almost doesn't matter what orientation your notch is relative to the key. As long as the notch is square, you can line it up as Graphus suggests, and then trim the outside of the key. Trimming on the outside of the box will be much easier than trimming the inside.
I think I am going to experiment with some Dremel accessories, too...one can usually get some fine detail out of them with a little practice and patience. @charlie: yep...but the whole point of the question was about trimming the inside, because that is where the real challenge is. I was hoping for a slick tip/trick, but it may just all come down to "be careful".
@mblatz01, the trick is arguably the use of a bullnose/chisel plane (assuming the box is large enough that one will fit inside and can then be moved back and forth!) but these tend to be pricey for a start, plus care is still needed. I've used Dremel-type accessories a fair bit (never Dremel branded by the way, far too overpriced) and I would only use them for the gross material removal here, if that as it's far easier to screw up badly with a power tool! Once I got close to flush I'd switch to hand methods no question, no matter how slow — this is a job where it takes as long as it takes.
"Once I got close to flush I'd switch to hand methods no question, no matter how slow — this is a job where it takes as long as it takes." --Agreed! My current first-attempt plan will be to protect the inner bottom with a foam pad (e.g. a mousepad) and the inner walls around the corners with a layer or two of blue tape, and then use a flush-cut tool, possibly jimmied out of a hacksaw blade, to remove the waste leaving maybe 1/32" - 1/16" of material proud, then finish with some sanding.
Will probably need to reapply finish/clear coat after all that, but there are worse things in life, I guess...
I would try to aim for the excess key to be proud by approximately the thickness of two layers of masking tape at most. It's easily doable to get closer to flush than you've said, somewhere around 1/64" at max. The more wood you leave the harder removing the excess becomes..... this goes double if you end up using sanding to do the job!
Brighton Webs Ltd.
Statistics for Energy and the Environment
Numerical Integration - Trapezium Method
Numerical integration can provide a solution to the problem of integrating functions for which an analytical solution is not available (either because one does not exist or because one does not have enough paper and pencils to derive it). The trapezium method is the simplest to understand and implement.
Integration is often described as the calculation of the area under the curve. If the area under the curve is divided into a number of strips, each strip can be approximated by a trapezium, and the sum of the areas of all the trapeziums will be an approximation of the area under the curve. As the number of strips increases (and the width of each strip decreases), the approximation will improve.
The algorithm described below, starts with a single strip and then doubles the number of strips until the required level of accuracy is achieved.
As with all numerical methods it is important to determine that the algorithm is fit for purpose. In this case the function to be integrated should not be discontinuous or take the value of infinity within the interval over which it is being integrated. In addition, computational efficiency, accuracy and convergence should be considered.
The Area of a Trapezium
The diagram shows a trapezium and the formula for calculating its area.
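(The diagram itself did not survive extraction, but the formula it shows is the standard one: for a trapezium with parallel sides of heights y1 and y2 and width h,

    Area = h * (y1 + y2) / 2

)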
The formula can be interpreted as the area being the product of the width and the average height.
The algorithm starts with a single strip (number of strips = 2^0), then on the next loop the number of strips is increased to two (number of strips = 2^1), then four (number of strips = 2^2) and so on. On each loop, the approximation to the integral is refined using the sum of the values of the additional ordinates. After a set number of loops, a check is made on the accuracy. If the required accuracy is not achieved, the algorithm is deemed to have failed.
Initial Estimate (n=0)
The initial estimate is simply based on the start and end points of the interval.
The value of the sum of the ordinate heights is expressed in terms of the Integral approximation.
First Loop (n=1)
Next, we have a series of loops; at the start of each loop the number of strips is doubled. On the first loop, this results in a single additional ordinate value being included in the approximation.
Only the value of the new ordinate is calculated; the results of the initial estimate are used to provide an updated estimate.
Second Loop (n=2)
The number of strips increases to four.
As before, the new ordinate values are used to update the previous estimate.
The process established in the initial loops, can be generalized to:
The generalized equation can be developed into a form which can be used as the basis for a computer program
There are two versions of the code. The first was created in VB.net Express and is based on a library which is in regular use; the second, in Python, is a translation of the VB.net code. It consists of two elements: the function trapintegrate, which implements the methodology described above, and a sample function, the probability density function of the standard normal distribution.
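The code listings themselves are not reproduced here, but the doubling-strips algorithm described above can be sketched in Python as follows (function and argument names are illustrative, not the original listing):

```python
import math

def trapintegrate(f, a, b, tol=1e-4, max_loops=25):
    """Trapezium rule with strip doubling: refine the estimate until two
    successive approximations differ by less than tol, or fail."""
    h = b - a
    estimate = h * (f(a) + f(b)) / 2.0  # single strip (n = 0)
    for _ in range(max_loops):
        h /= 2.0
        # Only the new ordinates (the midpoints of the previous strips)
        # are evaluated; the previous estimate supplies the rest.
        count = int(round((b - a) / (2.0 * h)))
        new_sum = sum(f(a + (2 * i + 1) * h) for i in range(count))
        new_estimate = estimate / 2.0 + h * new_sum
        if abs(new_estimate - estimate) < tol:
            return new_estimate
        estimate = new_estimate
    raise RuntimeError("required accuracy not achieved")

def std_normal_pdf(x):
    """Probability density function of the standard normal distribution."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
```

For example, `trapintegrate(std_normal_pdf, 0.0, 1.0)` approximates the area under the standard normal curve between 0 and 1, which should come out near 0.3413.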
The table below was derived from Excel. With the tolerance set to 0.0001, trapintegrate produces similar values to those of Excel when formatted to 4 d.p.
Reducing the value of the tolerance improves the accuracy at the expense of increasing the run time.
Python code blocks are defined by indentation; some adjustment might be needed after a copy and paste operation.
Page updated: 04-Mar-2013
OK, good idea, will update the title as soon as I am ready with my first 3d printable tool... a lock ring remover, specifically designed for the white industries ENO lock ring... but might be usable for others.
One thing I am experiencing, which I actually expected: while I can design a model attempting to match the original tool as much as possible, because this is 3D printing the tools will be in plastic, and compared to their metal counterparts they usually don't have the strength necessary. This likely won't be true with every tool, but definitely for most. But the simple solution is to modify the design so that it can still do what it was supposed to, just built differently than the original.

It's also going to matter what plastic the tool is printed in. While PLA is the most common 3D printing filament, due to its ease of printing, it's almost never gonna work for tools. ABS and PETG might work for some (and my goal is to make designs that can at least work in those when possible), but for most tools the best option for a 3D printable plastic is going to be polycarbonate (or something very close, i.e. my preference, PC+PBT). Sadly all PC-based filaments are on the harder side to print. I've had good luck by setting my bed to 130 degrees Celsius and my nozzle to 245-255C. While many people can get their nozzle to this (assuming they have an all-metal hotend), not so many can get their beds that high. This is why I'm gonna try my best to get these to work in ABS and/or PETG, which are similar in strength and ease of printing (they generally require heated beds, and ideally all-metal hotends, but not at crazy high temps like a 130C bed; more like a 50-90C bed is more than enough for those).
Anyway, my plan is to first design a model imitating the original tool as much as possible (when it is practical to start there)... Then start making adjustments until I feel it's functional as plastic.. then possibly make additional versions slightly different to give people some options. I'll try to release all the versions, as well as the Fusion360 model so people can make their own changes if they want (I try to set almost every dimension as a Parameter, so anything can easily be adjusted with no 3d modeling knowledge... just need to change some numbers in the list of Parameters, and watch as the model automatically updates itself).
To give an example of what I mean, I'll describe my current work on the White Industries lockring tool. I basically made something almost identical to the metal tool, with similar dimensions, that could attach to a 3/8" socket wrench (I'm pretty sure that's what the original tool works with, though that's an educated guess based on the size of the lockring and the photos of the tool). But when printing this tool (doing test prints in PETG before I go to PC+PBT), the 3 small "teeth" that connect to the lockring to rotate it are so thin they would snap off with practically no pressure on them. Given, polycarbonate would def work better, but at the original dimensions even PC might not be strong enough. The very simple solution here is to expand the exterior diameter: that way the tool would still fit over the lockring, but the teeth could be as thick as I find necessary in my tests, so the tool should still function fine, just look a little different. (And with my 3D model, it is almost trivial to increase or decrease the exterior diameter, so people are free to adjust to their liking... and if enough people request an adjustment, I'll even do it for them.) Now, as I mentioned, I made this to attach to a 3/8" socket wrench, simply because that's what the original did. But this could theoretically be a weak point (tho I admit it likely won't be an issue), so what I might do as an alternative model/option is, instead of making it attach to a socket wrench, simply shape it so a common wrench can be used to turn it (probably something like a 15mm wrench). Since it's a plastic tool, this other version would likely be a little stronger and hold up better than the first, even tho the 3/8" socket likely will work just fine.
Good news is, I'm pretty much done with version 1 of this tool; I just need to do a couple of final test prints. Then depending on how they go, I may or may not make the second version I described right away. Hopefully I'll be able to release at least the first version by the end of today.
I have an interesting scenario
We have a number of hosts and on the hosts we have certain virtual machines that are classified as overhead machines. Backup servers, Antivirus, etc, just a collection of VM that we need to run the environment.
Its easy enough to grab the statistics for the total CPU/MEM/DISK for the entire cluster and easy enough to grab the stats for the collection of virtual machines.
What I want to do is deduct the overheads from the totals, this will give me what actually I have available for the business.
I could create a super metric, but when creating that and applying it, I need to wait for it to collect that statistic, in our case we report monthly, so I need to wait some time to see this stat and if I get it wrong, wasted time.
What's the best way of going about this? Is there a way I can run formulas on existing data? The data is already there; I just want to run some formulas over the top of it and present this in pretty boxes for management 🙂
A super metric is just a mathematical formula that works on the current data. AFAIK you can't create it on "old" data. That's why I don't fully understand why you have to wait.... maybe share your SM?
You can build a SM with a where condition. Perhaps you can filter out those VM's:
I guess the difficult thing about a super metric is that when you create it and enable it, that's when it starts collecting, so you have to wait.
If you wanted to run custom formulas on previous data, I can't see a way of doing that.
I am not opposed to creating the metric; most of the time what I need would be static. I am probably overthinking it.
So under the vCenter Adapter - Cluster Compute Resource, select the cluster and hit Metric - CPU - Total Capacity (MHz); that gives me a number in MHz.
I have created a group and updated the membership with the Overhead VMs
I have attached a simple supermetric to that group - sum(Virtual Machine: CPU|Total Capacity)
If the virtual machine is a member of the group, tally it up.
The bit I am struggling with is how to subtract one from the other; that will be my magic 'what's available to be used' by the business.
I think your challenge is with the depth. For VMs you will need 2 to get to the cluster, while for hosts you need 1.
My approach would be to create 2 custom groups. 1 with a (dynamic) filter to select the VMs to exclude. The second custom group will contain all the hosts from which you want to calculate the CPU MHz.
Create a SM for the first custom group. Something like sum(virtualmachine:cpu|demand(MHz)). Set the depth of 2 and assign that SM to the environments.
Create the 2nd SM. Something like sum(HostSystem:CPU|Total Capacity(MHz)) with a depth of 1. Also assign that SM to the environments.
Then create the 3rd SM. Something like sum(SM2) - sum(SM1). And assign that the clusters.
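Outside of vROps, the arithmetic the three super metrics perform is nothing more than a sum and a subtraction. A tiny illustration with made-up numbers (all MHz values here are hypothetical, not from any real cluster):

```python
# Hypothetical inputs -- the point is only the arithmetic the SMs perform.
host_total_mhz = [83200, 83200]        # SM2 inputs: per-host CPU Total Capacity (MHz)
overhead_vm_mhz = [1800, 950, 400]     # SM1 inputs: demand of the overhead VMs (MHz)

sm2 = sum(host_total_mhz)              # SM2: total cluster capacity
sm1 = sum(overhead_vm_mhz)             # SM1: capacity consumed by infrastructure VMs
sm3 = sm2 - sm1                        # SM3: what is actually available to the business
print(sm2, sm1, sm3)
```

The reason it takes three super metrics in vROps rather than one line of code is the depth issue described above: the VM metrics and the host metrics live at different levels of the object tree.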
My test environment is down, so this is off the top of my head. Maybe you need to tweak here and there.
How to convert an ASCII string to a UTF-8 string in C++?
How do I convert an ASCII std::string to a UTF-8 (Unicode) std::string in C++?
std::string ASCIIToUTF8(std::string str) {
return str;
}
Every ASCII character has the same representation in UTF8, so there is nothing to convert.
Of course, if the input string uses an extended (8-bit) ASCII character set, the answer is more complex.
The term "Extended ASCII" has been mostly used only for CP437 (or other MS-DOS codepage), which is nearly extinct these days.
@Tronic: True, but ultimately, any 8-bit character set that is a superset of ASCII is an extended ASCII character set. :)
@Eduardo: Which kind of unicode? An ASCII string is already a perfectly valid UTF8 unicode string. Unicode defines several different encodings.
ASCII is a seven-bit encoding and maps identically onto the UTF-8 encoding of the subset of characters that can be represented in ASCII.
In short, there is nothing to do. Your ASCII string is already valid UTF-8.
I assume that by ASCII you mean CP1252 or some other 8-bit character set (ASCII is only 7 bits and is directly compatible with UTF-8; no conversion required). Standard C++ cannot do it. You need e.g. Glibmm, Qt, iconv or the WINAPI to do it.
That's a big assumption. CP1252 is very platform specific and there is no indication of platform in the question.
That's why I said "or other". However, it seems that Windows users are most ignorant about character sets. One big benefit of assuming CP1252, when converting from 8 bit, is that it is also compatible with ISO-8859-1 (but not the other way around).
What a bizarre comment. Code page 1252 is very much a Windows specific encoding. Saying that Windows users are "most ignorant" about Windows specific implementation details is, erm, ignorant.
It is still probably the most common 8-bit character encoding these days. It is also compatible with ISO-8859-1, which happens to be the most standardized encoding. Even some UNIX programs (e.g. Irssi) default to CP1252 for conversions because of those two reasons. UNIX users are generally less ignorant because they have to deal with UTF-8 and older character encodings all the time (or at least had to, a few years ago). Windows developers on the other hand often call all 8-bit encodings ANSI (as if it were only one character set) or even ASCII (as if it were 8-bit).
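For the special case where the 8-bit input really is ISO-8859-1 (Latin-1) rather than CP1252, the conversion is mechanical enough to hand-roll, since every Latin-1 code point maps to at most two UTF-8 bytes. A sketch (the function name is made up; for arbitrary code pages you still want iconv or similar):

```cpp
#include <string>

// Convert an ISO-8859-1 (Latin-1) string to UTF-8.
// Bytes below 0x80 are ASCII and pass through unchanged;
// bytes 0x80-0xFF become a two-byte UTF-8 sequence.
std::string Latin1ToUTF8(const std::string& in) {
    std::string out;
    out.reserve(in.size());
    for (unsigned char c : in) {
        if (c < 0x80) {
            out += static_cast<char>(c);
        } else {
            out += static_cast<char>(0xC0 | (c >> 6));
            out += static_cast<char>(0x80 | (c & 0x3F));
        }
    }
    return out;
}
```

This does not work for CP1252, whose bytes 0x80-0x9F map to different Unicode code points (curly quotes, the euro sign, etc.) and need a lookup table.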
import nflgame
import random
from os import system, name
'''
The Draft class deals with the fantasy league distribution of players to the teams participating.
'''
class Draft:
'''
Methods:
player_pick(): Function that helps the user draft the players they want onto their team.
Returns:
user_team (dict): A dictionary that has the users players.
'''
def __init__(self):
'''
__init__ is used to initialize local variables.
'''
self.taken = []
self.pos_list = ['QB','RB','WR','TE','K','LB','DB']
self.user_team = {}
#Dictionary to store users players.
self.bot_team = {}
self.bot_team1 = {}
self.bot_team2 = {}
self.bot_team3 = {}
#qb_pos = 'QB'
#rb_pos = 'RB' These are just place holders used for reference, will be deleted
#wr_pos = 'WR' in the end.
#te_pos = 'TE'
#kicker_pos = 'K'
#def_pos = 'D/SP'
def player_pick(self, pos, taken, user_team):
'''
player_pick is a function that is used to let the user pick and choose who they want on their team.
Args:
pos (str): The first argument is a string that is pre-defined, so the user would not
see this variable until they are done with the draft.
taken (list): The second argument is a list that stores all of the players that have already
been drafted by other users.
user_team (dict): user_team is a dictionary that stores all of the players that the user has on
his/her team.
Returns:
dict: function returns a dictionary that holds the users picks. It is formatted so that it is
{pos_name, pos} EX: {Tom Brady: QB}
If the name player is already taken, it returns an error message that the player is taken already.
'''
player_list = []
games = nflgame.games(2019, 2, kind='REG')
players = nflgame.combine_game_stats(games)
meta = nflgame.players
print()
print(pos)
for player in players:
if player.player.position == pos:
print(meta[player.playerid].name)
player_name = meta[player.playerid].name
player_list.append(player_name)
print()
pos_name = input("Enter the " + pos + "'s name you would like to draft from above list (Case Sensitive): ")
print()
hit = False
while not hit:
#Check to see if the player is already taken.
if pos_name in taken:
#If the player is already taken, then error message is given.
pos_name = input("He is taken! Enter another " + pos +"'s name you would like to draft (Case Sensitive): ")
elif pos_name not in player_list:
pos_name = input("That player is not a " + pos +" or was spelled wrong! Enter another " + pos +"'s name you would like to draft (Case Sensitive): ")
else:
#If player is available, it will print out that player was added.
print(pos_name + " was added to your team!")
#Player is then added to the taken list.
self.taken.append(pos_name)
#Player stored in Users team.
self.user_team[pos]=pos_name
hit = True
def bot_pick(self, pos, taken, bot_team):
'''
bot_pick is a function that is used to randomly generate bot teams for user to play against.
Args:
pos (str): The position to draft for; draft_start calls this once for each position in pos_list.
taken (list): The second argument is a list that stores all of the players that have already
been drafted by other users.
bot_team (dict): bot_team is a dictionary that stores all of the players that the bot has on
its team.
Returns:
dict: function returns a dictionary that holds a randomly generated team for the bot. It is formatted as
{pos: pos_name}, e.g. {'QB': 'Tom Brady'}.
'''
player_list = []
games = nflgame.games(2019, 2, kind='REG')
players = nflgame.combine_game_stats(games)
meta = nflgame.players
for player in players:
if player.player.position == pos:
player_name = meta[player.playerid].name
player_list.append(player_name)
hit = False
while not hit:
pos_name = random.choice(player_list)
if pos_name not in taken:
#Player is available; record the pick.
self.taken.append(pos_name)
self.bot_team[pos]=pos_name
hit = True
def draft_start(self):
'''
draft_start is a method that takes care of the draft simulation as a whole.
Args:
self: The Draft instance whose state (the taken list and team dictionaries) is updated.
Returns:
The method returns the dictionaries for the user and the three bot teams that are participating.
'''
print("Welcome to the Draft!")
choice_draft_check = 0
choice_draft = input("Would you like to start the Draft? Y/N: ")
while choice_draft_check != 1:
if choice_draft in ("Y", "y"):
choice_draft_check += 1
elif choice_draft in ("N", "n"):
print("Restart Program")
return
else:
choice_draft = input("Invalid choice! Would you like to start the Draft? Y/N: ")
#User Team
for pos in self.pos_list:
self.player_pick(pos, self.taken, self.user_team)
print("Your team consists of: " + str(self.user_team))
print ()
#Bot 1 Team
self.bot_team = {}
for pos in self.pos_list:
self.bot_pick(pos,self.taken,self.bot_team)
#Copy the dict so the next bot's draft does not overwrite this team.
self.bot_team1 = dict(self.bot_team)
print("Bot 1 team consists of: " + str(self.bot_team1))
print()
#Bot 2 Team
self.bot_team = {}
for pos in self.pos_list:
self.bot_pick(pos,self.taken,self.bot_team)
self.bot_team2 = dict(self.bot_team)
print("Bot 2 team consists of: " + str(self.bot_team2))
print()
#Bot 3 Team
self.bot_team = {}
for pos in self.pos_list:
self.bot_pick(pos,self.taken,self.bot_team)
self.bot_team3 = dict(self.bot_team)
print("Bot 3 team consists of: " + str(self.bot_team3))
print()
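The bot_pick loop above implements sampling without replacement by redrawing on collisions. A self-contained sketch of the same idea, using hypothetical player names rather than nflgame data:

```python
import random

def pick_untaken(pool, taken, rng=random):
    # Retry pattern from bot_pick: draw random names until one is not
    # already taken, then record it. Assumes pool has an untaken entry.
    while True:
        name = rng.choice(pool)
        if name not in taken:
            taken.append(name)
            return name

# Hypothetical player pool; nothing here comes from nflgame.
pool = ["QB One", "QB Two", "QB Three", "QB Four"]
taken = []
bots = [pick_untaken(pool, taken) for _ in range(3)]
print(bots)  # three distinct names from the pool
```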
|
STACK_EDU
|
How to use Mass File Editor
1. Prepare the list of files you want to edit
Add appropriate include items to the Include List.
Represents a single file.
A single file which contains a list of files.
This could be either a list of file names in plain text or a binary file created from the "Save file list" command of File List.
Represents one folder.
You can customize which files from this folder to include and whether files in subfolders should be included as well.
You can turn any include item into an "exclude" item so that all files from that item will be excluded from the final file list.
You can save the include list at any moment and load it later.
The current include list is saved automatically when you close the program and when you start the next time it will be loaded again.
You can change the order of include rules in the list by dragging any rule and dropping it onto the new place.
Once the include list is ready, compile the resulting file list with the "Refresh file list" command.
All include items will be processed in the order they appear, and the resulting list of files will be built.
You can press "Cancel" at any time to interrupt the process.
2. Prepare the list of edit rules
The process of editing a file consists of a number of steps which we call "edit rules".
They are executed one by one in the order they appear in the Edit Rules list.
Currently there are several edit rules available.
Searches the currently opened file for text or a regular expression.
It can find all occurrences of the text, or only the first, only the last, or both the first and last.
If the text is a regular expression, all subexpressions will be captured and turned into text values as well.
Replaces any text value in the file with another text or text value.
Replacement text can be combined from plain text and several text values according to the format string.
Parse file name
Parses the name of current file into different text values containing different pieces of it.
See also Find/Replace example
and INI-file editing example
3. Select file and start editing
Double-click a file in the list to open it, or open any file in the Editor with the "File->Open" command.
Now apply edit rules to the file. There are several possibilities:
- Apply all edit rules automatically and stop editing. The "Run" and "Edit current" commands do this.
- Apply all edit rules automatically and see all the data values in "Data View" after the last rule has been applied. This is what the "Pass" command does.
- Apply edit rules one by one and see all the data values in "Data View" after each step. Do this with the "Apply next rule" command.
4. Start editing all files in the list
The "Edit all" command does this.
All the edit rules will be applied to every file in the list automatically.
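Mass File Editor is a GUI tool, so it has no scripting API, but the core idea above (an ordered list of edit rules applied to a file's text) can be sketched in Python. The rules here are hypothetical find/replace pairs, not the tool's own format:

```python
import re

# Ordered edit rules, mirroring the Edit Rules list: each rule is a
# (pattern, replacement) pair and they run one by one, in order.
rules = [
    (re.compile(r"colour"), "color"),        # hypothetical find/replace rule
    (re.compile(r"[ \t]+$", re.M), ""),      # strip trailing whitespace
]

def apply_rules(text, rules):
    # Apply every rule in sequence to the file's text.
    for pattern, replacement in rules:
        text = pattern.sub(replacement, text)
    return text

print(apply_rules("colour   \nsecond line\n", rules))
```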
|
OPCFW_CODE
|
A lot of research projects use Python and different Python packages/modules to achieve results. This page describes how to easily setup a workable python environment with your own python packages inside.
For new projects, you should in particular consider whether you want to use the 2.x or 3.x variant of Python. The two versions are not compatible, and in some cases you may have to use an older 2.7.x version of Python because some of your packages do not work with Python 3.x.
At Abacus we maintain two variants of Python and several versions of each:
- python/2.7.11 (default)
- python-intel/2.7.11 (default)
The vanilla Python versions (python) include Python and a few extra packages, including in particular virtualenv (see below). For further information, have a look at the official Python home page.
The Intel-optimised version of Python (python-intel) has been compiled by Intel and includes a lot of widely used Python packages (virtualenv, etc.); for more information, look at the official Intel Python home page.
To use a particular version of Python, simply use module add:
testuser@fe1:~$ module add python-intel/2.7.11
Adding extra packages
In many cases you'll need extra Python packages for your project. In the following we describe two ways to do this. Consider both of them and use the one most suitable for your project.
As noted above, also consider using one of the python-intel variants, as these already contain many packages, possibly including some of the packages you need.
Adding extra packages #1 - using pip
In the simple case, you only need one or a few packages, and only for yourself. In this case, use pip install --user to install the package into your own home directory as shown below, i.e., first use module add to select the right Python version, then use pip:
testuser@fe1:~$ module add python-intel/3.5.2
testuser@fe1:~$ pip install --user Pillow
Collecting Pillow
  Downloading Pillow-4.1.0-cp35-cp35m-manylinux1_x86_64.whl (5.7MB)
    100% |████████████████████████████████| 5.7MB 204kB/s
Collecting olefile (from Pillow)
  Downloading olefile-0.44.zip (74kB)
    100% |████████████████████████████████| 81kB 8.6MB/s
Building wheels for collected packages: olefile
  Running setup.py bdist_wheel for olefile ... done
  Stored in directory: /home/testuser/.cache/pip/wheels/20/...
Successfully built olefile
Installing collected packages: olefile, Pillow
Successfully installed Pillow-4.1.0 olefile-0.44
Files are installed in your home directory (in ~/.local).
Things to consider:
- The packages are only available to your own user, not to anybody else.
- If you change the Python version selected with module add, the installed packages may no longer work, and you may have to redo the installation.
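With pip install --user you can ask Python itself where the files went; the standard site module reports the per-user site-packages directory (nothing here is specific to this cluster):

```python
import site

# pip install --user installs into the per-user site-packages
# directory, which lives under your home directory by default.
print(site.getusersitepackages())
# e.g. /home/testuser/.local/lib/python3.5/site-packages
```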
Adding extra packages #2 - using virtualenv
virtualenv is a tool that can be used to create isolated Python environments. In each environment you select the Python version and the Python packages needed for your project. If you keep old virtualenv environments, it is possible to later rerun some of your job scripts in the exact same Python environment as when you ran the script the first time.
Creating the environment
The Python files need to be placed in a directory. In the following
examples we use
/work/sdutest/tensor to install our own version
of Tensorflow. You should instead use a
directory within one of your own project directories.
testuser@fe1:~$ module purge
testuser@fe1:~$ # tensorflow also requires the CUDA and cudnn modules
testuser@fe1:~$ module add python/3.5.2 cuda/8.0.44 cudnn/5.1
testuser@fe1:~$ virtualenv /work/sdutest/tensor-1.2
PYTHONHOME is set. You *must* activate the virtualenv before using it
Using base prefix '/opt/sys/apps/python/3.5.2'
New python executable in /work/sdutest/tensor-1.2/bin/python3.5
Also creating executable in /work/sdutest/tensor-1.2/bin/python
Installing setuptools, pip, wheel...done.
testuser@fe1:~$ source /work/sdutest/tensor-1.2/bin/activate
(tensor-1.2) testuser@fe1:~$ # you are now inside your own Python environment
Note the line with source /work/sdutest/tensor-1.2/bin/activate. You'll need to repeat this step every time before you actually use your new Python environment.
We suggest editing the activate script to include the module purge and module add lines from above, to easily set up the correct environment every time you use this. The two lines must be added to the top of the file.
testuser@fe1:~$ nano /work/sdutest/tensor-1.2/bin/activate # add module purge and module add ... lines at the top
After the initial package setup, you can use pip install as you would if you had installed Python yourself, e.g.:
testuser@fe1:~$ source /work/sdutest/tensor-1.2/bin/activate
(tensor-1.2) testuser@fe1:~$ which pip
/work/sdutest/tensor-1.2/bin/pip
(tensor-1.2) testuser@fe1:~$ pip3 install --upgrade tensorflow-gpu
Collecting tensorflow-gpu
  Downloading tensorflow_gpu-1.1.0-cp35-cp35m-manylinux1_x86_64.whl (84.1MB)
    100% |████████████████████████████████| 84.1MB 18kB/s
Collecting protobuf>=3.2.0 (from tensorflow-gpu)
...
Installing collected packages: protobuf, numpy, werkzeug, tensorflow-gpu
Successfully installed numpy-1.12.1 protobuf-3.3.0 tensorflow-gpu-1.1.0 werkzeug-0.12.2
(tensor-1.2) testuser@fe1:~$
Using the environment
If you added the module purge and module add ... lines as described in the first step, you simply need to source the activate script every time before starting to use the Python environment:
testuser@fe1:~$ source /work/sdutest/tensor-1.2/bin/activate
(tensor-1.2) testuser@fe1:~$ # you are now inside your own Python environment
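You can also confirm from inside Python that the environment is active: in a virtualenv, sys.prefix points into the environment while the base interpreter's prefix is recorded separately. A minimal check using standard sys attributes:

```python
import sys

def in_virtualenv():
    # venv/virtualenv point sys.prefix at the environment directory and
    # keep the original interpreter's prefix in base_prefix (old
    # virtualenv versions used real_prefix instead).
    base = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    return base != sys.prefix

print(in_virtualenv())
```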
Similarly, in your Slurm job scripts you should add the source line as shown below:
#! /bin/bash
#SBATCH --account sdutest_gpu   # account
#SBATCH --time 2:00:00          # max time (HH:MM:SS)
echo Running on "$(hostname)"
echo Available nodes: "$SLURM_NODELIST"
echo Slurm_submit_dir: "$SLURM_SUBMIT_DIR"
echo Start time: "$(date)"

# Load the Python environment
source /work/sdutest/tensor-1.2/bin/activate

# Start your python application
python ...

echo Done.
|
OPCFW_CODE
|
The Post Office Horizon nightmare is in the news again, and this time, I suppose because it's been on the television, something is actually happening.
If you are unfamiliar with the events that unfolded, the Post Office commissioned some accounting software in the 90s called Horizon, from a company called Fujitsu. The original purpose of this was to basically act as a one-stop shop for all financial activity in various Post Office branches, with one of the goals being to cut down on benefits fraud. This would have been a not-insignificant task; it's basically the equivalent of asking for a payments processor, auditing system, and all the rest; a field that was functionally in its infancy in the 1990s. The software was eventually delivered, in a slimmed-down form, with no benefits elements, to the Post Office. Problems ensued.
In short, the system was riddled with bugs. They haven't been made public, but multiple third parties audited it, and found it not fit for purpose. This resulted in a number of Post Office branches looking like they'd absconded with tens of thousands of pounds, and the Post Office itself decided to take the postmasters of these places to court rather than acknowledge the software was full of bugs, and consequently, over more than a decade, they destroyed a whole lot of people's lives, forever. The Government has been in the process of helping them pay reparations to the people who didn't kill themselves, and since the TV show, have been looking into pardons for everyone.
I don't want to get too deep into that side of things, but I do want to address something I keep seeing.
That is "why did this happen", and "why is Fujitsu a preferred supplier", and that's what I'm going to talk a little bit about today. To understand why, you have to know a little bit about the history of computing in the UK. Back in the 1950s and 1960s, the UK was a trail-blazer in terms of the tech industry. Several firsts happened on UK soil. Indeed, the Post Office itself was actually quite technologically competent! They set up the first UK-wide test network. They bought a bunch of computers from a company called Ferranti in 1971, and then used them to build a packet-switching network. You can find their 1974 paper here: https://web.archive.org/web/20160304023636/http://rogerdmoore.ca/PS/EPSSFer/EF.html
So, what happened? Well, this is actually a culmination of a whole history of half-baked ideas by successive British governments. The first is that, in the 1960s, the Wilson Labour government decided to create a Frankenstein out of the major British consumer computer players; they were forcibly merged into a company called ICL, or International Computers Limited, of which the government was a minority owner, and provided a significant injection of money for research purposes. The idea was to create a British competitor to IBM; a large, monolithic giant that could really show those chaps. Unfortunately, this didn't really pan out. British economic troubles, European competitors and issues, and IBM's dominance and strategy conspired to, effectively, keep ICL sickly in its cot. However, as the now de facto only major technology company in the UK (Ferranti, above, escaped this for a time, before falling to an ignoble end that's worth another blog one day), they were effectively guaranteed government contracts, which did keep them alive for some time.
Meanwhile, the Post Office, having become a major player in telecommunications with serious knowledge about technology, was forcibly divested of this element in 1984, which became the private company BT. The Post Office returned to sorting mail, and BT went about its business.
Fujitsu and ICL became close bedfellows in the 1980s, with the British pseudo-giant wanting to source hardware from them, because it was cheaper than developing their own integrated circuits. Little by little, over time, ICL grew ever-closer to Fujitsu, until finally Fujitsu absorbed it, not unlike what happens to a male anglerfish when it encounters a female. ICL was allowed to exist as a subsidiary for a little bit, acting as a sort of handpuppet to keep those government contracts going, before Fujitsu got bored and took it off, revealing Mr. British Technology Industry was actually just them, now. But as part and parcel of all this, Fujitsu effectively inherited ICL's position and contracts with the UK government; a position it continues to hold today. If somewhere is using some ICL technology, or something ICL helped with, Fujitsu is really the only one you can turn to. Decades of knowledge of the inner workings of the British government have made them pretty much the preferred supplier. If you know anything about Britain, too, you'll know that large upgrading and migration projects tend to take an inordinate amount of time there. Fujitsu may well have delivered solutions with support contracts lasting 10-15 years, or more.
Some might say that the British government trying to tamper in the technology industry was a bad idea, especially when it comes to thinking they could somehow create an IBM-killer. To that, I would agree. Some might also say that allowing Fujitsu to buy and make a corporate suit out of the remains of the British technology industry was a bad idea. To that, I agree. While I wouldn't call this an inevitable outcome, I do think it was, sadly, predictable. If it wasn't this, it would have been something else. Indeed, it's entirely possible something else, somewhere, did go wrong, just without the same level of consequences.
Most importantly, though, I think it's an absolute tragedy that people died as a result of this, and that's something that nobody involved should be allowed to forget. All the reparations and apologies in the world, timely or late, can't bring anyone back to life.
|
OPCFW_CODE
|
EPI in Java
The draft version of EPI in Java is ready!
EPI is a community book - its content, quality, and very existence,
are a testament to the engagement and enthusiasm of its readers.
In this spirit, we are asking readers to help us by providing
feedback on 1-2 chapters on the Java draft.
By acting as a reviewer, you will get
- a chance to perfect your interviewing skills,
- an early look at the book,
- our undying gratitude, and,
- a free hardcopy of the book, if you are one of the first 25 reviewers or make great contributions to this review.
Due to enormous interest, we are no longer signing up reviewers via the Google form. If you are very keen on participating,
please write directly to us - you can use firstname.lastname@example.org.
We expect reviewers to spend one to two afternoons going through
the chapter, and identify an issue every 1 to 2 pages. The text is very similar to that of
the current version - the big difference is in the programs which are now in Java.
The perfect is the enemy of the good - please send us your inputs as soon as you can. (We are hoping
to have a substantial amount of feedback by the end of August.)
Issues can be typos, language that is misleading, suboptimum solutions, bad programming practices - in short
anything that can improve the quality of the book.
Every individual issue you identify should be reported through a Google form, which you
can view here.
Here are some examples of issues reported by readers.
Note how specific these suggestions are - they have details on where the issue was, what the problem was, what the right wording should be, etc.
- Typo: 3rd paragraph, 3rd line: "<0, 7, 0, 2, 3, 5>". According to the input array, it should be "<0, 7, 0, 2, 3, 6>", because when we reach day 6, the max profit is 14 - 8 = 6, not 14 - 9 = 5.
This means on the last line of this paragraph, the last two elements in M should be 8 and 6 (instead of 7 and 5).
- Comment: Why in problem 6.19 and the variant, we need to assume that n = 2^k? This condition is not really used in current solutions (unless you meant to discuss an alternate solution that uses divide and conquer to rotate the matrix by quarters).
- Clarification: Solution 12.14: I feel the parameters in recursive function are confusing. Would it not be better to use two integers to keep track of the number of left parens we need and the number of right parens we need? If the first one is larger than 0, then we can add either left or right paren; otherwise we just add right parens. When both are 0, we save the valid prefix.
- Error: Program in Solution 8.4 goes into an infinite loop
if there are duplicate entries in the input list. The problem statement should either explicitly
rule that out, or you should use a container which supports duplicate entries.
- Suggestion: Drop Variant 10.16.1, as it is effectively the same as Problem 10.18.
|
OPCFW_CODE
|
Thanks for replying. I have tried to install version 16.2.1, but it seems it's a different one to what you suggest. 'My' version is called "Non-WHQL-64Bit-Radeon-Software-Crimson-16.2-Win10-Win8.1-Win7-Feb23.exe", which I found on the AMD site here --> Legacy. I also downloaded 16.1.1, but bizarrely version 16.1.1 has disappeared.
I will try 'your' version and report back.
Further to my earlier reply, still being moderated, I have now successfully installed version 16.2.1. I will go back to Nero and tell them, their product still doesn't work. However, I dare say they will say that I should have installed version 16.1.1 but I can't now find this on your site. Do you know where it went or is it no longer available?
Below are the file names I have downloaded and tried to install without success:
16.1.1 --> non-whql-64bit-radeon-software-crimson-16.1.1-win10-win8.1-win7-jan30.exe
16.2.1 --> Non-WHQL-64Bit-Radeon-Software-Crimson-16.2-Win10-Win8.1-Win7-Feb23.exe
and the one you pointed me at and I installed --> non-whql-64bit-nieg-radeon-crimson-16.2.1-win10-win8.1-win7-feb27.exe
I will see what Nero say about this version 16.2.1.
I won't close this case until I hear from them as I'm expecting them to say I have the wrong driver.
Well thanks to everyone who put in some effort on helping me but it's all been for nought!!! Nero finally got back to me after 'supporting' me for 10 weeks with the following reply:
Customer Support [2016-04-13 11:16:59.167]
As Nero 2014 is an older Generation of Nero, we have to inform you that such older Generation is no longer maintained.
Therefore there will be no new updates available anymore. Sorry.
16.2.1 should work okay on your system. If you continue to have problems, let me know and I will try to locate 16.1.1 for you. Sorry to hear about the lack of Nero support.
|
OPCFW_CODE
|
When GPT-4V was released, everyone was amazed at its ability to interpret visual information such as photographs. A team has put together a multimodal model called LLaVA (Large Language and Vision Assistant) which is free to use.
I wanted to give it a go through my private Ollama instance, running on a virtual machine in the datacenter with a 48 GB NVIDIA GPU.
I first updated my Ollama instance to the latest 0.1.16 build, which has support for this now, then pulled the model down.
ollama pull llava:13b
That's the housekeeping out of the way; the only thing left to do now is to try it out via the API.
You need to convert the images to base64 encoding when passing them through. A reduced example of the request body is below - similar to text-only models, with the addition of the images array:
{
"model": "llava:13b",
"prompt": "What is in this picture?",
"images": ["<base64-encoded image data>"]
}
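Building that request body in Python is mostly a matter of base64-encoding the image bytes before adding them to the images array. A minimal sketch; the model name comes from the pull above, while the placeholder bytes and whatever host/port you POST to are assumptions:

```python
import base64
import json

def build_payload(model, prompt, image_bytes):
    # Ollama's generate endpoint takes base64-encoded images in an
    # "images" array alongside the usual text prompt.
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

# Placeholder bytes stand in for open("photo.jpg", "rb").read();
# POST the JSON body to your Ollama instance's /api/generate.
payload = build_payload("llava:13b", "What is in this picture?", b"fake-image-bytes")
print(json.dumps(payload))
```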
Prompt: What is in this picture?
Response: In this picture, there is a person holding up a giant slice of pizza. The scene takes place in front of a truck, making it an interesting and unconventional setting for such a large piece of food.
Prompt: Is there a vehicle in this image?
Response: Yes, there is a vehicle in the image, specifically a truck.
Prompt: What footwear is being worn?
Response: The person is wearing boots
Prompt: Describe this picture
Response: The image shows a colorful bar graph representing the number of students in different colors. Each color stands for a specific number of students, with orange indicating 75 students, green representing 100 students, blue showing 125 students, and yellow symbolizing 150 students. The heights of these bars correspond to the respective number of students they represent.\nThe bar graph is organized in such a way that each color has a distinct line, making it easy to identify the different colors and their corresponding values. This visual representation helps demonstrate the distribution of student numbers throughout the various levels or categories.
Prompt: Give me a concise list of the colors in this image
Response: 1. Red 2. Green 3. Blue 4. Yellow 5. Orange 6. Pink 7. White
Prompt: Does the blue bar have a bigger value than the green bar?
Response: Yes, the blue bar has a bigger value than the green bar.
Prompt: Does the green bar have a bigger value than the blue bar?
Response: No, the green bar has a smaller value than the blue bar.
Prompt: Describe this photo
Response: The image features a beautiful mountain range with snow-covered peaks in the distance. The mountains are tall and majestic, surrounded by trees that add to the natural scenery. There is also a valley visible between two prominent peaks, showcasing the breathtaking landscape. The scene captures the essence of nature and its rugged beauty, making it an appealing sight for any outdoor enthusiast or traveler.
Prompt: How many peaks are visible?
Response: 3 peaks are visible in the image.
Prompt: What species of trees are visible in the image
Response: The image shows a mountain covered with conifers, which are evergreen trees.
Prompt: Describe this picture
Response: The image features a thank you note written on a white piece of paper. The lettering is blue, and the message reads "Thank You!" A pen with gold accents sits on top of the note, indicating that it was recently written. There is also an envelope placed nearby, likely for sending the note to someone
Prompt: Does this contain the name of a person?
Response: No, this message does not contain the name of a person. Instead, it contains the word "thankyou" written on the card by the pen.
Prompt: What language are the words written in?
Prompt: What words are hand written on the letter, can you translate them to french?
Response: Merci !
|
OPCFW_CODE
|
Don’t talk to girls about STEM, they need to see it in action
December 20, 2019
When Leanne Huynh was at university, she was one of just two females in a class of 40 students. Now, an application development associate at Accenture, Leanne says it’s not enough to tell girls to pursue a career in STEM, the key is getting girls interested in science and technology to show them what they can do with it.
The first time I really appreciated just how wide and creative a career in STEM could be was at the Accenture Tech Bootcamp. I was in the final year of my double degree in engineering and commerce and when I approached Accenture they suggested I attend the Tech Bootcamp as a pathway to employment.
At Tech Bootcamp, everyone who was in my team was a university graduate, like me. Everyone wanted to make an impression, but I think we also wanted to experience what it would be like to work in technology and for Accenture.
It’s not until you get into a company that you really get to experience the culture and the values and the particular way it operates.
Tech Bootcamp was unlike any interview I have ever been to! To begin with, I learnt a lot, and I didn’t realise it would be so much fun.
myWizard AIOps at Accenture Open Day where we spoke to
university students about what we do!
Over the four days we were given a real world business case to work on. Ours was to create a way to tokenise Asian language in code. Problem was, we only had one computer science guy in the team! So we decided to split up the tasks – one person researched machine learning, some worked on developing the use cases and some worked on how we would present our work, and so on.
I think everyone on my team has gone on to work at Accenture!
When I was at university, STEM was definitely male dominated, but for a woman in STEM, Accenture is great. It’s more gender balanced, and women are given the opportunity to develop their skills in areas that interest them.
Accenture has a strong culture of nurturing learning. I wanted to learn coding and they facilitated that; I wanted to learn more frontend development, so they supported that too.
We went to Yarra Valley for wine and chocolate tasting as part of the
Associate Academy Training
I also helped out on Accenture’s Girls in STEM initiative, which is a program aimed at getting girls interested in science and technology, not by just telling them how great it is, but by showing them the amazing ways it can be applied.
As part of the initiative, high school girls came in and participated in half-day workshops. We also invited vendors and university groups to come in and they showed the girls a racing car they created from sustainable materials, and they got to try out virtual reality headsets. The girls were really excited and inspired.
I understand this completely. I started out studying traditional finance, I never even considered doing the work I do now. But once I was given my first business case, and began working on teams to create solutions, I knew this was for me.
Working with other women in STEM is great, and hopefully more people will get to experience that, as we present STEM to female innovators of the future.
Accenture Tech Bootcamp is a great environment for women in STEM, because you have the opportunity to work with the Accenture mentors, and other future innovators, solving real world problems.
Some of the volunteers at the Girls in STEM event at Accenture
Get the latest stories of the week, delivered to your inbox. Select Careers Blog on the registration form.
|
OPCFW_CODE
|
As an active Apple Newton user, I love it when fellow users continue finding ways to improve the platform. This week Matthias Melcher deserves a huge “Great Job” from everyone. Matthias is the current maintainer of Einstein, the Newton Message Pad emulator. This week he released an updated version of the Android version of the emulator that included many improvements that increased the speed considerably.
After using the new release on both my Galaxy Nexus and a Nexus 7, I have to agree with the claims. The emulator feels like it is running at about 90% of the speed of my Newton 2000u. The screen reaction seems to be a little slower than the real thing, but given time that can only improve. I could see an Android tablet with Einstein replacing my beloved green machine one day.
There are still some things to fix in Einstein. Sound doesn't work, there is no way to get data in or out, and the app menu needs fixing for 4.0+ devices. Regardless, I think we are closer to realizing Paul Guyot's original idea for Einstein.
If you would like to run Einstein on your Android device, please check out my previous post here.
Einstein Update Announcement: [NTLK] New version of Einstein for Android
Einstein Project Page: http://code.google.com/p/einstein/
Written on Message Pad 2000u
Have you ever thought to yourself, “I wish there was a way to use my Newton on my phone or tablet?” Well now you can. For all of those Newton Users who have an Android device, you can now run the Einstein emulator.
Let me give you a little backstory. Back in 2005 Paul Guyot, a well-known and respected Newton programmer, released the first public version of his Newton emulator called Einstein. The original concept was to come up with a solution to make the Newton operating system hardware independent and possibly license the OS from Apple (1). That was going to open the way for new hardware and therefore keep the Newton platform alive. Unfortunately the project never got past the emulator step, but Paul continued working and was able to build Einstein to run on OS X, Windows, and a few Linux tablets like the Nokia 770/800. In 2007 Einstein was released as open source software so other programmers could join in and help bring Newton OS to new devices. One such programmer was Matthias Melcher (2).
Since 2007 Matthias has continued to work on Einstein to improve its speed, reliability, and functionality. 2011 has been particularly exciting because Matthias ported Einstein over to the iOS platform, giving users the potential to run Newton OS on their iPhones and iPads. Unfortunately Apple rejected Einstein from the App Store, and the iOS port is no longer being developed (3).
Now Matthias is working on an Android port of Einstein (4). An early build is available for download, but a lot of work still needs to be done to optimize the speed and reliability. Running the emulator on a Droid 1 is slow, but good enough to show off to friends. I personally can’t wait to see what happens to Einstein in the near future as devices get faster. What are you waiting for? Go dump the ROM off your Newton and get to emulating.
I knew it was only a matter of time, and now I am proud to say that SNES gaming works on the PS3 again. One coder by the name of eiz compiled the Snes9x emulator for the PS3. Another coder, squarepusher2, has taken the source code and worked to improve it. As of writing, squarepusher2's 4.2.1 build plays most games with no issues. Both NTSC and PAL ROMs are supported. Things such as a GUI still need to be addressed, and loading freeze states hasn't been implemented, but those are not the highest priority. Even with only two people working on this project, the progress has been phenomenal. I can't wait to see what comes out next.
Eiz’s source code: http://github.com/eiz/ps3_snes9x
Squarepusher2’s build and source: http://www.ps3news.com/forums/ps3-hacks/snes9x-super-nintendo-emulator-ps3-port-arrives-112925-19.html#post311837
|
OPCFW_CODE
|
From the Beginning ...
Installing SynthMaker is a breeze. The program is downloaded as a ZIP file which contains the installation program as an EXE file. Clicking on the EXE performs the installation. When you first start up SynthMaker, you're presented with a schematic containing "a very simple synth built completely from high-level modules." SynthMaker projects are in the form of multi page hierarchical schematics which are stored in ".osm" files. Figure 1 shows the SynthMaker UI.
On the left side of the UI are filters that are used to organize the 75+ modules that come standard with SynthMaker. Clicking on a filter causes the modules of that category to be displayed in the next column of the display. The majority of the UI is for display of the schematic (bottom-right) and the hierarchical schematic browser (top-right). Figure 1 shows the UI of the example software synthesizer. To see the functionality behind the sleek brushed aluminum front panel, click on the module to give it focus and then click the page down key. This opens the next lower level of the schematic hierarchy for view. Use the page up key or click on the high level schematic in the browser to return to the top level. Modules can be made up of many other modules each with many levels.
An important thing to notice about the high-level display is that the example synthesizer is set up for interactive testing. The Midi In module allows the computer's keyboard to act as a Midi controller for driving the synthesizer. The link (connection) from the Midi In module to the synthesizer's Midi input routes the Midi codes. On the output side, the two outputs (left and right) from the synthesizer are connected to the DS Out (DirectSound) module, which allows the sounds produced by the synthesizer to be heard via the computer's audio subsystem. Pressing keyboard keys causes the synthesizer to do its thing. All buttons and knobs on the synthesizer are active and any changes made will be heard in real time. Pretty neat.
To develop your own instruments or effects, you select the modules you need from the left side of the UI and drag them onto your schematic. Modules are connected by dragging a link from the output connector of a module to the input connector of the next. Once you have your creation working you can convert it into a module that is managed by SynthMaker just like standard modules. I've created a module category called mine that I put all modules I create into. If SynthMaker doesn't have the functionality you need in a standard module and that functionality cannot be created by combination of the standard modules you can create a code or assembler module and code it up. The language syntax is explained in the documentation.
SynthMaker enforces correct signal routing between modules by only allowing links to be made between input and output connectors of compatible types. SynthMaker supports over 30 different types of data with each type having a unique connector symbol. All data types fall into one of two categories: stream or triggered.
Quoting from the SynthMaker User's Guide:
Stream data covers all digital audio and control signals. These signals are fluctuating at sampling rate and so any components which process them will also perform calculations at sampling rate. Components that process stream data are where all the digital signal processing occurs. Triggered data works in a completely different way. Whereas stream data is continuously flowing, triggered data only flows in response to some event. Usually the event is a user interaction or data arriving from some external source like MIDI or a timer.
Commonly used data types include: integer, float, stream, UI events, midi, mono and poly data.
|
OPCFW_CODE
|
Task Notes Pane
incorporate a preview pane below the task pane showing the task notes and attachments, like the outlook preview pane
@abcd There is already a popular request for making the edit window work as a pane: https://taskcoach.uservoice.com/forums/26465-desktop-version-windows-linux-mac-of-task-coach/suggestions/466618-adding-an-edit-viewer
This preview pane could actually just be the task properties window that opens up when we double click a task. This way it would be simple to implement, and work wonders for us users.
What a great idea! This would broaden the scope of the app significantly.
I came here looking for this too. Please help by adding this feature.
This has been requested for over 6 years now - I am disappointed to see that it has been ignored for so long, because it tells me the developers are not responsive. For me, like others who have commented, I cannot use this app without this feature....
I can't use this great app without this feature. A note / description pane is necessary to plan tasks more efficiently.
Devin Lussier commented
I'd like to see this too. I'd be very happy if it works exactly like the "effort for selected tasks" pane works, but showing notes instead.
There are still only two developers working on Task Coach, and only in their free time. This request simply hasn't been worked on yet; the focus has been on some easier items and on the top request, which is technically challenging.
I am not a programmer myself but help with other things so the other guys can focus more on programming. Otherwise, I am developing a new funding/patronage system overall to help projects like Task Coach really get funded, dedicated development time. Until my project is functional, the little bits of support here and there are good motivation to continue but don't really change the amount of free time the few developers have for the project.
4 years to incorporate this feature?
I am moving from MylifeOrganized to TaskCoach and I also miss this feature. As soon as you select a task in your list (or in the tree) you can see in another pane (it doesn't matter where it is) the essential information: sometimes you need to remember key details but other times you need to redefine the properties (dates, priority, urls, etc)... This feature will empower this fantastic tool. At least this is my opinion.
@Chris, I agree (see my comment below about how I envision this working)
Also, there's a separate request for full rich-text:
Finally, if you have thoughts about how to improve things, please feel free to get involved. This is Free/Libre Open Source Software, after all. We have a public developer's discussion group at http://tech.groups.yahoo.com/group/taskcoach-dev/
Anyone is welcome to join and discuss ideas about priorities and how we should implement things (although notice that there are already several hundred feature requests which we are struggling to get to, and I already have a lengthy list of additional improvements I want to see and we're just getting to bit-by-bit).
Chris Carmichael commented
I think what is needed here is a viewer for the CONTENTS of the note ( ie. "description" field). Contents is the base data that people need to store and organise - you've done the organisation brilliantly, but contents data needs to be easily accessible and formattable, even multi-media (include hot links to URLS and files, playable media/ slideshows).
As a writer of databases, I can see your gripe about what are essentially 2D Databases - you've successfully created a multi-dimensional database, but you have to remember that the function of a database manager is not only to store and retrieve data, but to PRESENT the data well. To do this, you have to bear in mind what the data is used for. This is not really a programmer's job, it's the job of a business system designer/analyst, but in fact programmers have to implement it. Version 1.3.22 is the result of good programming, but needs more thought in the systems design area.
Add me to the list of people who aren't going to be using Task Coach until this is implemented. The ability to see notes related to the tasks without having to click into sub windows is essential. Please implement asap.
Sounds to me like most users here should also be voting for this other request:
That would make this basically functional for description and with some improvements to notes pane in the editor that would do it completely.
I personally would be happy with a two-pane layout, (A) a notes list and (B) an editable description pane, within the task editor window's notes tab and in the note viewer, but I don't especially care about having this pane connected to the task viewer.
A separate view for details of the selected task, showing description, attachments, and a summary of dates and effort would be awesome. I agree with commenters that having to open each task to view details isn't very practical and a quick preview / overview of particular task will certainly assist productivity. Thanks.
It would be nice to have a note pane below the task list, so that when you click on a task it shows you the notes. The current situation, where you need to open every task, is not practical. When I review all tasks I cannot just press the down arrow; I have to open every task. The bubble notification is not so useful, because it is annoying when you don't need it. Some comments are too long, and then the bubble is too high, etc.
I would like to be able to click on a task and see the efforts that only apply to that particular task.
This would be very useful feature. I wish I could scroll down my tasks and be able to see notes/descriptions in separate pane. As it is a task manager maybe notes and description could be combined in one field.
Additionally, it would be great if this note field could use active links as in Tomboy, where I can paste a URL and it is automatically highlighted in blue and active, and when I click it, it opens in the default web browser. This is useful because many of my tasks are just searching for and gathering information.
This is a really crucial aspect of using notes, and without it the functionality and reason for using the tool are pretty much lost.
I also think this feature is the main reason for me not to switch over to TaskCoach. I'll be more specific about how it could be designed:
- Add another viewer for Description and Notes. It should have two tabs in it, a description tab and a notes tab.
- This viewer would show the description / notes of the task, category (or note) which is currently selected in any of the other windows. It should also be possible to edit the contents of this new viewer.
|
OPCFW_CODE
|
import * as Cesium from 'cesium'
/**
*@class
*@memberof MeteoLib.Util
*/
function Path() {}
/**
 *
 *Get the file extension (suffix)
 *@param {String} fname file name
 */
Path.GetExtension = function (fname) {
var start = fname.lastIndexOf('.')
if (start >= 0) {
return fname.substring(start, fname.length)
}
return ''
}
/**
 *
 *Get the file name (including extension)
 *@param {String} fname file name
 */
Path.GetFileName = function (fname) {
var start = fname.lastIndexOf('/')
if (start < 0) {
return fname
}
return fname.substring(start + 1, fname.length)
}
/**
 *
 *Get the directory portion of a path
 *@param {String} fname file name
 */
Path.GetDirectoryName = function (fname) {
var start = fname.lastIndexOf('/')
if (start < 0) {
return ''
}
return fname.substring(0, start)
}
/**
 *
 *Combine a directory and a file name
 *@param {String} dir directory
 *@param {String} fname file name
 *@return {String}
 */
Path.Combine = function (dir, fname) {
  // Insert a separator unless the directory already ends with one.
  if (dir.length > 0 && dir[dir.length - 1] != '/') {
    return dir + '/' + fname
  }
  return dir + fname
}
/**
 *Join path segments
 *@param {...String|Array.<String>} paths path segments
 *@return {String}
 */
Path.join = function () {
  // `arguments` is only array-like and has no forEach; copy it to a real array.
  var srcArguments = Array.prototype.slice.call(arguments)
  if (srcArguments.length == 1 && Cesium.isArray(srcArguments[0])) {
    srcArguments = srcArguments[0]
  }
  var paths = []
  srcArguments.forEach(function (path) {
    // Strip a trailing slash so segments are joined by exactly one '/'.
    if (path.length > 1 && path[path.length - 1] == '/') {
      path = path.substr(0, path.length - 1)
    }
    paths.push(path)
  })
  return paths.join('/')
}
/**
 *
 *Replace the file extension
 *@param {String} fname file name
 *@param {String} newExt new extension (e.g. '.json')
 */
Path.ChangeExtension = function (fname, newExt) {
  var ext = Path.GetExtension(fname)
  if (ext === '') {
    // No extension to replace; append the new one.
    return fname + newExt
  }
  // Replace only the trailing extension, not an earlier occurrence of it.
  return fname.substring(0, fname.length - ext.length) + newExt
}
/**
 *
 *Get the file name without its extension
 *@param {String} fname file name
 *@return {String}
 */
Path.GetFileNameWithoutExtension = function (fname) {
fname = Path.GetFileName(fname)
fname = Path.ChangeExtension(fname, '')
return fname
}
export default Path
|
STACK_EDU
|
Why can't I kill this process on Linux?
Problem
I would like to kill a process called raspivid (program which records videos using a Raspberry Pi Camera) but I cannot...
This is how I call it:
#!/bin/bash
#Start recording...
raspivid -w 800 -h 600 -t 15000 -o $1 -v -n -rot 270 >> /home/pi/log/camera_output.txt 2>&1 &
#Waiting the video to be complete
sleep 16
#Killing child process
sudo kill -9 $!
#Killing parent process
sudo kill -9 $$
If I search for this process, it is still there:
pi@raspberrypi ~ $ ps -ef | grep raspivid
root 7238 7234 0 21:53 ? 00:00:00 [raspivid]
pi 17096 14925 0 22:05 pts/0 00:00:00 grep --color=auto raspivid
If I try to kill it, it doesn't die. Instead it changes the parent PID to 1:
pi@raspberrypi ~ $ sudo killall raspivid
pi@raspberrypi ~ $ ps -ef | grep raspivid
root 7238 1 0 21:53 ? 00:00:00 [raspivid]
pi 17196 14925 0 22:05 pts/0 00:00:00 grep --color=auto raspivid
pi@raspberrypi ~ $ sudo killall raspivid
Observations:
The call works fine for a while (2 hours or something) then it starts
hanging.
Only a physical power off solves the issue. I cannot reboot via
terminal (it hangs too)
My questions:
Why does Linux assign the parent PID to 1?
Why the process cannot get killed? (I also tried sudo kill -9 7238)
Problem
Your script is probably creating zombies because of your kill -9 commands; as jjlin's answer also suggests, it is never good practice to kill a process abruptly unless you are forced to.
From man bash we can read:
Processes marked < defunct > are dead processes (so-called "zombies") that remain because their parent has not destroyed them properly.
These processes will be destroyed by init(8) if the parent process exits.
Answer #1: The init process has PID 1. When a process's parent dies, Linux reassigns the orphan to init, which is why its parent PID becomes 1.
Answer #2: They cannot be killed simply because they are already dead... if their parent is init, it is probably enough to wait some time.
To remove zombies from a system, the SIGCHLD signal can be sent to the parent manually, using the kill command. If the parent process still refuses to reap the zombie, the next step would be to remove the parent process. When a process loses its parent, init becomes its new parent. Init periodically executes the wait system call to reap any zombies with init as parent. [1]
Just in case this idea arises one day or another: running kill -9 on the init process with root privileges is the software equivalent of physically unplugging the computer from the electricity grid. [:-)]
However, zombie processes can be identified in the output of the ps command by the presence of a "Z" in the STAT column. You can use the following line to easily identify them:
ps -aux | grep Z
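The reaping mechanics described above can be shown in a small self-contained Python sketch (a generic demonstration, unrelated to raspivid): a forked child that has exited stays a zombie until its parent calls waitpid on it.

```python
import os
import time

pid = os.fork()
if pid == 0:
    # Child: exit immediately; until reaped, it shows up as <defunct> / "Z".
    os._exit(7)
else:
    time.sleep(0.2)  # during this window: ps -aux | grep Z shows the child
    # Reaping: waitpid collects the exit status and removes the zombie entry.
    reaped, status = os.waitpid(pid, 0)
    print(reaped == pid, os.WEXITSTATUS(status))  # prints: True 7
```

This is exactly what init does periodically for orphans it has adopted, which is why zombies whose parent has died eventually disappear on their own.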
Some references about Linux zombies world:
Is there any way to kill a zombie process without reboot?
What are zombie processes
How to kill zombie process
[1] From many sources on the net
A process with parent PID 1 is not a zombie. A process gets this parent when its own parent is killed before it is. So his killall is apparently killing the parent, not the process he wanted to kill.
Where do you see <defunct> in his ps output? What does that have to do with this question?
@Barmar I didn't see it. Unfortunately the problem is not always exactly where you are looking. BTW, he kill -9s $! without waiting for the background camera process... then after a sleep 16 he kill -9s the parent, abruptly again. It smelled of zombie... Following the smell ( :-) ) you can see from the ps -ef he did afterwards that the child was still alive, but the parent had been killed (-9).
I think you're confusing orphan processes with zombie processes, but they're unrelated.
Looking at the script again: he kill -9s his own process. It's reasonable to assume it gets killed and left < defunct >... even more so after the ineffective sudo killall raspivid. It's also possible that raspivid spawns its own child processes that remain orphaned. BTW it's enough to do "ps -aux | grep Z" to see whether it is a zombie or not, and it should be enough to avoid kill -9 on the process in the main script.
To answer question number 1:
When a process spawns child processes, the children each have their own PID. The PPID of each child (parent's process id) is the PID of their parent process. If the parent dies, then the child processes are orphaned. Orphaned processes are automatically picked up by the system init process which has a PID of 1.
The program probably has the camera device open, and by forcibly killing it, you haven't allowed it to clean up properly, so now it's stuck.
A few observations:
It's generally not a good idea to kill a program by starting with -9 unless you know what you're doing. Just a normal kill (with no options) is fine.
There should not be a need to do any killing in your script at all. You've already passed -t 15000 to the program to specify the length of the video, so the first kill should be unnecessary. The second kill is also unnecessary since the shell will exit on its own when it reaches the end of the script. If the program isn't exiting on its own (as it should), then you've got other problems.
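That suggestion can be sketched in a couple of lines: instead of the sleep + kill -9 pattern, just wait for the background job to finish on its own. Here sleep 1 stands in for the raspivid -t 15000 invocation, which already stops itself after the requested duration.

```shell
#!/bin/bash
# 'sleep 1' stands in here for the long-running recording, e.g.:
#   raspivid -w 800 -h 600 -t 15000 -o "$1" ... &
sleep 1 &
wait $!                  # block until the background job exits by itself
echo "exit status: $?"   # prints: exit status: 0  (no kill, no zombie)
```

Because the job is reaped by wait rather than killed, the program gets to close the camera device cleanly and nothing is left behind.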
Here is a trick I used to clean up a zombie process. I needed it because the upgrade script of one package was waiting with killall -pw, but the process was a zombie because gnome-session hadn't done a 'waitpid' on it. So I did it manually:
gdb
(gdb) attach <zombie parent PID>
(gdb) call waitpid( <zombie PID>, 0, 0)
(gdb) detach
That's it! The zombie process was cleaned up!
|
STACK_EXCHANGE
|
Hello. I'm wondering if I'm an INTP or INTJ, as I've been in doubt about this for quite a long time (mostly thanks to bad stereotypes). I've looked into the cognitive functions and got rid of a bunch of them (i.e. I'm certain I don't use Si or Se dominantly, nor Fi or Fe).
I'll try posting some information about me that might be useful (if not, tell me what would be, please) :
-I'm very driven, almost perfectionistic - if something doesn't live up to my standards, I want to alter it (including myself - for example, if I don't do so well in a test, I'll throw myself into studying for the next one as well as reviewing everything I did wrong).
-I tend to be oblivious to my environment to a large extent - I didn't notice the electricity had cut off in my house until my laptop battery sign popped up saying it was discharging, then I realised I was sat in darkness. I also usually make two or three trips back to my room before I leave, due to either forgetting things or being convinced I've forgotten something. This has been commented on by a lot of people (my mum calls it 'being artistic' and I respond with calling it 'being stupid').
-I'm not great with making conversation with people I don't know - I'm not good at small talk, so I tend to not say anything for quite a while, until I get where I stand with them. I've learnt to nod and smile at what people say in order to encourage them to talk to me. I don't see myself as having many friends (maybe one or two), but lots of people consider me a friend to them (which I can't understand).
-I'm slightly neurotic in the way I feel the need to double- or triple-check things - I never trust myself to pack my bag properly, so I check it several times before leaving, and I make sure my work's in order (even if that order is in piles of paper on the floor). I also take very good care of my possessions, unlike a lot of people I know, who will stuff books into their bags, not caring they get bent out of shape.
-I'm usually very much 'to the point' in writing and conversations - I'm an under-writer for essays (well, used to be, as I've almost worked myself out of doing that) and I've been told I'm good at condensing big points into a couple of words.
-My hobbies are reading, writing (mostly science-fiction), drawing occasionally, watching films and listening to music, but I'm interested in the link between biology and psychology to an almost obsessive level (when I was twelve or so, I used to spend hours researching mental illnesses - which my mum thought was unhealthy behaviour).
-I can be very obsessive about things and used to re-read books and/or listen to one song repeatedly (this may be OCD, as it runs in my family).
-I'm quite good at reading people, so if they act up, I can explain to them why they're acting the way they are (I know it's annoying to do so). I also spend time analysing people and how they react to certain things.
-I'm not too concerned about getting into debates with people - I don't like conflict much, but if the topic's interesting, I don't mind.
Hmm, I'm not sure what else to put. Please feel free to ask questions if it'll help.
Thanks in advance for replies!
|
OPCFW_CODE
|
January 7th, 2011, 02:42 PM
an option to turn off scraping?
I would really like an option to stop BB from going to IMDB each time to scrape my movies. It seems like if I add a local movie plus its associated TBN file, BB should just recognize that and be done with it when I power on the device. Right now, it's like pulling teeth trying to get BB to acknowledge new movie files that I've added to my server. I'm also a WDTV owner, and with that it was super easy to add new movies to my server. I just copy the movie along with the JPG for the thumbnail and WDTV will instantly recognize it when I power it on.
Does anyone know if I can achieve the same effect with the BB? I am really thinking of returning the BB if this is not possible.
January 7th, 2011, 03:15 PM
Yea, Boxee isn't really setup to watch shows when you want to, at least not through the Movies and TV Shows view. To see video files immediately, you have to browse through the File view and select the source to view. It's not pretty, but it gets you watching your file before the night is over.
January 7th, 2011, 03:21 PM
Well, this seems like an oversight and a step in the wrong direction for BB. For a $200 device, it should give the user the ability to turn scraping on/off.
I've just created a bug on Jira. Please vote on this issue if it interests you.
Last edited by tonester99; January 7th, 2011 at 03:26 PM.
Reason: added post on Jira
January 7th, 2011, 05:43 PM
There is also http://jira.boxee.tv/browse/BOXEE-7068 - getting the API opened back up so that the community devs can do some work to improve the experience.
If I had access to the database via the api, then I could just write a server side application that does instant library updates and could turn off this horrid scanner.
January 7th, 2011, 07:00 PM
gribbli: that would be awesome.
January 7th, 2011, 07:55 PM
It may sound stupid, but what does scraping a movie mean? (I'm French)
January 7th, 2011, 08:06 PM
This is now the most 'popular' jira issue and ticket..
January 8th, 2011, 03:07 AM
It just means using a webservice from sites like imdb or moviedb to get information and cover art for the movie files on your local server.
|
OPCFW_CODE
|
What is time warp attack and how does it work in general?
I am trying to understand the recent time warp attack on Verge, but since the attack is universal across all PoW blockchains, I figure I could also ask here, since people here are... more knowledgeable.
I don't understand the details of the code.
But I assume something like this: in Bitcoin, there is a difficulty parameter that is decided for every 2016 blocks. Perhaps an integer, such that all 2016 blocks mined during that period need to have a hash that is less than that integer.
This difficulty is recalculated whenever 2016 blocks are mined. Everybody goes over these 2016 blocks, takes the earliest timestamp, takes the latest timestamp, calculates how long it took to mine those blocks, and decides on a new difficulty integer for the next 2016 blocks.
But how does time warp work? I read something along the lines of: miners start submitting blocks with large drifting timestamps (I think the limit is 2 hours for Bitcoin?), but 2 hours is relatively small compared to the average time it takes to mine 2016 blocks (300+ hours). How would adding or subtracting 2 hours in the blocks cause the mining difficulty to decrease dramatically?
For a miner to drastically decrease difficulty, wouldn't he need to:
Have majority hash power, so he can construct all 2016 blocks.
Change all timestamps in these 2016 blocks in a way that causes a dramatic decrease in difficulty.
Profit... (but by that time people have probably abandoned the chain anyway.)
But once we get to point #1 we are... kind of in trouble already; it's just that having time warp makes it worse???
Thanks!
The Bitcoin Protocol (consensus rules) has two relevant rules for the timestamps in block headers:
A node will not accept a block whose timestamp is more than two hours in the future.
A node will not accept a block unless it has a timestamp greater than the median of the previous 11 blocks. In Bitcoin, we call this Median-Time-Past (MTP).
As you mention in your question, difficulty changes are calculated based on the times of the first and last blocks in a 2,016-block difficulty period.
Given the above rules, if all miners agreed, they could simply increment the timestamp by the minimum of one second over MTP for the first 2,015 blocks and then set the time of the last block to two hours in the future. That would basically give them just a small decrease in the difficulty, but think about what happens to the MTP when they add that last slightly-future datapoint: the actual median doesn't change much at all. Actual timestamps are in seconds, but here's a set of 11 timestamps expressed as a delta in days from the present time:
[-13, -13, -13, -13, -13, -13, -13, -13, -13, -13, 0]
The median of the above is -13, meaning that after miners create the slightly-future-time block at the end of the difficulty period, they don't need to move their timestamps forward more than the minimum one second---so the next difficulty period starts out -13 days.
At the end of the next difficulty period, miners again move the timestamp forward as far as possible, so the protocol thinks it took 28 days to mine the blocks---half the expected speed---and so decreases the difficulty by about half. Now the values used for MTP look like:
[-27, -27, -27, -27, -27, -27, -27, -27, -27, -27, 0]
So miners can continue keeping timestamps far in the past and repeat the attack, lowering difficulty every period until the point where it takes them less than 2,016 seconds to produce 2,016 blocks, at which point they can't lower difficulty any further because the MTP rule requires the time to increase by a minimum of one second over the median each block.
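The arithmetic above can be made concrete with a toy model of the retarget rule. This is a simplified sketch, not Bitcoin Core code: real difficulty works on 256-bit targets, but the inverse scaling and the protocol's factor-of-4 clamp are modeled the same way.

```python
TARGET_PERIOD = 2016 * 600  # expected seconds per 2,016-block period (~14 days)
DAY = 86400

def retarget(old_difficulty, first_ts, last_ts):
    # Difficulty scales inversely with the measured period length,
    # clamped to a factor of 4 in either direction as in Bitcoin.
    actual = last_ts - first_ts
    actual = max(TARGET_PERIOD // 4, min(actual, TARGET_PERIOD * 4))
    return old_difficulty * TARGET_PERIOD / actual

# Honest timestamps: the period really took ~14 days, difficulty unchanged.
print(retarget(100.0, 0, 14 * DAY))          # prints: 100.0

# Warped: first block stamped ~13 days in the past, last block near now,
# so the period *looks* like ~28 days and difficulty is roughly halved.
print(retarget(100.0, -13 * DAY, 15 * DAY))  # prints: 50.0
```

Each repetition of the attack feeds the next: the halved difficulty means the next 2,016 blocks arrive faster in real time, while the timestamps stay pinned in the past.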
Now, your main question was how this attack can work without collusion by a majority of miners. Now that you've seen how the attack works with all miners participating, it should be clear that the median-time selection can allow an attacking miner who's lucky enough to find blocks reliably to prevent the median time from jumping forward to an honest value. For example, imagine these are the times of the previous 11 blocks, in block chain sequence:
[-27, 0, -27, 0, -27, 0, -27, 0, -27, 0, -27]
If you sort those numbers to find the median, it's -27 even though 5/11ths (45%) of the hashrate is mining accurately. But wait, doesn't that mean the attacking miner has 55% of the hashrate? Maybe not, for a large miner with about 30% or more of the hashrate could obtain an advantage over other miners using a selfish mining attack, or the miner could simply threaten to attempt to make stale ("orphan") other miners' blocks that have accurate timestamps, causing those honest miners to earn less income.
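The median selection in that example is easy to verify; the day-offsets below are the hypothetical values from the sequence above:

```python
# Alternating attacker (-27 days) and honest (0) timestamps, in chain order.
times = [-27, 0, -27, 0, -27, 0, -27, 0, -27, 0, -27]
mtp = sorted(times)[5]  # median of 11 values
print(mtp)  # prints: -27 -- six of eleven stamps are old, so MTP stays back
```

Because the attacker authored six of the eleven blocks, the sorted middle element is always one of his old timestamps, regardless of how accurate the other five are.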
I myself don't consider the attack particularly likely on Bitcoin because it's slow to execute and publicly obvious, but it's something protocol designers do need to keep in mind for when they change parameters, as those changes could make the attack easier to execute.
re: "it's something protocol designers do need to keep in mind for when they change parameters": a perfect example of this is the recent attack against Verge (XVG), where a different algorithmic setup allowed a timewarp attack to be more potent. David has another great answer on this here: https://bitcoin.stackexchange.com/questions/75438/what-prevents-similar-time-warp-attacks-in-bitcoin-as-happened-to-verge
It seems wrong for a node to accept: [-13, -13, -13, -13, -13, -13, -13, -13, -13, -13, 0]? Isn't the last block "more than 2 hours in the future" from the previous block? It seems it's 13*24 = 312 hours in the future. The protocol's notion of time comes from the timestamps of the blocks and the time reported by peer nodes.
@dbkeys by "2 hours in the future", I mean by the node's own clock (e.g. its host system's clock). For example, it's 2018-10-03 11:00 UTC as I write this; my node will accept any otherwise-valid block right now with as timestamp up to 2018-10-03 13:00, but will not currently accept an otherwise-valid block with a timestamp of 13:01. If we wait a minute, then my node would accept the 13:01 block.
"they could simply increment the clock the minimum amount of one second MTP for the first 2,015 blocks..." It's not clear to me what exactly is going on. What clock is being incremented 1 second, (is it the system clock, or the timestamps ) ? how many times is it being incremented one second ?
"Technically there's an off-by-one error there, but that's not important here." -> I think it's important. If the difficulty re-target were overlapping (as they would if it was based on the last 2016 blocks instead of 2015), the timewarp attack wouldn't be possible. Of course miners could still hold off timestamps in the past but it would be much less effective.
@AntoinePoinsot edited to correct this, thanks so much for your comment here and for walking me through it in more detail in a chat!
I was reviewing the Testnet 4 BIP yesterday and Pieter made me realize that the minimum timestamp increase of blocks is not 1 second per block, but 1 second every sixth block.
@Murch I've described that in other places, e.g. https://bitcoinops.org/en/newsletters/2019/03/05/#the-time-warp-attack , but I think it would be good for somebody to go into detail on that, so I created https://bitcoin.stackexchange.com/questions/123698/how-many-blocks-per-second-can-be-sustainably-be-created-using-a-time-warp-attac
A time warp attack occurs when a miner manages to put incorrect timestamps into blocks in order to reduce the subsequent mining difficulty.
By inserting wrong timestamps, a miner, or a group of miners with a majority of the hash rate, can deceive the difficulty adjustment algorithm (DAA) into lowering the difficulty, causing blocks to be mined faster and therefore making mining more profitable.
Per the consensus rules, Bitcoin won't accept a block with a timestamp more than 2 hours in the future, and it won't accept a block unless its timestamp is greater than the median time past (MTP) of the previous 11 blocks.
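The MTP rule can be sketched in a few lines (illustrative only, not consensus code; the function name is mine):

```python
import statistics

def median_time_past(last_11_timestamps):
    # MTP is the median of the previous 11 block timestamps
    return int(statistics.median(last_11_timestamps))

# Eleven timestamps one second apart: the MTP is the middle (6th) one,
# so the next block only has to carry a timestamp greater than that
# middle value -- it can legally sit well behind the newest block's time.
timestamps = list(range(1000, 1011))  # 1000 .. 1010
print(median_time_past(timestamps))  # 1005
```

Note that because the MTP lags the chain tip by five blocks, a miner has considerable room to hold timestamps back.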
A miner can exploit these consensus rules across a difficulty adjustment period of 2,016 blocks by setting the timestamp of each mined block to be merely 1 second greater than the MTP of the past 11 blocks for the first 2,015 blocks, and then stamping the final block with the current time + 2 hours.
This has no drastic effect in the first epoch. In the subsequent difficulty adjustment period, however, the miner can manipulate the timestamp of the first block (block 2,017) and set it to be just 1 second greater than the MTP (determined by block 2,011). This deceives the DAA into believing that the first block of the second difficulty adjustment period was mined ~13 days ago. The miner then stamps the final block of the second period with the current time + 2 hours, making it appear that it took ~28 days to mine the 2,016 blocks of the second period, which leads to a decrease in difficulty of ~50%.
It is important to note that the DAA considers only the timestamps of the first and the last blocks of a difficulty adjustment period when adjusting difficulty.
The timestamp of the first block in the second difficulty adjustment period (block 2,017) need only be a minimum of 1 second greater than the MTP of the previous 11 blocks (the timestamp of block 2,011, the 6th-previous block), which is ~13 days in the past, while the timestamp of the last block of the second period is C + 2 (current time + 2 hours), thus deceiving the DAA into believing that it took ~28 days to mine all the blocks of the second difficulty adjustment period.
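The resulting difficulty drop can be sketched with a toy retarget calculation (illustrative only; the two-week target and 4x clamp match the protocol, the function name is mine):

```python
# Toy model of the difficulty retarget: the DAA compares the timestamps of
# the first and last blocks of a 2,016-block period against a two-week target.
TARGET = 14 * 24 * 3600  # two weeks, in seconds

def next_difficulty(old_difficulty, first_ts, last_ts):
    # Bitcoin clamps the measured timespan to [TARGET/4, 4*TARGET]
    actual = min(max(last_ts - first_ts, TARGET // 4), TARGET * 4)
    return old_difficulty * TARGET / actual

now = 1_700_000_000
# Honest period: first and last blocks two weeks apart -> difficulty unchanged
print(next_difficulty(100.0, now - TARGET, now))      # 100.0
# Warped period: first block stamped ~two weeks behind real time, last block
# stamped at real time -> the period appears to span ~28 days -> difficulty halves
print(next_difficulty(100.0, now - 2 * TARGET, now))  # 50.0
```

Repeating the trick every period lets the attacker keep ratcheting difficulty down while blocks arrive ever faster.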
|
STACK_EXCHANGE
|
Native escalation module integrated in your Chatbot
Many of you tell us that chatbots are great for helping you improve your business and customer satisfaction on a daily basis. However, as you know, a chatbot sometimes needs a little human help for very sensitive or high value-added requests. That's what the chatbot-to-human transfer feature is for, both on Facebook Messenger and in the chatbots for your websites!
When to use the transfer feature?
Sometimes, the request made by the user is sensitive or has a very high added value. Imagine a user chatting with the chatbot of Nesla, a fictional electric car brand. The user wants to buy the most expensive model. It might be more prudent for the brand to transfer to a sales manager.
Likewise, if I have a problem with a lost parcel ordered from Zamajohn, a famous fictitious e-commerce site, I might be better off letting a human handle this complex and sensitive situation.
How to activate the transfer feature?
It’s very simple: the transfer is a function like any other in the BOTNATION AI chatbot tool. You just need to enable it anywhere in your chatbot.
No complicated settings. As soon as a user triggers this feature, the bot administrator and possibly other people (added in “advanced settings”) receive an email notification with a link to access the conversation. At this point, your chatbot is paused. You can customize the message displayed to the user and the number of hours allowed for a response, after which the chatbot takes over again. Of course, you can extend this period as you wish.
Last but not least, you can also find the escalations in progress at any time and put a user back in touch with the chatbot through the conversation tab of your interface.
There is also nothing to stop you from conditioning the transfer to a human on the days or hours when human support is available.
As a short video is sometimes better than a long speech, I invite you to watch the one below:
Advanced mode of escalation to human
As you know, we evolve the platform daily for better functions and faster handling. We realized that this first version did not suit all use cases, particularly for companies handling a significant volume of tickets. That’s why we’ve upgraded the function into a more powerful tool. Discover this new version now!
Frequently asked questions
Why do I see a message that says it doesn’t work?
This is normal because the transfer function is only activated when your chatbot is online. If you are on the test page (“Botnation Preview”) it will not work.
Can I customize time slots if my human service is not available?
Yes, you just have to follow this tutorial, which gives an example of conditioning the transfer automatically on days and hours.
Is the transfer to humans limited to Facebook?
Not at all: web chatbots can also escalate to a human via Botnation’s web messaging, through the “conversations” tab of your chatbot.
Do I have to be the only person to respond to requests to escalate to human?
You can have two collaborators working on escalations to a human and receiving notifications. Go to Settings > Advanced settings, then to the “Transfer to human” block. You can add another collaborator (you can only add one person in addition to yourself). Enter the email address of your collaborator and validate with the Enter key. That’s it! The two of you can now manage the requests.
|
OPCFW_CODE
|
Chris Dracket responded to one of my tweets the other day, saying that “OpenID should be dead… it’s way over-rated”. I’ve of course heard plenty of criticisms of OpenID, but hadn’t really heard that it was “overrated” (which implies that people have a higher opinion of OpenID than it merits).
Intrigued, I replied, asking him to elaborate, which he did via email:
I don’t know if overrated is the right word.. but I just don’t see OpenID ever catching on.. I think the main reason is that its too complex / scary of an idea for the normal user to understand and accept.
In my opinion the only way to make OpenID seem safe (for people who are worried about privacy online) is if the user has full control over the OpenID provider. While this is possible for people like you and me, my mom is never going to get to this point, and if she wants to use OpenID she is going to have to trust her sensitive data to AOL, MS, Google, etc. I think that people see giving this much “power” to a single provider as scary.
Lastly I think that OpenID is too complex to properly explain to someone and get them to use it. People understand usernames and passwords right away, and even OAuth, but OpenID in itself I think is too hard to grasp. I dunno, just a quick opinion.. I think there is a reason that we don’t have a single key on our key rings that opens our house, car, office and mailbox, not that that is a perfect/accurate analogy, but its close to how some people I’ve talked to think OpenID works.
Rather than respond privately, I asked whether it’d be okay if I posted his follow-up and replied on my blog. He obliged.
To summarize my interpretation of his points: OpenID is too complex and scary, potentially too insecure, and too confined to the hands of a few companies.
The summary of my rebuttals:
- OpenID will become a necessary convenience in cloud computing.
- OpenID can be incrementally secured and, combined with OAuth, helps to defeat the password-anti-pattern.
- OpenID is about more than just accounts and fewer passwords — it’s a building block for online identity, and therefore personal agency for web citizens.
OpenID should not be judged by today’s technological environment alone, but rather should be considered in the context of the migration to “cloud computing”, where people no longer access files on their local harddrive, but increasingly need to access data stored by web services.
All early technologies face criticism based on current trends and dominant behaviors, and OpenID is no different. At one time, people didn’t grok sending email between different services (in fact, you couldn’t). At one time, people didn’t grok IMing their AOL buddies using Google Talk (in fact, you couldn’t). At one time, you had one computer and your browser stored all of your passwords on the client-side (this is basically where we are today) and at one time, people accessed their photos, videos, and documents locally on their desktop (as is still the case for most people).
Cloud computing represents a shift in how people access and share data. Already, people rely less and less on physical media to store data and more and more on internet-based web services.
As a consequence, people will need a mechanism for referencing their data and services as convenient as the c: prompt. An OpenID, therefore, should become the referent people use to indicate where their data is “stored”.
An OpenID is not just about identification and blog comments; nor is it about reducing the number of passwords you have (that’s a by-product of user-centered design). Consider:
- if I ask you where your photos are, you could say Flickr, and then prove it, because Flickr supports OpenID.
- if I ask you where friends are, you might say MySpace, and then prove it, because MySpace will support OpenID.
- if you host your own blog or website, you will be able to provide your address and then prove it, because you are OpenID-enabled.
The long-term benefit of OpenID is being able to refer to all the facets of your online identity and data sources with one handy — ideally memorable — web-friendly identifier. Rather than relying on my email addresses alone to identify myself, I would use my OpenIDs, and link to all the things that represent me online: from my resume to my photos to my current projects to my friends, web services and so on.
The big picture of cloud computing points to OpenIDs simplifying how people access, share and connect data to people and services.
I’ve heard many people complain that if your OpenID gets hacked, then you’re screwed. They claim that it’s like putting all your eggs in one basket.
But that’s really no different than your email account getting hacked. Since your email address is used to reset your password, any or all of your accounts could have their passwords reset and changed; worse, the password and the account email address could be changed, locking you out completely.
At minimum, OpenID is no worse than the status quo.
At best, combined with OAuth, third-parties never need your account password, defeating the password anti-pattern and providing a more secure way to share your data.
Furthermore, because securing your OpenID is outside of the purview of the spec, you can choose an OpenID provider (or set up your own) with a level of security that fits your needs. So while many OpenID providers currently stick with the traditional username and password combo, others offer more sophisticated approaches, from client-side certificates and hardware keys to biometrics and image-based password shields (as in the case of my employer, Vidoop).
One added benefit of OpenID is the ability to audit and manage access to your account, just as you do with a credit card account. This means that you have a record of every time someone (hopefully you!) signs in to one of your accounts with your OpenID, as well as how frequently sign-ins occur, from which IP addresses and on what devices. From a security perspective, this is a major advantage over basic usernames and passwords, as collecting this information from each service provider would prove inconvenient and time-consuming, if even possible.
Given this benefit, it’s worth considering that identity technologies are being pushed on the government. If you’re worried about putting all your eggs in one basket, would you think differently if the government owned that basket?
OpenID won’t force anyone to change their current behavior, certainly not right away. But wouldn’t it be better to have the option to choose an alternative way to secure your accounts if you wanted it? OpenID starts with the status quo and, coupled with OAuth, provides an opportunity to make things better.
We’re not going to make online computing more secure overnight, but it seems like a prudent place to start.
Personal agency for web citizens
Looking over the landscape of existing social software applications, I see very few (if any) that could not be enhanced by OpenID support.
OpenID is a cornerstone technology of the emerging social web, and adds value anywhere users have profiles, accounts or need access to remote data.
Historically, we’ve seen similar attempts at providing a universal login account. Microsoft even got the name right with “Passport”, but screwed up the network model. Any identity system, if it’s going to succeed on the open web, needs to be designed with user choice at its core, in order to facilitate marketplace competition. A single-origin federated identity network will always fail on the internet (as Joseph Smarr and John McCrea like to say of Facebook Connect: “We’ve seen this movie before”).
As such, selecting an identity provider should not be relegated to a default choice. Where you come from (what I call provenance) has meaning.
For example, if you connect to a service using your Facebook account, the relying party can presume that the profile information that Facebook supplies will be authentic, since Facebook works hard to ferret out fake accounts from its network (unlike MySpace). Similarly, signing in with a Google Account provides a verified email address.
Just like the issuing country of your passport may say something about you to the immigration official reviewing your documents, the OpenID provider that you use may also say something about you to the relying party that you’re signing in to. It is therefore critical that people make an informed choice about who provides (and protects) their identity online, and that the enabling technologies are built with the option for individuals to vouch for themselves.
In the network model where anyone can host their own independent OpenID (just like anyone can set up their own email server), competition may thrive. Where competition thrives, an ecosystem may arise, developed under the rubric of market dynamics and Darwinian survivalism. And in this model, the individual is at the center, rather than the services he or she uses.
This is the citizen-centric model of the web, and each of us is a sovereign citizen of the web. Since I define and host my own identity, I do not need to worry about services like Pownce being sold or I Want Sandy users left wanting. I have choice, I have bargaining power, and I have agency, and this is critical to the viability of the social web at scale.
OpenID is not overrated, it’s just early. We’re just getting started with writing the rules of social software on the web, and we’ve got a lot of bad habits to correct.
As cloud computing goes mainstream (evidenced in part by the growing popularity of Netbooks this holiday season!), we’re going to need a consumer-facing technology and brand like OpenID to help unify this new, more virtualized world, in order to make it universally accessible.
Fortunately, as we stack more and more technologies and services on our OpenIDs, we can independently innovate the security layer, developing increasingly sophisticated solutions as necessary to make sure that only the right people have access to our accounts and our data.
It is with these changes that we must evaluate OpenID — not as a technology for 2008’s problems — but as a formative building block for 2009 and the future of the social web.
|
OPCFW_CODE
|
Salt Alert is a mobile app concept that helps parents track the sodium level in their kids' diets. A high-sodium diet is one of the causes of childhood obesity.
Domain to user research
Process to requirements
Concept to final design
User testing to Specification
Scholarly/Non-scholarly research summary
Stakeholder Interview Questions
Target User Group
During the initial research phase our focus was to understand the problem space of childhood obesity. We did this by reading scholarly articles and interviewing possible mentors/stakeholders for the project. Following this, we did extensive user research by interviewing 16 parents (with children aged 5-10 years) residing in the city of Bloomington, Indiana. Parents were chosen as the target user group because scholarly articles suggest parent-based intervention is the most effective in lowering the possibility of childhood obesity. Easy access to parents was another factor in their selection. We concluded this phase by deciding that parents would be our primary user base, and developed a few personas to help us understand the everyday life of parents.
B.J. Fogg's Design Process
A simple behavior to target
Systemic design concepts
During the requirements phase we came across B.J. Fogg's design process, which helped guide this project. After discussion with stakeholders, we decided to target a simple behavior: low-sodium food for children. At this point we brainstormed system-level concepts; uniqueness was one of the key criteria we kept in mind while brainstorming. Exemplar research helped us understand existing applications across different technological media, as well as the possible rationale other designers had while designing them. Lastly, we documented user scenarios and corresponding requirements. These scenarios were mainly derived from the storyboards showcased in the next phase of the process, i.e. design.
Mobile interaction storyboard
Android design pattern- Source: Google
Wireframe showing one of the workflow
The design phase started with storyboarding the user's interaction with the mobile application. A workflow was then prepared to consolidate the user's expectations from the storyboards and user scenarios. At this point we consulted the Android 4.1 design patterns to understand how existing patterns and components recommended by Google could be used while designing wireframes. Using established patterns while designing an application lets designers concentrate on the bigger challenges faced by users rather than reinventing the wheel for every single design problem.
Formative user testing in progress
Task and Task completion timing- Quantitative data
Functional specification and next steps
A paper prototype was used to test the concept and various workflows with 5 parents. Formative usability testing was performed, and the findings were discussed with stakeholders to assess the severity of issues. The main pain points in the workflow were editing the child's profile and sharing sodium levels with the family doctor. A functional specification was documented to give developers an overview of the application. The pricing of this application and long-term usage statistics would help us see whether it succeeds in lowering the sodium level in kids' diets.
|
OPCFW_CODE
|
Excalidraw is described by its creators as a “virtual whiteboard for sketching hand-drawn like diagrams”. I’d describe it slightly differently; I think I’d go with “a drawing tool that’s so easy to use that you spend your time making sure the drawing is effective, rather than ‘pixel-fiddling’”.
Excalidraw is great for quickly creating diagrams (you can see lots of great examples at the bottom of this page), but it can also be used for “sketch” style drawings, like the drawing below of my rowing setup.
Some of Excalidraw’s features:
- runs in the browser
- supports real-time collaboration
- supports multiple export formats
- is secure (end-to-end encrypted)
- is vector based (so scales to all sizes)
- is very easy to learn (no “learning” required)
- is open source
- is free to use
How I use it
As a Product Manager, I’m often having to define what apps look like, how they’re structured, what components they include etc. and I used to do this using hi-fidelity mock-up tools such as AxureRP (or even Adobe illustrator!). And to be honest, I wasted a lot of time obsessing over the size, colour and exact location of various page components. Once my company got serious about UX and started embedding dedicated UX resource within our squads, then it was no longer my responsibility to make my mock-ups pixel perfect.
This is where Excalidraw really shines (it’s hard to be pixel perfect when there are no straight lines!), as its focus is on a “hand-drawn” aesthetic, it lends itself well towards the creation of “quick-and-dirty” sketches where the focus is to portray an idea or concept, rather than to represent a final pixel perfect design. Sketches can be created and altered quickly, and all from within a browser (important for those of us who work for large corporations and don’t have the admin rights required to install apps on our PCs).
The mock-up below of a news search page took about 15 mins to produce, but the sketch does highlight quite a few important features about the search page concept.
In addition to using it at work for mock-ups, it’s also my go-to drawing app for this blog, as you can see in my post about my lockdown days.
Libraries make creating diagrams even quicker
As I’ve mentioned, one of the benefits of Excalidraw is the speed, but the existence of a small (but growing) community of users who share their Libraries makes it even faster to create diagrams.
I really can’t recommend Excalidraw strongly enough, it’s a great tool, built and maintained by a group of enthusiastic developers who are actively engaging with their users. But, the only way to see how good it really is, is to go and sketch something :-)
|
OPCFW_CODE
|
The Java programming language, celebrated for its versatility and widespread adoption, has a rich history that traces back to its inception in the mid-1990s. From its humble beginnings to its status as a fundamental player in the world of software development, this article takes you on a journey through the history of the Java language.
Birth of a Vision Early 1990s
The story of Java began at Sun Microsystems in the early 1990s. A team of engineers, led by James Gosling, aimed to create a programming language that could be used for building applications that could run on a variety of devices, irrespective of the hardware and operating system. The original goal was to develop a language for interactive television, but it soon became apparent that Java had broader potential.
“Green Project” Emerges 1991
The project was initially referred to as the “Green Project,” and its early versions included features that set Java apart from existing programming languages. Notably, the concept of “Write Once, Run Anywhere” (WORA) was a driving force. This idea aimed to enable developers to write code once and have it run on various platforms without modification.
Public Debut 1995
In May 1995, Sun Microsystems officially announced Java to the world. Java 1.0 was released, and it gained attention for its portability, safety features, and user-friendly syntax. The language’s ability to run “applets” within web browsers, enhancing web interactivity, added to its allure.
Java Applets and Web Revolution
Java applets, small applications that could run within web browsers, fueled Java’s popularity in the mid-1990s. This led to a vision of interactive and dynamic web content. However, as browser plugins evolved, the use of Java applets declined, and the focus shifted toward server-side programming.
Java 2 (J2SE 1.2 and Beyond)
Java 2, released in 1998, marked a significant milestone. It introduced the Swing graphical user interface toolkit, enhancing the visual aspects of Java applications. With successive versions (J2SE 1.3, 1.4, etc.), Java continued to evolve, introducing new features, performance improvements, and APIs.
The Java Platform, Enterprise Edition (Java EE)
Recognizing the need for a standardized platform for enterprise applications, Java EE (formerly J2EE) was introduced. It provided tools and frameworks for building scalable and robust web applications. Java EE evolved through several versions, addressing modern enterprise needs.
The Modern Java Landscape
In recent years, Java has maintained its relevance through regular updates and enhancements. With the introduction of modular programming in Java 9, a focus on performance improvements, and a more frequent release cadence, Java remains a language of choice for a wide range of applications, from web development to mobile apps and beyond.
The history of the Java programming language is a testament to the power of visionary thinking and innovation. From its early days as a project aimed at interactive television to its role as a versatile and widely used programming language, Java has left an indelible mark on the world of software development. As technology continues to evolve, Java’s history continues to be written, with each chapter reflecting the dynamic nature of the programming landscape.
|
OPCFW_CODE
|
Red Hat Bugzilla – Bug 40124
problems with lilo after install of red hat 7.1. Boot loader in partition past 1024 cyl.
Last modified: 2007-04-18 12:33:10 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.77 [en] (X11; U; Linux 2.4.2-2 i686)
Description of problem:
Computer DELL Latitude C600 laptop with 20GB HD. First partition is NTFS
win2000. Size: 4 GB. Win2000 boot loader in MBR. ext2 Linux /hda2=/boot
(23333KB) and /hda5=/ (remaining) partitions follows. Upgrade from RH7.0 to
7.1 or fresh install of 7.1. Selected "linear" in lilo. At end of install
installer stated that lba32 is the correct choice (there is no lba32
button, but un-selecting "linear" does just this: bad dialog design see bug
14351 and related). Then could not boot linux from win2000 boot loader.
Tried to modify lilo configuration but failed. 1: attempt to boot from
/boot using rescue floppy fails: message about partitions improperly
unmounted and needing verification. 2: Could see partitions when booting from
RH 7.1 CD binary 1 / linux rescue. Verified the partitions from there.
Still same problem as before. CDs bought shrink-wrapped (not downloaded).
Problem disappears when un-selecting "linear" during installation.
Steps to Reproduce:
0.Dell latitude laptop with cd-rom in bay only: attach floppy drive to
parallel port to allow creation of emergency boot disk (likely unimportant
to reproduce the bug if you have already such a floppy or an internal
floppy drive but that's how it happened on my machine).
1.Win2k in first partition exceeding 1024 cyl & Win2k boot loader in MBR to
start lilo (not sure it's important) /boot in partition past 1024 cyl
2.Install RH7.1 from cd's select linear in lilo configuration
3.Try to boot new installation.
4.Try to use emergency boot disk.
5.Try to use boot from RH7.1 binary 1 and verify partitions.
6.Install RH7.1 (re-format /boot and / partitions) un-selecting "linear" in lilo configuration
Actual Results: 2. => installer at end of install complains that lba32
should be used
3 => lilo failure
4 => fails msg about checking the partitions
5 => still can't boot same msg as before (or maybe I missed something...)
6 => works but old data lost
Expected Results: I wish that the program would have used lba32 by default
for IDE, and linear for SCSI.
I wish I had been able to recover the partition (but maybe that was
possible and I missed something, in which case I wish there was a HOW-TO).
We don't have this hardware, so I can't really reproduce it. I'm trying to
understand exactly what the problem is...
Is the problem that selecting 'linear' in the LILO screen causes the system to be unbootable?
Selecting "linear" in the LILO screen indeed causes the system to be unbootable.
In addition, when selecting "linear" in the LILO screen, the installation program
gives a warning that lba32 should have been used.
Are things ok if you don't select 'linear'?
Everything is ok when linear is not selected. That's how I've solved this
issue, and the laptop has run very reliably since then.
Ok. We've made some changes to the bootloader that should avoid this problem in the future.
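For reference, the distinction at issue is a single keyword in /etc/lilo.conf; a minimal sketch (partition layout taken from the report, other values illustrative):

```
# /etc/lilo.conf (sketch; partition layout from the report, rest illustrative)
boot=/dev/hda2      # install LILO to the /boot partition, which the
                    # Windows 2000 boot loader in the MBR chain-loads
lba32               # LBA32 addressing works for partitions past cylinder 1024
# linear            # old linear sector addressing; the failing choice here

image=/boot/vmlinuz
    label=linux
    root=/dev/hda5
    read-only
```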
|
OPCFW_CODE
|
Our recent Digital Drop-In, Brett Tweedie, took an interest in researching New South Wales through its crimes and has represented the results in a contemporary, data visualisation that focuses on the topics of people, places and things. We are excited to show you the results of his research in this innovative data visualisation We Are What We Steal.
NSW, and Sydney in particular, has a rich history of criminality. But crimes do not occur in a vacuum; they are a reflection of the time and place in which they occur. Data visualisation designer and DX Lab Digital Drop-In, Brett, has been researching the crimes recorded in the New South Wales Police Gazette & Weekly Record of Crime and the detailed descriptions that accompany them. His research question has been about viewing some of the ways in which NSW changed over the period in which the gazette was published.
‘This data visualisation looks at the almost 20 million words that were recorded in 41 years of the New South Wales Police Gazette and Weekly Record of Crime from the beginning of 1860 up until the end of 1900 to see what it shows about how the people, places, and things changed in NSW over that period. What was valued? How did people dress? Where did people live? What items did they own?’ — Brett Tweedie
Brett’s research aimed to reveal aspects about a range of subjects, including: fashion, technology, health, wealth, population growth, language, and racism by looking at the descriptions of what was stolen, by whom, where, and when.
Burglaries, stealing from Premises &c. Redmyre.—Stolen, about 8 a.m. the 30th ultimo, from the residence of Mr. Laidley Mort, Redmyre Road, Redmyre,—A silver Waltham watch, maker’s name and number unknown, steel Albert attached by split rings, and a silver breast pin. By a man about 40 years of age, dark whiskers; dressed in light blue shirt or jumper, dirty moleskin trousers, and old soft felt hat. Can be identified.
Design and Methodology
The pale colour palette design has revealed the data in a new way, allowing the user to gain insights into crimes that may not necessarily have been easily found in the gazettes.
For Brett’s methodology he took the simple approach, using the Trove API, to count how many times a word or phrase (known as an n-gram) appeared in a given year, and to see how that changed over time.
‘I didn’t look for context, so with few exceptions, the code makes no distinction between whether, for example, it was a Panama hat that was stolen, or the Panama hat was worn by the (alleged) suspect. The same approach was taken with places: it didn’t matter whether the crime took place in Dubbo, or the person was originally from Dubbo, or the person was ultimately sentenced in Dubbo, merely that ‘Dubbo’ appeared in the gazette in a given year.
These tallies are then normalised against the total number of words in the gazette for that year, so that words or phrases that appear in years where more crimes and events were recorded (and hence there are more words) aren‘t given undue weight. The results are then graphed to show the changes over the years.’
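The counting-and-normalising step Brett describes can be sketched like this (illustrative only; the real project fetched the gazette text via the Trove API, and the function name and toy data are mine):

```python
def ngram_trend(text_by_year, phrase):
    """Occurrences of `phrase` per year, normalised by that year's word count."""
    trend = {}
    for year, text in text_by_year.items():
        words = text.lower().split()
        count = text.lower().count(phrase.lower())
        # Normalise so busier years (more recorded crimes, hence more words)
        # aren't given undue weight
        trend[year] = count / len(words) if words else 0.0
    return trend

gazette = {  # toy stand-in for 41 years of gazette text
    1860: "stolen one silver watch stolen near dubbo",
    1890: "stolen a bicycle at dubbo dubbo races",
}
print(ngram_trend(gazette, "dubbo"))
```

Graphing these per-year ratios then shows how a word or phrase rose and fell over the period.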
You can read more about Brett’s methodology here.
Dealing with Historical Data
A significant issue that arises when working with historical data is what it reveals about the time it was collected. As a technologist extracts data from historical records, to show in a contemporary and new experience, it can bring up issues that may be offensive and confronting in today’s context. As there is little curation over this particular data set, but rather a decision based on the design and how the data is viewed, the way this information is experienced can be very problematic. We need to reveal the information, in this case relating to Aboriginal and Torres Strait Islander people, that was collected by police from c 1860 –1930. We are mindful and concerned however about how this gets interpreted today.
The Library’s Indigenous Engagement Manager, Damien Webb reveals:
‘Perhaps the biggest issue when working with colonial-era collections (and in particular those of the police and government) is how easy it can be to accidentally recreate or reinforce racist ideas and biases. The idea that uncurated data is neutral data remains deeply problematic and we now know that some sources require additional contexts to be considered culturally safe. To bring awareness to these biases in a way which clearly illustrates the limitations of colonial records in regards to Aboriginal identity and agency is an ongoing challenge.’
We hope you find this data visualisation of the New South Wales Police Gazette & Weekly Record of Crime interesting. Read more about this project in this article on The Conversation. The complete code for this experiment has been open sourced on GitHub.
|
OPCFW_CODE
|
Open Source, Artificial Intelligence, and LPI
I'm going to lead with the punchline on this one. I believe that LPI should invest in providing a certification path for some kind of machine learning, specifically geared to open source development in artificial intelligence.
Whatever you may think about automation and artificial intelligence from the perspective of what it will eventually mean for humanity, there's no question that some form of artificial intelligence is present in every aspect of our lives. Those of us who own one or more Google Home or Alexa speakers know full well how much AI touches our lives. For us, it's an ever-present companion.
Smart systems like Google's Assistant are built using TensorFlow ( https://tensorflow.org ), an open source programming library that has become a kind of go-to set of tools for anyone building machine learning, deep learning, natural language processing (as in your smart speaker), or neural network based applications. TensorFlow-based applications are programmed using Python, another free and open source development platform.
Speaking of Python, there's also PyTorch ( https://pytorch.org ), a set of deep learning Python libraries developed, this time, by Facebook, and built on Torch, yet another machine learning toolkit. Its primary purposes are computer vision, facial recognition, and natural language processing.
Keep in mind that there are already plenty of AI and ML tools out there, built with and distributed as open source. We also have organizations that are dedicated to AI and ML being entirely open. For instance . . .
H2O.ai at https://www.h2o.ai/
AI.Google at https://ai.google
OpenAI at https://open.ai
While I understand that the focus for LPI has been to champion open source and to help build the futures and careers of Linux systems administrators, including DevOps professionals, machine learning and artificial intelligence tools are making their way into every aspect of these professions. In fact, the smart sysadmin has always sought to use the tools at their disposal to automate as much of the administration process as is possible with the available technology.
As systems get more complex and distributed across both the physical and virtual world, a simple hands-on approach is no longer practical. Automation is key to keeping things running smoothly. Even so, simply relying on these automated systems to spit out interpreted logs doesn't help if there isn't someone there to respond should something catastrophic happen. That's why we've been automating a variety of responses based on selected events. We can tell our systems, "Only call me if it's really important. Only tell me about those things that actually require my intervention."
Trouble is, those complex distributed systems I was talking about are getting more complex, and more distributed. At some point, human intervention, by even the best and most highly trained human, becomes a bottleneck.
Have you heard of DeepMind? This machine learning startup was bought by Google (technically Alphabet, but I still think of the whole thing as Google) in 2014. In 2015, its AlphaGo program beat Fan Hui, the European Go champion, five games to zero, demonstrating that a machine learning system could learn to win a game so complex, with so many combinations and permutations, that it was deemed nigh impossible for a computer to win.
AlphaGo continued to flex its machine learning muscles until, in 2017, it beat Ke Jie, the reigning world champion of Go.
Later that same year, a next-generation system, AlphaGo Zero, taught itself to play Go in less than three days, then went on to beat AlphaGo 100 games to zero.
Fast forward to 2018. Alphabet (who I'll probably just keep thinking of as Google) turned DeepMind loose on its monolithic data centres, giving the algorithm complete control over the cooling of those data centres, providing Alphabet with a 40% savings on ventilation, air conditioning, and whatever other cooling measures might be used. No humans are involved or required. This is data centre infrastructure management, fully automated.
It is, in fact, the logical end goal of every sysadmin.
So, am I suggesting that LPI should get behind and provide certification for a technology that will, if all goes well, do away with the need for systems and network administrators? In a word, yes. The next logical question is why?
Since full automation is the logical end game for what we've come to think of as systems administration, and since pretty much all of this smart technology runs on Linux servers and is built on open source software and tools, we must embrace the technology and direct it, making sure that intelligent machines have our collective human best interests at heart. I don't know how long it will be before the last sysadmin retires, but that day is coming whether we are a part of it or not. It behooves us to ensure that when fully autonomous systems take over, we have done everything we can to make them operate on safe and ethical principles.
Furthermore, as the need for classic administration fades into history, it is those people with the skills to tackle these marvellous new technologies who will benefit from a slightly longer career. For as long as that might last, this will be valuable knowledge indeed.
Needless to say, there must be conflicting opinions on this subject and this is where I turn it over to you. Am I right? Should LPI follow a path to Artificial Intelligence and Machine Learning Certification? The first one could be AIML-1 in the spirit of past course naming conventions. Perhaps I've read the tea leaves wrong and the age of human admins is far from over. Either way, I open the floor to you and look forward to your comments.
|
OPCFW_CODE
|
References for "different" proofs of the spectral theorem for compact operators
It is with some sort of reverential fear that I've come here to write. I've been reading you for a long time, but writing is another story... In any case, I suppose it is too late now to back out!
So, I am looking for (as many as possible) references to known "different" proofs of the classical spectral theorem for compact (linear) operators (on complex Banach spaces), with a special focus on the point where we are to show that all non-zero elements of the spectrum are, in fact, eigenvalues. I am well aware of the "usual one" (as basically drafted in this Wikipedia entry; just look at the ideas, since at present the proof is flawed in some parts, as outlined by Prof. Johnson below in the comments), and I have tidings of a proof based on the Fredholm alternative (though I don't know any explicit reference in this case). Indeed, I'm wondering if there are some others around. Thanks so much for any clues.
http://terrytao.wordpress.com/2011/04/10/a-proof-of-the-fredholm-alternative/
Thank you, this settles the premise. Now then, what about "different" proofs (and references thereof)?
The proof in Wikipedia implicitly assumes that the unit ball of the Banach space is weakly sequentially compact (i.e., that the space is reflexive) and should be modified.
I must confess that I've read it with little attention. Thank you for pointing this out, BJ. I'm editing the original question accordingly.
Let $T$ be a compact operator on the Banach space $X$ and $\lambda$ a non-zero point in the spectrum $\sigma(T)$ of $T$. Then $\lambda$ is in the boundary of $\sigma(T)$ since $T$ is compact, and hence is an approximate eigenvalue of $T$. Take a net $x_a$ of norm-one vectors in $X$ s.t. $Tx_a$ converges and $\lambda x_a - T x_a$ converges to zero. Since $\lambda \not= 0$, this forces $x_a$ to converge to an eigenvector of $T$ with eigenvalue $\lambda$.
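A spelled-out version of the final limit step (just expanding the argument above, with $y := \lim_a T x_a$):
\[
x_a \;=\; \frac{1}{\lambda}\bigl((\lambda x_a - T x_a) + T x_a\bigr) \;\longrightarrow\; \frac{1}{\lambda}\, y \;=:\; x,
\]
so $\|x\| = \lim_a \|x_a\| = 1$, hence $x \neq 0$, and by continuity $T x = \lim_a T x_a = y = \lambda x$, i.e., $x$ is an eigenvector for $\lambda$.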
Voilà! This is (almost) exactly the proof that I had in mind myself. Any reference?
No; I took your question as a "challenge" and came up with this proof. It is not quite as elementary as the usual proof.
Ah! I had completely missed that... The only difference between your proof and mine is that I'm stating it in the language of $\varepsilon$-pseudospectra, but it makes no real difference, indeed.
There is a proof of the spectral theorem for arbitrary bounded normal operators based on the theory of commutative Banach algebras. Then the general theorem is easily specified for the case of compact operators. For the details see, for example,
K. Maurin, Methods of Hilbert spaces. Warsaw, 1967.
If I understand it correctly, the question is on general compact operators.
Yes indeed, I am basically interested in the case of arbitrary compact (linear) operators on a (complex) Banach space. There is no need for them to be normal. But nonetheless, thank you for the contribution, AK.
I don't know how "different" you will consider this proof, but if you want a quick and efficient exposition I would suggest having a look at Klaus Deimling's Nonlinear Functional Analysis.
|
STACK_EXCHANGE
|
Get the public address & key associated with one of the Ganache accounts using Web3.js?
How do I get the public address & key (not address or account) for one of the accounts that Ganache displays on startup using Web3.js? I need it for a unit test I'm creating that tests transaction signing on the Ganache network, but without using Metamask. When Ganache starts up I see 10 accounts and 10 private keys along with the Mnemonic words, but no public addresses. I am already retrieving the accounts list from Ganache using web3.eth.getAccounts(), but I don't know how to get the public address & key associated with a particular account.
According to this post (question and answers), there is no practical use for the public-key (as far as the user - you, in this case - is concerned). The address, which you already know how to retrieve, is sufficient for all your needs.
Start ganache-cli with the --acctKeys ganache-accounts.json parameter;
it will save all the data about the accounts/keys.
Then use this script to extract the public keys:
#!/usr/bin/env python3
import json
import sys

file = sys.argv[1]
print(file)
with open(file) as f:
    data = json.load(f)
for a in data['addresses']:
    # 'data' holds the public key as an array of byte values; hex-encode it
    pubkey_bytes = data['addresses'][a]['publicKey']['data']
    pubkey = "".join("%02x" % b for b in pubkey_bytes)
    print("Address:%s" % a)
    print("Public Key:0x%s" % pubkey)
    print("Private Key:0x%s" % data['private_keys'][a])
Pass the file name ganache-accounts.json to the script;
it will produce output as follows:
./scripts/ganache-keys.py ./keystore/ganache-accounts.json
./keystore/ganache-accounts.json
Address:0xb32613b32fd6f358259194c0a34e689e7eb9d88c
Public Key:0xf93e5a7cbd7b76cb3e2c1b362039397fbba00a0cbea1eccdf00e0e1e64d575411ae32bc6daf297aba31707fea7f84f20adb82c325fc33d347b54cf5fcd2bd71a
Private Key:0x30963a4383bfb78f0ecf290b1f3879a9c70903481e865f9e5d2df66ced64395d
Address:0x7c06350cb8640a113a618004a828d3411a4f32d3
Public Key:0xdcaa05bb5a82e0d1675d7f9b12b4d1948122500fc35ca7dee2edc99f2a3af2c5d6498f80dfbc0ab7c624793919aee8e4651adeefa7951c96438cad7838fbb9ea
Private Key:0x5641128d7d895f185d91fa2b83dffe646eede097d1a85c3247debc2b75219d92
|
STACK_EXCHANGE
|
# SEIR model class definition
# Dr. Tirthajyoti Sarkar, Fremont, CA
# April 2020
import numpy as np
import matplotlib.pyplot as plt
class SEIR:
def __init__(self,
init_vals=[1 - 1/1000, 1/1000, 0, 0],
params_=[0.2,1.75,0.5,0.9]):
"""
Initializes and sets the initial lists and parameters
Arguments:
init_vals: Fractions of population in the S, E, I, and R categories
params_: Dynamical parameters - alpha, beta, gamma, and rho.
Here the last parameter 'rho' models social distancing factor.
"""
# Initial values
self.s0 = init_vals[0]
self.e0 = init_vals[1]
self.i0 = init_vals[2]
self.r0 = init_vals[3]
# Lists
self.s, self.e, self.i, self.r = [self.s0], [self.e0], [self.i0], [self.r0]
# Dynamical parameters
self.alpha = params_[0]
self.beta = params_[1]
self.gamma = params_[2]
self.rho = params_[3]
# All parameters together in a list
self.params_ = [self.alpha,self.beta,self.gamma,self.rho]
# All final values together in a list
self.vals_ = [self.s[-1], self.e[-1], self.i[-1], self.r[-1]]
def reinitialize(self,init_vals,verbose=False):
"""
Re-initializes with new values
"""
assert len(init_vals)==4,"Four initial values are expected"
assert type(init_vals)==list, "Initial values are expected in a list"
# Initial values
self.s0 = init_vals[0]
self.e0 = init_vals[1]
self.i0 = init_vals[2]
self.r0 = init_vals[3]
if verbose:
print("Initialized with the following values\n"+"-"*50)
print("S0: ",self.s0)
print("E0: ",self.e0)
print("I0: ",self.i0)
print("R0: ",self.r0)
def set_params(self,params_,verbose=False):
"""
Sets the dynamical parameters value
"""
assert len(params_)==4,"Four parameter values are expected"
assert type(params_)==list, "Parameter values are expected in a list"
# Dynamical parameters
self.alpha = params_[0]
self.beta = params_[1]
self.gamma = params_[2]
self.rho = params_[3]
self.params_ = [self.alpha,self.beta,self.gamma,self.rho]
if verbose:
print("Set the following parameter values\n"+"-"*50)
print("alpha: ",self.alpha)
print("beta: ",self.beta)
print("gamma: ",self.gamma)
print("rho: ",self.rho)
def reset(self):
"""
Resets the internal lists to zero-state
"""
self.s, self.e, self.i, self.r = [self.s0], [self.e0], [self.i0], [self.r0]
def run(self,t_max=100,dt=0.1,reset=True):
"""
Runs the dynamical simulation
Arguments:
t_max: Maximum simulation time, e.g. 20 or 100 (can be thought of days)
dt: Time step interval e.g. 0.1 or 0.02, a small value
reset: A flag to reset the internal lists (restarts the simulation from initial values)
"""
if reset:
self.reset()
# Time step array
t = np.linspace(0, t_max, int(t_max/dt) + 1)
# Temp lists
S, E, I, R = self.s, self.e, self.i, self.r
# Temp parameters
alpha, beta, gamma, rho = self.alpha,self.beta,self.gamma,self.rho
dt = t[1] - t[0]
# Loop
for _ in t[1:]:
next_S = S[-1] - (rho*beta*S[-1]*I[-1])*dt
next_E = E[-1] + (rho*beta*S[-1]*I[-1] - alpha*E[-1])*dt
next_I = I[-1] + (alpha*E[-1] - gamma*I[-1])*dt
next_R = R[-1] + (gamma*I[-1])*dt
S.append(next_S)
E.append(next_E)
I.append(next_I)
R.append(next_R)
# Stack results
result = np.stack([S, E, I, R]).T
self.s, self.e, self.i, self.r = S, E, I, R
# Update final values
self.vals_ = [self.s[-1], self.e[-1], self.i[-1], self.r[-1]]
return result
def plot(self,results=None):
"""
Plots the basic results
"""
# Runs a simulation if no results are provided
if results is None:
results = self.run()
# Plot
plt.figure(figsize=(12,8))
plt.plot(results,lw=3)
plt.title('Basic SEIR Model',fontsize=18)
plt.legend(['Susceptible', 'Exposed', 'Infected', 'Recovered'],
fontsize=15)
plt.xlabel('Time Steps',fontsize=16)
plt.ylabel('Fraction of Population',fontsize=16)
plt.grid(True)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.show()
def plot_var(
self,
var,
var_name=None,
show=True):
"""
Plots the given variable
Expects a list or NumPy array as the variable
If var is None, plots the infected fraction
"""
if var is None:
var = self.i
plt.figure(figsize=(12,8))
plt.plot(var,lw=3,c='blue')
plt.title('Basic SEIR Model',fontsize=18)
if var_name is not None:
plt.legend([var_name],fontsize=15)
plt.xlabel('Time Steps',fontsize=16)
plt.ylabel('Fraction of Population',fontsize=16)
plt.grid(True)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
if show:
plt.show()
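# ----------------------------------------------------------------------
# Usage sketch (illustrative, not part of the class): the update rule in
# run() is a forward-Euler discretization of the SEIR equations. The
# stand-alone step below mirrors it, using the class's default parameters.
def seir_step(s, e, i, r, alpha, beta, gamma, rho, dt):
    """One forward-Euler step of the SEIR update used in SEIR.run()."""
    next_s = s - (rho * beta * s * i) * dt
    next_e = e + (rho * beta * s * i - alpha * e) * dt
    next_i = i + (alpha * e - gamma * i) * dt
    next_r = r + (gamma * i) * dt
    return next_s, next_e, next_i, next_r

if __name__ == '__main__':
    # 100 "days" at dt=0.1, starting from one exposed case in a thousand
    state = (1 - 1/1000, 1/1000, 0.0, 0.0)
    for _ in range(1000):
        state = seir_step(*state, alpha=0.2, beta=1.75, gamma=0.5, rho=0.9, dt=0.1)
    print(round(sum(state), 6))  # 1.0 -- the total population fraction is conserved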
|
STACK_EDU
|
Е is a "ye", or a vowel "raising" the preceding consonant (the middle of the tongue is raised; this is called "palatalisation").
Э roughly corresponds to something like "e" in "red". Maybe with a hint of "a" in "rad". At least, Russian speakers hear "map" as something like "мэп" with a vaaary open "э". Also, we have some difficulty distinguishing between "man" and "men", "bad" and "bed", especially in AmE.
Note that in a number of foreign words "е" actually means "э" and is spelt that way due to its beauty alone (тест, менеджер, флешка, плеер, проект). It mainly concerns front consonants, usually T, D and S, sometimes M. As words became more "Russian", the pronunciation shifted to the usual pronunciation of е. However, modern Russian feels rather comfortable having both Е and Э, so there is no guarantee the same is going to happen to the loanwords of the recent decades.
Where should I get the Russian keyboard? It marked me wrong for writing in English.
Most operating systems have a built in option to switch between keyboard languages.
On Windows 7, you can go to Control Panel>Region and Language>Keyboards and Languages>Change keyboards.
Then add whatever languages you want to the list, and then go to "Advanced Key Settings" to assign them to button combinations.
What should I do if my language choice is English only. How do I get the Russian language on my Windows 7?
If you are on a Windows computer: Settings --> Time and Language --> Language --> Русский. (To change it, press Shift+Alt.) If on an Android phone: Settings --> Language --> Русский. (To change it, press the small button on the keyboard that looks like a globe.) I do not know how to do it on other devices. You can look it up.
By context and how different parts of the sentence get emphasized or de-emphasized.
I'm gathering that words like "the", "a" or "is" are not translated in Russian at all?
No, Russian has no articles (the, a, an), and the use of the verb "to be" is minimal, from what I've gathered so far.
There isn't an "e" on my Russian keyboard... I have tried ё, з, and э, and it still counts it as incorrect, and it won't let me continue... so now what?
It should be there. Which keyboard do you use? A phonetic keyboard should have it on the Latin E, a ЙЦУКЕН layout on T.
I know this is very late, but some of the comments above explain how to change the settings for your keyboard!
|
OPCFW_CODE
|
Using a sportsbook is a great way to bet on your favorite sports. You can get the best odds from licensed sportsbooks that are regulated by the state government. A sportsbook can also provide you with a wide variety of betting props and odds so you have more options to win. However, it is important to choose a sportsbook that is legitimate and offers good customer support. You also need to check whether or not the website is licensed in your jurisdiction. If you are a first time bettor, you can use a risk free bet to test out the sportsbook before making your first bet. This is an excellent way to try out a sportsbook and get money back if you are not satisfied.
You should also check out whether or not the sportsbook has a good reputation. Some sportsbooks are run by illegal criminals or organized crime organizations. Others have a poor reputation and can’t be trusted. When choosing a sportsbook, you need to check for customer reviews, reputation, payment methods, and more. You can also check the state laws and regulations for sports betting. Lastly, you need to check out the sportsbook’s terms and conditions to make sure you are allowed to use the website.
SBOBET is one of the largest sportsbooks in Asia. The website is easy to use and offers a number of betting options. It also offers live streaming in the sportsbook. There are over 1500 events in the sportsbook each week. There is also a blog that provides sports news and picks for fans. SBOBET has over 1,200,000 active users. This sportsbook also offers a variety of promotions and gaming areas. There are sports picks for football, tennis, basketball, cricket, and many more.
While SBOBET has been involved in match fixing allegations in 2008, the Football Association found no evidence to support those allegations. Still, it is important to check out the sportsbook’s reputation and history to ensure that it is safe to bet with. SBOBET also offers a number of ways to contact customer support, including email, phone, and Skype.
SBOBET is a licensed sportsbook that has been in business for 16 years. It is one of the leading sportsbooks in Asia, and its extensive sports menu provides punters with a great range of options. Users also enjoy the wide variety of betting options, competitive odds, and fast payouts. SBOBET also offers a number of different banking options, including a tiered revenue sharing system.
If you are looking for a sportsbook that provides great value and offers good customer support, SBOBET is the sportsbook for you. You can also use a risk free bet to try out the sportsbook and get your money back if you are not satisfied. Choosing a sportsbook can be a daunting task, but it’s not impossible. You just need to do your research and select a sportsbook that offers the best odds and a variety of betting options.
|
OPCFW_CODE
|
chore: remove useless compiler_flag and copy .exe on windows
v install should not be concerned with compilation flags.
And the actual compiler is determined by build.vsh:
PS C:\Users\***\.config\v-analyzer> $Env:CC = 'clang'
PS C:\Users\***\.config\v-analyzer> v .\install.vsh up --nightly
Installing latest nightly version...
Updating v-analyzer sources...
✓ Successfully updated v-analyzer sources
Building v-analyzer...
✓ Dependencies for v-analyzer installed successfully
Building v-analyzer at commit: 7e11a6f, build time: 2024-06-15 01:17:19 ...
✓ Prepared output directory
Building v-analyzer in debug mode, using: "E:\Programs\v\v.exe" "C:\Users\***\.config\v-analyzer\sources" -o "./bin/v-analyzer.exe" -no-parallel -cc clang -g
To build in release mode, run v build.vsh release
Release mode is recommended for production use. At runtime, it is about 30-40% faster than debug mode.
✓ Successfully built v-analyzer!
Binary is located at C:\Users\***\.config\v-analyzer\sources\bin\v-analyzer.exe
Moving v-analyzer binary to the standard location...
Failed to copy v-analyzer binary to C:\Users\***\.config\v-analyzer\bin: Failed to remove "C:\Users\***\.config\v-analyzer\bin\v-analyzer.exe": Permission denied; code: 13
✓ v-analyzer successfully updated to nightly (7e11a6f8f369df935664fadd2f0c99d90fe3226f)
Path to the binary: C:\Users\***\.config\v-analyzer\bin\v-analyzer
Also, the executable has .exe extension on Windows, thus v-analyzer.exe should be copied on Windows
The analyzer test failure is unrelated; it is due to a change in string.is_upper() that happened in https://github.com/vlang/v/pull/21358 , while v-analyzer's CI was not run after that (we only test that v-analyzer itself builds on the main V CI, not that v-analyzer's CI passes).
I am not sure whether to fix pascal_case_to_snake_case or its tests, but I am more inclined to change pascal_case_to_snake_case to preserve its old behavior. @ttytm what do you think?
https://github.com/vlang/v-analyzer/blob/7e11a6f8f369df935664fadd2f0c99d90fe3226f/src/utils/text_utils.v#L6
- if c.ascii_str().is_upper() {
+ if c.ascii_str().is_upper() || c.is_digit() {
Applied in 2584e86 .
(rebased over current main)
|
GITHUB_ARCHIVE
|
PA 5: Fifteen Puzzle

Topics Covered
1. Graphs
2. Graph Traversals and Search Algorithms (DFS, BFS, IDS, etc.)
3. NumPy ndarrays; Permutations, Transpositions, and Linear Algebra
4. Optimization and Game AI
5. Event-Driven Programming

Instructions: Fifteen Puzzle

Objectives: practicing with classes, modules, the graph ADT, graph traversals, optimization techniques, game AI, and event-driven programming.

Supplementary Material: 15 puzzle - Wikipedia; Taxicab geometry - Wikipedia; TkDocs Tutorial - Basic Widgets.

Description: In this assignment you will write a program that emulates the game of Fifteen, a sliding puzzle made of 15 tiles. The tiles are arranged in a 4x4 layout made of 4 rows and 4 columns. Since there are 16 positions and only 15 tiles, one space is empty. Your program should generate a GUI window with the game layout, controllable by mouse button clicks. The user can play the game or solve the puzzle. By default, the solution to the Fifteen puzzle is the positioning of all tiles in order from left to right and top to bottom, as shown in the picture above. Optionally, the user may change the default final solution to any possible arrangement, for example, the arrangement in the reverse order from 15 to 1. It is worth mentioning that there are 16! possible permutations of 15 tiles and an empty space; however, only half of them are solvable given the constraints of the tile movements. This property is known as an invariant of the puzzle: it is unchanged by puzzle transformations, and it is due to the parity of the permutation and the parity of the Manhattan (taxicab) distance. So, half of the 16! permutations are unsolvable for given initial conditions! Your program should generate only correct (allowed) permutations that preserve the invariant; this can be achieved if only correct transformations are applied to the tiles, namely that only the empty space can be exchanged with its neighbors.

However, a problem arises if the user enters a final solution of the puzzle that cannot be reached (handling this property of the game is optional). You can find the puzzle invariant by calculating the parity of the permutation: if the empty square is in the lower right corner, as shown in the picture above, then the puzzle is solvable if and only if the permutation is even. Even permutations have sign +1 and odd permutations have sign -1, calculated by the following formula: sgn(s) = (-1)^N(s), where N(s) is the number of inversions in s. An inversion is a pair of elements that are out of their natural order. You can read about permutation parity here: Parity of a permutation - Wikipedia. For example, the following permutations are even (16 denotes the empty space):

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 (N = 0)
16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 (N = 120)

However, the following permutations are odd:

16 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 (N = 15)
15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 16 (N = 105)

Fifteen Puzzle Solver (Extra Credit, 10 points)
The Fifteen puzzle can be solved
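The parity test described above can be sketched in a few lines of Python. The permutation is given as a flat list of the numbers 1 to 16, with 16 standing for the empty space.

```python
def inversions(perm):
    """Count pairs (i, j) with i < j and perm[i] > perm[j]."""
    return sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )

def is_solvable(perm):
    """Solvable iff the permutation is even (empty tile, 16, in the lower-right corner)."""
    return inversions(perm) % 2 == 0

print(inversions(list(range(1, 17))))         # 0   -> even, solvable
print(inversions(list(range(16, 0, -1))))     # 120 -> even
print(inversions([16] + list(range(1, 16))))  # 15  -> odd, unsolvable
```

The quadratic double loop is fine at n = 16; a merge-sort-based count would be needed only for much larger permutations.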
|
OPCFW_CODE
|
When you make a connection to a database, you connect via the database server. The maximum number of concurrent connections to your database depends on several factors.
These factors include:
The type of database server that you choose to run.
The number of databases running on the database server.
The type of connections that the database and database server accept.
The database and database server options that you specify to reduce the maximum number of connections for your application.
The connection limit for the personal database server (dbeng17) is ten connections, and by default only seven of these connections can be HTTP connections. At minimum, three connections are reserved for standard connections. A standard connection is any connection other than an HTTP or HTTPS connection.
The connection limit for network database servers is equal to the maximum number of connections allowed by your per-seat license, or unlimited (32766) connections if you have a per-core license. (In practice, creating 32766 connections to a single database server might not be possible because of operating system restrictions. Also the database server makes internal temporary connections during operation, which also reduces the number of connections available for users.)
To reduce the maximum number of concurrent connections that a database server can accept, specify the -gm database server option.
If your database server accepts HTTP/HTTPS connections, then you can limit the maximum number of HTTP/HTTPS connections that can be made to a database server by specifying the MaxConnections protocol option. You can also reserve database connections for only standard connections by setting the reserved_connections database option.
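For example, the server-level limits above might be combined like this (a sketch with illustrative values; option spellings can vary between SQL Anywhere versions, so check the documentation for your release):

```shell
# Start a network database server capped at 50 concurrent connections (-gm),
# with an HTTP listener limited to 10 HTTP connections (MaxConnections).
dbsrv17 -gm 50 -xs "http(port=8080;MaxConnections=10)" mydatabase.db
```

Within a running database, options such as reserved_connections can then be set with a statement such as SET OPTION PUBLIC.reserved_connections = 3;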
If your database server has numerous databases running on it, then limiting the number of connections to a specific database can be more effective than limiting connections to the database server. For example, a database server that allows 100 connections may have 90 connections to one database, leaving only ten connections available for all other databases.
To limit the number of connections that a database can accept, set the maximum_connections database option for that database.
If your database accepts HTTP/HTTPS connections and you must ensure that standard connections can be accepted at any time, then reserve connections for standard connections. Specify the reserved_connections database option to reserve standard connections.
You can also use a login policy to limit the number of connections to users who have the specific login policy. When creating the login policy, set the max_connections login policy option or the max_non_dba_connections login policy options. Alternatively, you could create a custom login procedure to limit the number of connections by using the login_procedure database option.
The database server queues HTTP/HTTPS connections. When the database server reaches its connection limit and a new HTTP/HTTPS connection is attempted, the connection attempt is queued. Connection attempts remain in the queue until an HTTP/HTTPS connection becomes available, or until the attempt times out after 150 seconds. The queue can hold a maximum of 1000 connection attempts.
Standard connections are not queued. When a database server reaches its connection limit, any new standard connection attempt fails.
While there are connection attempts in the HTTP/HTTPS queue, there is no opportunity for a standard connection to replace an HTTP/HTTPS connection. A user wanting to make a standard connection could wait indefinitely unless some of the connections are reserved for standard connections. By reserving some of the database's connections for standard connections, you allow users to connect to the database to perform administrative tasks, such as dropping connections, performing backups, and so on.
Reserve standard connections by using the reserved_connections database option. The network database server does not reserve connections for standard connections. The personal database server reserves a minimum of 3 connections for standard connections.
You could also limit the number of HTTP/HTTPS connections that can be made to a database server by specifying the MaxConnections protocol option.
When the connection limit has been reached, a DBA user can make one additional standard connection to a database above the limit to forcibly drop connections as needed. This new connection is allowed only if all of the following conditions are met:
The user has the DROP CONNECTION or SERVER OPERATOR system privilege.
The total number of database server connections (excluding the new connection) is less than or equal to the MaxConnections database server property limit.
The total number of database connections (excluding the new connection) is less than or equal to the MaxConnections database property limit.
|
OPCFW_CODE
|
How to Wrap Text In Google Docs
Google Docs is a better alternative to Microsoft Word, and it enables its users to do almost everything possible in that application. Google Docs is a Google web-based document app for creating and editing public and private word-processing and spreadsheet documents. These documents can be stored both on the user's computer and online in the Google cloud.
You can access your documents from any computer with an internet connection, and other users can view them with your permission.
The Docs application is designed for both individual and real-time collaborative projects. Security is maintained through Google's cloud storage as well as offline storage on the user's PC. The documents you create are compatible with many presentation and word-processing applications, and can be published as web pages or printed. Spreadsheets can also be created and edited in different fonts and formats.
Wrapping text around images in Google Docs
Before we begin, it is essential to understand how Google Docs handles pictures and how that affects the surrounding text. After inserting a picture into your document, you are given three choices: In line, Wrap text, and Break text. By default, Google Docs sets the text wrapping to In line.
With this setting, Google Docs handles the image the same as any other character within the file. You can relocate it anywhere within a paragraph or sentence, and it moves through the document the same way any other text character would.
If you choose Wrap text, the text flows around all four edges of the image, with a gap between the text and the image.
On the other hand, if you prefer Break text, the text stays above and below the image, breaking the paragraph or sentence into which you added it.
Although we are not using this option in the guide below, it is good to know about.
To wrap text around a picture in Google Docs, follow this simple procedure.
- Open Google Docs and create a blank document. If you already have a file, open it in Google Docs.
- Embed your picture in the file: select the location in your document where you want the picture, then click Insert and choose Image.
- Select the source from which to fetch the image.
- Once the image is embedded, click on it. You should see a blue border around its edges. In the toolbar that pops up, click the second option, Wrap text.
- Click and hold the picture and drag it to where you want it. Then resize the image to suit your document: hover the mouse over a corner of the selected picture and drag the two-way arrow icon.
- Set a margin so the image and the text look separated: click on the picture, expand its margin drop-down menu, and adjust it as needed.
Recommended: How to Add a Border in Google Docs
Among my colleagues, the idea of documentation can be a bit of a joke. It’s one of those things that we know is beneficial, but it often ends up at the bottom of the priority list. As a software engineer, I get it – we’re notorious for finding ways to avoid documenting our work. However, decent documentation has saved my bacon more than once, so I’m a believer. I hope to convince you to integrate the practice of documentation into your work, too, by building a living, growing knowledge base for your organization.
At Clearistic, we work a lot with business process — we create web applications, mobile apps, and automations to help companies digitize or streamline their information flow. This work requires a deep understanding of how an organization works, and so this is usually where we begin. What we’ve found is that working to understand a process can be a minefield of tribal knowledge¹, hidden fringe cases, and a general lack of clarity. To my surprise, we’ve found this to be true in most cases, no doubt a byproduct of the ever-increasing demands on our time.
The good news is that it’s easy to build an internal knowledge base that will evolve organically and which will relieve some of the issues that stem from complex or poorly-understood processes. Even if you don’t intend to do a deep-dive and streamline what you do and how you do it, there are a number of ways in which having a solid base of documentation will help.
We typically bring on additional staff when the team’s work exceeds its capacity — that is, when we’re overloaded. That’s a tough time to train our eager new colleagues on the intricacies of our systems and processes. Imagine having an up-to-date knowledge base that explains the what, why, and how of the job. Not only will our new team members be productive more quickly, they’re likely to avoid some “rookie mistakes” as well.
Coincidentally, onboarding is a great time to test (and expand!) your documentation. You have a unique opportunity to obtain feedback from a motivated, untarnished source. Use that feedback to make sure your knowledge base includes critical information, is easy to navigate and is presented clearly.
Any time a repetitive task requires you to think, you run the risk of introducing errors. That is especially true in a distraction-rich workplace, where a phone call or a notification ding may cause you to lose your place. If you have a procedure that is intricate or delicate, build a comprehensive checklist. If you can, use a tool that supports literal checkboxes (OneNote is a go-to of mine). That way, if you are taken away from your desk, you’ll know exactly where you were when you return.
We’re always looking for opportunities to automate complex workflows, but sometimes that’s not practical. The nice thing about having a checklist is that, if you decide to automate a process in the future, it’s going to be much easier to do because you have the recipe.
Every once in a while, I prefer to not be working. With good documentation, my team can support me by taking care of certain tasks when I’m not around (and vice versa). An effective knowledge base enables team work and makes it easier for staff to support each other.
For documentation to be successful, it has to work when the author is not available to answer questions. Any time the knowledge base is actually used is a great opportunity to identify and fill in missing information. This is one reason we recommend using a technology that allows your team to enrich the documents collaboratively, over time.
As you document your work — whether to speed onboarding, reduce manual errors, or enable better team work — you may notice some odd things. If a task contains inefficiencies, you may be able to streamline it. If it’s highly repetitive, you may be able to automate it. Sometimes these opportunities aren’t obvious until you think deeply about your process, which is another benefit of taking the time to document.
How to Get Started
To me, the term “knowledge base” feels heavy, like some arcane Encyclopedia Britannica of your organization’s processes, policies, and procedures. It doesn’t have to be that way. There are many great tools available today to build your knowledge base, some of which I describe below. Whatever tool you choose, consider these guidelines to help your documentation effort flourish:
- It should be accessible, editable, and searchable. A good search makes it easier for staff to find and use the information they need. Editability turns your knowledge base into a living system that continuously improves.
- Don’t worry about perfection, or “it’s OK to start small”. Get something out there, even if it’s sparse, to allow your colleagues to give you feedback about what would help them. They may even begin contributing themselves.
- Use a system that allows you to refine over time. This is closely related to “editability” above, but it’s critical to use a system that reduces or eliminates friction when it comes to editing. If you have to log in with some arbitrary credentials (and reset your password, because you only use the system twice a year), people may bounce rather than take the time to contribute.
- Keep the tool open while you work. This is the key to making the work less painful. If you are able to flesh out your notes as you do the work, it requires very little thought to add context and detail to your documentation. Combined with “don’t worry about perfection”, you can create useful documentation without needing to schedule separate “documentation time”.
Here are some of the tools that we have used to build out our knowledge bases over the years.
If you are one of the more than 1 million organizations using Office365, you already have access to Microsoft’s collaborative note taking software, OneNote. While there are more sophisticated tools for this task, there are several reasons it’s a great place to start:
- Included with Office365
- Minimal setup
- Available everywhere (desktop, web, mobile)
- Supports collaborative editing
- Simple but effective organization (notebooks/sections/pages)
- Fast, simple search
If you’re not in the Microsoft universe, Evernote is a similar tool with a business plan to facilitate collaboration.
If OneNote doesn’t give you enough control over the navigation and presentation of your docs, SharePoint (also included with Office365) may be a fit. It requires more setup and can become pretty complex if you let it, but it enables you to build a rich and evolving knowledge base.
- Included with Office365
- Rich content tools (styling, media, layouts)
- Fine-grained access control
- Share files and news content as well as documentation
- Workflow automation tools that you can grow into
Whereas OneNote is designed around note taking and SharePoint is more of a general information repository, Atlassian’s Confluence is purpose-built to share information within your organization.
- Templates help create a consistent look and feel for your documentation
- Collaborative editing includes real-time editing, comments, and notifications
- Deep integration with Jira ticket/task management system
MediaWiki (the software that powers Wikipedia) is different from the options above because it’s not a SaaS (subscription) product. In fact, it’s a free and open-source solution that you’d have to host yourself. It’s also the oldest tool here (released in 2002!) and probably the least user-friendly. So why do I suggest it?
Years ago, during my time at WineAccess, I introduced a wiki running MediaWiki that quickly became an essential (dare I say beloved?) documentation repository for our business. It even outlasted my time there. While there’s a wealth of new, user-friendly options available today, MediaWiki could just as easily be a fit for your organization, too.
Keep in mind: The success of your documentation effort lies less in the tooling, and more in the commitment of your coworkers and staff to grow your knowledge base.
Side note: If there’s a Microsoft flavor to this list, it’s because we spend a lot of time with Microsoft technologies. We know that there are as many systems out there as there are creative ways to use them. We’d love to hear your recommendations so we can expand our list.
¹ Tribal knowledge is any unwritten information that is not commonly known by others within a company. What is Tribal Knowledge and how to capture it before it’s too late | Tettra
Introduction: K2 Compiler
Recently, the Kotlin team has been working on a new frontend for the Kotlin compiler, called K2. The frontend parses code and performs semantic analysis, data flow analysis, call resolution, and type inference. Developers interact with the compiler frontend constantly, because it runs inside the IDE and reports all error and warning messages; it also powers auto-completion, semantic-aware inspections, and refactorings. The new frontend is currently available for preview, and the team is continuing to polish and stabilize it with a plan to make it the default compiler frontend in a future Kotlin release. That release will be named Kotlin 2.0.
Kotlin 2.0 Release Roadmap
Before the release of Kotlin 2.0, the team plans a more traditional version named Kotlin 1.9. It will include updates from the ongoing development of the K2 compiler frontend as it approaches stabilization. With no Kotlin 1.10 in the wings, the next major version after Kotlin 1.9 will be Kotlin 2.0.
Why Kotlin 2.0?
The new frontend is a major part of the compiler. The K2 frontend is a complete rewrite from scratch based on a new architecture, not a refactoring of the old compiler. The differences between the two have been covered by the Kotlin team in various videos, conference talks, and blog posts. The new architecture is faster and has a more extensible design; it addresses the technical debt of the old frontend by fixing bugs and corner cases in many places, and it paves the way for future extensions of the language.
For many programming languages, the major version is bumped when major features or breaking changes are introduced. More generally, software increments its major version whenever a substantial part of it is rewritten, signifying a major rework. Although Kotlin is a language, its developers follow the practice used for software: the version of the Kotlin language is defined by the version of its compiler, with the Kotlin specification serving as a language reference.
Will Kotlin 2.0 break any code?
Kotlin, the software, changes with every language version update: new language features are introduced and old bugs get fixed. The Kotlin developers always try to minimize the number and impact of breaking changes, following the Kotlin language evolution process, and Kotlin 2.0 will be no different in this regard. In fact, the developers are consciously avoiding the introduction of any new language features in Kotlin 2.0: the rewrite of the compiler frontend is a considerable enough change, and it introduces a number of improvements by itself.
The newly developed data flow analysis algorithms are more precise, leading to better smart casts than were possible before. These algorithms turn some red code, which does not compile in Kotlin 1.9 on the old frontend, into green code that compiles in Kotlin 2.0. Technically this is a change to the Kotlin language, but it is a side effect of the new compiler architecture rather than a deliberate new feature.
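As an illustration (a sketch, not an official example from the Kotlin team), the more precise data flow analysis can propagate a smart cast through a local boolean variable:

```kotlin
fun describe(value: Any): String {
    // The type check is stored in a local variable rather than
    // written inline in the `if` condition.
    val isString = value is String
    return if (isString) {
        // K2's data flow analysis can propagate the `is` check through
        // the boolean variable, so `value` is smart-cast to String here —
        // something the old frontend could not do in this form.
        value.length.toString()
    } else {
        "not a string"
    }
}
```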
With the new compiler, many long-standing bugs have been fixed. Inevitably, some of these fixes may break code that accidentally relied on incorrect compiler behavior. The Kotlin team is carefully studying the impact of all the behavior changes between the old and new frontends, and makes additional efforts to replicate the old behavior whenever a change might affect a sizable number of Kotlin users. In short, the team is committed to minimizing the impact of migrating code from Kotlin 1.9 to Kotlin 2.0.
The team will keep users updated on the Kotlin 2.0 stabilization process, providing ample time to try the upgrade on real projects in advance and to report any regressions the team might have missed in its own testing. Current work is focused on a beta-quality release of the K2 compiler frontend later this year, which can be tracked in the roadmap issue.
IDE Support for Kotlin 2.0
The Kotlin IDE plugin is bundled into IntelliJ IDEA and Android Studio updates and includes a copy of the Kotlin compiler frontend for semantic analysis of your code. This is not necessarily the version of the compiler used to actually compile the code; the actual compiler version is determined by the settings in your build files. When code is compiled with Kotlin language version 2.0, the new compiler frontend is used and you get all the performance benefits of the new architecture, but the IDE might still use the old frontend to inspect the code. To bring the benefits of the new frontend into the IDE as well, the team is writing a completely new version of the Kotlin IDE plugin, built on top of the K2 compiler frontend, and is currently working towards its public preview.
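As a sketch (the exact DSL and option names depend on your Kotlin Gradle plugin version), selecting the language version in a build file might look like this:

```kotlin
// build.gradle.kts (illustrative excerpt)
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

tasks.withType<KotlinCompile>().configureEach {
    kotlinOptions {
        // Opt in to the Kotlin 2.0 language version (and the K2 frontend)
        languageVersion = "2.0"
    }
}
```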
Expectations after Kotlin 2.0
The new architecture of the Kotlin 2.0 compiler frontend enables a clean implementation of many new features. One example is context receivers (KT-10648), which are already available for preview in the old compiler frontend; they will not be turned on by default in Kotlin 2.0 and remain in the preview stage for now, with stabilization planned for future Kotlin 2.x releases. Other new features are also in Kotlin's production pipeline, and as usual the team shares regular updates on its plans through the Kotlin roadmap. There may eventually be a Kotlin 3.0 with bigger language changes; the team will look to input such as the Kotlin Features Survey before implementing them, and changes of that scale would certainly warrant the next major Kotlin version.
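A sketch of the context receivers preview syntax (illustrative only; the feature requires an opt-in compiler flag, and both the design and the names used here are assumptions for the example):

```kotlin
// Hypothetical logging abstraction for illustration
interface Logger {
    fun log(message: String)
}

// `process` can only be called where a Logger is available in context
context(Logger)
fun process(item: String) {
    log("processing $item")
}

fun main() {
    val consoleLogger = object : Logger {
        override fun log(message: String) = println(message)
    }
    // Bring a Logger into scope to satisfy the context requirement
    with(consoleLogger) {
        process("order-42")
    }
}
```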
Kotlin new K2 Compiler – Performance + Extensibility
As mentioned above, Kotlin introduces the new K2 compiler. It offers a dual advantage: significantly faster compilation times and enhanced extensibility. To achieve this, the team has redesigned both the frontend and backend architectures.
These employ new technologies such as the Frontend Intermediate Representation (FIR) in the frontend and an Intermediate Representation (IR) in the backend.
In a compiler, the frontend is responsible for parsing the code and checking its syntax and semantics. It ensures that the code adheres to the language rules and turns it into an intermediate form that the backend can understand.
The backend takes the intermediate code and translates it into machine-specific code, making it executable.
Starting with the frontend, the K2 compiler introduces the Frontend Intermediate Representation (FIR), which consolidates elements of syntax trees with semantic information, in line with traditional compiler principles. The benefits of FIR include faster compilation and enhanced IDE performance. The faster compilation reportedly yields a 2x speed improvement in call resolution, one of the slowest and most time-consuming compiler processes, while the enhanced IDE performance improves error reporting and auto-completion.
The Intermediate Representation (IR) is the backend innovation, with two primary implications: extensibility and multi-backend support. Extensibility helps developers create custom compiler extensions, while for multi-backend support the IR acts as a common language, making it easier to support different backends, including the JVM and JS. Kotlin 1.5 already employs the new JVM IR backend, and Kotlin 1.8 uses the JS IR backend.
The Balance between Performance and Extensibility
With the K2 compiler, the frontend is laser-focused on performance improvements while the backend focuses on extensibility. The frontend advancements speed up operations, delivering the 2x performance gain, whereas the new IR structure of the backend allows a modular, plug-and-play approach that makes it easier to introduce new language features.
In effect, the K2 compiler is a transformation that positions Kotlin 2.0 as a faster and more extensible language. From the significant speed improvements in code compilation to the ability to extend the language's capabilities, K2 is shaping up to be a game-changer for Kotlin.
Kotlin upgrade advances K2 compiler
K2 is intended to increase compilation performance and is slated to be production-ready with the Kotlin 2.0 release. The compiler is positioned to speed up language feature development, unify all platforms that support Kotlin, and offer a better architecture for multiplatform projects. Related improvements include better garbage collector performance for Kotlin/Native and support for the WASI (WebAssembly System Interface) API in the Kotlin/Wasm standard library.
Managing Redis at scale is hard. And ObjectRocket by Rackspace wants to alleviate the pain and headaches developers face managing Redis NoSQL databases. Using the same principles we apply to our fully-managed MongoDB service, our ObjectRocket team now offers full automation, support and management of Redis so developers can focus on their applications and not maintaining their databases.
For the past two years, ObjectRocket has delivered a rich, managed MongoDB experience to our customers on an optimized server fleet tuned specifically to run MongoDB. Customers look to ObjectRocket to shoulder the burden of running a MongoDB environment. While many open source NoSQL databases are easy to adopt and free to obtain; the time spent troubleshooting and administering the full environment is difficult, especially at scale. ObjectRocket allows MongoDB users to treat the database as a target for applications and not another component to manage. You simply code your application to the managed ObjectRocket service and scale your data to meet the needs of the application.
And starting today, ObjectRocket for Redis provides that same level of service for the increasingly popular key-value data structure Redis.
Redis is often used as an ephemeral data store, which means the dataset does not have to persist after the computation is done. You can persist the data by dumping it to disk every once in a while or by appending each write to a log. This fleeting nature provides flexibility in deployment and the most efficient use of resources. Redis is also easily replicated over a trivial-to-set-up master-slave sync, providing reliability for more complex operations.
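As a sketch (directive names as found in Redis configuration of that era; check your version's redis.conf for exact syntax, and the address shown is hypothetical), the persistence and replication options mentioned above look like this:

```
# redis.conf (illustrative excerpt)

# RDB snapshotting: dump the dataset to disk if at least 1 key
# changed in the last 900 seconds
save 900 1

# AOF: append every write operation to a log for durability
appendonly yes

# Replicate from a master at 10.0.0.1 (hypothetical address)
slaveof 10.0.0.1 6379
```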
As we see more applications wanting to leverage user data and drive behavior based on that data, Redis is becoming even more of a focus for developers. Additionally, Redis is easy to set up, replicate, and code against, which makes it an important part of the modern data architecture. Redis has quickly become a leading contender in the open source community. It is deployed at scale at several major companies, including Flickr, which uses automated Redis master failover for an important subsystem, and Pinterest, which uses Redis for its following model and interest graph.
But just because Redis is easy to set up and run doesn’t mean that complications do not exist when you try to scale the technology. This is where the expertise and optimized nature of the ObjectRocket platform lends real value. With the new ObjectRocket for Redis service, users can deploy a fully managed Redis service backed by certified data engineers who are Redis specialists, allowing you to adopt Redis quickly and use it successfully. Along with 24×7 expert support, ObjectRocket for Redis offers high availability with free backups, simplified operations, high performance and high bandwidth.
ObjectRocket for Redis is available now in the Rackspace Northern Virginia data center, and will be available in our Dallas, Chicago and London data centers by the end of August 2014. Users can provision and manage Redis instances and highly available cluster nodes of up to 50GB through the ObjectRocket control panel and API.
Interested in a free trial of ObjectRocket for Redis? Fill in a couple short fields and get started today at objectrocket.com/redis.