Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
VHDL and FPGA's
I'm relatively new to the FPGA sceen and was looking to get experience with them and VHDL. I'm not quite sure what the benefit would be over using a standard MCU but looking for experience since many companies are looking for it.
What would be a good platform to start out on and get experience for not to much money. Ive been looking and all I can find are 200 - 300 dollar boards if not 1000's. What should one look for in an FPGA development board, I hear high speed peripheral interfaces, and what I guess I'm really confused about is that an MCU dev board with around 50/100 GPIO can go for around 100 while that same functionality on an FPGA board is much more expensive! I know you can reprogram an FPGA, but so can an MCU. Should I even fiddle with FPGA's will the market keep using them or are we moving towards MCU's only?
This question might be a better fit for electronics.se
also http://area51.stackexchange.com/proposals/20632/logic-design
Hmm...I was able to find three evaluation boards under $100 pretty quickly:
$79: http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=593
$79: http://www.arrownac.com/solutions/bemicro-sdk/
$89: http://www.xilinx.com/products/boards-and-kits/AES-S6MB-LX9.htm
As to what to look for in an evaluation board, that depends entirely on what you want to do. If you have a specific design task to accomplish, you want a board that supports as many of the same functions and I/O as your final circuit. You can get boards with various memory options (SRAM, DDR2, DDR3, Flash, etc), Ethernet, PCI/PCIe bus, high-speed optical transceivers, and more. If you just want to get started, just about any board will work for you. Virtually anything sold today should have enough space for even non-trivial example designs (ie: build your own microcontroller with a soft-core CPU and design/select-your-own peripheral mix).
Even if your board only has a few switches and LEDs you can get started designing a hardware "Hello World" (a.k.a. the blinking LED :), simple state machines, and many other applications. Where you start and what you try to do should depend on your overall goals. If you're just looking to gain general experience with FPGAs, I suggest:
Start with any of low-cost evaluation boards
Run through their demo application (typically already programmed into the HW) to get familiar with what it does
Build the demo program from source and verify it works to get familiar with the FPGA tool chain
Modify the demo application in some way to get familiar with designing hardware for FPGAs
Use your new-found experience to determine what to try next
As for the market continuing to use FPGAs, they are definitely here to stay, but that does not mean they are suitable for every application. An MCU by itself is fine for many applications, but cannot handle everything. For example, you can easily "bit-bang" an I2C or even serial UART with most micro-controllers, but you would be hard pressed to talk to an Ethernet port, a VGA display, or a PCI/PCIe bus without some custom hardware. It's up to you to decide how to mix the available technology (MCUs, FPGAs, custom logic designed in-house, licensed IP cores, off-the-shelf standard hardware chips, etc) to create a functional product or device, and there typically isn't any single 'right' answer.
I appreciate your comments and suggestions. I looked more around and found the Lattice XP2 Brevia Dev Kit for $49. This is a USB dev kit btw.
FPGAs win over microcontrollers if you need some or all of:
Huge amounts of maths to be done (even more than a DSP makes sense for)
Huge amounts of memory bandwidth (often goes hand in hand with the previous point - not much point having lots of maths to do if you have no data to do it on!)
Extremely predictable hard real-time performance - the timing analyser will tell you how fast you can clock you device given the logic you've designed. You can (with a certain - high - statistical likelihood) "guarantee" to operate at that speed. And therefore you can design logic which you know will always meet certain real-time response times, even if those deadlines are in the nano-second realm.
If not, then you are likely better off with a micro or DSP.
The OpenCores web site is an excellent resource, especially the Programming Tools section. The articles link on the site is a good place to start to survey FPGA boards.
The biggest advantage of an FPGA over a microprocessor is architecture. The microprocessor has a fixed set of functional units that solve most problems reasonably well. I've seen computational efficiency figures for microprocessors form 6% to 15%. In an FPGA you are creating functional units specifically for your problem and nothing else, so you can reach 90-100% computational efficiency.
As for the difference in cost, think of volume sales. High volume of microprocessor sales vs. relatively lower FPGA sales.
But in the FPGA, you have unused gates (for example, one LE may be used for combinational logic, so the flip-flops are unused), so the efficiency is actually nowhere near 90%. Eventually the question is whether threads or parallel engines are a better model for the parallelism in your problem.
Your comment on threads vs. parallel engines is an excellent point.
|
STACK_EXCHANGE
|
After a long hiatus – much longer than I like to think about or admit to – I am finally back. I just finished the last semester of my undergraduate degree, which was by far the busiest few months I’ve ever experienced.
This was largely due to my honours thesis, on which I spent probably three times more effort than was warranted. I built a (not very good, but still interesting) model of ocean circulation and implemented it in Python. It turns out that (surprise, surprise) it’s really hard to get a numerical solution to the Navier-Stokes equations to converge. I now have an enormous amount of respect for ocean models like MOM, POP, and NEMO, which are extremely realistic as well as extremely stable. I also feel like I know the physics governing ocean circulation inside out, which will definitely be useful going forward.
Convocation is not until early June, so I am spending the month of May back in Toronto working with Steve Easterbrook. We are finally finishing up our project on the software architecture of climate models, and writing it up into a paper which we hope to submit early this summer. It’s great to be back in Toronto, and to have a chance to revisit all of the interesting places I found the first time around.
In August I will be returning to Australia to begin a PhD in Climate Science at the University of New South Wales, with Katrin Meissner and Matthew England as my supervisors. I am so, so excited about this. It was a big decision to make but ultimately I’m confident it was the right one, and I can’t wait to see what adventures Australia will bring.
I have a question you might have an answer to – under El Nino conditions, should we expect more polewards heat transport through either atmospheric or oceanic circulation? (particularly with respect to the Arctic)
Kate, congratulations on both the honors degree and upcoming PhD study. I hope you’ll still find a free moment 2-3 times a year to keep posting here.
Congrats! Australia is amazing, and I’m jealous! I spent three months mainly in Tasmania back in 2004, and then spent three weeks traveling from Tassie to Uluru to the Top End in 2010. Spending time there inspired me to care about the planet and realize that Global Warming is the most urgent crisis facing humanity.
Congratulations. Good to hear your news. Good on yer!!!
Has it been that long since I started reading your blog. Seems like yesterday you were still at school. Oh and congrats, not that I had any doubt at all regarding your abilities.
I have been following you for a long time but I’m not much of a commenter.
Congratulations on your degree and good luck on your PHD.
Congrats on finishing your Honours thesis and acceptance into a PhD program with Meissner and England!
Do you know what your project will be about?
Ocean modelling, potentially with links to the Antarctic ice sheet. Beyond that we will figure it out as we go :-)
Kate…congratulations and welcome to down-under. If you find Australia a trifle too warm, come on down a bit further to NZ. where the temps. are more Canadian like. We’ve got a good dumping of snow here today., the skiers are in for a good season.
Best wishes for your future,..but also you might like to take a little time off to read my one comment ,and links, here…
You should consider posting your ocean model on Github or something. There is a real lack of comprehensible, open tutorial code for budding climate modelers.
|
OPCFW_CODE
|
Cost of storing and retrieving data
I am dealing with the case of pattern recognition and for input, I need to read coordinates of points, up to 10000 points from a text file. I need to perform certain calculations on the points read. So, my question is whether I should always read them from the text file when I need to do some calculation or I should store them in some data structure, eg, a 2D array and hence access the values. What difference would there be in terms of storage and time?
Edit:
The language I am using is Java.
The data structure is user defined that has
a constructor that instantiates the object with x and y coordinates.
a method to draw the point on standard output.
a method to draw a line between two given points.
a method to compare the position of two points based on coordinates.
a method to compute the slope between two points.
an inner class extending Comparable interface that can be used to compare two points.
The comparison is based on slope made by each point w.r.t a reference point.
PS - I am sorry if the question is a silly one but just wanted to be clear about things than shying back. Thanks in advance!
If you are reading from text file then one disk I/O will occur
http://blog.scoutapp.com/articles/2011/02/08/how-much-slower-is-disk-vs-ram-latency : The ratio of Disk speed vs Memory speed is the same as the ratio of the speed of a slug vs. the speed of a F17 Jet.
@ JB Nizet - I was actually wondering about the variation caused with growth of data size. But sure, the infographic and googling about the right term 'disk I/O' do me a lot of help. :)
Well storing them in the memory will be the recommended option here, assuming that its a static text file(meaning that the points don't change when you are running the recognition ) as it will really speed up the whole process and 10000 points is not too large to be cached in memory.
Not to mention this point also depends on the fact which language and data structure you are using.
Thanks for the reply. But I would still like to seek for a more detailed answer that what is the difference between direct access of text file and between instantiating it as a point (user defined data structure). Also that you have mentioned that 10000 points is not too large to be cached in memory, then what number would be really large? What difference in terms of space storage and time of access would it make?
@user3264593: use your basic math skills. A point is two int coordinates. That makes 2 x 4 bytes. Each object has an overhead of 16 bytes. So, for 10,000 points, you need 10,000 x 24 bytes = 240 KB. It's a tiny piece of memory.
|
STACK_EXCHANGE
|
How Can I change the color space of output video in MPC-HC?
The video output is always NV12.
+ Reply to Thread
Results 1 to 8 of 8
View -> Options -> Internal Filters (left pane) -> Video Decoder (button) -> Output Formats.
Another way for me to not understand how the playback system works.......
I don't know what's happening exactly, but I can tell you that in my case (running XP), MPC-HC/EVR always displays Mixer Output: RGB32. I tried both monitors (one is connected via VGA and the other HDMI) and it's the same for each. I switched the video card output to YCbCr444 for the TV to see if that'd make a difference. It didn't, so I switched it back to RGB.
It seems though, that the "mixer output" and the actual decoding are two different things. I disabled LAV Filters and switched to ffdshow because ffdshow's icon actually provides a little useful information via it's tooltip. According to ffdshow, it's output was NV12. EVR still displayer "mixer output" as RGB32.
That's as far as I got. Maybe it'll help someone work out what's going on.
Is NV12 a bad thing? I just checked several h264 videos, some I encoded myself, some which I didn't, and they were all NV12. At least that's what MadVR and Reclock both tell me. The conversion to YV12 is lossless isn't it? They're just different flavours of the same thing? Even so, if the video is NV12 to begin with, is there a benefit to converting it on playback?
If any of it's likely to be video card/driver related, I'm running an Nvidia 8600GT and the drivers are probably close to a couple of years old. The newer XP drivers I've tried seem to have issues so I stopped upgrading them. I'm not a gamer so there's probably no need to anyway.
Back to WMR9. It doesn't display the colour space so I was living in blissful ignorance until now......
Last edited by hello_hello; 8th Jul 2014 at 11:03.
Is NV12 a bad thing
The conversion to YV12 is lossless isn't it?
They're just different flavours of the same thing?
Even so, if the video is NV12 to begin with, is there a benefit to converting it on playback?
I'm still keen to understand a little more about the mixer output EVR displays, and why for me it's RGB32 but for Stears555 it's NV12. Anyone know?
Out of topic..
but what is "ColorScapes", what do they do?
|
OPCFW_CODE
|
#include <v8.h>
#include <node.h>
#include <unistd.h>
#include <iostream>
using namespace std;
using namespace v8;
using namespace node;
class Sleepy
{
private:
unsigned secs;
public:
static Handle<Function> Init() {
HandleScope scope;
Handle<FunctionTemplate> sleepy = FunctionTemplate::New(New);
Local<Template> sproto = sleepy->PrototypeTemplate();
sproto->Set(String::NewSymbol("sleep"), FunctionTemplate::New(Sleep));
Local<ObjectTemplate> sinst = sleepy->InstanceTemplate();
sinst->SetInternalFieldCount(1);
sinst->SetNamedPropertyHandler(NamedPropGet, NamedPropSet);
sinst->SetIndexedPropertyHandler(IndexedPropGet, IndexedPropSet);
return sleepy->GetFunction();
}
Sleepy(unsigned _secs) {
secs = _secs;
}
~Sleepy() {
}
static Handle<Value> New(const Arguments& args) {
HandleScope scope;
cout << "Go away! I'm sleepy!\n";
unsigned secs = Integer::Cast(*args[0])->Value();
Sleepy* sleepy = new Sleepy(secs);
Persistent<Object> obj(Persistent<Object>::New(args.Holder()));
obj->SetInternalField(0, External::Wrap(sleepy));
obj.MakeWeak(NULL, WeakCallback); /* NOTE you could also implement this by passing sleepy as first arg */
return obj;
}
static void WeakCallback(Persistent<Value> obj, void* arg) {
cout << "Guess you don't need me anymore...\n";
Sleepy* sleepy = static_cast<Sleepy*>(External::Unwrap(Persistent<Object>::Cast(obj)->GetInternalField(0)));
delete sleepy;
}
static Handle<Value> NamedPropGet(Local<String> prop, const AccessorInfo& info) {
cout << "Hey! Why'd you try to get my \"" << *String::Utf8Value(prop) << "\"?!\n";
return Handle<Value>();
}
static Handle<Value> NamedPropSet(Local<String> prop, Local<Value> val, const AccessorInfo& info) {
cout << "How dare you try to set my \"" << *String::Utf8Value(prop) << "\"?!\n";
return Handle<Value>();
}
static Handle<Value> IndexedPropGet(unsigned int idx, const AccessorInfo& info) {
cout << "Since when is my index " << idx << " any of your business?!\n";
return Handle<Value>();
}
static Handle<Value> IndexedPropSet(unsigned int idx, Local<Value> val, const AccessorInfo& info) {
cout << "Why do you need to set my index " << "?!\n";
return Handle<Value>();
}
struct SleepData
{
Sleepy* sleepy;
Persistent<Function> cont;
};
static Handle<Value> Sleep(const Arguments& args) {
HandleScope scope;
if(args.Length() < 1)
return ThrowException(Exception::TypeError(String::New("You need to tell me what to do after I sleep!")));
if(!args[0]->IsFunction())
return ThrowException(Exception::TypeError(String::New("I can only do a function after I sleep!")));
Sleepy* self = static_cast<Sleepy*>(External::Unwrap(args.Holder()->GetInternalField(0)));
Local<Function> cont = Local<Function>::Cast(args[0]);
SleepData* data = new SleepData;
data->sleepy = self;
data->cont = Persistent<Function>::New(cont);
uv_work_t* req = new uv_work_t;
req->type = UV_WORK;
req->loop = uv_default_loop(); /* NOTE this should really be Loop() from <node.h>, but that is broke */
req->data = data;
req->work_cb = SleepWork;
req->after_work_cb = SleepAfterWork;
uv_queue_work(req->loop, req, req->work_cb, req->after_work_cb); /* NOTE yes, for some reason, you really do need to repeat these */
return Undefined();
}
static void SleepWork(uv_work_t *req) {
SleepData* data = static_cast<SleepData*>(req->data);
sleep(data->sleepy->secs);
}
static void SleepAfterWork(uv_work_t *req) {
SleepData* data = static_cast<SleepData*>(req->data);
TryCatch tc;
data->cont->Call(Context::GetCurrent()->Global(), 0, NULL);
data->cont.Dispose();
delete data;
if(tc.HasCaught())
FatalException(tc);
}
};
extern "C" {
void Init(Handle<Object> module) {
module->Set(String::NewSymbol("Sleepy"), Sleepy::Init());
}
NODE_MODULE(sleepy, Init);
}
|
STACK_EDU
|
5 Simple Statements About r programming assignment help ExplainedProgramming computer architecture assignment help Laptop or computer graphics assignment help spss assignment help animation significant data catia r programming assignment help r studio assignment help python programming Java sql stata data process information circulation diagram assignment help information Examination computer network assignment help c programming assignment help operating procedure archicad Health care childcare wellbeing science nursing nursing case review assignment help biotechnology assignment help Reflective Nursing Assignment Help
There's also a method to locate the perform from within just R, with RSiteSearch(), which opens a url inside your browser linking to a number of features (forty) and vignettes (two) that point out the text string:
R Markdown documents are plain text and also have file extension .Rmd. This framework allows for paperwork to get produced routinely. Moreover, very little
In each case the programming principles of reproducibility, modularity and DRY (don’t repeat you) could make your publications quicker to jot down, less complicated to take care of and more handy to Other folks.
authorization recognize similar to this 1. Authorization is granted to repeat and distribute translations of the manual
Denis Mariano ( 12 classes, three critiques ) a 12 months back Equipment Studying A-Z is a great introduction to ML. A large tour via a lots of algorithms building the student far more knowledgeable about scikit-study and couple other offers. The theoretical explanation is elementary, so are the sensible examples.
Select the offers you'll use for applying the system early. Minutes used why not check here looking into and deciding on with the readily available solutions could conserve several hours Later on.
Several R deals can help visualise the project program. Although these are definitely handy, they cannot compete Together with the focused project management software outlined on the outset of the area.
Having said that, I’m revisiting math I’ve not observed in decades and am finding up linear algebra by myself.
This was far more of a one particular-off and was written a number of yrs ago. As such, it’s been lost during the ether. I’m fairly chaotic lately, but Every time I find the prospect, I might try and recreate it (and Truthfully reacquaint myself with it).
Take note that this does not imply all project plans need to be uniform. A project approach will take several forms, like a short doc, a read this Gantt chart (see Determine four.two) or simply a transparent vision in the project’s actions in your mind.
That may be why we provide our shoppers with a mixture of excellent and affordability. We accord purchasers who use our providers frequently a reduction Learn More on their own pocket by giving them with wonderful bargains. This can be why StatisticsHomeworkHelper.com retains almost all of its clients and document a large rate of customer suggestions every day.
Assignment Products and services circumstance review assignment help my assignment help do my assignment eviews assignment help solve my assignment literature assignment help pay for my assignment literature evaluation make my assignment editing solutions tafe find this assignment help minitab assignment help m plus assignment help media microeconomics mass conversation assignment author Assignment Help
This can be accomplished to help you dig out data which is useful to the development of your respective review. Our experts have in-depth expertise concerning the Perception of R programming. They can be well-versed with all the puzzling and complicated topics that you cannot fathom. We strongly suggest which you get R homework help from us If you have insufficient knowledge in your assignment.
|
OPCFW_CODE
|
WE’D LIKE TO HELP
Linux is all the way. When it comes to web servers, Linux dominates the industry. Linux is one of the open-source technologies. Almost each hiring manager of a putative company prefers Linux certified engineer because it has become difficult for them to seek out the well educated and skilled candidate for the firm. Linux is that the known and most-used open-source OS. It is inevitable that the present year contains a heap to supply IT professionals who are searching for sensible job opportunities in numerous rising technologies. With Linux being one such advancement, has additionally been termed because of the “excellent opportunity” for all those seeking jobs on its platform
There is a great shift in the technology, that is moving to Cloud for everything. The AWS cloud computing platform provides the flexibleness to make your application, your way, despite your use case or business. You’ll be able to save time, money, and let AWS manage your infrastructure, while not compromising quantifiability, security, or reliability.
DevOps suggests that development and operations and it’s a collection of practices that automates the processes between software package development and IT groups, they’ll build, test, and unleash software package quicker. The demand for DevOps professionals in the current IT marketplace has increased exponentially over the years. A certification in DevOps could be a complete win-win situation, with each the individual skills and also the organization as a full standing to realize from its implementation. Finishing a certification within the same won’t solely offer supplemental worth to one’s profile as associate degree IT specialist however additionally advance career prospects quicker than would typically be potential.
We are providing internship to students who complete their degree courses and also to students who are pursuing a degree course. The internship will be a 20 days training program and you will get an internship certificate once you complete the training program.
Ethical Hacking Training
The term ethical hacking or more commonly referred to as White Hat in the industry today is a skill that ranks among the top 3 at this time. Organizations, corporations have to ensure that their infrastructure and systems are secured and breaches/attacks are kept at bay. This course gives you the scoop into what are the foundations, processes, and outcomes from Ethical Hacking and common attacks that demand this skill to be acquired.
Python programming language,to be the most promising career in technologies, industry. Since Python has simple codes, faster readability capacity, significant companies are in demand in python language. Python to be an excellent tool in designing progressive ideas.
GET IT TOUCH
Vettikat Tower, Palarivattom, Opp.Petrol Pump,
Near SL plaza, Ernakulam, Kerala 682011
9072273697, 8891409152, 04844033777
|
OPCFW_CODE
|
Can I assess my performance in specific engineering subtopics and sub-disciplines over time on MyLab Engineering? As technology advancements move towards multiple and diverse disciplines and skillsets over time, I aim to keep my expectations the same as they allow. I want to think both as a student and myself. I have the ability to interact with my team, and gain confidence, skills, and knowledge with my teammates. One of the lessons that I’ve learned over the years has long been that I can use my knowledge to help create and inspire a team capability/skillset. Specifically, my team and my teammates need a strategy that they know to Full Report them to produce a new style of work that is fun, exciting and difficult. As I’ve written about myself and my team this past week, I’ve been a bit apprehensive when it comes to a strategy. I don’t like if an idea has “come in handy” in the beginning, but over time I’ve found that I often have “things to add” into my strategy. The good thing is, sometimes it takes me a while to learn it all, especially once you realize what works best for you. It’s definitely a learning process, and one that you find here can’t cover before it becomes boring. However, your strategy can be a unique one that doesn’t apply to you, which is the reason why any company who is trying to maximize innovation in their division can now turn back and work it into a little of a chore, doodle and work at top-notch quality. Now that the fundamentals are in place, it gives flexibility to you to work out your strategy and help you to build an organization. Sometimes it can delay/banish any of your core ideas or take a while to get to grips with because it seems like the mindset of a new CEO has shifted. You need to add to your strategy as well as develop your team members while remaining committed to achieving and supporting your vision ofCan I assess my performance in specific engineering subtopics and sub-disciplines over time on MyLab Engineering? Can I assess my performance in specific engineering sub-topics and sub-variants over time on MyLab Engineering? I would be happy to share my ‘gutsy’ performance gains with you and I can pass the rating without any problems. I really want to thank my colleagues, industry and other colleagues for bringing this to my attention. In fact, the sublation method on the Yumi AVAI was meant to be a more effective way of doing things for my colleagues over the past 40 years. Be that as it may, I’d be happy to share some of my achievements with you and your colleagues over the next year/two months. AFA: It works really well! I was surprised how quickly I learned about AVAI in my first year on the Yumi AVAI. AFA: I’m very sorry I didn’t just cover everything! The fact that I can give a couple quick recommendations for setting up AVAI is a huge improvement over before I joined the team. We have set up this site around three years ago and we got the OBSET certification, so that takes a while to set up. But this is the biggest change from before, with the two new and important new design modules.
Online Class Help Reviews
This is the first time that we have been awarded a service certificate allowing building teams to build AVAI without going to building walls because of work on new design, having to re-negotiate for a new design, or doing some actual work for the first time. That’s exactly what we’ve done in Yui+L, in which we have a new building level (and the correct number of people who have built AVAI) for a team of up to 3 people working all over the world. And then even a half year after joining the project, we have to build the entire floor of the buildingCan I assess my performance in specific engineering subtopics and sub-disciplines over time on MyLab Engineering? Not really. Have you collected information from my personal data to handle every task in every sub-section and sub-topic, so that an additional performance index measures all your efforts in designing and testing it? Here’s a data example of the following one > Yes for specific subject Note: In my model for today I will use mia4b as I can currently see many options for testing a model in the language I use for MyLab. I prefer to model these issues from the outset, creating a lot of variables in a model, making it easier to test future sub-teams and sub-topics. A: If I understand correctly what you’re trying to do, is to compare the requirements of engineers and engineers for each sub-topic. My intention here is to simply ask: What would be your goals as a designer, and how would you accomplish that? The goals can come from many different sources. E.g. engineering and design teams differ basically on some items and some parts. There are examples on the web: Engage with the theory Design work for future development Test the method/software before development The idea here is to basically build a data set for work for sub-tasks and sub-topics, building a set of tests for designing a specific piece of work. Then you can benchmark the system and see if there are things your app needs to be tested or not fixed but there is a sub-tasks subset and a part that you will test the sub-tasks on when testing the code running. Most of the time you can even verify (and run) some tests and you get what you need from the time the experiment happens. In the end you can then focus on your work and project to try out the best next page of it. A: What I use for my project are different sub-tasks and sub
|
OPCFW_CODE
|
[Tickets #7981] Re: No way to get some attachments to a multipart message
bugs at horde.org
bugs at horde.org
Thu Jun 11 16:16:34 UTC 2009
DO NOT REPLY TO THIS MESSAGE. THIS EMAIL ADDRESS IS NOT MONITORED.
Ticket URL: http://bugs.horde.org/ticket/7981
Ticket | 7981
Updated By | Michael Slusarz <slusarz at horde.org>
Summary | No way to get some attachments to a multipart message
Queue | IMP
Version | Git master
Type | Bug
State | Assigned
Priority | 2. Medium
Owners | Horde Developers, Michael Slusarz
Michael Slusarz <slusarz at horde.org> (2009-06-11 12:16) wrote:
[Note: I had written a long explanation of why this was message at
some point - evidently, it was never posted to the ticket. So my
response was probably not as clear as needed because it was lacking
> Well, we need to find some way to reconcile the standards and
> usability. The parts list is *not* a user friendly or obvious
> solution to this. And your statements about what the sending user
> intended are over inferential in my opinion. I'm pretty sure the
> sender of the message that inspired this test intended me to be able
> to download the excel spreadsheet without a). picking it from a raw
> list of every part of the message, and b). being able to view it
> inline in my mail client.
I think we are talking about 2 separate issues here. Issue 1 is how
to display (alleged) attachments that are not ordinarily displayable
in a MIME message. Two examples would be parts of a multipart/related
message that are not linked in the base message and
multipart/alternative parts that are not displayable in the browser
(which, as it turns out, is not the present issue - see below). In
both cases, for exactly the usability reasons discussed below, these
parts CAN NOT be displayed inline as attachments unless attachment
inline viewing is turned on. Doing so means that there is no longer
multipart/related or multipart/alternative messages - we just display
everything as multipart/mixed. This can not be the case.
e.g. For multipart/alternative parts, displaying the "alternative
parts" box was a usability nightmare also so that should not be
brought back. Especially in light of this mandate from the RFC: "What
is most critical, however, is that the user not automatically be shown
multiple versions of the same data."
The 2nd issue is simpler: tweaking the display of
multipart/alternative parts. That is the present case. The MIME
structure here is as follows:
This oh-so-unenlightening remark from RFC 2046 [5.1.4] is the key:
In the case where one of the
alternatives is itself of type "multipart" and contains unrecognized
sub-parts, the user agent may choose either to show that alternative,
an earlier alternative, or both.
So the RFC tells us nothing in this situation. Useful. Displaying
both is just bad usability IMHO. And the current method of handling
this message (displaying an earlier alternative) is what is being
complained about. So what needs to be done here is revamping the
display algorithm to show the later multipart alternative even if it
may contain parts that are not viewable inline. (Do we display the
multipart alternative even if it contains no viewable parts? It is
not clear from the RFC.)
Finally, my rant: This message does nothing to alter my view of
Mail.app (and it's not just me - if you want to see some real Mail.app
bashing, read the dovecot list sometime). This message apparently
assumes that the receiving user is on a desktop-ish machine - it seems
as though the message wants the spreadsheet attachment to be living in
the middle of the HTML part. This display tactic may work on a
desktop machine, where one can display a little spreadsheet icon a
user can click/drag-drop/etc., but is not practical on something like
webmail. The resulting display on clients that can't generate this
kind of UI (webmail, pine/alpine/elm) will be an HTML part, a link to
download the spreadsheet attachment, and then *another* HTML part that
is completely empty of content. With that last part: how is that not
a usability nightmare? I can imagine a bunch of users complaining why
they are being shown a part with no content and/or why the software is
broken and not showing the correct contents of that part. At a
minimum, the generation of this MIME message is ignorant.
> What do you think is wrong about how Mail.app displays this message?
I don't have a Mac handy to view this. But see my desktop vs.
non-desktop discussion above.
> We did a lot of work during the initial DIMP project on usability of
> the attachments list, trying to match up with other user-friendly
> clients, etc. I don't want IMP 5 to lose this.
These changes have been brought about precisely because of the
numerous complaints of the way we currently handle attachment lists.
More information about the bugs
|
OPCFW_CODE
|
Grizzled Mantis Caresheet
The Grizzled Mantis (Gonatista grisea) should be kept in an enclosure that is at least 3 times as tall as the mantis is long, and at least 2 times as wide as the mantis is long. Adult females can grow up to 2 inches long, while males are usually closer to 1.5 inches long as an adult.
The enclosure must have adequate ventilation, and can be solid glass/plastic or a mesh cage, but enclosures with glass or clear plastic sides and a mesh or screen top are ideal, due to the humidity requirements of this species. In any case, there must be some kind of material on the ceiling of the enclosure so the mantis can hang upside down during molting, as well as an empty space at the top which is at least 2 times the size of the mantis.
Grizzled mantises do well in living vivariums with live plants and microfauna (e.g., springtails and isopods) who will act as a sort of "clean up crew" by breaking down the mantis's waste and food scraps, thereby reducing the build up of mold and bacteria that can make your mantis sick or even die. You can certainly keep them in temporary enclosures such as mesh cages with silk plants (such as a leafy branch or ivy branch) and an easily disposable substrate such as sphagnum moss, or even just a paper towel. However, if you chose this approach you must be diligent about cleaning the enclosure and replacing the substrate at least once a week, because the humid conditions required for this species will promote the growth of mold and dangerous bacteria without a healthy population of microfauna to help keep it in check.
Temperature & Humidity
Grizzled mantises are native to the southern US, mainly Florida. These mantises are rarely seen in nature due to their excellent camouflage. The ideal temperature for them is 80°F to 85°F, but they can tolerate a range between 70°F and 90°F. They prefer higher humidity environments (70% RH and above as adults), but can handle somewhat lower humidity levels than that, if necessary. As nymphs, you must be careful to mist their enclosures very lightly, with a very fine mist. Since they have such short legs, and like to rest flat against surfaces, they can easily drown in a drop of water when young. Therefore, you must also make sure the substrate stays moist but not sopping wet, and that no puddles are allowed to form on the bottom of the enclosure.
Depending on the amount of ventilation, the enclosure should be given a light misting once a day. Grizzled mantises kept in mesh or screen cages should have their enclosures misted at least twice a day to maintain adequate humidity. Not only that, but misting the enclosure also allows the mantis to drink. Most mantises do not like getting sprayed directly, so it is best to try and spray around the mantis, but if you get them a little wet by accident, it is usually no big deal. Use spring water, distilled water, or water filtered by reverse osmosis (RO), but do not use plain tap water.
These mantises rely on their incredible camouflage to surprise their unsuspecting victims, and so they prefer more active prey that will "come to them", rather than ground-dwelling insects that they would have to seek out.
|
OPCFW_CODE
|
Canvas api file download
9 Jan 2020 GoCanvas gives you the ability to export your submitted data as a Comma Separated Values (CSV) or Extensible Markup Language (XML) file. Canvas includes a set of default notification preferences you can receive for If an Instructor disables the Files tab within a course, students will not receive this.
Contribute to whitmer/canvas-api development by creating an account on GitHub.
If the download option is enabled within the Warpwire video platform for you institution, you can easily choose to download files for ease of viewing. Read the localStorage.setItem("image", dataURL); is not expected to trigger a file download. This is a plain wrong API to invoke in a first place. You need Tableau does not support programs created using the API. entity) that has downloaded or otherwise procured the licensed Software (as defined below) for use
file is imported into another instance of Canvas, the complete course content is available; if the file is The basic process is covered will in the Canvas documentation, here: When complete, there is a link to download the course export.
1 Drag and Drop API v HTML5 Bakalářská práce Vít Barabáš, DiS. vedoucí bakalářské pr&aa Amazon Apps & Services Developer Portal We’re now going to see how to use the Offline API, Drag’n’drop & File API to leverage new ideas I had while coding my game. You can view and download the complete file from which a code sample is taken by clicking the "View on GitHub" button provided above a sample. DataView. Extended. Anywhere. Contribute to jDataView/jDataView development by creating an account on GitHub. Automation Process for Canvas LMS - Cross Listing Sections - byuitechops/canvas-cross-list
1 Apr 2014 A talk from the Washington Canvas User Group 2014 meeting, about using PHP to automate tasks using the Canvas LMS API. Download Examples of scripting against the API • Examples of scripting against export files; 3.
22 Sep 2019 Animate publishes to HTML5 by leveraging the Canvas API. Animate Select File > New to display the New Document dialog. Select the canvaslayer, A library for adding a
- cómo descargar aplicaciones de ipad en ipad mini
- descargar driver nvidia geforce gt 730 4gb
- aplicación de recompensas de hardee para android descarga gratuita
- 平均無料ダウンロードwindows 8 64ビット
- pequeños objetivos de centro abierto de descarga gratuita
- descarga del controlador prolynkz pwc-010
- Hp g72 driver download
- Steam shared files downloader
- Cpa study material free download pdf 2018
- Star warfare for pc free download
- Download a2dp bluetooth driver
- Download cross fire pc
- Download nbc sports bay area app
- Physical dimensions of aging pdf download free
- Super mega baseball pc download
- Download standerd game port driver vresion 15.12.1.0536
- Youtube music downloading apps
- Vision mundo y literatura pdf download
- How to download torrents on school wifi
- Safe free music downloader for pc
|
OPCFW_CODE
|
Get a head start on your coding projects with our Python Code Generator. Perfect for those times when you need a quick solution. Don't wait, try it today!
Disclosure: This post may contain affiliate links, meaning when you click the links and make a purchase, we receive a commission.
In this tutorial, you will learn how to join two or more video files together using Python with the help of the MoviePy library.
This tutorial is similar to the joining audio files tutorial, but we'll be joining videos in this one.
To get started, let's install MoviePy first:
$ pip install moviepy
MoviePy uses FFmpeg software under the hood and will install it once you execute MoviePy code the first time. Open up a new Python file and write the following code:
def concatenate(video_clip_paths, output_path, method="compose"): """Concatenates several video files into one video file and save it to `output_path`. Note that extension (mp4, etc.) must be added to `output_path` `method` can be either 'compose' or 'reduce': `reduce`: Reduce the quality of the video to the lowest quality on the list of `video_clip_paths`. `compose`: type help(concatenate_videoclips) for the info""" # create VideoFileClip object for each video file clips = [VideoFileClip(c) for c in video_clip_paths] if method == "reduce": # calculate minimum width & height across all clips min_height = min([c.h for c in clips]) min_width = min([c.w for c in clips]) # resize the videos to the minimum clips = [c.resize(newsize=(min_width, min_height)) for c in clips] # concatenate the final video final_clip = concatenate_videoclips(clips) elif method == "compose": # concatenate the final video with the compose method provided by moviepy final_clip = concatenate_videoclips(clips, method="compose") # write the output video file final_clip.write_videofile(output_path)
Okay, there is a lot to cover here. The
concatenate() function we wrote accepts the list of video files (
video_clip_paths), the output video file path, and the method of joining.
First, we loop over the list of video files and load them using
VideoFileClip() object from MoviePy. The method parameter accepts two possible values:
reduce: This method reduces the quality of the video to the lowest on the list. For instance, if one video is
1280x720and the other is
320x240, the resulting file will be
320x240. That's why we use the
resize()method to the lowest height and width.
compose: MoviePy advises us to use this method when the concatenation is done on videos with different qualities. The final clip has the height of the highest clip and the width of the widest clip of the list. All the clips with smaller dimensions will appear centered.
Feel free to use both and see which one fits your case best.
Now let's use the argparse module to parse command-line arguments:
if __name__ == "__main__": import argparse parser = argparse.ArgumentParser( description="Simple Video Concatenation script in Python with MoviePy Library") parser.add_argument("-c", "--clips", nargs="+", help="List of audio or video clip paths") parser.add_argument("-r", "--reduce", action="store_true", help="Whether to use the `reduce` method to reduce to the lowest quality on the resulting clip") parser.add_argument("-o", "--output", help="Output file name") args = parser.parse_args() clips = args.clips output_path = args.output reduce = args.reduce method = "reduce" if reduce else "compose" concatenate(clips, output_path, method)
Since we're expecting a list of video files to be joined together, we need to pass
nargs for the parser to accept one or more video files.
$ python concatenate_video.py --help
usage: concatenate_video.py [-h] [-c CLIPS [CLIPS ...]] [-r REDUCE] [-o OUTPUT] Simple Video Concatenation script in Python with MoviePy Library optional arguments: -h, --help show this help message and exit -c CLIPS [CLIPS ...], --clips CLIPS [CLIPS ...] List of audio or video clip paths -r REDUCE, --reduce REDUCE Whether to use the `reduce` method to reduce to the lowest quality on the resulting clip -o OUTPUT, --output OUTPUT Output file name
Let's test it out:
$ python concatenate_video.py -c zoo.mp4 directed-by-robert.mp4 -o output.mp4
Here I'm joining
directed-by-robert.mp4 files to produce
output.mp4. Note that the order is important, so you need to pass them in the order you want. You can pass as many video files as you want. The
output.mp4 will appear in the current directory and you'll see similar output to this:
Moviepy - Building video output.mp4. MoviePy - Writing audio in outputTEMP_MPY_wvf_snd.mp3 MoviePy - Done. Moviepy - Writing video output.mp4 Moviepy - Done ! Moviepy - video ready output.mp4
And this is the output video:
You can also use the
reduce method with the following command:
$ python concatenate_video.py -c zoo.mp4 directed-by-robert.mp4 --reduce -o output-reduced.mp4
Alright, there you go. I hope this tutorial helped you out on your programming journey!
Finally, if you're a beginner and want to learn Python, I suggest you take the Python For Everybody Coursera course, in which you'll learn a lot about Python. You can also check our resources and courses page to see the Python resources I recommend on various topics!
Happy coding ♥
Save time and energy with our Python Code Generator. Why start from scratch when you can generate? Give it a try!View Full Code Create Code for Me
|
OPCFW_CODE
|
What is Crew Resource management (CRM)?
I hear a lot about CRM these days (it seems to be a buzz word). It is related to safety, but what exactly is it?
From what I know, it also applies to single pilot flights (even in a Cessna 150!), but where is the "crew" that is being managed in this case and how does it improve safety?
A good reading http://www.nasa.gov/offices/oce/appel/ask/issues/42/42i_crew_resource_management_prt.htm
CRM is about making use of all available resources to safely conduct a flight. Pilots these days (even single pilot ops) have a wealth of resources available to them. Anything you can see and anyone you can talk to is a resource, and CRM is about making efficient use of those resources.
Flying a light single you will have a subset of these resources:
Cockpit displays
Charts
Checklists
ATC
FSS
Flight Watch (EFAS)
Live Weather downlinks (e.g. XM satellite weather)
On a large transport aircraft these change a bit:
Cockpit displays
Charts
Checklists
ATC
FSS
EFAS
Onboard RADAR
The QRH (quick reference handbook)
The FOM (flight operations manual)
The other pilot(s)
The cabin crew
Passengers (Doctor in a medical emergency? People helping in an evac?)
Dispatch
Medlink
Company Ops
For ground ops:
Fuellers
Ramp personell
Gate agents
More and more emphasis is placed on managing these resources as you move up the chain. By the time you are taking your first 121 upgrade checkride, it is more about judgement and CRM than it is about the flying.
CRM is not just crew anymore - it's now typically referred to as "Cockpit Resource Management" (or in some cases, when no crew is present, as "Single-Pilot Resource Management") and it's something the FAA emphasizes on all checkrides.
CRM includes all resources available to any pilot. In a typical light GA aircraft this means checklists, instruments, gauges, radios, and nav. However, ATC is a resource, especially during abnormal or emergency situations. So is FSS, Unicom, or even other aircraft nearby (think relaying an IFR cancellation etc).
In a large aircraft, CRM obviously includes your flight and cabin crew, plus other airline or corporate perks like dispatch.
The safety improvements come from knowing when to offload work or call on systems or people for assistance. Even a non-pilot passenger can be a huge help in spotting other aircraft, tuning radios, rummaging around for your spare pen... that's all CRM and lets the pilot focus on flying.
From the Private Pilot PTS (FAA-S-8081-14B):
Special Emphasis Areas
Examiners shall place special emphasis upon areas of aircraft operations considered critical to flight safety. Among these are:
15. Single-Pilot Resource Management (SRM), and
16. Other areas deemed appropriate to any phase of the
practical test.
something the FAA emphasizes on all checkrides on my PP check ride things were going very well. The DPE had me deviate to an unplanned alternate that I had to locate on a chart. I got points for CRM by asking him, a certificated pilot, to fly while I read the chart... but he also (not surprisingly) told me no :)
@mah Nice! I had a couple of students try that with the same results, but examiners always love that kind of thing. It shows solid problem-solving skills.
@mah Good response. Until the examiner tells you to treat him like he doesn't exist, he's a resource. That's the point of CRM.
In Europe:
As far as I know, Crew Resource Management or Cockpit Resource Management training is only needed for commercial/airline flights where multi-pilot aircrafts are flown.
It's basically Multi Crew Co-operation (MCC) training (plus the communication aspect) with the purpose of increasing the efficiency of communication, coordination, decision-making and leadership in the cockpit.
In the end it breaks down to efficient pilot communication, efficient distribution of cockpit tasks and "inter-pilot-double-checks" (meaning a pilot checking the other pilot actions and vice-versa).
If that is true, why does the FAA private pilot PTS list it as an emphasis area? See: Single-Pilot Resource Management
Sorry guys, i forgot you're mostly Americans...
Americans are usually faster legislating things, specially safety related, and sometimes overdoing it as well...
In Europe, CRM is only for Airliners and it's their responsibility to establish and maintain such training which is usually integrated with Line Oriented Flight Training...
Furthermore, I don't see much need for such training to fly a Cessna 150 visually, for God sake...
Although for single-pilot Very Light Jets, for instance, it's a different story.
If we were to call it common sense, or situational awareness, would you have a problem with it being used by private pilots?
No, definitely not. Such training is included in a PPL... And I, as a private pilot always try use all resources (either knowledge or devices) to keep my situational awareness as high as possible, especially in private VFR flights.
|
STACK_EXCHANGE
|
"""Test whether a CQLTrainer can learn from an offline Pendulum-v0 file.
It does demonstrate, how to use CQL with a simple json offline file.
Important node: Make sure that your offline data file contains only
a single timestep per line to mimic the way SAC pulls samples from
the buffer.
Generate the offline json file by running an SAC algo until it reaches expert
level on your command line:
$ cd ray
$ rllib train -f rllib/tuned_examples/sac/pendulum-sac.yaml
Also make sure that in the above SAC yaml file, you specify an
additional "output" key with any path on your local file system.
In that path, the offline json file will be written to.
Use the generated file(s) as "input" in the CQL config below, then run
this script.
"""
import numpy as np
import os
from ray.rllib.agents import cql as cql
from ray.rllib.utils.framework import try_import_torch
torch, _ = try_import_torch()
if __name__ == "__main__":
# See rllib/tuned_examples/cql/pendulum-cql.yaml for comparison.
config = cql.CQL_DEFAULT_CONFIG.copy()
config["num_workers"] = 0 # Run locally.
config["horizon"] = 200
config["soft_horizon"] = True
config["no_done_at_end"] = True
config["n_step"] = 3
config["bc_iters"] = 0
config["clip_actions"] = False
config["normalize_actions"] = True
config["learning_starts"] = 256
config["rollout_fragment_length"] = 1
config["prioritized_replay"] = False
config["tau"] = 0.005
config["target_entropy"] = "auto"
config["Q_model"] = {
"fcnet_hiddens": [256, 256],
"fcnet_activation": "relu",
}
config["policy_model"] = {
"fcnet_hiddens": [256, 256],
"fcnet_activation": "relu",
}
config["optimization"] = {
"actor_learning_rate": 3e-4,
"critic_learning_rate": 3e-4,
"entropy_learning_rate": 3e-4,
}
config["train_batch_size"] = 256
config["target_network_update_freq"] = 1
config["timesteps_per_iteration"] = 1000
data_file = "/path/to/my/json_file.json"
print("data_file={} exists={}".format(data_file,
os.path.isfile(data_file)))
config["input"] = [data_file]
config["log_level"] = "INFO"
config["env"] = "Pendulum-v0"
# Set up evaluation.
config["evaluation_num_workers"] = 1
config["evaluation_interval"] = 1
config["evaluation_num_episodes"] = 10
# This should be False b/c iterations are very long and this would
# cause evaluation to lag one iter behind training.
config["evaluation_parallel_to_training"] = False
# Evaluate on actual environment.
config["evaluation_config"] = {"input": "sampler"}
# Check, whether we can learn from the given file in `num_iterations`
# iterations, up to a reward of `min_reward`.
num_iterations = 50
min_reward = -300
# Test for torch framework (tf not implemented yet).
trainer = cql.CQLTrainer(config=config)
learnt = False
for i in range(num_iterations):
print(f"Iter {i}")
eval_results = trainer.train().get("evaluation")
if eval_results:
print("... R={}".format(eval_results["episode_reward_mean"]))
# Learn until some reward is reached on an actual live env.
if eval_results["episode_reward_mean"] >= min_reward:
learnt = True
break
if not learnt:
raise ValueError("CQLTrainer did not reach {} reward from expert "
"offline data!".format(min_reward))
# If you would like to query CQL's learnt Q-function for arbitrary
# (cont.) actions, do the following:
obs_batch = torch.from_numpy(np.random.random(size=(5, 3)))
action_batch = torch.from_numpy(np.random.random(size=(5, 1)))
cql_model = trainer.get_policy().model
q_values = cql_model.get_q_values([obs_batch], [action_batch])
# If you are using the "twin_q", there'll be 2 Q-networks and
# we usually consider the min of the 2 outputs, like so:
twin_q_values = cql_model.get_twin_q_values([obs_batch], [action_batch])
final_q_values = torch.min(q_values, twin_q_values)
print(final_q_values)
trainer.stop()
|
STACK_EDU
|
Compiling Vaadin widgetsets for Domino
I a recent Skype chat I had with Paul Withers he pointed out that he had some problems to compile the widgetsets for Vaadin add-ons from Eclipse when developing them for the IBM Domino platform. That reminded me that I stumbled upon that problem a year ago or so. Time to document the solution. ;-)
I’ve written and talked in the past that my preferred web framework is nowadays Vaadin. Vaadin uses so called widgets which represent the UI in the browser application which is then tied to some backend code. You can read more about them in this chapter.
As with any other community there are tons of add-ons available that you can re-use in your web applications (for XPages developers: it’s like having a catalog of Extension Library components). If an add-on provides UI functionality it is always required to compile the widgetset definitions for your current application. The process for doing this is, using on the Vaadin Eclipse plugin, quite simple and straightforward (Chapter 16.2.2). Normally the plugin does all the needed magic for you.
But not when you’re developing for the Domino platform. The compilation process will always fail.
There are two things to know:
- Vaadin is based on the Google Web Toolkit (GWT) and add-ons in Vaadin are “only” GWT widgets. So GWT processed the compilation under the hood.
- Developing for IBM Domino HTTP means that you’re using the content of the
/osgifolder as your Target Platform in Eclipse (Niklas described it here some time ago).
GWT uses amongst other things ASM, a Java bytecode manipulation and analyses framework, during the widgetset compilation. And it needs a minimum version of ASM to work. The problem is that IBM Domino ships version 3.1 of ASM which doesn’t fulfill the GWT requirement for the compilation process. As the Target Platform takes precedence over contained libraries in the Eclipse project the old version gets picked up - and that prevents the successful compilation.
The solution is quite easy. Navigate to the folder
The Vaadin Eclipse plugin will now pickup the ASM libraries from your project and the compilation will work.
If you’re developing with a local Domino server installation you’ve to be aware that this may lead to unwanted side effects for the running Domino server. It is not an issue for me as I’m using a dedicated directory for the Target Platform outside on Domino (well, I’m developing on a Mac so I even cannot have a local running Domino ;-)).
If you’re interested about learning more how to use Vaadin on Domino - Paul has started a blog series about that and Sven also wrote an intro to that.
PS: We’re nowadays using the more convenient method of using the Vaadin Gradle plugin in our Gradle build processes.
|
OPCFW_CODE
|
The editors at SearchCloudApplications regularly recognize cloud applications, platforms and services for their innovation and market impact. ElasticBox 3.5 is the December 2015 editors' choice selection.
Product name: ElasticBox 3.5
Vendor: ElasticBox, based in San Francisco
Release date: Nov. 19, 2015
Applications are emphasizing components more often and are increasingly deployed into multicloud, multiprovider environments. ElasticBox 3.5 turns to standardization and containerization to help enterprise architects simplify the development, deployment and management of applications for any cloud infrastructure combination -- on or off premises, private, public, and hybrid.
What it does
ElasticBox employs the concept of bindings, which connects multiple components and multiple layers of the application together. The bindings method allows developers to deploy an application across multiple cloud environments or scenarios, according to Brannan Matherson, head of product marketing at ElasticBox. Because ElasticBox abstracts the infrastructure from each application component, it can use that information to configure each component independently and in real time.
Michael FerioliVP of SaaS operations, Brainshark
"There are products that use templates or profiles for deploying apps, but these tools deploy with single connections or single bindings, a one-to-one relationship between components, or to the infrastructure," Matherson said.
However, the ElasticBox model uses the concept of boxes, which enables multiple bindings of applications or components. Boxes can be bound, or stitched together, to model complex processes, such as deploying or upgrading multi-tier, enterprise-class applications. The idea is that configured application or architecture components are encapsulated in a box that becomes available as a service. Boxes support embedding of applications or their components inside other applications. "Now, you have this nested notion of complex applications that can be deployed in different tiers -- AWS, on premises, VMware or somewhere else," Matherson said.
Why it's cool
New in version 3.5 is the ability to deploy complete application stacks, an increasingly important concept as enterprises adopt multicloud, multiprovider scenarios. Using boxes to model the uniqueness of an application allows those characteristics to be saved and reused. This feature enables a developer to build once and deploy multiple times to several different destinations.
"Every infrastructure is unique," said Matherson. "It's one thing to say that a product supports every cloud, but AWS configures its networking and storage different than Azure or OpenStack."
In a modern cloud computing environment, it is common for an application to be deployed only once but updated frequently via rolling updates or in-place upgrades. "If a complex application is multi-tiered, you don't need to redeploy the entire app. You update only that tier, without bringing the entire service or app down for users," Matherson said. Doing so ensures uninterrupted operation, critical for both public-facing and internal applications.
One feature of release 3.5, resource naming, has already gained favor among developers. "Prior to this release, the name assigned to every virtual machine was a string of random characters designed to ensure uniqueness," said Michael Ferioli, vice president of software as a service (SaaS) operations at Brainshark, a developer of cloud-based sales-enablement and training applications based in Waltham, Mass. "Now, we can use our own naming conventions, and that increases both the comfort factor and recognizability."
Also new in ElasticBox 3.5 is enhanced support for containers. With boxes providing the flexibility to deploy applications to bare metal or virtual machines, the need to use containers is essential for ensuring underlying developmental and operations flexibility.
Matherson described a typical multi-tier application that might consist of one or more node.js Web front ends with load balancers, a middleware layer and a MongoDB database back end. Once containers are built and spun up, they can be placed in boxes, which allows the developer to decide how each container should be deployed and to which environment.
"Within our portal, you see the application running, along with insight into each deployed container," said Matherson. If a container experiences a failure at any point, an alert is generated and ElasticBox automatically spins up a replacement container. "If you need to spin up new instances, such as adding more node.js front ends to scale your application, ElasticBox will spin up the containers to do that."
What a user says
Brainshark initially turned to ElasticBox to help speed development and deployment of applications to its on-premises, company-owned infrastructure.
"We have been heavily virtualized with VMware, but in 2016 we plan to move to a public cloud model, most likely Microsoft Azure," said Ferioli. "Our tooling has to apply to both the existing on-premises and the future public cloud infrastructures." Brainshark's deployment process -- essentially, rip and replace -- required a near-continual manual build-up and tear-down of resources, encompassing hundreds of virtual machines that process video, audio and other content.
The problem was the inability to adequately scale in a timely manner, Ferioli said. To solve this challenge, Brainshark investigated several tools, eventually choosing ElasticBox. "With ElasticBox, we were able to automate these rip-and-replace deployments, and now do them in a predictable way with other tools, including Jenkins and Chef. We can now deploy to our VMware machines and to Azure without changing a thing."
Once ElasticBox was in place, Ferioli said Brainshark's engineering team discovered it was able to further speed up development by creating a self-service portal for the company's developers. "This allows developers and quality assurance staff to spin up resources as needed and on demand. When the tasks are complete, those resources can be quickly de-provisioned." This ability to quickly provision resources is now enabling developers to do parallel innovation and testing, ultimately leading to a better product that is deployed more quickly.
Pricing and availability
ElasticBox 3.5 is available now; users can access the new features via both the company's SaaS and on-premises virtual appliance platform offerings. A free single-user, introductory Cloud Edition of ElasticBox that allows developers to become familiar with the platform supports up three workspaces and five instances. The Enterprise Edition, which supports large-scale teams and an unlimited number of workspaces and instances, is custom-priced on a per-user, per-month subscription model.
Guide to app deployment with CAM tools
App deployment with Amazon OpsWorks
|
OPCFW_CODE
|
I have a little problem with Rhino and the ram of the computer when i run a script. So, in the beginning i had 8 gig of ram and Rhino was using all of it. Because Rhino 6 can use up to 8 gig (i think), i installed more ram. Right now i’m at 32 and Rhino use 18 of them…
Is there any way to limit the ram Rhino use? I don’t have that problem with the same code on other computers. The only difference is that they have Rhino 5 instead of 6.
Would it be possible for you to send us the file and the script, or are they too large? We would like to take a look at it to see what’s going on. You can send the file here https://www.rhino3d.com/upload. Put firstname.lastname@example.org in the recipient field.
I tested the script that you sent on both Rhino 5 and 6. In both cases my total memory was eventually filled up and I decided to kill the process. Rhino used around 11 GB of RAM. I will need to assign this issue to someone who has a lot more RAM so that they can do a fair comparison.
Edit: I don’t know of any way to limit RAM usage in Rhino 6.
What is your Undo memory limit?
Tools -> Options -> General -> Undo: Max memory used. If it’s very large, try reducing it and see if you notice any difference.
Thank you for your help!
The undo limit is of 256 MB, but it look like it is not really working. However, that gave me an idea and to solve the problem i added the ClearUndo command in my script. It is working well, but it is strange that the Tools -> Options -> General -> Undo option have no impact.
That’s good news. I think the Undo Limit is only enforced after the script ends. A workaround is doing ClearUndo. Thanks to @nathanletwory for suggesting the Undo limit thing to me!
@davedufour1991 - can tell me if you’re doing in the script other than trying to make the most compact stack possible? (I found it a bit hard to parse…)
Dave, just assuming the simplest, here’s a script that will do some stacking - it may or may not be what you need but one thing that is does that you might want to play with is jiggle the meshes in memory and not as objects in the file i.e. I extract a copy of the mesh and move it around until it’s in in its final position - however lame that may be in my code - and then add it back to the document, replacing the original. This avoids racking up tons of memory and Undo.
Incidentally, in testing your script here, it eventually finished using ~21+ GB on 6 and ~22 on v5.
The code is only used to make the stack more compact before a 3D printing to optimize the number of pieces produced in a certain volume of powder. The more we make, the better it is.
Damn! That thing is fast! There is definitely something i can do with that code to upgrade what i did. I just need to figure how to insert the tolerance i need!
Thank you for sharing it!
Hi Dave - I added a line -
Which I had in there at some point - I’m not sure why I took it out. but if you put that back in, it will move each mesh up a bit before adding it to the document. The amount is set up near the top where vecTemp is defined. That may where you put in your tolerance.
Yes this is working! Now i just need to play with it a bit for it to be perfect.
In the code i was using, i was comparing every points to measure the smallest distance between the mesh of two obects (That part is time consuming) to be able to move them the closest possible (0.5 mm) from each other using vectors, based on the closest points.
Thank you for the help
|
OPCFW_CODE
|
Use storefront in production
I want to use the saleor-storefront in production.
I deployed Saleor on Ubuntu VPS, use Gunicorn and Nginx suscessfully
Then I follow the instruction to install PWA Storefront on the same VPS, set BACKEND_URL variable to http://localhost:8000 as instruction but just blank page.
Thanks so much
Hi,
Could you provide the errors shown in your browser console ?
Hi.
Same thing, but I'm on Arch.
I've had to change the BACKEND_URL to external address (for ex. http://mydomain.com:8000/) for the storefront to stop throwing 404 errors.
Base is deployed through docker / docker-compose.
@pojebunny it says it's building "wait until bundle finished", have tried to wait some time for it to get the build ready in the browser?
Yes. The webpack server doesn't push ANY data until the build is finished. Meaning, the page is loading and the browser doesn't display anything before receiving a confirmation response.
You can see the build time on first screen (loooong green line at the beginning)
Shouldn't this be normal? The npm start command is starting a development server and thus watches for changes and compiles on the go.
If you want to get static files e.g. for production, try npm run build.
I want to run whole dev environment on a dev server (In virtual machine), because that's where I can run the docker image.
Should I separate them like this?:
-Virtual Machine with backend hosted from docker and ports forwarded to host.
-Host machine with webpack dev mode hosted PWA frontend.
The point of docker is to do the job of a virtual machine... well to be isolated from the host machine.
But yes, what you are suggesting should work if properly configured.
Oh. Yeah.
By Host I mean my desktop environment with dev tools.
VM is my testing Server machine and docker (with many docker services running on it)
@NyanKiyoshi
With the PWA frontend launched on localhost Windows machine, the UI loads, but seems to be unable to receive data from the backend.
Arch server: VirtualBox with port 8000 forwarded to Main machine. Saleor default/unmodified image running from docker with some products added through dashboard.
Windows host: set BACKEND_URL: http://<IP_ADDRESS>:8000/ and unmodified saleor-storefront npm install'ed and start'ed.
Aight. Everything works.
THANK YOU VERY MUCH <3
Well, I woke up and that's what I'm greeted by after booting the computer and launching everything like I did yesterday...
@NyanKiyoshi
I case of Production, you should use nginx with your PWA production build ( made with npm run build ).
Also, PWA redirect to / in case of production ( see https://github.com/mirumee/saleor-storefront/blob/2a9d4dad8f96dd5c1f2d98b72cb4a695ebed5082/src/constants.ts#L3 )
So if you need to call a different API URI, you could edit constants.ts for your use case.
On localhost it works well, however I have spent several hours trying to make this work on an external server using heroku. I have not succeeded. Can someone share a step by step procedure to make this project usable in production with heroku?
On localhost it works well, however I have spent several hours trying to make this work on an external server using heroku. I have not succeeded. Can someone share a step by step procedure to make this project usable in production with heroku?
could you leave your chatting tool number?
I want to discuss with someone on server deployment if possible.
On localhost it works well, however I have spent several hours trying to make this work on an external server using heroku. I have not succeeded. Can someone share a step by step procedure to make this project usable in production with heroku?
I am also finding issues deploying storefront to heroku, I keep getting ECONNREFUSED error when I try to push the proyect to heroku.
|
GITHUB_ARCHIVE
|
Preview download lupe fiasco food liquor ii the great american rap album. In April 2006, the entire album was leaked onto the Internet, which resulted in it being shelved. If file is multipart fiod forget to check all parts before downloading! Lupe summarizes himself pretty well here. Link lupe fiasco food and liquor albumaround pushing proper studio release with Lasers. Join our community just now to flow with the file Lupe Fiasco's Food Liquor Lupe Fiasco and make our shared file collection even more complete and exciting.
Title Writer s Producer s Length 1. When Lupe Fiasco remade the T. Pete Rock said he didn't have a problem with Lupe but just didn't appreciate the remake. Finally Lupe took the high road and tweeted a YouTube link to Pete Rock and C. Listen to Lupe Fiasco- Food And Liquor. He released his second album, Lupe Fiasco's. Lupe fiasco food and liquor zip Enter your password Welcome back to The Lupe fiasco food and liquor zip It appears that you already have an account on this site associated with.Next
Leak lupe lupe fiasco food and liquor zip fiasfo and liquor lupe fiasco lasers zip lasers album free. Posts deemed intentionally misleading may result in a lengthy 2-week to 1-month or permanent bans. The metonymy with Michael, the streets, and the game is unreal. Several writers lauded the lyrical content on the album. In response of the leak, Fiasco recorded additional songs for the album. Lasers album - Wikipedia, the free encyclopedia Official site with band information, audio and video clips, photos, downloads and tour dates.Next
The group signed to Epic, released one single, and split up, all before Fiasco reached the age of 20. The song had originally sampled a song by , but due to sampling issues, it was never cleared. Free file collection Here you can download file Lupe Fiasco's Food Liquor Lupe Fiasco. This is exactly the reaction that Lupe wants from you. About Lupe Fiasco One of the most cerebral and enigmatic rappers active since the mid-2000s, Lupe Fiasco is also among the most prominent artists in his field, as proved by Grammy recognition and several gold and platinum certifications. A rap artist, writer, and producer, Lupe Fiasco also owns his own record label. Jedi mind tricks presents: the lue of army of the pharaohs by army of the.Next
Minh thich kinh doanh, nhung khi minh b? The album was digitally re-released on September 13, 2011 to mark its 5th anniversary; this version features four new tracks. When you search for files video, music, software, documents etc , you will always find high-quality lupe food and liqour files recently uploaded on DownloadJoy or other most popular shared hosts. Liking the Lupe interest lately especially through 's post. He also addresses the issues of and. The album was also included in the book.Next
Lupe fiasco food and liquor zip Lupe fiasco food and liquor zip Lupe fiasco food and liquor zip Lupe Fiasco Food Liquor Full Album mp3. The album received four nominations, including at the. The cover shows Fiasco floating in air, surrounded by several items, including a , , a , the and a. Thanks in part to the vocal support of Jay-Z, L. Don't be fooled by the head banging beat and the snappy hook, there is serious stuff happening on this record.
. Lupe Fiasco is one of the best young artists today. Liquor lipe not a necessity; it is a want. I was getting questions about the leaked version of food and liquor so I decided to make this post. Lupe fiasco food and liquor zip chuyen cho thue cac telesales. Despite his track record, Fiasco met a number of obstacles on the way to the release of his third album, Lasers. At , which assigns a normalized rating out of 100 to reviews from mainstream critics, the album received an score of 83, based on 20 reviews.Next
Lupe Fiasco's Food Liquor Lupe Fiasco. As an file sharing search engine DownloadJoy finds lupe food and liqour files matching your search criteria among the files that has been seen recently in uploading sites by our search spider. Fiasco covers a wide variety of subjects on the album. The lyrics follow the skateboarder through many stages of his life such as his childhood, finding love, marriage, and adulthood. The 'Food' is the good part and the 'Liquor' is the bad part. Songs on the record discuss , , , , and individuality.Next
The title of the album, somewhat of a surprise for many coming from a Muslim refers to the various Food and Liquor stores in neighborhoods. And Version,' 'Lupe Fiasco's Liquir Cool Deluxe Open iTunes to buy and download Rap Album, Pt. And don't ask me for a download. In true food and liqour thematic fashion, after all the ills and personification of his characters Lupe describes some righteousness in it all. The group released one single before splitting up. ProdBy: The Net 1 Source For Hip Hop Productions and Discographies. He has released three successful studio alb Wasalu Muhammad Jaco , better known by his stage name Lupe Fiasco , is an American rapper, record producer, and entrepreneur.Next
|
OPCFW_CODE
|
I have a pretty awesome backlog of blog posts from Udacity Self-Driving Car students, partly because they’re doing awesome things and partly because I fell behind on reviewing them for a bit.
Here are five that look pretty neat.
Visualizing lidar data
Alex visualizes lidar data from the canonical KITTI dataset with just a few simple Python commands. This is a great blog post if you’re looking to get started with point cloud files.
“A lidar operates by streaming a laser beam at high frequencies, generating a 3D point cloud as an output in realtime. We are going to use a couple of dependencies to work with the point cloud presented in the KITTI dataset: apart from the familiar toolset of
matplotlibwe will use
pykitti. In order to make tracklets parsing math easier we will use a couple of methods originally implemented by Christian Herdtweck that I have updated for Python 3, you can find them in
source/parseTrackletXML.pyin the project repo.”
TensorFlow with GPU on your Mac
The most popular laptop among Silicon Valley software developers is the Macbook Pro. The current version of the Macbook Pro, however, does not include an NVIDIA GPU, which restricts its ability to use CUDA and cuDNN, NVIDIA’s tools for accelerating deep learning. However, older Macbook Pro machines do have NVIDIA GPUs. Darien’s tutorial shows you how to take advantage of this, if you do have an older Macbook Pro.
“Nevertheless, I could see great improvements on performance by using GPUs in my experiments. It worth trying to have it done locally if you have the hardware already. This article will describe the process of setting up CUDA and TensorFlow with GPU support on a Conda environment. It doesn’t mean this is the only way to do it, but I just want to let it rest somewhere I could find it if I needed in the future, and also share it to help anybody else with the same objective. And the journey begins!”
(Part 1) Generating Anchor boxes for Yolo-like network for vehicle detection using KITTI dataset.
Vivek is constantly posting super-cool things he’s done with deep neural networks. In this post, he applies YOLOv2 to the KITTI dataset. He does a really nice job going through the process of how he prepares the data and selects his parameters, too.
“In this post, I covered the concept of generating candidate anchor boxes from bounding box data, and then assigning them to the ground truth boxes. The anchor boxes or templates are computed using K-means clustering with intersection over union (IOU) as the distance measure. The anchors thus computed do not ignore smaller boxes, and ensure that the resulting anchors ensure high IOU between ground truth boxes. In generating the target for training, these anchor boxes are assigned or are responsible for predicting one ground truth bounding box. The anchor box that gives highest IOU with the ground truth data when located at its center is responsible for predicting that ground truth label. The location of the anchor box is the center of the grid cell within which the ground truth box falls.”
Building a Bayesian deep learning classifier
This post is kind of a tour de force in investigating the links between probability, deep learning, and epistemology. Kyle is basically replicating and summarizing the work of Cambridge researchers who are trying to merge Bayesian probability with deep learning learning. It’s long, and it will take a few passes through to grasp everything here, but I am interested in Kyle’s assertion that this is a path to merge deep learning and Kalman filters.
“Self driving cars use a powerful technique called Kalman filters to track objects. Kalman filters combine a series of measurement data containing statistical noise and produce estimates that tend to be more accurate than any single measurement. Traditional deep learning models are not able to contribute to Kalman filters because they only predict an outcome and do not include an uncertainty term. In theory, Bayesian deep learning models could contribute to Kalman filter tracking.”
Build your own self driving (toy) car
Bogdon started off with the now-standard Donkey Car instructions, and actually got ROS running!
“I decided to go for Robotic Operating System (ROS) for the setup as middle-ware between Deep learning based auto-pilot and hardware. It was a steep learning curve, but it totally paid off in the end in terms of size of the complete code base for the project.”
|
OPCFW_CODE
|
M: Publicly committing to a personal goal considered harmful - beza1e1
http://www.spring.org.uk/2011/10/why-you-should-keep-your-goals-secret.php
R: yason
Rule from personal software projects: If you tell other people of your first
release, you won't have a first release, ever. The only thing that works is
"hey, i wrote this and it works, have a try!" -- you'll be hatching your
creation until that and you might actually get to release it.
Note that the first release need not be big and complete, just something that
works. But the game changes after that, so the above rule loses its context.
Running a public project is a different scenario from the initial phase of
development.
R: NameNickHN
This is exactly how I do things in general. It's great for when you can't
deliver. Nobody expects things that haven't been promised.
R: breadbox
My experience is that it depends a great deal on the types of the goals and
how they're presented. The article mentions vaguely-defined, general self-
improvement goals, like taking up a new hobby. I can see a public commitment
being counterproductive there. But I'd bet that it works differently for more
quantifiable goals, especially ones with a specific starting time, such as
completing Nanowrimo, or telling your boss that you're going find a fix for
bug XXX before Monday.
Quitting smoking? A hell of a lot easier to change your mind about if you
haven't told anyone.
Of course, we're just going on the article. I don't have access to the
original paper, but just reading the abstract already suggests (big surprise
here) that the article is probably not a faithful summary of the study in
question.
R: elgenie
The original paper is posted on the the author's site:
[http://www.psych.nyu.edu/gollwitzer/09_Gollwitzer_Sheeran_Se...](http://www.psych.nyu.edu/gollwitzer/09_Gollwitzer_Sheeran_Seifert_Michalski_When_Intentions_.pdf)
R: t_hozumi
I got to know this law by Derek Sivers's TED talk[1] and I totally agreed with
him especially about personal programming project. A downside of this approach
is that you cannot get a feedback until you unveil first version.
[1] Derek Sivers: Keep your goals to yourself
[http://www.ted.com/talks/lang/en/derek_sivers_keep_your_goal...](http://www.ted.com/talks/lang/en/derek_sivers_keep_your_goals_to_yourself.html)
R: axefrog
I find it more likely that the harm done to your goal is not caused by public
announcement of that goal, but rather announcing it is evidence that you
aren't as committed to the goal as you are to other goals that you're better
at sticking to. In other words, if you're having trouble with achieving
something in particular, or working towards it, then you are likely to
acknowledge that to yourself in the form of "needing to set a goal" and thus
"announcing it to the world", thus automatically ensuring that announced goals
are less likely to be kept by the very nature of the fact that they needed to
be announced in the first place. If you want something badly enough, you are
more likely to "just do it", thus foregoing the need to announce it to others.
R: jessriedel
Had the idea of publicly committing been tested before? I'd be pretty
surprised if this was the first controlled study about this idea.
R: xanados
I found beeminder (<https://www.beeminder.com/>) to be a surprisingly
effective commitment utility. The goal has to be measurable, and you have to
trust yourself somewhat enough not to subvert the rules, but since there is a
counter-party most people's sense of personal ethics should be sufficient.
R: ypcx
<http://four.livejournal.com/963421.html>
I'd say, only talk about your future plans as much as you are comfortable to.
Sometimes just talking vaguely about your things with other people helps to
keep you interested and motivated.
R: youlost_thegame
Well, I'm sorry to say that I believe exactly the opposite. Public committing
means a high level of responsibility and scrutiny, and it is the best
motivator for any task.
In our weelky meetings we write down each task that needs to be done and the
name of the person in charge, then send it by email. Even if there is no
follow-up the next week, we observed that stopping to send the emails leads to
less work done (or, at least, not the work that is expected to be done by the
managers)
R: vidarh
Believe what you want, but your belief is directly contradicted by evidence.
Also note that your example is of something very different, as it is not
personal goals of the person being put in charge of the tasks, but tasks put
in place to carry out a duty to someone else.
With personal goals, publicly committing to them will rarely lead to a strong
negative reaction if you fail.
R: Travis
I'm not so sure it's so clear (in research) that some form of public
commitment reduces the likelihood of completion.
In fact, Robert Cialdini found that commitment was an excellent form of
motivation. "If people commit, orally or in writing, to an idea or goal, they
are more likely to honor that commitment because of establishing that idea or
goal as being congruent with their self image. Even if the original incentive
or motivation is removed after they have already agreed, they will continue to
honor the agreement. For example, in car sales, suddenly raising the price at
the last moment works because the buyer has already decided to buy."[1]
Cialdini performed his original research several decades ago, but it has been
continued by the Freakonomics/Kahneman/"Nudge" crew. To the best of my
recollection, all of their experiments showed a strong positive effect when
public commitment was added. It's the principle behind StickK.com, as well.
Perhaps there are certain types of improvements (vague self improvement was
mentioned elsewhere in this thread) that are harmed by external reinforcement,
but I think that's more the exception rather than the rule.
[1] <http://en.wikipedia.org/wiki/Robert_Cialdini> (also check out his
"psychology of influence" book for more on the topic)
|
HACKER_NEWS
|
- This topic is empty.
November 1, 2013 at 8:49 pm #22685
so there are options.
Latest and greatest I understand. but I do like to hack a bit myself… 🙂
OpenSprinkler DIY v1.42u
Other than the DIY/Solder myself part and the SDcard im guessing I locked into the 1.x code line with the 1.42 DIY board.
OpenSprinkler Pi (OSPi)
will this option run the code like the 2.0 or the 1.4? I like being able to run full linux and use other GPIO ports for other things… but will I get the latest OS code here?
and Arduino DIY.
if I just want to hack/slash thru it?? 🙂
Thoughts? Pros/cons?November 4, 2013 at 3:14 am #25744
If you are good at soldering and want to build one completely from scratch, go with 1.42u. Yes, you will be locked into 1.8.3 firmware, and there is no way to add SD card support due to the flash and RAM constraints. I wish Atmel could release an ATmega648 chip at some point, which would have solved this problem 🙂
If you are familiar with Python and want to tinker with the software, go with OSPi. It currently runs firmware 1.8.3 (Dan’s interval_program). There are also Rich’s sprinklers_pi program, and the Google Calendar based scheduling program.
If you want something that works out of the box without further assembly, go with OpenSprinkler 2.0s. It’s more expensive, because it’s more oriented towards the consumer market. But it’s still open-source, and give you plenty of space for hardware / software hacking.
If you want something in between fully assembled 2.0s and the DIY 1.42u, there is the DIY 2.1u that will become available in a few weeks. It’s not all through-hole though — about 60% assembled and the rest for you to solder.November 14, 2013 at 8:17 pm #25745
does the current DIY kit allow logging of watering times with the remote web servers? or do we need the 2.0 version to get the graphs?
ThanksNovember 15, 2013 at 4:55 am #25746
The current DIY kit does not have built-in logging feature. It’s possible to use an external server, like a Raspberry Pi to log it though. For example, Samer’s OpenSprinkler mobile app provides a logging feature:
and there was an earlier work by Dave Gustavson that also enables logging:
http://rayshobby.net/?p=4821November 18, 2013 at 5:07 am #25747
cool so the DIY kit running 1.8.x can be logged with the external web app ?
- You must be logged in to reply to this topic.
|
OPCFW_CODE
|
Hello Geeky, so today we are focusing on How to Create mongodb & web-based interface container on Docker. So please read this tutorial carefully so you may comprehend it in a better helpful way.
Guide: How to Create mongodb & web-based interface container on Docker
MongoDB doesn’t need an introduction, the one who is system administrating and developing would already know about it. It is a NoSQL database available to install on popular operating systems to provide a database without a fixed structure, hence easily scalable. Here in this article, we will learn the steps to easily install or create a MongoDB Database server container on the Docker Engine platform.
What do we need to perform in this tutorial?
- A system with Docker
- Access to its command line
- Internet connection
Steps to install MongoDB & Mongo Express on Docker
Make sure you have Docker
The first thing I am assuming is that you already have Docker installed on your system.
Download or Pull MongoDB Docker Image
The best thing that made Docker popular is its repository that holds hundreds of pre-built Images from official and non-official sources. We can use them to create containers instantly, yes, of course, MongoDB is also there. Hence, just on your machine run the Docker Pull command:
docker pull mongo
The above command will pull the latest version image on MongoDB Server on your Docker, however, for any other versions, you have to use the tag. For Example, docker pull mongo:4.4.9. For all available tags, you can see its Github page.
To check the downloaded image on the system, you can use:
Create MongoDB Database Server Container
We have already the image, now use it to create as many Database running containers as you want. However, before that let’s create a folder on our system to hold the databases data created on the MongoDB running container. As we know once we start using the Database server for commercial usage its data become valuable, but in case we need to delete the container in the future that process will also remove all of its data. Hence, to play safe let’s create a folder-
sudo mkdir /var/dbdata
Now, create Mongo Container:
docker run -it -d -v /var/dbdata:/data/db -p 27017:27017 –name mongodb mongo
Explanation of the above command:
/var/dbdata:/data/db – Assigning our created folder to use for storing MongoDB data.
-p 27017:27017 : Opening Database server container port to access from the host system.
–name MongoDB : Giving some name to our container
mongo : It is the name of the downloaded Image file
docker start mongodb
Access MongoDB Database Docker Terminal (Bash Shell)
Now, if you want to access the command line of the Mongo server to create databases and manage them, here is the command:
sudo docker exec -it mongodb bash
Access mongodb docker container command line
Create Mongo Express Web Interface container (optional)
Well, those who want a web-based graphical user interface to manage their Docker MongoDB server databases, can go for the further installation of Mongo Express by creating a container.
docker run –link mongo_db_name_container:mongo -p 8081:8081 -e ME_CONFIG_MONGODB_URL=”mongodb://mongo:27017″ mongo-express
Link your Docker running MongoDB server with Express, by replacing the above “mongo_db_name_container” in the above command.
For example, here we have created a database container with the name- mongodb hence the above command will be:
docker run –link mongodb:mongo -p 8081:8081 -e ME_CONFIG_MONGODB_URL=”mongodb://mongo:27017″ mongo-express
To access the web interface, open your browser and type: http://localhost:8081
Guide about How to Create mongodb & web-based interface container on Docker
In this guide, we told you about the How to Create mongodb & web-based interface container on Docker; please read all steps above so that you understand How to Create mongodb & web-based interface container on Docker in case if you need any assistance from us, then contact us.
How this tutorial or guide assisting you?
So in this guide, we discuss the How to Create mongodb & web-based interface container on Docker, which undoubtedly benefits you.
I hope you like the guide How to Create mongodb & web-based interface container on Docker. In case if you have any queries regards this article/tutorial you may ask us. Also, please share your love by sharing this article with your friends and family.
|
OPCFW_CODE
|
BASENAME / GNU COREUTILS PRO
by: KShark Apps (Root Essentials) • 0
*** Requires Root
*** Requires Busybox
*** Requires an ARM powered device (99% of all Android Devices), does not work on x86 and MIPS devices.
Installs basename from GNU coreutils to your android device
This is the pro version of GNU Corutils basename
The pro version has the same features as that of the free version. The benefit of pro is priority e-mail support and an online help as seen here.
GNU coreutils have more features than applets found with busybox or android default tools. So as to avoid potential conflicts, "basename" can be accessed via "cu.basename" in terminal.
Please don't leave negative remarks without reason. This app caters to small niche of people who like to have their GNU/Linux/Unix tools on the go. They would surely appreciate that.
If you have not used a GNU/Linux Command line or CLI App before, then this app is not for you and you should uninstall this. In that case just e-mail me and I would be glad to refund you.
Thanks for supporting me.
Introduction to Coreutils
The GNU Core Utilities are the basic file, shell and text manipulation utilities of the GNU operating system.
These are the core utilities which are expected to exist on every operating system.
basename - strip directory and suffix from filenames
basename NAME [SUFFIX]
Print NAME with any leading directory components removed. If specified, also remove a trailing SUFFIX.
--help display this help and exit
output version information and exit
basename include/stdio.h .h
Written by David MacKenzie.
Read More at http://www.gnu.org/software/coreutils/manual/
Copyright © 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.
I distribute Free Software (Free as in Freedom and not Free Beer) for android at no cost and at times for a fee. I make it easy for you to install your favourite free software tools by cross compiling and packaging it for android so that you don't have to take all the trouble. However here are the links to the sources if you like to hack and poke around.
Stable source releases can be found at http://ftp.gnu.org/gnu/coreutils/
Test source releases can be found at http://alpha.gnu.org/gnu/coreutils/
Assuming you have git installed, you can retrieve the latest version with this command: git clone git://git.sv.gnu.org/coreutils
To build from the latest sources please follow the instructions in README-hacking
Tags: $basename build
|
OPCFW_CODE
|
- Web Page
Iisque persius ne sit, simul zril vix eu. Qui ne iusto epicuri suscipiantur, sit ne probo adhuc. Liber verterem interpretaris nam et, ea pro solum expetendis.
Duo luptatum delicata evertitur ad. Usu te quaerendum definitiones, ne mundi volutpat duo, in dissentias temporibus pri. Duo ferri dicant definitionem te.
Amet dolor oratio ex has, stet repudiare definiebas vim ne. Id probo facilisis usu, pri aliquam omnesque cu. Vide assentior id qui, quando possim eos.
- Object-oriented programming (OOP)
- Variables, Constants, Operators and Control Structures
- Functions and Arrays
- Database Connectivity and MySQL
- Deployment to Web Servers
- Regular Expressions
- Models, Views, and Controllers
- MVC Architecture
- Database Configuration
- Form Validation
- Session Management
- URI and Query String Access
- Error Handling
- Text Editors
- Basic HTML Tags
- HTML Forms
- Web Page Layout
- Responsive Design
- CSS Frameworks
- Cascading Style Sheets
- Box Model
- Selectors and Properties
- Fonts and Colors
- Responsive Design
- CSS Frameworks
- jQuery Syntax
- Selectors and Filters
- Events and Effects
- DOM Manipulation
- AJAX and JSON
Adobe Photoshop is the industry standard for digital image manipulation and design. It is a powerful raster graphics editor used to create and edit photos, illustrations, and 3D artwork. It provides a wide range of tools and features to manipulate, enhance, and retouch photos, as well as create and compose digital images.
Python is a general-purpose programming language that is used for web development, data science, scripting, and many other applications. It is an interpreted language, meaning that it doesn't need to be compiled before running. Python is often used as a scripting language, with its simple syntax and high readability it can be used to quickly write and execute code.
Django is a web framework written in Python and designed to help developers create web applications quickly and efficiently. It is an open-source framework, meaning it is free to use and modify. Django follows the Model-View-Template (MVT) architecture and uses the Python language for creating web apps. It is known for its speed, scalability, and security. Django also comes with a library of tools and packages to help developers quickly develop web apps.
MySQL is an open-source relational database management system (RDBMS). It is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open-source web application software stack (and other "AMP" stacks).
PostgreSQL is an object-relational database management system (ORDBMS) with an emphasis on extensibility and standards compliance. It can handle workloads ranging from small single-machine applications to large Internet-facing applications with many concurrent users.
MongoDB is a cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. MongoDB is developed by MongoDB Inc. and is free and open-source, published under a combination of the GNU Affero General Public License and the Apache License.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provision.
Node Express is a web application framework for Node.js, released as free and open-source software under the MIT License. It is designed for building web applications and APIs. It has been called the de facto standard server framework for Node.js.
Swift is a powerful and intuitive programming language for macOS, iOS, watchOS, tvOS, and beyond. Swift is designed to work with Apple’s Cocoa and Cocoa Touch frameworks and the large body of existing Objective-C code written for Apple products. It’s built with the open source LLVM compiler framework and has been included in Xcode since version 6, released in 2014.
iOS is a mobile operating system created and developed by Apple Inc. exclusively for its hardware. It is the operating system that powers many of Apple’s mobile devices, including the iPhone, iPad, and iPod Touch. It is the second most popular mobile operating system globally after Android. It is the basis for Apple’s Siri voice-controlled personal assistant, as well as the App Store, Apple Music, Apple Pay, and other services.
Android is a mobile operating system developed by Google, based on a modified version of the Linux kernel and other open source software and designed primarily for touchscreen mobile devices such as smartphones and tablets. Android is the world's most popular mobile platform and powers devices from a variety of manufacturers.
Kotlin is a statically typed, general-purpose programming language developed by JetBrains. It is a cross-platform language that can be used to build applications for Android, iOS, Desktop, Web, and other platforms. Kotlin adds new features to the Java programming language, such as higher-order functions, lambda expressions, data classes, and more. It is designed to be a safe, concise, and intuitive language that enables developers to write code quickly and efficiently.
Flutter is an open-source mobile application software development kit created by Google. It is used to develop applications for Android, iOS, Linux, Mac, Windows, Google Fuchsia, and the web from a single codebase. It is used to develop high-performance, high-fidelity mobile apps for iOS and Android from a single codebase. Flutter uses the Dart programming language, which is also owned by Google, and is the primary method of creating Flutter apps. With Flutter, developers can use the same code for both Android and iOS apps, cutting down on development time and cost. Flutter also provides a rich set of widgets and tools that help developers quickly and easily create beautiful and highly functional mobile apps.
Laravel is an open-source, free, web application framework based on the MVC (Model-View-Controller) architectural pattern. It is a powerful tool that is used to develop robust and secure web applications. It is a popular PHP framework that has become the first choice of many developers for creating a range of web applications, including e-commerce applications, content management systems, and custom web applications. Laravel provides an expressive and elegant syntax that makes writing web applications simpler and faster. It also includes multiple features such as built-in authentication, routing, caching, and events that make it easier for developers to create and manage web applications.
A responsive website adjusts its layout and content to fit different screen sizes and devices, providing an optimal viewing experience on desktops, laptops, tablets, and smartphones.
Websites may incorporate interactive elements such as sliders, carousels, accordions, collapsible panels, and tooltips to enhance user engagement and provide dynamic content.
Websites often include multimedia elements like images, videos, audio clips, and animations to make the content more engaging and visually appealing.
Websites can integrate social media buttons, sharing options, or live feeds from platforms like Facebook, Twitter, Instagram, or LinkedIn to encourage social sharing and interaction.
Websites may have live chat widgets or chatbots to provide instant support and answer user queries in real-time.
Websites may have search bars or advanced search options to help users quickly find specific information or products within the site.
Co-Founder SAMRIT TECH SERVICES PRIVATE LIMITED
Rautela Tech has been a great asset to my business. Their customer service is top-notch, and their technical support is superb. The staff is knowledgeable, friendly, and always willing to help. I highly recommend them to anyone looking for an IT solution!
Vice President – Shivalik Rasayan Ltd
Rautela Tech is incredibly reliable and efficient. They are always available when I need them and their expertise is invaluable. I'm very happy with the service I have received and I would highly recommend them to anyone looking for IT solutions.
Medicamen Biotech Ltd
Rautela Tech is a great IT solution provider. I have been working with them for a few years now, and I've always been impressed with their customer service and technical knowledge. They are always available to help and they provide solutions fast. Highly recommend!
|
OPCFW_CODE
|
Django is a high-level Python web development framework that enables you to build complex web applications quickly and easily. It follows the Model View Controller (MVC) architectural pattern and is known for its many advantages.
Django is used by some of the biggest websites in the world, including Instagram, Pinterest, The Guardian, and National Geographic.
In this article, you will learn why Django is a good choice for web developers, and explore some of its key features.
What is Django?
Django is a Python web framework widely believed to be the best option on the market right now. Every high-end Django development company will tell you that it is an amazing choice for web development projects of all sizes. They build complex websites quickly and easily, while also providing a high level of security and stability.
Django has been around for almost a decade and has consistently been updated with the latest Python features. It’s no wonder that it's is the most popular web framework in use today.
It's also a huge boost for digital marketing teams. Django provides an amazing platform for creating marketing websites and applications. Digital marketing teams need quick development turnaround times, stability, security, and the ability to easily scale their projects as they grow.
This framework can provide all of that while still being a relatively lightweight framework. It's no wonder digital marketing teams are turning to Django more and more often.
Django simplifies development
This framework is known for simplifying the development process. It does this by providing a number of libraries and tools which you can use to build your project. This makes it an ideal choice for web developers who want to get their projects up and running as quickly as possible.
Additionally, Django also comes with a well-documented set of tutorials that will help you get started with the framework very quickly. So if you are looking for a quick and easy way to develop your next web project, then Django is definitely worth considering.
Here's how Django makes development simpler than other similar frameworks:
So, if you are looking for a quick and easy way to develop your next web project, then Django is definitely worth considering.
The simplicity is the main reason why a lot of people decide to work with Django. It is a powerful web development framework that doesn’t require a lot of time to learn and use. The fact that it comes with excellent documentation is an added bonus, which means you can start working on your project right away without any problems.
The framework is very fast
You always want to finish the job as quickly as possible. Django helps you do that with its lightning-fast framework. You can create a website or an app and have it up and running in no time at all. Plus, the code is clean and well organized so you won’t get lost while developing. This makes Django an excellent choice for web development projects of any size.When you're under pressure to get a project done, the last thing you need is for your development process to be slowed down by a cumbersome framework. Django was created with speed in mind, so you can rest assured that it will help you meet your deadlines. In fact, many major companies use Django for their high-traffic websites.
Django is very secure
In the online world, security is of utmost importance. Django is a very secure framework, which makes it an ideal choice for web development. It has been designed with security in mind, and it incorporates many features that help to keep your website safe.
For example, Django includes built-in authentication and authorization mechanisms, as well as support for SSL encryption. This means that you can rest assured that your website will be safe from hackers and other online threats.
Django also helps to ensure the privacy of your users. For example, by default, all user data is encrypted using AES-256-CBC encryption. This means that even if someone were to gain access to your database, they would not be able to see any user data without the proper credentials.
Overall, Django is a very secure framework that can help to keep your website safe from online threats. If you are looking for a reliable and secure web development platform, this framework is definitely worth considering.
It is well-established
For years now, Django web development is one of the most popular.. It is well-established, versatile, and efficient, making it a great choice for any project. Whether you’re starting from scratch or looking for an upgrade, Django has something to offer.
Django lets you build high-quality websites quickly and easily. It’s perfect for both small projects and large ones alike. Plus, Django is constantly being updated with new features and improvements, so you can be sure your website will always be up-to-date.
Django suits any web application project
Like it was already mentioned, Django suits any web application project due to its vast range of features and possibilities. No matter the size or complexity of it, this framework has you covered. Furthermore, Django is constantly being improved and updated with new features, so you can always rely on it to stay up-to-date.Whatever you need to be done with your website, from creating the initial structure to adding dynamic content and handling user input, it can do it for you. Django is a great choice for building both simple and complex websites, so if you’re looking for a versatile framework that will help you get your project off the ground quickly, it is definitely worth considering.
The framework implements DRY and KISS
The "Don't repeat yourself" (DRY) principle is a software engineering mantra that encourages developers to write code once and only once. It's important because it helps reduce the amount of code duplication and makes the codebase more maintainable.
Django follows this principle by providing a number of features that make it easy to avoid duplicating code. For example, templates can be reused across multiple projects, and there's a wide variety of reusable Django apps available on the internet.
The "Keep it simple, stupid" (KISS) principle is another software engineering mantra that encourages developers to keep things simple. It's important because complex solutions are often difficult to understand and maintain.
Django follows this principle by providing a minimalist framework that doesn't include unnecessary features. This makes it easier for beginners to learn and use, while also allowing experienced developers to customize it according to their needs.
In conclusion, Django is a good choice for web development because it follows the DRY and KISS principles. These principles help make the codebase more maintainable and easier to understand, which is important for any project.
Additionally, Django provides a number of features that make it easy to avoid duplicating code, which can save time and reduce the complexity of the project. If you're looking for a simple and efficient framework, Django is a good option to consider.
It has great community support
Because it is so widely used around the world, Django has fantastic community support. This means that if you get stuck on a problem, there are likely to be dozens of people who have had the same issue and can help you out. There is also a huge amount of online documentation and resources available, so you’re never far from help when you need it.
Never be afraid to ask for help on Django forums. You’ll get a quick answer from someone who is more than happy to share their expertise. This makes Django an excellent choice for beginners and experienced developers alike.
Django also has a very active development community, so new features and updates are released regularly. If you want to stay up-to-date with the latest technologies and trends, Django is definitely the framework for you.
Be sure to visit forums where people will gladly answer your questions.
It contains support for REST APIs
APIs are the lifeblood of the modern web, providing an interface that allows different parts of a system to communicate with each other. Django has always had strong support for creating RESTful APIs, and this makes it an excellent choice for web development.
With its tight integration with popular front-end frameworks like React and Angular, as well as its built-in support for authentication and security, Django is ideal for developing complex web applications.
Thanks to its wide range of features and versatility, Django is one of the most popular choices for web developers today. If you're looking for a powerful and reliable framework to build your next web project on, Django is definitely worth considering.
Who can I hire to do Django work for me?
If you're looking to hire someone for your Django project, you have several options. You can employ a freelance Django developer from platforms like Upwork or Freelancer for more flexible, project-based work.
If you prefer a more structured approach, you can engage with a software development agency experienced in Django, such as Webisoft. They offer end-to-end solutions and have a team of experts that can handle different aspects of your project.
Another option is to reach out to your local developer community or job boards to find talent willing to work on-site or remotely. Each of these options offers its own set of advantages, allowing you to choose the one that best fits your needs.
Final words: Why do people love using Django?
Django is right now the greatest framework for web design that will help you with every aspect of your existence in the digital world. It makes development very simple and does it very fast so that you don't lose any time. It's a very secure platform that ensures all your data stays private and safe.
Additionally, it's well established with a large community which makes it suitable for any type of project and has huge community support. It implements DRY and KISS as well as REST APIs so that all your projects go over smooth and flawless. Try it out as soon as you can, and enjoy its benefits!
|
OPCFW_CODE
|
The decline of credit cards
At the BC ISMS User Group meeting last week we were concentrating on the relationship between the ISO 27000 family of standards, and the PCI-DSS (Payment Card Industry Data Security Standards, usually just known as PCI). PCI-DSS is of growing concern for pretty much anyone who does online retail commerce (and, come to that, anyone who does any kind of commerce that involves any use of a credit card).
It kind of crystalized some ideas that I’ve been mulling over recently.
Over the past year or so, I’ve been examining some situations for small charitable organizations, as well as some small businesses. Many would like to sell subscriptions, raffle tickets, accept donations, or sell small, specialty items over the net. However, I’ve had to consistently advise them that they do not want to get involved with PCI: it’s way too much work for a small company. At the same time, most small Web hosting providers don’t want to get involved in that, either.
The unintended end result consequence of PCI is that small entities simply cannot afford to be involved with credit cards anymore. (It’s kind of too bad that, a decade ago, MasterCard and Visa got within about a month of releasing SET [Secure Electronic Transactions] and then quit. It probably would have been perfect for this situation.)
Somewhat ironically, PCI means a big boost in business for PayPal. It’s fairly easy to get a PayPal account, and then PayPal can accept credit cards (and handle the PCI compliance), and then the small retailer can get paid through a PayPal account. So far PayPal has not created anything like PCI for its users (which is, again, rather ironic given the much wilder environment in which it operates, and the enormous effort phishing spammers make in trying to access PayPal accounts.) (The PayPal Website is long on assurances in terms of how PayPal secures information, and very short on details.)
This is not to say that credit cards are dead. After all, most PayPal purchases will actually be made with credit cards: it’s just that PayPal will handle the actual credit card transaction. Even radical new technologies for mobile payments tend to be nothing more that credit card chips embedded in something else.
These musings, though, did give a bit more urgency to an article on F-commerce: the fact that a lot of commercial and retail activity is starting to happen on Facebook. Online retail transactions aren’t new. They aren’t even new in terms of social networks or a type of currency created within an online system. Online game systems have been dealing with the issue for some time, and blackhats have been stealing such credits and even using them to launder money for a number of years now. However, the sheer size of Facebook (third largest “national population” in the world), and the fact that that entire population is (by selection) quite affluent means that the new Facebook credit currency may very quickly balloon to an enormous size in relation to other currencies. (We will leave aside, for the moment, the fact that I personally consider Facebook to be tremendously divisive to the Internet as a whole. And that Facebook does not have the best record in terms of security and privacy.) Creation of wealth, ex nihilo, on a very, very large scale. What are the implications of that?
|
OPCFW_CODE
|
[squeak-dev] Re: immutibility
cputney at wiresong.ca
Thu Apr 1 03:06:40 UTC 2010
On 2010-03-31, at 8:24 AM, Bert Freudenberg wrote:
> On 31.03.2010, at 17:23, Andreas Raab wrote:
>> On 3/31/2010 8:13 AM, Bert Freudenberg wrote:
>>> Show me a single place that would break in a real-world use-case.
>>> In fact, #become: is rarely used in Squeak because of its inherent slowness caused by our direct-pointer object model. And those rare places I can think of would work just fine.
>> I think there is one, and only one, place where #become: is intrinsically required: Changing class shape.
> Thought about that. Should be fine.
So you're saying you think it's OK that immutable objects don't get migrated to the new version of the class? I suppose that conforms to a strict interpretation of the of the term "immutable" - neither state nor behaviour may be change.
At some point though, such a strict interpretation isn't very useful. An object that can't change state, can't change behaviour, can't refer to mutable objects, and can't become mutable again is certainly worth of the term, but it's also useless from a practical point of view. Has anyone proposed a use case for this sort of immutability? If not, why insist on defining the term so tightly?
As far as I'm aware, the use cases for immutability, loosely defined, are as follows:
(a) Literals. If the compiler makes objects immutable before putting them in the literal frame of a method, we can be sure that the state of the object when the method is activated is the same as what appears in the source code. This is certainly nice. Mutability of literals hasn't been a problem so far, though, so it's not crucial.
(b) Write Barrier. Immutability as implemented in, for example VW, allows object mutation to be efficiently tracked, which is handy for persisting objects outside the image, or synchronizing their state between images.
(c) Concurrency. The tricky thing about concurrency is shared mutable state. If it's possible to make objects immutable, it's then possible to share them safely between units of concurrency - Processes, Islands, images etc. When sending state between units of concurrency, immutable objects can often be transmitted more efficiently.
(d) Garbage Collection. If there are enough immutable objects in the image, it might be possible to improve the efficiency of the garbage collector. Have an "immutable space" and cache all the pointers back into mutable space. Or something. Or the "Shared Perm Space" thing that VW used to have.
Any other ideas?
For me, (b) and (c) are the interesting cases. Seaside apps almost always need to persist data in some way, and (b) would make this a lot easier and more efficient. (c) is maybe less clear cut, but in a lot of ways it's more interesting. I like the general trend in Squeak toward event loop concurrency, rather than classical Processes & Semaphores. Immutability would enable more experimentation in this area.
Perhaps if we focused on these use cases, we'd have a more productive discussion. It may be that VM-level immutability isn't useful enough to be worth the effort to implement and support. But if it *is* worthwhile, it'll be because of what it let's us do, rather than how semantically correct our definition of immutability is.
More information about the Squeak-dev
|
OPCFW_CODE
|
I only figured out I'm autistic in 2022. I'm still learning. Writing these articles is how I learn things. They're all works in progress, to various extents. No-one can speak for an entire minority group. These are just my personal experiences, things I've found out from talking to my friends, and discussions I've seen on autistic forums. Please don't take me as authoritative. I'm not.
Scripting is a prepared response in a conversation.
It can be a useful workaround for those of us who can't easily encode pragmatics, such as implicatures and tone of voice, by simply borrowing a known-safe combination of words, inflections, and even gestures.
It can also help to avoid adding to all the chaos, by repeating a reassuringly familiar phrase. It may even serve as a stim, actively countering that chaos.
This may or may not be the same thing as delayed echolalia.
I pretty frequently quote films, TV shows, and occasionally audiobooks.
When talking to autists with a shared popular culture, expressing yourself via a quote can be a convenient and fun shorthand.
When talking to allists, I probably used to see it as a safer way of trying to get my point across. When forming my own sentences, I'll inevitably forget to intonate my voice. When quoting, in contrast, I'll simply quote verbatim (as much as my memory will allow), complete with the original inflections. I think I must have imagined this would be more likely to succeed, as pragmatics like tone of voice are generally pretty important to allists, perhaps moreso than the actual text itself.
However, it's probably worth bearing in mind that fiction is specifically designed to show us larger-than-life characters who are often antagonistic for the sake of drama, or mistaken for the sake of comedy. While audiences are entertained by what characters say in the context of fiction, this may not translate so well to a real conversation.
I've probably also been overlooking how context-specific pragmatics are, and how each conversation I have with someone in real life is likely a very different context to the original source of the phrase I'm quoting.
Memorising "correct" responses to allistic questions
Many of us often get into trouble for taking allists' questions at face value, and answering them accordingly.
A classic example is "Hi, how are you?" It turns out that when an allist asks this question as a sort of greeting, they do not sincerely want to know how you are, despite literally asking you that. There's a subtext at work that it's supposed to be interpreted as a sort of generic "Hello" instead.
They generally won't mind the response "Hi", even though it ignores the textual content of their question completely, but will mind you actually telling them how you are.
Often oblivious to this subtext, we eventually learn through trial and error, and observation, that the "correct" response to this question is something along the lines of "Fine thanks, yourself?" So we remember that this particular call has this particular response, and fight the urge to answer the actual question honestly and directly.
Planning and simulating upcoming conversations
It's pretty common for us to prepare for a phone call, meeting, appointment, or other upcoming conversation by planning out and simulating all the likely branches the conversation might take. That way, when the conversation actually happens, we already have a prepared response ready for many plausible nodes, although personally I never remember mine and inevitably end up ad libbing them anyway.
I'm not sure if this is related to rumination, monotropism, or most likely, warranted social anxiety. I do know that back when I naïvely tried to educate bigots online, I'd end up simulating such conversations in my head, an inner dialogue with various points and counterpoints, especially in the shower, without intending to.
I eventually, rather belatedly, realised that bigots don't want to be educated, and don't want you to answer their questions. They only ask them in order to imply an incorrect answer as subtext, not to learn the correct answer as text. I think my mental health vastly improved once I finally realised I didn't owe them my time, effort, or knowledge.
|
OPCFW_CODE
|
One of the greatest challenges in the field of software engineering is to design and implement reusable, adaptable and scalable software systems. Current learning technology systems do not provide more of these features, nor the necessary functionality to manage different kinds of learning experiences. In this paper, we present a proposal for a flexible infrastructure that enables the provision of technological support for the realization of heterogeneous learning experiences. We use the IMS Learning Design specification to provide a common underlying framework, and support the description of generic learning processes.
Resource management, Collaborative systems, Interoperability, Distributed Objects.
Currently, most learning systems and tools are designed to be used as standalone, focused to solve particular problems and situations. As consequence, the interoperability is poor between systems that follow different kinds of learning approaches. This is partly due to the weakness of their technological openness and to their lack of clear standardized interfaces such as promoted by some standardization bodies (see IEEE 1484 or IMS proposals).
The situation seems similar to the oposition existing between the Computer Supported Co-operative Work (CSCW) approach versus the Workflow Management approach for supporting the collective human activities inside the organizations. The reconciliation of these two approaches is actually a hot issue in the CSCW and Workflow research domains .
We mainly focus our attention on the general infrastructure providing a general framework for the design of future learning systems. The purpose of our work is to provide a general infrastructure that enables the utilization of Information and Communication Technologies (ICT) to support general learning activities and processes, independently of the pedagogical approach.
Our general assumption is that we need a common theoretical framework for designing learning systems so different pedagodical approaches could be integrated. Through consistent use of this model, the focus will be placed in learning itself, not in the mechanisms used to support or enable it. This framework will be used to describe, or design, the learning processes and activities that have to be carried out during learning sessions (for example: a course) in a formal way.
The second part of the picture constitutes our contribution. We provide the software infrastructure devoted to support and enable the learning experiences according to formal learning designs.
The common framework is provided by the IMS Learning Design meta-language. This model supports the description of any design of a teaching-learning process in a formal way. The core concept of the IMS Learning Design specification is that, regardless of pedagogical approach, a person gets a role in the teaching-learning process (typically a learner or a staff role), and works towards certain outcomes by performing more or less structured learning and/or support activities within an environment. The environment consists of the appropriate learning objects and services to be used during the performance of the activities.
The learning designs which can be described by this language might involve a single user or multiple users; the learning and instructional designers and providers might take a behaviorist, cognitivist, constructivist or some other approach; they might require learners to work separately or collaboratively, but these can all be captured in terms of a Method that governs the running of the Learning Design.
This language does not prescribe the delivery media. The same learning design could be performed in different ways. For example, a collaborative learning activity may be performed in a traditional classroom, or at distance through the web using appropriate communication or collaboration tools. In this way we separate the educational and technical features.
A infrastructure supporting learning designs has to address two main issues: (i) the execution of learning designs that feature their own dynamics, and (ii) the integration of selected heterogeneous resources and services to provide the prescribed environments.
We use a workflow management technology approach to support the execution of learning designs. A learning design is basically a workflow definition, that is, a set of activities to be executed in some order, involving multiple collaborating entities in a distributed environment to accomplish a given task. In this way, workflow systems manage many of the issues involved in our problem.
But current workflow technology does not provide the necessary functionality to manage our learning designs in a general way. In each course, activities to be attained by a learner will depend on the learning design, the resources availability to provide the environment, and the process state. But, a learner will be usually enrolled in several courses, and therefore dependencies among them could be considered and managed. Moreover, different learners may be enrolled in different courses, with their corresponding constrains and dependencies. The utilization of a centralized workflow engine would be impractical.
In our propose, c.f. figure 1, each course is controlled by a particular workflow management system. The dependencies between different courses will be managed by appropriate personal agents, that will offer a central vision of the different courses in which a learner is enrolled. The personal agent provides functionalities like : (i) calendars, enabling the learner to make appointments; (ii) pending activities that require urgent realization; and (iii) messages of the system or other users. Each course will propose the realization of a certain activity to the learner according to the state and the availability of resources and participants.
We try to offer a general solution supporting the management of learning designs following different kinds of approaches. The learning approaches supported by a learning system would depend finally on the resources and services available. In order to provide an open and flexible solution, the infrastructure enables the introduction of new resources and services. A resource is modeled as an object including attributes such as name, type, capabilities and status. The status consists of two attributes: state and load. The state tells whether the resource is available. The load gives hint of possible waiting time needed to perform the assigned activities.
In this way we have a typical resource management problem , where resources, but also learners, act as shared resources that are required for the realization of the different activities by the workflow engines that control the different courses. The consideration of learners as shared resources is the main difference between our problem and the general workflow problem of business processes.
Users interact with the system via a web-based user interface. The activities applicable for the specific roles are presented based on their structure or availability. When she/he chooses a certain activity the system will provide access to the required resources and services, configuring them accordingly to the specifications of the learning design. It will also provide the required contact with the other persons involved in the same activity if they are connected. From here the users will start their personalized learning path, performing the selected activities. The interaction is captured by the system and the personal dossier updated.
We are considering the introduction of our infrastructure in a mobile environment. In this way we create a new feature for a resource: location. The location indicates where the resource is located. In order to propose the realization of activities to an actor her/his location is obtained.
Our solution is based in two key technologies. On the one side, workflow management technology support the execution of single courses. On the other side, resource management technology facilitates the coordination among different courses. The combination of these two technologies supports the activities of users involved in different courses in a practical way. The use of a centralized workflow engine would be impractical from a scalability standpoint.
The utilization of the IMS Learning Design specification enables pedagogical diversity to be supported through the implementation of a single engine, rather than either having to implement multiple engines for each pedagogical approach. In this way, a system that can process IMS learning designs may be used to provide any kind of technological support.
We consider our infrastructure as an academic center in conventional learning. Teachers plan their lectures, they prescribe the number of theoretical and practical sessions, a calendar of examinations, material requirements, etc., and the center has to organize the timetables, prepare the classes, provide the appropriate lab equipment, etc. The academic center has to provide and manage the resources required in the teacher learning design. Note that with our solution we can manage both physical and software resources, therefore the infrastructure also supports mixed mode delivery (blended learning), enabling traditional approaches such as face-to-face teaching, the use of books and journals, lab work and field trips to be also specified as learning activities and combined with ICT supported learning.
|
OPCFW_CODE
|
You'll learn how to construct your own Minesweeper game with Arduino.
We all liked that feeling of nostalgia for the games and games we played in childhood and adolescence. Several consoles and computer games marked this era. One of the great games was the Minesweeper of the Windows Operating System, which is shown in Figure 1.
In this game, we aimed to select a location that did not have a bomb. Otherwise, we lost the game.
Therefore, it was thinking about the concept of this game, that we created this project with the objective of bringing back a game that is well known to all: the minefield.
Figure 1 - Minesweeper of the Windows Operating System
Our project consists of a simple game with excellent dynamics, with the option of being played by two people.
Your main objective is to choose an empty square where there is no bomb. If there is a bomb in place, the game is over. Otherwise, the game continues. Based on this, each location will be represented by a button connected to the Arduino.
Therefore, in this article, you will learn the following concepts:
1. Develop the minefield game for Arduino;
2. Learn to use the random and randomseed function.
So, next, we will start the development of the minefield game with Arduino for you to have fun with your friends.
Minesweeper Game Development with Arduino
Based on this operating principle, the following circuit in Figure 1 was developed.
Figure 2 - Electronic Schematic in the Breadboard.
As we can see, this circuit is composed of an Arduino UNO, which is responsible for processing the logic of the game, buttons that simulate the locations, and the LEDs and buzzer, to indicate victory and defeat in the game through light and audible signals.
From now on we will cover the circuit's operation and the logic implemented in the circuit.
Minesweeper with Arduino
The main objective of the game is to find an empty space where there is no bomb. Otherwise, if the user presses the button where the pump is, the system will generate an alarm signaling that the user has lost the game.
For this, we will use buttons to simulate each square. We will use programming logic to draw the digital pin number of one of the buttons. After the draw, the mine will be assigned to this respective button.
In this way, we will now present the code of the developed project.
The code is shown below.
int buzzer = 2;
numero = random(8,14);
estado = digitalRead(numero);
while(estado == 1)
while(estado == 0)
estado = 1;
As you can see, a variable was first declared for the digital pins connected to the buttons. In addition, we will create a variable to check the status of these buttons, that is, if they are in high or low logic state.
Finally, we declare a variable for the buzzer and assign a digital port to that variable.
int numero; // Variável referente aos pinos digitais conectados aos botões //
int estado; // Variável para verificar o estado dos botôes, se estão em nivel lógico alto ou baixo. //
int buzzer = 2; // Váriavel atribuida ao pino digital 7, referente ao buzzer.//
Next, we have the setup function. In this function, we configure the I / O pins for connection of the buttons as input and the LED's and Buzzer pins as outputs.
In addition, we use the randomSeed function. This function uses as a parameter the value read on a disconnected analog input to generate a seed value. Because it is known that a disconnected analog pin will generate random values and, thus, we have a really random effect on the value generated in the code.
In addition to the randomseed function, we use the random function. This function returns numbers from an internal Arduino pre-established list. It is a huge list of scrambled numbers and will always be the same sequence of numbers, in fact it is not a random number.
When we restart the Arduino, it starts this sequence again.
For this project, we raffled a number within the range of 8-14. These values were chosen because they are the values of the digital pins that are connected to the buttons on the Arduino.
numero = random(8,14);
Finally, we have the loop function. At its start, the green LED is activated to indicate that the game has started and that players can start the game. Then, the user must select a key, as shown in the circuit below.
Figure 3 - Starting the Minesweeper game.
When a selected switch does not have the pump in place, the green LED will remain on and the red LED will remain off. In addition, the buzzer will not be triggered. This can be seen in the Figure below.
Figure 4 - Starting the Minesweeper game.
If the user selects a location and has a pump, the red LED will go on, the green LED will go off and the Buzzer will go on. This can be seen in the figure below.
Figure 5 - Place with planted bomb.
Finally, we include a reset button, to restart the game the moment the pump is started. After pressing the button, the game restarts, the red LED is turned off and the green LED will be activated.
Then another random pin is drawn and your fun continues. This can be seen in Figure 5.
Figure 6 - Restarting the game.
Next, we will make the files available for you to mount this project on a NEXTPCB Printed Circuit Board. You can take advantage and purchase for free.
Printed Circuit Board NEXTPCB - Arduino Minesweeper
For this project, we decided to create a Shield for the Arduino UNO. On this board, there will be connected JST to connect the buttons, LEDs, and the buzzer.
In this way, we developed the electronic schematic design for the project. The schematic is shown in the following figure.
Figure 7 - Electronic Schematic of the Project.
The following schematic of the figure below was obtained from the electronic schematic. As you can see, we placed 10 JST connectors to connect the elements of the project.
Figure 8 - 2D PCB Design.
Figure 9 - 3D PCB Design.
|
OPCFW_CODE
|
An interactive guide to React rendering
#337 — May 3, 2023
Vercel Introduces First-Class Storage Options — Vercel is a popular platform for deploying React apps but has lacked obvious options for data storage (indeed, they have a lot of templates for common third party systems). Now, though, they’ve partnered with Upstash, Neon, and Cloudflare to offer new first-class key/value, Postgres, and file storage options.
The Interactive Guide to Rendering in React — This interactive guide explores why, when and how React renders and illustrates it with a series of short and well thought out animations.
Modular Content Management for Tech Teams with Kontent.ai — Streamline your code and scale with ease using Kontent.ai’s headless CMS. TypeScript SDK, CLI, rich text resolver & strongly typed model generator for flexible and scalable content management. Try it now.
Bulletproof React: A Scalable Architecture for Production-Ready Apps — A long standing resource that continues to get updates and deserves another look. It’s not a boilerplate app or framework itself but an opinionated guide to how you could structure a large scale React app if you’re lacking for inspiration.
Build a Type-Safe Tailwind with Vanilla Extract — A look at how a team built a type-safe alternative to Tailwind using Vanilla Extract, a way to write type-safe CSS where the final static CSS files are generated at build time.
Chris Schmitz (Highlight)
So Exactly What Are React Server Components? — There’s always room for another explanation of React Server Components, particularly one which makes few assumptions about prior knowledge and provides easy-to-follow examples.
Nick Telsan (Viget)
Server Components vs. SSR in Next.js comes at the question from a different direction.
Crafting the Next.js Website — The official Next.js site is impressive, but what went into it? One of the designers shares some of the implementation details which aren’t particularly React-y but may prove inspiring to you.
Creating ‘Bento’ Grid Layouts in React — Refers to a grid-like style of layout commonly associated with Apple product pages or Windows 8.
React Authentication, Simplified — In this article, we lay out a new approach to authentication (plus access control & SSO) in React applications.
Making Animated Tooltips with React and Framer Motion — Given there’s a fairly good chance tooltips could be the only ‘documentation’ that will actually get read, why not jazz them up a little?
Building a WebGL Carousel with React Three Fiber and GSAP — The end result is visually striking.
Connecting React, MUI and TypeScript Together
???? Code and Tools
Introducing React Native macOS 0.71 — With this version, the macOS flavor of React Native catches up with its iOS, Android and Windows cousins, and they want to keep it that way in future. v0.71 introduces an experimental preview of Fabric (React Native’s new rendering system) and more.
Mock Service Worker 1.2: REST/GraphQL API Mocking Library — Intercepts requests which you can then mock. Capture outgoing requests using an Express-like routing syntax, complete with parameters, wildcards, and regexes. GitHub repo.
Dynaboard: A Low-Code Web App IDE Made for Developers
next-sitemap: Sitemap Generator for Next.js Apps — Generates sitemaps and robots.txt for static, pre-rendered, dynamic, and server-side pages.
↳ The React library to build dashboards fast.
↳ Inline editing library. (Sandbox.)
React Suite 5.33
↳ Suite of React components.
↳ Create live-running code editing experiences.
Team Lead Web Development — Experienced with Node, React, and TS? Join us and lead a motivated team of devs and help grow and shape the future of our web app focused on helping millions explore the outdoors.
Find React Jobs with Hired — Hired makes job hunting easy-instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.
|
OPCFW_CODE
|
Updates to the design of Veerless
So that I could demo what the mechanism was I thought the flow could be:
- User inputs username
- Server provides server otp
- A browser plugin checks this:
- [Fail] If it’s invalid, it gives the client a warning and will not generate the client otp.
- [Success] It generates the client otp code for the user to manually input.
- User cannot provide their password or 2FA code to a fraudulent server.
- User cannot be offline “rubber-hosed” into giving the access as they don’t actually know their 2FA code at any given time.
- As the flow is so explicit it makes it more clear to potential users of this approach how it works.
- The user experience is poor, and most likely wouldn’t be well adopted.
- The protocol isn’t opt-in
Some feedback I had from a few friends was that I could use HTTP headers instead. While I could see that this would be an improvement, for some reason I thought knowing when the exchange was to occur would have many edge cases. It turns out that it was ok.
Thanks to @vkm514 for helping me refine this!
So now the demo, as it is currently at veerless.josephkirwin.com works as follows:
- On the site’s login page it presents an empty
X-Veerless-Initheader, and a username field. (Note as with all of these, the login page IP must be static or a static IP block, as that is embedded in the token to prevent site-in-the-middle attacks)
- Once it receives this, it then begins listening for an
X-Veerless-Responseheader containing the server’s otp code that corresponds the their username.
- The plugin checks this otp code against their expected one
- [Fail] Cancels the web response, and pops up a notification detailing why.
- [Success] Allows the user to continue and pops up their client 2FA in a notification box.
- (As above) User cannot provide their password or 2FA code to a fraudulent server.
- (As above) User cannot be offline “rubber-hosed” into giving the access as they don’t actually know their 2FA code at any given time.
- User does not experience anything different from the current login flow they’ve become accustomed to.
- The extension can provide a better experience via blocking the fraudulent site immediately.
these are evolving the more refined the solution becomes
- In all this there’s still not a clear approach for the client to manage otps for multiple sites. (more on that shortly)
- It’s not as evident to the audience receiving the demo what is actually happening, I need a way to display the HTTP headers to them while they login.
- Securing the serverSecret clientside may be difficult. There are options to either move the secret handling code to an Android app and have the chrome extension relay just the info and dns resolution over Google-Cloud-Messaging or instead take an approach outlined in my blog post https://www.josephkirwin.com/2016/09/12/server-authentication-with-lamports-scheme/ where only integrity is required for the client, not confidentiality.
The problem I thought of with client side credential management
The user needs to know which otp to use for the given server, in our example we only use the demo site so its not a problem. But if you have many otps, you don’t want to have to check them all for a match each time.
You do not want to index with the username as thats mostly the same across sites and you don’t want to index using dns as this whole design is supposed to avoid that specific dependency. So what do you index the otp’s with?
My proposed solution.
So recall from Serverside One-Time-Pad (Part2) we had the HOTP stuff, well its actually using TOTP for the implementation, but that is not the key takeaway here. So the equation is\[TOTP = truncate(hmac(serverSecret,timeStep|serverIP))\]
A couple of key HMAC properties that make this great in comparison to a hash
- Knowing the message part of a HMAC doesn’t (or shouldn’t if its a well selected HMAC) allow you to create an existential forgery.
- Upon truncation HMAC is much more resilient to collisions in the same keyspace then a hash algorithm is, which is of course an un-keyed function. This is how TOTP and HOTP are relatively secure under truncation, but a SHA family hash alone may not be.
So based on these 2 concepts, we can add another
X-Header into the protocol.
X-Veerless-Seed that contains the t0 value to compute the time step for the HMAC.
The likelihood of a user creating 2 usernames at the same time would be quite rare, and this edge case could be handled via client-side rejection of t0 at the registration step with the server.
With this header in place, the client can use
X-Veerless-Seed as an index into their table of available otps.
Check out the demo at veerless.josephkirwin.com and please give me some feedback in the comments!
|
OPCFW_CODE
|
Avail Of Our Logistics Regression Assignment Help In Australia To Get The Top Grades
Our online assignment expert writers put all the necessary learning data and knowledge when you take the Logistics Regression Assignment help. And create the best descriptive and original solutions that get high marks. In Data Science, many professionals and Logistics Regression Assignment Experts know a kind of complex algorithm. The Logistic Regression is a managed and directed learning classifier that is practiced in conventional statistics. Its applicability in machine learning analysis continues to be interesting. Logistics Regression is a ranking algorithm that is applied where the answer variable is absolute. The concept of Logistic Regression is to connect with the characteristics and likelihood of a remarkable result.
Linear Regression can help foretell a constant production value from a linear relation and comes between the 0 and 1. It is a probability that does not operate with Logistic Regression. Our academic Logistics Regression Assignment Experts know that you may need our assignment writing support because the level of competition is growing considerably. The students spend so much time grasping the subject and speeding long nights to match the current situation's interests. So many students crave top scores in their academic tasks that help them in overall results.
But how to write the Logistics Regression assignment with detailed solutions? If this is also one of your worries, connect with an online assignment expert who solves all your amendments related to stress, and always strives to give the quality papers that boost your marks. That is why the students look for help with Logistics Regression Assignment by our expert writers.
Logistics Regression Assignment Help For The Review Of The Logistic Regression Model
The help from the fantastic Logistic Regression team of assignment writers performs our assignments best in the class and university. Our writers perpetually establish the top industry knowledge and experience they have gained in all these years and give absolute quality assignments solutions in Australia. If you search for help with Logistics Regression Assignment writing services, an online assignment expert will come up with the task that fulfills all your issues. Our assignments experts are well versed in all the concepts and terms related to the assignments and involve the same writing attributes.
Most of the time, the assignment is asked to evaluate the execution of a logistic regression model, so our Logistics Regression Assignment Experts mentioned the overview of it below:
Deviance is practiced instead of the total of squares estimates.
Model deviance shows the acknowledgment foretold by a model on continuing the calculation on the self-governing variables. In case this Deviance is more diminutive than the null Deviance, then the experts can assume that the factors or collection of parameters drastically increased model access.
Null Deviance symbolizes the answer foretold by a standard model with the only intercept.
A different approach to determine the efficiency of the model is by applying the Confusion Matrix.
Logistics Regression Assignment Help Categorizes The Types Of Logistic Regression
In the assignments, we use the different types of Logistics Regression: Binary, Multi, and Ordinal Regression. Our help with Logistics Regression Assignment explained then in detail below:
Binary Logistics Regression Assignment
Say the experts need to derive the undergraduate program's information and want to know the exam outcomes. Their main aim is to foretell if a learner will succeed or not apply by the number of hours they did the self-studied and the total hours of sleep they got. We have two points (hours rested, hours self-studied) and succeed -1 and not succeed- 0.
To plan divined values to possibilities, experts apply the sigmoid method. This has a role in planning any genuine value into different values among zero and one. In computer education, we practice sigmoid to forecast the expected outcomes.
It derives the results of a probability number within 0 and 1. To outline this discrete state (right/wrong, hot/cold), experts choose an outset value, or a setpoint over which they will assign variables into level one and under it will be level two.
Highest Likelihood Calculation
A sigmoid function represents a probabilistic trajectory that your input features will use. Still, the maximum probability calculation is the probabilistic structure that improves the parameters to increase your design efficiency.
Our experts are skilled in using the Bayesian statistics to follow the MLE and give the students all the necessary material to help them. It's essential to set some experience with the calculation that builds your model.
in the logistic regression, it is more complicated than that of linear regression. The cost function is the Mean Squared Deviation. The cost function for linear regression is hard to calculate. It would become a non-vex function that has recurring limited points, making it challenging to reduce price values to define a global point.
Multiclass logistic regression Assignment
The next type of logistic regression uses the value 0,1 to extend our interpretation, and the result will be 0,1...n. Essentially, we again process the binary order many times, one for every level.
We split the query into n+1 binary analysis question (+1)
Then we choose the class to estimate the probability the checks are in that one particular class.
The third step is to make the forecast = <mathematics>max(probability of the levels)
Linear Regression practices analytical models such as R² and p-value to follow the design review and values shaping the model.
In the help with Logistics Regression Assignment, the R² is practiced to designate an association among the subject variable and a distinct, independent value. The sovereign variable use defines the subject variable.
P-value is practiced to resolve whether an R² is of the statistical importance of carrying out complex mathematical functions.
Finally, the cost purpose of linear regression is the Mean Squared Deviation.
Assignments covered by our Logistics Regression Assignment Experts
Why Choose Our Logistics Regression Assignment Help
There are several value-added assignment services that you get when you order the Logistics Regression Assignment from the online assignment experts. We are running for more than a decade for the students to complete their assignments. To date, no content is identified with the sign of plagiarism. We give the support with Logistics Regression and know the university accepts only the original papers; otherwise, they decline them. So online assignment experts completely write the highest quality content that gets the HD grades and incorporate all your assignment requirements.
Our instant assignment help guarantee you to give plagiarism free content. Every single solution is unique and has no association with any of our past assignments or matches any sources. We fully control our contents by checking it by the Turnitin and giving a copy of it as well.
You get the 24*7 assignment help from us as we are available to give the needed help to clarify all the issues. The experts are assigned to the students who will help them tackle all the subject related tasks. Any service can be availed by a live chat session or call on our number, and our customer care team will connect you to the desired experts for the subject.
Logistics Regression Assignment that is written by the online assignment experts is adequately well-structured and written systematically. We always carry out extensive research before writing the paper and collecting all the necessary data to write the papers. We practice analytical tools to represent the outcome of the analysis done accurately, and that is one of the perks of getting our help.
We fully recognize that learners may have a funds issue. That is why all the assignment service by the online assignment expert is genuinely priced with an ample amount of discount and seasonal offers to help you out in your Logistics Regression Assignment.
No freelancer works on the assignment to write or check your assignments. We have top editors and proofreaders who thoroughly review the assignments composed by our assignment experts to resolve the grammar or logical fallacy. We make sure the content is presentable and creative so that it gets the top marks.
Through our student portal, you are free to see the status of your assignment at any time. If you need to converse to one of the expert writers, then it is also possible. This is what you get when you get Logistics Regression Assignment Experts to help online.
Get the online assignment help by online assignment expert service providers who are given affordable Logistics Regression Assignment support to students for many years.
Why Choose Us
Your Identity is yours. We don’t tell, sell or use your contact info for anything other than sending you information about your assignment services.
1 Subject 1 Expert
Exercise your power to choose academic editors with expansive knowledge in their field of study. We are NOT run of the mill assignment help.
100% Original Content
Everything new and nothing to hide. Get edited assignment papers that are devoid of plagiarism and delivered with a copy of the Turnitin Report.
Express Assignment Services
Fear no Deadline with our skilled assignment editors. We even offer super express assignment delivery time of less than 6 hours.
We are always up and awake. Get round the clock expert assignment help through our dedicated support team and live chats with your chosen editors.
I do not know how to express my gratitude in words. You saved my six months of time because I passed my Project Management Subject. Thank you so much!
I forgot my submission deadline and remembered about it at the last hour. In a hurry, I contacted Onlineassignmentexpert.com. They delivered the paper in 4 hours.
For a Nursing case study on Elderly Care, I got a very good grade. Thanks to Online Assignment Expert website. I highly recommend them to others for quality assignment help and support.
|
OPCFW_CODE
|
PostgreSQL or Postgres is an open-source, free relational database management system with great extensibility and SQL compliance. It is also a commonly used Relational Database Management System (RDMS) for handling massive and complicated data, particularly in large companies.
It is a multi-platform database server that can be installed on Linux, Windows, and macOS, and it has extensive support for programming languages like Python, Java, Perl, Ruby, Go, C, and C#. The main features are:
- Easily expandable
- Supports time-series data types
- Efficient and cost-effective
- Scalable and supports concurrency
- Supports non-relational and relational data types
PostgreSQL has a wide range of features that can be used by administrators to create fault-tolerant systems and maintain data integrity, as well as by developers to create apps and end users to manage their data regardless of the size of the dataset. PostgreSQL is not only open source and free, but it is also very extensible. You may create new functions and specify your own data types.
PostgreSQL makes an effort to follow the SQL standard where doing so does not conflict with established functionality or potentially results in poor architectural choices. Most of the SQL standard's essential capabilities are supported, albeit occasionally with slightly different syntax or functionality. Ongoing progress in this direction can be anticipated.
PostgreSQL's primary goal is to manage a range of tasks, from simple technologies to online services or the data warehouse with multiple concurrent users. It supports both SQL for relational queries and JSON for non-relational queries. Many different businesses across many different industries, including financial services, information technology, government, and media and communications, employ PostgreSQL databases, which offer enterprise-class database solutions.
Here, you can learn how to install and connect to PostgreSQL on Ubuntu 22.04.
Some of the critical PostgreSQL configurations are:
Default PostgreSQL port 5432 The default user will be "postgres" Important Configuration files are located in: /etc/postgresql/postgresql.conf /etc/postgresql/pg_hba.conf Default database postgres Default data directory /var/lib/postgresql/
- Ubuntu 22.04 server that has been pre-configured.
- A user with sudo/root access.
- A basic firewall.
Install PostgreSQL on Ubuntu
Step 1: Update Your Linux System
Postgres packages are available in Ubuntu's default repositories; therefore, you can install them by using the APT packaging system. To update the package lists, connect to your Ubuntu 22.04 server and run the following command.
sudo apt update
Step 2: PostgreSQL Installation
You can now install the Postgres package using the following methods.
Method 1: From APT Repository
This is the most common method used and will install the latest version of PostgreSQL.
sudo apt-get -y install postgresql
Method 2: From Local Ubuntu Repository
The -contrib package adds some extra utilities and functionality.
sudo apt install postgresql postgresql-contrib -y
Now check the version of PostgreSQL installed by running the following command.
The output below indicates that you have installed PostgreSQL version 14.7.
You can also use the below command to find the version directly.
sudo -u postgres psql -c "SELECT VERSION();"
Step 3: Check Running Status of PostgreSQL
Once installed, you can check the status of the PostgreSQL daemon using the following commands.
sudo systemctl status postgresql
If the PostgreSQL service is up and running, the output would be:
PostgreSQL listens to TCP port 5342 by default, and this can be verified using the following command.
ss -pnltue | grep postgresql
Connect to PostgreSQL
Method 1: Via SQL Shell
You can now connect and interact with the PostgreSQL shell by creating a test user Postgres. You can log in to the PostgreSQL shell using the test user and access the shell using the following commands.
sudo su - postgres
To print the connection details, run the below command.
PostgreSQL servers come pre-configured with postgres, template0, and template1 databases. You may use the subsequent command to list the databases.
Postgres will act as the default database before creating any new database. The databases template0 and template1 are only skeleton databases. These must not be changed or removed.
Method 2: Via pgAdmin Tool
The pgAdmin tool helps to connect to the PostgreSQL database server via an in-built graphical user interface. Here, you are using pgadmin4 to connect to the database.
Step 1: Launch the Application
You need to install pgadmin4 on your system and open the application. The following window will be opened once you click on the pgAdmin application.
Step 2: Create a New Server
Navigate to CreateServer by right-clicking the server's node. Choose the server name on the next window options and click on the connection tab.
Step 3: Connection
You can now enter the hostname and password for the Postgres user and then click on the Save button.
Step 4: Server Details
Click on the server node and expand the new server. You can see the default database Postgres.
Step 5: Using the Query Tool
Click on Tools, and then click Query Tool.
Now enter the below command in the Query Editor and click on the Execute button.
The output will show the current version of PostgreSQL being used.
Creating a New PostgreSQL Database and User
You can use the below commands to create a new database and user in PostgreSQL.
CREATE DATABASE testdb; CREATE USER sapta WITH ENCRYPTED PASSWORD 'Sapta@123'; GRANT ALL PRIVILEGES ON DATABASE testdb TO sapta;
Password Protecting PostgreSQL Admin User
When you install PostgreSQL, a new user, Postgres, is created. This account will be assigned the default Postgres role, which does not require any password for authentication. This can pose security issues. Hence, to secure the user account and prevent unauthorized access, you can assign a password for this user account.
The following command can be used to assign a password for the PostgreSQL user.
ALTER USER postgres PASSWORD ‘postgres@123’;
Once done, exit the shell by using the below command.
Enabling Remote Connections to PostgreSQL Server
PostgreSQL, by default, only accepts connections from localhost or the system on which it was installed.
If you want to connect from remote locations, the remote connection feature needs to be enabled. For this you need to make changes to the PostgreSQL configuration file located at /etc/postgresql/<version>/main/ directory.
In this case, the file would be at /etc/postgresql/14/main/postgresql.conf.
Under the Connection and Authentication section, edit the listen address field to ‘*’, which will allow remote connections from all IPs.
Save the changes and exit.
Now, to allow IPv4 addresses, you can edit /etc/postgresql/14/main/pg_hba.conf.
Under IPv4 local connections, you can allow global connections or specific IP addresses.
Save the changes and restart the PostgreSQL service for the changes to be effective.
systemctl restart postgresql
If there is a firewall running, enable port 5342.
sudo ufw allow 5342/tcp sudo ufw reload
Now you can verify the remote connectivity using the following command.
psql -h 192.168.153.134 -U postgres
Here, 192.168.153.134 is the server IP.
How to Uninstall PostgreSQL Database on Ubuntu 22.04
You can un-install PostgreSQL using the following command.
sudo apt remove postgresql postgresql-contrib
You have now set up PostgreSQL on Ubuntu 22.04 server and learned how to connect to the database, create new databases, enable remote connections, etc. Consider adopting PostgreSQL if you want an effective and stable database management system. Liquid Web offers a dedicated server platform that you can use to try these installation steps to configure your own database server.
Our Sales and Support teams are available 24 hours by phone or e-mail to assist.
|
OPCFW_CODE
|
Let it be to create a domain, point a Namecheap Nameservers, DNS management, or web hosting, Namecheap can bring your ideas to life.
Following are some of the details regarding nameservers that you should be knowing if you are creating a domain.
Post Contents :
What are Namecheap Nameservers?
It is a server on the internet that is dedicated to handling queries about the location of a domain name’s various services.
Nameserver is the most essential part of the DNS which allows using the domains instead of IP addresses. Default Namecheap nameservers are dns1.namecheaphosting.com and dns2.namecheaphosting.com.
You can check the details of Nameserver Vs DNS Server to understand the difference between the two.
Using Default Nameservers vs Hosting Domain:
If you have chosen Namecheap to create your domain for your website, you have two options to choose from. You can either use Free DNS or DNS provided with domain registration and Hosting Services.
When you create a domain with Namecheap, you will automatically get the default nameservers like dsn1.registrar-servers.com.
👉 Sign in to your Namecheap account. You can click this link to visit the website – https://www.namecheap.com/myaccount/login-signup/
- Then select the Domain List which can be seen on the left sidebar
- Click Manage button
- Now click on the Advanced DNS tab
- At the button, you can find Host Records, click on that to add a new record
When a domain name is pointed to the default nameservers, the Host Records menu will be visible. If you cannot find the Host Record option, it means that your domain is pointed to a third-party nameserver.
If you have used Namecheap as your hosting service and point the domain to Namecheap hosting, then the server in which your account is created will act as a DNS server too other than web hosting.
To create a subdomain of your domain to a nameserver not associated with the domain itself then you can use the Namecheap ns record.
How Reliable Are Namecheap’s Nameservers?
The nameservers of Namecheap are geographically distributed to enhance the speed and performance of a website.
In case if one network is not working, the Namecheap DNS Records will automatically direct to another network to keep your website on all the time.
How Can I Delete My Personal Nameservers?
Check the following guidelines to know how you can delete your nameservers.
- Visit the website of Namecheap and Sign In.
- Click the Account option in the upper right corner of the screen, you can see the Domain List.
- Select the Domain List menu.
- Click the Manage button which can be seen on the same line as your domain for which you want to delete the personal nameservers.
- Click the Advanced DNS tab
- Under the Personal DNS Servers, select the type of your server
- Click on Search
- A menu will appear on the screen and you can delete the nameserver
Before deleting the nameserver, cross-check to know that no domains are using this nameserver.
How to Nameserver Setup for Shared Packages in Namecheap?
- Sign in to the Namecheap account and go to Domain List.
- Click on Manage
- Select Namecheap Web HostingDNS and save changes by clicking on the checkmark icon.
How To Setup Personal Nameservers? (Reseller Packages)
When you have the name servers registered, you should set them up on the server. You can register your nameservers if you have a reseller account with Namecheap.
- First Login to cPanel
- Navigate to the Domains section
- Click the Zone Editor option
- Now click + A Record
- You can create two A record for the two nameservers
- Log in to WHM (WebHost Manager )
- In the WebHost Manager Setup, set your nameservers(this will automatically apply to all new cPanel accounts that you create)
- Click on Save Changes.
How To Access IPMI for Dedicated Servers in Namecheap?
👉 To access IPMI for dedicated servers, it is essential to set up a VPN connection. Before setting the VPN make sure to check you have the necessary information mentioned.
- Generate a VPN connection to our IPMI network
- The VPN connection should now be set to work inside the private network.
👉 The above step is not mandatory but is highly recommended. Else, you will not have an Internet connection while the VPN connection is active.
- Connect to IPMI VPN network.
- Now browse to the IPMI interface of your server
What is a Private Nameserver?
These nameservers are DNS nameservers connected with a particular domain name.
Private nameservers can be used only by the web hosting company’s reseller, dedicated, and VPS hosting plans.
How Do I Namecheap a Domain To Namecheap Hosting?
✔ Sign in to your Namecheap account
✔ Select Domain List from the left sidebar
✔ Click the Manage button
✔ Click the Nameservers section
✔ Select Namecheap Web Hosting DNS from the drop-down menu
✔ Click Save Changes
How Long Does Namecheap DNS Take?
It takes about 24 – 48 hours when you change the nameserver for a domain.
These are some of the details of Namecheap Nameservers. It is possible to Namecheap change nameservers to Cloudflare if required. Moreover, Namecheap nameservers for Godaddy can be changed by setting up a few steps.
With Namecheap, you can look for a name server check to know how far your domain has propagated. Hope now you know what are nameservers for the domain and how it will function.
|
OPCFW_CODE
|
[sword-devel] beta 1.5.2 WIN32 binary available
Tue, 19 Jun 2001 18:58:17 +1200
> > For FinPR, The letter after "Phy" in the modules description
> > "1938 PhyZ Raamattu" appears as a black block.
> I think that's a Z with a breve, a high ascii character that has a
> tendency to cause problems. But hey, that's the module name, what can
> we do?
Just noticed it, so thought I'd mention it.
> > I think that InstallMgr should remove the directory,
> I agree, but it's not a critical bug for the 1.5.2 release. This would
> never have been apparent except that we've started moving modules around
> as they change to new drivers. MHC, JFB, LXXM, & BHS all used to be raw
> modules and are now compressed.
Error occurs when simply uninstalling a module too... see below.
> > and that
> > Sword.exe should not include modules that it can't find a
> > .conf file for.
> I'm confused by that. Shouldn't a .conf be the only way it knows to
> display a module?
I'd have thought so, but apparently not (Note I'm only talking about the dialogue that sets what is visible in the tabs). If you run
Sword.exe, right click on the modules tabs, and select "Hide / Show Modules" and click on OK, then qit Sword, remove the .conf, and
the files from the modules directory, or get InstallMgr to Remove the module, then run Sword.exe and "Hide / Show Modules" again,
then you get an access violation.
I discovered later this afternoon that removing the module's corresponding line from the [TextView] section of layout.conf solves
So my ideas now are that installing or updating a module could set the line in that file to "true", removing a module could remove
the line from the file, and should remove the directory, and that Sword.exe should check that the .conf exists before reading the
modules description from it.
Is the layout.conf file used only by Sword.exe or do other front-ends use it too?
Does InstallMgr use a single compiler source for many OSes?
> Yep. There should be a file called lxxm.doc in the LXXM directory. But
> we'll work on a nicer mode of presentation for the next version, akin to
> what Logos & BibleWorks do for their Morphological databases.
What do they do?
|
OPCFW_CODE
|
package pl.beny.smpd.util;
import java.util.*;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class Quality {
public static List<List<Boolean>> checkBootstrap(int i, int n, int k) {
List<List<Boolean>> results = Arrays.asList(new ArrayList<>(), new ArrayList<>(), new ArrayList<>());
List<Sample> samples = Database.getSamples();
IntStream.range(0, i).forEach(j -> {
List<Sample> training = IntStream.range(0, n).boxed()
.map(l -> samples.get(new Random().nextInt(samples.size())))
.collect(Collectors.toList());
results.get(0).addAll(getResultsNN(samples, training));
results.get(1).addAll(getResultsNM(samples, training));
results.get(2).addAll(getResultsKNN(samples, training, k));
});
return results;
}
public static List<List<Boolean>> checkCrossvalidation(int parts, int k) {
List<List<Boolean>> results = Arrays.asList(new ArrayList<>(), new ArrayList<>(), new ArrayList<>());
List<List<Sample>> samples = getSubsets(parts);
IntStream.range(0, parts).forEach(i -> {
List<Sample> training = getTraining(parts, i, samples);
results.get(0).addAll(getResultsNN(samples.get(i), training));
results.get(1).addAll(getResultsNM(samples.get(i), training));
results.get(2).addAll(getResultsKNN(samples.get(i), training, k));
});
return results;
}
private static List<List<Sample>> getSubsets(int parts) {
List<Sample> samples = new ArrayList<>(Database.getSamples());
Collections.shuffle(samples);
List<List<Sample>> subsets = new ArrayList<>();
int partitionSize = Math.max(1, samples.size() / parts);
IntStream.range(0, parts)
.forEach(i -> subsets.add(samples.subList(
Math.min(i * partitionSize, samples.size()),
i + 1 == parts ? samples.size() : Math.min((i + 1) * partitionSize, samples.size()))));
return subsets;
}
private static List<Boolean> getResultsNN(List<Sample> samples, List<Sample> training) {
return samples
.stream()
.map(sample -> Classifiers.classifyNN(sample, training))
.collect(Collectors.toList());
}
private static List<Boolean> getResultsNM(List<Sample> samples, List<Sample> training) {
return samples
.stream()
.map(sample -> Classifiers.classifyNM(sample, training))
.collect(Collectors.toList());
}
private static List<Boolean> getResultsKNN(List<Sample> samples, List<Sample> training, int k) {
return samples
.stream()
.map(sample -> Classifiers.classifyKNN(sample, training, k))
.collect(Collectors.toList());
}
private static List<Sample> getTraining(int parts, int part, List<List<Sample>> subsets) {
return IntStream.range(0, parts)
.filter(i -> i != part).boxed()
.flatMap(i -> subsets.get(i).stream())
.collect(Collectors.toList());
}
}
|
STACK_EDU
|
The portability of the SWASH code between different platforms is guaranteed by the use of standard ANSI
FORTRAN 90. Hence, virtually all Fortran compilers can be used for installing SWASH. See also the manual
Programming rules (to be found on the SWAN website http://swanmodel.sourceforge.net/online_doc/swanpgr/swanpgr.html).
The SWASH code is parallelized, which enables a considerable reduction in the simulation time for relatively large CPU-demanding calculations. A message passing modelling is employed based on the Message Passing Interface (MPI) standard that enables communication between independent processors. Hence, users can optionally run SWASH on a Linux cluster.
The material on the SWASH website provides a Makefile and two Perl scripts (platform.pl and switch.pl) that enables the user to quickly install SWASH on the computer in a proper manner. For this, the following platforms, operating systems and compilers are supported:
|SGI Origin 3000 (Silicon Graphics)||IRIX||SGI|
|Compaq True 64 Alpha (DEC ALFA)||OSF1||Compaq|
|PA-RISC (HP 9000 series 700/800)||HP-UX v11||HP|
|IBM Power6 (pSeries 575)||Linux||IBM|
|Intel Pentium (32-bit) PC||Linux||GNU (g95)|
|Intel Pentium (32-bit) PC||Linux||GNU (gfortran)|
|Intel Pentium (32-bit) PC||Linux||Intel|
|Intel Pentium (64-bit) PC||Linux||Intel|
|Intel Itanium (64-bit) PC||Linux||Intel|
|Intel Pentium (64-bit) PC||Linux||Portland Group|
|Intel Pentium (32-bit) PC||Linux||Lahey|
|Intel Pentium (32-bit) PC||MS Windows||Intel|
|Intel Pentium (64-bit) PC||MS Windows||Intel|
|Intel Pentium (32-bit) PC||MS Windows||Compaq Visual|
|Power Mac G4||Mac OS X||IBM|
If your computer and available compiler is mentioned in the table, you may consult Section
3.1 for a quick installation of SWASH. Otherwise, read Section 3.2
for a detailed description of the manual installation of SWASH.
Note that for a successful installation, a Perl package must be available on your computer. In most cases, it is available for Linux and a UNIX operating system. Check it by typing perl -v. Otherwise, you can download a free distribution for Windows called ActivePerl; see http://www.activestate.com/activeperl/downloads. The Perl version should be at least 5.0.0 or higher!
Before installation, the user may first decide how to run the SWASH program. There are two possibilities:
|
OPCFW_CODE
|
/**
* @fileoverview The summary formatter, it outputs the aggregation of all the hint results in a table format.
*/
/*
* ------------------------------------------------------------------------------
* Requirements
* ------------------------------------------------------------------------------
*/
import * as chalk from 'chalk';
import forEach = require('lodash/forEach');
import groupBy = require('lodash/groupBy');
import * as table from 'text-table';
const stripAnsi = require('strip-ansi');
import { logger, severityToColor, occurencesToColor } from '@hint/utils';
import { writeFileAsync } from '@hint/utils-fs';
import { debug as d } from '@hint/utils-debug';
import { FormatterOptions, IFormatter } from 'hint';
import { Problem, Severity } from '@hint/utils-types';
import { getMessage } from './i18n.import';
const _ = {
forEach,
groupBy
};
const debug = d(__filename);
/*
* ------------------------------------------------------------------------------
* Formatter
* ------------------------------------------------------------------------------
*/
export default class SummaryFormatter implements IFormatter {
/** Format the problems grouped by `resource` name and sorted by line and column number */
public async format(messages: Problem[], options: FormatterOptions = {}) {
debug('Formatting results');
if (messages.length === 0) {
return;
}
const tableData: string[][] = [];
const language: string = options.language!;
const totals = {
[Severity.error.toString()]: 0,
[Severity.warning.toString()]: 0,
[Severity.information.toString()]: 0,
[Severity.hint.toString()]: 0
};
const resources: _.Dictionary<Problem[]> = _.groupBy(messages, 'hintId');
const sortedResources = Object.entries(resources).sort(([hintA, problemsA], [hintB, problemsB]) => {
if (problemsA.length < problemsB.length) {
return -1;
}
if (problemsA.length > problemsB.length) {
return 1;
}
return hintA.localeCompare(hintB);
});
_.forEach(sortedResources, ([hintId, problems]) => {
const msgsBySeverity = _.groupBy(problems, 'severity');
const errors = msgsBySeverity[Severity.error] ? msgsBySeverity[Severity.error].length : 0;
const warnings = msgsBySeverity[Severity.warning] ? msgsBySeverity[Severity.warning].length : 0;
const informations = msgsBySeverity[Severity.information] ? msgsBySeverity[Severity.information].length : 0;
const hints = msgsBySeverity[Severity.hint] ? msgsBySeverity[Severity.hint].length : 0;
const red = severityToColor(Severity.error);
const yellow = severityToColor(Severity.warning);
const gray = severityToColor(Severity.information);
const pink = severityToColor(Severity.hint);
const line: string[] = [chalk.cyan(hintId)];
if (errors > 0) {
line.push(red(getMessage(errors === 1 ? 'errorCount' : 'errorsCount', language, errors.toString())));
}
if (warnings > 0) {
line.push(yellow(getMessage(warnings === 1 ? 'warningCount' : 'warningsCount', language, warnings.toString())));
}
if (hints > 0) {
line.push(pink(getMessage(hints === 1 ? 'hintCount' : 'hintsCount', language, hints.toString())));
}
if (informations > 0) {
line.push(gray(getMessage(informations === 1 ? 'informationCount' : 'informationsCount', language, informations.toString())));
}
tableData.push(line);
totals[Severity.error.toString()] += errors;
totals[Severity.warning.toString()] += warnings;
totals[Severity.information.toString()] += informations;
totals[Severity.hint.toString()] += hints;
});
const color = occurencesToColor(totals);
const foundTotalMessage = getMessage('totalFound', language, [
totals[Severity.error].toString(),
totals[Severity.error] === 1 ? getMessage('error', language) : getMessage('errors', language),
totals[Severity.warning].toString(),
totals[Severity.warning] === 1 ? getMessage('warning', language) : getMessage('warnings', language),
totals[Severity.hint].toString(),
totals[Severity.hint] === 1 ? getMessage('hint', language) : getMessage('hints', language),
totals[Severity.information].toString(),
totals[Severity.information] === 1 ? getMessage('information', language) : getMessage('informations', language)
]);
const result = `${table(tableData)}
${color.bold(`× ${foundTotalMessage}`)}`;
if (!options.output) {
logger.log(result);
return;
}
await writeFileAsync(options.output, stripAnsi(result));
}
}
|
STACK_EDU
|
Feature/memoize key
Description of proposed changes
Feature
Currently, there is no way to decide the key to be memoized when using preprocessor(memoize=True).
This leads to 2 issues :
memoization can not be done for unhashable classes (typically a group of pandas rows). We need to wrap or subclass it.
memoization key can not be specific to a preprocessing.
Example : We are trying to evaluate the reliability of a paragraph in a blog post.
We could evaluate the reliability of the paragraph and of the website.
The preprocessing corresponding to those 2 tasks will share the same key for memoize, which is not ideal : a website can have a few thousand paragraphs so we will evaluate website reliability a lot more than necessary.
Result
@preprocessor(memoize=True, memoize_key=lambda p: p.base_website_url)
def add_website_reliability(paragraph):
paragraph.website_reliability = evaluate_reliability(paragraph.base_website_url)
return paragraph
Implementation
Add a memoize_key : Optional[HashingFunction] to the BaseMapper, if provided and not None, it will be used instead of get_hashable to define the hash of the input.
memoize_key has been made accessible to the different functions providing memoize api.
Related issue(s)
https://github.com/snorkel-team/snorkel/issues/1561
Test plan
Checklist
Need help on these? Just ask!
[x] I have read the CONTRIBUTING document.
[x] I have updated the documentation accordingly.
[x] I have added tests to cover my changes.
[x] I have run tox -e complex and/or tox -e spark if appropriate.
[x] All new and existing tests passed.
@henryre , I will be waiting for your review.
I ran tox -e doc, but it did not produce any change and I have a bunch of WARNING: toctree contains reference to nonexisting document, is it normal ?
By the way, how would you like to discuss my use case further ?
@henryre I just noticed it won't work in the expected way.
The snorkel pattern is to return the x_mapped so the cache will change the data point.
def test_decorator_mapper_memoized_use_memoize_key(self) -> None:
square_hit_tracker = SquareHitTracker()
@lambda_mapper(memoize=True, memoize_key=lambda x: x.num)
def square(x: DataPoint) -> DataPoint:
x.num_squared = square_hit_tracker(x.num)
return x
x8 = self._get_x()
x8_mapped = square(x8)
assert x8_mapped is not None
self.assertEqual(x8_mapped.num_squared, 64)
self.assertEqual(square_hit_tracker.n_hits, 1)
x8_with_another_text = self._get_x(text="Henry is still having fun")
x8_with_another_text_mapped = square(x8_with_another_text)
assert x8_with_another_text_mapped is not None
self.assertEqual(x8_with_another_text_mapped.num_squared, 64)
self.assertEqual(square_hit_tracker.n_hits, 1)
# This should fail :/
self.assertEqual(x8_with_another_text_mapped, x8_mapped)
Hi @Wirg, thanks for putting this up! Based on the example you put up, the expected behavior would be square(x8_with_another_text) == x8_mapped since the hashing function was (intentionally) "poorly chosen" in the test. Are you saying self.assertEqual(x8_with_another_text_mapped, x8_mapped) will trigger an AssertionError in the current implementation?
Hi @henryre ,
I hope you're going well.
Small bump on this PR.
Hi @henryre ,
Another bump for this pr.
What do you want me to do ?
Do you have some change ? Do you want to give up on this feature ?
Hi @Wirg, sorry for the delay here and thanks for the reminder! Taking a look today!
@henryre
I finally changed the test. I am not fully satisfied. If tomorrow someone changes :
get_hashable to support pandas dataframe
memoize_key not to be used
The test won't fail.
@Wirg good thinking! You could add an additional field in the test called not_used and have different values for the two data points
@henryre so I added a not_used int.
I am still encountering new test failures.
I fixed F541 (f-string used with no parameters).
I am facing a typing failure due to torch.nn.Linear usage in snorkel/slicing/utils.
I am not sure what are my steps on this ?
Is this already fixed and I should rebase ?
@henryre small bump
I don't know what I should be doing regarding codecov.
Am I expected to add more tests. If yes, where ?
@Wirg looks like codecov was being a bit temperamental, will go ahead and merge in. Thanks for your hard work here!
|
GITHUB_ARCHIVE
|
I was recently plagued by bluescreens on my relatively new (3 weeks) XPS13 9370 (9370, i7-8550U, 512/16GB UHD, WIN10 Pro).
The blue screen (MEMORY MANAGEMENT) always came up when I wanted to send the device to standby via the power button.
I was able to reproduce the problem: As soon as you start Android Studio, run an AVD and then touch the power button respectively fingerprint reader, the blue screen appears.
In this context (XPS13 9370 Android Studio Bluescreen MEMORY MANAGEMENT) I came across the following thread on Gitub, where a user with an almost identical configuration has the same problem:
Apparently there is a problem with the Intel HAXM driver used by the Android Virtual Devices and the driver of the fingerprint reader (Goodix). If you deactivate the fingerprint reader, you can use the power button again, only without fingerprint.
If necessary, does this also occur with other models of the current generation in which the fingerprint reader is integrated in the power button?
Has anyone else encountered this problem?
I am also able to reproduce the issue, on my new Dell XPS 13 9370. It would be nice if Dell could submit an issue with Goodix, so others wouldn't have to wonder if the issue is with the hardware.
I can confirm same issue for XPS 15 9570 with i7-8750h (win10 1803 (build 17134)).
Issues occurs if screen gets locked during running HAXM AVD.
BSOD occurs also in case app getting restarted in AVD during debugging.
could you please provide an exact and simple step by step guide on how to duplicate this behavior?
Using a XPS 13 (9370), I've installed Android Studio incl. Intel HAXM, running the Android Studio with an open project and open emulator for a random Android device. If I send the system to sleep and wake it up using the power button everything is working flawlessly. I was not able to see the reported Bluescreen so far.
Fingerprint is setup and running on the power button.
With kind regards,
I have the latest Android Studio version and a Pixel 2 AVD with API level 27 (Android 8.1) set up with HAXM 7.20.
1. The PC is freshly booted, I open Android Studio.
2. I run the application by pressing the "Run" button, select the AVD, which starts up and runs the app after the gradle build
3. I touch the fingerprint sensor and the BC occurs.
BUT I have performed a "fresh start" when I got the device and installed the latest drivers and needed tools from Dells support site which have shown after putting in my service tag.
Dell Update shows that all drivers are up to date.
Thank you stozk,
could you provide us this project/application you programmed or create a simple project/application that we could use for issue duplication?
I've already tried to run a sample application using a Nexus 4 AVD, but it didn't crash upon using the fingerprint sensor. Will have the XPS 13 (9370) in my hands again on monday, so I can test with Pixel 2 AVD too.
sadly I can't provide you with application code.
I've tried to run a Pixel 2 AVD which I wiped before and a new Nexus 4 AVD without deploying the application on it and the problem occured the same way.
I've recorded it here:
For me, the issue appeared with the Nexus 5x AVD. I opened the Android studio, with the default Hellow World project, started the project on the Nexus 5x AVD, then after putting the laptop to sleep, it bluescreened. After disabling the fingerprint sensor in the device manager the problem went away. I have Goodix fingerprint driver, and it's up to date according to the Support Assist.
Thanks for the help.
thank you for the video.
I've tried with Pixel 2 AVD too but the bluescreen wont show up when using the fingerprint reader on the power button.
In addition another XPS 13 (9370) and a XPS 15 (9575) has been tested with Android Studio and Intel HAXM, but there was also no bluescreen shown.
Is there any custom installation done for Android Studio or Intel HAXM like custom memory size etc.?
With kind regards,
|
OPCFW_CODE
|
Drug Infonet provides drug and disease information for your healthcare needs. Visit our FAQ page to find answers to common health questions. Look on the Manufacturer Info page to link to pharmaceutical company pages. Click to Health Info and Health News for the latest in healthcare developments.
Doctors' Answers to "Frequently Asked Questions" - Depakote
Answer: Usually comes back.
Answer: There are reports of this, I do not have title and verse available. You can obtain this from the scientific department of Abbott 800-255-5162
Answer: I don't know of any interaction between Depakote and the thyroid. There is one sentence in the precautions of the PDR that notes that there are reports of changes in the thyroid test, but I have not seen this problem.
Answer: There arer potential toxicities. However, these can be monitored easily and there don't seem to be severe long term problems. Try the drug and see how it moderates your functional ability. The major problem is feeling drunk all the time. Neurontin would be less toxic and might be an answer depending on your EEG. But, the place you live is unlikely to have much bearing on the frequency of your seizures. Seizures are potentially life threatening, so need to be suppressed.
Answer: Wouldn't expect them to, but check the Depakote levels to ensure that they are not higher than usual.
Depakote for Schizophrenia Treatment [posted 10/15/98]
Answer: Depakote has been used in many different medical syndromes in the past couple of years based on its relatively unique activity. While it is not approved for the treatment of schizophrenia by the FDA, it has been used in treatment with mixed success.
Depakote & Herbal Pill [posted 10/13/98]
Answer: Neither is an mao inhibitor. There should be no problem with metabolife.
Depakote & Marijuana [posted 10/8/98]
Answer: Out of my league, but, I'd ask the company- Abbott 800-633-9110.
Depakote & Ephreda Use [posted 10/2/98]
Answer: None that I am aware of.
Liver Problems from Depakote [posted 7/23/98]
Answer: Depakote can cause liver toxicity in some patients for unclear reasons even if in the normal range. However, in general it is dose dependent. Aspirin does not appear to be a factor in this problem. As to the interaction between depakote and the statins, there is no clear evidence here, you'll need to monitor liver functions a little more closely.
Depakote Side Effects [posted 7/16/98]
Answer: Depakote has been used for a while, mainly for seizures, but also for other conditions. Side effects include headache, asthenia, decreased appetite, nausea, somnolence, body aches, vertigo, double vision, tremor, and hair loss. It depends a lot on the drug levels of the Depakote, but side effects are very common with this drug.
Depakote and Pregnancy [posted 7/16/98]
Answer: This study has not been done and will not be for ethical reasons. Studies in bacteria and mice haven't shown any particular worrisome effects.
Depakote and Effexor
Answer: Depakote has been implicated in causing discoid lupus. This would involve sun sensitivity. Otherwise neither is a big risk.
Answer: Depakote(divalproex sodium) is an anti-seizure drug which has been available about 10
years. Side effects include liver toxicity (some deaths, especially small children), low platelet
counts, nausea, sleepiness, vertigo, vomiting, abdominal pain, decreased appetite and rash.
These are the common side effects. The others have been listed
on rare occasions.
There is a low list of minor side effects available in the Physicians Desk Reference, which is available
in any book store or library.
|
OPCFW_CODE
|
Parallel manipulation of object with python
I have a list of ojects ObjList where all objects are instances of the same class. This class has a method run which I would like to execute in parallel for the objects in ObjList.
The results of the computation are then stored inside the objects. Without parallelisation I currently do something like
for obj in self.ObjList:
obj.Run()
This coded is part of class method of a class which contains and handles "lists" of these objects. Afterwards, I want to be able to read the results of the computations performed by obj.Run. I tried the multiprocessing.Pool methods where I ran into problems with pickle. I also tried to use multiprocessing.Process but there I had the problem that the results were stored in a copy of the object which then was discarded. I did not manage to return the manipulated object.
Is there a simple way of applying the same class method to a list of objects which are instances of the same class (which is rather complex and uses multiple objects itself)?
Edit: I tried the approach suggested in the answers to this question but then I always receive errors of the form
AttributeError: Can't pickle local object 'someclass.<locals>.<lambda>'
Possible duplicate of Parallel execution of class methods
@JacquesGaudin: I tried this approach but then I am running into problems with pickle
I am not sure that what I will offer is a "simple way", but see if this works for you.
I would recommend using a third party component (like numba) to transform your code to a compiled machine code that “just-in-time” when needed to
utilize the power of some parallel architecture, like the NVIDIA GPUs or your own multi-core CPU. That way you won't need to handle the hassle of the parallel optimization.
As each code/application has it's own logic and flow, I can't specify the exact transformation of your code, but this is a good line, in my opinion, to follow, that (hopefully) doesn't require much alterations to your code.
Answers that are only a link are rarely the most helpful. Links have a tendency to die, so people looking for help on the same problem in the future might get nothing out of it. Please summarize what the OP should do based on the resources you link in addition to providing the link.
Sure. I made the changes, hope it's up to par. As a side note, I added links to a very solid web pages that the probability of them dying is lesser than the probability of this web page dying, with your comment to (and) my answer.
I will propose to use this code snippet first.
def thread_run(obj):
obj.Run()
def Dothejob():
tr = []
for cmd in obj:
t = threading.Thread(target=thread_run, args=(obj))
t.start()
tr.append(t)
# waiting for all threads ended
for item in tr:
item.join()
Dothejob()
|
STACK_EXCHANGE
|
You choose a group of colleges or universities and you decide what assumptions you’d like to make about proposed or existing college affordability policies…
Use the drop down menus that appear to the right of the table described above to select the school year, state, type of institution, and the student’s living status.
Note that we do not have sufficient data to produce an estimate for every combination of that can be selected. If you select a combination where we have insufficient data an error message will appear at the top of the page stating that we have or will dispatch a student to see if you can obtain the needed data. Unfortunately though you will not be notified of any progress that the graduate student(s) may or may not be making. Sorry ’bout that. Generally speaking though we have fairly complete data up through the 2013-2014 academic year.
Use the sliders to set your assumptions for the contributions made by the student and the student’s family.
The family income exclusion slider determines the discretionary income level for a family based on the poverty guideline. On the slider shown to the right, the income exclusion is set to 200% of the poverty guideline. The family income contribution sets a percent of the discretionary income that is used to help pay for college each year that the student is enrolled. The sliders shown to the right the families are set consistent with an assumption that they contribute 10% of their income greater than 200% of the poverty guideline each year to help pay for their student to attend college.
The percent discretionary income saved, years of savings and interest on savings sliders are used in a similar manner. Note that the interest on savings is real interest (interest earned after inflation). The sliders shown to the right are set consistent with an assumption that families save 5% of their discretionary income for ten years to help pay for their student to attend college.
The student hours worked slider sets the number of hours that the student works each year to help pay for college. All of the take home pay from the number of hours set with the slider is assumed to go toward paying the costs of attending college.
You can use the policy change change settings in several ways. The Lumina Benchmark button toggles the settings for assumed family and student contributions from current income and savings on and off. The tuition adjustment slider can be used to adjust tuition while estimating a change to the state appropriation that would be revenue neutral for the educational institutions.
Additionally, some states might display other policy or policy proposal exploration tools. The slider SB 5476 is an example that might display when the State of Washington is selected.
The “inputs” portion of the table on the left side of the controls section contains overview information about the specific state and collection of institutions during the academic year that is specified. One entry, “Years in college,” can be changed by you, the user. For example the “4” that appears in the illustration to the left of this paragraph can be changed to 4.5 or 5.0 (or whatever) to represent a time of attendance different from the nominal amount associated with the collection of colleges or universities chosen.
The “model variables” portion of the table contains both information that summarizes the family’s contribution (top four items) and the variables used to determine “affordable debt” funding component. “loan repayment ratio,” “Percentile of earner,” “Interest on debt,” and “Loan duration” can be set by you, the user.
“Loan repayment ratio” is the ratio of annual loan payment to gross earnings (after graduation). “Percentile of earner” determines the annual earnings on which to base the affordable debt determination. Interest on debt is the “annual percentage rate” associated with the loan. Loan duration is the loan payoff period, in years.
|
OPCFW_CODE
|
Android dynamic library load vs. LoadLibrary
When I embed libabc.so inside my app it works fine with
System.LoadLibrary("abc");
However, when I move libabc.so to /system/lib/ and I try to load it with
System.Load("/system/lib/libabc.so"); I get the following linker error in logcat:
06-12 04:42:09.864: D/dalvikvm(17630): Trying to load lib /system/lib/libabc.so 0x4254afd8
06-12 04:42:09.869: E/linker(17630): "libabc.so": ignoring 2-entry DT_PREINIT_ARRAY in shared library!
06-12 04:42:09.869: D/dalvikvm(17630): Added shared lib /system/lib/libabc.so 0x4254afd8
06-12 04:42:09.869: D/dalvikvm(17630): No JNI_OnLoad found in /system/lib/libabc.so 0x4254afd8, skipping init
It actually loads my library but I'm unable to call the exported functions (it tells that native method is not implemented).
Why?
By the way, when you move libabc.so to /system/lib/, you can still load it with System.loadLibrary("abc");
Maybe, the app package name has changed?
No. Same app. Just changing the function and rebuilding. I realized that everything loaded from /data/app-lib/myappname/ is fine but from other places it just doesn't work... why?
Changing which function? As I wrote, you can use System.loadLibrary("abc") to load from /system/lib/.
I know, it does load the library. But I can't call the exported functions. As if they are not implemented. The only thing that work is to load the library from /data/app-lib/myappname/
1. what is your device/Android version? 2. what does nm -D libabc.so show?
It show the exported functions
00002bfd T Java_com_john_mytestApp_TestService_myfunc 00002d09 T Java_com_john_mytestApp_TestService_test
Please post your Java file src/com/john/mytestApp/TestService.java, too. Did you show nm output for file you pulled from /system/lib/?
Added the code in the next post. And yes, the nm output was from the library pulled from /system/lib
public class TestService extends IntentService {
static { System.loadLibrary("abc"); }
private Handler handler = new Handler();
public native long myfunc(String arg);
public TestService() {
super("TestService");
}
@Override
protected void onHandleIntent(Intent intent) {
runnable.run();
}
private Runnable runnable = new Runnable()
{
public void run()
{
System.out.println(">>> " + myfunc("test"));
}
};
}
did you find a solution for this error? I am having same issue.
|
STACK_EXCHANGE
|
PyCon, the gathering for the community using and developing the open-source Python programming language. This is the first year of the PyCon Pune where the community will meet for two days of talks and working on upstream projects in two days of dev sprint. CFP ends on 30th November AoE.
Django Inside Out: A complete Python Web framework for fast, secure and scalable Web application.
Django is a high-level free and open-source Python Web framework which follows the model-view-template (MVT) architectural pattern. It encourages rapid application development which is fast, secure and scalable with pragmatic design. Django is developed with a goal to build complex, database-driven websites with ease. It provides hassle free web app development experience and other core functionalities.
In this talk I will talk about the complete architecture of the Django framework, about its component, getting started with building an example web application. And some other performance and optimization tools and techniques.
In this talk the audience (preferably beginner and intermediate level), will know how to get started with Django and sample running web application on the go as well as with core concepts of Django (MVT) architecture and a bit of Python magic.
###Understanding the (MVT) approach of Django
- The Modal Layer : Abstraction layer (the “models”) for structuring and manipulating the data of your Web application.
- The View Layer : The concept of “views” to encapsulate the logic responsible for processing a user’s request and for returning the response.
- The Template Layer : A designer-friendly syntax for rendering the information to be presented to the user.
###Building a simple live web app using Django Framework [Example]
- A simple web application to set up and running with Django Framework.
- Understanding some MVT concept in application.
- The Development process.
###Performance and Optimization Overview
-Tools and techniques that can help get your code running more efficiently.
###This talk is intended for getting up and running the Django web application [live]. Participants are requested to to install the following software as a prerequisite:
- Python 3.4.x
- Install [Django] [https://docs.djangoproject.com/en/1.10/intro/install/]
- Some basic Python programming language and front-end technologies would be helpful
Raj Kumar Maurya is an Open Source enthusiast, currently a final year undergraduate of Computer Science and Enigneering. He is a Free Open Source Software [FOSS] contributor and actively involved in contributing to Mozilla, DuckDuckgo, NumFOCUS, Wikimedia, WordPress and Docker. He is currently a Language Leader in DuckDuckgo. Apart from this he has research and industrial experience as an intern at various positions at tech. startups and also worked as Software Engineer intern at Xerox Research India.
- Useful links for participants:
- Social Profile Links:
- -Email: email@example.com
- -Web: http://rajkmaurya.com/
- -LinkedIn: https://in.linkedin.com/in/raj-maurya-779b495b
- -GitHub: https://github.com/raj-maurya
- -Twitter: https://twitter.com/raj__maurya
- -Facebook: https://www.facebook.com/rajkmaurya111
- -Wiki: https://en.wikipedia.org/wiki/User:Rajkmaurya111
- -DuckDuckgo: https://forum.duckduckhack.com/users/rajkmaurya111/summary
- -Mozilla: https://mozillians.org/en-US/u/rajkmaurya111/
|
OPCFW_CODE
|
package Entity;
public class SimpleClass {
String className;
String fullPath;
String packagePath;
String type;
public SimpleClass(){
}
public SimpleClass(String className, String fullPath, String packagePath, String type) {
this.className = className;
this.fullPath = fullPath;
this.packagePath = packagePath;
this.type = type;
}
public String getClassName() {
return className;
}
public void setClassName(String className) {
this.className = className;
}
public String getFullPath() {
return fullPath;
}
public void setFullPath(String fullPath) {
this.fullPath = fullPath;
}
public String getPackagePath() {
return packagePath;
}
public void setPackagePath(String packagePath) {
this.packagePath = packagePath;
}
public String getType(){
return type;
}
public void setType(String type){
this.type = type;
}
@Override
public String toString() {
return "SimpleClass{" +
"className='" + className + '\'' +
", fullPath='" + fullPath + '\'' +
", packagePath='" + packagePath + '\'' +
", type='" + type + '\'' +
'}';
}
}
|
STACK_EDU
|
Error with IbisTypeError using row validation from professional-services-data-validator tool
So I'm using an open-source tool to compare the row-by-row data between 2 tables sitting in 2 different databases in in MSSQL and the other in Snowflake.
The connection setups for both were fine. I could run the row count comparison successfully and got the count using the cmd below:
data-validation validate column -sc snowflake -tc snowflake -tbls MY_SCH.MY_TABLE
However I ran into issues when running the row-by-row comparison. For a test, I am comparing a snowflake table to itself. The cmd is as below:
data-validation validate row -sc snowflake -tc snowflake -tbls MY_SCH.MY_TABLE --primary-keys MY_PRIMARY_KEY --hash '*'
Where:
row: type of data validation,
-sc: source connection
-tc: target connection
tbls: table name(s)
--primary-keys: the primary key column being used to match between the 2 tables
--hash: * for validating all fields or a list of field names for a selected list only
This error message followed , I've tried comparing different tables, selected columns, using SQL Server database. All came out with the same error:
File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\site-packages\ibis\common\validators.py", line 467, in sequence_of
raise IbisTypeError(f'Arg must have at least {min_length} number of elements')
ibis.common.exceptions.IbisTypeError: Arg must have at least 1 number of elements
The Github repo for the data validation tool is here: https://github.com/GoogleCloudPlatform/professional-services-data-validator
Appreciate any suggestions!
why not load both tables into a dataframe ,and use that to compare
Comparing dataframes involves knowing the exact mappings of the table columns between the databases and independent connection handling for different types of database and some sort of reporting formatting and implementation. If I have hundreds of table, dataframe comparison might not be feasible. This tool scales and handles many of those angles out of the box.
Did you read the docs or are you making up the arguments as you go? It says: --tables-list or -tbls SOURCE_SCHEMA.SOURCE_TABLE=TARGET_SCHEMA.TARGET_TABLE and it doesn't seem like you have specified the =TARGET_SCHEMA.TARGET_TABLE part in your example
Yes, I was making it up as I go, thanks!
The answer is as below:
data-validation validate row -sc snowflake -tc snowflake -tbls
MY_SCH.MY_TABLE=MY_SCH.MY_TABLE --primary-keys MY_PRIMARY_KEY --hash *
The key is to follow the syntax of for parameter -tbls MY_SCH.MY_TABLE=MY_SCH.MY_TABLE and remove all quotations.
|
STACK_EXCHANGE
|
Hello to all of you power users out there. I would like to share my experiences in dabbling with the general behavior of my Macbook and ask for your personal advice as well. Current Problems to Solve: a) Cmd+Tab behavior (my favorite pet peeves of all times) b) arrow-key control for buttons on system popups Recently, I’ve started my vendetta against these suboptimal features on my Mac. I’ve had some success but I’m not there yet. Here’s what I’ve done so far and what these things have achieved for me. a) Nemesis 1: Cmd+Tab Behavior Goal: Only have very few, select apps in the application switcher, hide all the others from it and reach these role-player apps though custom hotkeys instead - Witch: https://manytricks.com/witch/ - Automator (native) - Apptivate: http://www.apptivateapp.com/ - Command-Tab Plus: http://commandtab.noteifyapp.com/ Witch has had some minor positive effect on a side problem of Cmd+Tab. It provides a slightly superior way to switch between windows of the same app over the native Cmd+~ function. This is not really ground breaking, though, and therefore not worth the $10 by itself. You shouldn’t go for it, if you don’t like any of the other Witch features. Automator seems to work for others in order to create custom hotkeys to launch apps or to switch to them directly. It has worked for me, but there is a much easier alternative. Apptivate (freeware) achieved the same thing with only very few clicks and an extremely comprehensive interface. Lastly, there is an app which came the closest to aiding me in my vendetta. Command-Tab Plus goes in the direction that I need. Yet, it only achieves my goal partially. When installed and activated, it hides Finder and other unnecessary apps in the application switcher. Unfortunately, it does really exclude them permanently, as I intend to. The appear again, once they are open, cluttering the application switch just as they have before. This is the most important thing about my urge to improve the Mac's behavior. The changing order of apps in the application switcher when cycling through more than two or three apps is confusing me a lot. When it happens, I often switch to the wrong app, which then interrupts my train of thought. Then I wildly jump from window to window looking for the app I was going for. Eventually I forget which one it was, so I start tabbing though the windows to find the anchor of my thought process before I completely lose it and start over. This has been really frustrating me for the past 10 to 15 years on both Windows and Mac. One convenient accurate possibility would be to have only the most important apps like the browser and maybe an Office app available on the Cmd+Tab application switcher and reach everything else by custom hotkeys. This would prevent messing up the order of a long list of opened apps, as it is happening to me so often today. b) Nemesis 2: Control System Popups with Arrow Keys As an old Windows user you appreciate that you don’t have to take your hand of the keyboard when the OS prompts you for an action like save/abort/quit. On Windows you simply cycle through the options with the arrow keys and confirm with Enter or often with hotkeys. This function is dearly missed on macOS leading to the mouse hand reaching the device, moving the cursor to click the intended button, only to then return to the keyboard. How can I make this more convenient?
|
OPCFW_CODE
|
Weather On The Way is a new kind of weather app – it provides a weather forecast for a route. By combining navigation with the weather, you can plan a route to your destination and view a forecast of the weather for points along the route at precisely the moment when you will be traveling through it.
Using the NOAA forecast for the US (and other sources for worldwide coverage), it's able to provide a detailed forecast that's not just temperature and precipitation, but also visibility range, wind speed and direction, UV index, and everything else you might need to avoid unexpected delays.
- Weather forecast for points along your route
- Helps you avoid unexpected delays
- Identifies the best time to leave
- Temperature, conditions, precipitation, weather warnings information, etc.
- Live Snow & Rain Doppler Radar
- Helps you pick the route with the best weather
- Set waypoints along the route
- Privacy-focused (no creepy tracking, no ads, no analytics)
- Support Siri Shortcuts
Android app is on the way 🤖 🚀
Can you tell us a little bit about yourself?
Hi, I'm Piotr and I’m from Krakow, Poland. I got my start in iOS development back in 2011 when I created my first app as part of a bet in high school. A friend bet me that I wouldn't be able to recover the fee for the Apple Developer Program from app sales.
Well, he lost that bet.
Currently, I work as a part-time freelance iOS/ML developer and part-time indie developer.
How did you come up with the idea?
I had just started an internship in a different country, so I often traveled long distances home. I was looking for a new project and noticed that there are no well-designed apps that display the weather forecast for a route.
So, I started to sketch what such an app would look like and started talking with some potential users about it. As it turns out, people were adding all cities along their routes and guessing their arrival times to get a feel for the weather.
From this point onwards, it was obvious that it could be a successful app. It took some time for me to build it as I was doing my Master’s degree at the time, but it was finally launched in July 2020.
💡 Want to see your app featured?
Submit your app or reach out on Twitter 🐦
How did you market the app as an indie developer?
So, I'll contact them via email or Twitter DM with a short message explaining what I'm building and asking if they'd like a preview. Generally, I find people respond to these types of messages more than to ones asking them to cover an app. Once they're interested in the app and have used it, they'll be much more likely to write about it.
However, all of this takes time. I started contacting journalists 6 months before the launch and some of that only led to press coverage a year later.
What's your app design and development workflow like?
Usually, I start by doing a rough design in Photoshop or Illustrator, not to be pixel-perfect but just to get an idea of how the app might look. After that, I'll implement the design and refine it while coding.
Additionally, I have a popup where I encourage users to start video calls with me. I ask them which features they use and if they understand how it works. I've found that having face-to-face discussions with users is more helpful than simply exchanging feedback via email. Features that seem obvious to developers often appear unclear to users which is a sign that they need to be redesigned and simplified.
While I was working for Porsche's engineering team, my colleagues and I would often take the company vehicles out for group drives under the pretense of “testing” 😉. We’d be gone for hours at a time enjoying ourselves driving all over the Bay Area - Highway 1, Skyline Boulevard, Pescadero, and all of the other scenic drives in the area.
If you know anything about the Bay Area, and San Francisco in particular, it's that it's infamous for its micro-climates. So, before we could start our drive, we'd all have to spend considerable time researching the weather in order to find a route with reasonable weather and driving conditions. Having repeatedly experienced the frustration of this type of manual planning, having a single app - like Weather On The Way - that not only did all the work for us but kept us updated on any notable weather changes would have been invaluable.
Like Piotr, I've been making indie iOS apps for a while now and have also struggled to get press coverage. So, I would like to thank Piotr for his transparency and his actionable advice.
As I've looked at dozens of iOS apps over the last few weeks, I've noticed that apps that have press kits on their landing pages usually have the most media coverage. In retrospect, this makes sense as a ready-to-use press kit reduces the amount of time and effort it takes a journalist to review your application.
A quick welcome to the ✨ 22 new people joining ✨ us this week - feel free to reply to this email and say 👋.
If you're enjoying the newsletter, please consider sharing it! Have some feedback you want to share? Drop me a message 📧
Or, if you're looking for something else to read, check out our sponsor Refind!
If you're an iOS Developer with an upcoming interview, check out Ace the iOS Interview:
|
OPCFW_CODE
|
Unexpected behaviour creating events with pandas timestamps
Hi DT,
Just spotted that if you construct an event with a pandas timestamp (e.g. from a time-indexed dataframe), the timestamp field is not populated. Here's an example:
import disruptive as dt
import pandas as pd
from datetime import datetime
t = datetime.utcnow()
print(t, type(t))
print(dt.events.ObjectPresent(state="PRESENT", timestamp=t)) # timestamp is correct
pt = pd.to_datetime(t)
print(pt, type(pt))
print(dt.events.ObjectPresent(state="PRESENT", timestamp=pt)) # timestamp is empty
Would it be possible to either make events work with pandas DateTime types, or raise an exception rather than silently blanking it?
For now I'm using pt.to_pydatetime() to work round the issue
Many thanks,
Andrew
This is unintentional behavior and I will deal with it.
Even though pandas is very convenient and popular for data-science applications we have no intention of including it as a dependency in our base client. I will see if I can find a generic method of turning non-datetime objects (like the pandas datetime object) into iso8601 strings that our underlying API requires. If not, I'll throw an exception.
Thanks for reporting bugs!
Johannes
Johannes,
Thanks for responding. I completely understand you don't want a dep on pandas. It seems that pandas.DateTime is a subclass of datetime.datetime, so its very odd (and so checking its a datetime with istanance won't catch them, but type(t) is datetime will). I'd be interested to know how you resolve it...
I think I have come to an understanding of the problem.
The following snippet creates 2 touch events, one with a datetime timestamp and one using pandas timestamp. When printing the events, the pandas timestamp has indeed disappeared. However, when looking at the objects _raw attribute, which is what I send to our REST API under the hood, the timestamp value is there, just as expected.
>>> from datetime import datetime
>>> import pandas as pd
>>> import disruptive as dt
>>>
>>> ts = datetime.utcnow()
>>> ps = pd.to_datetime(ts)
>>>
>>> ts_event = dt.events.Touch(timestamp=ts)
>>> ps_event = dt.events.Touch(timestamp=ps)
>>>
>>> print(ts_event)
Touch(
timestamp: datetime = 2021-08-22 11:32:09.440321,
event_type: str = touch,
),
>>> print(ps_event)
Touch(
timestamp: Timestamp = Timestamp(
),
event_type: str = touch,
),
>>>
>>> print(ts_event._raw)
{'updateTime': '2021-08-22T11:32:09.440321Z'}
>>> print(ps_event._raw)
{'updateTime': '2021-08-22T11:32:09.440321Z'}
The problem seems to lie with the output formatting of my objects, more specifically with the __str__ dunder behavior. Here's what I've discovered.
When using pandas.to_datetime() on a datetime object, the output is pandas.Timestamp. As you said, this is a subclass of datetime, which is basically what caused the silent failure here. Everything I do with datetime in my logic for dealing with timestamps, like converting to ISO8601 with .isoformat(), still works.
>>> ts = datetime.utcnow()
>>> ps = pd.to_datetime(ts)
>>>
>>> isinstance(ts, datetime)
True
>>> isinstance(ps, datetime)
True
However, there is a difference between them. Unlike datetime, the pandas.Timestamp is a class, but a class with no attributes (by default). Therefore, in the generic __str__ dunder I've written, which checks if a variable is a class object, the output is as you saw, empty.
The fix is, fortunately, very simple. When recursively printing all attributes in an object, when checking if an attribute is a class, i also check if that class is an instance of datetime.
# line 59 in outputs.py.
if hasattr(val, '__dict__') and not isinstance(val, datetime):
# print class attributes recursively.
else:
# Now pandas timestamps are printed here as a pure string.
Now, the output of the first snippet should be as follows.
# Datetime event.
ObjectPresent(
state: str = PRESENT,
timestamp: datetime = 2021-08-22 11:55:50.312554,
event_type: str = objectPresent,
)
# Pandas timestamp event.
ObjectPresent(
state: str = PRESENT,
timestamp: Timestamp = 2021-08-22 11:55:50.312554,
event_type: str = objectPresent,
)
I've pushed the update to the branch timestamp-format-bugfix. Please have a look and report back if this makes sense to you, or if I've misunderstood the problem.
Thanks - I see its only a formatting issue, which is easily demonstrated (and I should have spotted this):
...
pt = pd.to_datetime(t)
print(pt, type(pt))
e = dt.events.ObjectPresent(state="PRESENT", timestamp=pt)
print(e) # timestamp is empty
print(e.timestamp) # timestamp as expected
Happy with your change... got to go an meet Uwe for lunch now,,, Thanks for your help
|
GITHUB_ARCHIVE
|
Is it usual to use “full-cry” as a stand-alone adjective?
Maureen Dowd’s article titled “Spellbound by Blondes, Hot and Icy” appearing in December 1st NY-Times jumps from Alfred Hitchcock’s favor of blonde actresses to the dispute of Hillary Clinton’s responsibility for ill-handling of Benghazi attack that killed the U.S. ambassador to Libya and three other Americans.
“While Republicans continue their full-cry pursuit of Susan Rice, the
actual secretary of state has eluded blame, even though Benghazi is
her responsibility. The assault happened on Hillary’s watch, at her
consulate, with her ambassador. Given that we figured out a while ago
that the Arab Spring could be perilous as well as promising, why
hadn’t the State Department developed new norms for security in that
part of the world?”
As I didn’t know the word, ‘full-cry,’ I consulted Cambridge, Merriam-Webster, and Oxford online dictionary.
None of them registers “full-cry,” but for Cambridge Dictionary carrying “in full cry” as an idiom meaning ‘taking continuously about in a noisy or eager way’.
Google Ngram shows neither “full cry” nor “full-cry,” while showing incidences of “in full cry” since cir 1840. Its usage continues to decline all the way.
Though I surmise “full-cry pursuit” means ferocious and tenacious pursuit from the definition of “in full cry” by Cambridge Dictionary, I wonder if the word “full-cry” is received as a stand-alone adjective as used by Maureen Dowd.
Can “full-cry” be used as an adjective or a noun sui generis? If yes, is it always necessary to combine 'full' and 'cry' with a hyphen?
It doesn't have to be a defined adjective. "In many languages, including English, it is possible for nouns to modify other nouns. Unlike adjectives, nouns acting as modifiers (called attributive nouns or noun adjuncts) are not predicative; a beautiful park is beautiful, but a car park is not "car". In plain English, the modifier ... may generally indicate almost any semantic relationship." (Wikipedia: Other noun modifiers http://en.wikipedia.org/wiki/Adjective#Other_noun_modifiers )
+1 One of the very thoughtful questions, so rare these days.
Just to make explicit what the answers imply: O'Dowd here is not being at all self-indulgent or cute; it's a rare use, but not a strained one.
@StoneyB The writer is Ms Dowd not O'Dowd .
@Yoichi: I don't see it mentioned anywhere else on this page, but in practice, when "full-cry" is used adjectivally it will almost always be followed by "pursuit" (as it is in every one of those Google Books citations). It's a "fixed term" that we simply don't use with any other noun. So I'd say that leading adjectival full-cry (as opposed to trailing in full cry) is almost a fossil word
So. there’s nothing like ‘full-cry protest’ ‘full-cty march, ‘full-cry accusation’?
The OED has an entry for full cry that may be more useful than those you’ve found. It is sense 12b. I will give the a sense, then the b sense with citations:
12. a. The yelping of hounds in the chase.
b. Hence various phrases: e.g. to give cry, to open upon the cry; full cry, full pursuit; also fig.
1589 R. Harvey Pl. Perc. 6 ― Will you··run vpon a Christen body, with full cry and open mouth?
1649 Fuller Just Man’s Fun. 13 ― Hear the whole kennel of Atheists come in with a full crie.
1684 R. H. Sch. Recreat. 16 ― Being in full Cry and main Chase, comfort and cheer them with Horn and Voice.
1710 Palmer Proverbs 53 ― He gives out this cue to his admirers, who are sure to open upon the cry ’till they are hoarse again.
1858 Hawthorne Fr. & It. Jrnls. II. 32 ― All offering their merchandise at full cry.
1891 Rev. of Reviews July 25 ― The journalists gave cry after the Prince, like a pack of hounds when they strike the trail of a fox.
So it appears that the phrase is quite old. It seems to mean “full pursuit”.
Interestingly, the very oldest citation for the word cry is a citation from Laȝamon that ends in “doleful cry”:
C. 1275 Lay. 11991 ― Nas neuere no man··þat i-horde þane cri [C. 1205 þesne weop] hou hii gradde to þan halwes, þat his heorte ne mihte beo sori for þane deolfulle cri.
That is a “doleful cry”, so one of pain, a dolorous one. It doesn’t actually mean in full cry there.
Yes, except that it means not so much full pursuit as noisy pursuit: literally unrestrained barking/belling, so figuratively unrestrained, enthusiastic rhetoric.
-1 I looked for a reference to adjective in the answer.
To answer the basic question, it does not seem to be usual. There is only one record of its use in the Corpus of Contemporary American and English and none in the British National Corpus. The fact that it is unusual does not, of course, mean that it shouldn’t be used.
In general, we could use nearly any suitable noun to modify another noun. We don't always need an adjective. I'd say full-cry here is a noun, not an adjective.
@Kris. I'd say so too.
@Bill Franke What does 'POS' mean? Is it a website terminology? Google Search doesn't give me answer.
@Barrie England. So I needn’t to be ashamed of being ignorant of this word that hasn’t some up so often.
@YoichiOishi You have nothing, absolutely nothing, to be ashamed of in that regard. You are the most patient, diligent, and assiduous Japanese person I have ever met when it comes to your study of English. You should be proud of yourself.
English traditional song from the county of Cheshire:
Through Macclesfield Forest bold Reynard did fly
At his brush closely followed the hounds in full cry
Yoichi Oishi -
Reynard is the anthropomorphic folk name for a fox Vulpes vulpes.
His brush is his tail.
In the UK there was a long tradition of hunting foxes with a pack of hounds, until it was banned around 2004.
"Fox hunting is an activity involving the tracking, chase, and
sometimes killing of a fox, traditionally a red fox, by trained
foxhounds or other scent hounds, and a group of unarmed followers led
by a master of foxhounds, who follow the hounds on foot or on
horseback"
The hounds are said to give cry when they first scent the fox, and then pursue in full cry.
|
STACK_EXCHANGE
|
When will you do your homework in sanskritThe life have to one hour to ensure that would greatly appreciates the languages they're watching each other with professional cv writers sites for. I must do your homework in the desiderative in sanskrit when the teachers. Spokensanskrit - writes a book that the course. Thirdly, the most kids deserve our homework faster - college homework and updates to your own, or weekly tutoring, and verify it. Like to gym essay my earth in education. Happiness essay for all four children in unicode sanskrit. As sanātana dharma, as is to do it. Our schedule is for this combination of your homework is affirmative in manhattan, you. Bachelor's degrees to do your https://concedere.net/, study hard and. Fourthly, how to sanskrit homework google translate! U will announce when you the correct form according to assist you smarter. Webmath is often referred to a research papers be able to books and buddhist scholars, sanskrit dictionary: firstly, desires of 'indus' in sanskrit. Dissertation would do your personal tutor will go? Let us that you will satisfy you understand how to find. Imagine vertical lines would precede horizontal ones here you out with kennel or soon? Though almost all the sanskrit online tutors who can help you. Research papers be graded and in sanskrit on your homework problems. I'm sure about holiday homework should be a centralized bureaucracy to your homework as most effective? Thank you will see how does homework question on wikipedia. Rhetoric, change for class 7 days - best and receive help. Are saying that would like this manner. Today i phone to do your homework problems and pronouns. On the camp writes your summer holiday homework? Very busy in this article will put all the primary lingua franca of the video formats available. Set of the primary lingua franca of the introduction and updates to fulfil wishes. Among some active voice in hindi language course, you are leaders. Four weeks is due at the cundi dharani in nation book. Webmath is stare at the website please forward my world? Fourthly, we discussed the sanskrit dictionary: sanskrit translations! Hinduism is your summer holiday homeowrk: sanskrit phrase. That8217s pay someone happy birthday in sanskrit essay writing services. Well just in english literature and top of it also pleasant and. New app that will be sent directly to google translate! Best is the correct form according to type of half an hour to use our professional. Hello parents, here will demand to be banned - american universities - free course, list at 1. First or to calculator homework meaning, at austin columbia university of mechanical engineering professors to ask 'when does he greatly help - ph. Review: here are familiar with your homework help with toy kettles, but also recorded at least one of the class. Essays in sanskrit essay you use your exam 4, tests, 2009, i will help service - uk universities - readiness of sanskrit. Cute young teenage girl doing your browser does anyone need any of mobile devices. Now think i am doing homework in hindi language - best and the september 19, and translation of the course. U will consist of its great powers include: the causes. Any help you a major landmark - cormiport.
When will you do your homework translate in sanskritSince i helped me in the professor. You agree to english, 11, or website uses language for essay i: what you won't be a beginners sanskrit they be translated intohindi. While you know how to translate sanskrit homework of. He translated directly with or a bazaar language courses may also provide suggestion words meaning sanskrit. Foreword if you read more difficult questions than hebrew and word-by-word explanations. Review the meaning when you are given homework and subheadings. Past history will be some languages with your sanskrit - an english or share for class begins with resources to read, english. Roth and supervision will help on them. Translation is a research paper recommendation system would be familiar with irrealis mood make yourself how are able to alex go? With your homework online english sanskrit homework help translation from english or. A better understanding of holiday homework assignments start piling up our homework online hypertext. Vidhyarti aur anushasan, write a polished translation needs. Famous person i am not find a scholarly language of grammatical questions than.
When your mom tells you to do your homework instead of gameProdigy to ask for for example, academic or upset. It later cards, and how recent a c or take care of you have 2. What your friend took the dishes before your room all: aka, and tell you should i appreciate i do your teeth won't tell you. Mar 22, and you spend their students achieve in the assignment, pick a little distracted and your child. Tell others about the question head on board game, it later cards do math, etc. Occasionally go inside and your child calmly. Options to read a little bit stricter in most people talk. Most of a good classroom discipline is better for many times have i gave no homework.
When you realize you forgot to do your homeworkI did it faster and eyes away from approximately age five. Does it is: is it, and eyes away, parents often don't go home by the. Don't lie and do your homework - see. That i have a terrible feeling to know what needs to get and promptly. With the result will do your life. The best homework doesn't mean very much. Does not remember that complete homework if you're studying conditions, the homework or accept assignments? My essay question 1044311: why they're doing homework and all-knowing google told us you top results on your grades. Can always consider a statement and gain back of the book. Then you should know that your math esmee to the. Then you realize you realize it is a paper within tight deadline is. Get your homework guide checklist you do. Bonus: everything you want, is the homework.
What happens when you do your homeworkMaybe you overcome the child continue to do not to get done. Reasons to tips for you don't do homework, should make a company. My homework in, and votes cannot be the last thing you to. An outline to do the items you all agree that final grade. Put in your robot can happen to do my homework is a. My kid, you can continue to do? Free access to do one of your problems can it at which i don't want to grading. To do it can do your homework. There to do, invite a company prior to https: prepare for me and finishing homework done.
- petite latina pamela
- Have you ever seen a arousing porn vid with a hairy whore? Well, there are mighty guys, who really love ramming hairy pussies with their thick schlongs and cover them with hot and creamy cum loads
- amateur gay blowjob pics
|
OPCFW_CODE
|
understanding tensorflow binary image classification results
For one of my first attempts at using Tensor flow I've followed the Binary Image Classification tutorial https://www.tensorflow.org/tutorials/keras/text_classification_with_hub#evaluate_the_model.
I was able to follow the tutorial fine, but then I wanted to try to inspect the results more closely, namely I wanted to see what predictions the model made for each item in the test data set.
In short, I wanted to see what "label" (1 or 0) it would predict applies to a given movie review.
So I tried:
results = model.predict(test_data.batch(512))
and then
for i in results:
print(i)
This gives me close to what I would expect. A list of 25,000 entries (one for each movie review).
But the value of each item in the array is not what I would expect. I was expecting to see a predicted label, so either a 0 (for negative) or 1 (for positive).
But instead I get this:
[0.22731477]
[2.1199656]
[-2.2581818]
[-2.7382329]
[3.8788114]
[4.6112833]
[6.125982]
[5.100685]
[1.1270659]
[1.3210837]
[-5.2568426]
[-2.9904163]
[0.17620209]
[-1.1293088]
[2.8757455]
...and so on for 25,000 entries.
Can someone help me understand what these numbers mean.
Am I misunderstanding what the "predict" method does, or (since these number look similar to the word embedding vectors introduced in the first layer of the model) perhaps I am misunderstanding how the prediction relates to the word embedding layer and the ultimate classification label.
I know this a major newbie question. But appreciate your help and patience :)
seems like you need to convert the scores into probabilities with softmax.
According to the link that you provided, the problem come from your output activation function. That code use dense vector with 1 neuron without activation function. So it just multiplying output from previous layer with weight and bias and sum them together. The output that you get will have a range between -infinity(negative class) and +infinity(positive class), Therefore if you really want your output between zero and one you need an activation function such as sigmoid model.add(tf.keras.layers.Dense(1), activation='sigmoid'). Now we just map every thing to range 0 to 1, so we can classify as negative class if output is less than 0.5(mid point) and vice versa.
Actually your understanding of prediction function is correct. You simply did not add an activation to fit with your assumption, that's why you gat that output instead of value between 0 and 1.
I had the same issue and added the sigmoid activation to my Dense(1) layer, but now the training doesn't work anymore and is stuck at 0.5 accurancy. Any ideas? I'm using the code from the Image classification tutorial.
Just create a new question, I need to see your implementation.
|
STACK_EXCHANGE
|
go/types: clearly mark the package as not supporting modules
What did you do?
Tried to use go/types with code that uses modules
What did you expect to see?
Either that it works or that the documentation would tell me not to expect it to work
What did you see instead?
Can't find import
The Problem
I spent hours trying to get go/types to analyze some code, and none of the documentation said it didn't work with modules. Modules have been out for like 2 years at this point. Please put clear, hard to miss statements in the comments for go/types that it doesn't support code that uses modules.
Also (and probably something that can be fixed faster), please update the documentation on the examples the godoc links to here: https://github.com/golang/example/tree/master/gotypes so that it, too, says that it won't work with modules. No where did I see any indication that it wouldn't work, except when I posted on Twitter.
Can you clarify how it doesn't work with modules? It's perhaps not trivial to set up correctly, but it certainly works with modules.
An example of a tool that supports modules and uses go/types directly: https://github.com/burrowers/garble/blob/2e2bd09b5e420455f51f7e1e1cbe46a11a6a7cf3/main.go#L409-L413
Granted that it's a complex file and not a very clear example, but just to prove a point. I agree that better docs would be nice, but they should definitely not say "does not support modules".
hmm... lemme recheck what I was doing. I have an incredibly basic test that fails, but I may be making a simple mistake.
I heard from twitter that go/types didn't work with modules and ... probably should have double checked that assertion :) Lemme get back to you in a bit.
I think the reply you got on Twitter is mostly right; most tools which just want to load Go packages should use x/tools/go/packages, which uses go/types under the hood and supports modules. But it's certainly possible to use go/types directly and support modules at the same time. Another way to think about it - if go/types simply did not support modules, how would go/packages support modules and expose go/types.Info etc?
I think I have an idea of the problem I was having. I am running tests that parse go files that exist only for testing purpose (under /testdata). Those testdata files sometimes include 3rd party imports that the main program doesn't have, so they're not in the go.mod or go.sum. I presume this screws up go/types, since it doesn't do the auto-download of imports the way the build tools do.
Previously I was just using go/ast to parse them, and it doesn't care about imports, so these tests worked fine.
Yeah, ok, no, that's not it either.
here's my test:
$ go run main.go
2020/12/14 10:59:15 exec: echo foo
foo
failed to check types on files in ./foo/: foo/foo.go:4:2: could not import github.com/magefile/mage/sh (can't find import: "github.com/magefile/mage/sh")
main.go
package main
import (
"fmt"
"go/ast"
"go/importer"
"go/parser"
"go/token"
"go/types"
"play.ground/foo"
)
func main() {
foo.Bar()
fset := token.NewFileSet()
path := "./foo/"
f, err := parser.ParseFile(fset, "foo/foo.go", nil, 0)
if err != nil {
panic(err)
}
info := &types.Info{
Defs: make(map[*ast.Ident]types.Object),
Uses: make(map[*ast.Ident]types.Object),
Types: make(map[ast.Expr]types.TypeAndValue),
}
conf := types.Config{Importer: importer.Default()}
if _, err = conf.Check(path, fset, []*ast.File{f}, info); err != nil {
fmt.Printf("failed to check types on files in %s: %s\n", path, err)
}
}
foo/foo.go
package foo
import (
"github.com/magefile/mage/sh"
)
func Bar() {
sh.RunV("echo", "foo")
}
go.mod
module play.ground
go 1.15
require github.com/magefile/mage v1.10.0
Seeing your example, I still think that your answer here is that you should be using go/packages, which will do all the modules resolution for you. go/types can support modules via its importer interface, but it doesn't do so out of the box on its own, because it is a typechecker, not a build tool that understands what modules are.
Closing as a duplicate of https://github.com/golang/go/issues/28328 and https://github.com/golang/go/issues/31821.
But also note that trying to use the importer interface yourself directly is not a great idea; see https://github.com/golang/go/issues/44630. This is another reason why you should just rely on go/packages.
|
GITHUB_ARCHIVE
|
How to parse output jenkins console, return value and save in variable
I'm having trouble parsing the jenkins pipeline console output.
Every time I pass a job, a line appears in the console:
12:29:08 [10:29:07] NIDD version is: SBTS23R1_NIDD_2217_100_01
I would like to extract a variable value from it: SBTS23R1_NIDD_2217_100_01
and save it in a variable so that I can use it further.
I tried to do something like described here:
GROOVY: Finding a string from console output if a regexp string found
Unfortunately, I am getting the error:
an exception which occurred:
in field com.cloudbees.groovy.cps.impl.BlockScopeEnv.locals
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv@20cd216a
in field com.cloudbees.groovy.cps.impl.CpsClosureDef.capture
in object com.cloudbees.groovy.cps.impl.CpsClosureDef@3e3cd6ed
in field com.cloudbees.groovy.cps.impl.CpsClosure.def
in object org.jenkinsci.plugins.workflow.cps.CpsClosure2@6be3d3c7
in field org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.closures
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@14fde9cd
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@14fde9cd
Caused: java.io.NotSerializableException: java.util.regex.Matcher
My code :
def matcher = ("12:29:08 [10:29:07] NIDD version is: SBTS23R1_NIDD_2217_100_01" =~ /NIDD version is:\SBTS\d{2}\w\d_NIDD_\d{4}_\d{3}_\d{2}/)
if (matcher.hasGroup())
{
def msg = matcher[0][1] println("Build failed because of ${msg}")
}
What is the code that results in this error?
def matcher = ("12:29:08 [10:29:07] NIDD version is: SBTS23R1_NIDD_2217_100_01" =~ /NIDD version is:\SBTS\d{2}\w\d_NIDD_\d{4}\d{3}\d{2}/)
if (matcher.hasGroup()) {
def msg = matcher[0][1]
println("Build failed because of ${msg}")
}
Looks like you missed \s* and a group that you refer to later from the code. Try https://ideone.com/5zO4d9
You need to use a capturing group and consume any amount of spaces between a colon and SBT (you mistakenly escaped S making S in SBT part of a non-whitespace matching shorthand character class):
def matcher = ("12:29:08 [10:29:07] NIDD version is: SBTS23R1_NIDD_2217_100_01" =~ /NIDD version is:\s*(SBTS\d{2}\w\d_NIDD_\d{4}_\d{3}_\d{2})/)
if (matcher) {
def msg = matcher[0][1]
println("Build failed because of ${msg}")
}
See the Groovy demo.
Thanks for your help, I'll check if it works tomorrow and let you know
Your fixes work, but something is wrong because the code looks for the NIDD value in what was typed in the code
("12:29:08 [10:29:07] NIDD version is: SBTS23R1_NIDD_2217_100_01")
not the entire job console
|
STACK_EXCHANGE
|
We are very proud of announcing that Biswap will be joining Outer Ring´s space trip. Before digging into how this collaboration will be carried out, we want to explain to all of you how this platform works, their staking system and their farming platform. This way, you will familiarize yourself with the defi and better understand the relation with our project.
About Biswap DEX
Biswap is one of the top Decentralized Exchanges in the world. They offer the lowest trading fee of 0.1% per swap, High APRs on Biswap Farms and Launchpools. As well as 0.05% Fee Reward for Liquidity Providers and up to 90% Fee Return. Also, as well as our project, they are audited in Certik.
The official listing of Galactic Quadrants in their exchange means that the pair GQ/BUSD is now available for everybody to trade on. To create it, we added liquidity for a total value of $100k, $50k in GQ and $50k in BUSD.
Stake your GQs in Biswap Farms
Biswap will create a Farm dedicated to GQ tokens. In order to access it, the user has to provide liquidity on Biswap DEX to GQ-BUSD liquidity pair.
On Biswap, liquidity providers get 0.05% from each swap made inside their liquidity pair as a fee reward. After providing liquidity to GQ-BUSD liquidity pool, the user gets his LP tokens, which could be staked in the GQ-BUSD Farm (once it will be created) in order to earn BSW tokens. If you want to farm your LP tokens, you could do it here: https://biswap.org/farms
Biswap will boost the farm with a 0,2x multiplayer, meaning that your rewards will be increased automatically.
Galactic Quadrants Launchpools
Biswap Launchpools – a less resource-intensive alternative to mining. It lets users hold their tokens on Biswap to earn more tokens, for free. Biswap will offer 100$K in Galactic Quadrants rewards for a period of 45 days with an unlimited max stake per user.
Biswap Launchpools will let users stake their $BSW and receive $GQ as rewards.
Biswap + Outer Ring MMO
After understanding all the things you can do using GQ in Biswap, it’s time to see how BSW will be implemented in our DAPP.
In the Galactic Pools, you can stake the LP created in Biswap of BUSD/GQ. As rewards, you will be receiving SCK (Space Corsairs Keys), the keys needed to open the NFT armament lootboxes, soon to be released. No other asset will be needed to obtain the guns, vehicles, materials, and other items playable in the game.
At last, we want to give a special thank you to our Adamantium Partners MH Ventures, for making the collaboration possible.
Official Biswap DEX Social Media links
Telegram Channel: https://t.me/biswap_news
Telegram Chat: https://t.me/biswap
Follow Outer Ring MMO social networks!
- Youtube: @outerringmmo
- Instagram: @outeringmmo
- Twitter: @outerringmmo
- Facebook: @outerringmmo
- TikTok: @outerringmmo
- Discord: https://discord.gg/outerringmmo
Join our official Telegram communities to always be up to date with what’s going on and ask everything you need to know about Outer Ring MMO:
|
OPCFW_CODE
|
InfluxDB is the Time Series Platform where developers build real-time applications for analytics, IoT and cloud-native services. Easy to start, it is available in the cloud or on-premises. Learn more →
Top 23 C++ HTTP Projects
aria2 is a lightweight multi-protocol & multi-source, cross platform download utility operated in command-line. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink.Project mention: Torrent Download CLI | reddit.com/r/commandline | 2023-02-06
i don't know about a cli tool you could use to search and download torrent files but aria2 is a CLI tool that's capable of downloading torrents
Simple, secure & standards compliant web server for the most demanding of applicationsProject mention: Nuklear – A single-header ANSI C immediate mode cross-platform GUI library | news.ycombinator.com | 2022-12-23
Not exaclty -- it looks like it's pretty overkill for my needs
I'm looking for something more like websocketpp, or even just grpc without a requisite proxy. uWebsockets looks really promising, being header only, but in the fine print requires a runtime library. unfortunately, none of that ecosystem seems to use cmake, making integrating it that much more of a pain.
why use cpp for this, I'm sure some HNer will ask. the ray tracer itself is using cuda, that's why. I've also debated
- running it as a grpc server and having some proxy in a more web-accessible language
- creating python bindings and using python to make a websocket/http server for it
neither of those are out of the question, but they're not my first choices, because I'd like to keep the build & execution simple. introducing dependencies, especially other executables, is in conflict with that.
i don't need anything particularly scalable -- a threaded implementation, or one using select() would be fine, if not preferable.
Write Clean C++ Code. Always.. Sonar helps you commit clean C++ code every time. With over 550 unique rules to find C++ bugs, code smells & vulnerabilities, Sonar finds the issues while you focus on the work.
C++ Parallel Computing and Asynchronous Networking EngineProject mention: Workflow v0.10.3 Released, Add WFRepeaterTask for Repeating Asynchronous Operations and Other New Features. | reddit.com/r/cpp | 2022-08-28
Apache ThriftProject mention: Symfony in microservice architecture - Episode I : Symfony and Golang communication through gRPC | dev.to | 2022-08-20
There are various notable implementations of RPC like Apache Thrift and gRPC.
A C++ header-only HTTP/HTTPS server and client libraryProject mention: PocketPy: A Lightweight(~5000 LOC) Python Implementation in C++17 | reddit.com/r/cpp | 2023-02-06
Every one of these libraries uses CMake to make it easier for end users to consume their libraries. In fact your example uses CMake as well such that I can consume it the way I describe above.
Drogon: A C++14/17/20 based HTTP web application framework running on Linux/macOS/Unix/WindowsProject mention: Ask HN: Easiest and cheapest full-stack frameworks that you love? | news.ycombinator.com | 2023-02-09
talking about C++, there are drogon framework: https://github.com/drogonframework/drogon
not bells and whistles like on Wt as there are no integrated widgets. But have c++ based template (for HTML) engine and other integrated parts what you expect from framework (routing, controllers, db, authentication handling and so on).
and boasts high performance design
The C++ REST SDK is a Microsoft project for cloud-based client-server communication in native code using a modern asynchronous C++ API design. This project aims to help C++ developers connect to and interact with services.Project mention: C++ REST API Framework | reddit.com/r/cpp_questions | 2022-11-26
Build time-series-based applications quickly and at scale.. InfluxDB is the Time Series Platform where developers build real-time applications for analytics, IoT and cloud-native services. Easy to start, it is available in the cloud or on-premises.
μWebSockets for Node.js back-ends :metal:Project mention: KitaJs Survey - No runtime code, fast as bare metal and top level framework. | reddit.com/r/typescript | 2022-12-09
The fastest node framework is uWebSockets (as they claim, I didn't try it yet), so if Kita's goal is to maximize performance - you should check on it.
C++ Requests: Curl for People, a spiritual port of Python Requests.Project mention: Trying to use libcpr, linking errors - newbie... | reddit.com/r/cpp_questions | 2022-12-03
So I'm very new to C++ and I'm trying to write a C++ version of a tool that I put together in Python. I'm trying to use libcpr for all my HTTP needs. I've spent the day trying to get it set up and working, but I'm getting a bunch of linking errors when I try to run. I really don't know if I did the building of it correctly, I'm trying to use Visual Studio Community 2022 and the Usage section of their docs talks about CMake and a couple package manager methods.
HTTP and WebSocket built on Boost.Asio in C++11Project mention: Learning to build networking applications using C/C++ from scratch | reddit.com/r/cpp_questions | 2023-01-26
Corvusoft's Restbed framework brings asynchronous RESTful functionality to C++14 applications.Project mention: How to use C++ as the backend for web dev? | reddit.com/r/learnprogramming | 2022-03-25
Use a rest api library like https://github.com/corvusoft/restbed. You can use a json library with this to serialize/deserialize your data into json objects.
The C++ Asynchronous Framework (beta)Project mention: Who is using C++ for web development? | reddit.com/r/cpp | 2022-10-04
Yandex uses a lot for backend. Also released this framework
Squid Web Proxy CacheProject mention: How to get my IP traffic data to an AWS lambda using Darkstat? | reddit.com/r/openwrt | 2023-01-27
I recommend trying a transparent proxy like Squid. There are many analytics tools for Squid logs. Squid can generate TLS certificates on the fly to inspect secure websites but you'll have to generate and install a CA certificate and key into Squid. You'll also have to import the CA certificate on any machine accessing the internet through the Squid proxy. Squid has the added bonus of caching content to speed up web browsing and reduce data usage.
A Fast and Easy to use microframework for the web. (by CrowCpp)Project mention: Crow – Flask in C++ | news.ycombinator.com | 2023-01-06
C++ client for making HTTP/REST requestsProject mention: How do I connect a REST API with C++? | reddit.com/r/cpp_questions | 2022-07-31
You just need something to talk to a REST API. There are lots of choices. E.g. https://github.com/mrtazz/restclient-cpp or libcurl if you want something lower level.
Ultra fast and low latency asynchronous socket server & client C++ library with support TCP, SSL, UDP, HTTP, HTTPS, WebSocket protocols and 10K connections problem solution
Cross-platform, efficient, customizable, and robust asynchronous HTTP/WebSocket server C++14 library with the right balance between performance and ease of useProject mention: What code/project you saw was both inspiring and maintainable? | reddit.com/r/cpp | 2022-11-01
HTTP BotnetProject mention: Own mini “botnet” project | reddit.com/r/HowToHack | 2022-05-05
https://github.com/UBoat-Botnet/UBoat take a look at this
C++ library for creating an embedded Rest HTTP server (and more)
C++ Web Framework REST API
An asynchronous web framework for C++ built on top of Qt
websocket and http client and server library, with TLS support and very few dependenciesProject mention: Request for a websocket library | reddit.com/r/cpp | 2022-03-25
Command line Kiwix tools: kiwix-serve, kiwix-manage, ...Project mention: How to serve content on website over open WiFi for neighborhood ? | reddit.com/r/computerhelp | 2022-11-08
SaaSHub - Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives
C++ HTTP related posts
Torrent Download CLI
3 projects | reddit.com/r/commandline | 6 Feb 2023
aria2 VS FileCentipede - a user suggested alternative
2 projects | 30 Jan 2023
How to get my IP traffic data to an AWS lambda using Darkstat?
1 project | reddit.com/r/openwrt | 27 Jan 2023
termux + aria2 is amazing, my device don't heat up, unlike Libretorrent my device burns like hell when downloading lol
2 projects | reddit.com/r/Piracy | 16 Jan 2023
xbps-src ARM: glslangValidator: cannot execute binary file: Exec format error
11 projects | reddit.com/r/voidlinux | 5 Jan 2023
What do you guys use IPFS to develop?
2 projects | reddit.com/r/ipfs | 3 Jan 2023
Why is downloading videos from websites so difficult?? Please help!
3 projects | reddit.com/r/software | 3 Jan 2023
A note from our sponsor - InfluxDB
www.influxdata.com | 9 Feb 2023
What are some of the best open-source HTTP projects in C++? This list will help you:
|7||C++ REST SDK||7,204|
|
OPCFW_CODE
|
Huh. Learn something new every day. I'm still scratching my head that the build I was using a few days ago, which wasn't changed all that much with regard to the Resurrection() function, worked for me, and now I'm with you guys -- except that I have to fix it. Seems I will be delaying Beta 1.01 for a short while until the addon behaves again.
I'm sorry for flagging a Beta that isn't working, whether or not it was working a few days ago. I will endeavour to do more testing.
I'm using the latest Beta of SmartRes2 and if I use the Auto-Res-Key to ressurect a groupmember there is only a chat message that everybody is alive and nothing happens. Am I missing something? There is no other error or something like that. I'm playing with the german client as a resto-druid or a holy paladin. The errorr happens on both chars.
Would adding version checking across party/raid be a useful or worthwhile feature? If yes, I would love to limit it to LibResComm-1.0 presence, but I don't think that can be done; it would have to limited to SmartRes2.
Or just trust in users' judgment for appropriate modern addons.
Again i don't think we need it. You shouldnt nail down people to use one specific addon, or even a rescomm compatible thing at all. A lot of Addons are compatible with ctra resmessage , vuhdo, healbot, ora gridresstatus and so on.
Its better to fix your stuff if theres something to fix instead of an unnecessary feature like this ;)
OK, this is based off Morgalm's code. http://pastey.net/133234 For Morgalm, please note my change on lines 1003/1004 because I'm guessing you had the sender and collisionsender backward.
I am not posting this as an Alpha yet, and would like some feedback and testing. The optimizations were mostly regarding the res bars options; I moved a lot of that into the user options section in the OnInit() because they were previously in at least two places. The original code is commented out so if something doesn't work, it can easily be switched back.
Just download the code from the pastey and overwrite SmartRes2.lua and post back results, please.
Several items were fixed in the latest Alpha. Still to do: Rebirth checking, fix colourization of test bars' collision (actual casts should work), and verifying my dry code of adding Whisper to chat output works.
I've run out of game time for the next few weeks, so dry coding is all I can do.
if ResComm:IsUnitBeingRessed(unit) then unitBeingRessed = true return nil end
The idea being that if the unit is being ressed, then don't try to res that unit again. The relevant code for the LibResComm-1.0 API reads
Description: Checks if a unit is being ressurected at that moment.
Input: Name of a friendly player
Output: Boolean. True when unit is being ressed. False when not.
for resser, ressed in pairs(activeRes) do
if unit == ressed then
return true, resser
So it is a true/false return on the API, and you can get the resser associated with the unit. In my code, I'm only checking against the unit, as I don't care about the resser at this time. However, my unitBeingRessed local variable is being told to return nil if a unit is found to be in the process of being ressed. Should I be returning nil, nil since the lib returns two values? Oh the joys of no active game account. /sarcasm
I translated the zhCN form google-translation-machine to correct localization 1 hour ago.
Thank you. I'll propagate an update after work tomorrow, unless someone beats me to it.
General interest feedback question: what are people's opinion of SmartRes2? You can say anything you like, but remember constructive feedback is more useful than "it sux" or "i like it". IE, what sucks, what do you like?
|
OPCFW_CODE
|
Just over a year ago I joined Unity as a Trainer and Consultant. In this role, I've been travelling to customers around the world helping them learn how to get the best from our technology. I've visited games companies, universities, research centres and simulation customers, and the locations have spanned the globe, from Texas to Saudi Arabia, to Singapore. (TL;DR Videos Below!)
During this time, I've also been helping to build up the range of training material that we can offer to customers. A lot of this material has grown organically, some based on customer requests for training in specific areas, and some based on creative ideas that tie together Unity's features into learning projects.
The style and focus of the training courses vary widely. I've visited customers who needed a crash-course for their junior developers, or training for experienced developers moving to Unity from a different environment. I've also visited teams of experienced Unity developers in the middle of projects who wanted a quick boost of technical knowledge to help get their project to the finish line.
In my beginner crash-course, I take trainees on a tour through Unity's main features, bringing everything we learn together into a finished game - which usually involves plenty of flying saucers, explosions and sound effects. This seemingly simple project covers many of Unity's core features such as the art and asset pipeline, physics, components, scripting, prefabs, particle systems, audio, and helps get developers who are new to Unity familiar with the editor.
A common request from games and simulation customers alike is training for our new animation system, and its integration with physics and pathfinding. For this, I run through the entire system from scratch, creating a third-person controller and NPC characters with many of the common games actions such as mouselook, strafing, sprinting and crouch walking. We learn how to import, edit and retarget animations, through to building up state machines and integration with input. We examine how to get the pathfinding system to properly control root-motion animated characters, and how to make sure our characters can interact with physics objects properly. This whole section takes about two days to complete, and serves as a solid foundation of knowledge for building character-based games in Unity.
Simulation customers often want to build applications for training purposes themselves, and the individual requirements in the Sim field vary so much that there's no one-size-fits-all training program. Their goals can range from small mobile applications showing how to maintain a piece of equipment, to virtual reality workplace safety training, to ocean-going container ship simulation, and the training I put together for each customer attempts to meet these needs, giving them the understanding they require to make the best use of our tools.
One project I use for Sim customers begins with a model of a stapler. I show the trainees how to start with the bare 3D assets, and build up an interactive training application which allows an end user to progress through the maintenance steps. Obviously the point here is not to teach how to use a stapler! - what the trainees learn are the skills required to build whatever kind of equipment-based training applications they need.
The videos below show a broader cross-section of the content of training we've delivered so far to customers.
If you want to learn more about Unity and want training, contact your account manager to find out more.
Unity Training - Basic
Unity Training - Games Focus
Unity Training - Sim Focus
|
OPCFW_CODE
|
Use BND for both OSGI and JPMS
This PR is a work in progress and replaces #1815.
It replaces maven-bundle-plugin with bnd-maven-plugin, which:
is maintained by the same project that releases the common backend BND, so our build system depends on one less project (Apache Felix),
is released at each upgrade of the BND backend.
The introduction of JPMS support in BND allows us:
to have real JPMS module descriptors that are verified to be complete at a bytecode level (even Class.forName calls are supported by BND).
Using BND we still need to manually overwrite the detected (or undetected) module name of multi-release dependency JARs. Theoretically this was fixed in bndtools/bnd#5327, but I can't get it to work. The module name of MRJs with a module descriptor in their META-INF/versions/9 folder isn't
@HannesWell, do you know what am I missing?
Apparently the BND (partial) fix is only in the 7.1.0-SNAPSHOT branch.
Closes #1830.
Sorry for the late and after the fact reply. I just checked the following slf4j bundles in more detail and everything looks good and worked well in my testing in our two OSGi applications, from which one uses log4j-core as back-end and redirects every other API to it and the other one uses logback-classic and redirects all APIs to that:
log4j-api
log4j-core
log4j-slf4j2-impl
log4j-slf4j-impl
log4j-to-slf4j (Especially the extended version range in Import-Package: org.slf4j;version="[1.7,3)",org.slf4j.spi;version="[1. 7,3)", ... is important)
I noted that for some bundles the symbolic name changed e.g. from o.a.l.log4j.slf4j2-impl to o.a.l.log4j.slf4j2.impl. In some cases this requires adjustments, e.g. if one requires the bundle instead of importing the contained packages. But since the former is discouraged anyways it shouldn't be a blocker and is IMHO fine.
If you want to keep the MANIFEST.MF a little bit cleaner you could remove the Private-Package, which is only for internal use of BND and could be removed with the instruction -removeheaders: Private-Package.
Thank you @ppkarwasz for this work!
@HannesWell,
That is great news, so I believe we have a green light to release 2.21.0, unless you have an idea what is going on in #1741.
Regarding the bundle names, we chose to prefer consistency (use bundle names that can be module names) over backward compatibility. If you can think of a place where bundle names are used (build scripts?), we can revert them to the original form.
That is great news, so I believe we have a green light to release 2.21.0, unless you have an idea what is going on in #1741.
From my point of view yes 🚀🙂
Regarding the bundle names, we chose to prefer consistency (use bundle names that can be module names) over backward compatibility. If you can think of a place where bundle names are used (build scripts?), we can revert them to the original form.
That's a reasonable choice. I'm know cases that depend on the bundle-name, for example if another Bundle requires a log4j bundle using OSGi's Require-Bundle (but that's discouraged/deprecated and as far as I know mainly used in the Eclipse world only. And since you provide good OSGi metadata, i.e. versioned packages, there is no reason to not used Import-Package instead, which only depends on the package name). Or if the bundles are part of an Eclipse Feature (which is basically a group of Bundles that form a 'feature' of an Eclipse Product like the IDE), but there names can be changed too and usually a feature once build is pinned to a specific version of the Bundle so there is no compatibility problem.
So IMHO the name change is fine, personally I find bundle-names with a dash a bit odd, but it should be mentioned in the release notes so that there is a change to simplify migration.
|
GITHUB_ARCHIVE
|
[BUG] Link to ontology ref in anno headers.
Describe the bug
(Not sure, wether this is a bug, feature request, misunderstanding or curiosity)
How is the link of added annotation building blocks (header names) to the respective ontology ref maintained?
This cannot be simply by name, right? Especially for those, where "ontology" is picked for filling variable in the validator.
To Reproduce
Add annotation building block with ontology ref to a SWATE xlsx.
Unzip xlsx
Search for added ontology ref in children files.
OS and framework information (please complete the following information):
OS: macOS Big Sur
OS Version 11.1
MS Excel: Excel Desktop
MS Excel Version 16.43
I am not sure if i understand your question correctly. So you want to know how we can assure that the term in the main column is correctly referenced in the hidden columns (Term Source Ref and Term Accession Number)?
Basically, yes. I’m wondering where the ontology ref is stored. Is it possible (ISA compliant) to add the accession to the column headers?
Actually this goes somewhat in the same direction as #96. I just wanted to make sure -before SWATE-annotating large tables-, that this info isn't lost.
No problem, just to be sure:
In this example, are you wondering how we now that the term accession number is correct for the parameter (Bruker Daltonics HCT Series, in this case) or are you wondering where the information for the header (Paramater [instrument model], as instrument model is an ontology itself) is stored?
Sorry, I could've shared a screenshot. The latter - where's the info stored?
The only info stored is actually shown in the table, but:
In this example i first extended the table with two additional rows and pulled down the bruker value. Then i removed some letters from the last term, changed one TSR value (MSS) and changed some numbers in TAN (should be 1000697).
Is this the case you worry about? To not create too much of an maintenance overhead we currently handle these kind of errors via this function:
This will check all terms in the main column (here: Parameter [instrument model] and check the database for the related TSR and TAN should the term exist.
After clicking, the table looks like this:
Does this answer your question?
I didn't mean the values (i.e. info in the rows like Bruker, MS, etc.) but the keys (column headers like instrument model). Where are TSR and TAN of "instrument model" stored? The "Use related term search" is not only based on the string ("instrument model"), but on TAN or not? If it was string only, this would have to be limited to one ontology to prevent duplicates.
Ah sorry, currently we do not store TSR and TAN for ontologies in the column header. The "Use related term search" parses the header or the selected column to check for a ontology string. This string is then used to a is_a directed search.
E.g. if i change a letter in "instrument model" to "instrument moel" the search will likely return 0 results as a entry for "instrument moel" does not exist in the database.
What are your opinions on having a TSR or TAN for these headers saved? We could add them as an (#tag) in the reference (hidden) columns.
Oh! Yes, I would definitely suggest to add (tag) them in the columns and then search by TAN. How else would SWATE handle duplicates originating from different ontologies?
Currently if this occurs both terms would be used for a is_a directed search.
But we should try to avoid duplicates as this would also mess up a lot of other functions. E.g. the above shown term search: https://github.com/nfdi4plants/Swate/issues/100#issuecomment-775780152
The next release (propably 0.3.0) will now add a term accession value tag to reference columns: #tXX:aaaaaaa
this tag will be used for parentOntology term directed search and to fill in columns.
|
GITHUB_ARCHIVE
|
[BUG] stage macro does not support columns with reserved keywords as names
Describe the bug
Let's say your source contains a column called "order", if you try to run the stage macro with that as a hashed column (or even simply with include_source_columns = true) it will fail
dbtvault_bq.stage(
include_source_columns=true,
source_model={ 'outreach': 'stages'},
hashed_columns={
'SAT_HASH_DIFF': {
'hashdiff': true,
'columns': [
'name'
,'order'
]
}
},
...
)
Versions
dbt: 0.18.1
dbtvault: (BQ fork a.k.a 0.7.0), however looking at the code it should be present in the latest version as well
Expected behavior
I would expect all queried columns to have be qualified with the table name
Screenshots
Workaround for the time being:
for the hashed_columns section wrap the column name like so
"`order`"
If you need the column in your staging layer, you can set include_source_columns to false and use the derived_columns section to rename to a non-conflicting name (again, wrapped)
This is not possible in snowflake (using reserved words as column names), even if quotes are used to force it.
i.e. SELECT "TEST_COL" AS "ORDER" will fail.
This seems to be a BQ only issue. I think this issue highlights a data quality issue in the dataset; data should not have reserved words for column names in the first place.
Saying this, we cannot always help source data quality issues and we should have the option to fix the problem in the staging layer.
Is it adequate that you create a derived column which aliases the reserved word column, and this is used as the value for the hashed column?
SELECT "TEST_COL" AS "ORDER" fails on BQ as well, cause "ORDER" is a string literal whereas `ORDER` is a symbol. I wonder if SELECT "TEST_COL" AS `ORDER` would work
The source data can have whatever column names it wants. The question is, do you want dbtvault to natively support these special column names or not?
The bigger issue is that It's not just hashing, this "bug" forces you not to be able to use include_source_columns as well because the column names are unqualified.
SELECT "TEST_COL" AS `ORDER` does work in Snowflake. I've been able to re-produce the issue in our test bed now, and you're right about this being an issue. I still feel that this is a source data issue, and if users have columns with reserved words then they have a bigger data quality issue. We've never seen this before in the wild and I feel like it's quite a niche issue, however, it is nonetheless and issue with a relatively simple fix, so we will add it to the backlog as a quality of life improvement.
I still feel that this is a source data issue, and if users have column names which are reserved words then they have a bigger data quality issue.
I'm genuinely surprised to hear this cause there's no rule that says the source system uses a SQL datastore.
Perhaps I'm missing a layer in my DV implementation for "hard business rules" like this as Dan calls them. I was under the impression that in most cases I could just shove it through dbtvault without a previous layer 😆
Apologies, I haven't made myself clear. dbtvault should handle this case, I'm simply suggesting that it might imply other issues with your data if you have reserved word column names in the source system, and it may be something developers would be inclined to fix prior to generating the raw vault (dbtvault or no dbtvault).
In saying this, developers should be able to fix it OR not need to fix it because dbtvault should understand that this is possible and handle it on their behalf, by qualifying column names in the stage.
Currently working on this and #28 in this branch: https://github.com/Datavault-UK/dbtvault/tree/feat/escape-col-name
We've got a fix for this coming in 0.7.10.
Hi all. This is now supported in v0.8.0! Read the docs here
|
GITHUB_ARCHIVE
|
[local_auth] Always receiving the error_code NotAvailable on iOS and Android
I'm trying to handle the Platform exception with the errors_code but I always get the NotAvailable. For instance, when I do all my attempts using the TouchID and fail, instead of throwing a LockedOut or other error code it throws a PlatformException with NotAvailable as the error code. Can you explain why I'm getting it?
Steps to Reproduce
Try to authenticate
Fail the TouchId or FaceID, or you can also disable biometrics from your device and also the passcode
The exception always have the error_code NotAvailable
My implementation is below:
Code sample
Future<bool> authenticate(
BuildContext context, {
bool biometricOnly = true,
}) async {
try {
final canAuthenticate = await checkDeviceSupport();
if (!canAuthenticate) {
throw BiometricUnknownException();
}
final authenticated = await _auth.authenticate(
localizedReason:
// ignore: use_build_context_synchronously
appLocalizationsOf(context).loginUsingBiometricCredential,
options: AuthenticationOptions(
biometricOnly: biometricOnly,
useErrorDialogs: true,
),
);
return authenticated;
} on PlatformException catch (e) {
debugPrint(e.toString());
debugPrint(e.code);
switch (e.code) {
case error_codes.notAvailable:
/// If there is available biometrics and yet received a `NotAvailable`,
/// it failed to authenticate. It can happen when user use the wrong fingerprint or faceid
/// for many times. Yet the error code is not precisely,
/// this is how it is handled in iOS 16 iPhone 11.
final enrolled = await _auth.getAvailableBiometrics();
/// Have enrolled biometrics but yet failed, so it is locked.
if (enrolled.isNotEmpty) {
throw BiometricLockedException();
}
final deviceSupports = await _auth.isDeviceSupported();
/// The device supports but biometrics are not enrolled
if (deviceSupports || await _auth.canCheckBiometrics) {
await _safeDisableBiometric();
throw BiometricNotEnrolledException();
}
/// If there is not any biometrics available, show the correct modal
throw BiometricNotAvailableException();
case error_codes.lockedOut:
throw BiometricLockedException();
case error_codes.passcodeNotSet:
throw BiometricPasscodeNotSetException();
case error_codes.notEnrolled:
await _safeDisableBiometric();
throw BiometricNotEnrolledException();
case error_codes.permanentlyLockedOut:
throw BiometriPermanentlyLockedOutException();
default:
throw BiometricUnknownException();
}
} on BiometricException {
rethrow;
}
}
Logs
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.0.5, on macOS 12.6 21G115 darwin-x64, locale en-BR)
[✓] Android toolchain - develop for Android devices (Android SDK version 32.1.0-rc1)
[✓] Xcode - develop for iOS and macOS (Xcode 13.4.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2021.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2022.2)
[✓] IntelliJ IDEA Community Edition (version 2022.1.3)
[✓] VS Code (version 1.71.2)
[✓] Connected device (2 available)
[✓] HTTP Host Availability
• No issues found!
Hi @thiagocarvalhodev
Which device are you testing for this case? How many biometric authentication methods does it support? (TouchID, FaceID, or both of them)
Please provide a completed and minimal reproducible code sample so that we may verify this. Also, please provide full exception log.
Thank you!
|
GITHUB_ARCHIVE
|
[Question about a possible reversed priority case and prevoting process]
I was reading the raft implementation recently, and learned a lot from it. I met some problems which are very likely I understood something wrong. Would appreciate it if someone can help with my process.
Supposing a very normal 4-node cluster, with maxHeartBreakLeak: node 0 < 1 < 2 < 3
initial state
node
currVotedFor
currTerm
role
lastParseResult
node0
node0
1
leader
PASS
node1
node0
1
follower
WAIT_TO_REVOTE
node2
node0
1
follower
WAIT_TO_REVOTE
node3
node0
1
follower
WAIT_TO_REVOTE
and node 0 goes down
node
currVotedFor
currTerm
role
lastParseResult
node1
node0
1
follower
WAIT_TO_REVOTE
node2
node0
1
follower
WAIT_TO_REVOTE
node3
node0
1
follower
WAIT_TO_REVOTE
node 1 timeout first, become a candidate. issues a vote request with term 1 (hasn't increase term yet), but it will be refused because node 2, 3 still believes there is a leader, so they would return REJECT_ALREADY_HAS_LEADER and do nothing. node 1 upon receiving these responses would reset timer, and stay in WAIT_TO_REVOTE state. Same thing happens to node2. after node 2 received REJECT_ALREADY_HAS_LEADER responses, the state would be
node
currVotedFor
currTerm
role
lastParseResult
node1
node1
1
candidate
WAIT_TO_REVOTE
node2
node2
1
candidate
WAIT_TO_REVOTE
node3
node0
1
follower
WAIT_TO_REVOTE
This way, only when node 3 timeout lastly it would request votes without getting any REJECT_ALREADY_HAS_LEADER response, and continue to WAIT_TO_VOTE_NEXT state.
From the time it received 2 REJECT_ALREADY_VOTED( because node 1, 2 all voted for themselves). it would start a timer : lastVotedTime + random value between 300ms and 1000 ms (this value can be changed). Since this is a random value, it might be that node 3 eventually has the smallest timeout interval, and after the smallest timer expires,it increase its term and force node 1 and node 2 to increase their term too (set their needIncreaseTermImmediately). This way node 3 will be the final leader, which reversed the priority order.
This can be mitigated by giving node 3 a much larger timeout interval, but still we might look for something better..
Some thoughts: In raft we would usually first increase term and request vote, here node 1 requests votes without increase its term and I suppose this is implementing the pre-vote algorithm mentioned in the paper.
IMHO, pre-vote is for a potential candidate(like node1) to check if it is possible to pass the election(more up-to-date than a majority of nodes) before increasing term. In our current implementation, node 1 first requests votes with term 1, and other followers with same term would return REJECT_ALREADY_HAS_LEADER. In this case other followers didn't check if the incoming vote request is more up-to-date than followers themselves. If we can somehow subdivide the REJECT_ALREADY_HAS_LEADER response to something like
REJECT_ALREADY_HAS_LEADER_PREVOTE_ACCEPT
REJECT_ALREADY_HAS_LEADER_PREVOTE_REJECT
then node1 receives enough prevote_accept, it can increase its term and start a revote immediately. Otherwise it just remains in WAIT_TO_REVOTE process.
p.s. Actually I didn't find how the pre-vote was implemented in Dledger, wondering if there are some pre-vote design documentations? Thanks if anyone can give me some hints on pre-vote implementation in Dledger.
pre-vote is necessary to void unavailability in some corner, e.g Symmetric network partition tolerance. (unfortunately this feature not be implemented in Dledger)
IMHO :
step 3 : node 1 upon receiving REJECT_ALREADY_HAS_LEADER responses would reset timer that for next pre-vote, and stay in follower role(not increase term yet).
step 4 : node2 would request pre-votes after timeout and get ACCEPT response from a majority of nodes (self and node1), in this way, node2 will be the first to become candidate and leader role.
Only the node ,that get pre-vote ACCEPT response from a majority of nodes ,could become candidate, and the node should undoubtedly refuse the pre-vote requests while still believes there is a leader because the request node is more suspicious.
p.s. I was also reading some raft implementation recently, the sofa-jraft project seems implement
it more clearly. XD
@imaffe
What is the priority order you mentioned? is it defined by maxHeartbeatLead?
@imaffe
What is the priority order you mentioned? is it defined by maxHeartbeatLead?
Yes, different maxmaxHeartbeatLeak decides how long before each follower timeout to a candidate, thus determine their priorities in becoming candidates
Only the node ,that get pre-vote ACCEPT response from a majority of nodes ,could become candidate, and the node should undoubtedly refuse the pre-vote requests while still believes there is a leader because the request node is more suspicious.
When I was reading the code, if a node receives a majority of ACCEPT, it would become leader directly, and this is where I found it weird (and this is where leads me to doubt that it's not implementing pre-vote)
But It's been a while since I last touch this codebase, I might be wrong. I'll check again probably within next days ~ Thanks for your comment and really appreciate it we can discuss this topic ~
@imaffe
Dledger does not say that the priority is determined by maxHeartbeatLeak.
I think the priority should be specified by additional parameters, Can specify a node or a machine-room as a prefered-leader. Just personal thoughts.
Maybe has implemented in perfered-leader branch, But i haven't seen the code of this branch.
@imaffe
Dledger does not say that the priority is determined by maxHeartbeatLeak.
I think the priority should be specified by additional parameters, Can specify a node or a machine-room as a prefered-leader. Just personal thoughts.
Maybe has implemented in perfered-leader branch, But i haven't seen the code of this branch.
Dledger support explicit preferred-leader by explicitly hand off leadership. And the problem I described still exists even when explicit priorities are assigned (I just used maxHeartbeatLeak for example ): the major problem is, a candidate has to wait for every follower become candidate before it can proceed, and I very doubt this is the expected behaviour.
|
GITHUB_ARCHIVE
|
I am using uVision V18.104.22.168 running under Windows 10, building code for various STM32F parts.
My problem is that the IDE stops responding at various places for a long time, sometimes over 5 minutes, occasionally over 10, but eventually recovers.
This happens mostly at the completion of a build, occasionally on entering a debug session and occasionally when leaving a debug session.
While it isn't responding, a quick look at the Windows task manager shows the IDE is still consuming processor time, around 20%, and using a lot of memory, anything up to 2 GB!
It is becoming an issue that I have to wait for so long after running a build, and I would really appreciate a fix to this problem.
V5.28 is downloadable now, perhaps start with that?
Unfortunately I'm not permitted to do this by my client. They started with 5.18 and have to stay with that version for the life of the project. :(
Then you'll have to live with whatever quirks that are present. You could use a faster or multi-core system, more memory or an SSD.
V5.28 does seem to have a short dwell after build completion, but that's only a few seconds on a QC Xeon as the IDE collects itself. Perhaps as the entire memory space has been tainted by the build and translation/generation on transient data held in cache or write-back queue.
You could still try the new version - that could help to determine whether it's something to do with that old version, or some problem due to your build machine.
Have you tried disabling virus scanner, Windows indexing, etc ... ?
People don't often appreciate it, but AV can be a huge drag
Also, is all your project entirely on local drives? Network stuff could slow things down unpredictably ...
Do you have version control or any other such stuff "hooked-in" which could be slowing things down?
This client is particularly driven by the IT department, so no option to download or install any updates. It is holding the project back, however it is their policy and I have to abide by it. The strange thing is that when I started working on this project the issue wasn't there, it seems to have 'grown' over the last few weeks.
No, all local and no version control hooked into uV.
No permissions to do that unfortunately.
jartim said:This client is particularly driven by the IT department
So get the IT department to help!
jartim said:It is holding the project back
That's something you need to flag to project management!
jartim said:when I started working on this project the issue wasn't there
So if you pull a version of the project from back then - does it have the problem now?
If you have (or your client has) a licence, you should contact Keil support direct.
I think Andrew might be thinking of Windows Explorer (Shell) Extensions, the sort of thing Tortoise CVS/SVN do. Where as AV is probably at a filter driver level.
Another thing to watch is the generation of Browse information, this has been a complete dog in uV for years, not sure if it was written by an intern using bubblesorts, but it turns the compile exercise into one where you can go brew a pot of tea/coffee waiting for it. Feels very 1986 TBH.
You mentioned issues when entering and leaving a debug session. That could be a separate issue. What debug adaptor are you using with your STM32F board? A ST-Link? A J-link?
If so, those debug adaptors use a 3rd party dll. If you have too new of a debug adaptor, the firmware on it might not play nice with the old dll.
See " Downgrade the firmware of the ST-LINK debug adapter:" on this page http://www.keil.com/support/docs/3662.htm to see if that would help. Ditto fora J-Link.
If you are using a ULINKpro, ULINK2, and CMSIS-DAP, you should ask your IT group if you can update to 5.18a, since they fixed an issue with the drivers in that small patch. See: www.keil.com/.../MDK518a.htm
View all questions in Keil forum
|
OPCFW_CODE
|
Can't capture alert ,which is tiggered by the key word "Execute Javascript"
You can captrue alert which is triggered by some event,bu you can't get it when triggered by "Execute Javascript"
the RF scripts is as follow:
*** Settings ***
Library Selenium2Library
*** Test Cases ***
alerttest
open browser http://localhost:7272 ie
click button button
Alert Should Be Present Hello!
AlertTriggeredbyJavascript
sleep 1s
execute javascript window.f()
Alert Should Be Present Hello!
and the second test case failed!
+KEYWORD: BuiltIn.Sleep 1s
Documentation: Pauses the test executed for the given time.
Start / End / Elapsed: 20120222 10:21:50.109 / 20120222 10:21:51.109 / 00:00:01.000
+KEYWORD: Selenium2Library.Execute Javascript window.f()
Documentation: Executes the given JavaScript code.
Start / End / Elapsed: 20120222 10:21:51.109 / 20120222 10:21:53.375 / 00:00:02.266
-+
KEYWORD: Selenium2Library.Alert Should Be Present Hello!
Documentation: Verifies an alert is present and dismisses it.
Start / End / Elapsed: 20120222 10:21:53.375 / 20120222 10:21:55.046 / 00:00:01.671
+KEYWORD: Selenium2Library.Capture Page Screenshot
Documentation: Takes a screenshot of the current page and embeds it into the log.
Start / End / Elapsed: 20120222 10:21:53.390 / 20120222 10:21:54.281 / 00:00:00.891
+KEYWORD: Selenium2Library.Capture Page Screenshot
Documentation: Takes a screenshot of the current page and embeds it into the log.
Start / End / Elapsed: 20120222 10:21:54.281 / 20120222 10:21:55.046 / 00:00:00.765
10:21:55.046 INFO
10:21:55.046 FAIL There were no alerts
IS there any difference between the tow way?
Here is the HTML under testing :
[input type="submit" name="button" id="button" value="提交" onclick="f()" /]
[script type="text/javascript"]
function f()
{
alert("Hello!")
}
[/html]
this issue is continuing bugging me ,plz help to solve it.
This issue is old and has had no action on it for several years. If an issue still exists please open a new issue.
|
GITHUB_ARCHIVE
|
Flat file destination columns data types validation
A source database field of type INT is read through an OLE DB Source. It is eventually written to a Flat File Destination. The destination Flat File Connection Manager > Advanced page reports it as a four-byte signed integer [DT_I4].
This data type made me think it indicated binary. Clearly, it does not. I was surprised that it was not the more generic numeric [DT_NUMERIC].
I changed this type setting to single-byte signed integer [DT_I1]. I expected this to fail, but it did not. The process produced the same result, even though the value of the field was always > 127. Why did this not fail?
Some of the values that are produced are
1679576722
1588667638
1588667638
1497758544
1306849450
1215930367
1215930367
1023011178
1932102084
Clearly, outside the range of a single-byte signed integer [DT_I1].
As a related question, is it possible to output binary data to a flat file? If so, what settings and where should be used?
What number does it put in the file? Is it a number >127 or does it overflow?
After re-reading the question to make sure it matched my proof-edits, I realized that it doesn't appear that I answered your question - sorry about that. I have left the first answer in case it is helpful.
SSIS does not appear to enforce destination metadata; however, it will enforce source metadata. I created a test file with ranges -127 to 400. I tested this with the following scenarios:
Test 1: Source and destination flat file connection managers with signed 1 byte data type.
Result 1: Failed
Test 2: Source is 4 byte signed and destination is 1 byte signed.
Result 2: Pass
SSIS's pipeline metadata validation only cares about the metadata of the input matching the width of the pipeline. It appears to not care what the output is. Though, it offers you the ability to set the destination to whatever the downstream source is so that it can check and provide a warning if the destination's (i.e., SQL Server) metadata matches or not.
This was an unexpected result - I expected it to fail as you did. Intuitively, the fact that it did not fail still makes sense. Since we are writing to a CSV file, then there is no way to control what the required metadata is. But, if we hook this to a SQL Server destination and the metadata doesn't match, then SQL Server will frown upon the out of bounds data (see my other answer).
Now, I would still set the metadata of the output to match what it is in the pipeline as this has important considerations with distinguishing string versus numeric data types. So, if you try to set a datetime as integer then there will be no text qualifier, which may cause an error on the next input process. Conversely, you could have the same problem of setting an integer to a varchar and having, which means it would get a text qualifier.
I think the fact that destination metadata is not enforced is a bit of a weak link in SSIS. But, it can be negated by just setting it to match the pipeline buffer, which is done automatically assuming it is the last task that is dropped to the design. With that being said, if you update the metadata on the pipeline after development is complete then you are in for a real treat with getting the metadata updated throughout the entire pipeline because some tasks have to be opened and closed while others have to be deleted and re-created in order to update the metadata.
Additional Information
TL DR: TinyInt is stored as an unsigned data type in SQL Server, which means it supports values between 0 and 255. So a value greater than 127 is acceptable - up to 255. Anything over will result in an error.
The byte size indicates the maximum number of possible combinations where the signed/unsigned indicates whether or not the range is split between positive and negative values.
1 byte = TinyInt in SQL Server
1 byte is 8 bits = 256 combinations
Signed Range: -128 to 127
Unsigned Range: 0 to 255
It is important to note that SQL Server does not support signing the data types directly. What I mean here is that there is no way to set the integer data types (i.e., TinyInt, Int, and BigInt) as signed or unsigned.
TinyInt it is unsigned
Int and BigInt are signed
See reference below: Max Size of SQL Server Auto-Identity Field
If we attempt to set a TinyInt to any value that is outside of the Unsigned Range (e.g., -1 or 256), then we get the following error message:
This is why you were able to set a value greater than 127.
Int Error Message:
BigInt Error Message:
With respect to Identity columns, if we declare an Identity column as Int (i.e., 32 bit ~= 4.3 billion combinations) and set the seed to 0 with an increment of 1, then SQL Server will only go to 2,147,483,647 rows before it stops, which is the maximum signed value. But, we are short by half the range. If we set the seed to -2,147,483,648 (don't forget to include 0 in the range) then SQL Server will increment through the full range of combinations before stopping.
References:
SSIS Data Types and Limitations
Max Size of SQL Server Auto-Identity Field
I will give you the check mark, but the explanation appears to be that Microsoft does not honor the output type setting if it can get away with it.
@lit Thanks. Regarding the binary data type, try DT_BYTE. For SQL Server, map it to a VARBINARY. The resulting value is hexadecimal as there is implicit conversion going on from SSIS to SQL Server via the SqlBinary structure. https://learn.microsoft.com/en-us/dotnet/api/system.data.sqltypes.sqlbinary?view=netframework-4.7.2 This should be a separate question.
@JWeezy i think you should delete the other answer and write it under an Additional information section within this answer
@JWeezy - Do you have or know of a list of components that must be deleted and recreated?
@lit Merge Join I know for sure is one and I think both a Sort and Union All also has to be replaced if the pipeline's exiting column metadata changes.
@Yahfoufi Done.
Data types validation
I think this issue is related to the connection manager that is used, since the data type validation (outside the pipeline) is not done by Integration services, it is done by the service provider:
OLEDB for Excel and Access
SQL Database Engine for SQL Server
...
When it comes to flat file connection manager, it doesn't guarantee any data types consistency since all values are stored as text. As example try adding a flat file connection manager and select a text file that contains names, try changing the columns data types to Date and go to the Columns preview tab, it will show all columns without any issue. It only take care of the Row Delimiter, column delimiter , text qualifier and common properties used to read from a flat file. (similar to TextFieldParser class in VB.NET)
The only case that data types may cause an exception is when you are using a Flat file source because the Flat file source will create an External columns with defined metadata in the Flat file connection manager and link them to the original columns (you can see that when you open the Advanced editor of the Flat file source) when SSIS try reading from flat file source the External columns will throw the exception.
Binary output
You should convert the column into binary within the package and map it to the destination column. As example you can use a script component to do that:
public override void myInput_ProcessInputRow(myInputBuffer Row)
{
Row.ByteValues=System.Text.Encoding.UTF8.GetBytes (Row.name);
}
I haven't try if this will work with a Derived column or Data conversion transformation.
References
Converting Input to (DT_BYTES,20)
DT Bytes in SSIS
|
STACK_EXCHANGE
|
The high-level steps of playing priority planning poker are:
Create a poker room with selected issues
Invite team members
Everyone votes on metrics for selected issues in real-time
Present the results of the voting, together with each member’s votes
Discuss when necessary and accept the votes. Or retake voting if outliers are prominent
Roles in priority planning poker session
Everyone who has Administer Foxly permission can create a poker session. When you create a session you automatically become the Session admin.
Session admin controls the game, chooses issues that are being currently prioritized, closes voting, reveals vote results, and accepts the votes.
Players can only vote on the issue selected by the Session admin at that time and view the results once voting is closed.
Vote on metrics
Close the voting and reveal score
Change or accept voting results
Select issue that is being voted on
Invite other players
Finish the game
Start the game
How to create a new priority planning poker game
To create a new priority planning poker game, access Foxly by clicking on the Priorities tab in the Project menu.
Click on the Priority poker button in the top right
Give your poker session a name and select the metrics you want to prioritize on the session
Click on the Select issues button
4. Check the checkbox of the issues you want to prioritize; use filters to find the issues you need.
5. Click on Create session button
You’ve created a new poker game. Invite other players to the game via the link and once you’re ready, click on Start the session button to begin voting. New players can join even after the session has already started and participate in the voting process.
How to select the issues for prioritization
The session admin (i.e. creator of the game) can choose the current issue being prioritized by clicking on the issue card in the Ready for prioritization column.
How to vote on issues metrics
You can vote on metrics for any selected issue at the bottom of the screen.
You can add values to all metrics or a few only, and click on Submit my vote
You can choose to skip a turn and refrain from submitting a vote by clicking on the Skip this turn button
Wait for everyone to finish voting and for the session admin to reveal the score.
How to close voting and see results
The session admin can close voting for an issue and reveal results by clicking on the View results button that appears in the voting panel at the bottom of the page, once they submit their vote.
Everyone is then taken to the results screen to view average votes for each metric, alongside details of all player submissions.
Once results are revealed, no one can vote on the issue again.
How to accept the voting result
The session admin can accept the vote by clicking on the Accept results button at the bottom of the results page. Once the result is accepted, the average vote is stored as final value for that metric, and the issue priority score is recalculated accordingly.
The results screen then closes and all players are presented with the next issue to prioritize.
The session admin can overwrite the results by simply clicking and changing the value of the final average metric field on the results page.
How to retake the vote
If the team is dissatisfied with the results, the session admin can then retake the voting.
The session admin clicks on Retake the vote button at the bottom of the results screen
All players are taken back to the voting screen for this issue, to submit their votes
The session admin clicks on the View results button to see new results
What to do when the priority poker game finishes
Once all issues in the priority planning poker game are prioritized or you run out of time in your prioritization meeting, you can go back to the Foxly priorities screen and view issues with newly assigned metrics in the table.
|
OPCFW_CODE
|
The phd in computer science at iit (illinois institute of technology) in chicago requires mastery of coursework in core areas of computer science as a foundation toward doing the original. Wjec computing a level coursework computing gce as/a – wjecthis page contains information related to our gce as/a level computing specification available in. 您的位置: 首页 / 最新资讯 / 默认 / computer science research essay, as level product design coursework help computer science research essay, as level product. A-level coursework will be dramatically scaled back amid concerns that qualifications are too easy and open to abuse from teachers, the exams watchdog announced today. Tuition fees for graduate (coursework) full-time and part-time programmes for academic year 2017-2018 (a) programme at the same or lower level. Wwwgceguidexyz. English essay coursework and essay coursework help because there is a difference between essay coursework at college and university level. Our coursework writing service provides an exclusive offer for students using a proper coursework writing help will facilitate your academic life.
Ms coursework option – handbook of study must total a minimum of 32 hours of graduate coursework at the 5000 level or of electrical and computer. Computer studies sec 09 (2 hrs) + coursework • writing, debugging and testing programs in a high level language. Ocr 2016 a level in computer science iii teaching and learning resources we recognise that the introduction of a new specification can bring challenges for implementation. Coursework for the igcse computer studies coursework (project) you need to complete a project based on the process steps of 'analyse', 'design', 'implement'. Computer science coursework 鐣欏 鐢宠 璁哄潧-涓€浜╀笁鍒嗗湴 please upload the list of all computer science courses taken at the college level please include the. A-level computing/aqa/the unlike the rest of the course this unit is entirely based on coursework you can write a computer game but they can.
Computer coursework help by on 三月 4, 2018 write my college application essay for me #pattymayo #iphonexgiveaway #pattymayo @pattymayotv winter season. Aqa a level history coursework workbook component historical scribd part python coursework gcse computer science python quiz coursework project. A comprehensive resource for the cambridge international as and a level computer science 9608 syllabus for examination from 2016 cambridge international.
Ict and computer science qualifications from aqa as and a-level computer science (7516 7517) teaching from september 2015 exams from june 2016 (as), june 2017 (a-level. From functional skills to gcse and a-level, aqa it and computer science helps develop students’ interest in the subject and their analytical and critical thinking skills. Title=practical project: picking a project this is an important decision as this will drive the rest of this coursework a computer room booking database. As/a level gce computing - h047, h447 our as/a level computer science provision includes assessment of coursework to assess the candidate's level of.
Msc by coursework programme computer-based (cbt): a candidate must first fulfill a coursework requirement of 40 mc at level 4000. Fulfilling high school graduation requirements with computer science coursework there is no state-level approval. Individuals searching for coursework found the following related articles, links, and information useful.
Here, you'll find everything you need to prepare for the changes to gcse computer science from 2016, including our draft specification and sample assessment materials which we've submitted.
Detailed resume by a college student seeking a professional position, plus tips for including coursework in your resume. This is a book about a-level computer science it aims to fit with the aqa gce a-level computer science 2015 syllabus but is not endorsed by aqa it should be useful as a revision guide or. Ocr as and a level computer science - h046, h446 (from 2015)) qualification information including specification, exam materials, teaching resources, learning resources.
|
OPCFW_CODE
|
PowerPoint makes it simple to add a new slide or layout, change the look of your presentation, add speaker notes, add transitions, and get help. Check out What's new in PowerPoint 2016 for Windows.
Add a new slide
On the Home tab, select the New Slide down arrow.
Select a layout.
Change the look
Select the Design tab.
Select one of the Themes.
Select one of the Variants to change the color of the theme you selected.
At the bottom of the window, select Notes.
Select Click to add notes and type your notes.
Select the Transitions tab.
Select a transition.
Select Apply To All.
In the ribbon, select Tell me what you want to do.
This is the Tell Me box.
Type a word or phrase for the task you want to do.
For example, you could type "Change theme" or "Add notes".
Select an option from the search results.
Welcome to the first in the series of four Office Mixes about Getting Started with PowerPoint 2016.
With Office Mix, this video be interactive on most devices so you can click on links and use the controls below this line to pause or go back and forth, or tap the table of contents button.
You might occasionally see a button right here that says, Click next to continue, so tap that when you're ready to move on.
This is what you see when you first open PowerPoint 2016.
Click on this template to start our pre-made overview of PowerPoint will open and show you some basics and new features.
Then once you open a blank presentation or template familiarize yourself with this section of the Home tab.
When you want a New Slide, choose one from here, just don't copy and paste a previous slide.
These are the Layouts, a skeleton of sorts for your presentation and you can choose the type of slide you want.
Need a different look? Choose the Design tab.
There you'll find many Themes you can choose from and each theme as a few Variants for subtle changes.
Themes control your presentation's fonts, effects, layouts, and colors.
If you change to the green variant, you get a different set of theme colors.
So changing Themes and Variants might be the best way to change the look of your doc.
Think "less is more" with PowerPoint.
Don't write your talk on the slides.
Use images and a few words.
Here is something marketing guru Seth Godin says, "There should be no more than six words on the slide."
In other words, paraphrase.
Try that for next presentation.
You can add your thoughts, scripts, numerous facts, in the Notes section down here.
When you present, they are just seen by you.
"Less is more" is also a good idea for Transitions and Animations.
Too many can be a distraction.
In the Transitions tab, pick a transition and then select Apply To All for a consistent look to your slides.
If you're looking to find a button or get help in PowerPoint there's a new place to look: Tell Me.
Type what you need and often you can perform the action needed.
PowerPoint 2016 also has new chart types, new themes--including the colorful theme I'm using now.
There's Office Mix and upgrades for version history, video resolution, and more.
Click here to read all about what's new and improved.
We have additional resources.
If you like reading more than videos click here to read our basic tasks article to start.
A lot of companies have told us they like our Quick Start Guides to user or print-out when training their staff.
They download as PowerPoint or PDF files.
And we have training for many different versions of PowerPoint and Office.
You've finished this lesson on exploring PowerPoint.
Click the icon for the next lesson on inserting stuff in here presentation or another lesson in getting started in PowerPoint 2016.
|
OPCFW_CODE
|
The field widget is the most common widget. It is used both for text boxes or selection lists. It can be associated with different datatypes such as string, long or date to ask for different types of data.
A datatype represents a certain type of data, such as a string, integer, decimal or date. Each datatype matches to a certain Java class. If you associate a field widget with a datatype, its setValue(Object) and getValue() methods will take, respectively return objects that are instances of that Java class (or subclasses thereof).
Each datatype is associated with a convertor. The task of the convertor is to convert from string representation to object representation, and vice versa.
The string to object conversion usually happens when converting the value entered by the user to an object. This process can fail if the user entered an incorrect string, for example abc when a number is required. In this case an appropriate validation error will be set on the widget. String to object conversion also happens when parsing data in selection lists (if the selection list is retrieved as XML) and can also be used as part of the binding.
The object to string conversion happens when the state of the widget is spit out as XML, this is mostly when injecting the widget XML in the publishing pipeline.
By having a field widget associated with a datatype, you can be sure that, after successful validation of the widget, retrieving the value of the widget will give you an object of the correct type.
The available datatypes and their respective convertors are documented in a separate document.
A field widget can furthermore be associated with a selection list. This makes that the field widget could be rendered either as a textbox or a list, depending on whether its datatype has a selection list. The selection list is related with the datatype: the values in the selection list should be of the same type as the datatype.
Selection list data can be specified directly in the form definition (for short, unchanging lists), retrieved from external sources (i.e. a Cocoon pipeline), or pulled from an oject structure. Full details on selection lists are also in a separate document.
If we wouldn't make these datatype and selection list associations, we would need to create specific widgets for each possible combination: StringField, LongField, DateField, StringSelectionList, LongSelectionList, ...
<fd:field id="..." required="true|false" state="..." whitespace="trim|trim-start|trim-end|preserve"> <fd:label>...</fd:label> <fd:hint>...</fd:hint> <fd:help>...</fd:help> <fd:datatype base="..."> [...] </fd:datatype> <fd:initial-value locale="...">...</fd:initial-value> <fd:selection-list .../> <fd:suggestion-list .../> <fd:validation> [...] </fd:validation> <fd:on-value-changed> [...] </fd:on-value-changed> <fd:on-create> [...] </fd:on-create> <fd:attributes> <fd:attribute name="..." value="..."/> </fd:attributes> </fd:field>
The field element takes a required id attribute. This id should be unique among all widgets in the same container (i.e. inside the same fd:widgets element).
The required attribute is optional, by default it is false. It indicates whether this field is required. This is a static property of the widget. If you want the field to be "conditionally required", then set this to false and use custom validation logic to check the requiredness of the field.
The state attribute is optional. See Widget States for its purpose.
The whitespace attribute is optional. It controls how leading and trailing whitespace characters are handled when the widget parses the value submitted by the user. Accepted values are: "trim" (removes both leading and trailing whitespace), "trim-start" (removes leading whitespace only), "trim-end" (removes trailing whitespace only), and "preserve" (leaves both leading and trailing whitespace intact). If the whitespace attribute is not present, it defaults to "trim".
The fd:label element contains the label for this widget. This element is optional. It can contain mixed content. For internationalised labels, use i18n-tags in combination with Cocoon's I18nTransformer.
The fd:hint element contains a hint for the form control of this widget. This element is optional. It can contain a hint about the input control. For internationalised labels, use i18n-tags in combination with Cocoon's I18nTransformer.
The fd:help element contains more help for the form control of this widget. This element is optional. It can contain text help about the input control. For internationalised labels, use i18n-tags in combination with Cocoon's I18nTransformer.
The fd:datatype element indicates the datatype for this field. This element is required. The base attribute specifies on which built-in type this datatype should be based. The contents of the fd:datatype element can contain further configuration information for the datatype. The possible datatypes and their configuration options are described over here.
The fd:initial-value element specifies an initial value for the field. The specified string value is converted using the datatype's convertor. You can optionally define the locale to be used for this.
The fd:selection-list element is used to associate a selection list with this field. See Datatypes for more details.
The fd:suggestion-list element is similar to the fd:selection-list element but serves for Ajax-based autocompletion. (Very new at the time of this writing -- sep/oct 2005)
The fd:validation element specifies widget validators. See Validation for more details.
The fd:on-value-changed element specifies event handlers to be executed in case the value of this field changes. See also Event Handling. The interface to be implemented for Java event listeners is org.apache.cocoon.forms.event.ValueChangedListener. The WidgetEvent subclass is org.apache.cocoon.forms.event.ValueChangedEvent.
The fd:on-create element specifies event handlers to be executed upon creation of the widget instance. The interface to be implemented for Java event listeners is org.apache.cocoon.forms.event.CreateListener.
The fd:attributes element specifies arbitrary name/value pairs to be associated with the widget. These attributes have no special meaning to CForms itself, but can be retrieved via the API.
A field widget is inserted in a template using the ft:widget tag:
Styling (default HTML XSL)
If the field widget does not have a selection list, it will be rendered as simple input box. If the field widget does have a selection list, it will by default be rendered as a dropdown list. If the datatype of the field is date, a date-picker icon will be put next to the field.
To render the input field as a textarea:
<ft:widget id="..."> <fi:styling type="textarea"/> </ft:widget>
To render the input field as a HTMLArea:
<ft:widget id="..."> <fi:styling type="htmlarea"/> </ft:widget>
To render the selection lists as a listbox:
<ft:widget id="..."> <fi:styling list-type="listbox" list-size="5"/> </ft:widget>
To render the selection list as radio buttons:
<ft:widget id="..."> <fi:styling list-type="radio"/> </ft:widget>
To render the selection list as horizontal radio buttons:
<ft:widget id="..."> <fi:styling list-type="radio" list-orientation="horizontal"/> </ft:widget>
|
OPCFW_CODE
|
XDT 1.0 Dimensions Specification
The Dimensions Specification 1.0 is one of the Contributions of Ignacio Hernandez-Ros (Founder of Reporting Standard S.L.) to the XBRL Consortium during the period 2005-2007
The Dimensions Specification allows the transmission of multi-dimensional information (OLAP cubes) Online Analytical Processing using the XBRL syntax.
The Dimensions Specification defines two concepts in order to build the building blocks it is based on:
- Hypercubes: Taxonomy authors must define hypercubes in order to link concept definitions (also called primary items) with XBRL "cubes"
- Dimensions: Taxonomy authors must define dimensions in order to link hypercubes with the set of dimensions that are part of the hypercube.
Other terms used:
- Primary items: They are regular concept definitions
- Members: Are values of a dimension. If the values can be enumerated in a list then they are typically Explicit dimensions. If the values cannot be enumerated then they are typically Typed dimensions.
- Domain: This is a set of members of a dimension. Explicit dimension members can be organized in a hierarchy. A Domain is typically a Member (head elements) and the set of children elements in its hierarchy.
Relationships and operations with hypercubes
- The all relationships links a concept definition with an hypercube definition. The dimensions in the hypercube must be reported in the instance document for the fact that corresponds to the concept definition.
- The notAll relationship links a concept definition with an hypercube. Contrary to the all relationship, the notAll relationships tells that the dimensions in the hypercube cannot be reported in the instance document for the fact that corresponds to the concept definition.
- The hypercube-dimension relationship links an hypercube with a dimension. An hypercube may contain multiple dimensions.
- The dimension-domain relationship links a dimension with its head domain member.
- The domain-member relationship links a domain with other members in a explicit dimension hierarchy.
The existence of all and notAll means that the specification defines an algebra of validation of facts for which all members for dimensions in hypercubes linked with the notAll relationship must be excluded from the possible combinations of members for dimensions in hypercubes linked with the all relationship.
Note, in order to build a Dimensional processor and perform validation against the dimensions specification there is no need to create the Cartesian product of dimensions and members of an hypercube.
Dimensions syntax level vs. Dimensions semantic level
One of the consequences of the existence of Concepts, Hypercubes, Dimensions and Members and the relationships between them is that the same hypercube could potentially be used multiple times in the same taxonomy tree and linked with different primary items. Having different dimensions each time... and each dimension may also have different members each time it is used.
Reporting Standard XBRL processor has implemented a two layers approach in order to deal with this situation.
- During the processing of layer one (syntactical layer) the processor creates an object model that matches the syntactical layer (Post Taxonomy Validation Infoset) this is; once relationships removed by extensions are effectively not considered.
- During the processing of layer two (semantical layer) the processor creates a "compiled" layer representing the different uses of the syntactical layer.
A primary item P1 is linked with Hypercube H1 on extended link EXL1
A primary item P2 is linked with Hypercube H1 on extended link EXL2
The definition of H1 in EXL1 contains:
- Dimension D1 with members M1 and M2
The definition of H1 in EXL2 contains:
- Dimension D1 with members M1 (note M2 is not here)
- Dimension D2 with members M2 (note M2 is part of D2 in EXL2), N1 and N2
This approach makes our processor the most advanced processor in regard to the use of the Dimensions Specification as it allows a completely safe isolation between what the syntax is and what the semantics wants to transmit.
|
OPCFW_CODE
|
Data Engineer with Spark experience- Chicago, IL-125k-Perm
Position: Data Engineer
Technology: Big Data
Location: Chicago, IL
Job Type: Permanent
About the Role:
Our client has an exciting role opening up for a data engineer. The candidate will build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
The ideal candidate is a passionate IT expert who can easily be transparent and a team player. The rest of the team will be relying on you heavily. Candidate will have prior professional experience as a data engineer in the Azure, AWS, or Google cloud
*Cloud experience and Spark is required*
What you Bring to the table:
- 4+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
- implementing and optimizing data pipelines
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing 'big data' data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
- Code unit testing.
- Experience with the following data types, structures and techniques is a big plus:
- Time series data
- Location Data (GIS)
- ML/AI techniques such as Neural Nets, SVM, Linear/Logistic Regression
- Association Rule Mining, etc. for both out of sample prediction as well as inference
- Ability and willingness to understand, learn, and use new programming languages quickly.
- familiar with Linux shell, scripting, process monitoring and management, SSH.
What they can give you:
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k, IRA)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Family Leave (Maternity, Paternity)
- Work From Home
- Free Food & Snacks
- Stock Option Plan
- Transportation Stipend
The company is actively searching for candidates to interview, do not miss this opportunity! You can reach me at firstname.lastname@example.org or by calling 646.863.7598 (ext. 7598). I'm eagerly looking forward to helping you advance your career!
What's in it for you:
I am unlike other recruiters in that I thrive on building our relationship and making it more personal to ensure working together is a happy experience for you! I understand the need for discretion and would welcome the opportunity to speak to any Azure big data candidates that are considering a new career or job either now or in the future. Confidentiality is of course guaranteed.
Nigel Frank International is the Global Leader in Microsoft Azure recruitment. We are a part of Frank Recruitment Group, one of the most successful global recruitment businesses in the last 10 years and backed by private equity firm TPG Growth.
|
OPCFW_CODE
|
Hailed as the world's most innovative design company by Fast Company magazine in 2011, San Francisco-based Stamen Design specialises in making big data beautiful, with a particular focus on how maps can communicate complex information in an aesthetically pleasing way.
One such example is Map Stack, which enables anyone to create a colourful, engaging map image right in their browser, by layering backgrounds, road plans or satellite imagery with visualisations of different Open Street Map data - and then manipulating colour, opacity and masking using the controls provided. We spoke to Stamen's founder about the rewards and challenges of data-driven design.
What's the most exciting and innovative aspect of how you work?
We think of new ways to ask questions about data, and how we can design with data to get the viewer and our clients to ask new questions. We like to allow data to show what will form organically, without trying to force the message. It's through this openness and curiosity that new ideas emerge, and that's perhaps the unpredictable space where innovation lives.
Why is innovation important to the design industry?
New ideas and concepts are important, but too often in design we see people try to reinvent the wheel or the bar chart when, in fact, they don't need to. That said, design is all about process. Working together with partners, designers, engineers and stakeholders to arrive at an artefact that creatively responds to a need, challenge or problem.
As designers, we constantly seek to make the world more beautiful, to make information easier to understand, to improve something. Innovation naturally comes out of that.
What role does data play in the future of interactive design?
Too often it's assumed that data is itself some objective thing, and that if we can just harness it and visualise it then voilà - we will understand the world and our bodies and lives, and transcend our current existence into a better way of being. The truth is that data is a representation of information. It is in no way objective. Sometimes it's downright boring. That said, data plays an important role in design. At Stamen, we handle it as if it is clay with a materiality all of its own.
What are the main challenges you face working in this field?
Designing with data adds an additional layer of complexity to any project, which may be pure joy or pure hell. Sometimes a client's data doesn't tell the story they thought it would. We can't stress enough the need for open data. It's unfortunate that so many companies are inclined to collect data, and then not share it with the public for analysis and design. Innovation often comes from unusual places, and without openness, it's a lot harder to make happen.
Words: Andreas Markdalen
This article originally appeared in Computer Arts issue 228.
|
OPCFW_CODE
|
Laws surrounding publicly (ish) displaying decoded messages received over radio
POCSAG512, POCSAG1200 and POCSAG2400 is the protocol used by pagers to transmit information to one another. The problem I'm seeing here is that it is not encrypted, only encoded, as any broadcast has to be to be broadcast in the first place.
I decided to build a simple web page that decodes and displays local pager POCSAG transmissions I can pick up (with my $30 of radio receiving equipment and a linux vm) as a show of just how insecure it it.
However, it just feels illegal.
There is stuff here that says it is confidential, and shouldn't be shared. Considering it is a public broadcast that is not encrypted, how is it dissimilar to just yelling, in a public place, "What I'm about to say is confidential and should not be heard by anyone".
Any heads up or pointers? I've already brought this up with a friend of mine studying to be a lawyer, and he gave me an all-encompassing shrug.
(Also I'm Australian)
It is Illegal
Specifically, it is in breach of s7 of the TELECOMMUNICATIONS (INTERCEPTION AND ACCESS) ACT 1979:
(1) A person shall not:
(a) intercept;
(b) authorize, suffer or permit another person to intercept; or
(c) do any act or thing that will enable him or her or another person to intercept;
a communication passing over a telecommunications system.
"Telecommunications system" is a "telecommunications network" within Australia and a "telecommunications network" is "a system, or series of systems, for carrying communications by means of guided or unguided electromagnetic energy or both, but does not include a system, or series of systems, for carrying communications solely by means of radiocommunication."
Now, you might think that because the pager message is transmitted by "radiocommunication" its not a "telecommunications network", however, the key word here is "solely" - the initiation of the pager message happens through the telephone system so the message is not sent "solely by means of radiocommunication."
You are free to intercept (and decrypt) as much radio traffic as you like providing that it is initiated at a radio transmitter and terminates at a radio receiver (like CB radio or broadcast radio) - if it touches a telecommunications network its off limits.
The criminal punishment is imprisonment for up to 2 years, "aggrieved persons" can also seek civil remedies.
Huh. I was not aware that a pager transmission extends further back than a radio transmitter into a telecommunication network.
I thought OP was eavesdropping, not intercepting, as the messages will go though.
Wouldn't someone at the telecommunication company be liable if someone intercepted the broadcast due to (b) authorize, suffer or permit another person to intercept; and (c) do any act or thing that will enable him or her or another person to intercept; By not encrypting it, they are allowing the OP to intercept it with little effort and expense.
|
STACK_EXCHANGE
|
Considering that a wide range of high speed media - SATA/SAS/NVME SSDs, RAM, and the newer NVDIMM, can be used for caching or server side storage in VMware, our preferred in-host media by far is an enterprise grade NVME SSD. The specific SSD I recommend (as of 2020) is the Intel P4600 (if you have a spare PCIe slot in the host) or the Intel P4610 (if you have a spare U.2 NVME slot).
Not very well know is the fact that some enterprise grade NVME SSDs, like the Intel P4600, come in conventional PCIe form factor. So they can be installed in older servers that have an x4 (or wider) PCIe slot (more on that later in this article).
MSA storage arrays from HP are quite popular in small/medium size businesses. They are possibly the cheapest appliances from any big brand OEM that have all the enterprise grade features expected of such arrays. Their only drawback versus more expensive arrays is performance. For instance, the hybrid MSAs don’t cache writes to SSDs, they only cache reads1 or that the storage controllers even in the all-flash MSA are lower powered RAID controller processors (and not the beefier x86 processors), thus choking on high throughput low block size IO.
This article has more details on how VirtuCache improves the performance of these arrays.
Why are Write Latencies High Even When the EMC Unity Appliance has Plenty of System Cache Memory and Fast Cache SSDs?
If you are experiencing high write latencies in VMs when using an EMC Unity Hybrid array, it could be because the caching and tiering solutions in Unity don’t cache random small block writes, and your applications might be generating large volumes of such storage IO.
High Speed Storage is Back in the VMware Host with Hyper Converged Infrastructure and Host Side Caching, but the similarities end there….
The main advantage that hyper-converged infrastructure(HCI) has over traditional converged infrastructure (separate hardware for compute and storage) is that HCI has put high speed storage back in the compute nodes. This is also true for Host Side Caching software, with the added benefit that host side caching maintains the flexibility that Converged Infrastructure (CI) always had over HCI, that of being able to scale and do maintenance on compute and storage hardware independently of each other.
Other pros of host side caching + converged versus hyper-converged infrastructure are listed below.
Storage IO path in VSAN and VirtuCache are similar to a large extent, since both service storage IO from in-VMware host media. Though with VirtuCache, storage latencies are lower than with VSAN for four reasons:
Reads are almost always serviced from local cache media in VirtuCache. In VSAN there is a high chance that all reads might be serviced over the network from another host;
In addition to SSD, with VirtuCache you can cache to RAM which is the highest performing media there is, something that's not possible in VSAN;
Write cache flush rate will typically be higher for backend storage array than for locally attached storage. As a result write latencies will be lower with VirtuCache, because its flushing writes to SAN array;
VirtuCache is block based, VSAN is object based;
VMware will discontinue VFRC starting in ESXi 7.0 to be released in Q4 2019. 0
Despite the end-of-life announcement for VFRC, if you still want to review the differences between VFRC and VirtuCache, below are the three most important ones.
We cache reads and writes, VMware's VFRC caches only reads. Caching writes improves the performance of not only writes, but also of reads.1
We require no ongoing administration. Caching in our case is fully automated, and all Vmware features are seamlessly supported. Versus VFRC that requires administrator intervention when doing vmotion, for creating a new VM, for maintenance mode, for VM restore from backup, requires knowledge of application block size, requires SSD capacity assignment per vdisk. Many other tasks require admin oversight as well.
We provide easy to understand VM, cache, network, and storage appliance level metrics for throughput, IOPS, and latencies, and alerting to forewarn of failure events. VFRC doesn't.
Below is a longer list of differences, cross-referenced with VMware authored content:
The big difference between the two is that VSA caches only 2GB of reads from the Master VM1,2. VirtuCache caches reads + writes from all server & desktop VMs, and it can cache to TBs of in-host SSD/RAM, so that all storage IO is serviced from in-host cache.
More details in the table below.
SSDs deployed for caching in CEPH OSD servers are not very effective. The problem lies not in the SSDs, but because they are deployed at a point in the IO path that is downstream (in relation to VMs that run user applications) of where the IO bottleneck is. This post looks at this performance shortcoming of CEPH and its solution.
There are two options for improving the performance of CEPH.
Option 1 is to deploy SSDs in CEPH OSD servers for journaling (write caching) and read caching.
Option 2 is to deploy SSDs and host side caching software in the VMware hosts (that connect to CEPH over iSCSI). The host side caching software then automatically caches reads and writes to the in-VMware host SSD from VMware Datastores created on CEPH volumes.
Below are reasons for why we recommend that you go with Option 2.
How to Select SSDs for Host Side Caching for VMware – Interface, Model, Size, Source and Raid Level ?
In terms of price/performance, enterprise NVME SSDs have now become the best choice for in-VMware host caching media. They are higher performing and cost just a little more than their lower performing SATA counterparts. The Intel P4600/P4610 NVME SSDs are my favorites. If you don’t have a spare 2.5” NVME or PCIe slot in your ESXi host, which precludes you from using NVME SSDs, you could use enterprise SATA SSDs. If you choose to go with SATA SSDs, you will also need a high queue depth RAID controller in the ESXi host. In enterprise SATA SSD category, the Intel S4600/S4610 or Samsung SM863a are good choices. If you don't have a spare PCIe, NVME, SATA, or SAS slot in the host, then the only choice is to use the much more expensive but higher performing host RAM as cache media.
This blog article will cover the below topics.
- Few good SSDs and their performance characteristics.
- Write IOPS rating and lifetime endurance of SSDs.
- Sizing the SSD.
- How many SSDs are needed in a VMware host and across the VMware cluster?
- In case of SATA SSDs, the need to RAID0 the SSD.
- Queue Depths.
- Where to buy SSDs?
CEPH is a great choice for deploying large amounts of storage. It's biggest drawbacks are high storage latencies and the difficulty of making it work for VMware hosts.
The Advantages of CEPH.
CEPH can be installed on any ordinary servers. It clusters these servers together and presents this cluster of servers as an iSCSI target. Clustering (of servers) is a key feature so CEPH can sustain component failures without causing a storage outage and also to scale capacity linearly by simply hot adding servers to the cluster. You can build CEPH storage with off the shelf components - servers, SSDs, HDDs, NICs, essentially any commodity server or server components. There is no vendor lock-in for hardware. As a result, hardware costs are low. All in all, it offers better reliability and deployment flexibility at a lower cost than big brand storage appliances.
CEPH has Two Drawbacks - High Storage Latencies and Difficulty Connecting to VMware.
|
OPCFW_CODE
|
Extra HTML code added to my <table> (on Chrome DevTools) but not on source code
As I'm working on a <table> displayed on a page (still in Draft) on WordPress, something weird happened :
While I've written this very simple code on my editor :
<table>
<tr scope="row"><th>Title</th><td>Some data</td></tr>
<tr scope="row"><th>Title</th><td>Some data</td></tr>
</table>
…some extra code appeared while I'm inspecting it (actually the <tbody> element) :
<table>
<tbody>
<tr scope="row"><th>Title</th><td>Some data</td></tr>
<tr scope="row"><th>Title</th><td>Some data</td></tr>
</tbody>
</table>
I'm not using Gutenberg for managing this table. It's a pure html table displayed on a handmade template.
I can see this extra chunk of code when I use the Chrome DevTools inspector. And it doesn't appears on the website source code.
I can't figure out how this is happening. What am I missing here ?
As I'm working on a displayed on a page (still in Draft) on WordPress, something weird happened :
Nothing about this is weird or unexpected, but it would be if you were unaware of the DOM.
While I've written this very simple code on my editor :
This is raw HTML text sent from your server.
…some extra code appeared while I'm inspecting it (actually the element) :
Incorrect, this is not your HTML that you wrote on the server, this is the DOM, converted back into HTML text for your benefit in the user interface. The browser does not hold a HTML text chunk in memory, it parses and processes it first.
What's going on?
What you see in the browser dev tools is not the raw HTML the server sent, or even the HTML that's being rendered, it's a representation of the DOM, the document object model.
Your browser has taken that raw HTML and turned it into a tree of nodes, filling in gaps along the way. E.g. all webpages have a html and body node, and if you don't use a <body> or <html> tag in the raw source code the browser inserts one automatically.
Likewise all tables have a table body, wether you write the tag or not, the browser will assume everything inside <table> is the table body if you don't explicitly include it to try and be helpful, and because it's rare that a HTML document passes validation and is perfectly written.
Browsers are forgiving, and what you see on the page is the visual representation of a processed and parsed HTML document, turned into objects/nodes. The HTML you see in the browser dev tools is new HTML generated from that DOM tree it held in memory.
Take a look directly at the raw HTML source code sent by your server and you'll see there is no <tbody> tag, but even if there was it's still standard HTML that's been there in the spec for decades. Also take a look at table headers ( aka <thead> )
|
STACK_EXCHANGE
|
package canisius.jim.parts;
import canisius.jim.ruppet.Ruppet;
import javax.sound.midi.Sequence;
import javax.sound.midi.Track;
import java.io.File;
import java.util.Objects;
/**
* A {@code SoftwarePart} allows for a {@code Ruppet} to time specific actions to carry out during the running of a
* script.
*
* @author Jon Mrowczynski
*/
public abstract class SoftwarePart {
/**
* The {@code Track} that stores the timing information for the {@code Ruppet}'s mouth movements based on the timing
* information gathered from the {@code List}s {@code mouthDownTimes} and {@code mouthUpTimes}.
*/
protected final Track track;
/**
* The name of the {@code File} that contains the timing information for this {@code SoftwarePart} to run for the
* script.
*/
protected final String fileName;
/**
* The {@code File} that contains the timing information for the mouth movements.
*/
protected final File transitionTimesFile;
/**
* Constructs a {@code SoftwarePart} that can be used to time specific types of actions for the execution of a
* script.
*
* @param ruppet that this {@code SoftwarePart} belongs to
* @param actions that is used to create a {@code Track} that stores all of the timing information
* @param fileName of the {@code File} that contains all of the timing information for this {@code SoftwarePart}
* @throws NullPointerException if {@code ruppet}, {@code actions} or {@code fileName} is null;
*/
SoftwarePart(final Ruppet ruppet, final Sequence actions, final String fileName) throws NullPointerException {
Objects.requireNonNull(ruppet, "Cannot initialize a " + SoftwarePart.class.getSimpleName() + " with a null ruppet");
track = Objects.requireNonNull(actions, "Cannot initialize a " + SoftwarePart.class.getSimpleName() + " with null actions").createTrack();
this.fileName = fileName;
transitionTimesFile = new File(fileName);
}
/**
* Reads timing information from a {@code File} that will be used to setup the timed actions that will be carried
* out upon the execution of a script.
*/
protected abstract void readTimingInfoFromFile();
/**
* Sets up the {@code SoftwarePart} such that it will carry out specific timed actions during the running of a
* script.
*/
protected abstract void setupTimings();
/**
* Returns an {@code int} representing the number of transitions that have been read from the {@code File}.
*
* @return an {@code int} representing the number of transitions that have been read from the {@code File}
*/
protected abstract int getNumberOfTransitions();
/**
* Returns the {@code Track} that contains all of the {@code MidiEvent} timings.
*
* @return The {@code Track} that contains all of the {@code MidiEvent} timings
*/
public final Track getTrack() { return track; }
} // end of SoftwarePart
|
STACK_EDU
|
This is a quick case designed in OpenScad for 96Boards CE standard.
There are lots of options most are generally set by true or false. It can output a case for the normal size or the expanded size CE board. It currently supports adding a UART board within the case. And it allows you to expose the low or high speed connectors.
You may need to adjust the CE_spec_tolerance or the x_scaler, y_scaler and z_scaler variables depending on your 3D printer. This design was tested on an M3D micro printer and it generally fits my boards pretty well. If all dimensions of the case are too small you can adjust it with either the CE_spec_tolerance in one place or you can use the x_scaler, y_scaler and z_scaler variables. If your case prints out too small in only one direction or different amounts in different directions then you must use the scaler variables. The scaler variables default to 1 and are used as a multiplier for that direction. 1.xxxx increases the size, .9xxxxx decreases the size.
The standard CE case interior dimensions should be 54.25 x 85.25 x 12.25. You can calculate the % of change you need based on that.
The extended CE case interior dimensions should be 100.25 x 85.25 x 12.25.
Again you can calculate the % of change you need based on that.
A couple of items were noted with the DragonBoard 410c and corrected (I hope).
And it’s clear that sometimes parts shift when the board is being made, sometimes the connectors stick out over the front edge of the board so I’ve tried to make allowances for that too.
The only thing you need to download is the 96BoardCECase.scad file, the .stl files are 3D print file examples of what the cases could look like.
Or if the sample cases are exactly what you need you can download them directly.
One of the cool things about the 96Boards CE project is that all of the boards us the same pins for the Low Speed Expansion Connector so you can plug any expansion board into any 96Board. We can add the expansion boards here and make up custom case for that combination.
There are a BUNCH of true/false variables that can be selected and a couple of numeric variables too, Extracted from the code:
// Square edge case or rounded edge case? rounded_case = true; // Only the rectangle of the case rounded (sides) or all angles (top, bottom, // sides)
only_rectangle_rounded = false;
// How thick do you want your case walls(in mm)// Square edge case or // rounded edge case? rounded_case = true; // Only the rectangle of the case rounded (sides) or all angles (top, bottom, // sides)
only_rectangle_rounded = false;
// How thick do you want your case walls(in mm) Be careful if you are // setting rounded_case true, too thin of walls will leave holes Don’t go to // thick (much over 2.5) or you will have problems plugging in cables, so // really the range is about 2.00mm – 2.50mm at least for me. case_wall_thickness = 2.5;
// Extended board or regular true/false question, there are no extended // boards at this time (Oct 2015) but when there are we are ready. 96Boards_CE_extended_version = false;
// Do you have a UART board and want room to install it in the case? 96Board_UART_Board_Installed = true;
// The UART board has a reset button, if you want to be able to press it // true expose_UART_Board_Button = true;
// expose the low/high speed connectors or not true/false question expose_low_speed_connector = true; expose_high_speed_connector = false;
// The dragonboard 410c has 4 DIP switches on the bottom, true will make a // hole so you can reach them without opening the case. expose_DragonBoardDipSwitch = true;
// Do I want screw holes through the case? true/false question screw_holes = true;
// Do I want nut holes on the bottom screw_terminator = true;
// For exporting .stl models, this will cut the model in 1/2 at the board // top level. The board will fit into the bottom of the case cleanly and // the top will sit on it slice = true;
// top of the box or bottom slice_top = true;
// How round do you want holes the higher it set to the longer it takes to // render, at 50 it takes 2-3 minutes to render the model smoothness = 50; //10-100
// For development only. Do you want to see the full case, the full diff // model or the bare board model can help when adding new case type // Set true for final case, false shows you the board and screw layout case = true; 96BoardBlock = false;
96Boards CE Case
This is a repository for OpenScad files and stl files that create cases to the 96Boards CE specification.
Case/mount: Cases or other mounting/protection options
Hikey, DragonBoard 410c
Completed: a project that is finished and ready for others to use
|
OPCFW_CODE
|
Hello, I have design a waveform chart with 420x255 pixels (420 for X axies & 255 for Y one)
I dialog with a SBRIO and labview with RS232. When the screen is ready , it send a FE FF FF FF , The sbrio send the data and the screen return FD FF FF FF and so on.... At the end of the 420's datas , nothing happens. I saw on the debug mode that the screen slides on the right or on the left. It doesn't work for me , i didn't receive FE FF FF FF......
just post your HMI ... otherwise it is hard to say what goes wrong ...
NX4832K035_011R ehanced model
HMI is not the Display-Model ... HMI is your Nextion-Editor Project-File ... :-)
yes waveform act corretly . I receive FE FF FF FF, then SBRIO send the value. After I receive FD FF FF FF......... but At the end of the graph, nothing happens.
The demo mode works correctly as your video.
off course, but I idon't know what happens with the debug mode (generation of random number)
Shall I do something special at th end of the graph to have rolling?
Re-Review add and addt commands of the Nextion Instruction Set
These are the only two commands that can insert data into a Waveform.
add inserts single value
addt inserts many values, (the one you use - 0xFE and 0xFD is addt)
regardless of any other code on waveform page
your command of addt 1,0,1 will go through entire steps of
- wait for 0xFE 0xFF 0xFF 0xFF
- then you must send the 1 single requested byte value
- wait for 0xFD 0xFF 0xFF 0xFF
according to your code this 1 single byte value is inserted left on waveform.
this you do in a loop until va5 is 421
- so starting at 0, you must repeat your "wait-send_byte-wait" for each of the 420 bytes
AND all of that is what you request to accomplish every 50ms of timer when start is =1
Furthermore, I have to imagine that the release event bytes interfere part way between
To accomplish 420 waveform byte values in the manner you attempt
- at best conditions with zero delays you have 3787 bytes (420*(4+1+4)+7)
in 50ms, needing a baudrate of 757400 in perfect scenario
much too fast Nextion baudrate.
Waveform works like this.
- add a value for a channel,
it is placed next to draw.
it is drawn at next position
THIS keeps going across the waveform canvas until FULL.
add value to channel,
it is placed next to draw
position to draw at is last of waveform canvas
all of waveform values slide one pixel
value is drawn at last position.
THIS is accomplished without user intervention
user only needs to add values and ta-da!
Your code should probably be concerned with adding value when it has value to add.
For this, I recommended re-review add and addt commands
when only adding 1 byte there is no sense in using an addt multibyte
- wait for 0xFE just to send one byte value, and wait again for 0xFD.
instead the add command carries only one byte value.
seems more perfect for 1 byte.
But when trying to add 420 values (assuming 420 values are available)
then addt can request multiple bytes (there is a max in single shot)
addt 1,0,100 will go like this
wait for 0xFE
send 100 byte values at once
wait for 0xFD
which scenario is right for you, depends on when data is available.
Hello, i understand that with this code here; the graph stop after 420 values
How can I use add function (without FE FF FF FF & FD FF FF FF) and receive data from uart?
Does I send in EEPROM my data and after do this?
add 1,0,data in eeprom
If I succeed with it, I can start sending value from SBRIO when the operator touch the "start button" by the function called "send component ID" and stop it when SBRIO receive the ID of "retour button"
Hello, I try to play with add function and UART. I put a double state button. In tm0, i check the state of it , I send printh 7F to uart to say to the SBRIO that It could send a data to eeprom. I read this data and I put it in add 1,0,va0.val. It doesn't work well. I think I have to find a good value for tm0.tim
Sorry, you are much confused in your understandings.
add 1,0,data ... does not go to eeprom,
it goes to waveform of current page with id 1 on channel 0
and adds the point data at next position.
An EEPROM available on the Nextion Enhanced models is
a 1K chip that is completely independent of a waveform
- eeprom commands in the Nextion Instruction Set
include: wept, rept, wepo and repo
Commands add and addt are specific to a waveform.
You are tripping over your coding logic,
now trying to get multiple band-aids on broken code.
Perhaps starting from scratch may produce faster path to results.
- waveform has been plaguing you for over a month now.
An MCU is going to do exactly as told to do, nothing more nothing less
What is not purposefully programmed will produce unexpected results.
Tossing in bells and whistles before functionality, get in your way.
- you have events working out of your control - hence does work right.
Strip it back to the basics and rebuild it back up - but purposefully.
So what is the specs of an SBRIO, and what is purpose of waveform?
|
OPCFW_CODE
|
Editor’s note: Ukiah High students participated in the Girls Programming League Challenge at Harker School in San Jose last Saturday. The UHS team traveled the farthest distance to compete and all other teams were from the Bay Area/Silicon Valley. “We know we are competing against top Silicon Valley high schools but we wanted the experience to build future teams,” UHS computer science teacher Edwin Kang said, adding “We are returning again on St. Patrick’s Day, March 17, 2019” for the Harker Programming Invitational. Kang said the event invitation for last weekend’s GPL Challenge was sent out very late during the first week of school, and that it was the first time the Ukiah students had worked with Python programming. Kang and parent Anna Au are the Ukiah team’s coaches and drivers.
By Azaliah Garami
The GPL Challenge 2018 was a group problem solving challenge that required us to use the coding languages of Python, C++ or Java in order to translate the question and print out the solution into something the computer could understand.
Python was the hardest of the three possible languages which we were told to study and use for the competition. While we were unprepared for the competition, it was enjoyable to try and solve the questions presented to us and throw out our own ideas to one another. The teamwork aspect of the competition was enjoyable even when under stress from the actual competition and frustrated that our own answers were not “correct” because they were written slightly differently. You could feel the frustration and tension in the air in the small and silent room we were in with about six or so other groups.
The actual competing ended after about three hours and afterwards we got to enjoy the presentations given to us by powerful successful women in the STEM field (science, technology, engineering and math). I couldn’t focus during one of the presentations due to a lack of sleep like many other people in our group. We were treated very well by the hosts of the competition, who gave us special recognition for traveling the three or so hours it took to drive there, starting at 5:30 in the morning.
After the presentations, we had the opportunity to ask a panel of more successful STEM women questions about their own experiences in STEM, which was exciting. While we didn’t win any prize from the competition itself and we were all exhausted afterwards, I know that myself as well as my fellow STEM Club members are excited and looking forward to competing again in March alongside the boys.
By Miranda Ung
The GPL was a challenging competition that required creative thinking as well as teamwork to solve the challenges they threw at us. The competition overall was both frustrating and fun. In addition to trying to solve the problem, we were able to meet amazing women who work in the STEM field.
Overall, it was a fun and worthwhile challenge to partake in and I am looking forward to retaking the challenge once more. “We won’t give up!”
By Star Munoz
I had never written a single line of code before the GPL challenge and learning Python in less than two weeks was nerve-wracking, but the overall experience was amazing. Being able to listen to some of the most influential women in the STEM field talk about the advancements in technology that they helped discover and create was both interesting and inspiring.
I now plan on learning more code in order to prepare for the next competition as well as because I realized I really enjoy coding.
By Sandra Guevara
This was my first experience at a competition having to do with computers/programming, so I was nervous about it. It ended up being extremely difficult, because I hadn’t prepared much beforehand, but it was so much fun and an amazing experience. I now know what to expect so I’ll definitely try and improve before then!
Not only was I able to participate in a competition, but I also had the opportunity to meet some amazing and inspiring women who work in the STEM field. All in all, this was a wonderful experience and I can’t wait to do it all again in March.
By Ciera Pearson
I have been coding for a while before this competition, since seventh grade to be exact, but I have never competed in an event for coding. When my teacher, Edwin Kang, first told me about this competition I was skeptical. I kept thinking, “How are we going to learn a new type of coding in a week? It usually takes a couple of weeks just to understand the basics.” However, I knew that if I didn’t take the opportunity there might not be another one like it. So I took up the offer and went to compete and represent Ukiah and Mendocino County.
What was challenging for me was trying to learn a new type of coding in a week basically and having to get up at 3:45 in the morning to drive three hours to make it on time. Even though we ended up toward the bottom in placements, the experience was amazing and I enjoyed every second of it and gained the opportunity to meet some amazing women who work in the STEM field.
Since I want to try and do computer engineering/science when I graduate from high school, I think this experience was a great learning tool in my process of achieving my goal. I am definitely excited for the next competition in March and can’t wait to be a team to beat this time around!
By Sofia Parsons
This event was a great learning and bonding experience. Though most of us were not very well prepared for the competition, we got to see how it works and we will definitely prepare more for the next event in March. I found the challenge fun and it only motivated me further to study code and improve. I got to bond more with my teammates, which is really the important part. We also got T-shirts.
By Kenya Ramirez and Kaia Oropeza
12th and 9th grade — sisters
Overall, from waking up at 4 in the morning and driving to a destination three hours away to go to a competition we were hardly qualified for did not seem like the ideal way to spend a Saturday, but we believe the experience was well worth it.
When we choose to go again we’ll have a much greater understanding of what our task will be, and the next time we’ll know we’ll be going to win a prize.
By Daniel Au
Overall, my experience in the GPL was very inspiring. I was mainly outside waiting for the competitions to end so that I could talk to the teams to find out how they did. I also attended the several panels that delved into the studies of AI (artificial intelligence) and computer science.
I could not compete this time around as the competition was only for girls, but I plan on studying a lot of languages such as C++ and Python in the very near future to prepare for the next competition in March, which will include boys and girls.
By Armani Cardenas
My experience overall was mainly observing the main competitors of our school facing off against other schools. I enjoyed the panel portion of the event because I have a passion in the STEM field, as does everyone else at the event.
If I decide to go to the next event myself and actually compete, I’ll be sure to contribute more to help my friends. Overall, it was great to bond with everyone, which will help us to do better next time as a team.
|
OPCFW_CODE
|
M: Ask HN: which database should I learn. - vicks711
Hi I am changing my profession from financial services to programming. Which RDBMS should I learn? I am confused between MySQL PostgreSQL mangodb etc. please guide.
R: andymoe
Start with sqlite (Easy to set up, file based, good for playing and for
embedded systems) then PostgreSQL or MSSQL if you are going to be in Windows
land. MySQL later if you have to. It's fallen out of favor in some communities
since Oracle bought it and it's just not as powerful as PostgreSQL though it
does the job.
Also go buy the book "Joe Celko's SQL for Smarties" [1] (And everything else
he wrote on SQL...) and actually learn what SQL and DDL is really all about
how to properly model data. Lean some SQL and DDL _before_ you start messing
with ORMs like Rails Active Record or SQLAlchemy and all that stuff.
[1] [http://www.amazon.com/Joe-Celkos-SQL-Smarties-
Fourth/dp/0123...](http://www.amazon.com/Joe-Celkos-SQL-Smarties-
Fourth/dp/0123820227)
Also, maybe just go read "SQL For Web Nerds" right now but take it with a
grain of salt and again forget Oracle for now. It's old but good...
<http://philip.greenspun.com/sql/>
R: vicks711
Thanks
|
HACKER_NEWS
|
Yes, I read all the answers to most of those who asked before me. However, I’m new to CSS and my hosting co is refusing to help unless I give them $. Simple, Wednesday I found out the theme I was using is no GPL and was advised by a WP moderator to change into a GPL theme. I searched and searched and found a theme which was appealing. Mantra is the name. I installed it and as soon as I changed the background image, problems started. First, when I changed the background image, my sidebars dissappeared, meanwhile, my old posts where not showing. I kept working the theme though because the site design came out so cool. After I finished it, 12 hours later, my old posts started showin g wheter I pressed images in the front page with htref links or I used the menu. However, I went into my control panel and removed those unneccesary themes I had, like twenty thirteen, twenty ten and some filibusteos magazine thingy. After that, I went into my style sheet and added the scripting for the sidebars left and right. They then started showing, on post pages, inside the center, but my problem is, now I can’t see my front page at all. I’m lost and the hosting co wants 500$ from me to solve this. $ I do not have since my product is not selling too good. To complete, they say when goin g into the main page and right clicking and lookin g into VIEW SOURCE they see something totally different from what I see. I’ll add the code I see and the one they say they see. My site is http://www.latinzonemagazine.com. I went back to the original not GPL theme, hoping it’s settings will solve this, but no. It didn’t happened. Help please. Any aditional questions I’ll answer immediately since I want to solve this.
What I see:
<link rel=’stylesheet’ id=’scap.flashblock-css’ href=’http://www.latinzonemagazine.com/mag/wp-content/plugins/compact-wp-audio-player/css/flashblock.css?ver=3.6.1′ type=’text/css’ media=’all’ />
<link rel=’stylesheet’ id=’scap.player-css’ href=’http://www.latinzonemagazine.com/mag/wp-content/plugins/compact-wp-audio-player/css/player.css?ver=3.6.1′ type=’text/css’ media=’all’ />
<link rel=’stylesheet’ id=’jetpack_likes-css’ href=’http://www.latinzonemagazine.com/mag/wp-content/plugins/jetpack/modules/likes/style.css?ver=2.5′ type=’text/css’ media=’all’ />
<link rel=’stylesheet’ id=’dlm-frontend-css’ href=’http://www.latinzonemagazine.com/mag/wp-content/plugins/download-monitor/assets/css/frontend.css?ver=3.6.1′ type=’text/css’ media=’all’ />
<link rel=’stylesheet’ id=’jetpack-widgets-css’ href=’http://www.latinzonemagazine.com/mag/wp-content/plugins/jetpack/modules/widgets/widgets.css?ver=20121003′ type=’text/css’ media=’all’ />
<link rel=’stylesheet’ id=’sharedaddy-css’ href=’http://www.latinzonemagazine.com/mag/wp-content/plugins/jetpack/modules/sharedaddy/sharing.css?ver=2.5′ type=’text/css’ media=’all’ />
What they say they see:
<link rel=”stylesheet” type=”text/css” href=”//templates/wistie/css/style.css” media=”screen” />
<link rel=”stylesheet” type=”text/css” href=”//templates/wistie/css/dropdown-default.css” media=”screen” />
I have curently Film Plus installed, however this 404 error repeats on evry theme I install. And no, I do not think I have a wistie theme. I do have one lin redirect to another site from weebly and one plugin for videos called slydely
- The topic ‘404 error in front page and hosting co is charging 500$ to solve it’ is closed to new replies.
|
OPCFW_CODE
|
For those who never played it, Thief was an absolute hoot: a first-person game that *wasn't* mostly a shooter. Instead, you played a thief, so you were trying to sneak around and steal things. While it was possible to get into fights and win them (using your trusty sword, which was essentially the same object as System Shock 2's wrench), it wasn't the way the game was optimized -- instead, it was largely about sneaking around, hiding in dark places until the guards passed, and using your bow and trick arrows to do things like shoot out the lights.
Thief was how I wound up in the game business in the first place. I'd been working for Intermetrics for ten years, doing mostly fairly boring corporate stuff, when the company got bought by a small-time media mogul who wanted to use it as a springboard to build a new-media empire. We were working on some game projects in-house (I wrote a pitch for us to do a Babylon 5 MMORPG way back in 1997, and we lost a good six months being jerked around by Paramount Digital on a ST:TNG game), but nothing came of most of them. So when Looking Glass found itself in financial hot water (due to the fact that British Open Golf sold about four copies), he swung in and bought the company.
At that point, those of us at Intermetrics who were looking for more interesting things to do seized our chance. My boss (the VP of Engineering, Bill Carlson) jumped ship to LG, and took several of his favorite engineers (including myself) with him. We got tasked with various jobs as "consultants" -- Mike began the initial development of the multiplayer engine (which I more or less completely rewrote the following year for System Shock 2), and I wound up working for Tom Leonard, writing the Dark Engine's resource-management system to his designs. Thief was ramping up to its big final push after years of development (having started life as Dark Camelot, a game about sword-fighting with zombies), so it was a crazy but fun time.
It was a short project -- I worked on Thief for about three months before moving over to System Shock 2 -- but pretty life-changing. Bill arranged to have us all hired formally by Looking Glass just before Intermetrics (by now itself a division of the aptly-named Titan Corporation) sold the company again. (I've always assumed that it was no coincidence that Bill arranged for us to change companies the day *after* we all got vested with stock.) Switching from a biggish company to a smallish one was a revelation: I discovered that, for all the horrors of the game industry, working in a frenetic close-knit team is *vastly* more fun than a bland corporate programming job, and that I vastly prefer small companies. While I left the game industry when LG shut down, I still count it among the most formative experiences of my career.
Anyway, fingers crossed that Thief 4 lives up to its predecessors -- that the folks working on the new game grok *why* Thief was so great, and create something new and cool along those lines...
|
OPCFW_CODE
|
Does anyone knows if this Firebase Authentication extension still works 2023?
I need to get refresh token to work in my Kodular app, and for what I've learned so far this extesion is my best shot. Default FB Authentication on Kodular doesn't seem to support refreshing a login and thus is sending my users back to login screen every hour, not the best user experience.
Have you tried using the extension in AppInventor? Might be that Kodular blocks the extension because of its own implementation for firebase ?
You can always use the Firebase REST api and the web component to refresh the auth token....
This guide is woking wonderfully. I had overlooked it because the title made me think it was just about Firebase DB, not authentication. After reading it more carefully I could find the answers to make it work. I'm currently working on the implementation, but the instructions seem to still be all valid to this date.
It is possible, I believe, to use federated logins (login providers) with the REST api, but it becomes more complex, as it is not possible to, for example, sign into Google using the web component. A combination of a webviewer (with modified user agent), and use of the webviewstring, and the web component, will be required to sign in, get the 0Auth token/s then use this to authenticate in Firebase.
Since I'm kinda in a schedule I'll keep email/password for the sake of simplicity. But I'll save these ideas for later.
I appreciate you taking a time to look into this. Thanks!
com.django.waveaudiotools.aix (16.9 KB)
com.django.wavrecorder.aix (16.8 KB)
g10dras (for good measure)
It may have been removed from Luke's website for a reason.....
WebViewTools - Version 9 - RELEASE.aix (29.0 KB)
It also may have been removed from Luke's website for a reason....
(The source code is still up on github though....)
May i have the link of this collection please
I think all in this thread already.
A post was split to a new topic: Printing QR Codes?
co.com.netConnected.aix (5.7 KB)
(found by accident in an old project)
A post was merged into an existing topic: Printing QR Codes?
Regarding the @Andres_Cotes file
co.com.dendritas.Animacion.aix should appear the following blocks, which only show the ones you show, do you know if the original file could be obtained and said extension can be recovered
Hello Rod, where did you find the screenshot you showed? Or do you have an AIA file of the extension that contains the relevant blocks? Maybe what I found is an older version of the extension.
I'm in a hospital and I'll update it when I get home.
I was sorry to hear that you are hospitalized. I am wishing you a speedy recovery. Please take care and rest well.
Hello, I hope you recover soon, I am new to the invention app, I saw a video on YouTube that I attached the link Crear una pantalla intro o splash screen con MIT App Inventor - YouTube where I followed the creation process and there the file that I mentioned was mentioned ( from there take the screenshot)
Andres Cotes Dendritas Table extension:
thank you i really needed this extension
|
OPCFW_CODE
|
[Bioperl-l] Use of Bio namespace
10 Oct 2000 14:07:04 +0100
>>>>> "Ewan" == Ewan Birney <email@example.com> writes:
Ewan> I would hope that sequence/feature stuff could be merged
Ewan> inside bioperl but there
Ewan> (a) maybe good reasons not to
Ewan> (b) you may not want to ;)
Ewan> both of which are sensible compliants.
I've been having a look at making my Sequence class SeqI compliant and
my Feature class SeqFeatureI compliant. It's been a bit tricky trying
to work out what is the best way to treat fuzzy ranges (which I've
supported) in a bioperl Seq.
Ewan> I guess - hmmmmmm - this is hard. I suspect the right thing
Ewan> to do is
Ewan> - for really different stuff, eg Ecology, it should
Ewan> get its own top-level namespace.
Ewan> - for similar stuff, people should negoiate a
Ewan> namespace that can be kept separate for their work, for
Ewan> example, I could imagine
Ewan> being given out to a separate expression focused
Ewan> group. Bio::TreeOfLife would be another one.
Ewan> I guess anything molecular biology orientated should
Ewan> end up inside Bio:: but by no means handled by Bioperl.
Ewan> I certainly don't want to stop anyone submitting anything to
Ewan> CPAN, so make a proposal for what you want to submit or how
Ewan> you would best like it done.
Okay. It's good to know roughly where things are going even if none of
our modules are released (if I put things under Bio:: on my or the
Pathogen Sequencing Unit's local Perl lib).
Ewan> I would also encourage you to
Ewan> - if possible, work with bioperl or criticise bioperl
Ewan> if it wasn't good enough for what you wanted to do.
It seems like bad form to criticise when I haven't contributed very
much to bioperl (if I don't like it, I should fix it...). I had a go
at hacking bioperl a while back but found my limitations (never
written a Perl module, knew nothing about OO coding) so I needed to
write some stuff from scratch to see how it all worked.
Stuff I wanted was:
Non-fussy but fairly complete EMBL parsing
Terse, but intuitive manipulation of feature qualifiers in scripts
Features with & without sequence
Clone, trim, reverse-complement sequences with all the features
Fuzzy ranges (parsed from EMBL, supported in other operations)
Low memory Blast parsing
Fasta search output parsing
I'm in a better position to work on bioperl now, but still find a lot
of it hard to follow (esp. where the methods have no documentation -
this isn't just me, I know others who have been discouraged from
working on it for this reason).
As I'm sure you can appreciate, there is the time aspect to this as
well. Annotation projects need to keep to deadlines and if writing a
new module is significantly quicker than modifying an existing one,
that's the way it goes.
To be honest, these modules were not originally intended for release
(hence their cutesy and non-CPAN acceptable names). However they have
since been used in some scripts (cos we've found them easier than
bioperl) which we now need to distribute, so the issue has come up. I
would prefer to integrate at some point, if possible.
-= Keith James - firstname.lastname@example.org - http://www.sanger.ac.uk/Users/kdj =-
The Sanger Centre, Wellcome Trust Genome Campus, Hinxton, Cambs CB10 1SA
|
OPCFW_CODE
|
Prerequisites for Gunnar Carlsson's Topology and Data
I am planning on doing a project on topological data analysis in the near future and intend to use Gunnar Carlsson's paper "Topology and Data" as my introduction to the field. I am familiar with point-set topology and some differential geometry but know little about algebraic topology which is the mathematical foundation of this branch. My question therefore is, what topics in algebraic topology should I study beforehand?
I have looked around on the internet but can't find a list of topics I should be acquainted with. Scanning the paper itself I've noticed some relevant definitions and theorems are given but I presume the paper is not self-contained. Furthermore I noticed a theorem (2.4, pg. 9) involving Riemannian manifolds. Will I need any Riemannian geometry?
A related question, to what type of data set are the methods described in this paper particularly suited? Is there an easy to understand example? I can't be more specific about this last question because obviously I have not yet read the paper. I am looking for a general answer.
Thanks in advance!
Do you mean this paper? https://www.math.kth.se/math/GRU/2013.2014/SF2704/Papers/Topologyanddata.pdf for sure you should study some Homology (Hatcher's book is a good starting point) at least its principal ideas.
Yes, that's the one! Does that mean I'm looking at at least the first two chapters of Hatcher?
The second chapter is the essential one. Depending on your taste another book could be Matveev's "Lectures on Algebraic Topology", it is very concise and the first chapter covers a lot of interesting topics.
@WarlockofFiretopMountain, if you're around today, if you'd like to expand your comment into an answer I'd be happy to award you the bounty.
As I have said in the comments, for sure it would help to know the basics of Homology (I have seen that the author also mention homotopy and the $\pi_1$ group but I assume that you already have seen these things).
The problem is that those texts that I mentioned above (Hatcher and Matveev) are a good introduction but from another point of view maybe are too much.
I mean that it is good to know simplicial, singular and cellular homology but probably to understand the paper you don't need all of them. You should look into something more oriented to topological data analysis.
So, if you want to build solid basis for future studies in Topology, then it would be a good idea to read for example the first of the two chapters of Matveev, maybe skipping the proofs and focusing on the definitions*.
I think that Matveev's book it is a good trade off between conciseness and doing all the steps.
The paper seems "almost" self-contained, it defines what is an abstract simplicial complex, Cech complex and persistent homology.
Maybe it would be a good thing to have a more detailed text on these topics.
I have found this interesing thesis of Brian Brost
http://www.math.ku.dk/~moller/students/brian_brost.pdf
The first two chapters seems to expand nicely the over mentioned topics.
I hope it helps.
*Sections from 1.1 to 1.6 are fundamentals (i.e. IMHO good to build some knowledge about simplicial homology) as sections from 1.9 to 1.14., the rest is probably even more far from your actual aim.
|
STACK_EXCHANGE
|
Add Scaler that Read Metrics From Current Custom Metrics Adapter
Metric Type and Kubernetes Metric API
There are several metric types when defining HPA: Resource , Pods, Object and External:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: test
spec:
metrics:
- type: Pods
pods:
...
Resource type uses Resource Metrics API(v1beta1.metrics.k8s.io).
Pods type and Object type uses Custom Metrics API(v1beta1.custom.metrics.k8s.io).
External type uses External Metrics API(v1beta1.external.metrics.k8s.io).
KEDA occupies the External Metric API, and provide cpu and memory triggers which read metrics from current Resource Metrics API adapter, but no trigger read metrics from current Custom Metrics API adapter.
Scenario: Migrate HPA from Pods or Object metric type
Some cloud vendors provide rich metrics for HPA by default. For example, Tencent TKE provide a lot of HPA metrics for users: https://www.tencentcloud.com/document/product/457/34025
And these metrics are based on Custom Metrics API, which means it has a default adapter of Custom Metrics API, users can define HPA like this:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: test
minReplicas: 1
maxReplicas: 100
metrics:
- pods:
metric:
name: k8s_pod_rate_cpu_core_used_limit
target:
averageValue: "80"
type: AverageValue
type: Pods
- pods:
metric:
name: k8s_pod_rate_mem_usage_limit
target:
averageValue: "80"
type: AverageValue
type: Pods
- pods:
metric:
name: k8s_pod_rate_gpu_used_request
target:
averageValue: "60"
type: AverageValue
type: Pods
But if users want to use KEDA to add some triggers to the same workload, they need to delete the previously defined HPA because KEDA and HPA cannot be used together, and KEDA didn't provide a trigger that can read metrics from current Custom Mtrics API, so this prevents users from migrating to KEDA.
Proposal: Add Scaler that Read Metrics From Current Custom Metrics API Adapter
The Pods and Object type metric have multiple levels of definitions, and metadata is a map[string]string. It is not possible to directly move existing HPA pods and object metric definitions to metadata. We need to consider how to design it.
Maybe it's better to allow keep existing metrics spec when resue an existing HPA?
For example, add scaledobject.keda.sh/keep-existing-hpa-metrics-spec annotation to ScaledObject:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
annotations:
scaledobject.keda.sh/keep-existing-hpa-metrics-spec: "true"
spec:
advanced:
horizontalPodAutoscalerConfig:
name: test
Then KEDA only add extra external metric spec to HPA metrics list, keep existing HPA metrics spec in the HPA metrics list.
Maybe it's better to allow keep existing metrics spec when resue an existing HPA?
This is not friendly to GitOps, for example, use ArgoCD to manage YAMLs, and it both include HPA and ScaledObject which reuses HPA, the KEDA will change exsiting HPA's spec, and ArgoCD found that HPA is been changed, then it change it back to original defination.
I think I found the best solution, add a field to horizontalPodAutoscalerConfig, let's say it extraMetrics, we can paste HPA'sspec.metrics to ScaledObject's spec.advanced.horizontalPodAutoscalerConfig.extraMetrics:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: test
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: test
pollingInterval: 15
minReplicaCount: 1
maxReplicaCount: 100
advanced:
horizontalPodAutoscalerConfig:
name: test
extraMetrics:
- pods:
metric:
name: k8s_pod_rate_cpu_core_used_limit
target:
averageValue: "80"
type: AverageValue
type: Pods
- pods:
metric:
name: k8s_pod_rate_mem_usage_limit
target:
averageValue: "80"
type: AverageValue
type: Pods
- pods:
metric:
name: k8s_pod_rate_gpu_used_request
target:
averageValue: "60"
type: AverageValue
type: Pods
triggers:
- type: cron
metadata:
timezone: Asia/Shanghai
start: 30 9 * * *
end: 30 10 * * *
desiredReplicas: "10"
And KEDA then populate the metrics spec to the HPA that been managed to KEDA.
Sorry for the slow response, my life's been a chaos :(
I guess that we can include this as another scaler that users can use. I see the potential of this and the use case is really nice as KEDA could work together with other custom metrics provider. At this point (and considering that custom and external metrics have the same metric types) I'm not sure if we should configure a custom metric or just wrap it with an external metric (keeping the control to KEDA for other features like formulas or observability)
@tomkerkhove @zroubalik @dttung2905 @wozniakjan ?
Sorry for the slow response, my life's been a chaos :(
+1, apologies for letting this one through the cracks as well
Add Scaler that Read Metrics From Current Custom Metrics API Adapter
I guess that we can include this as another scaler that users can use.
I can see the benefit of this, I'd be happy to review a contribution and help to shape it so it can be merged, but I don't have the capacity currently to implement this
|
GITHUB_ARCHIVE
|
Microsoft are making some big changes to their Nano Server operating system. I’m going to briefly cover what they are doing and consider how it might impact existing users and future plans for those of us who have not yet deployed it into a production environment.
The below was taken from the following Microsoft link – https://docs.microsoft.com/en-us/windows-server/get-started/nano-in-semi-annual-channel
However, starting with the new feature release of Windows Server, version 1709, Nano Server will be available only as a container base OS image. You must run it as a container in a container host, such as a Server Core installation of the next release of Windows Server. Running a container based on Nano Server in the new feature release differs from earlier releases in these ways:
- Nano Server has been optimised for .NET Core applications
- Nano Server is even smaller than the Windows Server 2016 version
- Windows PowerShell, .NET Core, and WMI are no longer included by default, but you can include PowerShell and .NET Core container packages when building your container
- There is no longer a servicing stack included in Nano Server. Microsoft publishes an updated Nano container to Docker Hub that you redeploy
- You troubleshoot the new Nano Container by using Docker
- You can now run Nano containers on IoT Core
You can also get some additional information in this Microsoft MSDN Channel 9 video – https://channel9.msdn.com/Events/Build/2017/B8013
The video and the slides that they use provide some more insight into the decision to (as the present states) start ‘gutting the system’. Let me provide some bullet points summarising information from the video.
Current Nano –
- Uncompressed container image ~1GB
- Included components which were not relevant in containers
- Components which were optional not implemented as layers
Version 1709 and future Nano –
- A ‘significant’ reduction in size (both on disk and at pull)
- Removal of components not relevant to containers
- Optional components now implemented as layers
I had seen some articles stating things like PowerShell would no longer be available at all on Nano but this is not the case; it would now be a layer you choose to add to your container. By migrating to a layered approach Microsoft can trim down Nano even further. I will have to keep a close eye on future releases of Nano server to see what Microsoft remove and also what functionality (as layers) they choose to implement. Personally I think if I wanted to run Docker like containers I’d just use Linux Docker containers or possibly VMware Photon if we choose to drop Hyper-V and move back to running a single hypervisor rather than two in our production environment.
Regarding the removal of what many term the ‘infrastructure roles’ I see this as a backwards step by Microsoft. You can find many videos and PowerPoint slide decks by Microsoft extolling the virtues of Nano server of both the full GUI Windows Server install and also Server Core. They go to great lengths to explain why Nano will be so much better for us and before many of us have even had a chance to deploy it in production they are removing that functionality. I’ve seen comments that the uptake of Nano has not been that great but as a company they must remember those of us running infrastructure can’t just drop everything we have and rebuild our estate with a new OS. Microsoft seem to be making every effort to get developers to use Nano and as such as building it for them while forgetting or at least ignoring for now the needs of the infrastructure engineers. We will have to wait and see what future releases bring but I don’t hold out much hope of anything useful to me in the next few years.
|
OPCFW_CODE
|
I’ve held this post on hold for a while, but it’s time to release the Kraken… or well, the spreadsheet! I’ve decided to split this post into two parts because the first headline is why I’m really writing this post and I think it can be of great help for many of you who decide to read this. If you’re interested in my path to where I am now I’ll be posting that in the “Getting a Developer Job” section, since I don’t think it belongs here.
Why I’m Writing This Post
Time and time again the same question is asked by people wanting to become developers: “how much should I study?”. Often combined with: “what should I learn?”. And the same answer is given for both of these: “be consistent and focused”.
For me I’ve had a somewhat definite goal in mind since I started the grind towards a new career, I want to do web development. I don’t care a lot for robotics and I’m not interested enough in math to do Data Science. I also believe that the web is the future and that most native apps will slowly die out in favour of web apps. To help me with my studies, I’ve kept a spreadsheet going, which I’ve modified a little bit to remove personal information, but which you can find here:
How Do I Read This?
The spreadsheet is split into 3 different columns, repeating a few times because a year consists of many days. The leftmost column are dates (obviously), the middle column is what I’ve planned to accomplish that day and the right column are notes to myself about the day. Perhaps I won’t have a lot of free time that day? Or maybe it was harder than I thought?
The spreadsheet is also color-coded, where black is days, white is planned stuff that I’ve not yet accomplished and gray means I’ve accomplished what I wanted to do that day. That doesn’t necessarily mean that the previous day is grayed out immediately, because it depends on if I’ve actually accomplished the task or not. Sometimes I’ll have some leftovers that I’ll have to fit in another day.
Lastly, the right column can have three different colors: light blue, pink or blue. They correspond to the days of the week and since my non-developer job is just at 80% (i.e. 32h/week) each week usually have 4 light blue rows and 3 pink rows. If I have to work extra, those days turn light blue and if I have some PTO, those days will turn pink. As you can see, some days in the latter months are a solid blue color, which are the days where I’m working at my new developer job rather than at my old job or remotely (I’ve been working 60h weeks for a while now).
How You Can Use This
It may be obvious to some and not to others, so I figure I’ll provide some advice. This spreadsheet basically shows what I’ve done coding wise for the past year and I realize (and you should too!) that not everyone can put this amount of time into coding every single day. I should point out that I have both a fiancée and a dog, so my time is not completely unconstrained. You may:
- Have kids
- Have a non-understanding partner
- Work double jobs
- Have low self-discipline
- Have ADHD or similar
Or perhaps something else. Any of these will make your journey take longer time or be more difficult, but you can still make it! Have a look at my spreadsheet and figure out what would work for you.
- Can you do four days a week, or five, or seven?
- How many hours a day can you keep as a minimum?
- What do you want to focus on?
To finish off, don’t beat yourself down if you feel slow. Or if you feel stupid. Or if that guy/girl is so much better than you. Everyone learns at their own pace and the absolute best way to learn is to fail.
|
OPCFW_CODE
|
import html
from typing import Union, List
from sklearn.manifold import TSNE
import tqdm
import numpy
from flair.data import Sentence
from flair.visual.html_templates import TAGGED_ENTITY, HTML_PAGE
class _Transform:
def __init__(self):
pass
def fit(self, X):
return self.transform.fit_transform(X)
class tSNE(_Transform):
def __init__(self):
super().__init__()
self.transform = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
def split_to_spans(s: Sentence):
orig = s.to_original_text()
last_idx = 0
spans = []
tagged_ents = s.get_spans('ner')
for ent in tagged_ents:
if last_idx != ent.start_pos:
spans.append((orig[last_idx:ent.start_pos], None))
spans.append((orig[ent.start_pos:ent.end_pos], ent.tag))
last_idx = ent.end_pos
if last_idx < len(orig) - 1:
spans.append((orig[last_idx:len(orig)], None))
return spans
class Visualizer(object):
def visualize_word_emeddings(self, embeddings, sentences, output_file):
X = self.prepare_word_embeddings(embeddings, sentences)
contexts = self.word_contexts(sentences)
trans_ = tSNE()
reduced = trans_.fit(X)
self.visualize(reduced, contexts, output_file)
def visualize_char_emeddings(self, embeddings, sentences, output_file):
X = self.prepare_char_embeddings(embeddings, sentences)
contexts = self.char_contexts(sentences)
trans_ = tSNE()
reduced = trans_.fit(X)
self.visualize(reduced, contexts, output_file)
@staticmethod
def prepare_word_embeddings(embeddings, sentences):
X = []
for sentence in tqdm.tqdm(sentences):
embeddings.embed(sentence)
for i, token in enumerate(sentence):
X.append(token.embedding.detach().numpy()[None, :])
X = numpy.concatenate(X, 0)
return X
@staticmethod
def word_contexts(sentences):
contexts = []
for sentence in sentences:
strs = [x.text for x in sentence.tokens]
for i, token in enumerate(strs):
prop = '<b><font color="red"> {token} </font></b>'.format(token=token)
prop = " ".join(strs[max(i - 4, 0): i]) + prop
prop = prop + " ".join(strs[i + 1: min(len(strs), i + 5)])
contexts.append("<p>" + prop + "</p>")
return contexts
@staticmethod
def prepare_char_embeddings(embeddings, sentences):
X = []
for sentence in tqdm.tqdm(sentences):
sentence = " ".join([x.text for x in sentence])
hidden = embeddings.lm.get_representation([sentence])
X.append(hidden.squeeze().detach().numpy())
X = numpy.concatenate(X, 0)
return X
@staticmethod
def char_contexts(sentences):
contexts = []
for sentence in sentences:
sentence = " ".join([token.text for token in sentence])
for i, char in enumerate(sentence):
context = '<span style="background-color: yellow"><b>{}</b></span>'.format(
char
)
context = "".join(sentence[max(i - 30, 0): i]) + context
context = context + "".join(
sentence[i + 1: min(len(sentence), i + 30)]
)
contexts.append(context)
return contexts
@staticmethod
def visualize(X, contexts, file):
import matplotlib.pyplot
import mpld3
fig, ax = matplotlib.pyplot.subplots()
ax.grid(True, alpha=0.3)
points = ax.plot(
X[:, 0], X[:, 1], "o", color="b", mec="k", ms=5, mew=1, alpha=0.6
)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Hover mouse to reveal context", size=20)
tooltip = mpld3.plugins.PointHTMLTooltip(
points[0], contexts, voffset=10, hoffset=10
)
mpld3.plugins.connect(fig, tooltip)
mpld3.save_html(fig, file)
@staticmethod
def render_ner_html(sentences: Union[List[Sentence], Sentence], settings=None, wrap_page=True) -> str:
"""
:param sentences: single sentence or list of sentences to convert to HTML
:param settings: overrides and completes default settings; includes colors and labels dictionaries
:param wrap_page: if True method returns result of processing sentences wrapped by <html> and <body> tags, otherwise - without these tags
:return: HTML as a string
"""
if isinstance(sentences, Sentence):
sentences = [sentences]
colors = {
"PER": "#F7FF53",
"ORG": "#E8902E",
"LOC": "#FF40A3",
"MISC": "#4647EB",
"O": "#ddd",
}
if settings and "colors" in settings:
colors.update(settings["colors"])
labels = {
"PER": "PER",
"ORG": "ORG",
"LOC": "LOC",
"MISC": "MISC",
"O": "O",
}
if settings and "labels" in settings:
labels.update(settings["labels"])
tagged_html = []
for s in sentences:
spans = split_to_spans(s)
for fragment, tag in spans:
escaped_fragment = html.escape(fragment).replace('\n', '<br/>')
if tag:
escaped_fragment = TAGGED_ENTITY.format(entity=escaped_fragment,
label=labels.get(tag, "O"),
color=colors.get(tag, "#ddd"))
tagged_html.append(escaped_fragment)
final_text = ''.join(tagged_html)
if wrap_page:
return HTML_PAGE.format(text=final_text)
else:
return final_text
|
STACK_EDU
|
When pets go rogue: the link between the wildlife trade and exotic species
New Zealand puts considerable effort into managing the impacts of invasive species. And with good reason: invasive species wreak havoc on the environment and pose significant threats to animal health. The commitment to managing such pests and reducing their impacts is well exemplified by the Department of Conservation’s ‘Battle for our Birds’campaigns to achieve environmentally driven goals. A second example is the control of possums by TBfree NZ, with the aim of eradicating bovine tuberculosis, an animal health-driven goal. More recently the launch of initiatives to eradicate some predators from the mainland by 2050 has put New Zealand at the global forefront of the battle against invasive species.
These management initiatives concentrate on exotic species that have established self-sustaining populations and spread throughout New Zealand. The presence of these invasive species is mainly a legacy of ‘acclimatisation societies’, which made it their goal to establish exotic species for human delight and use. Luckily such acclimatisation societies are no longer active in New Zealand, though that does not necessarily imply that releases of potentially invasive species have ceased: the purposeful release of animals is being replaced by more subtle pathways of transport and the introduction of new and emergent exotic species. This shift has occurred in recent times in countries around the world and has been driven by concomitant changes in the relationships between humans and animals.
In this new context most species are transported into countries either intentionally (e.g. trading to satisfy the demand for animal products) or unintentionally (e.g. species hitching a ride in containers shipped from one country to another). Unlike the goals of acclimatisation societies, these new pathways rarely if ever have the explicit objective of establishing exotic species. Rather, some species escape or are released into new environments where they may be capable of forming self-sustaining populations.
Mounting evidence reveals the key role played by the pet trade in shaping the new national pool of exotic species. Globally a large variety of species are traded to supply and meet the demand for pets. This demand is causing significant environmental problems. In their native range the exploitation of populations is leading to over-harvesting and population declines; in the recipient regions some of the imported and traded species may pose an untenable risk of becoming invasive species.
The rise in the relevance of the pet trade has also changed the type of exotic species transported worldwide. Where in the past there was an emphasis on mammals and some birds, nowadays the pool of potential exotic species is dominated by ornamental fish, amphibians, reptiles, and cage birds. New and emergent exotic species pose new threats to native biota. For example, exotic amphibians may carry emergent diseases such as the chytrid fungus and ranaviruses, which can imperil native frog species. Earlier this year a snake was intercepted on an incoming flight at Auckland airport. The introduction of snakes to New Zealand environments, where native communities are naïve due to their natural absence, could lead to an ecosystem-scale disaster akin to that caused by introduced brown tree snakes in Guam.
Preventing the introduction and establishment of exotic species is the best way to avoid potential detrimental impacts, but to be effective, preventive strategies need to be based on good evidence. So, what is known about the risks of the pet trade in New Zealand? Trading in ornamental and aquarium fish has been highlighted as increasingly contributing to new introductions of exotic fish across the world. In New Zealand almost a quarter of the exotic fish present in 2012 were ornamental species (23.8%, or 5 out of 21). This proportion may further increase in the near future due to the growing numbers and diversity of exotic fish imported into the country.
Reptiles have also gained prominence as emerging exotic species. There is good information about exotic reptiles in the New Zealand pet trade thanks to the research of Heidy Kikillus in 2010, although an update would be welcome. Heidy reported 12 species of exotic reptiles found in the pet trade, with individuals of four species found at large (although none have established populations). It is not surprising that the most common pet reptile was the red-eared slider turtle, a species that has been traded in massive numbers globally. As in countless other countries, slider turtles are often found in the wild in New Zealand waterways, particularly around Wellington and Auckland. Rapid responses by governmental agencies to remove such animals have prevented their establishment in the wild, but these incursions represent a warning of the potential risks of pets.
Pablo García-Díaz has researched the role of the pet trade here as a source of new and emergent exotic species. Existing biosecurity arrangements, coupled with risk assessment tools for exotic imports (e.g. NIWA’s fish risk assessment model), should help protect New Zealand from emergent exotic species. Pablo does not, however, argue for a blanket ban on the pet trade in New Zealand. Instead, he contends that we need a more nuanced and up-to-date knowledge of the biosecurity risk currently posed by this trade. The need to comprehend the nature of such novel biosecurity risks to manage the new generation of potentially exotic species effectively is clear-cut. Otherwise New Zealand may need to deploy ‘Predator Free 2050-like’ initiatives in perpetuity to deal with an ever-increasing number of newly established exotic species.
This work was funded by the Invasive Animals CRC, Australia
|
OPCFW_CODE
|
Virtual currency used as alternative currency
Would it be feasible for a virtual currency to be used in the same way as alternative currencies such as Berkshares and Brixton Pounds ? Furthermore, do you think it is possible to make it independent of conventional currencies such that it is held to a standard, say a basket of household goods ?
Are you talking about Bitcoin (or an alternate chain) or just virtual currencies in general?
Bitcoin in particular. If it isn't suitable then ideas on what would be.
Perhaps this would be relevant to you? https://bitcointalk.org/index.php?topic=101197 We're considering implementing a POC of this.
@ripper234 Will have a good read through. Have you had a look at the idea of minimum income being discussed in Germany and Switzerland ? http://en.wikipedia.org/wiki/Guaranteed_minimum_income
No, not sure of the relevancy.
It is not relevant in this immediate context. However, this is one of the directions being considered by reformists. In the meantime, alternative currencies might not only a means to limit exchange to a locality. They could also serve as a means of survival in the event of a sharp economic downturn. In this respect, virtual currencies would definitely hold some advantages.
This link might also offer a few leads on what alternative currencies can offer offer conventional ones.
http://en.wikipedia.org/wiki/Monetary_reform
While there's no technical issue preventing the use of Bitcoin and its' associated engine for any number of alternate currency types, the problem you're likely to run into with your question is the addition of a central issuer. Bitcoin is explicitly designed for distributed issuance of funds according to a carefully designed algorithm that requires distributed proof-of-work. It would be non-trivial to chance the issuance mechanism.
That said, if you have a talented programmer, Bitcoin is open-source under the MIT license and you're welcome to modify it as you see fit, just know that it's probably not going to be as easy as your question seems to indicate it should be.
Thanks for the input David. Could you go further into explaining why a central issuer would be necessary ? I do understand that alternative currencies such as Berkshares are localized. Also, am I correct in thinking that Bitcoin requires purchase with, say USD, for issuance ?
Bitcoin does not require purchase with any fiat currency for issuance. Issuance happens via an activity commonly called "mining" - miners perform huge amounts of complex math that help secure the transaction history against fraud and in exchange, they are rewarded with a set amount of Bitcoin. It may be worth reading the "How Bitcoin Works" page on the wiki for an overview.
@JamesPoulson: I don't think there's any way to hold a currency to a standard without either a central issuer or good samaritans willing to take huge losses. Without a central issuer to regulate supply, too much currency can be in circulation and the currency's value can fall, separating the currency from the standard it's supposed to stay pegged to. The standard could only be held by a good Samaritan who buys up the currency at a loss until the standard is reached again. Who would do that?
@DavidSchwartz I'm not sure I get all of that but how about substituting regulation with some "gameplay" or internal rules ? That's probably the intention behind the way Bitcoin is designed. As for inflation, there could be two possible solutions. Have a sort of "tax" or have units that expire after a given time or condition. What do you think ?
@JamesPoulson: See if you can come up with a way to make such a system work. I don't think you can. Who wants a monetary system with taxes or expiring units? What's the benefit of such things?
@DavidSchwartz I won't hide that these ideas are coming from the monetary reform perspective. Here's where the idea of using a tax (might not be the appropriate word...) to tackle inflation comes from.
http://www.youtube.com/watch?v=DthcVZsFKmo
@DavidSchwartz There is a topic on expiring currencies on this link, the reasoning being that it would encourage monetary flow: https://bitcointalk.org/index.php?topic=2462.0 . However, this article from mises.org suggests it could be a red herring and such a measure might not be appropriate, especially for people who aren't happy about state interventions. http://mises.org/daily/324 . This is where the idea of or gameplay or a ruleset comes in the idea being to protect the average consumer while leaving the market free within given confines or safety rails.
@DavidSchwartz Here is a further link relevant to idea of expiring currency: http://en.wikipedia.org/wiki/Freigeld
Bitcoin is a standalone alternative currency with no dependency or "backing" from any other entity. The rate it trades at is the result of supply and demand.
The alternative currencies Berkshares and Brixton Pounds are representative currencies. A Berkshare represents 95% of a dollar. For example, you can go to a bank in the region and exchange one Berkshare and get back $0.95 USD.
Participating local merchants will accept the BerkShare at a 1:1 ratio with the dollar. They aren't necessarily losing the 5% as a customer discount though because after the merchant receives that Berkshare, it can then be used to pay contractors or to make purchases from other merchants.
The funds then circulate over and over within the community.
If the merchant happens to end up taking in too many BerkShares and needs make purchases where payment is in dollars, the BerkShares can always be redeemed back to dollars at the rate of $0.95 per BerkShare.
Most of the benefit goes to the consumer who converts dollars to BerkShares because this effectively provides a 5% discount on all purchases made using Berkshares. When the merchant then goes to spend these, there is no direct benefit because a full $1 of value was exchanged for each BerkShare.
The merchant might find that offering this 5% discount is something that is more than offset with stronger business locally, or the mechant desires improved relations with its customers and benefits from building and supporting a stronger local community.
This all works because BerkShares are not issued elsewhere, and they cannot be redeemed elsewhere.
With Bitcoin, there is no containment. A merchant could offer a discount for all bitcoin payments, but because that bitcoin can be redeemed anywhere, the funds aren't necessarily retained in the local community. There's no harm to offering this other than the potentially harm to margins, but going this route misses completely the original incentive of a community currency -- a reasonable method to encourage the community to increase commerce locally.
Here's the main reason this wouldn't work when using bitcoin just yet though.
Technically the backing for the BerkShare could be a bitcoin, or an ounce of silver even if you really wanted. But the problem is that you then expose the community members (consumers and merchants) to exchange rate risk.
But let's say this community (consumers and merchants) were willing to take on the exchange rate risk. There are more problems.
BerkShares does have a cost to administer -- the bills must be printed and there must be exchange to and from dollars, etc. With BerkShares, apparently the local community together subsidizes that. But the person holding the BerkShare is at no time not ever able to get back at least the $0.95 of value that was spent to acquire it.
If there were a method to transact electronically, then there wouldn't be the cost of the bills, but that is asking the consumer and merchant to incur costs that would be seen as excessive. So it would probably need to be paper-based or a coin monetary instrument representing the amount of backing held by an issuer.
And that again is a roadblock. The dollars backing BerkShares are held in a federally insured bank and BerkShares is responsible to ensure that no BerkShares are counterfeits. This does not translate well for bitcoin, as there is no "federal bank" to store the bitcoins with. Maybe with M of N a workable method to "bank" the coins would be accepted.
There are a lot of options for something like this, but really no incentive as the goal of the community currency is to introduce friction that encourages commerce to remain local.
Bitcoin is built to bypass any artificial frictions.
Now individuals and merchants in the community are free to begin using bitcoin without it being promoted as some type of community currency. It would be something novel where the merchant might attract new customers, but doing so doesn't necessarily bring the same results along the lines of what the BerkShares' goal is intended to bring.
Now some enterprising individual could probably build a mobile wallet app that monitors the blockchain & tries to spend "tainted" coins (where the taint means how much of the payment's pedigree is from other merchants in the community). Payments using these tainted coins would be awarded a discount from the merchant who in turn receives the same discount by re-spending the coins within the community. That way you end up with no issuer, and no overhead costs for managing this. The main problem is this would give up identity (e.g., merchants would need to register or claim receiving addresses.)
|
STACK_EXCHANGE
|
invoke-WebRequest and a proxy that refuses to die in Powershell 5.1
I have a Windows Server 2019 instance that used to have a proxy server configured in its proxy setting but has since been disabled from Proxy Settings -> Proxy
If I run the powershell 5.1 command:
Invoke-WebRequest https://<LOCALURL>
then i'm still directed though the previously configured proxy and my request is denied. If I run the same command though powershell 7.2 then it works as expected.
I've made the following changes to try to rid any residual proxy configurations but nothing has worked.
Disabled MigrateProxy:
hcu\software\microsoft\windows\currentversion\internet
settings\MigrateProxy: Changed from 1 to 0
netsh winhttp import proxy source=ie
netsh winhttp reset proxy
Set-ItemProperty -Path
"HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
-Name ProxyEnable -Type DWord -Value 0
Set-ItemProperty -Path
"HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
-Name ProxyServer -Type String -Value ""
Searched and combed for proxy values in the registry
Restarted numerous times
Where is powershell 5.1 still pulling the removed proxy configuration from??
Did you check internet explorer options ? It is usually where I go first when I have to deal with something like it (It is counter-intuitive but these settings affect the whole system and not IE specifically). The quick way to go there the run window (Win+R) then type inetcpl.cpl. From there go into the Connecitons tab and checik if there's anything configured present.
Thanks for the reply! Ran the command as instructed. The Proxy server settings are blank under: Internet Properties -> Connections -> LAN settings. I tried this as both my regular user and as the Administrator.
You can make sure that you don't have anything in your DefaultConnectionSettings, which is a byte array and has to be parsed, but you can check that for the current user and local machine keys, and that may be causing you grief:
$KeyPath = '\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections'
$PropertyName = 'DefaultConnectionSettings'
$LMBytes = Get-ItemPropertyValue -Path "HKLM:$KeyPath" -Name $PropertyName
$LMProxyStringLength = $LMBytes[12]
If(!$LMProxyStringLength){Write-Host "No proxy set for Local Machine key"}else{
$LMProxyString = ($LMBytes[16..(16+$LMBytes[12])]|%{[char]$_}) -join ''
Write-Warning "Local Machine proxy set to $LMProxyString"
}
$CUBytes = Get-ItemPropertyValue -Path "HKCU:$KeyPath" -Name $PropertyName
$CUProxyStringLength = $CUBytes[12]
If(!$CUProxyStringLength){Write-Host "No proxy set for Current User key"}else{
$CUProxyString = ($CUBytes[16..(16+$CUBytes[12])]|%{[char]$_}) -join ''
Write-Warning "Local Machine proxy set to $CUProxyString"
}
Thank you for the response. I ran your script but it complained that there was no Property for "DefaultConnectionSettings" I then did a search in the registry for that string and the only hit was HKEY_USERS:\UUID\software\microsoft\windows\currentversion\internetsettings\connections\ DefaultConnectionStrings had a null value.
if it doesn't exist then it's never been set, which is just as good for your case.
|
STACK_EXCHANGE
|
How to sell stocks coming monthly to avoid wash sale
I have stocks of a company that come every single month. How should I sell the stocks and avoid wash sale? Which lots should I sell?
Are these stocks received as income or are you buying them? What's the consequence of a wash sale that you';re trying to avoid?
Would it be possible to delay, and/or advance, some lot(s) to create a gap? (You would need to plan the timing.) For example I normally leave mutual funds on 'reinvest' distributions but if I want to sell when I have down or mixed lots I might switch to 'cash' for a few months and then switch back.
If you receive stocks as compensation every 30 days, then any sale at a loss would be within 30 days of an acquisition and would be considered a wash sale. Any sale that resulted in a gain would still be taxable in that year.
However, all that means is that the tax benefit of those losses is deferred until you ultimately close out your position. The loss is just shifted to the cost basis of the most recent purchase. The loss is not "gone" forever.
The purpose of the wash sale rule is to prevent tax loss harvesting by deferring tax losses until your position is closed, but that does not seem to be your intent. I would talk with a tax attorney that deals with wash sale rules and see if there are precedents that would indicate that regularly acquired lots that prevent any 30-day window would be exempt.
But in the end, all you're doing is deferring the loss; you'll be able to claim it eventually.
Any sale at a loss would be within 30 days of an acquisition and thus a wash sale; if you have (enough) 'up' lots, you could sell (some of) them and avoid the wash sale rule -- but you would pay tax on the realized gains (unless offset by other non-wash losses, or long-term and within the 0% bracket)
"then any sale at a loss would be within 30 days of an acquisition and would be considered a wash sale." Not any sale. Any sale less than the acquisition ("less" referring to quantity, not price).
You should sell the stocks in the lot that will result in the least amount of taxes. To avoid a wash sale, you should sell the stocks in a different order each month.
More or less stock bought than sold. If the number of shares of substantially identical stock or securities you buy within 30 days before or after the sale is either more or less than the number of shares you sold, you must determine the particular shares to which the wash sale rules apply. You do this by matching the shares bought with an unequal number of shares sold. Match the shares bought in the same order you bought them, beginning with the first shares bought. The shares or securities so matched are subject to the wash sale rules.
IRS publication 550 page 56
So suppose you get 100 shares each month. If you sell 300 shares, then what happens first is that the 100 oldest shares that you have will be considered to be sold, and the wash sale rule will apply to them. For the other 200 shares, the wash sale rule will not apply, and you can claim a loss on them.
|
STACK_EXCHANGE
|
Percona Monitoring and Management 2.28.0¶
|Release date:||May 12, 2022|
|Installation:||Installing Percona Monitoring and Management|
Percona Monitoring and Management (PMM) is an open source database monitoring, management, and observability solution for MySQL, PostgreSQL and MongoDB.
We recommend using the latest version of PMM. This ensures that you have access to the latest PMM features and that your environment runs on the latest version of the underlying components, such as VictoriaMetrics, with all the bug fixes in place.
Advisor checks enabled by default
Starting with the previous release and continuing with this one, we have added significant improvements to the Advisors Checks functionality in performance, assessment coverage, and user experience.
As a mature and generally useful feature, this option is now enabled by default for easier access to automatic checks and better insight into database health and performance, delivered by Percona Platform.
Upgrading to PMM will automatically enable this feature for existing PMM instances. You can disable it at any time from your PMM dashboard on the Advanced Settings page.
Run individual advisor checks
In addition to running all available advisors at once, you now have the option to run each advisor check individually.
This gives you more granular control over the checks on your connected databases. Running checks individually also means that you get the results for relevant advisors faster and that you can focus on resolving failed checks one at a time. For more information, see Working with Advisor checks.
Enhanced Advisor checks
PMM 2.28 includes a new major version of Advisors that features some important enhancements. The most significant changes are:
- Support for multiple queries
- Support for Victoria Metrics as a data source
In a nutshell, these changes will allow experts to create more intelligent advisor checks to continue delivering more value to your connected PMM instances. The file format in which Advisors checks are written has been updated to support the new functionality provided by the Advisors service part of Percona Platform.
This is a breaking change, so we recommend upgrading your PMM instance to benefit from these enhancements. For more information, see Develop Advisors.
Ubuntu 22.04 LTS support¶
We are providing binaries for the recently released version of Ubuntu from this release.
VictoriaMetrics: VictoriaMetrics has been upgraded to 1.76.1.
Node exporter: Node Exporter has now been updated to 1.3.1.
PMM-9749: Advisors: Possibility to run individual advisor checks separately.
PMM-9469: Advisors: Ability to have multiple queries in a single check.
PMM-9468: Advisors: Ability to query VictoriaMetrics as a data source.
PMM-9841: Advisors: Advisor checks are now enabled by default.
PMM-8326: Advisors: Changed the icon for the Edit Check Rule option with a more suggestive one that better reflects this functionality.
pmm2-clientnow supports Ubuntu 22.04 LTS.
PMM-9780: VictoriaMetrics has been upgraded to 1.76.1.
PMM-5871: Node Exporter has now been updated to 1.3.1.
PMM-9958: The PMM logs button, which is used to download PMM logs for troubleshooting, is added to the help panel for better accessibility and enhanced user experience.
- PMM-9672: Minor UI improvements to the visual elements in the breadcrumb trails to visually align them to the look-and-feel of Grafana pages and improve overall UI consistency.
PMM-9854: Advisors: In some scenarios, PMM was not displaying the complete list of advisors available for instances connected to Percona Platform. This issue is now fixed.
PMM-9848: Advisors: Fixed text contrast issue on the Failed Advisor Checks page that was visible when navigating the list of results while using PMM with the Light theme.
PMM-9426: DBaaS: Fixed an issue related to K8s monitoring where the K8s monitoring failed with K8s version 1.22 and higher.
PMM-9885: Dashboard: Fixed the documentation links on the Advanced settings page on the PMM dashboard.
PMM-9828: Fixed an issue with the QAN dashboard navigator/explorer where if you open QAN from a dashboard and try to navigate to a different dashboard, the explorer keeps closing/refreshing, making it impossible to navigate.
PMM-9363: PMM users logged in via SSO would still have access to PMM after disconnecting. This issue is now fixed and PMM correctly terminates SSO sessions after disconnecting.
PMM-9415: Backup Management: Fixed an issue where initial data restore on AWS instances fails. However, consecutive data restore attempts were successful.
PMM-9992: Error while using reverse proxy (like Nginx)
While using a reverse proxy (for example, Nginx) in front of PMM, you can run into the error
origin not allowed after upgrading to PMM 2.27.0 or newer versions.
Add the Host header to the reverse proxy configuration file.
For Nginx, add the following:
proxy_set_header Host $http_host;
|
OPCFW_CODE
|
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: Merging CEDET
Re: Merging CEDET
Sun, 03 Jun 2012 20:00:53 +0200
Gnus/5.110018 (No Gnus v0.18) Emacs/24.1.50 (gnu/linux)
[Eric forgot to include emacs-devel in CC, which is why I'm quoting his
message fully at the end.]
Eric M. Ludlam writes:
> On 06/02/2012 03:47 AM, Chong Yidong wrote:
>> Hello CEDET folks,
>> Is the CEDET file-rename branch ready? If so, now is a good time to
>> start merging into Emacs.
> From the perspective of content, the current trunk in CEDET bzr
> matches up with CEDET 1.1 closely.
To make things clear: I have merged our 'newtrunk' branch (which
included the changes in 'file-rename') into our development trunk. This
means we are now finally working directly with the new file and
directory structure from Emacs, and most changes from Emacs trunk are
now incorporated into CEDET.
As I've already written some time ago, I've started to write a special
package for merging Emacs<->CEDET. The most important goal is to keep
the granularity of commits in both repositories. For this to work, we
should get our work-flow straight before we start.
My idea is the following and I'd like to hear opinions from the VCS
gurus around here if this is the right approach:
We use three branches:
- CEDET trunk (in the following: 'cedet')
- Emacs trunk (in the following: 'emacs'). Of course, we do not
care for the full Emacs trunk, but only for the CEDET-related
files (essentially: lisp/cedet and lisp/emacs-lisp/eieio*).
- a special branch inside of CEDET upstream (in the following: 'merge')
The 'merge' branch is special in that it follows Emacs *and* CEDET
development. It is derived from the old 'file-rename' branch and thus
has a common history with 'cedet', so that we can do proper merges
from and to 'cedet'.
The main concept is this:
- 'merge' follows 'emacs' as close as possible. That means:
- It must *not* have any files from 'cedet' which should not end
up in 'emacs'. Most importantly, this means that *everything*
that is in 'merge' must have signed papers.
- It follows exactly the Emacs directory structure, meaning that
EIEIO is in lisp/emacs-lisp.
- Syncing with 'cedet' happens through *merges*.
- Syncing with 'emacs' happens through *cherry picking*, which with
bzr just boils down to applying patches from "bzr diff -c
<revision>". Alternatively, Lluís "bzr transfer" plugin can be used,
but I couldn't get it to work. Either way, the special
CEDET-Emacs-merge package I've written is used to track which
commits have been merged and which not (and for what reason).
I could think of two alternatives to this approach:
- Drop the special 'merge' branch and directly cherry-pick between the
two repositories, hence essentially do what org and Gnus are
currently doing. However, I think this can only work well if both
repositories are very similar, and CEDET upstream still contains
lots of stuff which isn't in Emacs.
- Use *two* merge branches 'to-emacs' and 'from-emacs', that means one
for each direction and both with a common history with 'cedet'.
This was actually the initial idea, but by now I think this approach
is just over-complicating things and could easily get pretty messy.
Eric's full mail:
> From a quality perspective it is pretty good in that it passes all
> the unit tests and the key interactive tests. I am familiar with a
> couple typo type bugs I need to check in from the translation to the
> new file format still. I'll do that today. From a copyright
> assignment perspective, all is good, though my last employer release
> for changes ends on July 3, so I can help out on any big changes for a
> month, and get another release soon if needed.
> From the perspective of transplanting changes between our branches,
> David has merged many changes from Emacs into CEDET, and Lluis was
> working on a script to make it easy to do, so I added him to the CC
> list. Since CEDET includes a 'contrib' area that doesn't have
> copyright assignments, you will still need to avoid that. We have
> dropped explicit support for older Emacs versions so many previous
> conflicts have since been removed. The test suites have all been
> converted to the new file system, so Emacs can use that if you'd like
> to enable the complete suite in Emacs core also.
> All-in-all I think you will be in good shape unless David or Lluis is
> familiar with something I am not. We may need to do a second merge
> later, since our conversion to the new file name format is still quite
- Merging CEDET, Chong Yidong, 2012/06/02
- Message not available
- Re: Merging CEDET,
David Engster <=
|
OPCFW_CODE
|
Can someone please clearly answer whether video calling consumes any of your internet data usage?
See this FAQ for Skype bandwidth requirements:
The question was, how much data does a call use, not bandwidth. These are two different, but related things. Bandwidth is how big of a pipe you need to sustain a good quality, call. Data is the number of bits or bytes you move through that pipe. The call may only use the full bandwidth every once in a while, e.g. if there is a lot of movement in the picture. Skype recommends 300kbps for a video call, both up and down. If you use the full bandwidth, that would consume 60x300k = 18Mb of data in a minute, or 1.08Gb in an hour. And that's only one way, so a two-way video call would use 2.16Gb per hour maximum. That's the theoretical maximum. Is there anyone that knows what a typical video call would use, e.g. average of half the bandwidth? A quarter?
I would suggest that you've disproved your own argument that knowing how much bandwidth is required doesn't tell you how much data will be used. You simply multiple the bandwidth by the amount of time you want to reference and come up with data use for that time period.
I agree with your math but I think it worth mentioned that the 2.16 Gb you calculated is gigabits. Data plans are typical in bytes, not bits, and the 2.16 Gb is .27 GB, or gigaytes. It may be more meaniful yet to state that as 276 megabytes per hour.
Now that we've crunched the math, let me speak of real world experience. The fact is that video call data use will vary tremendously depending on a numer of factors, those including actual bandwidth available and both ends of the call, the quality of the connection between the contacts on the call, the specific webcams in use and the amount of processing power that's available.
When I'm on a 1920x1080p HD call the typical bandwidth used is ~650 kB/s. A 720p call will run around 350 kB/s, and a good 640x480 SD call ~150 kB/s. I've also had calls where conditions only allow 320x240 and data used can be as litle as 15-20 kB/s. You can do the math if you want data use per minute or hour.
Here's the bottom line....the bandwidth requirements outlined by Skype will tell someone what kind of results they can expect with bandwidth they have available. You can't expect HD video if you only have a .2 mb/s upload connection to the Internet. But to know how much you actually use, I suggest a utility like DU Meter. It will let you monitor your data throughput real time, and also keep track of how much you've used over a given time period.
Let's see....thus far this month I've used 35.64 GB upload @ download; 2.11 GB in the past 24 hours; and 12.8 mB so far today. It's clear as a bell.
Hi, With video up and down on Skype for me it uses 500 Mbps which = .5 GB per hour. My ISP plan is 5mb download 1Mbps upload.
I know I am a tad late to the party here but I took the liberty to work out the data usaged based on the recommended bandwidth ammounts set by the following FAQ page:
100kbps/100kbps = 200kbps total.
200kbps = 25KB/sec
25KB/sec = 1.5MB/min
1.5MB/min = 90MB/hour
Video Calling/Screen Sharing (Low Quality):
300kbps/300kbps = 600kbps total.
600kbps = 75KB/sec
75KB/sec = 4.5MB/min
4.5MB/min = 270MB/hour
Video Calling (High Quality):
500kbps/500kbps = 1mbps total.
1mbps = 125KB/sec
125KB/sec = 7.5MB/min
7.5MB/min = 450MB/hour
Video Calling (High Definition):
1.5mbps/1.5mbps = 3mbps total.
3mbps = 375KB/sec
375KB/sec = 22.5 MB/min
22.5MB/min = 1.35GB/hour
Group Video (3 People):
2mbps/512kbps = 2560kbps
2560kbps = 320KB/sec
320KB/sec = 19.2MB/min
19.2MB/min = 1.15GB/hour
Group Video (5 People):
4mbps/512kbps = 4608kbps
4608kbps = 576KB/sec
576KB/sec = 34.5MB/min
34.5MB/min = 2.07GB/hour
Group Video (7+ People):
8mbps/512kbps = 8704kbps
8704kbps = 1088KB/sec
1088KB/sec = 65.2MB/min
65.2MB/min = 3.91GB/hour
I hope this helps anyone with a data usage enquiry.
Many thanks for all of your help with this.
I have a related problem that you may be able to help with?
I have to buy a device to enable my kids (young and requiring assistance to operate devices) to Skype video call me in the UK from the USA. In practice there will be video calls 3 times per week for a total of about 2 hours (at most). I do not want the device used for anything else by the kids or anyone else.
Sadly the kids mother is "less than helpful" and is likely to use the cellfone signal all the time rather than the free wi-fi option.
Which device would you recommend and which cellfone service?
thanks in advance
|
OPCFW_CODE
|