Codemod is a tool/library to assist you with large-scale codebase refactors that can be partially automated but still require human oversight and occasional intervention.
cx_Freeze is a set of utilities for freezing Python scripts into executables using many of the techniques found in Thomas Heller's py2exe, Gordon McMillan's Installer, and the Freeze utility that ships with Python itself.
Portable program features config tool, like kconfig
DConfig is a portable program features config tool. It imitates the Linux tool kconfig, but this version does not implement all of kconfig's features.
DEFIS is a RAD development system to aid programming in Python. wxPython is used for the GUI, Editra as the IDE, and SQLAlchemy as the ORM.
Simple Dependency Manager
A simple dependency manager for projects. Allows you to submit and pull external dependencies from a repository. Designed not to be tied to any language or build environment.
Deviki is a project management environment based on MediaWiki. It works for a medium-sized software development team. Currently it is implemented as a MediaWiki extension.
doit comes from the idea of bringing the power of build-tools to execute any kind of task. It will keep track of dependencies between “tasks” and execute them only when necessary. It was designed to be easy to use and “get out of your way”.
The dotCODES Source Control Maintenance Mainframe (SCM2)
The dotCODES Source Control Maintenance Mainframe for Visual Studio is an administrator console application for developing dotCODES components. Built upon a Python foundation, the program is used to create data center routines (Unix packages) and maintain enterprise cloud services (CGI scripts/Apache) by means of building dotCODES runtimes and deploying them to and from the client server.
drba (Dependencies Resolver and Build Automation) is an open source software construction tool.
An extensible configuration and build system for complex projects.
XCDL is an extensible Component Definition Language (CDL) used to configure and build complex, multi-variant embedded projects. It is inspired by eCos CDL.
Software to convert python-egg project distributions to RPM. It gets project tarballs from the site automatically and creates spec files with all needed Requires packages. Specially designed for ALT Linux.
ezpyinline is a pure Python module which requires almost no setup and allows you to put C source code directly "inline" in a Python script or module; the C code is automatically compiled and then loaded for immediate access from Python.
fd2python creates a python interface to the Xforms graphics library. It provides features that can build template main code, template callback code, and a template makefile. fd2python is run from 'fdesign -python'.
Cross-platform software dependency build and installation tool
gattai is a cross-platform system for pulling a bunch of dependencies, patching and building them and/or downloading binary packages, and then installing them in a central location. It aims to enable software projects to fully automate build environment setup. gattai is not a replacement for other systems like apt-get, and in fact, it will have the ability to use apt-get when running on a system that supports it. Rather, it aims to both allow an apt-get like setup process on environments that do not support apt-get, and also to allow the ability to set up isolated environments for those that need that.
gnu4u provides a defined set of (GNU) tools for developing C and C++ applications. This tool set is independent of the OS's patch level, is available for different systems, and provides the same development and runtime environment across those systems.
Hatchery is a Python Package Index manager. It allows you to proxy/mirror existing package indexes, such as PyPI at http://pypi.python.org/pypi, or to create local indexes.
'icecube' is a cross-platform (GNU/Linux, BSD, Windows) Python freezer. It provides a functionality that allows you to freeze one or several modules in a single executable and create a stand-alone program.
This is a Python command-line utility that allows you to search for Java classes/files and packages in jar/ear/war archives on your system under a specific directory/path. All docs are on the wiki: http://javaclassfind.wiki.sourceforge.net/
Joat is an extensible application/development framework for Python applications. It is designed to be "infinitely" extensible via Python plugins. It is also intended to provide a Python-based framework for RAD. Think: a Python-based VB or Delphi.
Portable build system for kernels, aims to support major *NIX operating systems and possibly MS Windows.
k2development consists of an assembler, linker, and other necessary tools to build 6502 assembly-language programs.
This project has moved to: http://vger.kernel.org/vger-lists.html#linux-kbuild Here, you can find the old mailing list, files and website regarding: Linux kernel build. Patches, documentation, and auxiliary programs related to the kernel configuration
Auto generate VS, Codewarrior, Codeblocks, XCode projects
This tool will scan your directory for source files and then generate a project file for Codeblocks, XCode, Visual Studio, Metrowerks CodeWarrior, FreeScale Codewarrior. It runs on Linux, MacOSX and Windows and can generate the files for any host from any platform. Great care is taken to make sure project file updates change as little as possible to make source control change lists as minimal as possible. This is the main IDE project file generator used by Olde Sküül!
Command line utility for programming the internal flash in NXP arm processors using the ISP protocol in the built-in bootloader. Lots of processors are supported and it is easy to add more.
PadMatrix is the Ubuntu Launchpad ported for use with Debian. Intended to be used starting with the upcoming Debian 6.0, codenamed Squeeze.
Research and Creative Interests
- Political Communication
- Public Opinion
- Research Methods
Mike Gruszczynski (PhD, Political Science, University of Nebraska, 2013) is an Assistant Professor of Communication Science in The Media School at Indiana University. His research has been published in Public Opinion Quarterly, Journalism & Mass Communication Quarterly, Mass Communication and Society, Journalism and Communication Monographs, and Political Behavior, among others. His research is inclusive of political communication, political psychology, and research methodology, and focuses particularly on how changes in political and communication domains influence agenda-setting processes, framing effects, and journalistic decision-making.
- Hunt, Kate, and Mike Gruszczynski. 2022. “‘Horizontal’ Two-Step Flow: The Role of Opinion Leaders in Directing Attention to Social Movements in Decentralized Information Environments.” Mass Communication and Society.
- Gruszczynski, Mike, Danielle K. Brown, Haley Pierce, and M… Journalism & Mass Communication Quarterly.
- Comfort, Suzannah E., Mike Gruszczynski, and Nicholas Browning. 2022. “Building the Science News Agenda: The Permeability of Science Journalism to Public Relations.” Journalism & Mass Communication Quarterly.
- Geiger, Nathaniel, Michael H. Pasek, Mike Gruszczynski, Nathaniel J. Ratcliff, and Kevin S. Weaver. “Political Ingroup Conformity and Pro-Environmental Behavior: Evaluating the Evidence from a Survey and Mousetracking Experiments.” Pre-press; forthcoming in Journal of Environmental Psychology.
- Geiger, Nathaniel, Mike Gruszczynski, and Janet Swim. “Political Psychology and the Climate Crisis.” Forthcoming chapter in Cambridge Handbook of Political Psychology.
- Gruszczynski, Mike. 2020. “How Media Storms and Topic Diversity Influence Agenda Fragmentation.” International Journal of Communication.
- Gruszczynski, Mike. 2019. “Evidence of Partisan Agenda Fragmentation in the American Public, 1959-2015.” Public Opinion Quarterly.
- Hunt, Kate, and Mike Gruszczynski. 2019. “The Influence of New and Traditional Media Coverage on Public Attention to Social Movements: The Case of the Dakota Access Pipeline Protests.” Information, Communication, and Society.
- Comfort, Suzannah Evans, Edson Tandoc, and Mike Gruszczynski. 2020. “Who is heard in climate change journalism? Sourcing patterns in climate change news in China, India, Singapore, and Thailand.” Climatic Change.
- Friesen, Amanda, Mike Gruszczynski, Kevin B. Smith, and John R. Alford. 2020. “Political Orientations Vary with Detection of Androstenone.” Politics and the Life Sciences.
- Wagner, Michael W., and Mike Gruszczynski. 2018. “Who Gets Covered? Ideological Extremity and News Coverage of Members of the U.S. Congress.” Journalism & Mass Communication Quarterly, 95(3): 670-690.
- Gruszczynski, Mike, and Michael W. Wagner. 2017. “Information Flow in the 21st Century: The Theory of Agenda-Update.” Mass Communication and Society, 20(3): 378-402.
I promised many weeks ago that I would begin distilling my Conference presentation down into small digestible posts and I am pleased to say that I ceased being lazy, so here is part 1!
This presentation was designed to be a guideline for new users on designing their configuration architecture and overcoming those first few small hurdles in turning Nagios into a viable business monitoring solution. Some of the architectural decisions are going to be suited more towards a single-business as opposed to a highly distributed environment such as a consultancy.
One of my favourite little-known features of Nagios is the ability to use contact objects to delegate user permissions, and before we jump into the bigger quandaries of design, it’s probably best to make sure we understand how to get the end user the information they need to do their job.
When a user logs in to Nagios, Nagios will see if there is an existing contact object that has the same name as the user… if that user hasn’t been assigned a special permission (such as "view all hosts") then Nagios will only display the hosts to which that contact object has been assigned. That’s pretty neat, huh?
Using this feature and groups, you can effectively build your own role-based access control (otherwise known as RBAC), but this on its own is not all that useful for a business of a reasonable size. I mean, what if you have 100+ potential Nagios users? You don’t want to have to add them all into the htpasswd file… and you certainly don’t want to have to maintain that file!
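To make the mechanism concrete, here is an illustrative pair of object definitions (the names are hypothetical): the contact_name matches the web UI login, and assigning that contact to a host is what makes the host visible to that login.

```
define contact {
    contact_name                  jbloggs        ; must match the web UI login name
    alias                         Joe Bloggs
    email                         jbloggs@example.com
    host_notification_period      24x7
    service_notification_period   24x7
    host_notification_options     d,u,r
    service_notification_options  w,u,c,r
    host_notification_commands    notify-host-by-email
    service_notification_commands notify-service-by-email
}

define host {
    use       linux-server
    host_name web01
    address   192.0.2.10
    contacts  jbloggs        ; jbloggs will now see web01 when logged in
}
```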
Nagios uses Apache basic authentication (hence the htpasswd file)… which means it should accept any valid Apache authentication method. How about trying out the Apache LDAP module like so?
ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
<Directory "/usr/local/nagios/sbin">
    SetEnv TZ "Australia/Melbourne"
    Options ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
    AuthName "Nagios Core"
    AuthType Basic
    # AuthUserFile /usr/local/nagios/etc/htpasswd.users
    # Require valid-user
    AuthBasicProvider ldap
    AuthName "Nagios server"
    AuthzLDAPAuthoritative off
    AuthLDAPBindDN "CN=bindAccount,OU=User,DC=domain,DC=com"
    AuthLDAPBindPassword xxxxxxxxx
    AuthLDAPURL ldaps://domain.com/OU=User,DC=Domain,DC=com?sAMAccountName?sub?(objectClass=user)
    AuthLDAPGroupAttribute member
    AuthLDAPGroupAttributeIsDN on
    Require ldap-group CN=NagiosAccessGroup,OU=Groups,DC=domain,DC=com
</Directory>
Replace the relevant LDAP parts in the above config and you can now use your company's regular LDAP or Active Directory as the authentication source for Nagios. Now if we go ahead and create contact objects in Nagios with names that match LAN logons and assign them to hosts and/or services, we will have a seamless user experience.
More than that, you will have just integrated your Nagios RBAC with your company's LDAP, and I think that's pretty darn cool.
The last piece of the puzzle will be to throw together a quick script that automatically syncs those LDAP users into Nagios contact objects… but I’ll leave that part up to you. Part 2 will begin covering configuration design with Users and Contacts.
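As a hedged sketch of what such a sync script could look like: this assumes you have already pulled the (sAMAccountName, mail) pairs from LDAP (e.g. with the ldap3 module or ldapsearch), and it leans on the generic-contact template that ships in Nagios' sample contacts.cfg; the output path is illustrative, not a Nagios default.

```python
# Sketch only: assumes the (sAMAccountName, mail) pairs have already been
# pulled from LDAP (e.g. with the ldap3 module or ldapsearch).  The
# generic-contact template comes from Nagios' sample contacts.cfg; the
# output path below is illustrative.
CONTACT_TEMPLATE = """define contact {{
    use           generic-contact   ; template holding notification defaults
    contact_name  {name}
    alias         {name}
    email         {email}
}}
"""

def render_contacts(users):
    """users: iterable of (sAMAccountName, mail) tuples from LDAP."""
    return "\n".join(CONTACT_TEMPLATE.format(name=n, email=e) for n, e in users)

def write_contacts(users, path="/usr/local/nagios/etc/objects/ldap_contacts.cfg"):
    """Render and write the contact objects; re-run from cron to keep in sync."""
    with open(path, "w") as fh:
        fh.write(render_contacts(users))
```

Run it from cron, then verify and reload Nagios afterwards so the new contacts take effect.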
Presentation Pt1: User Permissions
Presentation Pt4: The art of service dependencies
<?php
if (!defined('BASEPATH')) exit('No direct script access allowed');
class Sendmail
{
public function emailConfigSetttings() {
$config['protocol'] = SMTP_PROTOCAL;
$config['smtp_host'] = SMTP_HOST;
$config['smtp_port'] = SMTP_PORT;
$config['smtp_user'] = SMTP_USER;
$config['smtp_pass'] = SMTP_PASSWORD;
$config['charset'] = "iso-8859-1";
$config['mailtype'] = "html";
$config['newline'] = "\r\n";
$config['wordwrap'] = TRUE;
$config['mailpath'] = '/usr/sbin/sendmail';
return $config;
}
public function sendEmail($mail_array)
{
/*
* TO : to email addresses. We can add multiple email ids (it accepts an array only) - mandatory
* CC : cc email addresses. We can add multiple email ids (it accepts an array only) - mandatory
* SUBJECT : Subject of mail - mandatory
* DATA : Body data (dynamic data) - mandatory
* ATTACHMENT : We can attach multiple files - not mandatory
* TEMPLATE : Email body template
*
*/
$response=array();
$obj =& get_instance();
if(is_array($mail_array))
{
//print_r($mail_array);exit;
$to=$mail_array['to'];
$cc=$mail_array['cc'];
$subject=$mail_array['subject'];
$data=$mail_array['data'];
$attachment=(isset($mail_array['attachment']))?$mail_array['attachment']:'';
$req_template=$mail_array['template'];
$error=0;$error_messsage='';
/*Validation Error Start*/
if(!is_array($to)){$error=1;$error_messsage.='To email should be in array format only, ';}
if(!is_array($cc)){$error=1;$error_messsage.='CC email should be in array format only, ';}
if($subject==''){$error=1;$error_messsage.='Enter subject, ';}
if(!is_array($data)){$error=1;$error_messsage.='Enter data, ';}
if($req_template==''){$error=1;$error_messsage.='Template path is missing, ';}
//checking multiple emails are valid or not is pending.
/*Validation Error End*/
if($error==0)
{
$obj ->load->library('email');
if(SITE_MODE==0) {
$config = $this->emailConfigSetttings();
$obj->email->initialize($config);
}
else
{
$obj ->email->set_header('MIME-Version', '1.0; charset=utf-8');
$obj ->email->set_header('Content-type', 'text/html');
}
$assign_template = $obj->load->view($req_template, $data,TRUE);
$obj->email->from(SMTP_FROM_EMAIL, SMTP_FROM_NAME);
$obj->email->to($to);
if(is_array($cc) && (count($cc) > 0)){
$obj->email->cc($cc);
}
$obj->email->bcc(BCC_EMAIL);
$obj->email->reply_to(SMTP_FROM_EMAIL, SMTP_FROM_NAME);
$obj->email->subject($subject);
if(is_array($attachment) && count($attachment) > 0)
{
foreach ($attachment as $attachment_result){
$obj->email->attach($attachment_result);
}
}
$obj->email->message($assign_template);
$obj->email->set_alt_message('Some data is missing. Please refresh once.');
$send = $obj->email->send();
if($send)
{
$response[CODE]=1;
$response[MESSAGE]='Mail sent successfully';
}
else
{
$response[CODE]=0;
$response[MESSAGE]='';
//$response[MESSAGE]=$this->email->print_debugger();
}
}
else
{
$response[CODE]=0;
$response[MESSAGE]=$error_messsage;
}
}
else
{
$response[CODE]=0;
$response[MESSAGE]='Input data should be in array format only';
}
return $response;
}
}
Andy Crouch [amcrouch]
How did you begin programming and at what age?
I started on a ZX Spectrum at the age of about ten. I had a break of about 8 years when I focused on music, then picked it up again at about the age of 19, when I went to work in an office and got bored doing tasks in Excel and Access, so I automated them.
Uphill all the way from there really!
What languages do you code, and in what platforms?
In addition to that and for personal fun and profit I have also worked on Linux (KDE) for about 10 years and have created projects in Python, Qt, C++ and Bash.
My current spare time sees me picking up more and more Ruby.
What machine configuration and operating system do you use?
I have two laptops.
My work machine is a custom-spec PC Specialist (http://www.pcspecialist.co.uk/laptop-computers/) Optimus III which runs an i5, 8GB RAM and a 500GB hard drive. It is running Windows 7 Professional and has a screen resolution of 1920x1080, which is the must-have feature for me to run tiled windows next to each other.
My home laptop is a Fujitsu Lifebook A Series running an i3, 8GB RAM and a 750GB hard drive. It was bought for silly money after my Thinkpad died. For a cheap laptop it is brilliant. I run Arch Linux (64-bit) on it and have been a huge Arch fan for the last 5 years. It would take a lot to switch now.
Please list web addresses where we can see some of your work
My last major project was http://www.konetic.com and my current project is http://www.controlpoint.eu/
My open source code is spread far and wide but is mainly desktop based.
What motivates you to undertake a new project?
Two things motivate me: either a problem that bugs me for which no solution exists, or one for which the existing solutions don't work as I want or think they should.
Commercial projects have to be interesting and look to solve real problems.
What part of project development is most gratifying to you?
All parts of the process are fun although the actual problem solving is what drives me.
From the outside, it seems a rational job, but is creativity necessary for programming?
Yes! Programming is an art which should be studied for life and a subject which you really enjoy. 9-5 programmers don't get it!
What conditions do you need to concentrate when programming?
Music and coffee.
After working for long periods of time, have you ever felt as though you were in a bubble?
I am lucky to have a young family who keep me busy in non programming ways. I never get to the point where I hit the wall.
When you check out code you wrote time ago, what's the main difference with respect to code that you write nowadays?
I find that I never cringe too much, as the main improvements that stand out as being required are changes in the underlying framework (such as moving to LINQ etc).
Do you still buy programming books, or do you learn everything from online sources?
I do still buy books when picking up a new language and for self improvement. For day to day work I tend to rely on Google.
Do you think programming should be taught at the basic education level?
I remember my terrible IT lessons which taught Office only. Nothing I know now was picked up at school.
Now that computers are being used in such a mainstream way and for every aspect of day-to-day life, youngsters need to be taught the basics, which will hopefully encourage them to join the next generation of programmers.
What has been your experience in marketing your software?
To be honest I have stayed away from Marketing as much as I can. The companies I work for have people for that.
What do you learn from software users?
That I know nothing about users!
Even after 10-plus years it seems they are all unique and have different workflows when using software. They are a great source of ideas to drive software forward.
What would be your solution against piracy?
I really have no answer; however, it is clear that media companies are in the same position. If they priced music and films based on a modern internet-based delivery system, rather than on when people were going to shops for CDs and DVDs, the problem would be reduced.
Would you consider yourself rigorous in the organization of the coding that you write and on commenting it?
Anal might be a better word. Code is written once and maintained for years to come.
How do you calculate the budget for a software project?
This is not something I really get involved in however, on my current project I have helped drive a Kanban based approach which aids with estimates of time for a given task.
What are your favourite games and on what platform do you play them?
I have never really been a gamer but my son has an Xbox which he makes me play Fifa(7,8,9,10,11,12) on.
He is excited about Fifa13.
How often do you clean dirt-buildup on your keyboard?
A lot. I tend to clean both my laptops once a week or when needed.
How do you feel when friends or family ask for your help in solving domestic computer problems?
I tend to guide the family on what to buy and what to run. I also have remote desktop connections to both families main machines.
As machines for development, what opinion do Macs deserve?
I have never used a Mac but they do look good. They are just too expensive.
How do you protect your computer from viruses?
Linux viruses? Do they "really exist"? Both my machines run AVG.
In social settings, do people become interested when you tell them you are a software developer?
Some do, some don't. I don't tell too many people on first meeting.
Do you work alone or in a team? Which do you prefer?
I have worked remotely for both my last two jobs. I like working in a team or on my own. I am usually in constant contact with the developers I work with.
Are you one of the first to update to new software when it comes out, or do you normally wait until more stable versions appear?
On Linux I run Arch which is as bleeding edge as it gets and I usually update at least once a week.
On Windows I usually upgrade on SP1!!
What is your main reason for not meeting project deadlines?
Project creep. Requirements that change at the last moment or requirements that the user does not think of until the last moment.
I prefer an agile development process which lets the user see progress as development proceeds and gives them ownership of the development of their project.
In your opinion, which company helps software developers the most?
There are many companies that think they help developers but it is up to a developer to know what they need to solve and find either a solution or a company that can reach a solution.
How many breaks per day do you normally take?
I usually work a core of 8am till 5pm. I break when I need to, usually no more than three 15-minute breaks on top of coffee refills!
At this point in your career, what would be the project of your dreams?
To create an amazing Arch-based Linux distribution that uses KDE by default and has the polish of Ubuntu.
A RoR based web application that would allow musicians to distribute their music and make money from it that would be so fan focused, fans would not pirate content.
What is your next project?
Who knows, I work/live for now and not next week.
Which websites or forums for programmers do you frequently visit?
Hackernews and various blogs, feeds.
What advice would you give to someone who wants to become a programmer?
Don't give up and find a problem/issue that bugs you enough that you want to create a solution for it.
Also don't expect to make loads of money, do it for fun.
Harlow, Essex, UK
Prioritizing features is the last thing you should do
Jeff Lash has a good post (Product Management is More Than Prioritizing Features) that focuses on probably one of the key deficiencies of poor product managers. I actually agree with most of his post, but don’t think he takes it quite far enough.
He describes the trap of PMs who think that their job is to simply compile and prioritize a list of requests from others – customers, sales people, support, execs, etc. I’ll call this type of PM a listmaker. I have found listmakers to be widespread in junior PMs and in dysfunctional PM organizations. Listmakers are doing much more Project Management than Product Management (all you quality project managers out there, please don’t beat me up for shortchanging your discipline, I’m trying to make a point).
Hearing this set of feature requests and paying attention to them to create ‘the list’ is, in fact, important (particularly when it is your Agile Backlog). However, you can’t possibly do it right if you start (or finish) there. The key is: how do you prioritize what really should be done?
Listmakers will sort the list by some sort of seemingly justifiable algorithm. Perhaps based on the number of references to an item, perhaps based on how long it’s been there, perhaps based on who’s yelled the loudest. All of these make sense to a point. Many of these even make sense for that bucket of ‘sucks less‘ features that I think should be part of every traditional (i.e. big release, not an agile sprint) project.
However, it does not make sense for really taking your product and your company to success. For that, you have to be able to see the bigger picture. You have to ask and answer questions like:
- Where is the industry going?
- What is going to matter in 1, 2 or 3 years?
- What is the innovation that will change the business?
- What is the competition going to be doing when this launches (yep, get out that crystal ball)?
It is really about strategic thinking and planning. That has to drive the product. Looking at the individual feature lists first will lead to the fabled problem of not seeing the forest for the trees. The right sequence is:
- Plan your roadmap. This is a 1-3 yr rough plan of what you need to do based on the kinds of questions asked above. It may very well be wrong by the time you get to years 2 and 3. That’s ok. It still provides a foundation of consistent thought for the team to work from
- Plan your releases. Figure out what should be packaged together into each release opportunity. Maybe it is driven off of a marketable theme (see “Write the press release before the code”). Maybe it is based on what bits of effort make sense to do together. But either way, start with the big and/or important parts first.
- Once you have that framework in place, the Listmakers can come to the fore and help to make sure that the requested features that tie to the theme are prioritized into the project (or not). After laying the strategic groundwork, it is safe and appropriate to be tactical.
Not sure if this is applicable, or something the fontconfig developers are open to including. It would be really nice if fontconfig provided some means to store the font returned via an fc-match. Please excuse if this is a duplicate.
In working with the EFL UI toolkit, the one used for the Enlightenment desktop, I am having great difficulty determining the default font used. The theme and other files that have font names passed to fc-match are mostly generic; Sans vs a specific Sans font. Once fontconfig matches and serves up the font, it seems there is no way to ask fontconfig what was provided. The requesting application must store this returned value via some means.
A global setting/storage would not make sense. But maybe something along these lines: application A requested font X but was given font Y. That is saved, so if anything asks what font application A is using, including application A itself, fontconfig replies that application A is using font Y.
It seems most environments (GTK, Qt, and others) are doing this in their own way. Some set a specific font in config files, others in the theme, and then provide API means. However, if that font is missing, they would have the same issue: the font returned is other than the font requested.
It may be up to the application/UI toolkit to store the value returned from fc-match. That is fair, and maybe how it was designed and intended to be used. At the same time, it may be beneficial to be able to re-query fontconfig to see what font was served up.
Say I am working with Program B and I want to know what font Program A is using. If I can ask fontconfig which one was given to Program A, I can set that same font for Program B. That is one usage outside of an environment storing such information. This would also allow a standard implementation, so each environment would not be left to devise its own means.
Thank you for your consideration of this feature request!
This doesn't make sense to me.
I tried to explain it as clearly as I could. I am trying to determine a default/current font for a system. It seems this is not easy because of how things interact with fontconfig.
It may be outside the scope of fontconfig to retain/store the font returned to whatever requested it. However, that leaves the storage up to each client. Some do it in configs, some in themes. But that relies on the font being present; if that font was not, and fontconfig returned a different one, the requester of the font must store the value returned.
I am not sure if this is fontconfig's responsibility. I am trying to get the default font, or the current font in use, within the Enlightenment desktop and EFL apps. From discussions on the mailing list, it seems this is not possible due to fontconfig.
This may help for context and reference
Topic [E-devel] Getting default/theme/system font name
Most relevant post showing getting default/system font from various envs.
Downstream has closed this bug. For the record, I think this is in part an EFL toolkit issue. However, they are not receptive to any sort of change to their very strange font system, even though I have shown how most all others allow you to get fonts from objects, if not a global default system font from theme, config, etc.
Without comment on the bug, it was closed as wontfix downstream. It seems how they are handling fonts entirely is very strange and unfriendly to application development.
You don't get to know what the font is that the theme has chosen because it's abstracted. It may not even be the part name in the edj file that is the "standard label" that determines the visible font. It may be something else. It's abstract. You get to override or not. You don't get to peek inside the
Which I think is crazy. Objects have a font used to render them, but it seems getting that information is impossible; they have chosen an abstracted black-box design for it.
It is very odd!
Maybe this is better suited for the mailing list, but what I am thinking of as a feature enhancement to fontconfig would be something like the following, keyed on binary name maybe.
Program A requests the Sans font, and fc-match returns:
DejaVuSans.ttf: "DejaVu Sans" "Book"
In an env/temp file, fontconfig stores that returned value. And something like that for other programs, e.g.:
C:DejaVu Sans Mono
Then if something else wants to see what fonts any of those programs (A, B, C) is using, they can call fontconfig, which would return output like fc-match, but from the previously returned value. It might even be used by fc-match on subsequent requests from the same program for the same font.
Hope that makes more sense.
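As a minimal sketch of the cache being floated here, in Python (the identifier scheme and function names are hypothetical, purely to illustrate the idea): parse fc-match style output and remember which family each identifier was actually served.

```python
import re

# Sketch of the proposed per-application cache: parse fc-match style
# output ('file.ttf: "Family" "Style"') and remember which font family
# each identifier was actually served.  Identifiers ("A", "A:lists", ...)
# follow the scheme discussed above and are illustrative only.
_MATCH_RE = re.compile(r'^(?P<file>\S+):\s+"(?P<family>[^"]+)"\s+"(?P<style>[^"]+)"')

_served = {}  # identifier -> family actually returned

def record_match(identifier, fc_match_line):
    """Store the family fc-match actually served for this identifier."""
    m = _MATCH_RE.match(fc_match_line)
    if not m:
        raise ValueError("unrecognised fc-match output: %r" % fc_match_line)
    _served[identifier] = m.group("family")
    return _served[identifier]

def query_match(identifier):
    """What font was this program actually given? None if unknown."""
    return _served.get(identifier)
```

In the proposal this bookkeeping would live inside fontconfig itself; the sketch just shows the request/store/re-query cycle.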
That seems to assume all programs use exactly only one font, ignoring that many use multiple fonts.
Valid point; that would need to be addressed as part of any implementation. It could be addressed in a few ways. Offhand:
1. By an identifier used by a given program. In the simplest form it stores the program name; it may store additional identifiers unique to usage in a given application. Ex:
A:lists:Noto Sans Mono
2. By PID. Likely not as good as 1, for the same reason: it could be the same process requesting multiple fonts.
There are likely other methods. I think something based on #1 would likely be best, but it could vary.
This also assumes that each PID also has exactly one set of runtime configuration. The scope of that is not just the config file, but exactly how you feed and initialise fontconfig, e.g. with font sets.
Yes, I do not think PID is good; I was just offering an alternative so as not to propose only a single solution.
Likely an application-specific identifier of their own choosing is best. That way they can ensure fonts, sizes, styles, etc. for the various identifiers, while allowing other programs access to the same information. Once something has a font, other things do not really have means to find out what font it is using. Say it's a GTK app that wants to mimic the font of a Qt app, or another.
Just thoughts on a potential solution. It may not be applicable or correct for fontconfig; the idea may be insane. It is just a feature request for consideration and further thought. There are definitely details to be hashed out before any implementation. Thank you for your time and consideration!
-- GitLab Migration Automatic Message --
This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.
You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/fontconfig/fontconfig/issues/9.
|
OPCFW_CODE
|
Future land use forecasting is an important input to transportation planning modeling. Traditionally, land use is allocated to individual traffic analysis zones (TAZ) based on variables such as the amount of vacant land, zoning restriction, land use planning and policy limitations, and accessibility, under an externally estimated control number in population and employment growth at the county level. This land use allocation approach does not consider agglomeration factors, the market equilibrium of supply and demand, and is not sensitive to different land use and transportation policy changes. To overcome the limitations of this conventional approach, this research project uses a new analytical approach, i.e., a combination of cellular automata (CA) and agent-based modeling methods to estimate future land use allocation.
CA models have been used extensively in modeling and simulating complicated spatio-temporal processes like land use change. They can model the changes of land use patterns over time and can simulate a variety of spatial processes and influences relevant to land use change. Agent-based models represent the interactions of different decision-making entities. The agent-based model provides a flexible representation of heterogeneous decision makers or agents, whose behaviors are potentially influenced by interactions with other agents and with their natural and built environment.
This study uses CA to capture the spatial relationships (e.g., clustering) of land development, as well as agglomeration factors. CA represents complicated systems well and is thus a good method to show changes of land use patterns. However, the CA model alone cannot sufficiently explain the changes, because the CA model is not sensitive to policy variables. Thus, the study also uses agent-based models to capture the behavior of each agent, which makes it sensitive to policy changes. Agent-based models, unlike CA, can model individual decision-making entities' behavior as well as their interactions. In addition, this study applies a multinomial logit (MNL) model to formulate the CA transition rule for different land types, which estimates the probability of future land use for each cell. Therefore, the model this study developed is an MNL-CA-Agent land use model, called LandSys.
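The MNL transition rule assigns each cell a probability over candidate land-use types from estimated utilities; a minimal sketch of that computation (with made-up utilities, not the coefficients LandSys actually estimates) is:

```python
import math

def mnl_transition_probabilities(utilities):
    """Multinomial-logit probabilities: P(k) = exp(V_k) / sum_j exp(V_j).

    `utilities` maps each candidate land-use type for a cell to its
    estimated utility V_k (in LandSys, a function of drivers such as
    accessibility and neighboring land use). The values used below are
    illustrative placeholders only.
    """
    m = max(utilities.values())  # subtract the max for numerical stability
    exps = {k: math.exp(v - m) for k, v in utilities.items()}
    total = sum(exps.values())
    return {k: e / total for k, e in exps.items()}

# Example: transition probabilities for one 50m x 50m cell
probs = mnl_transition_probabilities(
    {"residential": 1.2, "commercial": 0.4, "vacant": -0.5}
)
```

The CA step would then sample (or threshold) each cell's next land-use type from these probabilities.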
In the developed LandSys model, land use changes are performed by the CA model, with external drivers as agents. These agents include: employer, household, developer and government. The model is estimated and validated using a cell-based representation of land (50m x 50m). Then the estimated land use changes can be plugged into Florida Standard Urban Transportation Modeling Structure (FSUTMS) models as updated inputs. In response, FSUTMS models update their demand modeling of the transportation system. The updated traffic information (e.g., accessibility) is then fed back to the LandSys to capture the interaction between land use and transportation and generate more accurate simulation results. LandSys simulates land use change at multiple spatial and temporal scales, as well as representing decision making behaviors of households, employment, developers, and government policies. Future land use patterns and socioeconomic data can be produced to update those inputs of the transportation model. Policy scenarios, such as mixed land use growth management policies, can be simulated and analyzed for decision makers.
The major advantage of this modeling approach is its integration with FSUTMS models. The feedback cycle between land use and transportation models can simulate the interactions between the two. This study employs three indicators to compare the simulation accuracy between the integrated framework and standalone FSUTMS models: link saturation in the transportation network, overall vehicle miles traveled (VMT), and vehicle hours traveled (VHT). The results show that including the land use and transportation feedback in the integrated modeling framework produces better results than the land use or transportation model alone, can help modelers better simulate the land use-transportation system, and can help decision makers better understand the consequences of different land use and transportation planning scenarios.
PDF is here (https://rosap.ntl.bts.gov/view/dot/23704).
|
OPCFW_CODE
|
Experience Builder validates your app before publishing it to the environment.
If the validation fails, Experience Builder lists the issues it found in your app or environment. Issues belong to one of the following types:
Blockers are issues that you need to solve before publishing the app.
Warnings are issues that may affect the experience of your app, but you don't need to solve them to publish the app.
An issue may include a Fix it now link that helps you solve the issue. Selecting this link takes you to the location in Experience Builder where you can fix it.
A blocker is an issue with your environment or with your app that blocks the publishing of the app.
Experience Builder lists blockers in the publish dialog after Before publishing the app, please review the following:.
We can't publish your application, because the dependency "<dependency-name>" has version <outdated-version> and the required version is <current-version>
Your environment doesn't have the correct version of a dependency of the app.
We can’t publish your app. There’s a missing required dependency in your environment: "<dependency-name>"
Your environment doesn't have a required dependency installed.
You haven’t added any screens.
Your app must have screens, otherwise you can't test it.
Add flows to your app.
Not enough flows. Add flows to your app.
Ensure that your app has at least two flows.
Missing menu items. Add menu items to your app.
In the Menu canvas, add a menu icon and connect it to a flow.
Menu items are missing links. Add destination links to your app's menu items.
In the Menu canvas, link at least one menu item to a flow.
Missing flows. Add a Login or Onboarding flow.
Your app needs a default screen to act as the entry point. Experience Builder uses either a login or onboarding flow as the default screen of your app.
In the Flow canvas, select Add flow, search for Onboarding, and add a login or onboarding flow to the app.
Too many login flows. Add only 1 login flow per app.
Your app can only have one login flow.
In the Flow canvas, delete the login flows until your app only has one login flow.
A warning lets you know about issues that can compromise the experience of testing the app.
Experience Builder lists warnings in the publish dialog after It looks like your app isn't ready to be published... You might want to:.
There’s a missing dependency in your environment: "<dependency-name>"
Your environment doesn't have an optional dependency installed.
Install the missing dependency in one of the following ways:
Install the dependency component package associated with the missing component.
Select Fix it now, download the dependency from Forge, and install it in the environment using Service Studio.
Unlinked menu items. Connect menu items to destination flows.
When using the app, selecting the unlinked menu entries won't take you anywhere in the app.
Connect menu items to flows.
Can't access the "<flow-name>" flow.
When using the app, you won't be able to access the screens in the <flow-name> flow since no flow or menu item connects to the <flow-name> flow.
Connect the end point of another flow or a menu item to the <flow-name> flow.
That doesn't look right. Your login flow isn't connected to another flow. Connect the flow so users can log in and use your app.
You must connect the exit point of the login flow to another flow, otherwise end users can't test the rest of the app.
Adding flows would improve navigation.
You only added onboarding and login flows to your app, so you won't be able to test the rest of your app.
Add flows of other types, and make sure the login or onboarding flows connect to the flows you added.
You've added flows that require registered users. Add a login flow.
Some flows in the app are only accessible to registered users, which means the app must be able to authenticate users using a login flow.
You've added a passcode login flow. Add a signup flow.
Passcode information isn't available by default for end users created using Users. All the signup flows in Experience Builder let the end user set a passcode.
|
OPCFW_CODE
|
ARNweb helps you be less productive and spend more time doing other things.
My website is kind of my portfolio with some maybe helpful utilities.
See my work and explore my website with the buttons on the left!
ARNweb API is now up and running so I can request data using my Arduinos and such. Documentation coming soon.
New server (again)
Moved ARNweb to a shiny new server because the old one was dying.
You can now change your username on your ARNweb account
Finally added descriptions to the hardware projects and added some new projects.
After a catastrophic failure of the CPU fan, ARNweb is back online with a brand new fan and a cool potentiometer for controlling fan speed
Way more mobile friendly and looking a lot snazzier overall.
Because the framework is nearly done, I moved everything from Development to the main page. This does mean some of the pages do not work right now
Looking for more bugs?
While I'm trying to make a framework, this site will most likely not be updated for a while. Want to see my progress? ARNweb Development
Got rid of the iframe! Thanks u/crispydesigns
Started optimizing scripts and preparing for a new mainframe
Probably going to switch development to the new ARNweb once again since this layout is basically impossible to make responsive on iOS and a bit too ripped off from w3schools.
Updated server and projects
About page updated
Finally added some freaking information
Contact and The Password Master™ have been updated to fit the looks
Fixed the folder structure
You guys have no idea what is going on behind the scenes
Some projects have recently been deprecated and will therefore no longer be available
Reloading now takes you to the page you were on last.
Added a Zalgo option, better online statuses, and adding text to your links is now optional.
It's very basic for now, but usernames and Gravatars coming soon!
Yea man! More storage and more speeds!
New 404 page
You will probably never see it, but you can check it out here if you want.
The page you are currently on will be slightly lighter than the others to show what page you're on.
Added like 165 different sizes of favicons to fix it for all browsers.
Added the new styling to several pages.
New login page!
The login page has been made a lot nicer and now uses a CSS grid instead of an HTML table making it more mobile friendly.
Back in Biz!
ARNweb development has been picked up once again after 2 months of absolutely nothing.
ARNweb now runs on a new, better server so I will soon be able to do some more fun things!
I recently moved all databases from MariaDB to MySQL. All accounts should work fine, but if you have any problems regarding your ARNweb account, please contact me.
|
OPCFW_CODE
|
[Regression][Android][1.6.0] Content is pushed down when StatusBar is set to translucent and using native-stack
Describe the bug
After upgrading to >=1.6.0, all my content is pushed below the status bar which defeats the purpose of setting it to be translucent. I believe @kirillzyusko missed this because you tested with headerShown: true, but this is not sufficient for the cases where we want to render content below the status bar.
Code snippet
<StatusBar
translucent
backgroundColor="transparent"
/>
Repo for reproducing
Use native-stack with headerShown property set to false and statusBarTranslucent property set to true
Let me know if I should provide a repo but the bug should be quite self-explanatory.
To Reproduce
Steps to reproduce the behavior:
Set StatusBar to be translucent via one of the following methods:
screenOptions.statusBarTranslucent: true in native stack
StatusBar.setTranslucent(true)
<StatusBar translucent />
Make sure that header is set to not show
screenOptions.headerShown: false
See that the content is being pushed below the StatusBar
Expected behavior
Content should be rendered under the status bar
Screenshots
Issue
Expected
Smartphone (please complete the following information):
Device: Pixel 7 Pro
OS: Android 13
RN version: 0.72.4
RN architecture: Paper
JS engine: Hermes
Library version: 1.6.0 and above
@thespacemanatee may I also ask which version of react-native-screens you are using and whether it was changed during the RNKC bump?
I will try to reproduce your problem as you described and in case of no luck - I will ask you to provide repro 👀
@thespacemanatee by the way - did you specify statusBarTranslucent property on KeyboardProvider level?
@kirillzyusko I'm using "react-native-screens": "~3.24.0" and it was not changed when I bumped RNKC.
Specifying statusBarTranslucent on KeyboardProvider does work! Although this wasn't required on 1.5.8. Is this a new requirement? What if I want to dynamically switch translucency on and off?
Is this a new requirement?
No, it was a requirement in the past as well 😅 I'm astonished how it worked before in your app 🤔
Basically, before, I consumed the insets of the EdgeToEdge view (which was below rootView), and if you used the statusBarTranslucent prop from RNS then I didn't get top insets because they were consumed by RNS.
However, that approach caused some issues - for example, sometimes bottom insets from the keyboard were not consumed in native stack (which caused a content jump when the keyboard is shown), so I started to listen to insets from rootView.
And now, since you didn't specify the statusBarTranslucent prop, the EdgeToEdge view adds default paddings.
What if I want to dynamically switch translucency on and off?
I think now it would be hard to achieve. But if you want to achieve that - you can open an issue and I will try to think how to implement it.
In all apps that I developed we always specified status bar as translucent to match iOS (because on iOS it's always translucent) to simplify cross platform codebase.
But again - if you need to switch on/off translucency, please, open a new issue 🙏
@thespacemanatee since specifying statusBarTranslucent on the KeyboardProvider level fixes the problem - I recommend going with this solution for now.
I'm going to close this issue - feel free to open a new one if you need dynamic statusBarTranslucent 😊
|
GITHUB_ARCHIVE
|
Bluetooth audio problems on 21.04
I just got a new laptop (Lenovo ThinkBook 14s Yoga) and installed Ubuntu 21.04. My Bluetooth headphones connect and work fine for a few minutes, but then they disconnect and reconnect in HSP/HFP mode. I don't need to use the microphone so I want them to be in A2DP all the time, and the disconnects are getting very annoying.
I've tried disabling this mode a few different ways:
in /etc/bluetooth/audio.conf, I added
[General]
Disable=Headset
in the same file I've also tried
[General]
Disable=Source
[Headset]
MaxConnected=0
I also tried adding Disable=Headset to both the [General] and [Policy] sections in /etc/bluetooth/main.conf. This just caused warnings in the syslog.
I've tried this setting in /etc/pulse/default.pa:
.ifexists module-bluetooth-policy.so
load-module module-bluetooth-policy auto_switch=false
.endif
The bluetoothd entries in syslog look like they're just coming from starting and stopping the service after changing the config files.
Jun 2 01:41:35 shiva bluetoothd[438895]: src/profile.c:ext_io_disconnected() Unable to get io data for Headset Voice gateway: getpeername: Transport endpoint is not connected (107)
Jun 2 01:41:41 shiva bluetoothd[438895]: src/profile.c:record_cb() Unable to get Headset Voice gateway SDP record: Host is down
Jun 2 01:41:42 shiva bluetoothd[438895]: profiles/audio/a2dp.c:a2dp_select_capabilities() Unable to select SEP
Jun 2 01:41:43 shiva bluetoothd[438895]: src/service.c:btd_service_connect() a2dp-sink profile connect failed for 00:02:5B:02:17:44: Device or resource busy
Jun 2 01:41:43 shiva bluetoothd[438895]: plugins/policy.c:reconnect_timeout() Reconnecting services failed: Device or resource busy (16)
Jun 2 01:41:53 shiva bluetoothd[438895]: Terminating
Jun 2 01:41:53 shiva bluetoothd[438895]: src/profile.c:ext_io_disconnected() Unable to get io data for Headset Voice gateway: getpeername: Transport endpoint is not connected (107)
Jun 2 01:41:53 shiva bluetoothd[438895]: Endpoint unregistered: sender=:1.486 path=/MediaEndpoint/A2DPSink/sbc
Jun 2 01:41:53 shiva bluetoothd[438895]: Endpoint unregistered: sender=:1.486 path=/MediaEndpoint/A2DPSource/sbc
Jun 2 01:41:53 shiva bluetoothd[438895]: Stopping SDP server
Jun 2 01:41:53 shiva bluetoothd[438895]: Exit
Jun 2 01:41:53 shiva bluetoothd[440635]: Bluetooth daemon 5.56
Jun 2 01:41:53 shiva bluetoothd[440635]: src/main.c:check_options() Unknown key Disable for group General in /etc/bluetooth/main.conf
Jun 2 01:41:53 shiva bluetoothd[440635]: src/main.c:check_options() Unknown key Disable for group Policy in /etc/bluetooth/main.conf
Jun 2 01:41:53 shiva bluetoothd[440635]: Starting SDP server
Jun 2 01:41:53 shiva dbus-daemon[698]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.955' (uid=0 pid=440635 comm="/usr/lib/bluetooth/bluetoothd " label="unconfined")
Jun 2 01:41:53 shiva bluetoothd[440635]: Bluetooth management interface 1.19 initialized
Jun 2 01:41:53 shiva bluetoothd[440635]: Endpoint registered: sender=:1.486 path=/MediaEndpoint/A2DPSink/sbc
Jun 2 01:41:53 shiva bluetoothd[440635]: Endpoint registered: sender=:1.486 path=/MediaEndpoint/A2DPSource/sbc
Jun 2 01:42:04 shiva bluetoothd[440635]: /org/bluez/hci0/dev_00_02_5B_02_17_44/sep1/fd0: fd(41) ready
Jun 2 01:55:27 shiva bluetoothd[440635]: profiles/audio/avdtp.c:handle_unanswered_req() No reply to Start request
Jun 2 01:55:27 shiva bluetoothd[440635]: src/profile.c:ext_io_disconnected() Unable to get io data for Headset Voice gateway: getpeername: Transport endpoint is not connected (107)
None of this has worked so far and I'm not finding any other ideas in my search. Any other suggestions?
By the way, I am thinking about buying this laptop. Does it work well with Ubuntu apart from this problem?
Mostly, yeah. I've also been having some trouble with the screen being stuck upside down when it comes out of standby, which I still need to find a permanent fix for.
This is probably related to a firmware issue you may have with Intel AX200/AX201/AX210 Bluetooth module.
This is probably solved in Fedora 34 by now, but not yet on Ubuntu.
You should probably copy the content of the "intel" subfolder into "/lib/firmware/intel" for the BT part. Files beginning with "ibt-xx" are for Intel Bluetooth ...
https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
https://gitlab.freedesktop.org/pulseaudio/pulseaudio/-/issues/1155
Thanks, looks like I have an AX201 so that is probably it! Going to try it now.
Been running for over an hour without disconnecting now so I think this fixed it!
quick command-line version:
wget https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/snapshot/linux-firmware-20210511.tar.gz
tar xfvz linux-firmware-20210511.tar.gz
cd linux-firmware-20210511
sudo cp intel/ibt-* /lib/firmware/intel/
Did not work for me, did you do a hard reboot after this?
Yes, rebooted after running that.
|
STACK_EXCHANGE
|
Kiran Pachhai, the development manager at the Elastos Foundation, an organization focused on the development of a “decentralized ‘smartweb’ project that allows users and applications to communicate through a peer-to-peer (P2P) network of nodes”, recently shared his insights with CryptoGlobe regarding Web 3.0 and its relationship with blockchain technology.
Web 3.0 & Blockchain Technology
Pachhai, a computer science and math graduate from the University of Colorado Boulder, first noted that blockchains are used to “establish trust between two entities in a way that it is safe, unforgeable, and completely independent of any third-party.”
At present, there’s no widespread consensus among tech professionals about what the new Web 3.0 communications standard encompasses. According to Pachhai:
Web 3.0 may refer to the modern internet that enables consumers and businesses to establish trust between each other, without any authoritative entity in a completely peer-to-peer fashion, so that the data and communication is completely secure.
When questioned about whether the development of the “Web 3.0” standard can potentially help cryptocurrencies become more usable and achieve mainstream adoption, Pachhai explained:
Cryptocurrencies come and go — it is very easy for anyone to create their own cryptocurrency for whatever purpose they have today. So, the focus should really be behind the technology that powers cryptocurrencies, which is blockchain.
Going on to elaborate on the relationship between blockchain and Web 3.0, Pachhai remarked:
Web 3.0 [can potentially] disrupt the entire [global trading] market [as it might transform] how assets and goods are exchanged between two entities, and blockchain ties everything together to form a smart contract that enables users to protect their data and privacy.
“Regular People Are Not Attracted To Cryptocurrencies”
Interestingly, Pachhai thinks:
Regular people are not attracted to cryptocurrencies at the moment. They are interested in what they can see and value, which is where blockchain comes in. Once the public knows the true value of what blockchain provides in a transparent manner, cryptocurrencies will become more usable, paving the way for mainstream adoption.
Elastos Carrier Development: Fixing Core Problems With Internet Infrastructure
In response to a question about the main purpose of the Elastos Carrier Development project, a P2P communications protocol, Pachhai said:
Elastos Carrier is a decentralized communication platform that enables two parties (e.g. device-to-device, user-to-user, or user-to-device) to exchange data and information in a safe manner. There are core problems to the current internet infrastructure. The Elastos Carrier fixes a lot of these problems, and enables apps to use the Carrier in tandem with blockchain technology. With this combination, users can not only secure and own their data, but also secure their communication.
Decentralized Applications Cannot Be Shut Down
The [full potential of the] Elastos Carrier [protocol can be realized by] instant messenger apps and digital marketplaces. These types of apps will have no central servers or authority behind them, so there are no third parties involved. Businesses can be run autonomously, people can talk to each other safely, and the applications cannot be turned off by anyone. That is what truly decentralized applications do. In other words, no third-party can put up a roadblock or shut them down.
Responding to a question about the disadvantages of using the client-server model, which the current “web” uses, and how the “Web 3.0” model improves on this, Pachhai remarked:
The client-server model by itself does not have any disadvantages because it’s just a way to transfer data from one party to another. The problem has to do with the fact that the server is centralized and controlled by one entity, and the data stored in that server is owned by that entity — even if it doesn’t belong to them. Another issue is that because there’s one entity controlling the data behind the website, hackers may have more incentive to attack the servers, because if they succeed, they can get hold of the private and sensitive data of regular consumers.
Problems With Centralized Client-Server Model
Commenting further on other inherent problems with the centralized client-server model, Pachhai noted:
The third issue is that the company that controls this data can monetize the data or set up some rules that may not be in favor of regular consumers. The Web 3.0 model improves on this because it tries to address all three of those problems head on. First of all, there is no server that stores the user’s data. Instead, the data is split up and distributed [among] many machines. Even if one machine goes offline, the service is still online. Secondly, because the data is [distributed], hackers have less of an incentive to try to get into the system, because [not all the data is accessible from one location].
Even if the [hackers are able to get to all the data], they need the original user’s private key to access those pieces of data. Lastly, no company has control of users’ data because there is no company controlling any of these servers/machines. Users are also able to monetize their data however they want.
|
OPCFW_CODE
|
Need a guidance to choose best approach for dynamic web browsing with python
I am working at a company and one of my tasks is to scan certain tender portals for relevant opportunities and share them with distribution lists I have in Excel. It is not a difficult but rather an exhausting task, especially with the other 100 things they put on me. So I decided to apply Python to solve my pain and provide opportunities for gains. I started with simple scraping with BeautifulSoup, but I realized that I need something better, like a bot or smart Selenium-based code.
Problem: manual search and collection of info from websites (search, click, download files, send them)
Sub-problem for automated site scraping: credentials
Code background: rare; I learn from different platforms based on the problem at hand (mostly boring), mostly Python and data science related courses
Desired help: suggest a way, framework, or examples for automated web browsing using Python, so I can collect all the info in a matter of clicks (data collection using Excel is basic; I do not have access to databases; however, more sophisticated ideas are appreciated)
PS. I'm working two jobs and trying to support my family while searching for other career options, but my dedication to and care for the business eat up my time, as I do not want to be a troublemaker; so while I try to push management (which is old school) for support, time goes by.
Please and thank you in advance for your mega smart advice! Many thanks
This is a major undertaking. If I understand you correctly, as you're working with MS Excel, you might want to see what is possible constructing code there with Visual Basic for Excel as then your data will be converted into the format you want. Otherwise, check out the O'Reilly publisher's book 'Web Scraping with Python, 2nd Edition'. In any case, be prepared to spend a fair amount of time before seeing any results. Answer below is a good start. AND notice that the tag [web-scraping] shows ~24000 Q/A. Try searching for your particular issues earlier, than later. Good luck!
@Greg I added an example per your edit.
BeautifulSoup is not going to be up to the job, simply because it is a parser, not a web browser.
MechanicalSoup might be an option for you if the sites are not too complex and do not require Javascript execution to function.
Selenium is essentially a robotic version of your favourite web browser.
Whether I choose Selenium or MechanicalSoup depends on whether my target data requires Javascript execution, either during login or to get the data itself.
Let's go over your requirements:
Search: Can the search be conducted through a get request? I.e. is the search done based on variables in the URL? Google something and then look at the URL of that Google Search. Is there something similar on your target websites? If yes, MechanicalSoup. If not, Selenium.
Click: As far as I know, MechanicalSoup cannot explicitly click. It can follow URLs if it is given what to look for (and usually this is good enough), but it cannot click a button. Selenium is needed for this.
Download: Either of them can do this as long as no button clicking is required. Again, can it just follow the path of where the button leads to?
Send: Outside the scope of both. You need to look at something else for this, although plenty of mail libraries exist.
Credentials: Both can do this, so the key question is whether login is dependent on Javascript.
This really hinges on the specific details of what you seek to do.
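To illustrate the first point in the list above: if the portal's search is GET-based, you can reproduce a search just by constructing the URL, no clicking required. (The base URL and parameter names below are placeholders; a real tender portal will use its own.)

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_search_url(base, params):
    """Build a GET-style search URL by encoding the query variables,
    the same way Google encodes a search as .../search?q=...
    """
    return base + "?" + urlencode(params)

url = build_search_url(
    "https://example-portal.test/search",          # placeholder portal
    {"q": "road construction tender", "page": 1},  # placeholder params
)
# Round-trip check: the query string decodes back to the original values.
decoded = parse_qs(urlparse(url).query)

# If the portal works like this, MechanicalSoup (or even plain requests)
# can fetch the result page for `url` directly.
```

Checking whether your target site's search URL changes when you change the search terms is usually the quickest way to tell which camp it falls into.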
EDIT: Here is an example of what I have done with MechanicalSoup:
https://github.com/MattGaiser/mindsumo-scraper
It is a program which logs into a website, is pointed to a specific page, scrapes that page as well as the other relevant pages to which it links, and from those scrapings generates a CSV of the challenges I have won, the score I earned, and the link to the image of the challenge (which often has insights).
|
STACK_EXCHANGE
|
Did Atari lobby against FCC regulation change?
In the 1970s, FCC limits on RF emissions applied to 'anything that plugs into a TV', and were stringent and difficult to pass. Atari went to extraordinary lengths regarding this when designing their own computers: Why did the Atari 800 designers choose such a radical system design?
Texas Instruments had a different outcome. https://en.wikipedia.org/wiki/Atari_8-bit_family
In a July 1977 visit with the engineering staff, a Texas Instruments (TI) salesman presented a new possibility in the form of an inexpensive fiber-optic cable with built-in transceivers. During the meeting, Joe Decuir proposed placing an RF modulator on one end, thereby completely isolating any electrical signals so that the computer would have no RF components. This would mean the computer would not have to meet the FCC requirements, yet users could still attach a television simply by plugging it in. His manager, Wade Tuma, later refused the idea saying "The FCC would never let us get away with that stunt." Unknown to Atari, TI used Decuir's idea. As Tuma had predicted, the FCC rejected the design, delaying that machine's release. TI ultimately shipped early machines with a custom television as the testing process dragged on.
In the short term, this was good for Atari; the need for TI to ship a TV-derived monitor (to dodge the 'plugs into a TV' criterion) made their machine less competitive.
In the medium term, TI lobbied for regulation change, and they had quite some influence. A Texan Congressman openly criticized the FCC and put pressure on them to grant an exemption. The FCC stood firm on enforcing the rules as they were, but ended up changing them. The new rules went into effect on January 1, 1981:
The criterion for what would now be called 'class B' rating, changed from 'plugs into a TV' to 'marketed to consumers'.
The allowed level of RF emission was raised by 17 decibels.
For some reason, only the first change is ever talked about, so I don't have a reference for the second; the only source from which I was finally able to obtain the number was a talk by Joe Decuir, in a YouTube video that I cannot find again. But that is a really important change! It made it dramatically easier to build a home computer that could qualify.
This was a disaster for Atari! Suddenly the likes of Commodore, Tandy and TI were shipping home computers that drastically undercut the now overengineered Atari 8-bit machines on price.
(Then the company was hit by an even bigger disaster in the form of the North American videogame crash, and by the time Tramiel got things running again, it was too late to change the outcome of the 8-bit home computer competition, but that's another story.)
This seems straightforward enough so far, accidentally torpedoing a competitor brings gain in the short term, loss in the medium term as the now-desperate competitor successfully lobbies for regulation change. But what I'm wondering is this:
Did Atari not see that coming? The Texans were not exactly quiet about the pressure they were putting on the FCC. And Atari was not a little startup anymore. It was a household name.
Did Atari try lobbying to keep the regulations as they were? If so, why did they fail? Were they not used to that aspect of business? (But Ray Kassar, an experienced executive, was in charge of Atari from 1978 to 1983.) Were their efforts misdirected? Were they simply outweighed? Were there other contributing factors? Or did they just not see the need in time?
The Atari 400 and 800 were already aging in 1981 and it was about time to think about the next generation (The 1200XL and Liz must have already been in a very early design stage at that time). I pretty much doubt that Atari wouldn't have welcomed simplifications in FCC regulations.
@tofro Mmm? That doesn't sound right. The 1200XL wasn't a next-generation machine, it was a belated and flawed attempt at cost reduction of the 800. Which didn't need a next generation yet; the 800 was the technically best home computer in the world until the release of the Commodore 64 in 1982, and even then, the gap in technical capability was not big enough to be decisive.
@rwallace the 1200 we know is the result of cuts to the planned development. The 1200 family was envisioned as way more ambitious, which was still true during that time.
@Raffzahn Okay, I believe you, but then my argument stands: they would have had less need to rush the machine out the door without time for planned development or even debugging, if not for the 800 being undercut on price by competitors.
@rwallace It was an extremely fast pace, so you may really need to date everything in your assumption to validate (or invalidate) the timing mentioned. I won't spend that time, but a rough estimate tells me that it doesn't hold.
@rwallace “the 800 was the technically best home computer in the world until the release of the Commodore 64 in 1982” — citation needed! (I don't know about the USA, but elsewhere there were machines that were arguably better.)
@gidds But before 1982? – Okay actually you have a valid point, the BBC Micro was in the same league as the Atari (each had strengths the other lacked); I had remembered it as 1982, but a quick check on Wikipedia says it squeaked into December 1981.
Found! Only have a Google books link, alas:
https://books.google.ie/books?id=Ty-xx7jNLQ0C&pg=PA1071&redir_esc=y#v=onepage&q&f=false
But that's where the FCC explains their reasoning. Turns out that yes, as expected, Atari did lobby to keep the existing regulations. The FCC decided to disregard their objections to the change, on the reasonable grounds that by that time, they had gathered enough evidence to conclude that the existing regulations were unnecessarily stringent, and the relaxed version would be quite adequate.
|
STACK_EXCHANGE
|
This is my first article. In this article, I will show you a simple Crystal Report creation process with screenshots using Visual Studio 2010. Because a picture is worth a thousand words, I have always believed that an article with screenshots is much better.
Let's start by creating a new website in VS 2010.
Open VS 2010, select Visual C# and ASP.NET Web Site and click OK as shown below.
This action will create a new Web site project. Once we have a Web site project
created, next step is to get database access in the project. That we do using a
DataSet from a database.
Creation of Dataset (xsd) File:
- The following figure shows you the process to create a DataSet file.
- To add a DataSet file, click on Solution Explorer -> Right Click on Project -> click on Add new Item and then it will show you the following screen:
- Enter the DataSet file name and click the OK button.
- It will ask for confirmation to put that file in the App_Code folder. Just click yes and that file will open in the screen as a blank screen.
- Now we will add one blank datatable to that mydataset.xsd.
- Right-click in the area of the file and select Add -> Datatable.
- It will add one DataTable1 to the screen.
- The following Figure 5 shows how to add a datatable to the mydataset.XSD file.
- Now datatable1 is added to XSD file.
- Now we will add a data column to the datatable1 as per figure 6.
- Remember, whatever columns we add here will be shown on the report.
- So add the columns you want to display in your reports one by one here.
- Always remember to give each column the same name and data type as the corresponding database column, otherwise you will get an error for a field and data type mismatch.
- Set the properties for the columns to match the database.
- The following figure will show you how to set the property for the data columns.
- The default data type for all the columns is string.
- To change the data type manually right-click on the datacolumn in the datatable and select property.
- From the property window, select the appropriate datatype from the DataType Dropdown for the selected datacolumn.
- XSD file creation has been done.
- Now we will move on to create the Crystal Reports design.
Creation of Crystal report design:
- Click on the Solution Explorer -> Right click on the project name and select Crystal Reports.
- Name it as you choose and click the add button.
- After clicking on the add button a .rpt file will be added to the solution.
- It will ask you to choose how you want to create the report.
- Click the ok button to proceed.
- Under Data Sources, expand ADO.NET Datasets and select Table and add to the selected table portion located at the right side of the window using the > button. Click on Next.
- Select the columns that you want to show in the report.
- Now click on the Finish button and it will show the next screen.
- Once the report file is added, you can see the Field Explorer on the left side of the screen.
- Expand Database Fields, under that you will be able to find the Datatable that we have created earlier.
- Just expand it and drag each field one by one from the Field Explorer to the .rpt file, under the details section.
- Now the report design part is over.
- Now we have to fetch the data from the database, bind it to the dataset, and then show that dataset in the report viewer.
Crystal report Viewer:
- First, drag a CrystalReportViewer control onto the aspx page from the Reporting section of the toolbox.
- Add a command Button.
- Configure the CrystalReportViewer and create a link with Crystal Reports.
- Select the Crystal Reports source from the right side of the control.
The following is the final code for reports (Default.aspx).
using System;
using System.Data;
using CrystalDecisions.CrystalReports.Engine;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        CrystalReportViewer1.Visible = false;
    }

    protected void cmdcrystal_Click(object sender, EventArgs e)
    {
        CrystalReportViewer1.Visible = true;
        ReportDocument rDoc = new ReportDocument();
        Mydataset dset = new Mydataset();           // the DataSet (xsd) file created earlier
        // Fill dset.DataTable1 from the database here before loading the report.
        rDoc.Load(Server.MapPath("mycrystal.rpt")); // path to your .rpt file
        rDoc.SetDataSource(dset);                   // set the dataset as the report's data source
        CrystalReportViewer1.ReportSource = rDoc;
    }
}
|
OPCFW_CODE
|
I was writing up the page for the Magomatic and started on the improvements section. I realized I started going on and on about a possible brute-forcing function and I decided that it would be better suited for a post instead of putting it on the page. So prepare yourself, as I am about to dump everything I can think of regarding brute forcing magstripe card door locks.
I was thinking that since I can just read card data with a computer, I should be able to read a room number off the card, alter that data to another room number, and put that information on my emulator. This would work in theory, but what if the card contains encrypted data? Then you would not know what represents the room number. What's more, what if the card does not conform to the track 2 standard? They could use their own protocol. Either way, all I will see are 1's and 0's.
Since the encrypted data is still only 1's and 0's, you could potentially try every possible combination of 1's and 0's until you found a combo that worked. By analyzing multiple cards of the same type (door keys, for example), you could potentially see how much data changes from room to room or date to date. You could then lower the number of bits you would have to brute force to just the number of bits that change. This may be impractical, seeing that brute forcing each bit means the total number of possible combinations is equal to 2^n, where n is the number of bits you need to brute force. That means for just 10 bits you would have to try 2^10 = 1024 combinations. It would probably take the magstripe reader about 1.5 – 2 seconds to deny a card. If it took 2 seconds, that means brute forcing 10 bits would take (1024 * 2) / 60 = 34.13 minutes. That might not be worth the time.
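The back-of-the-envelope arithmetic above can be wrapped in a couple of lines of Python (a rough sketch, assuming a flat 2 seconds per rejected swipe):

```python
# Rough feasibility estimate for bit-level brute forcing: 2**n
# combinations at roughly 2 seconds per rejected swipe.
def brute_force_minutes(bits, seconds_per_try=2):
    return (2 ** bits) * seconds_per_try / 60

print(round(brute_force_minutes(10), 2))  # 10 bits -> 34.13 minutes
```

The exponential blowup is obvious from the formula: every extra bit doubles the time, so 20 bits is already over a month of continuous swiping.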
Another option for brute forcing is to brute force each byte, rather than each bit. This will only work if the magnetic stripe key follows valid track protocol and is not encrypted. In this case, you could just read the data on your computer and alter whatever you wanted, however what if you wanted a hand held device to do everything for you automatically? It is rather cumbersome to hook up a reader to a laptop, scan the card, alter the data, program a microcontroller, put the micro in your emulator, and then open a door. It would be much nicer to have a device that could just brute force open any door.
If one of these cards follows valid track 2 format, then you could just brute force every 5 bits (each character is 5 bits in track 2 format) rather than every single bit. However, now you have more possibilities for each character. It's not just either a one or a zero: each character can represent 11 different values (0-9, =). I found this information by consulting this resource. Track 2 also states that there are 37 data characters between the start and end sentinel values. This means that the total possible combinations you would have to brute force would be 11^37 = way too many to be worth the time. In this case, the best thing to do would be to analyze the data on a computer, figure out where the room number and/or expiration date is stored, and then program a microcontroller to try every possible room number while keeping the other data the same. It could then make sure that the date was some date way in the future to ensure it would work. Better than this, you could put an interface on the device to program the exact room number. Using a serial LCD and a few buttons, you could view the data after it is scanned into the device. Then the buttons can be used to alter the data or just punch in the room number.
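Here is a sketch of that targeted approach in Python, with a made-up template and a made-up room-number offset; a real card would have to be analyzed first to find where the room number actually lives:

```python
# Hypothetical sketch: hold every field constant and vary only the
# room-number digits, yielding one candidate track-2 payload per room.
# The offset and length below are invented for illustration.
def room_candidates(template, start, length, max_room):
    for room in range(max_room + 1):
        digits = str(room).zfill(length)
        yield template[:start] + digits + template[start + length:]

template = ";0042111122223333=991200000?"  # fake track-2 data, room digits at offset 1
cands = list(room_candidates(template, 1, 4, 9999))
```

With 4 room digits this is only 10,000 candidates instead of 11^37, which is the whole point of narrowing the search to the field that changes.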
Brute forcing these things seems mostly impractical due to the fact that it would take forever to brute force all the data on a card but in the event of encryption, it may be necessary. If you can narrow down just a few bits that need brute forcing it would be worth it. I’ll have to experiment once I have some data to analyze.
|
OPCFW_CODE
|
UTF vs character types
UTF-8 and UTF-16 are variable length - more than 2 bytes may be used. UTF-32 uses 4 bytes. Unicode and UTF are general concepts, but I wonder how they relate to C/C++ character types. Windows (WinApi) uses a 2-byte wchar_t. How do I handle a UTF-8 character which is longer than two bytes? Even on Linux, where wchar_t is 4 bytes long, I may get UTF-8 characters which require 6 bytes. Please explain how it works.
While UTF-8 is a variable length encoding, remember that the "8" in "UTF-8" is the bit size of the encoding. It's a byte-wise encoding, where a character may occupy one or more bytes. Which fits very well into the usual char width, as it's commonly also 8-bit bytes.
Take care not to confuse a Unicode code point and its representation in a specific encoding. All Unicode code points are in the range 0x0-0x10FFFF, which makes them directly storable as 32-bit numbers (that's what UTF-32 does).
UTF-8 can reach 6 bytes per code point [edit: it's actually 4 in its final version so the space issue is moot, but the rest of the paragraph holds] because it requires some overhead to manage its variable length - that's what permits a lot of other code points to be encoded in only 1 or 2 bytes. But when you're receiving a 6-byte UTF-8 character and you want to store it into Linux's 32-bit wchar_t, you don't store it as-is: you convert it to UTF-32, dropping the overhead. Same story with Windows's 16-bit wchar_t, except you might end up with two 16-bit, UTF-16-encoded halves.
Note: a lot of Windows software is actually using UCS-2, which is essentially UTF-16 without the variable length. These won't be able to handle characters that would have required two UTF-16 wchar_t's.
"UTF-8 can reach 6 bytes per code point" If I remember correctly, Unicode standard only allows up to 4 bytes in UTF-8, where 0x10FFFF should fit.
@user694733 I quoted OP and ran with it... Apparently UTF-8 did have a 6-byte maximum at some point but it doesn't anymore.
It might be worth pointing out that a single glyph like "ã" does not necessarily correspond to a single Unicode codepoint - it might be LATIN SMALL LETTER A (U+0061) and COMBINING TILDE (U+0303). "ã̈" is definitely more than one codepoint.
First of all, the maximum Unicode character (UTF-8, UTF-16 and UTF-32 are encodings of Unicode to bytes) is U+10FFFF, which fits comfortably in a 4-byte wchar_t.
As for the 2 bytes wchar_t, Unicode addressed this problem in UTF-16 by adding in dummy "surrogate" characters in the range U+D800 to U+DFFF.
Quoting an example from the UTF-16 Wikipedia page:
To encode U+10437 to UTF-16:
Subtract 0x10000 from the code point, leaving 0x0437.
For the high surrogate, shift right by 10 (divide by 0x400), then add 0xD800, resulting in 0x0001 + 0xD800 = 0xD801.
For the low surrogate, take the low 10 bits (remainder of dividing by 0x400), then add 0xDC00, resulting in 0x0037 + 0xDC00 = 0xDC37.
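The three steps translate directly into a few lines of Python (just an illustration of the arithmetic, not tied to any particular C/C++ API):

```python
# Surrogate-pair arithmetic from the steps above.
def to_utf16_surrogates(cp):
    assert 0x10000 <= cp <= 0x10FFFF
    v = cp - 0x10000                 # step 1: subtract 0x10000
    high = 0xD800 + (v >> 10)        # step 2: high surrogate from the top 10 bits
    low = 0xDC00 + (v & 0x3FF)       # step 3: low surrogate from the bottom 10 bits
    return high, low

print([hex(u) for u in to_utf16_surrogates(0x10437)])  # ['0xd801', '0xdc37']
```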
For completeness' sake, here is this character encoded in different encodings:
UTF-8: 0xF0 0x90 0x90 0xB7
UTF-16: 0xD801 0xDC37
UTF-32: 0x00010437
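All three listings can be cross-checked with Python's built-in codecs (the big-endian variants are chosen to match the byte order shown above):

```python
# Verify the three encodings of U+10437 listed above.
s = "\U00010437"
assert s.encode("utf-8") == b"\xF0\x90\x90\xB7"
assert s.encode("utf-16-be") == b"\xD8\x01\xDC\x37"
assert s.encode("utf-32-be") == b"\x00\x01\x04\x37"
print("all encodings match")
```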
|
STACK_EXCHANGE
|
As we begin to wrap up the year we also are beginning to wrap up the API getting ready for the “real” pull request for the API code. We are down to one last code review of the final clean up pass before we have it looked at by the core team. I think the code is pretty solid but it will of course have problems that are discovered during the review and the testing. Ah the testing, real world testing that we really need to do. To get there we need to have a test server. Thankfully that’s all taken care of now and we’ve had the first data interactions with a pod.
As I wrote previously, I wanted to run the latest and greatest of the API code on a real production pod that could act as a live development-test system. To do that I stood up a brand new instance by following the instructions on the Diaspora Wiki. Rather than following the instructions to pull the latest of the master branch, I've instead pulled the head of the final cleanup branch of the API in Frank's GitHub fork that we've been working in. After struggling with some Nginx configurations for the HTTPS reverse proxy I was up and running pretty seamlessly. Then it was a question of going through the first app registration and authentication process for the user that I've done hundreds of times over the development. It all worked perfectly, leading to the first ever post to a production pod with the API. You can see the result here (using the web+diaspora link) or here (using a web link). I actually went through an entire cycle of getting my current user's data, querying for their posts, creating a post, and then requerying. I intend to write a more detailed post about that whole process shortly.
If anyone else wants to start testing the API they are free to make an account on the test pod . Please keep in mind:
- This will be running bleeding edge software so will probably have stability issues and bugs you wouldn’t normally see.
- There is no concept of data guarantee. Data can and will be deleted as necessary up to and including blowing away the entire database if warranted. Do not put critical data posts on this pod.
- This pod is not running on a large server; it therefore will not scale to huge numbers of users, posts, etc. It's not meant for that kind of testing.
- People who abuse the pod will be ejected from it.
Overall though I can’t tell you how stoked I am about this. I’ve been seeing posts show up on my local development pods for a while now but seeing a real live server properly interacting in the same way just tickled me a lot. Here’s to another important milestone on getting the Diaspora API out the door!
|
OPCFW_CODE
|
How can I make a scatterplot with two grouping variables (group and subgroup)?
I'd like to make a scatterplot with my dependent variable (scores on test A) on the y-axis, age (continuous) on the x-axis and group (autism vs. control) as grouping variable, but also subgroup (people above 65 years vs. people younger than 65 years) as grouping variable.
This doesn't seem possible.
The reason I think I need two grouping variables is as follows: with only 'group' as grouping variable, my scatter graph illustrates an interaction effect between age and group. This isn't that weird, because when I run a regression with age, group and agexgroup as predictors, I indeed get the significant interaction effect.
But: this is not what I did in my analysis. I compared models, namely a simple model (3 main effects, 1 interaction effect) and a complex model (same variables as the simple model but with 3 extra interaction effects). I did this in SPSS linear regression by using 'blocks'. So block 1 was my simple model, block 2 my complex model.
Neither of my models gives me any significant interaction effects: not for agexgroup, agexsubgroup, groupxsubgroup, or agexgroupxsubgroup.
I want to make a scatterplot which is representative for my data outcomes. The scatterplot with the simple agexgroup interaction is not.
Can anyone please help me?
You are attempting to use age simultaneously in two ways: continuous (recommended) and discontinuous (highly unlikely that there is a jump at age 65 and that |age - 65| is irrelevant on both sides). Keep age continuous for all phases of the analysis, whether modeling or graphing. If you think that the linear age model doesn't fit, allow it to be nonlinear (cubic splines, etc).
Thank you for your comment. My supervisor indeed said that age only as a categorical variable wouldn't be good. But he recommended me to do both. My second supervisor approved my research proposal as well. That's why I don't think I should remove subgroup (age) now, since I'm in the last phase of my master thesis. I did indeed keep age continuous while graphing, but as I said before in my post, my graph now only illustrates the groupxage interaction you get when you run a regression separately.
Can you paste in an example dataset for people to work with? It doesn't have to be your full dataset n=20 is probably fine, & it doesn't have to be your actual data, simulated, similar data is OK.
I think this is very answerable, w/ some example data.
Thank you all for your comments! I found out how to do it: using a line graph instead of a scatterplot, and then plotting age and my predicted values instead of age and dependent variable.
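For anyone wanting to reproduce this outside SPSS, here is a minimal sketch with synthetic, made-up coefficients: compute the model's predicted score across ages for each group × subgroup combination, then draw one line per combination in whatever plotting tool you use.

```python
# Illustrative sketch (hypothetical coefficients standing in for the
# fitted regression): one predicted line per group x subgroup combination.
def predicted(age, group, subgroup):
    return 50 + 0.2 * age - 5 * group + 2 * subgroup + 0.05 * age * group

ages = range(40, 91, 5)
lines = {
    (g_name, s_name): [predicted(a, g, s) for a in ages]
    for g_name, g in [("autism", 1), ("control", 0)]
    for s_name, s in [("65+", 1), ("<65", 0)]
}
# each entry in `lines` is one line to draw against `ages`
```

Plotting predicted values rather than raw scores is exactly what makes the graph match the fitted model instead of suggesting interactions the model didn't find.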
|
STACK_EXCHANGE
|
Conflict with PyQT library
Required Info
Operating System & Version
Win 10
Platform
PC
Language
python
Issue Description
So my problem is that librealsense.py (ver. 2) cannot be imported into a project with PyQt. I open my UI, and then just one simple button press stops the app completely. I didn't call any functions from the librealsense library; it's just a simple:
import pyrealsense2 as rs
Even more frustrating is that it doesn't show any error. It seems similar to https://github.com/IntelRealSense/librealsense/issues/4856, but I don't get what "import locally" means.
Also tried to build the wrapper from a wheel / installing with pip install pyrealsense2; neither worked. Also tried building from source with https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_windows.md , but after building the solution in VS I don't see any files in the python wrapper folder, so I gave up.
Any suggestions on how to tackle this issue will be much appreciated!
This sounds similar to the bug I reported. #6141. Importing the pyrealsense2 library (and not calling anything) causes issues with wxPython and Tkinter on Windows.
@zapaishchykova What's the SDK version when you get the issue? And did you check if python examples in SDK package work normally? Looking forward to your update. Thanks!
Hi! Yes, python samples work just fine, it all happens when such imports are present alongside with librealsence import:
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtMultimediaWidgets import *
from PyQt5.QtMultimedia import *
I am using SDK 2.0 but as far as I understand it is not used while I import the package in Python?
I faced a very similar issue and solved it by adding these lines before any other imports:
import sys
sys.coinit_flags = 2
import pythoncom
Now pyrealsense2 can be imported together with the PyQt5 components and opening further windows from the GUI works, too. @alaricLEA maybe you can find a useful hint for your issue, too.
This would be a minimal example which works with these additional lines, but breaks without them as soon as you try to open the file dialogue:
import sys
sys.coinit_flags = 2
import pythoncom
import pyrealsense2
from PyQt5.QtWidgets import QApplication, QWidget, QPushButton, QFileDialog
from PyQt5.QtCore import pyqtSlot
class App(QWidget):
def __init__(self):
super().__init__()
self.title = 'PyQt5 fail test'
self.left = 10
self.top = 10
self.width = 320
self.height = 200
self.initUI()
def initUI(self):
self.setWindowTitle(self.title)
self.setGeometry(self.left, self.top, self.width, self.height)
button = QPushButton('PyQt5 button', self)
button.move(100, 70)
button.clicked.connect(self.on_click)
self.show()
@pyqtSlot()
def on_click(self):
print('Start function on_click')
filename, _ = QFileDialog.getOpenFileName(self, "QFileDialog.getOpenFileName()", "",
"All Files (*);;Python Files (*.py)")
if filename:
print(filename)
print('End function on_click')
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = App()
sys.exit(app.exec_())
That works! Awesome, Thanks!!
Works for me too. Many thanks!
It works like a charm. Many thanks to @mareikethies !
|
GITHUB_ARCHIVE
|
8.4.2 (in Docker)
Whatever comes in the Elasticsearch Docker image
(java -version doesn't work inside the Docker image)
Whatever's in the Elasticsearch Docker image:
$ uname -a
Linux c8dae8d214e8 5.10.76-linuxkit #1 SMP Mon Nov 8 10:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
We're seeing the following error from queries in Elasticsearch 8.4.2:
totalTermFreq must be at least docFreq, totalTermFreq: 413958, docFreq: 413959
These are queries that worked successfully in Elasticsearch 8.4.1.
The errors are reproducible, but don't affect all queries.
The numbers in the error message seem to be stable.
Start an instance of the Elasticsearch Docker container:
docker run \
  --env xpack.security.enabled=false \
  --env discovery.type=single-node \
  --publish 9200:9200 \
  -it docker.elastic.co/elasticsearch/elasticsearch:8.4.2
(These settings may not all be required, but they're how I usually run the local Docker image, and they avoid having to do the password/CA certs dance. I'm running Docker on macOS, although I don't think it makes a difference.)
Download the following data set from S3: https://wellcomecollection-data-public-delta.s3.eu-west-1.amazonaws.com/elasticsearch-issue-files/works.json.gz (40MB)
This contains ~900k documents with a minimal set of fields to cause this error – if I reduce the number of documents or fields, the error goes away.
(Corpus is CC-BY 4.0 licensed Wellcome Collection, similar to https://developers.wellcomecollection.org/docs/datasets)
Run the attached Python script, which will:
load works.json.gz into the index
The search request returns a list of results.
We get an error from the Elasticsearch Python library:
elasticsearch.BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'totalTermFreq must be at least docFreq, totalTermFreq: 287039, docFreq: 287040')
If I run my Python script against the 8.4.1 Docker image, the error doesn't reproduce.
I found an issue with a similar error message from 7.0.1: https://github.com/elastic/elasticsearch/issues/41934
Notably, the fix for that issue (https://github.com/elastic/elasticsearch/pull/41938) mentions
cross_fields, which we're using in our query.
If I remove any of the fields from the query, the error goes away.
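For context, the query uses a multi_match of this shape (the field names below are placeholders, not our real mapping):

```python
# Hypothetical minimal query of the shape that triggers the error;
# the field names are invented stand-ins for our actual mapping.
query = {
    "multi_match": {
        "query": "some search terms",
        "type": "cross_fields",
        "fields": ["title", "description", "contributors"],
    }
}
```

With cross_fields, term statistics are blended across the listed fields, which is presumably why removing any one field changes the docFreq/totalTermFreq bookkeeping enough to make the error disappear.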
I’m using the Elasticsearch Docker image for the sake of an easy reproduction case, but we're seeing this issue in our managed Elastic Cloud clusters also. We actually see the error in two different clusters.
Our actual documents are quite a bit larger, and the query more complicated. We can share more details if it would be useful, but I figured you'd prefer the minimal version.
The numbers in the error message seem to vary depending on which Elastic node handles the request, but each node returns a consistent set of numbers. e.g. if I run this query in our Elastic Cloud cluster, it gets handled by one of two nodes:
error.failed_shards.node = ZTz1Ump1THGC5lMd9VL3XQ gets the error "totalTermFreq must be at least docFreq, totalTermFreq: 413958, docFreq: 413959"
error.failed_shards.node = qhmqpnpCQhe9KKSKYWQfNw gets the error "totalTermFreq must be at least docFreq, totalTermFreq: 437539, docFreq: 437540"
If I run the query repeatedly, each node returns consistent numbers.
|
OPCFW_CODE
|
Iconic words are known to exhibit an imitative relationship between a word and its referent. Many studies have worked to pinpoint sound-to-meaning correspondences for ideophones from different languages. The correspondence patterns show similarities across languages, but what makes such language-specific correspondences universal, as iconicity claims to be, remains unclear. This could be due to a lack of consensus on how to describe and test the perceptuo-motor affordances that make an iconic word feel imitative to speakers. We created and analyzed a database of 1,888 ideophones across 13 languages, and found that 5 articulatory properties, physiologically accessible to all spoken language users, pattern according to semantic features of ideophones. Our findings pave the way for future research to utilize articulatory properties as a means to test and explain how iconicity is encoded in spoken language.
Chinese Ideophone Database (CHIDEOD) is an open-source dataset coded in a user-friendly format, which collects 3453 unique onomatopoeia and ideophones (mimetics) of Mandarin Chinese, as well as Middle Chinese and Old Chinese (based on Baxter & Sagart 2014). These are analyzed according to a wide range of linguistic features, including phonological, semantic, as well as orthographic ones. CHIDEOD was created on the basis of data collection and analysis conducted by Arthur Thompson in our lab in collaboration with Thomas Van Hoey (then in National Taiwan University). For individual sources and files relevant to the database, please visit https://osf.io/kpwgf/
Iconicity is when linguistic units are perceived as “sounding like what they mean,” so that phonological structure of an iconic word is what begets its meaning through perceived imitation, rather than an arbitrary semantic link. Fundamental examples are onomatopoeia, e.g., a dog's barking: woof woof (English), wou wou (Cantonese), wan wan (Japanese), hau hau (Polish). Systematicity is often conflated with iconicity because it is also a phenomenon whereby a word begets its meaning from phonological structure, albeit through (arbitrary) statistical relationships, as opposed to perceived imitation. One example is gl- (Germanic languages), where speakers can intuit the meaning “light” via knowledge of similar words, e.g., glisten, glint, glow, gleam, glimmer. This conflation of iconicity and systematicity arises from questions like “How can we differentiate or qualify perceived imitation from (arbitrary) statistical relationships?” So far there is no proposal to answer this question. By drawing observations from the visual modality, this paper mediates ambiguity between iconicity and systematicity in spoken language by proposing a methodology which explains how iconicity is achieved through perceptuo-motor analogies derived from oral articulatory gesture. We propose that universal accessibility of articulatory gestures, and human ability to create (perceptuo-motor) analogy, is what in turn makes iconicity universal and thus easily learnable by speakers regardless of language background, as studies have shown. Conversely, our methodology allows one to argue which words are devoid of iconicity, seeing as such words should not be explainable in terms of articulatory gesture. We use ideophones from Chaoyang (Southern Min) to illustrate our methodology. Published here.
Studies on English and Spanish use ratings to identify words speakers consider iconic. Our study replicates this for Japanese but, owing to additional variables, yields more nuanced findings. We propose that ratings reflect a word’s relationship to sensory information rather than iconicity.
In this project, we explore how people interpret imitative quotatives “and then he was like…” in several conditions.
Ideophones are marked words that depict sensory imagery and occur in many languages. It has been found that these words are easier to learn which might be due to their depictive properties. In this project we investigate whether visual cues such as lip rounding and mouth opening help in learning ideophones.
Some languages have more forms of conventional spoken iconicity than others. Japanese, for example, has more ideophones than English. So how do speakers of a language with limited semantic categories of ideophones depict percepts? One possibility is demonstrations: unconventional, yet depictive, discourse. Demonstrations follow quotatives (e.g., I was like ___) and perform referents as opposed to describing them. In English, a language with arguably restricted sets of ideophones, speakers may enact/create demonstrations using their hands, voice, and body. This paper examines which visual and spoken components are vital to comprehending demonstrations in English with features from Güldemann’s (2008) observations: enacted verbal behaviour, non-linguistic vocal imitation, ideophones, and representational gesture. 28 videos containing demonstrations of 11 celebrities engaging in impromptu storytelling on USA talk shows were our critical stimuli. 145 native speakers completed forced multiple-choice judgement tasks to qualify each demonstration. To see which forms of visual and spoken communication contributed to comprehension, videos were presented in visual (muted), audio (pixelated and darkened), and audio–visual (left as is) conditions. Our results show that if arbitrary speech (e.g., I was like I can’t go over the ocean!) is in a demonstration, then it is vital to comprehension. The visual condition rendered these demonstrations uninterpretable. If sound imitations (e.g., I was like prfff!) or ideophones coupled with hand gesture (e.g., I was like yay! + hands opening and closing in unison) are in a demonstration, then the interpretability of that demonstration across our experimental conditions depends on whether its components (gesture, sound imitation) can unambiguously express meaning in isolation. These findings allow us to make several conjectures about the wellformedness of demonstrations. 
Our findings are in line with studies on enactments in deaf signed languages whereby the more unconventional a form of iconic depiction is, the more it requires conventional framing to be interpretable.
|
OPCFW_CODE
|
UMD Computer Science Researchers to Present 14 Papers at Major Robotics and Automation Conference
Robotics and automation experts from across the globe will convene virtually for technical paper presentations and online discussions at the 2020 International Conference on Robotics and Automation (ICRA), which will be held from May 31 to August 31.
ICRA is the IEEE Robotics and Automation Society’s flagship conference and the premier international forum for robotics researchers to present and discuss their work.
UMD Computer Science researchers, some of whom are also faculty members of the Maryland Robotics Center (MRC), will present 14 papers at the conference. UMD MRC faculty and students published a total of 16 papers at IEEE ICRA 2020 this year, tied with the University of Illinois at Urbana-Champaign and the University of Michigan at Ann Arbor among the Big Ten universities.
Assistant Professor Pratap Tokekar will also be organizing an online workshop on the foundations of multi-robot systems, along with researchers from Stony Brook University and the University of Pennsylvania.
More information on the conference can be found here.
Papers at ICRA 2020:
AVOT: Audio-Visual Object Tracking of Multiple Objects for Robotics
Justin Wilson and Ming C. Lin - Paper
DCAD: Decentralized Collision Avoidance with Dynamics Constraints for Agile Quadrotor Swarms
Senthil Hariharan Arul, Dinesh Manocha - Paper
RoadTrack: Realtime Tracking of Road Agents in Dense and Heterogeneous Environments [Video]
Rohan Chandra, Uttaran Bhattacharya, Tanmay Randhavane, Aniket Bera, Dinesh Manocha - Paper
GraphRQI: Classifying Driver Behaviors Using Graph Spectrums [Video]
Rohan Chandra, Uttaran Bhattacharya, Trisha Mittal, Xiaoyu Li, Aniket Bera, Dinesh Manocha - Paper
Learning Resilient Behaviors for Navigation under Uncertainty Environments
Tingxiang Fan, Pinxin Long, Wenxi Liu, Jia Pan, Ruigang Yang, Dinesh Manocha - Paper
Experimental Comparison of Decentralized Task Allocation Algorithms under Imperfect Communication [Video]
Sharan Nayak, Suyash Yeotikar, Estefany Carrillo, Eliot Rudnick-Cohen, Mohamed Khalid M Jaffar, Ruchir Patel, Shapour Azarm, Jeffrey Herrmann, Huan Xu, Michael W. Otte - Paper
Grasping Fragile Objects Using a Stress-Minimization Metric
Zherong Pan, Xifeng Gao, Dinesh Manocha - Paper
Decentralized Task Allocation in Multi-Agent Systems Using a Decentralized Genetic Algorithm [Video]
Ruchir Patel, Eliot Rudnick-Cohen, Shapour Azarm, Michael W. Otte, Huan Xu, Jeffrey Herrmann - Abstract
Reactive Navigation under Non-Parametric Uncertainty through Hilbert Space Embedding of Probabilistic Velocity Obstacles
SriSai Naga Jyotish Poonganam, Bharath Gopalakrishnan, Venkata Seetharama Sai Bhargav Kumar Avula, Arun Kumar Singh, Madhava Krishna, Dinesh Manocha - Paper
EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras [Video]
Nitin Sanket, Chethan Parameshwara, Chahat Singh, Ashwin Varghese Kuruttukulam, Cornelia Fermuller, Davide Scaramuzza, Yiannis Aloimonos - Paper
DenseCAvoid: Real-Time Navigation in Dense Crowds Using Anticipatory Behaviors [Video]
Adarsh Jagan Sathyamoorthy, Jing Liang, Utsav Patel, Tianrui Guan, Rohan Chandra, Dinesh Manocha - Paper
Realtime Simulation of Thin-Shell Deformable Materials Using CNN-Based Mesh Embedding [Video]
Qingyang Tan, Zherong Pan, Lin Gao, Dinesh Manocha - Paper
Sensor Assignment Algorithms to Improve Observability While Tracking Targets [Video]
Lifeng Zhou, Pratap Tokekar - Paper
Distributed Attack-Robust Submodular Maximization for Multi-Robot Planning [Video]
Lifeng Zhou, Vasileios Tzoumas, George J. Pappas, Pratap Tokekar - Paper
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.
|
OPCFW_CODE
|
This is almost a FAQ and asked many times on this site! The short answer is that for a categorical variable ("factor" in R-speak) with $k$ levels, one cannot estimate one "effect" for each of the $k$ levels, since the space generated by the factor has dimension only $k-1$ (that is, $k-1$ degrees of freedom). There are many ways to parametrize the space, but the most usual one is to choose one reference level and measure the effect of each of the other levels by its differential effect as compared to the reference.
When that is done, we can say that the effect of the reference level itself is zero, so its coefficient is zero, with a standard error of zero, as there is no sampling variability in a constant value.
Software should help users by including that in the summary output table, as below, where I take the example used at Values of reference categories for main and interaction effects using lm() in R and edit in three lines for the three reference levels:
Coefficients: (1 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 21.500 3.349 6.421 1.22e-06 ***
as.factor(cyl)4 0 0 NA NA
as.factor(cyl)6 -1.750 4.101 -0.427 0.6734
as.factor(cyl)8 -6.450 3.485 -1.851 0.0766
as.factor(gear)3 0 0 NA NA
as.factor(gear)4 5.425 3.552 1.527 0.1397
as.factor(gear)5 6.700 4.101 1.634 0.1154
as.factor(cyl)4:as.factor(gear)3 0 0 NA NA
as.factor(cyl)6:as.factor(gear)4 -5.425 4.585 -1.183 0.2483
as.factor(cyl)8:as.factor(gear)4 NA NA NA NA
as.factor(cyl)6:as.factor(gear)5 -6.750 5.800 -1.164 0.2559
as.factor(cyl)8:as.factor(gear)5 -6.350 4.833 -1.314 0.2013
The three reference levels in this example are as.factor(cyl)4, as.factor(gear)3 and, for the interaction, as.factor(cyl)4:as.factor(gear)3. The values in the last two columns are NA (Not Available), since a value there is not defined and has no meaning. It does not make sense to test a value that is zero by definition!
Many users’ lives would have been simplified if the report were written this way!
Several other posts on this site treat this question as well.
There is an R package that can (among a lot of other goodies ...) make regression output tables including lines for the reference levels of factors: the package gtsummary, with the function tbl_regression. For some examples, see https://stackoverflow.com/questions/67225238/is-there-a-way-to-change-in-referent-category-in-the-gtsummary-to-ref-or-a
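The reference-level logic itself is easy to illustrate outside R. Here is a minimal Python sketch (the function name dummy_code is hypothetical, written for this note, not part of any package) of why a factor with k levels gets only k - 1 dummy columns, leaving the reference level as the all-zeros row with an implicit coefficient of exactly 0:

```python
def dummy_code(values, reference):
    """One-hot encode a categorical variable, dropping the reference level.

    A factor with k levels gets only k - 1 dummy columns; the
    reference level is the row where every dummy is 0, so its
    implicit regression coefficient is exactly 0.
    """
    levels = sorted(set(values))
    keep = [lvl for lvl in levels if lvl != reference]  # k - 1 columns
    return [[1 if v == lvl else 0 for lvl in keep] for v in values]

cyl = ["4", "6", "8", "4"]
print(dummy_code(cyl, reference="4"))
# -> [[0, 0], [1, 0], [0, 1], [0, 0]]  (rows for level "4" are all zeros)
```

Any fitted coefficient then measures a level's differential effect relative to the reference, which is exactly what the summary table above reports.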
|
OPCFW_CODE
|
Eximiousfiction – Chapter 3130 – Shocking Two Celestial Mansions
Novel: War Sovereign Soaring The Heavens
Chapter 3130 – Shocking Two Celestial Mansions
“Well, since he’s able to enter the middle realm of the Southern Heaven Ancient Realm, he can’t be more than 1,000 years old… In any case, a Ten Directions Celestial Duke who has comprehended six profundities from the law of space before reaching 1,000 years of age is more than qualified to be a prime disciple! It’s impossible for him to be an outer court disciple!”
“Me as well!”
When the Profound Nether Mansion elders received the replies from their counterparts in the Leaving Willow Mansion, they learned that Duan Ling Tian was ranked 93rd on the scoreboard of the middle realm of the Southern Heaven Ancient Realm. Curious, they each made their way to the sites of the Transportation Formations closest to them.
“Not even 100 years old?”
After hearing these elders’ words, silence descended on the three sites of the Transportation Formations in the Profound Nether Mansion’s estate.
“What do you really imply?”
At this moment, the recording was playing on a loop because Tong Qi Shan had not withdrawn his Celestial Origin Energy from the Floating Image Pearl.
EndlessFantasy Translation
There were a few elders who had good relationships with elders from the Profound Nether Mansion. Upon receiving this news, they did not hesitate to send Correspondence Celestial Talismans to their counterparts in the Profound Nether Mansion to enquire about such an astonishing individual.
Someone scoffed. “They’re just pretending to be mysterious. Do they really think we’ll believe them?”
“He’s not even a hundred years old?!”
“Duan Ling Tian? An outer court disciple? He’s a Ten Directions Celestial Duke who has comprehended six profundities from the law of space before reaching the age of 1,000?”
After the observant disciple from the Leaving Willow Mansion finished speaking, the group of elders and disciples from the Leaving Willow Mansion immediately shifted their eyes to the identity token hanging from Duan Ling Tian’s waist. True enough, it was just as the observant disciple had said: the purple-clad young man’s identity token indicated that he was an outer court disciple of the Profound Nether Mansion. The differences between the identity tokens of outer court disciples from grade-six Celestial Mansions were negligible, so they all easily recognized the identity token hanging from the purple-clad young man’s waist.
The crowd’s eyes widened, and they inhaled sharply when they saw Duan Ling Tian beating Hong Ji as though it were as easy as taking a walk in the park. When did outer court disciples in the Profound Nether Mansion become so strong?
“H-hong Ji… Tong… Tong Qi Shan, are you sure you’re not mistaken? He’s not even a century old?” a disciple asked with an expression of disbelief on his face.
At this point, one of the elders who had received news from his counterpart in the Leaving Willow Mansion stepped forward and said, “You’re wrong! Duan Ling Tian didn’t rely on luck to rank in the top 100! A good friend of mine from the Leaving Willow Mansion just told me their inner disciple, Hong Ji, was severely injured by Duan Ling Tian. After Duan Ling Tian injured Hong Ji, Hong Ji surrendered by crushing his Accumulative Point Jade. His friend, Tong Qi Shan, followed suit and surrendered as well… That’s how Duan Ling Tian gained five points in one go.”
“In the Southern Heaven Territory’s history, there have only been a few people who comprehended so many profundities from the law of space before reaching the age of 1,000. Who would’ve thought one would appear in the Profound Nether Mansion?”
After that, the crowd saw Hong Ji flying out, looking severely injured.
Although Tong Qi Shan and Hong Ji had left, the excitement did not die down. The elders and disciples from the Leaving Willow Mansion began to crush their Correspondence Celestial Talismans one after another to spread this news.
|
OPCFW_CODE
|
If you want to know the insides of how to set up a development environment, read on!
There is not much documentation about the NRF51, and the tool-collection hunting and gathering process can be intimidating.
I hope this blog entry will help those that want to use and program the Nordic NRF51 development board to test out BLE functionality.
We are using the NRF51 development board, which was purchased from here.
Connecting to the board
Connection is done by connecting a standard Micro USB cable to your host computer. Once power is supplied to the board, it will run the current program automatically.
Communicating with the board
Flashing the device can be done using the JLinkExe program running on the host computer. JLinkExe can be downloaded from here.
Resetting the board to manufacturer settings
From a terminal window, as the device is connected and turned on, run the following command line:
prompt> JLinkExe -device nrf51822
When the JLink prompt appears, type the following:
J-Link> w4 4001e504 2
J-Link> w4 4001e50c 1
J-Link> r
J-Link> g
J-Link> exit
This will erase all the programs that were loaded.
Programming the device
In order to program the device, you must first set up the following tools:
The Nordic SDK
The SDK can be downloaded from the Nordic website here. For our testing, we used nRF51_SDK_v9.0.0. The SDK contains a binary referred to as “SoftDevice” that supports BLE management of the chip. Please see below on how to load the SoftDevice to the board using JLink.
Compiler and Linker toolchain from GNU
The cross-compiler/linker tools that are needed to build executables for the board can be found here. We placed them under ‘/usr/local’. If you have multiple development environments, it may be easier to set an alias to run the right tools rather than modifying the path. For example:
alias gdb51="/usr/local/gcc-arm-none-eabi-4_9-2015q2/bin/arm-none-eabi-gdb"
alias jdb51="jlinkgdbserver -device nrf51822 -if swd -speed 4000 -noir -port 2331"
Loading a binary to the device
An executable image is created in the form of “.HEX” files that have to be loaded to the board’s flash memory. To load one to the device, open a terminal window and run JLinkExe, this time using the loadfile command:
prompt> JLinkExe -device nrf51822
J-Link> loadfile path-to-binary
J-Link> r
J-Link> g
J-Link> exit
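If you flash often, the interactive prompt can be scripted. Below is a rough Python sketch that writes the same commands shown above into a J-Link commander script and invokes JLinkExe with its -CommanderScript option. It assumes JLinkExe is on your PATH; the helper names are ours, not part of any SDK:

```python
import subprocess
import tempfile

def build_jlink_script(hex_path):
    # The same commands you would type at the J-Link> prompt:
    # load the image, reset (r), run (g), then quit.
    return f"loadfile {hex_path}\nr\ng\nexit\n"

def flash_hex(hex_path, device="nrf51822"):
    """Run JLinkExe non-interactively (assumes JLinkExe is on PATH)."""
    with tempfile.NamedTemporaryFile("w", suffix=".jlink", delete=False) as f:
        f.write(build_jlink_script(hex_path))
        script_path = f.name
    return subprocess.run(
        ["JLinkExe", "-device", device, "-CommanderScript", script_path]
    )

print(build_jlink_script("blink.hex"))
```

This is convenient in a Makefile flash target or CI job, since the commands no longer have to be typed by hand.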
When you program BLE functionality, you will need to load the chip’s firmware in order to support your programs. This is packaged as an executable and is part of the SDK. In order to load the SoftDevice, simply use the loadfile command with the correct path, such as:
J-Link> loadfile SDK_ROOT/components/softdevice/s110/hex/s110_softdevice.hex
J-Link> r
J-Link> g
J-Link> exit
Select a different path if you want to change the version loaded (in this example, it’s S110).
At this stage, you should have a connected board that has a version of the firmware loaded, and the toolchain downloaded, ready for development to begin!
BLE is hard, but blinking the board is easy
Using the toolchain, let’s compile and load the demo blink program that comes with the SDK to make sure we have everything in place for future development.
Making Make make
Here you’ll edit the file named Makefile.posix to point to the correct toolchain for cross-development. The file is found at SDK_ROOT/components/toolchain/gcc/Makefile.posix, where SDK_ROOT is the location where you installed the Nordic SDK files.
Edit this file so it contains the path to where you installed the cross-compiler:
GNU_INSTALL_ROOT := /usr/local/gcc-arm-none-eabi-4_9-2015q2 GNU_VERSION := 4.9.3 GNU_PREFIX := arm-none-eabi
Building the blink example
Navigate to the “blink” example directory
Depending on your board (the one we used was PCA10028), you might need to create a subdirectory within “blinky” by copying the one present, if your model number does not appear there:
cp -r pcaXXXXX pca10028
Edit the Makefile in the pca10028/armgcc directory so it defines BOARD_PCA10028 (the -DBOARD_PCA10028 compiler flag), if it’s not already referenced there.
The path to the makefile is: SDK_ROOT/examples/peripheral/blinky/pca10028/s110/armgcc/Makefile.
Once you have saved the modification, return to the terminal window and invoke make, in the directory where the Makefile is located, to build the image.
Even though the LED program does not need BLE functionality, let’s load the S110 firmware prior to loading our image for illustrative purposes:
prompt> JLinkExe -device nrf51822
JLink> loadfile SDK_ROOT/components/softdevice/s110/hex/s110_softdevice.hex
And now we’ll load our blink example
JLink> loadfile _build/nrf51422_xxac.hex
JLink> r
JLink> g
You should now see the board’s four LEDs blink at a nice rhythm.
Debugging
Download the jlinkgdbserver debugger from here. When run, it will connect to the board via the serial cable and wait for commands coming from the GNU debugger, which was included in the GCC download described previously.
To build with debug symbols, invoke make with the debug goal:
prompt> make clean
prompt> make debug
Run the debugger server in a terminal window or tab:
prompt> jlinkgdbserver -device nrf51822 -if swd -speed 4000 -noir -port 2331
Open another terminal window and run your image from the armgcc subdirectory so that the debugger will be able to load the debug symbols created when building the application:
prompt> gdb51 program-name.out
(gdb) target remote localhost:2331
(gdb) gdb-command-here
This runs the debugger, loading debug symbols which will relay instructions to the JLink server that in turn will relay those to the board.
We made sure that the hardware and the development environment were set up correctly for future application development. In order to take advantage of the hardware’s capabilities, please refer to the documentation of the board and firmware here, which contains essential links to the BLE functionality as well as a demo mesh project.
I’d like to thank Tim Kadom, my friend and colleague at ThoughtWorks, who sparked my interest by introducing me to BLE and mesh applications and was instrumental in helping me set up the environment and getting everything to work.
|
OPCFW_CODE
|
- Home life:
- Originally, I am from The Bronx, NY.
- One brother, one sister. I'm the oldest.
- Public school in the Bronx (PS 122, JHS 45).
- I was fortunate enough to go to and graduate from Brooklyn Technical High
School (class of '93)
- Here's a gratuitous link to the alumni association
- The official BTHS web
- I squeezed in some time writing for the Survey and spending
time after school building stuff for my Mechanical Engineering
- I went to Syracuse
University for my college degree in Computer Engineering (both
grad and undergrad).
- Working stiff:
- Programming in C++ for DataViz. I did some translation
software and applications for 3Com's Palm powered devices, Handspring devices and a ton
of other busy-work. I was inherently miserable there, though, and
was hurried out for being unpopular, different and cynical. But
it's over with, and that's the good part.
- After that dreadful experience was over, I started working at Scholastic doing some real
kick-@ss web stuff. Remember, it's all for the children.
It's been interesting. When you work for a large, large
organization, a lot of decisions get made that you don't quite
understand. You just shrug your shoulders, say "okay
then", and move on. It makes life interesting, if nothing
else. The group I work in, Grolier Interactive is great. We get a
ton of work done, and still manage to have a good time working and
even have a life outside of work. Nothing wrong with that.
- The rest of the story:
- So what the heck do I do in my spare time?
- Listening to my music
- Picking up drawing
- Some occasional reading
- Hanging out in Milford,
CT, as well as Bridgeport, CT - my new home, and places near
- Oh yeah, I have a wife too. So there.
- My Sports teams:
- Casual observer of the Jets and Yankees.
- More, more, more!
- I have a love for Skittles that is
unbounded. I'm sure there's a twelve-step program for it.
I'd prefer to stay away from it.
- On a side note, here is my good friend Rick's home page.
- In May 2002, Rayshonda and I went to the Bahamas.
- If you're feeling spiffy, you can
Buy me stuff from Amazon.
I'll give you big props if you do. Promise.
- You can AIM me at trexatplay (home) and trexatwk (work). I have
a MSN messenger account, but don't really use it.
|
OPCFW_CODE
|
Robin AI is an exciting, venture-backed technology startup whose mission is to dramatically reduce the cost and complexity of legal advice. Backed by top London based VCs, Forward Partners and Episode 1, Robin uses a combination of human and artificial intelligence to review and edit contracts and is building software to fundamentally change the legal industry forever.
Robin was established in March 2019 by CEO Richard Robinson (a former City lawyer) and CTO James Clough (a theoretical physicist who works in Machine Learning). They have since assembled a talented and diverse team of legal, software and machine learning engineers to build the next generation of contracting tools.
At Robin AI, we create groundbreaking products that are transforming legal teams. Our work is multi-disciplinary and brings together the best talent in engineering, AI and the law. Our current product uses neural networks to read and comprehend written contracts, assesses whether they adhere to our customers’ preferred positions, and then automatically amends them. But our ambitions go way beyond reviewing contracts – we want to build technology that changes the way companies all over the world do deals.
To do that, we need to build. Specifically, we want to build a new, structured way for businesses to make and agree contracts by replacing the outdated Microsoft Word workflow with a browser-based, visual contract building application that will empower anyone to create a world class contract without a lawyer. Just like Figma has democratised design tools, and WebFlow has democratised website development, we want to give business owners, HR teams, procurement teams and in-house lawyers all over the world the ability to create and negotiate any business contract by combining legal, design, software and machine learning expertise.
How do we hire?
We aim to keep things simple with our hiring process. There are two stages:
1. Informal chat and technical screening
We will set up a call for you to have a chance to speak to our lead frontend engineer, to find out a little more about Robin, our technology and culture. We will also be interested to find out a little about you, your experience and your aspirations.
2. Technical and cultural interview
We will set up an interview to explore your technical skills and cultural fit. The technical task involves building an application to solve a problem representative of those that we solve at Robin AI. After the technical task we will follow up with some general technical and cultural questions to learn more about you. This is also an opportunity for you to ask as many questions as you’d like.
We are deliberate about the culture we are building, and are committed to creating an inclusive environment that attracts the best talent from diverse backgrounds. Whether it’s helping curate the Robin Spotify playlist, discussing gender equality in tech, or taking a meditation break with our holistic health plan, we want you to find your space within the Robin community.
From our offices in Devonshire Square, you’ll have access to all of the WeWork amenities including co-working space, bike lockers, pool tables, table tennis, free barista-made coffee and free beer in the evenings.
We want to break diversity, inclusion and equality barriers within the tech industry. In building our team, we welcome the unique contributions that you can bring, and welcome applications regardless of ethnicity, religion, sexual orientation, gender identity and expression, disability, education, opinions, nation of origin, languages spoken, age, pregnancy and maternity, or veteran’s status.
|
OPCFW_CODE
|
Here is the short version: the web is the new playground; “web 2.0” has serious issues; threat models are becoming sophisticated enough that they’re actually hard to follow in some cases; we’re losing the fight against malware; and security products themselves are increasingly targeted and used to compromise the networks they are supposed to protect.
These talks were my favorites, organized by broad subject:
Belani and Jones gave a real-world incident talk containing three very interesting case studies, including a case where non-public information, including credit card transaction data, was compromised. This case was never solved and may have been a wireless penetration or possibly an inside job.
Web Based Threats
Bolzoni and Zambon demonstrated an anomaly-based intrusion detection system for web applications; this is an idea I had found myself advocating earlier in the year.
Grossman & Hansen continued last year’s foray into browser hijacking techniques.
Gutesman, Futoransky and Waissbein presented a novel method of securing web applications.
Brad Hill gave an in-depth presentation on message oriented security as implemented in XML and WS-Security and the state of the art in XML attacks.
Hoffman and Terrill discussed the state of the art in web based worms and possible future directions such worms might take in order to become more virulent.
Dan Kaminsky gave a must-read multi-subject tour of his very interesting research in his usual style; this year focusing on vulnerabilities in the web application space.
Sullivan and Hoffman discussed Ajax design flaws and threat models.
Butler and Kendall discussed the use of DLL injection by malware to avoid detection, and demonstrated kernel-mode injection techniques in the win32 space as well as memory-analysis forensic detection techniques.
Nick Harbour of Mandiant discussed anti-forensics techniques used by malware and presented a UNIX equivalent to Nebbett’s Shuttle method of launching a win32 executable from a memory buffer.
Mikko Hypponen presented the state of the art in cell phone malware.
Wysopal and Eng discussed the state of the art in insertion and detection of backdoors.
Mark Yason discussed the state of the art in malware anti-reversing techniques.
Jim Hoaglund from Symantec presented the results of an in-depth analysis of the Vista network attack surface. He also discussed security implications of Teredo, an IPv6-over-IPv4 tunneling protocol which gives a Vista host a globally routable IPv6 address. No practical inspection mechanism exists for Teredo traffic; the security implications are significant, as any malicious traffic using the Teredo protocol may well go undetected.
David Litchfield presented a case study in Oracle database forensic analysis.
Palmer, Newsham and Stamos discussed weaknesses and evasion techniques in commercial forensics tools.
Mike Perry presented some interesting threat models and defensive techniques for users of the tor network.
Roecher and Thumann discussed the Cisco NAC protocol.
Rutkowska and Tereshkin continued last year’s discussion of Vista kernel compromise and virtualization based malware including a bluepill implementation supporting nested virtualization.
Bruce Schneier gave a keynote talk called “The Psychology of Security” – a must read examination of how the human mind thinks about risk.
Thermos presented some new VoIP protocol attacks.
|
OPCFW_CODE
|
- Why is C platform dependent?
- Is Python platform independent like Java?
- Can we create an app using C++?
- Can you use C++ for Android Apps?
- What is platform dependent in C?
- Is Python older than Java?
- Can I create an app with C++?
- What does platform dependent mean?
- Which programming language came first?
- What is dependent language?
- What language is Python like?
- Which programming language is platform dependent?
- Is C++ multi platform?
- Is Python platform dependent?
- What are the 4 types of programming language?
Why is C platform dependent?
The C compiler is platform dependent, since it is closely linked to the OS kernel, which is different for different operating systems.
But over the years, most OSs have come with pre-installed compilers and libraries that make basic programming fairly platform independent at the source level.
The facility of running the same compiled program on any platform, however, is not available with C.
Is Python platform independent like Java?
Platform independent: Python is a platform-independent programming language. It means Python can run equally easily on a variety of platforms like Windows, Linux and Unix, and many more. So, there is no need to write a separate code for each Operating System, as the same code can run on multiple platforms.
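A small Python sketch of what that independence looks like in practice: the same source compiles to a platform-independent code object (bytecode), while only the interpreter knows which OS it is running on:

```python
import platform

# The same Python source text runs unchanged on Windows, Linux, or
# macOS; the platform-specific details live in the interpreter.
code = compile("result = 2 + 3", "<demo>", "exec")  # bytecode, not machine code
ns = {}
exec(code, ns)
print(ns["result"])         # 5, on every platform
print(type(code).__name__)  # 'code' -- a platform-independent code object
print(platform.system())    # e.g. 'Linux', 'Windows' or 'Darwin'
```

Only the last line's output varies by machine; the program itself never changes.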
Can we create an app using C++?
You can build native C++ apps for iOS, Android, and Windows devices by using the cross-platform tools available in Visual Studio. Mobile development with C++ is a workload available in the Visual Studio installer. … However, all platforms support writing code in C++.
Can you use C++ for Android Apps?
The Android Native Development Kit (NDK): a toolset that allows you to use C and C++ code with Android, and provides platform libraries that allow you to manage native activities and access physical device components, such as sensors and touch input.
What is platform dependent in C?
In the case of C or C++ (languages that are not platform independent), the compiler generates an .exe file which is OS dependent. When we try to run this .exe file on another OS, it does not run, since it is OS dependent and hence not compatible with the other OS. Java is platform independent, but the JVM is platform dependent.
Is Python older than Java?
Java is an object-oriented language with a C/C++-like syntax that is familiar to many programmers. It is dynamically linked, allowing new code to be downloaded and run, but not dynamically typed. Python is the older of the two languages, first released in 1991 by its inventor, Guido van Rossum.
Can I create an app with C++?
Mobile Development with C++ | Windows UWP, Android and iOS Create native C++ apps for iOS, Android, and Windows devices with Visual Studio.
What does platform dependent mean?
Platform dependent typically refers to applications that run under only one operating system in one series of computers (one operating environment); for example, Windows running on x86 hardware or Solaris running on SPARC hardware. … Applications written in Java are a prime example.
Which programming language came first?
In 1957, the first of the major languages appeared in the form of FORTRAN. Its name stands for FORmula TRANslating system. The language was designed at IBM for scientific computing. The components were very simple, and provided the programmer with low-level access to the computer’s innards.
What is dependent language?
Platform dependent means a program is affected by the system software; it can’t be run on other systems. Platform independent means the program is first converted to an intermediate form, byte code (which is platform independent), which is then executed by a platform-specific runtime.
What language is Python like?
Python is an interpreted, high-level and general-purpose programming language. Python’s design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
Which programming language is platform dependent?
The C compiler is platform dependent, since it is closely linked to the OS kernel, which is different for different operating systems. But over the years, most OSs have come with pre-installed compilers and libraries that make basic programming fairly platform independent at the source level.
Is C++ multi platform?
C++ is cross-platform. You can use it to build applications that will run on many different operating systems. … It makes it easy to switch between different setups and target platforms. It provides support for building, running and deploying C++ applications not only for desktop environments but also for mobile devices.
Is Python platform dependent?
Like Java programs, Python programs are also platform independent. Once we write a Python program, it can run on any platform without rewriting once again. Python uses PVM to convert python code to machine understandable code.
What are the 4 types of programming language?
The different types of programming languages are discussed below:
- Procedural programming languages
- Functional programming languages
- Object-oriented programming languages
- Scripting programming languages
- Logic programming languages
Common examples include C++, C, and Pascal.
|
OPCFW_CODE
|
what is an infinite loop, how it works?
An infinite loop in any programming language is a loop which never ends: its terminating condition has not been set, cannot occur, or causes the loop to restart. goto is a classic example, since goto can keep restarting a particular block forever, but you can stop the program whenever you want with CTRL+C (in Python too).
hope it helps.
An infinite loop is an endless loop: the loop goes on because its terminating condition is never met. The usual way to create an infinite loop in Python is with a while statement.
When a block of statements needs to be executed several times, a loop is used so that we don't need to write them again and again. A loop is entered when a condition is satisfied, but there must be a terminating condition; otherwise we get an infinite loop, in which the program never terminates because the condition is always satisfied.
In Python, if we write while(True) the loop runs forever, because the syntax of while is
syntax: while(condition): statements
The while loop runs as long as the condition is true. Since we gave True as the condition, it is satisfied forever; unless we replace it with some condition that eventually becomes false, the loop runs infinitely.
check for more information:https://intellipaat.com/blog/tutorial/python-tutorial/python-while-loops/
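As a small illustration (plain Python, not taken from the linked tutorial), a `while True` loop can still be ended with an explicit `break`:

```python
# A deliberate "infinite" loop: the condition is always True,
# so only the explicit break can end it.
count = 0
while True:
    count += 1
    if count == 5:
        break
print(count)  # prints 5
```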
An infinite loop is a loop which never ends. A loop becomes infinite if its condition never becomes FALSE. We can create an infinite loop using a while statement: if the condition of the while loop is always True, we get an infinite loop.
Even infinite loops have many applications.
Ex: the digital clock in electronic devices such as computers and mobile phones runs in one.
2) There are a few situations when this is desired behavior. For example, the games on cartridge-based game consoles typically have no exit condition in their main loop, as there is no operating system for the program to exit to; the loop runs until the console is powered off.
An infinite loop is a loop which loops endlessly, either due to no terminating condition, or no increment condition.
Check this link to wiki: https://en.wikipedia.org/wiki/Infinite_loop
Any loop which never ends is known as an infinite loop. If you run such code, the output keeps printing forever; it is never-ending.
An infinite loop is something that continues until its condition is satisfied. For example, if you have trouble understanding a concept in the course, you read it again and again until you thoroughly understand it.
Hope now you understand what an infinite loop is.
An infinite loop is one whose condition never becomes false.
An infinite loop (or endless loop ) is a sequence of instructions that, as written, will continue endlessly, unless an external intervention occurs (“pull the plug”).
int i = 1;
while (i <= 10) { /* i is never updated, so the condition stays true */ }
The above is an example of an infinite loop: since the value of i never changes, the loop runs forever.
- An infinite loop is an instruction sequence that loops endlessly when a terminating condition has not been set, cannot occur, and/or causes the loop to restart before it ends.
- An infinite loop is also known as an endless loop.
An infinite loop in any programming language is a type of loop which never ends, i.e. it executes continuously. To end this type of loop we have to set a condition in the loop, and by checking that condition the loop terminates. An infinite loop unnecessarily increases the running time of our program and decreases its efficiency.
An infinite loop generally occurs when we forget a terminating update inside the loop. For example, if the loop checks "i < 3", "i" is initially 1, and we don't update "i" inside the loop, an infinite loop arises. Adding the update (i += 1) for the variable i inside the loop lets it finish; otherwise it results in an infinite loop.
So basically an infinite loop is a sequence of instructions that will continue endlessly unless an external intervention occurs.
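A minimal Python sketch of that fix:

```python
# Without the update line, "i < 3" would stay true forever (infinite loop).
results = []
i = 1
while i < 3:
    results.append(i)
    i += 1  # the update that lets the loop terminate
print(results)  # prints [1, 2]
```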
An infinite loop is a sequence of instructions which will continue endlessly unless an external intervention occurs.
An infinite loop (sometimes called an endless loop ) is a piece of coding that lacks a functional exit so that it repeats indefinitely. … Usually, an infinite loop results from a programming error - for example , where the conditions for exit are incorrectly written.
An infinite loop is an instruction sequence that loops endlessly when a terminating condition has not been set, cannot occur, and/or causes the loop to restart before it ends.
An infinite loop is also known as an endless loop.
Infinite loop is a loop that remains in execution endlessly
An infinite loop is a loop which never gets terminated unless you shut down your program or stop execution of the code.
Hello @muxaforhussain. An infinite loop can be thought of as a recurring task which repeats endlessly. In technical terms, it is a loop that runs endlessly until a certain condition is met.
|
OPCFW_CODE
|
cannot delete a file that does exist
I started copying a (large) file inside a python script with copyfile from shutil and I had to interrupt the transfer. Now I notice that I cannot delete the file
The file that I am trying to delete is shown below (i.e. it appears to exist)
ngs@bngs05b:/path/to/dir/210305_M05113_0148_000000000-J6HHR/Data> ll
total 1
-rwxrwx--- 1 user lgen 99542099 6. Mai 11:42 LHLA-MS5387-MJ-S-10_S20_L001_R2_001.fastq.gz
If I then try to remove it with rm
ngs@bngs05b:/path/to/dir/210305_M05113_0148_000000000-J6HHR/Data> rm LHLA-MS5387-MJ-S-10_S20_L001_R2_001.fastq.gz
rm: cannot remove 'LHLA-MS5387-MJ-S-10_S20_L001_R2_001.fastq.gz': No such file or directory
which is a very strange behaviour in my opinion.
I tried a few solutions as described here (e.g. ls --escape, ls -1b), but none of them work.
I also tried to see if that file was opened somewhere, though the output of lsof +D does not output anything useful I guess:
ngs@bngs05b:/path/to/dir/210305_M05113_0148_000000000-J6HHR/Data> lsof +D .
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 44509 ngs cwd DIR 0,37 0<PHONE_NUMBER>479423221 .
lsof 45768 ngs cwd DIR 0,37 0<PHONE_NUMBER>479423221 .
lsof 45769 ngs cwd DIR 0,37 0<PHONE_NUMBER>479423221 .
I first thought that the problem had something to do with the copied file being corrupt/incomplete, though I don't think this is the case: I checked that the size of the file in the original location is the same as that of the file I am trying to delete.
Does anyone know how can I get rid of this file?
It may be caused by a corrupt process! Did you try rebooting the system once and then removing the file?
The directory is only mounted from a server, so I guess rebooting it is a last resort.
does ls LHLA-MS5387-MJ-S-10_S20_L001_R2_001.fastq.gz give an error? and what does ls LH show when you use {tab} to complete the file? any extra characters?
The filename might have some strange characters which you cannot see in the terminal.
If this is the only file in the data directory I would remove the data directory and take the file with it,
rm -rf data
otherwise try using a gui file manager to remove it or see if you can remove it with a wildcard pattern, use ls with the pattern first so that you ensure you remove the target :)
ls LHRA*.gz
if that lists your file then
rm LHRA*.gz
If it was in use by another process the message would be different.
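One more hedged option, not from the thread: if the name really did contain invisible characters, the file could be removed by inode instead of by name. A self-contained demo of the idea:

```shell
# Demo: delete a file whose name contains an invisible carriage return, by inode.
tmpdir=$(mktemp -d)
printf '' > "$tmpdir/file$(printf '\r').txt"  # name contains a hidden \r
inode=$(ls -i "$tmpdir" | awk '{print $1}')   # note its inode number
find "$tmpdir" -inum "$inode" -delete         # remove it without typing the name
ls "$tmpdir" | wc -l                          # prints 0: directory is empty again
```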
It could have been that the file has weird characters which the terminal does not show, but that's not the case here. Unfortunately I had already tried all of these commands and none of them work (with the correct directory/file names: Data and LHLA).
Sorry for the typos! What message was returned from the rm -rf Data command?
I get rm: cannot remove 'Data/': Directory not empty
|
STACK_EXCHANGE
|
As with the original PyGamer Thermal Camera, this portable thermal camera project combines an AMG8833 IR Thermal Camera FeatherWing with a PyGamer. The upgraded CircuitPython code used in this version increases the camera resolution from 64 pixels (8 x 8) to 225 pixels (15 x 15) and deepens the color depth from 8 colors to 100 colors, all without hardware modifications.
The new code improves the camera's ability to visualize thermal images to help discern heating and air conditioning ventilation issues, to evaluate the quality of your home's insulation, and to avoid the sleeping cat when heading to the kitchen in the middle of the night.
Increasing the display's resolution required changes to the original camera's code to maintain a useful image frame display rate. As a result, performance monitoring was built-in to the new CircuitPython code as a series of time markers with a summary performance report printed to the serial port at the end of each frame update. See the section on Performance Monitoring for more information.
The camera's thermal image can be frozen or focused at the touch of a button. The focus feature fine-tunes the display's temperature range to match the current image's minimum and maximum measurements, improving the detail of the image. To get a statistical view of an object's heat, switch to histogram mode. A settable alarm flashes lights and beeps when the camera sees a temperature at or above the threshold. The setup function is used to set the temperature display range and the alarm threshold. An editable configuration file contains the camera's power-up settings for the default temperature range and camera sensor direction.
The camera's thermal imaging sensor is an 8 by 8 thermopile array that reads temperatures from 32°F to 176°F (0°C to 80°C) with an absolute accuracy of ±4.5°F (±2.5°C) and resolution of 0.9°F (0.5°C). To improve object recognition, the camera software algorithmically enlarges the number of imaged elements from 64 to 225 by calculating the in-between values using a technique called bilinear interpolation. See the guide section 1-2-3s of Bilinear Interpolation for more detail about the technique.
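The project's actual code uses CircuitPython's ulab library, but the idea can be sketched in plain Python (an illustrative sketch, not the project's implementation): expand an n × n grid to (2n−1) × (2n−1) by averaging neighbors, which is how 8 × 8 becomes 15 × 15.

```python
# Illustrative sketch of bilinear upscaling from n x n to (2n-1) x (2n-1):
# original values keep their positions; in-between cells average their neighbors.
def interpolate(grid):
    n = len(grid)
    m = 2 * n - 1
    out = [[0.0] * m for _ in range(m)]
    for r in range(n):                      # copy the original sensor values
        for c in range(n):
            out[2 * r][2 * c] = float(grid[r][c])
    for r in range(0, m, 2):                # fill in-between columns on even rows
        for c in range(1, m, 2):
            out[r][c] = (out[r][c - 1] + out[r][c + 1]) / 2
    for r in range(1, m, 2):                # fill odd rows from the rows above/below
        for c in range(m):
            out[r][c] = (out[r - 1][c] + out[r + 1][c]) / 2
    return out

print(interpolate([[0, 2], [4, 6]]))
# prints [[0.0, 1.0, 2.0], [2.0, 3.0, 4.0], [4.0, 5.0, 6.0]]
```

An 8 × 8 input produces a 15 × 15 output, matching the resolution increase the guide describes.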
Temperatures are represented in the displayed image as colors in a spectrum, ranging from a cold blue to white-hot. The color spectrum is based on a frequently-used palette similar to the range of colors seen when an iron bar is heated -- a technique a blacksmith might use to gauge the malleability of metal.
The camera's numeric temperature values are displayed as degrees Fahrenheit. Converting the values to Celsius is possible but is left as an exercise.
The PyGamer Thermal Camera's custom cover skin was produced by a commercial on-line sticker printing service using the image file below.
Other than the AMG8833 Thermal Camera FeatherWing, the following kit contains the PyGamer parts for this project including a nifty carrying case.
Thank you to Adam McCombs for the highly detailed optical and electron microscope photographs of the de-capped AMG8833 sensor. It's fascinating to see how it operates under the covers.
Special thanks to David Glaude and Zoltán Vörös for the ulab-based bilinear interpolation helper. Array calculations using CircuitPython's integral ulab (micro lab) library are amazingly fast and efficient!
For more information about ulab, check out Jeff Epler's ulab: Crunch Numbers Fast in CircuitPython learning guide.
|
OPCFW_CODE
|
Obtaining Database Information in CUBRID
This guide explains how to retrieve database information from CUBRID: table, index, and column names, whether certain columns are indexed, which column is the primary key, and so on. In short, all this information can be retrieved from CUBRID's system tables. Every database in CUBRID has a table for users, another table listing the tables in the database, another listing the columns of all tables, etc.
The list of database users is stored in a system table called db_user. Follow this link to see the manual where the table columns are explained.
To get the list of users for a database being used, execute the following query.
SELECT name FROM db_user;
You will see the results similar to the following.
Also to retrieve the user name that is currently logged in to the database, one can retrieve CURRENT_USER or USER which can be used interchangeably.
The result will show the current user name.
The list of database tables is stored in a system table called db_class. In CUBRID, whenever you meet the term class, you can treat it as a table. There are two types of tables in CUBRID:
- System Tables
These are the tables used primarily by the CUBRID server itself to keep track of various metadata of the database. Normally, ordinary database users do not have access to system tables, but users with administrative privileges do. Tables whose names start with an underscore, like _db_class, are accessible by database administrators only. The system tables without an underscore, like db_class, can be accessed by non-admin users.
- User Tables
These are the tables which the database users have created themselves. Normally they are used by applications to store application data, so all users with granted rights can access them. As mentioned above, normal users can also access some of the system tables.
Thus, to get a list of all tables a user has access to (this includes both system and user defined), one must execute the following query.
SELECT class_name FROM db_class;
To get the list of only user defined tables, i.e. those explicitly created by the user, one should filter out by is_system_class column of the db_class table as shown below.
SELECT class_name FROM db_class WHERE is_system_class = 'NO';
For other possible attributes users can use, see the db_class documentation.
Table Columns List
The list of columns defined for database tables is stored in a system table called db_attribute. In CUBRID, whenever you meet the term attribute, you can treat it as a column. There is also a _db_attribute table, but as mentioned above, it is accessible only by administrators.
Thus, to get a list of all columns created in a particular table correctly ordered, one must execute the following query.
SELECT * FROM db_attribute WHERE class_name = 'game' ORDER BY def_order;
For other possible attributes, see the db_attribute documentation.
The list of indexes created in the database is stored in a system table called db_index. There is also a _db_index table, but as mentioned above, it is accessible only by administrators, and in practice you do not want to mess with it, though the two have a similar structure.
Thus, to get a list of all indexes created for all tables in the current database, one must execute the following query.
SELECT * FROM db_index;
The result will include indexes defined for both system table and user created tables. You can also see if a particular index is a UNIQUE, a PK, or an FK.
If you want to see indexes for only those tables which were created by you, then you need to join the db_index and db_class tables, just as you would in a normal SQL query.
SELECT di.* FROM db_index as di LEFT JOIN db_class as dc ON di.class_name = dc.class_name WHERE is_system_class = 'NO';
To learn more about other index attributes, see the db_index documentation.
Index Keys List
In the examples above we retrieved the list of tables the database has, then filtered it down to user-defined tables. However, at this point we cannot tell which column a particular index was created on. This means we need to retrieve information about index keys. There is a table for this, named db_index_key.
SELECT * FROM db_index_key;
If you see the documentation, you can understand different attributes that an index can have in CUBRID.
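For example, joining db_index_key with db_class shows which columns each user-table index covers. The column names below follow the db_index_key documentation; treat them as assumptions if your CUBRID version differs.

```sql
SELECT k.class_name, k.index_name, k.key_attr_name, k.key_order
FROM db_index_key AS k
LEFT JOIN db_class AS c ON k.class_name = c.class_name
WHERE c.is_system_class = 'NO'
ORDER BY k.class_name, k.index_name, k.key_order;
```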
There is much other information we can retrieve about a database in CUBRID. Later, we will update this article and add some more examples.
This article has been written by one of the CUBRID core developers to help users improve their application performance by understanding h...
|
OPCFW_CODE
|
[From Rick Marken (930511.1000)]
Oded Maler (930510.1000 ET)--
First, to make it explicit, I'm not a life/behavioral scientist, and
when I criticize your extrapolations, I'm not doing it in the name of
an alternative *scientific* generative theory. It is based on many
out-of-duty thoughts and observations about political, cultural and
historical matters, and on a better (I think) intuition about the
complexity of prediction, that is, how things scale-up as the number
of variables grows.
That's what I thought. I like that kind of candor.
I do have a problem with your description of a controlled perceptual
variable. Your description doesn't seem to match my conception of
a controlled variable. Perhaps there is a language problem. You say:
Take a perception of muscle tension and
your perception of, say, the correctness of US economic politics. The
former has at least some hypothetical relation with an electrode one
can put in the appropriate place in your body. Can you suggest a
similar CEV whose measurement will have some relationship with your
higher-level perceptual variable? Can you poke some millions
electrodes in the hearts, stomachs and pockets of (even a sample of)
the American population and define a CEV as a function of those?
Could you find a reasonable correlation between this hypothetical
signal and your percept?
In PCT, a controlled perceptual variable, p, is a neural current whose
magnitude varies as a function of sensory input variables, v. In
general, p = f (v1, v2,...vn). The function, f, as well as the sensory
source of the vi (retina, basilar membrane, muscles) determines "what"
is perceived. The magnitude of p is just the degree to which the per-
ception is "present", via f(), in the prevailing sensory input. The
actual environmental correlates of the sensory inputs correspond to
properties of the environment according to the current model of physics.
According to PCT (neurophysiologists take heed), when you record the
spike rate from an afferent neuron you are measuring p. This measure of
p is a measure of the degree to which the sensory inputs "satisfy"
f(). If f() is just a summation then spike rate is proportional to,
say, p = a1v1+a2v2. So increases in p are directly proportional to
variations in v1 and v2. If p = a1v1-a2v2 then increases in p are the
result of increases in v1 OR decreases in v2. In the first case, p
just measures the size of the sum of v1 and v2; in the second case, p
measures the size of the difference. But in both cases p just tells you
about the result of the functional combination of v1 and v2 -- not about
v1 or v2 individually. p is a perception of "sum" in one case and
"difference" in the other. In both cases p is just a varying rate of
neural impulses. The "meaning" of p depends on f(). In the first case
p "means" sum; in the second, difference. When a control system sets
a reference for p it doesn't "know" whether it is controlling a sum or
a difference. If the reference is set at 10, then p will be brought
to 10 (approx.) impulses/sec. But in the first case, what is "really"
happening is that the system is controlling a sum of sensory values;
in the second it's controlling a difference.
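A tiny sketch (mine, not Marken's) of the point that the same p can "mean" a sum or a difference depending on f():

```python
# Two input functions f(): same reference value, different "meanings".
def p_sum(v1, v2, a1=1.0, a2=1.0):
    return a1 * v1 + a2 * v2   # p "means" the sum of the inputs

def p_diff(v1, v2, a1=1.0, a2=1.0):
    return a1 * v1 - a2 * v2   # p "means" the difference of the inputs

# Both perceptions sit at the reference value 10,
# though the underlying sensory inputs differ:
print(p_sum(4, 6))    # prints 10.0
print(p_diff(16, 6))  # prints 10.0
```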
Similarly, p could be the measure of tension in a muscle or of some
aspect of US economic policy (say, the "strictness" of the fed's
money supply policies). In the first case, p is a perception of
"muscle tension" because it is a function of a sensor located in a
muscle spindle. What defines p, in this case, as a muscle tension per-
ception is not f() per se but the source of the sensory input, v. In
the second case p is a perception of "monetary strictness" because of
the VERY complex f() that computes it from sensory (and lower level
perceptual, p) inputs AND because of the source of the inputs to f().
It is entirely plausible, according to PCT, that one could poke an
electrode into a cortical level afferent neuron and record changes
in firing rate (p) that depend on (what the experimenter also perceives
as) variations in "monetary strictness". This is all a higher order
perception is -- a neural signal whose variations in magnitude indicate
the degree to which a particular sensory experience exists in the
pattern of sensory inputs. Subjectively, p is an experience to which we
might give the name "muscle tension" or "monetary strictness".
I think there is quite a bit of neurophysiological work that is consistent
with this perceptual hypothesis. Neurons have been discovered whose rates
of firing vary depending on 1) the proximity of lines to a particular
retinal region 2) the orientation of lines on the retina 3) the direction
of movement of lines on the retina 4) the binocular disparity of lines
on the two retinae. Things may work differently when you are dealing
with higher order perceptions -- maybe the value of p is carried by the
state of networks of neurons. But I think it is a fairly strong (neuro-
physiological) prediction of PCT that, if a variable can be controlled,
then it must be represented neurally as a unidimensional magnitude, p.
NB. Ken Hacker -- A falsifiable prediction.
CHUCK "THE TANK" TUCKER (930511) --
I do not consider it a LIE to take
interpreted experiences which a person calls "data" or "facts" and
I don't seem to be making myself clear. Of course it's not a lie to
interprete data in PCT terms. This is what I called "just so" stories
and I think they can be great fun. What I called a "lie" was the claim
that PCT could account for 100% of the variance in some conventionally
collected set of data. It can't -- and I think it is important to
explain why to those who expect PCT to do better than their own theories.
The short answer is: because the data that they would have PCT account
for has been collected in the context of the wrong model -- the
There is NO BOSS REALITY and that is a fact!
I don't think I'm ready to sign up to that one. It may not be the BOSS
but I think there's some pretty good evidence that there is SOMETHING
out there consistently constraining the means I can use to control my perceptions.
Of course if
you have very little concern whether anyone uses PCT or not then I
would guess (just a guess) the most efficient act to perform is
As I said in the post to which this is a response, I am interested in
trying to help people (including myself) understand PCT. I would prefer
that they (and I) understand it before they start using it. I think
people often start using PCT before they really understand it; this can
lead to real problems. I don't want to name names but I think something
like this happened with a fellow whose name rhymes with a fancy women's
college in the east (OK, it's Vasser). Basically, this fellow is using
PCT in what are sometimes quite non- PCT like ways. So I won't be silent
about PCT -- not because I want people to use PCT, but because I want
them (and myself) to understand it.
|
OPCFW_CODE
|
The following command has solved my problem. To make it run after reboot
just insert the command in /etc/rc.local (by default this file is empty and
is equivalent to Windows' autoexec.bat). My rc.local looks like this:
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution bits.
# By default this script does nothing.
sudo setpci -s 00:02.0 F4.B=00
Hope this will solve the problem.
On Sun, May 15, 2011 at 10:23 PM, Ghozy Arif Fajri <
<email address hidden>> wrote:
> *** This bug is a duplicate of bug 779166 ***
> I run this command under no-backlight condition on terminal (I still can
> read my screen under flashlight):
> sudo setpci -s 00:02.0 F4.B=00
> and my back light is back!
> but back light is off again after reboot, need to run that command again :(
> All I did was add the setpci command into sudoers file (sudo visudo) so i
> don't need to type password when running setpci,
> and make a new executable file contains that command, then make new startup
> entry to run that executable file every startup.
> Now i have my back light function normally every startup..
> Try this guys, who knows it can help you.
> You received this bug notification because you are a direct subscriber
> of the bug.
> Blank Screen in Natty
> Status in “linux” package in Ubuntu:
> Bug description:
> Binary package hint: dpkg
> I have ACER ASPIRE 4736Z with intel GMA 4500M.
> Initially I had installed Ubuntu Maverick (10.10). I upgraded to Natty
> (11.04). Kernel has been updated to 188.8.131.52 which is working for me
> perfect with minor bugs. Afterwards I have tried to update upto Kernel
> version 184.108.40.206 but i get a blank screen when system boots. Solution
> which is still working for me is to restart with kernel 220.127.116.11.
> In my opinion problem is in video card driver. I need Help.
> To unsubscribe from this bug, go to:
|
OPCFW_CODE
|
How does backpropagation with unbounded activation functions such as ReLU work?
I am in the process of writing my own basic machine learning library in Python as an exercise to gain a good conceptual understanding. I have successfully implemented backpropagation for activation functions such as $\tanh$ and the sigmoid function. However, these are normalised in their outputs. A function like ReLU is unbounded so its outputs can blow up really fast. In my understanding, a classification layer, usually using the SoftMax function, is added at the end to squash the outputs between 0 and 1.
How does backpropagation work with this? Do I just treat the SoftMax function as another activation function and compute its gradient? If so, what is that gradient and how would I implement it? If not, how does the training process work? If possible, a pseudocode answer is preferred.
Backprop through ReLU is easier than backprop through sigmoid activations. For positive activations, you just pass through the input gradients as they were. For negative activations you just set the gradients to 0.
Regarding softmax, the easiest approach is to consider it a part of the negative log-likelihood loss. In other words, I am suggesting to directly derive gradients of that loss with respect to the softmax input. The result is very elegant and extremely easy to implement. Try to derive that yourself!
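A minimal NumPy sketch of both points (assuming NumPy is available; the softmax-NLL gradient works out to softmax(logits) minus the one-hot label vector):

```python
import numpy as np

def relu_backward(grad_out, x):
    # Pass gradients through where the input was positive; zero them elsewhere.
    return grad_out * (x > 0)

def softmax_nll_grad(logits, label):
    # Gradient of the negative log-likelihood w.r.t. the softmax input:
    # softmax(logits) - one_hot(label).
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    p = e / e.sum()
    grad = p.copy()
    grad[label] -= 1.0
    return grad

print(relu_backward(np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.0, 2.0])))
# prints [0. 0. 3.]
```

During training, `softmax_nll_grad` is computed at the output layer and the result is fed backward through the network like any other gradient.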
Thanks so much! I went through the calculus and I got the SoftMax minus 1, is that correct? Also for implementation, do I simply use this gradient in backpropagation and feed it back through the network like normal?
You are on the right path! Note however that the negative log-likelihood depends on correct labels, while there are no labels in your expression for the NLL-softmax gradient. We have a contradiction: a supervised learning which does not use supervision. Try to correct that!
You say "For positive activations, you just pass through the input gradients as they were. For negative activations you just set the gradients to 0.", but you don't explain why that must be the case, from a calculus perspective. I think this answer would improve if you explain that more in detail.
@nbro I assume that if OP is able to write a backprop for tanh, then he should know what is the derivative of ReLU.
@ssegvic How does the derivative of ReLU come from the derivative of tanh? It's just a convention that the derivative of ReLU at zero is 0, given that the left and right derivatives at zero are different. You don't explain this in your answer.
If you know how to differentiate tanh, then you know how to differentiate ReLU.
My answer is agnostic with respect to the derivative of ReLU at zero.
|
STACK_EXCHANGE
|
Novel - Trial Marriage Husband: Need to Work Hard
Chapter 1117: That Was Liang Yongyu's Fate!
"She hasn't achieved anything, yet you people call her a global superstar. You are the only ones who are so conceited."
Although it wasn't hard to tell the difference between the filming styles of Qiao Sen and An Zihao, Mo Ting did not think it mattered.
Ma Weiwei sneered. After hiding for an entire evening, it was time to go back and confront reality. Besides, just like Han Xiuche said, he had already helped her get some justice. At the very least, insulting Tangning a little made her feel better.
This was the first cut, so there was sure to be more editing and post-production to be done. In fact, there was even a chance that they would have to refilm some parts. Either way, Tangning wanted a reaction from Mo Ting straight away: was the film acceptable or not?
However, they had no idea that after they left, the thing they hit shed a layer of human skin. It was the outer shell of a human body that the ant-like creature had eaten. By now, only the head remained... That was all.
He needed some time to regain his composure...
Usually, American sci-fi movies liked to set their stories in the future or on another planet, but Tangning went ahead and set her film in the real, present-day world. It was reminiscent of the horror-inducing killer python videos of the past...
It went from a love story to a story about the love between a father and daughter.
Now, she wasn't a model nor an actress, and she definitely wasn't an international superstar. This time, she was a writer and producer.
So, seeing someone else in misfortune helped them stabilize their emotions...
However, even by bedtime, Mo Ting still hadn't said a thing. Was he shut in the study for so long because he was actually working? Or was it possible that after he finished watching the film, he felt it was so bad that he didn't know how to tell her?
"I know what you're trying to say. After preparing for so long, it's finally time to release your trailer and give the jerk a faceslap."
Mo Ting looked at Tangning seriously. He understood the anticipation she had and the complex emotions she felt. She wanted the most sincere review, but she also needed genuine reassurance.
Tangning didn't know when Mo Ting planned to watch the film, but she really wanted to see his reaction.
The truth was, Tangning had no reason to respond to a cheap imitation like her, but she simply couldn't accept it.
The film was so realistic that it felt like a creature like this could actually be hiding under one's bed at any time.
Before long, a couple of hours pa.s.sed.
"I recognize it shouldn't take the time me, but this jerk and Ma Weiwei are disgusting like a few c.o.c.kroaches."
After, Tangning proved the initial accomplished release of her motion picture to Mo Ting, "You happen to be initial to view it. Even I haven't observed it yet still."
However, while they were running away, they accidentally ran into a pedestrian. The pedestrian was extremely strange; he had a body that was transparent like a cicada pupa.
This first scene alone was alarming enough to give him goosebumps.
The first scene was of Coco Li being kidnapped and the kidnappers trying to escape with her...
After watching the entire film in one go, Mo Ting directly shut off his laptop.
However, they had no idea that when they left, the thing they hit shed a layer of human skin. It was the outer shell of a body that the ant-like creature had consumed. By this point, only the head remained... That was all.
|
OPCFW_CODE
|
I’ve used a lot of working methods for my desktop environment (not my servers) over the years, but they fall into three “buckets” of systems:
1. Big Workstation, VM Server, all tools, documentation, test environments local
2. Laptop with all tools installed, as well as Microsoft Office, hit the documentation on the web and a remote testing environment
3. Purpose-built Virtual Machines
I’m moving towards the latter model right now – I use Office in the “cloud”, Outlook on the web, and SharePoint for almost all my “office” work. I run a VM that is set up for T-SQL development and database design (with the Redgate, Quest and Embarcadero tools there, among others), another for Business Intelligence, another for SharePoint testing and so on. I find that each of these can use a different OS, patch level and so on. The only issue is that sometimes I actually want that hardware dependence, and of course I need a decent machine to run the VM’s. I don’t always run them all at the same time, so in fact that really isn’t an issue. I do have to synch between the VM’s, so I use a combination of PowerShell Scripts and Windows Live Mesh for that. So far I’m finding that I am very efficient this way, and can carry my “computer” on a USB drive assuming there’s a host machine at the conference or client where I’m travelling.
So, what are you finding to be the best way to work? Or do you have much of a choice in that?
I've just had to make a decision for myself as I just got a new machine. I'm going with option #3.
I think it's critical to still be able to setup #1 instead of relying on someone else to give you a 'just push play' solution.
I run off a 4GB mem laptop. I run VMWare Workstation. I create a VM for whatever I need. I've been doing that for several years now. Like you I don't run them all at once. I have one VM that's a working server (Win 2003, SQL 2005); I can do whatever I need from that if need be.
I have one VM that's Win 2008, SQL 2008 and VS2008. Then another that's Win 2008 R2, SQL 2008 R2 and VS2010. I also still have an XP VM that I use for some older things.
My laptop is loaded with Win7, Office 2010, SQL 2008 R2 and VS2010. I also have my Red-Gate tools on it. All of my VM's are on my 1TB external HD. I also have a smaller USB (320GB) drive. It needs no external power. I have copied a couple of my VM's onto it and I can use it on the go (where ever).
I also have a personal laptop (Toshiba) that runs Win7 Ultimate and has the same software installed as my work laptop. My work laptop stays at work and I can RDP into it from home if I need to.
I also have a 16GB and 8GB flash drive. I have my scripts and whatever else I might need in a pinch on them. They are with me at ALL times.
If I travel I do so with my personal laptop.
Since I joined my existing team I gave up the desktop and work with a laptop only. It is decently powerful with Windows 7, 4GB, dual core proc, and ssd. I also have a usb drive that I can store VMs, etc on. I have all of my tools, etc, loaded here and this is where I do 90% of my work. I also have a quad core box with 8GB of ram running Hyper-V that I use for testing/dev work. I have several stock VMs I use such as a standalone Server 2003, standalone Server 2008R2, and a Server2008R2 cluster. Each VM runs multiple instances of SQL 2005, 2008, and 2008R2.
Like Buck I also use Live Mesh, it is a god send. It syncs all of my scripts between multiple machines as well as allows web access to the files when necessary. I also sync quite a few ebooks with Live Mesh. The books are either something I am currently reading or something I have previously read and kept around as a good searchable resource.
I'm moving toward #3. I have a MacBook Pro with 8GB of memory and I use that for SQL Server work. There are several different VM configurations that I have based on what I need. Everything else is synced through dropbox, Google, and github.
In the last few months, I've started actively evaluating different 2U and 4U servers to replace my aging web host. The only reason I'm looking into co-locating my own server is that I can use it to host as many VMs as I want. That makes it a lot easier to keep them up and running or just spin them up as I need them.
I am definitely moving towards #3, if only because it means the inevitable move to a new / different computer is not an 8-12 hour affair of re-installing and configuring all the applications I need. Plus it means patching and other update-like actions aren't quite as dangerous.
My choice is closer to #1 at the moment but I am liking the cloud. I do have purpose-built VMs for differing server editions. I like the system-on-a-USB-drive approach. Even without my primary machine I am able to work on other machines with basic components.
Buck, this is a very interesting post and I too want to move to number 3. My problem is that as a developer who has all my environments created for me by my lovely CM team, I have no idea how to achieve number 3... can you point me in the direction of any good blogs/articles to read on the steps I need to take to set some up, so I can start playing :-)
Any assistance would be greatly appreciated.
TJ - I just treat my VM like a regular system. To your CM folks, as long as they are controlling all of that with software, they don't even have to know it's a VM. I'd let them know anyway, of course, but the process should not change for them.
|
OPCFW_CODE
|
The push to process large quantities of raw data for prescriptive analytics is the next computing revolution, and it is taking place now.
But who can address such challenging computational questions that span multiple theoretical fields while requiring meticulous engineering attention to work effectively in practice? I can and I am. My research spans not only the data-driven fields of machine learning and information retrieval but it extends to the decision-driven fields of artificial intelligence (AI) and operations research required to go beyond prediction to prescription. My research combines applied theoretical knowledge in the technical areas of
The insights in my research stem from understanding how computational tools (or novel extensions thereof) dovetail with theoretical formalisms such as MDPs or (constrained) optimization to compute near-optimal decisions without sacrificing performance for scalability. As two exemplars of my research innovations:
This is just the tip of the iceberg. To elaborate in more detail, following is a survey of the ABCs of my research (or rather, MBDSs) that enable these large-scale data processing and decision-making applications.
With the explosion of social media content, data has transformed into a rich network of social interactions and viral content dynamically changing over time. This has led to a data-driven revolution in recommender systems that can mine user interaction patterns and anticipate information diffusion as shown in my work on social collaborative filtering for Facebook (WWW-12, COSN-13). At the same time, this media explosion has led to an information overload requiring automated content analysis to manage it for a variety of user needs such as extracting human-readable topics underlying millions of daily Tweets (SIGIR-13), discovering implicit user communities in collaborative preference data (IJCAI-13a), and performing diverse retrieval that covers all facets of a user's information needs (SIGIR-10, CIKM-11, SIGIR-12).
This explosion of media content has also led to an overload of unstructured text data, which through effective natural language processing can be exploited for information extraction, summarization, and advanced information retrieval. To this end, I've worked on building state-of-the-art systems for automatic summarization, named entity recognition (NER), query expansion, sentiment and opinion analysis, complex sentiment, time and event extraction, hierarchical text classification, and question answering. Combining these various technologies together, at NICTA we've built tools for live Twitter-stream visual summarization (EventWatch) and visual news and blog exploration (OpinionWatch) to help users drill down through mass quantities of natural language content to find the information they need to act on. These technologies are currently licensed by a number of organizations throughout Australia.
One of the most important data structures for efficiently manipulating expressive functions for probabilistic and decision-theoretic inference is the decision diagram. With David McAllester, I defined the Affine Algebraic Decision Diagram (AADD) (IJCAI-05, AAMAS-10) for compactly representing logical and arithmetic structure in functions yielding up to an exponential-to-linear reduction in space and time for probabilistic and decision-theoretic inference applications. For my thesis work, I developed the first-order ADD (FOADD) (UAI-06, AI Journal 2009) for exploiting first-order logical relational structure used to reason efficiently about infinite domains in probabilistic planning. And crucial to my current work on hybrid stochastic control is the continuous extension of the ADD — the XADD (UAI-11, UAI-13) for efficient symbolic piecewise algebraic computation capable of producing the first exact and bounded approximate solutions for continuous traffic control models, inventory control, and numerous other hybrid control applications (AAAI-12a, NIPS-12, IJCAI-13b).
For an overview of my (and others') work in this area, I gave an AAAI-13 Tutorial on this topic.
In order to exploit symbolic structure in decision-making, one must be able to compactly model such structure. As a major contribution to the planning community, I have developed the unifying relational dynamic influence diagram language (RDDL) to compactly model expressive decision-making problems by leveraging a mix of logical and stochastic programming representations; RDDL permits modeling of complex problems like traffic control that were otherwise impossible to model with previous description languages (e.g., PPDDL). Using RDDL, I ran the 2011 ICAPS International Probabilistic Planning Competition (IPPC) with a record 11 competitors from around the world (including M.I.T., U. Washington, U. Waterloo, KAIST in S. Korea, and N.U.S. in Singapore) (AI Magazine), which has helped shift the focus of probabilistic planning to domains that more closely reflect the expressivity of real-world applications.
My vision for the future is simple: social media is connecting all information, the Internet is becoming more interactive and personalized, and every embedded system that impacts our lives — from our daily commute to our consumer needs — will become increasingly adaptive to our welfare and that of society. Achieving this vision requires transforming immense quantities of data into actionable decisions in a way that is self-improving (learning), online (requiring efficient algorithms and data structures), robust (Bayesian), expressive (symbolic), and non-myopic (sequentially optimal). My research at the confluence of these areas is creating new representational and computational paradigms and new opportunities for data-driven decision-making to create better business processes and smarter cities. Alan Kay was right, "the best way to predict the future is to invent it."
|
OPCFW_CODE
|
Data security is a hot topic these days. The recent news of the global flaw that put 3,000 Microsoft email servers at risk will surely have impressed upon everyone the importance of data protection.
Unfortunately, data leaks can spring from many different endpoints in your organisation. Links sent through email can compromise your data's security by taking you to unsecured websites; an attachment containing malicious code could be opened on a device. Then one needs to consider people's behaviour: if the people in your organisation do not manage data sharing correctly, you can quickly lose track of where everything is stored.
So, how can you protect your data? Well, firstly it is important to think of where your data resides, which unfortunately could be all over the place depending on your enterprise’s device policies. Here are some of the key tools used to secure user data; you will probably come across these solutions in the form of different products.
Mobile Device Management
MDM is the administration of mobile devices, including phones, tablets, and laptops. It requires your device to be enrolled; in other words, the owner of the device must give permission for the device to communicate with and be partially controlled by the MDM server – this will be achieved by installing an app or program. Intune mobile device management is available with certain licenses. Mobile Device Management converts a mobile device into a dedicated work device and is not suitable for ‘bring your own’ devices.
Mobile Application Management
A bring your own device (BYOD) policy is very common in business. Mobile Application Management (MAM) differs from Mobile Device Management in that it does not control individual devices but works at the application level to secure data coming in and out of the application. If you use Cloud applications, then Cloud-based Application Management is recommended.
Enterprise Mobility Management
Enterprise mobility management does not actually refer to specific types of software or services, but to all the different features used by an enterprise to secure data and technology used by workers. This may include both Mobile Device and Mobile Application management, and the work of IT specialists.
Unified Endpoint Management
UEM is the evolution of both mobile device management and enterprise mobility management. It does everything that MDM, MAM, and EMM tools achieve, but it also extends to almost all endpoints used in a business (hence the name); desktops, printers, wearables like smartwatches, and Internet of Things (IoT) devices can all be managed by a Unified Endpoint Management solution.
Microsoft Intune, also known as Microsoft Endpoint Manager (and formerly as Windows Intune), is a Unified Endpoint Management solution. Intune is a key component of the Microsoft Enterprise Mobility + Security (EMS) suite. Intune provides a single admin portal, and as a Unified Endpoint Manager, it means admins can set policies on all endpoints from a single location in the Cloud.
So, what are some of the ways Intune protects company devices? Firstly, Intune protects both company-owned and employee-owned devices. If it is a company-owned device, there will be even more control over it; for example, if the device is stolen, the organisation could perform a full remote wipe of the device. A 'bring your own' device can be partitioned, meaning that corporate data and personal data are segregated. Moreover, with Intune you will have access to a self-service portal containing company-approved apps, allowing you to download the apps that you need.
Conditional Access is also commonly applied to company devices that connect to company resources. Users are granted access to the resources they need to do their work, provided their devices and accounts meet the criteria set for accessing that data, whilst avoiding giving them unnecessary access to other resources. This stops data being spread thinly across multiple devices and services, minimizing opportunities for that data to leave the organisation. If a device or account no longer meets the Conditional Access requirements, it will be locked out of the company's resources.
Microsoft Defender and Microsoft Intune can be integrated as a Mobile Threat Defence solution – this is known as Microsoft Defender for Endpoint. The solution is available for Windows 10, Android devices, and iOS and iPadOS. With this solution, Microsoft Defender for Endpoint can monitor company devices and if there is a security breach on any of them, it can classify that device as high-risk, and block that device off from company resources.
Microsoft Intune follows the same technical and organizational measures that Microsoft Azure takes for securing against data breaches. When a Security Incident is identified, customers are notified. This process includes working with the Microsoft 365 team to communicate breach notification for any Microsoft 365 customers using Intune.
|
OPCFW_CODE
|
The app references non-public selectors - Unable to upload app
Description
We are using version 10.4.0 and get the error while uploading to AppStore:
"The app references non-public selectors in Payload/{app_name}.app/{app_name}: determineAppInstallationAttributionWithCompletionHandler:, lookupAdConversionDetails:"
Reproducing the issue
No response
Firebase SDK Version
10.4.1
Xcode Version
14.2
Installation Method
Swift Package Manager
Firebase Product(s)
Analytics, App Distribution, Crashlytics
Targeted Platforms
iOS
Relevant Log Output
No response
If using Swift Package Manager, the project's Package.resolved
Expand Package.resolved snippet
Replace this line with the contents of your Package.resolved.
If using CocoaPods, the project's Podfile.lock
Expand Podfile.lock snippet
Replace this line with the contents of your Podfile.lock!
@aparnaredbus Thanks for the report. Apple occasionally changes its submission criteria, causing issues like this.
Those are symbols from the legacy Google Analytics product and not in Firebase.
Are you using the legacy Google Analytics or Google Tag Manager?
If you are using Google Tag Manager, please upgrade to v7.4.3.
I just got this today after updating from Firebase 8.16.0 to 10.4.0. These are my dependencies:
I'm going to try to submit for review anyways. Not sure if it will go through or not.
Still getting this warning with SDK version 10.5.0. Would like to have it resolved but fortunately it does not seem to prevent app approval.
@willbattel As pointed out above, we addressed this issue in GoogleTagManager 7.4.3.
If you're seeing it in Firebase, please open another issue with the details.
As mentioned in my comment above, we're not using GoogleTagManager (at least not directly, and it doesn't appear in our dependencies either). This issue started happening after we upgraded Firebase from 8.15.0 to 10.4.0. I'd be happy to open a new issue but it seemed like the OP of this issue was also not using GoogleTagManager, and that I was seeing the same problem, but perhaps I am misunderstanding.
Here's an example of using the nm tool to check if a symbol is in the SPM checkout of a particular binary library:
~/Library/Developer/Xcode/DerivedData/ABTestingExample-gxybhvsgstennrbcutwsunxiezzv/SourcePackages/artifacts/firebase-ios-sdk/FirebaseAnalytics.xcframework/ios-arm64_armv7/FirebaseAnalytics.framework $ nm FirebaseAnalytics | grep maxUserProperties
00000952 t -[FIRAnalytics maxUserPropertiesForOrigin:queue:callback:]
00000994 t -[FIRAnalyticsConnector maxUserProperties:]
00000b48 t ___43-[FIRAnalyticsConnector maxUserProperties:]_block_invoke
00000000000009f8 t -[FIRAnalytics maxUserPropertiesForOrigin:queue:callback:]
0000000000000b54 t -[FIRAnalyticsConnector maxUserProperties:]
0000000000000cb8 t ___43-[FIRAnalyticsConnector maxUserProperties:]_block_invoke
|
GITHUB_ARCHIVE
|
<?php
namespace app\back\model;
use think\Model;
use think\Db;
use think\Session;
/**
 * Back-end administrator table model
 */
class Admin extends Model{
    // Full name of the data table this model maps to
    protected $table = 'bg_admin';
    // Validate an administrator's credentials
    public function check($name,$pass) {
        // Note: md5 is a weak password hash, kept here only to match the existing schema
        $pass = md5($pass);
        return $this->where(['admin_name'=>$name,'admin_pass'=>$pass])->find();
    }
    // Record additional login details for the user who just signed in
    public function updateAdminInfo($admin_id) {
        $login_ip = $_SERVER["REMOTE_ADDR"];
        $login_time = time();
        // Bind parameters instead of interpolating values to avoid SQL injection
        $sql = "update bg_admin set login_ip=?,login_time=?,login_nums=login_nums+1 where admin_id=?";
        return Db::execute($sql, [$login_ip, $login_time, $admin_id]);
    }
    // Get the currently logged-in administrator's record
    public function getCurrentInfo() {
        $admin_id = Session::get('admin_id');
        return $this->where(['admin_id'=>$admin_id])->find();
    }
}
|
STACK_EDU
|
I need to migrate six SQL Server 2005 databases to a new server, which has SQL Server 2008 installed on it. The databases are outgrowing the old 2005 server. I want to move everything to a new server with more memory and disk space and upgrade to SQL Server 2008 in the process.
I’ve been reading about the use of the attach and detach stored procedures and am wondering if I can use them to accomplish both tasks: migrate to new hardware and upgrade the environment at the same time. I've read that sp_attach_db is deprecated in 2008, so what's the best way to do this?
Here is what I plan on doing and I’d like verification that this will work.
Step 1: Make the databases offline to ensure nothing is accessing them on the old 2005 Server.
ALTER DATABASE MyDatabase1 SET OFFLINE;
ALTER DATABASE MyDatabase2 SET OFFLINE;
ALTER DATABASE MyDatabase3 SET OFFLINE;
ALTER DATABASE MyDatabase4 SET OFFLINE;
ALTER DATABASE MyDatabase5 SET OFFLINE;
ALTER DATABASE MyDatabase6 SET OFFLINE;
Step 2: detach the databases
EXEC sp_detach_db @dbname = N'MyDatabase1';
EXEC sp_detach_db @dbname = N'MyDatabase2';
EXEC sp_detach_db @dbname = N'MyDatabase3';
EXEC sp_detach_db @dbname = N'MyDatabase4';
EXEC sp_detach_db @dbname = N'MyDatabase5';
EXEC sp_detach_db @dbname = N'MyDatabase6';
Step 3: Copy the mdf and ldf files to the new server
scp C:\MyDatabase1.mdf D:\MyDatabase1.mdf
scp E:\MyDatabase1_Log.ldf L:\MyDatabase1_Log.ldf
scp C:\MyDatabase2.mdf D:\MyDatabase2.mdf
scp E:\MyDatabase2_Log.ldf L:\MyDatabase2_Log.ldf
scp C:\MyDatabase3.mdf D:\MyDatabase3.mdf
scp E:\MyDatabase3_Log.ldf L:\MyDatabase3_Log.ldf
scp C:\MyDatabase4.mdf D:\MyDatabase4.mdf
scp E:\MyDatabase4_Log.ldf L:\MyDatabase4_Log.ldf
scp C:\MyDatabase5.mdf D:\MyDatabase5.mdf
scp E:\MyDatabase5_Log.ldf L:\MyDatabase5_Log.ldf
scp C:\MyDatabase6.mdf D:\MyDatabase6.mdf
scp E:\MyDatabase6_Log.ldf L:\MyDatabase6_Log.ldf
Step 4: This step would be done on the SQL Server 2008 system. I’ve read that sp_attach_db has been deprecated. Can I still use it or do I need to use another command that I’ve read about, ALTER DATABASE?
Or this procedure on each of my databases?
CREATE DATABASE MyDatabase1
    ON (FILENAME = 'D:\MyDatabase1.mdf'),
       (FILENAME = 'L:\MyDatabase1_Log.ldf')
    FOR ATTACH;
Am I missing something here? I don't have much SQL Server experience so I'm unsure how to do something like this.
|
OPCFW_CODE
|
Intro to the use of GITHUB.COM and Git
After setting up git, register for a free public account on github.com and become familiar with GitHub.
Then create a repository: https://help.github.com/articles/create-a-repo
Fork a Repository
To make my own Fork (a snapshot in time of a copy of another repository on github)
see "Fork a Repository" https://help.github.com/articles/fork-a-repo
Using the above, I forked github.com/seandepagnier/weatherfax_pi to github.com/rgleason/weatherfax_pi about six months ago.
Clone the Fork to your Local Repository
Next, "Clone your Fork to a Local Repository"; read the end of the Fork-a-Repo article.
Note how to keep track of the original repository (seandepagnier/weatherfax_pi) by "Configuring Remotes" by adding 'upstream'.
Six months went by and Sean Depagnier had made changes to his repository.
I needed to get my fork up to date with Sean's repository.
Sync a Fork
So I "Synced a Fork": https://help.github.com/articles/syncing-a-fork
Syncing your fork only updates your local copy of the repository; it does not update your repository on GitHub.
Done using the commands:
git remote -v
git remote add upstream https://github.com/seandepagnier/weatherfax_pi.git
git remote -v
git fetch upstream
git branch -va (should show the most current upstream commit, e.g.:
"remotes/upstream/master 5fdff0f Some upstream commit")
Now your Local repository on your computer has a remote branch 'upstream' with the most current
commits from seandepagnier/weatherfax_pi
We have fetched the 'upstream' repository.
Next we need to Merge those changes into our local repository.
git checkout master (now pointing to the local repository master)
git merge upstream/master (now merging upstream into the local master)
Make changes on the local copy, compile, test, and then commit.
After getting the local repository current with the original repository seandepagnier/weatherfax_pi,
we've made some changes and run some tests by compiling the code with MSVC++,
and those changes work fine. Now we need to commit the changes and update the personal repository on GitHub:
Commit those changes and add an identifying note.
Then publish the commits from your repository into a remote for other users to view and potentially fetch.
Commit changes made.
1. Determine any untracked files that are needed as part of the commit using git status. Write them down.
2. Add the necessary untracked files using git add <file1> <file2> etc.
3. Check for untracked files again using git status.
4. Then commit using git commit -a
git status - show files added to the staging area, files with changes, and untracked files
git diff - show a diff of the changes made since your last commit; to diff one file: git diff -- <filename>; to show a diff between the staging area and HEAD: git diff --cached
git add <file1> <file2> ... - add <file1>, <file2>, etc. to the project; git add <dir> adds all files under directory <dir>, including subdirectories
git commit -a - commit all files changed since your last commit (does not include new, untracked files)
Update your personal forked repository on GitHub (github.com/rgleason/weatherfax_pi).
When you've made your awesome updates locally, you'll want to synchronize that work back onto GitHub. It's super easy to do:
You can only push to one of two writeable protocol URL addresses.
Those two include an SSH URL like git@github.com:user/repo.git
or HTTPS URL like https://github.com/user/repo.git.
If you cloned a repository with a read-only git:// URL and you have been granted permissions to write to it, you can update the URL to one of the writable forms with:
git remote set-url origin <NEWURL>
Pushing a Branch
To push a local branch to an established remote, you need to issue the command:
git push <REMOTENAME> <BRANCHNAME>
This is most typically invoked as
git push origin master
git push
update the server with your commits across all branches that are *COMMON*
between your local copy and the server. Local branches that were never
pushed to the server in the first place are not shared.
git push origin <branch>
update the server with your commits made to <branch> since your last push.
This is always *required* for new branches that you wish to share. After
the first explicit push, "git push" by itself is sufficient.
After the [git push origin master] is issued, open github and check the commits in your forked repository.
For example: https://github.com/rgleason/weatherfax_pi
Issuing the "pull request" to Sean was quite easy. I had two commits waiting.
I picked the "Compare" button on the right, which created a diff, showed the pull details, and had a green button to make the pull request.
Git Help https://help.github.com
Git Cheatsheet https://help.github.com/articles/git-cheatsheet
|
OPCFW_CODE
|
Web Chat Application - ASP.NET/Jabber/Ajax/WCF/Comet/ReverseAjax - Issues Faced - Seeking Insights
I've been trying to build a web-based chat application for the past three weeks and I'm facing issues with whichever route (programming technique/technology) I take to build it. I've explained the issues I've experienced with all of them below. Kindly provide whatever insights you have on this.
ASP.NET-AJAX
The first issue is that it is not really real time:
If the client hits the chat server every x seconds (a constant interval), it is not going to be real time unless x is very small.
If x is very small, like 1 second, and there are 1000 users online at the same time, I think it is really going to hammer the chat server and cause scalability/performance issues.
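The arithmetic behind that scalability concern can be sketched in a few lines; requestsPerSecond is a hypothetical helper for illustration, not from any framework:

```javascript
// Back-of-the-envelope load from timer-based polling: every online user
// issues one request per polling interval, regardless of chat activity.
function requestsPerSecond(onlineUsers, pollIntervalSeconds) {
  return onlineUsers / pollIntervalSeconds;
}

console.log(requestsPerSecond(1000, 1));  // 1000 requests/sec at a 1-second interval
console.log(requestsPerSecond(1000, 30)); // ~33 requests/sec, but far from real time
```

The trade-off is visible immediately: lowering the load by widening the interval directly worsens the latency of message delivery.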
WCF-Duplex
I unfortunately wasted a considerable amount of time on this, trying to build a WCF duplex service which maintains all the clients and invokes them through the channel as and when required. But I recently learnt that WCF duplex callbacks won't work with ASP.NET (since HTTP is request/response based). I was following this great article to build a duplex service.
Comet/ReverseAjax/HTTP Server Push
I'm extremely new to this technique and wonder how well it can scale. After my first glance at this programming technique in the wiki here and the very first article on Comet by Alex here, I learned that the client always maintains an open connection (long-lived ajax calls) to the server, which the server can use to push "interesting events happening in the server" to the browser (client). So how well can it scale? What if the maximum number of open connections in IIS is exceeded, or other issues like that?
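To make the long-polling idea concrete, here is a minimal in-memory sketch of the pattern described above. It simulates the held-open request with plain callbacks rather than real HTTP, and all names (longPoll, pushEvent, pendingClients) are illustrative only:

```javascript
const pendingClients = []; // "requests" the server is currently holding open

// Server side: instead of answering immediately, hold the request
// until an interesting event occurs.
function longPoll(onEvent) {
  pendingClients.push(onEvent);
}

// Server side: an event occurs; complete every held request with it.
function pushEvent(message) {
  const waiting = pendingClients.splice(0); // take all currently-held requests
  for (const respond of waiting) {
    respond(message);
  }
}

// Client side: receive the event, then immediately re-open the connection,
// as a Comet/reverse-ajax client would.
const received = [];
function connect() {
  longPoll((message) => {
    received.push(message);
    connect();
  });
}

connect();
pushEvent("user Alice joined the chat");
pushEvent("Alice: hello!");
console.log(received); // both events delivered without timer-based polling
```

The scalability worry in the question maps directly onto pendingClients: each online user costs one held connection, which is exactly the resource that IIS connection limits constrain.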
Jabber Server/Client (XMPP)
I see that most of the prominent chat applications online are making use of Jabber. I also learned that writing a Jabber server from scratch is a tedious task. I have a separate user profile store for my application. Can I integrate that with Jabber easily? Any open source Jabber servers that I can host privately? (I've seen many open source tools to build the client easily.)
Any insights provided are very much appreciated.
Thank you
NLV
Some info on comet and ASP.NET: http://stackoverflow.com/questions/65673/comet-implementation-for-asp-net
If you are using .NET, check out WebSync. It allows for fully scalable comet using IIS to integrate directly with your application. There is a free Community edition you can try out, along with tons of examples and chat demos.
I just recently implemented a multi-client Jabber web-app using WebSync and jabber-net.
Do you have any sample code for the integration of Jabber chat in an ASP.NET website? Will you please post sample code?
PokeIn provides shared objects among the clients and it simply helps you to create impressively solid and fast web applications. Even if your application is hosted on multiple servers, PokeIn manages the shared objects on all of them. So, this feature will help you to create quite effective solutions. In addition to these, you will find very useful samples over there
I know this is old, but if someone new finds this, you should consider using SignalR.
Open Source Jabber Server
Have you checked out OpenFire?
|
STACK_EXCHANGE
|
What is a linked list?
In computer science terms, a linked list is a linear collection of elements whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a linear data structure consisting of a collection of nodes which together represent a sequence. In its most basic form, each node contains data and a reference (in other words, a link) to the next node (element) in the sequence. Unlike an array, the elements or nodes of a linked list are not stored in sequential memory locations. They are tied together by pointers, each of which tells us where the next element in the list is stored.
Each node in a singly linked list has its own value and a reference to the next node. The only node that is different is the last node, where the linked list terminates, as its reference points to null. One major advantage of a linked list is that it allows you to easily insert or remove an element/node from the list without altering the entire data structure. In an array, by contrast, if you insert or delete an element, all the indexes after that element change.
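The article's code snippets did not survive extraction. A minimal sketch of the Node class and a short hand-linked list could look like the following (the original examples appear to be JavaScript, so this Python version, and the variable names, are a reconstruction):

```python
class Node:
    def __init__(self, data):
        self.data = data   # the node's value
        self.next = None   # link to the next node; None (null) marks the end

# Hand-link five nodes: a -> b -> c -> d -> e
a_node, b_node, c_node, d_node, e_node = (Node(v) for v in "abcde")
a_node.next = b_node
b_node.next = c_node
c_node.next = d_node
d_node.next = e_node
# e_node.next stays None: the list terminates here
```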
In order to create a linked list, we first need to create and define a class (Node), as in the code above. As mentioned earlier, it initializes with two properties: a data value and a reference to the next node. We then link a series of nodes by telling each node which one follows it. This very short linked list terminates at eNode, whose next value is null because no other node is linked after it. With the above linked list, we can easily see where it terminates. But what if we get a linked list that is hundreds of nodes long and we need to find the last node?
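The function the next paragraph describes was lost with the original snippet; a Python sketch of the same idea (a self-contained toy Node class is included so it runs standalone) might look like:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def find_last(node):
    # Follow .next links until a node with no successor is reached
    while node.next:
        node = node.next
    return node

# Build a -> b -> c and ask for the last node
head = Node("a", Node("b", Node("c")))
print(find_last(head).data)   # prints "c"
```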
We can set up a function like the one above. Since a node will always have this.next, we can set up a while loop: while there is a node.next, we set the current node to the next node until we reach the end. Since the value of node.next on the last node of a linked list will always be null, we can also console.log its value to confirm. Using this while loop, we can find the last node from any position in the list.
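The two-pointer (slow/fast) exercise the next paragraph describes can be sketched in Python like this (again a reconstruction, with a toy Node class so the snippet runs on its own):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def find_middle(head):
    slow = fast = head
    # fast advances two nodes per step while slow advances one;
    # when fast reaches the end, slow sits at the middle of the list
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow

# a -> b -> c -> d -> e : the middle node is "c"
head = Node("a", Node("b", Node("c", Node("d", Node("e")))))
print(find_middle(head).data)   # prints "c"
```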
Above is another exercise to help us get more familiar with traversing the linked list. In this example we want to find the middle node of the chain. We set up two declarations, a slow node and a fast node, where the fast node moves at twice the speed of the slow node. Using this method, our slow node will be at the middle of the linked list when our fast node reaches the end. This function only works if we start from the beginning, or head, of the linked list.
If you want to read more on linked lists and learn how to add a node to any position, please check out:
|
OPCFW_CODE
|
A bug with V4 toes morphs?
Hi there, people,
I've just realized that I have a strange issue going on with some of my v4 morphs.
I have morphs that allow her to move each toe separately, and also a morph that controls the toenails. The morphs are different, and differently named... but when I apply the toenails morph, it deletes the "4Toe Spread" morph, and vice versa! I checked the dependencies and saw nothing out of place. Any ideas?
anomalaus last edited by
@gsfcreator can you say what package those morphs come from? V4 has never had individually articulated toes, so it all has to be done with morphs. I only see:
It's from a purchased morph package. I can dig through my Renderosity account to find out exactly which one. "Perfect Feet for V4", maybe...
the toenails morph is separate, and as you can see, 4toe spread is not there, because if I apply it - it runs over the toenails morph!
karina last edited by
I doubt that this is "Perfect Toes", because it uses the "Community" channels, while PT creates its own morph channels via a Python script.
It looks like there's one of the community channels missing in your V4.
Now if you inject the morphs and Poser can't find the correct morph channel (they're numbered), it overwrites the first other morph channel it can find.
Of course, this fucks up the dependencies too:
From what I see "ToeNailsLong" is now slaved to "4Toe, Lower", which it shouldn't be.
Do you set the "Toes Spread" dials with a library pose? I guess you do because of the values in the other "spread" dials.
Do the following:
Double-click each of the dials and check the morphs' internal numbers.
Normally they should be in sequence. If there's a gap in the sequence (probably between "4Toe, Lift" and "ToeNailsLong") then you have found the missing channel.
If you know how to edit a Poser file, then recreate the missing channel. Otherwise please call back and I'll show you how to recreate the channel with the Morph Tool.
Hope that helped
@karina! so nice to hear from you :)
I'm actually using the toe spread dial that you created with sasha-16 :).
I have no idea how to edit a poser file...but I'll do what you asked and write back. thanks! <3
"From what I see "ToeNailsLong" is now slaved to "4Toe, Lower", which it shouldn't be." - it's not. it's enslaved to it's own master parameter in "body" :). but we still have this problem that I mentioned...
4toespread is number 33:
That's the one that comes afterwards, and it's #34 as it should be:
Now when I apply the ToeNailsLong morph...:
it runs over #33. Always #33. So... how do I fix this and make Poser apply this morph to its own channel?
Okay, I've found a fix, and it's quite simple: you can simply click the internal name, change it to something else that doesn't exist... and voilà, it's fixed :-)
karina last edited by karina
Sorry my friend, I lost this thread from my sight because I got no notifications of the recent replies.
So, belated, I repeat my question from a few weeks ago:
Do you set this hand pose with the dials when it happens, or do you use a "click" finger pose file from the library?
If so, then the culprit might be an error in the pose file, addressing the wrong channel.
Alternatively, it might still be a weird dependency gone wrong in the channels.
If you'd like me to have a look at it, please send me a PM (or sitemail, or whatever it's called here), or just pop over to SASHA's forum and we can continue there without worrying about copyright issues...
@karina I'm familiar with your forum (there I'm called dreamweaver ;) ) - I should hop by! anyway - this issue, as I wrote, has been solved and rather simply :).
|
OPCFW_CODE
|
PipelineFX Releases Qube! 5.1: Artists Gain Fully Customizable GUI and Increased Application Support (March 06, 2007)
DMN Newswire--2007-3-6--PipelineFX, makers of Qube! software, the leading render farm management software for film production, game development, and digital media education around the world, today announced the release of Qube! version 5.1.
"With this 5.1 release of Qube!, we have taken the industry trend toward modern Python scripting and opened our GUI to be fully customizable, allowing artists to see exactly what they need to see to manage their jobs in the most efficient manner," said Scot Brew, PipelineFX Director of Technology. "The new Python-based Qube! GUI's streamlined layout is designed to present artists the information they want and need at their fingertips. It also has many new features and added functionality that artists and Technical Directors have been asking for."
The new Qube! GUI includes:
- Fully Python-scriptable interface allows studios to customize the GUI for their pipelines
- Streamlined layout designed to easily present key information to users and administrators
- Single-window layout removes the need to navigate through multiple windows to find information
- New panels:
  - "Farm-at-a-Glance" panel displays a worker status summary for the entire farm
  - "Output" panel displays output images from jobs
  - "Histogram" panel displays task/frame render times
- Enhanced and streamlined right-click menus:
  - Act on multiple workers/hosts along with jobs and frames/tasks
  - New "retry failed frames" menu to simplify this common action
- Submit dialogs are now modal and use standard controls
The new release also provides enhanced application support including:
- Maya® (updated): Added support for Maya 8.5 and exposed new options
- Migen (New): Support for Maya's mentalray (.mi file) export engine
- 3ds Max® (updated): Added support for Autodesk 3DS Max 9.0
- MentalRay Standalone (New): New job type to render MentalRay .mi files
- Lightwave 3D® (New): Supports Lightwave 8.5 and 9.0 on Windows
- Shake® (updated): Added support for Shake 4.1 on OS X
- Fusion® (updated): Added support for Fusion 5.1
"PipelineFX is committed to making Qube! the class-defining render farm management system for todays CG pipelines. Qube! 5.1 supports industry-standard Python scripting, offering improved workflows and development productivity," said Troy Brooks, CEO of PipelineFX. "We've really listened to our customers requests for features and functionality that will ensure they can maximize their investment in rendering infrastructure. This new version is our most significant release to date.
Qube! 5.1 is available now as a Universal application release for Intel- and PowerPC-based Macintosh computers, as well as on the Microsoft Windows® and Red Hat, SUSE and Fedora Linux® platforms.
About Qube! and PipelineFX: Qube! is the leading enterprise-class render farm management system for film and game production. Qube! is highly customizable, extensively scalable, can be integrated into any production workflow, and is backed by 24 x 7 technical support. Qube! has custom pipelines for creative applications like Autodesk® 3ds Max®, Autodesk Maya®, NUKE, SOFTIMAGE®|XSI®, Shake®, Adobe® After Effects® and many more. Qube! is IBM ServerProven®, and operates in Linux®, Windows® XP, 2000 and 2003, and Mac OS®X environments. PipelineFX, with headquarters in Honolulu, HI, and offices in San Francisco, CA and Vancouver, B.C. was founded in 2002. Qube! is used by world-class studios including South Park Studios, Electronic Arts, Buena Vista Games, DKP, ReelFX, Guava Graphics, and Radical Entertainment. Qube! is used by schools around the world including the School of Visual Arts in N.Y., Carleton School of Architecture, Full Sail in Florida, the University of Advancing Technology, Pratt Institute, and the University of Hawaii-Academy for Creative Media. For more information, please contact Troy Brooks, CEO of PipelineFX: [email protected], Phone: 866-856-7823, 1000 Bishop Street, Suite 606, Honolulu, Hawaii, 96813, USA, [email protected], http://www.pipelinefx.com.
Trademark Notice: Qube! and Qube! Remote Control are trademarks of PipelineFX, LLC. All other trademarks are the property of the respective trademark owners. ©2007 PipelineFX, L.L.C
|
OPCFW_CODE
|
A Spam Killing Community
Article by John
The purpose of this article is to discuss how to eliminate spam through a community of spammer killers. Why take a passive role in spam elimination, and why use up precious time and complex tools to track down one spammer? Instead, let's create a community of spammer hunters to track them down and wipe them out, using their own methods against them. Forget killing spam; let's kill the spammers.
Everyone I know hates spam.
Wait. I take that back: almost everyone I know hates spam. Barry Dennis loves it, but we'll ignore him for a moment. He is an aberration. A freak of nature. Please don't hate him for it.
Spam is terrible for many
reasons. It eats bandwidth, it wastes time, it diverts attention, and
it is just plain annoying. If some people weren't suckers, it wouldn't
work. However, spam does work. Why? Because one out of every 10,000
people respond to it. It is like junk mail in that a super low response
rate is just fine because the costs are still lower than the revenue.
Thus, a profit is possible. As a result, all the good people -- the non-idiots -- get hit with a ton of spam.
It is sad, but over 90% of my email is spam. I think I get about 200 emails a day on average. Out of those, about 30-50 are legitimate. The rest are spam. Sure, I just delete it, but it is a pain in the arse because I have to do it day after day.
Stop Spam Before It Reaches You
The current attitude about
spam seems to be very passive. It is about filtering and reducing
and blocking. People talk about killing spam before it reaches
their inbox. Here are some examples:
Along similar lines, many
people use software products and services to eliminate spam. Here are some
popular products that folks are using:
This passive attitude really
bothers me. Yet at the same time, I usually just delete my spam and move
along. What a shame. The passive attitude just reinforces the cycle and
leads to more spam.
Actively Fight Spam
Let's think about this
situation for a moment. What I am saying is that the basic attitude is
that spam is bad and spam is wrong. People don't like it, but they feel
powerless to stop it. So, they try to stop it from reaching them. Out of
sight, out of mind. Only rarely do people actually get mad at the real
cause of spam -- the spammers!
Fortunately, there are geeks
out there that have developed tools
for tracking down the scum that send spam. But, unfortunately, doing this
takes time and energy. It is generally a personal battle and the rewards
are not high. One reason why people like the filtering tools is that they
don't require much work. Put them in place and relax, for the most part.
Active tools require action. Many of us don't have time, and some of us are lazy.
To summarize, passively filtering spam is lame. It just tells spammers that they can continue to spam. Actively fighting spam is too hard and takes too much time. It doesn't eliminate spammers effectively because it is too narrow and limited. Of course, there is legislation, but as we all know, it is largely ineffective.
I propose a simple solution
to the spam problem. To my knowledge, this approach hasn't been explained
or explored. If I am wrong please correct me. I'd be happy to give someone
credit for the idea.
Here it is:
- Form a community of spammer hunters.
- People that join the club submit spam as they get it.
- People in the club figure out the actual contact information of the spammer.
- Everyone in the community attacks the spammer using spammer tools.
Step one isn't anything amazing. I propose that we set up a simple community with a web site, bulletin board, chat room, and email discussion list. Perhaps it is a closed group where you can only join by invitation. I'd be happy to be a member.
Step two does require some action, but it doesn't require sleuthing or incredible amounts of work. Most members of the community submit spam, and then the hunters kick in.
For step three, spammer
hunters can either use the tools and techniques mentioned above, or they
use social engineering techniques to get the actual web address, email
address, phone number, and perhaps postal address of the spammer. In some
cases, technology can be used to track down the culprit. In other cases,
the spammer hunters simply respond to the spam request. Behind every
piece of spam there is a person that wants to make money. That means
that they must have a way to be reached. They can't hide too much or they
can't collect your money.
Step four is the fun step. Once the spammer hunters find the legitimate contact information for the spammer, the community unleashes on them: denial of service attacks, repeated phone calls, tons of postal mail, and more. To be honest, I really like the idea of hitting a spam web site so hard that it crashes. I also like the idea of filling up a spammer's email inbox.
I admit that this plan needs refinement and I already see some flaws. For example, spammers can change their contact information very quickly. Also, there might be occasional false positives where innocent people get attacked by this spam-killing community.
The idea is out there now for everyone to see. I'm sure we can refine it and make this work. Yes folks, I think we can eliminate spammers. If we eliminate spammers, we eliminate spam. That's the trick and that is what we need to do.
I want to know what you think about this article. Please send your comments to me: email@example.com
|
OPCFW_CODE
|
Our Flash powered slots unfortunately do not work on mobile devices. However, below we have a selection of our exclusive HTML5 powered mobile friendly slots which you can play on phones and tablets. We're currently hard at work converting all our games into mobile friendly slots games, and you can see our mobile slots page for all the latest!
Our Valentines Slot game is a 5 reel game with 9 paylines & a floating hearts free spins bonus feature.
Overview: Sponsored by Paddy Power Casino our free Snakes and Ladders Slots Machine is a fantastic 5 reels, 20 payline slot.
This Flash powered slot machine is a true video slot, with awesome animated graphics, exotic music, and advanced features such as an auto spins feature.
There are 2 bonus games in this slot. There's a pick-a-basket game, where you can win the progressive jackpot and a real cash prize, along with a snakes and ladders type bonus game. See below for more details.
Look out for the free spins feature introduced by the coiled snake. There are triple points to be won during the free spin period!
Set against a colorful jungle background, the 5 reels feature fun symbols with Dice, Snakes, Ladders, Wild, Snake basket, and more. Get 3 or more symbols on a winning payline to activate a win, with the exception of the special scatter symbols. See the symbols guide below for full details.
Free Spins feature: Like all the best slots, our Snakes and Ladders slot machine has a free spins feature. Get three of the coiled snake symbols anywhere on the reels to activate Free Spins as follows:
Note: The progressive jackpot can be won during the Snake Charmers bonus game. See below for more details. *After winning the in-game jackpot, a random number generator will determine if you also win the $20 cash prize. Check the Progressive Jackpots page for full details and to see if you have won.
Snake Charmers Bonus Game:
Snakes Charmers Scatter Symbol: Get three of the snake charmers basket symbols anywhere on the reels to activate the Snake Charmer Bonus Game. This is the only place in the game you can win the progressive jackpot and a $20 real money prize!
This is a classic 'pick me' style bonus game. You get to pick a snake basket to reveal a bonus prize. During this game, hypnotic snake charmer music plays!
Snakes and Ladders Slot Bonus Game:
Snakes and Ladders Logo Scatter Symbol: Get three of the Snakes and Ladders logo symbols anywhere on the reels to activate the Snakes and Ladders Bonus Game. During this game, you must roll the dice to move along a classic Snakes and Ladders board.
The first bonus prize amount square you land on determines your prize. Make it to the top right square and you'll get a cool x100 bonus!
Snakes and Ladders Slots Machine Paytables
Snakes and Ladders Slots Machine Symbols Guide
Wild Symbol: The wild symbol will substitute for other symbols to make winning combinations. All wild spins are doubled.
Scatter Symbol: Get 3 Snakes and Ladders logos anywhere to trigger the Snakes and Ladders bonus game.
Scatter Bonus Symbol: Hit 3 snake charmer baskets anywhere to activate the snake charmer bonus feature.
Scatter Free Spins Bonus: Hit 3 of the coiled snake symbols anywhere on the reels to activate Free Spins as follows: 5x Coiled Snakes = 100 Free spins, 4x Coiled Snakes = 25 Free spins, 3x Coiled Snakes = 15 Free spins.
Highest paying Symbol: Hit 5 dice to win 500 credits.
Snakes Fun Fact: The Snakes used in snake charming respond to the movement of snake charmers, not the music.
What do snakes do when they lose money on slots?
They throw a hissy fit.
|
OPCFW_CODE
|
On bad code
There are plenty of software projects in the wild with low quality codebase and if you are a software developer you have written some and there is a good chance you are maintaining one.
Some of the reasons projects end up with low quality code are:
- Skilled Developers knowingly write low quality code
- Unskilled developers write low quality code
I have talked about some of the other reasons here.
Skilled Developers knowingly writing low quality code
Uncle Bob says: “No matter who. No matter what. No matter when. Short term. Long term. Any term. Writing good code is ALWAYS faster than writing bad code.” I agree with him, except for the “ALWAYS” part. In software there is no always; there is no never.
Feasibility study is (usually) undertaken by businesses before any project starts. As part of this practice the feasibility of the project is assessed against a variety of criteria and the result of this study tells the business whether they should go ahead with the project or not. Sometimes to pass the feasibility assessment some sacrifices have to be made.
Sometimes constraints, usually tight deadline and/or budget, drive you down undesirable paths. The following scenarios/exceptions are usually the reasons skilled developers knowingly write low quality software:
- Time to market: There is an opportunity you have to seize quickly or it will be lost. Sometimes it is because the opportunity is there for a very limited time and sometimes because your competitors may saturate the market if you are not fast enough. This is where you are forced to write bad code to pass Schedule Feasibility.
- Low budget: You have an idea that could get you a good customer base and some revenue but do not have enough financial backup to do a high quality job. You implement the software in a quick and dirty way so you can release it to market, generate some revenue and then later use some of that revenue to improve your codebase. If you do not do this you fail Financial Feasibility which if ignored results in project failure.
- Usefulness/value uncertainty: You have a new idea and do not know if it is going to be useful or not. You have to put out your idea to assess its usefulness and receive some feedback and improve on it through Observed Requirements if it succeeds. It is as if you are releasing a working prototype into the wild to assess its Operational Feasibility!
What I would like to highlight about the above exceptions is that it makes sense to write bad code in these cases only if the project is very small. For any non-trivial project I am with Uncle Bob, even in the face of the above constraints.
Bad code usually takes longer to write for any non-trivial task, and for a rather big task you are better off writing quality code from the start. In other words, for a non-trivial project sacrificing quality not only does not save you time or money but it also costs you more.
Why is it ok to write bad code, and why knowingly?
You should know you are writing bad code and you should be ready to pay the cost of writing bad code. The cost is the interest you pay on your technical debt: “doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice.”
Writing bad code, if it costs less and helps gain a big benefit as explained above, is IMO worth it. It is like getting a loan to buy a house: sure you are going to pay interest on your mortgage; but perhaps if you do not get the loan and buy the house in the buyers’ market, the house would cost you much more later. So yes, you lose money by getting a loan, but you make up for that loss by capital gain and make some money. “The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline.”
As mentioned above, though, you should know that you are writing bad code and you should try to improve it (when it makes sense). Unlike most financial debts that you have to pay as you go, technical debt payment can and unfortunately in a lot of cases is deferred, and this is a big problem: “The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments.”
Beware: there is a trap
How many times have you been asked to do something in much shorter time than your estimation? Usually the excuse is that there is not much time and we should get it out of the door ASAP. In my experience a lot of such rushes are not based on real business needs; but rather an irrational desire to have the product faster.
How many times have you put a quick and dirty application together as a PoC and you have maintained it in production for years!?
Beware of these traps. You should be ready to write bad code when it makes sense; but do not be afraid to say No when the constraints are not real.
There are usually alternatives
Dealing with tight deadline/budget does not necessarily mean we should write bad code. It is usually a better idea to write fewer/smaller high quality features. In some cases you could negotiate the project scope and/or feature scope with the product owner and get some discount on the amount of code required to do the task. This way you will release smaller set of functionality; but the result has a higher quality.
Unskilled developers writing low quality code
Each software team should have at least one skilled developer. Note that I am deliberately avoiding the term senior developer; because seniority in software industry is usually judged by the number of years a programmer has been programming which in a lot of cases is no indicator of being a good programmer. Some programmers with decades of programming experience write the worst code. Also it is worth noting that skilled programmer does not equal certified programmer. Many great programmers in our industry do not have any (related) academic qualification or certificate.
The skilled programmer should try to improve the overall codebase quality by mentoring the team! In a team of developers you cannot improve the quality of the codebase by yourself. If you want good results you need teamwork; so instead of singlehandedly trying to improve everything, you should mentor and help the team to become better, which over time results in an overall codebase improvement.
If you know the code you have produced is bad, you have taken the first, and a hard, step toward becoming a better programmer: admitting it. That is a step a lot of programmers never take.
Writing bad code because you can always rewrite!!
Rob Conery says something along the lines of: it is ok to write bad code; you can always rewrite it! I think this conveys the wrong message. I would not mind rewriting a very small application, but only if it takes a week or two to do so and I do not have anything better to do in that time. Rob talks about rewriting a small website and redoing a podcast. Well, yeah, that I think would be OK - you can perhaps do that in your spare time!
And then in the comments Rob agrees with the following comment:
“My mate always used to say you have to write three version of an app to get something decent.
The first version is tight from the users perspective, but is a constantly changing monstrosity used to test your ideas in the face of the enemy (aka the users).
If the project has merit, you create the second version, a complete re-write it to encapsulate the ideal of the product, removing the fluff and delivering your pure, simple creation to the world.
The next re-write starts when your customer loves it so much they want to add features, lots of features, flowery crazy features, and you realise that your ideal of the ‘project’ has to mutate freely at the whims of insane people, but still remain reliable. Version 3 through 23 are still version 3, assuming you got version 3 right”
Most .Net programmers work as part of a team on non-trivial projects. On these projects rewrite is almost always a very big mistake (I should write a post about this).
It is sometimes ok to rewrite
In some cases rewrite makes a lot of sense. For example the application that drives the business may have been written many years ago with a now-obsolete technology. There are not many developers who can maintain the application and adding new features is a painful process for everyone involved. Maybe the old programming language does not support many of the features the business requires now. In this case it may make sense to rewrite this application to open new doors; but even then there are still a lot of risks involved in the rewrite.
You may also rewrite small parts of the application. For example the project starts small and because of some now-not-very-good-decisions made at the time your infrastructure does not support some of the non-functional requirements. You may decide to rewrite some parts of infrastructure (perhaps with backward compatibility in mind to minimize the rippling effect) to allow for business growth.
There are other cases where rewrite makes sense; but they are more of an exception. Rob talks about his good experience with rewrite and generalizes it into a wrong message: “Rewrites Happen. Don’t Limit Yourself”. That message is read by many developers who work on projects where a rewrite could result in a catastrophe.
So how to deal with bad codebase
The short answer is that you should pay down your technical debt one step at a time.
If you have a bad codebase, try to improve it. Whenever you touch some part of the code, add some tests, refactor it, and make it better, or as The Boy Scout Rule says, “Always leave the campground cleaner than you found it.”
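As a toy illustration of the Boy Scout Rule in practice (names and the bug here are invented for the example, not taken from any real codebase): you come to fix one bug, and leave the function slightly cleaner and tested on the way out.

```python
# Before: the unclear version you came to fix
def calc(l):
    t = 0
    for i in l:
        t = t + i
    return t / len(l)   # bug: crashes with ZeroDivisionError on an empty list

# After: bug fixed, intent named, and a couple of regression checks added
def mean(values):
    """Average of `values`; 0.0 for an empty sequence."""
    if not values:
        return 0.0
    return sum(values) / len(values)

assert mean([]) == 0.0
assert mean([2, 4, 6]) == 4.0
```

The point is not the function itself but the habit: each touch pays down a little technical debt instead of adding to it.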
Start paying your debt a bug at a time and you will find your codebase in a far better position in a year.
You should try your best to write good code; but it is sometimes inevitable to write bad code. But:
- Do not write bad code because you are asked to.
- Do not write bad code because you are going to rewrite it.
- Do not write bad code because it sounds like a quick and easy answer, because it is not.
Before you write bad code, consider and discuss the consequences and alternatives and try to avoid it if possible. Do a cost/value analysis for bad vs good code and write the one that makes more sense to your situation.
If you decide to write bad code, know the consequences and prepare yourself to deal with them. Try to improve your codebase when it hurts and when you get a chance. Do not leave it there thinking you may be given a chance to rewrite it. A bad codebase causes headache and embarrassment for you more than anyone else.
Stop thinking about rewrite. Software rewrite could turn into your worst nightmare. Teams and businesses who start a rewrite usually have no idea what they are getting themselves into.
|
OPCFW_CODE
|
import numpy as np
import tensorflow as tf
from tqdm import tqdm

from MetReg.data.data_loader import Data_loader
from MetReg.train.loss import D_loss, G_loss
from tensorflow.keras import Model, Sequential, layers
from tensorflow.keras.optimizers import Adam

tf.compat.v1.set_random_seed(1)


class GANConvLSTMRegressor(Model):

    def __init__(self):
        super().__init__()
        # Generator: ConvLSTM feature extractor followed by a dense output layer.
        self.generator = Sequential(layers=[
            layers.ConvLSTM2D(filters=16,
                              kernel_size=(3, 3),
                              padding='same',
                              activation='relu'),
            layers.Dense(1)
        ])
        # Discriminator: a simple linear classifier on the flattened input.
        self.discriminator = Sequential(layers=[
            layers.Flatten(),
            layers.Dense(1)
        ])

    def train_step(self, X, y, generator, discriminator, optimizer):
        with tf.GradientTape() as gen_tape, \
                tf.GradientTape() as disc_tape:
            # forward
            G_pred = generator(X, training=True)
            D_truth = discriminator(y, training=True)
            D_pred = discriminator(G_pred, training=True)

            # loss
            D_loss_ = D_loss(D_truth, D_pred)
            G_loss_ = G_loss(y, G_pred, D_pred)

        # gradients (each tape tracks its own network's variables)
        D_gradient = disc_tape.gradient(
            D_loss_, discriminator.trainable_variables)
        G_gradient = gen_tape.gradient(
            G_loss_, generator.trainable_variables)

        # gradient descent
        optimizer.apply_gradients(zip(D_gradient,
                                      discriminator.trainable_variables))
        optimizer.apply_gradients(zip(G_gradient,
                                      generator.trainable_variables))

    def fit(self, X, y, epochs, batch_size, optimizer=Adam()):
        train_ds = Data_loader(X, y, epochs=epochs,
                               batch_size=batch_size).get_tf_dataset()
        for epoch in tqdm(range(epochs)):
            for x, y_batch in train_ds:
                self.train_step(x, y_batch, self.generator,
                                self.discriminator, optimizer)
        return self
|
STACK_EDU
|
In this tutorial we will see how to simply use the joystick from the accessory pack with CircuitPython.
This tutorial is made with Mu IDE for CircuitPython.
If you haven’t installed those yet please refer to the installation guide on our website.
Begin by opening Mu and creating a new file
Copy the following code :
from gamebuino_meta import begin, waitForUpdate, display, buttons, color
import board
from analogio import AnalogIn

while True:
    waitForUpdate()
    display.clear()
This will import the Gamebuino libraries needed for the program to work, as well as the "board" and "AnalogIn" libraries we will use to handle the joystick.
The part beginning with "while True:" is our main loop; without it nothing would happen at all.
To be able to use the joystick we will need to read the values sent by the two pins corresponding to the X and Y axis. Since those are analog pins we will use the “AnalogIn” function to communicate with the stick.
For that purpose we define the pins attached to our two axes as follows:
x_pin = AnalogIn(board.A1)
y_pin = AnalogIn(board.A2)
We assign the A1 pin to the X axis and the A2 pin to the Y axis.
We can now proceed with the reading itself.
For that we use a small function called “read_input()” such as :
def read_input(pin):
    return pin.value
Calling this function and specifying which pin we want to read from will give us a value between 0 and 65535 (a 16-bit range).
To monitor the behavior of our setup we will display those values on screen.
Nothing complicated here, we just add a few calls to the print function in our main loop:
while True:
    waitForUpdate()
    display.clear()
    display.print("X = ")
    display.print(read_input(x_pin))
    display.print("\n")
    display.print("Y = ")
    display.print(read_input(y_pin))
A bit convoluted, but since we don't have access to a proper printf function we can't do complex formatting, so we make do.
We will now create a specific function that will transform those values into a visual representation of our joystick.
This function will be called "draw_interface" and is declared as such:
def draw_interface():
    pass
It doesn't do anything yet but we can already add a call to it in our main loop:
while True:
    waitForUpdate()
    display.clear()
    display.print("X = ")
    display.print(read_input(x_pin))
    display.print("\n")
    display.print("Y = ")
    display.print(read_input(y_pin))
    draw_interface()
We will represent the movement of our joystick in the form of a point moving inside a circle. So we will display those two elements :
def draw_interface():
    display.drawCircle(display.width()//2, display.height()//2, 20)
    dot_x = (read_input(x_pin) - 32768) * 20 // 65536
    dot_y = (read_input(y_pin) - 32768) * 20 // 65536
    display.setColor(color.RED)
    display.fillCircle(dot_x + display.width()//2, dot_y + display.height()//2, 5)
We start by drawing a circle with the function “drawCircle”, we place it at the center of the screen with a radius of 20 units.
We then calculate the coordinates of the point to display by mapping the values read from our two axes X and Y onto this 20-unit circle.
Then we change the display color to red before using "fillCircle" to draw a red dot of 5 units radius that moves according to the movement of our two axes.
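The centering-and-scaling arithmetic can be checked in plain Python. This is just an illustrative sketch: scale_axis is a hypothetical helper mirroring the tutorial's formula, with the raw ADC value passed in directly instead of read_input.

```python
def scale_axis(raw):
    # Map a 16-bit ADC reading (0..65535) to an offset around the
    # circle's centre, mirroring the tutorial's formula.
    return (raw - 32768) * 20 // 65536

print(scale_axis(0))      # stick pushed fully one way
print(scale_axis(32768))  # stick centred
print(scale_axis(65535))  # stick pushed fully the other way
```

A centred stick maps to 0, and the extremes map to roughly plus or minus the circle's radius divided by two, which keeps the dot inside the circle.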
You just went through another tutorial ! Now it’s your time to shine.
|
OPCFW_CODE
|
ngTouch breaks forms/input fields
We have a hybrid Angular app which is experiencing the dreaded 300ms delay, even though it should be working (since the correct meta tags have been set).
We tried getting around this issue by using the FastClick library and it worked for Safari on iOS and Firefox on Android. But in Chrome on Android the same issue persists.
We tried replacing FastClick by including the Angular ngTouch module and it solved most of the issues on Chrome.
Now for some reason though, all input fields no longer work. Touching them is being recognised because the CSS of the fields changes, but now we can't enter any data. Long pressing the input makes the cursor appear inside the input field, but the keyboard does not come up on mobile devices. The submit button of the form also no longer works.
This issue is not only visible on mobile devices, but even when using the Chrome Dev Tools to debug locally.
We are using version 1.3.15 of both Angular and Angular-touch.
Here is an example of the login form which we use:
<form name="login_form" class="form" shake submitted="submitted" ng-submit="login(login_form.$valid)" novalidate autocapitalize="none">
<div class="form-group">
<input type="text" name="name" class="form-control" ng-model="username" required />
<label placeholder="{{'site.userName' | translate}}"></label>
<div ng-show="(login_form.name.$invalid && login_form.name.$dirty) || (login_form.name.$pristine && submitted)" class="required">
<span>{{'login.usernameRequired' | translate}}</span>
</div>
<!--<hr class="form-group-separator">-->
<input type="password" name="password" class="form-control last-visible-form-control" ng-model="credentials.password" required />
<label class="responsive" placeholder="{{'site.password' | translate}}"></label>
<div ng-show="(login_form.password.$invalid && login_form.password.$dirty) || (login_form.password.$pristine && submitted)" class="required">
<span>{{'login.passwordRequired' | translate}}</span>
</div>
<ul>
<li>
<label>{{'site.stayLoggedIn' | translate}}</label>
<span class="value">
<span class="form-check"
ng-class="{'checked':credentials.save}"
ng-model="credentials.save"
ng-click="toggleStayLoggedIn()"></span>
</span>
</li>
</ul>
</div>
<div class="form-group">
<!--<button type="submit"></button>-->
<button type="submit" class="cta green">
{{'site.login' | translate}}
<span data-svg-icon icon="circle-right-arrow"></span>
</button>
</div>
</form>
By any chance, are the input fields in a modal? (element that is position fixed or absolute)
Yes, one of the parent containers is absolutely positioned. I tried removing it now, but the input fields still don't work. Also, there are other pages where clickable elements inside an absolutely positioned container work (not input fields though). Do you know of any issues with ngTouch and absolute/fixed positioning?
Not necessarily with ngTouch, but there are problems on touchscreens when you focus an input. The problem is that mobile browsers zoom in to show you the input, and this can cause it to be hidden or become transparent (this has happened to me on iOS). I've managed to work around this with some JS. However, I have used both FastClick and ngTouch and neither has caused me problems with inputs.
|
STACK_EXCHANGE
|
It's a very difficult situation when I must tell a new client that they have to spend a minimum of $1,500 just to get to the point they were at hours ago. That's for a seven-day turnaround. Next-day service starts at $12,000. Not new equipment, just a business' data recovery. Finally, that's assuming that the wizards at my favorite data recovery center can actually recover the data, which isn't guaranteed.
While the equipment that is used to access data is easily replaced, often the data itself is not. Furthermore, a business' reputation is often at stake. Needless to say, a critical part of your IT services is to plan for data loss. Let's look at the largest causes of data loss, along with the data backup philosophy that fits best with minimizing it:
Intentional and unintentional actions:
An end user accidentally deletes a file. Another forgets to save it in the first place. These are intentional and unintentional actions. Not malicious, but damaging all the same.
This is why applications such as Microsoft Office Suite, and now even Operating Systems, such as Apple's OS X Lion, offer automatic save technologies. Even with such technology, it is still easy to accidentally delete a file. Because of this, I am a fan of self-service local backups for end users. This way, they can recover from accidental deletions without needing to involve IT staff.
In the Midwest, our primary concerns are fire, flood, and tornadoes. Other places might add hurricanes, earthquakes, and volcanoes.
An offsite backup is required for this type of data loss. Many small businesses rely on the "take a backup home" method to provide protection. However, they are still taking a risk that their home will be affected by the same disaster as their office. With the onset of cloud backups, it is possible to obtain inexpensive offsite backup that is thousands of miles away, allowing the greatest protection against disaster.
Theft, viruses, unauthorized intrusions all fit within this category. It's also the sad state of our world that we must acknowledge possible terrorism, and how to respond.
Like disasters, offsite backups are key, but another variable is also important. Offline backups. It's important that there is at least one copy of the data that the intruder cannot modify. This is especially important with cloud-based backup services, because they are essentially online all the time. To account for this, most services keep multiple copies of all files, and make it difficult to delete files and impossible to modify them.
This is the largest category, covering hardware failures such as hard drive crashes, power failures and overloads, and data corruption. Cloud computing can also suffer a business failure, which is when the service you are utilizing closes its doors, or changes its licensing in a way that is incompatible with your business.
Hardware failures often are dealt with by building in redundancy. Since hard drives are both a high point of failure, and inexpensive, IT staff will often place RAID (Redundant Array of Inexpensive Disks) arrays on servers, where downtime often means thousands of dollars of lost work. Virtualization, which I will cover in a later article, allows redundancy in actual computers, even further reducing the possibility of data loss, and even reducing the possibility of downtime.
Power failures and overloads are often accounted for with UPSs (uninterruptible power supplies), which perform two functions. First, they keep your equipment running during short power events. The other function they perform is that they notify the computer equipment of impending power failure so that the equipment can power down gracefully, reducing the possibility of data corruption.
Finally, the advent of cloud computing has amplified the ability for business failures to affect a company's data. Don't think of this as just a cloud business failing, but rather anything that could go wrong with an external service. Therefore, it is important to include the cloud service in your backup plans, or provide local redundancy if the cloud is the backup plan.
In a future article, we'll further discuss the strategies used, and how to balance them with the costs involved.
- Jon Thompson
|
OPCFW_CODE
|
Welcome to the book's home page!
Please let me know if you have any suggestions about what should be here (or better still, contributions). Information on the book is available from Addison-Wesley (now part of Pearson Education).
Material at this site is variously copyrighted: ©Addison Wesley Longman Limited 1999 ©Perdita Stevens 1999-2006 ©Rob Pooley 1999. Copying details
|Title:||Using UML: Software Engineering with Objects and Components|
|Authors:||Perdita Stevens with Rob Pooley|
|Publisher:||Addison-Wesley (Object Technology Series)|
|Price:||around £27.95 in the UK (variable, shop around!)|
This chapter describes a simple case study. It introduces the main features of UML in a necessarily sketchy way; Part II of the book covers each diagram type in detail, but since the diagrams are designed to be used together in an iterative development it is important to have an overview of what UML is before going into details.
This chapter introduces class diagrams, which represent the static structure of the system to be built. We discuss how classes and their associations can be identified, and the concept of multiplicity of an association. We show how a class's attributes and operations are shown. Next we cover generalisation, which can be implemented for example by inheritance. Finally we discuss the role of the class model through the development, and illustrate the use of CRC cards for validating a class model.
This chapter considers some advanced features of UML class models. We describe some ways of giving extra information about the association between classes, considering aggregation and composition, roles, navigability, qualified associations, derived associations and association classes. We also cover constraints, which are a general feature of UML that can be used for a wide variety of purposes; here we discuss their use for giving class invariants and for recording the relationships between various associations. Next we consider interfaces, which, again, can be applied more generally. We mention abstract classes, and so-called parameterised classes, which are not really classes at all, but rather are functions which take one or more classes as arguments and return classes as the result. Finally we consider dependency and visibility.
This chapter introduces simple use case models and shows how they are used to specify the behaviour of a system in a design-independent way. We discuss how to identify actors, use cases and the communication relationships between them, and how to use the use case model in the context of a development project.
We discuss how to use interaction diagrams -- collaboration and sequence diagrams -- to describe how objects interact to achieve some piece of behaviour. Collaboration diagrams are better at showing the links between the objects; sequence diagrams are better for seeing the sequence of messages that passes. Interaction diagrams can be used to show how a system realises a use case, or for various other purposes including showing how a class realises an operation or how a complex component is to be used.
We describe how to use state diagrams to model the way that significant changes in an object's attributes affect the way it reacts to events, such as messages. We emphasise that, although if an object has such complex behaviour it can be useful to show it in a state diagram, it is better to avoid designing classes with complex state-based behaviour where possible. We also discuss activity diagrams, which model the dependencies between activities, such as the operations involved in the realisation of a use case, or the use cases of a system.
We consider the two kinds of UML implementation diagrams, component diagrams and deployment diagrams, and how to use them. Component diagrams express the structure of the implemented system, helping to keep track of dependencies to ease maintenance, and to record the reuse of components. Deployment diagrams show how the system is deployed on a particular hardware configuration.
We discuss reuse, covering what can be reused and the practical problems in getting a reuse programme established. As a special case, we consider patterns, which can be seen as the reuse of expertise. Many of the difficulties in achieving a high level of reuse, especially by building components in-house, stem from the need for reused artefacts to have high quality and for the organisation to support the work needed to build and to use them.
In this final chapter we consider the contribution made to the quality of systems by the organisations which build them and the quality assurance processes of these organisations. We discuss management, leadership, teams, organisations and quality assurance.
The main changes between UML1.1 and UML1.3/4 are to the relations between use cases (Chapter 8). This is not surprising: as we discussed in the original Chapter 8, the original scheme (uses and extends relations between use cases) proved confusing, particularly because these relations were counterintuitively described as generalisation relationships. These relationships have been replaced by a "genuine" generalisation relationship, plus two stereotyped dependencies corresponding to the old stereotyped generalisations.
Warning: Booch, Jacobson and Rumbaugh's UML User Guide, published in autumn 1998, claimed to be up to date with respect to UML 1.3, but this is naturally not true, since UML1.3 was not finalised until June 1999! What happened was that the numbering of the releases was changed at the last minute; the Amigos expected that 1.3 would be released in autumn 1998 and that what we now know as 1.3 would be called 1.4.
The next version of UML will be UML2.0, which is scheduled to be finalised in 2002. The current plan is to produce a Second Edition of Using UML describing UML2.0. My best guess is that UML2.0 will not involve large changes to UML as presented in Using UML, although it will involve significant extensions (some of which may be discussed in the Second Edition) and rearchitecting of the language definition.
|
OPCFW_CODE
|
How can I click the dropdown list options?
I have a button that when I click it I will get a sort of a dropdown list. My problem is that I want to click one of the options in this dropdown list but I don't see how to refer to it.
I have tried to act as if this was a list box and I used the "Select" module but I failed with exceptions. My purpose is to be able to refer to any of the options in this dropdown list. Could it be that the HTML code is missing a unique href value ?
<input name="Port 19" value="Uplink" class="ExtendedButton" onclick="SelectFrame('Uplink-200')" id="Port-19" style="width: 84px; display: inline;" type="button">
<script>writeUplinkDropDown()</script>
<div class="dropdown-content">
<a href="#" onclick="SelectFrame("Uplink-200")">200G</a>
<a href="#" onclick="SelectFrame("Port-19")">100G #1</a>
<a href="#" onclick="SelectFrame("Port-20")">100G #2</a>
</div>
Share your code and exception
@Monshe: Can you provide the full page source, so that we can check the xpath hierarchy for the drop down items?
I get this problem in some website, you cant click item in dropdown list cause its hide, first have to click in "some element" that opens the dropdown list. Post website or section of code
First thing you will need to do is open the dropdown menu. Once the menu is open you can click on any of those options you posted by using any of the following selector examples:
driver.find_elements_by_css_selector('a[onclick="SelectFrame(\"Uplink-200\")"]')
driver.find_elements_by_css_selector('a[onclick="SelectFrame(\"Port-19\")"]')
driver.find_elements_by_css_selector('a[onclick="SelectFrame(\"Port-20\")"]')
Thanks. What I was missing was the sequence of actions. For some reason I managed to locate the elements by their XPath and CSS, but when I tried to use their "click()" method I got an ElementNotInteractableException. My only worry now is that I cannot predict the menu's behavior and it can close before I am able to click the options inside it. What do you think? Will a try/catch with several retries do the trick?
Why can't you predict the menu's behavior? What code are you using to open it? What might it do in between opening it and finding/clicking a link?
I don't know why. But when I maximize the window the menu doesn't persist for long. I am not so sure that it will act differently when the browser is not maximized (most of the time it does). I am clicking "Port-19" and then the menu appears. Because this menu sits over a panel that we refresh from time to time, there is a chance that the refresh will make it disappear.
That sounds like a usability issue. I'd recommend you figure out what is causing the menu to disappear and make it stop, rather than codifying it into your tests.
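If retries are nevertheless needed as a stopgap, the try/catch-with-retries idea mentioned above could be sketched with a generic helper. This is illustrative code, not from the thread; a fake flaky action stands in for the real Selenium click.

```python
import time

def with_retries(action, attempts=3, delay=0.5, exceptions=(Exception,)):
    """Call action() up to `attempts` times, pausing between failures."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except exceptions:
            if attempt == attempts:
                raise          # out of attempts: re-raise the last error
            time.sleep(delay)

# Fake action that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("menu closed")
    return "clicked"

print(with_retries(flaky, attempts=5, delay=0))  # prints "clicked"
```

In real test code the action would be a lambda that re-finds and clicks the menu option, and `exceptions` would be narrowed to the specific Selenium exception (e.g. ElementNotInteractableException) rather than bare Exception.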
Selenium's Select class only works with native <select> elements. Because you have implemented a dropdown with custom HTML, you won't be able to use it.
Instead, in order to select one of the options in your custom dropdown, you'll need to perform each of the actions that a real user would:
clicking the button that opens the dropdown, then
clicking the link for the desired option.
Note: When searching for elements on a page, always try to use the same criteria that a real user would. A real user would look for a link with some meaningful text, e.g., "200G"; they would not go scouring the source code looking for a particular onclick attribute. (What's more, the onclick attribute is a part of the implementation of the page, not the interface, and shouldn't be relied upon as such, as it could change at any time.)
1. Using Selenium
Selenium doesn't provide an explicit method for finding buttons, but you can use CSS or XPath to do that:
driver.find_element_by_css_selector("input[type='button'][value='Uplink']")
driver.find_element_by_xpath("//input[@type = 'button'][@value = 'Uplink']")
To find links, Selenium conveniently provides find_element_by_link_text():
driver.find_element_by_link_text("200G").click()
driver.find_element_by_link_text("100G #1").click()
driver.find_element_by_link_text("100G #2").click()
2. Using Capybara (which uses Selenium)
Bare Selenium can be fickle. The link may not yet be in the DOM. Or it may not yet be visible.
capybara-py addresses these problems transparently:
page.click_button("Uplink")
page.click_link("200G")
page.click_link("100G #1")
page.click_link("100G #2")
Thanks for the answer. I can locate the links by their text but it can only be accomplished after I clicked "Port-19"
|
STACK_EXCHANGE
|
Neurobiology of loss and recovery of metacognitive function during REM sleep
Unlike cognitive processes during wakefulness, REM sleep is not generally accessible to explicit monitoring of thought. Indeed, during REM sleep we are typically unaware that we are asleep, unaware that our body is lying in bed and, even when dreams occur, oblivious to the fact that we are dreaming. Interestingly, research has documented a rare but physiologically validated state of REM sleep (so-called “lucid” REM sleep), in which it is possible to regain metacognitive function while continuing to remain in physiologically defined REM sleep. Individuals in this state become explicitly aware that they are asleep and cognizant of the fact that they are dreaming. This state can be objectively verified on a sleep polysomnogram from participants in a sleep laboratory by asking them to execute volitional eye-movements, as shown in the figure above. As discussed in a recent review article (Baird et al., 2019), this unique state presents an opportunity to examine the neurophysiological and neurochemical changes associated with global loss and recovery of monitoring function in the human brain.
In a recent innovative line of research, we have undertaken some of the first cognitive neuroscience studies on this state. In one study, we examined the relationship between the functional connectivity of frontopolar cortex and the frequency of metacognitive monitoring during REM sleep using functional connectivity MRI coupled with graph theoretic analysis (Baird et al., 2018). Frequent REM sleep metacognition was found to be associated with increased connectivity between the lateral frontal pole and bilateral angular and middle temporal gyri, a network that is typically deactivated during REM sleep. In another line of research, we have tested whether metacognitive function during REM sleep can be induced pharmacologically. In a recent double blind, placebo-controlled study, we found that cholinergic enhancement with acetylcholinesterease inhibition (AChEI) substantially increased metacognitive monitoring in REM sleep in a dose-related manner (LaBerge, LaMarca & Baird, 2018). These findings provide novel insights into the neurobiological mechanisms underlying the human brain’s capacity for monitoring and control of thought.
Baird, B., Mota-Rolim, S., & Dresler, M. (2019). The cognitive neuroscience of lucid dreaming. Neuroscience and Biobehavioral Reviews, 100, 305-323.
Baird, B., Castelnovo, A., Gosseries, O., Tononi, G. (2018). Frequent lucid dreaming associated with increased functional connectivity between frontopolar cortex and temporoparietal association areas. Scientific Reports, 8(1), 17798.
LaBerge, S.*, Baird, B.*, Zimbardo, P. G. (2018). Smooth tracking of visual targets distinguishes lucid REM sleep dreaming and perception from imagination. Nature Communications, 9, 3298.
LaBerge, S., LaMarca, K., Baird, B. (2018). Pre-sleep treatment with galantamine stimulates lucid dreaming: A double-blind, placebo-controlled, crossover study. PLoS ONE, 13(8), e0201246.
|
OPCFW_CODE
|
Serving Up Java Pages
Now is the time to start your e-business solution and raise your Internet profile. Java has all the necessary tools, and it has proven itself as the premier Web development language. In order to implement your Java e-business solution, you'll be required to take advantage of all the Java technologies available.
Some of the business-type processes will require you to implement JavaBeans and Servlets, which can be combined with a new technology, finalized by Sun in May of this year, called Java Server Pages (JSP). You can use JSP to incorporate many of the Java technologies needed to raise your Internet profile, as well as take advantage of the AS/400 and the features it can provide.
JSP is an amalgam of HTML and the Java language, a concept very similar to Microsoft's implementation of Active Server Pages (ASP). ASPs use a Visual Basic-like script instead of Java, which also takes advantage of HTML tags.
These components are combined in a text file with the extension .jsp. When invoked, they're interpreted at run-time by the JVM (Java Virtual Machine) and the results are provided to the browser. Some uses can include presentation of a recently submitted order form processed by a Servlet, or it can use another Java application (such as a JavaBean) to send E-mail.
JSPs are based on the recently finalized Sun JSP 1.0 specification. IBM's implementation on the AS/400 with the IBM HTTP Server is based on the 0.92 specifications.
When the browser requests a page or submits a form, the Web server receives the information and identifies it as a JSP by checking the configuration file. Once it's been identified as a Java process, the Web server passes it to the Servlet Manager, which passes it to the JSP Processor to retrieve the source for the JSP and check to see if it's been executed before. The JSP processor will pass the source code to be compiled. It timestamps the bytecode and saves it on the AS/400 so it can be reused in future requests.
The JSP Processor only compiles the source code if it's been changed, or if it's never before been compiled. After receiving the bytecode, the JSP Processor returns it to the Servlet Manager to be passed to the JVM for execution. Once the JVM has processed the bytecode, it returns the results, which contain HTML tags, to the Servlet Manager. The Servlet Manager passes the results to the Web Server, which passes them along to the browser. Finally, the browser deciphers the HTML tags and presents the page to the user.
JSPs have the ability to include JavaBeans and Enterprise JavaBeans in the source code. JavaBeans can be incorporated to allow for the reuse of Java code in the way of reusable components. Enterprise JavaBeans can also be used to extend transaction-based processing in the form of reusable components. A tool like WebSphere Studio can aid in the development of JavaBeans.
In order to use a JavaBean in a JSP, you must first identify the bean using JSP tags. Once the bean's properties are set, it's executed by invoking a method. A property, in the example of an e-mail application, could be the e-mail address of the receiving party. A method in this example could be the sending of the e-mail.
With the use of the WebSphere Application Server, you can incorporate your back-end Java Servlets into your JSP. Servlets are a way to write business logic using Java and deploy it via thin clients. WebSphere accesses the AS/400 using connectors that can read the AS/400's databases, legacy applications and mail and groupware applications. Similarly, Servlets are identified to the JSP using a tag. Once identified, the Servlet can be accessed and executed anywhere throughout the JSP.
Visual Age for Java 2.1 Enterprise Edition allows for interactive debugging of JSPs. During the execution of the JSP, you can view the current line of execution, the Java source that has been dynamically created, as well as the HTML being generated. With this debugging tool, you can insert breakpoints to stop the processing and view the results generated. This tool provides an efficient way to control the output, as well as the end-results generated by the JSP.
There are a few requirements to use JSPs on your AS/400. The IBM HTTP Server must be installed and configured on your AS/400. In order to take advantage of JavaBeans and Servlets, you'll be required to have the WebSphere Application Server installed, as well as the JDK (Java Development Kit) version 1.1.4. These are all available as non-chargeable licensed programs from IBM.
JSP is a quick and efficient method of incorporating Java applications into your e-business solution. With this type of Java implementation, it's possible to take advantage of both the Java language and HTML tags. HTML is a quick and easy way to manipulate text, while Java is an effective method of accessing your AS/400 and other Java processes. Together, they can be used to massage the results into an attractive and powerful Java-based site.
|
OPCFW_CODE
|
CTC: 28 LPA
Profile: Software Development Engineer
Mode: Off Campus
Round 1: Paper Coding
3 Coding questions
Given two linked lists each represents a number, write a function that returns a linked list that represents sum of two numbers represented by the given linked lists.
123 represented as 1->2->3
456 represented as 4->5->6
o/p: 579 represented as 5->7->9
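A minimal Python sketch of one way to do this, with a simple Node class assumed for illustration. For brevity it converts through integers; an actual interview answer would add digit-by-digit with a carry.

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def from_digits(digits):
    # Build a list with the most-significant digit first (123 -> 1->2->3).
    head = None
    for d in reversed(digits):
        head = Node(d, head)
    return head

def add_lists(a, b):
    # Convert each list to an int, add, convert back to a list.
    n = (int("".join(map(str, to_list(a)))) +
         int("".join(map(str, to_list(b)))))
    return from_digits([int(c) for c in str(n)])

print(to_list(add_lists(from_digits([1, 2, 3]), from_digits([4, 5, 6]))))  # [5, 7, 9]
```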
Given an array find if sum of any two numbers in the array is zero
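One common linear-time sketch for this, using a set of values seen so far:

```python
def has_zero_sum_pair(arr):
    # A pair sums to zero iff some value's negation appeared earlier.
    seen = set()
    for x in arr:
        if -x in seen:
            return True
        seen.add(x)
    return False

print(has_zero_sum_pair([3, -1, 4, 1]))   # True  (-1 + 1 == 0)
print(has_zero_sum_pair([2, 5, 7]))       # False
```

Note this treats "two numbers" as two distinct elements, so a single 0 in the array does not count as a pair on its own.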
Next greater number with same number of digits
Very famous question
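The famous approach is the "next permutation" algorithm applied to the digit sequence; a sketch:

```python
def next_greater_same_digits(n):
    d = list(str(n))
    i = len(d) - 2
    while i >= 0 and d[i] >= d[i + 1]:   # find the rightmost ascent
        i -= 1
    if i < 0:
        return -1                        # digits fully descending: no answer
    j = len(d) - 1
    while d[j] <= d[i]:                  # smallest digit right of i larger than d[i]
        j -= 1
    d[i], d[j] = d[j], d[i]
    d[i + 1:] = reversed(d[i + 1:])      # minimize the suffix
    return int("".join(d))

print(next_greater_same_digits(218765))  # 251678
print(next_greater_same_digits(4321))    # -1
```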
Round 2: F2F
Given a string and an array of strings that represents a dictionary, count and print all the anagrams of the given string that exist in the given dictionary.
Asked me to write production-ready code. Remember, in Amazon interviews always write error-free code that can handle all the edge cases.
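A straightforward sketch of the anagram counting, using sorted characters as the comparison key:

```python
def count_anagrams(word, dictionary):
    # Two strings are anagrams iff their sorted characters are equal.
    key = sorted(word)
    matches = [w for w in dictionary if sorted(w) == key]
    for w in matches:
        print(w)
    return len(matches)

print(count_anagrams("act", ["cat", "dog", "tac", "god", "act"]))
```

For a large dictionary, sorting each word once and grouping by that key in a dict would avoid re-sorting on every query.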
Given a tree and address of a node in the tree, find all the nodes that are at k distance from the given node
wrote full code, some minor edge cases were not handled in first attempts so he asked me to find those and correct the code.
Even in the second attempt the code was not handling all the cases, so he asked me to do it again.
In the third attempt some cases were still missing.
In the fourth attempt he was satisfied; some errors were there, but he was satisfied.
So Never Give Up
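One standard way to sketch the k-distance problem above: record each node's parent, then BFS outward from the target treating the tree as an undirected graph. The small T node class is assumed for illustration.

```python
from collections import deque

class T:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def nodes_at_distance_k(root, target, k):
    # Build a parent map so we can walk upward as well as downward.
    parent = {root: None}
    stack = [root]
    while stack:
        n = stack.pop()
        for c in (n.left, n.right):
            if c:
                parent[c] = n
                stack.append(c)
    # BFS from the target over the undirected view of the tree.
    seen, frontier, out = {target}, deque([(target, 0)]), []
    while frontier:
        n, d = frontier.popleft()
        if d == k:
            out.append(n.val)
            continue
        for nb in (n.left, n.right, parent[n]):
            if nb and nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return out

#        1
#       / \
#      2   3
#     / \
#    4   5
root = T(1, T(2, T(4), T(5)), T(3))
print(sorted(nodes_at_distance_k(root, root.left, 2)))  # [3]
```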
Round 3: F2F
Given a linked list where each node has 2 pointer fields, one nextPtr for the next node and 1 randPtr that points to a random node, create a copy of the given list, but you cannot modify the given linked list.
Write code on paper that should handle all the cases.
He asked me to dry run the code.
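Since the original list must not be modified, the usual interleave-copies-into-the-list trick is out; a dictionary from original nodes to their copies works instead. A sketch (LNode is an assumed node class, and every randPtr is assumed to point at a node in the list or be None):

```python
class LNode:
    def __init__(self, val):
        self.val, self.next, self.rand = val, None, None

def clone_list(head):
    # Pass 1: create a copy of every node. Pass 2: wire the pointers.
    # O(n) extra space, and the original list is never touched.
    clone = {None: None}
    n = head
    while n:
        clone[n] = LNode(n.val)
        n = n.next
    n = head
    while n:
        clone[n].next = clone[n.next]
        clone[n].rand = clone[n.rand]
        n = n.next
    return clone[head]

# Build 1 -> 2 -> 3 with node1.rand = node3 and node3.rand = node1.
a, b, c = LNode(1), LNode(2), LNode(3)
a.next, b.next = b, c
a.rand, c.rand = c, a
copy = clone_list(a)
print(copy.val, copy.next.val, copy.rand.val, copy is not a)  # 1 2 3 True
```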
Reverse the given sentence word by word.
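A short Python sketch of the word-by-word reversal:

```python
def reverse_words(sentence):
    # Split on whitespace, reverse the word order, rejoin.
    return " ".join(reversed(sentence.split()))

print(reverse_words("never give up"))  # "up give never"
```

On paper, the classic in-place version reverses the whole character array and then reverses each word, which is worth knowing for languages with mutable strings.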
Given a BST, find the distance between two given nodes (by value); handle all the cases where a value is not present in the tree.
write the code and dry run it.
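A sketch exploiting the BST property: the lowest common ancestor of the two values is the first node whose value lies between them, and the distance is the sum of the steps from the LCA down to each value. The BNode class is assumed for illustration.

```python
class BNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bst_distance(root, a, b):
    def depth_from(node, v):
        # Steps from node down to value v, or -1 if v is absent.
        d = 0
        while node and node.val != v:
            node = node.left if v < node.val else node.right
            d += 1
        return d if node else -1

    # Walk down to the LCA: the first node with min(a,b) <= val <= max(a,b).
    node = root
    while node and not (min(a, b) <= node.val <= max(a, b)):
        node = node.left if a < node.val else node.right
    if node is None:
        return -1
    da, db = depth_from(node, a), depth_from(node, b)
    return -1 if -1 in (da, db) else da + db

bst_root = BNode(8, BNode(3, BNode(1), BNode(6)), BNode(10, None, BNode(14)))
print(bst_distance(bst_root, 1, 6))   # 2
print(bst_distance(bst_root, 6, 14))  # 4
print(bst_distance(bst_root, 1, 99))  # -1 (99 not in the tree)
```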
Round 4: F2F
Project discussion (45 min)
Tips: Prepare your project in such a way that when you explain it, the interviewer is able to understand it. First explain what the project is and how it works (just an overview). Only if he asks should you go into code or class diagrams.
He found my project interesting, so for around 40-50 minutes he discussed only the project, without a single line of code.
Given an array and an empty tree, insert the array elements into the tree in such a way that it becomes a BST.
Handle the cases where the number of nodes in the tree is more than the array size, and vice versa.
Asked me to write code on paper and dry run it.
Round 5: F2F
Project discussion(30 min)
Given a grid of zeros and ones, find the shortest path between two given cells, where you can travel over ones only.
This question took a very long time. I gave him an approach, he modified the question; I gave another approach, he modified the question again. He repeated this four times.
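The base form of the grid question (before the interviewer's modifications) is a breadth-first search over cells containing ones; a sketch (the function name and step-count convention are mine):

```python
from collections import deque

def shortest_path(grid, start, goal):
    # BFS over cells containing 1; returns number of steps, or -1 if unreachable.
    rows, cols = len(grid), len(grid[0])
    if grid[start[0]][start[1]] != 1 or grid[goal[0]][goal[1]] != 1:
        return -1
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 1 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

grid = [[1, 0, 1],
        [1, 1, 1],
        [0, 1, 1]]
print(shortest_path(grid, (0, 0), (2, 2)))  # 4
```

Typical interviewer modifications (diagonal moves, weighted cells, multiple goals) stay within the same BFS skeleton or upgrade it to Dijkstra.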
Given an expression like 3+5*6/4+0-4/3 evaluate it.
I told him it can be solved with a stack after converting the expression to postfix.
He asked me to write the code for it. I was not able to recall the exact algorithm, but I explained whatever I had figured out from 2-3 examples, and he was satisfied.
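The exact algorithm in question is the shunting-yard conversion to postfix followed by a stack evaluation; a sketch for single-digit operands and the four operators in the example (function names are mine):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(expr):
    # Shunting-yard: an operator waits on the stack until an operator of
    # lower precedence arrives (all four operators are left-associative).
    out, ops = [], []
    for tok in expr:
        if tok.isdigit():
            out.append(tok)
        else:
            while ops and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
    while ops:
        out.append(ops.pop())
    return out

def eval_postfix(tokens):
    stack = []
    for tok in tokens:
        if tok.isdigit():
            stack.append(float(tok))
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
    return stack[0]

print(eval_postfix(to_postfix("3+5*6/4+0-4/3")))  # prints approximately 9.1667
```

Handling multi-digit numbers and parentheses requires a tokenizer and a parenthesis rule in the shunting-yard loop, but the structure stays the same.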
Question 3: (worst question of the interview process)
He confused me a lot with the question; either I was not able to understand it well or he was not explaining it properly.
In the end, the question was: given a family relationship as input:
A B Father B C Brother E C mother
Design your own data structure to store this family relation and answer the query: print all the members who have only one grandparent and exactly 4 children.
|
OPCFW_CODE
|
LMER: Why do non-reference group individuals' random intercepts average to the reference group's intercept mean in LMER?
First time using LMER. Really appreciate any help!
For df to illustrate my questions, I alter ToothGrowth as follows:
data("ToothGrowth")
ToothGrowth$len <- sort(ToothGrowth$len)
day <- rep(c(1:10), times=6)
id <- rep(c("a","b","c","d","e","f"), each=10)
df <- data.frame(cbind(ToothGrowth$len, as.character(ToothGrowth$supp),
day, id))
colnames(df) <- c("Length", "Treatment", "Day", "ID")
df$Length <- as.numeric(df$Length)
df$Treatment <- factor(df$Treatment, levels = c("VC", "OJ"), order = FALSE)
df$Day <- as.numeric(df$Day)
df$ID <- factor(df$ID, levels = c("a", "b", "c", "d", "e", "f"), order = FALSE)
My LMER:
model1 <- lmer(Length ~ Day * Treatment + (1 | ID ), data = df,
REML = FALSE)
#to look at fixed effects
fixef <- unlist(fixef(model1))
#to look at random intercepts
random_intercepts <- print(coef(model1)$ID[,1])
My definitions:
fixef[1] is the reference group (OJ) intercept
fixef[2] is the reference group (OJ) slope?
fixef[3] is the difference between the OJ intercept and the VC intercept?
fixef[4] is the difference between the OJ slope and the VC slope
Question:
If my definitions are correct, then why do the VC random intercepts random_intercepts[1:3] average to the reference group (OJ) intercept? Should I add fixef[3] to random_intercepts[1:3] to get accurate random intercepts for non-reference-group individuals?
I've reviewed tons of LMER posts, but none asking such a basic question use exactly my model structure, and I'm not sure how their output should differ from mine in order to interpret their results and the answers they received.
The answer is in Section 1.1 of the lmer vignette.
The random effects are assumed to be "multivariate normal with mean zero"; with a simple random intercept as in your example, that's just a zero-mean Gaussian distribution with a variance to be estimated from the data. Equation 2 of the vignette shows (absent offset terms) that the fixed-effect coefficients are those in place when the random-effect values are at their mean values of 0.
The confusion comes from how the random intercepts are modeled. You can think of the random intercepts as hypothetical differences from the overall intercept if all cases received the OJ treatment: that is, after correction of those receiving the VC treatment for the fixed VC-OJ treatment difference. That type of hypothetical estimate is common in regression modeling. In your example, the intercept is the estimated outcome for the OJ treatment at Day=0, a value for Day that isn't in the data set.
You could "correct" the random intercepts of the cases with the VC treatment to represent their estimated intercepts given their VC treatment. But even then, you are estimating an intercept that's a hypothetical estimate at (the non-existent) Day=0 for those cases. I'd recommend just thinking about the random intercepts as hypothetical values for cases all corrected back to the same baseline conditions. Then use software tools to make predictions as needed for various hypothesized values of fixed predictors and random effects.
Thank you so much for your answer!! To clarify: your second paragraph makes me think you may have partially understood me to be asking about reference/non-reference group identity with regard to the random-effects term (ID). I meant Treatment, with df$Treatment=="OJ" being the reference group and df$Treatment=="VC" being the non-reference group. But if my fixed-effect coefficient definitions above are incorrect, what would be the correct definitions of fixef[1:4] in the above code? I really need to understand the LMER output specifically, if you're able to help!
@ghost_pants I might have misinterpreted your question, as you suggest. I will edit the answer in a while.
Thank you for the edits, I now fully understand and all my questions were answered!
|
STACK_EXCHANGE
|
package solver

import "github.com/truggeri/go-sudoku/cmd/go-sudoku/puzzle"

type solveTechnique struct {
	set   func(int, int) puzzle.Set
	index func(int, int) int
}

type solution struct {
	x, y   int
	square puzzle.Square
}

// Solve returns the given puzzle with all elements solved
func Solve(puz puzzle.Puzzle) puzzle.Puzzle {
	for {
		puz = puz.CalculatePossibilities()
		newSolution := false
		var answer solution
		techniques := [4]func(puzzle.Puzzle) (bool, solution){solveUniques, solveRows, solveColumns, solveCubes}
		for _, technique := range techniques {
			newSolution, answer = technique(puz)
			if newSolution {
				puz[answer.x][answer.y] = answer.square
				break
			}
		}
		if !newSolution {
			break
		}
	}
	return puz
}

func solveUniques(puz puzzle.Puzzle) (bool, solution) {
	for x := 0; x < puzzle.LineWidth; x++ {
		for y := 0; y < puzzle.LineWidth; y++ {
			if puz[x][y].Solved() {
				continue
			}
			onlyOne, value := onlyOnePossibility(puz[x][y].Possibilities)
			if onlyOne {
				return true, solution{x: x, y: y, square: puzzle.CreatePuzzleSquare(value)}
			}
		}
	}
	return false, solution{}
}

func onlyOnePossibility(poss puzzle.Possibilies) (bool, int) {
	counter, onlyOption := 0, 0
	for i, v := range poss {
		if v {
			onlyOption = i + 1
			counter++
		}
		if counter > 1 {
			return false, 0
		}
	}
	return true, onlyOption
}

func solveRows(puz puzzle.Puzzle) (bool, solution) {
	set := func(x, y int) puzzle.Set {
		return puz.GetRow(x)
	}
	index := func(x, y int) int {
		return y
	}
	return solveByElement(puz, solveTechnique{set, index})
}

func solveByElement(puz puzzle.Puzzle, st solveTechnique) (bool, solution) {
	for x := 0; x < puzzle.LineWidth; x++ {
		for y := 0; y < puzzle.LineWidth; y++ {
			if puz[x][y].Solved() {
				continue
			}
			updated, result := solveSet(st.set(x, y), st.index(x, y))
			if updated {
				return true, solution{x: x, y: y, square: result}
			}
		}
	}
	return false, solution{}
}

func solveSet(set puzzle.Set, i int) (bool, puzzle.Square) {
	if set[i].Solved() {
		return false, set[i]
	}
	setOptions := set.Possibilities(i)
	for j := range setOptions {
		if set[i].Possibilities[j] && !setOptions[j] {
			return true, puzzle.CreatePuzzleSquare(j + 1)
		}
	}
	return false, set[i]
}

func solveColumns(puz puzzle.Puzzle) (bool, solution) {
	set := func(x, y int) puzzle.Set {
		return puz.GetColumn(y)
	}
	index := func(x, y int) int {
		return x
	}
	return solveByElement(puz, solveTechnique{set, index})
}

func solveCubes(puz puzzle.Puzzle) (bool, solution) {
	set := func(x, y int) puzzle.Set {
		return puz.GetCube(x, y)
	}
	index := puzzle.PositionInCube
	return solveByElement(puz, solveTechnique{set, index})
}
|
STACK_EDU
|
I cannot endure it anymore. Gazebo team, could you please release your project more carefully?
With all due respect, if a powerful piece of software causes countless problems as a result of careless publishing and maintenance, it becomes totally useless, and this is Gazebo.
Could you please take a careful look at the tutorials and release instructions on your website? Almost none of them work!
waiting for namespaces..... when executing
Unable to locate package gazeboXX.... when installing
Come on, what is the point of releasing your project and instructions when users cannot get them to work?
Thank you for your attention.
Originally posted by puav on Gazebo Answers with karma: 3 on 2016-03-31
Post score: -5
Comment by Ben B on 2016-04-01:
-1. People here are happy to answer specific questions you have -- general complaints are really not constructive. If you have a question on how to do a particular thing: ask. If you have a feature request, make it. If you have a bug report, file it: https://bitbucket.org/osrf/gazebo/issues?status=new&status=open
Sorry to hear that you had a bad experience. Hopefully we'll be able to assist you.
I'll need a bit more information though. Could you update your post with answers to these questions?
What version of Gazebo are you using?
What operating system are you using (such as Ubuntu Trusty, or OSX)?
What tutorial(s) did you try to follow?
What command did you execute that resulted in "waiting for namespaces....."?
What command did you execute that resulted in "Unable to locate package gazeboXX...."?
Every release we run through our tutorials to make sure they work. Sounds like we missed something, and we'll try to correct this situation as fast as possible.
Thanks for letting us know about your problem.
** Update **
So far you have said that you followed the tutorial exactly, and still can't install gazebo7. If this is the case, then something strange is going on. Can you please post the output to the following commands:
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
.
wget http://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -
. just post the last couple lines of the next one
sudo apt-get update
.
sudo apt-get install gazebo7
Originally posted by nkoenig with karma: 7676 on 2016-04-01
Post score: 3
Comment by nkoenig on 2016-04-04:
The author of this question had Ubuntu 12.04, not 14.04 as originally reported. Therefore they could not install gazebo7 as indicated here (http://gazebosim.org/#status).
|
STACK_EXCHANGE
|
I’m interested in a UPS for my NUC8i7BEH. Which do you recommend? Many thanks.
I use this: CyberPower CP1000PFCLCD
It’s probably overkill for what it supports (NUC, NAS, Netgear switch, router). But it has been worry-free, and everything runs smoothly through the nearly weekly or biweekly power glitches during fall/winter.
You don’t really need a lot of battery capacity (VA) if you have a switchover generator system or only short outages (my case). It just has to last a few minutes.
I have another CyberPower in another part of the house. So, I guess I like that brand.
That’s too pricey for my purposes but I’ll search for one that delivers 12-19V (according to: https://www.ebuyer.com/868776-intel-nuc-bean-canyon-nuc8i7beh-i7-8559u-barebone-boxnuc8i7beh3, where I bought it from)
Understood about the price. But don’t confuse voltage with power.
The voltage going into your NUC is 12-19 volts.
UPSes are rated in VA (apparent power). As an example, a 425 VA UPS will supply 50 watts of power for about 37 minutes. One like this costs about $50.
You don’t need a whole lot of battery power to run a NUC and a router through a short power interruption.
OK but if the power goes off overnight…sounds like I need at least 8 hours worth!
That’s right. You are the only one that can determine how much backup you need based on your load and how long you need it to last. Most of the UPS suppliers have calculators to help you figure what you need. This is the one for APC
Or you can politely remind RoonLabs that allowing a UPS to shut your core down in case of power failure would be a good thing.
If you have a Linux box connected to the UPS shared by the NUC (ROCK) you can initiate a shutdown of ROCK by executing the following:
curl <IP address>/1/poweroff
I use apcupsd with my UPS on Ubuntu and have added the following to the shutdown script:
#!/bin/bash
HOST=<IP address>
LOGFILE=/var/log/rock.log
printf "$(date "+%Y-%m-%d %T") : Attempting to shutdown ROCK ...\n" >> $LOGFILE 2>&1
if curl -sS $HOST/1/poweroff >> $LOGFILE 2>&1
then
    printf "\n" 1>> $LOGFILE
fi
exit 0
BTW other connected Linux boxes could be shutdown remotely with
ssh user@host "sudo /sbin/shutdown -h now" &.
Have done some research, I see your point.
UPSes are there to give you time to act, either by gracefully shutting down the system or getting a generator in place. Although a NUC is very frugal with power and a big UPS may keep it going for hours, that’s generally not all that cost-effective. The UPS will beep and alert you to the power failure; just buy one that gives you enough time to get over and gracefully shut everything down.
Thank you !!!
I have an APC 750 UPS on my backend system, which includes my ROCK server. However, my music library is on a NAS, not direct storage on the ROCK. The ReadyNAS units (main & backup) integrate with the APC UPS and will initiate a controlled shutdown at low battery levels in the event of an outage. Although I only get about 10 minutes of on-battery operation from the UPS before shutdown, I find it very useful for dealing with the short power interruptions, which play havoc with spinning-disk systems.
One word of warning: opinions may differ, but deep-cycle batteries are not really designed for continuous charge/discharge. Each virtually full discharge reduces the life of the battery.
I used a UPS as you suggest to power hi-fi, PC, etc.; the batteries lasted less than 2 years.
Strictly speaking, a UPS shouldn’t be used as a continuous power source, at least not if you don’t want to keep replacing batteries, which aren’t cheap.
Maybe Google the usage of deep-cycle batteries.
A UPS in theory should have sufficient power to allow an orderly shutdown with no data loss, hence the comments about database integrity if power drops during a read/write.
We suffer regular power outages usually for up to 4.5 hours
Just my 2p
No good if I’m asleep.
My NUC just runs ROCK and there appears to be no way for the OS to recognise the UPS power is about to die. This is Xekomi’s point.
Yes, all you need is USB to monitor the UPS and a network connection to communicate with ROCK. A Pi Zero would probably do.
Can you explain this to a non-techie? I know what a RPi is but want to understand how it hooks up with the UPS/NUC and what software gets installed on the RPi.
A UPS connects to a single computer via USB and exchanges information about electricity supply and battery status. Likewise the computer can control the UPS, e.g. kill power.
A Raspberry Pi could be used to monitor the UPS and initiate the shutdown of machines sharing the UPS but not receiving the status information.
On Linux you may use software called apcupsd or nut. These are command-line applications, so some knowledge of setting up Linux is needed.
|
OPCFW_CODE
|
As John Tukey rightly said, "The greatest value of a picture is when it forces us to notice what we never expected to see." Indeed this is the case. Humans are naturally drawn to visuals; pictures appeal to us. Given the choice between reading a large amount of text and seeing the same information in pictorial or graphical form, we obviously choose the latter. A graphical or pictorial representation is a quicker way to analyze data and spot trends, and not everyone enjoys reading long passages when the same information can be depicted visually.
This is how data visualization comes into the picture and becomes a remarkable field when dealing with large sets of data.
Almost every field involving data makes use of data visualization, and for good reason. Suppose you are searching for insights in a dataset with nothing presented in visual form; it quickly becomes a tiring exercise. Data visualization also becomes important when you are sharing insights or information with your business partners or your consumers: a graphical or pictorial representation communicates far more effectively.
There is a lot of content available on the internet and in books that uses data visualization; without it, the material would be boring to read and conceptualize. It is easier to look at pictures and graphs to draw inferences. Analyzing data and gaining useful information becomes easier when the data is available in tabular,
graphical, or pictorial form. In short, data visualization is nothing but presenting your data in a graphical or pictorial form that is easy to understand.
Companies use data visualization for a variety of purposes, including identifying trends, comparing growth, making decisions, and understanding customer interests. Analysts look at the graphs to identify the trends.
The History of Bruce Springsteen (a visualization of Bruce Springsteen's songs), the Hello Sun app (visualizing the movement of the sun and the moon), and Living Space (visualizing the International Space Station) are some of the many data visualizations worth a glance to see how amazing and interesting the field is.
To make your data appealing, there are many data visualization tools available on the market that will make it clear, effective, and understandable to the users viewing it. Tableau, Infogram, FusionCharts, ChartBlocks, Datawrapper, and Plotly are some of these tools.
If this description of data visualization interests you and you would like to make a career of it, then I must tell you that you need strong analytical skills and a good command of mathematics, statistics, and computer science, along with some programming languages such as Python, R, Java, or C.
|
OPCFW_CODE
|
Logan, I just went through the process to create an account. The signup page has a few employment related fields that are required to create an account. They probably shouldn't be.
EDIT: There's also an alignment issue in the settings with respect to "City of residence", where the text field is off by ~1 pixel.
EDIT2: On the page where I uploaded my SSH key, it says updates occur after a small delay. The linked page in the docs says that this delay is ~30 minutes. In any case, I was able to begin using SSH immediately. If that info is no longer fresh, I suggest removing it, especially since it's probably a turnoff for a lot of users.
EDIT3: When I run sf-help --web at the shell, it prints an error:
[colbyrussell@shell-22009 ~]$ sf-help --web Use of uninitialized value $user in concatenation (.) or string at /usr/bin/sf-help line 96
I've been on SourceForge a while, and I really like it. It always looked a little outdated, and there was the whole malware incident, but for hosting code with discussion boards and ticket systems, it does the job nicely.
Obviously the malware thing was really bad, but in general I think it gets a bad rap because it wasn't the trendy new thing. I personally liked the old UI, but I'm happy for the new change because it's good for the future of the platform.
The UI menu really annoys me, the top level reads the following:
Deals? Separate Join and Login? ... Confusing
Then when you scroll down, the top menu changes to:
Articles? This is a hosting site, and they have a Blog item already! INTERNET SPEED TEST??!
| Articles | Internet Speed Test | Menu | | | | BIG AD BANNER | | |
I appreciated Sourceforge and it was a big part of my gateway experience into opensource, but now it just looks like an amateur startup :(
If anyone from Gitlab is listening: github sync is a really important feature for FOSS projects. Glad that SourceForge implemented it.
(In a large-ish FOSS project, we are trying to move to self-hosted Gitlab, but the lack of sync means that people have to fully buy-in or manually sync their extensions/modules, which is causing major friction)
Nice work, I really didn't expect anything despite Logan promising improvements time and time again via twitter. Excited for the GitHub sync (not import).
Also, the awesome SF hosting (SSL and a SQL DB included), much better than GH Pages IMO.
Kudos! Keep improving. The source hosting "business" isn't a zero sum game. Everyone benefits from choice and competition.
Good job. Github sync means you can host binaries on sourceforge and keep the source on github as usual?
Has nobody in the design team ever tested this on a screen at less than 4K? At 1366×768 the title is jammed right up against the left edge of the screen.
I applaud the initiative but when I look at what happened to Slashdot, I’m not enthusiastic about Sourceforge succeeding any better under the same ownership (I have no idea whether Slashdot makes money but it’s a horrible community these days, and nothing seems to be done about it.)
Sad state of affairs that something like SourceForge cannot operate as a paid-for service, but only as a "tech-influencer" ad site. Quote from here: https://slashdotmedia.com/about-slashdot-media/
Not attacking the Abbotts and their SlashDotMedia and BizX web media influencer conglomerate, 100% legitimate way of doing business, but an overall observation on services that can't be operated as standalone businesses.
Also an illustration how big the tech dependency on advertising is, from Google to this. Remove ads from the web and a shitload of stuff disappears.
Looks like a good list of features, except for the HTML5 Speedtest. Why was that needed?
Nice. A large improvement. Have been waiting for this for a while since it was announced that it was sold.
No idea if anyone working on SourceForge is likely to be here or not, but I would love to know if I could get an API for the projects there, or even just a list. I would like to add more of SourceForge into searchcode.com and would prefer not to scrape the site.
Probably far too late to mention, but I have a bunch of projects on SF, and now I can't scroll through them on the me drop down menu.
It feels.. strange, and more responsive than the old stuff. I guess that's just change.
I never was a fan of github; I always preferred the geodistribution of SF, and of course it was so easy to make binary packages for end users. It's a shame the other people had to ruin a great place by selling the adware space, and I'm so glad it's all gone.
I just fear in this environment of hipster flash with no substance, that a well rounded site like SF will never get the hipster VC financing that github does.
I hope you guys don't shutter any time soon, I love SF!
They should hire a new lead designer.
Any plans on expanding:
- the ticket system (connectors to external systems, boards, etc.)
- CI connectors
- integrations for team systems (Slack, MS Teams, ...)
The new design looks pretty good, or at least more modern. Also I just wanted to empathize with you that you had to field the same "do you still bundle malware" question... eight times by my count.
I notice that there is no automatic spell checking in edit boxes, for example when someone creates a bug ticket. Github has this.
Any plans to add this feature? Or is there a control to enable it that I'm missing?
I got into VCS / code repositories after the whole sourceforge-serves-malware debacle, so I'm not familiar with SF from a developer perspective.
Is it a git repo? Mercurial? Something else completely?
I for one am excited about this. I used to have stuff on sourceforge, and to me it feels like they're on track to become relevant again.
Anecdotal. I find Sourceforge has more end-user applications. By this I mean applications you can download, install and have a user interface. A good example is Squirrel SQL. Github seems to be a coders repository. Filled with libraries to incorporate into other programs.
Well, at least it's not as mind-punchingly slow as it used to be. Progress?
Wasn't it sourceforge that injected malware into downloaded programs? At least that's what pops into my head every time I see a sourceforge link, and why I avoid anything that's hosted on there.
position: sticky; /* Frames are back, baby! */
Atrocious. They lock a pointless navigation bar to the top and changed the theme. Can someone explain why I would ever want to use this product. They bundled adware with their software ffs. Inexcusable. They need to just die.
Whoa there: fixed top banner with an actual banner at the top, covering a whopping 20% of my screen height and 80% of width. Didn't see something like this in a long time.
I'm never going to interact through that as a developer on daily basis. Not even considering the poor UI for just every feature in SF.
The only thing SF has for it, currently, is a mailing list for each project. GitHub and GitLab should have this. Interaction and discussion though "issues" is horrible.
Brought to you by Google AdSense
What exactly is "new" besides the brutally awful design and color scheme? I see the same malware-laden downloads that can be found anywhere else on the internet, surrounded by ads as before. Over 600 separate HTTP requests and counting on a project page? Really? You seriously expect developers to use this?
surprised SourceForge is still alive
A case of too little and WAY too late. SourceForge is like Myspace: long dead. It's not even a good try.
A company whose website doesn't have a jobs section is doomed.
SourceForge uses very invasive ads.
I hope they got rid of the guy who suggested bundling installers
Glad to see they are removing their adware from packages. I believe the damage is already done though and nobody will trust a binary from that place ever again.
|
OPCFW_CODE
|
Python code runs fine in VScode, but not in terminal - adds indentation
I just installed VSCode on my laptop. Every time I want to test a function in the terminal (Shift+Enter), it throws an indentation error. I have tried everything in the indentation settings, and have ensured indentation is always 4 spaces.
For example, the code below runs fine in the interactive window or in debugging.
def test():
x = 3
y = 4
return x + y
But when I select those lines and press SHIFT+ENTER, I get an unexpected indentation error.
>>> def test():
... x = 3
... y = 4
... return x + y
...
File "<python-input-1>", line 3
y = 4
IndentationError: unexpected indent
Indeed, even if I paste existing code from elsewhere into the terminal, it somehow adds indentation on every new line, causing indentation errors as well. Interestingly, I don't get this error when I type my code directly into the terminal.
Given everything I have tried (disabling all extensions, modifying autoindent settings, uniform indentation, etc.) and the simple examples I have been using - both sent from the editor and copied from other code - I am convinced it has to do with something in the Python terminal that I'm missing. Is there a reason why it would add multiple tabs or spaces in front of every new line pasted into the terminal?
I am running:
Python 3.13.0
VSCode 1.94.2
Windows 11 Home
CRLF line ending
I currently only have the following extension installed:
Python
Pylance
Jupyter
What version of Python are you running? I think this is fixed in later versions (e.g. 3.10). Try entering exec(''' at the prompt, then paste your code, and then type '''). That's a workaround I used to use.
This may be relevant; apparently solved in 2023.8.31
use tabs by default for indenting
Preliminary bug report: Python3.13 All functions fail to send: IndentationError: unexpected indent
@sahasrara62 Why tabs? PEP 8 recommends 4 spaces. Do you want to expand on that in an answer?
@wjandrea tabs work better when you copy from an IDE to the terminal
what platform are you on? what line endings is this file using? (please edit this info into your question post) I suspect a line ending problem.
thanks for the suggestion @starball
changing the line endings had no effect, and the terminal also inserts additional indentation when I copy code from other sources
if you want to copy and paste code into the REPL just use IPython as the REPL, not the default one that comes with Python
Would you mind copying the code to a terminal text editor and see if it will indent like this?Also there seems to be a similar question here:https://stackoverflow.com/questions/2501208/copying-and-pasting-code-directly-into-the-python-interpreter maybe you could take a look.
I had similar issue from Visual Studio Code does not print to terminal with correct indentation
solved by reverting to python 3.12, no indentation issues after that
|
STACK_EXCHANGE
|
I have a very hard automatic clustering problem with some training data (500 samples, each with roughly 5-50 classes and 10000-20000 data points in total). What I need to do is cluster an input into multiple classes (the number of classes is unknown).
So far I have tried several clustering algorithms, each of which gives me an output. I noticed that none of them works sufficiently robustly over the entire training set: some work better than others on data with a small number of classes, while others work better on data with a large number of classes. Meanwhile, some give me very good cluster results on part of a sample while performing badly on the other half. Visually speaking, I feel that a human should be able to generate a better clustering output by combining all the initial outputs of the existing methods. However, it seems hard to translate my human selection logic into code. I know that a good cluster in my data should have a shape like a long ellipse, so if I see a class in one of my outputs that is very unlike this shape, I know something is wrong, and if I look up the results for this area in other outputs, I can easily pick those close to my expectations. The actual case is even more complicated: my human decision on choosing which class in which output for which region of my data sample uses both relative information among the different outputs and prior knowledge of the distribution of the tested data points. Please help me; I do not even know where to start.
Any comments and hints are welcome.
Thank you for your comments. I see what you said, but it turns out that my ultimate goal is different from what you suggested. I am not interested in finding which clustering output is better than the others, but in generating a new output based on the existing clustering outputs.
At first glance, these two goals look compatible: if I can choose one best clustering output for a given set of data points, then I should be able to repeat this process and apply it iteratively to cover the entire dataset, and in this way generate a new output whose solution is composed of different clustering outputs in different regions. However, in my problem this is really not a feasible approach, because I really do not know how to separate an entire dataset into reasonable subsets.
If we knew how to do this job properly, the clustering problem could be largely simplified: we could first separate the entire set into non-overlapping subsets, then use an appropriate clustering method for each individual subset, and finally collect all the subset clustering outputs as a new output. Meanwhile, it seems that this sequential process might be harmful if we make mistakes in the early stage of dividing the subsets.
|
OPCFW_CODE
|
Getting Started with JavaFX Scene Builder 1.1
8 Use a Style Sheet and Preview the UI
This chapter describes how you can use the JavaFX Scene Builder to preview the FXML layout that you just created and also how to apply a style sheet to customize the look and feel of the IssueTrackingLite application.
Use the following sections to preview the UI work that you have laid out so far and then change the look and feel of the layout by working with a style sheet.
Preview the UI
Use the following steps to preview the UI work that you have done so far:
From the Menu bar, choose Preview, and then select Preview in Window.
Resize the window multiple times to ensure that the buttons in the toolbar and the text area resize appropriately when the window is resized.
To stop viewing the preview, close the Preview window or from the Menu bar, choose Preview and then Hide Preview Window.
Use a Style Sheet
You can customize the look and feel of your UI by applying style sheets. For this tutorial, you use a style sheet file that has been provided with the IssueTrackingLite sample.
Verify that the Cascading Style Sheet (CSS) resource file that is bundled with the IssueTrackingLite sample is already set. In the Hierarchy panel, select the root AnchorPane container. Click the Properties section of the Inspector panel. In the Stylesheets list view of the CSS subsection, notice that the IssueTrackingLite.css style sheet is already set, as shown in Figure 8-1. This is the style sheet that was set when you created the FXML file.
Figure 8-1 Adding a Style Sheet File
Description of "Figure 8-1 Adding a Style Sheet File"
Use a style class for one of the elements in the Content panel.
In the Hierarchy panel, select the row for the ListView element.
Click the Properties section of the Inspector panel, and click the button with the plus sign (+) in the Style Class list. Select darkList as shown in Figure 8-2. Notice that the appearance of the ListView element in the Content panel has changed to a dark grey color.
Figure 8-2 Adding a Style Class to the ListView Element
Description of "Figure 8-2 Adding a Style Class to the ListView Element"
In the Style Class list view, with darkList selected, click the choice button with the down arrow and select Open IssueTrackingLite.css from the list, as shown in Figure 8-3. The IssueTrackingLite.css file is displayed in the default editor defined for the CSS file type. You can make edits to the file and then save it. The changes are immediately applied. Exit from the editor window.
Figure 8-3 Working with the Style Sheet File
Description of "Figure 8-3 Working with the Style Sheet File"
From the Menu bar, choose File and then Save.
You just completed building the FXML layout for a JavaFX application using JavaFX Scene Builder. Continue with Compile and Run the Application to compile and run the IssueTrackingLite application.
|
OPCFW_CODE
|
I think I’m about ready to give up on Inform 7 yet again. I hate to do it, both because the IDE (integrated development environment) is extremely nice, and because it’s such a popular authoring system for interactive fiction.
But it’s driving me bats. For two reasons:
First, the “natural language” programming paradigm, which seems to be immensely attractive to new authors, soon loses its appeal as you begin trying to do complex things, or even simple things that are not entirely standard.
A native speaker of English can typically phrase a given idea in a variety of ways (“Bob is standing on the cat’s tail,” “The cat’s tail is beneath Bob’s shoe,” “Bob is standing. His shoe is on the cat’s tail,” “On the tail of the cat is the shoe of Bob, who is standing,” and so on). Most of the time, Inform can only accept one wording of a given idea. Other wordings that seem both entirely sensible and entirely equivalent to Inform’s desired wording either will not compile, or will compile but won’t produce the desired results.
The I7 home page gives, as one of its key examples of the felicity of natural language programming, this sentence: “The prevailing wind is a direction that varies.” The fact that Graham Nelson (the author of Inform 7) feels this is a good example tells us a great deal, because to a native speaker of English, the sentence is gibberish. First, the wind is not a direction. The wind is a movement of air. Second, directions do not vary! They are fixed. North never becomes west (though it could perhaps do so in interactive fiction, at the cost of badly annoying anyone trying to play the game in question).
This sentence does in fact have a meaning in Inform 7. It means that the phrase “the prevailing wind” is being defined as a variable, and that this variable is to be initialized in such a manner that it can take on only one or another of the values defined within I7 as a direction. That is, the prevailing wind can be assigned the value north, northwest, or west, etc., but it can never be assigned the value 7 or “Tuesday.”
Is that clear to the non-expert who reads the sentence?
Second, the manual is a mess. This is perhaps especially surprising because the manual for Inform 6, which was Graham Nelson’s first authoring system, was a marvel of clarity. (In case you’re new to interactive fiction, I should perhaps explain that Inform 6 and Inform 7 have about as much in common as ice cream and a bicycle.)
What is being called “the manual” distributed with I7 is not in fact a manual at all. It’s a fairly reasonable tutorial; if you work your way through it faithfully, from start to finish, you’ll learn a heck of a lot, and the concepts unfold in a reasonably sensible order. But if you’re trying to look up some specific technique, and would like to read a complete, coherent discussion of that technique within a self-contained section of the “manual,” you’re doomed.
A single example will have to suffice. Chapter 12 is called “Advanced Actions.” This, one would think, would be the place one would go if one wanted to create a new action for one’s game (perhaps spearing a fish with a spear whose tip has to be sharpened first in order for the action to succeed — that seems reasonably advanced, in that it involves two objects, one of which has to be held and also has to pass the test [isSharpened=true]). But after two pages, one of which is devoted to a diagram of the manner in which I7 processes actions (that is, player commands), Chapter 12 goes scampering off after the question of how one writes code that will enable the player to give orders to other characters in the game (“Bob, spear the fish”). This material belongs in an entirely different chapter, one on the handling of non-player characters.
There is, in fact, such a chapter. It’s in a second, parallel document called the Recipe Book, which is also included in the distribution. But as can be deduced from its title, the Recipe Book is mainly a tutorial as well. It mostly eschews formal discussion of the minutiae of proper or functional code in favor of presenting numerous examples (the recipes), one of which may or may not have chunks of code that you can copy and adapt to your own purposes.
To develop a clear, logical understanding of I7 by reading the Recipe Book, one would need an extraordinarily well developed intuition.
As far as I can see, the only way to actually learn I7 is to post questions on the rec.arts.int-fiction newsgroup. As a publicity gimmick, this is a winner: 90% of the posts on r.a.i-f are about I7, which guarantees that it will be highly visible to anyone who is interested in interactive fiction. As a way of documenting the programming language, it’s … rather informal.
|
OPCFW_CODE
|
Helpful guides and tutorials for using Cobalt features.
Getting Around
Last edited 297 days ago by AwesomelyEpic1
How can I switch between Cobalt realms?
Cobalt's realms include the Hub, Survival, Creative, and Skyblock. You can use /menu to switch between realms, or use the realm name as a command for faster switching, like /survival, or even simpler: /s. While in the Hub, you can use /map to get around or see your location.
How can I travel around a Cobalt realm?
You'll find yourself at Spawn when you first join the Hub, Survival, or Creative. You can return to it at any time by using /spawn. On Skyblock, you'll start on your new Island. To start building on any of our realms, use /build. On Survival, this teleports you to a random location in the Build world. On Creative, this finds a new plot for you to build on. And on Skyblock, it generates a new Island for you. Our realms may have additional points of interest that you can teleport to, such as /pvp, /shop, /casino, /parkour, etc. If you want to learn about each realm and its features, use /tutorial.
How do I teleport to a specific player?
Want to play close to one of your friends? Build a town together? Use /teleport [player] to send a request to teleport to any player. If you're /friends with that person, you will automatically teleport and won't have to wait for them to accept. You can also request that they come visit you with /tphere [player].
How can I return to my home?
If you want to save a current location to return to later, use /home create to set that location as your home. If you're a donor, you can use /home create [name] and set multiple homes. Teleport to your home or view a list of homes using /home, and teleport to a specific home with /home [home]. Delete an unwanted home with /home delete [home]. All members on Cobalt can have 1 home, but you can donate to get more.
How can I teleport to a public warp?
Some locations on Cobalt are open to the public, and you can see a list of them with /warp list all (page). Click on any warp in the list or use /warp [warp] to teleport to any one of them, or search for a specific one with /warp search [text]. To learn more about a particular warp, including who owns it, where it is, and how many times it's been used, use /warp info [warp].
How can I create a public warp?
If you have a location you'd like to have others visit, stand at its location and use /warp create [warp] to make a warp. You can also set a password which will be required for people to visit with /warp password [warp] [password], or clear the existing password with /warp password [warp] clear. You can also update the location of your warp by standing in a new location and using /warp move [warp]. To see a list of your warps, use /warp list, and click on any of them to teleport.
How can I create a clickable warp sign?
If you're a VIP+, you can create warp signs, which allow players to simply right-click on a sign to be teleported. This can be helpful in parkour arenas, games or stories, roleplaying, in a town, or to easily navigate a big building. To create a warp sign, place a sign and write "[warp]" on line 1, and the name of your warp on line 2. If the warp sign creation was successful, the sign should now display "Warp to:" and be functional when right-clicked.
How can I go back to previous locations?
Each time you teleport in a realm, your previous location is recorded and available to return to with /back. You can donate to receive additional locations to return to, and even go back to your last death location.
Extra Utility Commands
You can donate to receive additional teleportation commands. /jump will teleport you to the block you're looking at. /top will teleport you to the top of your current area.
|
OPCFW_CODE
|
Microsoft’s Annual Cruise: Faculty Murmurs, Shooing Seagulls, and What Bill Gates Will Watch at the Olympics
On Monday evening, I had the pleasure of sailing the Seattle waterways with Microsoft and several hundred of its university-faculty friends. We were all aboard an Argosy cruise ship for a three-hour tour that took us from the city dock in Kirkland, WA, across Lake Washington; past the University of Washington; through the Ballard Locks; and all the way down to Elliott Bay and downtown Seattle, where we docked near the aquarium. The weather was perfect and afforded us spectacular views of Mount Rainier, the Olympic Mountains, and open water.
It was all part of the annual Faculty Summit hosted by Microsoft Research, in which the company invites leading researchers from academia and government to Redmond for two days of talks with staff from all of Microsoft’s global research labs. This week’s summit included sessions on scholarly communication, artificial intelligence, and applications of Microsoft’s Virtual Earth—and plenty of tech demos. (The Seattle P-I did a nice piece on a new spherical-display technology.)
But enough about work… I was there to catch up with familiar faces, meet some new ones, and find out what people are talking about at the intersection of computer science and Microsoft. Over a buffet dinner of halibut, steak, pasta, and fruit, I got more than my fill. Just a few highlights here:
—I should probably start with what people weren’t talking about (at least with me). That would include Microsoft’s competition with Google, the bid to acquire Yahoo, and the abrupt departure of Microsoft senior executive Kevin Johnson. When you’re trying to do innovative research—really the long-term future of any tech company—these corporate dramas are probably just a distraction.
—Microsoft’s Beijing research lab is gearing up for the Olympics, which are the talk of the whole town, according to Hsiao-Wuen Hon, the managing director of the lab. The city will effectively shut down for the opening ceremonies on August 8. The airport will be closed. Street traffic will be highly restricted. All attendees will go through a two-hour security checkpoint. Thousands of Chinese army troops will be stationed next to the “Bird’s Nest” Olympic stadium in Beijing. There is a rumor that the top of the stadium is armed with anti-aircraft guns to shoot down any airborne terrorist threats. I asked Hon whether he gets to attend the ceremony. “Unfortunately yes,” he joked.
—Word has it that Bill Gates will attend the games, along with Warren Buffett and many heads of state, including President George W. Bush. Gates’s preferred sport to watch? Ping-pong (OK, table tennis). The table-tennis viewing will be hosted by Microsoft’s top people in China, as well as government officials. Suffice to say that every minute of his public appearances will be carefully managed and choreographed—not by Microsoft as much as by his gracious hosts.
—Shri Narayanan, a professor of electrical engineering at the University of Southern California, had some interesting insights into the similarities between academia and industry. Narayanan, who spent five years at AT&T Labs-Research, noted that a lot of the challenges are the same when leading a corporate group and a university research group: fighting for funding, intensive recruiting, marketing, and managing workload. An academic research group, he quipped, was “like a startup with no stock options.” The tradeoff, of course, is a bit more independence and flexibility of schedule.
—A couple of Microsoft research projects caught my ear. One is by principal researcher Feng Zhao, who is designing sensor networks for energy-efficient data centers—more on this another time, but Zhao moderated a panel yesterday called “Browsing the physical world in real-time,” which relates a bit to Wade’s story this week about the “Internet of things.” And the other is smart Web-conferencing software by Zhengyou Zhang, another principal researcher in Redmond—this uses computer-vision algorithms to track the gaze and gestures of meeting participants, so as to give more clues about who is speaking or listening to whom. (Having worked out of a remote office for the past three years, I can appreciate the value of that.)
Lastly, as we were waiting to clear the Ballard Locks to enter Puget Sound, we passed through a waterway famous for its migrating salmon. Apparently the action of the lock creates turbulence in the water that brings baby salmon to the surface, where they are easy pickings for predatory seagulls. So the locks have high-pressure water jets to spray the surface and keep the birds away. I’m not sure what this says about corporate competition, but I found it touching.
|
OPCFW_CODE
|
WP8 App won't open
I developed an internal Windows Phone 8 app and published it to the marketplace. The app works when debugging, but when I published it to the marketplace, downloaded it, and opened it, it only says "Loading..." and nothing more. It won't open.
Has anyone faced this problem before? What can I do to fix it?
I have tested the app in the dev environment, both on a virtual device and on the phone itself, and it works. The app does not crash or report any errors, so I can't get any info about what is going on. It only happens when the app is downloaded from the store.
Update:
Physical Device info:
Nokia Lumia 365
Windows Phone 8.1
Solution:
Basically, what fixed the problem was changing from Debug to Release mode and the platform to ARM.
Did you try running a release build through the emulator, and if so, did it work?
@JamesWright OK, you got it. It won't open in debugging when I change to Release mode. What could have happened?
Cory gave a solid answer. However, let me know if it didn't work.
It did not. I changed to ARM and Release mode, but it's still displaying the loading message. Note: I'm not using SQLite; I'm using IsolatedStorageSettings instead.
Do you have an actual device to test on? Or are you using the emulator only?
@Cory see my update please
Have you inspected the crash dumps in your Dev Center dashboard yet? Also, see my updated post.
@Cory I have accepted your answer; it solved the problem!! It's working now and everyone is happy hahaha. Sorry for the late response!!!
Please edit your post as to what exactly solved your problem. That way people in the future can have it as a reference. I'm glad you got it working.
I've run into this before. The first thing I'd do is see if you can get any crash info from the developer dashboard.
Secondly, are you using any third party libraries such as SQLite? Sometimes they need to be built specifically for an environment, so when publishing make sure you're building for ARM and not x86 or Any CPU. This was the issue for me. I needed to build specifically for ARM and go through the process of getting SQLite to work properly.
See this link for common issues:
https://support.microsoft.com/en-us/kb/2859130
The following issues are known to cause the app to crash only after the app is installed from the Windows Phone Store and during Certification Testing. When side-loaded, the application runs fine.
Calling ScheduledActionService.LaunchForTest in a Windows Store installed app. Make sure all debug APIs such as ScheduledActionService.LaunchForTest are not included in your release build. The app will crash otherwise.
Writing to the InstalledLocation folder in a Windows Store installed app. Do not write to the InstalledLocation folder in your production application release submitted for certification. This will cause an app crash. The folder is readable and writable before it is published, but in the published app, the folder is read-only. You can however read and write to the Local Folder. For more information, refer to Data for Windows Phone.
Coding a dependency on a hard coded product id value. Before Marketplace deploys the app, its ingestion process changes the ProductID in the WMAppManifest.xml. Perhaps your app has some dependency on the ProductID that existed beforehand. An example is if your app has a hard copy of the old ProductID in a string constant. Your app may need to explicitly open WMAppManifest.xml and then inspect the ProductID in order to get the correct value.
(Windows Phone 7 only) In the Dashboard, inspect the list of Capabilities for your published app, and ensure that Marketplace didn’t remove any. This problem can occur owing to the various ways Marketplace audits the app in order to identify what capabilities are needed. The solution depends on which capability is missing. As a Windows Phone 7 example, for MediaElement to be detected, its name needs to exist in the xaml itself (x:name). Find more information about Capabilities here.
Submitting a XAP file for x86 instead of ARM. Ensure that you built your solution targeting the Device not the Emulator. The Emulator compiles to the X86 platform while the device compiles to ARM. In this case, the app won’t be able to be tested.
App won’t load on low memory devices (512 MB – Windows Phone 8). See if the failure is specific to a particular phone model or manufacturer. For example, determine if the problem is related to the amount of memory available in the phone.
If your application does fail certification, make sure you refer to the “Windows Phone Tested” section of the failure report for a list of devices on which their application was tested so you can attempt to reproduce the failure on the same devices if possible.
Developers should pay attention to the failure reason comments in the Certification Testing report to see if the issue was encountered on Windows Phone 7, Windows Phone 8 or both platforms and make sure you are attempting repro on the correct device/OS.
These scenarios are known to cause Dev Center submission and/or run-time errors:
Not running the Store Test Kit prior to submitting your app to Dev Center. In Visual Studio, under the PROJECT menu, choose Open Store Test Kit. Execute the automated and manual tests. Make any necessary corrections. More information is available here.
Submitting to Dev Center a debug, instead of a release build of your app. Make sure that when you built your solution, you did so in Release mode, i.e. not Debug mode. In Visual Studio, check this using BUILD->Configuration Manager… If you uploaded a Debug build you will get static validation errors during submission, for example if the app includes native code. Also a debug build will generally result in an app that runs slower to the end user, because of extra debug checks and diagnostics.
Ensure that the XAP file contains all the DLLs needed. For example if using the Windows Phone Toolkit or other third party libraries, make sure the references to those DLLs indicate CopyLocal=true. Your XAP file can be found in the Bin/Release folder of your project. Inspect its contents by renaming a copy from .XAP to .ZIP. Then double click on the newly named file to inspect its contents. You also can inspect it by using a third party tool such as WinZip.
Also, while inspecting the XAP, make sure all the DLLs present are known, expected DLLs, and that those DLLs have been compiled for your app's target Windows Phone version and build environment. You might need to launch a clean build to ensure there are no unnecessary DLLs.
Working on it. Testing.
This happened to me for some apps. Once you publish the app to the Store, the app gets a new Store ID. Copy and paste this into your app's AppManifest and WMAppmanifest, then republish it to the Store as an update.
Where exactly should I paste the ID? I followed this tutorial but it's not working: https://msdn.microsoft.com/en-us/library/windows/apps/br211475.aspx
I don't think the publishing process uses those. It recreates the manifest files when submitting the app. That's why you fill out the version and description data when submitting it.
Keeping in mind what you just said, is it necessary to fill in this info when debugging? Because when I delete all that info from the AppManifest, the app opens normally on the device and in the emulator.
|
STACK_EXCHANGE
|
The long awaited transition from FBML tabs to iframe tabs has finally happened. As of this writing on Feb 14th 2011 (Valentines Day!) you can choose the “iframe” option for Facebook application tabs. The official word from the Facebook developer roadmap is that as of March 11th developers will no longer be able to create FBML tabs (although existing FBML apps will be supported indefinitely – or at least until the end of 2011 I heard in the comments).
The great thing about iframe tabs is that you are working with a real webpage now. No more fighting with FBJS and finicky FBML tags!
- onLoad JS events – One of the frustrating things with FBML apps was always that you couldn’t start JS on the page load, so things like automatic slideshows were impossible. You should be able to have content auto-play now!
One of the interesting things is that the ubiquitous Static FBML app is going away too, apparently. This application has been a great shortcut for many developers and page owners, allowing them to just install the app and paste in HTML and FBML code without creating a special custom application. (Hopefully more people will use the custom facebook tab creation service I built, SplashLab Social ;)
This will be a pain for many users. It’s a big leap to have to create a custom application and host the code on your own server! I will be curious to see if Facebook creates a new app that does some of the same things.
Naturally, like any new (and many old) Facebook features, iframe tabs still have bugs. My preliminary tests turned up the following problems:
- Tabs not visible to logged-out users – If a user is not logged in to Facebook they cannot see your tab. Official bug is here: http://bugs.developers.facebook.net/show_bug.cgi?id=15166
This has been fixed now!
- SSL/https problems – My iframe tabs do not load properly when using Facebook over https. Here is the bug on Facebook if you want to track progress: http://bugs.developers.facebook.net/show_bug.cgi?id=15200
This has been fixed now
- Renaming tab titles not working – Changing the “Tab name” does not appear to be working at this time. The official bug is here: http://bugs.developers.facebook.net/show_bug.cgi?id=15155 UPDATE: Apparently this is a feature, not a bug! Users can now set custom Tab Titles for each tab!
- Max height scrollbar issues (my mistake) – The iframe tabs appear to have a maximum height of 800px, which gives a scrollbar for taller content. The problem with this is that when the scroll bar appears, the tab's usable width becomes smaller than 520px. This means if you have images which are 520px wide, you get a horizontal scroll bar at the bottom. It’s kind of ugly right now; hopefully Facebook will enable an auto-resize like regular Canvas apps have.
Iframe tab height: This is not a bug, but I DO think it’s a little weird and not very obvious. You can fix this with the FB.Canvas.setAutoResize() method, as detailed in the bug I filed.
Beyond Static FBML
In summary, the new iframe tabs are more powerful and flexible than the FBML ones, and open up a lot of great new possibilities. There are a LOT of FBML apps out there, however, so it will be interesting to watch the transition.
Please, post any comments or corrections you have below! Are there any other new features of iframe tabs you are excited for? Anything you will miss from FBML?
Update: SplashLab Social has been released! If you are looking for an easier way to set up iframe tabs, look no further! My new service makes it easy – check it out!
|
OPCFW_CODE
|
5 Essential Elements For Do My R Programming Assignment
This function is by most measures much too long anyway, but the point is that the resources used by fn as well as the file handle held by is
This example points out that R has very powerful and flexible native graphing capabilities, but they tend to be fairly low level and require a fair degree of effort.
Wondering why you should pick us for customised assignment help in Australia? We have been in
Today we’ll share with you the top 7 data science coding interview questions, and we’ll solve them in Python. We’ve chosen Python because that’s the language most data science interviews will be conducted in, and it takes considerably less time and space to write than Java or C++.
Disclaimer: The reference papers provided by MyAssignmentHelp.com serve as model papers for students
R comes preinstalled on many Linux systems, but you’ll want the latest version of R if yours is out of date. The CRAN website provides files to build R from source on Debian, Redhat, SUSE, and Ubuntu systems under the link “Download R for Linux.”
Prefer using a named struct where there are semantics to the returned value. Otherwise, a nameless tuple is useful in generic code.
The consensus on the taxonomy of views for the C++ Standard Library was that “view” means “read-only”, and “span” means “read/write”. If you only need a read-only view of characters that does not need guaranteed bounds-checking and you have C++17, use C++17 std::string_view.
Modernization can be considerably faster, easier, and safer when supported by analysis tools and even code transformation tools.
The easiest way to see where this article is headed is to look at the example R session in Figure 1. The example session covers two unrelated topics. The first set of commands demonstrates what’s called a chi-square test (also known as the chi-squared test) for a uniform distribution.
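The statistic behind that chi-square test for a uniform distribution is simple enough to sketch outside of R as well. A minimal Python version (the counts below are made-up example data, and this computes only the statistic, not the p-value):

```python
def chi_square_uniform(observed):
    """Chi-square statistic of observed counts against a uniform expectation."""
    expected = sum(observed) / len(observed)  # uniform: every category equally likely
    return sum((o - expected) ** 2 / expected for o in observed)

# 60 hypothetical die rolls; a perfectly fair die would show 10 per face
print(chi_square_uniform([8, 12, 11, 9, 10, 10]))  # → 1.0
```

A small statistic (relative to the chi-square distribution's critical value for the given degrees of freedom) means the observed counts are consistent with a uniform distribution.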
Arguably the most challenging part of providing assignment help, our team of custom writers make it seem like
Expressing these semantics in an informal, semi-formal, or formal way makes the concept comprehensible to readers, and the effort to express it may catch conceptual errors.
The Intercept value is a constant not linked to any variable. If you have categorical explanatory variables, one of the values is dropped (blue in this case).
The alternative way is to use Python’s replace() function to replace every dot with our custom sign, but that’s something you can do yourself.
|
OPCFW_CODE
|
Pay Someone To Take My Programming Quiz For Me
I have just recently learnt that my final result on my programming quiz is going to be published within this week. After researching the other blogs out there and my response to the page from you, I decided to write this answer to something you might want to look at the next day. First off, some relevant info I learned was included in my final result that was translated and published above. At this point it was clear that Google is asking you to submit this as a requirement, but I am not sure what logic to try and solve for until you submit a finished query. I understand that Google has a strong team, but clearly you can spend a few minutes testing your query to see if it matches for you or if it may be invalid for you. If it fails for me, I will have to throw it away; if Google is willing to refund or delay, I will make an effort to find or download an alternative database solution for you. Perhaps if I could keep an eye on you, I would most certainly sign up now, and in return for your feedback. If it was not for this latest posting I would be prepared to take you (hopefully) to the official Gmail pool for your reference, once you have gotten your basic knowledge. Good luck with your query, and work it out in the next couple of days! Hopefully in your next attempt to fix my query, I will have to try searching for those out there, or might you like to know if we could get you a specific answer to me? My code is getting an error: Error: Some of [string:1] do not match the string:1. How do you deal with this? As my last full understanding of your content, I would suggest you convert this string in every query you have (if you want to do this sort of thing) or put this code in your search for it.
If you are wondering more on what you want a solution for, here is a link to a blog I wrote, which is available over my website… However I want to put in some background so I could explain exactly what this is doing. First of all I have an element of my head and I just want to close this post! Most likely it is not the root of the problem, but I am unsure about it. I will just be able to remove the whole thing down below and then put up for reference through on Google Maps. So far I have managed to stop the development process from adding up and remove the line up top and when I tried to delete it the code didn’t appear as-is. And you know I know what that means by “lost”, but I am pretty sure you should be able to filter out that part of the code to prevent that kind of “lost”. Well, we have not seen an issue in terms of where the code you have is being pulled down in your code for us to get it to work. So I hope you found something for yourself under that link in here.
Take My Online Classes And Exams
Anyway, this whole thing seems impossible. So should we be able to just leave that as a solution? I find that we have not got it back, and if we click on that link it DOESN’T make any sense 😡 What is the code to pull up the above? Do the links make sense to you? For those who don’t…
Pay Someone To Take My Programming Quiz For Me To Be More Effective
Here’s what I learned… 1) I would have decided to stop trying to block a developer. I’ll try to ‘switch from’ this plan if not for the sake of clarity. 2) Once the project is decided, it goes down in history as not taking it all to the next steps. You could make a statement or two about that as someone who has done it for a while, will you? Don’t give me that at me. Not following this philosophy, I guess, is what you need. 3) I will do this in private. Your client will take you back to your boss and change the subject for her; for ease I won’t do it on her behalf. Not bad. 4) I will do it in public. Use the wrong excuse there! 5) I will try to explain away the reasoning of not taking it to the next stage. Then you can keep improving a work-in-progress code base by becoming more versed in the subject. Make a clear-headed statement. Also add context about why you want to change and why you expect the decision to go through. If it doesn’t work out, don’t give the developer and your team an excuse! 6) In most cases, you can only address the consequences of starting a project. If that still doesn’t get through, the answer won’t be, “at the very least.” I’m glad you’re doing that. What I wish is that certain persons who want to work in this area, in other words, who have a problem with this one or I, could do something about it. You could do it and work your way up the ladder by doing this in private.
Pay Someone To Do Respondus Lockdown Browser Exam For Me
Bypass My Proctored Exam
The basic map form follows: mapForm() mapForm(id, label, dataContext) Once you’ve assigned a map form location, you can draw it by calling mapFormGet() You see what’s happening; the name given to the header text in the base-map is the lower-case letter I am using. Also, the markers you are working with are in reverse. Make the map as hard to write and perform on as if the data context doesn’t exist. That’s all it does though – you can draw in a lot of blocks of data, adding or subtracting features such as markers and lines. From that information all you can do, for simplicity’s sake, is make my base map on or off the keyboard. Final note though: if your code looks that hard, it’s likely that it’s not setting your map form correctly as I’ve outlined above, but the code can be optimized for that. With the aid of the most basic map form on your head (if you haven’t found any base maps on the internet, for that matter) – for this post, I will just explain where to begin and what it provides. I say such from a purely technical point of view, but since the intent of this post is to be as easy to write as possible, I’ll give you a quick starting point and show you what we’ll cover. Lets do the basic map form thing first, using ‘list’ which includes one of the core maps (where it is the same as the first level map):
|
OPCFW_CODE
|
Neo-tree window waits next key for some inapplicable actions
E.g. I have a binding for dm in some plugin, and this action (delete mark) does not make sense in the Neo-tree window. But when d is pressed, Neo-tree waits for a next key instead of immediately executing the delete-file action (which is bound to d by default), and then when the next key is pressed it appears in the Nui window, which is janky behavior I think :)
Maybe it can be fixed by allowing nowait mapping options for some keybindings.
> Maybe it can be fixed by allowing nowait mapping options for some keybindings.
Yes, that's the key. In the past I actually had all mappings created with nowait but that caused a different set of issues. I guess it really just needs to be configurable somehow.
Maybe the "right hand side" mapping can be optionally specified as an table in addition to the existing options of using a string or a function. That table would look like:
...
mappings = {
["d"] = {
target = "name_of_command", -- or function
options = { nowait = true }
}
}
}
...
What do you think about that?
> What do you think about that?
I think this does the trick :)
But I'm not sure if this is the most elegant way, and I can't think of a better way currently
> I'm not sure if this is the most elegant way, but I can't think of a better way currently
Sometimes that's the hardest part of making the plugin. Especially because once I add this in, it can't be changed without breaking someone's setup. Let me know if you come up with anything better. It will probably be a few days before I get to it.
I suggest handling it like packer does:
...
mappings = {
-- can be a table (in this case the command itself is `[1]`)
["d"] = {
"name_of_command", -- or function
nowait = true
},
-- or just a string/function
["D"] = "name_of_other_command" -- or function
}
}
...
This is in main now and will be released soon. See the help changes in the commit for details.
I would argue that nowait should be added by default for 'known' offenders. d is one of them, as there is vim's dd, which in the neo-tree context is useless; hence the user will most likely always mean 'just do d' when pressing d. What do you think?
> I would argue that nowait should be added by default for 'known' offenders. d is one of them, as there is vim's dd, which in the neo-tree context is useless; hence the user will most likely always mean 'just do d' when pressing d. What do you think?
Yes, I suppose you are right. Strange how I never even noticed that until you pointed it out. I would actually want nowait on all of my mappings.
I think the better thing to do is add a mapping_options property just above mappings, which holds the default options that apply to all mappings. Then I can set nowait = true in my config at that level and maybe override it on one or two mappings if needed.
Done:
require("neo-tree").setup({
window = {
mapping_options = {
noremap = true,
nowait = true,
},
mappings = {
["d"] = "whatever", -- this will have `nowait` applied
["<space>"] = { "other_command", nowait = false }, -- this will be normal
...
The new mapping_options property is not in the default config produced by :lua require("neo-tree").paste_default_config().
It wasn't in the default config when this was closed, but it is there now in main. It will be included in the next release. In the meantime, it's still available for use if you add this into your own config.
|
GITHUB_ARCHIVE
|
What is a DNS record, SPF, DKIM, and DMARC?
The Domain Name System, or DNS, is a directory that translates human-readable domain names into the addresses computers use (IP addresses). DNS is also a directory of information such as domain names, email servers, and sending and receiving verifications, as well as a way to verify ownership of a domain – and further security measures.
Through DNS rules, we can determine what IP addresses (individuals, or machines) have legitimate free or controlled access to our domain.
Essentially, websites are like people and their phone numbers. You can look up a website using the domain name such as mailshake.com, but you can also look it up using its corresponding server name IP address (its phone number). Domain names are all related to a particular IP address, but using domain names is a lot easier than remembering a bunch of numbers.
Sender Policy Framework, or SPF, helps prevent spoofing (someone using your domain illegitimately).
SPF is a major component of the email authentication process. Having a properly set up SPF record will dramatically boost your deliverability rates. Some domains and server hosts require incoming emails to come from a domain with a properly set up SPF record. If no SPF is set up, the delivery attempts will be rejected, resulting in bounces (lower deliverability rates).
SPF allows senders to define which IP addresses are allowed to send mail for a particular domain.
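To make that concrete, an SPF policy is published as a DNS TXT record whose mechanisms list the permitted senders. A minimal Python sketch that splits such a record into its mechanisms (the domain and IP range below are documentation placeholders, not real infrastructure):

```python
def parse_spf(txt_record):
    """Split a TXT record's SPF string into its mechanisms."""
    parts = txt_record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

# Hypothetical record: allow one IPv4 range plus an included provider,
# and soft-fail (~all) everything else.
record = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com ~all"
print(parse_spf(record))  # → ['ip4:192.0.2.0/24', 'include:_spf.example.com', '~all']
```

The trailing `~all` (soft fail) versus `-all` (hard fail) choice determines how strictly receivers treat mail from unlisted IPs.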
DomainKeys Identified Mail, or DKIM, is a standard DNS authentication mechanism, meaning it helps prove that you are sending emails from a valid mail server and that your emails are legitimate and should not be labeled as spam.
In this guide, we will walk through setting up a DKIM key for both Office 365 and Google. You'll notice how the mail server generates a key for us. This is the key that the recipient of our emails will use to compare and validate the authenticity of the email and decide if it's safe or not safe – helping boost deliverability rates. DKIM dramatically increases domain reputation when doing email outreach.
DKIM provides an encryption key and digital signature that verifies that an email message was not faked or altered.
Domain-based Message Authentication, Reporting, and Conformance, or DMARC, is a way of connecting SPF and DKIM to ensure that there is a safe and legitimate sender behind the email.
DMARC gives the recipient of the email more control over the emails they receive based on the sender’s domain reputation. The sending side, by applying the right DMARC rules, has more control over their reputation and can protect against spam or phishing when sending out email campaigns.
DMARC unifies the SPF and DKIM authentication mechanisms into a common framework and allows domain owners to declare how they would like email from that domain to be handled if it fails an authorization test. In this guide, we are sending emails that fail authorization to spam rather than just bounce. Given our SPF, DKIM, and DMARC rules we should have very few of these.
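A DMARC policy is also just a DNS TXT record of tag=value pairs. Here is a sketch of a hypothetical record and a small Python parser for it; the `p=quarantine` policy corresponds to sending failing mail to spam rather than rejecting it outright, and the report address is a placeholder:

```python
def parse_dmarc(txt_record):
    """Parse 'tag=value' pairs from a DMARC TXT record into a dict."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            tag, value = part.split("=", 1)  # split on first '=' only
            tags[tag.strip()] = value.strip()
    return tags

# Hypothetical record: quarantine failing mail, send aggregate reports.
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
print(parse_dmarc(record))
```

The `p` tag is what gives the domain owner control over handling (`none`, `quarantine`, or `reject`), and `rua` is where aggregate reports about authentication results are delivered.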
|
OPCFW_CODE
|
Python, Programming language | Introduction to Python programming || Python Tutorial for Beginners – Learn Python in 2021 | Lesson 1
Thank you so much for coming back for another video. In this session, we’ll be looking at the introduction to Python programming. Python is one of the most used programming languages in India and it is basically used for web development, mobile programming, AI, and so many other platforms. A Python program can be used to create Android applications. It can also be used to create desktop applications, and if you’re interested in learning Python programming, then you are much welcome to my YouTube channel. So make sure you subscribe, like, and share this video so that I can be able to make more videos, and this will motivate me a lot. Another thing that I want to say is that in this video I expect you to have watched the previous video, where I show how to set up VS Code. So without wasting much time, let us get started. What I will do is that I will open my computer and go to drive New Volume E. I’ll create a folder here, the folder that we’ll be using to do our programming in this tutorial, so I’ll name the folder python. Then open the python folder, go inside, and at the top here I can just type cmd, that is, Command Prompt, and this one will open up. What I’ll do, since I want to open VS Code: this is a shortcut, or a quicker way, to open VS Code rather than going the long way.
So I’ll type code, then dot, then press Enter. It will automatically open VS Code in that directory. You can close the welcome message. This is how it will look your first time opening it, and what I’ll do is that I will come to the top here and click where it says New File, or just click File and click New File. When you click New File, you will have this option. You can write any Python program, so I can write, for example, print, then a bracket, then quotation marks, then inside here I can write Hello. This is the first program that I have written based on Python’s way of writing programs, so I’ll just save the code, and for me to be able to save the code, I just write the name of the code, hello world. You can save using another name; it does not matter what name you use. Then make sure you have the extension, that is, dot p y. Then you click Enter. When you click Enter, you have saved your Python program successfully. The next thing is that we want to run this Python program that we have written in our text editor. Then you come to the top here, where it says Run, and you click there. You can click Run Without Debugging or Start Debugging, so I’ll click Start Debugging. Then, sorry for that; after that, just come to the right hand corner at the top here.
Click this button, like a play button, then click that so that you can run your Python program. When you click Run, you can see from the terminal that your program has been run, or it has been compiled successfully. Down here you can see it has been compiled successfully and it has printed the text that we entered at the top here. So, for example, what if we print another type of text? For example, here I write print, then open brackets, then inside here, welcome to price. Then click save, Ctrl+S, or you can just come to the top here and click.
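For reference, the file written in this walkthrough boils down to the sketch below (saved with a .py extension, e.g. hello_world.py, and run via Run > Start Debugging; the greeting variable is added here just for illustration):

```python
# hello_world.py -- the first program from the walkthrough.
# Save with a .py extension, then run it from VS Code.
greeting = "Hello"
print(greeting)
```

Adding a second print() call and saving with Ctrl+S, then running again, prints both lines to the terminal.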
|
OPCFW_CODE
|
Booting Linux Mint from USB Drive
New linux user here...
I'm trying to boot my Windows 10 PC into Linux Mint via a USB drive. I have to do some coding work for college, and I like the Linux operating system better. Since I'll only be using it for small projects, I decided that booting from USB would be my best option. I successfully did it using Rufus and a Linux Mint Cinnamon image I downloaded from the internet.
I set all my settings and whatnot, made a couple of C++ files, and downloaded emacs and gfortran through the terminal. I restart my laptop and boot it. Everything is gone. Nothing I did is really important, but how do I keep everything from being deleted? I would like all my data and files to persist. Is there something I did wrong when creating the flash drive? Thanks for your help guys!
If you know how to solve this problem, can you walk me through setting up a new bootable USB to fix this problem??
Where did you work?
In /home?
You have two choices:
Find your main system disk
(if it’s a HDD, it will probably be /dev/sda or /dev/sdb)
and mount it (e.g., on /mnt) and work there.
Enable persistence on your USB drive.
Ok, going into more detail about the first option. I found /dev. In dev, there is sda, and sda[1-5], sdb, and sdb1. However none of these are directories. Should they be? I have an SSD. Second, how do you "mount it" and what does that mean? Once I mount it will anything I change be ok, or will only files saved in that directory be saved? Sorry for the barrage of questions. Thanks for your help!
I've been using Linux on an external hard drive for 2 years now, since I can't afford a laptop. Currently I use an SSD with Xubuntu, but I have used Mint too.
What you need is not so simple; it took me months of trying to understand what is going on:
You will need 3 things: an installation medium, the medium you will install to, and a computer. So, if you will install Mint on USB A, you will need to make an installer on USB B to install onto the fresh USB A.
You need to select your USB drive in the installation page.
You will need to prevent grub, a boot manager, from scanning all your current drives, since your current hard drives' UUIDs are different from those of the computer you will stick your USB into, which causes a 'grub rescue' error.
The installation page looks like this, you need to select advanced installation for that:
That's it; this setup currently saves me, and without it I can't even work or study (my college only has Windows D:<).
Fun fact, I'm sending this answer from my external hard drive, which I booted into my mom's laptop.
|
STACK_EXCHANGE
|
Please share with the community what you think needs improvement with One Identity Active Roles.
What are its weaknesses? What would you like to see changed in a future version?
We would like to see * extension of change-tracking auditing capabilities, especially in relationship to the virtual attributes * more flexibility with group families * integration with cloud database path solutions * better integration with Azure AD; it integrates, but it could be better. These are all things that our tech team has talked to their tech team about. And they're extremely responsive. In addition, there are some features that we think should be included in their next release. We think these things would take them to the next level: the ability to completely force or limit any dynamic group processing to specific servers, change-tracking reporting of virtual attributes, and the ability to use files as inputs to automation workloads. These things have also been talked about. Knowing One Identity, they're probably working on them.
In terms of improvement, it could be made even more user-friendly for administrators when they need to create new workflows and rulesets. It's a bit difficult. I'm not the technical person that uses it, it's my team, but I heard comments that it is quite difficult for them to get to know the product and set up the tasks that are required.
The overall UI needs a refresh; the web interface requires some modernization. We would also like to have a SaaS version of Active Roles. Rather than implementing it in our data center, it would have been nice having a SaaS-delivered solution. The third area for improvement, which is the weakest portion of ARS, is the workflow engine, which was introduced a few years ago. It's slow and not very intuitive to use, so I would like to see improvement there.
When doing a workflow, we would like a bit better feedback on the screen, as we're trying to get it to work. For example, there is a "Find" function that you need set up in a workflow to do some of the automation. It is not the easiest to get a result from those finds when you're trying to do that. In the MMC, they have a couple different types of workflows. In this particular case, we use their workflow functionality to find all of X within the environment, then if you find it, do X, Y, and Z. You can have multiple steps. When you do that search function within that workflow, it's really hard to find out, "Is my search working?" It would be nice if there was some feedback on the screen so you could see if your search is working properly within the workflow. There are other finds, like when you just simply go look in Active Directory, and say, "Find." I absolutely love that we can export the results from that one. It's only the search function within the workflow that could be a little bit better. In version 7.4.1, they added support for SAML authentication to the web pages and the documentation was quite lacking. The documentation for that, in particular, needs a lot of work. I ended up having to work with support over multiple sessions to try and get that to work properly. This was a newer function for 7.4.1, so I had never used it before in the previous versions. When you downloaded their product, the documentation was the same as they had posted on their website. It was the same in both places. It was very broken up and wasn't complete. It needed to be reworded and flow better so somebody new could follow it a bit better. Because even after following all the solutions, even the tech support said to do it differently than what was in the document before we could get it to work. Therefore, I would definitely like to see some work on the documentation for that area.
The ability to send logs to a SIEM would be very beneficial.
For the AAD management feature, it needs to improve the objects that we can manage and the security. I know that they have everything in the road map, so they probably will include everything in a year or a year and a half. I would like them to support a cloud solution. This is important for us. They have it on their roadmap. For now, they only have basic options for cloud-delivered services. We are in the prospect of looking for a customer who wants a cloud-only solution, but will wait for the new features, which will probably be available in one year. They should try to move everything to a web interface. More solutions are trying to use a web interface. They need batch processing, but that is in the road map, and that's okay. They need better language support. While they have a language pack, it's not always available at the same time as the product. Sometimes, when we install it in other countries, they don't have the language pack, and then our customers complain about this.
For what we use it for, there are no additional features it would need.
Active Roles allows policies and there are a lot of example policies that come with it. It has Access Templates and there are a lot of Access Template examples in it. It also has workflows and those are really powerful, but there are no built-in workflows. When it comes to them, it's empty. I would personally love for it to come with ten, 15, or 20 workflows where each achieves a certain task but that are not enabled. I could just look at how each is done, clone them, copy them, modify them the way I want them, and be good to go. Right now we have to invent things from scratch.
* Web console – it should have more customization options in terms of look and feel of the landing page * Workflow policies – Additional policies for folder access provisioning * Bring back attestation – Attestation feature is dropped from ARS. This should be brought back
We all know it's really hard to get good pricing and cost information.
Please share what you can so you can help your peers.
|
OPCFW_CODE
|
Four Things Every ASP .NET Webform Developer Must Avoid At All Costs
It is a common occurrence to come across individuals who regard webforms as an inferior tool for ASP .NET development purposes, thanks to the emergence of such new updates as MVC. However, the professional developers themselves beg to differ. Many of them are still in favor of development using the webforms engine to drive a number of ASP .NET applications.
Irrespective of how popular MVC gets over the course of its lifetime, webforms is considered a mature platform for development purposes and powers a vast number of applications all around the world. As per a recent survey conducted by professionals, three out of four ASP .NET programmers and developers prefer the use of webforms for various development projects.
Would it be a good idea now to raise questions surrounding the popularity of webforms in the development industry?
In spite of its burgeoning popularity in the market, it is not uncommon to find professional programmers following poor practices with regard to ASP .NET webforms development.
Let us discuss some of the problems that ASP .NET programmers face while undertaking their projects in the industry.
Choosing to work with control style properties rather than CSS
The main intention behind the development of ASP .NET was to replicate the experience afforded to programmers when building desktop apps with Visual Basic. With VB6, a developer could drag a control onto the design surface of a given view, and later set various properties such as colors, fonts, dimensions, etc.
The very same practice was later carried over by ASP .NET programmers into the world of professional web development. The right practice when using styles is to define them in external files and reference those files in the head of the document. This allows all styles to be loaded while the page is loading.
Following browser detection rather than feature detection
One of the most frustrating things for website visitors is to load a page in Internet Explorer and be greeted by a blank white page. There are a number of business apps that don’t work on any browser other than IE, especially the obsolete versions. Earlier, ASP .NET programmers would design their apps so that they could detect the browser on the server and produce browser-specific content.
Rather than continuing to focus on server-side browser detection, it is about time professionals made the switch to client-side feature detection with the use of libraries such as Modernizr. Detecting individual features instead of browsers allows the app to adapt gracefully and run smoothly even on the oldest of browsing platforms.
Keeping EnableViewStateMAC set to false
EnableViewStateMAC, by default, is set to true. Under no circumstance whatsoever should it be set to false, as doing so makes the web pages vulnerable to potential tampering.
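In Web Forms this setting lives in configuration; the fragment below is an illustrative sketch of keeping the secure default in web.config (note that newer ASP .NET versions are reported to enforce view state MAC checking regardless of this attribute):

```xml
<configuration>
  <system.web>
    <!-- Leave view-state MAC checking on so tampered view state is rejected -->
    <pages enableViewStateMac="true" />
  </system.web>
</configuration>
```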
Allowing client requests to run for a long duration
Long-running requests ruin the user experience on your website and can also cause a number of browser performance issues. Where they are unavoidable, such requests should be migrated to an AJAX call, or make use of tools such as SignalR to leverage WebSockets. This way, you can return responses to your clients quickly and let the server process the remaining work, smoothing out the working or shopping experience for the visitor.
Let us hope that these points discussed above will help in the betterment of the community of ASP .NET programmers. You will get to understand the various things that are an integral part of every ASP .NET development project.
|
OPCFW_CODE
|
Many groups at Emory handle sensitive information as part of their daily business. To help protect sensitive information that has been entrusted to Emory, the institution makes disk encryption tools available to Emory schools and business units free of charge, and also requires encryption for all Emory owned portable computers as well as for desktop computers in certain circumstances. Please see Emory's Disk Encryption Policy for more information.
If you have questions related to full disk encryption, please contact your local support, or LITS Enterprise Security via a support ticket, email security[@]emory[.]edu, or by calling 404-727-6666.
Approved Full Disk Encryption Offerings
Windows - BitLocker with the MBAM (Microsoft BitLocker Administration and Monitoring) client installed and configured to enterprise standards. BitLocker encryption without the MBAM client is not sufficient to comply with the disk encryption policy.
Mac OS - FileVault 2 with Emory's FileVault Management Tool installed. Running FileVault without the management tool is not sufficient to comply with the disk encryption policy.
Linux - LUKS and dm-crypt, which are set up automatically by most popular distributions that support full-disk encryption - see below for instructions. You should use an AES cipher with a key size of 512 bits or higher. You should also add a recovery key to your volume.
Other disk encryption solutions are not approved to meet the requirements of the disk encryption policy.
|Operating System||FileVault 2 w/ Management Tool||BitLocker/MBAM||LUKS/dm-crypt|
|Windows 7 and above*|| ||X|| |
|Mac OS 10.7 and above||X|| || |
|Linux|| || ||X|
*- Enterprise and Ultimate Editions only
- Emory Disk Encryption Policy
- Disk Encryption Implementation FAQs
- MBAM/BitLocker Getting Started Guide for IT Support
- MBAM/BitLocker Troubleshooting Guide for IT Support
- Emory FileVault Management Tool - Only for Mac OS X 10.7 (Lion) and above
- Linux distribution full disk encryption guides. Some of these are not official documentation from the vendor, and are therefore provided for convenience only - use at your own risk. Also note that, by default, these distributions may not all meet our minimum standard of using an AES cipher with a 512-bit key size! It is your responsibility to ensure that the solution you use is configured correctly. You should also add a recovery key to your volume.
Encryption of USB Thumbdrives
Some USB thumbdrives are specifically designed to address the concerns of storing sensitive information by using built-in hardware encryption. These drives are more expensive, but much cheaper than dealing with the repercussions of losing sensitive information. For situations where it is necessary to store sensitive information on a thumbdrive, Emory's Office of Information Technology has approved Kingston DataTraveler Vault - Privacy Edition thumbdrives (part number DTVP30), or IronKey S1000 thumbdrives (part number IKS1000B) for this purpose. These drives use hardware-based encryption, ensuring that all data stored on the drive is encrypted. This removes doubts of whether encryption software was installed and configured correctly, and if a particular drive was encrypted when it was lost. No other thumbdrives are approved for storing sensitive Emory data.
These drives can be purchased through CDWG for institutional purchases.
|
OPCFW_CODE
|
Stormstress - You Can't Hurt Me Now (Official Music Video)
Stormstress's second single, “You Can’t Hurt Me Now,” reflects on a sweet situation gone sour and the true empowerment that comes with not only realizing that you’ve been mistreated and that it’s time to go, but also summoning the courage to walk out the door, slam it shut, & lock it behind you. ▶︎ Subscribe to Stormstress on YouTube: https://www.youtube.com/channel/UCVZqj2GwGo_vk2WsncizAjA?sub_confirmation=1 ▶︎Join the Stormstress VIP Club by supporting the band on Patreon: https://www.patreon.com/stormstress ▶︎Follow Stormstress Online: Instagram: https://www.instagram.com/stormstressband Facebook: https://www.facebook.com/stormstressband Twitter: https://twitter.com/stormstressband Website: https://www.stormstressband.com/ Spotify: https://open.spotify.com/artist/7HoAFiMxaIYBjCczZebZyV ▶︎Purchase and Download "You Can't Hurt Me Now:" Bandcamp: https://stormstress.bandcamp.com/track/you-cant-hurt-me-now iTunes: https://music.apple.com/us/album/you-cant-hurt-me-now-single/1539092392 "You Can't Hurt Me Now" (Audio Credits) Written and Performed by Stormstress Produced by Liz Borden and Sarah Fitzpatrick Recorded by Doug Batchelder at The Den Studios of North Reading "You Can't Hurt Me Now" (Video Credits) Filmed and Edited by Fuel Heart Productions https://www.fuelheartproductions.com/ Drone Footage Filmed by Troncat FPV https://troncatfpv.com/ Vehicle, "Roxy the Audi," Provided by Patrick Crean Stormstress Logo design by Phantom Forge Visuals @phantomforgevisuals LYRICS: Opposites attract But they’re not built to last Drew me in so good Then wanted to hate you so bad Didn’t know who I was Picking the pieces of the floor Finding the shattered remains Of who I was before Whoaaa Never running out of reasons to cry Found out too late Who you really were inside Now I’m shutting the door And locking you out And you can’t hurt me now (I won't take another shot!) 
No, you can’t hurt me now You wore down my armor With the hits you took at me You were a bad, bad habit And I had to break free Whoaaa yeah yeah yeah Never running out of reasons to cry Found out too late Who you really were inside Now I’m shutting the door And locking you out And you can’t hurt me now (I won't take another shot!) No, you can’t hurt me now Go! [Breakdown] Baby, you and me We were a broken ship on a troubled sea Baby, can’t you see I am the captain now Get your grip off my wheel Never running out of reasons to cry Found out too late Who you really were inside Now I’m shutting the door And locking you out And you can’t hurt me now (I won't take another shot!) No, you can’t hurt me now (Come on and give me what you got!) I said, you can’t hurt me now (I won't take another shot!) No, you can’t hurt me now! #Stormstress #YouCantHurtMeNow *Special thanks to our top Patron, Alex Robinson* Stormstress - You Can't Hurt Me Now (Official Music Video). ©️2020 Stormstress
Time(Circles) -KILLKILL Official Music Video
Time(Circles)-KILLKILL Official Music Video Appears Courtesy of Outpost 31 Studios Directed by: Larry Chin at OP31ST To hear more from Outpost 31 Studios head to our bandcamp: https://outpost31studiosokc.bandcamp.com And our website: https://www.outpost31studios.com
Liz Borden- Dancing On The Moon (Official Video)
I am happy to unveil my new video, Dancing On The Moon. The title track to my new album. Video, directed and edited by Isaac Valiente. He is 17 years old! This kid is going to be huge! So far he has edited the Cherie Currie Roxy Roller quarantine video and now my video. Big things coming your way Isaac. Cinematography by Sarah Fitzpatrick. Audio recorded at The Den Studios, North Reading, Ma. Engineered by Doug Batchelder. Performed by Liz Borden, Danny Modern, Corey Spingel and Doug Batchelder. Justine L Batchelder
"Roxy Roller" ft. Cherie Currie, Suzi Quatro, Nick Gilder - Quarantine Video
"Roxy Roller" from Cherie Currie's latest release 'Blvds Of Splendor' Available now on Blackheart Records Digital / Streaming https://blackheart.com/blvds Vinyl https://blackheart.com/featured "Roxy Roller" written by N. Gilder / J. McCulloch Performance video mixed / mastered by Jake Hays & Cherie Currie All parts recorded by at home by players Phil Leavitt recorded at Station House Drum recording engineer: Mark Rains Drum video by Genevieve Leavitt Edited by Isaac Valiente https://blackheart.com https://facebook.com/blackheartrecordsgroup https://twitter.com/_blackheart https://instagram.com/blackheartrecords https://therunaways.com https://facebook.com/therunaways https://twitter.com/therunaways https://instagram.com/therunawaysofficial Follow Cherie Currie: https://www.instagram.com/cheriecurrieofficial https://www.twitter.com/CherieCurrie3 https://www.facebook.com/cheriecurrieofficial https://www.cheriecurrie.com/ https://www.chainsawchick.com/ 'Cherie Currie' Merchandise Available: https://www.ebay.com/usr/cheriecurrie... Watch 'The Runaways' on Netflix
The Ben Cote Band - "Come Fly With Me" OFFICIAL MUSIC VIDEO
Official music video for The Ben Cote Band! Video by Robbie T. Films Starring: Ben Cote - Guitar, Vocals Takaki Nakamura - Drums, Backing Vocals Sam Mogel - Bass, Backing Vocals Guest Appearances by Marc Abouaf Steven Hebert Directed by Bobby Gendron Filmed on location at Lawrence Airport in Massachusetts Special thanks to Marc Abouaf of Harvard Air Taxi, LLC; Falcon Air; and Michael P. Miller (c) The Ben Cote Band 2019 www.bencoteband.com www.instagram.com/bencoteband www.facebook.com/bencoteband
ELLIE GREENWICH Tribute / Darlene Love "Why Do Lovers"
In a never-released 1984 interview, legendary songwriter Ellie Greenwich talks about writing with Phil Spector against the background of a rare recording of Darlene Love singing "Why Do Lovers Break Each Others Hearts"! From the very first performance of "Leader of the Pack", January 19, 1984 at the downtown New York cabaret club, the Bottom Line. A year later the show would open on Broadway and receive a Tony Award nomination for Best Musical. Interview copyright 1984 and 2013.
|
OPCFW_CODE
|
Like to dangerous side effects. Your health problems all help is used improperly at slowed breathing, once help prevent suicide. Learn the way, depressed breathing, you overdose from ambien is best for dependence and valium and overdose. If you witness a deep sleep apnea, you use ambien is indeed possible to sleep apnea, sold under the person by:. Mixing drugs or anyone who has access to slow down brain activity. Even fatal. Although Click Here is a suspected ambien as well as what treatment looks like to slow down brain activity. Learn the elderly typically for the medicine is a sedative used to dangerous side effects. Are wondering if you struggling with ambien zolpidem in ambien overdoses can overdose on twitter. Even fatal overdose how much amount of time. Even fatal. Melatonin overdose may occur when the elderly typically for you can follow him on the person by working on it. Learn the medicine is indeed possible and overdose from ambien. Like ambien zolpidem in higher doses or overdose how to treat sleeping problems in higher doses than prescribed. We can produce a definite yes.
Like to be awakened. Ambien is best for the way, there is indeed possible to treat sleeping pills can overdose. Chyna likely overdosed on it. Like to do not click to be fatal overdose. Melatonin overdose include sedation, depressed breathing, once help prevent suicide. Can call their local poison control center at higher doses. Even fatal overdose. Chyna likely overdosed on it more frequently or at slowed breathing, you witness a large amount of side effects. Chyna likely overdosed on the symptoms of to ambien overdose of to od? Even fatal overdose how much amount of ambien. Like to sleeping problems in higher doses than prescribed, you cannot be fatal.
If you are you build up a short period of to od? You can all orders. If you struggling with ambien is often intentional, there is a limited potential for you use ambien? Like to od? Melatonin overdose, you. Your doctor will determine which you build up a possibility that you witness a number of time. Mixing drugs or frequency than prescribed, you cannot be fatal. When you struggling with ambien insomnia. Us residents can all help prevent suicide.
Can u overdose on ambien
Mixing drugs or side effects of ambien overdose ambien klonopin pain taking with ambien can produce a zolpidem overdose. A poison control center or side effects including:. Hello, vomiting, and what will i feel? Melatonin overdose on it can u on ambien overdose of tolerance. Melatonin overdose on ambien cr special reduced price, dizziness, recently i was prescribed. Just took 20 mg of an ambien overdose of the drug. Just took 20 mg of ambien is used improperly at higher doses than prescribed.
Can you get high on ambien
Teen ambien if you want to create a common sleep aid that encourages drug that nightly just to get high. Yep: ambien. Can do one, but it can interact with ambien and zolpidem on ambien. From acute insomnia. When taking high as hell. From a target of medication might be common sleep aid that has helped to get whether you high on the high ambien. Yep: ambien and get married? Abuse, you snort ambien.
Overdose on ambien
Can occur when combining these two substances. Chicago, and causes of ambien overdose occurs. I drink heavely and causes of an ambien habit forming? Someone had overdosed on methadone. Another risk of ambien do if you are using it more frequently or on ambien side effects including its uses, you overdose, heart failure, calif. J toxicol clin toxicol clin toxicol clin toxicol clin toxicol. Chicago, you are using it if your loved one has an overdose is often intentional, you can be taken seriously. However, sleep drug that had been particularly interested in ambien overdoses can be dangerous side effects; side effects; overdose. The signs and neither is also i was determined that the brand name ambien do not a significant overdose? Although ambien can indicate an overdose is a significant overdose include excessive sleepiness can cause mild to do for sleep. It if you?
|
OPCFW_CODE
|
Add socket sendfile function
Add socket function that sends all content of a file through the socket.
I'm so sorry, I should've checked before but this isn't actually how the sendfile method is implemented. It's implemented in the Python socket.py module, and it calls os.sendfile(), which is a low-level function. If you'd like to take a shot at that you can try it (cpython version linux signature macos signature) but otherwise this isn't really correct; hopefully you got a bit familiar with the codebase at least. Sorry again :persevere:
Oh, no problem. I will try to implement it using low-level functions. I saw that if the OS doesn't have sendfile, it uses send like I did; this will occur on Windows. So in our implementation, we will have a compilation macro that will choose the linux signature or mac signature for os.sendfile, and in socket.sendfile we will use it, right? Is there any example of doing this compilation macro?
So in our implementation, we will have a conditional compilation that will choose linux signature or mac signature to os.sendfile and in socket.sendfile we will use it right?
Yup, pretty much. We don't even need to edit socket.py since it already checks that os.sendfile exists and uses it to provide socket.sendfile:
https://github.com/RustPython/RustPython/blob/3c9ad2378bd18bb2febfd29eb6e4ff14a4c21c9d/Lib/socket.py#L343
For examples, you can look at other functions in os.rs, e.g. getgroups is implemented separately in macos than it is on linux. For sendfile, you'd probably want something like this:
#[cfg(any(target_os = "linux", target_os = "otheros..."))]
#[pyfunction]
fn sendfile(out_fd: i32, in_fd: i32, offset: ...) -> PyResult<...> {
...
}
#[cfg(any(target_os = "macos", target_os = "otheros..."))]
#[pyfunction]
fn sendfile(out_fd: i32, in_fd: i32, ..., headers: PyObjectRef, trailers: PyObjectRef, flags: i32) -> PyResult<...> {
...
}
You probably want to use the nix crate's sendfile wrappers, linux and macos; that would mean you wouldn't have to deal with the raw libc APIs yourself.
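At the Python level, the behavior discussed in this thread (socket.sendfile() preferring os.sendfile() where the platform has it, and falling back to plain send() otherwise) can be exercised like this; a rough sketch with illustrative names, assuming a platform where socket.socketpair() is available:

```python
import os
import socket
import tempfile

def send_file_over_socket(sock, path):
    """Stream the whole file at `path` through `sock` using socket.sendfile(),
    which uses os.sendfile() where available and falls back to send()."""
    with open(path, "rb") as f:          # must be a regular file in binary mode
        return sock.sendfile(f)          # returns the number of bytes sent

# Usage sketch with a local socket pair:
left, right = socket.socketpair()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello sendfile")
    path = tmp.name

sent = send_file_over_socket(left, path)
left.close()                             # EOF for the reading side

received = b""
while True:
    chunk = right.recv(1024)
    if not chunk:
        break
    received += chunk
right.close()
os.unlink(path)
```

Because socket.sendfile() already handles the fallback, the Rust side only needs to expose os.sendfile with the right per-platform signature, as described above.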
Nice! I implemented the os.sendfile function; now I have to use this function in socket.sendfile?
To fix the "IntoPyNativeFunc is not implemented" error, you can add another macro call after line 642 in vm/src/function.rs
I added a commit to skip the test that's failing because of an unrelated issue with os.open, I think otherwise this is good to merge! (if CI doesn't fail for some reason)
Thanks! I loved contributing to this project. What do you think are the next steps to keep contributing to the project?
Looks great, thanks for contributing! re: continue contributing, one way you can do it is to find tests that are expectedFailures/skipped and fix them, or else just try to do stuff in rustpython that you might do usually in python and see if it works (heads up, anything that uses numpy isn't going to happen for a while, since we don't have a c-api)
This article may be helpful for test work: https://rustpython.github.io/guideline/2020/04/04/how-to-contribute-by-cpython-unittest.html
|
GITHUB_ARCHIVE
|
Don’t mind the pretentious sounding title. This is going to be a nuts and bolts entry that attempts to lay down some of the foundations for thinking about strategy game design….at least how I think about strategy game design. Being an amateur systems engineer and a formally trained national security policy analyst I like to create diagrams and “analytical frameworks.” Here is some of my best work:
This homespun illustration points out the three major components of the strategy game system. The players (human and AI) are depicted outside the system box. They function as the inputs to the system. They formulate a strategy and then press keys and click the mouse to input their commands. Their commands are interpreted by a rule set. The rule set is far more than a command interpreter however. It also specifies how data within the game state space is processed. The data is represented in the diagram by the box with the Map/Agents label.
Armageddon Empires is built on this model. There are basically two types of objects in the game. The first is game data objects, which have some specific set of attributes associated with them and must be able to be saved. The structure is hierarchical as well: a Player Controller Object contains Player Objects, and Player Objects contain an Army Controller Object which in turn contains Army Objects. An object, for you non-programmers, is just a collection of data elements and operations (functions) that all work together. For example, a Car object implemented in software would have data for its number of passengers, max speed, etc., and the operations would be start, accelerate, and stop. The diagram above doesn’t always translate perfectly into computer code, however. I’m not a great programmer, so a lot of the data objects have rules built into them. The other type of object I use is an interface object. These objects display menus, create the map, and retrieve and store the data of the game data objects. The rules of the game emerge from their implementation. The battle module in Armageddon Empires is a perfect example: the way the cards are presented, the structure of taking turns attacking, and the presentation and processing of the die rolls are all accomplished by interface objects.
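The Car example above can be sketched as a minimal class; the attribute and method names here are illustrative, not taken from the game's code:

```python
# A minimal sketch of a "data elements plus operations" object.
class Car:
    def __init__(self, passengers, max_speed):
        self.passengers = passengers   # data element: number of passengers
        self.max_speed = max_speed     # data element: top speed
        self.speed = 0
        self.running = False

    def start(self):                   # operation
        self.running = True

    def accelerate(self, amount):      # operation: speed is capped at max_speed
        if self.running:
            self.speed = min(self.speed + amount, self.max_speed)

    def stop(self):                    # operation
        self.speed = 0
        self.running = False
```

A game data object would follow the same shape, with save/load support layered on top.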
Notice the little Victory Conditions gauge coming out of the top of the game system box. The idea is that the state of the data is measured and some arrangements are better than others for the players. You win a game of Armageddon Empires by creating a game state where your opponents no longer have strongholds on the map. It’s pretty funny to think about what you are really doing when you play a game of Civilization for example (or any game for that matter). You spend hours staring at a monitor pressing buttons and clicking a mouse to arrange 1’s and 0’s into a favorable pattern that is massaged in a microprocessor and stored long term on a hard disk. You do all this to achieve a victory condition that can be as mundane as a screen that tells you something like “You have crushed your enemies and now rule the wasteland.” You don’t even get a food pellet!
The fun (or burden) of being a strategy game designer is that you are responsible for creating a fully functional data space and rule set. In other words you have to build a universe. How well the rules perform on the data set and how “fun” it is to create a winning pattern of 1’s and 0’s is the standard by which your creation will be judged. I’ll leave an exegesis on “fun” to the pit fighters at QT3 but I know it when I see it.
Contrast the game designer’s role with that of the national security policy analyst’s role. The analyst is presented with the rule set and game state structure as a given. The world is as it is…anarchic, organized around nation states and unsettled by the recent emergence of non-state actors. This next diagram explores some more of the structure that resides in the inputs. It’s the domain of the general, admiral and national security policy maker.
Those squiggly things are brains. They make plans based on what they know about the rules and data and what they think they know about other brains. The text next to the bottom brain is what is known as an analytical framework. You are looking at an abridged version of the best thing that I learned at Harvard’s JFK School of Government. It looks rather simple but it is in reality a very effective tool. The basic International Security Policy class (ISP 201) was focused exclusively around it…an entire semester. It was taught to me by Ambassador Robert Blackwill who was recently ambassador to India and also played a key role in the reunification of Germany. I’ll examine it in more detail in a future entry. The reason I want to take a look at it is because I have found it very helpful in using it as a tool to not only understand current events but also to create rulesets and data spaces. Analyzing in a formal way the manner in which players will formulate strategy to interact with your design is extremely beneficial. It’s also very useful when initially conceptualizing the AI before you have done any observation of how humans will actually play the game. More later…
|
OPCFW_CODE
|
Creating a diminishing value math formula in Excel?
Consider the following example:
One (1) farm has the production value of 1. But for each 10 farms, the production is reduced by half. So what is the production value for 30 farms?
Calculating manually is quite straight forward;
For the first 10 farms -> 10 production
For the second 10 farms -> 10 * 0.5 = 5 production
And for the last 10 farms -> 10 * 0.25 = 2.5 production
Result: 17.5 production
But creating a formula isn't so trivial;
Searching for similar problems, I found this excellent post by the user Scott: https://superuser.com/a/1527430/2156837
Using his formula: R = V * (1 – d^N) / (1 – d)
Where V is the base value, d is the diminishing rate, and N the number of iterations.
The only detail, from my previous example, is that I don't have the number of iterations. But it can be easily calculated by dividing the total (30) by the base number (10).
So it goes like this:
R = 10 * (1 - 0.5^(30/10)) / (1 - 0.5)
R = 17.5
It works! But there is just one problem:
Assume the same example, but for 35 farms instead.
For the first 10 farms -> 10 production
For the second 10 farms -> 10 * 0.5 = 5 production
The following 10 farms -> 10 * 0.25 = 2.5 production
And for the remaining 5 farms -> 5 * 0.125 = 0.625 production
Result: 18.125
By using the formula:
R = 10 * (1 - 0.5^(35/10)) / (1 - 0.5)
R = 18.23223305
Conclusion: the formula does not work if the number of iterations (N) is not an integer.
Can any math magicians tell me what went wrong or how to fix it?
Divide it into two formulas that you sum: =10*(1-0.5^(QUOTIENT(A1,10)))/(1-0.5)+MOD(A1,10)*((0.5^(QUOTIENT(A1,10))))
=10*(1-0.5^INT(A1/10))/0.5+MOD(A1,10)*0.5^INT(A1/10)
Or even =LET(x,0.5^INT(A1/10),10*(1-x)/0.5+MOD(A1,10)*x)
To respond to "what went wrong": I'm not sure of terminology, but I believe your actual problem is a step function whereas the formula is an approximation of the step function (i.e. a smooth curve). If you plot your problem definition vs the formula you should be able to visualize what I am saying.
Dividing into two formulas did the trick. Thank you guys. I understand now. But how do I mark the answer? There is no checkmark on the left. I can't even upvote.
The above are all comments and not answers. I have made an answer of my comment
Divide it into two formulas that you sum:
=10*(1-0.5^(QUOTIENT(A1,10)))/(1-0.5)+MOD(A1,10)*((0.5^(QUOTIENT(A1,10))))
The first half of the formula calculates the total production for complete steps, the second one does the same, but for the last, partially completed step of the function.
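The same two-part computation can be checked outside the spreadsheet; here is a rough Python equivalent of the formula, where QUOTIENT becomes // and MOD becomes % (function and parameter names are my own):

```python
def production(farms, step=10, d=0.5):
    """Total production when each farm yields 1 and output halves
    for every completed block of `step` farms."""
    full = farms // step        # QUOTIENT(A1, 10): completed blocks
    rest = farms % step         # MOD(A1, 10): farms in the partial block
    factor = d ** full          # per-farm rate reached after the full blocks
    # geometric series for the full blocks, plus the partial block
    return step * (1 - factor) / (1 - d) + rest * factor

# production(30) -> 17.5, production(35) -> 18.125, matching the manual results.
```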
|
STACK_EXCHANGE
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#from __future__ import unicode_literals
''' NB: The sub-dict module design pattern is used here to merge over defaults in the module.
but merging of cfg[<module>] keys over module.default-cfg keys only occurs at the module level
Details:
If a top level key is a module name, then a subdict can be provided under that key,
and there can be a default-config dict hardcoded in the named module.
A subkey under modulename will override any key with same name in default-config.
EG for config[<module>] the resulting cfg will be a merge of its subkeys as follows
- all keys that are only in config[<module>]
- all keys that are only in module.default_config
- plus those keys in both, but with values from config[<module>] overriding
This works because the module (generally) calls webapp2.Config.load_config(), which uses dict.update()
to replace each default val with the subdict val. However, the implementation of dict.update() is not recursive.
Suppose a subdict eg config[<module>] has a val1 which itself is a dict, config[<module>][<val1>]
eg config ['webapp2_extras.jinja2']['environment_args']
and suppose there is another val2 also called 'environment_args' in the default config,
then val1 completely replaces val2 - i.e. within a sub-sub-dict, there's no merging or updating of elements.
This could however be done programmatically but it is complicated by the fact
that a) in many cases, you might want to keep only some of the default values
and b) there are also lists to consider - whether replace or append these and what order to use?
and c) we don't want to tinker with the code in webapp2, some of which calls webapp2.Config.load_config()
Therefore for a sub-sub-dict, when we want some/all of its default-values, we do this manually,
by copying any default-values that we want to keep, from the module.
eg below for
config ['webapp2_extras.jinja2'][''environment_args']
'''
from jinja_boot import set_autoescape
from collections import namedtuple
# DelayCfg = namedtuple('DelayCfg', ['delay' #
# ,'latency' # deciSeconds - maximum time for network plus browser response
# ]) # ... after this it will try again. Too small will prevent page access for slow systems. Too big will cause
#Todo: set latency value at runtime from multiple of eg a redirect
RateLimitCfg = namedtuple('RateLimitCfg',['minDelay' #
,'lockCfg' #
])
LockCfg = namedtuple('LockCfg' , ['delayFn' # lambda(n) - minimum time between requests in deciSeconds as function of the retry number.
,'maxbad' # number consecutive 'bad' requests in 'period' ds to trigger lockout
,'period' # seconds - time permitted for < maxbad consecutive 'bad' requests
,'duration' # seconds - duration of lockout
,'bGoodReset'# boolean - whether reset occurs for good login
])
cfg={'DebugMode' : True #if True, uncaught exceptions are raised instead of using HTTPInternalServerError.
# so that you get a stack trace in the log
#otherwise its just a '500 Internal Server Error'
,'locales' : []
# seconds- 0 means never expire
# 'maxAgeRecentLogin' : 60*10
,'maxAgeSignUpTok' : 60*60*24
,'maxAgePasswordTok' : 60*60
,'maxAgePassword2Tok': 60*60
#,'maxIdleAnon' : 0 #60*60
,'maxIdleAuth' : 15 #60*60
,'MemCacheKeepAlive' : 500 # milliseconds - to refresh MemCache item, so (probably) not be flushed
,'Login':RateLimitCfg ( 7 # deciSeconds - minimum time between requests.
# delay or lock on repeated requests from the same ipa and the same ema
, {'ema_ipa':LockCfg( lambda n: n**2 # *10 # 1 2 4 8 16...
, 3 #maxbad
, 60*1 #period
, 60*3 #duration
, True #bGoodReset
)
                       # delay or lock on repeated requests from the same ema but different ipa's
,'ema' :LockCfg( lambda n: (n-1)*3 # 0 3 6 9 12...
, 3 #maxbad
, 60*1 #period
, 60*3 #duration
, True #bGoodReset
)
                       # delay or lock on repeated requests from the same ipa but different ema's
,'ipa' :LockCfg( lambda n: (n-1)*5 # 0 5 10 15 20 ...
, 3 #maxbad
, 60*1 #period
, 60*3 #duration
, True #bGoodReset
) } )
,'Forgot':RateLimitCfg( 5 # deciSeconds- minimum time between requests.
, {'ema_ipa':LockCfg( lambda n: n**2 # *10 # 1 2 4 8 16...
, 3 #maxbad
, 60*1 #period
, 60*3 #duration
, True #bGoodReset
)
,'ema' :LockCfg( lambda n: (n-1)*3 # 0 3 6 9 12...
, 3 #maxbad
, 60*1 #period
, 60*3 #duration
, True #bGoodReset
)
,'ipa' :LockCfg( lambda n: (n-1)*5 # 0 5 10 15 20 ...
, 3 #maxbad
, 60*1 #period
, 60*3 #duration
, True #bGoodReset
) } )
,'pepper' : None
,'recordEmails' : True
,'email_developers' : True
    ,'developers'        : (('Santa Klauss', 'snowypal@northpole.com'),) # tuple of (name, email) pairs
#add-to/update the default_config at \webapp2_extras\jinja2.py
,'webapp2_extras.jinja2': { 'template_path' : [ 'template' ]
, 'environment_args': { 'extensions': ['jinja2.ext.i18n'
,'jinja2.ext.autoescape'
,'jinja2.ext.with_'
]
# , 'autoescape': set_autoescape
}
}
}
cfg['Signup'] = cfg['Forgot']
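For reference, a standalone sketch (plain Python, reusing the same lambdas as the config above) of the per-retry delay schedule each LockCfg's delayFn produces for retry numbers 1-5, in deciseconds:

```python
# Same delay functions as in cfg['Login'] above (ema_ipa, ema, ipa).
ema_ipa = lambda n: n ** 2        # quadratic backoff per retry
ema     = lambda n: (n - 1) * 3   # linear backoff, free first retry
ipa     = lambda n: (n - 1) * 5   # linear backoff, steeper slope

print([ema_ipa(n) for n in range(1, 6)])  # [1, 4, 9, 16, 25]
print([ema(n) for n in range(1, 6)])      # [0, 3, 6, 9, 12]
print([ipa(n) for n in range(1, 6)])      # [0, 5, 10, 15, 20]
```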
|
STACK_EDU
|
As I noted in this earlier answer to a related question, the rotor permutations in your example are written using a shorthand notation that only shows the output alphabet of the permutation, with the implicit assumption that the input alphabet is always
ABCDEFGHIJKLMNOPQRSTUVWXYZ. That is to say, your rotor descriptions:
EKMFLGDQVZNTOWYHXUSPAIBRCJ | Rotor I wiring
AJDKSIRUXBLHWTMCQGZNPYFVOE | Rotor II wiring
BDFHJLCPRTXVZNYEIWGAKMUSQO | Rotor III wiring
actually describe the following permutations:
ABCDEFGHIJKLMNOPQRSTUVWXYZ
↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ | Rotor I
EKMFLGDQVZNTOWYHXUSPAIBRCJ
ABCDEFGHIJKLMNOPQRSTUVWXYZ
↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ | Rotor II
AJDKSIRUXBLHWTMCQGZNPYFVOE
ABCDEFGHIJKLMNOPQRSTUVWXYZ
↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ | Rotor III
BDFHJLCPRTXVZNYEIWGAKMUSQO
Also, there are two other issues with your example encryption:
You're applying the rotor permutations in the wrong order; the rotors are conventionally listed from left to right, but the wiring from the input/output to the reflector goes from right to left. So in your example, rotor III should actually be applied first, then rotor II and finally rotor I (and then the reflector and the inverse rotors in the opposite order).
The rotors are stepped before each letter is encrypted. So if you start with all rotors in the A position, when the first letter of the message is encrypted the rightmost rotor (i.e. rotor III here) will have already rotated into the B position. This means you have to shift the letter right by one step in the alphabet before it enters the rotor, and back left by one step after it leaves the rotor.
Put all together, the correct path of the first letter is:
A ⤑ B → D ⤑ C (rotor III in position B)
C ⤑ C → D ⤑ D (rotor II in position A)
D ⤑ D → F ⤑ F (rotor I in position A)
F ↔ S (reflector B)
S ⤑ S → S ⤑ S (rotor I in position A, reverse)
S ⤑ S → E ⤑ E (rotor II in position A, reverse)
E ⤑ F → C ⤑ B (rotor III in position B, reverse)
For each rotor, there are effectively three permutations to apply: an alphabet shift to the rotor's current position (i.e. 0 steps for position A, 1 step for position B, 2 steps for position C, etc.), the fixed rotor wiring permutation and finally the reverse shift back from the rotor's position.
In the list above, I've used a dotted arrow ⤑ for the shifts and a solid arrow → for the rotor wiring permutations (and a two-headed arrow ↔ for the reflector).
In this example, two of the three rotors are still in position A, so for them, the shifts have no effect, but I've shown them anyway for completeness. Also, just by chance, the wiring in rotor I happens to map the letter
S to itself.
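The shift → wiring → reverse-shift pass described above can be sketched in Python. This is a hypothetical helper, not code from the answer; the wirings are the Rotor I-III tables quoted earlier plus the standard Reflector B table, and the first plaintext letter is taken to be A, consistent with the intermediate letters listed above:

```python
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
ROTOR_I = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"
ROTOR_II = "AJDKSIRUXBLHWTMCQGZNPYFVOE"
ROTOR_III = "BDFHJLCPRTXVZNYEIWGAKMUSQO"
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # standard Enigma I Reflector B

def rotor(c, wiring, pos, reverse=False):
    """Shift into the rotor's position, apply the wiring (or its inverse),
    then shift back out: the three permutations described above."""
    i = (ALPHA.index(c) + pos) % 26                                    # shift in
    o = wiring.index(ALPHA[i]) if reverse else ALPHA.index(wiring[i])  # wiring
    return ALPHA[(o - pos) % 26]                                       # shift out

# Path of the first plaintext letter 'A'; rotor III has already stepped
# from A to B (i.e. position 1) before the letter is encrypted.
c = rotor('A', ROTOR_III, 1)              # -> 'C'
c = rotor(c, ROTOR_II, 0)                 # -> 'D'
c = rotor(c, ROTOR_I, 0)                  # -> 'F'
c = REFLECTOR_B[ALPHA.index(c)]           # -> 'S'
c = rotor(c, ROTOR_I, 0, reverse=True)    # -> 'S'
c = rotor(c, ROTOR_II, 0, reverse=True)   # -> 'E'
c = rotor(c, ROTOR_III, 1, reverse=True)  # -> 'B'
print(c)  # B
```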
|
OPCFW_CODE
|
I made a tutorial showing “How to Make an Ubuntu Laptop a WiFi Hotspot” before. That post was written for Ubuntu 10.04. Since Ubuntu 12.04 LTS (Precise Pangolin) was released, there has been a slight problem achieving the same result by following the setup from that tutorial.
In this post I will show how to set up Ubuntu 12.04 as a hotspot (ad-hoc access point) to share a wired internet connection with other WiFi-enabled devices, e.g. laptops, iPad, iPhone, iPod touch, smartphones. Do not worry, the setup is quite easy to follow step by step with my captured screenshots.
First of all, what you need are:
- A laptop with WiFi hardware, with the WiFi function turned on.
- The laptop is connected to the internet via a wired connection (LAN).
OK, here we go.
- 1. Right-click the network manager icon on the laptop with wireless hardware (these days most laptops have wireless modules) that is connected to the internet by wired network (LAN), then choose “Edit Connections…” and click it (Figure 1).
- 2. Click the “Wireless” tab and click the “Add” button, then fill in the blanks as you like: your “Connection name” and “SSID”, then set the “Mode” option to “Ad-hoc”; the others need no change, as shown in Figure 3.
- 3. The next step is to set up the key for the connection: click “Wireless Security”, choose “WEP 40/128-bit Key (Hex or ASCII)”, and type your own “Key” if you do not want the whole world sharing your WiFi. It is better to make the key 5 characters long, which I tried successfully (other lengths may not work) (Figure 4).
- 4. Turn to “IPv4-Settings”, choose the option “Shared to other computers” for “Method”, as shown in Figure 5.
- 5. Do not make any change for “IPv6 Settings”, OK, click “Save” to finish the setup work, as shown in Figure 6.
- 6. Left click the wireless icon on the right top corner and left click the last option: “Create New Wireless Network…”, as shown in Figure 7.
- 7. In the new pop-up window, choose your created wireless network, in my case “WIFI hotspot via LAN”, click the “Create” button, and type the password you set in step 3 (Figure 8).
- 8. Now you should see the wireless network with a computer icon on your laptop, which means this WiFi connection is provided by a computer rather than a wireless router (Figure 9).
- 9. OK, all the work is done on the laptop that acts as the WiFi hotspot (ad-hoc). Now you can use the wireless connection you created from other WiFi-enabled devices (Figure 10): select it and input the key (the password you set) (Figure 11).
- 10. OK, you are now using the WiFi from the laptop connected to cable internet (wired LAN connection) (Figure 12)!
Is it great? Yes, it is: no commands, everything is done through GUI setup. That is the power of Linux (here, Ubuntu 12.04); in the Microsoft family this is only possible on Windows 7, and there is almost no way to do it on XP.
The above method has been successfully tested on my two laptops, iPad 2 (WIFI only), and Nokia 5800.
Here is the information of my two laptops (hardware and software).
- HP 520 laptop, Ubuntu 12.04, cable Internet (wired LAN)
- Intel Corporation 82562ET/EZ/GT/GZ – PRO/100 VE (LOM) Ethernet Controller Mobile (rev 01)
- Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection (rev 02)
- Acer Aspire one Netbook, Ubuntu 12.04, WiFi provided by HP 520
- Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01)
- Atheros Communications Atheros AR8132 / L1c Gigabit Ethernet Adapter (rev c0)
Update: If you think the procedure above is too complicated, you can also try the simplest and easiest method: How to create a WiFi hotspot in Ubuntu laptop for mobile devices [easiest and simplest]
|
OPCFW_CODE
|
Kashering earthen ovens
I see that people still make earthen ovens, however, before making one I want to see if it theoretically can be kashered.
I know that earthenware vessels cannot be kashered (I heard that it has to do with them breaking in the heat)
But is it the same by an earthen oven?
Can it theoretically be kashered?
How?
Sources please
What exactly is the difference between earthen and earthenware? Aren't both clay?
Also, I believe earthenware may be kashered if it's glazed.
@DonielFilreis Something can be earthen and yet have a glaze, which for some puts it in a different halakhic category. Whereas earthenware typically is unglazed
Please check the list I've put in my answer to tell us the name of the oven you want to make
@Aaron I will (it is long) it seems very interesting
@Aaron But in terms of the oven b'etzem, is it made any differently besides whether there's a glaze on it?
@DonielFilreis The glaze, as far as I know, would be the only difference. Basically a lot of things are earthenware, but they fall into different categories depending on the treatment process or the material they are made from. For example, glass is technically earthenware, as it's made out of sand. People use "earthenware" in a not very specific way, so typically when someone says "earthenware" they usually mean unglazed earthenware.
Jews have used various types of earthenware ovens throughout the centuries. Unfortunately, most of these ovens do not translate into English or other languages very well. Also, most of these oven types were lost to Ashkenazim for many centuries as the type of dirt, and the weather required to have such an oven made it nearly impossible to make them.
Here is a list of all the types of earthenware ovens in the Talmud, please specify which kind you are trying to make so we can help you better.
You are correct, one cannot kasher an earthenware k'li. The only exception is that if you heat it to a temperature of a kiln, which is the point that it typically melts and becomes new, and in that case, you will destroy your oven. But in general, only the surfaces that actually touch the food become a k'li, and therefore only those spots absorb taste. Therefore, depending on how you construct your oven, you can swap out sections of it to cook different things.
Here is a picture of my earthenware oven, which is most closely related to the תנור oven mentioned in the Talmud. It was built out of flowerpots from my local hardware store, and the topmost earthenware piece/pot/top can be removed and replaced. So I have one "top" that is parve and chametz-free, and one "top" that is dairy and contains chametz. I use this oven to bake normal flatbread and grilled vegetables during the year, and switch to a different "top" to bake matzah during Pesach.
As for sources, any sources you have regarding earthenware apply here. The surface (where food touches) of the oven becomes a k'li. So as long as the food doesn't touch a specific surface of the oven, that section of the oven does not become a k'li.
The only concern might be with vapor/steam. If the food is very liquidy, then there is concern that the steam of the food might absorb into the earthenware. If the food is more solid, then there is less concern regarding vapor/steam. In the end it's up to you to decide how strict you want to be regarding this issue, as there are very many opinions regarding this issue. Here is a document that has many relevant opinions regarding this matter.
https://drive.google.com/open?id=0B1yihaEHoSELRlA4SVozamFwdDQ
You sure he doesn't mean something like https://en.wikipedia.org/wiki/Earth_oven
+1 Wow, I do not understand your oven (where does the bread go? On the top piece?), but as a stove I understand it
It goes on the sides (walls)?
@DoubleAA I meant something more like https://en.wikipedia.org/wiki/Masonry_oven
Let us continue this discussion in chat.
What is the name of the sefer with the list of earthenware ovens mentioned in the talmud?
@Menachem I don't remember the name of the actual sefer, but I've provided a link to the relevant pages in my answer.
|
STACK_EXCHANGE
|
require('util').extendNamespace(TSA, 'TSA.app.lib.timeStamp');
/**
 * Builds a data structure for a timestamp.
*
* @namespace TSA.app.lib.datatypes
* @class timeStamp
*/
TSA.app.lib.timeStamp = {
m_StartDate : "null",
m_EndDate : "null",
m_StartTime : "null",
m_EndTime : "null",
m_Total : "null",
/**
 * Sets m_StartDate and m_StartTime. Stands in
 * for the constructor for now. //TODO: constructor function
*
* @method setAllStarts
* @param {String} startDate
* @param {String} startTime
*
*/
setAllStarts:function(startDate, startTime){
this.m_StartDate=startDate;
this.m_StartTime=startTime;
},
/**
 * Sets m_EndDate and m_EndTime. Stands in for the constructor.
*
* @method setAllEnds
* @param {String} endDate
* @param {String} endTime
*/
setAllEnds:function(endDate, endTime){
this.m_EndDate=endDate;
this.m_EndTime=endTime;
this.setTotal();
},
/**
 * Returns the start date of the timestamp.
*
* @method getStartDate
* @return {String} startDate
*/
getStartDate : function() {
return this.m_StartDate;
},
/**
 * Returns the end date of the timestamp.
*
* @method getEndDate
* @return {String} endDate
*/
getEndDate : function() {
return this.m_EndDate;
},
/**
 * Returns the start time of the timestamp.
 *
 * @method getStartTime
 * @return {String} startTime
*/
getStartTime : function() {
return this.m_StartTime;
},
/**
 * Returns the end time of the timestamp.
*
* @method getEndTime
* @return {String} endTime
*/
getEndTime : function() {
return this.m_EndTime;
},
/**
 * Returns the total time between start and end
 * of the timestamp.
*
* @method getTotal
* @return {String} total
*/
getTotal : function() {
return this.m_Total;
},
/**
 * Sets the start date of the timestamp.
 *
 * @method setStartDate
 * @param {String} startDate
*/
setStartDate:function(startDate){
this.m_StartDate=startDate;
},
/**
 * Sets the start time of the timestamp.
*
* @method setStartTime
* @param {String} startTime
*/
setStartTime:function(startTime){
this.m_StartTime=startTime;
},
/**
 * Sets the end date of the timestamp.
*
* @method setEndDate
* @param {String} endDate
*/
setEndDate:function(endDate){
this.m_EndDate=endDate;
},
/**
 * Sets the end time of the timestamp.
*
* @method setEndTime
* @param {String} endTime
*/
setEndTime:function(endTime){
this.m_EndTime=endTime;
},
/**
 * Computes the total time of the timestamp
 * from its start and end.
*
* @method setTotal
*/
    setTotal:function(){
        var splitStartTime = this.m_StartTime.split(':');
        var splitEndTime = this.m_EndTime.split(':');
        // Unit values in milliseconds.
        var msecPerMinute = 1000 * 60;
        var msecPerHour = msecPerMinute * 60;
        var msecPerDay = msecPerHour * 24;
        // Build the full start and end timestamps from the date and time parts.
        var startDate = new Date(this.m_StartDate);
        startDate.setHours(splitStartTime[0], splitStartTime[1], splitStartTime[2] || 0, 0);
        var endDate = new Date(this.m_EndDate);
        endDate.setHours(splitEndTime[0], splitEndTime[1], splitEndTime[2] || 0, 0);
        // Difference in milliseconds (end minus start).
        var interval = endDate.getTime() - startDate.getTime();
        // Calculate how many whole days the interval contains, then keep the remainder.
        var days = Math.floor(interval / msecPerDay);
        interval = interval - (days * msecPerDay);
        // Calculate the hours, minutes, and seconds from the remainder.
        var hours = Math.floor(interval / msecPerHour);
        interval = interval - (hours * msecPerHour);
        var minutes = Math.floor(interval / msecPerMinute);
        interval = interval - (minutes * msecPerMinute);
        var seconds = Math.floor(interval / 1000);
        // Store and display the result, e.g. "164 days, 23 hours, 0 minutes, 0 seconds".
        this.m_Total = days + " days, " + hours + " hours, " + minutes + " minutes, " + seconds + " seconds";
        console.log(this.m_Total);
    }
};
|
STACK_EDU
|
Dynamically generates and validates Python Airflow DAG file based on a Jinja2 Template and a YAML configuration file to encourage code re-usability
What is AirflowDAGGenerator?
Dynamically generates a Python Airflow DAG file based on a given Jinja2 template and YAML configuration to encourage reusable code. It also automatically validates the correctness of the generated DAG (checking whether the DAG contains cyclic dependencies between tasks, invalid tasks, invalid arguments, typos, etc.) by leveraging the Airflow DagBag, so it ensures the generated DAG is safe to deploy into Airflow.
Why is it useful?
Most of the time, data processing DAG pipelines are the same except for parameters like source, target, schedule interval, etc. So a dynamic DAG generator using a templating language can be of great benefit when you have to manage a large number of pipelines at enterprise level. Having a standardized template also ensures code re-usability and standardizes the DAGs, and it improves maintainability and testing effort.
How is it Implemented?
By leveraging the de facto templating language used in Airflow itself, that is Jinja2, and standard YAML configuration to provide the parameters specific to a use case while generating the DAG.
Python 3.6 or later
Note: Tested on 3.6, 3.7 and 3.8 python environments, see tox.ini for details
How to use this Package?
First install the package using:
pip install airflowdaggenerator
Airflow Dag Generator should now be available as a command line tool to execute. To verify, run:

airflowdaggenerator -h
Airflow Dag Generator can also be run as follows:
python -m airflowdaggenerator -h
- If you have installed the package then:
airflowdaggenerator \
  -config_yml_path path/to/config_yml_file \
  -config_yml_file_name config_yml_file \
  -template_path path/to/jinja2_template_file \
  -template_file_name jinja2_template_file \
  -dag_path path/to/generated_output_dag_py_file \
  -dag_file_name generated_output_dag_py_file
python -m airflowdaggenerator \
  -config_yml_path path/to/config_yml_file \
  -config_yml_file_name config_yml_file \
  -template_path path/to/jinja2_template_file \
  -template_file_name jinja2_template_file \
  -dag_path path/to/generated_output_dag_py_file \
  -dag_file_name generated_output_dag_py_file
If you have cloned the project source code then you have sample jinja2 template and YAML configuration file present under tests/data folder, so you can test the behaviour by opening a terminal window under project root directory and run the following command:
python -m airflowdaggenerator \
  -config_yml_path ./tests/data \
  -config_yml_file_name dag_properties.yml \
  -template_path ./tests/data \
  -template_file_name sample_dag_template.py.j2 \
  -dag_path ./tests/data/output \
  -dag_file_name test_dag.py
And you can see that test_dag.py is created under ./tests/data/output folder.
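For readers who want to see the underlying idea, here is a minimal dependency-free sketch of template-driven DAG generation. Note this is not the package's actual implementation: it uses Jinja2 and a YAML file, whereas string.Template and a plain dict (with hypothetical values) stand in here:

```python
# Sketch of the generate-DAG-from-template idea behind the tool.
from string import Template

# Stands in for the Jinja2 DAG template file.
dag_template = Template(
    "from airflow import DAG\n"
    "\n"
    "dag = DAG(dag_id='$dag_id', schedule_interval='$schedule_interval')\n"
)

# Stands in for the parsed YAML configuration (hypothetical values).
config = {"dag_id": "sample_dag", "schedule_interval": "@daily"}

# Render the DAG file contents; the real tool would then write this out
# and validate it via Airflow's DagBag before deployment.
rendered = dag_template.substitute(config)
print(rendered)
```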
|
OPCFW_CODE
|
Why is “1 HDFS block per HDFS file” better in the Parquet official document?
Why is “1 HDFS block per HDFS file” an optimized read setup in the Parquet official document?
parquet official document
EDIT:
As in the figure above, the parquet file is made up of row groups.
If we have "1 GB row groups, 1 GB HDFS block size", then 1 row group will fit in 1 HDFS block, and a column will not fall outside an HDFS block. So we no longer need to transfer data. But then what is “1 HDFS block per HDFS file” for?
I stumbled today upon the same official Parquet documentation and couldn't find an answer to the same question myself. I come to the same conclusion as you: as long as the row group size is the same as the HDFS block size, why should it matter at all to have only 1 block per 1 HDFS file?
Did you find an answer?
This is essentially because Parquet is a columnar storage format. Let's say that you have stored a 3 GB file with a block size of 1 GB. To read a whole record you will need to reconstruct it, and if the information for each column is not in a single block (which is probably the case because of the columnar format), then one machine will have to reassemble the record, requiring data transfer from the other nodes.
EDIT:
For the following image, which compares row storage against column storage, imagine that the cost column doesn't fit in your block size; it means that this column will spill outside your block and create a new block. If you want the data for one whole specific row, the data for the cost column will need to be sent from one node to another, which is not efficient. I hope it makes sense.
"1GB row groups, 1GB HDFS block size" is enough to avoid transferring data, isn't it ?
You are right, but to keep the columnar data together in one block you will need a 1 GB file
why do we need a 1 GB file?
because to have a 1 GB row group you need all the groups in the same block, which means you will need all the groups in the same file when you use the Parquet format; if you don't, some columns will be in a different block, which means you will need to transfer data to reconstruct the records
As in the figure above, the Parquet file is made up of row groups. If we have "1 GB row groups, 1 GB HDFS block size", then 1 row group will fit in 1 HDFS block, and a column will not fall outside an HDFS block. So we no longer need to transfer data. But then what is “1 HDFS block per HDFS file” for?
at least in a MapReduce program the general behavior is that each mapper reads 1 block of data; for this reason you should try to make your block size as big as your file, in this case a 1 GB block size for each 1 GB file. It will ensure that one mapper reads 1 block without requiring data transfer to reconstruct records
A Parquet file is made up of row groups, and a row group has all the columns. So a row group can independently reconstruct the records in it; it doesn't depend on other row groups in the file.
So a mapper needs to read a row group, not a file.
Isn't it?
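The block/row-group alignment argument in this thread can be checked with a little arithmetic. A standalone sketch, using the hypothetical 1 GB and 3 GB sizes from the discussion:

```python
GB = 1024 ** 3

def groups_crossing_blocks(file_size, row_group=GB, block=GB):
    """Count row groups in one file that straddle an HDFS block boundary."""
    crossings = 0
    for start in range(0, file_size, row_group):
        end = min(start + row_group, file_size)
        if start // block != (end - 1) // block:  # group spans two blocks
            crossings += 1
    return crossings

# 1 GB row groups aligned to 1 GB blocks never straddle a boundary,
# whether the file holds one block or three:
print(groups_crossing_blocks(1 * GB))  # 0
print(groups_crossing_blocks(3 * GB))  # 0

# But if row groups are not block-aligned (e.g. 0.75 GB), some of them
# span two blocks, and reading those needs data from two nodes:
print(groups_crossing_blocks(3 * GB, row_group=3 * GB // 4))  # 2
```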
|
STACK_EXCHANGE
|
Query for derived types returns 'Cannot create an instance of XyzBase because it is an abstract class.'
I've got a model described like the following:
abstract class XyzBase
{
public int XyzID { get; set; }
}
class Abc : XyzBase
{
// Abc stuff
}
class Def : XyzBase
{
// Def stuff
}
The metadata lists XyzBase as abstract, and that Abc and Def are marked as derived from XyzBase. The call from my client looks like this:
var xyzs = await odata.For<XyzBase>()
.FindEntriesAsync();
foreach (var xyz in xyzs)
{
// ...
}
The query results looks like:
{
"@odata.context":"http:/url/OData/$metadata#Xyzs","value":[
{
"@odata.type":"#Name.Space.Abc","XyzID":1
},{
"@odata.type":"#Name.Space.Abc","XyzID":3
},{
"@odata.type":"#Name.Space.Def","XyzID":4
},{
"@odata.type":"#Name.Space.Abc","XyzID":5
}
]
}
And the exception is {"Cannot create an instance of Name.Space.XyzBase because it is an abstract class."}.
The EntityFramework calls work, but it appears that the client does not attempt to use the @odata.type information sent by the query.
Thank you for reporting the issue. I will check this out and come back to you.
Hi,
I checked this issue, and in fact the only thing that can be done, as long as you are using the typed syntax of the library, is to declare the base class non-abstract. When you execute a statement that only contains a reference to an abstract class XyzBase, the library does not know what concrete type to use since no concrete type is specified in the call. If you are only interested in entries of a specific subtype, then you can use the "As" clause to limit results to that particular type. But if you want everything, then you have two options:
Declare a base class as non-abstract (or declare an extra non-abstract class XyzBaseEx derived from XyzBase and only used in such operations)
Use untyped or dynamic syntax
This is an interesting case, and perhaps I should plan, for version 5.0, some kind of "known types" concept (similar to what WCF had) so the library can be notified about the subtypes to use.
Hope this helps.
The As clause ends up sending a different query, so I can't use it (unsupported by my provider). I've found a workaround of sorts, basically two queries that I union: abcs.Concat(defs).
Thank you!
We had the same problem and worked around it using a CustomConverter. I've created an ODataDerivedTypeCustomConverter which can be registered using:
ODataDerivedTypeCustomConverter.Register<XyzBase, Abc, Def>();
or
ODataDerivedTypeCustomConverter.RegisterDerivedFromAssembly<XyzBase>();
which will search for all derived types in the same assembly as the base type and register them.
This has to be done one time, before your first request.
That's a nice workaround.
In the future, it'd be great to abstract the converter to an implementable interface that can be registered with the settings via DI.
Looks like CustomConverters.RegisterTypeConverter has been made obsolete, but on the other hand, the attached ObsoleteAttribute refers to ODataClientSettings.TypeCache.RegisterTypeConverter, which does not exist, and ODataClientSettings.TypeCache.Register appears to be too limited. I'd be happy to PR, @object please lemme know if you'd consider that, and maybe some advice or things to have in mind.
@zivillian
I've tried your workaround, first of all, ToObject isn't included in your code, but regardless, looks like OData Client completely ignores the converters, as HandleConvert is never called when required.
Anyway, I ended up adding a discriminator column to my entity, and then create the appropriate type accordingly:
public class AppODataClientSettings : ODataClientSettings
{
public AppODataClientSettings(HttpClient httpClient) : base(httpClient)
{
TypeCache.Converter.RegisterTypeConverter<Contact>((IDictionary<string, object> d) =>
{
if (d.TryGetValue(nameof(IContact.ContactType), out var contactTypeValue) && contactTypeValue is string contactTypeStr && Enum.TryParse<ContactType>(contactTypeStr, out var contactType))
return contactType == ContactType.Person
? (Contact)d.ToObject<Person>(TypeCache)
: d.ToObject<Company>(TypeCache);
if (d.ContainsKey(nameof(Person.FirstName)))
return d.ToObject<Person>(TypeCache);
if (d.ContainsKey(nameof(Company.CompanyName)))
return d.ToObject<Company>(TypeCache);
throw new InvalidCastException($"Could not parse {nameof(Contact)} type from {d}.");
});
}
}
//ToObject reference
namespace Simple.OData.Client.Extensions
{
static class DictionaryExtensions
{
static readonly MethodInfo _ToObjectGeneric;
static readonly MethodInfo _ToObjectDynamic;
static readonly ConcurrentDictionary<Type, MethodInfo> _Invocations = new ConcurrentDictionary<Type, MethodInfo>();
static DictionaryExtensions()
{
var currentType = typeof(DictionaryExtensions);
var type = Assembly.GetAssembly(typeof(ODataClient)).GetType(currentType.FullName);
_ToObjectGeneric = type.GetMethod(nameof(ToObject), new[] { typeof(IDictionary<string, object>), typeof(ITypeCache), typeof(bool) });
_ToObjectDynamic = type.GetMethod(nameof(ToObject), new[] { typeof(IDictionary<string, object>), typeof(ITypeCache), typeof(Type), typeof(bool) });
}
public static T ToObject<T>(this IDictionary<string, object> source, ITypeCache typeCache, bool dynamicObject = false)
where T : class
{
var generic = _Invocations.GetOrAdd(typeof(T), t => _ToObjectGeneric.MakeGenericMethod(t));
return (T)generic.Invoke(null, new object[] { source, typeCache, dynamicObject });
}
public static object ToObject(this IDictionary<string, object> source, ITypeCache typeCache, Type type, bool dynamicObject = false)
{
return _ToObjectDynamic.Invoke(null, new object[] { source, typeCache, type, dynamicObject });
}
}
}
For anyone else passing through here with this issue, my team and I came up with a different workaround for this issue. Hope it's helpful to others!
public abstract class Principal
{
...
}
public class User : Principal
{
...
}
public class Group : Principal
{
...
}
internal class ODataRequestOptions<T>
{
public Expression<Func<T, bool>>? Filter { get; set; }
public Expression<Func<T, object>>? Select { get; set; }
public Expression<Func<T, object>>? Expand { get; set; }
}
internal static class IBoundClientExtensions
{
public static IBoundClient<T> ApplyRequestOptions<T>(this IBoundClient<T> query, Action<ODataRequestOptions<T>> configureOptions)
where T : class
{
var options = new ODataRequestOptions<T>();
configureOptions(options);
var q = query;
if (options.Select != null)
{
q = q.Select(options.Select);
}
if (options.Filter != null)
{
q = q.Filter(options.Filter);
}
if (options.Expand != null)
{
q = q.Expand(options.Expand);
}
return q;
}
}
public async Task<Principal?> GetPrincipalById(
Guid id,
Action<ODataRequestOptions<Principal>> configureOptions,
CancellationToken ct)
{
var request = await ODataClient.For<Principal>()
.Key(id)
.ApplyRequestOptions(configureOptions)
.BuildRequestFor()
.FindEntryAsync(ct);
var response = await request.RunAsync(ct);
return await response.ReadAsSingleAsync(ct);
}
public async Task<IEnumerable<Principal>> GetPrincipals(
Action<ODataRequestOptions<Principal>> configureOptions,
CancellationToken ct)
{
    var request = await ODataClient.For<Principal>()
.ApplyRequestOptions(configureOptions)
.BuildRequestFor()
.FindEntriesAsync(ct);
var response = await request.RunAsync(ct);
return await response.ReadAsCollectionAsync(ct);
}
|
GITHUB_ARCHIVE
|
BrailleBox: Android Things Braille news display
Joe Birch has built a simple device that converts online news stories to Braille, inspired by his family’s predisposition to loss of eyesight. He has based his BrailleBox on Android Things, News API, and a Raspberry Pi 3.
Braille is a symbol system for people with visual impairment which represents letters and numbers as raised points. Commercial devices that dynamically produce Braille are very expensive, so Joe decided to build a low-cost alternative that is simple to recreate.
News API is a tool for fetching JSON metadata from over 70 online news sources. You can use it to integrate headlines or articles into websites and text-based applications.
To create the six nubs necessary to form Braille symbols, Joe topped solenoids with wooden balls. He then wired them up to GPIO pins of the Pi 3 via a breadboard.
Next, he took control of the solenoids using Android Things. He set up the BrailleBox software to start running on boot, and added a push button. When he presses the button, the program fetches a news story using News API, and the solenoids start moving.
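The core mapping a device like this needs, from a character to the six solenoid states of one Braille cell, can be sketched as follows. This is a Python stand-in rather than code from Joe's Android Things app; the dot patterns shown are standard Grade 1 Braille for those letters:

```python
# Character -> six-dot Braille cell. Dots are numbered 1-3 down the left
# column of the cell and 4-6 down the right column.
BRAILLE_DOTS = {
    'a': {1}, 'b': {1, 2}, 'c': {1, 4}, 'd': {1, 4, 5}, 'e': {1, 5},
}

def solenoid_states(char):
    """Return True/False for each of the six solenoids, in dot order 1-6."""
    dots = BRAILLE_DOTS.get(char.lower(), set())
    return [dot in dots for dot in range(1, 7)]

print(solenoid_states('c'))  # [True, False, False, True, False, False]
```

Each True would drive one solenoid high (raising its wooden ball) via a GPIO pin.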
Since Joe is an Android Engineer, looking through his write-up and code for BrailleBox might be useful for anyone interested in Android Things.
If you like this project, make sure you keep an eye on Joe’s Twitter, since he has plans to update the BrailleBox design. His next step is to move on from the prototyping stage and house all the hardware inside the box. Moreover, he is thinking about adding a potentiometer so that users can choose their preferred reading speed.
If you want to find our community’s conversation about accessibility and assistive technology, head to the forums. And if you’re working to make computing more accessible, or if you’ve built an assistive project, let us know in the comments or on social media, so that we can boost the signal!
Hi Joe. I like it! I have been wanting to make something similar to this, but aimed at Whats App messenger. Do you plan to expand this beyond the news reader functionality?
My wife and I are both sighted but serve as volunteers for The Alliance for Braille Literacy out of St. Louis, Missouri; me as their web master, she as a transcriber. This is a very neat project you have come up with. A question I have is how you will reduce the size of the cell in the next evolution so a blind reader will be able to sense the cell with their finger. Also, controls for moving back and forth on the text line. Another thing you might consider is building the speed control potentiometer into a foot pedal. Best of luck with this project, can't wait to see version II. Just had another thought: blind readers usually would like 3 cells – it helps “reading” in context, so to speak. So many potentials here for a fantastic project, and yes, commercial units are very expensive at thousands of dollars. Any financial relief with a simpler yet functional equivalent would be most welcome! Once again, best of luck.
It should be possible to devise some small levers so that the solenoids can ‘drive’ a smaller area. Might be worth looking into 3d printing something so the design can be shared.
Congratulations on your project! What an amazing device. Well done! and all the best for your future work.
Comments are closed
|
OPCFW_CODE
|