When I decided to build a PowerPoint-like visual editor, the first thing I had to do was paste clipboard contents into my view. That part is covered all over MSDN, CodeGuru, etc.
But the second part, rotating the clipboard image, was different: I could not find any hints at all.
Maybe it's because of my poor ability to search for things.. :( Anyway, I decided to rotate the EMF.
The API function SetWorldTransform() is not supported on Win98. I'm actually working on Win2K, but I wanted to rotate the EMF without any GDI transformation functions.
I think the steps to rotate an EMF image are as follows:
- Get the EMF from the COleClientItem-inherited object.
- Draw the acquired EMF bits, rotated, onto another DC.
- Get an EMF from that DC.
- Draw the rotated EMF to the output DC.
Enumerating an EMF (or WMF) requires a CALLBACK function. A global-scope function can be used, but I've put it into my CItem class as a static member function. A non-static member function cannot be used as a CALLBACK function.
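For reference, here is a minimal sketch of that pattern with the Win32 enumeration API. The class and member names are illustrative, not the article's actual code:

// Sketch: enumerating an EMF with a static member callback (names illustrative).
class CItem
{
public:
    // Static member functions have no 'this' pointer, so they are valid CALLBACKs.
    static int CALLBACK EnhMetaFileProc(HDC hdc, HANDLETABLE* lpHTable,
                                        const ENHMETARECORD* lpEMFR,
                                        int nObj, LPARAM lpData)
    {
        // Recover the instance to access rotation state (angle, centre, etc.).
        CItem* pThis = reinterpret_cast<CItem*>(lpData);
        (void)pThis;  // the real code would transform points with it
        if (lpEMFR->iType == EMR_POLYGON16)
        {
            // ... rotate the POLYGON16 points here ...
        }
        PlayEnhMetaFileRecord(hdc, lpHTable, lpEMFR, nObj);  // replay the record
        return 1;  // non-zero continues the enumeration
    }

    void DrawRotated(HDC hdcOut, HENHMETAFILE hEMF, const RECT& rc)
    {
        // Pass 'this' through the LPVOID parameter so the callback can reach us.
        EnumEnhMetaFile(hdcOut, hEMF, EnhMetaFileProc, this, &rc);
    }
};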
The included source files were built in the following sequence:
- Create a new MFC (EXE) project with the OLE Container option selected. The CblahblahCntrItem class is the COleClientItem-inherited class, so if you want some properties in your item, put your things there.
- Add the ID_EDIT_PASTE event handler.
- Customize the draw function.
Download the project & build it. Run MROT, get some vector image onto your system's clipboard, and paste it into MROT.
You'll see the clipart in MROT's view. Then press the RIGHT or LEFT cursor key, and the image will rotate by 5 degrees.
MROT handles only the EMR_POLYGON16 record. So, if your clipart contains other polygonal or rectangular records, MROT cannot display & rotate it well.
MSDN\TECHART\1619\EMFDCODE.EXE on your MSDN CD can help you inspect your source clipart.
Now I think building a PowerPoint-like visual editor is not difficult (I say editor..:)), but it will take a long time if you approach it the way I did. I'll move to GDI+, because then I won't have to convert any polygon points at all.
P.S.: You may wonder what the ConvertEMF member function is for. Some cliparts have their own mapping modes (MM_ANISOTROPIC, with the window/viewport extents inverted), so the same rotation command acts differently on those kinds of cliparts. ConvertEMF converts WindowExt-inverted cliparts to non-inverted ones.
Is there anyone who can handle all of the EMF records for rotation? :)
This question received a close vote: When is Fēngyún-3C planned to launch?
What do we do with transient questions? Something similar may happen on Travel, if people ask how to travel to specific events. What is our policy for this?
Site self-evaluation tools have fairly recently received an overhaul, and among the most notable changes are the site-specific close reasons. These changes are explained in the Stack Exchange Blog The War of the Closes:
These site-specific reasons will also address situations previously covered by “General Reference” and “Too Localized”. Those were the least used and most misused reasons – moderator and team sampling found a huge percentage of their application to be erroneous.
The blog post is a lot more detailed than the short excerpt above, so I recommend reading it in its entirety, although I'm quite certain you've noticed at least the majority of these changes yourself, too.
I personally miss the "too localized" close reason, but I appreciate that we (Space Exploration), as an individual Stack Exchange community, need to agree on our own most common reasons to close questions, on which questions are considered acceptable/appropriate/..., and which are not, so we can enforce these rules with some conviction later on. If we can't agree on such questions, we won't be able to sustain a healthy, well-defined community.
For the time being, we're still in the process of deciding over these "custom close reasons" that we're to eventually include in the list, meaning we haven't reached any consensus yet on any site-wide policy, and I can only provide my own opinion on the matter, which is fairly simple:
I agree with the assessment of some reviewers that the question is too localized (it will only be relevant until the FY-3C is actually launched, which might be in as little as a few days' time, whenever the next launch window is, if the unverified information that the satellite is ready for transport to the launch site is to be believed). Due to the lack of any official information, the question could also be considered to solicit "primarily opinion-based" answers.
So - TL;DR - no policy yet; we need to decide on the reasons first, and then include custom close reasons in our site's question review options. My close vote on the particular question mentioned is explained above. We also plan to host launch-related events in our chat room, The Pod Bay, and questions or discussions like that certainly aren't off-topic there; they might even be appreciated by some of the regular dwellers, myself included. Hope this helps, and please vote on proposals you consider agreeable in the "custom close reasons" thread, or even add your own suggestions.
Relevant Q&A from meta.SO:
Jeff Atwood said temporary questions are better left for chat or Twitter.
However, there are events which happen each year on Stack Exchange sites and which usually produce questions with great following. These are April Fool's Day questions.
They tend to generate a lot of answers and votes, but are deleted afterwards. (I ain't kiddin'!)
Personally, I have nothing against questions that keep hanging for months unanswered until an answer becomes available. If the asker loses interest, they may delete the question, otherwise - they get the answer when they get it. Answers that are speculative or invalid simply shouldn't be accepted or upvoted.
Additionally, I'm not against questions that become obsolete over time, provided new answers are posted and selected by the asker as the "new valid" one. Even if that doesn't happen, they provide a historical resource.
Please send comments about the content to the Chair of the working group, and comments about the format to webmaster _at_ ripe _dot_ net.
Anti-Spam Working Group - RIPE 50, Stockholm:
Friday 6 May, 2005, 09:00
Chair - Rodney Tillotson
Scribe - Emma Bretherick
A. Administrative Matters
Minutes of the last wg session can be found at:
Co-chair sends his apologies.
The agenda did not follow the published order. The priority item was discussion of a proposed update to ripe-206 (E1).
B1 Developments in UBE
Bots and trojans
If I claim that it's all bots and trojans now does anyone agree with that?
Well, it's certainly not all bots and trojans, but it is a lot.
Absolutely, and we have to take this very seriously because of the running together of different security threats. Blocking certain networks is also a serious issue, and not all organisations block networks that they should. Does anyone know what percentage of UBE comes through bots and trojans (and so, in many cases, out through legitimate ISP relays)?
Does anyone know what to do about this?
Authorised SMTP is at least a step in the right direction. Legitimate users will configure their mail programs to authenticate correctly, but bulk mailing software will usually be unable to do so.
I agree and this is probably something we should include in the BCP document we're going to look at under item E1.
B2 Developments in anti-spam
What do we think about Gmail?
Bad, in that they do not write in the header crucial information required for traceability. In principle they have a novel chain of trust in which accounts are available by invitation and they know who released each invitation. Ultimately it depends whether they do what they say they will.
Any other issues? Black lists, any favourites?
Asia Pacific Area Initiative, anyone know what's going on there?
There has been quite a lot of work done, but getting cooperation between all the different countries in the Asian regions will always be a problem. This is due to language and also because some countries are not action orientated! Australia is leading this but most countries are not doing much so not much has happened yet. Quite a few governments have signed up for this, which is a good sign, but so far that's it.
C. Technical Measures
Anyone know of different (new) tricks regarding filtering?
Greylisting: when a message comes in the receiving server at first rejects it but not permanently. A genuine sending server will retry and its second attempt will normally be accepted. There are some issues with the resulting delays to messages.
Personally I feel that the bot writers will have found a way around this very soon.
Kamran Khalid (during discussion of the BCP):
In regards to the abuse e-mail and notification, I remember there was going to be a new abuse attribute in the database objects?
Rodney gave a short update on the changes to the RIPE Database regarding the new abuse attribute.
I think it is not a good idea to add more contact details to the objects in the RIPE Database.
E1 Update to LINX BCP and ripe-206
How many people here have heard of ripe-206 or the LINX BCP?
Just three attendees put their hands up, so Rodney gave some background information.
The LINX BCP has been updated and as ripe-206 was based on the original LINX BCP we should consider whether we want to update the RIPE document, and in what way.
i, We accept the LINX doc as it is.
ii, We make some modifications for RIPE, eg change the references that are specific to the UK.
iii, We suggest improvements for the LINX BCP.
Rodney showed the attendees the RIPE Document and the LINX document. He showed a suggestion of what the new RIPE Document might look like if it followed the new LINX text, with pink highlighting for additions to the existing RIPE Document and yellow for modifications, along with some notes of possible changes to the document.
I think many of the docs lack mechanisms for identifying spam. There are difficulties in deciding whether something is spam, advertising, or bots. Even some anti-bot organisations are deleting some types of bots from their databases because they are commercial adverts. There is no text about the origin of complaints; there should be some text about what needs to be included in a complaint about spam. About 30% of the messages I receive are not actually related to me; they are due to mistakes in whois, misleading links, etc. It takes too long to explain to people what they have done wrong.
I believe the points made were about three things:
1, Identifying different types of spam.
2, Templates for what people should include in a complaint e-mail.
3, Ways of blocking.
I think all of them are out of scope for this document.
No, I don't think so. Your doc splits the world into spammers and anti-spammers, but it is not so clear-cut. Sometimes anti-spammers can be more abusive than the spammers! There are no requirements for the anti-spam fighters anywhere, and therefore they think they can behave any way they like. They are not behaving in a best-practice way.
Brian from Heanet:
Blocking or not blocking. I don't think that has any place in this particular document, I think that comes under a much more general heading. This document is aimed towards suggestion to orgs what they should do to minimize e-mail abuse. Explaining to anti-spammers how they should react needs to be somewhere else.
You are basically saying that there are two parties and that this document is only focused towards one.
I accept that and I agree that something may need to be done about it.
I support 'person 1's conclusions. Can we then look toward creating a second document, so that we explain both how an ISP should behave and how anti-spammers should behave.
We will take note of this.
Action: on Rodney to move forward with this separate doc.
You use normative language, eg MUST. Do you state what will happen if people do not do this?
No, this is not a legal document. Best Current Practice is to do what the document says, and keywords in it such as MUST and SHOULD identify which classes of behaviour are critical for conformance and which are not.
Which of the options regarding the LINX BCP should we take?
I think option 3 is the best option, as long as it is not a continuous feedback loop.
I agree. I think that more could have been done with the update to the LINX BCP and we probably do need a better document to work on.
Brian Nesbit (HEAnet) offered to help with a new draft. We have enough people willing to work and comment on this so suggestions will be sent to the mailing list for further feedback.
Regarding Rodney's point about the difficulty of the language in the LINX BCP for non-native English speakers, the RIPE NCC can 'Plain English' the new version of the RIPE doc.
Y. Future Tasks
If anyone would like to do any tutorials just let us know!
Z. Agenda for RIPE 51
Standard form. Specific offers or requests by e-mail.
The SSIII is developing a detailed, yet broad, sea ice ontology linked to relevant marine, polar, atmospheric, and global ontologies and semantic services. Our overall goal is to improve the interoperability, usefulness, and understanding of Arctic sea ice data using Semantic Web approaches and technologies. The Semantic Web approach exposes, shares, and connects pieces of data through the use of unique identifiers and standardized protocols for describing data. This linked data approach is combined with knowledge models formalized in ontologies to develop sophisticated applications that can help you find, process, integrate, and analyze information. An ontology can be described as "a formal, explicit [machine-readable] description of concepts in a domain of discourse." In this case, our domain is sea ice and its relations to the Arctic system.
SSIII Ontology Browser
The SSIII Ontology Browser is an easy way to explore our ontologies. The ontologies are listed in a main horizontal navigation bar across the top of the Web page. A pull-down menu on the left-hand side of the Web page populates after you have chosen an ontology from the main horizontal nav bar. A Find Tool on the right-hand side of the Web page helps you explore the ontology browser.
Currently, SSIII has seven ontology types: sea ice, seaice concentration, seaice development, seaice form, ice of land origin, egg code, and Sea Ice Grid (SIGRID), also called SIGRID-3, since this ontology describes a data format that has been through three updates. SSIII recognizes the existing operational use of the egg code and SIGRID-3 at National Ice Centers around the world. We hope that eventually these ontologies can aid the creation and reuse of operational sea ice charts. We have also worked to ensure our ontologies use terminology from the World Meteorological Organization (WMO) Sea Ice Nomenclature.
On sea ice charts (refer to Figure 2), sea ice parameters are represented by symbols with accompanying numbers giving the values of the sea ice parameters. The symbols varied depending on which nation was compiling the chart until the 1980s, when an international standard called the egg code was developed by the WMO. The egg code, which gets its name from the shape of the symbol used to embody the WMO standard sea ice information, is now used for most sea ice charts (refer to Figure 1). Scientists and sea travelers use the egg code to describe ice conditions around the world. The egg code describes sea ice concentration (amount of the sea surface that is covered in ice), stage of development (thickness), and form of ice (floe size) for a given area.
The letters in the egg code describe the parameters such as sea ice concentration, thickness, and size. The numbers in the egg code, inserted by people who observe the sea ice directly from ships or aircraft, or indirectly through remote sensing images, represent the stages of the sea ice development such as thickness, type, size, and concentration. Technicians print ice-code ovals on top of ice maps, and captains use the egg code to avoid thick ice and find the best way to get where they're going. Egg codes are also used for lake ice in large bodies of fresh water. The U.S. National Ice Center Web site has a more detailed explanation of the egg code.
Figure 1. The WMO System for Sea Ice Symbology, a.k.a. the Egg Code
SIGRID is an alphanumeric coding of ice chart information originally obtained by overlaying a grid on the original paper chart and encoding the ice information in each grid cell. SIGRID-3 evolved from earlier SIGRID formats and incorporates much of their content. The SIGRID-3 format is a WMO standard shape file format for sharing and archiving ice chart information. A chart encoded in SIGRID-3 has two main components: the chart information itself in shape file format, and metadata describing the chart.
SIGRID encodes the information in each egg as illustrated by the following example:
Figure 2. NIC sea ice chart showing the egg code. The numbers in the egg give total concentration, usually as a range; partial concentration of the first, second, and third thickest ice; stage of development of the first, second, and third thickest ice; and other information such as form, if available.
Web Ontology Language
The Web Ontology Language (OWL) is one of a family of knowledge representation languages for authoring ontologies. The languages are characterized by formal semantics and a variety of serializations for the Semantic Web. OWL is endorsed by the World Wide Web Consortium (W3C) and has attracted academic, medical, and commercial interest.
Our sea ice ontologies use OWL, and they are available in two formats: OWL/XML files and OWL Manchester Syntax files. Refer to Table 1. The OWL files can be encoded in the Resource Description Framework (RDF). RDF is a standard model for data interchange on the Web. RDF has features that facilitate data merging even if the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all the data consumers to be changed (W3C Semantic Web, http://www.w3.org/RDF/, accessed 1/2012). RDF is a family of W3C specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in Web resources, using a variety of syntax formats (Wikipedia, http://en.wikipedia.org/wiki/Resource_Description_Framework, accessed 1/2012).
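As an illustration of how such an ontology can be consumed programmatically, here is a minimal Python sketch using the rdflib library. It assumes the PURL for the sea ice concentration ontology (from Table 1) still resolves to a parseable RDF serialization, which may not hold today:

# List the OWL classes declared in the sea ice concentration ontology.
import rdflib
from rdflib.namespace import RDF, OWL

g = rdflib.Graph()
# Assumption: the PURL serves RDF/XML; adjust format= if it serves Turtle etc.
g.parse("http://purl.org/wmo/seaice/concentration", format="xml")

for cls in g.subjects(RDF.type, OWL.Class):
    print(cls)  # prints each class URI in the ontology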
Table 1. SSIII ontology PURLs and Manchester syntax files.

| Ontology Type | Ontology PURL Link | Manchester Ontology Syntax File Link |
|---|---|---|
| Seaice Concentration | http://purl.org/wmo/seaice/concentration# | http://code.google.com/p/ssiii/source/browse/trunk/ontology/seaice-concentration.omn |
| Seaice Development | http://purl.org/wmo/seaice/development# | http://code.google.com/p/ssiii/source/browse/trunk/ontology/seaice-development.omn |
| Seaice Form | http://purl.org/wmo/seaice/form# | http://code.google.com/p/ssiii/source/browse/trunk/ontology/seaice-form.omn |
| Ice of Land Origin | http://purl.org/wmo/seaice/iceOfLandOrigin# | http://code.google.com/p/ssiii/source/browse/trunk/ontology/ice-of-land-origin.omn |
| Sigrid-3 | http://purl.org/nsidc/jcomm/sigrid3# | http://code.google.com/p/ssiii/source/browse/trunk/ontology/sigrid3.omn |
| Egg Code | http://purl.org/nsidc/jcomm/egg# | http://code.google.com/p/ssiii/source/browse/trunk/ontology/egg.omn |
Writing data to a file. Problem: you want to write data to a file. Solution: for a delimited text file, the easiest way is to use write.csv() or write.table(). These will not, however, preserve special attributes of the data structures, such as whether a column is a character type or factor, or the order of levels in factors. To do that, the data should be written out in a special format for R.
I recently found a few useful explanations that inspired me to write up my understanding of binary files. Save a text file containing a single character, right-click, and look at its properties: it should be 1 byte.
Now consider how a human would store the actual numeric value of 65 if you told them to write it down. Then suppose we wanted to store the number 4,000,000,000 (4 billion): written out in ASCII that takes 10 characters, i.e., 10 bytes.
How would a computer do it? In binary, 2^32 is about 4.29 billion, so the value fits in 32 bits: we could store the number 4 billion in only 4 bytes. It also saves computational effort, since the computer does not have to convert a number between binary and ASCII.
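A quick way to see the difference is Python's struct module (a sketch; any language with byte-level I/O shows the same thing):

import struct

n = 4_000_000_000
text = str(n).encode("ascii")   # the ASCII digits: one byte per digit
binary = struct.pack("<I", n)   # unsigned 32-bit little-endian integer

print(len(text))    # 10 bytes
print(len(binary))  # 4 bytes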
So, why not use binary formats? If binary formats are more efficient, why not use them all the time? Binary files are difficult for humans to read. When a person sees a sequence of 4 bytes, he has no idea what it means; it could be a 4-letter word stored in ASCII. Binary files are difficult to edit.
In the same manner, if a person wants to change 4 billion to 2 billion, he needs to know the binary representation. Binary files are also difficult to manipulate.
The UNIX tradition has several simple, elegant tools to manipulate text. By storing files in the standard text format, you get the power of these tools without having to create special editors to modify your binary file.
Binary files can also get confusing: problems happen when computers have different ways of reading data (byte order, for example). Regular text stored in single bytes is unambiguous, but be careful with Unicode.
Representing numbers in binary can ideally save you a factor of 2.5: a 4-byte number can represent up to 10 bytes of digit text. However, this assumes that the numbers you are representing are large; a small 3-digit number is better represented in ASCII than as a 4-byte number.
However, storing text in this way is typically not worth the hassle. One reason binary files are efficient is because they can use all 8 bits in a byte, while most text is constrained to certain fixed patterns, leaving unused space. However, by compressing your text data you can reduce the amount of space used and make text more efficient.
Marshalling and Unmarshalling Data. Aside: marshalling always makes me think of sheriff's marshals, and thus cowboys. Sometimes computers have complex internal data structures, with chains of linked items that need to be stored in a file.
Marshalling is the process of taking the internal data of a program and saving it to a flat, linear file. Unmarshalling is the process of reading that linear data and recreating the complex internal data structure the computer originally had.
Notepad has it easy: it just needs to store the raw text, so no marshalling is needed. Microsoft Word, however, must store the text along with other document information (page margins, font sizes, embedded images, styles, etc.).
GAUSS can read and write ASCII (text) files. This provides a way of sharing data between GAUSS and other software. While in GAUSS, you can write your working data set into an ASCII file, or convert your GAUSS data file or matrix/string file into an ASCII data file, which you can then edit with the GAUSS editor or with any text editor.
Hello, I am having trouble flashing a simple program into an 8051 device with a USB kit. The device I am using is an "at89c51ic2".
When I build the target it does not show any error or warning, but when I try to flash, this error shows up:
With this I went to "Options for Target" -> "Utilities" to make sure, but I did not find anything :(
This is what is in the box:
"-autoisp -device $D -hardware RS232 -port COM5 -baudrate 9600 -operation MEMORY FLASH LOADBUFFER "#H" PROGRAM START RESET 00"
If anyone could help finding the problem it would be great, I am losing my mind :'(
Sorry for my bad English.
Why do you think that's an error ?
It looks like just an echo of the command being executed.
What happens if you just do the programming from the command line - ie, taking uVision out of the equation ?
Have you studied the documentation for batchisp ?
Andy Neil said:Why do you think that's an error ?
Oh, I see:
EuSouOVALETEbro said:--- Error: failed to execute 'batchisp -autoisp -device AT89C51RD2 -hardware RS232 -port COM5 -baudrate 9600 -operation MEMORY FLASH LOADBUFFER "C:\Keil_v5\C51\Examples\Objects\teste.HEX" PROGRAM START RESET 00'
Well, that's uVision telling you it couldn't execute that command.
So, again, can you execute that command manually from the command line?
If you get the same at the command line, then obviously uVision won't be able to execute it, either!
You will then have to contact whoever supports this "batchisp" thing ...
Where does BATCHISP (.EXE or .BAT) exist on your system? Is it properly pathed, via the PATH environment variable, so that the system can find it when asked to run it?
Can you run it from the command line? i.e., a DOS box brought up with CMD.EXE, or via the Windows-R key.
Have the ATMEL tools including BATCHISP been loaded/installed in the system?
As Westonsupermare Pier suggests, the PATH environment would have to be set correctly to run the 'batchisp' command as shown.
This applies whether you run it manually from the command prompt, or get uVision to do it - hence the suggestions to try it from a command prompt.
Alternatively, supply the complete path to the executable - as shown here:
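(The screenshot referred to isn't reproduced here; the sketch below shows the idea, using a hypothetical FLIP install path that will differ on your system. The batchisp command itself is the one quoted earlier in the thread.)

set PATH=%PATH%;C:\Program Files (x86)\Atmel\Flip 3.4.7\bin
batchisp -autoisp -device AT89C51RD2 -hardware RS232 -port COM5 -baudrate 9600 -operation MEMORY FLASH LOADBUFFER "C:\Keil_v5\C51\Examples\Objects\teste.HEX" PROGRAM START RESET 00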
Hello, and thank you for the response. "batchisp" was indeed the problem. I hadn't programmed on this kit for a long time, and since then I reset my PC and forgot I had to install a program called "FLIP" that is intended to help with communication with the 8051 USB kit. After installing it, it was just a matter of putting in the correct path to "batchisp".
It works great now; it can flash the memory of the microcontroller.
Thank you once more :))
The Challenges of Anomaly Detection
Recently, anomaly detection (AD), a.k.a. one-class classification, has received considerable attention in a variety of applications such as biometrics, computer vision, and machine learning. An anomaly is defined as an observation that does not conform to expected normal behaviour. Modelling and encapsulating normal data to detect anomalies is still an open problem, especially when only normal (non-anomalous) data is available at training time, which makes it challenging. To mitigate this issue, several existing works have tried to leverage anomalies available in the training set to improve AD performance. However, this design may not be effective in real-world scenarios where very little or no anomalous data is available. In this thesis, the challenge of a pure AD design is studied using non-anomalous samples only. As it would not be feasible to develop a generic framework covering all the aforementioned applications, several pure AD models are developed, each dealing with a specific domain.
Although significant improvements have been achieved in face recognition, presentation attacks (PA) are recognised as a considerable threat to biometric devices, where an impostor tries to access a service illegally. In order to counteract PAs, the majority of approaches formulate the presentation attack detection (PAD) problem, a.k.a. face spoofing detection, as a two-class classification. Nevertheless, the two-class formulation does not perform robustly due to its poor generalisation in the presence of novel PAs. To address this limitation, a pure AD model is trained where real accesses are considered normal and PAs are presumed to be anomalous observations. An aspect of PAD design that has been overlooked is the use of client-specific information in the context of AD. It has been shown that client identity information can be deployed to achieve better discrimination between real accesses and PAs. As the first contribution, client-specific information is adopted to build the one-class classifiers (OCCs) and to determine a client-specific threshold.
To further improve the generalisation performance of OCCs, the idea of constructing a fusion of OCCs has received increasing attention. Nevertheless, very few studies in the literature have been concerned with developing a general methodology for OCC fusion design and examining its effectiveness across a broad range of applications. This thesis aims to redress this limitation by proposing a generic OCC fusion method. To boost performance, three novel contributions are proposed. First, as very few works consider the effect of population outliers on the normalisation process, a new score normalisation method is proposed as a pre-processing step to multiple classifier fusion that copes well with heavy-tailed non-anomalous data distributions. Second, to be faithful to the pure AD design philosophy, a novel fitness function is defined which requires only normal observations to estimate the competency of OCCs. Third, a new pruning method is proposed to discard OCCs trained on little or uninformative data from the fusion, improving the AD results.
Up to this point, pre-trained ImageNet CNNs have been used in the thesis to extract the features from image data. To train a CNN from scratch, a deep network is pretrained using self-supervised learning for an auxiliary geometric transformation (GT) classification task. The key contribution is a novel loss function that augments the standard cross-entropy by an additional term that plays a significant role in the later stages of self-supervised learning. The proposed enabling innovation is a triplet centre loss with an adaptive margin and a learnable metric, which relentlessly drives the GT classes to exhibit continuously improving compactness and inter-class separation. The pretrained network is finetuned for the downstream task using non-anomalous data only, and a GT model for the data is constructed. Anomalies are detected by fusing the output of several decision functions defined using the learnt GT class model.
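For context, the standard triplet-centre loss that the thesis's adaptive-margin, learnable-metric variant builds on can be sketched as follows (this is the published form, not necessarily the thesis's exact formulation). For an embedding $f_i$ with label $y_i$, class centres $c_k$, and margin $m$:

$L_{tc} = \sum_{i=1}^{N} \max\left(0,\; \lVert f_i - c_{y_i}\rVert_2 + m - \min_{k \neq y_i} \lVert f_i - c_k \rVert_2\right)$

Each sample is pulled toward its own class centre while being pushed away from the nearest other centre, which is what drives the compactness and inter-class separation described above.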
Extensive experiments on publicly available AD datasets demonstrate the effectiveness of the proposed contributions and lead to significant performance gains compared to the state-of-the-art methods. This includes benchmarking datasets in PAD, conventional tabular datasets in the machine learning domain and common computer vision databases.
Attend the event
This is a hybrid event free for everyone to join
#!/usr/bin/env python3
import argparse
import sys
import dotnet.object as objects
from dotnet.object import PrimitiveArray, ClassInstance
from dotnet.io.binary import BinaryReader
import typing
if typing.TYPE_CHECKING:
from typing import Dict, Tuple
from dotnet.object import ClassObject, Instance
def inspect_classes(classes: 'Dict[Tuple[int, str], ClassObject]'):
print("Read {} classes".format(len(classes)))
for class_key, class_obj in classes.items():
print(" Class {}".format(class_key))
for member in class_obj.members:
extra_info = member.extra_info.name if member.binary_type == 0 else member.extra_info
print(" {}: {}, {}".format(member.name, member.binary_type.name, extra_info))
def inspect_instance(instance: 'Instance') -> None:
instance_type = type(instance)
if instance_type is ClassInstance:
instance: ClassInstance
inspect_class_inst(instance)
elif instance_type is PrimitiveArray:
instance: PrimitiveArray
inspect_primitive_array(instance)
else:
print(repr(instance))
def inspect_primitive_array(inst: 'PrimitiveArray') -> None:
print("Primitive Array")
print(" Data Type: {}".format(inst.primitive_class.__name__))
print(" Values:")
for i, value in enumerate(inst):
print(" value[{}] = {}".format(i, value))
def inspect_class_inst(inst: 'ClassInstance'):
print("Class Instance")
print(" Class: {}".format(inst.class_object.name))
print(" Members:")
for i, value in enumerate(inst.member_data):
print(" {}: {}".format(inst.class_object.members[i].name, value))
def main():
parser = argparse.ArgumentParser(description="NRBF Test Data Visualizer")
parser.add_argument("--file", "-f", help="Data file to parse")
args = parser.parse_args()
if args.file is None:
print("No input file specified")
sys.exit(1)
ds = objects.DataStore.get_global()
reader = BinaryReader(ds)
value = reader.read_file(args.file)
inspect_classes(ds.classes)
inspect_instance(value)
if __name__ == "__main__":
main()
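A hypothetical invocation, assuming the script is saved as inspect_nrbf.py and given a .NET BinaryFormatter (NRBF) payload (both file names are placeholders):

python3 inspect_nrbf.py --file sample.nrbf

The script prints every class definition found in the stream, then pretty-prints the root instance.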
Hi, I would like to run USEARCH and MUSCLE from within QIIME 2, but they are not listed as plugins yet and I am not really sure how to start. If I download them, can I just run them directly from the command window? What is the process for adding them in?
Hi @annat! Unfortunately, we don’t have QIIME 2 plugins for those tools yet. You can export your data from QIIME 2, and then load this up into either of those tools. We aren’t able to provide support for these tools (including installation), since we aren’t the developers of them, but, perhaps in the future we will have Q2 plugins for them!
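For the export step, a command along these lines works from a terminal (a sketch: the file names are placeholders, and the exact flags vary between QIIME 2 releases):

qiime tools export --input-path rep-seqs.qza --output-path exported-rep-seqs

The exported FASTA/BIOM files can then be fed to USEARCH or MUSCLE directly.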
Thank you Mathew,
presumably the QIIME 2 developers feel you can do similar things with the available plugins in QIIME 2, such as VSEARCH. However, in the documentation about the plugins there isn't any reference for their development (like there is with USEARCH, for example (Edgar 2010)). As I am new to bioinformatics and to QIIME and QIIME 2, I am not finding the available information on the QIIME website enough to understand what each plugin does exactly (e.g. for VSEARCH: https://docs.qiime2.org/2017.10/plugins/available/feature-classifier/classify-consensus-vsearch/). Where can I find more information on what the available plugins are actually doing and how they perform?
Hi @annat - I would suggest starting here. We have lots of great docs available, and several detailed tutorials. I think spending some time working through those tutorials will get you up to speed on the ethos of Q2.
As far as your specific mention of vsearch in the feature-classifier classify-consensus-vsearch method: that particular method uses vsearch for classification purposes. As you learned in your other thread, this method leverages vsearch as part of the overall computation. It could just as easily have used some other tool or technique. What I am trying to say is that even though this one method uses vsearch, it isn't the only place in QIIME 2 that you might see vsearch pop up! Some plugins in QIIME 2 basically just wrap a tool, while others perform very specific tasks, and others do a bit of both. By reviewing the resources I pointed you to above, I think you can become familiar with QIIME 2 and the available plugins, and what it is they do.
If you get stuck or have any questions, you know where to find us!
I think VSEARCH includes excellent documentation, once you find it! You can download the vsearch manual PDF from the release page.
If you have any questions about vsearch, you can post them on the vsearch forums. The devs are really helpful.
[Nagiosplug-help] Tracking down pthread/check_dns problem on CentOS4 w/ 1.4.2 plugins.
John P. Rouillard
rouilj at cs.umb.edu
Mon Nov 28 09:27:16 CET 2005
I am running CentOS4 (the RH Enterprise 4 public version) and I am seeing the
"nslookup returned error status"
problem. However, the plugins I am using were compiled on this box. As Ton Voon said:
> Are you using RedHat? There is a known problem with bind on RedHat
> where the nslookup and dig commands do not exit correctly due to a
> kernel pthread issue.
CentOS is "close enough" I guess 8-(.
> If you are using Redhat, this problem is fixed in nagios-plugins
> 1.4.2, but you need to compile it yourself for the ./configure script
> to pick up that your system has a problem and workaround it.
It seems like it doesn't work for CentOS and the kernel I am running. Grepping through the sources for 1.4.2 doesn't show me a reference to the pthread bug or a workaround for it in check_dns.c. However, I came across the following ChangeLog entry:
2005-09-12 11:31 tonvoon
* plugins/popen.c, Makefile.am, configure.in, config_test/Makefile,
config_test/child_test.c, config_test/run_tests: ECHILD error at
waitpid on Red Hat systems (Peter Pramberger and Sascha Runschke)
A little more searching in plugins/popen.c turned up this segment of code:
/* wait until SIGCHLD */
Now, looking at configure to see where REDHAT_SPOPEN_ERROR is defined, I see it calling grep "\.EL$" on the output of "uname -r". My uname -r output is "2.6.9-22.0.1.ELsmp", so this test never succeeds.
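To illustrate the anchoring problem (a reconstruction, not output from the actual configure run):

$ echo "2.6.9-22.0.1.ELsmp" | grep "\.EL$"     # no match: "smp" follows ".EL"
$ echo "2.6.9-22.0.1.ELsmp" | grep "\.EL"      # matches with the anchor removed
2.6.9-22.0.1.ELsmp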
Correcting the configure script (I deleted the closing $ anchor) to allow the test to run, I see it calling make to run "config_test/run_tests 10". If I run run_tests with an argument of 1000, I get Success=993 Fail=7; with "run_tests 10", I get a successful completion better than 80% of the time, leading to REDHAT_SPOPEN_ERROR being left undefined.
Increasing the iterations and fixing the regexp so that
REDHAT_SPOPEN_ERROR is defined in config.h does seem to have solved
the problem. However:
> Alternatively, Sascha Runschke has been working with Red Hat and it
> has been fixed in hotfix-kernel-2.6.9-22.12.EL, which you can
> probably request from them through your support contract.
I think I am seeing this problem in a Java-based application as well. Searching through Red Hat's Bugzilla hasn't led me to the ticket for this fix. Does anybody have the kernel patch or a ticket ID, so I can see the actual problem and try to fix/verify it, or send it to the CentOS folks for inclusion in a release/patch?
My employers don't acknowledge my existence much less my opinions.
Speech and vision services at enterprise scale with security, compliance, and global reach
Azure Media Analytics is a collection of speech and vision components that organizations and enterprises use to get actionable insights from their video files through machine learning technology. Media Analytics services are hosted on the Azure Media Services platform, which is the Azure media solution for encoding, encrypting, and streaming audio or video at scale, live, or on demand (VOD). Media Analytics is offered at enterprise scale and it delivers the compliance, security, and global reach that large organizations need.
What industries can use Media Analytics?
- Analyze evidence. Collect media from body cams, dash cams, and other devices, and analyze it to extract intelligence while observing chain of custody requirements.
- Protect identity. Redact videos to protect people’s identity and comply with the requirements of the Freedom of Information Act.
- Speed up investigations. Extract data from media and use it to build intelligent search indexes that can help speed up investigations.
- Investigate crime. Process video and events collected from surveillance cameras at scale.
- Reduce false positives. Conduct deep analysis of the video snippets associated with motion events from surveillance cameras to reduce false positives.
- Summarize surveillance footage. Generate an intelligent summary of surveillance footage by using Hyperlapse to smooth out time-lapse videos.
- Analyze customer calls. Use Media Indexer to convert speech to text on audio data from customer support calls and find patterns.
- Analyze customer patterns. Correlate customer movements through a store with sales data to make decisions about product placement.
- Speech-to-text. Important for any business that provides customer support through a call center. Use the text extracted from customer support calls to build a search index or analyze the tone of the customer and the customer representative.
- Optical character recognition (OCR). For any business that has video with text content in it, such as videos with PowerPoint presentations, or videos of people with name tags.
- Face emotion recognition. For any business that has videos with customers in it. Correlate facial expressions with extracted text using Indexer to make decisions on future interactions with the customer.
- Automatically generate standard caption files for your videos
- Choose from a growing selection of languages
- Extract spoken keywords to help in search and recommendation
- Use custom vocabulary adaptation to recognize domain specific speech content
- Technology built on more than 20 years of research in computational photography
- Create smooth and stabilized time lapses from first-person videos
- Support for different speed-up factors from 1x to 25x
Motion detection (Preview)
- Detect when motion has occurred in videos with stationary backgrounds
- Eliminate false positives caused by light changes, shadows, small insects, and other issues
Face detection (Preview)
- Detect faces that appear in videos
- Track movement of faces over multiple frames
- Analyze the output metadata that provides information about timestamps and face locations
Face emotion detection (Preview)
- Recognize the emotion of a person or crowd over time based on the facial expressions in the video
- Identify emotions based on expressions that psychological research has identified as universal
- Recognize specific emotions such as happiness, sadness, surprise, anger, contempt, fear, disgust, and neutral
Video summarization (Preview)
- Create summaries of long videos to enable consumers to get a quick preview of the video
- Choose between short previews that are a few seconds long, or slightly longer previews that are a few minutes long
- Choose whether fade transitions should be applied between shots in the summarized videos
- Ideal for building a web page similar to the Bing Videos search page
Video optical character recognition (Preview)
- Extract typeset words from video content
- Select your own sampling rate to balance performance and quality
- Specify where in the video to look for captions.
Content moderation (Preview)
- Detect pornography, racism, profanity, violence, and other content that you want to moderate in a video
- Save money and reduce errors by avoiding the need to hire human content moderators to screen for offensive, illicit, and inappropriate content
Constant garbage collection Java
I check my application log and see the following:
163.029: [GC163.029: [ParNew: 545354K->8K(613440K), 0.0421560 secs] 547578K->2232K(20903424K), 0.0422630 secs] [Times: user=0.27 sys=0.03, real=0.04 secs]
164.014: [GC164.014: [ParNew: 545352K->6K(613440K), 0.0438010 secs] 547576K->2230K(20903424K), 0.0439220 secs] [Times: user=0.30 sys=0.00, real=0.04 secs]
164.995: [GC164.996: [ParNew: 545350K->10K(613440K), 0.0350310 secs] 547574K->2234K(20903424K), 0.0351570 secs] [Times: user=0.27 sys=0.00, real=0.04 secs]
165.967: [GC165.967: [ParNew: 545354K->8K(613440K), 0.0532350 secs] 547578K->2232K(20903424K), 0.0533560 secs] [Times: user=0.39 sys=0.00, real=0.06 secs]
166.946: [GC166.946: [ParNew: 545352K->10K(613440K), 0.0308930 secs] 547576K->2234K(20903424K), 0.0309980 secs] [Times: user=0.25 sys=0.00, real=0.03 secs]
167.919: [GC167.919: [ParNew: 545354K->12K(613440K), 0.0393180 secs] 547578K->2236K(20903424K), 0.0394180 secs] [Times: user=0.30 sys=0.00, real=0.04 secs]
168.890: [GC168.890: [ParNew: 545356K->4K(613440K), 0.0449310 secs] 547580K->2230K(20903424K), 0.0450500 secs] [Times: user=0.31 sys=0.00, real=0.04 secs]
169.869: [GC169.869: [ParNew: 545348K->4K(613440K), 0.0422740 secs] 547574K->2230K(20903424K), 0.0423800 secs] [Times: user=0.26 sys=0.02, real=0.04 secs]
170.850: [GC170.850: [ParNew: 545348K->4K(613440K), 0.0434500 secs] 547574K->2230K(20903424K), 0.0435570 secs] [Times: user=0.31 sys=0.00, real=0.04 secs]
There's plenty of memory (I'm using only 2.6% of available memory).
What could possibly cause such behaviour? I'm using this command:
java -Xss515m -Xms20g -Xmx20g -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar
This --> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps is what is printing these lines in the first place.
Also, do you really want to set the initial memory allocation pool to 20GB?
For a 20 GB heap, your young generation of 600 MB is way too small.
Given that HotSpot uses a generational GC with an ephemeral young space, frequent minor collections like these are normal and harmless. They just mean that the eden generation has run out of space and is being collected, which is normally very fast; pretty much unnoticeable.
The frequency of garbage collection, therefore, is mostly related to your allocation bandwidth, rather than the amount of memory "in use". The mistake is in thinking it's something bad, while in fact it is how the GC is designed and intended to work.
All modern Java GCs have a generational structure.
If you use Oracle HotSpot, you have a young generation, an old generation and, depending on the JDK version, PermGen (up to version 7) or Metaspace (version 8).
The young generation is usually smaller than the old one, because its collection should finish quickly.
If you want to decrease the number of minor collections, then set the -Xmn param to a bigger value
(e.g. -Xmn1G).
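For instance (a sketch: the jar name is a placeholder, and the right size depends on your allocation rate), the original command with a 1 GB young generation would be:

java -Xss515m -Xms20g -Xmx20g -Xmn1g -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar your-app.jar

With a bigger eden, the minor collections shown in the log above would happen far less often, at the cost of each one taking somewhat longer.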
To download the iPad version of Questionmark Secure: Questionmark Secure will be downloaded and installed automatically, and an icon will be placed on your iPad's home screen. At this point, you must activate a Guided Access session in order to launch the assessment. Before you can do this, the Guided Access feature must be enabled on your iPad.
To enable the Guided Access feature, set a passcode. This passcode controls the use of Guided Access and prevents you from exiting the assessment while taking it.
Question marks in OS X and what to do about them
Secondary issues: before I rebooted the computer, I opened Atom. A second icon appeared at the end of the dock, appearing to be the icon that Atom now runs from, so I have two Atom icons in the dock now. Fun fact: this exact issue happens with GitKraken, another Electron app. I have a feeling it's deeper than just Atom; something within Electron. I think I found the issue. It has to do with apps downloaded from the internet.
In the case of normal installers, macOS prompts you to verify the app at the point of install. But VS Code is an application you manually extract from a zip file, so there is no installer; that prompt comes the first time you try to run it. The problem is most people just download and unzip the app, drag it to Applications, then to the dock, then run it. But you've dragged an app to the dock before it's been verified, which is the real cause of the problem. The solution is to verify it before you add it to the dock. I again deleted everything, but this time, after I unzipped it into Downloads and before I moved it anywhere, I ran it right from there.
It verified and opened fine. I then copied it to Applications, then the dock and it worked as expected.
As a third test, I repeated test 2, but this time I copied it to Applications first before running it, then ran and verified it from there, then dragged it to the dock. Sure enough, that worked too. It's not a VS Code issue, it's not an Electron issue. It's not an Archive Utility vs Dr. Archiver or anything else issue.
It's a 'You added it to the dock before you verified it was safe!' problem. Before you add it to the dock, open and run it from anywhere; it doesn't matter where. This seems to be the case for any zip-distributed app. This shouldn't require a workaround; other apps in Sierra don't suffer from this problem.
It's annoying for users to have to read a GitHub post and type commands into the terminal; many people expect it to "just work" like other apps and text editors. It can be a dealbreaker. Please read my post. This isn't a problem with VS Code or Electron or anything else.
It's a case of a not-yet-verified app being added to the dock. Again, the TL;DR version of my original comment above: before you add it to the dock, open it and answer the 'app downloaded from the internet' prompt. Once you've affirmed that you do want to open it, close it again, and now you can safely add it to the dock. This is the case with any app that would put up that 'downloaded from the internet' prompt. I too am using a fresh-out-of-the-box MacBook Pro, following the exact steps I described, and it works perfectly.
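That 'downloaded from the internet' prompt is backed by macOS's quarantine extended attribute, so you can also inspect or clear it from Terminal (a sketch; the application path is only an example):

xattr -p com.apple.quarantine "/Applications/Visual Studio Code.app"    # print the flag
xattr -dr com.apple.quarantine "/Applications/Visual Studio Code.app"   # remove it recursively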
The things I would ask are: how did you install Atom? In what directory did you install Atom?
An update about it: I reinstalled Atom 1. Thanks for following up!
Great, now it works! But how do I fix the problem with the assistant's speech? Every time it leaves a piece behind, and sometimes it only says a single word; it always bugs out. Do you know how to fix it??
It already depends on the operation of the recognition engine on your phone. Internet quality, clarity of pronunciation, the presence of extraneous noise, etc.
Anyway, thank you for the help, friend. Much appreciated…
The StopASR command is not working on v7.
After issuing the command, the app beeps continuously even if muteaudio is enabled.
Strange, but on my phone the new version works fine. I did not change the code to stop recognition
I’ll try to find out what is happening. Perhaps you need to make it possible to record a log.
I’m trying to use your V7 example to understand how to configure the Keyword feature.
However, when loading "TEST_ASR_V7.aia", I get an error: "the blocks area did not load properly".
Also, from the photos you can see that one block is grayed out and some block components were hidden behind another block. I'm unsure if this is specific to me or if others are also having issues.
Any help would be great.
Yes, it happens to me too. The project is working fine. But, if I export it to a computer, then it will not load. I’ll try to find out what is the matter.
Thank you very much! Would you be able to post a screenshot of what the blocks should look like, this will help me with my understanding.
Also a huge level of appreciation for all your work on this extension, amazing stuff.
Try version 8. Notify me if the problem goes away.
v8 doesn't seem to fix the StopASR bug.
I've been using version 3.1 before on the same phone; it works fine for my needs.
But then it suddenly developed the always-beeping bug, even with muteaudio enabled.
I tried V7 and the beeping bug is gone, but StopASR does not work; I tried reverting back to v3.1 and StopASR still does not work.
Probably Android auto-updated the speech recognition engine on my phone without me noticing. My Android is version 10; the Google text-to-speech engine is v3.21.8.
My other phone works fine even on version 3.1 of this plugin, even StopASR, but it's on a lower version of Android.
Unfortunately, I do not have a smartphone with android 10 to check this problem.
Any suggestion on how I could make the voice recognition continue to listen after a partial recognition that doesn't match the keyword? At the moment it stops, and the user needs to press start again. Also, the Stop/Start button stays at "Stop" even though the service has stopped listening.
When I import the V8 demo .aia in Kodular, I get errors.
Is the problem on my side, or are others having the same issue?
I tried it in App Inventor: it works fine, except "continuous recognition" doesn't seem to work in the demo (or maybe I misunderstand how it is supposed to work).
The example project is made in App Inventor. I am not sure if it will work in Kodular.
To help, you need to understand what specifically does not work. The continuous recognition mode differs from the usual one in that the recognition service continues to work, even if you keep silent for a long time. But, after the first recognized phrase, the recognition service will stop its work and give the result. Next, you must process the result and start the recognition service again, if necessary.
If the text on the button remains “STOP”, the AfterGettingText event has not occurred. If it did not happen, then the keyword is not recognized. In this case, the recognition process should continue. This is the correct functioning. Also, the sample project is very simple, and does not contain all the necessary button locks. For example, resetting a button in case of recognition error. It is assumed that you will do your own project, which will take into account all the subtleties.
These are the blocks related to the button:
I don’t see any mistake here.
Hi, I used this very helpful extension to create an application for my medical product project.
The plan is to give people without hand and/or arm function the ability to switch up to 6 outputs on my Bluetooth receiving module. You can connect this receiving module with jack connectors to switch on an electric wheelchair, switch its mode, or control many other devices with jack input connectors.
If you are interested you can send me an email.
Hi, I hope your application helps a lot of people.
If you have any questions, I will try to help you.
Passing data from Fragment to DialogFragment
I'm trying to show a dialog when I get a Volley error and tell the user to retry, so I chose a DialogFragment to be able to customize it.
I'm handling the error as follow in my Fragment class:
if (vError instanceof TimeoutError || vError instanceof NoConnectionError) {
Toast.makeText(getContext(),
Objects.requireNonNull(getActivity()).getString(R.string.error_network_timeout),
Toast.LENGTH_LONG).show();
} else if (vError instanceof AuthFailureError) {
Toast.makeText(getContext(),
Objects.requireNonNull(getContext()).getString(R.string.error_network_auth_error),
Toast.LENGTH_LONG).show();
} else if (vError instanceof ServerError) {
Toast.makeText(getContext(),
Objects.requireNonNull(getContext()).getString(R.string.error_network_server_error),
Toast.LENGTH_LONG).show();
} else if (vError instanceof NetworkError) {
Toast.makeText(getContext(),
Objects.requireNonNull(getContext()).getString(R.string.error_network_network_error),
Toast.LENGTH_LONG).show();
} else if (vError instanceof ParseError) {
Toast.makeText(getContext(),
Objects.requireNonNull(getContext()).getString(R.string.error_network_parse_faillure),
Toast.LENGTH_LONG).show();
}
Currently I can only show a Toast message per error type.
With the following, I'm trying to pass the message as an argument, but it doesn't seem to work.
Bundle args = new Bundle();
args.putString("vErr", "vErr");
DialogFragment errFragment = new NetworkErrorDialogFragment();
errFragment.setArguments(args);
errFragment.show(getFragmentManager(), "NetErrDialogFragment");
Edit:
Retrieving the value:
In onCreateView of the dialogFragment:
errorTextView.setText(getArguments().getString("vErr"));
please paste that part of code where you are retrieving the arguments in the dialog fragment
Possible duplicate of How to pass data from a fragment to a dialogFragment
Are you retrieving it in
onCreate(){
getArguments
}
of NetworkErrorDialogFragment?
no, onCreateView I set the textView message to getArguments().getString("vErr"))
not advised but okay, can you debug inside onCreate
my dialogFragment doesn't have that method. I didn't override it
would it make any difference?
You tell me, anyway this is a very simple issue, do some debugging you will stumble into the issue
DialogFragment uses the onCreateDialog() method to create a dialog and its content view, so you need to override this method and set an appropriate content view on it:
@Override
public Dialog onCreateDialog(Bundle savedInstanceState) {
Dialog dialog =new AppCompatDialog(getContext(), getTheme());
dialog.setContentView(R.layout.some_content_view);
    // retrieving arguments here
TextView errorTextView = dialog.findViewById(errorTextViewId);
errorTextView.setText(getArguments().getString("vErr"));
return dialog;
}
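For what it's worth, the Bundle in the question stores the literal string "vErr" as the value, so the dialog can only ever display "vErr". A minimal sketch of wiring the two parts together (the helper name is made up; everything else mirrors the code above):

// Hypothetical helper inside the Fragment: resolve the message first,
// then pass the value (not the key) into the arguments Bundle.
private void showNetworkErrorDialog(String message) {
    Bundle args = new Bundle();
    args.putString("vErr", message);

    DialogFragment errFragment = new NetworkErrorDialogFragment();
    errFragment.setArguments(args);
    errFragment.show(getFragmentManager(), "NetErrDialogFragment");
}

// e.g. inside the Volley error handler:
// showNetworkErrorDialog(getString(R.string.error_network_timeout));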
|
STACK_EXCHANGE
|
Sitting at work the other day going through some old packages that are still in production. We are in the middle of upgrading from 2008 to 2016 so it gives us a great opportunity to review any package that may be a slow running package or one that gives constant failures. Many of the packages are on their 2nd or 3rd upgrade and many still have VB script in script tasks performing simple tasks like setting dates for files. This has led to a discussion among us about the best way to go about fixing the issues, since many of the packages need to have the VB removed or changed to C# to work efficiently with catalogs in SQL Server 2016.
After some discussion, and proving why my point was correct once again (lol), we figured that wherever possible we are going to remove any script task and replace it with expressions on variables. Why replace something that works, some may ask? The answer is a simple one with more complex logic behind it. Variables are almost free. Yes, I know a script task has no monetary cost either, but I am talking about the overhead of executing the package with a script task compared to without one.
When an SSIS package is executed it validates connection strings, data flows and any other task that is within the package. It also gets the variables ready, so if you have a date variable, a file name that is pieced together, or any of the thousands of other uses for expressions, those are also prepared when the package first executes. On its own that does not seem like a very strong argument for letting the variables do the work and getting rid of the script tasks.
Let's look at this with script tasks that are used to set the dates and build the file names. In order to make those dates usable, or to get the file path into a connection string, what needs to happen? That piece of the puzzle needs to end up in, yes, a variable. So the variable was already prepared when the package first executed, and now it is being set again by the script task. Why do something twice? Use an expression, set the variable once, and be done with it.
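For illustration, the kind of expression being described might look like the following on a string variable holding a date-stamped file name (the folder variable and file name are invented for the example; only the expression functions are standard SSIS):

@[User::OutputFolder] + "\\Extract_" + (DT_WSTR, 4) YEAR(GETDATE()) + RIGHT("0" + (DT_WSTR, 2) MONTH(GETDATE()), 2) + RIGHT("0" + (DT_WSTR, 2) DAY(GETDATE()), 2) + ".csv"

Set once as the variable's expression, it is evaluated when the package runs and never needs a script task.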
Now some of you might be saying that the script task is only adding a second here or there, so what's the big deal? Well, simple math shows a different story. We have about 150 packages all running on the same server throughout the day, many of them using the same databases and sometimes even the same tables. If 75 of those 150 packages use script tasks, and some of them use 2 to 3 script tasks to set multiple variables, we are back up to 150, possibly even higher, for the number of script tasks being executed.
Now if each one takes one second, we are possibly losing a little more than 2 minutes every day because of the script tasks. Yes, I know, in the grand scheme 2 minutes is not very much time, but when 150 packages are involved and many of them run every 15 to 20 minutes, that starts adding up. So, let's say that of the 75 packages that contain a script task, 30 of them need to run every 20 minutes and half of those have more than one script task. That puts us at roughly 45 script-task seconds every 20 minutes: 45 (script tasks) * 3 (executions per hour, i.e. 60/20) * 24 (hours in a day) = 3,240 seconds every day, or 54 minutes.
The 2 minutes has now increased to 54 minutes throughout the day. If the rest of the package all ran as originally designed or the database is properly indexed, then the 54 minutes from those packages may never be an issue. As we all know, databases do what they are designed for and that is to collect data. Now as more data is collected the speed to read and write to those tables now is getting slower. A package that used to take 2 minutes to run now takes 5 and the next one and the next one, eventually you have multiple packages performing large amounts of ETL processes and they just stop. You check the logs, no errors are showing, packages are showing that they executed or are running. That’s when it hits you, packages are still running, some that should never run at the same time are now running at the same time and the data is not in the tables, the business side is starting to call and send emails because they can’t get their reports and why. Because you wanted a script task to set the date for you.
Now I know there are several other factors in the scenario that would contribute to the failure and many are enough to cover pages of discussions and arguments. The one thing to be always thinking when designing, building or repairing packages is to optimize performance. One way to optimize performance is to save the 1 to 2 seconds that the script task is using up.
Besides performance, there is an excellent reason not to use script tasks for things such as dates and file names. Think of the next person who needs to rerun the package to fix a data issue. If the date is set in a script task, then the code in the task needs to be changed, the package run to fix the data, and the script changed back to what it was, hopefully without messing anything up. If the date is set in a variable (maybe even two variables that combine into the date), then that one variable can easily be found, changed, and changed back to fix the data. When I build packages, I prefer to make any date as flexible as possible; I use multiple variables to arrive at one date, which allows me to change a single number-of-days-back variable and execute the package to recover lost data.
I always build my packages using a back-date loop, this allows me to go back and fix data in an easy and smooth way. The loop provides three parts, first a Boolean variable that if needing to back date it is set to true, next start and stop variables to set the dates that need to be fixed and finally a loop that executes the package and advances the day. I plan to eventually do a tutorial on how to set up this easy but very useful task that I use all the time in new builds and every repair that I must make that uses dates.
If you only have a few packages or they run very little, then the time constraint is not a big deal, and for most people this is the case. But if you are looking for maximum optimization and ease of data repair later, then please, please, please stop using script tasks for something that can be done in an expression. You may want to show off your C# or VB skills to others, but save those for bigger projects like pulling Active Directory information.
|
OPCFW_CODE
|
Driver: Microsoft Windows: Manufacturer: “ Generic” Printer: “ MS Publisher Imagesetter” Via IPP. Easily add Print Directory Listing to Windows Explorer context menu in XP Vista that will print any folder contents directory tree structure. Print documents reliably from any Windows or Mac application by selecting Adobe PDF as your printer.
This visual basic script will print all the documents of the folder that you run the script to the default print. When I have illustrator open but the file name doesnt show up in the file name area, go to scripts– it shows up there, lets me browse for the multi page pdf, choose it I can pick pages as well– but the open button never. With it defrag disk, fix errors, remove cache files, you can clean windows registry, update windows download dlls.
Command Prompt CMD Commands are unknown territories for most of the Windows users they only know it as a black screen for troubleshooting the system. FTYPE and REGEDIT to read the PDF print command from the registry we can print PDF. With DOSPrinter you can print to a GUI printer from your DOS application.
/ 08/ 11/ weekend- scripter- use- the- windows- task- scheduler- to- run- a- windows- powershell- script/. For Ghostscript versions 9. Sort printed files, control all printing settings. Windows print pdf script.
You' ll find the Microsoft Print to PDF feature in the Print dialog box from a standard Windows application. StrMsg = vbCrLf & " This script only works when Acrobat Reader is the default. How to print from Ms- Dos programs installed on a Windows Terminal Server to a local remote Windows printer, Windows- Only , including GDI Virtual printers with Printfil.
Or try having your multipdf script call singlepdf with the file. The following VB script show how the COM.
This tool can print your PDF to a Windows. Computer Configuration Policies Administrative templates Windows Components Remotes Desktop Services Remote desktop Session Host Security Always prompt for password upon connection. Print Files from Batch Files. And in those jobs I always use the Save As PDF feature.
Virtual PDF Printer for PDF Generation. JAWS developed for computer users whose vision loss prevents them from seeing screen content , is the world' s most popular screen reader, Job Access With Speech navigating with a mouse. Print Text Files. I have downloaded and put the script in the presets/ scripts folder.
Windows print pdf script. I have tried using the following solution to print from Excel to PDF: Excel Print to PDF in VBA While the solution seems to have worked for other people, it produces a run time error 1004 in. Windows Script Host may be used for a variety of purposes including logon scripts, administration general automation. Supports Citrix Windows 8, Windows Server, Windows 7, Terminal Server .
Microsoft describes it. Here is the list of all Windows CMD commands sorted alphabetically along with exclusive CMD commands pdf file for future reference for both pro and newbies.
' Execute the PDF print command for each PDF file. Examples of using Virtual PDF Printer: Microsoft Notepad Excel, MsPaint: Print Microsoft Word, WordPad PowerPoint: Print.
Print multiple files in various formats at once. Jan 15 advice for Microsoft Windows 7 Computers such as Dell, support community, Acer, HP, providing friendly help , Asus , · Windows 7 Forums is the largest help a custom build. Using your PDF postscript printer via IPP. Print Server: Installs the print server and Print Management console.
It does not address commands that are specific to DOS environments Windows 98, Windows Me, such as Windows 95, to DOS- based operating systems whose Microsoft- supplied. Printing PDFs from Windows Command Line. You can print a PDF from the command.
The World' s Most Popular Windows Screen Reader. This is a prerequisite for configuring print services on Windows Server. SmartPCFixer™ is a fully featured and easy- to- use system optimization suite.
Exe cmd ( after its executable file name), is the command- line interpreter on Windows NT, Windows CE, OS/ 2 eComStation operating systems. To print a PDF file to the default Windows printer,.
Download free print management software Print Conductor. PCL to PDF - PCLXForm is the most powerful product for converting PCL PXL PX3 to PDF plus ASCII text. Command Prompt, also known as cmd. This book addresses 32- bit Windows commands applicable to modern versions of Windows based on the Windows NT environment.
Jan 17, · Using PowerShell to print pdf files automatically. I also use it manually, but I have a rather clean Virtual Machine for my FrameMaker work.
Windows Scripting; General Scripting;. I would like to have a script that will print a pdf.
Script to print a pdf file Hey,. For Windows 7 VDAs that will use Personal vDisk, install Microsoft hotfix– A computer stops responding because of a deadlock situation in the Mountmgr. Portable Document Format ( PDF) is the de facto standard for the secure and reliable distribution and exchange of electronic documents and forms around the world.
Oct 05, · Ian, part of my job is to create automated publishing routines via script.
|
OPCFW_CODE
|
comp.sys.ibm.as400.misc - IBM AS/400 miscellaneous topics.
Fascinating. I tried qualified subfields recently on V5R1 only to discover it didn't work. I just gave up and didn't look at the generated source. I ended up using a PREFIX keyword on the externally described data structures. Possibly stupid question: You're sure you compiled to V5R2? (Our development box is at V5R2, but all the CRT... commands have been changed to specify a target release of V5R1.) For as much as IBM has pushed SQL on the iSeries, they haven't helped the cause with the level of sophistication of their pre-compilers.

Sam

"Andrew Goodspeed" < XXXX@XXXXX.COM > wrote in message news: XXXX@XXXXX.COM ...
> I found a relevant posting from Kent Milligan dated 7JUN2001 on this
> topic, which seemed to be asking whether this was an enhancement that
> should be pursued. Based on the documentation (for V5R2) I presume that
> it was. To wit:
>
> "When writing an SQL statement, referrals to subfields can be qualified.
> Use the name of the data structure, followed by a period and the name of
> the subfield. For example, PEMPL.MIDINT is the same as specifying only
> MIDINT."
>
> As documented, this seems an odd way of supporting qualified subfields,
> that is, by stripping off the qualification. What is even odder (to me),
> is that the documentation is entirely correct. Given the code snippet:
>
> C/EXEC SQL
> C+ FETCH AchRequestsC INTO :AchReqTmp.ExtVerId,
> C+ :AchReqTmp.ExtAcctHldTyp,
> C+ :AchReqTmp.ExtTranDesc,
> C+ :AchReqTmp.ExtTranDate,
> C+ :AchReqTmp.ExtExecDate,
> C+ :AchReqTmp.ExtTranType,
> C+ :AchReqTmp.ExtAcctType,
> C+ :AchReqTmp.ExtPrenote,
> C+ :AchReqTmp.ExtTrgBnkRT,
> C+ :AchReqTmp.ExtTrgBnkRTCk,
> C+ :AchReqTmp.ExtAccount,
> C+ :AchReqTmp.ExtAmount,
> C+ :AchReqTmp.ExtRcpId,
> C+ :AchReqTmp.ExtRcpName
> C/END-EXEC
>
> The precompiler generates:
>
> 454 C Z-ADD -4 SQLER6
> 455 C CALL 'QSQROUTE'
> 456 C PARM SQLCA
> 457 C PARM SQL_00006
> 458 C SQL_00009 IFEQ '1'
> 459 C EVAL EXTVERID = SQL_00011
> 460 C EVAL EXTACCTHLDTYP = SQL_00012
> 461 C EVAL EXTTRANDESC = SQL_00013
> 462 C EVAL EXTTRANDATE = SQL_00014
> 463 C EVAL EXTEXECDATE = SQL_00015
> 464 C EVAL EXTTRANTYPE = SQL_00016
> 465 C EVAL EXTACCTTYPE = SQL_00017
> 466 C EVAL EXTPRENOTE = SQL_00018
> 467 C EVAL EXTTRGBNKRT = SQL_00019
> 468 C EVAL EXTTRGBNKRTCK = SQL_00020
> 469 C EVAL EXTACCOUNT = SQL_00021
> 470 C EVAL EXTAMOUNT = SQL_00022
> 471 C EVAL EXTRCPID = SQL_00023
> 472 C EVAL EXTRCPNAME = SQL_00024
> 473 C END
>
> Now, it does require that the qualification is valid, but once it
> verifies that, it renders the qualification moot by simply removing it.
> Needless to say, the compile fails on this generated code.
>
> Is this a feature or a bug, and is there any incantation that can get it
> to work in a sensible fashion?
>
> Much obliged.
Hello. v8.2. Is it possible? Example:
---------------
create function t(v varchar(1))
  modifies sql data
  returns table(c varchar(1))
begin atomic
  return select v from sysibm.sysdummy1;
end@

declare global temporary table session.test(c varchar(1)) on commit preserve rows@
---------------
Now I have tried:
---------------
1. insert into session.test select c from table(t('1')) as f;
   --- SQL20267, Reason Code=2
2. insert into session.test with a(c) as (select c from table(t('1')) as f) select c from a;
   --- SQL20165
3. begin atomic
     for g as
       with a(c) as (select c from table(t('1')) as f)
       select c from a
     do
       insert into session.test values (g.c);
     end for;
   end@
   --- SQL0901 (known bug)
---------------
Sincerely, Mark B.
I am looking for SQL syntax that will enable me to subtract quarters from timestamps. Any suggestions? Pseudo-code example - YEAR( timestamp_column - 9 Quarters) I know that I can easily subtract things like DAYS. However, I can't seem to be able to manipulate using Quarters. Thanks.
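One workaround (just a sketch: DB2 has no QUARTERS labeled duration, and the table name below is a placeholder) is to treat a quarter as three months and subtract the equivalent number of months:

-- 9 quarters = 27 months, expressed as a labeled duration
SELECT YEAR(timestamp_column - 27 MONTHS)
FROM my_table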
Can someone provide me the simple syntax necessary to insert or update to a row containing a single BLOB column, where the BLOB data will be obtained from a file? This is on a linux installation. The table has 2 INTs and 1 BLOB column. So, I've scoured various docs and such, and the closest I've come up with is some sort of animal that looks like this: db2 load from /tmp/myblobfile of asc method L (1 780) insert into...... Now I'm lost. Syntax for an UPDATE command would be just as helpful. Thanks.
|
OPCFW_CODE
|
# THIS PROGRAM IS ROBUST. YOU CAN CHANGE THE slabRange & slabCalculation VALUES. YOU CAN ALSO ADD MORE RANGES IF YOU NEED
slabRange = [100, 200, 300, 300]
slabCalculation = [1, 2, 3, 5]
# Calculates the value of index [0] for slabRange & slabCalculation
def initAmountCalculation():
    return slabRange[0] * slabCalculation[0]

# Calculates the value of all indices for slabRange & slabCalculation except the first and the last index
def midAmountCalculation():
    total = 0
    for i in range(1, len(slabRange) - 1):
        total = total + ((slabRange[i] - slabRange[i - 1]) * slabCalculation[i])
    return total

# Calculates only the last index [slabRange & slabCalculation] value
def endAmountCalculation(units):
    return (units - slabRange[len(slabRange) - 1]) * slabCalculation[len(slabRange) - 1]

# Calculates all the index [slabRange & slabCalculation] values except the first & last index, based upon the index value
def recurringDifferenceCalculation(index):
    result = 0
    for i in range(1, index):
        result = result + ((slabRange[i] - slabRange[i - 1]) * slabCalculation[i])
    return result

# Calculates the charge for the units that fall within the slab at the given index
def calcAmountWithInRange(units, index):
    return (units - slabRange[index - 1]) * slabCalculation[index]

# Main method which calculates the EB bill
def ebBillCalculation(units):
    if len(slabCalculation) != len(slabRange):
        return "length of slabRange & slabCalculation is not matching. So exiting..."
    if units <= slabRange[0]:
        return units * slabCalculation[0]
    if len(slabRange) >= 2:
        if units <= slabRange[1]:
            return initAmountCalculation() + calcAmountWithInRange(units, 1)
        if units > slabRange[len(slabRange) - 1]:
            return initAmountCalculation() + midAmountCalculation() + endAmountCalculation(units)
        for i in range(2, len(slabRange)):
            if units <= slabRange[i]:
                return initAmountCalculation() + recurringDifferenceCalculation(i) + calcAmountWithInRange(units, i)

units = 250
print(ebBillCalculation(units))
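
# Quick sanity check with a few illustrative unit values (assumes the default slabs above),
# e.g. 250 units -> 100*1 + (200-100)*2 + (250-200)*3 = 450
for u in (50, 150, 250, 400):
    print(u, ebBillCalculation(u))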
|
STACK_EDU
|
Calculating the root-mean-square-error between two matrices one of which contains NaN values
This is a part of a larger project so I will try to keep only the relevant parts (The variables and my attempt at the calculations)
I want to calculate the root mean squared error between Zi_cubic and Z_actual
RMSE formula
Given/already established variables
rng('default');
% Set up 2,000 random numbers between -1 & +1 as our x & y values
n=2000;
x = 2*(rand(n,1)-0.5);
y = 2*(rand(n,1)-0.5);
z = x.^5+y.^3;
% Interpolate to a regular grid
d = -1:0.01:1;
[Xi,Yi] = meshgrid(d,d);
Zi_cubic = griddata(x,y,z,Xi,Yi,'cubic');
Z_actual = Xi.^5+Yi.^3;
My attempt at a calculation
My approach is to:
1. Arrange Zi_cubic and Z_actual as column vectors
2. Take the difference
3. Square each element in the difference
4. Sum up the squared elements using nansum
5. Divide by the number of finite elements in the squared difference
6. Take the square root
D1 = reshape(Zi_cubic,[numel(Zi_cubic),1]);
D2 = reshape(Z_actual,[numel(Z_actual),1]);
D3 = D1 - D2;
D4 = D3.^2;
D5 = nansum(D4)
d6 = sum(isfinite(D4))
D6 = D5/d6
D7 = sqrt(D6)
Apparently this is wrong. I'm either mis-applying the RMSE formula or I don't understand what I'm telling matlab to do.
Any help would be appreciated. Thanks in advance.
What language is this? You've included lots of unnecessary tags, but not the most important one: the language you're using.
Matlab, changed the tags, sorry about that.
There is also a nanmean, or if you have a newer version of MATLAB you can do mean(A,'omitnan').
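For instance, the whole calculation can then collapse into one line (only a sketch; it assumes a MATLAB release that accepts the 'omitnan' flag):
rmse = sqrt(mean((Zi_cubic(:) - Z_actual(:)).^2, 'omitnan'));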
But why do you say this is wrong? Do you get an unexpected result? Do you get a NaN output? Does your result not match some pre-conceived notion of the result?
When I submit the assignment via matlab grader it says "incorrect value for rms_cubic". All the results up to that point are correct according to matlab grader. I sent an email to the instructor and am awaiting a response. He doesn't tell us what the correct value is in matlab grader.
Just a small tip. For Step 1, i.e. arranging the matrices to column vectors, you could simply use (:). E.g. D1=Zi_cubic(:);
Your RMSE is fine (in my book). The only thing that seems possibly off is the meshgrid and griddata. Your inputs to griddata are vectors and you are asking for a matrix output. That is fine, but you're potentially undersampling your input space. In other words, you are giving n samples as inputs, but perhaps you are expected to give n^2 samples as inputs? Here's some sample code for a smaller n to demonstrate this effect more clearly:
rng('default');
% Set up 2,000 random numbers between -1 & +1 as our x & y values
n=100; %Reduced because scatter is slow to plot
x = 2*(rand(n,1)-0.5);
y = 2*(rand(n,1)-0.5);
z = x.^5+y.^3;
S = 100;
subplot(1,2,1)
scatter(x,y,S,z)
%More data, more accurate ...
[x2,y2] = meshgrid(x,y);
z2 = x2.^5+y2.^3;
subplot(1,2,2)
scatter(x2(:),y2(:),S,z2(:))
The second plot should be a lot cleaner and thus will likely provide a more accurate estimate of Z_actual later on.
I also thought you might be running into some issues with floating point numbers and calculating RMSE but that appears not to be the case. Here's some alternative code which is how I would write RMSE.
d = Zi_cubic(:) - Z_actual(:);
mask = ~isnan(d);
n_valid = sum(mask);
rmse = sqrt(sum(d(mask).^2)/n_valid);
Notice that (:) linearizes the matrix. Also it is useful to try and use better variable names than D1-D7.
In the end though these are just suggestions and your code looks fine.
PS - I'm assuming that you are supposed to be using cubic interpolation as that is another place you could perhaps deviate from what's expected ...
Thanks Jimbo, I spoke with the instructor and he said my calculation was right. It seems that stack exchange recommends that I keep this question up instead of deleting it.
|
STACK_EXCHANGE
|
I cant, can I.
No, because you're not the OT. Chalk it up to one of the most retarded shits Pokémon does. It should be illegal to rename the pets you adopt IRL, just to make it fair.
Jump to content
Posted 29 February 2016 - 05:29 PM
I cant, can I.
Posted 29 February 2016 - 06:51 PM
Edited by gamerman99, 29 February 2016 - 06:52 PM.
Posted 29 February 2016 - 07:37 PM
Posted 29 February 2016 - 08:36 PM
Posted 29 February 2016 - 10:40 PM
I got a bit of a late start compared to the rest of you, but i finally found some good time to sink into yellow tonight. I just made it to cerulean and stopped as soon as i got to the city. Unlike what i'm reading of everyone else here, i've decided not to use the three starters in my playthrough. Instead, i'm using Pikachu, a Rattata, a Butterfree, a Gravler (which sadly, i won't be able to evolve), and later on, a Doduo and Omynite. As for why those six: well... let's just say i have my reasons .
Posted 05 March 2016 - 12:14 PM
I been too busy with Fire Emblem Fates to play more of my Yellow...
I only have two Pokemon and not even the first badge...
Thor, my Pikachu & Goku, my Mankey.
Yes, my nicknames are horrible. I know.
Posted 05 March 2016 - 12:23 PM
Posted 05 March 2016 - 07:32 PM
I'm currently at Silph Co. I did do a bit of sequence breaking to revive my fossil, though, since i wanted it on my team. I had to beat Koga to use surf and I was way underleveled for him. But it was worth it cuz now I have my final team of six.
Lol, my Mew is so fuckin' OP, even without any Psychic moves yet, that I don't think I'd have a problem doing that sequence break... only I don't know what that sequence break is like.
Btw, this was too good not to share. Faggot went all French on me.
>>Edit: Also, as you can see, I've been playing on that original-resolution, make-shit-look-tiny mode. It took a while to get used to all the wasted screen space, but now I don't wanna go back. I haven't turned my game off again after I rebooted it (cuz deleted Thundershock), but I like the GBC margins, and I think I won't wanna go back to this mode again if I go full screen. It's the same size I tolerated as a kid, and even better with true back-lighting and NOT a fuckin' Worm Light, lol, but yeah, still feels tiny.
Edited by Plant42, 05 March 2016 - 07:38 PM.
Posted 06 March 2016 - 02:45 PM
0 members, 0 guests, 0 anonymous users
|
OPCFW_CODE
|
Hans-Bernhard Broeker wrote:
> Daniel J Sebald wrote:
>> course, this demo example doesn't have high enough spatial frequency
>> to cause aliasing.)
> That's quite exactly wrong. All pm3d plots, indeed almost exactly
> *every* plot gnuplot is capable of generating, contains ample amounts of
> infinitely high spatial frequency details --- every thing we output has
> perfectly sharp borders, including the coloured areas generated by pm3d.
No, the example I was referring to is a low frequency sinc function that is adequately sampled. When displayed on the screen, "perfectly sharp borders" is something different. That is what a reconstruction filter is for (different than an antialiasing filter), to smooth out the sharp edges after reconstructing a signal. A reconstruction filter is the case where an image "pixel" occupies multiple screen pixels. They really should be smoothed between image pixels before being displayed. If the image/screen is one-to-one pixels, then the smoothing is inherently done by the monitor screen (unless you place your eye real close to the screen to see individual dots).
You described aliasing below:
> Aliasing has nothing to do with monochrome vs. colour. Aliasing
> is the artefact invariably created whenever you point-sample (i.e.
> render to pixels) a signal at a frequency that's lower than the signal's
> actual bandwidth.
which is correct. From GhostView's perspective, whenever the display is of lower resolution than the image contained inside the PostScript file, there is the possibility of aliasing, and antialiasing should be applied. (But applied correctly of course, not with strange lines.) For all I know, "antialiasing" in GhostView could mean both antialiasing (more image pixels than screen pixels) and reconstruction (more screen pixels than image pixels). In fact, that is probably the case. I say that because I have used GhostView's zoom function to greatly expand a few pixels in a subwindow. There is no need for GhostView to apply antialiasing to such an image, yet the lines still appear.
If I bumped up the frequency of that sinc function, I'm sure aliasing would begin to happen. Try this:
1) Run Petr's pm3d.dem demo until the grayscale example that says "gray map".
2) Break out of the demo and change the range as
set xrange [-1500:1500]
set yrange [-1500:1500]
And you will see strange patterns on the image that shouldn't be there because by choosing such a large range the frequency of the sinc within the plotted area is much too great to be displayed.
> More to the point: our x11.trm *knows* the (assumed) resolution of the
> X11 driver, and it does the reduction to integer coordinates itself.
> I.e. the entire rendering process is controlled by gnuplot alone.
> post.trm, OTOH, cannot even make an educated guess what the actual
> output device resolution --- so it just assumes it's 720 DPI.
Right, but the X11 windows can be scaled to have lower resolution. Hence there is a routine in gnuplot_x11 that does that conversion.
|
OPCFW_CODE
|
package com.jonnymatts.jzonbie.priming;
import com.jonnymatts.jzonbie.defaults.DefaultResponsePriming;
import com.jonnymatts.jzonbie.defaults.Priming;
import com.jonnymatts.jzonbie.defaults.StandardPriming;
import com.jonnymatts.jzonbie.requests.AppRequest;
import com.jonnymatts.jzonbie.responses.AppResponse;
import com.jonnymatts.jzonbie.responses.defaults.DefaultAppResponse;
import com.jonnymatts.jzonbie.responses.defaults.DefaultingQueue;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import static java.util.Collections.emptyList;
import static java.util.Optional.empty;
public class PrimingContext {
private final List<Priming> priming;
private Map<AppRequest, DefaultingQueue> primedMappings;
public PrimingContext(List<Priming> priming) {
this.priming = priming;
this.primedMappings = new ConcurrentHashMap<>();
addDefaultPriming();
}
public PrimingContext() {
this(emptyList());
}
public List<PrimedMapping> getCurrentPriming() {
return primedMappings.entrySet().stream()
.map(e -> new PrimedMapping(e.getKey(), e.getValue()))
.collect(Collectors.toList());
}
public PrimingContext add(ZombiePriming zombiePriming) {
return add(zombiePriming.getRequest(), zombiePriming.getResponse());
}
public PrimingContext add(AppRequest appRequest, AppResponse appResponse) {
getAppResponseQueueForAdd(appRequest).add(appResponse);
return this;
}
public PrimingContext addDefault(AppRequest appRequest, DefaultAppResponse defaultAppResponse) {
getAppResponseQueueForAdd(appRequest).setDefault(defaultAppResponse);
return this;
}
public Optional<AppRequest> getPrimedRequest(AppRequest appRequest) {
return primedMappings.entrySet().parallelStream()
.filter(priming -> priming.getKey().matches(appRequest))
.map(Map.Entry::getKey)
.findFirst();
}
public Optional<AppResponse> getResponse(AppRequest appRequest) {
final DefaultingQueue primedResponsesQueue = primedMappings.get(appRequest);
if (primedResponsesQueue == null) {
return empty();
}
final AppResponse appResponse = primedResponsesQueue.poll();
if (primedResponsesQueue.hasSize() == 0 && !primedResponsesQueue.getDefault().isPresent()) {
primedMappings.remove(appRequest);
}
return Optional.ofNullable(appResponse);
}
public void reset() {
primedMappings.clear();
addDefaultPriming();
}
    private void addDefaultPriming() {
        // Apply the configured default priming: standard primings queue a single
        // response, default-response primings set the queue's fallback response.
        for (Priming defaultPriming : priming) {
            if (defaultPriming instanceof StandardPriming) {
                final StandardPriming standardPriming = (StandardPriming) defaultPriming;
                add(standardPriming.getRequest(), standardPriming.getResponse());
            } else {
                final DefaultResponsePriming defaultResponsePriming = (DefaultResponsePriming) defaultPriming;
                addDefault(defaultResponsePriming.getRequest(), defaultResponsePriming.getResponse());
            }
        }
    }
    private DefaultingQueue getAppResponseQueueForAdd(AppRequest appRequest) {
        // Create the queue for this request on first use, otherwise reuse the existing one
        return primedMappings.computeIfAbsent(appRequest, request -> new DefaultingQueue());
    }
}
|
STACK_EDU
|
This topic is one that fouls up engineers quite often as it can seem more complicated than it needs to be. However, once you spend some time learning the syntax, you'll see that it's actually quite flexible AND powerful. The language is based upon SED and RegExp and may seem familiar to those of you with any Unix scripting or programming experience.
Go to this page if you are looking for examples of voice translation rules.
Let's start by reviewing the various symbols that you'll find used within voice translation rules.
Pattern Matches With Wildcards
- Any single digit (0 to 9, *, #)
- Any specific character
- Any range or sequence of characters
- Modifier—match none or more occurrences
- Modifier—match one or more occurrences
- Modifier—match none or one occurrences
- Any digit followed by none or more occurrences. This is effectively anything, including null.
- Any digit followed by one or more occurrences. This is effectively anything, except null.
- No digits, null
- In the match pattern, indicates where to slice up the number.
- In the replacement pattern, indicates where to copy the sets to keep.
- Indicates which sets in the matched number to keep.
- Keep expression "a".
- Ignore expression "b".
- Copy the first set into the replacement number.
Voice Translation Rule Characters
- ^ : Match the expression at the start of a line.
- $ : Match the expression at the end of a line.
- / : Delimiter that marks the start and end of both the matching and replacement strings.
- \ : Escape the special meaning of the next character.
- - : Indicates a range when not in the first/last position. Used with the '[' and ']'.
- [list] : Match a single character in a list.
- [^list] : Do not match a single character specified in the list.
- . : Match any single character.
- * : Repeat the previous regular expression zero or more times.
- + : Repeat the previous regular expression one or more times.
- ? : Repeat the previous regular expression zero or one time (use CTRL-V in order to enter it in IOS).
- ( ) : Groups regular expressions.
Typical Voice Translation Rule Usage
Match and Replace:
rule precedence /match-pattern/ /replace-pattern/
rule precedence reject /match-pattern/ [type match-type [plan match-type]]
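As a quick illustration of that syntax (an invented example that strips a leading 9 from the called number and applies it to an outbound POTS dial peer):

voice translation-rule 1
 rule 1 /^9\(.*\)/ /\1/
!
voice translation-profile STRIP9
 translate called 1
!
dial-peer voice 100 pots
 translation-profile outgoing STRIP9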
Voice translation profiles (which reference voice translation rules) are referenced by the following:
- Trunk Group - Two different translation profiles can be defined in a trunk group in order to perform number translation for incoming and outgoing POTS calls. If an outgoing translation profile is defined in a trunk group, the number translation is done while the outgoing call is setup.
- Source IP Group - A translation profile can be defined in a source IP group in order to perform number translation for incoming VoIP calls.
- Dial Peer - Two different translation profiles can be defined in a dial peer in order to perform number translation for incoming and outgoing calls.
- Voice Port - The translation profile can be defined in a voice port in order to perform number translation for incoming and outgoing POTS calls. If a voice port is also a trunk group member, then the incoming translation profile of a voice port overrides the translation profile of a trunk group.
- Non-Facility Associated Signaling (NFAS) Interface - The translation profile can be defined for an NFAS interface through the translation-profile command line from the global voice service pots configuration in order to perform the number translation for incoming and outgoing NFAS calls. This translation profile has a higher precedence than the translation profile of a voice port and trunk group in case a channel also belongs to a voice port and/or trunk group with the translation profile defined.
- VoIP Incoming - The translation profile can be defined globally for all incoming VoIP (h323/sip) calls in order to perform number translation. If an incoming H.323/SIP call is associated with a Source IP Group with a translation profile defined, then the translation profile of the Source IP Group overrides the global translation profile for incoming VoIP calls.
Please find more details here and here.
Configuring Voice Translation Rules in IOS Gateways, Nov 2008, Bob Liggett (requires CCO login)
|
OPCFW_CODE
|
I listen to music a lot at work and would prefer not to eat up bandwidth for videos I am not watching. Also, the app has no option to keep the screen on, meaning I must unlock my phone to thumb a song down while driving. And perhaps the most minor complaint: there is no way to thumb down a song that is not currently playing; you have to stop the current track, start playing the other one, and then thumb it down. Also, why is there no access to comments?
wikiHow Contributor: If you are using an Android tablet, you can use the website downloader instructions above to download videos onto your device.
YouTube Music allows you to view and listen to a virtually endless catalog in an app made for music discovery.
We update our app regularly in order to make your YouTube Music experience better. We polished a handful of things, fixed bugs, and made some performance improvements.
Click a download quality. Your download will start automatically. If you need both audio and video in your video file, make sure you do not click one of the options that has an "x" beside the speaker icon.
Play the file. Double-click the MP3 file to open it in your default media player. The name will be the same as the video's title.
Run the command. Press ↵ Enter/⏎ Return to run the command. youtube-dl will automatically start downloading the video, and afterwards FFmpeg will extract the audio and convert it to MP3 format.
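The command in question is typically along these lines (the URL is a placeholder, and youtube-dl needs FFmpeg on the PATH to do the MP3 conversion):

youtube-dl -x --audio-format mp3 "https://www.youtube.com/watch?v=VIDEO_ID"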
To download music from YouTube you should open the application first. Then search for the YouTube song you would like to convert and just click the "Convert to mp3" button. After that the song will be converted and downloaded to your computer, where you can listen to it right away.
What should I do if I get an error message when I try to open a video that I downloaded from YouTube?
"I tried a YouTube downloader before and could not get it to work. Until today I only ever used VLC Player to watch videos, but since I was familiar with it, I decided to try that method, and you made it very easy. Thank you so much."
wikiHow Contributor: Assuming you are talking about an iOS Camera Roll, it's impossible to do without converting the file.
|
OPCFW_CODE
|
var mem = {
// (A) PROPERTIES
// (A1) HTML ELEMENTS
hWrap : null, // HTML game wrapper
hCards : null, // HTML cards
// (A2) GAME SETTINGS
sets : 6, // Total number of cards to match
hint : 1000, // How long to show mismatched cards
url : "", // Optional, URL to images
// (A3) FLAGS
loaded : 0, // Total number of assets loaded
moves : 0, // Total number of moves
last : null, // Last opened card
grid : null, // Current play grid
matches : null, // Current matched cards
locked : null, // 2 cards chosen did not match
// (B) PRELOAD
preload : function () {
// (B1) GET HTML GAME WRAPPER
mem.hWrap = document.getElementById("mem-game");
// (B2) PRELOAD IMAGES
for (let i=0; i<=mem.sets; i++) {
let img = document.createElement("img");
img.onload = function(){
mem.loaded++;
if (mem.loaded == mem.sets+1) { mem.init(); }
};
img.src = mem.url+"smiley-"+i+".png";
}
},
// (C) INIT GAME
init : function () {
// (C1) RESET
if (mem.locked != null) {
clearTimeout(mem.locked);
mem.locked = null;
}
mem.hCards = [];
mem.grid = [];
    mem.matches = [];
mem.moves = 0;
mem.last = null;
mem.locked = null;
mem.hWrap.innerHTML = "";
// (C2) RANDOM RESHUFFLE CARDS
// Credits : https://gomakethings.com/how-to-shuffle-an-array-with-vanilla-js/
let current = mem.sets * 2, temp, random;
for (var i=1; i<=mem.sets; i++) {
mem.grid.push(i);
mem.grid.push(i);
}
while (0 !== current) {
random = Math.floor(Math.random() * current);
current -= 1;
temp = mem.grid[current];
mem.grid[current] = mem.grid[random];
mem.grid[random] = temp;
}
// (C3) CREATE HTML CARDS
for (let i=0; i<mem.sets * 2; i++) {
let card = document.createElement("div");
card.className = "mem-card";
card.innerHTML = `<img src='${mem.url}smiley-0.png'/>`;
card.dataset.idx = i;
card.onclick = mem.open;
mem.hWrap.appendChild(card);
mem.hCards.push(card);
}
},
// (D) OPEN CARD
open : function () { if (mem.locked == null) {
// (D1) OPEN SELECTED CARD
mem.moves++;
let idx = this.dataset.idx;
this.innerHTML = `<img src='${mem.url}smiley-${mem.grid[idx]}.png'/>`;
this.onclick = "";
this.classList.add("open");
// (D2) NO PREVIOUS GUESS - JUST RECORD AS OPENED
if (mem.last == null) { mem.last = idx; }
else {
// (D3) MATCHED AGAINST PREVIOUS GUESS
if (mem.grid[idx] == mem.grid[mem.last]) {
mem.matches.push(mem.last);
mem.matches.push(idx);
mem.hCards[mem.last].classList.remove("open");
mem.hCards[idx].classList.remove("open");
mem.hCards[mem.last].classList.add("right");
mem.hCards[idx].classList.add("right");
mem.last = null;
if (mem.matches.length == mem.sets * 2) {
alert("YOU WIN! TOTAL MOVES " + mem.moves);
mem.init();
}
}
// (D4) NOT MATCHED - CLOSE BOTH CARDS ONLY AFTER A WHILE
else {
mem.hCards[mem.last].classList.add("wrong");
mem.hCards[idx].classList.add("wrong");
mem.locked = setTimeout(function(){
mem.close(idx, mem.last);
}, mem.hint);
}
}
}},
// (E) CLOSE PREVIOUSLY MIS-MATCHED CARDS
close : function (aa, bb) {
aa = mem.hCards[aa];
bb = mem.hCards[bb];
aa.classList.remove("wrong");
bb.classList.remove("wrong");
aa.classList.remove("open");
bb.classList.remove("open");
aa.innerHTML = `<img src='${mem.url}smiley-0.png'/>`;
bb.innerHTML = `<img src='${mem.url}smiley-0.png'/>`;
aa.onclick = mem.open;
bb.onclick = mem.open;
mem.locked = null;
mem.last = null;
}
};
// (F) INIT GAME
window.addEventListener("DOMContentLoaded", mem.preload);
|
STACK_EDU
|
/*jslint node:true, unparam:true, nomen:true, regexp:true*/
'use strict';
var xlsx = require('node-xlsx'),
DataObjectParser = require('dataobject-parser');
module.exports = function (filePath) {
var startRow = null,
colId = 0,
obj = xlsx.parse(filePath),
line = 0,
json = [];
obj.forEach(function (worksheet) {
var i = 0, col,
colTrans = [],
data = worksheet.data,
sheetSplit = worksheet.name.split('.'),
addition;
addition = json;
sheetSplit.forEach(function (sheetComponent) {
if(!addition.hasOwnProperty(sheetComponent)) {
addition[sheetComponent] = {};
}
addition = addition[sheetComponent];
});
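        // Locate the row whose first cell is '{build-doc}': its remaining cells hold the
        // column headers, and the actual data begins on the row after it.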
for (i = 0; i < data.length; i += 1) {
if (data[i][0] === '{build-doc}') {
startRow = i + 1;
break;
}
}
if (startRow === null) {
throw new Error('Unable to find start of build document!');
}
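        // Read the column headers from the marker row, stopping at the first empty cell.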
for (i = 1; i < data[startRow - 1].length; i += 1) {
col = data[startRow - 1][i];
if (!col) { break; }
col = col + '';
col = col.trim();
colTrans.push({
name: col,
column: i
});
}
colTrans.forEach(function (column) {
var columnJson = new DataObjectParser();
data.forEach(function (row, i) {
if (i >= startRow && row[colId]) {
columnJson.set(row[colId], new Object(row[column.column]));
}
});
addition[column.name] = columnJson.data();
});
});
return json;
};
|
STACK_EDU
|
To prevent these problems, Microsoft Internet Information Services (IIS) operators should contact VeriSign to update the intermediate certification authority certificates for servers that use 128-bit SSL to connect to Web sites with the Secure Hypertext Transfer Protocol.
Impact

Clients cannot establish SSL-protected connections to Web servers that do not have updated certificates.
Recommendation

Install the updated version of the VeriSign intermediate certificate.
- Microsoft Internet Information Server
- Microsoft Internet Security and Acceleration Server
- Microsoft Exchange
- Microsoft SQL Server
Technical description

VeriSign maintains many certificates and certificate revocation lists (CRLs) that are expiring or that have expired. This is not uncommon. Typically, certificates and CRLs are short-lived by design. However, certificates are sometimes re-issued to give them a longer life span. This is generally not a problem, but it can create issues with servers that use secure socket layer (SSL) to help protect sessions that connect to their resources.
If a server operator installs an SSL certificate from VeriSign, together with the relevant issuing certification authority certificates, and then the server operator later renews the SSL certificate through VeriSign, the server operator must make sure that the intermediate issuing certificates are updated at the same time.
If you want to install the updated certificates, visit the following VeriSign Web site for the latest versions of these certificates and for the steps to install them:
Additional information

The validation of an X.509 certificate involves several phases. These phases include path discovery and path validation.
Path discovery is the process of determining if a certificate was issued by a valid entity. You can use many techniques to do this, including the following:
- Clients frequently maintain a cache of intermediate certificates. An intermediate certificate is a certificate that has proven useful in determining if a certificate was ultimately issued by a valid root certification authority.
- Certificates may contain extensions that provide pointers to additional relevant information. One example of this type of extension is the Authoritative Information Access (AIA) extension. The AIA extension may contain a pointer to the certificate’s issuer.
Note Not all certificates contain this pointer, including the VeriSign certificates that are involved in this issue. Microsoft has been and will continue to be working actively with certificate issuers to encourage them to include this information in certificates that they issue in the future. For more information about this extension, see the Internet Engineering Task Force (IETF) Request for Comments (RFC) 3280.
- Servers can provide the additional information to the client. SSL is one example of this technique. In the SSL negotiation, the server provides the client with its own certificate and the certificates that the server has determined that the client can use to determine the server’s identity.
Path validation then answers questions such as the following:
- Does the issuer believe that the certificate in question is still valid and is still under the control of the person it was originally issued to? This behavior is frequently referred to as “certificate revocation checking.” Windows supports a cryptographic object, a certificate revocation list (CRL), to perform this verification.
- Is the certificate being used for a purpose that the issuer intended it to be used for? For example, a certificate that was issued for e-mail should not be trusted to assert that a Web server is associated with a specific domain name (as is done in SSL).
- Are the certificates time-valid? Certificate life spans are constrained for security reasons. An issuer cannot certify that an individual or a resource has a particular identity for longer that the issuer is considered to be trusted.
Frequently asked questions

Is this a security vulnerability?
No. This is not a security vulnerability in any one of the affected products. The problem results only because of the expiration of a third party’s digital certificate.
What’s the scope of the problem?
Recently, VeriSign, Inc., a major certification authority, renewed their “VeriSign International Server CA - Class 3” certification authority with certificates that have a longer validity period. If Web server operators renewed their SSL certificates after this renewal, their customers may experience problems when they try to validate that their Web servers are actually associated with their organizations.
How is the issue resolved?
You can resolve this issue by manually updating the intermediate certification authority (CA) certificate on each Web server. To obtain this certificate, visit the following VeriSign Web site:

If this is a server issue, why do clients experience the problem?
The problem occurs when a client tries to establish a security-enhanced connection to a Web server. As a part of the process of establishing the connection, the server passes many certificates back to the client. The client uses these certificates to validate the server's certificate. In this case, one of the intermediate certificate authorities (the “VeriSign International Server CA - Class 3” CA) has expired. This intermediate certificate is not valid. Therefore, the browser displays a warning message to the user that explains that a security-enhanced connection could not be established.
Are Microsoft certificates involved?
No. These certificates are issued and are owned by VeriSign, Inc. VeriSign participates in a program that is maintained by Microsoft. In this program, third-party trust providers can help secure Internet commerce for Microsoft customers. For more information about this program, visit the following Microsoft Web site:

What certificate authorities participate in the Microsoft Root Program?
For a list of the current trusted third parties that have qualified for the Microsoft Root Program, visit the following Microsoft Web site:

Does Microsoft still update the certificates that Microsoft Internet Explorer uses?
Yes. As a part of the Microsoft Root Program, the list of trusted root authorities can be updated quarterly. For users of Microsoft Windows XP and Microsoft Windows Server 2003, this update occurs in the chain validation engine when it is presented with a certificate that it does not trust. When this behavior occurs, Windows Update is contacted to verify whether the certificate has been added to the Root Program. On pre-Windows XP clients, a recommended package is published to Windows Update for manual download. Microsoft recommends that enterprises make their own decisions about which trusted third parties they want users in their enterprises to trust.
Note Updates that the Microsoft Root Program provides will not address the issues that VeriSign Intermediate Certificate Expiration raises.
Support

For a complete list of Microsoft Product Support Services phone numbers and information about support costs, visit the following Microsoft Web site:

Note: In special cases, charges that are ordinarily incurred for support calls may be canceled if a Microsoft Support Professional determines that a specific update will resolve your problem. The usual support costs will apply to additional support questions and issues that do not qualify for the specific update in question.
Security resources

For more information about security in Microsoft products, visit the following Microsoft TechNet Web site:
The information provided in the Microsoft Knowledge Base is provided "as is" without warranty of any kind. Microsoft disclaims all warranties, either express or implied, including the warranties of merchantability and fitness for a particular purpose. In no event shall Microsoft Corporation or its suppliers be liable for any damages whatsoever including direct, indirect, incidental, consequential, loss of business profits or special damages, even if Microsoft Corporation or its suppliers have been advised of the possibility of such damages. Some states do not allow the exclusion or limitation of liability for consequential or incidental damages so the foregoing limitation may not apply.
Article ID: 834438 - Last Review: Mar 23, 2009 - Revision: 1
|
OPCFW_CODE
|
Microsoft created confusion when it sent an email to developers announcing that by April 1, 2014, it plans to "retire" XNA and DirectX from its "MVP Award Program" as a Technical Expertise. Leaked on Thursday, the email contained wording that indicated that both XNA and DirectX would be retired, causing a wave of panic throughout the PC gaming community.
Microsoft's MVP Award Program essentially rewards "exceptional, independent community leaders who share their passion, technical expertise, and real-world knowledge of Microsoft products with others." The email said that both platforms will no longer be part of the program.
However the leaked email also said that the cross-platform XNA Game Studio development platform is not in active development, and that DirectX is no longer evolving as a technology. That led to an impression that both platforms would eventually be discontinued, and that Microsoft was gearing up to launch a unified replacement.
XNA and DirectX developer lead Promit Roy (Chief Technology Officer, Action = Reaction Labs) followed up with a blog stating that the email was poorly worded, especially in regards to the DirectX aspect. But the blog also pointed out that "DirectX outside of Direct3D is completely dead," and that "Direct3D has been absorbed into Windows core." Thus Direct3D is no more a "technology" than GDI or Winsock.
"XNA Game Studio is finished. That situation has been obvious for years now, so it also should not really come as a surprise either," Roy confirmed. "It is clear at this juncture that there was no future and the tech was being phased out. Direct3D 10 was launched in late 2006, a bit over six years ago, yet XNA was apparently never going to be brought along with the major improvements in DWM and Direct3D."
XNA Game Studio has been used to code games released across Xbox Live, Windows Phone and other Windows-based devices. It was a breeding ground for independent developers, producing titles such as Supergiant Games' Bastion and Polytron's Fez. Other titles include Funcom's Bloodline Champions, Magicka from Paradox Interactive, Rocket Riot from THQ, numerous titles from Microsoft Studios and loads more.
As for the whole DirectX aspect, ZDNet's Mary Jo Foley reached out to Microsoft to get an official statement. "I can confirm that the original communication sent to MVPs yesterday was inaccurate. Microsoft has issued a follow-up communication to the DirectX/XNA MVPs reaffirming that DirectX is very much an important and evolving technology for Microsoft," the rep said.
"Microsoft is actively investing in DirectX as the unified graphics foundation for all of our platforms, including Windows, Xbox 360, and Windows Phone. DirectX is evolving and will continue to evolve. We have absolutely no intention of stopping innovation with DirectX," the Microsoft rep added.
The wording contained in the leaked email was a mistake "pure and simple," the rep said.
Roy updated his blog with vents about Microsoft's communication skills, pointing out that XNA doesn’t support DirectX 10+ or Windows 8, but it’s still a "supported product". Because MVPs like Roy are serving as community representatives – as guides for everyone interested in the tech – Microsoft needs to communicate clearly with those developers.
|
OPCFW_CODE
|
import json
import time
from _thread import allocate_lock
from speedysvc.logger.std_logging.MemoryCachedLog import MemoryCachedLog
from speedysvc.logger.std_logging.log_entry_types import dict_to_log_entry, INFO
class FIFOJSONLog(MemoryCachedLog):
def __init__(self, path, max_cache=50000, parent_logger=None): # 50kb
"""
A disk-backed, in-memory-cached JSON log, delimited by
newlines before each entry so as to be able to figure
out the last readable entry, and remove any partially
overwritten ones in the cache.
        Each entry is a single JSON object on its own line, for example:
        \n{'t': <time>, 'level': <level>, 'pid': <pid>, 'port': <port>, 'svc': <svc>, 'msg': <msg>}
"""
self.lock = allocate_lock()
self.parent_logger = parent_logger
MemoryCachedLog.__init__(self, path, max_cache=max_cache)
#====================================================================#
# Add Log Entries #
#====================================================================#
def write_to_log(self, t, pid, port, svc, msg, level=INFO):
"""
Write a message to the log.
:param t: the unix timestamp from the epoch as returned by time.time()
:param pid: the process ID from which the event occurred
:param port: the port of the service
:param svc: the name of the service
:param msg: the log message
:param level: a log level, e.g. ERROR, or INFO
"""
with self.lock:
self._write_line(json.dumps({
't': int(t),
'level': level,
'pid': pid,
'port': port,
'svc': svc,
'msg': msg
}).encode('utf-8'))
if self.parent_logger:
# Forward onto the parent (probably global)
# FIFOJSONLog if one's been specified
self.parent_logger.write_to_log(
t, pid, port, svc, msg, level
)
#====================================================================#
# Get Log Entries #
#====================================================================#
def iter_from_disk(self, use_lock=True):
"""
Iterate through all log items from disk -
not just the ones in-memory, or from this session
"""
if use_lock:
with self.lock:
for line in self._iter_from_disk():
yield json.loads(line.decode('utf-8'))
else:
for line in self._iter_from_disk():
yield json.loads(line.decode('utf-8'))
def iter_from_cache(self, offset=None, use_lock=True):
"""
Iterate through cache log items - yield the JSON log dicts
Better to use this in most cases, as is much faster
"""
if use_lock:
with self.lock:
for x, line in enumerate(self._iter_from_cache(offset)):
yield json.loads(line.decode('utf-8'))
else:
for x, line in enumerate(self._iter_from_cache(offset)):
yield json.loads(line.decode('utf-8'))
def get_text_log(self, include_service=True, include_date=True, include_time=True,
offset=None):
"""
Get coloured console-formatted log messages.
:param include_service: whether to include the service's name/port
:param include_date: whether to include the date of the message
:param include_time: whether to include the time of the message
:param offset: the offset for getting log entries only after this "spindle"
point, to prevent having to download the whole lot every time
:return: a tuple of (current offset, coloured console-formatted entries,
compatible with only Unix terminals)
"""
with self.lock:
L = []
for D in self.iter_from_cache(offset, use_lock=False):
log_entry = dict_to_log_entry(D)
L.append(log_entry.to_text(
include_service, include_date, include_time
))
return self.get_fifo_spindle(), L
def get_coloured_console_log(self, include_service=True, include_date=True, include_time=True,
offset=None):
"""
Get coloured console-formatted log messages.
:param include_service: whether to include the service's name/port
:param include_date: whether to include the date of the message
:param include_time: whether to include the time of the message
:param offset: the offset for getting log entries only after this "spindle"
point, to prevent having to download the whole lot every time
:return: a tuple of (current offset, coloured console-formatted entries,
compatible with only Unix terminals)
"""
with self.lock:
L = []
for D in self.iter_from_cache(offset, use_lock=False):
log_entry = dict_to_log_entry(D)
L.append(log_entry.to_coloured_console(
include_service, include_date, include_time
))
return self.get_fifo_spindle(), L
def get_html_log(self, include_service=True, include_date=True, include_time=True,
offset=None):
"""
Get coloured html-formatted log messages.
:param include_service: whether to include the service's name/port
:param include_date: whether to include the date of the message
:param include_time: whether to include the time of the message
:param offset: the offset for getting log entries only after this "spindle"
point, to prevent having to download the whole lot every time
:return: a tuple of (current offset, coloured html-formatted entries)
"""
with self.lock:
L = []
for D in self.iter_from_cache(offset, use_lock=False):
log_entry = dict_to_log_entry(D)
L.append(log_entry.to_html(
include_service, include_date, include_time
))
return self.get_fifo_spindle(), L
if __name__ == '__main__':
log = FIFOJSONLog('/tmp/test_fifo_json_log.json')
while True:
log.write_to_log(5454, 555, 55, 'mine', 'message'*5000)
print(log.get_coloured_console_log())
print(log.get_html_log())
print(log.get_text_log())
print(log.get_coloured_console_log())
print(log.get_html_log())
print(log.get_text_log())
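A brief sketch of the incremental "spindle"/offset polling pattern these getters enable (reusing the log instance from the demo above; the sleep interval is an arbitrary choice):

offset = None
while True:
    # Only entries appended after the previously returned spindle
    # point are formatted and returned on subsequent calls
    offset, entries = log.get_text_log(offset=offset)
    for entry in entries:
        print(entry)
    time.sleep(1)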
|
STACK_EDU
|
Allows seamless interconnection with multi-source heterogeneous data to support massive distributed data storage.
The system supports mainstream distributed computing frameworks, such as Hadoop, achieves million-level throughput, and supports horizontal scaling.
Multiple self-developed visualized algorithm tools are built in, along with a convenient feature tracking tool for solution debugging.
Stateless internal micro-services and a built-in network load balancer create a system with high availability and horizontal scaling.
Isolation of tenant and user rights with shared workspace, comprehensive installation, deployment, operation and maintenance tools.
Unified Access for Multi-Source Heterogeneous Data
Provides unified access engine for structured data and unstructured data, such as images and texts, in order to establish seamless connection with mainstream data warehouses and relational databases; supports composite data access in AI application scenarios, including tagging image datasets, samples and models.
Time Series Data Grouping Management
Through data sharding storage technology, data sharding can be performed at a certain time field, and time slicing can be used as a basis for obtaining data, thereby achieving a quick positioning of data shards required by the model in the machine learning scenario.
Unified Governance of the Whole Domain Data
4Paradigm Sage Data Platform provides a unified governance framework and meta-information management for heterogeneous data. The isomorphic data is integrated through the data group, isolated through the data domain, and could support sub-businesses and sub-scenario data management and comprehensively improve the data governance level in large-scale AI application process in horizontal and vertical dimensions.
Data Lifecycle Positioning and Tracking
The unified locator (prn) builds a global identity system for enterprise data, enabling quick locating and tracking of the changes in the lifecycle of the data.
Model Group and Model Version
Solves the management problems caused by having multiple versions of one model in the self-learning process through the model group. Model version management allows positioning different model snapshots of the same scenario.
Model Production Information Summary
Display of the relevant assessment reports during the model production is helpful for an accurate and comprehensive assessment of the model.
Visualized and Interpretable Model
The visualized and interpretable function of the visual and easy-to-understand model can not only help modeling engineers to analyze and optimize the model, but also provide an important channel for business personnel to understand the working principle of the model, creating a more transparent and controllable application of the model.
Compatible with Multiple Model Formats
Supports the import/export of open source (software) and third-party models, version management and online deployment, etc., in order to allow enterprises to achieve a one-stop model asset management, accumulate business value and improve management efficiency.
Enterprise-class Data Permission System
The tenant data is isolated from the user-level data to protect the individual data space of the enterprise from theft and contamination; unit data is shared in the workspace so as to achieve cross-department data multiplexing and knowledge transfer.
Visualized Task Management Panel
Supports retrieval of tasks by scenario, label, name, etc.; a centralized identification system, with a unified positioning of tasks and data.
Distributed Data Task Scheduling System
A distributed task management engine to uniformly schedule data jobs; a one-stop management of offline batch processing and real-time stream processing.
|
OPCFW_CODE
|
- GM Fox Fam!
After spending a few months elbow deep in the governance tracker, I noticed we have an opportunity to optimize this process. After some initial conversation during an AMA Lunch call, I am bringing this idea to the community for feedback. Once I have heard more from the community, I will place your ideas into the official SCP format and move to Ideation. I look forward to a lively discussion!
This proposal is to update our current governance process for clarity and standardization.
The original recommended first step was optional to allow more flexibility in the conversation as well as move governance forward quickly. Recently, the community has spoken and would like more time with the proposal.
Current Governance process tl;dr:
Recommended: Post idea on the forum and get feedback from the community to gauge sentiment. Use this feedback to refine your proposal.
After 5 days in the Proposal Discussion category, if the feedback is overall positive and confidence that the proposal will pass is high, make your proposal to the DAO.
The current process can be found in its entirety here: https://forum.shapeshift.com/t/fox-governance-process/55
Proposed governance process tl;dr:
(All steps will be required)
Incubation: Post idea to obtain feedback from the community. This feedback should be used to refine your proposal.
- Ideation: Format proposal into appropriate template (SCP or Workstream) and share final draft to gauge community sentiment.
- Voting: Voting will take place via Snapshot & Boardroom.
- Details of proposed governance process:
- Posted under “Proposal discussion” category
- No specific format required
- Intent is to engage the community
- Timeline: (See poll below)
- No other requirement to move to Ideation
- Ideation proposal is posted to Boardroom (Note: this will take place on the forum until Ideation functionality on Boardroom has returned)
- The appropriate template (SCP or Workstream) should be used
- SCP number is included in title (SCP # can be obtained by reaching out to Miss, Tyler or Neverwas in the Governance channel)
- Posted under appropriate category (listed below)
- Proposal must be under ______ characters (Waiting to hear back from Boardroom on their limitation)
- Voting is 1 vote per fox token (while ideation is still in the forum, any Forum user can vote)
- Include link to the Incubation post
- Must include for/against poll
- Timeline: 48 hours? 5 days? (Poll below)
- Proposal may move forward if the overall vote is positive after timeline specified above
- Voting is available on both Snapshot and Boardroom (Boardroom is the interface to Snapshot)
- Voting is 1 vote per fox token
- SCP # is included in the title
- Include links to the Incubation and Ideation posts
- Timeline: 72 hours
- Quorum: A minimum of 4,000,000 FOX must participate in the vote for it to be considered ratified. (Soft quorum will still be in effect. For votes that do not reach quorum, assume that 70% of the votes necessary to achieve quorum would be against the proposal. If this would still result in a majority of votes being in favor of the proposal, the proposal can be considered passed.)
- Other items of note:
Any proposals not following the governance process will be asked to be removed.
- All meaningful governance discussion should take place on this forum to ensure the community has full transparency.
- Transitioning responsibilities and ownership of all ShapeShift operations to the DAO.
- Developing mutually-beneficial partnerships and affiliate revenue opportunities with aligned DAOs and products.
Product (Features and Product):
- Planning the optimal feature roadmap for achieving ShapeShift’s vision as the open-source interface to the decentralized universe.
- Planning and executing the open-sourcing of all ShapeShift code and infrastructure. Developing new features, fixing bugs, and optimizing performance of ShapeShift’s web and mobile applications.
- Evolving and enhancing FOX token utility and value accrual.
Marketing & Growth:
- Executing campaigns focused on growing the ShapeShift ecosystem.
- Supporting the community and maintaining the integrity of all of ShapeShift's various platforms, such as Discord, the Forum, Boardroom and Notion.
- Managing and optimizing ShapeShift’s operational processes.
- Propose new workstreams to be added.
- Delighting and educating customers as they navigate and experience The ShapeShift DAO.
Information & Globalization:
- Strategizing, implementing, and optimizing growth campaigns/initiatives for the ShapeShift DAO.
How long should each post remain in Incubation before moving to Ideation?
How long should each post remain in Ideation before moving to Voting?
|
OPCFW_CODE
|
The type 'T' is not compatible with the type 'T'
I have two libraries written in C# that I'd like to use in an F# application. Both libraries use the same type, however, I am unable to convince F#'s type checker of this fact.
Here's a simple example using the Office interop types. F# seems particularly sensitive to these type issues. Casting on the F# side does not seem to help the situation. All three projects have a reference to the same version of the "Microsoft.Office.Interop.Excel" assembly.
In project "Project1" (a C# project):
namespace Project1
{
public class Class1
{
public static Microsoft.Office.Interop.Excel.Application GetApp()
{
return new Microsoft.Office.Interop.Excel.Application();
}
}
}
In project "Project2" (a C# project):
namespace Project2
{
public class Class2
{
Microsoft.Office.Interop.Excel.Application _app;
public Class2(Microsoft.Office.Interop.Excel.Application app)
{
_app = app;
}
}
}
In project "TestApp" (an F# project):
[<EntryPoint>]
let main argv =
let c2 = Project2.Class2(Project1.Class1.GetApp())
0
Any hints?
Edit:
Changing the call to Class2's constructor with the following dynamic cast solves the problem:
let c2 = Project2.Class2(Project1.Class1.GetApp() :?> Microsoft.Office.Interop.Excel.Application)
However, this is unsatisfying, since it is 1) dynamic, and 2) I still don't understand why the original type check failed.
I suspect that the answer has to do with the F# compiler not understanding the no-PIA type equivalence rules.
+1 F# and C# work slightly differently with COM interop; see here: http://apollo13cn.blogspot.se/2012/04/trick-in-f-interop.html
This comes down to COM interop. When you reference a COM assembly, the build will actually generate a COM interop assembly to marshal between .NET and COM. If you have two assemblies both referencing the same COM assembly, you may actually end up with two identically generated interop assemblies.
One solution, and frankly better design in general, would be to create interfaces in one of the two assemblies (or a shared 3rd assembly) which expose the features you want to make public and instead of using or consuming COM Types use these interfaces instead.
namespace Project1
{
public interface IApplication
{
// application members here...
}
public class Class1
{
public static IApplication GetApp()
{
return new ExcelApplication(new Microsoft.Office.Interop.Excel.Application());
}
private class ExcelApplication : IApplication
{
public ExcelApplication(Microsoft.Office.Interop.Excel.Application app)
{
this.app = app;
}
// implement IApplication here...
}
}
}
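If, hypothetically, Class2 were also reworked to accept the shared IApplication interface rather than the COM type (a change to Project2 not shown in the original question), the F# call site would then type-check without any cast. A sketch of the resulting usage:

[<EntryPoint>]
let main argv =
    // Both assemblies now agree on the single Project1.IApplication type,
    // so no dynamic downcast (:?>) is required
    let c2 = Project2.Class2(Project1.Class1.GetApp())
    0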
|
STACK_EXCHANGE
|
No results (visual) found, after successful edge calculation
Hello Rui,
Greetings !
Thanks for a very handy tool NATMI that you have developed. I am running it in the anaconda environment in my windows machine. It took me a while (with some difficulty) to install pygraphviz, but after that things were smooth.
I have a dataset of roughly 83,000 cells which have 8 idents; my expression file is roughly 7 GB. I ran NATMI on this data and it took almost 5 hours to get the edges
(first step in analysis using the following codeline
python ExtractEdges.py --species human --emFile C:\abc\abc123\NATMI\xyz_expn_data.txt --annFile C:\abc\abc123\NATMI\xyzabc_metadata.txt --interDB lrc2p --coreNum 4 --out results_natmi).
However, when I run the second codeline
python VisInteractions.py --sourceFolder results_natmi --interDB lrc2p --weightType mean --detectionThreshold 0.2 --drawNetwork y --plotWidth 4 --plotHeight 4 --layout circle --fontSize 15 --edgeWidth 6 --maxClusterSize 0 --clusterDistance 0.6
I do see a few folders popping up, but I fail to see any heatmaps or interaction data except for some Excel sheets; the only visual representation of the data I find is below.
From_A-HSPC_to_A-HSPC_exp.pdf
More importantly, my folder "Network_exp_0_spe_0_det_0.2_top_0_signal_lrc2p_weight_mean" does not have anything inside it.
I am wondering what is going wrong. Please let me know if there is anything/any package that is missing.
Best regards.
Hi,
Sorry for the late reply. I checked the generated figure, it was correctly generated. Could you check the generated excel files? Do they have all zeros? I think the detection threshold is too high, so all edges were filtered.
Best,
Rui
Hi,
Is your OS Windows? Could you try the NATMI docker image (https://hub.docker.com/r/asrhou/natmi) and see whether the pdf still cannot be generated using Docker?
Best,
Rui
Hi,
Yes, the results are in the docker image and removed when the container is ended.
Assume your own input files (not toy.sc.em.txt) are in C:\Users\User1\NATMI\test_Data folder, then you need to replace '-v /home/path/workdir/:/opt/NATMI/workdir' with '-v C:\Users\User1\NATMI\test_Data:/opt/NATMI/workdir' and all your files in C:\Users\User1\NATMI\test_Data folder are in /opt/NATMI/workdir folder in the Docker image. By specifing '--out /opt/NATMI/workdir/test_output', all your NATMI results will be saved in C:\Users\User1\NATMI\test_Data\test_output folder. Also remember to map your expression matrix file (e.g., em.txt) and annotation file (e.g., ann.txt) to '/opt/NATMI/workdir/em.txt' and '/opt/NATMI/workdir/ann.txt'.
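Putting that together, the whole invocation might look roughly like this (a sketch: it assumes the image keeps the NATMI scripts under /opt/NATMI, and uses em.txt/ann.txt as example file names):

docker run --rm -v C:\Users\User1\NATMI\test_Data:/opt/NATMI/workdir asrhou/natmi python ExtractEdges.py --species human --emFile /opt/NATMI/workdir/em.txt --annFile /opt/NATMI/workdir/ann.txt --interDB lrc2p --coreNum 4 --out /opt/NATMI/workdir/test_output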
Let me know if you can get your own NATMI results correctly.
Thanks,
Rui
|
GITHUB_ARCHIVE
|
# Python 2.3 distutils.log backported to Python 2.1.x, 2.2.x

import sys

def _fix_args(args, flag=1):
    if type(args) is type(''):
        return args.replace('%', '%%')
    if flag and type(args) is type(()):
        return tuple([_fix_args(a, flag=0) for a in args])
    return args

if sys.version[:3] >= '2.3':
    from distutils.log import *
    from distutils.log import Log as old_Log
    from distutils.log import _global_log

    class Log(old_Log):
        def _log(self, level, msg, args):
            if level >= self.threshold:
                if args:
                    print _global_color_map[level](msg % _fix_args(args))
                else:
                    print _global_color_map[level](msg)
                sys.stdout.flush()
    _global_log.__class__ = Log
else:
    exec """
# Here follows (slightly) modified copy of Python 2.3 distutils/log.py

DEBUG = 1
INFO = 2
WARN = 3
ERROR = 4
FATAL = 5

class Log:
    def __init__(self, threshold=WARN):
        self.threshold = threshold
    def _log(self, level, msg, args):
        if level >= self.threshold:
            if args:
                print _global_color_map[level](msg % _fix_args(args))
            else:
                print _global_color_map[level](msg)
            sys.stdout.flush()
    def log(self, level, msg, *args):
        self._log(level, msg, args)
    def debug(self, msg, *args):
        self._log(DEBUG, msg, args)
    def info(self, msg, *args):
        self._log(INFO, msg, args)
    def warn(self, msg, *args):
        self._log(WARN, red_text(msg), args)
    def error(self, msg, *args):
        self._log(ERROR, msg, args)
    def fatal(self, msg, *args):
        self._log(FATAL, msg, args)

_global_log = Log()
log = _global_log.log
debug = _global_log.debug
info = _global_log.info
warn = _global_log.warn
error = _global_log.error
fatal = _global_log.fatal

def set_threshold(level):
    _global_log.threshold = level
"""

def set_verbosity(v):
    prev_level = _global_log.threshold
    if v < 0:
        set_threshold(ERROR)
    elif v == 0:
        set_threshold(WARN)
    elif v == 1:
        set_threshold(INFO)
    elif v >= 2:
        set_threshold(DEBUG)
    return {FATAL: -2, ERROR: -1, WARN: 0, INFO: 1, DEBUG: 2}.get(prev_level, 1)

from misc_util import red_text, yellow_text, cyan_text

_global_color_map = {
    DEBUG: cyan_text,
    INFO: yellow_text,
    WARN: red_text,
    ERROR: red_text,
    FATAL: red_text
}

set_verbosity(1)
|
STACK_EDU
|
group category specific bars and provide xtics subcategory in gnuplot
I need to plot data of three categories:
cat1 (with subcategories i1,i2,i3)
cat2 (with subcategories p1,p2,p3)
cat3 (with subcategories n1,n2,n3)
Each category should be grouped and colored differently. In each group, we need to assign different patterns to boxes and corresponding subcategory as xtic label for it need to be provided for distinguishing.
Here is the sample data and the code.
Sample data: sample.dat
cat1 i1 95.2162 0.817947 i2 96.2065 0.710029 i3 98.4846 0.58444
cat2 p1 96.899 0.502756 p2 97.9695 1.16202 p3 99.631 0.0911258
cat3 n1 99.4709 0.318714 n2 99.5897 0.234542 n3 99.9535 0.0507579
Code:
set terminal png
set output 'bar.png'
set style data histograms
set style fill solid 1 border lt -1
set boxwidth 0.9
set style histogram errorbars lw 3
plot 'sample.dat' using 3:4:xtic(2) title "cat1", \
'' using 6:7:xtic(5) title "cat2", \
'' using 9:10:xtic(8) title "cat3"
Please find the graph output above. The required output is to have each group's bars in one single color, with the specific subcategory as the xtic label within each group. But the output here fails, showing 3 colors in every category, and only the last xtic appears for each group. Can you please help me understand where I went wrong?
Thank you.
One possible solution would be to use the with boxes plotting style to generate individual groups and then superimpose the errorbars. The script below sets the boxwidth to 1 and then offsets individual groups by a fixed amount (4 and 8). Since there are only 3 boxes within each group, this provides a sufficient "gap" between groups (as wide as each box).
$DATA <<EOD
cat1 i1 95.2162 0.817947 i2 96.2065 0.710029 i3 98.4846 0.58444
cat2 p1 96.899 0.502756 p2 97.9695 1.16202 p3 99.631 0.0911258
cat3 n1 99.4709 0.318714 n2 99.5897 0.234542 n3 99.9535 0.0507579
EOD
set terminal pngcairo enhanced rounded font ",16"
set output 'fig.png'
set style fill solid 1 border lt -1
set boxwidth 1.0
set linetype 42 lw 2 lc rgb 'black'
set yr [94:102]
set xtics out nomirror
plot \
$DATA using (0 + $0):3:xtic(2) w boxes lc rgb 'red' t 'cat1', \
$DATA using (0 + $0):3:4 w yerrorbars lt 42 t '', \
$DATA using (4 + $0):6:xtic(5) w boxes lc rgb 'green' t 'cat2', \
$DATA using (4 + $0):6:7 w yerrorbars lt 42 t '', \
$DATA using (8 + $0):9:xtic(8) w boxes lc rgb 'blue' t 'cat3', \
$DATA using (8 + $0):9:10 w yerrorbars lt 42 t ''
This gives:
EDIT:
From your comment, it seems that I misunderstood your question. In order to group the boxes "per each row", you could do for example:
$DATA <<EOD
cat1 i1 95.2162 0.817947 i2 96.2065 0.710029 i3 98.4846 0.58444
cat2 p1 96.899 0.502756 p2 97.9695 1.16202 p3 99.631 0.0911258
cat3 n1 99.4709 0.318714 n2 99.5897 0.234542 n3 99.9535 0.0507579
EOD
set terminal pngcairo enhanced rounded font ",16"
set output 'fig.png'
set style fill solid 1 border lt -1
set boxwidth 1.0
set linetype 42 lw 2 lc rgb 'black'
set yr [94:102]
set xtics out nomirror
set lt 1 lc rgb 'red'
set lt 2 lc rgb 'blue'
set lt 3 lc rgb 'green'
plot \
$DATA using (4*$0):3:($0+1):xtic(2) w boxes lc variable t 'cat1', \
$DATA using (4*$0):3:4 w yerrorbars lt 42 t '', \
$DATA using (4*$0 + 1):6:($0+1):xtic(5) w boxes lc variable t '', \
$DATA using (4*$0 + 1):(1/0):($0+2) w boxes lc variable t 'cat2', \
$DATA using (4*$0 + 1):6:7 w yerrorbars lt 42 t '', \
$DATA using (4*$0 + 2):9:($0+1):xtic(8) w boxes lc variable t '', \
$DATA using (4*$0 + 1):(1/0):($0+3) w boxes lc variable t 'cat3', \
$DATA using (4*$0 + 2):9:10 w yerrorbars lt 42 t ''
Here, the idea is that the boxes corresponding to the rows of the first batch of columns are placed at positions 0, 4, 8, the boxes corresponding to rows of the second batch at positions 1, 5, 9, and finally the third batch at 2, 6, 10. This effectively creates the grouping i1,i2,i3, p1,p2,p3, and n1,n2,n3. The style lc variable ensures that each row gets a distinct color. However, without any adjustment, the bars in the legend would all have the same color (since the first row always corresponds to the group i1,i2,i3). To fix this, the script uses:
$DATA using (4*$0 + 1):6:($0+1):xtic(5) w boxes lc variable t '', \
$DATA using (4*$0 + 1):(1/0):($0+2) w boxes lc variable t 'cat2'
Here, the first statement does the plotting, while the second one generates an empty plot (due to the undefined value 1/0), but uses a color index shifted by 1, i.e., ($0+2) instead of ($0+1). This achieves that the item in the legend will get the right color (blue instead of red).
The result:
Finally, the statement $DATA denotes a data block. Older versions of Gnuplot (older than 5 I guess) do not support this feature so you could replace $DATA with a variable containing the name of the file containing the data.
EDIT2:
To be more specific, for example the expression (4*$0 + 1):6:($0+1):xtic(5) requests to generate boxes at x-coordinates calculated as 4 times the row number (0-based) plus 1. The height of the boxes is taken from column 6, the color index ($0+1) is calculated as one plus the row number, and finally the xtic labels are loaded from column 5.
Thanks a lot for answering!! But the issue here is that i1,i2,i3 belong to one category; similarly p1,p2,p3 belong to one category, and n1,n2,n3 belong to one category. Essentially, each row is a different category. Also, can you please help me in running the same? $ is shown as an invalid command when I run it.
@ksn I have updated the answer to address your comment.
Can you please help me understand this format: (4*$0 + 1):(1/0):($0+3):xtic(5)? What essentially are these parameters - x:y:z or x:y:z:p in general?
Also, I tried with gnuplot 5.0 patchlevel 7. It's not recognizing the colors, and the line types also got changed. I tried with gnuplot 4.6 patchlevel 6 as well, and it also messed up the format. Could you please let me know which version you used?
@ksn originally, I made the figures with Gnuplot 5.2 (patchlevel 3). However, I tried with 5.0.7 as well and the output seems to be consistent. How exactly were the colors not recognized?
@ksn as for the expression (4*$0 + 1):(1/0):($0+3):xtic(5), I included a short explanation in the answer...
I tried modifying the plot to use patterns instead of colors in the boxes. I couldn't find a "box pattern variable" to fill each row with a different pattern, which would serve the same purpose as "lc variable" in the above plot. Can you please suggest how to proceed if I need to remove colors and use patterns instead?
|
STACK_EXCHANGE
|
I’m back after a few weeks off. Thanks to Jennifer and Will for writing TWIN4j in my absence, I enjoyed reading their take on the week’s graph related news.
This week we preview the modeling talks at the NODES 2019 conference and we have network analysis of The Prisoner of Zenda book.
We learn how to build a questionnaire with Neo4j, there’s a video introducing Spring Data Neo4j RX, and a new release of the Neo4j for Kettle plugin.
Mark Needham and the Developer Relations team
Featured Community Member: Matt Casters
Our featured community member this week is Matt Casters, Chief Solutions Architect at Neo4j.
Matt Casters – This Week’s Featured Community Member
Matt has been part of the Neo4j community for 18 months, focusing mostly on a tighter integration between Neo4j and the Kettle data integration tool.
He presented Integrating Relational, Big Data, and other Sources into Neo4j using Kettle at GraphConnect 2018, and regularly blogs about his work on his personal blog, and on the Neo4j Developer blog.
On behalf of the Neo4j community, thanks for all your work simplifying data import Matt!
NODES 2019 Preview: Modeling
With just 5 weeks to go until the first Neo4j Online Developer Summit, it’s time to preview the finalised schedule.
One of the biggest challenges for both new and seasoned users of graphs is coming up with a good graph model. The full list of talks on this topic is available by searching for the modeling tag, but here’s a preview of what’s on offer.
In It Depends (and why it’s the most frequent answer to modelling questions), Luanne Misquitta, VP of Engineering at GraphAware, will show how our use case should guide the model we come up with.
Max De Marzi, one of the best graph modelers in the business, will share his 7 years of experience helping Neo4j customers in his talk Graph Data Modeling Tips & Tricks.
Network analysis of The Prisoner of Zenda book with Spacy and Neo4j
Tomaz Bratanic combines natural language processing, graph algorithms, and graph visualisation techniques to make sense of The Prisoner of Zenda, an adventure novel written in the 19th century.
Tomaz also shared a Jupyter Notebook containing all the code used in the blog post.
Building a Questionnaire with Neo4j — part 1/3: One simple question
Stefan Dreverman has started a series of blog posts showing how to use Neo4j to build a questionnaire.
In the first post we start with a single question that has multiple choices. Stefan shows how to create a graph model, import sample data, and then query it across different dimensions.
Spring Data Neo4j RX introduction, Nested Path Comprehensions, An Introduction to Neo4j
- Gerrit Meier published a video showing how to create a sample application using the upcoming Spring Data Neo4j RX. You can find the project in the neo4j/sdn-rx GitHub repository.
- I wrote a couple of blog posts exploring Cypher’s nested path comprehensions, and contrasting them to the OPTIONAL MATCH clause.
- The video from Stephan Pirnbaum‘s An Introduction to Neo4j talk at Neos Con 2019 is now available.
- Amy Hodler and I wrote a blog post for the O’Reilly Ideas blog, titled How graph algorithms improve machine learning.
Better Neo4j plugins for Kettle
Last week Matt Casters released version 4.1.0 of the Neo4j plugins for Kettle. This release contains performance improvements for the Neo4j Output step, data conversion improvements in the Neo4j Cypher step, as well as bug fixes.
|
OPCFW_CODE
|
#include <defs.h>
#include <stdio.h>
#include <sys/fs_mmgr.h>
#include <sys/mm/mmgr.h>

/*
    Caution: getting system specific.
*/

/*
    _mmnger_memory_map is a pointer to the bit map structure that we use to keep track of all of physical memory.
    Each bit is a 0 if that block has not been allocated (usable) or a 1 if it is reserved (in use).
    The number of bits in this array is _mmngr_max_blocks. In other words, each bit represents a single memory block,
    which in turn is 4KB of physical memory.

    _mmngr_max_blocks contains the number of memory blocks available. This is the size of physical memory
    (retrieved from the BIOS by the boot loader) divided by PMMNGR_BLOCK_SIZE. This essentially divides the
    physical address space into memory blocks.

    _mmngr_used_blocks contains the number of blocks currently in use.

    _mmngr_memory_size is for reference only -- it contains the amount of physical memory in KB.
*/

//! number of blocks currently in use
static uint64_t file_mmgr_used_blocks = 0;

//! maximum number of available memory blocks
static uint64_t file_mmgr_max_blocks = 65536;

/*  Memory map bit array. Each bit represents a memory block.
    Total number of blocks = 127MB / block size, i.e. 127MB / 4K ~= 32600.
    Number of blocks which can be represented in a byte = 8, because one bit
    represents each block. Thus we need a 32600 / 8 ~= 4100 byte map,
    i.e. we need roughly 4100 bytes to represent all the blocks.
    On using a 64-bit integer array, each entry in the array can represent
    64 blocks. Thus the size of the bit map array = 4100 / 8 ~= 600 entries.
    Multiplying by 2 to support 256 MB of RAM.
*/

/*
    Caution: getting system specific.
*/
uint64_t file_mmgr_memory_map[1100];

uint64_t file_mmgr_get_total_blocks() {
    return file_mmgr_max_blocks;
}

uint64_t file_mmgr_get_used_blocks() {
    return file_mmgr_used_blocks;
}

uint64_t file_mmgr_get_total_usable_blocks() {
    return (file_mmgr_get_total_blocks() - file_mmgr_get_used_blocks());
}

/*
    Mark the block represented by bit as used. If bit 47 is passed,
    then bit 47 in the 0th index of the array is set.
    Note the 1ULL: the shift must be done on a 64-bit value, otherwise
    bits >= 32 would overflow a plain 32-bit int.
*/
inline void file_mmgr_set_block(int bit) {
    file_mmgr_memory_map[bit / 64] |= (1ULL << (bit % 64));
}

/*
    Tests if the requested block is free or not.
    Returns true if the block is free (bit clear), false if it is in use.
*/
inline bool file_mmgr_is_block_free(int bit) {
    return !(file_mmgr_memory_map[bit / 64] & (1ULL << (bit % 64)));
}

/*
    Mark the block represented by bit as free to use again.
*/
inline void file_mmgr_unset_block(int bit) {
    file_mmgr_memory_map[bit / 64] &= ~(1ULL << (bit % 64));
}

void file_mmgr_print_memory_status() {
    printf("Total number of blocks = %d\n", file_mmgr_get_total_blocks());
    printf("Total number of used blocks = %d\n", file_mmgr_get_used_blocks());
    printf("Total number of usable blocks = %d\n\n", file_mmgr_get_total_usable_blocks());
}

/*
    Return the first free block. Note that each bit in the bit map array represents a block.
    So, say 132 blocks are filled and we are supposed to insert in the 133rd block.
    It means that memory_map[0] and memory_map[1] will be all F's. memory_map[2] will be
    0xFFFFFFFF000011111. Thus for j=5, memory_map[2] & 1<<5, the bit will be 0. Hence, we return
    2*64 + 5 = 133rd block as free.
*/
int file_mmgr_get_first_free() {
    uint32_t i, j;
    uint64_t bit;
    for (i = 0; i < (file_mmgr_get_total_blocks() / 64); i++) {
        if (file_mmgr_memory_map[i] != 0xFFFFFFFFFFFFFFFF) {
            for (j = 0; j < 64; j++) {
                bit = 1ULL << j;  /* 64-bit shift; 1<<j would overflow for j >= 32 */
                if (!(file_mmgr_memory_map[i] & bit))
                    return i * 64 + j;
            }
        }
    }
    return -1;
}

int file_mmgr_get_first_range_free(int size) {
    uint32_t i, j, count;
    uint64_t bit;
    for (i = 0; i < (file_mmgr_get_total_blocks() / 64); i++) {
        if (file_mmgr_memory_map[i] != 0xFFFFFFFFFFFFFFFF) {
            for (j = 0; j < 64; j++) {
                bit = 1ULL << j;
                if (!(file_mmgr_memory_map[i] & bit)) {
                    uint32_t temp_bit = i * 64;  /* go to that corresponding frame */
                    uint32_t free = 0;
                    temp_bit += j;
                    for (count = 0; count < size; count++) {
                        if (file_mmgr_is_block_free(temp_bit + count))
                            free++;
                        else {
                            j = j + count;
                            break;  /* no use being in this loop anymore */
                        }
                        if (free == size)
                            return ((i * 64) + j);  /* found the required range, return it */
                    }
                }
            }
        }
    }
    return -1;
}

/*
    Allocates a page from physical memory and returns the index of the block.
    Allocates only a single page.
*/
int file_mmgr_alloc_block() {
    int page_frame;
    if (file_mmgr_get_total_usable_blocks() <= 0) {
        printf("Total usable blocks is less than or equal to 0\n");
        return -1;
    }
    page_frame = file_mmgr_get_first_free();
    if (page_frame == -1) {
        printf("Not able to find a matching frame\n");
        return -1;
    }
    file_mmgr_set_block(page_frame);
    file_mmgr_used_blocks++;
    return (page_frame);
}

void* file_mmgr_alloc_size_blocks(int size) {
    int page_frame;
    int i;
    if (file_mmgr_get_total_usable_blocks() <= size) {
        printf("Not enough usable blocks left\n");
        return NULL;
    }
    page_frame = file_mmgr_get_first_range_free(size);
    if (page_frame == -1) {
        printf("Not able to find a matching frame\n");
        return NULL;
    }
    for (i = 0; i < size; i++) {
        file_mmgr_set_block(page_frame + i);
    }
    file_mmgr_used_blocks += size;
    return ((void *)((uint64_t)page_frame * PHY_PAGE_SIZE));
}

void file_mmgr_free_block(void *p) {
    if (p == NULL) {
        printf("You can't free a NULL pointer!\n");
        return;
    }
    int page_frame = (((uint64_t)p) / BLOCK_SIZE);
    file_mmgr_unset_block(page_frame);
    file_mmgr_used_blocks--;
}

void file_mmgr_free_size_blocks(void *p, int size) {
    int i;
    if (p == NULL) {
        printf("You can't free a NULL pointer!\n");
        return;
    }
    int page_frame = (((uint64_t)p) / BLOCK_SIZE);
    for (i = 0; i < size; i++)
        file_mmgr_unset_block(page_frame + i);
    file_mmgr_used_blocks -= size;
}

void file_mmgr_phy_init(uint32_t* modulep) {
    /* modulep is currently unused; just clear the bit map */
    memset(file_mmgr_memory_map, 0x0, sizeof(file_mmgr_memory_map));
    return;
}

/* Stub: block count reporting is not implemented yet. */
inline uint16_t file_mmgr_get_block_count() {
    return 0;
}
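A minimal sketch of how this allocator might be exercised (assuming BLOCK_SIZE and PHY_PAGE_SIZE are both the 4 KB block size defined elsewhere in the kernel headers; the function name is illustrative):

void file_mmgr_demo() {
    file_mmgr_phy_init(0);                       /* clear the bit map */
    int frame = file_mmgr_alloc_block();         /* grab one block */
    void *run = file_mmgr_alloc_size_blocks(4);  /* grab four contiguous blocks */
    file_mmgr_print_memory_status();             /* five blocks now in use */
    if (run)
        file_mmgr_free_size_blocks(run, 4);
    if (frame != -1)
        file_mmgr_free_block((void *)((uint64_t)frame * PHY_PAGE_SIZE));
    file_mmgr_print_memory_status();             /* all blocks free again */
}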
|
STACK_EDU
|
What is port forwarding?
In computer networks, port forwarding or port mapping is a network address translation (NAT) application that redirects a communication request from one combination of address and port number to another, while packets pass through a network gateway such as a router or firewall. This method is most often used to create services on a host located in a secure or masked (internal) network accessible to hosts on the opposite side of the gateway (external network) by reassigning the destination IP address and communication port number to the internal host.
Why do I need port forwarding?
Port forwarding allows remote computers (for example, computers on the Internet) to connect to a specific computer or service on a private local area network (LAN). In a typical residential network, nodes access the Internet via a DSL or cable modem connected to a router or network address translator (NAT/NAPT). Hosts on a private network are connected to an Ethernet switch or communicate via a wireless LAN. The external interface of the NAT device is configured with a public IP address. The computers behind the router, on the other hand, are invisible to hosts on the Internet, since each of them only communicates with a private IP address.
When configuring port forwarding, the network administrator allocates one port number on the gateway for exclusive use to communicate with a service on a private network located on a specific host. External hosts need to know this port number and gateway address to communicate with the internal network service. Often port numbers of well-known Internet services, such as port number 80 for web services (HTTP), are used for port forwarding, so shared Internet services can be implemented on hosts in private networks.
Typical applications include the following:
- Starting a public HTTP server on a private LAN
- Allowing Secure Shell access to a host on a private LAN from the Internet
- Allowing FTP access to a host on a private LAN from the Internet
- Launching a public game server on a private LAN
Administrators configure port forwarding in the gateway operating system. In Linux kernels, this is achieved by using packet filtering rules in the iptables or netfilter kernel components. BSD and macOS operating systems before Yosemite (OS 10.10.x) implement it in the Ipfirewall module (ipfw), and macOS operating systems, starting with Yosemite, implement it in the Packet Filter module (pf).
When used on gateway devices, port forwarding can be implemented using a single rule for converting the destination address and port. (In Linux kernels, this is the DNAT rule). In this case, the source address and port remain unchanged. When used on machines that are not the default gateway on the network, the source address must be changed to the address of the translator machine, otherwise packets will bypass the translator and the connection will not be established.
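For example, a typical destination-NAT rule on a Linux gateway might look like the following (the address and port are only illustrative):

iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80

This forwards any TCP traffic arriving on the gateway's port 80 to the internal host 192.168.1.10, leaving the source address and port unchanged.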
When port forwarding is implemented by a proxy process (for example, in application-level firewalls, SOCKS-based firewalls, or through TCP proxy channels), no packets are actually broadcast, but only data is proxied. This usually results in changing the source address (and port number) to the proxy machine address.
Usually only one of the private hosts can use a specific redirected port at the same time, but sometimes a configuration is possible that allows access to be distinguished by the source address of the originating host. Unix-like operating systems sometimes use port forwarding because port numbers less than 1024 can only be bound by software running on behalf of the root user. Running with superuser privileges (for port binding) can be a security risk for the host, so port forwarding is used to redirect a low-numbered port to another high-numbered port, so that the application software can run as a normal operating system user with limited privileges.
The Universal Plug and Play Protocol (UPnP) provides the ability to automatically install port forwarding instances on home Internet gateways. UPnP defines the Internet Gateway Device Protocol (IGD), which is a network service by which an Internet gateway announces its presence on a private network through a Simple Service Discovery Protocol (SSDP). An application providing Internet services can detect such gateways and use the UPnP IGD protocol to reserve the port number on the gateway and force the gateway to forward packets to the listening socket.
Types of port forwarding
Port forwarding can be divided into the following specific types: local, remote and dynamic port forwarding.
Local port forwarding
Local port forwarding is the most common type of port forwarding. It is used to allow a user to connect from a local computer to another server, i.e. securely forward data from another client application running on the same computer as the Secure Shell (SSH) client. By using local port redirection, you can bypass firewalls that block certain web pages.
Connections from the SSH client are redirected through the SSH server to the intended destination server. The SSH server is configured to redirect data from the specified port (local to the host on which the SSH client is running) through a secure tunnel to a specific host and destination port. The local port is on the same computer as the SSH client, and this port is the "forwarded port". On the same computer, any client that wants to connect to the same target host and port can be configured to connect to the port being redirected (rather than directly to the target host and port). After this connection is established, the SSH client listens on the port being redirected and forwards all data sent by applications to this port through a secure tunnel to the SSH server. The server decrypts the data.
On the command line, "-L" indicates local port forwarding. You must specify the target server and two port numbers. Port numbers less than 1024 or greater than 49150 are reserved for the system. Some programs will only work with specific source ports, but in most cases you can use any source port number.
Some options for using local port forwarding:
- Using local port forwarding to receive mail
- Connect from your laptop to the website using an SSH tunnel.
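As an illustration of the syntax (host names and ports here are arbitrary), the following OpenSSH command forwards local port 8080 through ssh-server to port 80 on intranet-host:

ssh -L 8080:intranet-host:80 user@ssh-server

Any local client that connects to localhost:8080 is then tunneled to intranet-host:80 via the SSH server.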
Remote port forwarding
This form of port forwarding allows applications on the server side of a Secure Shell (SSH) connection to access services located on the SSH client side. In addition to SSH, there are proprietary tunneling schemes that use remote port forwarding for the same general purpose. In other words, remote port forwarding allows users to connect from the tunnel server side, SSH or another, to a remote network service located on the tunnel client side.
To use remote port forwarding, you need to know the destination server address (on the tunnel client side) and two port numbers. The selected port numbers depend on which application will be used.
Remote port forwarding allows other computers to access applications hosted on remote servers. Two examples:
- An employee of the company hosts an FTP server at home and wants to provide access to the FTP service to employees using computers at the workplace. To do this, an employee can configure remote port forwarding via SSH on the company's internal computers by specifying the address of his FTP server and using the correct port numbers for FTP (standard FTP port TCP/21).
- Opening remote desktop sessions is a common application of remote port forwarding. Using SSH, this can be done by opening the virtual network computing (VNC) port (5900) and specifying the address of the target computer.
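With OpenSSH, remote forwarding is requested with "-R". For example (again with purely illustrative names), the following command run from the employee's home machine exposes the home FTP server on port 2121 of the company host:

ssh -R 2121:localhost:21 employee@company-host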
Dynamic port forwarding
Dynamic Port Forwarding (DPF) is a method of bypassing a firewall or NAT on request using holes in the firewall. The goal is to allow clients to securely connect to a trusted server that acts as an intermediary to send/receive data to one or more target servers.
DPF can be implemented by configuring a local application, such as SSH, as a SOCKS proxy server, which can be used to process data transmission over a network or over the Internet. Programs such as web browsers must be configured individually to route traffic through the proxy, which acts as a secure tunnel to another server. As soon as the proxy is no longer needed, the programs should be reconfigured to their original settings. Because of these manual requirements, DPF is used infrequently.
Once a connection is established, the DPF can be used to provide additional security for a user connected to an unreliable network. Since the data must pass through a secure tunnel to another server before it is redirected to the original destination, the user is protected from packet listening that may occur on the local network.
DPF is a powerful tool with many applications; for example, a user connected to the Internet via a cafe, hotel or other minimally secure network can use DPF to protect data. DPF can also be used to bypass firewalls that restrict access to external websites, such as corporate networks.
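In OpenSSH, dynamic forwarding is requested with "-D"; the following (illustrative) command starts a SOCKS proxy on local port 1080 that tunnels traffic through ssh-server:

ssh -D 1080 user@ssh-server

A browser configured to use localhost:1080 as a SOCKS proxy will then route its traffic through the secure tunnel.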
Instructions for configuring port forwarding for routers
- Port forwarding to Mikrotik
- Port forwarding to TP-Link
- Port forwarding to Keenetic?
- Port forwarding to D-Link?
|
OPCFW_CODE
|
By Development Team
What's new in Admin By Request 7.1
Version 7.1 is a continuation of the major release 7.0. We have had lots of great feedback from our customers, and 7.1 is essentially a feature set based on customer feedback. We recommend that all customers upgrade to version 7.1, because it adds important new features and resolves some annoyances that customers have reported to us.
Right-click "Run As Administrator"
The new version 7.1 can now detect all files that require administrator permissions to run, without users having to right-click and use “Run As Administrator”. Most of the time, Admin By Request was able to determine when users needed to run a file with administrative permissions (meaning that without Admin By Request on an endpoint, the User Account Control (UAC) window would pop up). In some specific circumstances, there was still a need to right-click. These issues have been resolved, and users no longer need to think about right-clicking to run an install file or similar.
When you use “Require Approval” (approval mode), your users are notified by email when you approve or deny the request. This email still goes out, but the user is now also notified by the application. Note that this is not a push mechanism. It is intentionally a pull mechanism that runs in intervals, which means it's not entirely real-time, but can be delayed by up to half a minute from when the request was approved or denied. The reason for this is that many of our customers have a policy of not allowing any sort of push mechanism to endpoints for security reasons, and therefore it's a business decision not to use Firebase or similar to push these messages.
Under “Applications” in your portal settings, there is a new tab named “Tray Tools”. It allows you to customize a right-click tray menu with tools for your end users. These could be links to Control Panel applets, web links or just handy applications that you want your users to start from here.
Control Panel without an Administrator Session
However handy, the real reason the Tray Tools were introduced is to fix issues with the old Windows Control Panel. The old Control Panel goes all the way back to Windows 95 and was made long before User Account Control (UAC) came along; for this reason, it does not always behave correctly with UAC - which in turn means that Admin By Request cannot always detect these in-line elevations inside the Control Panel. If you add links to the Control Panel from the Tray Tools, the elevation of the Control Panel works without an Administrator Session, because the process is initiated by Admin By Request. There is a pre-set "Quick add" list to add Device Manager, Network Adapter Settings, Add Printer and Uninstall Programs. You can then refer your users to the Tray Tools to use these shortcuts, avoiding the need to grant them a full Administrator Session.
New About screen
The About screen has been totally redesigned. The About screen is used by your support staff and has been redesigned to be more user friendly and also to give space to more features in the About screen, which in reality is the service menu, where you can check connectivity, send diagnostics, use Support Assist, etc. If you missed what Support Assist ("Assistance") is, it allows your service staff to perform administrator tasks on the endpoint without the end user having to log off. The Auditlog will show both the end user and the servicing user for documentation.
Uninstall PIN Code
In most organizations, help desk users servicing endpoints have an option to get someone with a domain or azure administrator account to help, if there is a need to uninstall Admin By Request for what-ever reason. If that is not the case, Admin By Request can be uninstalled with a 10 digit PIN code. We have intentionally not added an option to uninstall Admin By Request from the portal, because doing so by mistake or with malicious intent could have hard consequences. The thinking behind this is that you need both access to the end point AND access to the portal to be able to uninstall without a domain or azure administrator account.
The Uninstall PIN Code can be found in the 5th About left-menu named “System”. It is called “System” to not encourage users to try to uninstall the software. The uninstall PIN code can be generated in the inventory on a machine and is 10 digits. There is no PIN code unless someone generates one in the portal.
Regardless of whether a portal user has actually generated such a PIN code, the endpoint behaves the same, so as not to give away whether one exists. If a user tries to brute force the PIN code, it will simply return “wrong PIN code” for the next 24 hours no matter what. This means it would statistically take 1.3 million years to brute force the PIN code - if you knew there even was one.
Windows 10 Enterprise for Virtual Desktops
The Virtual Desktops edition of Windows 10 is a workstation edition that behaves like a server that is typically provisioned through Microsoft Azure. The server edition of Admin By Request can now detect this special version of Windows 10 and fully support the multi-user environment.
Version 7.0 had compatibility issues with a few specific applications that were reported through support tickets, such as CodeSys' module updater and Acronis installer. These have all been resolved.
The inventory now includes a flag showing whether BitLocker is enabled. You can filter the inventory to check whether all your endpoints are encrypted.
|
OPCFW_CODE
|
Difference between revisions of "Artwork guidelines"
Revision as of 17:19, 21 August 2017
GCompris Artwork Guidelines
Here are the guidelines to follow when creating new artwork or converting old artwork for GCompris.
01 - Simple style
Simplify shapes to try to reach a mix of cartoon and anime style. Make things cute.
02 - Internal lines
To add internal lines (like a nose on a front-facing face, or where two shapes of the same color cross), use the surface color and darken it enough to get some contrast, but don't make it black (unless really needed by the activity). For some reference values, reduce the lightness by around 100 and the saturation by around 20. Also don't use linear-width outlines; use shapes instead to make nice dynamic-width lines with sharp ends.
03 - Outlines
Don't add outlines on the scenery. Only add outlines on characters and important objects. Outlines must be wider than internal lines, regular, and colored the same way as internal lines. To draw outlines, copy the shape, apply the outline color to it, and paste the original fill object over it. Then reduce the object using a combination of centered scaling and manual point edits to get a regular outline with light variations. This way you can keep the original shape silhouette as much as possible without fattening it with the outline. It's important to keep the same set of points as the original shape (so don't use the shrink-path feature, as it makes a new set of points). For special cases of complicated nested shape constructions (like the balloons in the examples), it is allowed to add an overlaid outline with a dark gray, 50% alpha color.
04 - Flat colors for hard objects
Use flat colors on every "hard" surface, no gradient. Only use gradient for transparency effects (like the sky, some water, eyes...) or similar special effects. For the eyes, follow the examples: no outline, almost white, top-down gradient for the iris, almost-black pupil, two white reflections (with possibly small transparency).
05 - No shadows
Don't add shadows most of the time, to make things more shiny. Only to suggest something is really big, you may add a blurred colored shadow area on the floor.
06 - Soft colors
Things must be very colored, but use more pastel or at least not too saturated colors (unless really needed by the activity).
07 - SVG format
All artwork must be pure vector svg files, no raster files. Also we can only support the SVG Tiny 1.2 subset of features.
|
OPCFW_CODE
|
[Urgent] Opt-out of "Automatically set up a basic Google Analytics 4 property"
In March (next week), Google will start creating new GA4 site tags if there's no existing GA4 ID connected to an existing UA ID. But Cilium.io already has a GA4 property.
So, Cilium.io needs to opt out of Google's forced & automatic GA4 migration. To do so:
Visit the Google Analytics 4 Property Setup Assistant panel for UA property UA-96283704-1, and
Turn off Automatically set up a basic Google Analytics 4 property
Unfortunately, I have access to the panel but I don't have the permissions to opt out. Can you either grant me the permissions or make the change?
Note that according to the Google Tag Assistant, the Cilium docs are also being tracked via another UA property with ID UA-17997319-1. I don't have access to that account. You'll need to opt out of there too.
If you have access to the ReadTheDocs account for the Cilium docs, it might be a good idea to remove the UA IDs from the config there.
For context regarding the GA4 opt-out issue, see:
https://github.com/cncf/techdocs/issues/170
/cc @nate-double-u @lizrice @qmonnet
I don't have access to these.
@xmulligan Hi, do you have access and required permissions in the Google Analytics panel to help? Otherwise, do you know who does?
Cc @aanm regarding item 3 and access to ReadTheDocs
Thanks @lizrice.
I can't see any properties with the ID UA-17997319-1 in the admin panel. I wonder if this ID is some kind of default value in a Sphinx config file that has been propagated inadvertently?
In that case, would it be possible to remove that ID from the Sphinx config?
@chalin Do you mean the googleanalytics_id variable in Documentation/conf.py should be gone altogether? I suppose this means the ID we use is defined somewhere in that case?
I can take care of the removal from conf.py if that's what we need
@chalin Do you mean the googleanalytics_id variable in Documentation/conf.py should be gone altogether?
No, that stays.
What needs to be removed is the UA ID set for global_analytics_code in the RTD config, e.g.:
context = {
...
'user_analytics_code': '',
'global_analytics_code': 'UA-17997319-1',
...
}
Change the setting to 'global_analytics_code': '' (or drop both analytics parameters if that works).
Since we're talking about required changes, there is this that needs to be done:
#194
I've removed UA-96283704-1 from the readthedocs admin panel
@aanm or @lizrice can you remove UA-17997319-1 from the config too?
@chalin I don't see the UA-17997319-1 anywhere in the readthedocs admin panel nor we do have that set in our Documentation config
@aanm - thanks for confirming. Investigating this a bit more, it seems to be a RTD ID, so we're good! Thanks all!
Thanks for your help @chalin !
|
GITHUB_ARCHIVE
|
Visual Micro is a fully compatible Arduino programming tool for Microsoft Visual Studio; Atmel Studio is also a supported alternative to the Arduino IDE. An introduction to the assembly language of the Atmel AVR processors of type AT90Sxx, ATtiny, ATmega and others, with hints on the hardware and software. A step-by-step guide to using Atmel Studio and a programmer to upload your firmware to an Atmel microcontroller. If you have an Orangutan or 3pi robot, or wish to use the Pololu AVR C/C++ library for some other reason, we recommend following the Pololu AVR programming quick start guide instead of this tutorial; the following tutorial covers the steps needed to program AVRs in Windows using Atmel Studio and a programmer. How does C compare with assembly language? This sequence of notes delves into a variety of aspects of programming the Atmel AVR ATmega328P microcontroller.
Why assembly language? The tools exist to program Atmel microprocessors in the C programming language, and C is widely used, so why assembly? How different is the Arduino language from the Atmel Studio language? To get Atmel Studio running with the Arduino library, use avrdude. AVR Timers - Timer0, posted by Mayank on Jun 24, 2011 in Atmel AVR, Microcontrollers.
Microchip technology inc is a leading provider of microcontroller, mixed-signal, analog and flash-ip solutions, providing low-risk product development, lower total system cost and faster time to market for thousands of diverse customer applications worldwide. I havenot not worked on atmel studio before ,what are the programming languages supported by atmel studio any help. Atmel studio and atmega128 a beginner’s guide computer structure and assembly language programming 12 atmel studio 6 overview atmel studio 6. In this short instructable we are going to learn how to load a program to an arduino uno board using atmel studio instead of the arduino ide this is useful when you need to develop a program using more advance features or in another language, in this case we are going to use assembly language. Learn atmel avr programming - an introduction shocking builds loading atmel avr für dummies language: english.
The eldo control language made it possible for atmel to reduce cost while maintaining performance by supporting an iterative, automated decision-making process that exercised hard to reach parameters. Eeprom access will be covered in it's own section because it is a different beast in atmel and most the avr assembler language but it is such a. Mixing c and assembly language programs atmel avr assembly language include “m32definc” cseg org 0 rjmp main org 0x2a main: ldi r16, 0xff.
This atmel 8-bit avr risc-based microcontroller combines 8kb isp flash book language english computer user books advanced & power users brand atmel. Gcc winavr™ (pronounced whenever) is a suite of executable, open source software development tools for the atmel avr series of risc microprocessors hosted on the windows platform. An atmel software framework tutorial series that shows how to use atmel software framework (asf) to program arm cortex microcontrollers using atmel studio and the c programming language. See the previous post (here) for detailed information on as7 installation and simulating of an arduino program execution as an exercise to gain familiarity with as7, lets make an assembly language project using the below blink code: • select “filenewproject.
Atmel arm programming for embedded systems (mazidi & naimi arm series) (volume 5) [muhammad ali mazidi, shujen chen, eshragh ghaemi, naimis] on amazoncom free shipping on qualifying offers. Avr microcontrollers and atmel studio for c programming with arduino book by warwick a smith support and errata page. Programming arduino in assembly language arduino forum forum 2005-2010 (read only) software development programming atmel makes the. Where do i look to see what version of printf is implemented for a particular project i am not getting any floating point support.
|
OPCFW_CODE
|
SageMaker Model Registry – Offers a way to register trained models so that they can be easily tracked and deployed.
- It integrates computer vision with IP cameras on the premises.
- “An interesting area is automation of varied tasks,” Saha says.
- More examples for models such as BERT and YOLOv5 can be found in distributed_training/.
- However, only some instance types can be found as “fast launch”.
- AWS ML/AI services are trusted by some of the renowned multinational companies around the world.
For example, historical data for an ML model to plan the fastest route might neglect to account for an accident or perhaps a sudden road closure that significantly alters the flow of traffic.
To address this issue, practitioners route a copy of the inference requests going to the production model to the new model they want to test.
Great Things About Using AWS SageMaker
Subsequently, additional storage charges are incurred for the notebooks and data stored in the respective directory.
New tool can spot problems — such as overfitting and vanishing gradients — that prevent machine learning models from learning.
And of course, one of SageMaker’s aims was to make ML easier.
“It eliminated the heavy lifting involved with managing ML infrastructure, performing health checks, applying security patches, and conducting other routine maintenance,” Saha says.
The trained model can then be deployed using code like the above.
The initial_instance_count parameter specifies the number of instances to be used for serving predictions.
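For instance, a deployment call with the SageMaker Python SDK might look roughly like the following sketch (the instance type, count and the estimator variable are placeholders, not values taken from this article):
# Assumes `estimator` is an already-trained sagemaker.estimator.Estimator
predictor = estimator.deploy(
    initial_instance_count=1,        # number of instances serving predictions
    instance_type="ml.m5.large",     # placeholder instance type
)
result = predictor.predict(payload)  # payload: request data in the model's expected format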
Host Models with NVidia Triton Server shows how to deploy models to a realtime hosted endpoint using Triton as the model inference server.
- Video Game Sales develops a binary prediction model for the success of video gaming predicated on review scores.
- In order to share your latest version, you must create a new snapshot and then share it.
- Furthermore, models generated in Canvas can then be shared with data scientists and developers to make them available in SageMaker Studio.
- Of course as
Direct internet access could be disabled on request to supply more security.
• Notebook sharing can be an integrated feature in SageMaker Studio.
Users can generate a shareable link that reproduces the notebook code as well as the SageMaker image necessary to execute it, in only several clicks.
BlazingText Tuning shows how to use SageMaker hyperparameter tuning with the BlazingText built-in algorithm and 20_newsgroups dataset.
Autopilot enables AI models to be trained for a given data set and ranks each algorithm by accuracy.
Notebook instances run within containers, which are isolated environments.
Use Built-in Algorithms With Pre-trained Models In SageMaker Python SDK
Maintains Uptime — Process keeps on running without any stoppage.
Although we’re extremely excited to receive contributions from the community, we’re still working on the best mechanism to take examples from external sources.
Please bear with us in the short term if pull requests take longer than expected or are closed.
Please read our contributing guidelines if you would like to open an issue or submit a pull request.
Using AutoML algorithm provides a detailed walkthrough on how to use AutoML algorithm from AWS Marketplace.
• SageMaker Autopilot to automatically create ML models with full visibility.
An elastic, secure, and scalable environment to host your models, with one-click deployment.
Built-in model tuning that may automatically adjust hundreds of different combinations of algorithm parameters.
You then have to write the inference code to create your API endpoint, that will serve the requests made to the model.
With SageMaker, you can easily deploy trained models in production with one click so that developers can begin generating predictions for batch data or real-time.
In short, SageMaker and S3 buckets are services provided by AWS.
Our notebook instance needs data that we store in an S3 bucket in order to build the model.
Therefore a role must be provided so that the notebook instance can access data from the S3 bucket.
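In a notebook, that role and bucket wiring typically looks something like the sketch below (the bucket prefix and content type are placeholders, and get_execution_role assumes the code runs on a SageMaker notebook instance):
import sagemaker

role = sagemaker.get_execution_role()   # IAM role attached to the notebook instance
session = sagemaker.Session()
bucket = session.default_bucket()       # or your own bucket name

# Hypothetical example: point a training job at data already uploaded to S3
train_input = sagemaker.inputs.TrainingInput(f"s3://{bucket}/train/", content_type="csv")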
Explain The Amazon Sagemaker And Advantages Of Sagemaker
It makes sense to utilize Spark containers when these pre-processing tasks are intermittent and would not use a dedicated Spark cluster enough to make administering the cluster worthwhile.
AWS Inferentia is a custom-designed chip optimised for inference in the cloud.
This optimisation can lower the cost of cloud-based machine learning by around 45% per inference.
Up to 16 Inferentia chips can be configured in a single Inf1 EC2 instance for maximum power and throughput.
A core comprises arithmetic logic units, control units and a memory cache.
This architecture is suitable for processing many similar, simpler computations in parallel.
This is a typical workload for machine learning applications.
GPUs cost more but complete processing more quickly, and can therefore work out more cost effective.
|
OPCFW_CODE
|
I have various HD cloning tools, yet I have yet to find one that makes a full clone of my current windows boot drive (C drive) to another same sized drive without me having to do it manually. I want a script that can do this for me. I want my main boot drive to be cloned every 3 or 4 days to another physical drive I have connected to my computer. I
I need to build C#/Java software which will do the following: 1. It needs to be installed and run on any Windows platform and get a UNIQ ID. 2. It needs to be connected non-stop to my database server. 3. I need to execute some commands whenever I want (using a backend interface). 4. I need a backend interface to manage all my connected UNIQ IDS. 5
...(instead of actually sending, you could open a new email and then the user could press [Send]) Info to collect: * Windows operating system info: * (a) major version: Win10, Win8, Win7; * (b) Edition: Home or Pro (or other); * (c) version, such as 1703, 1803: * (d) 64-bit vs 32-bit * e.g. "Win10-Home (1803) 64-bit” or "Win10-Pro
C# WPF source code program using Orbbec Astra camera to Detect & Tracking user's body Skeleton Joints X,Y and Z program (source code) For work on this project required to have Orbbec Astra Camera, [login to view URL] Real time user's body Skeleton joints points detection and tracking program (C# WPF source code), Using Orbbec Astro
I make Windows customization software and I need to create a simple themes + desktop wallpapers gallery for my website. The website uses ASP.NET but has a PHPBB forum and I need the new gallery to integrate with the PHPBB's user database (i.e.; only users with accounts can upload, edit and delete their own stuff, and I need the gallery to share user
Please fill and revert back to us: Lakshya Leaders JD (Candidate Name) (Total Exp) (Current Company) QA Tech Lead V [login to view URL], when you apply! About Us: Lakshyaleaders Software (P) Ltd. is based out of Bangalore. We have client base over United States, Germany, Singapore and India. We are Microsoft BizSpark partner. We primarily focus on delivering
I need you to develop some software for me. I would like this software to be developed for Windows using C or C++.
we need you hire 2 developer maintain to a c++ graphical map based software. This software to be developed for Windows using C++, planning to migrate to c#. please bid with your CV that has experience of graphical and map based application in c++.
Needed features: effective Cleaning of C Drive, Scheduled cleaning, Disk fragmentation, File recovery.
I have a windows application that is written in c# that needs to be enhanced for the following functionality: 1 - Currently results are showing on the windows form, I need these results to be saved in the database (mysql preferably) , and also need a screen that will display results based on stored proc. 2- Currently one of the sites that is being
...m/[login to view URL]). The kiosk contains various hardware but of interest to this project is the bill acceptor and door switches. The kiosk comes with a software library call TABIO which includes several COM components that expose various interfaces and fire events based on actions such as the user inserting a banknote into the bill
I need you to develop some software for me. I would like this software to be developed for Windows using WPF C# I will give list of objects like List<tcobject> tclist=new List<>(); tcobject contains: int id; string name; string property 1; string property 2; string property 3; so this list have parent and child ,, parent and child will shown in
I have a Linux Multi thread TCP server which works but after a while does not accept new clients. I cannot find wha...clients. I cannot find what is causing the problem. I need someone with experience in sockets and multi threading to stop it from failing. I have the server software in a codeblox project and a windows C# client to exercise the server
I have installed a program in c drive. After 40 days, I have reinstalled a new Windows on same drive. I had a file on the installed program on the previous windows. Now I need to recover that file.I have tried with some recovery software and can't find the file. Is there anyway to recover ?
...[login to view URL] Using Kinect. We have our own software engineers (I am one myself) and have experience with kinect development in C#, but we have our schedule quite full for now. 1) Can you develop something like the youtube video ? It should run on windows or linux. 2) If yes, how much time, and what is the budget ? 3) For
JOB SUMMARY This is a full-time position, but if you prefer to start part-time with us, we can also consider it as an option. We are looking for people in software development with bright and curious minds who are ready to join our growing team of developers and to collaborate with our Qt and Quality Assurance teams. You will work from home with
Software for windows, in vbnet or c++, is a exe script, search an expert developer of windows programs
...Poland, are welcome. Implement and test the Human Resources Information System desktop application. Your software product will be a database-backed desktop application with a Windows Presentation Foundation (WPF) graphical front end, implemented in C#. As part of your development efforts, you will prepare and apply a small collection of test cases to verify
Implement and test the Human Resources Information System desktop application. Your software product will be a database-backed desktop application with a Windows Presentation Foundation (WPF) graphical front end, implemented in C#. As part of your development efforts, you will prepare and apply a small collection of test cases to verify that the completed
JOB SUMMARY This is a part-time position with the goal of becoming full-time in the future. We are looking for people in software development with bright and curious minds who are ready to join our growing team of developers and to collaborate with our Qt and quality assurance teams. You will work from home with mostly flexible hours. You must,
...to change our existing sensor (PrimeSense 1.09) to new sensor (Asus Xtion 2). Our software is built on C++ language , OpenGL and uses Visual Basic as platform. people have very good experience working with OpenGL, using Windows platform and a also have experienced in C++ can apply. People from Bangalore, India location is preferred. people willing to
Program Ubot Classified Ad Submitter into C# Bot I have programmed a bot which submits ads to 10 classified ad sites using Ubot. I need this bot coded in C# so that it does not use the Ubot framework. I can provide the following 1. The ubot code (similar to c#) 2. A video explaining exactly how the software works. 3. The compiled bot. I own all the
I have an application built in C# with Visual Studio 2017. The project is a game ranking system that inserts data into Firebase and SQL. There are two pieces of software: one manages players, the ranking, etc., and the other pulls from Firebase and displays it on a front end for the client on a second monitor. It is necessary to repa...
I am dealing with an existing windows software software with C# project. I need help remotely to improve existing RDLC reportings and fix bugs. Therefore, I need an RDLC reporting expert.
I need a c# .NET GUI software for gathering and analyzing data in a Peer-to-Peer software called "Share EX2" with the following functions: - Search for Uploads by different Metadata - List Uploads of User - List available Peers for File - Download File - Storage of found data in local DB (sqlite prefered) - Export of found/filtered data into XLS, TXT
... by unleashing the full power of C++ and Delphi programming languages. Remote Administration Either you are a private user wanting to control your PC from afar, or a big company which wants to administrate hundreds of machines from a single computer, TheRAT will suit your needs! Traditional Remote Support software (such as RDP) does not have the functionalities
I have older C+ software that is no longer supported by the original vendor. I need assistance to make it work on new laptop windows 7 32bit OS. You need strong reverse engineering skills to complete this task
I want a software done in C++ that automatically takes a screen shot of a specific program (computer game), must recognize its resolution, then it must fetch 3 images which are followed by numbers and convert from image to character, we only need to look for numbers. The numbers we are looking for are always in a fixed position depending on the resolution
I need you to develop some software for me. I would like this software to be developed for Windows using .NET. or c++ I own database on cloud hosting. I like to develop desktop program to search and find images on the cloud by using API. I Don't Looking For Mobail App. I have API.
...Prefer VB, but open to other software, with the exception of the following: a. Anything with Java in it's name b. Macros c. Python I do want a UI to keep things clean. I do NOT want a web based program and the script needs to operate without the browser. I can't have Chrome blowing up and causing problems with Windows. It's important you know that
I need you to develop some software for me. I would like this software to be developed for Windows using Java or C#. i want a software which exaclty look like MYOB V19.9 with same functionaluty an looking aince am used to MYOB my i want to have my own software so that i will not pay monthly fees
...project is to create a castle building system for the base building game, which will dynamically place walls/rooms down in run time. using Unity c#. To understand what I Want is best to view the video of Software inc example I uploaded. (Build room video) So basically I want to replicate this room placement system, but with rooms being combined at the
|
OPCFW_CODE
|
Server socket address already in use
Hi Chris, thanks for your great work!
I have the issue that my hardware wallet produced a zpub key instead of a xpub key when I set it up. It seems that EPS/Electrum cannot read my public keys out of the zpub when I start ./rescan-script.py; it's not showing any transactions/keys. Is there a solution for this?
Thanks in advance!
The only master public keys that work are obtained from Electrum's menu Wallet -> Information. Master public keys created by the hardware wallet interface itself won't work.
So the workflow for hardware wallet + Electrum + EPS is: 1) Connect hardware wallet to Electrum 2) Obtain master public key(s) from Electrum's menu 3) Set up Electrum Personal Server
thanks for your reply. I understood that, but the master public key obtained from Wallet -> Information starts with "zpub.." and when I enter that zpub key into the eps config.cfg and $ ./server.py I get the following:
2018-08-16 21:22:53,700001 [ LOG] Obtaining bitcoin addresses to monitor . . .
2018-08-16 21:23:08,667017 [ LOG] Obtained list of addresses to monitor in 14.966687679290771sec
2018-08-16 21:23:08,667776 [ LOG] Building history with 200 addresses . . .
2018-08-16 21:23:08,765173 [ LOG] Found 0 txes. History built in 0.01688981056213379sec
2018-08-16 21:23:08,766883 [ LOG] Starting electrum server
Traceback (most recent call last):
File "./server.py", line 559, in
main()
File "./server.py", line 556, in main
poll_interval_listening, poll_interval_connected, certfile, keyfile)
File "./server.py", line 326, in run_electrum_server
server_sock = create_server_socket(hostport)
File "./server.py", line 318, in create_server_socket
server_sock.bind(hostport)
OSError: [Errno 98] Address already in use
I'm assuming this error comes up because I have a zpub instead of an xpub, is that possible?
That's nothing to do with the master public key, it's to do with how the server socket sometimes remains open for a short while. So if you close and restart EPS shortly after then it may crash with that error message. See this article for a longer explanation https://hea-www.harvard.edu/~fine/Tech/addrinuse.html
To fix/work around it, just wait a little bit longer between closing and restarting EPS.
Hmm I get this Error also when I reboot and start EPS for the first time so I assume it has nothing to do with closing/restarting EPS too quickly..it seems the address is constantly in use. Can you think of another conflict/solution? Otherwise thank you for trying to help!
Restarting probably won't do much because the kernel will probably save the socket state and keep listening after the reboot. The only way is to wait really (I'll add an item to my todo list to see if there's anything else that can be done)
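For reference, the usual way for a server to allow rebinding immediately is to set SO_REUSEADDR on the listening socket before bind(); a generic Python sketch of the idea, not EPS's actual code:
import socket

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow rebinding to the port even if the old socket is still in TIME_WAIT
server_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_sock.bind(("127.0.0.1", 50002))
server_sock.listen(1)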
I suppose you could change ports. Change the port in config.cfg and in Electrum until the original port is released.
Hi Chris, I found the mistake (it has nothing to do with EPS):
in the past, when I tried to set up EPS the first time, I already automated EPS startup on my raspberry. I didn't realise this after I worked again on this so this is the reason why the address was "already in use" (i'm new to linux, so i don't always know what I'm doing, just trying to figure this out
even if I hadn't automated the setup, it wouldn't have worked because I set up the electrum wallet wrong in the first place. I chose "native segwit" (which produces a zpub) as a script type and not p2wpkh-p2sh (ypub). The latter would have been the one I had to choose because my wallet set up with trezor relies on this ypub. So actually I added an empty master public key to EPS.
After I looked up the ypub on the trezor wallet website, I added this one into the config file and now it's working!
Thanks!
hi guys
I got the same error today on a raspibolt setup. version 0.2.4:
INFO:2023-08-10 07:31:27,343: Synchronizing mempool . . .
INFO:2023-08-10 07:35:22,078: Found 157076 mempool entries. Synchronized mempool in<PHONE_NUMBER>9579163sec
Traceback (most recent call last):
File "/home/bitcoin/.local/bin/electrum-personal-server", line 8, in <module>
sys.exit(main())
File "/home/bitcoin/.local/lib/python3.9/site-packages/electrumpersonalserver/server/common.py", line 494, in main
run_electrum_server(rpc, txmonitor, config)
File "/home/bitcoin/.local/lib/python3.9/site-packages/electrumpersonalserver/server/common.py", line 129, in run_electrum_server
server_sock = create_server_socket(hostport)
File "/home/bitcoin/.local/lib/python3.9/site-packages/electrumpersonalserver/server/common.py", line 82, in create_server_socket
server_sock.bind(hostport)
OSError: [Errno 98] Address already in use
ah, nevermind. it was a port conflict. I changed it to listen on 50003 instead and now it works
|
GITHUB_ARCHIVE
|
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
namespace KuzzleSdk.API.Options {
/// <summary>
/// Options for real-time subscriptions
/// </summary>
[JsonObject(MemberSerialization.OptIn)]
public class SubscribeOptions {
/// <summary>
/// Subscription scope (values: all, out, in)
/// </summary>
[JsonProperty(PropertyName = "scope")]
public string Scope = "all";
/// <summary>
/// Filters notifications about users activity (values: all, out, in)
/// </summary>
[JsonProperty(PropertyName = "users")]
public string Users = "all";
/// <summary>
/// Pass data to this room's other subscribers, once at the moment of
/// subscription, and once when leaving the room (whatever the reason).
/// </summary>
[JsonProperty(
PropertyName = "volatile",
NullValueHandling = NullValueHandling.Ignore)]
public JObject Volatile;
/// <summary>
/// If true, receive notifications emanating from this SDK instance actions.
/// </summary>
// not serialized on purpose: not a Kuzzle API option
public bool SubscribeToSelf = true;
/// <summary>
/// Default constructor.
/// </summary>
public SubscribeOptions() { }
/// <summary>
/// Copy constructor.
/// </summary>
public SubscribeOptions(SubscribeOptions src) {
Scope = string.Copy(src.Scope);
Users = string.Copy(src.Users);
if (src.Volatile != null) {
Volatile = (JObject)src.Volatile.DeepClone();
}
SubscribeToSelf = src.SubscribeToSelf;
}
}
}
|
STACK_EDU
|
Angular 1 Theme Using the WordPress REST API from Roy Sivan
In this project, resident author and Angular for WordPress Guru, Roy Sivan, goes through an Angular 1.X WordPress Theme he has been continuing to evolve.
An Overview of the Theme
The AngularJS theme for WordPress is a simple single page application theme powered by the WordPress REST API. It can be used both as a boilerplate to extend functionality, or as a snapshot of what Single Page Applications can look like if built as a default WordPress theme.
The technologies in this project rely primarily on WordPress. As of 4.7, with the WordPress REST API now in core, it is simple to get going with nothing more than the base WordPress install. The theme is primarily powered by AngularJS 1.x, as well as Angular UI-Router for added functionality in routing. I chose to use ui-router because it adds state-driven routing, which comes in handy for larger builds.
The Theme Out of the Box
There are 2 main versions of the theme; if you are just starting out, you will want to stick to version 6. As of writing, version 6 is the master branch; as soon as version 7 is ready, version 6 will be moved to the v6 branch and updates to it will stop.
Version 6 of the theme is Angular 1.x powered by the WordPress REST API, which can be found in WordPress Core (as of 4.7), previously the REST API v2 plugin. Out of the box you get a pretty simple list page and detail page. Both pages have extended functionality to add, edit or delete a post, as well as add a comment on a post.
If you plan on using v6, note that the theme no longer includes menu or sidebar support. There is a great plugin for adding menu support via the API, called WP API Menus, but it will require some work in header.php or the Angular templates.
Out of the box version 6 of the theme runs Angular 1.x. I am currently working on a new branch, v7, which may be done by the time you read this (early 2017). There are many benefits to moving to Angular 2, including it being the newer technology. Also, I am making the Angular 2 theme ready for 100% decoupled use as a front end framework outside of WordPress. With just a few small changes the theme will be able to run outside of WordPress while still getting the same data from your website. You will be able to host it either outside the WordPress ecosystem on the same server, or on a completely different server.
I will probably update this article, or create a new one more fully once the Angular 2 theme is ready for use.
Customizing The Theme
Since this really is a developer’s boilerplate, customizing isn’t basic. Customizations are meant to be adding in new controllers, routes, and overall functionality.
However if you are looking to make something simple to get a better understanding of how it works, there are some basic customizations that can be done, which still require code.
Styling can be changed pretty easily. The _more-style.scss file under assets/scss is a great place to start. Keep in mind that you will need to have the gulp watcher running to compile the CSS every time you make a change and save.
Template / Layout
This is going to be a big difference when it comes to Angular 1 and 2. Angular 1 is a little more user friendly with the HTML attributes, while Angular 2 is slightly more advanced, but I would say still easier than learning more advanced PHP.
This is another area where Angular 1 and 2 differ, more so than the templates. While it is easy to create 1 JS file which powers all the JS needed for this theme, it must be broken up in Angular 2 into more granular components. Angular 2 also doesn't currently need ui-router, as the routing system is pretty good out of the box, but time will tell if ui-router finds a need to upgrade it.
Project Ideas Based on This Theme
There are many things that this theme could transform into; right now it is a pretty bare-bones Angular + WordPress demo and boilerplate.
I am not much of a designer, so the overall theme is very basic. When looking at other JS framework powered themes which have full transitions going from view to view, this theme is pretty dull. I think there is a good amount of potential to create an awesome theme which is really a masterpiece of UI.
Building a Widget within Another Theme
Share Your Work!
Have an example project you have built based on this?
Let Zac know and we will feature it here!
|
OPCFW_CODE
|
Hellbound With You – Chapter 384: The Only Way
“Don’t worry. We will assist you to. That’s a offer,” Alex suddenly explained, astonishing Zeke. Truly the only time he ever been told this male produce a vow, in addition to the guarantees he made to Abigail, was as he vowed to defend his household which time within the medical center.
“Bring in Abigail indoors, Alex. I’ll call you once things are through,” Zeke informed Alex and Alex didn’t hesitate to consider his dearest within.
Observing Abi’s frown and fear, Alex changed gears and then he bent downward, kissing her cheek. “That was a laugh. Humans don’t have any idea about vampires and witch actually pre-existing in this world presently. Wars like this are currently just a thing of the past,” he coaxed her. But Abi was aware that wasn’t impossible, because mankind, even in this current day, nevertheless got competitions and the potential for combat busting out at some point had not been unattainable.
“They are going to, certainly, take a step about this. Or they can change anything they are preparing to counter-top it or break it. Today, I would favor that nothing at all this way will happen. Should they be aware that we now contain the uppr fingers, it should grow to be harder for us to sniff them out as long as they put their guard up,” Zeke described. “They ought to not see us standing on the similar spot and staying in good conditions using this type of witch. In short…” he checked out Abigail. “We should harmed her.”
“If vampires and witches joined up with causes, it will be very interesting. That’s a dangerous combination. Think about merging the witches’ visions and the vampires’ advantages,” Alex done Zeke’s thoughts. “It creates me very fascinated to know what sort of being could stand resistant to the vampire-witch alliance once that occurs. In those days, vampires, mankind, and witches originated at me separately and in addition they all been unsuccessful. I wonder if they could overcome me if they emerged together as one back then. Needless to say, with the addition of Zeke, they merely might be able to –”
“Right here is the only technique. We are going to turn this into seem like we remained below due to the fact we had been hectic torturing this witch the whole nights. This is basically the sole method you can get rid of this without worrying about enemy turning out to be distrustful.”
“Don’t be concerned, they’ll be great,” he a.s.sured her. When Abi just nodded. She presumed what he said but she however couldn’t support but stress.
Zeke could only crunch his brows however finding how a male comforted his psychological spouse, he seen that Alex had carried this out on account of her where there was not a thing he could do about this. There was nothing he could do as it got to Alex accomplishing stuff for his partner.
Seeing her nervous phrase, Alex leaned in in her in reference to his eye becoming a very little intensive. “Are you looking for me to distract you additional?”
Alex could see her staying apprehensive so made her sit on a couch and the man brought out the product of gas he have through the witch.
“It’s okay, Abigail. I realize that we will need to achieve this. Even if we don’t heal as fast as vampires, I can cast a spell on myself so I don’t have the pain,” she attempted to console Abigail, whom she found was battling with the latest system. Following she spoke to Abigail, she then viewed Alex and considered Zeke. “Just promise me that just after this… we will support the other,” she mentioned evenly for the two vampires.
“Alex!” Abi immediately scolded him for making him cease while Zeke sighed.
“Alicia…” Abi teared as she presented onto Alex. Was there really hardly any other way?
“No. Vampires and witches have been opponents for many people centuries. What they want is for vampires and witches to remain in this particular rank quo.”
Assurances were the rarest keyword phrases a vampire would ever say due to the fact that works well much like a curse in their mind, they couldn’t easily split them.
“Don’t give any one ideas on how to defeat you, idiot!” he murmured all at once so Alex didn’t pick up him.
Alex begun to put the gas in her neck area properly. His hands ended up very hot and delicate.
|
OPCFW_CODE
|
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Conditionals : MonoBehaviour {
void Start() {
// Conditionals (selection statements)
// https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/selection-statements
// Instruction 1
// IF conditional
bool entrada = false;
// The processor executes the statements inside
// the body of the IF only if the expression inside the
// parentheses is true
if(entrada == true) {
print("Pude entrar");
}
// Instruction 5
// Let Lorena into the bar only if she is 18 or older
int edadLorena = 15;
bool lorenaEsFeliz = false;
if(edadLorena >= 18) {
// Lorena is allowed in
lorenaEsFeliz = true;
}
// More statements can go here...
print(lorenaEsFeliz);
// IF-ELSE conditional
// https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/if-else
bool tengoTicketVIP = true;
if(tengoTicketVIP == true) {
print("El concierto es genial!");
} else {
print("Soy pobre... Nooo!!!");
}
// Instruction 6
// SWITCH conditional
// https://msdn.microsoft.com/en-us/library/06tc147t(v=vs.120).aspx
int distancia = 50;
switch(distancia) {
case 80:
print("El valor de la distancia es 80");
break;
case 70:
print("El valor de la distancia es 70");
break;
case 22:
print("El valor de la distancia es 22");
break;
default:
print("El valor de diferente a 80, 70 y 22");
break;
}
}
}
|
STACK_EDU
|
Introducing Autopilot — An open source project for adaptive service mesh
The many benefits of service mesh come from it taking full control of the communication network within a cluster. However, this increases your vulnerability to misconfiguration and human errors. There are fantastic tools to simplify service mesh configuration and improve resilience, but they all depend on continuous actions from a human operator. We believe that these tasks should be automated, and propose the notion of an adaptive mesh, a mesh that continuously senses changes within its environment and automatically adjusts to them. Today we are announcing Autopilot, an open-source project that turns your service mesh into an adaptive service mesh by building and deploying Service Mesh Operators.
The great advantages of a service mesh do not come without risk
A service mesh is an infrastructure layer that handles service-to-service communication. Service mesh abstracts the network to provide advanced capabilities, including encryption, authentication and authorization, routing, monitoring and tracing — and hides that complexity from your application.
Because the service mesh bears the full responsibility for routing all traffic within your cluster, “with great power comes great responsibility”. Since the mesh is the sole controller of the in-cluster network traffic, an incorrect or outdated configuration can lead to severe degradation of the application performance, compromise its security and make it vulnerable to external attacks, or — in the worst case — even bring the entire network down. What becomes critical is the flawless configuration of the service mesh, and this raises the crucial question:
How do you make sure your service mesh is resilient?
The community offers several approaches to enhance the resilience of the service mesh.
The Service Mesh Interface (SMI) simplifies your mesh configuration. The significance of correct configuration first appears during the initial installation of the service mesh. To establish a clear, convenient, and safe configuration process we announced, together with leaders in the service mesh community, the Service Mesh Interface (SMI), a specification that covers the most common service mesh capabilities. Being Kubernetes native and provider agnostic, the SMI defines a common standard for service meshes.
The Service Mesh Hub discovers and validates your mesh configuration. In May 2019, we introduced the Service Mesh Hub (previously known as SuperGloo launched in 2018), an open-source abstraction layer that implements the SMI and automates the installation and management of all service meshes in your cluster. The Hub installs any mesh and automatically discovers all existing meshes. Being aware of your entire cluster, Service Mesh Hub is tasked with continuously validating the mesh configuration and making it easy to safely change the configuration.
Using GitOps pattern to automate the configuration process to reduce the chance of error. As your cluster, application, and ecosystem evolve, you may need to change your service mesh configuration to match these changes. Or you may want to add third-party extensions to your service mesh over time. The Service Mesh Hub enforces a GitOps pattern for making these changes, essentially treating your mesh configuration the same way you treat your code, by maintaining a repository and forcing PRs. The Service Mesh Hub provides a single pane of glass for monitoring, installing and managing the service mesh and its extensions. The combination of a highly automated processes and the GitOps pattern is designed to minimize human errors that could put your service mesh at risk.
An adaptive service mesh automatically adjusts its configuration to the changes in the environment
A microservices environment is highly dynamic by nature, reflecting changes in the infrastructure, deployments of new business functions, and enhancements to privacy and security; other changes may be adversarial, including deliberate attempts to breach the system. The successful operation of a service mesh, therefore, requires continuous monitoring, rapid identification of changes that require intervention, and expeditious execution of an optimal response.
Many tools are available that allow a human operator to monitor and adjust service mesh configuration. However, the automation of these processes is required to ensure that a service mesh is consistently healthy and performant. A service mesh that automatically adjusts its configuration to changes in the environment — which we call an adaptive mesh — relieves the end user from the need to continuously monitor the service mesh, accelerates the response, and prevents end user errors.
For example, an adaptive service mesh will automatically identify security vulnerabilities and isolate the compromised services to protect the rest of the environment. During a canary deployment, the adaptive service mesh automatically controls the ratios between new and stable versions based on performance. Upon checking in code to git, the adaptive service mesh will automatically create a route to the new service.
Implementing an adaptive service mesh using service mesh operators.
Operators are a popular pattern for automating repeatable tasks in service management beyond what is provided by Kubernetes itself. The same pattern can be used to automate service mesh management and integrate service mesh capabilities with core Kubernetes features.
Building Kubernetes operators is facilitated by available SDKs, such as the Operator Framework by Red Hat and the Kubebuilder SDK from Kubernetes SIGs. These simplify and accelerate the development of Kubernetes Operators. However, their domain knowledge ends with vanilla Kubernetes, providing no out-of-the-box integration points with service meshes. The work of doing so falls upon the developer.
Introducing Autopilot: A Service Mesh Operators Framework
For this reason, we at Solo.io built the Service Mesh Autopilot, an opinionated SDK and toolkit for developing and deploying Service Mesh Operators. By treating service mesh as a first-class concept, Autopilot makes it easy to build Service Mesh Operators that automate and extend service mesh in the same way Kubernetes Operators automate and extend Kubernetes. Autopilot generates scaffolding, builds, and deploys Operators which run against a local or remote Kubernetes cluster with a service mesh installed.
Autopilot implements a control loop, composed of Watchers which provide the input, a state machine that provides the brain, and Workers that perform the required actions. Watchers are service mesh-specific sensors that can follow the service mesh metrics, CRDs, webhooks, etc. End users are asked to define the possible states and the transition rules between them, triggered by events that are generated by the watchers. End users also provide the workloads to be executed by the Workers. Workers that want to make changes to the configuration of the service mesh can either do it directly or follow a GitOps pattern and send them as PRs.
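To make the control-loop idea concrete, here is a minimal conceptual sketch in Python; Autopilot itself is a Go SDK, so all of the names below are illustrative only and not its real API. A watcher emits events, a state machine maps (state, event) pairs to a new state, and a worker runs the action associated with the transition.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Event:
    name: str      # e.g. "high_error_rate", reported by a metrics watcher
    payload: dict

Action = Callable[[Event], None]
# Transition table: (current_state, event_name) -> (next_state, action)
transitions: Dict[Tuple[str, str], Tuple[str, Action]] = {}

def on(state: str, event: str, next_state: str, action: Action) -> None:
    transitions[(state, event)] = (next_state, action)

def shift_traffic_away(event: Event) -> None:
    # Worker: a real operator would adjust mesh config here,
    # either directly or by opening a pull request (GitOps).
    print(f"shifting traffic away from {event.payload['service']}")

on("steady", "high_error_rate", "degraded", shift_traffic_away)

def control_loop(watch: Callable[[], Event], state: str = "steady") -> None:
    while True:
        event = watch()                  # Watcher: metrics, CRDs, webhooks, ...
        key = (state, event.name)
        if key not in transitions:
            continue                     # event not relevant in the current state
        state, action = transitions[key]
        action(event)                    # Worker performs the adjustment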
At Solo.io we originally developed Autopilot to streamline our own development process. Because Autopilot is self-generating, we were able to accelerate the development of Service Mesh extensions from months to days.
We invite the community to try Autopilot, join us in identifying more scenarios, more operators, and more features.
A technical introduction to Autopilot, along with demos of several use cases, can be found here. We encourage you to check out our GitHub repo, the docs, and join our slack channel. Happy KubeCon from Solo.io!
|
OPCFW_CODE
|
Functional testing requires the analysis of the outputs generated by the system (or its components) in response to inputs (test cases) defined on the basis of knowledge of the requirements of the system (or its components). It is often performed in black box mode, i.e. without access in any way to the internal structure of the software.
Functional testing is also called black box testing: we are interested in the outer contour of the system and have no information about what it looks like inside. Structural testing, in contrast, defines test cases based on knowledge of the software structure, and in particular the code, together with the associated inputs and oracle.
Structural testing is necessarily realized by accessing the source code, i.e. in white box mode.
In white box testing we can access the source code, so the most detailed testing possible is achievable. Then there is grey box testing, which indicates that some internals are known and some are not. Functional and black box testing are not really the same thing, because there is black box testing of things that are not functionality; for example, performance testing is also black box testing.
When I talk about functional testing I typically test the functionality that the application promised to do.
Black Box Testing
The common point of all “black box” techniques is the fact that the software is accessed only through its interface, without direct access to the code of the component to be tested (possibly without access to the code at all).
There is not just one black box technique; among the main ones are:
- Testing based on requirements;
- Testing based on use case scenarios;
- Testing with equivalence classes (minimum coverage of equivalence classes, coverage of adjacent equivalence classes, coverage of n-ple of equivalence classes, combinatorial coverage of equivalence classes);
- Testing with equivalence classes and limit values;
- Testing from decision tables;
Testing based on requirements
The principle of verifiability of requirements states that requirements should be testable, i.e. written in such a way that tests can be designed to demonstrate that the requirement has been met.
Requirement-based testing is a validation technique where various tests are designed for each requirement.
Note: the V model says that when the requirements are written, the requirements tests are designed as well: I state what the system should do and, at the same time, which tests will demonstrate that it really does it.
Use Case Testing
- Given the Use Case Diagram and the description of all use case scenarios, one or more test cases are designed for each scenario;
- The designed test cases are performed manually or automatically;
- The testing strategy aims at covering use cases and scenarios.
If the requirements are made as use cases and scenarios, the scenario by its very nature lends itself to verification, because the ideal scenario is precisely a sequence of steps to be followed to obtain the result.
In this case it becomes essential that when we design scenarios we do so in detail, with all the exception scenarios.
If we write scenarios with all the exceptions we have half a test already written, just follow the instructions of the scenarios.
Testing of Partitions (or Classes of Equivalence)
Input and output data can generally be divided into classes where all members of the same class are somehow related. Each of the classes constitutes an equivalence class (a partition) and the program will (likely) behave in the same way for each class member. Test cases should be chosen from within each partition.
Example: if we wanted to test the sum of two 32-bit integers exhaustively, we would need 2³² values for each addend, which gives 2³² × 2³² = 2⁶⁴ tests; that is far too many. Note also that the oracle must take overflow into account.
Let's start thinning out the tests: if the program can do 4+3 then I trust it can do 4+5, so I do not try all the values.
So I take some values that I think are completely equivalent from the point of view of function and put them in a class of equivalence.
In the case of the sum I have only one class; if we are in doubt about positive and negative numbers, we would create the class of positive numbers and the class of negative numbers, first for the first addend and then for the second. Now our tests can be built by covering the equivalence classes in every possible way: I take a number from the first equivalence class and a number from the second, and then the various combinations.
I do 4 tests because the first input has 2 equivalence classes and the second input has 2 equivalence classes; taking the Cartesian product of the combinations gives 4.
Partitions are identified using program specifications or other documentation. A possible subdivision is one where the equivalence class represents a set of valid or invalid states for a condition on input variables.
Search for equivalency classes
- For each input you get: A valid equivalence class, corresponding to the set of values considered valid for that input; A set of invalid equivalence classes, one for each invalid condition. Each of these conditions corresponds to a set of values (invalid equivalence class);
More detailed technique:
- Even for the valid classes more than one equivalence class is distinguished, depending on the different scenarios that can be exercised
If the input is a:
- range of values: a valid class for values within the range, an invalid class for values below the minimum, and an invalid class for values above the maximum;
- specific value: one class valid for the specified value, one class invalid for lower values, and one invalid for higher values;
- element of a discrete set: one valid class corresponding to the set (classical technique) or one valid class for each element of the set (detailed technique), one invalid for an element not belonging to the set;
- Boolean value: As in the previous case, but for a two-value discrete set (true, false);
In all cases, it is good to consider also an additional class of invalid equivalence, corresponding to the input not belonging to the expected type;
Minimum coverage of equivalence classes. Each equivalence class is covered by at least one test case. The minimum number of test cases equals the largest number of equivalence classes defined for any single input.
Coverage of adjacent equivalence classes. Each equivalence class is covered by at least one test case, and for each test case there is at least one other test case that differs from it by only one equivalence class. The number of test cases is in the order of the total number of equivalence classes.
Coverage of n-tuples of equivalence class values (k-way). All k-tuples of equivalence classes from different inputs are exercised at least once.
Coverage of all combinations of equivalence classes. Every combination of equivalence classes is covered. The number of test cases equals the product of the number of equivalence classes of each input. Equivalent to the previous case with k = number of inputs.
Equivalency classes example
In a form you must enter your date of birth, composed of day (numerical), month (string that may be January … December), year (numerical, between 1583 and 2100). The software must correctly recognize between valid dates (corresponding to days that actually existed) and invalid dates and provide the corresponding day of the week for valid dates.
Select test cases by partitioning into equivalence classes.
The conditions on the ‘day’ input:
The input condition is that the day must be between 1 and 31, so the equivalence classes are:
- Valid: EC1 : 1 ≤ DAY ≤31;
- Not valid: EC2 : DAY < 1, EC3 : DAY > 31, EC4 : DAY is not a whole number.
If our software takes a stream as input then an invalid class may be the user writing the number in words. But, for example, in the unit test of a function the input will necessarily be an integer. Note that the equivalence classes depend on the programming language, because we are solving a practical testing problem.
The conditions on the ‘month’ input:
The input condition is that the month must be in the set M = (January, February, March, April, May, June, July, August, September, October, November, December), so the equivalence classes are:
- Valid: EC5: MONTH ∈ M
EC51: MONTH = January, EC52: MONTH = February, EC53: MONTH = March, …. (Total 12 classes of equivalence);
- Not valid: EC6: MONTH ∉ M;
The conditions on the ‘year’ input:
The input condition is that the year must be between 1583 and 2100. Equivalence classes:
- Valid: EC7: 1583 ≤ YEAR ≤ 2100;
- Not valid EC8: YEAR< 1583, EC9: YEAR > 2100, EC10: YEAR is not a whole number.
Selection of test cases from equivalence classes
Minimum testing with coverage of equivalence classes. In this way we generate the minimum number of test cases able to cover each equivalence class at least once, thus maximizing efficiency;
Testing with coverage of adjacent equivalence classes. We generate test cases that differ by the minimum number of equivalence classes covered (ideally one class). Good compromise between effectiveness and efficiency with positive consequences also for debugging activities;
Combinatorial testing in which we generate all possible combinations of the defined classes in order to maximize effectiveness.
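To give a feel for how differently these criteria scale, here is a small sketch in Python (the examples for this article use JUnit; this is only a language-neutral illustration with simplified class labels based on the date example) that builds one test case per class for minimum coverage and the full Cartesian product for combinatorial coverage:
from itertools import product, zip_longest

# Simplified equivalence-class labels for the date example (illustrative only)
day_classes = ["1<=day<=31", "day<1", "day>31"]
month_classes = ["month in M", "month not in M"]
year_classes = ["1583<=year<=2100", "year<1583", "year>2100"]
inputs = [day_classes, month_classes, year_classes]

# Minimum coverage: as many test cases as the largest number of classes of any
# single input; every class appears in at least one test case.
minimum = [tuple(c if c is not None else classes[0]
                 for c, classes in zip(case, inputs))
           for case in zip_longest(*inputs, fillvalue=None)]
print(len(minimum))        # 3 test cases

# Combinatorial coverage: Cartesian product of all equivalence classes.
combinatorial = list(product(*inputs))
print(len(combinatorial))  # 3 * 2 * 3 = 18 test cases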
Minimum testing with coverage of equivalence classes
An efficient Test Suite could be the following:
All equivalence classes are covered, but some errors are hard to detect. For example in TC2 the system might respond with an exception because the day is less than 1, without ever evaluating the month and year.
Code example in Java 8 with JUnit 5:
For the TC4 test, the compiler's static analysis already tells me that I put a string where a number is required, so some tests are simply not needed.
Testing with coverage of adjacent equivalence classes
Let’s evaluate how many test cases find problems.
All valid (classical technique) and invalid equivalence classes are covered. If a test is successful (i.e. it reveals a failure), we can immediately identify the invalid equivalence class that is not handled correctly. We do not test dates in February, for example.
In this case it is simple, but in complicated cases it may take an algorithm. We take the first test case and then vary one input at a time, in every possible way: we vary the year by taking a value from each of its equivalence classes, then we take another input and vary it. Row by row the tests are adjacent, so if a test in a certain row fails, I know which input made it fail. Note that the set of adjacent tests is not unique: there are different solutions and they are not all equally able to find the problems.
Testing with minimum coverage of equivalence classes (valid classes listed exhaustively)
All valid equivalence classes (detailed technique) are covered. If a test is successful, we may immediately identify the invalid equivalence class that is not handled correctly. However, we do not test dates such as 30 February.
Realization with JUnit 5
The code for these tests can be found in this repository https://github.com/lonardogio/black_box_testing on github. We will perform in JUnit the tests corresponding to these three
possible test suites:
You may notice that some tests are not feasible (the JUnit code would not compile), e.g. those with text input where numbers are required (e.g. “two thousand”).
|
OPCFW_CODE
|
I’ve decided. We need to start doing points poker here at Baseblack if we’re going to carry on this Agile DevOps thing.
I’ve got to admit, the first time I came across the Agile methodology was quite late in my career. In the past, prioritisation of “operations” projects was reasonably first come first serve, or by order of priority (frequently, business need, and seldom operational requirement).
For software development teams, Agile is a pretty good, native fit. The concepts embodied by stories and sprints fit a development team very cleanly. When it comes to systems administration and engineering, or what I’ve come to refer to as DevOps, Agile can be a bit more awkward initially.
Operations teams across the globe will tell you that their tasks are intrinsically more “sprawly”, and that interconnections between tasks are frequently more complex.
The truth of the matter is, that frequently there is no simple and sensible way to break up a task into entirely unconnected subtasks. Something which can bugger up Agile, if you’re too hard and fast with the requirements and rules by which you play the game.
Pretty early on in this new job, I started looking at the previous DevOps Engineer's puppet manifests. They were mostly ok, but with some absolute crazy meatballs thrown in for good measure.
It’s actually a common fault of Sysadmins to want to throw out the previous team’s work and start afresh, but in this case it actually was easier to start fresh than repair the foibles and cockups of the old code.
Given that I’d already spent 3-4 days reading and trying to interpret the state of the system, and it was blatently apparent that there were too many bits of “wouldn’t it be cool if we hacked this in to make it do X”, and not enough actual hard and fast config to make things work.
I’ll put that one down to my predecessor not being very puppet-savvy.
One of the big reasons this sprint overran was that the discovery process (first 3-4 days) was mostly involved with exploring the state of the systems, and what we wanted to accomplish. In the old manifests, there were huge chunks of code that installed numerous applications, which would be easier to manage and integrate if modularised.
A good proportion of time in the implementation phase went into creating lots of individual modules for various applications and packages.
As I was saying earlier about interconnected tasks, this wasn’t just a Fix Puppet sprint.
The background to fixing puppet was to enable the faster building of new machines from unboxing to users logging in.
There were some massively weird problems with the internal DNS, using Bind9, and the old DHCP server was prone to some peculiar lease issues, and it was running on a physical VM host, when it probably ought to have been a VM guest. Fixing DNS would best be done whilst fixing DHCP. Fixing DNS meant installing PowerDNS, which in turn means installing Postgresql. Setting up DNS Slaves means installing PowerDNS on multiple servers and configuring Postgres replication.
There’s no way that I’m building out multiple copies of anything without Puppet, so there’s the first bit of recursive loop.
The way to untangle this is to realise that puppet doesn’t need a puppetmaster to run manifests. All you need to do is write the puppet configs and then use the puppet agent itself to run the manifests from files. You can then use that to bootstrap a puppetmaster, or a DNS server, or just get a sense of how it will all fit together when you do the final server buildout.
I’m going to leave this here. I think the general conclusions to draw are the following.
- Agile is great. It doesn't fit all teams, but it's worth trying. If it doesn't fit, no worries. If it does, cool.
- Planning is the biggest stage of any project, or at least, should be.
- Infrastructure projects shouldn’t be forced into the traditional Agile Sprint, because they tend to become a lot more sprawly on investigation of the actual problem than they look at first glance.
I’m about to post the articles on Postgres replication, and the technical portion of this article.
|
OPCFW_CODE
|
building a robot that identifies objects and slices them
The su_chef robotics project is coming along, and I've finally gotten to the point where I can actually slice vegetables. It's still a little finicky, but the arm can identify vegetables, pick them up, drop them into a slicer, and then slice them. Check it out in action.
This feels like such a huge milestone, and I'm kind of shocked that I got it working. I know I'm still far away from a robust functional prototype, but now, for the first time, I'm starting to actually believe that this idea might really happen.
If you're interested in setting up a similar system, you can check out the instructions and code here. In the rest of this post, I'm going over the build details for the slicer control.
Building the Slicer
Aside from some fine-tuning of my previous arm controller and object detection algorithm, the only new component that I added to the su_chef was the slicing apparatus. To build it, I combined a kitchen meat slicer, a motor and carriage from a broken printer, and an Arduino-based relay.
The printer carriage and motor move the slicer tray back and forth. They're attached to the meat slicer with a little wood brace that I built to fit around the base of the slicer.
The motor just turns a band in the chassis, which moves the carriage back and forth.
To control the slicer tray, I connected the motor to a relay that was controlled by an Arduino. The relay circuit looks like the following.
The pair of relays hooked up in this manner is required so that current can flow either direction depending on which way the relays are switched. This lets the DC motor run backwards and forwards so the tray can move both ways.
To keep the motor from ramming the sides, I also added two small "fail-safe" switches in the track. These will send a signal if the motor runs into them, informing the controller that the motor shouldn't push any further in that direction.
Controlling the slicer tray motion
To trigger the motion of the tray, I have to control the relays. To do this, I use a bit of Arduino code that can be found here. The code flips the relays on for a preset time, but also checks if the fail-safe switches are triggered and closes the relays if that happens.
Finally, I also added some ROS specific code to the Arduino that listens over the serial connection to trigger the slice. In my python control script, I send the
"go" message whenever I want the slicer to go back and forth one time.
Thus far, the whole thing is very fragile, but my next step to get to v0.3.0 is to update the pickup part to make it better able to lift things in more orientations. Also, I want to put in some checks that make sure the prior task completes before the next begins, and abort if something is going wrong. Eventually, I know I will need to upgrade to a more precise arm, but I'm going to push as far as I can with the current one first.
In the meantime, I am starting to work on organizing a worker-owned company around this project, which I'm calling WORC Foods for Worker-Owned Robotics Cooperative Foods. You can read more about it on our website. If you are at all interested in cofounding such a venture, please reach out!
|
OPCFW_CODE
|
How to audit fat clients?
06-08-2018 08:39 AM
Hi. What if business processes require admins to connect to target systems with fat clients and web apps? What are the options to record their actions? So far the only idea I have in mind is a dedicated gateway server where all those fat clients are installed and from which admins will have to work.
Admin Desktop (possibly accessing from the internet) -> CPS -> Gateway Server with all fat clients -> Target Systems.
Are there some more elegant ways?
06-11-2018 12:43 AM
May we have more information on what OS we are working with?
For Windows, we do have screen auditing for each user that login to the machine.
Also are you looking for auditing activity from the targeted server?
Please keep us posted. Thank you!
06-11-2018 09:23 AM
There are two main use cases. The first is when admins on the local network use fat clients to manage infrastructure (e.g. SCCM, SCOM, Citrix). In this case we would have to install agents on their working desktops. The second case is a bit harder: remote users over VPN who use fat clients from their remote hosts to manage the target infrastructure directly. This is where the main issue lies: we cannot install agents on the remote hosts. So the only option I see is to use some kind of RDP gateway where an agent will record.
06-15-2018 01:03 AM
Unfortunately, I don't believe we can have auditing enabled on an RDP gateway like you mentioned, as we do require an agent to be installed on the remote host for auditing to work. Therefore, I believe this would be an RFE for our Dev and PM team to work on.
Hope it helps!
06-15-2018 05:58 AM
Let me elaborate to complement what @IChan means. Let's focus on your basic requirement:
Hi. What if business processes require admins to connect to target systems with fat clients and web apps?
What are the options to record their actions?
So far the only idea I have in mind is a dedicated gateway server where all those fat clients
are installed and from which admins will have to work. Kind of: Admin Desktop (possibly accessing from the internet) -> CPS -> Gateway Server with all fat clients
-> Target Systems. Are there some more elegant ways?
When auditing Desktop apps (including the Browser), you have several options with Centrify:
- Gateway-based auditing: Exactly what you described above. It's the same paradigm as password vaults and their ability to provide "jump-box" access. With Centrify, the user visits the CPS portal, accesses a "Secure Workstation or Server" that has all the tooling, and with our service you can get proctored access (e.g. Watch and Terminate) and Recording (via DirectAudit).
This is a perfectly-valid design choice; however, we always have to look at things from a security perspective. The drawback of this design is that it will work as long as:
a) End users are 'disciplined' about accessing the Gateway server (training, cognitive). The issue here is that end users won't always 'do the right thing' because of factors like: accessibility, performance, and preference. The gateway can always be bypassed, the user (if privileged) can install the fat client in their own systems too.
b) End-users may hate the vault product. This is exactly what we hear about some of our competitors (We respect you, but we're coming after you, so keep buying other companies; it's not a question of "if" but "when" we'll take them over technology wise, so keep spreading FUD - we have you where we wanted you ).
c) The "gateway" approach may not even be available - like in a disaster recovery scenario. You must bypass the gateway if there's an infrastructure event that impairs access to the gateway.
The biggest positives of this approach are that it's relatively easy to deploy and, depending on licensing, it's the most cost-effective way to get this done.
We have work to do! We recognize that today this may not be very optimal, and we have capabilities in the pipeline that will make us shine above our competitors, because not only will we be mastering 'fat clients', but web and mobile apps too. Stay tuned this summer.
- Client-based auditing: In this scenario you use DirectAudit in all systems in scope. While this approach is the most robust one (especially combining it with the approach above), it can be expensive in two fronts (licensing and back-end infrastructure); this is because you have to have DA licenses for all the audited systems and a very robust back-end infrastructure to support the storage, distributed infrastructure and database for the audited sessions.
So, let's recap:
a) For your remote users, your design scenario is correct. I'd add Identity Assurance controls like MFA or conditional access rules to provide additional security controls. This experience will be improved this summer/early fall in CPS.
b) For your local users, note that you can use "selective auditing" which is the combination of a non-audited role (with the login right) and an audited role that contains the desktop or Apps that they want to run with privilege.
This will allow screen auditing only to be turned on when the user is performing tasks with privilege.
I will keep track of this thread and when we are ready to announce some news, I'll update it. Note that we have releases every month.
06-15-2018 07:14 AM
If we have Centrify agent installed on target systems, could we use Direct Authorize Zones feature to limit number of hosts from which one can connect to them? not only RDP, but with any "fat-tool"
06-26-2018 07:49 AM
Let's refine your statement:
If you have the Centrify agent installed on target systems, you can use the Direct Authorize Zones feature to limit who can access a system (regardless of their power - you can even stop a Domain Admin).
Notice that the "number of hosts" is more like a network concept. Our zone controls are identity based (more about the who, NOT from "where" or from "which client";
Those controls, although typically recommended by traditional vault vendors, ultimately add complexity and aren't very practical in some scenarios like availability assurance. For example: how can you break/fix a system if it only allows connectivity from certain hosts? What if those hosts aren't available as part of the outage?
Another issue with applications is that each app may have a different connectivity method or protocol. We are tied to the Windows, UNIX or Linux methods. In UNIX/Linux, if the app is PAM-enabled, you can control it with DZ, but that's not the case on Windows.
In the next few months you will be pleasantly surprised on some of the changes we have cooking.
|
OPCFW_CODE
|
Hash functions are mathematical functions commonly used in computer systems for tasks such as checking the integrity of messages and authenticating information. Cryptographic hash functions add security properties, making it infeasible to recover the content of a message from its hash alone.
Bitcoin uses SHA-256 and RIPEMD-160, while Ethereum uses the Keccak-256 hash function. They are mainly used to derive addresses from public keys and to hash blocks.
For proof of work, a cryptographic hash function is applied to a block (group) of transactions. Bitcoin miners then use computing devices to search for a hash of a particular form (for example, one starting with a run of zeros), in what is called a partial inversion of the hash. The first miner to find one is chosen to validate the transactions and receives a reward for the effort. No matter how short or long the input is, whether it's a single word ("hello") or an entire novel (Bleak House by Charles Dickens), the SHA-256 hash is always 64 hexadecimal characters.
SHA-256, the algorithm used by Bitcoin, generates 256-bit hashes (a 256-digit string made up of ones and zeros). When you create a user account on any web service that requires a password, the password is run through a hash function and only the hashed digest is stored. I remember going through SHA-256 in my cybersecurity course on message encryption and cryptography, and now I've encountered the same SHA-256 and hash functions in my blockchain development course. Now I'm starting to imagine applications of blockchain in computer systems.
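A minimal illustration with Python's standard hashlib (not Bitcoin's actual code) shows both properties: the digest length is always the same, and a tiny change to the input produces a completely different digest.

import hashlib

for message in ("fun", "Fun", "an entire novel could go here..."):
    digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
    print(message, "->", digest)      # always 64 hexadecimal characters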
Hashing is an integral part of all blockchain-based transactions, including cryptocurrency trading; the cryptographic hash function serves as the basis for creating a blockchain and for taking advantage of those solutions. Because passwords are stored as hashes, if a hacker gains access to the database containing them, they will not be able to immediately compromise all user accounts: there is no easy way to find the password that produced a given hash. Anyone interested in Bitcoin has probably heard the phrase "cryptographic hash function".
If you use SHA-256 to generate a hash from "fun", you'll always get the same result. Hash functions weren't designed for cryptocurrencies, but they are widely used in major cryptocurrencies, mainly because of the properties I mentioned earlier. In the code example above, we have already seen that changing a small part of the input of a hash function results in a completely different output. Hash functions also allow us to encode any file, be it a text document, an image or even a video file, and get a result of the same length.
In Bitcoin mining, the function's inputs are all of the most recent transactions that haven't yet been confirmed (along with some additional inputs such as the timestamp and a reference to the previous block). Having covered the introduction to hash functions, let's now see how they are used in the main cryptocurrencies. In the Bitcoin protocol, hash functions are part of the block hashing algorithm used to write new transactions to the blockchain through the mining process.
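As a toy sketch of the idea (deliberately simplified, not the real Bitcoin algorithm), mining amounts to searching for a nonce whose hash meets a target, for example a hash that starts with a given number of zero digits.

import hashlib

def mine(block_data, difficulty=4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest      # first nonce whose hash meets the toy target
        nonce += 1

print(mine("recent unconfirmed transactions + timestamp + previous block hash"))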
|
OPCFW_CODE
|
Over the past few years, there has been a variety of different blockchain ecosystems offering decentralized settlement layers for various applications. These blockchains lie on different points of the tradeoff curve of decentralization, security, throughput, and cost. Decentralized applications have varied requirements and developers can choose which platform is best suited for their needs.
As the number of blockchains has grown, so has the fragmentation of the space. Generally, each chain runs in isolation, such that state in one chain is not easily accessible in another. As a result, blockchain ecosystems have become somewhat siloed, where assets such as tokens and NFTs, liquidity, and non-financialized positions such as governance votes or gameplay history are locked in the originating ecosystem.
Interoperability is critical
One of the main promises of crypto and web3 compared to the previous generation of internet applications is permissionless interoperability. In a future where users want to use applications across ecosystems, it is important that these blockchains can communicate in a secure, decentralized, and permissionless manner. In this world, the interoperability layer is critical infrastructure. Reliable cross-chain communication is crucial for a seamless user experience.
The world today
Today, there is a lot of user demand for interoperability, mostly for token bridging. To meet these demands, the current solutions for bridging go through centralized entities. On one end of the spectrum, there are centralized exchanges (with off-ramps to different chains) functioning as asset bridges for a large number of users. Another popular bridge design is a multisig bridge, where a centralized entity controlled by a multisig of validators is responsible for watching one chain for deposits and signing off on withdrawals on another chain.
Not only does centralization ideologically contradict the values of crypto, it has very real, practical consequences. Centralized designs are not censorship resistant and have very large trust assumptions placed on a small set of trusted parties. Furthermore, these centralized bridges empirically have been much less secure than the underlying blockchains that they are bridging between. In the past six months, several bridge hacks have resulted in over a billion dollars of lost user funds. These hacks not only negatively impact users, but they also weaken the credibility of the entire blockchain space and lead to downstream consequences like regulation.
The current state of cross-chain interoperability is less than ideal. How can we do better?
Given two chains that do not have shared security (i.e. their validator sets and consensus mechanisms are potentially different), verifying the state of a source chain on a target chain is equivalent to validating the consensus of the source chain in the execution environment of the target chain.
This is the exact principle that light client nodes use to keep track of the state of a blockchain in a compute and storage-efficient manner.
The best design for blockchain interoperability, assuming two chains with different validator sets, is to have an on-chain light client for a source chain running on the target chain (which is what IBC does). With this design, there are no additional trust assumptions placed on cross-chain communication, aside from trusting the economic security of the consensus of each participating chain. (A more detailed discussion can be found in footnote1).
Once an on-chain light client can keep track of block headers of another chain, anyone can supply state proofs to prove any information (balances, storage, transactions, events) about the source chain in the context of the target chain. With this, building a cross-chain application, such as a token bridge becomes simple. We provide some pseudocode architecture below of how such a token bridge would work.
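As a rough sketch (the names and helpers below are illustrative, not a real API), the two halves of such a bridge might look like this:

# Source chain: lock tokens and emit an event that the target chain can later prove.
def deposit_on_source(token, amount, recipient_on_target):
    lock(token, amount)
    emit_event("Deposit", token, amount, recipient_on_target)

# Target chain: the on-chain light client has already verified the source header,
# so a state proof against its state root is enough to mint the wrapped tokens.
def claim_on_target(block_header, state_proof, deposit_event):
    assert light_client.is_verified(block_header)
    assert verify_state_proof(block_header.state_root, state_proof, deposit_event)
    mint(deposit_event.token, deposit_event.amount, deposit_event.recipient_on_target)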
Historically, this approach has been difficult because on-chain computation is quite expensive. From a compute perspective, it’s not feasible to run these on-chain light clients--especially because different consensus protocols use different signature schemes that may not be supported across all execution environments. However, recent advances in zero-knowledge proof systems, which allow for succinctly verifiable computation, make this approach feasible today.
In blockchains, on-chain compute is much more expensive than off-chain compute. With zkSNARKs, the user (broadly speaking) can generate a proof of an expensive computation in a compute-cheap environment (off-chain) and then cheaply verify (using the succinctness property) the result of this computation in an on-chain environment. Similar to how zkSNARKs are powering zk rollup teams to scale execution, verifiable compute can also scale verification of consensus.
We believe that a significant step towards the endgame of interoperability is succinct on-chain verification of consensus.
In the short-term, verification of consensus has far fewer trust assumptions than current centralized approaches and is much more secure. In the longer term, once it is possible to provide a succinct validity proof of execution (using zk(E)VMs), we can combine the two validity proofs to have an on-chain light client that verifies both state transitions and consensus, for maximal security--similar to running an honest full-node on-chain.
Proof of Consensus
We coin the term "proof of consensus" to succinctly 😉 encapsulate the idea described above: use zero-knowledge proofs to generate a validity proof of the state of a chain according to its consensus protocol. This validity proof can be used to power a gas-efficient light client, which facilitates trust-minimized interoperability. We note that we don't actually use the zero-knowledge property of "zero-knowledge proofs", we are using the succinctness property for scaling.
⚠️ This section dives into more technical details, feel free to skip ahead for information about Succinct Labs.
For a given consensus protocol, how does one generate a succinctly verifiable validity proof for the state of the chain? Here, proof of work (PoW) vs. proof of stake (PoS) and the actual details of the consensus algorithm matter quite a bit. We describe at a high-level an approach that works for most PoS consensus algorithms.
A quick review of PoS
Generally in PoS, there are a set of validators who stake capital and provide signatures that serve as attestations for blocks they believe should be part of the canonical chain. Different PoS systems have varied finalization mechanisms (a block is considered "finalized" if it cannot be changed without a significant amount of the staked capital being slashed), but in most PoS systems, if >2/3 of the validators have signed off on a block then the block is considered "finalized".
A quick review of zkSNARKs
Before we go into more detail, if you are unfamiliar with the concept of zkSNARKs, we recommend reading Vitalik's blogpost that provides a great overview. We quote the most relevant part below as a reminder:
A zkSNARK allows you to generate a proof that some computation has some particular output, in such a way that the proof can be verified extremely quickly even if the underlying computation takes a very long time to run.
Combining the two
Verification of consensus is a computation that requires verifying for a particular block that there exist valid signatures from >2/3 of the set of known validators. If we are able to verify signatures within a zkSNARK and also verify these signatures came from a list of public keys corresponding to a set of validators, then we have a validity proof that a block is finalized. Furthermore, because of the succinctness property of zkSNARKs, this validity proof can be verified extremely efficiently on-chain.
The function signature of a SNARK proof looks like the following pseudocode:
def verify_block(block, signatures, public_keys, validator_set_commitment):
    # every supplied public key must be part of the committed validator set
    assert verify_validator_inclusion(public_keys, validator_set_commitment)
    # every signature must be a valid signature over the block by its public key
    for pub_key, signature in zip(public_keys, signatures):
        verify_signature(pub_key, signature, block)
    # >2/3 of the validator set must have signed for the block to be finalized
    assert 3 * len(public_keys) > 2 * VALIDATOR_SET_SIZE
Generally for any consensus protocol for which we want to generate validity proofs, we have to implement the following core primitives:
- verification of the signature scheme used by the validators
- inclusion proof of validator public keys in validator set commitment (which is stored on-chain)
Both of the above look deceptively simple but are actually quite difficult! To implement various signature schemes inside a zkSNARK requires implementing out of field arithmetic and complex elliptic curve operations. We will explain more details in an upcoming blog post.
Another subtlety is that we have to keep track of the set of validators, which changes with varied frequency according to the particular PoS protocol. In most PoS protocols, the current validator set signs off on the updated validator set, which we must keep track of on-chain as it is an input into the zkSNARK proof (the validator set commitment).
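A rough sketch of how an on-chain light client might track this (the names and helpers are illustrative only):

class LightClient:
    def __init__(self, genesis_validator_set_commitment):
        self.validator_set_commitment = genesis_validator_set_commitment
        self.latest_verified_header = None

    def update(self, header, proof, new_validator_set_commitment):
        # The zkSNARK proof attests that >2/3 of the committed validator set signed
        # this header (and, on rotation, the new validator set commitment).
        assert snark_verify(proof, [header, self.validator_set_commitment,
                                    new_validator_set_commitment])
        self.latest_verified_header = header
        self.validator_set_commitment = new_validator_set_commitment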
Introducing Succinct Labs
At Succinct Labs, we are dedicated to building the foundation of a decentralized, permissionless, and secure interoperability layer for blockchains. We believe that the latest breakthroughs in zero-knowledge proofs make it feasible to generate succinct validity proofs of consensus that can power on-chain light clients for trust-minimized interoperability. The terminal design for cross-chain communication will be proof-based, and we want such a design to become the canonical method for interoperability.
What We've Built & Roadmap
Our first partnership is with Gnosis Chain, generously funded by Gnosis DAO and 0xPARC. To build a trust-minimized bridge between Gnosis Chain and Ethereum, we built a succinct light client for Ethereum 2.0 proof of stake consensus (Gnosis chain also uses Ethereum 2.0 PoS). This involved building out zkSNARKs for BLS signature verification and verifying the validator hash to validate the Ethereum 2.0 PoS light client protocol. More technical details will be forthcoming in an upcoming blog post alongside a demo token bridge between Goerli (Ethereum test net) and Gnosis Chain.
In the future, we want to build succinct light clients for all consensus protocols to allow for trust-minimized cross-chain interoperability across all blockchains. Please reach out if your ecosystem is interested in these ideas!
We are also very interested in empowering builders using this primitive to build the next generation of interesting decentralized applications that live across chains.
Work With Us
We are very excited by the ideas in this blog post and are looking for collaborators to work with. The zkSNARK technology we are working with is cutting edge and the trust-minimized cross-chain communication protocol we are building will be quite impactful for the whole blockchain ecosystem.
Reach out if you're interested in joining the team: we are looking for strong developers with experience in Solidity and general smart contract engineering, blockchain infrastructure and writing zkSNARK/STARK circuits.
If your blockchain ecosystem or application is interested in using our underlying primitives, please reach out as well. We're looking for a small group of early partners and would love to chat.
Finally, we would like to thank Gnosis DAO for an extremely generous grant to fund the R&D behind this work and much of the originating ideas in this blog post and 0xPARC for grant funding and supporting us from the beginning.
- If >2/3 of source-chain validators collude to double-sign a header to deceive the light client, then as long as the signatures are data-available, these validators are subject to any source-chain slashing conditions. To deceive the light client without getting slashed, >2/3 of the source-chain validators must collude to go offline during their normal duties (incurring applicable penalties) and also sign an invalid header. IBC-based interoperability also has this exact weakness. In the future, once zk(E)VMs become omnipresent and it becomes possible to provide a succinct validity proof of execution, we can succinctly verify both execution and consensus of a source chain on a target chain. Then our on-chain light client will reject invalid state transitions, similar to an honest full-node.
|
OPCFW_CODE
|
Most of the PKM and data science tools are used to analyze our conscious thoughts. However, our subconscious plays a much more important role in our lives, so it would be highly important to learn to analyze it using all the amazing tools that we have at our disposal.
In this article, we demonstrate how you can use InfraNodus tool for self-reflection in order to analyze your dreams.
The approach is based on importing (or logging) your dreams into InfraNodus and then using its advanced text network analysis and visualization engine to reveal recurring ideas, topics, semantic patterns, and images that tend to come up in your dreams. You can then use the built-in GPT AI to generate questions and ideas in relation to those patterns and to rewire them in new ways. Finally, we also demonstrate how you can detect the structural gaps in your dreams in order to identify parts of your subconscious that are not yet connected and to build new bridges between them.
Step-by-Step Dream Analysis and Interpretation Workflow
The workflow below proposes a step-by-step approach to dream analysis and interpretation.
1. Logging and Importing Your Dreams
First, you need to log your dreams to have some data to start with. This can be done using a pen and a paper, a notes app on your phone, or any dream journaling app. It is recommended to do that right after you wake up. It is also helpful to write in the present tense, as if the dream is still happening, to improve recall.
You can also use InfraNodus to log your dreams. In order to do that, simply create a new text graph and write your dream impressions there. You can then combine those graphs into one when you perform analysis. You can also import all the dreams into the same graph if you don't need dream-by-dream analysis.
2. Revealing Recurring Dreaming Patterns: Words as a Network
The next step is to visualize the dream content and to reveal recurrent patterns within. This can be done using the Menu > Top Concepts Synthesis feature, which shows all your graphs at once. If you have some other graphs in your account, you can also combine them manually by using the graph comparison feature. Simply choose the "Combine" option to merge several graphs in one.
As a result, you will get something like this:
The graph uses network analysis metrics to reveal the most influential terms and topical clusters. The nodes represent the concepts that you use in your dreams, while the connections represent their co-occurrences. For example, if you used "driving" and "road" in the same context often, they will be closer to each other on the graph and have the same color. The bigger nodes on the graph have a bigger betweenness centrality measure, while the nodes that tend to co-occur together more often have the same color. Think of it as a social network of words that like to "hang out" together in your dreams. This representation allows us to apply social network measures and reveal the groups and the most influential words in this particular discourse of our dreams.
In our example, we can see that the most prominent concepts (shown bigger on the visualization and also highlighted in the Analytics panel) are:
These are the main protagonists of my dreams: places and people. Note that these are not necessarily the most frequently used concepts, but the concepts that most often connect the different ideas in my dreams. There is a difference because we use the betweenness centrality measure to calculate the importance of concepts, which is not the same as frequency, although the two do correlate.
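For readers who want to see the mechanics, here is a small illustration of the general idea using Python and networkx (my own sketch of a word co-occurrence graph, not InfraNodus's actual implementation):

import itertools
import networkx as nx

dreams = [
    "driving on a long road near a lake",
    "a room in the old house near the lake",
]

G = nx.Graph()
for dream in dreams:
    words = [w for w in dream.lower().split() if len(w) > 3]   # crude stop-word filter
    for a, b in itertools.combinations(set(words), 2):          # co-occurrence within one dream
        G.add_edge(a, b)

centrality = nx.betweenness_centrality(G)
for word, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(word, round(score, 3))

The highest-scoring words are the ones that connect otherwise separate parts of the graph, which is exactly why they can differ from the most frequent words.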
We can also see that the main topical clusters (shown with distinct colors on the visualization and in the Analytics panel) are:
- house place room
- stay found guy
- thought good felt
- father woman tell
3. Using GPT-3 AI to Interpret Dreams
We can also use built-in GPT-3 AI to interpret the topical clusters identified in the previous section. This will provide us an overview of the main topics that occur in our dreams and open a different perspective on our dreaming patterns:
As you can see, I'm dreaming about:
• House and spaces
• Creative expression
• Human interaction
This feedback could indicate some important patterns that tend to come up in my subconscious. I could ask myself why they are so prevalent in my dreams: does it mean that there is a certain concern regarding finding a house or having a fixed place to live, for example?
All this is very useful information, because I get insights about my subconscious dreaming and what my unconscious is trying to deal with while I'm asleep. Moreover, being aware of our dream patterns makes it more likely that I'm going to recognize them again while I'm dreaming, increasing the chances of a lucid dream.
Now that artificial intelligence has identified recurrent topics in my subconscious, let's move one step deeper and see what is hiding underneath.
4. Revealing the Underlying Patterns in Dreams
Now that we analyzed the surface structure of our dream-subconscious, we can dive into the deep structure: the underlying ideas and images contained in our dreams.
In order to do that, we can select the most prominent concepts on the graph and temporarily hide them. This can also be done using the Analytics > Reveal Underlying Ideas function.
We can now see more precise topics coming up that allow us to better understand the nuance of our dream imagery:
• Lake beauty
• Room place
• Waiting end
• Road driving
We can see that a lot of beautiful things, for instance, occur when we're dreaming about lakes. This could not only be interpreted as a generally healthy state of the subconscious mind but also serve as a sign that it might make sense to spend more time in nature, particularly at lakes, because our subconscious associates them with good experiences.
5. Dream Interpretation Using GPT-3 AI
When we find some concepts or ideas that seem interesting to us, we can select them on the graph and use built in GPT-3 AI to generate a question or an idea that would encourage us to ponder about this particular combination of concepts.
In order to do that, just click on a few nodes that seem interesting to you personally on the graph, then go to the Analytics > Relations panel and explore all the other terms that are connected to it. Then, use the built-in GPT AI functions to generate summaries that contain those concepts or to develop them further.
What you're doing practically here is connecting your subconscious to all the existing human knowledge that AI represents (because this is how GPT AI works: it trains on all the texts that have ever been written and then learns to generate the most typical outcome for the prompt that it's given). You can then explore how the images that you dream about can be interpreted using the breadth of human experience.
It is important to emphasize this step: we just derived an interpretation of symbols based on the context of the dreams themselves. We did not look into a book that tells us how we should interpret water, but, rather, just observed what is the context where the notion of "water" appears in our dreams. Of course, we can also add to our subjective interpretation a more general interpretation as well, which would take archetypes and collective experience into account.
6. Navigating through Dreams and Lucid Dream-Weaving
Sometimes, while you're dreaming, you can jump from one dream to another. Another interesting occurrence is when you realize that you're dreaming inside the dream, so you can start steering the narrative: this is what's called lucid dreaming.
You can do a similar thing with the interactive InfraNodus graph of your dreams.
To do that, simply select some of the relevant concepts and then click the "search all graphs" button to see which dreams this combination occurs in. Then double-click the dream (a square node added on the graph) to get to the exact part of that dream where you encountered those ideas.
This is a great way to navigate within your subconscious, uncovering interesting dreams and memories that you would otherwise forget.
7. Identifying the Blind Spots in Your Subconscious
InfraNodus has a feature that can detect a structural gap in your dream: the topics that you are dreaming about but not in the same context.
This can be a very interesting way of identifying the blind spots in your subconscious and thinking of a possible way to connect them. This can usually lead to very interesting results that may enhance your understanding of yourself.
To do that just go to Analytics > Gap Insights and click the Reveal the Structural Gap button. InfraNodus will show the structural gap and the topics where it occurs. You can then think of a connection or use built in GPT AI to generate a question that would encourage you to think of a connection between these two topics.
In this way, it's like doing Jungian psychotherapy work on yourself, using data science and AI as a therapist.
If you're interested in trying this out, please sign up and log in to the InfraNodus self-analysis app: https://infranodus.com/use-case/introspection-self-reflection
|
OPCFW_CODE
|
Allowing user access to your MySQL tables just got easier thanks to PHPMyEdit. Facing the back end: PHPMyEdit makes MySQL databases Web-accessible.
There are two parts to any Web application, a front end and a back end. I have become very good at putting together the front end of Web sites, but tend to lag, if not outright procrastinate, when it comes to finishing up the back-end interface.
The fault is not entirely mine. The front end of the Web site is what everyone sees, including the boss. When the boss sees the public page working, there is the assumption that the project is completed. The corner-cutting programming and the troglodyte approach to data entry on the back end are hidden, so at that point, the boss says, “On to the next project please.”
In addition to hurried schedules, there’s no glory to working on the back end. Back-end interface pages take time to build, and receive only a modicum of thanks. Sure, elements can be borrowed from already completed scripts, but not all databases are created equally, and there are always time-consuming and tedious modifications that don’t make the top of the fun chart.
This week, however, I found a tool that changes all that. Faced with three finished front-end database-driven site projects, I was preparing to knuckle down and crank out the administrative pages. From the back of my mind came a nagging reminder that I had seen a script on Freshmeat that promised to take any MySQL table and instantly create Add/Edit/Delete views of available data. What I downloaded is called PHPMyEdit, and I love it.
Now at version 3.1, PHPMyEdit is written and maintained by John McCreesh and was recently converted to PHP Classes by Pau Aliagas. PHPMyEdit is architecture-independent. If you are running PHP and MySQL, you can use this tool. PHPMyEdit consists of a single PHP page accessed through a Web browser. After completing a quick wizard to determine your database and table, the script will then provide you with code to copy and paste into your own PHP page, which then enables your Web-connected coworkers to modify data in the table you chose. Including the new PHP class drives all these modification scripts.
PHPMyEdit will adapt to any MySQL datatype and also allows for searches within individual fields. It’s highly configurable, allowing you to control the fields your users can access. Data can also be filled in via drop-down lists with data selected from separate tables. Style sheets are employed to control the layouts, so matching your administrative site design is straightforward.
With PHPMyEdit, I was able to quickly give a user access to three content tables within about an hour of downloading the script. This included installation as well as a few modifications. I recommend PHPMyEdit as a very useful addition to any site administrator’s toolbox.
Garth Gillespie is architect and chief technologist for ComputerUser.com.
|
OPCFW_CODE
|
Xcode 9.3 cannot submit build to App Store
I've just updated to Xcode 9.3 and am having the following issue when submitting my app to the app store:
Invalid Bundle - The app cannot be processed because options not allowed to be embedded in bitcode are detected in the submission. It is likely that you are not building the app with the toolchain provided in Xcode. Rebuild your entire app with the latest GM Xcode and submit the app again.
I've double-checked all the frameworks linked to my project and they have all been compiled with bitcode enabled. Only one of these frameworks is a Carthage framework and I've ensured that it's been rebuilt using carthage update.
I'm completely stumped and don't know what else to try.
Thanks ahead for your help!
Are you sure you are using the latest GM Xcode and not the Xcode 9.3 beta? If it's the latest Xcode, then set the Command Line Tools to the latest Xcode as in this answer https://stackoverflow.com/a/39967084/5866353, then rebuild and try to submit the app.
Hi Sharath, I've double checked and the command line tool is properly set to Xcode 9.3. It can't be the beta version as this was updated through the App Store today. If I go to about Xcode there is no mention of this being a beta version of Xcode.
I've had to turn off bitcode for the submission, and then the build went through. This is far from ideal but it'll keep me going until Apple gets back to me on the bug reporter.
Reinstalling Xcode also didn't work.
Yes, I can confirm that when I disable bitcode in the build settings, the submission goes through... Apple, I am so tired of you!!!
I can confirm having same problem, seems that the only solution is turning bitcode off
ITMS-90562: Invalid Bundle - The app cannot be processed because options not allowed to be embedded in bitcode are detected in the submission. It is likely that you are not building the app with the toolchain provided in Xcode. Rebuild your entire app with the latest GM Xcode and submit the app again.
This error might be caused by one of your external frameworks. You can try to rebuild the app from bitcode yourself, and that might give you some more information. To do that in Xcode, archive your app, then in Organizer, in the Archives tab, click "Distribute App", select "Development", and after that select "Rebuild from Bitcode" and proceed. Xcode will then probably show more detailed information about the problem, which might help you solve it.
I had this problem, using Apple's latest toolchain, when including a dynamic framework built with hidden symbols (ld options -bitcode_bundle -bitcode_hide_symbols -r -x).
When the symbols weren't hidden, the app was processed by Apple as expected.
|
STACK_EXCHANGE
|
unsupported operand type(s) for +: 'int' and 'Entry'
I get that z is an Entry and that 5 is an integer but I don't know how to change z to be an integer that the user could enter.
This is my code:
import smtplib
from tkinter import *
window = Tk()
z = Entry(window, width=35, bg="white")
z.grid(row=5, column=2, sticky=W)
def click():
    global YOUR_EMAIL_ADDRESS
    YOUR_EMAIL_ADDRESS = YOUR_EMAIL_ADDRESS.get()
    global YOUR_PASSWORD
    YOUR_PASSWORD = YOUR_PASSWORD.get()
    global TARGET_EMAIL
    TARGET_EMAIL = TARGET_EMAIL.get()
    global subject
    subject = subject.get()
    global msg
    msg = msg.get()
    global z
    z = z.get()
    send_email(subject, msg)

x = 5 + z
print(z)
are you aware that print will display it in terminal only?
Yea I am I was just testing stuff out
i added answer, hope it helped
This is the new error message I get Exception has occurred: TypeError
unsupported operand type(s) for +: 'int' and 'IntVar'
i hv updated the ans, sorry :(
In tkinter you have StringVar() and IntVar() for string and integer respectively. So here you need to use a keyword argument to Entry widget called Entry(.....,textvariable=my_var) and in tkinter you have to define the variable before using it, so here is your code and I have simplify your code if you have trouble understanding you can ask or just copy and use this.
import smtplib
from tkinter import *
window = Tk()
my_var = IntVar()
z = Entry(window, width=35, bg="white",textvariable=my_var)
z.grid(row=5, column=2, sticky=W)
def click():
    global YOUR_EMAIL_ADDRESS, YOUR_PASSWORD, TARGET_EMAIL, subject, msg
    YOUR_EMAIL_ADDRESS = YOUR_EMAIL_ADDRESS.get()
    YOUR_PASSWORD = YOUR_PASSWORD.get()
    TARGET_EMAIL = TARGET_EMAIL.get()
    subject = subject.get()
    msg = msg.get()
    send_email(subject, msg)

x = 5 + my_var.get()
print(my_var)
Note that I removed your assignment z = z.get(), as it might be wrong to do so; instead, you can also do this:
Also, to get the value of an entry box you can use z.get(), which means your last lines of code will be
x = 5 + int(z.get())
print(z.get())
and I don't recommend using z = z.get() as you already have a z outside your function. Note that I'm using int() because in Python you cannot '+' a str and int.
This is the new error message I get Exception has occurred: TypeError
unsupported operand type(s) for +: 'int' and 'IntVar'
Should be x = 5 + my_var.get().
i am so sorry, for not getting dis right :(((( Thanks @acw1668
|
STACK_EXCHANGE
|
Call graph for mac download
CallGraph Viewer uses Zest as the graphics visualization engine. Organization: Certiv Analytics.
Time determines which functions will be displayed on the callgraph and is represented by the thickness of the connections between the nodes. You can use callcount or any other available cost to control the color heatmap and the area size of the call graph nodes.
The callgraph complexity can be reduced by hiding functions with cost below a customizable preset. The node shape is also customizable. The callgraph is zoomable and understands trackpad gestures like zoom in, zoom out, and smart zoom. You can drill down through double-click, and mouse over for details. The loaded source code is annotated with costs. You can define where your source code is located and, when required, how these locations should be mapped to the server path found in the callgrind file.
Profiling Viewer opens the source code of the selected function and annotates its lines with the corresponding costs. Functions can be suppressed based on the source file path.
This information is provided for each thread that executed title code as the capture was running. As with the Summary Tab shown for Function Summary captures, the Summary Tab for Callgraph captures includes hyperlinks for source locations as well as information about thread affinities and inline functions.
Each row in the event list other than leaf nodes can be expanded and collapsed to drill into the call tree. As with the event lists for other CPU capture types, you can customize which columns are displayed by choosing a set of Counters, sort by any column, and filter the contents of the event list using the filter bar.
The default layout also contains the Butterfly View and the Function Histogram.
Timeline Tab
The Timeline tab contains the same callgraph information as the event list on the Callgraph tab but lays it out visually on a timeline. The timeline contains one lane for each thread or core that ran code during the capture. The timeline control in the Timeline tab is the same control used in Timing Captures.
Features from the Timing Capture timeline such as the ability to pivot the data per-thread or per-core, selection synchronization between the timeline and an event list, and the display of callstacks on context switches are all provided in the Callgraph timeline as well.
|
OPCFW_CODE
|
The LINGUIST List’s Rising Stars awarded to three lab members
Rising Stars (from the Linguist List Website)
This is one of the programs that we run during our Annual Fund Drive where we feature undergraduate and MA students who have gone above and beyond the classroom to participate in the wider field of linguistics. Selected nominees should exemplify a commitment to not only academic performance, but also to the field of linguistics and principles of scientific inquiry. We are looking for those undergraduate and MA students who are excited about participating in the global community of linguistic researchers. Students need not already be published, but should have already contributed in some way to the linguistics community.
As someone studying both computer science and linguistics, computational linguistics has always been of great interest to me. Due to its heritage from two very broad fields, I believe that we are only just beginning to tap into its full potential despite its already numerous current uses.
Of these many applications, one in particular has firmly held my attention: the application of computational linguistics (and NLP) to societal problems. Researchers exploring this topic have created tools that can detect different types of bias and propaganda, assist medical professionals in diagnosis, and aid students with different learning disabilities. In truth, despite the huge progress made, we are watching this burgeoning field advance in leaps and bounds every day. Due to my interest in this area, I am incredibly thankful to conduct research in the Speech, Lexicon, and Modeling Laboratory, an environment where this concept takes center stage.
Accordingly, my own Bachelor’s thesis project focuses on using linguistics to increase access to the Internet for a linguistic minority. Despite speaking a variety of Persian, Tajikistanis are unable to read anything on the Internet written by Persian-speakers from Iran and Afghanistan, as they write in the Cyrillic script rather than the much more prevalent Arabic script. My work investigates whether a tool can be created to transliterate between the two incongruous scripts. Once our efforts have reached an acceptable threshold, we aim to make our tool accessible via a web browser extension, thereby making the Internet more accessible to speakers of Tajik Persian.
In a country where most interact with the Internet in Russian and/or English, being able to access the Internet in one’s native language (and script) is a remarkable boon, especially for monolingual speakers of Tajik Persian. As someone whose own mother tongue (Konkani) is split between several scripts, I understand how frustrating it can be to know that the language on the screen is your own, yet nonetheless out of reach. It is my belief that the future of computational linguistics is one that occupies itself with easing such frustrations.
Following graduation, I hope to further explore how I may contribute to this through the pursuit of a PhD in computational linguistics or language technology. My eventual goal is to conduct research in the technology industry, working to create tools that can be of particular use to linguistic minorities and marginalized communities.
During my undergraduate years, I used to be very invested in learning about typology, historical linguistics and fieldwork research as a means to support linguistic communities and aid the revitalisation of moribund languages. Being a graduate student now, I know that the process of fieldwork and language documentation has its own problematic aspects and that there are other, more direct, and potentially non-linguistic approaches to the problem. Still, I remain convinced that the findings from linguistic research should benefit people outside of academia in some way, and so, after writing a critical discourse analysis (CDA) on language ideologies in Japan as my BA thesis, I chose my MA classes based on how well they can prepare me to perform empirical research on similar topics with more practical approaches. My most recent project combined quantitative methods from corpus linguistics with the workflow of a CDA in order to identify biases in Japanese news articles, as an experimental attempt to see how quantitative methods can aid qualitative research.
I see the increasing incorporation of digital tools (in the form of e.g. online corpora, and open source software) as an important development in non-computational fields: Not only do they speed up research on large data sets, but they also allow re-approaches to already known phenomena via computer simulations and modeling, and keep research findings replicable and more accessible to other disciplines. It feels like non-computational linguists are beginning to normalise the use of digital tools, which makes it more likely that they will also enter the technical areas of language technology development:
Recent advances in AI managed to create something that can seemingly talk like a real human being, but as these technologies become available to the public (and their sometimes outrageous flaws become more apparent), I think linguists can and should help to ensure that these technologies are tested and developed in the interest of all groups of people. That is, linguists should make active attempts to stay informed about the technical workings of language technologies. Linguists should stay able to provide relevant suggestions and criticism in order to e.g. find ways to improve the way AIs “learn” and use language, or - even more generally - push the development of technologies that are primarily adjusted to English data to adapt to other, less richly resourced languages.
In that regard, I am very lucky to be part of the Speech, Lexicon, And Modeling lab whose research topics focus on the mental lexicon, using computational methods among others. I have only recently begun to learn how to work with different programs and write my own scripts, but I can already see how much this knowledge gave me a wider range of approaches to choose from for research, and my lab allows me to put these skills to use. I am also glad that an enthusiastic team at my university has allowed me to join them and organise a computational-linguistics-themed student conference this summer. I hope that this event can inspire the general linguists from my department to pick up some new skills, too.
On a more curiosity-driven side, I am also rather interested in sound symbolism research, and some of my recent projects have focused on sound iconicity and how humans perceive it; the phenomenon itself, and some languages' preference for making more frequent use of mimetics than others, intrigue me a lot. I am especially interested in looking into languages such as Mandarin Chinese and Vietnamese, among others, because although these languages have indisputably many speakers, their uses of sound symbolism haven't received much attention in contemporary sound symbolism research. Recent theories on the potential role of sound iconicity in e.g. language change, aesthetic perceptions, and its relations to human psychology make this all the more exciting, and I think this is a great opportunity for languages that have mostly been investigated in typological and comparative linguistic contexts to become relevant in other fields, where findings from Indo-European languages (and, specifically, Japanese) dominate our knowledge on the topic.
Language models like ChatGPT have been a much-debated topic recently, both in general and in Linguistics. Consequences for both teaching and research have been amply discussed, with a wide range of opinions. It is known that even though ChatGPT can be a helpful tool for writing and research, it hallucinates citations, facts, and other information.
I feel that it is particularly important to offer students and researchers the opportunity to understand the inner workings of these models so that they can benefit from these tools yet are aware of their disadvantages and can judge whether or not to use them. At a fundamental level, this requires stirring up interest in people, explaining how these technologies work, and beginning to question them. It has been my experience that people in the humanities tend to be more cautious/shy about topics like programming and technology, especially because some chose a study program in the humanities precisely to avoid math or technology.
This was also true for me when I first started to study Linguistics. Over time, however, I became more interested in computational topics and began to learn about them. This was greatly facilitated by people in my environment who helped me on my journey, and I aspire to help others do the same. This is why, as part of my work with the Speech Lexicon and Modelling Lab at HHU, I have been developing tutorials and other open-source resources for various topics, all aimed at people with little to no prior knowledge. As a result, many resources and documentations that I have written are a product of my own challenges, such as acquiring proficiency with programming languages, setting up programs and software, and using open-source tools. Others are a result of my work as a teaching assistant for seminars in the area of digital humanities and programming, specifically aimed at linguists and people from the humanities. The skills acquired in these classes and materials are helpful for any student or researcher, regardless of whether they want to pursue a job in academia, the industry, or other areas.
Another obstacle I see for students who are already interested in learning about computational and technological topics is the pressure of delivering results in an academic setting. It can be daunting to start something without prior knowledge, especially knowing that this will most likely impact future grades and potentially draw negative judgment from teaching staff and professors. As part of the organizing committee for TaCoS, a computational linguistics conference for students only, the team and I make active efforts to reach out to as many students as possible and facilitate their interest in computational linguistics, regardless of their prior knowledge. Since no professors or teaching staff attend, students can present their work and get feedback and inspiration from other students in the field without feeling pressured to perform well or worrying that they are asking the wrong questions. This will hopefully encourage students to pursue their interests outside of the conference.
Looking ahead, it is also interesting to consider the importance of open-source technology in linguistics. Specific tools and software, for example, neural networks, have found great resonance in some research communities, partially because of their wider availability. Other applications, which are slightly more specific to certain research areas, have yet to be made public and are only passed around among fellow researchers. This makes it considerably more challenging to get into this line of research without knowing the right people and creates an air of elitism around these tools and their related research. In addition, proprietary or private code and software contribute to the looming replication crisis for Linguistics. That is why Akhilesh Kakolu Ramarao and I held a workshop on free and open-source software for research and have been advocating for it during talks in classes. We hope this will enable students and researchers to lower the barrier of entry, understanding, and reproducibility of research.
This topic further relates to the general role of language models and technology in linguistic research. For example, language models such as neural networks have been used for a while in linguistic analysis in order to inform linguistic theory or investigate certain phenomena. However, there has always been a conversation about how well these models can represent language structures in our brains and how applicable their results are to humans if they are unrepresentative, even if they can model prominent linguistic phenomena. My own interest in this topic has developed throughout my Bachelor’s degree and has led me to write my Bachelor’s thesis on using psychologically-motivated learning models (Naive Discriminative Learning) to explain variation in phonetic details. Other projects for my work at the Slam Lab also pursue this topic and aim to compare the results of psychologically motivated models with models that are not.
I hope that in the future, my research can contribute to developing more psychologically motivated language models, and that I can continue my passion for bringing digital skills to humanities students.
|Rising Star Issue
|Anh Kim Nguyen
|
OPCFW_CODE
|
Sun Microsystems, Inc. today announced a significant new version of Sun(TM) VirtualBox(TM), its high performance, cross-platform virtualization software. VirtualBox 3.0 is capable of creating and running multi-processor virtual machines that can handle heavyweight server-class workloads, and also delivers enhanced graphics support for desktop-class workloads, reinforcing VirtualBox's position as one of the world's most popular virtualization platforms. To download the freely available VirtualBox software, visit: http://www.sun.com/software/products/virtualbox/get.jsp
Many multi-threaded server-based workloads, such as database and Web applications, can benefit from Symmetric Multiple Processing (SMP) systems, which contain multiple CPUs. VirtualBox 3.0 can now support virtual SMP systems with up to 32 virtual CPUs (vCPUs) in a single virtual machine. With this major enhancement, VirtualBox software can be used to run not only desktop workloads on client or server systems, but also demanding server workloads.
“The rapid evolution and proliferation of VirtualBox software continues,” said Jim McHugh, vice president of marketing, Datacenter Software, Sun Microsystems. “With each new version, VirtualBox software delivers more innovation, performance and power. And as virtualization continues to gain momentum in the market, the world's developers and IT decision makers are turning to VirtualBox en masse.”
“VirtualBox is a key embedded technology in our Business and Disaster Recovery product line. The great new SMP capabilities in the VirtualBox software allow us to build even more powerful products satisfying our most demanding customers,” said Akash Saraf, CEO of Zenith Infotech.
A key component of Sun's industry-leading desktop-to-datacenter virtualization portfolio, VirtualBox software has been rapidly growing in popularity, surpassing 14.5 million downloads and 4 million registrations worldwide, as well as more than 25,000 downloads a day. A mere 50 megabyte download, VirtualBox software is incredibly compact and efficient and installs in just a few minutes.
New server features of VirtualBox 3.0 software include:
- Up to 32 vCPUs per guest to accommodate heavyweight data-processing workloads.
- Hypervisor enhancements for SMP to enable optimum performance.
- Updated API platform designed to be the basis of the community-driven VirtualBox Web Console project, which is coming soon to allow IT administrators to manage their datacenters from a Web console. This project is based on the popular Python language.
New desktop features of VirtualBox 3.0 software include:
- Microsoft Direct3D support for Windows guests, which enables graphically intensive Windows applications, like computer modeling, 3D design and games software, to run in a virtual environment.
- Support for version 2.0 of the Open Graphics Library (OpenGL) standard. As a result, high-performance Windows, Linux, Solaris(TM), and OpenSolaris(TM) graphical applications, which typically use graphical hardware acceleration, are able to run applications like Google Earth and CAM-based software on VirtualBox software.
- Support for a wider range of USB devices, including storage devices, iPods and phones.
Pricing and Availability
VirtualBox software is free of charge for personal use. For wider deployments within an organization, Enterprise subscriptions are also available, starting at $30 (USD) per user per year, which includes 24/7 premium support from Sun's technical team. Discounts are available based on volume. To sign up for an enterprise support subscription, visit: http://www.sun.com/software/products/virtualbox/get.jsp. For partners interested in redistributing the VirtualBox software as part of their own solution, Sun offers a comprehensive OEM licensing program.
Sun's Comprehensive Virtualization Products and Services
Sun offers a complete desktop-to-datacenter virtualization product and services portfolio, which includes solutions and services for both the management and virtualization of operating systems, servers, storage, networking, desktops and applications. With proven virtualization service expertise, Sun helps customers deploy new services faster, maximize the utilization of system resources, and more easily monitor and manage virtualized environments. Sun's pervasive virtualization services help enterprises scale their business while improving efficiency and transforming their IT environment into a dynamic datacenter. For more information on Sun's virtualization products, visit: http://www.sun.com/virtualization and to listen to a podcast of today's announcement, visit: http://www.blogtalkradio.com/SunNews/2009/06/25/Sun-VirtualBox-30
|
OPCFW_CODE
|
Before Teresi could react, Raymond had already flicked his tail towards his head fiercely.
Just like the clan, Teresi underestimated the lethality of this tail.
He dodged as fast as he could but was still hit.
Teresi felt dizzy. Before he could recover, Raymond attacked again. He couldn’t resist.
The people on the sidelines were going crazy.
Raymond’s speed and strength were absolutely impossible for a person who had had an energy explosion.
But the weirdest thing was that they couldn’t feel any energy fluctuations from the battlefield.
It was as if they were watching two ordinary giant pythons fighting, not beastmen.
Raymond didn’t give Teresi any mercy, nor did he lock the opponent using coercion. He only crushed Teresi by brute force.
Their family had previously wanted to leave the Black Python clan, mainly because they had been suppressed by the clan over and over again, which made staying in the clan disgusting.
As one of the five major families of the empire, the Black Python Clan has been entangled in the power and authority of the empire for thousands of years, to the point that their influence has spread throughout the interstellar galaxy.
With such power, had they left, they would have been giving up all those years of hard work spent building up the clan’s power and influence.
It would be better to be the head of the clan directly, and then do a big purge.
So Raymond didn’t keep his strength in check at all. Teresi had no chance of fighting back. Within two minutes, he already had scars and fractures all over his body.
Teresi didn’t even have a chance to beg for mercy. He could only passively try to dodge. But within a second, he would be out of breath again.
Raymond stopped his attacks and transformed back into his human form. Looking down on Teresi’s frame on the ground, he asked, “Do you still want to fight?”
His voice was not loud, but everyone still heard it.
Hearing his voice, the audience snapped out of their shock and immediately began to make noise.
“Raymond, how could you…how could you be so powerful?!”
Reagan was going crazy.
Doris proudly said, “Because my son has already advanced to Rank 9!”
“What?! How is that possible, how is that possible?!!! We can’t even feel any energy from him!!!”
“You don’t know anything about Rank Nine, do you? Rank Nine beastmen can hide their energy fluctuations.”
When Raymond came down from the arena, Jin WoWo swiftly removed the enchantment.
Everyone outside the court instantly felt a powerful energy that wasn’t there before coming from Raymond.
Outside the field, except for Jin WoWo, and Doris and Ren who were protected by Jin WoWo, everyone was forced to step back involuntarily by this powerful surge of energy.
This was the instinctive awe and fear of the strong.
“How is that possible? That is absolutely impossible. The video clearly showed his energy explosion. Otherwise, how could such a huge energy burst out in an instant?!!! The video announcement was even made by the military. It is absolutely impossible to be fake!!!”
“Yes.” Raymond walked to Reagan and answered loudly, “That video is indeed true, but the huge energy that burst out in the video was not because my energy core broke, but because, at that moment, I advanced. The energy that spewed out was the result of an energy expulsion from the advancement.”
When the strength of the beastman is upgraded, there are indeed bursts of energy. But no one has seen such a terrifying energy burst.
Such a terrifying explosion of energy might only happen when a beastman upgrades to Rank Nine.
Since no one has seen what it’s like to upgrade to the Rank Nine, it was impossible to judge whether what Raymond said was true or false.
Despite that, the energy radiating from Raymond was indeed very powerful. It was several times more powerful than the energy of his Rank 8 opponent.
Reagan didn’t want to admit it, but with the information on hand, he had no choice but to admit that what Raymond said was probably true.
“Patriarch!!!” A clansman ran to Raymond and said flatteringly, “The Patriarch is so powerful that he has no rivals in the Empire. This is a great blessing for our Black Python Clan!”
“Yes, with Raymond as the Patriarch, our Black Python Clan will become the first of the five major families in the Empire!!”
This was Rank 9!
Unexpectedly, their Black Python Clan produced a Rank 9 beastman!
Now with Raymond, wouldn’t the other four major families of the empire have to look up to their Black Python Clan?
When they go out, wherever they go, they cannot be suppressed.
The people present were extremely happy with the outcome.
“Raymond is so strong, it would be a waste for him to just be the master of a family. It would be better to make him Patriarch directly,” a beastman suggested.
“I support Raymond to be the patriarch!” The first one to run over shouted.
“I support Raymond as the patriarch!”
“I support Raymond as the patriarch!”
The shouts came and went, exactly as they had when they were supporting Reagan as the patriarch half an hour ago.
Reagan was about to faint with anger. He was probably the shortest-serving patriarch in the Black Python Clan’s entire history of thousands of years.
Raymond smiled slightly, “Since it’s what the people want, then I will not be courteous. Starting today, I, Raymond, accept the nomination and officially become the Patriarch of the Black Python Clan.”
The people cheered.
A clan member who specializes in managing the clan’s website excitedly opened the backend of the website and entered the news of the new clan leader.
The clansmen present were all big figures with certain rights and powers in the clan. They immediately scrambled to open their own terminals to confirm the identity of the new patriarch on the website, and voted unanimously.
Immediately, an announcement of the New Patriarch was published on the homepage of the Black Python Clan website. And, the photo of the Old Patriarch Stan on the homepage was replaced with Raymond’s.
On the Star Network, the people who were discussing the content of Ren’s live broadcast immediately discovered the announcement.
They were so excited to see who the New Patriarch was, and it was only a few seconds later that they discovered Raymond’s breakthrough.
The announcement also explained why the military released an explosion video some time ago and the misunderstanding of Marshal Raymond’s “sacrifice”.
The whole network was on fire.
However, Raymond didn’t know what was going on in the Star Network. He only felt satisfaction.
next chapter will be up tomorrow!
|
OPCFW_CODE
|
from dataclasses import dataclass
from typing import List, Tuple
from simple_parsing import field
from .testutils import TestSetup
@dataclass
class Foo(TestSetup):
    a: int = 123
    b: str = "fooobar"
    c: Tuple[int, float] = (123, 4.56)
    d: List[bool] = field(default_factory=list)

@dataclass
class Bar(TestSetup):
    barry: Foo = field(default_factory=Foo)
    joe: "Foo" = field(default_factory=lambda: Foo(b="rrrrr"))
    z: "float" = 123.456
    some_list: "List[float]" = field(default_factory=[1.0, 2.0].copy)

def test_forward_ref():
    foo = Foo.setup()
    assert foo == Foo()
    foo = Foo.setup("--a 2 --b heyo --c 1 7.89")
    assert foo == Foo(a=2, b="heyo", c=(1, 7.89))

def test_forward_ref_nested():
    bar = Bar.setup()
    assert bar == Bar()
    assert bar.barry == Foo()
    bar = Bar.setup("--barry.a 2 --barry.b heyo --barry.c 1 7.89")
    assert bar.barry == Foo(a=2, b="heyo", c=(1, 7.89))
    assert isinstance(bar.joe, Foo)
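As a side note, the string annotations above ("Foo", "float", "List[float]") are ordinary forward references; a minimal sketch of how the standard library resolves them, independent of simple_parsing:

from dataclasses import dataclass, field
from typing import List, get_type_hints

@dataclass
class Baz:
    # string annotations are stored unevaluated on the class...
    x: "float" = 1.0
    xs: "List[float]" = field(default_factory=list)

# ...until get_type_hints() evaluates them against the module namespace
assert get_type_hints(Baz) == {"x": float, "xs": List[float]}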
|
STACK_EDU
|
FAQs and known issues
Do I need to change the way I use Docker when Enhanced Container Isolation is enabled?
No, you can continue to use Docker as usual. Enhanced Container Isolation will be mostly transparent to you.
Do all container workloads work well with Enhanced Container Isolation?
The great majority of container workloads run fine with ECI, but a few do not (yet). For the few workloads that don’t yet work with Enhanced Container Isolation, Docker will continue to improve the feature to reduce this to a minimum.
Can I run privileged containers with Enhanced Container Isolation?
Yes, you can use the --privileged flag in containers, but unlike privileged containers without Enhanced Container Isolation, the container can only use its elevated privileges to access resources assigned to the container. It can't access global kernel resources in the Docker Desktop Linux VM. This allows you to run privileged containers securely (including Docker-in-Docker). For more information, see Key features and benefits.
Will all privileged container workloads run with Enhanced Container Isolation?
No. Privileged container workloads that wish to access global kernel resources inside the Docker Desktop Linux VM won’t work. For example, you can’t use a privileged container to load a kernel module.
Why not just restrict usage of the --privileged flag?
Privileged containers are typically used to run advanced workloads in containers, for example Docker-in-Docker or Kubernetes-in-Docker, to perform kernel operations such as loading modules, or to access hardware devices.
Enhanced Container Isolation allows running advanced workloads, but denies the ability to perform kernel operations or access hardware devices.
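For instance, a privileged container that tries to load a kernel module should fail under ECI. A hypothetical sketch using the Docker SDK for Python (docker-py); the image and module name are placeholders:

import docker

client = docker.from_env()
try:
    # privileged, yet ECI denies global kernel operations such as module loading
    client.containers.run("alpine", "modprobe dummy", privileged=True, remove=True)
except docker.errors.ContainerError as err:
    print("module load denied, as expected under ECI:", err)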
Does Enhanced Container Isolation restrict bind mounts inside the container?
Yes, it restricts bind mounts of directories located in the Docker Desktop Linux VM into the container.
It does not restrict bind mounts of your host machine files into the container, as configured via Docker Desktop’s Settings > Resources > File Sharing.
Does Enhanced Container Isolation protect all containers launched with Docker Desktop?
It protects all containers launched by users via docker create and docker run. It does not yet protect Docker Desktop Kubernetes pods, Extension Containers, and Dev Environments.
Does Enhanced Container Isolation protect containers launched prior to enabling ECI?
No. Containers created prior to enabling ECI are not protected. Therefore, we recommend removing all containers prior to enabling ECI. In the future Docker Desktop will likely make this a hard requirement.
Does Enhanced Container Isolation affect performance of containers?
Enhanced Container Isolation has very little impact on the performance of containers. The exception is for containers that perform lots of umount system calls, as these are trapped and vetted by the Sysbox container runtime to ensure they are not being used to breach the container's filesystem.
With Enhanced Container Isolation, can the user still override the --runtime flag from the CLI?
No. With Enhanced Container Isolation enabled, Sysbox is set as the default (and only) runtime for containers deployed by Docker Desktop users. If a user attempts to override the runtime (e.g. docker run --runtime=runc), this request is ignored and the container is created through the Sysbox runtime.
The runc runtime is disallowed with Enhanced Container Isolation because it allows users to run as "true root" on the Docker Desktop Linux VM, thereby providing them with implicit control of the VM and the ability to modify the administrative configurations for Docker Desktop, for example.
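One way to verify this (a sketch, again assuming the Docker SDK for Python) is to request runc explicitly and then inspect which runtime the container actually received:

import docker

client = docker.from_env()
# explicitly ask for runc; with ECI enabled the request should be ignored
c = client.containers.run("alpine", "sleep 30", runtime="runc", detach=True)
print(c.attrs["HostConfig"]["Runtime"])  # expected: sysbox-runc, not runc
c.remove(force=True)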
How is ECI different from Docker Engine’s userns-remap mode?
See How does it work.
How is ECI different from Rootless Docker?
See How does it work.
ECI support for WSL
Prior to Docker Desktop 4.20, Enhanced Container Isolation (ECI) on Windows hosts was only supported when Docker Desktop was configured to use Hyper-V to create the Docker Desktop Linux VM. ECI was not supported when Docker Desktop was configured to use Windows Subsystem for Linux (aka WSL).
Starting with Docker Desktop 4.20, ECI is supported when Docker Desktop is configured to use either Hyper-V or WSL version 2.
Docker Desktop requires WSL 2 version 1.1.3.0 or later. To get the current version of WSL on your host, type wsl --version. If the command fails or if it returns a version number prior to 1.1.3.0, update WSL to the latest version by typing wsl --update in a Windows command or PowerShell terminal.
Note however that ECI on WSL is not as secure as on Hyper-V because:
While ECI on WSL still hardens containers so that malicious workloads can't easily breach Docker Desktop's Linux VM, ECI on WSL can't prevent Docker Desktop users from breaching the Docker Desktop Linux VM. Such users can trivially access that VM (as root) with the wsl -d docker-desktop command, and use that access to modify Docker Engine settings inside the VM. This gives Docker Desktop users control of the Docker Desktop VM and allows them to bypass Docker Desktop configs set by admins via the settings-management feature. In contrast, ECI on Hyper-V does not allow Docker Desktop users to breach the Docker Desktop Linux VM.
With WSL 2, all WSL 2 distros on the same Windows host share the same instance of the Linux kernel. As a result, Docker Desktop can’t ensure the integrity of the kernel in the Docker Desktop Linux VM since another WSL 2 distro could modify shared kernel settings. In contrast, when using Hyper-V, the Docker Desktop Linux VM has a dedicated kernel that is solely under the control of Docker Desktop.
The table below summarizes this.
|Security Feature||ECI on WSL||ECI on Hyper-V||Comment|
|Strongly secure containers||Yes||Yes||Makes it harder for malicious container workloads to breach the Docker Desktop Linux VM and host.|
|Docker Desktop Linux VM protected from user access||No||Yes||On WSL, users can access Docker Engine directly or bypass Docker Desktop security settings.|
|Docker Desktop Linux VM has a dedicated kernel||No||Yes||On WSL, Docker Desktop can’t guarantee the integrity of kernel level configs.|
In general, using ECI with Hyper-V is more secure than with WSL 2. But WSL 2 offers advantages for performance and resource utilization on the host machine, and it’s an excellent way for users to run their favorite Linux distro on Windows hosts and access Docker from within (see Docker Desktop’s WSL distro integration feature, enabled via the Dashboard’s Settings > Resources > WSL Integration).
Docker build and buildx has some restrictions
With ECI enabled, Docker build --network=host and Docker buildx entitlements (network.host, security.insecure) are not allowed. Builds that require these will not work properly.
Kubernetes pods are not yet protected
Kubernetes pods are not yet protected by ECI. A malicious or privileged pod can compromise the Docker Desktop Linux VM and bypass security controls. We expect to improve on this in future versions of Docker Desktop.
Extension Containers are not yet protected
Extension containers are also not yet protected by ECI. Ensure your extension containers come from trusted entities to avoid issues. We expect to improve on this in future versions of Docker Desktop.
Docker Desktop dev environments are not yet protected
Containers launched by the Docker Desktop Dev Environments feature are not yet protected either. We expect to improve on this in future versions of Docker Desktop.
Use in production
In general, users should not experience differences between running a container in Docker Desktop with ECI enabled, which uses the Sysbox runtime, and running that same container in production, through the standard OCI runc runtime.
However, in some cases, typically when running advanced or privileged workloads in containers, users may experience some differences. In particular, the container may run with ECI but not with runc, or vice-versa.
|
OPCFW_CODE
|
What openings provide "mini-objectives" in the early game?
As a beginner I've found that it is psychologically really nice playing QGA as white simply because I have a clear task at hand in the early game - namely recapture that pawn. Similarly, hedgehog for black is satisfying because you are aiming for a layout rather than a sequence.
I often find it quite hard to stumble through the early game when there is no clear objective - many openings the objective is simply to "develop pieces onto good squares". I'm not yet good enough to visualise that task so play pretty aimlessly.
What other openings should I look at that have clear, understandable, short term objectives to get you to the mid-game?
Technically speaking, ALL openings have an objective beyond developing your pieces. For example, the Italian Game/Italian Opening goes like so:
[fen ""]
1.e4 e5 2.Nf3 Nc6 3.Bc4
From this position you would think it's simply development of pieces; however, that's not the case. In the Italian Game, the opening itself for White is pinpointed on the f7 weakness in Black's structure: when Black played e5 in response to White's e4, it opened up a weakness in the structure, namely the f7 pawn.
That f7 pawn ends up being a pinned pawn in many cases if the player castles on the kingside normally, or even sometimes turns into a bishop sacrifice if you have a good rook lift and an attack going on (however, that's too complicated for a beginner in chess). In short, the Italian Opening's objective is simple: attack the kingside's weaknesses, and it is an aggressive opening most of the time when it goes into the midgame phase.
You also have the Ruy Lopez, one of the most famous openings in the history of chess.
The opening focuses on ruining Black's pawn structure and gaining a potential advantage with the pin on the knight on c6.
You also have the French Defense which is also a famous opening
[fen ""]
1.e4 e6 2.d4 d5
The entire purpose and reputation of the French Defense is based on defense and counterplay. Usually players of the French prefer a closed game, however that's a conversation for another thread. In the French, Black's objective is usually a counterattack on the queenside while White usually focuses his attack on the kingside. The French Defense also has a solid pawn structure, which upholds its reputation for defense.
I truly can't give you a list since almost every single opening out there has an objective or a reason for every move. Posting more openings and their objectives + explanation would definitely take too long. However I will leave you with these 3 openings that I was able to mention in this long post and hopefully I helped. If you have any further questions feel free to ask and I will attempt to answer as best I can
Edited part to answer the questions in the comments section :)
King's Gambit (Polerio/Villemson Gambit line is shown below, not the classic line)
[fen ""]
1.e4 e5 2.f4 exf4 3. d4
This opening focuses more on offense. With White's move of 2.f4 he is hoping to accomplish two things: 1) ruin the pawn structure (doubled pawns on the f-file), and 2) clear the way to push down the middle. However, the King's Gambit has two variations, Accepted and Declined (Accepted is when Black takes the f-pawn and accepts the sacrifice; Declined is when Black continues and ignores the pawn).
If Black takes the pawn (which is mostly what White wants), White will in most cases seize the opportunity and push down the middle with d4, hoping to occupy space and make room for his pieces to enter the game. However, in response Black usually focuses on the f2 weakness where the pawn was sitting before it was sacrificed. Black's objective after the sacrifice is usually to have his bishop hitting that diagonal, and unless White can somehow defend, it will become a problem (that's why it's called a gambit). From White's point of view, however, that open f-file is also a weapon: it is basically a semi-open file (unless Black's f-pawn is taken back, which would make it a completely open file) for the kingside rook. And since the King's Gambit is usually an open game with bishops, this is a dangerous combination.
Here is also the Queen's Gambit; however, instead of me typing the explanation here, you might have a more enjoyable time reading and listening to this:
http://www.thechesswebsite.com/queens-gambit/
Hope I helped :) If you have even more questions feel free to ask :)
Thanks, that is helpful - I think the Italian example you give is a perfect example of what I am thinking of - "attack the f7 pawn", so that suggests thinking how to get the knight to g5 perhaps etc. The French example is really rather an example of what I meant by unclear objectives. "Attack on the kingside" could mean anything, moving any/all pawns forwards, knight out, queen out, bishop across... and I would have no real way to decide between them.
Sorry chessbrain - I see you rejected the edit. I didn't mean to "deface" your answer. I thought it was clearer using the chess boards for the openings - just the algebra can be a little hard to understand for some of us!
Ah I see what you mean. If that's the case, what you need isn't really a list of openings that have those objectives, you need a little more knowledge in the middle game in chess. Although the question is about openings your problem is "insight", and that is gained from learning attack patterns and training in some calculations. However I will list some openings with more explanations in a moment :)
And I apologize, I did not know it was you who suggested the edit, I didn't see anyone's name, and to make it worse the chess table that was shown was definitely bugged because it looked REALLY stretched.
When you look at edits the chessboard always looks stretched. It will be alright in the post.
@Corone I truly apologize yet again, didn't mean any of it, I just truly thought it was a troll. Now off to the next opening according to your request :) King's Gambit 1.e4 e5 2.f4... This opening focuses more on offense; white's objective right off the bat is to gain an advantage by sacrificing the f4 pawn, afterwards white pushes towards the center, usually ignoring the sacrificed f4 pawn and focusing on pushing on to attack. After exf4 by black, white sometimes pushes with d4, attempting to dominate the middle. Due to the character limit in the comment section I can't explain more :(
@BlindKungFuMaster Really? I am new so I had no idea... Nor do I even know HOW to add a chess board thingy on this place T-T Is it possible to reverse what I did or ? Cause I truly didn't mean any harm
@Corone I edited my main answer to elaborate on the King's Gambit, hopefully that is more helpful. Also if you can tell me how to add the chess board I would do so gladly since what you said is correct. It is better if there's a board to represent the opening and moves overall.
@Chessbrain - I've added them back in (and for King's Gambit) - if after you accept the change, you click edit, you will be able to see what they look like in the source code.
@Corone Thanks, I accepted... Also just to be clear, that line for the King's Gambit is not the classic one; usually d4 does not come this early. This is called the Villemson Gambit, one of the lines of the King's Gambit. :)
If you have more questions feel free to ask
I think this question divides into two parts:
Openings with "mini objectives" early in the game.
Openings that go for a layout instead of a sequence.
The second part is easier to tackle: There are a lot of openings, usually called "systems", that go for a certain structure without bothering too much with what the opponent is doing. Some examples:
The London System
The Colle-Zukertort
The Stonewall-Dutch
The Stonewall with white
Also in many Sicilian lines (most notably the Dragon), white just castles long and starts to march his kingside pawns --> sac, sac, mate, as Bobby Fischer famously said. But these are much more dependent on move-order issues and concrete tactics.
You could interpret "long castles" as a mini-objective for white in the English attack, which is almost always achieved by playing f3, Be3, Qd2.
Mini-objectives vary from variation to variation, but I think there is one mini-objective that is almost always relevant, especially when playing black: pawn levers.
For example, if you play the black side of a QGA, you might go for the pawn lever c5. In a King's Indian black prepares the pawn lever f5, and sometimes c6 (whereas white goes for c5). In a Benoni he goes for b5 and sometimes f5. In a Meran you will always play c5 or e5 at some point, while white usually plays e4; in an Italian or Spanish game white prepares e4; in a French black goes c5 and white hopes for f5 … they are all over the place!
So my advice would be to look out for pawn levers in the openings you play. They should very often provide the mini-objective that gets you from the opening into the middlegame.
|
STACK_EXCHANGE
|
How to print asterisk triangles side-by-side in python?
I am trying to print 4 star triangle side-by-side using nested for-loops in python. I included the code I am using now that prints the triangles vertically, but I do not know how to print them horizontally.
n = 0
print("Pattern A")
for x in range(0, 11):
    n = n + 1
    for a in range(0, n - 1):
        print('*', end='')
    print()
print('')

print("Pattern B")
for b in range(0, 11):
    n = n - 1
    for d in range(0, n + 1):
        print('*', end='')
    print()
print('')
We also cannot tell you what you need to change in your code, because you didn’t show it.
Thank you! It did not copy correctly. I've included and edited my question now
Thanks to @AzBakuFarid, the main idea is to print every line of the shapes, from the top to the last, together. @AzBakuFarid's code had a little mistake; you can see the corrected version below:
maximum = 10
a, b, c, d = 1, maximum, maximum, 1
while a <= maximum:
    print('*'*a + ' '*(maximum-a) + ' '*2 + '*'*b + ' '*(maximum-b) + ' '*2 + ' '*(maximum-c) + '*'*c + ' '*2 + ' '*(maximum-d) + '*'*d)
    a += 1
    d += 1
    b -= 1
    c -= 1
As you wanted it with for-loops, I came up with this:
longest = int(input())
asterisk_a = 1
spaces_a = longest - 1
asterisk_b = longest
spaces_b = 0
asterisk_c = longest
spaces_c = 0
asterisk_d = 1
spaces_d = longest - 1
for i in range(0, longest):
    print(asterisk_a * '*' + spaces_a * ' ' + ' ' + asterisk_b * '*' + spaces_b * ' ' + ' ' + spaces_c * ' ' + asterisk_c * '*' + ' ' + spaces_d * ' ' + asterisk_d * '*')
    asterisk_a += 1
    spaces_a -= 1
    asterisk_b -= 1
    spaces_b += 1
    asterisk_c -= 1
    spaces_c += 1
    asterisk_d += 1
    spaces_d -= 1
In the first line you should give the number of asterisks in the longest case.
I tried to use meaningful variable names for better understanding.
maximum = 10
a, b, c, d = 1, maximum, maximum, 1
while a <= maximum:
    print('*'*a + ' '*(maximum-a) + ' '*2 + '*'*b + ' '*(maximum-b) + ' '*2 + ' '*(maximum-c) + '*'*c + ' '*2 + '*'*d + ' '*(maximum-d))
    a += 1
    d += 1
    b -= 1
    c -= 1
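For reference, here is a more compact variant of the same idea using str.ljust/str.rjust for the padding (a sketch; the spacing between the shapes differs slightly from the versions above):

n = 10
for i in range(1, n + 1):
    grow, shrink = '*' * i, '*' * (n - i + 1)
    # left-aligned growing, left-aligned shrinking, right-aligned shrinking, right-aligned growing
    print(grow.ljust(n), shrink.ljust(n), shrink.rjust(n), grow.rjust(n))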
|
STACK_EXCHANGE
|
Time offsets associated with wrong words
#593
Environment details
Run on Firebase Cloud Function
@google-cloud/speech version: ^3.6.0
Bug description
I have a node function that takes in an audio file recorded on a mobile device and returns the transcription and time offsets for each word in the transcription. Everything seems to be working well except that the time offsets seem to be associated with the wrong words.
Here is my function:
const speech = require('@google-cloud/speech');
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const path = require('path');
const os = require('os');
const fs = require('fs');
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
const ffmpeg = require('fluent-ffmpeg');
admin.initializeApp();
export const transcribeAudio = functions.https.onRequest(async (req, res) => {
  const { name, fullPath } = req.query;
  const bucket = admin.storage().bucket('audio-test.appspot.com');
  const tempFilePath = path.join(os.tmpdir(), name);
  const targetTempFilePath = path.join(os.tmpdir(), `${name}-converted.mp3`);
  await bucket.file(fullPath).download({ destination: tempFilePath }).catch(console.warn);
  const command = ffmpeg(tempFilePath)
    .setFfmpegPath(ffmpegPath)
    .format('mp3')
    .output(targetTempFilePath);
  await new Promise((resolve, reject) => command.on('end', resolve).on('error', reject).run()).catch(console.warn);
  const file = fs.readFileSync(targetTempFilePath);
  const audioBytes = file.toString('base64');
  const audio = { content: audioBytes };
  const client = new speech.SpeechClient();
  const config = {
    encoding: 'MP3',
    sampleRateHertz: 16000,
    languageCode: 'en-US',
    enableWordTimeOffsets: true,
    enableAutomaticPunctuation: true,
  };
  const request = {
    audio: audio,
    config: config,
  };
  const [response] = await client.recognize(request).catch(console.warn);
  res.send(response);
  fs.unlinkSync(tempFilePath);
  fs.unlinkSync(targetTempFilePath);
});
And here is the json returned from that function:
{
"results": Array [
Object {
"alternatives": Array [
Object {
"confidence": 0.9800227284431458,
"transcript": "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
"words": Array [
Object {
"endTime": Object {
"nanos": 100000000,
"seconds": "4",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 700000000,
"seconds": "3",
},
"word": "How",
},
Object {
"endTime": Object {
"nanos": 400000000,
"seconds": "4",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 100000000,
"seconds": "4",
},
"word": "much",
},
Object {
"endTime": Object {
"nanos": 700000000,
"seconds": "4",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 400000000,
"seconds": "4",
},
"word": "wood",
},
Object {
"endTime": Object {
"nanos": 800000000,
"seconds": "4",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 700000000,
"seconds": "4",
},
"word": "would",
},
Object {
"endTime": Object {
"nanos": 0,
"seconds": "5",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 800000000,
"seconds": "4",
},
"word": "a",
},
Object {
"endTime": Object {
"nanos": 100000000,
"seconds": "5",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 0,
"seconds": "5",
},
"word": "woodchuck",
},
Object {
"endTime": Object {
"nanos": 600000000,
"seconds": "5",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 100000000,
"seconds": "5",
},
"word": "chuck",
},
Object {
"endTime": Object {
"nanos": 200000000,
"seconds": "6",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 600000000,
"seconds": "5",
},
"word": "if",
},
Object {
"endTime": Object {
"nanos": 400000000,
"seconds": "6",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 200000000,
"seconds": "6",
},
"word": "a",
},
Object {
"endTime": Object {
"nanos": 700000000,
"seconds": "6",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 400000000,
"seconds": "6",
},
"word": "woodchuck",
},
Object {
"endTime": Object {
"nanos": 0,
"seconds": "9",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 700000000,
"seconds": "6",
},
"word": "could",
},
Object {
"endTime": Object {
"nanos": 200000000,
"seconds": "9",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 0,
"seconds": "9",
},
"word": "chuck",
},
Object {
"endTime": Object {
"nanos": 600000000,
"seconds": "9",
},
"speakerTag": 0,
"startTime": Object {
"nanos": 200000000,
"seconds": "9",
},
"word": "wood?",
},
],
},
],
"channelTag": 0,
},
],
}
In the above example, I intentionally dragged out the word "woodchuck" the 2nd time I said it. If you look at the offsets, it seems like the times for that word were applied to the following word, "could," which I said much more quickly in the recording. Any insight as to why this would be happening would be much appreciated. Thanks!
Hi @sethcwhiting
Thank you for posting your concern. According to our best practices: "...Use a lossless codec to record and transmit audio. FLAC or LINEAR16 is recommended." Here: https://cloud.google.com/speech-to-text/docs/best-practices
Using mp3, mp4, m4a, mu-law, a-law or other lossy codecs during recording or transmission may reduce accuracy. If your audio is already in an encoding not supported by the API, transcode it to lossless FLAC or LINEAR16. If your application must use a lossy codec to conserve bandwidth, we recommend the AMR_WB, OGG_OPUS or SPEEX_WITH_HEADER_BYTE codecs, in that preferred order.
Additionally, take a look at the supported audio encodings: https://cloud.google.com/speech-to-text/docs/encoding
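To illustrate the recommendation, here is a minimal sketch using the Python client (the Node.js config takes the same fields); audio_bytes is a placeholder assumed to hold raw LINEAR16 content rather than MP3:

from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # lossless, per best practices
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_time_offsets=True,
    enable_automatic_punctuation=True,
)
audio = speech.RecognitionAudio(content=audio_bytes)  # audio_bytes: placeholder for LINEAR16 data
response = client.recognize(config=config, audio=audio)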
Won't fix (No repro)
|
GITHUB_ARCHIVE
|
You just got a spanking new laptop with loads of RAM, the latest processor and very high performing Graphics card.
You download one of the most graphic-intensive games Far Cry 5 and open the game to play it. You keep all the graphic settings to Maximum with Textures, V-Sync, G-Sync, and other options turned ON for the highest quality gameplay.
You start the game and keep on playing it with a good frame rate or FPS of more than 60. Everything seems okay until you see something horrible.
The frame rate or FPS suddenly drops below 30 and you observe stuttering and lags. The FPS stays below 30 for some time and then rises back to 60 and above.
The process keeps on repeating. You go from very high FPS to very low FPS in a matter of minutes. This is happening even though you’ve got a laptop with great specs that are good enough to run the game at an FPS of over 60 continuously.
Well, this phenomenon of the continuous rise and fall in the FPS is due to a process called CPU/GPU Bottleneck.
What Does Bottleneck Mean?
A CPU Bottleneck occurs when the CPU cannot keep up with the processing speed of the GPU. This causes a huge drop in the FPS.
Your CPU will be at maximum usage (100% usage) but the GPU usage will be very low. This occurs if you use an old-fashioned CPU with a newer GPU model.
This drop in FPS causes lags and stuttering leading to a poor gaming experience.
A great example of Bottleneck is Traffic moving from a very large and wide road to a small and narrow road.
When the traffic is moving on a large road then a lot of cars can move together but when the traffic moves on a narrow road then only a limited number of cars can move together.
This narrow road creates a Limit to the number of cars that can move thereby causing a Bottleneck.
Let me give you an example of how Bottleneck occurs in a computer.
If you use a custom-built desktop PC with a 6th Generation Intel Core i3 processor with an NVIDIA GeForce RTX 2080 Ti Graphics Card then you will observe CPU Bottleneck.
This is because your CPU is very old compared to the GPU and the CPU doesn’t have enough processing power to keep up with the high processing power of the RTX 2080 Ti GPU.
The video above shows a perfect example of a CPU Bottleneck while playing a game. The game stutters and lags when the CPU usage is at 100% and the GPU usage is very low.
The continuous rise and fall in FPS is because the GPU has to wait for the CPU to process the previously sent data before the GPU can send some more data to the CPU to process.
The CPU usage remains at 100% for the entire period while the GPU usage rises when it sends the data and falls when it has to wait for the CPU to process the data.
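The arithmetic behind this is simple: throughput is capped by the slower stage of the pipeline. A toy sketch with made-up numbers:

# toy model: effective FPS is limited by the slower of the two stages
cpu_ms_per_frame = 33.0  # old CPU: ~30 FPS worth of work per frame
gpu_ms_per_frame = 8.0   # fast GPU: ~125 FPS worth of work per frame

bottleneck_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
print(f"effective FPS: {1000.0 / bottleneck_ms:.0f}")      # ~30, capped by the CPU
print(f"GPU busy: {gpu_ms_per_frame / bottleneck_ms:.0%}")  # ~24%, the GPU mostly waits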
What Is A GPU Bottleneck?
A GPU Bottleneck occurs when the GPU cannot keep up with the processing speed of the CPU. This is the exact opposite of a CPU Bottleneck.
Your GPU will be at maximum usage (100% usage) but the CPU usage will be very low. This happens if you use an old GPU model with a newer CPU model.
For example, if you use a custom-built desktop PC with a 10th Generation Intel Core i9 processor and an NVIDIA GTX 480 Graphics Card, the processing power of the GPU is much lower than the processing power of the CPU.
The GTX 480 cannot keep up with the high processing speed of the Intel Core i9 processor and this causes a GPU Bottleneck.
Other Causes Of CPU/GPU Bottleneck
Outdated CPU and GPU models are not the only cause of a CPU/GPU Bottleneck. It can also depend on the type of game you're playing.
There are basically two types of game mainly CPU Dependent Games and GPU Dependent Games.
CPU Dependent Games show high framerates or FPS under Low Graphics Settings. An example of such a game is Civilization 6 or Resident Evil 7.
GPU Dependent Games show low framerates or FPS under Low Graphics Settings if the GPU doesn’t have enough processing power. Example – Far Cry 5, GTA V, etc.
You can increase the framerates or FPS of these games by using a powerful GPU with high processing power.
How To Reduce CPU/GPU Bottleneck
If you’re observing CPU/GPU Bottleneck in Laptops then you cannot fix it completely. You can, however, reduce the Bottleneck effect by the following methods.
(A) You can reduce a CPU Bottleneck in a laptop by overclocking the CPU and undervolting and underclocking the GPU.
(B) You can reduce a GPU Bottleneck in a laptop by overclocking the GPU and undervolting and underclocking the CPU.
Following the above two methods can reduce the Bottleneck effect somewhat but it cannot remove it completely.
How To Fix CPU/GPU Bottleneck
If you’re using a desktop PC then you can fix the CPU/GPU Bottleneck by replacing the component that is causing this Bottleneck.
You can fix CPU Bottleneck by replacing the old CPU with a new and better quality CPU.
You can fix GPU Bottleneck by replacing the old GPU with a new and better GPU.
Is CPU/GPU Bottleneck Bad
CPU/GPU Bottleneck is bad for Gaming but it won’t harm your computer. This won’t cause any serious damage to your computer unlike CPU Throttling or GPU Throttling.
You will only notice low FPS in games, with lags and stutters, but it won't affect your daily tasks like browsing the internet, watching movies and other tasks.
Even though it is safe, it doesn't mean you should use a laptop or desktop with a CPU/GPU bottleneck. If you specifically bought the computer for gaming, then you won't get good gaming performance, which defeats your main purpose.
You can reduce or fix the CPU/GPU Bottleneck problem by following the methods shared above.
|
OPCFW_CODE
|
[Nexus-developers] [Nexus] XML format--A way to serialize binary?
Peterson, Peter F.
petersonpf at ornl.gov
Fri Dec 8 16:25:33 GMT 2006
I have been interested in producing a C++ wrapper around the napi for
some time. Would you be willing to share this code with NeXus and
possibly incorporate it into a future release?
From: nexus-bounces at nexusformat.org
[mailto:nexus-bounces at nexusformat.org] On Behalf Of tieman
Sent: Friday, December 08, 2006 10:39 AM
To: Mark Koennecke; Nexus List
Subject: Re: [Nexus] XML format--A way to serialize binary?
Mark Koennecke wrote:
> Dear Brian Tiemann,
> tieman schrieb:
>> Hello all!
>> After much trial and tribulation (linking on Windows is a nightmare!)
>> I have all my windows programs using Nexus 3.0...on to linux!
>> Well, not quite yet...is there a way to serialize binary data into a
>> more compact format with NAPI? I generally need to write image data
>> that can be quite large. The "default" serialization method
>> generates an ascii string for each pixel. While this is very
>> readable--it typically makes the files 4-5 times larger than encoding
>> the data as binary. It would be nice to be able to write the image
>> data as a raw binary dump to save space.
> We implemented NeXus-XML in order to:
> * be buzword compliant
> * allow people to edit their data...
> So the editable data is by design. If you want efficiency, use HDF-5.
> In principle you can
> convert your data into a UINT8 array and print that as hex. The
> NeXus-API allows you to set
> the format strings for printing numbers. This will yield a compact
> representation but will not be readable
> with the NeXus-API.
Hmmm...seems like a simple hack in the Nexus-API should handle this. If
the appropriate flag is set, serialize binary data as a byte stream and
tag it such that we can unserialize it properly on read...I'm not sure
I'll ever need this feature, but I do currently have a web app that
reads NeXus files and passes them over the net as serialized XML
in this manner so I did want to at least try writing XML formatted data.
>> I don't anticipate needing the use the XML format often, but XML
>> would make it easier to write a web app that can browse the data
>> online for instance...
> Well, how do you write your web apps? If you do Java servlets, you may
> use the NeXus-Java interface.
> If you do Tcl you may use the SWIG generated wrapper around the
> NeXus-API, if you use perl or php
> you may help out with your knowledge to make the SWIG interface to
> those languages tick. This is
> little work. If you use python, contact Peter Peterson, they have a
> python interface to NeXus.
I use Java. I do not, and probably will not, use the NeXus-Java
interface. The reason is that I have a C++ wrapper around Nexus that
I've been using for many years now. The C++ wrapper maps the data file
to a data tree that can be accessed easily by paths and is an even
higher level interface than Napi. This has been linked to Java and
other languages through a simple C wrapper that has also been in use for
years now. In short--now that I have a robust class structure that's
cross platform and well integrated into a number of apps--I don't want
to change it :)
NeXus mailing list
NeXus at nexusformat.org
More information about the NeXus-developers
|
OPCFW_CODE
|
What is the account you use to run your program? – loki2302 Aug 25 '11 at 16:00. @loki2302: He's on Windows, so it would be the account he logged in with. Replace the line int main(int argc, LPTSTR argv) with int wmain(int argc, wchar_t **argv). The reason: since your project file has
According to MSDN, there are 16000 error codes total. I have a device that returns some values (this box is seen by Windows as a Human Interface Device). If the specified file exists, the function succeeds and the last-error code is set to ERROR_ALREADY_EXISTS (183). I'm giving you the answer for being civil and polite (rather than assuming I'm an idiot who hadn't checked the relevant documentation or googled like a maniac for an answer): https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx
When it got infected I could not start it at all - not in normal mode, not in safe mode. So, no matter how you look at it, trying to find a subset of codes to use might lead to some kind of disaster. I doubt sifting through the next 14500 would be more help.
Also, the file metadata may still be cached (for example, when creating an empty file). CreateFile Error 2: the CreateFile function creates or opens a file or I/O device.
The dwDesiredAccess parameter can be zero, allowing the application to query file attributes without accessing the file if the application is running with adequate security settings. See also: http://stackoverflow.com/questions/7193348/createfile-failed-with-getlasterror-5. When opening CONOUT$, specify FILE_SHARE_WRITE.
You may use either forward slashes (/) or backslashes (\) in this name. Use the CONOUT$ value to specify console output. File Streams: on NTFS file systems, you can use CreateFile to create separate streams within a file.
For information on special device names, see Defining an MS-DOS Device Name. Symbolic Link Behavior: if the call to this function creates a file, there is no change in behavior. Thread: opening a device with CreateFile fails with error code 5, ERROR_ACCESS_DENIED.
Now when I go to launch IE, I get a standard error message box - "Internet Explorer has encountered a problem and needs to close." Let us know! Code: #include
CreateFile ignores the lpSecurityDescriptor member when opening an existing file. I also thought of using the Windows shell but could not reach a solution. As stated previously, this synchronous versus asynchronous behavior is determined by specifying FILE_FLAG_OVERLAPPED within the dwFlagsAndAttributes parameter. We obviously need to use GetLastError(), but I could not find a reference to what the possible values would be.
Volume handles can be opened as noncached at the discretion of the particular file system, even when the noncached option is not specified in CreateFile. SECURITY_IMPERSONATION: impersonate a client at the impersonation level. This flag has no effect if the file system does not support cached I/O and FILE_FLAG_NO_BUFFERING.
To ensure that the metadata is flushed to disk, use the FlushFileBuffers function. If this flag is not specified, then per-session devices (such as a device using RemoteFX USB Redirection) cannot be opened by processes running in session 0.
Last edited by DougD720; 04-08-2013 at 02:23 PM. 04-09-2013 #2 novacain View Profile View Forum Posts Visit Homepage train spotter Join Date Aug 2001 Location near a computer Posts 3,868 Not What can I say instead of "zorgi"? Below is the code and a screenshot of the Command Prompt. this contact form If FILE_FLAG_WRITE_THROUGH is used but FILE_FLAG_NO_BUFFERING is not also specified, so that system caching is in effect, then the data is written to the system cache but is flushed to disk
Why did the One Ring betray Isildur? Join them; it only takes a minute: Sign up ::createFile winApi fails with error 5 (access_denied) . I was round a long time ago Is 8:00 AM an unreasonable time to meet with my graduate students and post-doc? The operating system also requests a write-through of the hard disk's local hardware cache to persistent media.
Return value If the function succeeds, the return value is an open handle to the specified file, device, named pipe, or mail slot. You should check your user's access rights. Communications Resources The CreateFile function can create a handle to a communications resource, such as the serial port COM1. For more information, see Naming a Volume.
Success! The FormatMessage() function is available to generate a readable string for the error code. This includes allowing multiple files with names, differing only in case, for file systems that support that naming. Using int main(int argc, char *argv), should produce warning(s).
You can use the CRT wrappers as well, but with the inevitable loss of specificity. Is it ok to discuss it here or should I start a new topic/thread? REG.EXE VERSION 3.0HKEY_LOCAL_MACHINE\SYSTEM\Setup SystemPartition REG_SZ \Device\Harddisk0\DP(2)0x36e8e00-0x1746307200+2 Share this post Link to post Share on other sites AdvancedSetup Staff Root Admin 62,831 posts Location: US ID: 15 Posted January 23, They are: FILE_FLAG_NO_BUFFERING FILE_FLAG_RANDOM_ACCESS FILE_FLAG_SEQUENTIAL_SCAN FILE_FLAG_WRITE_THROUGH FILE_ATTRIBUTE_TEMPORARY If none of these flags is specified, the system uses a default general-purpose caching scheme.
A file system may or may not require buffer alignment even though the data is noncached. Is there a single word for people who inhabit rural areas? Now make it share and after that and remove EveryOne from it. If this parameter is zero and CreateFile succeeds, the file or device cannot be shared and cannot be opened again until the handle to the file or device is closed.
Does anyone have a reference to the possible error codes that Windows File functions may generate, specifically (for now) CreateFile()? The most commonly used values are GENERIC_READ, GENERIC_WRITE, or both (GENERIC_READ | GENERIC_WRITE). Note The sharing options for each open handle remain in effect until that handle is closed, regardless of process context. ValueMeaning 0 0x00000000 Prevents other processes from opening a file or
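As a minimal sketch of that error-reporting pattern (the path below is only a placeholder), you can check the CreateFile result and turn the GetLastError() code into readable text with FormatMessage():

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Placeholder path: open an existing file for reading, allowing other readers.
    HANDLE hFile = CreateFileA("C:\\temp\\example.txt",
                               GENERIC_READ,
                               FILE_SHARE_READ,
                               NULL,                    // default security attributes
                               OPEN_EXISTING,           // fail if the file does not exist
                               FILE_ATTRIBUTE_NORMAL,
                               NULL);

    if (hFile == INVALID_HANDLE_VALUE)
    {
        DWORD err = GetLastError();   // e.g. 2 = ERROR_FILE_NOT_FOUND, 5 = ERROR_ACCESS_DENIED
        char msg[512] = { 0 };
        FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       NULL, err, 0, msg, sizeof(msg), NULL);
        printf("CreateFile failed with error %lu: %s\n", err, msg);
        return 1;
    }

    printf("File opened successfully.\n");
    CloseHandle(hFile);
    return 0;
}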
|
OPCFW_CODE
|
A little linting cleanup
Hey guys,
I'm preparing to try to add support for create-react-app thru ignite, first though I thought I would lint things, and whaddya know.... found there were these errors:
/home/justin/projects/node/ignite/src/cli/check.js:37:78: Trailing spaces not allowed.
/home/justin/projects/node/ignite/src/cli/check.js:90:1: Too many blank lines at the end of file. Max of 0 allowed.
/home/justin/projects/node/ignite/src/commands/new.js:74:47: Redundant use of `await` on a return value.
/home/justin/projects/node/ignite/src/commands/new.js:187:1: Too many blank lines at the end of file. Max of 0 allowed.
/home/justin/projects/node/ignite/src/commands/plugin.js:21:10: Redundant use of `await` on a return value.
/home/justin/projects/node/ignite/src/extensions/ignite.js:36:23: 'system' is assigned a value but never used.
/home/justin/projects/node/ignite/src/extensions/ignite/copyBatch.js:1:48: Block must not be padded by blank lines.
/home/justin/projects/node/ignite/src/extensions/ignite/copyBatch.js:2:1: Trailing spaces not allowed.
/home/justin/projects/node/ignite/src/extensions/ignite/copyBatch.js:3:21: Missing space before function parentheses.
/home/justin/projects/node/ignite/src/extensions/ignite/copyBatch.js:37:14: Redundant use of `await` on a return value.
/home/justin/projects/node/ignite/src/extensions/ignite/generate.js:21:12: Redundant use of `await` on a return value.
/home/justin/projects/node/ignite/src/lib/importPlugin.js:58:1: Trailing spaces not allowed.
/home/justin/projects/node/ignite/tests/fast/lib/detectInstall.test.js:28:6: Strings must use singlequote.
/home/justin/projects/node/ignite/tests/integration/ignite-new/new.test.js:26:1: Trailing spaces not allowed.
/home/justin/projects/node/ignite/tests/integration/ignite-new/new.test.js:41:1: Too many blank lines at the end of file. Max of 0 allowed.
so thought I would fix so everything is nice and clean. :rotating_light:
I broke stuff! :smiley:
Has the :robot: made a mistake with this or have I missed something?
Hey so I have been working on this a bit further today. While I was writing an integration test for my changes for removing react-native as a dependency (aren't I good writing the test first!), I noticed that the integration tests didn't complete at all. They just sort of hung there with the lovely spinner showing. Maybe this is why the semaphore build has failed above, because I don't think it's because of my linting (?)
Running locally, the integration test (tests/integration/ignite-new/new.test.js) never returns from this line:
await execa(IGNITE, ['new', APP_DIR, '--min'])
Seems like the --min switch doesn't work like it used to (?) I tried adding -b ignite-ir-boilerplate-bowser in various combinations in place of and as well as the --min. and also with the --no-boilerplate option, but none of these were good.
What should work here?
Summoning @GantMan
OK, so I kicked the tyres :car: a bit on this and found that the integration tests fail on master. Rolling back to commit 892fa5a, I found the integration tests passed successfully at that point, as long as this line was changed to this:
const sporkedFile = 'ignite/Spork/ignite-ir-boilerplate-andross/component.ejs'
Rolled back into master, applied this change, however the tests still hang. Maybe because of this? introduced in b124c8. But a dig into that looks like it was meant to be reverted...
.... now I'm really :confused:
Hi @juddey ,
I was looking at that as well and the only way to fix it for now I saw, was to modify the test (https://github.com/infinitered/ignite/blob/master/tests/integration/ignite-new/new.test.js#L15) to be await execa(IGNITE, ['new', APP_DIR, '--min', '-b', 'ignite-ir-boilerplate-bowser'])
Then it works! (takes a while, but it does)
Problem is, it blocks the same way on await execa(IGNITE, ['g', 'component', 'Test']) (here)
So I guess this is a problem for all prompts.
The only solution I could see to that problem is if you could pass the answer of a prompt on the command line, but that's another job.
The idea would be that if you have a prompt that expects an answer for the foo parameter, and you pass parameters.options.foo, that value should take its place and disable the prompt.
Actually, @kevinvangelder , would you be open to that ?
I could work on a PR.
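A rough sketch of that idea (a hypothetical helper, not ignite's current API) could look something like:

// Hypothetical helper: only prompt when the answer wasn't already passed on the command line.
async function askOrUseOption (context, name, question) {
  const { parameters, prompt } = context

  // e.g. `ignite new MyApp --boilerplate foo` would set parameters.options.boilerplate = 'foo'
  const fromCli = parameters.options[name]
  if (fromCli !== undefined) return fromCli

  // otherwise fall back to the interactive prompt
  const answers = await prompt.ask({ type: 'input', name: name, message: question })
  return answers[name]
}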
Yeah, that's the pattern we've been using in most places, specifically for supporting automated testing. Do @GantMan or @skellock have any different thoughts on this before Adrian gets started?
Hey @adrienthiery!
Thanks for the thoughts, and the idea to pass a specific boilerplate. I hadn't tried the combination of that and the solution for the test I talked about here, using the andross boilerplate.
The tests pass locally, and I've just pushed a commit to see what will happen on the ci side of things.
:point_down: Well, it looks like the ci script has the same sort of problem. Looking into it.
And there we go. :tada:
@juddey insert Bette Midler song here
I thought we removed that codacy plugin long ago. Standby.
@skellock You are the wind beneath my wings.
:tada:
|
GITHUB_ARCHIVE
|
Centering Values on Bars in Histogram in R
Looking to have the values of x-axis plotted in the center of the bars in R.
Having issues finding a way to make this possible, code is below:
hist(sample_avg, breaks =7, ylim=c(0,2000),
main = 'Histogram of Sample Average for 1 Coin Flip', xlab= 'Sample Average')
This is just for a coin flip, so I have 6 possible values and want to have 6 buckets with the x-axis tick marks underneath each respective bar.
Any help is much appreciated.
You might consider using barplot instead when dealing with discrete outcomes. The default barplot also centers the x-axis labels under the bars.
hist() returns the x coordinates of the midpoints of the bars in the mids component, so you can do this:
sample_avg <- sample(size=10000,x=seq(1,6),replace=TRUE)
foo <- hist(sample_avg, breaks =7, ylim=c(0,2000),
main = 'Histogram of Sample Average for 1 Coin Flip', xlab= 'Sample Average',
xaxt="n")
axis(side=1,at=foo$mids,labels=seq(1,5))
# when dealing with histogram of integers,
# then adding some residual ~ 0.001 will fix it all...
# example:
v = c(-3,5,5,4,10,8,8)
a = min(v)
b = max(v)
foo = hist(v+0.001,breaks=b-a,xaxt="n",col="orange",
panel.first=grid(),main="Histogram of v",xlab="v")
axis(side=1,at=foo$mids,labels=seq(a,b))
Not as nifty as you might have been hoping, but looks like the best thing is to use axes=F, then put in your own axes with the 'axis' command, specifying the tick marks you want to see.
Reference: https://stat.ethz.ch/pipermail/r-help/2008-June/164271.html
Tried that, but the last value is getting cut off every time I do that. Do you know of any way to get around that? I tried messing with the length but keep getting that my 'at' and 'labels' are differing by 1.
Nevermind, got it figured out using $mids. I appreciate all the help!
You may get better results by specifying the breakpoints directly, using breaks=seq(0.5,6.5) instead of breaks=7.
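For instance, with six integer outcomes a minimal sketch of that suggestion (assuming the same sample_avg) would be:

# breakpoints at 0.5, 1.5, ..., 6.5 give one bar per integer, centred on it
sample_avg <- sample(1:6, 10000, replace = TRUE)
foo <- hist(sample_avg, breaks = seq(0.5, 6.5, by = 1), ylim = c(0, 2000),
            main = 'Histogram of Sample Average for 1 Coin Flip',
            xlab = 'Sample Average', xaxt = "n")
axis(side = 1, at = 1:6)   # ticks sit directly under the bar centres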
|
STACK_EXCHANGE
|
In a dynamic world you cannot have an application that is not flexible. In fact today an ideal web application is the one that can be easily extended or scaled to accommodate changing requirements. An application may only need standard features today but tomorrow it may need extensions and customizations. What to do then? Do you re-develop the whole application?
To work around a situation like this Radix uses ASP.Net MVC framework to develop applications where every component can be extended or customized later on. We are using ASP.Net MVC to develop or revise existing websites.
So what is MVC? MVC is an architecture pattern that separates business logic from the user interface in a web application. MVC comprises 3 main divisions.
Model – Contains business or domain logic along with data structure
View – Contains layout and presentation of data
Controller – Handles end user requests and responds to them
These three components together manage input, business logic, and output of an application. Radix uses MVC framework to develop interactive web applications and desktop applications.
Model represents and manages domain logic of an application. We work with domain model that contains business domain data. It responds to requests from View and instructions from Controller to update business logic. The core functionality of domain model is to maintain application domain behavior and data. We also offer View Models that represent data transferred between View and Controller.
View represents user interface in web based applications. It displays buttons, design elements, graphics, forms, information, textual output, etc. of application. This comprises of HTML layout only.
Controller manages user interaction. Considered to be the brain of an application, Controller processes and responds to requests, inputs and interactions. It processes requests, gets the appropriate data, and identifies appropriate view to represent it to end user. It acts as a link between the system and its users.
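As a minimal illustration of how the three parts fit together in an ASP.Net MVC controller (the names below are illustrative, not taken from any Radix project):

using System.Web.Mvc;

// Model: domain data
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Abstraction over data access so the controller stays testable
public interface IProductRepository
{
    Product GetById(int id);
}

// Controller: handles the request, gathers the model, picks the view
public class ProductsController : Controller
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Details(int id)
    {
        Product product = _repository.GetById(id);
        return View(product);   // View: renders Views/Products/Details.cshtml with this model
    }
}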
We use MVC to develop applications that can be easily managed with changing requirements. We use MVC patterns to separate user interface from substance of an application.
Why MVC Development with Radix?
MVC based application development has become a core development strategy at Radix. We are using MVC architecture to improve code reusability and identify issues as belonging to a specific domain and solve them in a more effective manner.
1) Separate Development: We use MVC to isolate the application code into model (business logic), view (UI logic), and controller (input logic) independently. We build interfaces based on core contracts that can be tested by using mock objects. Mock objects will imitate the behavior of actual object.
2) Easy Code Maintenance: Since we divide the application into three main domains, code maintenance and debugging becomes very easy. We don’t have to run through the entire code. We identify the issue as relating to a specific domain and debug that particular section only. This lowers our effort and expedites the maintenance process.
3) Multiple View Support in UI: Since MVC separates model, view, and control of application from each other, we design user interfaces in such a way that they can display multiple views of the same data at the same time.
4) Improved Application Performance: We use lightweight ASP.Net MVC views instead of heavier Web Forms pages in web applications. This reduces application loading and processing time, giving better performance.
5) Better Application Testing: We test model, view, and control part of the application separately. Isolated testing of these divisions makes it easy for us to identify latent vulnerabilities or issues and solve them.
6) Extensibility: ASP.Net MVC framework has components that are easy to replace or customize. It is easy to plug-in view engine, URL routing policy, action method parameter serialization, and other third party components as per the requirements.
7) Change Management: One thing that changes most frequently in an app is its user interface. Business rules do not change that often. Since the display and interface are kept separate from the functionality, it becomes easy to manage design-level changes. A new theme can be implemented, new fonts can be embedded in the design, etc.
MVC Development Practices
As mentioned earlier, the central idea of MVC is to improve code reusability and isolate issues to a particular domain. For this purpose we follow the guidelines below when developing the Model, View, & Controller parts of the application.
Model is the underlying structure for data that will be used by different parts of an application. This means different parts will seek data from this common structure. So we make sure that we:
- Develop properties to represent specific data.
- Develop business logic to ensure the represented data fulfills design requirement.
View is responsible for displaying required data to end users. There are certain best practices that we follow for View. We develop:
- Separate presentation code to format and render data.
- Common presentation areas and put them in layout view.
- Partial views that are independent of layout and use fragments of presentation code.
Controller binds model and view together and runs them in an application. Controllers are responsible for handling end user requests. They direct traffic to where it needs to go. They figure out which view meets the end user request. So we take extra care when developing controller part of the application. As a practice we:
- Isolate controllers from non-testable or non-flexible applications.
- Isolate controllers from data access or business logic.
- Use Inversion of Control (IoC) container to manage dependencies. This makes testing very easy.
- Use strongly-typed views instead of magic strings.
Advantages of Radix MVC Application Development
- Conceptual and clear development approach
- Meticulous project management practices
- Application testing for quality & security
- Periodic reporting on progress made
- Confidentiality contracts for non-disclosure of project specifications
- IP protection policy in place
- Transparent business practices and communication
To take advantage of MVC framework in ASP.Net applications contact Radixweb.
|
OPCFW_CODE
|
[concurrency-interest] java fork-join getting-started notes for beginners <-> Java 7
djg at cs.washington.edu
Tue Aug 16 18:00:31 EDT 2011
I'm looking for quick confirmation that using the fork-join framework
with the Java 7 JRE is just as easy as it seems and that I'm pointing
students to the right stable versions of things.
As I've mentioned on this list a couple times, I've developed a
course-unit for second-year undergraduates that introduces parallelism
and concurrency using Java and the Fork-Join Framework (though it's
not really that Java-specific). At Washington, we've used this unit
in our required data-structures course for 1.5 years now and it's been
picked up by 5 other schools so far. In all, 10 instructors, most
non-experts in Java, parallelism, or both have used it and they all
claim success and, "I will do this again." For more information,
One thing that has proven absolutely essential is step-by-step
instructions suitable for beginners, specialized to just what they
need: ForkJoinPool, RecursiveTask, RecursiveAction. This was
particularly important for Java 1.6. The url
has these instructions and was last updated a few months ago. For
those of you who have not taught undergraduates, let me assure you
that there are, nonetheless, a mind-boggling number of ways to enter
-Xbootclasspath/p:jsr166.jar incorrectly. :-)
So what now:
It seems time to update my step-by-step instructions to say:
1. Please use Java 7 following steps a, b, c.
2. If you really can't, then here are the more complicated steps for
using Java 6 following steps, d, e, f, g.
In preparation for this, I downloaded JDK 7 onto a [Windows 7, 64-bit]
machine that has never had Java on it, installed Eclipse IDE for Java
Developers, indigo release (my instructions prefer but don't mandate
eclipse), set the Java Project JRE to JavaSE-1.7, and ran the attached
file. It Just Worked. This is So Wonderful and I send my heartfelt
appreciation to everyone on this list who helped make it happen.
Now my questions -- I think the answers are all 'yes' but this is the
place to confirm and I'm most concerned about (C):
A. Java 7: Is this the real deal -- the framework will use the
available processors and, after suitable VM warmup, be the parallel
execution engine we expect?
B. Installation: Will upgrading on machines that already have Java 6
be just as seamless?
C. Code: Is the attached file the way to show things to beginners?
(Note: My point is to show them the reduction explicitly rather than
using a library method. This is for pedagogical purposes. So no
complaining about that.)
D. Eclipse: When choosing JavaSE-1.7, Eclipse Indigo release warns,
"The 1.7 compiler compliance level
is not yet supported. The new project will use a project specific
compiler compliance level of 1.6". Am I correct that this can be
ignored since I'm not using any new /language/ features, just a new
/library/? (Note: My understanding is there are Eclipse versions
available with 1.7 compilers, but if we're okay with the most standard
most stable Eclipse release, this is extremely helpful.)
E: Anything else I can do to make this as bullet-proof for beginners as possible?
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 1349 bytes
Desc: not available
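(The scrubbed attachment is not reproduced here, but a minimal sketch of the kind of example described in question C, a RecursiveTask that sums an array and writes the reduction out explicitly, might look like the following; the cutoff value is arbitrary.)

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums arr[lo..hi) by splitting in half, forking one half and computing the other,
// then combining the two partial results explicitly.
class SumTask extends RecursiveTask<Long> {
    static final int CUTOFF = 1000;   // below this size, just loop sequentially
    final int[] arr;
    final int lo, hi;

    SumTask(int[] arr, int lo, int hi) {
        this.arr = arr; this.lo = lo; this.hi = hi;
    }

    protected Long compute() {
        if (hi - lo < CUTOFF) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += arr[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        SumTask left = new SumTask(arr, lo, mid);
        SumTask right = new SumTask(arr, mid, hi);
        left.fork();                      // run the left half asynchronously
        long rightSum = right.compute();  // compute the right half in this thread
        long leftSum = left.join();       // wait for the forked half
        return leftSum + rightSum;        // the reduction, written out explicitly
    }
}

public class SumExample {
    public static void main(String[] args) {
        int[] data = new int[10000000];
        for (int i = 0; i < data.length; i++) data[i] = 1;

        ForkJoinPool pool = new ForkJoinPool();   // defaults to the available processors
        long sum = pool.invoke(new SumTask(data, 0, data.length));
        System.out.println("sum = " + sum);       // prints 10000000
    }
}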
More information about the Concurrency-interest
|
OPCFW_CODE
|
Connect to Firebird 2.5 embedded using FireDAC
No matter what I do, I can't seem to connect to a Firebird 2.5 embedded database using FireDAC. Funny thing is that yesterday it seemed to work; now all of a sudden it just won't.
Rather than rack my brain over why it suddenly stopped working, I began rebuilding a clean project so I could figure out step by step what is going on. I added (copied) the entire contents of the downloaded Firebird package to my application directory, and of course the database itself.
To avoid folder issues, I have set the Delphi environment options to put all the files (and the exe) in the same directory. Then I used the suggestion found in the Firebird *.doc folder and renamed fbembed.dll to fbclient.dll. (In some FireDAC Q&A about the issue I have read that FireDAC requires the fbembed.dll file. Well, I have tried both ways and it won't work either way.) But let me stick to the first issue with fbclient.dll.
To establish a connection, I have dropped on the datamodule a FDPhysFBDriverLink1 and a FDConnection1.
Now the FDPhysFBDriverLink1: Its BaseDriverID is FB. For DriverID, I have tried both suggestions: first with 'FB' and then supplying the full path to the fbclient.dll. Neither seems to make a difference.
When I go to the FireDAC Connection editor and input the path to my database file, I get a "Cannot load vendor library (fbclient.dll or fbembed.dll)" error. But both of these files are in the application exe folder (as suggested on the Embarcadero site). So, where is FireDAC looking for the files? I am getting a little fed up with Firebird and FireDAC, as they can't simply explain what must be done for the connection to succeed. All they give is some vague options, none of which work. Add this, add that, and at the end, fail.
So if someone has experience on how to do this with straight forward answer (no links please, I have seen and tried them all), I would very much appreciate it. Trying for days to establish a simple connection is really stupid. I have tried also with UniDAC with similar results. What must I do to accomplish the connection?
I can't place it immediately but ISTR there was a question here a few weeks ago about Firebird embedded suddenly stopping working. You might try looking for it in case anyone came up with an answer.
It seems the issue was this :
FDPhysFBDriverLink1 needs this info :
BaseDriverID=FB
DriverID=FB
VendorLib=C:\Users\***\Documents\Delphi\FIREBIRD\fbembed.dll
After setting this I could connect the FDConnection1 using these parameters :
DriverID=FB
Database=C:\Users\***\Documents\Delphi\FIREBIRD\MYDB.FDB
User_Name=sysdba
Password=masterkey
Protocol=local
After hours and hours of trying various combinations I eventually got lucky. However, I am still uneasy with this though it works.
Why are you uneasy with this solution? Using Firebird embedded is - probably - not the default as most people use Firebird server. By default it will look for the normal client library (fbclient.dll), so you need to explicitly instruct it to use (and where to find) fbembed.dll (which has the same API as fbclient.dll, but also contains the server).
Firebird is a client-server system - it has server part (fbserver.exe or fb_inet_server.exe) and client part (fbclient.dll). So, when you connect to the normal (not embedded) instance of Firebird, you should specify fbclient.dll as a client library.
Embedded Firebird is a combination of server and client in the single dll - they are both complied into fbembed.dll (you can look at its size and compare with fbclient.dll).
When you use Embedded server, you need to specify fbembed.dll as client library, and, as a result, you will have Firebird server (i.e., code, responsible for processing SQL queries) embedded into your code.
If you don't specify the vendor lib in FireDAC, it will try to load the first available dll, which most likely will be fbclient.dll (from System32?).
FireDAC, like any other client components, is not aware of the architecture of the Firebird you are using - it just loads the specified dll and invokes its API. This gives great flexibility - you can connect to an Embedded or a SuperServer/Classic/SuperClassic instance with a single change of client library (and an adjusted connection string).
The obvious, but still important, thing to mention - Embedded cannot be used for remote connections (with a connection string like this: servername:Disk:\Path\Database.fdb).
|
STACK_EXCHANGE
|
Emacs is an incredibly useful piece of software, but the default configuration is difficult to use, and makes it difficult to appreciate the capabilities of the program, understand how to customize the environment, or even use the software effectively for basic text editing tasks. Once you become familiar with Emacs and have begun to customize it yourself, the rationale for the “no default configuration” becomes clear, but it’s difficult to get to this point and there’s no good reason to leave would-be emacs users to fend for themselves.
The “emacs-stack”, then, provides a good example configuration that users may find helpful as an example of how to manage a large and complex emacs configuration, and a set of good, working defaults derived from tycho’s working configuration. In contrast to some other attempts to provide a good introductory default to emacs, this “distribution,” of emacs code does not attempt to package emacs itself, nor does it attempt to deliver an emacs experience designed to be easy to learn for users of another system or environment. Rather, the emacs-stack presents a faithful example of a real “working” emacs configuration.
The distribution contains a directory named emacs/ that contains all emacs lisp files and a makefile that manages installation, removal, and byte-compilation of the emacs configuration. Most of the files in this distribution are publicly available and freely licensed packages, but there are a number of files that configure key-bindings, configure variables and settings, and provide some custom “glue functions” that are original to this configuration.
The latest release is always available at http://download.cyborginstitute.net/emacs-config.tar.gz.
Please submit an issue to the mailing list if you’d would like to see a more formal release process or numbering system.
See the downloads page for a link to the latest tarball with a copy of the emacs-stack release. Download this file, using the following command:
cd
wget http://download.cyborginstitute.net/emacs-stack.tar.gz
Then extract the archive:
tar -xzvf emacs-stack.tar.gz
Then run the make process to install the new configuration.
When complete the script will archive your existing ~/.emacs file in your home directory and ~/.emacs will be a symbolic link that points to a file in emacs/config/, and all emacs lisp files will be byte-compiled.
You can edit any of the emacs lisp files as needed for your configuration. Run the following make operation at any point after changing the lisp files to update the byte-compiled files:
Continue reading for more information on the specific opportunities for customization and components of the emacs-stack.
Many of the files included in the distribution are third-party libraries and scripts. The files in the config/ directory, and all files beginning with tycho- provide the integration and originate with this distribution.
There is no guarantee that any of the emacs lisp included in this package is: bug free, up to date, or unavailable through other means.
The following list introduces the core components of the emacs stack, that create the entire experience
Contains all machine specific configuration. Ideally, these files are all unique to the machine, but there is some duplication in practice.
Your user account’s ~/.emacs file should be a symlink to this file.
Do not insert lisp into this file unless it causes an inter-system compatibility issue.
This file controls the initialization process, and grows out of a need to maintain two or more emacs daemon instances on the same system with different desktop (i.e. state) systems.
This is the core configuration file, and most of the other tycho- files are required from this file. At some point in the distant history all of the tycho- files were in this file, now tycho-emacs.el contains (require) calls and sets a number of variables and settings.
Contains all visual modifications to emacs’ display and font selection. Implemented as a series of functions the init file requires this file and then calls one of these functions during the display process.
Alternatively, you may use your ~/.Xdefaults file with the following lines to control your Emacs appearance a bit more cleanly:
emacs.menuBar:off
emacs.FontBackend:xft
Emacs.font: inconsolata:pixelsize=14:antialias=true:hinting=true
Emacs.pane.menubar.font: inconsolata:pixelsize=14:antialias=true:hinting=true
Provides some fairly significant wrappers around Deft, similar to the code in tycho-ikiwiki.el.
In most cases, the tycho-keybindings.el file specifies all custom keybindings that don’t directly relate or depend on other code. For instance org-relgated bindings are stored in tycho-org.el.
All other tycho- files contain simple wrappers and configuration around otherwise unmodified lisp files and packages obtained from third party sources. Comments may be sparse, but feel free to open an inquiry on lists.cyborginstitue.net for a documentation and/or commenting enhancement.
|
OPCFW_CODE
|
Create Your Own Database Driven Website with MySQL and PHP for Windows
PUBLISHED: FEBRUARY O4, 2015 – ARTICLE: CREATE YOUR OWN DATABASE DRIVEN WEBSITE
Over the last few years, the internet hype has moved from just owning a website to owning a database driven website. If you have some background in IT and have been trying to create a website for your business, this is something you must be puzzled about.
Delving Into Database Driven Websites
As a budding entrepreneur with basics in HTML websites, you must be wondering what the big fuss is all about because most global brands have, after all, been using static websites.
To appreciate emphasis on a database driven website, consider these two types of web pages you can create:
- Static/non-dynamic websites: These websites look the same every time and the content does not change on its own when the page loads. Every time your website is loaded or a user clicks on a button, there is no apparent change, hence the term non-dynamic. For there to be change, you have to make it yourself and upload the new version of the page to the server.
- Dynamic websites: This is where a database driven website falls and these pages can change every time they are loaded without the input of a DBA. A database driven web page grabs information from the database and loads the same into the webpage each time it is loaded. Once information in the database changes, this website also changes without any need for human input.
It goes without saying that a database driven website is an integral part of every business platform. Real-time is the future of contemporary business, especially now that online users are looking for instantaneous solutions.
Creating Your Database Driven Website
How to create your own database driven website
Having discovered the advantages of this type of website, it behooves you to get some insight on how to build a website for your business. Here are some steps and factors to consider:
1. Critical Role Of PHP And MySQL
In static websites, you create your site with HTML, CSS and JavaScript, which requires you to upload your website files to another location - a web hosting service, an ISP, or a web server set up by your business. This changes when it comes to using PHP. PHP is a server-side scripting language which serves as a plugin for your web server to enable it to run PHP scripts. Basically, you will have to download PHP and install it for it to be able to carry out this function.
2. Downloading PHP On Own Web Server
Whether you are using Linux, Mac OS or Windows, you will need PHP and MySQL which is your database for your website. If you are lucky then your web host’s server already has PHP and MySQL, which means you will not have to install.
3. Installing PHP And MySQL Together On Windows
One way to handle this double-prong installation is to do it together as follows. You need:
Windows, Apache, MySQL, and PHP (WampServer), an all-in-one installation program to make your work easy. This program contains the current versions of this software.
Once you have it, follow the easy installation procedure by identifying the location, most preferably the default installation directory in your machine.
Choose your default browser and if you are not using Firefox, just choose an executable file where your browser is located.
APACHE HTTP, a popular web server for PHP development, will appear and Windows will issue an alert as WampServer installs.
Type in your ISP’s SMTP server address and your email when prompted.
Fire up WampServer when installation is complete and try it out on your localhost menu item at the top of the box.
If you are a business owner and you don’t have adequate IT skills, this process might get tricky and you can call in web development experts from RemoteDBA.com to guide you through the process. Or sign up here and create a website using Easysite, the site builder for people that have no web design or coding knowledge. That's right, even your Granny could make a website with Easysite.com, it's just so easy. If you want to get your hands dirty, you can opt for the longer per-package installation delineated below.
4. Individual One-package Installation
This process might take time, but it is the way to go if you want to learn the workings of PHP and MySQL. Installation of MySQL database is no big deal because you can do it online free of charge from the free MySQL Community Server.
You will then follow the Windows links depending on whether you are using the 64-bit or 32-bit version.
Go through the installation process, through the set up and configure MySQL server now to launch the configuration wizard.
The installation wizard will request this information, which you will fill in as indicated:
Server type: Developer machine.
Database: Non-transactional database only.
Connection limit: Decision Support (DSS) /OLAP.
Networking options: Enable strict mode option.
Default character set: Best Support For Multilingualism.
Windows option: Allow MySQL to run as a Windows Service, and make it easier to run MySQL’s admin tools from the command prompt by including the Bin Directory in the Windows PATH.
Security options: Modify the security settings with the assistance of the wizard until you have learned the ropes.
You can verify the wizard is working by opening the task manager to check whether the program is running.
5. Installing PHP Individually
Start by choosing the latest version of PHP and select either the installer version, which is quicker, or opt for the zipped package for manual installation. For some Windows versions, including XP and Vista, you need to install your own web server to develop a database driven website as they have no Internet Information Services (IIS).
Things get a bit tricky here because it is not common to host websites built with PHP on IIS. Linux OS is more popular to host PHP-powered sites, but then again if you are working in an environment where the company has invested in asp.net technologies, you better use IIS infrastructure to host your website.
The easiest way around this issue is to simply go to install.txt and get an easy PHP installation process that will ensure you have a running website.
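Once both pieces are installed, a short test script confirms PHP can actually reach MySQL (the credentials below are placeholders for whatever you chose during MySQL configuration):

<?php
// test-db.php - quick check that the PHP/MySQL installation works end to end
$host     = 'localhost';
$user     = 'root';      // placeholder: the account you created
$password = 'secret';    // placeholder: your real password
$database = 'test';      // placeholder: any database your user can access

$connection = mysqli_connect($host, $user, $password, $database);

if (!$connection) {
    die('Connection failed: ' . mysqli_connect_error());
}

echo 'Connected to MySQL ' . mysqli_get_server_info($connection);
mysqli_close($connection);
?>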
Truth be told, you might find it too cumbersome to go through the whole installation procedure, but the fruits of your Windows MySQL driven websites will revolutionize your business. If you don’t have the time, or the necessary skills, calling a remote database administration expert is easy and they will complete everything for you. All you will have to do is log in and start administering your website.
Did you know with Easysite you get a free domain for life, hosting, and eCommerce software. This means you can build your own website and if you want to start selling stuff from your site it won't cost you any extra, click here to try it free for 30 days.
|
OPCFW_CODE
|
How can I use a Dictionary parameter with Swagger UI?
I'm looking to use Swashbuckle to generate Swagger docs and Swagger UI for a Web API method that looks like the following (it's effectively a mail-merge with PDFs):
[HttpPost]
[ResponseType(typeof(byte[]))]
public HttpResponseMessage MergeValues([FromUri]Dictionary<string, string> mergeValues,
[FromBody]byte[] pdfTemplate)
{
return MergeStuff();
}
This isn't currently working, or at least I'm not sure how to interact with the resulting Swagger UI.
This creates the following Swagger UI, which seems reasonable, but I'm not sure what to do with it to populate it correctly (or if it's even correct). I'm using pretty much all default Swashbuckle settings from the latest Nuget.
Byte Array: If I enter Base64-encoded text for the byte array, the byte array always shows up null. Turns out I just need my BASE64 text surrounded by double-quotes and then it works.
Dictionary: I've tried various types of JSON expressions (arrays, objects, etc) and am unable to get any of the values in the Dictionary to populate (the resulting Dictionary object has 0 items).
I have the ability to change things and would like to know how I can do this. For example, if changing the dictionary to an array of KeyValuePair<string,string> helps, let's do it.
Options I know that I have that I'd like to avoid:
Changing these input types to strings and doing my own manual deserialization/decoding. I'd like to be explicit with my types and not get too fancy.
Custom binders. I'd like to utilize standard/default binders so I have less code/complexity to maintain.
It looks like you are trying to upload a file, take a look here : https://github.com/domaindrivendev/Swashbuckle/issues/120
Actually, turns out for the byte[] file, I just need double-quotes around my base64 content. With the quotes, that works. Now I just need to find out how to handle the Dictionary.
My question really was a two-parter. Here are the answers, although only a partial answer to the second question:
Question 1: How do I fill in the data for a byte array?
Answer 1: Paste in your base64-encoded value for it but be sure to surround that content with double-quotes, both at the beginning and end.
Question 2: How do I fill in the data for the Dictionary?
Answer 2: While it doesn't work with [FromUri], this will work with [FromBody] (either Dictionary or IDictionary<string,string> will work):
{"FirstName":"John","LastName":"Doe"}
I'm not sure why this doesn't work with FromUri but I'm going to ask a separate question that's much more focused than this one to get to the bottom of that. For the purposes of this question, both parameters can be put into a DTO, flagged as [FromBody], and all is good then.
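A minimal sketch of that DTO approach (the type and property names are illustrative) looks roughly like this:

using System.Collections.Generic;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Description;

// Both the merge values and the PDF template arrive in the request body as one DTO.
public class MergeRequest
{
    public Dictionary<string, string> MergeValues { get; set; }
    public byte[] PdfTemplate { get; set; }
}

public class MergeController : ApiController
{
    [HttpPost]
    [ResponseType(typeof(byte[]))]
    public HttpResponseMessage MergeValues([FromBody] MergeRequest request)
    {
        // request.MergeValues deserializes from {"FirstName":"John","LastName":"Doe"}
        // request.PdfTemplate deserializes from a double-quoted base64 string
        return MergeStuff();
    }

    // Stand-in for the real merge logic from the original question.
    private HttpResponseMessage MergeStuff()
    {
        return Request.CreateResponse(System.Net.HttpStatusCode.OK);
    }
}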
I know this question is 5 years old, but I've been running into the same issue. It appears to be caused by how Swagger UI sends up the query parameter for a dictionary.
Eg. if you had a Dictionary<string, string> called MoreData, and you want to send a request with a key of "Key" and value of "123", Swagger will format the request parameter as:
&Key=123
However, the param should include the dictionary prop name, so it should be formatted as:
&MoreData[Key]=123
I don't (yet) know how to get Swagger to format the query params correctly, but I hope this helps someone in the future
PS: https://github.com/domaindrivendev/Swashbuckle.AspNetCore/issues/2034
|
STACK_EXCHANGE
|
Powerpoint 365 proofing languages -> is it possible to check 2 languages simultaneously?
My company's powerpoint presentations usually have to contain text in 2 different languages (Slovenian and English). Powerpoint only allows 1 proofing language, so no matter which language is selected, words in the other language will be underlined in red as incorrect. This is annoying, and while there is a simple fix (turning off spelling correction completely), it is unacceptable for the majority of our users since spelling mistakes happen frequently.
Is there a way to combine proofing languages or dictionaries to allow writing and spelling correction in both languages simultaneously?
Example (say I want to write the word "company"):
Proofing language is set to English: Powerpoint does not underline the word "company" but it would underline the word "podjetje" (Slovenian for company). The same applies vice-versa.
I want it to leave both words un-underlined BUT still underline a word that is actually misspelled (e.g. kompany)
Is there a way to do this?
@mdjo Harrymc is correct that you can install multiple proofing languages. To use this feature, you'd have to choose each text shape and mark it as the correct language, English or Slovenian. PPT would then use the correct dictionary for the text.
To do this, select the text or text box, go to the REVIEW tab, click LANGUAGE, then SET PROOFING LANGUAGE. Choose the correct language for that block of text.
Tedious. If I were doing this, I'd create a few "template" slides in whatever layouts I needed, set the languages for each text block, then copy the slides and edit to add new text for each new slide.
There are also two add-ins I know of that will set the language for an entire presentation. With either of them, you could, for example, start with an English language presentation and ignore the spelling errors in Slovenian text, then set the presentation to Slovenian to check the spelling there.
Personally, even though I wrote one of these add-ins, I'd go with Harrymc's suggestion, with the modifications I suggested here.
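For reference, the bulk change that those add-ins automate can be sketched in a few lines of VBA (this is a generic loop over every text-bearing shape, not the add-ins' actual code):

Sub SetProofingLanguageForAll()
    ' Mark every shape with text in the presentation as one proofing language.
    Dim sld As Slide
    Dim shp As Shape
    For Each sld In ActivePresentation.Slides
        For Each shp In sld.Shapes
            If shp.HasTextFrame Then
                If shp.TextFrame.HasText Then
                    ' Swap in the constant for the language you want to check.
                    shp.TextFrame.TextRange.LanguageID = msoLanguageIDEnglishUS
                End If
            End If
        Next shp
    Next sld
End Sub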
Tedious is the correct word for it. We use (on average) around 50 slides per presentation so this is sadly out of the question as it would take several hours. I am surprised that this is not a default feature of Powerpoint; there have to be millions of multilingual users out there and manually setting the language for each block of text cannot be a preferred way of doing things, no?
I agree. I think PPT's developers designed it for presentations that are all in one language or another, or that have small amounts of text in languages other than the default.
PowerPoint allows multiple proofing languages.
You may add languages in File > Options > Language under
"Office authoring languages and proofing".
Use the button "Add a Language...".
To set some text to a given language, select it.
At the bottom-left of the page you will see the language it's set to:
Click on the displayed language to launch the Language dialog:
In the dialog, click a language and then OK. This will change the proofing
language of the selected text.
|
STACK_EXCHANGE
|
Caret - Repeated K-fold cross-validation vs Nested K-fold cross validation, repeated n-times
The caret package is a brilliant R library for building multiple machine learning models, and has several functions for model building and evaluation. For parameter tuning and model training, the caret package offers ‘repeatedcv’ as one of the methods.
As a good practice, parameter tuning might be performed using nested K-fold cross validation which works as follows:
Partition the training set into ‘K’ subsets
In each iteration, take ‘K minus 1’ subsets for model training, and keep 1 subset (holdout set) for model testing.
Further partition the ‘K minus 1’ training set into ‘K’ subsets, and iteratively use the new ‘K minus 1’ subset and the ‘validation set’ for parameter tuning (grid search). The best parameter identified in this step is used to test on the holdout set in step 2.
On the other hand, I assume the repeated K-fold cross-validation might repeat steps 1 and 2 as many times as we choose, to estimate model variance.
However, going through the algorithm in the caret manual it looks like the ‘repeatedcv’ method might perform nested K-fold cross validation as well, in addition to repeating cross validation.
My questions are:
Is my understanding of the caret ‘repeatedcv’ method correct?
If not, could you please give an example of using nested K-fold cross validation, with ‘repeatedcv’ method using the caret package?
Edit:
Different cross validation strategies are explained and compared in this methodology article.
Krstajic D, Buturovic LJ, Leahy DE and Thomas S: Cross-validation pitfalls when selecting and assessing regression and classification models. Journal of Cheminformatics 2014 6(1):10. doi:10.1186/1758-2946-6-10
I am interested in “Algorithm 2: repeated stratified nested cross-validation” and “Algorithm 3: repeated grid-search cross-validation for variable selection and parameter tuning” using caret package.
Your understanding of the caret ‘repeatedcv’ method is, I think, not entirely correct. ‘repeatedcv’ in caret refers to repetitions of K-fold cross validation when selecting the optimal hyperparameters. One cross-validation split may not be enough to identify the optimal parameters. This should not be confused with nested cross validation, which is a separate issue.
I would not say that "parameter tuning might be performed using nested K-fold cross validation". Parameter tuning uses either single or repeated K-fold cross validation. Nested cross validation is used just for the assessment of the prediction error of the prediction algorithm.
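For example, a minimal caret call that uses ‘repeatedcv’ for hyperparameter tuning only (the data set and model below are illustrative) could be:

library(caret)

set.seed(42)
# 10-fold cross-validation, repeated 5 times, used only to pick hyperparameters
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 5)

fit <- train(Species ~ ., data = iris,
             method = "rpart",      # illustrative model
             tuneLength = 5,        # 5 candidate values of the complexity parameter
             trControl = ctrl)

fit$bestTune   # hyperparameter chosen by repeated cross-validation
# Nested cross-validation would wrap this whole call in an outer K-fold loop and
# assess the selected model on each outer holdout fold.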
You should (see reference Krstajic et al (2014)) apply repeated nested K-fold cross validation to reliably estimate the out-of-sample prediction error of your prediction algorithm.
In stage 3 I would replace "The best parameter identified in this step" by "the best predictive model fit identified" as this stage involves both parameter estimation and model selection (optimal hyperparameter tuning).
|
STACK_EXCHANGE
|
Patented Split-Key Encryption Technology
When it comes to protecting data in the cloud, the biggest challenge isn’t encrypting the data – it’s protecting the encryption keys. Every time an application accesses a data store, it needs to use the encryption keys. This puts them at risk in two places: when they are stored, and when they are in use. Porticor® patented Virtual Key Management™ is the first solution that mitigates the threat of key theft both in storage and in use – to keep your cloud data truly secure.
Porticor® Virtual Private Data™ is the only system available that offers the convenience of cloud-based hosted key management without sacrificing trust by requiring someone else to manage the keys. Breakthrough split-key encryption technology protects keys and guarantees they remain under customer control and are never exposed in storage; and with homomorphic key encryption, the keys are protected – even while they are in use.
How it works
Each data object (such as a disk or file) is encrypted with a unique key that is split in two. The first part – the master key – is common to all data objects in the application. It remains the sole possession of the application owner and is unknown to Porticor. The second part is different for each data object and is stored by the Porticor Key Management Service. Every time the application accesses the data store, Porticor uses both parts of the key to dynamically encrypt and decrypt the data. When the master key is in the cloud, it is homomorphically encrypted – even when in use – so that it can never be hacked or stolen.
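As a rough illustration of the split-key idea in general (a generic XOR-share sketch, not Porticor's actual scheme, which additionally keeps the master key homomorphically encrypted while in use):

import os

KEY_LEN = 32  # 256-bit per-object key, e.g. for AES-256

def make_share(object_key, master_key):
    """Derive the per-object share so that master_key combined with the share recovers the object key."""
    return bytes(a ^ b for a, b in zip(object_key, master_key))

def recombine(master_key, share):
    """Combine the two halves to recover the data-object key."""
    return bytes(a ^ b for a, b in zip(master_key, share))

master_key = os.urandom(KEY_LEN)             # stays with the application owner
object_key = os.urandom(KEY_LEN)             # unique key for one disk or file
share = make_share(object_key, master_key)   # this half is stored by the key management service

assert recombine(master_key, share) == object_key  # both halves are needed to recover it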
Homomorphic Key Encryption
Porticor uses specialized homomorphic encryption techniques to keep your master key safe when it is in the cloud. With Porticor, the master key is known only to you, the application owner. Even before it arrives at the virtual appliance, Porticor encrypts the master key. The master key then stays encrypted while it is being used. Even if your cloud account is penetrated and the master key is stolen, it cannot be used to hack into the rest of your application, because Porticor encrypts the master key differently for every separate use.
Generally, homomorphic encryption is much too slow to be viable for real-world applications. But Porticor’s patent-pending technology combines the most robust key security with very high performance so that you can guarantee the safety of your data and fully comply with regulations. Learn more about Homomorphic Key Encryption.
Proven Strong Encryption Standards
Porticor Virtual Private Data uses strong encryption algorithms, such as AES-256, to encrypt the entire data layer. All projects (typically each project is an application) are cryptographically separated from each other, and Porticor uses a secure protocol to ensure trust among project instances. Porticor VPD also encrypts backup snapshots and for an extra measure of security, encrypted disks can be locked if the data is not in use.
Reliable, Scalable Infrastructure
With Porticor VPD, you can ensure data security in as many projects as you need: by applications, departments, content domain, or any other way you’d like. Each project can contain as much data as required, across multiple disks, databases, file servers and object storage. Porticor fully supports clustering and fail-over configurations and is available for public, private or hybrid clouds.
Integration and Automation
For organizations that want to integrate data encryption into an automated environment, Porticor VPD features a secure API that controls all of the functions of the virtual appliance. Porticor also offers a secure API to the Virtual Key Management service to enable integration with your existing cloud deployment. With Porticor you can automatically:
- Bring up and shut down new appliances securely
- Add and remove protected disks and revoke the associated keys
- Add other protected data objects (such as files or DB records) or remove them, and revoke the associated keys when needed
- Integrate the Porticor key management service with another 3rd party encryption solution for key allocation, management or revocation.
|
OPCFW_CODE
|
Recently I gave Python’s Django a try. Django is a free and open source web framework that allows for the quick creation of any kind of website, ranging from blogs to stores to Git server front-end. Django is extensible and has thousands of modules or “packages”. One of the best things about this project is the abundance of high quality documentation and learning resources. So much so that I decided to create a post to share what worked the best for me. In this post, I will highlight the best resources I came across and the order you should ingest them in.
Starting Out / guides and things worth considering
First things first, if there is great documentation available from the developers themselves, use it! Their tutorial should be the starting point for most users trying out Django for the first time, and I highly recommend you follow it closely even if you don’t want to create a silly polls app. The tutorial is clear, concise, and gives you a good picture of how Django serves pages to clients. If you blow through the tutorial and found it easy (if not, see next paragraph), then maybe try git cloning Cookiecutter-Django and reading Two Scoops of Django as a companion guide for more in depth information. Cookiecutter is nice because it bundles together basic functionality like user authentication, emailing, security features, support for deployment with Docker or Heroku, caching with Redis and many more nice things (Bootstrap!). I strongly suggest you set up your project in a virtualenv or Docker container, that way you don’t have to worry about version conflicts on your machine. Setting things up this way with Cookiecutter makes it easy to keep track of your dependencies and versions. You should try to decide on how you are going to deploy before you start building things.
It’s about this point where all these new things might start to blur together for someone just getting started, which reminds me of a great introductory post to Django and the ecosystem around it. Clouds, containers, webapps – the author finds a fun and memorable way of explaining all these new concepts.
If you stumbled a little bit on the official Django tutorial, try the Django Girls tutorial. This tutorial does not make any assumptions about the background knowledge of the reader and progresses to completing a Django app relatively quickly and deploying.
Whichever route you take, you’ll need to set up a database before you can really begin. The Django documentation doesn’t really steer you in any particular direction as far as which database you should choose and why, so I’ll pick a side: you should go with PostgreSQL to save yourself from future headaches. It is used more frequently, interfaces better with most Django packages, and is overall a nice object-relational database management system. If you are on Linux, you’ll need to know how to create users/tables in the PostgreSQL shell (Digital Ocean does a fine job explaining). Once you have installed a database and created a login for it, all you need to do is add those credentials to your settings.py and Django will create the tables it needs and update them through models and the admin web UI (and the Python/Django shell, which is nice when you just want to make a small change).
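For reference, the credentials go into the DATABASES block of settings.py, roughly like this (the names and password are placeholders for whatever you created in the PostgreSQL shell); after that, python manage.py migrate creates the tables Django needs:

# settings.py (or the base settings module in a cookiecutter layout)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myproject',       # database you created
        'USER': 'myprojectuser',   # role you created
        'PASSWORD': 'password',    # placeholder
        'HOST': 'localhost',
        'PORT': '5432',
    }
}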
Things to do next / write your own tests.
Once you have your site up and running with a few pages and an app or two, you’ll probably want to do some house keeping. Creating maintainable and reusable code will be your best friend if you decide to move forward with Django. So do yourself a favor and write some good functional tests. “Obey the testing goat” is a free online book that delves deep into creating test cases for your site. By taking a relatively small amount of time now to code up some tests, you’ll potentially save yourself tens of, even hundreds of hours of debugging over the coming months/years. Sure you can write code well enough most of the time, and can catch errors in your development environment, but what is going to tell you that your syntactically correct but semantically incorrect code is about to drop an entire table from your database, or send receipt information to the wrong user? There may be some built in safety nets in Django (see Autoescaping and CSRF) , but most other unintended behavior, test cases are the best defense.
Make it look nice / we all need to learn web languages sometime or another
Ok, so at some point you’re going to want all your hard work to look nice. Web design is an industry, but thankfully with Django you don’t have to be an expert. Just take some pages from the Bootstrap library and adjust it to your liking. Somewhere within the bootstrap zip are HTML templates, .css, and .js files you can simply plug into your /static directory and have a nice responsive page ready to go. The templates that come with the Cookiecutter installation of Django will have references to the .css and .js files hosted on other domains. So your templates will work and look nice out of the box. But it’s probably best to have those resources stored locally. If you’re not comfortable with web stuff, head on over to w3schools – it’s probably one of the most impressive free learning resources I’ve come across over the years. The Bootstrap website also has some good examples you can pretty much just start copying/pasting to suit your own needs.
Deployment / fire it up and go live
The Cookiecutter repo I linked to earlier is nice because it makes deploying your app with Heroku or Docker much more convenient. When you install with Cookiecutter, you will be asked a series of questions about integrations you can include. You should say yes to at least one, if not both, so you can have your .yml or procfile generated for you, should you decide to deploy with Docker or Heroku. This will also change how some of your project is structured, by splitting settings.py into “base.py” and “local.py” to make testing/debugging your apps locally possible without much hassle. Continuing the theme of free, Heroku has a free plan for “deployment in a limited sandbox” where you get 512mb of RAM and 1 worker. Note that if you have an app with any bulk at all to it, you may have to look at a paid plan or setting up a CDN because the Heroku free plan might not have enough space (10,000 rows on PostgreSQL will go quickly). If your app needs to manage documents and media files, then it might be worth it to think ahead about how you will host these files and what you will do when you want to change hosts. Link to relevant documentation. If you set things up right, switching CDNs should be pain free.
To summarize concisely how the two deployment systems differ: Heroku is a service (a platform as a service, or PaaS) that takes care of a lot of the sysadmin details that might slow you down. Docker is software (with a free community edition and an enterprise version) that, for lack of a more detailed description, creates bare-bones virtualization containers on a per-app basis. Docker takes a build recipe (“from this distro, install and update these packages, and change these settings…”), also known as the Dockerfile, and then a .yml configuration file that tells it exactly how to install the app and hook up necessary parts like the web server and database. These Docker containers are much lighter than a normal virtual machine, which may take a gig or two of space – I’ve seen containers ranging from just over 100 MB to 500-600 MB. This makes spinning up a website from scratch extremely fast and efficient. If you decide to go the Docker route, I highly recommend following this resource as it uses Cookiecutter too: Development and Deployment of Cookiecutter-Django via Docker.
Optional Packages and Third Party Services / to be done
WhiteNoise – Allows your Django app to serve static files itself, when normally a CDN is used. Viable for small to medium sized projects and very convenient.
Celery – Async task queue, very handy.
Well, you made it to the end. That means you’re probably at least a little interested in using Django. I would encourage you to do so; it was certainly a positive experience for me. You might also be debating between Django and another well known Python web framework – Flask. While I have not tried Flask, what I can say is this: Django is the “batteries included” solution, and takes care of things like user authentication that necessitate robust implementations. Flask leaves more up to the developer, which can be both good and bad. They are certainly different tools for different use cases, but overall I’d say Django is a better choice for beginners. Hopefully this post helps anyone planning to start a project soon, and please do comment about any other free resources that helped you learn Django.
|
OPCFW_CODE
|
New lizard lost tail 2 times already, worried about his/her safety
So recently, our family bought 2 more leopard geckos to go with the 2 we already had... My sister's gecko is "supposedly" a girl, and she and her new young girl/boy got along nicely. They separated but soon started sleeping together more often than not.
My lizard is a bit older than my sister's. We believe he's a large fat-tail male.
I placed my new baby in the cage, and Hammy bit him right in the face on the first night. I immediately separated them, but later on they started "getting along" and slept in the same hide for a couple days. About 3 days after buying "Chicki," as we called him, he lost half his tail.
I tried figuring out what caused it: I have a healthy semi-humid environment, I feed them often, give them a large clean water bowl, and I block out very bright lights, all the crazy stuff. He still lost his tail, so I guessed Hammy ate it (I couldn't find it anywhere).
A few days later, they are sleeping in the same hide again. Finally just yesterday, he lost a bit more of his tail. He's got maybe 3/4ths of a centimeter left.
I'm worried that Hammy is hurting him at night, because I never see them fight during the day, and in fact, if I go to grab them both out, Hammy actually seems to "protect" him, and the two huddle up together sometimes, but at night, they go to opposite hides.
Does anyone know what I should do? Hammy hasn't hurt him any way else.
Please do keep in mind, we named Chicki, well, Chicki, because he's a bit of a chicken. Everything scares him and makes him stick his tail in the air and wag it like a dog. It's possible he just dropped the tail and Hammy ate it, as Hammy is normally never violent.
It sounds like he lost his tail due to stress. According to https://funwithlife.org/leopard-gecko/hideout/, leopard geckos aren't particularly social creatures and housing them together often times won't work. For sure, though, you'll need different hides for each gecko (ideally two: one warm hide and one cool hide).
Personally, I'd separate them into different vivariums.
I wound up taping up a cardboard wall and getting another wooden hide; they fight over the wood hide we have, even though we have other hides.
I just really needed to get chicki out of danger, so I separated them.
|
STACK_EXCHANGE
|
Wednesday 25 Aug 2010
- Review action items from last meeting
- Blueprint status for Linaro 10.11 beta
working through python cross-build, pushing x-build bugs to the Ubuntu packages
last changes merged, awaiting upload
final stable tree in place for 2.6.35; set of packaged images to produce for 10.11 is settled
no progress to note this week
exec ASLR done, waiting on Ubuntu security team for further testing
initial u-boot package FFe'd and in the archive - Ubuntu ARM team will use it for one of their deliverables this cycle!
multistrap, pdebuild-cross uploaded to Ubuntu; resolution of xdeb cross-Package downloading next week
no progress this week, will postpone work items
Action Items from this Meeting
Action Items from Previous Meeting
- slangasek to sponsor u-boot-linaro into maverick: DONE
- jcrigby, lool, hrw, slangasek to further discuss how to provide fast and frequent kernel builds : DONE
- slangasek to help wookey with FFe bug filing for pdebuild-cross in maverick: DONE
- slangasek, wookey to discuss multistrap: DONE
- dmart, npitre to work out verification of exec ASLR on armel: DONE
- jcrigby turn CONFIG_COMPAT_BRK=n on: DONE
- ask Ubuntu kernel team about dropping linux-fsl-imx51 from maverick: DONE
- dmart to email Keybuk about bootchart changes
- dmart to email npitre clarifying final status of oprofile kernel patch: DONE
- Implemented per-person burndown chart production. Output of which can be found at:
- Implemented the upstream tree checking mechanism. Looks like we have no @linaro.org submissions at the moment. Looks like a whitelist of Linaro contributors email address will need to be checked as not all people seconded to Linaro are using their @linaro.org address.
Pushed for MootBot-UK installation in #linaro-meeting, called in a favor from Daviey and it's now available.
Initial investigations into better tracking the requirements defined at the start of a cycle by partners and our own blueprint/work item system. Started putting down some thoughts/guidelines at JamieBennett/RequirementsTracking.
- Initial investigations into the initial requirements document and our actual deliverables. There is a need to see what was POSTPONED this cycle and what still needs to be done to ensure this work happens in 11.05.
- Chased IS for our public Mumble server. Promised a turn-around early next week.
Discussed hardware packs and future image building with AlexanderSack and LoicMinier. Producing one image and adding a hardware specific part afterwards at image creation time is the way forward but not sure whether this will hit for the 10.11 cycle.
Meeting with Matt Waddel to discuss the status of Versatile Express in Linaro. I'll be driving the effort to ensure that we have images this cycle for Versatile Express. Talked with LoicMinier, JohnRigby (uboot) and CodySomerville (build infrastructure) to get the wheels in motion to make this happen.
Talked with the new Ubuntu Release Manager, KateStewart, to co-ordinate an effort to track both projects better. Brainstormed around tracking bugs and related packages across projects in an automated way. Weekly call now to discuss the implementation of this and work out what format we want the output to be in.
- Work with the Ubuntu Release Manager to implement bug dependency reporting and highlighting.
- Implement tagging support in the work item tracker to enable work items to be 'tagged' against a technical requirement reference number.
- Complete analysis of requirements to deliverables for the 10.11 release cycle.
- Produce a blueprint and work-item guideline document to ensure that everyone is using the same format for 11.05.
- Cross toolchains working with xdeb. 12 source packages cross-built. Patching required for 4 of those.
- None this week
- Versatile Express u-boot tested - OK.
- Interface testing with Matthew Revell went well
Awaiting VersatileExpress board
- xdeb debugging
- Continue work item "Use cross-compiler binaries with xdeb"
- Test Tom Gall's "alip-ael" image on Versatile Express
- Learn more about "series"
- Find out how to debug xdeb internals
- Push cross patches to ubuntu/debian
- Produce new scripts
- Having xdeb get correct target-arch dependencies
- Being able to cross-build gstreamer reliably
- Fixed various multistrap bugs/imperfections so that it would make a cross-build chroot from maverick plus Marcin's toolchains (or emdebian toolchains)
- Once that was done pdebuildcross/xapt 'just worked'. Limitations yet to be determined...
- So far as I can tell both emdebian and marcin toolchains produce the same results when cross-building (limited testing) which is good
- Gstreamer doesn't cross-build for me (could not link libxml2 test program - investigating), but other stuff does.
- Synced with ppearse on his AEL/ALIP work
- Discussed getting more hardware with DaveM. Agreed to get 3 IGEPs
- Discussed adding pdebuild-cross to maverick with the release manager as a backup until xdeb gets similar functionality (xdeb can be used inside pdebuild-cross too, which is probably a good idea). And corresponding updating of multistrap.
- Started upstreaming multistrap patches. Some pushback 'because it's only for ubuntu' (actually mostly it isn't) so will go in a branch for now.
- Some pushback on getting multistrap fixes in upstream
- Still don't know enough python to do good work
- Actually build some things (co-ordinating with ppearse)
- file cross-build bugs accordingly
- Work on xdeb target-arch downloading
- Discuss multistrap vs debootstrap with Steve
- Get pdebuild-cross uploaded to maverick
- Unless we decide to use debootstrap instead: get multistrap in maverick updated/patched to actually work in maverick with current toolchains
- Do Debconf 10 doughnut session for ARM people with Sledge
- Added mx51 flavour. It compiles but no testing yet.
- Fixed the vexpress config. Still needs testing by mattman before I send a pull request.
- Wasted a bit of time trying to agree on git tree naming. In the end just waited for Loic and Nico to come to an agreement.
- Spent the better part of two days on ST-E u-boot work. I have not heard anything back and hope it was not a complete waste of time.
- Got a new Beagle xM patch from Steve Sakoman but have not been able to try it.
- No progress on ARM device tree work.
- Explained to Loic why basing 10.11 U-Boot release on current upstream is better than going all the way back to 2010.06.
- Took two days off but worked about four hours in the evenings that I was supposed to be off.
- Do a kernel stable release based on Nico's stable.
- Do a u-boot stable tentative candidate based on current upstream.
- Work on ARM device tree work.
- Ping Alexander and the folks at ST-E to see if they need any more help with U-Boot.
- Creation of a stable branch in the Linaro kernel repository with a couple fixes from the upstream stable tree and Dave Martin.
- Completion and testing of the exec ASLR security feature.
- Answer to a couple external inquiries about Linaro.
- The usual patch review and mailing list monitoring.
Platform/DevPlatform/Meetings/2010-08-25 (last modified 2011-03-11 11:32:00)
|
OPCFW_CODE
|
If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
So is it possible to interpret how someone is feeling based on their gait alone? That's exactly what scientists at the University of North Carolina at Chapel Hill and the University of Maryland at College Park have taught a computer to do. Using deep learning, their software can analyze a video of someone walking, turn it into a 3D model, and extract their gait. A neural network then determines the dominant motion and how it matches up to a particular feeling, based on the data on which it's trained. According to their research paper, published in June on arXiv, their deep learning model can guess four different emotions--happy, sad, angry, and neutral--with 80% accuracy.
Gain the confidence you need to apply machine learning in your daily work. With this practical guide, author Matthew Kirk shows you how to integrate and test machine learning algorithms in your code, without the academic subtext. Featuring graphs and highlighted code examples throughout, the book includes tests with Python's NumPy, Pandas, scikit-learn, and SciPy data science libraries. If you're a software engineer or business analyst interested in data science, this book will help you:
Artificial Intelligence (AI) is not considered just an emerging technology with a bright future; it is indeed a robust, growing platform, impacting several industries and touching numerous spheres of life. AI algorithms need enormous volumes of data to be trained appropriately, after which the system can not only decipher pictures, such as recognizing a dog is a dog or differentiating a chair from a table, it can also generate original images and create exceptionally impressive artistry of a quality associated with Picasso or Michelangelo. The AI models that make this possible have matured substantially over recent years; they produce excellent output for certain applications but need more refinement in other cases. Computer scientists have spent around two decades teaching, training, and building machines that can visualize the world around them, a normal skill that humans take for granted, yet one that is highly challenging to teach a machine, kudos to artificial intelligence for making it possible! Two major ground-breaking improvements in AI image processing have been facial-recognition technology in both retail and security, as well as image generation in all fields of art. The commercialized usage of facial-recognition technology is to improve sales and marketing of products, including efficient targeting of audiences.
With new technology available to us, we're inching closer to the end of the days when deciphering ancient languages is a painstaking task filled with frustration and confusion. Nifty machines following complex algorithms are helping researchers around the globe as they take on the often monumental task of understanding ancient texts and lost languages. Big Think reports that linguistic experts estimate there have been approximately 31,000 languages spoken throughout human history. Many of them are now dead and forgotten, but a new AI project may be part of the answer in how to decipher the writing of ancient languages. "While languages change, many of the symbols and how the words and characters are distributed stay relatively constant over time. Because of that, you could attempt to decode a long-lost language if you understood its relationship to a known progenitor language."
Patrick Winston, a beloved professor and computer scientist at MIT, died on July 19 at Massachusetts General Hospital in Boston. A professor at MIT for almost 50 years, Winston was director of MIT's Artificial Intelligence Laboratory from 1972 to 1997 before it merged with the Laboratory for Computer Science to become MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). A devoted teacher and cherished colleague, Winston led CSAIL's Genesis Group, which focused on developing AI systems that have human-like intelligence, including the ability to tell, perceive, and comprehend stories. He believed that such work could help illuminate aspects of human intelligence that scientists don't yet understand. "My principal interest is in figuring out what's going on inside our heads, and I'm convinced that one of the defining features of human intelligence is that we can understand stories,'" said Winston, the Ford Professor of Artificial Intelligence and Computer Science, in a 2011 interview for CSAIL.
An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand. In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators -- devices that mechanically control robotic systems in response to electrical signals -- that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it's activated, however, it portrays the famous Edvard Munch painting "The Scream."
In this episode, Lauren Klein interviews Michal Luria, a PhD candidate in the Human-Computer Interaction Institute at Carnegie Mellon University, about research that explores the boundaries of Human-Robot Interaction. Michal draws inspiration from the Medieval Times for her project to test how historical automata can inform modern robotics. She also discusses her work with cathartic objects to support emotional release. Michal Luria is a PhD candidate in the Human-Computer Interaction Institute at Carnegie Mellon University, advised by Professors Jodi Forlizzi and John Zimmerman. Prior to her PhD, Michal studied Interactive Communication at the Interdisciplinary Center Herzliya in Israel.
Home care is often singled out for being slow to embrace and implement technology, but as the demand for care services grows, providers are forced to think outside of the box when it comes to curbing caregiver turnover. San Francisco-based home care startup Honor understands this all too well, according to CEO Seth Sternberg. The company is using insights gleaned from machine learning to examine and address turnover internally and with its network of home care partners. Honor, which has raised $115 million since launching in 2014, teams up with independently owned and operated agencies by taking over caregiver recruiting, onboarding and training, in addition to day-to-day logistics. Currently, the company operates in Arizona, California, New Mexico and Texas.
|
OPCFW_CODE
|
Generic Scanner Driver for windows 10
I have a scanner UMAX astar 5600. The scanner stopped working after I upgraded to windows 10 from windows 7.
I managed to make the scanner work again by making use of Samsung scanner drivers(As suggested in a YouTube video).
Things worked great, but now that driver of the Samsung scanner is not available, so my scanner does not work.
The device shows up in the device manager as "USB Scanner" with a yellow exclamation mark. I am unable to add the device under devices and printers.
Is there a generic Windows driver that I can use?
Any help will be appreciated.
No drivers to be found. I can only find VueScan, which might work. This is commercial software, so perhaps buying a new scanner will be more economical.
https://support.microsoft.com/en-us/help/14088/windows-10-install-and-use-a-scanner
https://paperscan-scanner-software-free-edition.windows10compatible.com/
You can find my solutions reading the answers to a similar question.
Selecting Microsoft > USB scanner device for the driver has got the scanner working.
The USB scanner device listed under Microsoft seems to be the generic scanner driver I was looking for.
Selecting that driver for my scanner made it work like a charm.
More detail in your answer is needed, otherwise it is not a good answer.
Where do you select this? I have gone through device manager > found device > update driver > select from list of drivers and there is nothing like this under windows 10. So please can you tell me where you selected this
This is not a Windows solution, but Linux has a generic scanner driver in its kernel, and it is not hard to use at all if you can reboot your machine.
You may download a liveOS (no installation needed) from Fedora or Ubuntu, write the image to a USB drive with rufus.ie, and reboot your PC with the USB drive.
The liveOS even contains a "Simple Scan" tool that acts as the GUI. However, you may need a second USB drive (or to access a network drive) to save the scanned documents.
I have VueScan and it works great. I had an old Canon scanner, then I bought a used Fujitsu ScanSnap scanner and VueScan works with it; it even scans both sides of the paper in one pass. The only problem is the scanner won't work on USB 3 connections; I don't know if it's a problem with the program or the scanner.
This does not provide an answer to the question. Once you have sufficient reputation you will be able to comment on any post; instead, provide answers that don't require clarification from the asker. - From Review
I have encountered the USB 3 problem with audio interfaces. For me the simplest thing is to get an old or basic USB 2.0 hub; they are still available. That then downgrades the USB 3 port to a USB 2.
I used Viewscan and it worked well for the Neat scsa4601eu in Windows 10, and it lists all my network scanners too
Did you mean VueScan and not ViewScan? This answer is unclear.
Vuescan is a bit pricey. I already own the hardware and OS; why do I need to buy more just to run a device?
|
STACK_EXCHANGE
|
This NY Times article/interview was conducted by FiveThirtyEight.com’s Nate Silver and David Wasserman, House editor of the Cook Political Report. Note particularly this snippet:
And/or this slideshow from Slate showing the most gerrymandered congressional districts in the nation.
Here’s my favorite from Illinois:
It’s worth noting that by federal law, congressional districts have to be “contiguous.” That means that (apparently) you can have a sidewalk connecting two blobs and you can call that contiguous.
- What is gerrymandering?
- Why do political parties do this?
- Do the political parties that gerrymander egregiously like this ever get punished?
- Is this really legal?
- So, how can we measure “compactness”?
- Would our measure of “compactness” yield the same top gerrymandered districts that Slate came up with?
- What are the least gerrymandered districts in the U.S.?
- How do we find the area of irregular shapes?
- Time to get you maps, compasses, colored pencils, and rulers out, just like Lewis and Clark.
- Have students develop a metric or method to measure the “compactness” of a congressional district, particularly their own.
- Students may want to research a history of gerrymandering, or the gerrymandering that has taken place in their state. I say, go for it. It might cross over into History and Political Science, but that’s definitely a good thing when it comes to HS and MS students.
- The intended audience for such activities could range from politicians to political scientists to community board members. (aside: if you really want to rankle people in the community, I suggest looking at gerrymandering for school zones)
- Assign each group a congressional district and ask them to develop a case for why theirs is the most egregious example of gerrymandering using geometry.
- While there isn’t necessarily one true, correct solution, it seems to me you could come up with a perimeter-to-area metric to measure the compactness, sort of analogous to the surface area-to-volume ratio that dictates how quickly, say, ice melts (see the sketch after this list).
- Or the centrality of or distance between the population centers. Perhaps analogous to the center of gravity, you can imagine if you were to cut out the shape of a congressional district and add mass by population and location, the more unstable it is, perhaps the less compact it is. (note: this is not necessarily true, but worth investigating)
- Other potential metrics for students to measure: the distance between a population center and the borders of the district. Is there a way we could assign a penalty for districts which have conspicuous gaps or, as in the case in Illinois above, a tiny stretch of road (or something) connecting two large areas?
- Ask each group to come up with a congressional district map for their state that is less gerrymandered than it currently is and demonstrate why mathematically. You could certainly envision a scenario in which an actual politician or political scientist may want to sit in on a panel. Certainly, your students can come up with a more mathematically appropriate map of congressional districts than this:
- Note: as for the least gerrymandered congressional districts, that has to be a tie for first place between Wyoming, Montana, etc. because, well, you know….
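One way to make that perimeter-to-area idea concrete is the classic 4πA/P² score, which equals 1 for a circle and shrinks toward 0 as a shape gets stringier. The sketch below uses made-up coordinates and is only one possible metric, not the "right" answer to the activity:

import math

def area_and_perimeter(points):
    # shoelace formula for area plus summed edge lengths, for a closed polygon
    area, perimeter = 0.0, 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perimeter

def compactness(points):
    # 4*pi*A / P^2: 1 for a circle, lower for sprawling or stringy districts
    area, perimeter = area_and_perimeter(points)
    return 4 * math.pi * area / perimeter ** 2

square = [(0, 0), (4, 0), (4, 4), (0, 4)]        # compact
sliver = [(0, 0), (16, 0), (16, 1), (0, 1)]      # long and thin
print(compactness(square), compactness(sliver))  # roughly 0.79 vs 0.17

Students could trace district boundaries onto grid paper, read off approximate vertex coordinates, and feed them into something like this to rank districts.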
How else could we measure the compactness and/or the egregiousness of gerrymandering?
|
OPCFW_CODE
|
Statistics is a crucial part of our everyday life; it helps us uncover new things and get assistance on many issues. Moreover, it is the basis of many scientific breakthroughs; hence it is a branch of mathematics that should be taken seriously. In this article, we shall answer questions such as: what exactly is statistics? How does statistics correlate with ML (machine learning)? To get answers to such questions, follow along as we discuss this exciting topic.
Statistics helps us in so many situations; however, it could lead to drastic adverse effects if manipulated wrongly. Hence we have prepared this article to provide insight on how to learn statistics for machine learning.
Now let us look at learning tips to help you in your quest.
What statistics do
Statistics provide meaning to data. It is the act of making raw data have meaning.
It is of two categories:
- Inferential: provides ways to generalize from experiments performed on a small sample to conclusions about a larger population.
- Descriptive: used to convert raw data into information (a small sketch follows below)
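As a tiny illustration of the descriptive side (the numbers here are made up), a few lines of Python turn raw values into summary figures:

import statistics

raw_scores = [23, 19, 31, 25, 22, 27]  # hypothetical raw data
print("mean:", statistics.mean(raw_scores))
print("median:", statistics.median(raw_scores))
print("standard deviation:", statistics.stdev(raw_scores))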
How statistics correlate to machine learning
Machine learning primarily uses information derived from statistics to evaluate, interpret, and select predictive models.
Machine learning has statistics as its base for better functionality. I mean, you cannot expect to solve any real-world issue with only raw data to work with. Instead, you require information, and that is where statistics skills come in.
Many students find learning statistics challenging due to all the equations, concepts, and Greek notations. However, once you change your perspective and focus on learning statistics, you will find it easier to master.
Why should I study statistics?
Statistics plays a significant role in many organizations, from calculating profits and losses to estimating the future impact of a change made today. To achieve such complex analysis, however, you need a solid knowledge of statistics.
Machine learning and statistics projects
Once you commence an ML project, you had better be ready to apply statistics concepts; they go hand in hand. Here is how:
- It defines the problem statement
Finding the actual problem to solve is one of the hardest things in machine learning. Statistics gives your project purpose in society; without it, your machine is just a composition of thousands of lines of code.
Statistics helps you define the problem statement with ease. Using raw data collected from society, you can come up with an objective for your program.
However, the solution will not always be direct; sometimes, you will need to perform data mining and exploratory analysis of the data.
- Data exploration
To understand data better, you must have an in-depth understanding of values and relationships between them.
- Data cleaning
Once an experiment is carried out, you are left with a list of disorganized values that have no use in their initial state. You hence use statistics to “clean” the data and create a record of information that has meaning.
You also get to deal with data corruption, missing values, and data errors.
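A minimal sketch of that kind of cleanup with pandas (the column names and values are hypothetical) might look like this:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "height_cm": [170, np.nan, 165, 9999, 172],   # 9999 is a data-entry error
    "weight_kg": [65, 70, np.nan, 68, 71],
})
# treat impossible heights as missing, then fill missing values with simple summaries
df.loc[df["height_cm"] > 250, "height_cm"] = np.nan
df["height_cm"] = df["height_cm"].fillna(df["height_cm"].median())
df["weight_kg"] = df["weight_kg"].fillna(df["weight_kg"].mean())
print(df)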
Crucial statistics concepts
Statistics involves a couple of concepts that are pretty crucial to its functionality, including:
- Getting started
- Statistics distribution
- Data distribution and sampling
- Statistical experiments
- Nonparametric methods of statistics
Learning tips for statistics
The incorporation of statistics can be divided into two approaches, namely:
- Top-down – whereby you start by understanding the question and then solving it using statistical methods.
- Bottom-up – you start by learning the theoretical part first, then you implement it later.
If you find it hard to solve any statistical question, you can always look for help online. For example, if you are a student, you should use the hundreds of online statistics assignment help resources available. They make learning less of a hassle and more fun.
Learning statistics is not an easy feat; however, it is not an impossible one. Using the tips provided in this article, you can learn statistics quickly and incorporate it into machine learning to produce more effective programs.
|
OPCFW_CODE
|
[bug] cargo tauri dev fails if backend has dependency that can not compile to wasm32-unknown-unknown
Describe the bug
cargo tauri dev fails if the backend (i.e. src-tauri) crate has dependency that can not compile to wasm32-unknown-unknown. This results in errors such as
vswhom: C1056: cannot update the time date stamp field
clang: Failed to find tool. Is clang++ installed?
This only seems to occur when the frontend is Rust based, and so needs to compile to wasm32-unknown-unknown.
cl.exe: ToolExecError: Command "C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.41.34120\\bin\\HostX64\\x64\\cl.exe" ... with args cl.exe did not execute successfully
From what I have gathered, it occurs when the beforeDevCommand or beforeBuildCommand in tarui.conf.json try to build the frontend.
e.g.
"build": {
"beforeDevCommand": "trunk serve",
"beforeBuildCommand": "trunk build"
}
Removing these commands allows the crate to build, but obviously won't rebuild the frontend. However, adding them back in after does not result in the error, although this can be flakey.
I believe this is occurring because when the frontend builds, it is also building the backend dependencies. However, the frontend is trying to compile to a wasm32-unknown-unknown target, which the backend dependencies are not compatible with.
Reproduction
1. Create a new Tauri app with a Rust based frontend. I've been using cargo create-tauri-app --rc.
2. Add a non-wasm32-unknown-unknown compatible dependency to the backend src-tauri crate. I've been using zmq.
3. Build the app with cargo tauri dev; one of the errors above may occur, although this can be flakey.
To resolve:
4. Remove "build": { "beforeDevCommand": "trunk serve" } from tauri.conf.json.
5. Run cargo tauri dev again. This should build successfully, however will serve an old version of the app, if it exists.
6. Add "build": { "beforeDevCommand": "trunk serve" } back into tauri.conf.json.
7. Run cargo tauri dev again, which may now work successfully, however this can be flakey.
Expected behavior
cargo tauri dev should succeed even if the backend (src-tauri) crate has non-wasm32-unknown-unknown compatible dependencies.
Full tauri info output
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 128.0.2739.67
✔ MSVC: Visual Studio Build Tools 2022
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 18.17.1
- pnpm: 8.7.0
- npm: 10.8.3
[-] Packages
- tauri 🦀: 2.0.0-rc.10
- tauri-build 🦀: 2.0.0-rc.9
- wry 🦀: 0.43.1
- tao 🦀: 0.30.0
- tauri-cli 🦀: 2.0.0-rc.8
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
Stack trace
No response
Additional context
See also
This Discord discussion
vswhom-sys issue
I have only experienced this issue on Windows.
I am also able to trigger this by
Remove "build": { "beforeDevCommand": "trunk serve" } from tauri.conf.json.
Run trunk serve --port 1420 for the frontend.
Run cargo tauri dev again. This should result in one of the errors mentioned above.
This may indicate there is some weird interaction going on between the tauri build process and the trunk server.
This seems to have come from an internal dependency that, indeed, was not compatible with wasm32-unknown-unknown.
I'm wondering if it's possible to give a clearer error message indicating this?
If the beforeDevCommand is failing then we can't give it a better error message. And technically Tauri doesn't care if something can't be compiled to wasm so adding special handling for this is imo a bit weird too.
That said, there's also the issue of conflicting build caches with wasm frontends that (sometimes) require a different build.target-dir config for the frontend or tauri, and I assume this would also help with this issue.
imo we should investigate whether that config fixes both issues and then add it to the templates and the docs. Unless of course someone knows of an actual fix. 🤷
Maybe this would be good to raise with the Trunk team then?
I have my frontend and backend building to different target directories, so don't think that has anything to do with the issue in my case.
I think it would be good to document in some way within the Tauri ecosystem, though, because I think this may be a somewhat common issue. In my case the dependency that was giving issues was pretty deep leading to errors that were flaky and hard to decipher. In the end I went through the dependency tree crate-by-crate to figure out which one was giving the issue.
Having some sort of documentation about the errors mentioned, could give others a shortcut to figure out what is going wrong.
I have my frontend and backend building to different target directories, so don't think that has anything to do with the issue in my case.
Nevermind then.
Maybe this would be good to raise with the Trunk team then?
Maybe, yeah. Assuming they don't just forward to the compiler team.
|
GITHUB_ARCHIVE
|
CEO of the product?
After publishing I love Product Management, I received this great question on Twitter:
@paulcothenet for example "responsible for right product/right time and all that entails" - this seems much wider than PM as I understand it— Ben Pickering (@benpickering) December 7, 2014
When I transitioned to PM, I had no idea what I was doing (and neither did the people around me). So I started a frantic search to understand what the job was about. Good Product Manager, Bad Product Manager by Ben Horowitz was one of the first thing I read.
As I went through it, I experienced the following emotions in a very short amount of time:
- "This job sounds awesome"
- "This sounds pretty damn hard"
- "There's no way I can do all this" (to my defense, we didn't even have marketing or PR at the time)
If you're a junior PM, Horowitz's description of the PM looks much broader than your job description. I'm sure there are plenty of new PMs that are as puzzled as I was, so I hope this can help.
As a junior Product Manager:
- All this doesn't apply to you (yet). It was written in a specific context and your organization and current role might not be up for it.
- It's still incredibly useful in helping you grow into your role and evaluate your organization.
- Think "How can I become the CEO of the Product" rather than "I'm the CEO of the product"
A bit of context
This document wasn't written to be shared outside of Netscape. This was 1996 and people were not yet giving generic management advice on the internet. Horowitz wrote it while in Product Management at Netscape / AOL as a training document for his own team. I'd bet he didn't expect it to be so widely circulated. When he released it he expected that people (like me) would scrutinize it to no end, so he accompanied it with this word of caution:
"Warning: This document was written 15 years ago and is probably not relevant for today’s product managers. I present it here merely as an example of a useful training document."
Of course, given the profile of the author, this warning has been largely ignored.
Why Good Product Manager, Bad Product Manager may not apply to you
Back to @benpickering's question, there are several things worth noting about Ben Horowitz' doc:
- It was written for a specific context and a specific team
- It was written by someone who was in charge of that team and could make sure the reality of the job would match the job description
- The Product Management job spans a wide spectrum of company sizes, product types, seniority and organizations.
Here are some scenarios where you shouldn't read too much into it:
Reason #1: Your company is way more early stage than Netscape was in 1996.
In 1996, Netscape was a public company. Your company may have just one product. As a consequence, the CEO of your product is the actual CEO. In this situation:
- do whatever you can to help ship the CEO's vision. Take care of the details, write PRDs, prototype, collateral, documentation, do QA...
- try to grow your role into the one described in GPMBPM
- don't ask for those extended responsibilities. Prove you can do them by doing them. A good CEO will recognize this and relinquish his role over time (unless he runs into the Product CEO paradox)
Reason #2: You're starting in PM
Your boss doesn't expect you to do all this (just yet). In this situation, figure out what's expected of you at that stage and strive to grow into this ideal position.
As pointed out by Bubba Murarka in The Three Skills of a Great PM, start by nailing Product Execution (i.e. ship your first product) before worrying too much about Experimentation and Idea Validation
Reason #3: You're not really doing Product Management
You're doing Project Management. This is not necessarily a bad thing. I did this for half of my last job and I loved it. In this situation, you need to:
- Know what game you're playing and how you're being evaluated
- Get to a point where you can discuss with your management if this is the ideal situation for your company and you.
Reason #4: It's not clear what your "product" is
When you think about it, you're not quite sure you're working on a "product". There could be several reasons:
- You're starting in PM and your manager is getting you started with a subset of the problem, usually a small feature. That's all fine, just make sure you know who's owning the larger product and align with her strategy. Also make sure not to get stuck there too long and to grow into your role.
- You're actually doing Project Management
Reason #5: Your product organization is messed up (more than usual).
Sales and marketing (or the CEO) calls all the shot and you're just fighting fires. No one in product/engineering is talking to customers. In this situation:
- Figure out if you can fix it. Maybe your organization is just waiting for someone like you. Can you take the lead, talk to customers and figure out what they want?
- If you're actively prevented from talking to customers or if you're told "we know better", run away.
Why you should still apply it
That said, that piece is still pretty damn fantastic. Despite the word of caution from its author, I think it's still relevant today and worth reading at least once every month. Why?
It's a great way to evaluate your organization
If you're an individual PM, this document is a great way to judge the quality of your product organization.
- Is your organization striving for the Good Product Manager items, or do you hear the Bad Product Manager ones too often?
- If you're not responsible for right product / right time, who in your organization is? If you can't identify that person, your company has a problem. Can you grow into that role?
- If your manager thinks this doesn't apply, are there clear expectations for your role?
As a junior PM at an established company, there's only so much you can do to improve your organization. Problem is, when you get started, you don't know what a good one looks like. If your organization looks too far away from the one described here, it's probably not you, it's them. Time to look for a better one.
It's a great ideal to strive for
As a junior PM, this might look overwhelming, but it's close to the current definition of great product leaders. No one in their right mind expects you to be there on day one. But if you want to become great, this is where you need to go.
One thing that's really hard about PM is that you need to sweat the small stuff (helping your team, fighting bugs, writing release notes and documentation) while never taking your eyes off the prize. Reading Good Product Manager, Bad Product Manager regularly is a great way to avoid getting stuck at a local maximum.
One way this can f**k you up (and a reason why I put it this late in my PM 101 list) is by making you try to do too much, on day one. Don't even try! (But take a look at Ken Norton's checklist for that.)
As a junior PM, I think it's more useful to:
- count the number of Good Product Manager boxes you check
- strive to grow that number every quarter. Improve the quality of your work. Pick up additional responsibilities. Do more without asking or being asked.
- measure your progress over time
CEO of the Product?
A final word of caution: Perhaps the most well-known -- and controversial -- bit from GPMBPM is that the "Product Manager is the CEO of the Product". There has been some lengthy discussions on Josh Elman's piece about A Product Manager's Job and its comments. I think it's worth a few more clarifications.
As a junior PM, you will definitely not have authority over anything. You'll have to influence without it. You'll be owning things because people realize they wouldn't happen without you, not because you've been appointed to own them. Get your first few wins under your belt by doing everything it takes to ship your first product.
As you're starting out, I'd say a better way to read this is "How can I become the CEO of the product? What do I need to do to get there?" Like a CEO's job, PM is a hard job. Don't try to do too much at once, but always strive to expand your scope. I hope this helped!
NOTE: There's an expanded version of GPMBPM (attributed to David Weiden) floating out there that adds a few more point. Very worth reading too.
Thanks to Kevin, Léo and Francis for attempting to make sense of my ramblings and to Ben for letting me share his tweets. Cartoon by Leo Cullum for the New Yorker.
|
OPCFW_CODE
|
'''
Base class for specialized service pages.
Copyright (c) 2008, 2009 Carolina Computer Assistive Technology
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
'''
import os
class BasePageController(object):
def __init__(self, id, module):
# id assigned to this page by the browser extension
self.id = id
# service module that created this instance
self.module = module
# observer that receives responses
self.observer = None
# if the echo service is started or not for this page
self.started = False
def setObserver(self, ob):
'''
Store an observer that will get its pushResponse method invoked by
this instance.
@param ob Object
'''
self.observer = ob
def pushRequest(self, cmd):
'''
Invoked by an object when a new request arrives for this page.
@param cmd Dictionary with information about the request
'''
if cmd['action'] == 'start-service':
# start service the echo service for this page
self._onStart(cmd)
elif cmd['action'] == 'stop-service':
# stop the echo service for this page
self._onStop(cmd)
elif self.started:
# handle an echo service command
self.onRequest(cmd)
# drop anything else that comes through at this point
def pushResponse(self, action, **kwargs):
'''
Sends a response to the observer.
@param action String name of the action on the response
@param kwargs Dictionary of keyword arguments to be included in the
response
'''
cmd = {}
cmd['action'] = action
cmd.update(kwargs)
self.observer.pushResponse(self.id, cmd)
def _onStart(self, cmd):
'''
Handles the start of this service.
@param cmd Dictionary of arguments for service start in the outfox
protocol
'''
if self.started:
# make sure we haven't already started, if so, send an error
self.pushResponse('failed-service',
description='Service already started.')
else:
# let subclass handle start
klass = self.onStart(cmd)
if klass is not None:
# send the service started message with the JS extension
self.pushResponse('started-service', extension=klass)
self.started = True
def _onStop(self, cmd):
'''
Handles the stop of this service.
@param cmd Dictionary of arguments for service stop in the outfox
protocol
'''
if not self.started:
# make sure we have started, if not, send an error
self.pushResponse('failed-service',
description='Service not started.')
else:
# let subclass handle stop
self.onStop(cmd)
# tell the requester that the service is stopped
self.pushResponse('stopped-service')
# clean up the observer
self.observer = None
self.started = False
def onStart(self, cmd):
'''
Override to handle a start request.
@param cmd Dictionary of arguments for service start in the outfox
protocol
@return JS methods to add to the outfox.<service name> object if the
service is ready for use, or None if the the subclass will send the
service started response at a later point
'''
pass
def onStop(self, cmd):
'''
Override to handle a stop request.
@param cmd Dictionary of arguments for service stop in the outfox
protocol
'''
pass
def onRequest(self, cmd):
'''
Override to handle any request once started.
@param cmd Dictionary of arbitrary command values
'''
pass
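# --- Illustrative sketch only (not part of the original module) ---
# A minimal subclass showing how onStart/onStop/onRequest are intended to be
# overridden. The service name, JS extension string, and 'echo' action below
# are hypothetical.
class EchoPageController(BasePageController):
    def onStart(self, cmd):
        # returning a non-None JS snippet marks the service as ready immediately
        return 'echo: function(text) { /* forwarded to this controller */ }'

    def onStop(self, cmd):
        # release any per-page resources here
        pass

    def onRequest(self, cmd):
        if cmd.get('action') == 'echo':
            # bounce the text straight back to the page via the observer
            self.pushResponse('echo-reply', text=cmd.get('text', ''))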
|
STACK_EDU
|
Progress report
#worklet branch
Working on bug fixes, the addition of AudioWorklet support, and performance improvement in the worklet branch.
So far,
AudioWorklet in AudioContext works fine but does not work in OfflineAudioContext.
-- addModule() is OK, but new AudioWorkletNode does not recognize the added module.
Current status: offline rendering is only possible with ScriptProcessorNode.
Bug: the audio is not played until the end in case of slow-down playback
-- Adding silence to all the sources works at the cost of memory consumption
-- Lowest playback speed is set to 50%, and silence is added to cover the stretched playback.
If the sources are four tracks of 50 MiB (internally decoded to PCM),
the total memory required becomes 100 x 4 + the original 50 x 4 = 600 MiB or more
Trying to reduce UI updates by static rendering, but the code became longer
-- I am trying to apply sub-component, update by a prop, style
Offline rendering (for fast processing, recording, and export to file) with AudioWorklet is now working on Chrome.
Processed audio samples are stored in the AudioWorkletProcessor and transmitted as a large message (over 50MB)
to the AudioWorkletNode. Firefox crashes when the message is transmitted.
AudioWorkletNode is undefined on Safari. That is the reason why a blank screen appears at startup.
I prepared App.js and AppNoWorklet.js, one of which will be chosen automatically by the browser.
In this app, the end of playback notice is fired first in ScriptProcessorNode or AudioWorkletProcessor.
Now silence is added to only one source. It turned out after some experiments that it is not necessary to add silence to all sources.
(Both in the case of AudioContext and OfflineAudioContext).
Also tried with OscillatorNode (never-ending stream), but it did not work as expected.
Good news.
OfflineAudioContext's output buffer is usable in the case of AudioWorklet. (Not in the case of ScriptProcessorNode.)
oncomplete(e) { play or export (e.renderedBuffer); }
works as described in spec.
So, the recording function in AudioWorklet is not used anymore and the worklet is offloaded and Firefox does not crash:-)
Still, e.renderedBuffer is garbage in the case of ScriptProcessorNode.
In OfflineAudioContext, ScriptProcessorNode works only as a source or the last node.
According to my experiment, the output stream from ScriptProcessorNode is garbage if the node reads input.
That causes the problem with e.renderedBuffer. Then I had to implement recording in ScriptProcessorNode.
Status of the app on major browsers:
Platform | Browser | Status
-------- | ------- | ------
Android | Chrome | Live and offline playback doesn't work, with or without AudioWorklet
Android | Firefox | Live and offline playback work without AudioWorklet
iOS | Safari | AudioWorkletNode is not defined
iOS | Chrome | AudioWorkletNode is not defined
Desktop | Chrome | Live and offline playback work
Desktop | Firefox | Live and offline playback work
Prepared a check script to find if AudioWorkletNode, context.audioWorklet.addModule() are defined.
https://goto920.github.io/demos/simple-mixer/check-audioworklet.html
According to the results, there are three types of rendering engines.
Firefox
Chrome: Ubuntu (Linux), Windows 10, macOS
Safari: macOS Safari, iOS Safari, iOS Chrome
Details are in docs/
@goto920
I made a commit that enables user microphone record, check it out!
@dj-fiorex
Thank you for the code. The recording function works on my Linux PC, but the sound is of somewhat low quality.
The sound from the speaker was not recorded. So, I think audio setup is needed including echo cancellation.
I am integrating the recording function into src/App.js in the main branch. The audio setup and callback function
for mediaRecorder is moved to a handler method when the mic icon is pressed.
In playAB(), only mediaRecorder.start()/ stop(), and playback of recorded track will be added.
For latency, comparing the recording of the sound from the speakers and source may be used for good estimation.
|
GITHUB_ARCHIVE
|
Currently there is only a hotkey to fold/unfold ALL headings.
It would be nice to have a hotkey for each level.
So if I press the “Fold/unfold to level 3”, it will fold levels 3, 4, 5 and 6, but not 1 and 2.
Speaking of Onenote, I think Word’s Outline Mode could be an inspiration in the ability to only show or hide list items by indent level: Filter lists by indent level. Not sure how Obsidian might do this as it depends on styles. Perhaps a plugin could take advantage of headings under the hood to accomplish this, although I have nothing to base that on.
this is more for the style of bullets (to replace #)
but unfortunately all the folding functionality cannot be controlled by this snippet I made, and my skill level doesn’t allow me to attempt a plugin to add this functionality (not sure if it’s even possible)
This feature is the only thing I have to go out of Obsidian to use my notes. If folding evolves I think it would make Obsidian so much better.
I think it would even be enough (if possible) to use only one hotkey for folding: by pressing this hotkey multiple times, it always folds one header level more, starting with the smallest header in the note. So you press it once, H6 folds; you press it twice, H5 folds; and so on.
A different hotkey would be assigned for unfolding the same way.
Obsidian is great, but with hundreds of notes (= nodes in my case) it is overwhelming: without (un)folding I'm exposed to a lot of textual information, much more than the 5-7 objects acceptable to human working memory
In mind-maps I can (un)fold a given node only one level down. In Obsidian it depends on the former heading/indent state, but “unfolding” unfolds all the levels down.
My suggested reference model in Mind Manager, similar to org-mode mentioned by @santi.
I think it may be nice to either have hotkeys or modifier clicks on the arrows to fold all levels beneath current, unfold all levels beneath current, fold all levels except those between current and its currently most unfolded sub-level, fold all except current, unfold all levels to the depth of current, etc…
I am sure there are some other good ones I don’t have here. Personally, I like modifier clicks on arrows. For example alt-click arrow may be good for fold all except this level. Ctrl-click arrow feels like unfold sub-levels could work. Some of the other more nuanced ones may benefit from hotkeys.
If this is already a topic, I apologize. I promise I did do a few searches. Thanks for everything.
the option to fold/unfold only the lists, a default setup for folding status every time I open a link (like always have unfolded H1 and H2 but the rest of headings and lists folded), and also the bug that when you have checklists folded but you check a checkbox from a different list, all checkboxes unfold.
Maybe a plugin that addresses this issue on the outline section could do, but it should also include the option of folding only lists (and checkboxes).
However, this thread is about being able to quickly fold and unfold to a certain level. It is also a thread where some people’s requests for additional types of hot keys as they relate to folding have been merged. See here: Fold/Unfold headings up to specific level hotkeys - #12 by I-d-as . I make this distinction to avoid this thread being closed with this newly implemented feature that you are requesting.
I hope this is not interpreted that I am expecting this to be closed or that I am annoyed my request was merged here. I am just making it clear. I think the moderators do an excellent job on this forum and very much appreciate them!
Found this feature request while searching to see if heading-level folding shortcuts had already been requested (e.g. CTRL+1, CTRL+2… CTRL+6). However, I see Santi has done an excellent job of recording the many features associated with folding:
|
OPCFW_CODE
|
I'm using a CGA to VGA converter with digital RGBI.
The cable which connects to the adapter is DB9 > DB15 (VGA) and does not have the intensity line connected.
For digital RGBI this loses 8 colors - all dark colors are displayed as light.
My friend made me a cable with intensity connected, but the signal is weak and I must turn up brightness, contrast, and color adjustments to get at least a stable picture.
Q is: does any device exist to increase the signal strength so the correct image is displayed?
Thank you for all.
Note that the development board costs about US$150 to US$250 depending on where you get it from.
And then you'll need to learn how to program the device. That may require additional hardware.
There was a link to the project's source at raphnet.net.
Perhaps ready to use GBS8200 will be OK for you?
Definitely worth a try, given the low price.
So, I decided to ask again about a device to increase signal strength.
Neither of the suggested solutions has solved the flickering with interlaced modes.
And my CGA2VGA converter removes the flickering - it outputs a stable picture, but the picture is weak.
So, Q again: does any device exist which increases signal strength?
Have you looked at your VGA output with an oscilloscope to verify the voltage levels are too low? I'm not fully convinced that's the real problem.
Firstly I would address proper RGBI/CGA to analogue/VGA conversion - this may remove the necessity for a signal amplifier - check this circuit http://www.electroschematics.com/377/cga-scart-adapter/ .
Ok, to explain:
I own a CGA2VGA converter which converts the signal as analog RGB instead of digital RGBI.
Its cable has these connections (intensity not connected)
My friend made me a similar cable like the one in the link - intensity connected, with regulation of R, G, B, I, H, V...
The converter has built-in deinterlacing and always provides a stable picture - the original cable displays 8 colors (because intensity is missing) and a clear, nice picture, but if I want 16 colors, I must connect the cable from my friend - adjusting the controls also affects the deinterlacing, so there is not a big range to adjust, and the signal stays weak...
So, I think it requires a signal strength increase.
Please correct me if I'm wrong.
2 Si diodes may give you approx 2.4V drop, - TTL level usually is around 3.6 - 4V - it may be problem with too high signal level not too low - Use oscilloscope and verify level of signal - oversteered amplifier (saturated) may give results that can be confused with too low signal level if some form of AGC is involved.
Personally i would prefer to use different CGA DAC than yours - if you don't have oscilloscope simulate your DAC in Spice - you will see why.
The problem is that I'm not able to work with an oscilloscope, and in electronics I'm hopeless - I don't know anything...
Regarding the other adapters:
All the other adapters flicker in interlaced mode - my adapter removes the flickering and has one more benefit.
I'm using the CGA2VGA for my Commodore 128; I'm a programmer and user.
It's used for the VDC circuit output, which is digital RGBI; we are trying to push it up to 1280 pixels horizontally instead of the normal maximum of 800 pixels. A normal CRT for the C128 cannot display that - it tops out around 800 pixels - but my adapter gives its best results at 1280x200...
So, I don't want to use a different one.
How can I do any of this without any electronics knowledge?
What I really want to ask: does any device exist that adjusts VGA signal brightness, saturation, hue, colors, R, G, B and so on, without converting the signal into something else?
Many adapters exist that do this, but in 99.9% of cases they convert VGA to composite or S-Video.
I want an adapter that only adjusts the signal and changes nothing else.
I have an Arduino and enough parts sitting around to create the resistor/diode circuit in post #14.
[Attachment 41134]
I used an 82 ohm resistor as the load on the output (closest I had to 75). I used 1N5817 Schottky diodes (that's all that I have on hand). I checked the resistors with my multimeter and they were within about 1 percent. I used a 100 MHz scope to view the output.
Using 5 volt inputs I get the following outputs:
RED  INT  OUTPUT
----------------
 0    0   0.01 V
 1    0   0.26 V
 1    1   0.60 V
 0    1   0.28 V
This Arduino has the option to run at 3.3V where I get:
RED  INT  OUTPUT
----------------
 0    0   0.01 V
 1    0   0.21 V
 1    1   0.40 V
 0    1   0.18 V
All I can say about 3.3 V vs 5 V:
Copied from the manual:
AC Input: 110V-240V 50-60Hz
DC Output: DC 3.3V, 1.5A, center positive
Input RGB Signals:
RGB: 0.7 Vp-p 75 ohm
H sync/ V sync: 2 Vp-p 75 ohm
YCbCr: 1 Vp-p(Y), 0.7 Vp-p 75 ohm(CbCr)
So, does that help, or is the P/2 DA1 the solution?
|
OPCFW_CODE
|
By Varun Divakar and Ashish Garg
In this blog, we will be discussing an important concept in time series analysis: the Hurst exponent. We will learn how to calculate it with the help of an example. First, let us understand what the Hurst exponent is.
Hurst Exponent Definition
The Hurst exponent is used as a measure of long-term memory of time series. It relates to the autocorrelations of the time series and the rate at which these decrease as the lag between pairs of values increases.
Hurst Value is more than 0.5
If the Hurst value is more than 0.5 then it would indicate a persistent time series (roughly translates to a trending market).
Hurst Value is less than 0.5
If the Hurst Value is less than 0.5 then it can be considered as an anti-persistent time series (roughly translates to sideways market).
Hurst Value is 0.5
If the Hurst value is 0.5 then it would indicate a random walk or a market where prediction of future based on past data is not possible.
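As a tiny helper (my own, not from the article), the three cases above can be summarized in code:

def interpret_hurst(h, tol=0.05):
    # Rough interpretation of a Hurst exponent value, per the thresholds above.
    if abs(h - 0.5) <= tol:
        return "random walk (future not predictable from the past)"
    return "persistent / trending" if h > 0.5 else "anti-persistent / sideways"

Strictly, the article's threshold is exactly 0.5; the tol parameter is only there to acknowledge estimation noise.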
How To Calculate The Hurst Exponent
To calculate the exponent, we need to divide the data into chunks. For example, if you have the return data of BTC/USD for the past 8 days, you repeatedly divide it into halves as follows (an example with 8 observations, for illustrative purposes only1):
1 The length of the subseries in practical applications is usually much longer and affects the mean and standard deviation of the R/S statistic.
Then we divide the data into 3 different divisions as follows (a minimal Python sketch of this splitting follows the list):
- Division 1 - one chunk of 8 observations
- Division 2 - two chunks of 4 observations each
- Division 3 - four chunks of 2 observations each
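As a minimal illustration of the splitting (the numbers and variable names are mine, purely for illustration; assuming NumPy is available):

import numpy as np

returns = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04, 0.00, -0.03])  # 8 illustrative observations

# Division 1: one chunk of 8, Division 2: two chunks of 4, Division 3: four chunks of 2
divisions = {size: [returns[i:i + size] for i in range(0, len(returns), size)]
             for size in (8, 4, 2)}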
After dividing the data into chunks, we perform the following calculations on each chunk:
1. First we calculate the mean of the chunk, with say n observations:
M = (1/n) [ h(1) + h(2) + ... + h(n) ]
2. Then we calculate the standard deviation (S) of the n observations:
S(n) = STD( h(1), h(2), ..., h(n) )
3. Then we create a mean-centered series by subtracting the mean from the observations:
x(1) = h(1) - M
x(2) = h(2) - M
...
x(n) = h(n) - M
4. Then we calculate the cumulative deviation by summing up the mean-centered values:
Y(1) = x(1)
Y(2) = x(1) + x(2)
...
Y(n) = x(1) + x(2) + ... + x(n)
5. Next, we calculate the Range (R), which is the difference between the maximum value and the minimum value of the cumulative deviations:
R(n) = MAX[ Y(1), Y(2), ..., Y(n) ] - MIN[ Y(1), Y(2), ..., Y(n) ]
6. And finally, we compute the ratio of the range R to the standard deviation S. This is also known as the rescaled range. (A short Python sketch of these per-chunk steps follows below.)
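As a minimal sketch of steps 1-6 for a single chunk (my own helper name, assuming NumPy; not code from the original article):

import numpy as np

def rescaled_range(chunk):
    m = chunk.mean()              # 1. mean of the chunk
    s = chunk.std()               # 2. standard deviation of the observations
    x = chunk - m                 # 3. mean-centered series
    y = np.cumsum(x)              # 4. cumulative deviations
    r = y.max() - y.min()         # 5. range of the cumulative deviations
    return r / s                  # 6. rescaled range R/S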
Once we have the rescaled range for all the chunks, we compute the mean rescaled range of each Division and note it along with the number of samples in each chunk of that Division.
Next, we calculate the logarithmic values for the size of each region and for each region’s rescaled range.
The Hurst exponent ‘H’ is nothing but the slope of the plot of each region’s log(R/S) versus each region’s log(size). Here log(R/S) is the dependent or the y variable and log(size) is the independent or the x variable:
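Putting the pieces together, here is a self-contained sketch of the fit (again with my own illustrative data; the slope returned by the least-squares fit is the estimate of H):

import numpy as np

def rescaled_range(chunk):
    # R/S statistic of one chunk, as in the steps above.
    y = np.cumsum(chunk - chunk.mean())
    return (y.max() - y.min()) / chunk.std()

returns = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04, 0.00, -0.03])
sizes = [8, 4, 2]
mean_rs = [np.mean([rescaled_range(returns[i:i + n]) for i in range(0, len(returns), n)])
           for n in sizes]
hurst, intercept = np.polyfit(np.log(sizes), np.log(mean_rs), 1)  # slope of log(R/S) vs log(size)
print("Hurst exponent:", round(hurst, 3))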
This Hurst exponent value indicates that our data is persistent, but we have to keep in mind that our data set is too small to draw such a conclusion. For example, if you want to calculate the Hurst exponent in Python using the ‘hurst’ library, it requires you to provide at least 100 data points.
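For reference, here is roughly how the ‘hurst’ package mentioned above is called (a hedged sketch: I am assuming its compute_Hc helper and a synthetic random-walk series of 1,000 points, since the library needs at least 100):

import numpy as np
from hurst import compute_Hc

series = np.cumsum(np.random.randn(1000)) + 1000   # synthetic price-like random walk
H, c, data = compute_Hc(series, kind='price', simplified=True)
print("Hurst exponent:", H)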
We hope you have learnt how to calculate the Hurst exponent from this blog. In our advanced course on cryptocurrencies, we have demonstrated how the Hurst exponent, along with another technical indicator, can yield optimized trading signals.
Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.
|
OPCFW_CODE
|
WIP: Prepare releases, including pre-built binaries
Fixes #207
Uses cargo-dist to do the builds for selected targets and then create the release, uploading the source tarball as usual plus the binary assets.
Adds a short RELEASING.md document to document the process.
Here is the first attempt at a release build in my own repo:
https://github.com/andrewdavidmackenzie/wild/actions/runs/12099771708
It's missing something that needs to be installed for musl, so it fails; I'll try to fix that.
warning<EMAIL_ADDRESS>Compiler family detection failed due to error: ToolNotFound: Failed to find tool. Is musl-g++ installed?
warning<EMAIL_ADDRESS>Compiler family detection failed due to error: ToolNotFound: Failed to find tool. Is musl-g++ installed?
On my machine I (seem to - I never knew!) have musl-gcc installed. I guess it needs that?
Is this for running tests? There are certainly some tests that check that static linking with musl works
Do you want to try this out on your fork to see if it works as expected? You might need to temporarily enable actions on your fork, since I think they're disabled by default.
It's trying to do the release build in my fork's actions.
I'll try to get it passing and a release created in my fork before removing the draft/WIP status.
cargo build --target x86_64-unknown-linux-musl
fails on my local machine with the same error.
error occurred: Failed to find tool. Is musl-g++ installed?
I guess you installed it manually, but I'm confused why I don't see it in ci.yml
I see symbolic-demangle is referenced in cackle.yml
Without understanding all this, it seems that ci.yml installs bubblewrap
- run: sudo apt install bubblewrap lld
which is referenced from cackle.yml as the sandbox...
symbolic-demangle doesn't support building for musl. I asked them about that:
https://github.com/getsentry/symbolic/issues/880
A bit of a rat-hole...
We could dodge the whole issue and reduce the binaries we would build as part of a release...
Dropping musl is simple and would avoid this problem, even if releasing all binaries.
Releasing only linker (gnu and musl) would also avoid it.
Apart from musl-based systems like Alpine, what other reason is there to release a musl binary?
I tried reducing build to just wild, but it has a dev-dependency on linker-diff, and I think that's triggering the linker-diff build on musl, which fails.
So then I removed musl also.
If we want to re-add musl, even if just for wild, we'll need to resolve the build of linker-diff, or find a way to avoid it in release builds of the wild binary (tests aren't run, so strictly speaking it's not needed, but AFAIK cargo doesn't distinguish between build dependencies and test dependencies in the dev-dependencies list...).
Successful release build in a GH Action in my repo: https://github.com/andrewdavidmackenzie/wild/actions/runs/12117386105
Here is the release it created: https://github.com/andrewdavidmackenzie/wild/releases
Once you have a look and we merge this, I might delete that release to avoid confusion.
Looks good to me, although there's merge conflicts
Fixed
|
GITHUB_ARCHIVE
|
Rich Javascript UI Frameworks, EXT, DOJO and YUI
Disclaimer & Long Winding Question Approaching
I know topics like this have been beaten to death here so suffice to say I'm not asking about which framework is better, I don't really care about opinions on the better framework. They all do pretty amazing things.
The Question
Given that I have an existing web application, made of mostly regular HTML+CSS (jQuery where needed), which framework is best for integrating a few "rich" pages into an otherwise regular stream of HTML?
Reason
I am trying to bring our proven application into the realm of awesome, desktop-like UI, but I want to do it one small piece, one screen at a time. For our users, support personnel, and especially me, taking it slow is the only option.
Also, with our branding requirements having a framework that just takes over the viewport isn't an option, it has to play nice with other HTML on the screen.
Imagine the example being a rich user manager in an otherwise plain HTML+CSS environment.
Experience Thus Far
Dojo + Dijit
Pros: The new 1.5 widgets plus the claro theme is the cure for what ails us. Dojo seems to be able to use markup to create the UI which is very appealing and has a fair amount of widgets.
Cons: Holy bloated lib Batman! Dojo seems to be enormous and I have to learn a custom build system to get it to stop requesting 4,800 javascript files. This complex empire of Javascript makes me believe I won't be able to create much that isn't already there.
ExtJS
Pros: Amazing set of widgets, does everything we could possibly want. Seems quick, every version brings new improvements.
Cons: I'm not sure how to use this without the entire display being EXT. I'm still building a web site, so I would prefer something that could integrate into what we already have. Some pointers here would be great.
YUI
Pros: Well, it's Yahoo isn't it? AWS console is downright wicked. Plenty of support and a giant community.
Cons: Well, it's Yahoo isn't it? AWS console is the only wicked thing. Complex for someone who's used to jQuery.
Help Me
I am willing to accept experience, links to ways to solve problems I've outlined, new toolkits (even though I'm pretty sure I've seen most by now) or even just advice.
Only jQuery can save you. :)
I want to use jQuery UI so badly. But the style is weird, and there is no layout manager or toolbar. Piecing those components together is possible, but I was hoping not to have to.
Regarding ExtJS, it's pretty easy to start it in an existing div with something like this:
Ext.onReady(function() {
    // Render an Ext panel into an existing div instead of taking over the whole viewport
    var App = new Ext.Panel({
        layout: 'fit',
        items: [ /* your widgets */ ]
    });
    App.render('div-id');
});
The App panel can then have its own layout manager.
I took your advice and spent a bit more time trying to mess with EXT. I have gotten some basic stuff in a div which is improving my opinion of the library. Is there another way to learn ExtJS without just deconstructing demo pages?
Starting from the samples and checking the docs for other things you can do with each widget is the best way, I think. You can find even more good samples in the 'User Extensions and Plugins' part of the forums. Every time you want to do something, start by searching the forum, someone else might have done a plugin from which you can learn.
There's also Saki's page, for even more documented samples: http://examples.extjs.eu/
You can buy the ExtJS In Action ebook from manning, but digging into the code is the best way to learn.
I concur with these guys. I'd recommend learning the nuances of the different layout managers first to get your feet wet with Ext. By using these technique above, you be able to mix different libraries across different sections of your page.
+1 for ExtJS In Action. Essential reading to get you up and running.
This is definitely the best answer here. Although, as a jQuery user I'm going to spend a bit more time there just to see what comes out of it. Hopefully when I'm done I'll have a jQuery and an EXT version of my UI... decision should be clear at that point.
This might be useful if you're familiar with jQuery, but not yet familiar with YUI 3 syntax: http://www.jsrosettastone.com/
Each of the libs you listed is excellent. When embarking on a larger scale project, the quality of a lib's documentation, community, and commitment to support become more relevant.
This is a great page (bookmarked) but I'm speaking more to the UI side of things, not just the core.
"When embarking on a larger scale project, the quality of a lib's documentation, community, and commitment to support become more relevant".well said man, just well said.
With Dojo, keep in mind that outside of dojo base, it only ever loads what you tell it to. But yes, without a built layer, that means it could easily end up requesting 50 JS files at startup for a large application using a bunch of widgets.
There are several pages in the reference guide documenting the build script: http://www.dojotoolkit.org/reference-guide/build/index.html
Rebecca Murphey wrote a nice blog post outlining an example app and build profile that you might find illuminative: http://blog.rebeccamurphey.com/scaffolding-a-buildable-dojo-application
If you get stuck, there's likely to be people in the Dojo IRC channel that can help.
RE ExtJS: I'm not sure what your exact situation is, but keep in mind that if you're intending to use it in commercial non-open-source software, you need to pay for licenses: http://www.sencha.com/store/js/
I'm a little curious as to why you think the size / number of requests is specifically an issue with Dojo though. I haven't used the others, but I'd expect it to be somewhat of a potential concern with any of them.
|
STACK_EXCHANGE
|