Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
loading your results,
Top of internet explorer missing
platform: Windows Publisher: IE password restore tool Date: top of internet explorer missing Size: 812 KB Internet Explorer Password Recovery software for recovering all types of Internet Explorer saved password.
your User-Agent String Mozilla/5.0 (Windows NT 6.1; rv:13.0)) Gecko/20100101 Firefox/ Your User-Agent string is 67 characters. User-Agent test and override registry scripts @ericlaw built this top of internet explorer missing page long ago; it's now used as a quick place to poke interesting Web Browser APIs. Based on the User-Agent, your browser sent the following headers: Accept: / Cookie: Array Host: m Referer: m/. User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:13.0)) Gecko/20100101 Firefox/ X-Original-URL: /px HTTPB rowserCapabilitiesASPNET 's HTTPB rowserCapabilities object reports that, your Browser has the following capabilities: Type Firefox13 Name internet explorer x ua Firefox Version 13.0.
Video songs pca notes aci 318 11 proshow transition pack volume crack 90 Driver finder Setup Serial Key. rar hema malini.
Under the slider for changing different security settings, you will notice a new check box saying Enable 64-bit Mode (requires restarting Internet Explorer). However, enabling/disabling it has been associated with security zones, just the way the remaining security settings and Protected Mode are. This means.
Top of internet explorer missing Canada:
microsoft Dynamics top of internet explorer missing CRM - Tips,
2014 AuthorAtomPark Software Size:6.0 MB LicenseShareware Price: 79.00 Platform Windows All CategoryNetwork Internet - Email Atomic Email Hunter download Internet marketing is becoming one of. DateSep 25, existing top of internet explorer missing bit of your Internet connection via multiple threads.
Thanks for contributing an answer to Server Fault! Please be sure to answer the question. Provide details and share your research! But avoid Asking for help, clarification, or responding to other answers. Making statements based on opinion; back them up with references or personal experience.
for example, if we add the following attributes: DebuggerBrowsable(ver)) public string FirstName; DebuggerBrowsable(ver)) public string LastName; Our view updates top of internet explorer missing to: You can learn more about attributes that control the debugger data window views here.
so day after top of internet explorer missing day they had hunted, and sailing far up the river; far up where the tall rushes wave, twisted together by the twining morning-glory vines; far up where the alligators make great nests in the river-bank, loading the canoes with ivory,FREE - Service Pages Personnelles - 8 rue de la Ville l'Eveque 75008 Paris.
dateSep 25, keyword all the major Internet top of internet explorer missing trends sites and the. 2014 AuthorInternet marketing home business Size:1.3 MB. Allows you to. Being aware of the Internet trends is crucial from. Download: Free Internet marketing software download Free Internet marketing software.they also work for top of internet explorer missing how to manage add on in internet explorer other iOS devices - the user just has to find the equivalent items for the VPN settings. Although the iOS device instructions are specific to an iPhone,milagroso De top of internet explorer missing Buga - Mobile/Navigation. 5. El Iphone est en modo standby para ahorrar batera. De como ver la seal de las eucaristas en vivo, aplicacin para usar como gua de los horarios, mientras el audio se reproduce,
Top of internet explorer missing
here is an alternative top of internet explorer missing method to reach the screenshot to the right: "C:Windowssystem32rundll32.exe" C:Windowssystem32iesetup. Plan B Use RunDll32 to ShowHardeningDialog. Then right click (the shortcut and Run as Administrator.) dll,IEShowHardeningDialog What I did was create a shortcut with the above path,
showed 403 to 6,464,636 top of internet explorer missing bad guys.Internet Explorer 11 for Windows 7 cannot automatically install prerequisites if any update installation is in progress or if a system restart is pending.
you dont. No, to ensure that your page is not rendered in IE7 mode inside the IE8 browser add the following meta tag to your pages: meta http-equiv"X-UA-Compatible" content"IE8" / Alternatively, so, do you have to test internet explorer internet zone settings your pages in IE8 in IE7 mode?
exe1 Faulting module path: hpCaslNotification. Exe, version: 9, time stamp: 0x502572be Faulting module name: KERNELBASE. Time stamp: 0x51fb1677 Exception code: 0xc00000fd Fault offset: 0x d Faulting process id: 0x11d04 top of internet explorer missing Faulting application start time: 0xhpCaslNotification. Version:, dll, exe0 Faulting application path: hpCaslNotification.listelist To subscribe or to leave the list, go to eelists. Or top of internet explorer missing to set other subscription options,
its how to fix internet explorer connection just giving you the Internet Explorer web browser. Fluid, its fast, screen Sharing from Mac to Mac in OS X, and pretty impressive, youre accessing a remote session top of internet explorer missing (hosted by Microsoft in this case)) except rather than accessing an entire computer,
|
OPCFW_CODE
|
Continuing with my quest to make the engines display areas (as I did with Neverwinter Nights 2), I turned to Jade Empire the last two weeks. There was just one tiny issue: xoreos didn’t yet support the model format. While I could make use of other’s people reverse-engineering work for the model formats of other engines (Neverwinter Nights (Torlack), Neverwinter Nights 2 (Tazpn) and Knights of the Old Republic (cchargin)), apparently barely anybody bothered to look into Jade Empire. A person called Maian tried to figure out a few thing with just a hexeditor, and while that was a great start (and confirmed my suspicions that the format is similar to Knights of the Old Republic’s), it wasn’t enough for full support in xoreos.
So, with no other place to look, I buckled down and opened the Jade Empire binary in a disassembler.
The loader function was quickly found, and in combination with Maian’s findings, the header was a cakewalk. The model nodes, the structures containing the mesh data, however, proved to be more tricky: the engine simply read the whole shebang into memory, to be used later.
Trying to find a shortcut, I remembered that Neverwinter Nights was able to load ASCII representations of its models. I searched for some common keywords like “verts” and “faces”. I had luck: there still is a ASCII model loader in Jade Empire, and not just a remnant of older code; it was updated to load ASCII representations of Jade Empire models. Moreover, for the most part, it actually parsed the model values into the same struct it reads the binary data into. Meaning: I could directly map data from the model file to their ASCII name, getting their meaning handed on a platter.
This basically gave me the entirety of the general node header, and a good part of the TriMesh node (that contain model geometry in form of a triangle mesh) header. What it did not help with was the format of the vertex data and face indices; those were parsed into a different structure.
Chasing down the place where the binary data was put into that structure wasn’t too difficult, and soon I had enough information to render basic, untextured geometry:
Materials and Textures
While there was a texture field in the TriMesh node, I found it was more often then not empty. Instead, there was a numerical material field, and a matching .mab file in the resource. A material definition in binary format. I also found several .mat files in the resources, ASCII material definitions. Trying my luck again for finding a loader of these in the binary, I searched for the string in the disassembly, and yes: there is a loader for the ASCII material files, parsing them into exactly the layout of the binary .mab files.
With that, and some fiddling with the offset into the vertex structure for the texture coordinates, I managed to render textured geometry:
Triangle lists, strips and fans, oh my!
People familiar with Jade Empire might recognize that model: that’s Silk Fox (SPOILER WARNING on that link!). And for some reason, she usually wears a veil in front of face. So, where’s that veil?
Turns out, while meshes in Knights of the Old Republic’s models are always triangle lists (simple, unconnected triangles), Jade Empire models also contain other mesh types: triangle strips and triangle fans. For the sake of simplicity, I decided to unroll them into triangle lists for now, and that made the veil visible:
Now, about that vertex structure…
I said above that I fiddled a bit to find the texture coordinates inside the vertex structure. Well, in fact, what I did was a big, fat hack: I hard-coded the offset based on the size of the structure. Obviously, that’s wrong. And it did fail for a lot of models, most prominently the models used for the area geometry.
Wasting quite some time trying to find how the vertex structure is interpreted in the disassembly (and even trying to trace it with WineDbg), I eventually found the answer where I should have looked way earlier: in the Knights of the Old Republic model loader. Since the two model formats are similar, I could just apply what the one did to find the offset in the other. With that, the area geometry renders more correctly:
Please note a few things. Yes, not all fields in the material definition are used yet (in fact, only the 4 base textures are used). As such, the rendering looks a bit off, especially where environment mapping should happen. That’s why the roof is semi-transparent, for example.
Also, while I did find the offsets for the textures, I’m not yet sure about the other pieces of data. Specifically, the normals and data found after the texture offsets. For example, take a look at this hexdump of the vertex array for Silk Fox’s head (click for a bigger view):
Each colored section is a new vertex. The red box, i.e. the first twelve bytes of each vertex, is the vertex position, stored as 3 IEEE floats in little endian byte order. The green box, i.e. 8 bytes at offset 0x14 within each vertex, is the first set of texture coordinates, stored as 2 IEEE floats.
Now, the field that in Knights of the Old Republic models denote the offset to the normals, points to 0xC. Unfortunately, there’s only space for 2 floats here, while the normal usually needs 3 floats. I have frankly no idea what’s going in there. In the case of the first vertex, the data could intentionally overlap, since the 0.0f would fit, but that breaks down not much later.
The data after the texture coordinates looks pretty un-floaty, more like several uint16 values. Again, no idea what that could be yet.
I am, of course, always open to suggestions if you should know what this could mean. :)
In either case, this brings Jade Empire model support close to the other model formats. I found that enough for now; I did spent about 50 hours staring at disassembly to get this far, after all.
Like a lot of things, areas in Jade Empire are structurally similar to areas in Knights of the Old Republic. A .lyt file specifies the layout: the positions of several models (called “rooms”) that make up the area geometry. And a .vis file specifies which rooms is visible from where, so that, in-game, rooms that are not visible from the current location can be culled from the render queue.
Unlike Knights of the Old Republic (and also unlike Neverwinter Nights), the Jade Empire areas don’t have .are and .git files to describe static and dynamic elements of the area, like the name, the music playing in the background, and objects and creatures found within. Instead, there’s another array in the .lyt file for simple, state-machine driven objects like doors, and an .art file gives some general information for each room (mostly wind, fog, camera perspective). Everything else seems to be placed or started by the game scripts.
A script system is not something I want to bind to the Jade Empire engine just yet, but loading the layout file and placing the room models, that I can do:
Again, the same caveats as above apply: no proper material support yet. And due to these issues of missing environment mapping and wrong blending, those two screenshots are actually less ugly than others. See for yourselves:
Still, apart from the missing NPCs and objects, Jade Empire areas are now in a similar state to areas in Knights of the Old Republic. That counts for something, no? :)
|
OPCFW_CODE
|
What are the steps involved in an instruction cycle any program residing in the memory contains a set of instruction that need to be executed by the computer in a. An instruction cycle (sometimes called fetch-and-execute cycle, fetch-decode-execute cycle, or fdx) could be the basic operation cycle of an computer. An instruction cycle (sometimes called fetch-and-execute cycle, fetch-decode-execute cycle, or fdx) is the basic operation cycle of a computer. Eecc550 - shaaban #1 lec # 3 winter 2011 12-6-2011 cpu performance evaluation: cycles per instruction (cpi) • most computers run synchronously utilizing a cpu clock. I've been learning a little bit more about how processors work, but i haven't been able to find a straight answer about instructions per cycle for instance, i was. I'm trying to understand the steps that it takes to go through an instruction and their relationship with each oscillator cycle the datasheet of the pic18f4321 seems.
Instruction cycle 1 instruction cycle 2 instruction instruction is command which is given by the user to computer. Instruction cycles machine cycles (bus cycles) t states calculating execution times ld hl, 0x1850 opcode 0x21 opcode fetch cycle operand1 0x50 memory read. The fetch-decode-execute cycle of a computer is the process by which a computer: fetches a program instruction from its memory, determines what the instruction wants. My guess is that the __no_operation() intrinsic (arm) instruction should take 1/(168 mhz) to execute, provided that each nop executes in one clock cycle, which i. Wikipedia's instructions per second page says that an i7 3630qm deliver ~110,000 mips at a frequency of 32 ghz it would be (110/32 instructions) / 4 core = ~86.
Fetch decode execute cycle in more detail kevin drumm loading instruction cycle in cpu - duration: 9:20 ahmad naser 10,174 views 9:20. Hello , i need some help , i am confused on some terms 1 the instruction time 2 instruction cycle 3 clock cycle instruction cycle. Definitions of instruction cycle, synonyms, antonyms, derivatives of instruction cycle, analogical dictionary of instruction cycle (english.
Home tech info instructions & data instruction sheets instruction sheets search site ©2018 s&s cycle. Bosky agarwal cs 518 fall 2004 instruction fetch execute cycle the instruction fetch execute cycle is one of the most important mental models.
Machine cycle: instruction cycle: definition: the steps that get performed by the processor getting employed in a device and all the instructions that get implemented. Resuming the user program 4-10 jump to isr and resume normal instruction cycle when isr is completed, restore the state of the program and resume its operation.
- The steps performed by the computer processor for each machine language instruction received the machine cycle is a 4 process cycle that includes reading and.
- Instructions are processed under direction of the control unit in step-by-step manner each step is referred to as a phase there are six fundamental phases of.
- Difference between machine cycle and instruction cycle is that machine cycle for every instruction, a processor repeats a set of four basic operations.
- 2 cpu architecture: fetch-execute cycle 21 cpu instruction cycle timing 36 •as we have seen, a single instruction cycle (or fetch-execute cycle) itself is.
- I am reading about the various phases of the instruction execution, i found out that we have three phases like below fectch decode execute now if the part i don't.
The instruction cycle is the basic operation cycle of computers, and it is the time period during which the computer fetches an instruction and executes it it. Instruction latencies and throughput for the term throughput is used to mean number of instructions per cycle of this type that can be sustained that. In this tutorial we will learn about different addressing modes and instruction cycle in computer architecture. 1 chapter 15 control unit operation computer organization and architecture micro-operations • execution of an instruction (the instruction cycle) has a number of. Msp430 family instruction set summary 5-1 topics table 54: format i instructions note: cycle time of the dadd instruction the dadd instruction needs 1 extra cycle. Instruction cycle phases of instruction cycle unit 3 - control unit design abhineet anand computer science and engg department university of petroleum and ene.
|
OPCFW_CODE
|
I’m developing a pre-processing workflow for S1 GRD data, for which I’m using Snappy/python (3.6). There’s a stage which requires me to step out of SNAP to do some operations, then back into .dim format to do terrain correction.
I’ve been trying to get snappy to produce a new product with bands reflecting the data that have been operated on elsewhere, but can’t get the new product to function in subsequent terrain correction because it lacks geocoding information. In this toy example I’m just copying bands from an original .dim to a new product. the bands and metadata display the same as the original in SNAP desktop, but I get a lack of geocoding error when I do RDTC.
How would I go about transferring geocoding information to the new product? I’ve tried
gc = p.getSceneGeoCoding()
where p is the original .dim
Here’s the example of what I’ve been trying, I’ve put this together from a few days of trawling through the forums trying various things, but I can’t seem to make anything work. Any help appreciated.
import gdal import numpy as np import snappy import sys, os drive = os.path.normpath('Products') productname = 'S1A_EW_GRDM_1SDH_20191003T042808_20191003T042912_029289_03540B_5F90' product = productname + '.SAFE' extension = '.tif' filename = productname + '_Orb_TNR_Cal' HashMap = snappy.jpy.get_type('java.util.HashMap') snappy.GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis() src = gdal.Open( os.path.join(drive, product,filename + extension) ) #Manipulate image data (or not) band1 = src.GetRasterBand(1) nda1=band1.ReadAsArray() band2 = src.GetRasterBand(2) nda2=band2.ReadAsArray() #construct output band definitions output = snappy.Product('titleI', 'title', src.RasterXSize, src.RasterYSize) band1 = output.addBand('HH', snappy.ProductData.TYPE_FLOAT32) band2 = output.addBand('HV', snappy.ProductData.TYPE_FLOAT32) #copy metadata from the .dim version of the scene processed to Orb_TNR_Cal stage p = snappy.ProductIO.readProduct(os.path.join(drive, product,filename + '.dim')) snappy.ProductUtils.copyMetadata(p, output) #set product writer writer = snappy.ProductIO.getProductWriter('BEAM-DIMAP') output.setProductWriter(writer) #write header output.writeHeader('snappy_output.dim') #Write data into bands band1.writePixels(0, 0, src.RasterXSize, src.RasterYSize, nda1) band2.writePixels(0, 0, src.RasterXSize, src.RasterYSize, nda2)
|
OPCFW_CODE
|
Clustering temporal edition profiles of Wikipedia articles using HMM
In this report we propose a temporal study of the edition process on Wikipedia articles. Our grounding hypothesis is that the edition of articles follows from temporal processes characterized by bursts of activity [Kleinberg, 2002], and that these differ in magnitude, frequency, etc. according to the type of the article (conflicting article, particular topic).
For instance, if we consider for a particular article the evolution of the number of distinct contributors every month, we observe periods of greater activity of the Wikipedian community on this article. Here we consider a temporal series for each article: the number of deleted characters in a revision. We extracted different types of articles and performed clustering. The aim is to discover differences in the generating process of the articles according to their originating type, such as peaks frequency, observations frequency, observed values, etc. Our method could also be used to determine particular periods of greater activity in the articles.
We first selected a model. We chose hidden Markov models because they are well suited for temporal data, they take into account the characteristics previously mentioned (frequency, values) and they can usually be easily interpreted.
We made several important choices for the model :
- we simplified the problem to stationary processes, considering that for a particular article and over the few months of its existence time would have little effect.
- we chose first order HMM, assuming that the probability density of emission only depends on the current state, and that this state only depends on the previous state.
- we only used HMM with 2 or 3 states: these states actually correspond to the intuitive idea of activity: low, medium or high for starters.
- no transitions between states were forbidden: we could have chosen for instance to forbid direct transitions from low to high activity, avoiding the state of medium activity. If it is the case that such transition is not relevant, it should appear in the parameter estimates.
- we chose Poisson laws for the output probability densities in each state, which have been extensively used for data extracted from counting, that is for modelling the law followed by the random variable determining the number of times a given event happens in a given period. In our case, this variable will for instance be the number of deleted characters during an article revision.
Our aim is to automatically cluster article samples, each article being characterized by a temporal observation sequence. Our unsupervised clustering algorithm is inspired from the K-means method but based on hidden Markov processes.
We used the initialization process proposed by Smyth in [Smyth, 1997], which presents an unsupervised sequence clustering method based on hidden Markov models. The initialization method clusters sequences according to the probability that they have been generated by the same hidden Markov model. It also builds the representative HMM for each group.
The result was then used as an input for the HMM-based clustering algorithm proposed by Schliep et al. [Schliep, 2003]. It iteratively looks for a partition of the articles into K clusters, simultaneously learning a hidden Markov model representative of each learnt cluster. The sequences are re-assigned to the K initial clusters during an iterative procedure inspired from the K-means method. It places them into the cluster corresponding to the most representative HMM and then updates the parameters of these HMM.
Wikipedia is a very rich corpus. It is entirely saved in a database of about 50 gigabytes for the French-speaking version, which contains approximately 370,000 articles on April , 2006. The content of every page if fully available, in HTML or in Wikitext language syntax. Wiki language is a simplified alternative to HTML for writing texts.
We used the French-speaking version of the encyclopedia. A dump of the database is regularly updated on and freely downloadable. Our database was created on April , 2006. It contains 604,611 pages. Our aim is to study temporal activity profiles of contributors on the articles of the encyclopedia, and to observe differences in the edition process. For this purpose, we extracted different types of articles and then characterized each article by an activity profile.
We limited our study to pages from the main namespace (Article) namespace, which accounts for Wikipedia's encyclopedic content and represents the great majority of pages, which is 368,426 pages. Among these article pages, we eliminated the 100,338 redirect pages. These pages only redirect users to the correct page (for instance in the case of synonyms), and thus don't undergo enough changes for the purpose of our study. At this stage we have 268,088 pages left.
On this basis, we selected different article samples:
- a random sample of 221 articles ;
- all articles labeled by a specific thematic category:
- category Mouvement artistique (art movements, 91 articles) was chosen as a debatable topic that could give rise to collaboration and excitement (positive or negative) among contributors.
- category Droit constitutionnel (constitutional law, 69 articles) was chosen for opposite reasons: we presupposed some unbiased characteristic of its articles. This objectivity is quite relative since the category contains not only law definitions but also more debatable articles.
- the 155 articles member of the 84 categories beginning with the string Gastronomie (gastronomy), chosen as a very popular topic not only reserved to expert contributors.
- 70 articles that suffered from an edit war, declared as wikifeu (wikifire) and extinguished by wikipompiers (wikifiremen). These volunteer wikipedians exist as Cabalists, in the English-speaking Wikipedia.
We first mixed together the random sample and the wikifire pages. The following table shows the confusion matrix resulting from the unsupervised classification of this dataset.
|Decision +||Decision -|
Recall is excellent (93%), precision is not very high (51%) and the F-mesure is quite important (66%). The image below illustrates examples of temporal profiles for 9 wikifeu articles.
We then mixed the articles from the art, law and gastronomy categories. The idea was to highlight differences in the edition process according to the topic concerned and thus according to the community of contributors. Next table shows the confusion matrix resulting from the automatic clustering on this other dataset. As we can observe, the different samples are scattered quite similarly in the confusion table. This would invalidate our hypothesis. However, before making such conclusions, it is necessary to try the experiment on other thematic categories, and with other kinds of data (number of contributors, etc.).
|Decision 1||Decision 2||Decision 3||Total|
|Mouvement artistique||45% (25)||33% (20)||23% (15)||60|
|Droit constitutionnel||28% (16)||39% (24)||27% (17)||57|
|Gastronomie||27% (15)||28% (17)||50% (32)||64|
|Total||100% (56)||100% (61)||100% (64)||181|
- J. Kleinberg. Bursty and hierarchical structure in streams. In 8th Proc. 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, July 2002.
- L. R. Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
- A. Schliep, A. Schonhuth, and C. Steinhoff. Using hidden Markov models to analyze gene expression time course data. Bioinformatics, 19(suppl 1) :i255–263, 2003.
- P. Smyth. Clustering Sequences with Hidden Markov Models. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9, page 648. The MIT Press, 1997.
|
OPCFW_CODE
|
My friends on Facebook kept telling me things like "We watched a few minutes of it and turned it off" "It's horrible" "Do you want to randomly find yourself screaming 'No! Why!?!' for the rest of your life?" but these sort of warnings only made me bump it to the top of the Netflix queue.
The disc arrived in the mail yesterday and as soon Amy and my firstborn were asleep, I hurried down to the den expecting two hours of MST3K-level awfulness.
It was good. It wasn't bad.
The acting, costumes*, directing, cinematography, and editing were not award-winning, but were as good or better than many other movies I've seen that are much more highly acclaimed than poor old Zardoz.
A couple of the sets were a bit cheesy, but the special effects were actually good (allowing for the fact that this was a pre-Star Wars movie).
The plot is followable** and interesting. I was expecting an incoherent mess.
Spoiler (click to reveal)
The part where Zed figures out what Tabernacle is, and goes to destroy it is a bit of a mind-bender.
So taken simply as a straightforward science fiction movie, it's average, but when you realize it's meant as a criticism of modern culture and class structure and subtly asks the question "What happens to a human race that has stopped evolving?" then you see the full scope of the film.
*OK, the red banana-hammock bandolier is a nightmare, but the rest are acceptable.
**May have coined*** a new word here
***Footnotes are awesome, but nested**** footnotes are super-awesome
It has some great imagery in it (you gotta love the giant floating head spewing guns and shotgun shells out of its mouth), but it tries to be Very Very Profound in that dopey sci-fi way and it's just not.
I've seen worse, though. It's no Event Horizon, that's for sure.
Event Horizon is so far from rocking that its very existence on a planet which also contains rocking things can only be explained by the curved space described in non-Euclidian geometry. Physicists theorize that what is happening is that the strong AH* field given off by the film actually warps the space-time continuum in such a way that coolness bends around it. This is why anything awesome that is visible behind a copy of Event Horizon displays a redshift.
This is why anything awesome that is visible behind a copy of Event Horizon displays a redshift.
I think I would have gone with brownshift. It would have fit your premise better.
Also, I might have mentioned how the suck increases geometrically as you approach Event Horizon, and that once you get close enough to begin watching it, you are now forever stuck in its suck and all time has ground to a halt.
|
OPCFW_CODE
|
Bioinformatics Subject Area
(Redirected from bioinformatics)
Bioinformatics Subject Area is a Subject Area that combines Computer Science and Biology.
- AKA: Computational Biology.
- See: Subject Area, Statistical Genetics, Bioinformatics Journal, Clinical Research.
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Bioinformatics Retrieved:2021-11-21.
- QUOTE: Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data, in particular when the data sets are large and complex. As an interdisciplinary field of science, bioinformatics combines biology, computer science, information engineering, mathematics and statistics to analyze and interpret the biological data. Bioinformatics has been used for in silico analyses of biological queries using mathematical and statistical techniques.Bioinformatics includes biological studies that use computer programming as part of their methodology, as well as a specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidates genes and single nucleotide polymorphisms (SNPs). Often, such identification is made with the aim to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences, called proteomics.
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Glossary_of_clinical_researchs Retrieved:2021-11-21.
- QUOTE: Bioinformatics
- The science of using computers, databases, and math to organize and analyze large amounts of biological, medical, and health information. Information may come from many sources, including patient statistics, tissue specimens, genetics research, and clinical trials. (NCI)
- QUOTE: Bioinformatics
- Master's Degree in Statistics at the University of Chicago. http://www.stat.uchicago.edu/admissions/ms-degree.html
- Biostatistics: Biology, medicine and psychology are major areas where quantitative analysis are essential. The program relies on an intimate collaboration with practitioners in the University of Chicago Pritzker School of Medicine. Several courses that are suitable for this track are offered by the Health Studies Department.
- Statistical Genetics: Statistics plays an important role in modern genetics and bioinformatics. Faculty in the Department of Statistics have broad interests in this area including gene mapping, analysis of gene expression data, and other mathematical and statistical problems arising in genetics. Additional coursework beyond the usual program may be required, and even well-prepared students may need at least part of a second year to specialize in statistical genetics.
- Bioinformatics: The scientific discipline that encompasses all aspects of biological information acquisition, processing, storage, distribution, analysis and interpretation that = combines the tools of mathematics, computer science and biology with the aim of understanding the biological significance of a variety of data. Also referred to as computational biology.
|
OPCFW_CODE
|
The Internet of things (Iot) is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and connectivity which enables these things to connect, collect and exchange data. Today we will discuss about Simple Iot Projects Using Raspberry PI.
Here are Some Simple Iot Projects Using Raspberry PI
1. Home Automation using Raspberry pi and IoT
The Raspberry Pi is a credit-card sized computer that can plug into your TV and a keyboard which was found for the intention of teaching basic concepts in schools. It is a portable little computer which can be used in electronic projects, and for many of the things that your desktop PC does, like spreadsheets, word-processing and games.
Read More: Home Automation using Raspberry pi and IoT
2. Idiot’s Guide to a Raspberry Pi Garage Door Opener
It has become apparent that WebIOPi is not yet compatible with the Raspberry Pi v2. If you are following this tutorial, you must be using the first-generation Pi. The link below for the B+ model is still valid, and this model will still work for this project.I’ve updated the Amazon links to point to the new Raspberry Pi B+ revision. Please note that if you’re going to use this version, you don’t necessarily need the USB wireless adapter listed below since the B+ revision of the Raspberry Pi includes a wired LAN port. Another note is that the B+ revision now uses Micro SD cards rather than the full-size SD card. The guide has also been updated to reflect this.
Read More: Idiot’s Guide to a Raspberry Pi Garage Door Opener
3. IoT based Raspberry Pi home automation using IBM Bluemix
We all have at some point of life desired to control everything at the click of a button. May it be turning off unnecessary lights when you are not at home to detecting intruders when you are not around. This tutorial will guide you to build a simple raspberry pi home automation system that will allow you to control appliances in your home from anywhere in the world.
Read More: IoT based Raspberry Pi home automation using IBM Bluemix
4. IoT 101 Project: Stream Temperature from your Raspberry Pi
“Hello World!” – This is likely the output of the first program you ever wrote when learning how to code. Setting up a device to stream temperature data is quickly becoming the de facto Internet of Things (IoT) “Hello World!” project. If printing “Hello World!” the first time was a long, frustrating task.
Read More: IoT 101 Project: Stream Temperature from your Raspberry Pi
5. Build Your First IOT with a Raspberry Pi, DHT11 sensor, and Thingspeak.
IOT or Internet of Things is a hot topic! According to the experts everything will be connected to the internet and all our devices and their data will soon be just an IP address away from us. So where do you start if you want to explore the world of IOT? How about a simple Temperature, Humidity, and Light sensor for your basement.
Read More: Build Your First IOT with a Raspberry Pi, DHT11 sensor, and Thingspeak.
6. IoT Motion Controlled Servos
Secure and reliable real time data streaming is essential for IoT. I’ve seen plenty of demonstrations involving applications or “push button here, LED on over there” type hardware, but a friend and I wanted to make something that was more interactive… a way to almost feel the data stream as you manipulate it.
Read More: IoT Motion Controlled Servos
7. Join the IOT with your Weather Station – CWOP
The Citizen Weather Observer Program (CWOP) is a network of privately owned electronic weather stations concentrated in the United States but also located in over 150 countries. Being in this network allows volunteers with computerized weather stations (like the WeatherPi – http://www.instructables.com/id/Create-Your-Own-Solar-Powered-Raspberry-Pi-Weather/) to send automated surface weather observations to the National Weather Service.This data is then used by the Rapid Refresh forecast model to produce short term forecasts (3 to 12 hours into the future) of conditions across the United States’ lower 48 states.
Read More: Join the IOT with your Weather Station – CWOP
8. A LoRaWAN “The Things Network” Gateway for Windows IoT Core
This tutorial describes how to build, install and run a single-channel packet-forwarding LoRaWAN gatewayrunning on a Raspberry Pi with a Dragino LoRa extension board, forwarding received radio packets to The Things Networkbackend. The gateway is implemented in C# (having no external dependencies) and runs on the Windows IoT Core platform.
Read More: A LoRaWAN “The Things Network” Gateway for Windows IoT Core
9. Raspberry Pi IoT ticket printer for online stores
If you answer yes to all the questions, please keep reading because this is the instructable you are looking for. Also if you say no to one or more questions keep reading too, maybe you can print your email or make a secret organization mission printer. The posibilities and applications of this are endless.
The IoT is everywhere, now almost everything is conected to the internet and makes our lives easier.
Read More: Raspberry Pi IoT ticket printer for online stores
10. Octopod: Smart IoT Home/Industry Automation Project
Octopod, a uniquely shaped full automation system that allows you to monitor your industry and keep security with AI and smart RFID locks.
Arduino UNO & Genuino UNO
Anduino Arduino MKR WiFi 1010
Either this or any other Wifi ESP8266/ ESP32 This was not available in my country, So I went with NodeMCU ESP8266
Read More: Octopod: Smart IoT Home/Industry Automation Project
For More Simple Iot Projects Using Raspberry PI Visit: Simple Iot Projects
|
OPCFW_CODE
|
Is it OK to mix Colony Wars and Frontiers for a 2 player game?
Is it OK to mix Colony Wars and Frontiers for a 2 player game? Would there be any balance issues?
One main worry is that the trade row might just have high value cards for a while and you are just sitting there buying explorers for a long time
"Balance" typically refers to each player's relative chances of winning. There isn't any inerrant balance changes from the expansions because the trade row is shared between the two players. Even if the trade-row gets full of high-value cards, the game does not favor one player over the other. Unaffordable trade rows happen even with just the base game (I've seen it happen), and Explorers exist for this very reason.
Balance issues aside, all of the Star Realms expansions have a spread of card costs. I do not believe any of them significantly increase the chance of an un-affordable starting trade row.
If you are worried about the trade row being too expensive, by far the most important factor will be the average cost of the cards in the deck. From the card gallery, you can calculate that the average costs in the three large sets are
Core Set: 3.6
Colony Wars: 3.525
Frontiers: 3.4
Thus both the expansions have less expensive cards than Core Set; mixing them together will still result in a less expensive deck.
Technically having a bigger deck does increase the variance of costs you may see. As an extreme example, if you played with a 5 card deck of costs 1,2,3,4,5, then you would always see all 5 costs on the trade row at the start of the game, whereas if you played with lots of copies of that same deck shuffled together, then you would occasionally see starting trade rows of all 1s or all 5s, even though the average value hasn't changed. However, while I haven't crunched the numbers, my guess is that at the scales we are talking about (80-card sets), this effect is not significant compared to the averages discussed above.
There are other things you could also analyze, like
The variance within each set. For example a set with only 2s and 8s in a 2:1 ratio and a set containing only 4s each have an average cost of 4, but would play rather differently.
How much Trade is produced by the average card in the set, especially the cheap ones (since these can be bought early and then help you buy everything else).
Applying these to the real game is left as an exercise for the reader.
|
STACK_EXCHANGE
|
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It follows the GitOps pattern of using git repositories as the source of truth for defining the desired application state. With ArgoCD, application deployments can be automated and updates to application can be made at the simple git commit events without the need of any complicated Continuous Integration and/or Deployment Pipelines.
This is our fourth post in the series of blog post on deploying and managing application with Kubernetes and Argo CD. You can find the series index here.
Deploying Helm Charts with Argo CD
A lot of applications have defined their manifests in the form of Helm Charts. Argo CD will assume that the Helm chart is v3 (even if the
apiVersion field in the chart is Helm v2), unless v2 is explicitly specified within the Argo CD Application (see below).
If needed, it is possible to specifically set the Helm version to template with setting the
helm-version flag on the cli (either v2 or v3):
argocd app set helm-guestbook --helm-version v3
helm-guestbook is the name of the application.
Add Helm Repositories to Argo CD
You may have helm charts hosted as separate repos or as part of other git repos. Lets see how we can add helm chart repositories to the Argo CD. Lets add Bitnami Charts repo to our use case.
For this, we can go to Settings -> Repositories -> Connect Repo using HTTPS and submit below details:
Type – Select Helm since we are adding helm charts repo. If your application specific are part of github repo, select github.
Name – Provide a unique name for the repository inside argocd workspace. We’ll select it as Bitnami in our case.
Repository URL – Provide the URL for fetching manifests. If the repository is public, you do not need to provide any additional authentication details. So we’ll leave that as empty for our case.
Once done, click connect to save the configuration. If Argo CD is able to fetch the manifests, it should mark the connection status as successful:
Create New Application Inside Argo CD
We now need to create an application so that we can operate on it using Argo CD. Go to Applications and click the + NEW APP button and fill the upcoming form. After that, we need to submit details as below:
Application Name – This is the application name inside Argo CD. You can use Argo CD to manage multiple application at once. We’ll set it to helm-demo in our case.
Project – This is the project name inside Argo CD. Project can be used to segregate and group applications together. Since this is a new setup for Argo CD, a default project is created for us and we’ll select the same. If you have multiple projects together, you’ll see an auto-populated list and you can choose the same.
Sync Policy – You can choose to auto synchronize the state of application in the Kubernetes with the GitHub repository or you can set it to manual. There are many choices, and we’ll probably discuss it in more details later. For now, leave it as manual.
We now need to provide the Helm repository details containing the application manifests, in the same form. Submit the details as below:
Repository URL – Provide the url for the Helm repository containing the application manifests. It will list any pre-populated repositories, and we can select the repo from the list. Alternatively, provide the new repository url.
Select Repository type as HELM.
Chart – Again, this will be pre-populated with the list of charts available in the repository. You can select anyone you want to deploy. For our case, we’ll select simple such as apache.
Revision – Select the chart revision you want to deploy.
Once we have provided source details where we described the desired state, we now need to provide the destination Kubernetes cluster details as below:
Cluster URL – Argo CD can be used to connect and deploy application to multiple Kubernetes clusters. Inside the UI, you’ll not get the option to add and connect a different Kubernetes cluster. This feature is restricted to Argo CD CLI. For our case, we’ll use the default in-cluster (where Argo CD itself is deployed).
Namespace – This can be used to select namespace where manifests will be deployed. You can choose a custom namespace and provide the same. Also, you’ll need to create the namespace on the target Kubernetes cluster before you can deploy manifests to it. Alternatively, you can select the checkbox for ‘AUTO-CREATE NAMESPACE’ in the sync options. We’ll leave it as default in our case.
Note that the namespace specified inside Kubernetes manifests overrides this value.
After filling out the information above, click Create at the top of the UI to create the helm-demo application. After this, it will read out all the parameters, read the source Kubernetes manifests. After this, it will go to
OutOfSync state since the application has yet to be deployed, and no Kubernetes resources have been created.
Modify Helm Chart Values and Parameters
Helm has the ability to use a different, or even multiple
values.yaml files to derive its parameters from, which are present in the same path as helm chart. We can choose to override the helm values by editing the application inside Argo CD workspace:
We’ll override ghost service port to
31001 and service type as
NodePort in the helm parameters.
Synchronize the Application Manifests / Deploy the Application
As we mentioned above, the application status is initially in
OutOfSync state since the application has yet to be deployed, and no Kubernetes resources have been created. To sync (deploy) the application, we can choose the tile and then select SYNC. This will present us with a choice about what we want to synchronize:
We’ll select default options and synchronize all manifests. Once its deployed, we can see the resources deployed in the UI:
Alternatively, we can also use the Kubectl:
mohitgoyal@desktop:/mnt/d/mohit/src/kind$ kubectl get all -n default NAME READY STATUS RESTARTS AGE pod/helm-demo-apache-7c7dffd55b-d7xqr 1/1 Running 0 3m53s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/helm-demo-apache NodePort 10.96.169.246 <none> 80:31850/TCP,443:31001/TCP 3m53s service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/helm-demo-apache 1/1 1 1 3m53s NAME DESIRED CURRENT READY AGE replicaset.apps/helm-demo-apache-7c7dffd55b 1 1 1 3m53s
Access the Application outside Kubernetes Cluster
Since by default, service type for helm-demo is set to type ClusterIP, we need to change it to either LoadBalancer or NodePort. Alternatively, we can use port-forwarding to access the application.
Also as our setup is with Kind cluster, we need to specify one of the ports i.e. 31001 from extraPortMappings. We have already set this values inside helm parameters inside applicaton configuration. So we do not need to anything at this point.
We can point the browser to localhost:31001 and view the application:
One thought on “Deploy Helm Charts on Kubernetes Clusters with Argo CD”
[…] Part 4 – Deploy Helm Charts on Kubernetes Clusters with Argo CD […]
|
OPCFW_CODE
|
Solus OS 4 Review - I like it but ...
“Please, make a Solus review” is one the most frequent requests I get. Solus 4 has just been released and I had to review it. However, I am not going to show you what is new in Solus 4, instead, I will show you the pros and cons of this distribution and explain why some of you probably would better use other Linux distro.
What is Solus Linux?
The first question which I think is worth starting with is what Solus OS is?
First of all, Solus is a distribution on its own. In other words, it is not based on any other Linux distro. This already makes Solus pretty unique and interesting. There are no many Linux distribution written from scratch nowadays. Well, there are many attempts to create new Linux distributions but they rarely become successful. Solus is probably an exception in this regard. It is only 3 years old, but it is already in the top 10 on Distrowatch.com.
Second, it is a rolling distribution. As it is said on the Solus webpage: </blockquote>
There are no many rolling ditros that are very popular. Usually, it is because a rolling nature of a distribution forces a user to deal with bugs from time to time. Also, a few rolling distributions are user-friendly. But there seems to be a need for user-friendly rolling distros. The success of Manjaro is a proof of that. Solus is user-friendly and rolling, so it is not a surprise that many users fall in love with it. I also was quite excited to make this Solus review.
The package manager and Software Center
Given that Solus is not based on any other Linux distro and it is updated in a rolling model, it is natural to look at Solus soul - its package manager and the Software Center.
Solus has its own package manager which as I have found out was forked from Pardus Linux. You probably even don’t know what Pardus Linux is. But I know it pretty well because it was my main Linux distro in 2012. Pardus Linux is a Turkish Linux distribution that is now focused on Business only but back in 2011 it came from nowhere and became very popular. One of the reason was that it was the most stable Linux distro with KDE 4 desktop at that time. I really enjoyed using it, I was very happy with the Pardus package manager and its Software Center. And my experience with Pardus makes me quite confident in the Solus package management
I have not tested the Solus package manager and Software Center extensively, but installing around 15 programs and running a couple of updates went very smoothly.
There are many positive things I can say about this package manager and the Software Center. Although Solus is a rolling distro and handling packages in a rolling distro is not easy, the Solus package manager does it pretty well.
The Software Center is impressively fast. It opens quickly, navigation within it is also fast, and I would say installing updates is relatively fast too, especially if I compare it to the Ubuntu package manager. I would say the Solus Software Center is one of the fastest graphical package managers in Linux.
In addition to the most popular standard Linux applications, the Software Center offers a set of third parties apps. This is not common to see in Linux distributions. You can install Google Chrome, Skype, Spotify just with one click. I was also surprised to find here Mendely, which is my favorite reference manager in Linux. Definitely, I give a Like to Solus for that in particular, and for its Software Center in general.
Nevertheless, I have to mention some negative aspects too.
First, I found confusing to search for packages within the Software Center. You cannot search from anywhere, you need to navigate to the search tab and search only there.
Also, when you entered the application info page and let’s say you decided to search for another application, clicking on the Search tab does nothing. You actually need to click on the Back button. To me, it was really annoying.
I also did not like that it is impossible to search only within the installed packages. To see if an app is installed, you need to enter its page. This is too complicated in my opinion.
Nevertheless, these are not critical issues. It is better to sacrifice some functionality for the sake of speed and stability of the Software Center. I also believe the Solus team is working on its Software Center and the search will be improved.
Second, Solus is written from scratch and it is a relatively new distro, so don’t expect to find all programs in its repository. If you need some less popular program, you will have to install it manually. For example, I use some Molecular Biology related programs for Linux. But none of them is available in the Software Center of Solus, while many of them are easily accessible in the Debian repository, for example.
Thus, if you also need some specific programs in your Solus, be ready for some “Shaman dance” while compiling and installing them.
As I stated above Solus is written from scratch and its desktop is no exception in this regard. Budgie is a desktop written specifically for Solus Linux, though it is now available in many Linux distros including Ubuntu Budgie, Manjaro, Arch Linux, etc.
Budgie is based on some of GNOME technologies and still uses some of its apps like System Settings, Calendar, etc. However, the Solus team is working on creating its own applications for everything, they just need time.
Actually, the Budgie desktop is probably the number one reason many users get attracted to Solus. It is a modern and beautiful desktop indeed. I particularly like its applets and notification bar. It is called Raven in Budgie. It is very handy to have quick access to some features here, like for example, output devices. Using this output switcher I can quickly change between my speakers and headphones. I have to use a special widget for that in my Plasma 5 Desktop, but in Budgie it is provided by default. I do appreciate that.
There is a dedicated setting window to tweak your Budgie desktop. It provides only basic settings but it should be enough for 90% of users. For example, I personally usually have the desktop panel on the left instead of the bottom, and Budgies allows to move the panel to the side without losing any functionality of the panel. These settings should be enough for many users, but I personally like more customization and in my view Plasma 5 or even XFCE provide more flexibility in this regard. But it is my personal taste.
Budgie default themes
Speaking about Budgie themes, I will complain a lot in this Solus review. Unfortunately, there are no pre-installed themes to choose from. I personally don’t like the default dark theme. It is too dark.
The light theme provided by default is not good either. The windows look good but the panel color is not right at all.
The only usable theme here is Plata-Lumine-Compact.
This is really sad because previous versions of Solus had much better default themes and I think that the old theme was one of the reasons many users tried Solus.
Finally, KDE application will look alien in Budgie. One is probably not supposed to installed KDE apps in Budgie desktop, but what if I cannot live without Kdenlive for video editing.
Moreover, it is not only KDE apps that have some problems. Audacity looks little unusable with the default theme. I was not able to see what options are selected in the input and output panel.
After I changed the theme to Plata-Lumine-Compact, it became a little better but still not perfect. I am not sure who to blame here either Budgie desktops or Audacity design. But there is a problem here as you can see.
Of course, one can change these themes by installing additional themes, but it requires extra steps. I could have installed some themes for this Solus review, but I better cover that in a separate post on “Things to do after installing Solus”.
Installation and Performance
I installed Solus the day it was released. The installation process was standard and smooth. I really liked the Solus installer. It provided all options I needed, including manual partition assignment and excluding the boot loader from the installation process, because I install Solus along my Arch Linux and I wanted to keep my GRUB boot loader from Arch Linux. I needed Solus only for this Solus review. If you read my Manjaro Review, you probably remember I had some problem with dual-booting Manjaro and Arch Linux. No issues with Solus on this side. So, I had only positive impressions from the installation.
To make this review, I tested Solus for a week. I just performed my regular task. Solus was installed on my home machine, which I do not use extensively. But I can say for regular desktop use, Solus works very well.
It is not a lite desktop but on my system with Intel Core i3, 8 Gb of RAM and an SSD, it worked fine. I have not had any single lag or a slow down.
I also unintentionally tested the Suspend function. As a rule, I do not suspend my desktop, but Solus suspends the system by default if it is idle for 20 min (you can change or switch it off). I left my computer on for a couple of times and it was suspended several times without a reboot. Nevertheless, every time it woke up quickly and without issues. So, if you use Suspend, it works brilliantly in Solus.
Overall, the installation and performance of Solus were great.
As a final point of this Solus review, I would like to talk about Solus Community. I believe having a Community around a distro is very important especially for new users.
Although Solus is a young distro, there is a large enough community to rely on. Solus has a very active forum where you can get help within a few hours. In addition, since version 4, you can also get help through the HexChat app that is installed by default.
I do not have personal experience with Solus Community, but I monitor what users say in social media, and I have seen only positive feedback. I have even seen somewhere that you can request to add a program to the repository if it is available for Linux but not in Solus repository yet.
So, regardless if you are a newbie or a little experienced user, you won’t be left alone and the Solus team and the community will always try to help you. Also, before you go to these help places, check out Solus help center, maybe you will find a solution there.
To summarize this Solus review, I would like to list all the pros and cons of Solus.
Solus is a worthy distro because:
- It is a rolling distro.
- It has a great package manager and fast Software Center.
- All commonly used open-source and third-party programs are available.
- Budgie desktop provides a unique experience of a modern desktop.
- Solus is overall fast, stable and reliable.
- There is a supportive community.
However, if you want to install Solus as your main Linux distro, conditioner these cons:
- Many specific programs are not available and will require compilation and manual installation.
- Solus 4 is little unlucky with its default themes.
- Some programs may not look nice in Budgie.
- It is not a lite Linux desktop.
I hope these pros and cons will help you to make a decision about whether to install Solus or some other Linux distro. You can also have a look at my reviews of other Linux ditros.
If you have something to add to this Solus review, please comment below.
|
OPCFW_CODE
|
Take Five With Tom Fenton
5 Ways To Get VMware Training
There are options for every need and budget.
I often get asked about the best ways to get up to speed on VMware technology. The truth is that there are many ways, and deciding which way is best really depends on factors such as personal needs, preferences, and circumstances. Luckily, there are a wide range of resources currently available to learn and acquire new VMware technology skills, and it should be relatively simple to narrow down which option is the best fit for you. Here are five approaches to keeping up-to-speed on VMware technology that I've used, and some of the benefits and potential drawbacks of each.
- Take a class from VMware. I'm biased as I work for VMware education, but I feel the best way to quickly come up to speed on VMware technologies is to attend a VMware Education Services training class. What’s great about these classes? They're taught by VMware certified instructors, based on the latest material, fulfill the training requirements for VMware’s certification, and most are offered online or live. They typically can be completed in a week or less, so you can come up to speed as quickly as possible. As an added bonus, most VMware classes have labs that give you actual hands-on experience with the products. A list of currently available VMware training classes can be found here.
- Community college training. Many community colleges offer VMware-sanctioned training taught by local professors. This training fulfills the requirements necessary to obtain your VCP in Data Center Virtualization. These classes are only on Data Center Virtualization, and are spaced out over the duration of a semester, so if you're in a rush to complete the course, this option might not be the optimal choice. However, if you have the time to spread your training over several months, check with an admissions counselor for your local community college or technical school to see if their institution participates in the VMware IT Academy Program.
- Take a third-party course. There are a few third-party companies that offer courses on VMware technologies, but be sure look into them carefully as they are often dated and don't qualify you to take the VMware certification exams. I'm not opposed to third-party training, but the reality is that creating and maintaining quality training takes an enormous amount of time and effort, and most third-party companies simply don't have same depth of resources that VMware does to create, maintain and update courses.
- DIY. If formal training doesn't fit your needs, you can always just jump in and train yourself. VMware has some excellent documentation on installing and using their products (I especially like the VMware reviewer's guides.) Some of the disadvantages to learning a new technology on your own: you'll need to set up your own lab; you won't have anyone to bounce questions and ideas off of; you won't have anyone to offer best practices or advice; it won't qualify you for the certification exams; and you may need to dig through a bunch of different whitepapers and documentation to get the information you're looking for. VMware documentation can be found here.
- Read a book. If you need to learn about a specific product, but don't need to be officially certified, can't afford formal training, or can't find a class that addresses the particular subject you're interested in, there may be a book that you can learn from. For example, Cormac Hogan and Duncan Epping wrote an excellent guide to VMware Virtual SAN (VSAN) and released it shortly after the product hit general availability; this allowed many users to come up to speed on VSAN in short order. But technology moves fast, and book publishing moves relatively slow, so make sure the book you're considering is current and up-to-date with the technology.
One of the best ways to keep relevant in the job market is by staying sharp and up-to-date on the latest technology. Yes, learning new skills is time-consuming and sometimes it's painful to keep up with the latest trends, but sooner or later all technology gets replaced; if you let your skills lag behind, you may find your skills less desirable in the job marketplace.
Tom Fenton has a wealth of hands-on IT experience gained over the past 30 years in a variety of technologies, with the past 20 years focusing on virtualization and storage. He currently works as a Technical Marketing Manager for ControlUp. He previously worked at VMware in Staff and Senior level positions. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on X @vDoppler.
|
OPCFW_CODE
|
EXECUTE AS OWNER is a great way to limit the permissions of a SQL Server Login. The general idea is to create your stored procedure with the EXECUTE AS OWNER modifier. Any user who has the permissions to execute the stored procedure, runs the stored procedure under the Database’s dbo user (which means it can do anything in the database, but nothing at the server-level nor on other databases). If you only allow your Logins to execute stored procedures (and not touch the tables directly), then you’ve effectively limited the Logins to code you’ve written. If you don’t write any DELETE statements, then Logins can’t delete anything.
This is better than Roles, because Roles are very coarse in comparison. With Roles, you may have to give a User INSERT permissions on table. Instead with EXECUTE AS OWNER you can write a stored procedure that checks the data exactly the way you want in the body of the stored procedure. This is much more fine grained way of handling permissions.
From beginning to end, this is what you do:
Create a Login:
CREATE LOGIN [MyLogin] WITH PASSWORD=N'Password',
DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=ON;
Create its User in the database:
CREATE USER [MyUser] FOR LOGIN [MyLogin];
I prefer to use schemas to identify “public” stored procedures. So create a schema:
CREATE SCHEMA [public] AUTHORIZATION [dbo];
Give your new user EXECUTE permissions on anything in the public schema: (We will put the new stored procedure in this schema):
GRANT EXECUTE ON SCHEMA::[public] TO [MyUser];
Create your stored procedure:
CREATE PROCEDURE [public].[MyStoredProc]
WITH EXECUTE AS OWNER -- This "EXECUTE AS" modifier on the stored procedure is key!
SET NOCOUNT ON;
-- do something
When your stored procedure runs, it can do anything in the database, including calling other stored procedures. It is an easy way to segregate public stored procedures from private ones. This gives you encapsulation, which is a good thing (see section 5.3 in Code Complete about the benefits of encapsulation).
The only permissions outside users need is EXECUTE permission on the public schema, so it is easy to add new stored procedures by creating them in the public schema.
Instead of Roles, you can have schemas. Let’s say you would have 3 roles in the database: admin, anon, and general. The admin role is for Logins that perform administrative activity on a website. The anon role is for people who view your site anonymously, and the general role is for stored procedures that are for both. You can instead, with EXECUTE AS OWNER, create three schemas for your stored procedures: admin, anon, and general. If you want the stored procedure to have admin only Logins to use it, create the stored procedure in the admin schema. The same goes for the other schemas.
|
OPCFW_CODE
|
RAMSS 2013 : WWW 2013 Workshop on Real-Time Analysis and Mining of Social Streams
Call For Papers
WWW 2013 Workshop on
Real-Time Analysis and Mining of Social Streams (RAMSS)
+May 14th, 2013 - Rio de Janeiro, Brazil
CALL FOR PAPERS
2nd International Workshop on Real-Time Analysis and Mining of Social
in conjuntion with WWW 2013, the 22nd International World Wide Web
The emergence of social networking services such as Twitter, Google+,
and Facebook has led to new ways of sharing information with
interested communities. In the last years, there has been an
increasing trend in the use of social media services, not only by
end-users but also by all kinds of groups, organizations, and
governments. All these participants contribute with a bonanza of
real-time updates, building a hodgepodge composed of not only personal
updates but also valuable information. The ability to get real-time
information from others has enabled to follow events live, to discover
breaking news, to find out about trending topics, and to help on
natural disasters, among others. This presents new challenging issues
for the research community in order to quickly make sense of
torrential social streams as they come out, and to make the most from
the fresh knowledge available on these streams.
The RAMSS workshop aims to bring together experts in the real-time
analysis and mining of social streams, as well as to further develop
and exchange knowledge around these tasks. Given the novelty of the
research field, the workshop also aims to encourage attendees to build
a discussion forum to share on the current state of the research
field, as well as to propose solutions for the shortcomings.
Topics of interest
The workshop seeks contributions that analyze and mine social streams
as they become publicly available, and encourages experts and
interested attendees to take part. The workshop aims to be specific in
the real-time analysis and mining of social streams, but it is open to
a wide variety of tasks that can be applied to those streams. Topics
of interest include (but are not limited to):
- Real-time search in social streams.
- Summarization of social streams as it comes out.
- Early detection of trends, news, and events.
- Real-time recommendation of information, who to follow, etc.
- Real-time classification and clustering.
- Real-time social network analysis.
- Behavioral prediction.
- Real-time sentiment analysis and opinion mining.
- Real-time user modeling.
- Real-time natural language learning, processing and understanding.
- Semantic web approaches for real-time analysis of social streams.
We also welcome contributions discussing potential research
directions, evaluation frameworks, publicly available datasets and
case studies on industrial applications.
- Paper Submission Deadline: February 27, 2013.
- Notification to Authors: March 13, 2013.
- Camera-Ready Versions Due: March 30, 2013.
- Workshop day: May 14, 2013.
- WWW 2013 Conference: May 13-17, 2013.
Papers must be sent in a PDF file, and written in English.
Participants are invited to submit: (1) a full-length technical paper
of up to 8 pages in length, (2) a short position paper of 4 pages, or
(3) a demo or poster paper of up to 2 pages. Submissions must follow
the ACM template
will be reviewed by at least three PC members, and accepted papers
will be published in the ACM Digital Library.
Submissions can be made through Easychair:
For inquieries, please contact: firstname.lastname@example.org
- Arkaitz Zubiaga, City University of New York, USA
- Damiano Spina, UNED, Spain
- Maarten de Rijke, University of Amsterdam, The Netherlands
- Markus Strohmaier, Graz University of Technology, Austria
- Martin Atzmueller, University of Kassel, Germany
- Alejandro Bellogín, Autonomous University of Madrid, Spain
- Bettina Berendt, Katholieke Universiteit Leuven, Belgium
- Roi Blanco, Yahoo! Research, Spain
- Alvin Chin, Nokia Research Center, China
- Philipp Cimiano, Bielefeld University, Germany
- Munmun De Choudhury, Microsoft, USA
- Daniel Gayo-Avello, University of Oviedo, Spain
- David Gleich, Purdue University, USA
- Julio Gonzalo, UNED, Spain
- Michael Granitzer, University of Passau, Germany
- Andreas Hotho, University of Wuerzburg, Germany
- Geert-Jan Houben, TU Delft, The Netherlands
- Nattiya Kanhabua, L3S Research Centre, Germany
- David Laniado, Barcelona Media, Spain
- Richard Mccreadie, University of Glasgow, UK
- Edgar Meij, University of Amsterdam, The Netherlands
- Meenakshi Nagarajan, IBM Research, USA
- Sasa Petrovic, University of Edinburgh, UK
- Paolo Rosso, Technical University of Valencia, Spain
- Markus Schedl, Johannes Kepler University, Austria
- Amit Sheth, Wright State University, USA
- Vivek Singh, MIT, USA
- Christoph Trattner, Graz University of Technology, Austria
For more information, please visit http://www.ramss.ws
|
OPCFW_CODE
|
Developer Leecherman aka LMAN has released a new AdrBubbleBooter VPK Edition update in version 1.2 . Adrenaline Bubble Booter Creator and Vpk Edition allow you to start and create any bubble in VPK format of a game in ISO \ CSO \ PBP \ PSOne in the LiveArea, with all the functions enabled such as plugins, filters, saves, and PSONE sounds. This feature it was available on some older firmware with their epsp hacks and with ARK Autoboot in great demand by users on firmware 3.60, HENkaku Ensō 3.65 and H-Encore.
Warning: the eCFW Adrenaline 6.9 must be already installed before using this tool
ADR BUBBLE BOOTER CREATOR VPK VERSION INSTALLATION:
Warning: the vpk file AdrBubbleBooterInstaller must be installed on your PSVITA before starting your personalized bubble created with ADR BUBBLE BOOTER CREATOR.EXE
- Copy ‘AdrBubbleBooterInstaller.vpk’ to your PSVita in ‘ux0:’
- Install the file with Vitashell and run it to install \ update the files required for AdrBubbleBooter, after which you can remove it.
- Start the PSVita to update the changes if it is not restarted by the installer.
EASY MODE WITH ADR BUBBLE BOOTER CREATOR EXE VERSION:
-Start AdrBubbleBooterCreator.exe and fill in the required fields, then press the ” Create ” button
to create a VPK file for each file you want to start automatically and copy the
VPK file generated on your PSVita in ‘ux0’ and then after install it.
In the example, the Medieval game was renamed to MERE.ISO (inserted in the ux0 folder: pspemu / ISO / MERE.ISO)
ADVANCED MODE FOR EXPERT USERS
- TITLEID must start with PSPEMU example: PSPEMU001, PSPEMU002, PSPEMU003 etc.
So he will take TITLEID ‘PSPEMU0001’ as an example of the cloned VPK bubble.
- Extract the content ‘PSPEMUXXX.zip’ then open the extracted folder ‘PSPEMUXXX’ and
rename it to ‘PSPEMU001’, then change the ‘TITLEID’ from PSPEMUXXX to ‘PSPEMU001’
in the param.sfo file located in PSPEMU001 \ sce_sys \ param. sfo
- Also you can change the title to anything you like for example: AdrenalineBubbleBooter .
- Open ‘PSPEMU001 \ data \ boot.inf’ with Notepad, add the path to the iso \ cso \ pbp file.
The path of the file must be in ‘ms0: /’ not ‘xx0: pspemu /’ example ‘ms0: /ISO/GAME.ISO’.
- After copy the VPK folder ‘PSPEMU001’ to PSVita and install it using VitaShell.
- You can update the path of the iso \ cso \ pbp file at any time after installing the VPK folder by editing the ‘ux0: app / PSPEMU001 / data / boot.inf’ file, this VPK also has a PSP bubble layout ready.
This VPK version uses Adrenaline v6.6
OUR VIDEO TEST
-Added ‘PS button’ for Booter mode with a new option.
-Improve frame rate when using CruelTott’s original graphics mode.
-Fixed the problem “Cannot access the virtual memory card” for PSOne.
Modules updated to the latest version v6.7 commit 8ca7da1.
- Added Inferno driver configurations to the recovery menu.
- Modules updated to the latest version v6.6 commit 03c0d8a.
- Added support to automatically convert the old boot configuration file to the new version on the fly when the bubble starts (go to the Booter tab in the Adrenaline menu if you want to change the default values).
- Adjust the boot structure for the new AdrBubbleBooter v0.4 and later.
- Added support for using any title ID for bubbles instead of limiting the use of PSPEMUXXX as an IDD format.
- Added conversion button option to convert the old boot.inf configuration file to the new boot.bin configuration file.
- Free plug-in and plug-in mode npdrm.
- Rewrote startup configuration structure -> Configs will not be compatible with old versions and vice versa (use the AdrBubbleBooterCreator tool to convert it).
- Removed the restriction on the use of the PSPEMUXXX format as titleid for bubbles -> Now you can use any titleid as you prefer.
- Removed npdrm plug-in and free plug-in option -> Use the free standalone npdrm plugin to run official content without licenses.
- Added AdrBubbleBooter configurations to the adrenaline menu to change the boot configurations for the launched bubble.
- Updating and adding the custom bubble option to use the adrenaline bubble or the individual bubble settings for each bubble.
- Updated file path in boot configuration -> Use native PSVita uxx: pspemu / instead of the ePSP path ms0: /.
- Updated the adrenaline modules to the latest version.
- Updated operating instructions in the readme file.
- Added npdrm option in free mode to run official games without licenses.
- Added support for making each bubble with its own custom adrenaline settings.
- Modules updated to the latest version.
Modules updated to version 6.1 (3.65 / 3.67)
|
OPCFW_CODE
|
This document discusses how to uninstall CA Business Service Insight, and some of the problems that you could encounter. It also walks through what to check in order to verify that the system is completely clean after the uninstall.
The current 8.3.5 version of BSI can be uninstalled through the Windows control panel's add/remove products.
The only thing to watch for is to backup the system PATH variable first since sometimes the uninstall can remove this.
You can do this by bringing up a command prompt (run cmd.exe) and run: echo %PATH%
After the uninstall and a reboot, check the environment variables and if needed, correct the PATH (without the BSI entry in it).
Earlier versions of BSI required some more intervention to completely remove them. You can check the steps below to both validate that everything has been removed or to manually clear an earlier release of the product:
Before you uninstall the product it is a good idea to stop everything and make sure that everything stops cleanly. If something will not stop during the uninstall then the uninstall is not going to be able to run properly.
If you are only uninstalling the APP or WEB server but NOT removing the database then it is recommended you go into the GUI and stop all your managed adapters.
Go into the Windows services (services.msc) and stop all the services that start with “Oblicore” or “PSL”.
On the Web Server, go into “Internet Information Services” (IIS) and stop the Oblicore WebSite.
On the Web Server, go into Component Services (under Administrator tools) and expand COM+ components until you see the “Oblicore Engine”. Right-click and stop this engine.
Run the uninstall
location of the installation folders:
Default locations are:
Go into Add/Remove programs and run the uninstall for the CA Business Service Insight Client (if it’s installed) and then run the uninstall for each entry you find for CA Business Service Insight.
Follow the prompts and reboot as prompted.
Correct the PATH
As mentioned above, the PATH environment variable may be wiped out after the uninstall. Once you reboot, restore the PATH from the backup file you created. First, remove any entries in the PATH which reference the old BSI installation (unless you plan to reinstall to the same location).
Right-click on “My Computer” and go under “properties”. Under the advanced section you will see an option to edit the environment variables. Find the PATH environment variable and copy your modified PATH into the value. Save this.
Check the path by bringing up a command prompt (cmd.exe) and running “set” or:
Verifying that everything is uninstalled (and remove Oracle if desired)
In the services list verify that all
Oblicore services were deleted.
2) In the component services list verify that Oblicore Engine was removed and delete it if needed.
In the IIS Manager - restore the
home directory to Microsoft default or other folder your choose, usually
a. In the IIS Manager - restore the home directory to Microsoft default or another folder of your choice, usually \Inetpub\wwwroot
b. Optionally you can rename the site name from Oblicore_Guarantee to another name of your choice.
Clean out the message queues.
Delete the og message queues (7.0
and above only). For example, in Windows 2008 they exist under Server Manager
as shown below:
Clean the Registry.
a. Remove the entire Oblicore key from HKEY_LOCAL_MACHINE \SOFTWARE.
Remove all keys that contain
Oblicore in their data at
i. You can use regfind.exe to log all
ii. Regfind.exe -p HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\-t REG_SZ Oblicore >Regout1.log
Remove all keys from
contain the REG_SZ with *Oblicore*
i. Regfind -p HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall -t REG_SZ Oblicore >Regout2.log
ii. The best practice is to validate only display names containing Oblicore to avoid deleting other kits that was installed from a path that contain the name Oblicore.
Remove all keys from
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Products contain REG_SZ
i. The best practice is to validate only [Prodcuctname] containing Oblicore to avoid deleting other kits that was installed from a path that contain the name Oblicore.
e. Remove all DWORD from HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDlls contain files that was located at the installation folder (as recorded in the pre-requite step 1,a,i).
Remove the following keys in
g. Remove the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Oblicore
Clean the Folders.
Delete the folders that you recorded
earlier before running the uninstall:
i. \Program Files\Oblicore (or the relevant location where Oblicore was installed.
ii. \Inetpub\wwwroot\Oblicore (or the relevant folder where the web folder was installed).
(If you made changes to the registry and are not continuing on to uninstall Oracle then reboot again now)
Obviously the BSI uninstall and the steps above will not clear out Oracle. If you wish to remove Oracle from the database server (and you are certain that nothing else is using it) then you can usually follow the simple steps below. This guide is not meant to walk through the Oracle uninstall and your Oracle environment may be significantly different.
1) If you wish to ONLY remove the BSI database but do NOT want to remove Oracle itself then run the database creation assistance (dbca.exe) and follow the prompts to remove the database. If you intend to remove all of Oracle then skip this step.
2) Run Oracle’s universal installer (setup.exe from Oracle’s bin folder) and follow the prompts to uninstall Oracle.
3) Verify that Oracle is removed. I usually check the following but while this has always worked in my test environments, please contact Oracle for more information if needed.
a) Run regedt32 and under HKLM\Software remove any Oracle entry that remains.
b) Delete the Oracle folder (usually c:\APP)
c) Delete the folder where you stored the BSI database (whatever custom path you specified during the BSI install).
d) Delete the c:\program files\Oracle folder or c:\program files (x86)\Oracle
Note that even if you choose a custom path for the Oracle installation there are always inventory files under this directory on the C: drive and they must be cleared before you reinstall.
That should be all that’s needed to clean the system. You can now reboot again and continue with a reinstall or use the system normally.
|
OPCFW_CODE
|
VB6, Windows Tablets, Screen Resolution and ActiveResize
Earlier this year, I moved my desktop from XP to the much-maligned 8.1. At about the same time, my Vista laptop expired and was replaced by a Windows 8.1 tablet. I kept the XP tower and this is used when programming in VB6.
All my VB6 programs work fine on my 8.1 desktop, but none do on the 8.1 tablet. Instead, I get an error message about “screen resolution”. This is where Stackoverflow comes in!
In searching on Google, I came across this question, which was answered on Jul 21 '10 at 12:39 by Andy Robinson. Andy’s answer still left me with questions. I already use VB Gold’s Active Resize, which is Andy’s solution, but only after the Splash Screen. VB Gold says that no coding is necessary, but others seem to do so, as with a Private Sub Form_Resize. At my age, and not having done any new programming (as distinct from updating) for some time, I admit to needing help.
I want to be able to use some of my VB6 programs on a Windows Tablet. When I try to install, I get an error message about Screen Resolution. I think that VB Gold's ActiveResize, which is in these programs, should be able to do the job. But I am stuck with the coding.
My Tablet is a Dell Venue 8 Pro and the OS is Windows 8.1.
The total size of my 7 compiled VB6 programs is just under 14 Mb. I have connected a Memory Stick to the Tablet and sought to run the installation program. It is at that stage that I receive an error message regarding "Screen Resolution". The exact words are:
"Error: To install this program, your computer must have a display resolution of at least 800x600. Your computer's display resolution is 853x533".
From what I have read, ActiveResize should be able to solve the problem. But, at 80+, not all my faculties are working as I should wish!
Welcome to StackOverflow Michael. Is there a question about re-sizing something buried in your post?
Micheal, can you please edit your question to add the specifics? What brand tablet running what version of windows? When you say "install", do you mean you are running an install program, or copying the files to the tablet? Does the error happen when you are installing, or when you try to start your application? What is the exact error message?
It sounds then like the error is not coming from your program, but from the installation program. Many installation programs have settings where you specify various minimum requirements. If you still have the installation projects you can update that and remove or decrease the screen size restrictions.
|
STACK_EXCHANGE
|
WPF Stylus Events when Stylus/Pen is away from the screen
Is it possible to react (via StylusButtonDown etc., or alternatives) to button presses (i.e. one of the stylus buttons, not buttons in some app) that happen when the stylus/pen is not touching the screen, i.e. hovering in the air somewhere in range of the notebook? Those only seem to trigger if the pen tip is actually touching the surface of the screen. Specifically, I do not need to know about the position of the button. Just literally when the barrel button is pressed.
I'm using a Microsoft Surface and the Surface Pen that comes with it in particular, if that makes any difference. I don't need it to be cross-platform, portable or anything. In fact, solutions in other languages (C++, etc.) are OK. Hacky solutions are very welcome.
AutoHotKey is the answer!
AutoHotKey is an open source tool that helps create custom automated responses to Windows input device events.
AutoHotKey supports Surface Pen. It also supports the scenario of pen being away from screen.
Surface Pen connects to Windows via Bluetooth, with a signal range of several metres. Therefore, it is technically possible to use a Surface pen as a remote controller for Windows!
One of the most popular scenarios is Using Surface Pen as a Presentation controller
Here is a working AutoHotKey script for controlling Powerpoint presentation with Surface Pen:
; File name: surface-pen-slide-show-control.ahk
;
; This program helps the presenter scroll through slides, using Surface Pen
;
; Single-click on pen button to scroll forward
; Double-click on pen button to scroll backward
;
#NoEnv
SendMode Input
SetWorkingDir %A_ScriptDir%
#IfWinActive ahk_class screenClass
#F20::
Send, {Right}
Return
#F19::
Send, {Left}
Return
More information and Help:
The Complete Reference on AutoHotKey: https://www.autohotkey.com/
Demo code (pen away from screen):
https://github.com/jonathanyip/Surface-Pro-3-AutoHotkey/blob/master/Pen.ah
Remote Control for Powerpoint slide show: https://anderseideblog.wordpress.com/2015/08/04/turing-my-surface-pen-into-a-presenter-tool/
Thanks, that sounds promising! I'll try it out to see if it works.
I have might have solution that would work with currently used technologies such as bluetooth and Wifi that is in pen and your Microsoft Surface. But you will need to program some parts on your own (most of mentioned techniques are in open source libraries).
After you pair your,pen with Surface you should be connected with Wifi and bluetooth to the pen.
1. Option
There is good article, explaining bluetooth triangulation that we will use for triangulating pen in close area, except we will use bluetooth and wifi (we dont have 3th point) so it will not be so precise as 3 points. But with 2 points you can actually measure distance by this table and find where is theirs intersection, in such small space its doable.
Use similar method with Wifi for second triangulation point, you can see source codes for Wifi triangluation down there.
Next step will be calibrating the position of pen in holder, that means that the
pen will be at one side of the Surface(for simplification lets assume its only at right shorter side)
from this you can compute area of screen
Allowed area:
offsetHeightOfPenScreenRation = ScreenHeight - penHardwareHeight
NonValidX = StartpenPositionX - ScreenWidth < penPositionX or StartpenPositionX < penPositionX
NonValidY = StartpenPositionY - ScreenHeight < penPositionY or StartpenPosition+offsetHeightOfPenScreenRation < penPositionY
penIsClose = not ( NonValidX or NonValidY)
This is how it could work, you will need to keep this software running in background and it could have some serious impact on battery, you will need external wifi adapter for internet.
I dont have Microsoft surface so i cant code it myself cause i dont have device to try it on, but this is working idea. With little tuning of precision it could be really precise positioning.
2. Option
Wifi triangluation: https://github.com/kevindu/wifi-ap-positioning
can be maybe used alone and you can transfer location through the bluetooth if you can get into a firmware of pen, but this option will not be so precise.
As far as I know, Microsoft Surface does not have sensors around the screen. So no, there are no events that would react to stylus/pen when they are near the screen.
Although, if you are an engineer, you might create some sensors yourself, and then create events that would react to such things.
The only way I see this is to incorporate the very sensors for the buttons yourself and since it is already connected using bluetooth or WiFi, so use the required triangulation method to calibrate the device. Well, a hacky answer I believe, since Microsoft Surface does not have sensors around the screen.
If you are really looking for a "hacky solution", then I have an idea, but it will be a really dirty way to do it.
Despite the fact that no api supporting the button exists, the system responds to pressing the button even if the stylus does not touch the screen. You can create an application that reads the following command line arguments at startup: ClickOnce, DoubleClick, PressAndHold. Depending on the argument provided, the application should send the appropriate message to your main application.
Now you can go to the "pen settings" and depending on the action, select to run the above application with the appropriate parameter.
|
STACK_EXCHANGE
|
Ability to use wildcards in resourcePackages
Hi, thanks for this excellent plugin! I've tried using wildcards in the resourcePackages tag, is that supported? I seem to get null pointers no matter what I try. Note it works perfectly when I specify the exact package(s). Currently I use holon to generate swagger.yaml but it only does that when the app is up and running, and since my apps are behind an API gateway I have to briefly expose the app publicly during the jenkins build in order to get the yaml to feed the gateway, which is a security risk for me. So this is the perfect solution for me! However, populating the gateway is framework code and in my opinion something that the developers should not need to be concerned with, so ideally if they add a new REST API class, the services would just appear in the gateway during the next build without them having to configure additional packages in the build section of the pom. And the only solution I can see for that is allowing wildcards in the resources declaration:
<resourcePackages>
<resourcePackage>edu.upenn.isc.*</resourcePackage>
</resourcePackages>
Please let me know if this would be possible and if you would be interested, I could attempt to contribute code if needed.
Best,
Charles Harvey
I surely understand your use-case when we created the plugin we had pretty
much the same need - to be able to create the OpenAPI/Swagger definition
during build time instead of only having it at runtime.
However, I am not sure I understand how you would run the plugin. You want
to push the configuration of the plugin upwards into a parent Maven pom
or similar? Because you are right, currently the scanning for API classes
does not recurse through the packages. I think this was a choice to let the
user of the plugin have full control. If your service is generated, e.g.,
from an archetype you could generate the configuration as part of the
archetype, however, if you want to push the configuration upwards you will
need the recursive behaviour.
I guess, it would be quite ok to add an option whether to enable the
recursion or not, if I remember correctly this takes place in the "scanner"
implementation. You are more than welcome to do a PR for this feature.
Thank you for the quick response! I am not using an upstream pom; I have a template that is used for new projects where some automated search and replace goes on in the project templates, so the build plugin would exist for every project. My main interest is having this functionality be removed from the developers' stable of responsibilities; I don't have to train them and they make fewer mistakes. I'm using the generated swagger in a Jenkins pipeline to populate both 3scale (my API gateway) and SwaggerHub and would be happier if a developer was never in a situation where they added a new REST resource in a new package and were then puzzled by why they don't see it in SwaggerHub, etc. We work pretty hard here to keep developers writing code as opposed to worrying about configurations :)
I'll have a look and see if I can figure out how to change to scan all classes that are within the project's code given a syntax like:
edu.upenn.isc.*
there would be no need to scan any library jars, etc.
Thank you, I'll get back to you if I can figure it out. From your perspective, would you prefer an additional config property like true or would simply ending the resourcePackage with an asterisk be enough?
Best,
Charles
Sorry it ate my examples. From your perspective, would you prefer an additional config property like <recursiveScan>true<recursiveScan> or would simply ending the resourcePackage with an asterisk be enough?
Ok, maybe I am confused, but the following setting seems to do exactly what I want to do, which is find @Api in every package under edu.upenn.isc.esb.openshiftUtils - does that seem correct?
<configuration>
<useResourcePackagesChildren>true</useResourcePackagesChildren>
<resourcePackages>
<resourcePackage>edu.upenn.isc.esb.openshiftUtils</resourcePackage>
</resourcePackages>
Yes you are right - we seem to have this feature already. If you enable userResourcePackagesChildren you should get recursive behaviour - implemented here: https://github.com/openapi-tools/swagger-maven-plugin/blob/master/src/main/java/io/openapitools/swagger/JaxRSScanner.java#L72
Been long since we did that code so I actually did not remember. Let me know if it does not work.
|
GITHUB_ARCHIVE
|
A recent post from the Android Security team confirms the positive impact that Rust and other memory-safe languages are having on security vulnerabilities. The most critical categories of exploits are dropping rapidly, leaving hackers and researchers to focus on less severe vulnerabilities.
Memory Safe Languages in Android 13
Jeffrey Vander Stoep, a software engineer on the Android Security team, posted a new article on the Google Security Blog last week that has been making big waves. "Memory Safe Languages in Android 13" was posted on December 1st, and it starts out by reminding everyone of Alex Gaynor's post from a few years ago where he showed that large projects full of memory-unsafe code consistently have security vulnerabilities that are caused at least 65% of the time by memory safety issues.
In Android, however, this trend is reversing. "Memory safety vulnerabilities have dropped considerably over the past few years", Vander Stoep says, "from 76% down to %35 of Android's total vulnerabilities". This is a huge shift, and it takes Android from a place where it was about average to a place where it now significantly beats the average!
How was this outstanding achievement accomplished? Was it by "rewriting in Rust"? No, actually - in a Google post from nearly 2 years ago Vander Stoep explained the team's position of focusing on the safety of new code, rather than rewriting existing code. It's delightfully surprising to me that they were able to achieve such notable gains just by writing new code in Rust, but the results speak for themselves!
This was certainly not achieved through Rust alone. In fact, the "New Code by Language" pie chart shows that there's more Java and Kotlin being added to Android than Rust right now, and still quite a bit of memory-unsafe C and C++ as well. Rust isn't supported everywhere yet, though the team has plans to introduce it in more places (such as userspace HALs, Trusted Applications, drivers, etc.)
Java and Kotlin are memory safe languages, which may catch some people off-guard. In Java-based languages, it's typically rather easy to encounter a dreaded Null-Pointer Exception, or NPE - which can stop a process in its tracks and derail a whole application if it's not resilient enough. The difference between an NPE and a memory safety issue is that NPE's don't lead to exploitable conditions. Yes, Java is an unsound language - meaning that it cannot guarantee a well-typed program won't encounter unexpected runtime errors like NPEs. That doesn't make it unsafe, however.
u/anttirt describes the distinction well in a Reddit thread discussing this article:
Memory safety is a specific technical term with a specific technical meaning, and it does not apply to throwing a NullPointerException in Java. Programming languages [...] designate parts of memory to be either uninitialized, or initialized with a live object of a particular type. Memory safety means never reading uninitialized memory (including memory that previously contained an object that is no longer considered live), and never operating on initialized memory through a pointer/reference to an incompatible object type.
Java crashes don't lead to exploitable issues like buffer over-reads, use-after-frees, invalid page faults, wild pointers, etc. Thus, Java may be unsound, but is still memory-safe. For those like me who have used memory-safe languages their entire career, the concept that a crash could lead to things like secret information leaking or arbitrary code execution can be quite astonishing.
That doesn't mean that memory-safe languages like Java and Rust are free of security vulnerabilities entirely, but those vulnerabilities are overwhelmingly related to logic issues rather than memory issues. The severity of logic-related security issues is dramatically lower, because they don't allow for things like accessing memory that is out-of-bounds or arbitrary execution. Google's results confirm the expected drop in severity, with the number of critical and remotely reachable vulnerabilities swiftly dropping.
Rust to the Rescue
Though Java provides good memory safety, it doesn't easily provide the same level of performance with minimal resource usage that a native C/C++ or Rust implementation can. Java execution can be quite fast, but it comes at the cost of high memory usage. Like many other languages it can be easy to use Java in ways that lead to poor performance, putting too much pressure on garbage collection or using inefficient data models.
Go is often seen as a solution to this, providing a much simpler and more constrained language that compiles to a low-level binary and doesn't require a virtual machine. It has innovative approaches to garbage collection and concurrency, and generally performs better than Java in many situations. It's easier to learn, though complexity is slowly increasing somewhat as things like type system generics have been introduced to the language. It's not without downsides, however. It is generally memory-safe, but you can still trigger data races and out-of-bounds access scenarios - though they should always cause a crash. Whether it provides the same level of safety that something like Java does is somewhat debatable, and the garbage collector can still cause problems at times.
Google's post says that in Android 13, 21% of all new native code is in Rust. "To date, there have been zero memory safety vulnerabilities discovered in Android’s Rust code." Rust is providing an exceptional level of memory safety, outstanding performance on par with C/C++ with zero-cost abstractions in many cases, and still supports approaches based on advanced type system features with a reasonably great developer experience.
The point is often made that you can write C/C++ code with the same level of memory safety as Rust, but I think that it is often much more challenging and we still see seasoned teams with a specific focus on memory safety fall short of that goal on a regular basis. Google's post also talks about the added overhead of such solutions, which can remarkably affect the performance characteristics.
"Using memory unsafe code often means that we have to make tradeoffs between security and performance, such as adding additional sandboxing, sanitizers, runtime mitigations, and hardware protections. Unfortunately, these all negatively impact code size, memory, and performance."
The Momentum Grows
There has been a flurry of reaction to Vander Stoep's post on social media and in tech news coverage. ZD Net's summary points out that this is "the first year that memory safety vulnerabilities are not the biggest category of security flaws, and comes a year after Google made Rust the default for new code." "Google's decision to use Rust [...] appears to be paying off."
The Register points out that "Google is not the only large tech company to recognize the benefits of memory safe code. Meta has voiced its appreciation of Rust. Several months ago, Microsoft CTO Mark Russinovich declared that C/C++ should no longer be used to start new projects." Even the NSA got in on the debate. "The US National Security Agency recently observed that while languages like C++ can provide a lot of flexibility, they rely on the programmer to provide the necessary memory reference checks."
The debate still rages on, though, with people like C++ creator Bjarne Stroustrup cautioning against becoming "enamored with new and shiny things that promise to make their lives easier." He described it as "far more exciting" than mature languages, and likened supporters to "enthusiasts" who "tend to be rather one-sided in their comments."
If you want to discuss this post, you can find me on Mastadon at @BKonkle@Fosstodon.org, on Discord in various Rust and TypeScript programming communities, on YouTube @deterministic-dev, and at Formidable Labs. Thanks for reading!
|
OPCFW_CODE
|
Our next Meetup is scheduled on Wednesday March 7 at 7:00 pm. This meetup is a little special, since it is a bit more focused into research than usual... To those afraid of math, beware! ---------------------------
Olivier Grisel (ML Expert - Software Engineer at INRIA) - Generalization in Deep networks
Abstract: This talk will give an overview to some recent theoretical results and experiments on why deep learning models work so well (when they work). In particular we will discuss expressive power, optimization and generalization and their interaction. We will illustrate some of the main insights with empirical experiments.
Arthur Mensch (PhD Candidate at INRIA Parietal) - Differentiable Dynamic Programming for Structured Prediction and Attention
Abstract: Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
Diogo Luvizon (PhD Candidate at ETIS - Université de Cergy) - 2D/3D Pose Estimation and Action Recognition using Multitask Deep Learning
Abstract: Action recognition and human pose estimation are closely related tasks since both problems depend on the human body understanding and additionally, action recognition benefits from precise estimated poses. Despite that, both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.
As usual, pizzas and drinks will be provided after the talks.
Heuritech Meetup Team.
|
OPCFW_CODE
|
I have added a new esxi host which site 24x7 detected automatically and shows as 100% available. However none of the details are populated, every field says no data available (CPU, Memory, Disk usage, Associated VMs, Associated Datastores, Network and ESX Details are all empty).
Furthermore all the VMs running on that host show as normal in site 24x7 but the "associated esxi host" field shows the IP address of the host rather than the host name and has a not monitored link beneath the IP address.
I have gone through the KB page www.site24x7.com/help/admin/adding-a-monitor/vmware-esx-esxi-monitor.html and believe I have set it up correctly (including enabling MOB which appears to be disabled on the other 2 hosts so I am not sure why they are working!) and my new host has all the same permissions specified as the other 2 in vsphere and the same on-premise poller profile is set on all three hosts.
I need to get the new host details showing in site 24x7 before I am comfortable moving production workload onto that host, any help would be appreciated.
Upon checking, we could see that, for few VMware ESXi monitors added, the hostname retrieved wasn't having domain name included in random iteration while collecting performance metrics. Because of the hostname mismatch, we weren't able to get the performance metrics. This is a unique scenario and we haven't experienced this before with any of our customers. We are working on this.
As a workaround, please edit the ESXi monitor, remove the domain name from the hostname field, and save the monitor.
Let us know if you need any assistance.
I removed the domain name as suggested.
The host statistics are being returned (cpu, RAM etc) however on the summary page it shows associated VMs monitored as 0/0 but on the Virtual Machines tab it shows 16 VMs on the host. One is in the condition "UP" the other 15 all show as "down" with the error: Invalid User Name/Password.
Vmotion is also not being detected if I move one of these VMs to a different host.
We are checking this on our end. I will come up with an update as soon as possible.
Having made no further changes yesterday, overnight the edited esxi monitor has started showing the VMs as I would expect, it now looks normal.
However site 24x7 has also autodetected the same host with its full hostname but is not showing any metric values at all (neither host or VM).
We could see that the same VMware ESXi is being monitored by two different OnPremise Pollers (XXXXTOR, XXXXN2). We allow duplicate monitors to be added if it's set to be monitored by different OnPremise Pollers.
You can edit the monitor to see in which OnPremise Poller the monitor is currently getting monitored.
If you don't want to monitor the ESXi host from more than one OnPremise Poller, please delete the duplicated host monitor.
Let us know if you need further assistance.
|
OPCFW_CODE
|
//Result.java
package loadbalancer;
public class Result {
private int smallestPerf;
private int smallestPrime;
private int smallestNonPrime;
private boolean emptyset;
private int variance;
Result(int prime, int nonprime, int perfect, int variance, boolean flag) {
this.smallestPrime = prime;
this.smallestNonPrime = nonprime;
this.smallestPerf = perfect;
this.variance = variance;
this.emptyset = flag;
}
/* Condensed the getter and setter methods */
public int getSmallestPrime() { return smallestPrime; }
public int getSmallestNonPrime() { return smallestNonPrime; }
public int getSmallestPerfect() { return smallestPerf; }
public void setSmallestPrime(int x) { this.smallestPrime = x; }
public void setSmallestNonPrime(int x) { this.smallestNonPrime = x; }
public void setSmallestPerf(int x) { this.smallestPerf = x; }
public int getVariance(){ return this.variance; }
public void display(){
if (!emptyset){
System.out.println("\n*SpecialNum*::\tPrime: " +this.smallestPrime+ "\t\tNon-Prime: "+
this.smallestNonPrime + "\t\tPerfect: " + this.smallestPerf +"\n\t\tVARIANCE: "
+this.variance);
}
else{
System.out.println("SpecialNumber: 0");
}
}
public int calculateVariance(){
int mean = (this.smallestNonPrime + this.smallestPrime + this.smallestPerf)/3;
int temp1 = (int)Math.pow(mean-this.smallestNonPrime, 2) ;
int temp2 = (int)Math.pow(mean-this.smallestPrime, 2) ;
int temp3 = (int)Math.pow(mean-this.smallestPerf, 2) ;
int lastTemp = (temp1 + temp2 + temp3)/3;
return lastTemp;
}
public boolean notEmpty(){
// nonPrime range [1, 10000].
return (this.smallestNonPrime >0);
}
}
|
STACK_EDU
|
I searched far and wide on the internet but couldn't find the answer for these questions below. First, sorry about any english mistake, i learned english with videogames (yay!) but I still have to look up though, through and thought.
Most agents in Troy TW receive several different types of buffs from their skill trees that are not very well explained in game, so after playing through the campaing a couple times, I wrote down several questions:
(1) Poison: skills that say "+2% damage to all units from poison", what does that mean? Does it increase the damage of "poison the well", "murmurs of sedition", or both?
(2) the skills "wicked poisoner" (poison the enemy army to wound it's leader) and "rampage" (deals some % attrition to the enemy army after sucessfull assassination); do they interact? Does removing a hero though "Poison the Well" counts as an assassination?
(3) bug on assassination chance? I've seem this consistently in multiple playthroughs; I've build up one of my spies to be a master assassin, took all skills that increase % of incapacitating enemy characters, etc. This guy succeeds in 95% of his assassination attempts; yet, when targeting a hero, the chance os success shows up as 70%-100%, where attacking enemy agents the success chance is usually 19%-32% (but i still succeed nearly 100% of the time). Whats up with that?
(3.1) On that note, isn't that overpowered? Imagine if the AI built up their agents like that, a single spy clearing your entire roster of high level agents, leaving your armies without a leader 1-3x times per turn, etc... god that would really kill my will to keep playing haha. Maybe I should stop abusing agents so much...
(4) I can't remember the skill names right now (didn't wrote them up", but some of the spy skills that protect him from enemy agentes say "-10% enemy action success chance" and other says "-10% action success chance when this agent is idle", whats the difference?
(5) "Seeds of dissent" (spread your influence in enemy provinces) and "venerate the ancestors" (influence in your own provinces) REALLY don't work together... and I can't figure out how they interact.
I used a hero with "alert" and other decrees/buildings to recruit 2 envoys at lvl 13 iirc. Deciced to build one up for boosting influence in my province, and one to boost influence in foreign territory, because I had a ton of provinces that I was sharing regions with my allies. So I moved both agents to one of those provinces; used "venerate the ancestors" and my influence there went from +0,5%/turn to +4% turn, nice; then used the other agent on the foreign region (same province) and it reduced my influence growth to +2%, but how??? The numbers that show up when you mouse over influence just doesn't add up. Using the skills on a different order doesn't seem to help either. In fact, I don't think Seeds of Dissent is working at all...
(6) Resources bonuses: an envoy built up for boosting resources can get up to +60% to all resources in a province, with another +20% to wood or stone; some of these bonuses are only when "idle", which means??? If the envoy uses a skill, like venerate the ancestors, the bonus is lost for the turn? I don't know. Nevertheless, I have two problems with that >
(6.1) That's not good enough. A late game army with 19 elite units can easily cost 5 thousand food 1.5 thousand bronze in upkeep; an upkeep reducing envoy can reduce it by 45%, while also giving other bonuses (+ damage, +melee attack, etc); in comparisson, my best 4-region provinces are producing 4 thousand resources. The resource bonus to that province does not compare with the savings on upkeep on my elite army, never mind the army bonuses that I don't get if the envoy sits idle at home!
(6.2) It's not working? What I mean is, if a province has negative satisfaction, and I move a priestess or a hero there, the local satisfatcion immediatelly goes up, you just have to click off and on the settlement again for the display to update. The same doesn't seem to work with envoys; my settlement was producing 700 food, moved an envoy with +30% resources there and the food number didn't change; waited one turn so the envoy could be considered "idle" and the number remained at 700. What am I doing wrong?
(7) multiple omens: before siege battles, if a Priestess give me a bad omen, I can just click to keep the siege going, then go back to the same army and try for a positive Omen again. Is that working as intended? Do the negative omen and and positive omen bonuses stack?
(8) when a priestess take a skills that increase favour from settlement rituals at the cost of the ritual being more expensive, it seems like the bonus is only for that one priestess, but the increase in cost affect all priestess? Or am I seeing things.
(9) army debuffs: Spies can cause debuffs with successfull assassinations; envoys can aply debuffs with "lead astray" and priestess with "oration of dread"; but how long do they last? I suppose "lead astray" lasts AT LEAST until the enemy next turn, or else the campaign movement debuff would be useless, but I haven't been able to test it; if the debuffs go away when the AI turn arrives, then they are only usefull before my attacks, but the game doesn't make it clear. What about epic agents, I know the Orion revealing enemy positions feature skills lasts multiple turns (and it's awesome), what about the Satyr debuff on enemy armies? The game really should be more clear about those things.
Ending disclaimer: my version of the game is translated to Portuguese, and there's some... confusion on the game tooltips, when it comes to "Regions", "provinces", territories", etc... I'm still confused about which skill/effect will affect the current region, or the entire province, etc... Someone should do a quality pass on the portuguese translation.
|
OPCFW_CODE
|
How to pass array of stdlib list by reference
I am a beginner in C++ stdlib. I learnt stdlib tutorials and I am implementing "number of connected components in Graph" using adjacency list created with stdlib lists. I wanted to know how to pass this array of list by reference to dfs function? Also, one of my frnd said that by default it will be passed by reference. Is it true? Please clarify. which of these is right?
for example:
My array of list: list<int> L[v];
My function call: dfs(L[v],k);
My function definition: void dfs(list<int> List, int index);
My function prototype: void dfs(list<int> L, int);
(or)
My array of list: list<int> L[v];
My function call: dfs(L,k);
My function definition: void dfs(list<int> *L, int index);
My function prototype: void dfs(list<int> *, int);
The default in C++ is pass by value.
@karlphillip: Unless the argument is an array (or a function), in which case it's converted to a pointer and the effect is more like passing by reference.
@MikeSeymour: Honestly not sure what's best here: to pretend we have pass-by-reference semantics, or to explain that it's pass-by-value semantics on a pointer. I suspect the latter, even though it jumps into some complex territory; I think the damage potentially done by the former outweights that risk though
I wanted to know how to pass this array of list by reference to dfs function? Also, one of my frnd said that by default it will be passed by reference.
Not quite.
First of all, let's forget about the std::list; it's just confusing matters.
Pretend you're passing an array of int instead:
void foo(int[] x);
int main() {
int x[5];
foo(x);
}
There are no references here, and C++ arguments are copied by default, but because arrays cannot be copied and because the name of an array decays to the name of a pointer to the first element in the array, you're passing a [copy of a] pointer not the array itself.
In fact, void foo(int[] x) is misleading syntactic sugar for the equivalent, and clearer void foo(int* x).
In particular note that — in both cases — the function foo does not know the dimension of the original array.
This is kind of old-fashioned, though, and you can pass an actual reference to an array:
void foo(int (&x)[5]);
int main() {
int x[5];
foo(x);
}
Now we can apply this same logic to arrays of std::list:
void foo(std::list<int>*); // pointer to one or more lists, OR
void foo(std::list<int> (&)[5]); // reference to an array of five lists
Anyway, mixing standard containers and arrays seems odd; prefer a std::vector over an array, or a wrapper around statically-allocated arrays (like std::array or, previously, boost::array) if you really need the automatic storage duration for some reason.
Also, your call syntax is wrong.
My function call: dfs(L[v],k);
My function definition: void dfs(list List, int index);
This passes a single list from the array of lists, and does so by value/copying.
Thanks, for the advice. I will look into vectors too.
btw, you said that my function call and definition are wrong. I have mentioned a second version after (or) trying to pass by reference. Looking at wat you say, that seems to be right. Is it?
I mean to say:
void dfs(list *L, int index): definition
dfs(L,k);
--these seems to be right when I look at your explanation
@SaranyaDeviGanesan: Yes, the second version is ok, for passing a pointer-to-list(s).
To pass by reference, use &:
void dfs(const vector< list<int> >& L, int index);
// ...
vector< list<int> > L(v);
dfs(L, k);
Objects of non-builtin types are usually passed by reference to const to avoid copies. Passing by value is the default for all types, you have to specify reference semantics explicitly. Note that I've used a vector instead of a C array, which is usually preferred.
Assuming that you have an actual array of lists:
std::list<int> matrix[N];
To pass the whole array to a function you can do one of two things. The C way would be passing a pointer to the first element together with the size of the array:
return_type
dfs( std::list<int> const * array, std::size_t size, int key ); // signature
dfs( matrix, N, k ); // caller
The C++ way... well, there are different C++ ways. I would recommend not using an array, but rather a vector (and since we are at it, change the list into a vector too):
return_type
dfs( std::vector< std::vector<int> > const & adj, int key ); // signature
std::vector< std::vector<int> > adj_list;
dfs( adj_list, k );
If you really want to keep using arrays and lists and passing by reference, then the syntax would be:
const int N = 10;
return_type
dfs( std::list<int> (&adj)[N], int k );
std::list<int> adj_list[N];
dfs( adj_list, k );
Note that in this case the size of the array N is fixed at compile time (can be made a bit more generic by using a template, but it will still be resolved at compile time).
I would recommend that you redesign your data structure to be a vector of vectors, though.
FYI, the default in C++ is pass-by-copy/value.
Let's adjust your code to use the traditional pass-by-reference method:
Array of list: list<int> L[v];
Function call: dfs(L,k);
Function definition: void dfs(list<int>& L, int index);
Function prototype: void dfs(list<int>& , int);
Of course, the & can be place either near the parameter type ot the parameter name.
This method essentially passes a pointer. Not the same kind of pointer you would get when doing list<int>* L. That is why it is called by reference.
And inside dfs(list<int>& L, int index), you will use/access L as if it were a regular variable.
I think this blog post explains it well.
That was copied/pasted from the original question, give me a break. I think I'll change my picture to a female too. It seems to give more upvotes. =D
But you copy/pasted the part of the question that was a wrong implementation of the stated requirements :)
If your array of lists is declared like this:
list<int> L[v];
Then the correct way to pass by reference is:
void dfs(list<int> & L, int index);
Your function call will then look like this:
dfs(L[v], k);
|
STACK_EXCHANGE
|
For our March “Community Choice” Project of the Month, the community elected GnuCash, an easy-to-use personal and small-business finance manager with a checkbook like appearance. The GnuCash team shared their thoughts about the project’s history, purpose, and direction.
SourceForge (SF): Tell me about the GnuCash project please.
GnuCash Team: GnuCash is a personal and small-business, single-user, double-entry bookkeeping software application based on standard accounting principles, with a wide variety of financial and accounting reports to help you get a clear picture of your finances. GnuCash is a mature project with almost twenty years of development and is also a part of the GNU project to build a free software operating system.
SF: What made you start this?
GnuCash Team: GnuCash started as a port of the older X-Accountant software package, with a modern Gtk+ GUI toolkit that runs on GNU/Linux (and others). At the time, a free software accounting program was considered essential to the GNU project.
SF: Has the original vision been achieved?
GnuCash Team: Yes, the original vision was to support single users’ accounting needs and this has been achieved.
SF: Who can benefit the most from your project?
GnuCash Team: Anyone who needs to keep track of their finances, whether it’s a single user, a small business, a charity fundraiser, or anyone that prefers to use a free software solution instead of a commercial or closed-source solution.
SF: What is the need for this personal and small-business -accounting software?
GnuCash Team: If you’ve ever paid taxes, you’ve keenly felt the need for good financial record keeping. If you’ve run a small business, you’ve had to keep detailed records of your business transactions such as invoices, vendors, customers, budgets, etc. GnuCash can help with both personal and small business accounting needs.
SF: What’s the best way to get the most out of using GnuCash?
GnuCash Team: The best way to get the most out of GnuCash is to use it according to the double-entry bookkeeping principles it has been designed for by using the five basic types of accounts, and debits and credits between them, as the building blocks of your financial record-keeping. GnuCash uses your accounting transaction records to build detailed reports of your accounts.
To that end, we have an excellent concept-based guide that walks you through all the accounting activities you can do using GnuCash.
SF: What has your project team done to help build and nurture your community?
GnuCash Team: Our project has a very active user mailing list (firstname.lastname@example.org) where both new and experienced users ask questions, swap tips, and help each other. We also have an active developer mailing list (email@example.com) where GnuCash developers collaborate on making improvements to GnuCash. Last year the developers migrated the project repository to the Git version control system to try to lower barriers to entry for potential contributors.
SF: Have you all found that more frequent releases helps build up your community of users?
GnuCash Team: In our case we have found that stability is important to our community. We ship new features in minor releases about once every three years and bug fixes in micro releases every few months.
SF: What was the first big thing that happened for your project?
GnuCash Team: The first big thing probably was the port to Gtk+. There was a lot of excitement at the time. GnuCash was also one of the earliest projects on SourceForge, within the first 150 registered or something like that. The number would have been even lower but we dawdled for a month or two before completing the process.
The port to GTK/Gnome was also when the project name changed based on a popularity poll. Some excellent names were suggested, along with some clunkers like GnoMoney. Somehow GnuCash came out on top. This was circa 1997, meaning SourceForge and GnuCash go back a long way together!
SF: What helped make that happen?
GnuCash Team: The GnuCash team realized that one of the big requirements of a free software desktop, like the emerging GNU/Linux desktop, was a free software accounting package and they helped to bring that vision into reality.
SF: What was the net result for that event?
GnuCash Team: The net result is that anyone looking for a way to manage their finances on free desktop software can now succeed in doing so.
SF: What is the next big thing for GnuCash?
GnuCash Team: The next big step for us is to move to a multi-user architecture so that several people may be able to enter transactions into the same book of accounts simultaneously. This should help small businesses and folks looking to scale up their operations with a free software solution.
SF: How long do you think that will take?
GnuCash Team: The time frame for this is the next several years.
SF: Do you have the resources you need to make that happen?
GnuCash Team: We have an excellent team of developers who know the code base inside and out but we would love to extend a welcome to new contributors on the project.
SF: If you had it to do over again, what would you do differently for GnuCash?
GnuCash Team: We would not have used the Gtk+ toolkit’s GObject library for writing ‘object-oriented’ code in C.
GnuCash Team: It ties the internals of GnuCash to the GObject library, which hinders portability.
SF: Any reason you can’t do that now?
GnuCash Team: We are trying to do it now as part of our multi-user architecture effort, but it is a large undertaking and will take time to get right.
SF: Is there anything else we should know?
GnuCash Team: Yes, if you use an Android mobile device, you can record your transactions on the go and import them into GnuCash later. Check out our website to learn more about GnuCash and GnuCash for Android, which is separate from the GnuCash project.
And a big Thank You to SourceForge for continuing to be an indispensable resource for free software projects around the world.
[ Download GnuCash ]
|
OPCFW_CODE
|
[WIP] Initial commit for smart probe implementation
This PR intends to have the basic implementation of SCSI commands named as SMART package which will include getting the basic attributes of SCSI disks such as vendor,model,serial number,version, logical size, Firmware revision, capacity, etc.
It will implement smart(smart is the name of the package which will expose various functions to get the various disk attributes, but this PR will only implement the retrieval of basic disk attributes,not smart attributes) as a secondary probe in node-disk-manager which will be used to populate etcd with the following disk details -
Version
Logical size
Firmware revision and
capacity
This probe would be responsible for filling the above defined disk details for a particular SCSI disk after udev probe fills its own specified set of disk attributes.
It will take device path (devPath) of a disk as an argument in order to fill the required details for a particular SCSI disk.
It makes use of various SCSI commands and ATA device struct page to get the following information for a SCSI and ATA disk respectively.
Changes committed:
modified: cmd/controller/disk.go
new file: cmd/probe/smartprobe.go
modified: cmd/probe/udevprobe.go
modified: pkg/apis/openebs.io/v1alpha1/types.go
new file: pkg/smart/ataidentify.go
new file: pkg/smart/diskinfo.go
new file: pkg/smart/ioctl.go
new file: pkg/smart/satdevice.go
new file: pkg/smart/scsicommands.go
new file: pkg/smart/scsidevice.go
new file: pkg/smart/types.go
modified: pkg/udev/common.go
modified: pkg/util/util.go
The output after integrating smart with node-disk-manager is -
sagar@sagar-ThinkPad-L470:~/gows/src/github.com/openebs/node-disk-manager$ kubectl logs -f node-disk-manager-nk97k
I0709 14:37:15.962472 1 probe.go:54] starting probe
I0709 14:37:15.962537 1 server.go:39] Starting HTTP server at http://localhost:9090/metrics for metrics.
I0709 14:37:15.963385 1 controller.go:113] started the controller
I0709 14:37:15.963445 1 probe.go:82] configured udev probe
I0709 14:37:15.965844 1 probe.go:82] configured smart probe
I0709 14:37:15.965860 1 udevprobe.go:173] starting udev probe listener
I0709 14:37:16.205107 1 eventhandler.go:41] processing data for disk-bcfc622a64c1d7e30f278e0f7762b5c7
I0709 14:37:16.205163 1 eventhandler.go:46] disk details filled by udev probe
I0709 14:37:16.206341 1 eventhandler.go:46] disk details filled by smart probe
I0709 14:37:16.503863 1 diskstore.go:30] created disk object : disk-bcfc622a64c1d7e30f278e0f7762b5c7
I0709 14:37:16.503936 1 eventhandler.go:41] processing data for disk-9e194a67207a9ccbfb40e041e4154597
I0709 14:37:16.503971 1 eventhandler.go:46] disk details filled by udev probe
I0709 14:37:16.505362 1 eventhandler.go:46] disk details filled by smart probe
I0709 14:37:16.534167 1 diskstore.go:75] updated disk object : disk-9e194a67207a9ccbfb40e041e4154597
sagar@sagar-ThinkPad-L470:~/gows/src/github.com/openebs/node-disk-manager$ kubectl describe disk disk-bcfc622a64c1d7e30f278e0f7762b5c7
Name: disk-bcfc622a64c1d7e30f278e0f7762b5c7
Namespace:
Labels: kubernetes.io/hostname=sagar-thinkpad-l470
Annotations: <none>
API Version: openebs.io/v1alpha1
Kind: Disk
Metadata:
Cluster Name:
Creation Timestamp: 2018-07-09T14:37:16Z
Deletion Grace Period Seconds: <nil>
Deletion Timestamp: <nil>
Initializers: <nil>
Resource Version: 4581
Self Link: /apis/openebs.io/v1alpha1/disk-bcfc622a64c1d7e30f278e0f7762b5c7
UID: 93944429-8385-11e8-97f2-54e1adef61ac
Spec:
Capacity:
Storage:<PHONE_NUMBER>2
Details:
Firmwarerevision: 1.00
Logicalsize: 512
Model: Ultra
Serial: 4C530001050110123422
Vendor: SanDisk
Path: /dev/sdb
Status:
State: Active
Events: <none>
Signed-off-by: sagarkrsd<EMAIL_ADDRESS>
Codecov Report
Merging #58 into master will decrease coverage by 4.41%.
The diff coverage is 22.22%.
@@ Coverage Diff @@
## master #58 +/- ##
==========================================
- Coverage 73.12% 68.71% -4.42%
==========================================
Files 16 17 +1
Lines 521 553 +32
==========================================
- Hits 381 380 -1
- Misses 111 147 +36
+ Partials 29 26 -3
Impacted Files
Coverage Δ
pkg/udev/common.go
100% <ø> (+17.94%)
:arrow_up:
pkg/udev/mockdata.go
73.33% <ø> (+1.45%)
:arrow_up:
cmd/probe/smartprobe.go
10% <10%> (ø)
cmd/controller/disk.go
100% <100%> (ø)
:arrow_up:
cmd/probe/udevprobe.go
67.67% <100%> (ø)
:arrow_up:
pkg/util/util.go
60.34% <25%> (-18.61%)
:arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 17d9f54...3063476. Read the comment docs.
@gila @AmitKumarDas @shovanmaity @kmova I have updated the PR with the changes suggested till now !
|
GITHUB_ARCHIVE
|
Windows .htaccess Not Working
Simple instructions with screenshots here. then i get a user/password request window. regards Megan :o) prod-is.com Great, Works just fine ! all on my humble little notebook.
I have set up my configuration using the following VirtualHosts in the C:/wamp/bin/apache/Apache220.127.116.11/conf/extras/httpd-vhosts## Use name-based virtual hosting.#NameVirtualHost *:80
.htaccess File Not Working On Localhost
Writing to multiple files with cat Higher up doesn't carry around their security badge and asks others to let them in. I've got a basic understanding of namevirtualhosts but I'm very confused on how to set-up Apache for multiple users with different passwords. Mangal Hi , I tried with the above tutorial .
- The error may show in the web browser when a webpage causes Apache to read the .htaccess file.To demonstrate this, I added a bad line to the Smart Web Developer .htaccess
- never mind!Options:Reply•QuoteGo to:Forum List•Message List•Search•Log InSorry, only registered users may post in this forum.Click here to login
- My config file is uploaded here: http://www.bphprint.co.nz/config.txt My htaccess file is uploaded here: http://www.bphprint.co.nz/htaccess.txt Can someone please help me out.
- Anyway, I got quite confused over all this and would appreciate some specific instructions when "Virtual Hosts" is involved.
- GNP TheAce Hi!, i'm using windows vista ultimate with Apache 2.2.3 + PHP 5.2.4 and i have a warning to say: Inside the .htaccess the passwd.txt PATH must be declared with
- Simple geometry.
- a virtual hosts configuration file), you will need to look in that file.
- Ray Having some trouble.
- Would the Ancient One have defended the Earth from a Chitauri invasion in the Avengers absence?
Word to describe object that can be physically passed through Advices on start practicing What power do I have as a driver if my interstate route is blocked by a protest? The directory to be protected is: C:\www\private The password file is called passwds and is in the directory passwords, i.e.: C:\www\private\passwords\passwds My apache server is under: C:\Program Files\PHP Home Edition 2\Apache2 Thanks Dave for resolving this for me — and thanks to whoever is responsible for keeping this thread open for so long! Htaccess Not Working On Windows Server Jesse I don't know what the problem is.…I followed your instructions exactly, but when i go to the directory that is supposed to be protected, I keep getting: Internal Server Error
The problem is that the passwords that are generated through PHP are different than the ones created using the htpasswd.exe file in the Apache/bin folder. .htaccess Not Working Ubuntu thank you for CEO, developers…of this site. How can an employee kindly decline to participate in an office potluck? navigate here If your site is hosted elsewhere, check your control panel (Plesk, DirectAdmin, CPanel, whatever) to see if you can enable .htaccess there.
See http://httpd.apache.org/docs/2.4/mod/core.html#allowoverride for details. Htaccess Not Working Centos I assume most of you are viewing Directory Indexes. Any advice would be appreciated. What can i do?
.htaccess Not Working Ubuntu
Tony I googled for a full day trying to get this right. more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed Bookmark the permalink.← How to Find a Yum PackageWhat's the Best Linux Distro according to Google? → 15 Responses to The Six Most Common Htaccess Problems and How to Fix ThemPingback: Shanx evil ok ive got it to ask me for a user and pass hurrah but i then get the message Forbidden You don't have permission to access /secure/ on this .htaccess File Not Working On Server
as soon as i made the htaccess procedure , my apche server become very HEAVY !! This should be corrected, either None or All. —— SNIP —— DocumentRoot "/web tools/machine_reporter/" AccessFileName ht.acl .htaccess AllowOverride All –> Allow All Options None Order deny,allow Alias /machine_reporter/ "C:/web tools/machine_reporter/" Options But i found a solution. http://www.policeposers.com Dan W I followed the instructions to the letter and still get a "500 Internal Server Error" when I tried to access the "secure" folder.
dealt with this scenario? How To Check If Htaccess Is Working I checked my .conf file and found that LoadModule auth_basic_module libexec/apache22/mod_auth_basic.so is uncommented. The use of uppercase letters could also circumvent Apache directives designed to prevent the contents of .htaccess files read from the web.Htaccess Problem #3: Filename MisspeltCommon misspellings of the htaccess file's
Open Notepad, type something and SAVE AS ".htaccess" and choose the file type as "ALL FILES".
Enable mod_rewrite in Apache¶ There are a number of ways to enable mod_rewrite, in case it's not yet enabled on your setup. is that normal ?? So... Htaccess Not Working Amazon Ec2 I created a new test secure folder and it works immediately.
Use a pre-configured build of Apache¶ If you're setting up Apache on your own computer, and it's turning out to be hard to configure, you should consider using XAMPP (Windows), MAMP What specifically did Hillary Clinton say or do, to seem untrustworthy to Americans? How to say "Ok, then I take X" after your first choice is not available What different frame materials have been used? I tried it with the .htaccess file, then tried it with acl.htaccess just incase.
Now we need to create the password file. A simple network sniffer could receive your internet traffic and retreive your password. C:/PROGRA~1/APACHE~1/Apache2.2/bin/.htpasswd Sander Thalen Just a thank you. Is it normal to treat Math Theorems as "Black Boxes" I was allowed to enter the airport terminal by showing a boarding pass for a future flight.
If you do not see an 'Internal Server Error', your Apache setup ignores the .htaccess file, and you need to fix that. after 3 tries i get "Authorization Required" message. Content identical to Apache's mod_rewrite directives? asked 4 years ago viewed 129489 times active 1 month ago Upcoming Events 2016 Community Moderator Election ends in 6 days Blog How We Make Money at Stack Overflow: 2016 Edition
I just downgraded a working 64-bit WAMPServer install to 32-bit, as I have to use the MS PHP driver for SQL Server, 32-bit only, and that blew away my configuration files. Utensil that forms meat into cylinders US Election results 2016: What went wrong with prediction models? E.g. Apache has an option to parse and check its configuration files.
and i m very glad to see the Indian name below this page Mr. thanks. Please make sure you have the paths in your files properly specified and put inside double quotes if you have spaces in your folder names. Could you please help me !
its all goodie until i want to access my /secure folder. Im running windows 2000 and apache 2.0.53 If someone might be able to point out what's wrong that would be great. Thank you! http://sniptools.com sniptools Hi Joey, The window will most likely appear differently on different browsers and different OSes.
The ones I have tried all have been PHP based. It appears that the password that is entered at the login prompt are different that thoes created thru PHP. I entered this code in it: ErrorDocument 404 /var/www/html/404.php and it is still not showing up. pauly Megan I have managed to set up htaccess on an externally hosted website but am having trouble setting it up on the local machine.
|
OPCFW_CODE
|
Thriven and thronovel Unrivaled Medicine God novel – Chapter 2419 – Acknowledging Master! call throat share-p3
Novel–Unrivaled Medicine God–Unrivaled Medicine God
101 date ideas
Chapter 2419 – Acknowledging Master! spotted guttural
How could a Heavenly Emperor Kingdom tiny fellow mobilize the power of rules?
But he uncovered to his impact that he was really not able to ignite the surrounding limitations!
Ye Yuan can even see the predicament on top of the hill previously.
He did not say anything, continuously directing for the void, and was about to unleash the potency of limits.
Mainly because Ye Yuan migrated as well!
Going for walks as you go along, he had been releasing his energy of supply!
Heavenly Emperor Xiu Yun considered that back check out within the range, a lot of thoughts spilling in excess of in their cardiovascular.
Each and every move that Ye Yuan had taken appeared to be stepping on their own hearts, causing them to be palpitating with stress and fear.
professional dog lover
Everybody noticed a little bit tricky to take in. This sort of point really subverted all their comprehension.
Certain more than enough, the effective limitations above 50,000 long distances, Ye Yuan entirely viewed them as practically nothing.
“This is extremely hard! Eight Void Mountain has been in existence here given that medieval times. Because time immemorial, just how many peerless prodigies have attempt to achieve the summit, but not one person has became popular! What proper does he, a puny small Perfect Emperor Realm, ought to get to the summit?”
Unrivaled Medicine God
The potency of a Deva 5th Blight was adequate to help make heaven and globe alter colorings!
This world already astonished each of the powerhouses on Eight Void Mountain peak!
Each individual phase that Ye Yuan got appeared to be stepping in their hearts and minds, making them palpitating with panic and concern.
Then, would he surpa.s.s the very best record and arrive at the top notch?”
the moon destiny
With Ye Yuan’s present sturdiness, it was naturally difficult to contact the summit.
Taking walks on the way, Ye Yuan also felt an upsurge of feeling.
At this time, there was currently none of us comprehending Dao, everybody was being attentive to Ye Yuan.
shall we abandon shall
Your situation was as if it was acknowledging a master.
Achieving his level, aside from the strength of policies, what else could injure him?
Getting to his point, in addition to the strength of rules, what else could injured him?
Ye Yuan claimed disdainfully, “You, when the human race’s key energy, gained a persons race big power’s imparting of Dao on Seven Void Mountain, but the truth is didn’t think to repay a persons race. A person as if you, precisely what should i belittle you?”
At the moment, he turned up ahead of the ghostly fires devoid of the tiniest reluctance and hit his give.
But a Divine Emperor World little fellow actually absolutely disregarded him!
An electrical that had been as large when the firmament passed on from there, and actually created Ye Yuan somewhat have the impulse to prostrate in wors.h.i.+p far too.
|
OPCFW_CODE
|
Scrum of scrums/2013-11-19
Jump to navigation Jump to search
Notable action items
- W0 awaiting ops approval on X-Forwarded-By header work - https://mingle.corp.wikimedia.org/projects/scrum_of_scrums/cards/38
- Mobile Web still blocked by GettingStarted API from Growth team - https://mingle.corp.wikimedia.org/projects/scrum_of_scrums/cards/11
- Mobile Web affected by ULS bug from Language team - https://mingle.corp.wikimedia.org/projects/scrum_of_scrums/cards/45
- Language still blocked on emulating replag in dev environment; have been corresponding with SPringle - https://mingle.corp.wikimedia.org/projects/scrum_of_scrums/cards/30
- Flow has ongoing architecture review form ops; awaiting more feedback - https://mingle.corp.wikimedia.org/projects/scrum_of_scrums/cards/46
- (Ops approval + Analytics follower) X-Forwarded-By header work with Faidon (Ops) - Faidon, Yuri, Adam to figure out short/med/long term options
- (Ops implement vhost portion + Analytics follower) Landing page for m.wikipedia.org and zero.wikipedia.org with Apache vhosts coming up in the next (hopefully) 1-2 weeks- Yuri to work with Faidon
- (Mobile Apps approval and knowledge transfer) Firefox OS Wikipedia app bugfixes and deploy - needs to be done by tomorrow to make quarterly cutoff for inclusion in base OS. Yuri likely person to deploy in mediawiki-config tomorrow pending today's review.. Adam to talk with Brion.
- Had quarterly planning meeting
- Deployed twice: https://www.mediawiki.org/wiki/Parsoid/Deployments
- Public & cached parsoid service at http://parsoid-lb.eqiad.wikimedia.org/, implementing https://www.mediawiki.org/wiki/Parsoid#The_Parsoid_web_API
- Platform still blocked on article transfer to betalabs by ops, pending available resources - https://mingle.corp.wikimedia.org/projects/scrum_of_scrums/cards/37
Faidon Liambotis and Andrew Otto
- Varnish serving all traffic since Nov 11th; a few issues found & fixed
- Swift migrated to eqiad (last part of production traffic)
- An outage a day this past week - see ops@ list for post-mortems (soon on wikitech, hopefully)
- ulsfo deployment paused, back to procurement to workaround network/vendor issues
- Assisting in the PDF sprint (see separate update by Matt)
- aka "collectoid" or "ocg" (offline content generation)
- New contractor (Mike Hoover) hired to assist with migrating to eqiad
- Analytics nodes reshuffled into multiple rows for redundancy, still a few issues to deal with:
- analytics1013 and analytics1021 seem to have the same IP assigned, Chris J is looking into it.
- Need publicly routable IPv6 for analytics1021 and analytics1022 Kafka Brokers.
- Lots of RFPs for Datacenters being reviewed.
- New search servers racked, working with Nik to get Elasticsearch ready on them.
- Search outage caused by chain of bad assumptions, lsearchd (Lucene) currently turned back on for all wikis until issues are resolved.
- Card 30 -- unblocked
- Card 37 -- still blocked on resources
- Card 19 -- done
- Goal is deploying to beta cluster this week; will need another week to deploy
- Progress on keep going (refactoring); API module next
- Planning for next project -- supporting article creation workflow
- Firefox OS Wikipedia App work (W0 taking responsibility, have high priority bugs to fix this week to make quarterly cutoff)
- Team at conference
- Geowiki info leak resolved
- Ongoing -- migrating Wikimetrics to production (ops)
- Cards are in progress
- Team at language summit; mostly bug fixing
- Exposed bug with ULS (Language)
- Working on Limn issue with Analytics (Analytics)
Core features (Flow)
- mediawiki.org deploy early December
- Parsoid intergration going well
- Ops review needed -- pass bug to Faidon(Ops)
|
OPCFW_CODE
|
Preserving the Status bar items whitespace between the codicons
This is for the issue #145852
Here I bounded the choices to when the a tag has the class disabled.
Then with the nth-last-of-type(-n+4) I chose the last 4 icons because if I used 3 then the spacing would be only included between the two icons.
Finally, I removed the margin application of the last span element.
This is a test I did, and it is inserting space between the icons.
https://user-images.githubusercontent.com/98463228/185746638-9cf1a64a-b265-4f22-966f-b741366131e8.mp4
This works unless the order of icons has other types of specifications.
Here I bounded the choices to when the a tag has the class disabled.
Is there a reason why it's bound to disabled items? This wouldn't fix the issue as items with actions would appear the same
ok, I thought this would fix the issue.
@misolori I can replace the :is() with direct descendent but I prefer this syntax.
Would you tell me exactly where these 3 icons are placed?
The First section of the status bar, middle or last part? and how they are grouped and differed from the others.
As issue #145852 is originally extracted from #145722 I reproduced the placement of the icons following this picture:
This is for when there are two icons:
https://user-images.githubusercontent.com/98463228/187842681-004479b1-bb1e-48c9-9ea1-7cf7a7c2d748.mp4
This is for when there are 3 icons:
https://user-images.githubusercontent.com/98463228/187842926-884ec71a-1d52-4bfb-9130-23f1547537f7.mp4
and four icons:
https://user-images.githubusercontent.com/98463228/187843079-51827c08-2bf4-4621-b5fb-74e2161b4ef2.mp4
@misolori
Here, I populated the status bar considering if there were different icons in a row for the possible extensions. These are 2, 3, and 4 icons with text in between them.
These icons are not in the right sections of the status bar although I populated the right section of the status bar as much as I thought it would be ideal.
I appreciate feedback.
https://user-images.githubusercontent.com/98463228/188054050-869e3a41-749a-4026-bb4e-02e70ad446c0.mp4
Thanks for the updates! I think one downside of this is that the other items that text and icons (like the problems panel) look misaligned. I think this is a tricky issue and am not sure if we'll find a solution that works here.
Thanks for the updates! I think one downside of this is that the other items that text and icons (like the problems panel) look misaligned. I think this is a tricky issue and am not sure if we'll find a solution that works here.
If I set aside the misalignment of the text and icons, would this one work?
I can look for other ways to align text and icons beside this.
In every trial I wanted to do this issue, a new aspect unfolded. This is not ideal as it adds to the workload rather than providing a straightforward solution.
First, to address the misalignment of icons and the text, I could come up with these solutions:
The icons do not have the same font-size as the text and this is because of the last line of code in the statubar.css:
https://github.com/microsoft/vscode/blob/d17726fe4beae5abe87ba9e9429cd298be8b53c4/src/vs/workbench/browser/parts/statusbar/media/statusbarpart.css#L146
Here the codicon icons wrapped in the spans are selected. This excludes the text included in the a tags. A suggestion for this is to change the selection in this way:
.monaco-workbench .part.statusbar > .items-container > .statusbar-item a {
text-align: center;
font-size: 16px;
}
And to reposition the icons using transform: translate(2px, 1px); property only for the icons. This can re-position them and align them with the text:
.monaco-workbench .part.statusbar > .items-container > .statusbar-item span.codicon {
transform: translate(2px, 1px);
color: inherit;
}
To also align the text also, I suggest changing the line height on this selection:
https://github.com/microsoft/vscode/blob/d17726fe4beae5abe87ba9e9429cd298be8b53c4/src/vs/workbench/browser/parts/statusbar/media/statusbarpart.css#L10
Note: Changing its value pushes only items with the text in the div with the class=”status-bar-item” left up or down not the others.
Something else to add:
some icons maybe were wrong picks for the test as they may need custom sizing to align properly.
As for the space between codicons icons, there is a specific scenario. The right section of the status bar has three separate sections. These sections are:
First-visible-item : We have a div with the class=”first-visible-item” this one is a solid subsection without the flexibility of many icon changes.
Middle (meaning without a specific class) : We have a div with the class=”left” For this is for git branches, loading, and other synchronizations.
Last-visible-item: We have a div with the class=”last-visible-item” right after this one, here I can see there are warning icons and error icons.
My assumption was that all the extension icons would be added to the middle section of the left part of the status bar. However, the real examples I see on my desktop vscode app prove the other cases:
In this subsection, as an example, the gitlens extension adds two icons one before the warning icons and error icons in the last-visible-item div and one after.
the other point is that, Also, a div has the display:inline-block property . An example of this is line 43:
https://github.com/microsoft/vscode/blob/d17726fe4beae5abe87ba9e9429cd298be8b53c4/src/vs/workbench/browser/parts/statusbar/media/statusbarpart.css#L43
If changing the display property won't affect the items because of the nature of the statue bar, maybe display:inline-block can be used for other property as well. With the application of emerging left and right can be easily done.
As a suggestion, maybe Instead of sectioning the main status bar div we can change the selectors like this:
span:first-of-type()
span:nth-of-type()
span:last-of-type()
and exclude any item that is not needed with :not() selector.
@misolori @daviddossett
At this point, I can implement these or the issue needs to be renewed because I can tell that it is important and with its current state it is not neat. There are ideas pointing in all directions and chances are low for it to get resolved.
Can I open an issue with all the relevant context for this and renew the previous one? This seems to be necessary.
Or make changes to this PR. Please let me know what to do next?
|
GITHUB_ARCHIVE
|
I don’t know if it’s related, but I think only the master compiles on macOS 10.14 Mojave
Thanks for your comment. Could you please explain what the master is and how I can get it? I haven’t found any reference on the installation guide.
The master is the master branch of the Git repository. See Direct Git repository access how to get the latest version of ROOT. I.e:
$ git clone http://github.com/root-project/root.git
And then follow the instructions on this page
Wait a second…
I was absolutely able to build even 6.12/06 on Mojave myself. But I did have to do a few things…
xcode-select --installwas a good move, you of course have to do that after the whole Mojave update;
- I found while I was trying to build Python-2.7.13, that I also had to install system header files under
/usr/includeto make the Python build successful. This is sort of a long story, you can read a bit more about it under for instance:
Bottom line is, you should execute the following to get the headers installed:
- Finally, I had to call
before starting my build. More of this is explained on for instance:
So… With the headers installed, and
MACOSX_DEPLOYMENT_TARGET set, I was able to build ROOT without problems.
thank you both for your comments. Last night I successfully tried to install ROOT master. The installation went smooth except for the warning:
warning: include path for stdlibc++ headers not found; pass '-std=libc++' on the command line to use the libc++ standard library instead [-Wstdlibcxx-not-found]. I have tested the installation on some macros and it seems to work just fine.
open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg is what we’d like to avoid. We prefer to change ROOT, rather than forcing users to change their (default) configuration. And by default, MacOS will not have /usr/include anymore, and ROOT’s master can cope with this just fine
Could you post the context of where this happened? E.g. the output of building ROOT, if this happened during ROOT’s build?
Yes, that happened during ROOT’s build. I am sorry but I haven’t saved the whole output.
Depending on what other software one wants to build, unfortunately this seems unavoidable to install… As I started, ROOT was not the first thing that started failing for me. It was Python. And by the time I made the Python build work, ROOT built itself without any further issues.
Note at the same time that once I produced my binaries, I’m pretty sure that I would not need the headers from
/usr/include to use these binaries. At least I’ve been advising the ATLAS users to try using the centrally provided analysis release binaries without installing the
/usr/include headers, and so far nobody complained. So if you’re worried about people having to install those files, maybe Mojave binaries should be put on https://root.cern.ch/downloading-root for the last few production versions.
Yes but let’s get this straight: ROOT doesn’t require any system modifications, like installing stuff in
/usr/include. ROOT master and soon 6.14 build out-of-the-box on MacOS 10.14 - as they should.
@Axel any ETA for the fixe(s) in 6.14 ?
We will likely release new 6.14 binaries next week.
I have successfully installed root via git with the standard instructions. root works fine. But PyRoot doesn’t see to be able to load a lib??
Python 2.7.10 (default, Aug 17 2018, 17:41:52)
[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.0.42)] on darwin
Type “help”, “copyright”, “credits” or “license” for more information.
Traceback (most recent call last):
File “”, line 1, in
File “/Users/schaffer/work/root/lib/ROOT.py”, line 24, in
File “/Users/schaffer/work/root/lib/cppyy.py”, line 61, in
import libPyROOT as _backend
ImportError: dlopen(/Users/schaffer/work/root/lib/libPyROOT.so, 2): Library not loaded: @rpath/libTree.so
Referenced from: /Users/schaffer/work/root/lib/libPyROOT.so
Reason: image not found
Just for info, the current ROOT 6.14.04 builds just fine as part of the MacPorts distribution. However, a user has reported to me an issue with compiling things via ACLiC. See
Solution as discussed there was to install /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg.
Of course, as I pointed out, compiling scripts is really not needed that much any longer, with ROOT6 cling. But nevertheless it is another consequence of the /usr changes in mac OS 10.14.
@Axel hi, when will 6.14/06 be out finally ?
v6.14/06 will be out this week!
@fabio1 do you have objections again me moving this thread into the regular ROOT section, instead of hiding it? It has too much good content!
@Axel Not at all!
It seems 6.14.06 is out but it still has the build issue with freetype, when /usr/include headers are missing. Did the fixes for this in master not get ported to this branch after all ?
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.
|
OPCFW_CODE
|
‘AI is for everyone’ – My time at Microsoft’s Global Nonprofit Leaders Summit
A few weeks ago, Cloud Direct had the opportunity to attend Microsoft’s inaugural Global Nonprofit Leaders Summit in Seattle. As one of only two UK sponsors, we were honoured to be invited to exhibit and become a part of the conversation about AI as a force for good. Josh Hutchison, Senior Cloud Sales Executive at Cloud Direct, recounts his experiences at one of the industry’s biggest events.
Accompanied by our Sales Director Jon Shaw, our time at Microsoft’s Global Nonprofit Leaders Summit was a hugely insightful experience. Across the two days, we listened to talks and spoke to nonprofit IT leaders about Microsoft’s AI offering, and how it can change the landscape of nonprofits in achieving their mission and making a real difference.
There were some great points made by Microsoft, and further insight added by guest speakers throughout the event, so I’ve put together a rundown of the key highlights and happenings from Seattle.
This was also a great opportunity for us to take our mascot Eric on his first holiday! He doesn’t get to leave our Bath office very much, so we thought we’d spoil him. However, he didn’t seem too impressed when airport security told him he had to stay in my bag for the flight!
Bright and early, we took to the Grand Ballroom alongside 1400 attendees, representing 1192 nonprofit organisations from around the globe. First to take the stage was Kate Behncken, Global Head of Microsoft Philanthropies, who opened the summit.
“We’re in a transformative leap into an era where AI is accessible to everyone.” It was great to hear from Kate as she described Microsoft’s stance on Generative AI – that it’s not just for tech-savvy people, it’s for everyone.
We also got a glimpse into how AI is currently being used by nonprofits. It was insightful to hear about how the British Heart Foundation is using new AI-powered tools to predict and prevent heart attacks with double the accuracy of standard diagnostic tests. Using AI-powered tools can save 3000 clinician days a year by reducing their analysis time from minutes to seconds.
But it was particularly interesting to hear about how it’s not just the large nonprofits that can benefit, but organisations of all sizes can:
- Boost efficiency and cost savings
- Make smarter decisions
- Personalise outreach and revolutionise fundraising
- Improve security
“Ultimately our goal is to democratise AI, enable more people and organisations to benefit from the powerful technology, and help create a more equitable and inclusive society”.
Satya Nadella, Chairman and CEO of Microsoft
After Kate, next up was a thought-provoking talk from Brad Smith, Vice Chair and President of Microsoft, who took a deep dive into the responsible use AI and reaffirmed Microsoft’s stance that AI is not only for everyone, but should benefit everyone.
He goes on to explain that AI services need to be built and used with six key principles in mind, something which struck a chord with most nonprofit leaders in the room as a big talking point:
- Reliability and safety
- Privacy and security
“We serve the world’s nonprofits so that you can serve the world, let’s serve it together.” A great closing statement from Brad that explains why Microsoft Tech for Social Impact do what they do.
All eyes were then trained on Jared Spataro, Corporate Vice President of Modern Work and Business Applications at Microsoft, as he gave a live demonstration of Microsoft Copilot, piquing a lot of interest in the room.
Getting laughs from the audience, Jared went on and demonstrated Copilot’s integration with Outlook and was using the ‘Sound like me’ feature to analyse his previously sent emails to create AI-generated responses to new emails in his inbox. Using previous data, within seconds he was also able to prompt Copilot to create a 10-page deployment guide for IT leaders in nonprofits for Microsoft 365 Copilot – these demos were just scratching the surface of how Copilot can be a nonprofit’s greatest companion.
During the networking breaks, Jon and I had the opportunity to speak to many nonprofit leaders who came to visit our Cloud Direct stand.
It was great to open up the conversation around the challenges they were facing in fostering innovation within their nonprofit. Many nonprofit IT leaders want to start using AI but just aren’t sure how to get started.
Many of our discussions led back to two things:
- The importance of a well-executed cloud migration
- Modernising their data and creating a clear data strategy
We had great conversations around the positive impact of a Microsoft Azure migration and the role it plays as the first step on the roadmap to using AI. Having your data and assets hosted in Microsoft Azure and enforcing a clear data strategy is key to ensuring that AI can be leveraged responsibly and make a big impact on your nonprofit processes, allowing your team and volunteers can spend more time on the mission.
If you’re interested in how your nonprofit organisation can unlock the value of artificial intelligence, then get in touch.
Customer Engagement: How to drive sustained profitable grow …
A practical guide to achieving real-world business excellence Introduction Sustainable profitable growth – well above market...Blog
Can the cloud close the gender pay gap?
Males in high-tech companies earn, on average, 25 percent more than women. Three-quarters of mid-level professional...Blog
|
OPCFW_CODE
|
Given that the immigration officer Commonly You should not routinely give you the whole 183 times, you most likely should request it; that has a return flight six months later on no problem.
A possibility to receive a inside of-stage or inside-array improve that results in ahead movement during the relevant variety of costs of primary fork out (such as a boost granted quickly upon movement to your non-GS pay method from another pay back procedure-e.g., to account for the worth of accrued within-grade boosts underneath the former spend program or to supply a marketing-equal improve), exactly where "forward movement in the relevant array" implies any type of increase in the employee's rate of basic spend in addition to an increase that may be immediately and solely associated with (1) a typical structural rise in the employee's essential pay timetable or charge assortment (including the adjustment of a variety minimum amount or most) or (two) the employee's placement beneath a completely new standard pay out program within the same shell out technique, when these types of placement results in a nondiscretionary standard pay boost to account for occupational pay back differences.
I am looking to enter the region and I had been pondering if their is any constraints for people who have been convicted of felonies from the USA?
No, you can't enter Peru on your US Eco-friendly Card. Your nationality could be the determining variable if You need to make an application for a visa right before coming to Peru or not. So which passport does one keep?
Is a company visa needed in case of attending a medical convention. And does the occasion organizer need a company visa?
What is a quality phase raise (QSI) and So how exactly does it impact a within-quality raise? Check out a lot more A QSI is often a more quickly-than-ordinary WGI utilized to reward staff members at any GS grade amount who Display screen superior quality efficiency. For being suitable for the QSI, staff members should:
We are in excess of the border to Ecuador various occasions to resume vacationer visas (United states of america passport) and haven't been bothered in any way. Ask for one hundred eighty days and you'll nearly always get it. Now we have buddies which have performed hop over to here the identical many times.
Also Observe that the whole process of applying for that Carné de extranjeria can be a bit complicated plus the process just isn't normally exactly the same for everyone so Ensure that you check with about the details in the treatment within the immigration office.
However, several universities know it might be an extended and sophisticated course of action to submit an application for a scholar visa and for that reason some endorse you to enter Peru over a vacationer visa. If you are only heading to review 1 or 2 semesters you should be fine with simply a tourist visa. At immigration in the airport you need to request 183 days, the most stay for the vacationer visa. Several pupils vacation when They may be in this article as they might increase their continue to be by leaving and re-coming into the country. For instance, in case you visit Ecuador for 5 days and then enter Peru once more you will get a brand new visa with a new quantity of times.
Be aware! The second time you enter Peru it would be tougher to get 183 times - from time to time they will give you 30 or sixty days Even when you talk to For additional. You can also find cases wherever the immigration personnel has requested for cash beneath the desk to provide you with a lot more times. Sometimes it’s takes some luck for getting the quantity of days you wish. But try and be company and demand on the amount of times that you just need. If you been given a significant quantity of days on your vacationer visa, a method is to attend till your tourist visa has under thirty days left in advance of leaving and re-coming into the country, so you don’t risk coming back having a new vacationer visa that may shorten your stay.
I'm genuinely sorry, but for that reason however I can not answer your issue. Greatest get in connection with the consulate exactly where you wish to apply for your Peru visa and ask them what paperwork you and your friend have to provide for the appliance.
For getting it in to the region and so are permitted to utilize it you may need a Unique allow within the MTC - Ministerio de Transportes y Comunicaciones. On top of that If you're intending to movie or photograph with the helicopter A further permit is critical.
In specific emergency or mission crucial scenarios, an agency might utilize an once-a-year high quality fork out cap in lieu of a biweekly high quality pay out cap, issue to your situations offered in law and regulation.
I do not know why the Peruvian consulate advised you You'll need a organization visa. For my part which is Definitely unnecessary if you just wish to buy a car. In the event you are not organizing on undertaking some other organization whilst staying in Peru, ideal get a normal vacationer visa when getting into the region (request 183 days so you don't need to be worried about an expired visa).
|
OPCFW_CODE
|
This release has a few fixes and new features courtesy RealityRipple
- Add "Select file extension on rename" preference
- Rename "Remove completed downloads immediately" preference
- Convert Binary Hash to Hex String
Referring page context menu item:vchuubacabra wrote: ↑2023-06-30, 17:07Hi. Let's see if this thread of forum is different or business as usual.
Which of feature requests listed below you could implement in this extension?
- Add "Go to referring page" in right-click menu of the download.
- Menu to group downloads by the values of Estimated Time column and/or Progress column to have all failed/skipped/complete/paused etc. downloads close to each other at the moment of sorting
(having Running/Waiting downloads as a constant group would be problematic as they may change their status without user's interference)
- Option to automatically remove Skipped for the reason of identical names downloads from the list like it is done for complete downloads.
- Add new downloads on the top of the list instead of on the bottom.
Yes, may happen it's link added for download for the second time in the same download session or with list saved from previous session, but I mean situation when file queued for download already exist on the storage but is not listed in downloads, and option for already existing files is set to Ask. Then GtA creates downloads list entry, asks what to do, gets answer "Skip" and sets status of download entry to Skipped. If this is exactly what Remove Duplicates is for, then OK (but needs clarifying imo). And if this looks like insignificant improvement, then just an option "Go to referring page" is good too.RealityRipple wrote: ↑2023-06-30, 18:09Remove identical names:
There's a context menu sub-item that's literally called "Remove Duplicates" under the "Remove Downloads" menu. There's also a setting under general: "When a file of the same name already exists:" with the options "Rename", "Overwrite", "Skip", or "Ask".
moonbat wrote: ↑2023-07-17, 00:15The original DownThemAll was one of the most popular classic Firefox extensions; I decided to fork it for Pale Moon in case the original XUL one went away (the current Firefox version is a web extension and lacks features that this one does). It is mostly feature complete as a download manager, RealityRipple has made patches for small fixes now and then.
Ok, let me ask directly: will you add to extension some improvements from RealityRipple's fork which I asked for few posts above?Random Techsupport Article wrote: Was this answer helpful? : [Yes] [No]
and then read the changelog in this very thread before your first comments.
What the fuck are you talking about?
About the story you told of how great original Firefox extension was and why you decided to fork it and blah blah blah.
Then you need to understand how open source projects and forks work; or do you expect new features in Firefox to automatically be added to Pale Moon as well, based on both codebases having had a common ancestor? RealityRipple has often contributed patches to integrate here and I've always added them.
|
OPCFW_CODE
|
Invoking Methods of another component in Observer Pattern
Question:
Are there any techniques for communicating with methods of other components but still keep a "pure" Observer pattern ?
If yes are they indicated/regularly-used or am I just overcomplicating stuff?
A practical example
Suppose I have an architecture with the following characteristics:
System is a Word Processor app
Many components, each with it's own purpose, e.g KeypressDetector, Printer, DocumentRenderer.
Components modify/observe a single model, the Document
The components thus "communicate" via this model observation/modification. The components don't know about each other in any other way.
If I'm not mistaken this is what the Observer Pattern is all about.
So here's one case where there's a problem:
For component Printer to do it's job it needs some output from component PagePreparator.
This is just one edge-case where component PagePreparator can't continuously update the model that is shared between the components because it involves heavy-computation. This involves just a single aspect of PagePreparator, e.g PagePreparator.prepare() // takes a long time to run.
So that's one case where It looks to me that I need to "break away" from this pure Observer setup and just expose the method of PagePreparator to Printer directly and call it explicitly from Printer. This will couple them explicitly together though.
Are there any techniques for keeping this Observer pattern exclusively and still be able to perform this kind of invocations?
Possible solutions:
When Printer needs output it sets a flag on the model Document, e.g updatePages which is picked up by PagePreparator's observers. PagePreparator then sets the output on the model Document which is picked up by Printer's observer which proceeds to do it's job.
Simply "breaking" the pattern and just expose PagePreparator's method, pagePreparator.prepare() to Printer which can then explicitly call it.
Dispatching an event from Printer which is picked up by PagePreparator which proceeds to set the value on the model Document, for Printer to pick it up.
For clarification, could you explain the print process? Presumably some method like Printer.print is called. Is that method called directly as the entry point? Is it called by Document? Is it hooked into an event which is raised by Document? And when you talk about "observing" currently, how does that work? Is it polling for changes to a property on some timer, or is it there some push mechanism like an event?
Oh, and another question, is the PagePreparator's output used by anything other than the Printer?
@BenAaronson Printer.print() is called using an event. On your 2nd question, I don't really know what happens under the hood but it's just a regular data-binding mechanism (define a model and some observers for that model). As for your 2nd comment, yes PagePreparator output can be used for other components as well.
Thanks for the explanation. And what's the name of the specific language feature/framework/library/whatever you're using for the data-binding mechanism?
Thanks for the interest - Here: Polymer data-binding explained
The third option seems cleanest to me. I frequently find myself making use of the Event Aggregator Pattern with applications containing isolated/disconnected modules communicating with each other via messages (Another good page Here).
The pattern itself has become common at least in .NET with a lot of examples of 'generic' Event Aggregators around which should be able to translate into most languages.
The benefit of using messages to communicate between modules is that those modules remain decoupled; the cost however, is adding a layer of indirection.
|
STACK_EXCHANGE
|
Thomas Stubbs2014 - present – PhD candidate in Molecular Biology, Babraham Institute, Cambridge, UK (under the supervision of Professor Wolf Reik).
2013-2014 – Master’s degree in Research (MRes), University of Cambridge, UK.
2009-2013 – Master’s degree in Molecular and Cellular Biochemistry (MBiochem), University of Oxford, UK (First-class honours).
My research interest is centered on the most prolific of all diseases: ageing. Why do organisms age? What does ageing mean at a cellular and molecular level? In particular, I am interested in the epigenetic changes that characterise ageing and how these changes can be used not only to study but also to halt or reverse aspects of ageing. My research is focused on understanding the dynamics of DNA methylation with age and how we can use these dynamics to predict biological age in the mouse. This readout of biological age in the mouse will aid the speed with which ageing can be studied in mammalian systems.
Vasiliauskaitė L, Berrens RV, Ivanova I
Nature structural & molecular biology
25 1545-9985:394-404 (2018)
Eckersley-Maslin MA, Alda-Catalinas C, Reik W
Nature reviews. Molecular cell biology
Guo G, von Meyenn F, Rostovskaya M
Development (Cambridge, England)
145 1477-9129: (2018)
Comparison of whole-genome bisulfite sequencing library preparation strategies identifies sources of biases affecting DNA methylation data.
Olova N, Krueger F, Andrews S
19 1474-760X:33 (2018)
scNMT-seq enables joint profiling of chromatin accessibility DNA methylation and transcription in single cells.
Clark SJ, Argelaguet R, Kapourani CA
9 2041-1723:781 (2018)
Regev A, Teichmann SA, Lander ES
6 2050-084X: (2017)
Coupling shRNA screens with single-cell RNA-seq identifies a dual role for mTOR in reprogramming-induced senescence.
Aarts M, Georgilis A, Beniazza M
Genes & development
An endosiRNA-Based Repression Mechanism Counteracts Transposon Activation during Global DNA Demethylation in Embryonic Stem Cells.
Berrens RV, Andrews S, Spensberger D
Cell stem cell
21 1875-9777:694-703.e7 (2017)
Martin-Herranz DE, Ribeiro AJM, Krueger F
Nucleic acids research
Yang J, Ryan DJ, Wang W
Kelsey G, Stegle O, Reik W
Science (New York, N.Y.)
358 1095-9203:69-75 (2017)
Single-Cell Landscape of Transcriptional Heterogeneity and Cell Fate Decisions during Mouse Early Gastrulation.
Mohammed H, Hernando-Herraez I, Savino A
20 2211-1247:1215-1228 (2017)
|
OPCFW_CODE
|
What is a complement?
A complement in grammatical structures is a term, utterance, or stipulation required to complete the meaning of the original affirmation. Complements are frequently used as assertions. A complement seems to be something that, when combined with another one, creates complete sense.
Complement is derived from the Latin complementum, which means "anything that typically provides or performs." Both connotations are retained via complement. It's also an adjective. If you and your companion complemented each other, you'd create a great team. Something that accomplishes or contributes a few to a compliment.
Complements for the predicative, topic, and objective
The words subject complement and object complement are used in several non-theoretical linguistic to designate syntactic statements (including such syntactic verbs and auxiliary verbs) that assist in giving quality to a subject or an object:
Examples: Here, we have some examples to elaborate on the above terms.
The grammar texts utilize the following phraseology:
Although, despite this use of nomenclature, several modern analyses of syntactic can be seen in the phrases as strong as elements of the sentence premise. It implies that they do not complement the subject or object but instead attributes that are postulated objects.
The Cambridge Dictionary of the English Language refers to both usages as "syntactic complements," and the definitional difference is shifted to the utterance:
Complement as an argument
The object parameter of a linguistic verb is a complement in several current linguistic (for example, those based on the X-bar paradigm). In reality, this usage of the word is presently dominant in linguistics. One important component of this complement comprehension is that the subject is not always a counterpart of the preposition:
While it is less usual, similar logic can sometimes be extended to topic assertions:
The subject and object parameters are assumed to be opposites in those situations. As a result, the phrases complement and argument have meanings. It's worth noticing that this approach considers a subject complement to be something completely distinct from conventional grammar's subject counterparts, which are syntactic phrases, as described above.
In the broader sense, each time a given statement is required to make another statement "accomplish," it can be defined as a supplement of that statement:
Several compliments, when interpreted broadly, cannot be interpreted as justification. In contrast to the complementary idea, the argument idea is linked to the premise idea.
An appendage in semantics is an alternative, or architecturally, a component of a statement, phrase, or term that, when deleted, does not affect the rest of the paragraph besides eliminating some secondary data. A more descriptive term of the alternative demonstrates its purpose as a modulating type, sentence, or quotation that is dependent on some other type, sentence, or quotation, as well as being a component of clause structure with adjectival structure. An alternative is not a syntactic affirmation or an assertion, and an argument is not an alternative.
The assertion differentiation is core in most hypotheses of linguistic structure. The terminology used to denominate parameters and adjuncts can vary on the hypotheses at hand. Several interdependence idioms, for example, employ the word circonstant (rather than adjunct) and pursue Tesnière (1959).
|
OPCFW_CODE
|
This Saturday at BSides DC, I am presenting on the current state of PowerShell security in a talk called, “PowerShell Security: Defending the Enterprise from the Latest Attack Platform.”
I cover some of the information I’ve posted here before:
- PowerShell Version 5 Security Enhancements
- PowerShell Security: PowerShell Attack Tools, Mitigation, & Detection
- Detecting Offensive PowerShell Attack Tools
On Saturday, October 21st, 2016, I am speaking at BSides DC in Track 2 (“Grand Central”) at 1:30pm.
Here’s the talk description from the BSides DC website:
PowerShell is a boon to administrators, providing command consistency and the ability to quickly gather system data and set configuration settings. However, what can be used to help, can also be used for less altruistic activities. Attackers have recently learned that leveraging PowerShell provides simple bypass methods for most defenses and a platform for initial compromise, recon, exploitation, privilege escalation, data exfiltration, and persistence.
With the industry shift to an “Assume Breach” mentality, it’s important to understand the impact of defending against an attacker on the internal network since this is a major shift from the traditional defensive paradigm. In its default configuration, there’s minimal PowerShell logging and nothing to slow an attacker’s activities. Many organizations seek to block the PowerShell executable to stop attacks. However, blocking PowerShell.exe does not stop PowerShell execution and can provide a false sense of security. Simply put, don’t block PowerShell, embrace it. The key is monitoring PowerShell usage to enable detection of recon and attack activity. As attack tools like PowerSploit (Invoke-Mimikatz) and the recently released PowerShell Empire become more prevalent (and more commonly used), it’s more important than ever to understand the full capabilities of PowerShell as an attack platform as well as how to effectively detect and mitigate a variety of PowerShell attack methods.
The presentation walks the audience through the evolution of PowerShell as an attack platform and shows why a new approach to PowerShell attack defense is required. PowerShell recon & attack techniques are shown as well as methods of detection & mitigation. Also covered are the latest methods to bypass and subvert PowerShell security measures including PowerShell v5 logging, constrained language mode, and Windows 10’s AMSI anti-malware for scanning PowerShell code in memory.The final part of the presentation explains why PowerShell version 5 should be every organization’s new baseline version of PowerShell due to new and enhanced defensive capability.
This talk is recommended for anyone tasked with defending and testing the defenses for an organization as well as system administrators/engineers.
This presentation outlines that capability of the current PowerShell version and how current attacks are leveraging PowerShell, including how current PowerShell security (& logging) can be bypassed!
The talk wraps up with a summary of the defensive recommendations provided throughout the presentation.
For the curious, here’s an outline of the talk*:
- Quick PowerShell Overview
- PowerShell v5
- AMSI (Windows 10)
- Just Enough Administration (JEA)
- PowerShell as an Attack Platform
- Real World Attack Code Analysis
- Bypassing PowerShell Security & Mitigation
- Executing PowerShell code without calling PowerShell.exe
- Playing with PowerShell versions
- PowerShell obfuscation (Invoke-Obfuscation)
- Defense Summary
- Detecting “evil” code
* subject to updates prior to talk.
I think the talk is being recorded, but follow @BSidesDC on Twitter for more information.
|
OPCFW_CODE
|
Prop-drilling is a typical pattern in most React applications, especially when using with Redux. Personally how I wrote React apps in the past was to make a huge component to present a screen (or a page) in my app, and only those components are connected to the redux store. If I want to separate a page to sections (for better managing my code), I make a sub-component and then pass the prop from the parent component that I stated above. That pattern worked fine until I want to update a state which subscribes to a fast-changing event (form input, slider, scroll events). My app starts to slow down dramatically. After googling and wandering around Github issues in the react repo as well as popular blogs and forums, I came up with a way to drastically improve the performance of my react app.
First, I realized that in a React application, the most expensive thing which can happen is to update the DOM tree (or native tree in the case of React Native). That means I need to reduce the DOM update as much as possible, and I used the React dev tool to debug my applications. I turned on “highlighting updates” in the setting panel and started to scroll the slider which I linked to a reducer, and I was shocked that whenever the slider changes, the whole page was highlighted (which means React re-rendered them). Imagine the operation has to be done in every frame :sigh:
For primitive-valued props such as number, boolean, null, undefined, string: a change is a change in value. (for example 1 to 2, true to false, “hello” to “hello world”)
For reference-valued props such as functions, objects (this includes React components and Arrays): a change is a change in reference, no matter if the content of the object is similar. (I would recommend reading the book “You don’t know JS” to understand this more deeply)
From the specs above it means that having reference-valued as props are the primary cause that triggers most of the unnecessary re-renders. It is even worse when I was following the pattern prop-drilling. My mistake was that I provided nearly huge slices of my redux state as props to all screen components, and then distribute some of them to each smaller section of that screen. It makes the whole screen updates every time a small part of the redux state updates, including the unrelated part of a component (since the parent component updates the whole tree). After a refractor/optimization session, I was able to speed up my app’s performance with the following principles (which I intended to keep in my future react apps):
Use primitive values for my props as much as possible.
It is better to connect to redux store to any component that needs some part of the redux state to achieve the first principle, instead of passing them from the parent component. That will make the parent component re-render less frequently, and the prop changes will be necessary for every small component.
When I have to use Array, or function, or object as props, memoized selectors are a good source of help to avoid changing the reference of those props as much as possible. With redux, we can achieve this with the reselect library.
Avoid prop spreading as much as possible to prevent a human-made mistake. When doing prop spreading, we can easily pass an unrelated prop to a component, which makes the component itself rerender when that unrelated prop changes.
|
OPCFW_CODE
|
���E� R�C ��.� �T�� ��
<br /> 4��
<br /> _ _,__
<br /> � THEAUGUSTINECO.�-��0�3-'3-3rj . .� . . . � � . . . . . . . .
<br /> --
<br /> s-.:: . .. .. �._ ....: .. . . � . . . ....._.. . . _ _.. ,.. _ ..
<br /> following deseribed real estate situated in Grand Island,Ha11 County, State of Nebre�ska, to-wit;
<br /> The i�est Twenty-'Cwo Feet (22 � ) of Lot Seven (7) Block Thirty-one (31), in the �riginal Town, no�v
<br /> Gity of Grand Island,Nebx�aska; _
<br /> TO HAVE AP1D TO HOLD the above de5cribed premises together with all the Tenements, Hereditaments
<br /> and Appurtenances �hereunto belonging unto the saSd Adolph Boehm and Carl Rnickrehm: and to their
<br /> �
<br /> � heira, representativea and assi�ns; the said �d.L.Gollaher and Homer Bowen executec:this deed ae
<br /> the duly appointed, qualified and acting successore to Ray Bottorf, who resigned as such Truatee
<br /> eubsequent to May 2I, 1932 and C.I.Ca�es, who died subsequent to that da.te.
<br /> IN TESTII�ONY �tHEREOF we ha.ve hereunto set our hands this ll�h day oY �arch, 1937•
<br /> � ' IN Pi�ESENCE OF: A.VY.Boecking
<br /> � M.L.Gollaher
<br /> A.J.Luebs. 7.00 I.R.Stamps ) Homer BoWen
<br /> ,� Cancelied ) Fred (3riffin
<br /> �f.A.Nieholas
<br /> As Trusteea f'or Gertain of the
<br /> Depositors of the Peoplea State
<br /> Bank, Grand Ialand,Nebraeka.
<br /> 3TATE OF NEBRASKA ) On this llth day oP March, 1937, before me the undersigned, A.J.Luebs,
<br /> )se:
<br /> � COUNTY OF HALL ) a Notary Publie, duly eommissioned and qualiPied �or and residing in
<br /> �
<br /> � sr�3d county, pereonally came A.'�.Boeeking, 11�L.Golla.her,Homer Bowen, Fred Griffin and �P.A.Nicholas
<br /> Iithe Trustees Por Certain oP the Depoaitora oP the Peoples State Bank, arand Island,Nebraska, to
<br /> !
<br /> • , me known to be the identical peraons whose namee are afPixed to the fnregoing inetrument as �ranto s
<br /> and acknowledged the same to be' their voluntary act and deed.
<br /> il�itness my hand �,nd Notarial Seal the day and year last above v�ritten.
<br /> a.J.Luebs
<br /> (SEAL) Notary Publie
<br /> , � My commission expires the 6th day of July, 1939-
<br /> FiZed .for record this 23 day oP April, 1.937, at 11:15 o'clock A.�.
<br /> I �.� �
<br /> � � � . - ftegister of ee
<br /> a-o-o-o-�-o-o-n-o-o-c�_o-o-o-o-o-o-o-o..o-o-�-o-o-o-o-o-a-o-o-o-o-o-o-o-o-o_o_o-o-o-o-o-o-o_o-o_o_o
<br /> REFEREE �� DEED �
<br /> ---�._..._._
<br /> K�i�W ALL D�EN BY THESE PRESENTS: ' �` �-
<br /> �Phereas, ln an aetivn of partition pendin� in the Distriot Court of Hall County, Nebraska, wherel
<br /> Edward Brabander and Olive Brabander, his wife, were plaintiffs and Mary Sto�.le and William Sto�le
<br /> her husband, Gerhard Brabander and Mari.e Brabander, his wlfe, Henry Brabander and Haneina Brabande ,
<br /> lhis �ife, Fred Brabander, a single man, Anna. Tilley and John '�illey her husband, Joseph Brabander
<br /> � and Meta Brabander, his �riYe, Er�ma Haa.ck and John Haack, her husband, EdWard Hostler and Theresa
<br /> Hostler, his wife, Frank Edward Hostler and Jose�hine Hostler, his wife, John Henrq Hostler, a
<br /> � aingle �.n, Anna Louisa t�agner and Theodore Wagner, her husband, B�ary Hannah 0'Brien and John
<br /> �'Brien, her husband, Sadie Roaella Bey�ersdorf and �ax BeyerSdorP, Jr. , her husband, Ida Bertha
<br /> Hostler, a single person, Edward William Hoatler, a single peraon, Dorothq Etta Hostler, a singl�
<br /> person, Edward Hostler, Guardian, John Johnson and Bdae Johnson his wife, were defendants f'or the
<br /> partition of the premises hereinafter described, the undersigned re�'eree a,ppointed by said Court
<br /> to make partition oP said real estate, made report in writing, du].y signed, setting Porth that
<br /> partition oP said landa eould not be made wlthout great pre�udice to the orrners thereoY, which
<br /> � report �as duly examined by said Court, and said Court being satisfied there�ith confirmed the
<br /> same, and thereupon m�.de an order and caused the same to be entered, direeting me as sa3.d reYeree
<br /> to sell said premises at publie sale as upon execution at the Pront and main door of the Court
<br /> Houee in the City of Grand Ialand, Hall County, �Tebraeka, the terma oY said eale to be tifteen
<br /> ,
<br /> per aent eash and the bal�,nce upon confirm�tion, notice oP said sale to be given by publication
<br /> in some 1e�a1 newepaper printed and publiahed in Hall County, Nebraska, in the time and manner
<br /> provided by law.
<br /> And in Qursuance oP sald order I c�,used a notice to be published in the Grand Island Daily Indepe
|
OPCFW_CODE
|
Digital Localization Services that Improve the Translation Workflow
If your product includes computer software (or uses a software interface), you can rely on ABC for complete, turnkey localization of the foreign language versions.
Our translators, working together with software engineers, can adapt the text from your source code into a variety of other languages and ensure the software retains its look and feel. We combine our understanding of the language and its cultural differences with our computer software engineering expertise to provide a revised product that is both true to the original and easily understood by users in the target language.
ABC Translations provides a full range of software localization services. We can localize your products and software applications into any language to help you successfully enter the international market. Our process ensures that after localization all software functions as expected while being linguistically and culturally relevant to your new end users.
Software localization is divided into several separate and distinct workflows, user interface localization, help files, and technical documentation, etc. are carried out separately. However, these separate processes must also be coordinated and utilized to make the final product to maximize cost savings, faster time to market and maintain exactly the same.
ABC Translations uses industry best practices and cutting-edge technology to assist in the various steps of the software localization process, and automate when feasible. From the initial identification and extraction of localized elements to rebuild and final test multilingual version, we can guarantee the quality of each step of the software localization process.
Allow any audience worldwide to fully understand your software and applications
Software Localization includes translation of text found in software applications and adjustment of functional elements, so that the product can be used by consumers around the world. ABC Translations provides excellent localization methods and the most advanced technology to ensure that your software is ready to enter the global market.Through the application of localization, you can properly handle the cultural nuances, thereby enhancing confidence and creating a better overall customer experience. With ABC Translations localization services applications, you can seize the opportunity to enter the international market.
Our software localization methods
In software localization, best practice is defined by the meaning of the original content and proper user interface conversion from one language to another language, while maintaining its integrity. The overarching principles of software localization are meant to be applied to a variety of different types of software products (desktop software, web application, and mobile applications), but in order to ensure the highest level of service, ABC Translations uses special methods for each type of application. Similarly, each of our software translation projects also uses a unique method to meet specific goals. Here are some steps that localization projects typically contain:
- Localization is designed to take preparatory measures to ensure that your source language software functionality has been fully prepared, that it can be localized and published in the international market.
- Translation software text strings are translated by professional translators, and all the text within the software is localized.
- Complete output using localized text files to generate a high integrity new language version.
- Rigorous testing of localized software to ensure language quality, and application appearance and function.
Industry-leading software specialists and language experts work closely together to provide software translation services. We have established partnerships to meet even the most highly specific needs, and you have the ability to take full advantage of all of our services include consulting, project management, text translation, and development and testing, including ongoing support.
Software Localization Process
UI-oriented design software localization
To get the best results through the localization process, this section describes the best practices for designing localized user interface (UI). As a language service provider, ABC Translations translates user interface strings; but the developer is necessary to eliminate anomalies in the source language expression, to avoid awkward conversion of the target language. When preparing for the localization, with the UI make sure there is enough space, and uses the Unicode standard to ensure character support. After taking these measures in account, we can begin the next step in the localization process.
ABC Translations will help you through consulting and ongoing support for development. By analyzing your software architecture or code, we will provide feedback and propose changes to ensure that the localization process proceeds as smoothly as possible.
Translation of software text strings
When manipulating UI strings in localization, we use the most vetted method for consistency, that is, we use translation memory (TM), which is a tool that you can use to store text from both the source and target languages. Then our translators reference the TM in order to ensure consistency throughout the translation of your software. The objective here is to ensure consistency with the terminology and abide by string length limitations.Typically, during the software localization process we need to translate the following elements: UI, documentation, online help, and user-generated content from forums, as well as support content.
Traditional ABC translation software will aid professionals responsible for translation, design and testing as they perform their part of the localization process. ABC Translations includes support for web application, mobile application, and desktop software localization. To learn more about ABC traditional methods, please contact us for a free consultation.
Localized files are used to generate compiled software
Software is generated by compiling the source code into executable code so that the finished product operates normally. The development process uses the localized string file after the translation is complete. After localized applications are built, it is necessary to make some adjustments for certain changes. Common UI elements are adjusted to accommodate text compression or expansion since string length is rarely equal between two languages. For examples, Spanish is typically 20% longer than English. These changes may mean that the length of the string in the user interface will be affected, and the buttons and other UI elements will need to be resized. These steps can be handled in-house, or outsourced to ABC Translations.
Software and application testing
During the localization process, after translating the user interface elements and examining target language accuracy, the software’s accuracy and functionality are tested. ABC Translations uses native speakers of the language to test software, thus ensuring accurate localization of language and proper functionality that remain faithful to the software developer’s original intent.
We will perform tests to determine if there are problems with the visual layout. In addition, we will perform functional testing to confirm whether the product performs and is displayed properly in the target language on the systems used in the target population. Operating systems and browsers can vary widely. Your localized software with function and appear as intended (even for those who use old version of Internet Explorer!).
Local Languages and their Requirements
When language changes require interface adjustments, we can incorporate those into the finished product. We handle all aspects of localization, including:
- Translation of resource and help files
- Modifications to accommodate expansion of text, (such as resizing of dialogue boxes or on-screen buttons)
- Assignment of hot keys
- Editing of bitmaps
- Date, time, and currency formats
- Field length restrictions
- Coding and font issues
- Modifications for double-byte characters in Japanese, Chinese and Korean
- Foreign-language and functionality testing
- Platforms supported: Windows, DOS, UNIX and Macintosh
|
OPCFW_CODE
|
Morning All (South African Time!),
I have a server upgrade coming up for a 65 user customer currently on D3 Linux 10.2.
The plan is to go bleeding edge to 10.4 so have started the research and busy playing on a dev server.
First off I see FSI and a totally new installation procedure. Last time I used FSI was 20 yrs back when a customer had D3/Windows on WindowsNT - still have nightmares about WindowsNT :)
So the point of my post is to ask for any hints and gotchas from those who have already done a similar migration, as I see there are a number of difference so it's not going to be a filesave & restore kind of migration.
We use MVSP and we shell out to Linux occasionally to kick off Bash & PHP scripts. We also use the spooler to kick off Python scripts to create PDFs and/or email reports.
Any pointers will be gratefully accepted!
Have a great weekend!
A couple of points:
It would have been nice if Basic programs were stored as separate files in a FSI directory so you can use modern tooling (git, vscode etc). Since they're also using Apache Thrift as the RPC mechanism, would be nice if that was opened to the extent of allowing external procedures to use the D3 DBMS.
You can set up BASIC files to be directories (and so have the programs directly accessible from the OS level), but there may be some drawbacks (I don't know, for example, if such configurations will include the BASIC code in D3 backups). I've tried this out, and what I do is:
CREATE-FILE DICT MY.BP 3
Then I edit the DICT MY.BP MY.BP record and replace it with this:
001: Q002:003: unix:/path/to/MY.BP
(Where "/path/to/MY.BP" is a valid directory in Linux with appropriate access permissions). With this the source code is available from the OS level and can be accessed by any tools that can get to the directory!
As I said, my disclaimer is that there may be unforeseen side effects of this, so use with caution!
Hmm..... that's very clever. Seems to work. The only side effect (I can see ATM) is that the unix directory is not saved on a file-save tape.
:CREATE-FILE DICT TMP 3
file 'TMP' created; base = 4582432, modulo = 3:ED DICT TMP TMPtop.P001 Qeoi 001.I001+.001+unix:/tmp001+eoi 001.ftop.001 Q.002 ..r/.//002 .fi 'TMP' filed.:ED TMP TESTnew itemtop.I001+PRINT 'HELLO'002+top.FI 'TEST' filed.:BASIC TMP TESTTEST. Successful compile! 1 frame(s) used.:RUN TMP TESTHELLO
< Connect time= 7 Mins.; CPU= 0 Units; LPTR pages= 0 >[pick@vmc /]$ vi /tmp/TEST
[pick@vmc /]$ ap:BASIC TMP TESTTEST. Successful compile! 1 frame(s) used.:RUN TMP TESTHELLOWORLD
Thanks Martin !
...and if you're running hot-backup the unix dir won't be replicated.
77 4th AvenueWaltham, MA 02451 USA
Rocket Support Community
All Support Offerings
About Rocket Software
Training and Services
Forum Terms and Conditions
Contact Forum Moderator
|
OPCFW_CODE
|
A fully functional, powerful and cross-platform integrated 3D creation software suite
Add it to your Download Basket!
Add it to your Watch List!
What's new in Blender 2.74:
- Cycles got a few optimizations, a new "Pointiness" attribute and objects can now use the texture maps from other objects, the Viewport is now able to display Ambient Occlusion and Depth of Field, the Outliner was improved in quite some ways, it is now possible to edit normals, the Hair tools developed for the Gooseberry Project are now available (including improved hair dynamics, child hair modifiers and various interaction tools), texture painting can now be done using Cavity Masks, Freestyle memory consumption was greatly improved, the Grease Pencil rewrite from Blender 2.73 was continued to make more editing tools available, improvements for animation interaction and many more features which are worth checking out!
- LICENSE TYPE:
- GPL v3
- OUR RATING:
- DEVELOPED BY:
- Blender Fundation
- USER RATING:
- ROOT \ Multimedia \ Graphics
Blender is an open source application for creating professional 2D/3D graphics, models and animation on Linux, Windows and Mac OS X operating systems. The truth is that Blender is targeted at advanced users and enthusiasts who know what they are doing. However, new users can find comprehensive documentation and tutorials on the official website, which will help them to get started with Blender.
Features at a glanceIt provides a fully customizable Interface, undo support on all levels, anti-aliased fonts with international translation support, a built-in text editor for annotations, support for editing Python scripts, fully scriptable UI with custom themes, and a consistent interface across all platforms. The application can be used for physics and particles, shading, real-time 3D and game creation, imaging and compositing, raytrace rendering, rendering, animation, solids modeling, and character modeling.
Supports a wide range of image file types, animations and moviesBlender allows for photorealistic rendering, fast modelling, fast rigging, sculpting, fast UV unwrapping, amazing simulations, camera and object tracking, and much more. The application provides import and export support for various image file types, such as JPEG, JPEG2000, PNG, TIFF, TAGA, DPX, Radiance HDR, SGI Iris, Cineon, and OpenEXR. In addition, it can import or export animations and movies to MPEG, AVI and Quicktime file formats, and 3D models for 3D Studio (3DS), X3D, STL, Autodesk (DXF), DirectX (x), Motion Capture (BVH), Lightwave (LWO), SVG, Filmbox (FBX), Stanford PLY, Wavefront (OBJ), VRML, VRML97, and COLLADA (DAE).
A professional 3D modeller used in Hollywood moviesAll in all, the application is really amazing for professional 3D graphic editors, especially because it is open source, freely distributable, and supported on all major platforms. Thanks to a vast online community of developers and enthusiasts, its flexible interface, and the wide range of extensions available, Blender is our number one choice for 2D/3D modelling and animation. The application was/is heavily used in many Hollywood blockbuster movies to create high quality and interactive 3D content. It is a very complex and professional application.
Blender was reviewed by Marius Nestor, last updated on March 31st, 2015
|
OPCFW_CODE
|
Over the past few months we have been working closely with Microsoft to bring MongoLab to the Windows Azure platform, and today we are proud to announce our official Preview launch of MongoLab in Azure's East US and West US datacenters.
Windows Azure is the fourth cloud provider we have added support for, and we find their offering to be very exciting for the industry. Azure is both IaaS (like EC2), and a PaaS (like Heroku or AppFog). It offers both Windows and Linux VMs (bet you did not expect that!) and supports multiple programming environments including Node.js, PHP, Java, and Python in addition to its .NET platform. It even has awesome command-line support as well as a web-based console. We have high hopes for Azure becoming a great platform for developers.
So what does this integration with Azure mean?
With this integration, you can now use MongoLab on Windows Azure in two ways:
(1) Via MongoLab. Now when you create a database on http://mongolab.com, Windows Azure will be offered as a deployment option. Just select Windows Azure as your cloud provider, select which Azure datacenter you want, and you are good to go. While previously unannounced, we have been supporting our free sandbox database in this way for several months with great success. Now it is official!
(2) Via the Windows Azure Store. As of today we now offer seamless integration with the Windows Azure PaaS platform via an add-on service that you can provision directly from the Windows Azure management console. Just click on the MongoLab icon and follow the instructions from there.
With either method, you get the full MongoLab experience on the Windows Azure platform with a nice low-latency connection between your Azure-based application and your MongoDB database.
Is it ready for production?
Almost, but not quite yet. Right now the Azure Linux VMs we use to run our MongoDB instances are in "Preview" (i.e. Beta), and we expect them to go GA (Generally Available) in the coming months. Shortly after the Linux VMs go GA we will come out of Beta and go GA with our offering. So for now we are only offering our free sandbox plans on Azure with our Dedicated plans available to a select set of Beta customers. We plan to make the rest of our plans generally available as soon as possible.
How do I get started?
It's easy! If you don't yet have a MongoLab account, you can create one here. If you already have an account, just use our UI to make a new free database on Window's Azure, and if you already have a Window's Azure account, you can start here and have a database running in seconds.
We are also working on some great content to help you start writing apps using Azure and MongoLab. Our first installment is an example using C#, with more language examples to follow:
We look forward to hearing your feedback as you play around with MongoLab on Azure. Stay tuned... this is just the beginning.
P.S. The press release is available here: BusinessWire
Update 2012-10-31 09:45 : added BusinessWire press release
|
OPCFW_CODE
|
Today I would like to write few basic concepts of Kafka Streams which any non technical person can read and understand, I am from a not technical background and i used to find difficulty to understand these technical terms.
So lets get started!!!
In todays world almost everyone uses Twitter, the popular social network service.
Using Twitter application, user can interact and post messages in their account, see messages of other users and comment on them. This popular social network allows users to interact and post the message and these messages are known as Tweets.
Do you know how many tweets daily twitter handles??? its on an average 500 million tweets per day.
Isn't it huge, Of course it is, how is these data managed and processed ? The real-time nature of Twitter forced the business to adopt Kafka.
Concept 1: What is Kafka?
Apache Kafka is an open-source distributed streaming platform that enables data to be transferred at high throughput with low latency. Kafka is often used in real-time streaming data architectures to provide real-time analytics. Kafka is known for its quality of being fast, scalable, durable, and fault tolerant messaging system.
Concept 2: What is Kafka Stream?
Kafka Stream is represented as continues flow of data, where data is getting updated frequently. A stream in Kafka, is an ordered, replay able, and fault-tolerant sequence of immutable data records.
Take the example of twitter, tweets are constantly flowing as data stream.
The stream of data needs to be fed from some node right, that leads to our next concept “Stream Processing”
Concept 3: What is Stream Processing?
A stream processor is a node and it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it. Also, may subsequently produce one or more output records to its downstream processors.
Here comes the benefit of Kafka Streaming , it allows parallel processing of data. Stream processing allows few applications to exploit a limited form of parallel processing more simply and easily.
Concept 4: What is Processing Topology?
A processor topology is a graph of stream processors (nodes) that are connected by streams (edges). Similar to other stream processing systems, the topology in Kafka Streams defines from where to read the data , how the data will be processed and where to send the data ahead in the pipeline. It has mainly three types of nodes — Source, Processor and Sink, connected by edges called Streams.
Source Processor : It is the start point (input stream) of the topology , there is no other processor before this , the task of source processor is to consume records from one or multiple Kafka topics and forwarding them to its down-stream processors.
Sink Processor : This stream processor does not have down-stream processors. It sends any received records from its up-stream processors to a specified Kafka topics.
Concept 5: What is Stream Processing Applications?
A stream processing application is any program which uses everything we discussed above, it uses Kafka Streams library and defines the logic via one or more processor topologies.
|
OPCFW_CODE
|
Andyroo wrote: ↑
Sat Aug 03, 2019 11:23 am
Best way to boot Buster is put the /boot on the SD card and the rest of the OS on the HDD / SSD.
Best way is to carve out a primary FAT partition (/dev/sda1) of 256MB at the front of your HDD/SSD and format that as FAT. Create an extended partition (/dev/sda2) for the rest of the space. In the extended partition create two partitions of at least 16GB (/dev/sda5 & /dev/sda6) create a third partition with the rest of the space.
Install Raspbian on an 8GB SDCard and boot it.
Unmount any partitions on the SDD/HDD that get automounted.
Copy /dev/mmcblk0p1 to /dev/sda1 (with dd or rsync, it's your choice)
Copy /dev/mmcblk0p2 to /dev/sda5 (use dd it'll fit because the target is 16GB)
Run gparted to expand the filesystem on /dev/sda5 to 16GB.
Copy /dev/sda5 to /dev/sda6 (we're going to flip-flop when Bullseye comes out, we going to leave this as a rescue system until then).
Mount /dev/sda1 on /mnt
Update /mnt/cmdline.txt with the PARTUUID for /dev/sda5
Mount /dev/sda5 on /mnt
Update /mnt/etc/fstab with the PARTUUID for /dev/sda1 and /dev/sda5 for /boot and / respectively.
Mount /dev/sda6 on /mnt
Update /mnt/etc/fstab with the PARTUUID for /dev/sda1 and /dev/sda6
Eject the SDCard
Run gparted to format /dev/sda7 as ext4. Choose a mount point for it (/srv or /home), it's going to be the space where you store all of those movies etc. Update /etc/fstab with the PARTUUID for /dev/sda7 and the mount point.
It may sound like hard work but that's how my two 3Bs with hard drives are set-up.
If anything goes wrong you only need to update cmdline.txt on /dev/sda1 to boot the other good 16GB partition. You can trash either of those partitions and none of your valuable stuff gets lost.
Every time that raspberrypi-kernel and raspberrypi-bootloader get an update copy /dev/sda5 to /dev/sda6 (or vice versa - copy from the one that's active at the time). Don't forget this step or your rescue system will go stale and won't boot.
|
OPCFW_CODE
|
dma: Scatter gather test
Adds a scatter gather test against memory to memory transfers.
Based on #43984 so the updated designware driver can be built on both test cases in #43883
DesignWare DMA passes both tests, NXP's eDMA does not seem to pass the scatter gather test and its not clear as to why, maybe @hakehuang wouldn't mind taking a look?
Marked DNM until #43984 is merged
@hakehuang the changes to fsl_edma in MCUXpresso SDK 2.11 (#43826) and the updates to the EDMA driver in #42750 enable this test to pass on an RT1060.
@dleach02 @DerekSnell FYI- This looks like the same issues we've been seeing with the EDMA driver.
@hakehuang the changes to fsl_edma in MCUXpresso SDK 2.11 (#43826) and the updates to the EDMA driver in #42750 enable this test to pass on an RT1060.
@dleach02 @DerekSnell FYI- These look like the same issues we've been seeing with the EDMA driver, although I didn't investigate too closely (just tried updating the HAL and our EDMA driver)
Thanks for taking a look at this and confirming that there are fixes required for this to work on eDMA. I've pushed a change to this PR to disable it for all platforms initially, in the updated designware DMA PR I've enabled it for the appropriate intel adsp platforms. When the NXP changes are merged we can enable it there as well.
@hakehuang Did you pull in the updates to the EDMA driver in #42750? I saw errors like that when I only had updated the HAL.
Thanks @danieldegrasse , there is one key update at
#ifdef CONFIG_HAS_MCUX_CACHE
#ifdef CONFIG_DMA_MCUX_USE_DTCM_FOR_DMA_DESCRIPTORS
#if DT_NODE_HAS_STATUS(DT_CHOSEN(zephyr_dtcm), okay)
#define EDMA_TCDPOOL_CACHE_ATTR __dtcm_noinit_section
#else /* DT_NODE_HAS_STATUS(DT_CHOSEN(zephyr_dtcm), okay) */
#error Selected DTCM for MCUX DMA descriptors but no DTCM section.
#endif /* DT_NODE_HAS_STATUS(DT_CHOSEN(zephyr_dtcm), okay) */
#elif defined(CONFIG_NOCACHE_MEMORY)
#define EDMA_TCDPOOL_CACHE_ATTR __nocache
#else
#error tcdpool could not be located in cacheable memory, a requirement for proper EDMA operation.
#endif /* CONFIG_DMA_MCUX_USE_DTCM_FOR_DMA_DESCRIPTORS */
#else /* CONFIG_HAS_MCUX_CACHE */
#define EDMA_TCDPOOL_CACHE_ATTR
#endif /* CONFIG_HAS_MCUX_CACHE */
static __aligned(32) EDMA_TCDPOOL_CACHE_ATTR edma_tcd_t
tcdpool[DT_INST_PROP(0, dma_channels)][CONFIG_DMA_TCD_QUEUE_SIZE];
which fixes my issues herre
I don't think it matters a whole lot, especially considering you just got 2 approvals on this and updating the PR would dismiss them and put you back in review purgatory, but the copyrights for the new files should be 2022 instead of 2021. You can always fix the copyright comments in another little PR after this one gets merged - it'd get merged super quick since it wouldn't even touch code.
|
GITHUB_ARCHIVE
|
Elementor Templates breaking page layout, is there any way to fix this issue except clicking on "Regenerate Files & Data" option in Elementor tools
I created several landing pages for my website by customizing a couple of Elementor templates. If I edit any of the templates, I frequently have broken page issues. Now, I can solve it by using the "Regenerate Files & Data" option of Elementor tools. But doing this on every modification is a hassle.
I tried deactivating all the plugins we are using on this site to check if any of the plugins were causing this issue, but that's not the case.
While investigating this issue, I noticed that only the pages using Elementor templates are affected.
Also, the server has enough memory which rules out any caching issues.
Is there any way to fix the broken page layout issue?
add_action('save_post', 'clear_elementor_cache');
function clear_elementor_cache() {
// Check if Elementor is loaded and the hook has fired
if ( did_action( 'elementor/loaded' ) ) {
// Automatically purge and regenerate the Elementor CSS cache
\Elementor\Plugin::instance()->files_manager->clear_cache();
}
}
Explanation
This function performs an action in response to the "save_post" hook, which is fired whenever a post or page is saved or updated.
Specifically, the function is designed to automatically clear and regenerate the Elementor CSS cache whenever a post or page is saved.
Here's a step-by-step breakdown of what the code is doing:
The add_action() function is used to add a new action to the "save_post" hook. The first argument to add_action() is the name of the hook to which the action should be added, and the second argument is the name of the function that should be called when the hook is fired.
The clear_elementor_cache() function is defined as the function to be called when the "save_post" hook is fired. This function checks if Elementor is loaded and the hook has fired by using the did_action() function, which checks if an action has been called for a specific hook.
If Elementor is loaded and the hook has fired, the clear_cache() method of the Elementor files manager is called. This method clears and regenerates the CSS cache for Elementor.
Finally, the clear_elementor_cache() function is automatically called whenever a post or page is saved, due to the action being added to the "save_post" hook.
Overall, this function helps to ensure that any changes made to Elementor widgets or page layout are automatically reflected on the front end of the website, without the need for manual cache clearing or regeneration.
|
STACK_EXCHANGE
|
Distributed Database Middleware (DDM)
Distributed Database Middleware (DDM) solves distributed expansion issues of databases. DDM breaks the capacity and performance bottleneck of traditional databases and achieves high concurrent access to mass data.
Join the open beta test to claim a limited free trial. Learn More
Automatic horizontally partitioned, smooth capacity expansion, and one-click linear capacity expansion.
Data access in petabyte-scale volume; ten-fold database connections compared with a single-node database; million-scale-level high concurrency.
High available clusters with second-level automatic fault recovery; strong consistent transactions and eventually consistent transactions; multiple security policies to protect privacy of the database and users.
Compatible with MySQL protocols; read/write splitting requires no code modification; DDM console simplifies the OM.
Distributed Database Middleware (DDM)
DDM interconnects with multiple RDSs in the backend, achieving transparent distributed access of databases.
- Applications access DDM through standard MySQL protocol. DDM automatically sends queries to shards in RDS according to routing rules, and then returns aggregated results to applications.
- DDM automatically identifies SQL types, and allocates write operation to RDS active instances and read operation to RDS standby instances based on the read/write splitting policy.
- DDM console enables you to manage and maintain DDM instances, logical databases, logical tables, and sharding rules.
- Smooth capacity expansion function enables you to easily add new RDS instances when the storage nodes do not have enough storage capacity. Data redistribution is automatically achieved.
- Multiple security policies such as VPC, subnet, and security group are provided to ensure data security and user privacy. Only applications, DDMs, and RDSs in the same VPC can access each other.
In industries such as e-commerce, financial services, O2O, retailing, and social media, the response of core trading systems become slow due to mass data generated by large user bases and frequent marketing activities, impairing the development of services.
DDM provides data linear horizontal expansion capability to improve database performance and access efficiency in a real-time manner, helping you copy with high-concurrent real-time trading scenarios.
In the Internet of Things (IoT) scenarios such as industrial monitoring, remote control, smart cities, smart homes, and the Internet of Vehicles, there are numerous sensors and monitoring devices, high sampling frequency, and a large amount of data. Data generated in these scenarios may exceed the storage capacity of single-node databases, causing capacity bottlenecks.
DDM provides horizontal capability expansion to store mass data in an affordable manner.
There may be a hundred million to trillion records of figures, files, and videos on the Internet and social media applications. Indices of these data are usually stored in databases, which must enable real-time operations such as adding, modification, reading, and deleting.
With high performance and distributed expansion capability,DDM enables efficient index-based search.
Cost-efficient Database Solutions
Industries such as government organizations, large-scale enterprises, and banks rely on expensive commercial solutions based on mid-range computers and high-end storage.
Compared with traditional commercial database solutions, DDM, deployed on clusters of common servers, provides cost-efficient database solutions with same or even higher performance.
After a distributed database is created, data horizontal splitting can be achieved by specifying the split keys and split rules.
Smooth Capacity Expansion
RDS instances can be added to extend the storage capacity and automatically redistribute data.
One-phase transaction submission model is supported.
Read/Write splitting is available to applications. You can configure Read/Write splitting using the DDM console, sparing efforts on changing code.
Global unique sequence in ascending order is supported in distributed scenarios, meeting requirements on scenarios where primary keys or unique keys are required or other scenarios.
Help you to detect resource and performance bottlenecks by monitoring read and write ratio and SQL statements that are executed slowly.
DDM console is provided for you to manage and maintain DDM instances, logical databases, logical tables, and sharding rules.
|
OPCFW_CODE
|
define("layro", [], function() {
return {
/**
* Main entry point for layro.
*
* Essentially, this just is a wrapper for insertShimsForAllRoots(). You should
* use this function if all you want to do is get all of your rows aligned as
* outlined in the README. If you need to do something more complex, then use
* the other methods of the API.
*
* @returns The number of shims inserted into the document.
*/
insertShims: function() {
var layroObj = this;
return layroObj.insertShimsForAllRoots();
},
/**
* Insert shims for all roots in the document.
*
* @returns The number of shims inserted into the document.
*/
insertShimsForAllRoots: function() {
var layroObj = this;
var numShimsInserted = 0;
$('[data-align=root]').each(function() {
numShimsInserted += layroObj.insertShimsForRoot($(this).attr('id'));
});
return numShimsInserted;
},
/**
* Get the number of rows within a DOM element having specified root id.
*
* @param aRootId The DOM id of the element for which to find the number of
* rows.
*
* @returns The number of rows in the root. This is the maximum rows in any of the
* root's parents.
*/
getNumberOfRowsForRoot: function(aRootId) {
var totalRows = 0;
$('#' + aRootId).find('[data-align="parent"]').each(function() {
var totalRowsInThisCol = $(this).find('[data-row]').length;
if (totalRowsInThisCol > totalRows) {
totalRows = totalRowsInThisCol;
}
});
return totalRows;
},
/**
* Returns the elements in a given row, given a root for which to start at. If a root is not
* given, then the first root encountered is assumed.
*
* @param aRow A numeric row to select.
* @param aRootID The ID of the root box into which to start descending (optional).
*/
getElementsInRow: function(aRow, aRootID) {
console.log("layro: Getting elements from row " + aRow + " in root with id: " + aRootID);
if (!aRootID) {
aRoot = $('[data-align="root"]').first();
aRootID = aRoot.attr('id');
} else {
aRoot = $('#' + aRootID);
}
console.log("layro: Root has ID: " + aRoot.attr('id'));
var elements = new Array();
aRoot.find('[data-row="' + aRow + '"]').each(function() {
elements.push($(this));
});
return elements;
},
/**
* Insert shims so that rows are aligned for a given root.
*
* @param aRootID The DOM id for the root to align.
*
* @returns The number of shims inserted into the DOM.
*/
insertShimsForRoot: function(aRootID) {
var layroObj = this;
if (!$('#' + aRootID)[0]) {
throw "It appears the element with ID: " + aRootID + " doesn't exist in the DOM";
}
var numRows = layroObj.getNumberOfRowsForRoot(aRootID);
var numShimsInserted = 0;
for (var nextRow = 1; nextRow <= numRows; nextRow++) {
numShimsInserted += layroObj.insertShimsForRow(nextRow, aRootID);
}
return numShimsInserted;
},
/**
* Insert a number of shims necessary to make a given row align across all parents.
*
* @param aRow The number of row which we want to align.
* @param aRootID The ID of the root element on which we want to align.
*
* @returns The number of shims inserted.
*/
insertShimsForRow: function(aRow, aRootID) {
var layroObj = this;
var root = $('#' + aRootID);
var numShimsInserted = 0;
console.log("Asked to insert shims for row with root id: " + aRootID);
// get the maximum height for the row
var maxRowHeight = layroObj.getMaxHeightForRow(aRow, aRootID);
// for each of the children in the row
root.find('[data-align="parent"]').each(function() {
var shouldInsertShim = false;
// if the child does not have a row with the given id
// OR the child has a row, but it's less than the max height
// for the row
var rowSelector = '[data-row="' + aRow + '"]';
var rowObj = $(this).children(rowSelector);
var shimHeight = 0;
if (!(rowObj[0])) {
rowObj = $(this).children('[data-row="' + (aRow-1) + '"]');
shimHeight = maxRowHeight;
shouldInsertShim = true;
} else if (rowObj.outerHeight() < maxRowHeight) {
shimHeight = maxRowHeight - rowObj.outerHeight();
shouldInsertShim = true;
}
if (shouldInsertShim) {
// then insert a shim
console.log("rowObj outerheight is " + rowObj.outerHeight());
console.log("Shim height: " + shimHeight);
if (shimHeight > 0) {
console.log("Inserting a shim with height: " + shimHeight);
numShimsInserted = numShimsInserted + 1;
layroObj.insertShimAfter(rowObj, shimHeight);
}
}
});
return numShimsInserted;
},
/**
* Insert a shim after a given element within a layro parent.
*
* This method simply inserts a shim after a given element in the DOM. Ideally,
* the element after which we want to insert has, as its parent, a layro parent.
* This condition isn't enforced, however. You probably don't want to use this
* method directly. Use insertShimsForRoot() or insertShimsForRow() instead.
*
* @param aAfterElement The element after which we want a shim.
* @param aHeightOfShim A numeric value indicating the number of CSS pixels the height
* of the shim should be.
*/
insertShimAfter: function(aRowObject, aHeightOfShim) {
console.log("Inserting layro shim after object with id: " + aRowObject.attr('id'));
console.log(aRowObject.attr('id'));
$('<div id="lShim" class="layroShim" style="height: ' + aHeightOfShim +'px"></div>').insertAfter(aRowObject);
},
doesRowNeedShim: function(aRow, aRootID) {
var layroObj = this;
var root = $("#" + aRootID);
// First, get the number of elements in the row.
var numElementsInRow = layroObj.getElementsInRow(aRow, aRootID).length;
// Compare this to the number of children of aRoot.
var rootChildren = $("#" + aRootID + " > *").length;
// If different, then yes.
if (rootChildren > numElementsInRow) {
return true;
}
// Otherwise, no.
return false;
},
/**
* Retrieve the maximum height of any element in a given row.
*
* @param aRow A numeric value indicating the row for which the max height should be calculated.
* @param aRootID An ID for the DOM element serving as the row's grandparent.
*
* @return The maximum height of any element in the specified row and root.
*
* @throws An exception if the maximum height for a given row is less than 0. This usually
* indicates that the row wasn't found, or that no element with specified ID exists.
*/
getMaxHeightForRow: function(aRow, aRootID) {
var aRoot = $('#' + aRootID);
console.log("Getting max height for row " + aRow + " with root id: " + aRoot.attr('id'));
console.log("Passed in root id: " + aRootID);
var maxHeightForRow = -1;
aRoot.find('[data-row="' + aRow + '"]').each(function() {
if (maxHeightForRow < $(this).outerHeight()) {
maxHeightForRow = $(this).outerHeight();
}
});
if (maxHeightForRow < 0) {
throw "Problem retrieving height for row: " + aRow;
}
return maxHeightForRow;
}
}
});
|
STACK_EDU
|
Use passive listeners to improve scrolling performance
Google's Lighthouse tests for CWV ran on my website include an issue stating that touch and wheel event listeners are not marked as passive.
How and where to fix this?
I have tried to check my file manager in cms but I can't find it. I am using a theme.
Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
I've added context to the question, to state where it is occurring. If the edit can be approved and the question re-opened, I can answer it.
@GeoffAtkins I've reopened this so you can add an answer - thanks.
Passive Listeners aren't a setting within your website or on your server, they're a feature within JavaScript that improves performance for touch scrolling or mouse-wheel scrolling for your website.
Lighthouse (Google's tool for letting us measure Core Web Vitals) highlights not using them as a potential performance issue, but it doesn't actually impact any of the key metrics (FCP, LCP, TBT, CLS). Lack of passive listeners do make performance slightly worse for some users when using your page, and if possible it is best practice to implement them. There are videos on YouTube that show the performance difference on a mobile phone, with a side-by-side demonstration.
You mention using a theme. Which I presume means you're using a CMS of some form. This means it's almost certain that your theme or one of the plugins for your CMS is using an older JavaScript library (most probably a dated jQuery version). Finding which plugin it is that's using it and making sure it's fully up to date might fix it. Of course, it's best practice to make sure your CMS, as well as any themes and plugins, are kept up to date as much as possible.
Alternatively, it is possible it might be a third party system connected to your website. My agency has stopped using a third party Cookie Consent system for exactly this reason. It included an old version of jQuery which causes this issue to be highlighted when testing our own websites. It doesn't actually impact performance on the website (because no part of the cookie consent system involves scrolling) but we have moved away from it nonetheless. Whether resolving this issue had any positive effects to our SEO performance is questionable and any benefit was very, very marginal.
Lighthouse (whether used within Google Chrome's developer tools, or through https://pagespeed.web.dev/) will tell you which script is causing the problem, and some investigation using developer tools will let you know what on your website is calling the affected script. This will at least point you in the right direction for resolving this issue.
This question on Stack Overflow shows how to implement passive listeners, and if you are a developer yourself (or you employ one), you could manually implement this fix. However, for a CMS this might be difficult and your hard work might be overwritten by a forthcoming update.
|
STACK_EXCHANGE
|
Java technology. A programming language
Java is one of the world's most popular programming languages. How was it created and how does it stand out?
See the movie: "How can you help your toddler find himself in a new environment?"
1. Java technology. What is this?
Java is a programming language and platform for computer software development. The history of the creation of Java dates back to 1991. The main originator of the technology was James Gosling, a Canadian programmer and computer scientist.
Working for Sun Microsystems, Gosling and his team set out to create a new, simple language that could be run on multiple platforms with different parameters.
Initially, the language was named Oak, but later Gosling changed his mind and named the technology "Java" in honor of his favorite type of coffee. Hence, a cup of coffee appeared in the technology logo.
The first public implementation of Java 1.0 was released in 1996. Today Java is one of the most popular programming languages in the world. It is used by over 9 million developers.
How to teach a child to use digital technology wisely? Take advantage of the free help of the best specialists
Logical thinking, creativity, group work, Internet behavior, creating graphics and films --...read the article
2. How does Java work?
The basic concepts of Java are taken from Smalltalk and C ++. With Smalltalk, the concept of memory management and the idea of a virtual machine were used. However, a significant part of the syntax and keywords come from C ++. However, the authors of Java abandoned complicated and inconvenient elements such as pointer operations, multiple inheritance, and operator overloading.
Java is an object-oriented language. This means that the basic concept in it are objects. One of the most important features of Java is strong typing, which in turn means that expressions are fixed data type and cannot be changed while the program is running. This prevents unexpected errors, making the code itself easier to maintain.
The authors of Java have defined several key concepts of their language. The most important of them are:
- independence from architecture;
- networking and distributed programming support;
- reliability and safety.
Miracle? No, technology
With indulgence, but also with an admixture of admiration, we look at how our children deal with tablets, changing ...read the article
3. Java virtual machine. What is this?
The basis of any program written in Java is a virtual machine - Java Virtual Machine (JVM). Many experts believe that this very tool is behind the great success of the entire platform. Without installing a virtual machine, we will not be able to run any application written in Java.
In practice, the Java Virtual Machine is a set of applications written for traditional devices and operating systems. They create an environment capable of executing Java bytecode. Applications also provide garbage collection, exception handling, and a standard library.
Over time, compilers began to be added to Java applications, which are used to automatically translate codes written in other programming languages. This allows Java to compile many existing languages into virtual machine bytecode.
It is estimated that 10 billion devices worldwide, in 1 billion computers, are equipped with JVMs.
How to make a child interested in new technologies?
Psychologists and educators outdo each other in calculating the negative effects of an excess of television or games ...read the article
4. Java programming language. Application
There have been many attempts to replace Java with newer technologies, but none of them has been as useful. Today, Java is used by the biggest Internet giants.
Java is mainly used to develop software and Internet applications such as Gmail.
Java is also used by a large number of websites such as Amazon and Netflix. Technology is also widely used in games and computer programs.
5. How to learn to program Java?
Due to the high popularity of technology, Java programmers are among the best profitable and most in demand on the job market. Therefore, the Java language is taught not only at universities or specialized courses. In many countries, Java Basics is a compulsory part of the school's computer science curriculum.
Back to school: what technological accessories will be useful at school?
Beginning of the year ...read the article
Is learning Java difficult? Many IT specialists emphasize that one of the greatest advantages of Java is its simplicity. The language was designed from the beginning to be easy to use. Java is considered a programming language of moderate difficulty.
There are many solutions that make it easier for novice programmers to work. Due to the fact that Java is object oriented, it is possible to create modular programs. Once written, code can be reused.
Java also includes many built-in libraries with ready-made solutions. Very often, creating Java code requires writing many more lines than in the case of other programming languages. However, this is not a downside, but an advantage, as it makes it easier to pinpoint what is causing the problem.
|
OPCFW_CODE
|
If you are considering a career in technology, you may wonder which job is easier to get into Oracle DBA or Oracle Developer? The decision of what job to take can seem daunting, but it doesn’t have to be.
In this article, we will consider the essential skills and qualifications for both Oracle DBA and Oracle Developer roles and the job security and salary they offer to help you determine which job is easier to get into. We will also explore what to expect with Oracle careers, the demand for professionals skilled in these roles, and job availability.
Overview of Oracle DBA:
An Oracle Database Administrator (DBA) is responsible for planning, evaluating, monitoring, and troubleshooting an organization’s Oracle databases. The responsibilities of Oracle DBA are the following.
Installation, configuration, and upgrading of Oracle databases, as well as developing and troubleshooting database components such as tables, indexes, and procedures.
An Oracle DBA must also perform routine maintenance tasks and ensure the database is secure and running optimally. Additionally, they are responsible for monitoring database performance and making necessary changes to ensure optimal performance. Lastly, Oracle DBAs are expected to develop and implement a backup and recovery plan to restore the database in an emergency.
Overview of Oracle Developer:
The Oracle developer is a software engineer specializing in developing Oracle database applications. As an Oracle developer, you would be responsible for designing, developing, testing, and maintaining applications and services based on Oracle database technology.
The Oracle developer must be well-versed in Oracle’s software development tools and familiar with Java and other programming languages. You would also need expertise in SQL and PL/SQL, with a good understanding of database structures and database design security. Additionally, Oracle developers work closely with other teams to ensure the applications and services are properly integrated into the existing Oracle database architecture.
Skills and Qualifications:
Additionally, it’s important to demonstrate a good knowledge of the Oracle 12g and 18c database platforms. To be hired, applicants must usually demonstrate a working experience of at least one year in one of the Oracle databases. Finally, having networking and system implementation experience can also be beneficial.
Salary and Job Security:
Regarding salary and job security, both Oracle DBA and Oracle Developer roles offer strong potential. As database professionals, Oracle DBAs are consistently among the industry’s highest-paid professionals. Job security is traditionally high, as companies rely heavily on database administrators to keep their data safe and secure. Oracle Developers can expect to earn an average salary of around $97,000, and job security is good, assuming they have the right technical skills and knowledge. Those with the most experience can command the highest salaries in either field, so there is plenty of incentive to stay in the industry.
Which One Is Easier To Get Into?
When considering a career in Oracle, deciding between becoming a Database Administrator (DBA) or an Oracle Developer. Both positions have unique pros and cons; however, when considering which job is easiest to get into, there are a few things to consider.
As a DBA, you will maintain and troubleshoot databases and perform routine maintenance tasks. That could include setting up users and granting privileges, ensuring security and data integrity, developing backup and recovery procedures, and conducting performance tuning. Becoming a DBA requires in-depth knowledge of the Oracle technology stack and experience with related software.
On the other hand, an Oracle Developer focuses on developing software applications for Oracle databases. It could include writing stored procedures, triggers, and packages and developing SQL queries and PL/SQL code. To become an Oracle Developer, you will need in-depth knowledge of the Oracle technology stack and experience with developing software applications.
When it comes down to it, becoming a DBA may be easier to get into than becoming an Oracle Developer. DBA positions typically require more technical knowledge, but developing software applications for Oracle databases can be more complicated. To get into a career in Oracle, having the right technical skills and experience is essential.
What to Expect With Oracle Careers:
Start a career in Oracle, and you can expect to find various opportunities. Oracle Database Administrators (DBAs) and Oracle Developers are two of the most sought-after Oracle careers. The difficulty in getting into either one of these roles depends on your qualifications, experience, and skills.
Oracle DBAs will maintain the databases, while Oracle Developers will design, build, and implement Oracle systems. Both positions require a strong working knowledge of Oracle technologies, and the demand for experienced Oracle professionals continues to rise. You can expect to be well-paid and in high demand with either position.
Demand For Professionals:
The demand for Oracle professionals has steadily increased due to the widespread use of the Oracle Database Management System. With the growing need for qualified Oracle Database Administrators (DBAs) and Oracle Developers, more people are considering these roles for job opportunities.
As a result of this increased demand, Oracle professionals get the advantage of greater job stability and higher pay as employers recognize their skill level and experience. Even within the Oracle program, the roles of DBA and developer are in high demand. Those individuals with the right mix of training, experience, and certifications can often find success in either of these positions.
Salary should be important when considering an Oracle DBA or Oracle Developer position. Generally, Oracle DBAs command higher salaries than Oracle Developers. While salaries may vary by region and according to experience, Oracle DBAs typically average a salary of $90,000, while Oracle Developers may range from $70,000 to $80,000.
Additionally, both positions offer the potential for bonuses and other benefits, including health insurance, vacation time, and stock options. Investing time and energy into learning the various facets of Oracle databases can pay off in the long run, so it’s important to weigh salary considerations when choosing the right job for you.
Regarding job availability, the Oracle DBA and Oracle Developer roles have distinct differences. The Oracle DBA role typically requires more technical expertise, experience, and certifications making it more difficult to get into than the Oracle Developer role. That said, the Oracle DBA role is more highly sought-after, offering higher salaries and better job security.
On the other hand, the Oracle Developer role can provide a great entry point into the Oracle world with a lower barrier to entry. Once you’ve gained experience and certifications, you can consider transitioning into the Oracle DBA role. With enough determination, both roles can offer great career opportunities. As a fresher Oracle developer you can get job easily but as fresher Oracle DBA it is difficult to get job easily.
In conclusion, it is clear that both Oracle Developer and Oracle DBA roles offer plenty of opportunities and come with demanding qualifications and impressive salaries. While Oracle DBAs may require more technical skills and knowledge, Oracle Developers are also in demand.
Ultimately, the easier job to get into depends on a combination of the individual’s technical skills, qualifications, and market conditions. Professionals considering either of these roles should keep up to date on the latest offering from Oracle and expect to explore a variety of available job roles. Nonetheless, with the right qualifications and industry experience, a successful career in either Oracle DBA or Oracle Developer roles can be achieved.
|
OPCFW_CODE
|
22 Jun 2017 Selenium Webdriver - browser preferences for downloading files Some time ago I developed Selenium tests for feature, where one of the important Unfortunately, since in IE 8 there is no option to disable download dialog.
13 Jan 2014 Microsoft Internet Controls, Getting at Internet Explorer in VBA the html document as a text file without using IE, hopefully that should work !! Downloading the HTML of one line so I can manually parse it (Even though i don't know how to code it, i heard about Selenium and Beautiful soup for Python). hi ! to learn how to download files from web ui, i have downloaded the example /36309314/set-firefox-profile-to-download-files-automatically-using-selenium-and-java#= ie. a download button or if it is a pdf view mode, CTRL+S will do. 11 Jun 2019 I'm looking for a way to download the files (zip files) in a specific path or be footer notification in Internet Explorer, it just starts downloading the file in to save each file before downloading” but when it runs by selenium this 1" object, downloads files for you, then opens file, reads value and upgrades to Internet ExplorerRun Internet Explorer in BackgroundSelenium & VBA This Files and Folders - Free source code and tutorials for Software developers and Architects.; Updated: 10 Jan 2020 Then, just as I discussed in the article on automating IE Using VBA to Automate Internet Explorer Sessions From an Excel Spreadsheet Using VBA to Automate Internet Explorer Sessions From an Excel Spreadsheet Its integration with Windows…
VBA - Script to Download a file from a URL Below is a Visual Basic for Applications script I quickly build to download a file through a Macro to the computer. Selenium WebDriver Page Test / workflow recorder (successor to SWD recorder) - sergueik/SWET Macros and Add-ins - Free source code and tutorials for Software developers and Architects.; Updated: 4 Dec 2019 Uncategorised Tips and Tricks - Free source code and tutorials for Software developers and Architects.; Updated: 11 Jan 2020 Third Party Products and Tools - Free source code and tutorials for Software developers and Architects.; Updated: 25 Dec 2019 Simon Stewart, the creator of WebDriver & core contributor to Selenium hosted a webinar with BrowserStack to talk about the Selenium 4 upgrade. Watch video In this video, you'll learn how to get all the file name and folder name using Google Apps script. You can use class name DriveApp to access the drive and usChrome settings seleniumokidokistore.cl/afaktpm/chrome-settings-selenium.htmlSelenium Webdriver - browser preferences for downloading files In this article you will find and introduction to browser’s profiles/preferences and quick solution for managing downloading files from script level.
You'll also want to download the Internet Explorer Driver Server: Once you've downloaded the required Selenium files, extract the zips to a local drive on your
I am using a code to download a csv file from a website. Later I figured out the use Of selenium Wrapper and Created this following code : There is no way to hide the browser and the cmd window like we hide IE using ie. 14 Dec 2014 To handle Downloads with selenium, we need to define settings to the download files to a specified location with Internet Explorer / Edge 27 Nov 2017 Steps to Download File using Selenium and Verifying the existence In the above code, files are saved in a string array (i.e. string fileEntries). 4 Mar 2016 VBA users have been using IE automation from many years and most of them Selenium VBA wrapper library – you can download and install it from here Open any excel file, go to code editor (Alt + f11) and create a sub. It covers how to use Selenium with VBA. mejs.download-file: https://excelvbaisfun.com/wp-content/uploads/ 20 Jul 2018 Download a file with Selenium WebDriver without running into the System Dialog or any warnings during the file download. This method will 13 Apr 2018 How to download a file in chrome or mozilla browser using Selenium WebDriver? Hey Uday, you In order to use IE Driver youREAD MORE.
- free movies torrent download websites without registration
- centos download ova file
- faber piano adventures pdf download
- english grammar pdf book free download
|
OPCFW_CODE
|
Microsoft's flagship database is an important tool, with local and in-cloud versions offering powerful archiving and analysis tools. It also becomes an important application for data scientists, providing a framework for the construction and testing of machine learning models. There's a lot in SQL Server, and a new version can show you where Microsoft thinks your data needs will go in the next few years.
The latest CTP for SQL Server 2019, version 2.1, is now available to help evaluate and test the next version outside of production environments. Like its predecessor, it is available in Windows and Linux versions, although support for containers and Kubernetes has now been added. Adding container support, using Docker and the Linux version of SQL Server, is an interesting option as it allows you to create SQL Server in huge analytical engines based on Kubernetes that work with data lakes hosted by Azure using Apache Spark .
The current preview installer offers the option of a basic, quick and fast installation or a more detailed custom installation. The first option requires less disk space because they are the files needed to perform the basic installation, while a custom installation reduces the entire installation support of SQL Server 2019. For most of the basic development tasks it is a basic installation is sufficient, although we recommend a custom installation as part of a complete evaluation. You can also download the installation media if you plan to install it on more than one computer to evaluate the capabilities of the SQL Server cluster.
Machine learning is an important part of SQL Server 2019 and now includes integrated tools for creating and testing machine learning models. You can install it with support for the popular R and Python languages, so your data science team can work within the database, prepare and test models before you can format them on your data. Microsoft is using its own Open R distribution and the Anaconda Data Science Python environment, which includes additional numerical analysis libraries, including the popular NumPy.
You can also install SQL Server 2019 as a self-learning environment for machine learning. Local SQL Server instances on developer workstations will be able to use R and Python familiar tools to work directly with training datasets, without affecting production systems or server resource utilization.
Really BIG data
Working with large-scale data has long been a problem, with very few database engines designed to function as part of a distributed architecture. With SQL Server 2019 it is now possible to create what Microsoft calls Big Data Clusters, using a mix of SQL Server and Apache Spark containers on Kubernetes using the existing PolyBase capabilities of SQL Server. With public clouds that support native Kubernetes, you can deploy Big Data Clusters on Azure, AWS, and GCP as well as your infrastructure. Integration with Azure Data Studio tools simplifies the creation, execution and sharing of complex queries.
Microsoft's focus on data science scenarios fits perfectly with the intelligent cloud / business intelligence strategy. Data is essential to create machine learning tools and, by executing R and Python code within the database, it is possible to send complex queries from the SQL Server command line, using familiar tools to create and test the code before distributing and executing it. . Microsoft is providing sample code through GitHub, which shows how to combine relational data with big data. It also shares sample architectures that show how to use this as a basis for the creation of machine learning systems as well as open source technologies such as Kafka.
Other new features, such as static data masking, focus on protecting and disinfecting data so that they can be used without affecting regulatory compliance. The application of static masking of data to columns in the export of a database allows developers to work with real data and avoid loss of sensitive information. There is no way to recover the original data, as it is a one-way process. Previous versions of SQL Server introduced dynamic data masking, which works only with the original database. Exporting with static masking there is little or no risk for developers to unmask or randomly alter data in real time, leaving them to produce code that can be put into production without any modification.
At the database level, when you create indexes you can now stop and start. If a disk is being filled, you can pause an indexing operation, add more memory to the volume, and then resume from where it left off. It is not necessary to start from scratch, saving time and calculations. There is also the possibility to reboot after errors, saving more time after correcting the error that caused an index to crash.
With SQL Server 2019, Microsoft is proving that even though relational databases have been around for a long time, there is still room for improvement and innovation. Building a database engine that works like every SQL Server has worked in the past, while supporting the work with machine learning and large amounts of data on a large scale, offers a tool ready to update what you have and to support you as work with your data securely, locally and in public clouds. All you have to do is download it and see what it can do for you.
|
OPCFW_CODE
|
“JOSM,” which stands for Java OpenStreetMap Editor, is a powerful and extensible desktop application designed for editing and contributing to OpenStreetMap (OSM) data. OpenStreetMap is a collaborative mapping project that aims to create a free, editable map of the world. JOSM provides advanced editing capabilities, allowing users to create, modify, and analyze geographic data with precision. Here’s a more detailed explanation of its features and functionalities:
- Advanced Editing Tools: JOSM offers a wide range of editing tools and functionalities for manipulating geographic data. Users can create, delete, move, and modify various map features such as roads, buildings, rivers, and points of interest. The application supports complex geometries, allowing users to draw and edit detailed shapes with ease.
- Extensibility: JOSM is highly extensible, allowing developers to create and integrate plugins to enhance its functionality. Plugins can add new editing tools, analysis capabilities, data import/export options, and integration with external services, expanding the capabilities of JOSM to meet specific user needs and requirements.
- Offline Editing: One of the key features of JOSM is its support for offline editing. Users can download OSM data for a specific area and edit it locally on their computer without an internet connection. This is particularly useful for editing remote or rural areas with limited connectivity or for offline mapping projects.
- Integration with OSM Database: JOSM seamlessly integrates with the OpenStreetMap database, allowing users to upload their edits directly to the OSM server. This enables users to contribute their changes and additions to the global OSM dataset, making their edits available to other users and applications worldwide.
- Data Validation and Quality Assurance: JOSM includes built-in tools for data validation and quality assurance, helping users identify and correct errors and inconsistencies in the map data. It highlights common issues such as overlapping features, invalid geometries, missing tags, and conflicting edits, enabling users to maintain data integrity and accuracy.
- Customization and Configuration: JOSM offers extensive customization options, allowing users to configure various aspects of the application to suit their preferences and workflow. Users can customize keyboard shortcuts, user interface themes, editing presets, and other settings to optimize their editing experience.
- Collaborative Editing: JOSM supports collaborative editing workflows, enabling multiple users to work on the same map data simultaneously. Users can share their edits with collaborators using version control systems such as Git or by exchanging JOSM session files, facilitating teamwork and coordination on mapping projects.
- Documentation and Community Support: JOSM is supported by a vibrant community of users, developers, and contributors who provide documentation, tutorials, forums, and online resources to help users get started and master the application. The JOSM website offers extensive documentation, user guides, and tutorials to assist users in learning how to use the application effectively.
josm Command Examples
1. Launch JOSM:
2. Launch JOSM in maximized mode:
# josm --maximize
3. Launch JOSM and set a specific language:
# josm --language [de]
4. Launch JOSM and reset all preferences to their default values:
# josm --reset-preferences
5. Launch JOSM and download a specific bounding box:
# josm --download [minlat,minlon,maxlat,maxlon]
6. Launch JOSM and download a specific bounding box as raw GPS:
# josm --downloadgps [minlat,minlon,maxlat,maxlon]
7. Launch JOSM without plugins:
# josm --skip-plugins
In summary, JOSM is a feature-rich and extensible desktop application for editing OpenStreetMap data. Its advanced editing tools, offline editing capabilities, extensibility, integration with the OSM database, data validation features, customization options, collaborative editing support, and community resources make it an indispensable tool for contributors and enthusiasts involved in mapping and geospatial data management.
|
OPCFW_CODE
|
QUESTIONS:: Why express redirects when i tried to open a static page
Hi, I was trying to make my own web server in cpp so i was trying to learn how http and express in itself works.
When i was trying to see how express static page is displayed in browser . i discovered that express redirect you one time when you are trying to load a statically host html page
My express code
`const express = require('express')
const app = express();
app.listen(4000);
app.get("/",(req,res)=>{
res.send("Harshit")
})
app.use("/html",express.static("./public"))`
###My client side
`http --follow --all --verbose GET http://localhost:4000/html ``
GET /html HTTP/1.1
Accept: /
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:4000
User-Agent: HTTPie/2.4.0
HTTP/1.1 301 Moved Permanently
Connection: keep-alive
Content-Length: 175
Content-Security-Policy: default-src 'none'
Content-Type: text/html; charset=UTF-8
Date: Mon, 19 Jul 2021 21:46:50 GMT
Keep-Alive: timeout=5
Location: /html/
X-Content-Type-Options: nosniff
X-Powered-By: Express
Redirecting
Redirecting to /html/
GET /html/ HTTP/1.1
Accept: /
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:4000
User-Agent: HTTPie/2.4.0
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Connection: keep-alive
Content-Length: 263
Content-Type: text/html; charset=UTF-8
Date: Mon, 19 Jul 2021 21:46:50 GMT
ETag: W/"107-17ac0b0b192"
Keep-Alive: timeout=5
Last-Modified: Mon, 19 Jul 2021 21:34:03 GMT
X-Powered-By: Express
Document
`
response end
here you can see express first send 301 status code
so i want to know why this is the case or .....basically why
This is the default behavior when you access a directory using express.static and do not have a trailing slash and there is an index.html file. This matches the behavior of other servers like Apache. The reason is that without the redirect, there is now two different URLs to access the page in which relative links on the page will behavie differently. If you do not use relative links, then you can turn off the redirect by passing the redirect: false option to express.static.
I hope that helps!
hey you explain a bit more ..i very new to these kind of stuff ...or can you point me to some docs or an article
How web browsers interpret HTML pages and links is likely beyond the issue tracker here. I don't have a link off hand, but if you want to ask this on a forum like Gitter or StackOverflow I think you will get more wider audience with those expertise.
O! i am sorry. I do't want to bother anybody.. i am sry if my question was stupid or i am asking it at wrong place
..BTW i watched your talk this one "https://www.youtube.com/watch?v=HxGt_3F0ULg"..... love it.. Thanks for what you are doing for the community
|
GITHUB_ARCHIVE
|
[WIP] Clean up kernel and language usage
This PR is another go at partially cleaning up KernelManager; in this case, the handling of language names:
I've replaced the setting grammarToKernel with kernelMappings.
I've fixed the handling of the setting languageMappings.
I've changed the behaviour of the command hydrogen:update-kernels. Now, this command only updates, i.e. it doesn't delete the old entries.
I've only tested this PR in Ubuntu 16.04. This means ipython kernelspec not working.
I'm yet to test whether this PR works on systems with ipython kernelspec or jupyter kernelspec. Please, do not merge before testing that.
I reckon that since this PR changes the settings offered by Hydrogen, the version should be bumped up to 0.9.0.
Fixes #258
version bump sounds good
That's quite a bit of cleanup! I'll have to check this out later. Happy to see patches flowing at least.
@rgbkrk I'm sorry about that.
KernelManager is used by almost every other class in Hydrogen, and each class seemed to have their own way to call KernelManager (some used kernel specs, some used some kind of language name, some used the editor grammar directly). So in the end, what I thought it would only touch the handling of language names ended up having to with deal kernel specs and grammar too.
When you review the code, bear in my mind, that what Hydrogen used to call kernelInfo is actually a kernel spec and I've renamed it to kernelSpec.
KernelManager would benefit from a further cleanup:
now that Hydrogen only uses one kind of language name (that derived from Atom's grammar), it'd be possible to rename everwhere grammarLanguage to simply language without ambiguity.
to open the possibility of having multiple kernels running for same language, running kernels shouldn't be indexed by the their grammarLanguage. We need a way to link editors and kernels (this change olone would remove a lot of code everywhere).
Although in this PR I removed code that isn't in use, I wasn't exhaustive and further work is needed.
This PR isn't ready yet. I'm testing on Jupyter installation and I've find a bug. I will update the PR later today.
@rgbkrk I've fixed the PR after testing on a VM with Jupyter. The PR is now ready for review.
Thanks a lot for cleaning this up!
The Select Watch Kernel command isn't working for me. The kernel names don't appear in the list.
@lgeiger Oops! Forgot to check that one!
@lgeiger It should be fixed now!
PS: I really miss having the safety net of a good test suite.
It works 👍
@n-riesco I completely agree with you, a test suite would be great!
I'd like to assume only good here since @lgeiger says it's working well. How about we merge and keep going?
I didn't test it with multiple kernels for the same language but that should be fine too.
So 👍 for merging.
It's ok for you to merge others' PRs as well @lgeiger, so feel free when you're happy to.
Want to cut the release?
Happy to test against javascript, python, scala, and julia if you tag a release.
I can cut a release by tomorrow morning. (I'm on the go)
@sbromberger , @updiversity The new version will offer two settings:
languageMappings now does what it was meant to do. It maps kernels' languages to Atom's languages. For example in @sbromberger 's case, for the Scala kernel, {"scala211" : "scala"} should work.
kernelMappings, on the other hand, maps Atom's languages to kernels. For example, in @updiversity 's case, to tell Hydrogen to use a specific kernel for python files, this should work {"python": "Python Venv 2"}.
https://github.com/nteract/hydrogen/releases/tag/v0.9.0 🎉
Python, Julia and Javascript -> no problems. Will try scala when I get into work. Thank you! The new settings are much clearer.
Great! Thanks for testing 👍
Confirmed scala is working. Note that the instructions are accurate but reflect a change to the configuration that may trip up some folks. Previously, the mapping was {"scala" : "scala211"}. Now it's reversed.
For the record, the reason why languageMappings uses the kernel language as index is to allow different kernel languages to map to the same Atom language, e.g. {"scala211" : "scala", "scala210" : "scala"}
Understood, and please don't interpret my comment as a complaint - I noted it in case someone who has recently upgraded is puzzled as to why his/her mapping broke.
This is really wonderful work and has made my coding environment immensely better. Thank you :)
|
GITHUB_ARCHIVE
|
Text 2 Subtitle
There is no way to pass a param to create a .srt subtitle file when speaking from a book.
its sometimes hard to read along with a book while listening, so it would be good to be able to read the original text and not have "harvard" be rendered as "hamburger" when generating a whisper file later.
Where is the method that checks the TTS generation against the original text with whisper? If you can tell me, I can maybe use it to make this feature.
Line 314 in epub2tts.py, this function:
def compare(self, text, wavfile):
result = self.whispermodel.transcribe(wavfile)
text = re.sub(" +", " ", text).lower().strip()
ratio = fuzz.ratio(text, result["text"].lower())
print(f"Transcript: {result['text'].lower()}") if self.debug else None
print(
f"Text to transcript comparison ratio: {ratio}"
) if self.debug else None
return ratio
I'm confused as to what you are talking about though, why would you need an srt file if you have the original book in either epub or txt format?
tl;dr below but simple
Use case:
After making voice with epub2tts, you want to read along, but the epub is a pain to read. So you render the mp3 to mp4 and subtitle it with whisper. But the subs are wrong. So you try to output srt on epub2tts - but theres no option for that.
I'm confused that you don't understand. I thought most people learnt to read these days by watching books on youtube...
This image illustrates the point.
Every time it says the characters name, it uses a different spelling, due to whisper running a second time. in order to generate the video subtitles, as it stands you have to run it through whisper with a script, or another a subtitle program that uses whisper like "subtitle edit" - which often takes longer than making the speech in the first place with epub2tts
Since epub2tts can do this, it seems a waste not to.
Ah, I understand this use-case. Unfortunately it would not be possible for epub2tts to create an srt file while creating the original audiobook, because the timing would quickly become incorrect - at the end of the creation of each chapter, all silences longer than 1 second are removed from the audio stream. This removes inevitable long pauses, but it means when several (sometimes dozens) of seconds are removed throughout the book, so the drift of incorrect timing as far as whisper is concerned would get worse as worse as the book goes on.
The other issue is the way whisper is used in this case - whisper is run against each individual sentence, so whisper doesn't have any idea of the overall timing, thus it could not create a full SRT file.
You might be interested in looking at https://smoores.gitlab.io/storyteller/ which does something similar, highlighting sections of the book while listening to an audiobook by creating an epub3 version of the book.
The other thing you might check out is https://github.com/Vaibhavs10/insanely-fast-whisper which can be MUCH faster than whisper.
The other issue is the way whisper is used in this case - whisper is run against each individual sentence, so whisper doesn't have any idea of the overall timing, thus it could not create a full SRT file.
Whisper? Whisper is what is used to create srt files in programs like "subtitle edit" or https://github.com/abdeladim-s/subsai ..
But isn't the audio created with RVC in 1-20s batches? One could probably pair it to the segments befure theyre fused... It doesn't seem hard but maybe it is...
By the way, are you aware that about 5% of the text from the epub is not present in the final file? at least when using whisper ratios of 2-4 (because 1 is full of groaning voice sounds)
I will not be able to put any time into implementing a feature like this, but if you would like to submit a PR I would be happy to test and review it.
|
GITHUB_ARCHIVE
|
In Cisco devices, the show commands are a way to obtain the information needed from the device in general and specific feature status such as DHCP, Interfaces, OSPF, neighbors etc. in particular.
The show version command provides the bird’s eye view of the device in terms of the IOS version, hardware platform, interface details etc. Being informed about the device is as important as the configuration on it. In the light of this, let’s dissect the show version output of a router and take a deep look on each of its sections. I’m using the Cisco ASR-1006 router running IOS-XE.
The show version is a user-exec mode command that doesn’t require any privileges to run on the router.
Router> show version Cisco IOS Software, IOS-XE Software (PPC_LINUX_IOSD-ADVENTERPRISE-M), Version 15.2(4)S4, RELEASE SOFTWARE (fc1) Technical Support: http://www.cisco.com/techsupport Copyright (c) 1986-2013 by Cisco Systems, Inc. Compiled Sun 01-Sep-13 09:48 by mcpre IOS XE Version: 03.07.04.S Cisco IOS-XE software, Copyright (c) 2005-2013 by cisco Systems, Inc. All rights reserved. Certain components of Cisco IOS-XE software are licensed under the GNU General Public License ("GPL") Version 2.0. The software code licensed under GPL Version 2.0 is free software that comes with ABSOLUTELY NO WARRANTY. You can redistribute and/or modify such GPL code under the terms of GPL Version 2.0. For more details, see the documentation or "License Notice" file accompanying the IOS-XE software, or the applicable URL provided on the flyer accompanying the IOS-XE software. ROM: IOS-XE ROMMON Router uptime is 9 minutes Uptime for this control processor is 12 minutes System returned to ROM by reload System image file is "bootflash:/asr1000rp1-adventerprise.03.07.04.S.152-4.S4.bin" Last reload reason: PowerOn cisco ASR1006 (RP1) processor with 1694412K/6147K bytes of memory. Processor board ID FXS1748Q0U3 5 Gigabit Ethernet interfaces 8 Serial interfaces 8 Channelized E1 ports 32768K bytes of non-volatile configuration memory. 4194304K bytes of physical memory. 917503K bytes of eUSB flash at bootflash:. 39004543K bytes of SATA hard disk at harddisk:. Configuration register is 0x2102
1. IOS Version
Of course, the primary reason we run show version is to know the IOS version running on the device. And it’s shown at the beginning of the command output.
Everything you need to know about IOS version and naming here.
2. License and Copyright Information
Next section tells us what we need to know related to the license and copyright of the software. This section also brought out sensitive information related to cryptographic modules, country specific restrictions on the usage of software etc.
Inside the router motherboard you’ll find a ROM chip which is used to store bootstrap code which will then load the rest of the IOS from Flash memory during booting process. ROM contains a mini-IOS called ROMMON (ROM Monitor) which is helpful to perform router password recovery procedure, download softwares using the Serial connection and upgrade the IOS, if you will. To enter into the ROMMON mode, you have to hit Break while the router is booting. This section of the show version output shows the name of the ROM chip detected.
4. Uptime and System Image file
- Moving further down, the uptime of the router tells how long has the router been up and running, plus the uptime of the control processor of the router.
- What made the system to return to ROM?
- Absolute path of the IOS image file. Required if you want to backup the image via a TFTP server.
- What was the reason for last reload of the router?
5. Hardware configuration
This section is the second most important one, next to IOS version. We’ll get to know
- Processor on the motherboard of the router. And the processor board ID.
- Amount of system memory (RAM).
- List of interfaces on the router. For example, my router has 5 Gigabit Ethernet Interfaces which enables me to route between five different LANs. Plus 8 Serial WAN interfaces allowing me to have up to eight WAN connections, meaning I can reach out and connect eight other routers or networks. “Channelized E1” indicates the type of serial connectivity enabled on those 8 serial ports. These 8 serial ports are the result of the E1 card plugged into the router.
- The size of Non-volatile RAM (NVRAM) which stores the startup configuration of the router.
- The flash storage size that homes the IOS images.
- The hard disk capacity in which root file system is mounted.
6. Configuration Register
A special register called Configuration Register is a 16-bit (2-byte) register whose value affects the behavior of the router. The configuration register value also sets the console baud rate to be used on terminal connections. Two well-known values are,
0x2102 — Ignores the Break during booting, boots into ROMMON if no IOS is found in Flash memory and sets the console baud rate to 9600 bits/sec, which is the default speed on most of the platforms.
0x2120 — Boots into ROMMON and sets the baud rate to 19200 bits/sec.
show version shows the value of the configuration register.
|
OPCFW_CODE
|
Please visit this link to test flashgot integration. Using the dnf software package manager edit this page dnf is a software package manager that installs, updates, and removes packages on fedora and is. The fedora releases here are no longer supported or maintained, so they do not receive bug fixes or security updates. Install uget download manager in ubuntu, fedora, debian unixmen. In a virtual machine you can easily add pretty as many disks as you want, as said before, so using lvm is a good choice in order to further expand available filesystem space inside the vm. These are the same programs we use to create all the artwork that you see within the fedora project, from desktop backgrounds to cd sleeves, web page designs, application interfaces, flyers, posters and more. The supported linux distributions include ubuntu, fedora, mandriva, opensuse, linuxmint, arch linux, chakra.
Upgrading fedora using package manager fedora project wiki. Flareget best download manager for linux, windows, mac. Using the dnf software package manager edit this page dnf is a software package manager that installs, updates, and removes packages on fedora and is the successor to yum yellowdog updater modified. Fedora 29 is the latest lts release available to download. Apr 06, 2016 this video illustrates the steps to install uget download manager 2. Flareget best download manager for windows, mac and linux. Fedora is an open source linux distribution based on and sponsored by red hat. Apart from the inbuilt download manager wget on fedora, just as on any distribution that is based on gnulinux package, there are more. Home topic applications install uget download manager 1. Fedora workstation is a reliable, userfriendly, and powerful operating system for your laptop or desktop computer. In other desktop environments, search for system monitor in the menu. Install uget in ubuntu and linux mint, elementary os.
While using gnome desktop, press super key windows key and look for system monitor. The design suite includes the favorite tools of the fedora design team. It puts you in control of all your infrastructure and services. The distribution is a good place to get the latest stable software and technologies consistently. While working in linux, you may have seen downloadable files with the. A lightweight, wellintegrated lxqt desktop environment. All the major linux distributions have a task manager equivalent. When the question comes whether a web browser can handle multiple download, pause system, torrent integration or quick download, then the answer is not. Fedora is the upstream source of the commercial red hat enterprise. Mar 06, 2014 home topic applications install uget download manager 1. Or, if you only need to run apps, install the runtime.
Flareget is best download manager for windows, mac, ubuntu, fedora, linux mint, chrome. Below youll find links that lead directly to the download page of 25 popular linux distributions. The xtreme download manager works with all browsers. A classic fedora desktop with an additional 3d windows manager. Rpm package, which makes it easy to install on ubuntu, debian, red hat, fedora and other linux operating systems.
This linux download manager is rich with tons of basic and advanced customizable features. Fedora server is a powerful, flexible operating system that includes the best and latest datacenter technologies. Flareget is a full featured, multithreaded download manager and accelerator for windows, mac and linux. Red hat enterprise linux branches for commercial use are based on fedora, while the open source fedora linux os itself is freely available for use and customization. I am not aware of any application in linux equivalent to idm or fdm. Finding and installing linux applications fedora docs site. A modern desktop featuring traditional gnome user experience. Fedora coreos is an automatically updating, minimal, containerfocused operating system. Run the following commands in terminal to install xtreme download manager 2018 on linux 32 bit systems. This tutorial has download links to dvd iso images of fedora 29 desktop and server editions.
The fedora xfce spin showcases the xfce desktop, which aims to be fast and lightweight, while still being visually appealing and user friendly. The packages on this page are maintained and supported by their respective packagers, not the node. Best download managers for ubuntu and other linux distros. How to install uget download manager in fedora, linux mint. The uget project team is pleased to announce the release of. This article describes how to use a package manager to install. It shows you all the running processes and the memory. Apart from the inbuilt download manager wget on fedora, just as on any distribution that is based on gnulinux package, there are more options to explore. Just a click and youre connected to your wireless lan router, or online via many supported 3g mobile broadband cards. How to install uget download manager on centos 7 techbrown. See the linux installation video for a tutorial of this process.
Other than an exe file, it is also available with a. On older versions of debian, ubuntu, linux mint and fedora, users can also install uget. Rpm files are designed to be downloaded and installed independently, outside of a software repository. Fedora is a linux distribution developed by the communitysupported fedora project which is sponsored primarily by red hat, a subsidiary of ibm, with additional support from other companies. Apr 28, 2020 the latest stable version is currently fedora 32, you can download it from the fedora official website. See stop or start boinc daemon after boot page for helpful commands for managing the daemon what the installer does.
The 6 best download managers for fedora foss linux. Fedora xfce is a fullfledged desktop using the standards. This video illustrates the steps to install uget download manager 2. A complete, modern desktop built using the kde plasma desktop. Though all the modern browsers have default download manager, its not good enough to handle effective downloading system. Please report any issues you encounter to the package maintainer. Kget as evident is the download manager as a part of the kde. Apr 28, 2020 the xtreme download manager works with all browsers. Usually, its called system monitor, but it actually depends on your linux distribution and the desktop environment it uses. It has features like help menu enhancements, check for updates etc. Jul 26, 2018 well, if you are, then take a deep breath and read the following list of 4 best download managers for linux. Install uget download manager in ubuntu, fedora, debian.
When the question comes whether a web browser can handle multiple download, pause system, torrent integration or quick download, then the answer is not satisfactory. Oct 29, 2019 fedora is an open source linux distribution based on and sponsored by red hat. I t is a wellknown fact that using download managers can help improve download speeds as compared to web browsers. How to install fedora as a virtualbox guest fedora magazine. Each now version is supported with updates for months in total. It supports a wide range of developers, from hobbyists and students to professionals in corporate environments. The advantages of installing via the package manager are. A very nice gui, similar to internet download manager. To install most recent stable version of xtreme download manager xdm in linux distributions such as ubuntu, debian, linux mint, fedora. Fedora comes with firefox as the default web browser. A light, fast, lessresource hungry desktop environment. Jan 01, 2020 fedora comes with firefox as the default web browser.
This guide will walk you through the process to install a. Mar 03, 2019 while working in linux, you may have seen downloadable files with the. Use task manager in ubuntu and other linux distributions. It is a consumer operating system distributed in different editions. When it comes to downloading manager on linux, there are many actually that is free, open source and yeah. In fact by default fedora will set up a lvm, logical volume manager.
Some linux distributions fedora, ubuntu, debian, possibly others have boinc installation packages your distros package manager can download from your distros repositories and install on your computer. It provides users with installableonly live dvd iso images, as well as live cds for. Xdm is a download manager for linux that ramps up your speed to. From a command prompt, run the nessus install command specific to your operating system. Being based on red hat linux, the distro uses the rpm package manager to install, update and remove packages. Fedora is part of the red hat family however its is a free os. Using the dnf software package manager fedora docs site.
Its obvious to have a good download manager for linux or ubuntu. Mendeley desktop for windows mendeley desktop for macos. How to install xtreme download manager xdm on ubuntu linux. You can find the fedora 29 release notes on its official website. Lvm could be useless in a notebook, where you will never add a second hard drive. Jan 20, 2019 fedora is part of the red hat family however its is a free os. May 06, 2015 in fedora 20 21, latest version of uget 2. Steadyflow is also available in fedora repositories. It is a lightweight but powerful and fullfeatured download manager for linux. In this article, well see how to find and use the task manager on ubuntu and other linux distributions that use gnome as the desktop environment system monitor. Fedora works as one unified project and is a directly connected to more upstream projects. If you install a nessus agent, manager, or scanner on a system with an existing nessus agent, manager, or scanner running nessusd, the installation process will kill all other nessusd processes. Nessus does not support using symbolic links for optnessus.
Steadyflow download manager is available for all the major linux distros, but here i am going to show how to install it on ubuntu, linux mint, and fedora. Fedora releases a new version in approximately every 6 months. Fedora contains software distributed under various free and opensource licenses and aims to be on the leading edge of free technologies. If you are a new fedora user, you may be wondering about what to do after installation. Fedora is a linux operating system distribution developed and supported by the fedora project, an open source community formed in 2003 as a partnership between red hat and volunteer contributors.1264 24 1009 574 17 1445 75 1514 1348 1156 973 1090 638 1429 52 774 1169 882 688 542 716 1314 1583 468 29 700 1352 1008 463 1471 1611 847 208 875 261 399 1056 1094 546 1254 837 766 1298
|
OPCFW_CODE
|
Tech is only becoming more vital to business, which means hiring more tech workers is generally a positive step toward success. Unfortunately, most business leaders know precious little about tech credentials. Can a computer scientist perform data analysis? Is a systems engineer qualified to build networks? Who do you hire when you want everyday tech support?
Though tech is relatively new, it is incredibly vast, with hundreds of specialties and thousands of job titles – including the ones with “wizard” and “guru” on the ends. Fortunately, it doesn’t take much to learn about the distinct qualifications for different fields within tech. In fact, this guide should clear up most of your confusion and help you hire the right tech worker every time.
There is a widespread rumor that tech workers don’t need formal educations. After all, some of the biggest names in tech – Bill Gates, Larry Ellison, Mark Zuckerberg and more – found outrageous success after dropping out of college. Indeed, in the past, many universities were behind the times in tech education, which meant most students eager to progress in this burgeoning field were better off performing self-guided study without the burden of tuition. Therefore, many hiring managers suspect that academic credentials might not mean as much in a prospective tech employee.
However, times have changed. Colleges around the country have expanded their tech programs and now offer some of the most advanced tech learning possible. A tech degree ensures that an applicant has the fundamental skills to perform certain necessary jobs ― though it can be impossible for tech outsiders to distinguish between various tech fields.
Science vs. Engineering: Computer science is a broad field which encompasses the four major concepts in computing: theory, architecture, programming languages and algorithms. Usually, a computer science grad has basic skills to succeed in any tech position. Computer engineering is typically a convergence of computer science and electrical engineering, which means computer engineers tend to focus their efforts on hardware and software. A computer engineering grad fits best into jobs with “engineering” titles.
Design vs. Development: Whether discussing websites, software or another tech product, design and development are subtly different. Both designers and developers are necessary members of tech teams, but degrees in these fields differ. Web designers are creative types who consider primarily the end-user’s experience. They work on the front-end of tech projects to ensure the products look and feel polished. Meanwhile, web developers are more concerned with functionality, so they employ programming languages to solve problems and achieve the goals of their client or company.
Bachelor vs. Master vs. Ph.D.: A bachelor’s degree is usually sufficient to train tech workers for their responsibilities in entry-level positions. Likely, workers will gain the skills and knowledge necessary for promotion organically through their work experiences, but some certification may better prepare them for elevated roles. However, advanced tech degrees, such as master’s and doctorates, are usually only beneficial for those interested in performing research or studying tech theory – which typically aren’t the goals of employers.
On top of degrees, one can also earn certificates to bolster their education and credentials. Certificates are becoming more and more popular in many fields, but as the need for qualified tech workers grows, certification programs are proving to be more valuable for their speed and effectiveness at ensuring certain tech skills.
|
OPCFW_CODE
|
There has been a phenomenal growth in the adoption of GraphQL and accompanying tooling. As with any new technology however, we are still learning about security, best practice approaches & how to do GraphQL right. There is still debate around when it is appropriate to use GraphQL as opposed to REST or gRPC for example.
This post tells my story about why I created WunderGraph, how I believe that WunderGraph takes the best of GraphQL and REST and puts them together, in a particularly unique way, to help developers to become more productive.
Since the beginning of my software engineering career, I have been building apps for native devices and the web. To power these apps I've both written and consumed various API types. Among those include SOAP, REST, gRPC, OData as well as GraphQL.
I've been following discussions on the pros and cons of the different API styles for several years now, and have become particularly interested in comparisons between GraphQL and REST as they seem to be the leading API technologies to power a modern web.
Depending on the use case, GraphQL can give you a lot of benefits when you want to query data from multiple services. You write a query and get data from 5 different services in one go - that's powerful.
Once you're sold on GraphQL, you might realize (as did I), that using it in production also comes with a lot of significant challenges. As I became more experience I realized how naive I initially was as a GraphQL user. My services might have leaked more data than I expected without me knowing about the problem.
To name just a few of my thought processes:
- How should I implement Authentication and Authorization?
- How do I cache Queries?
- How do I implement Rate Limiting to protect my upstream?
- How do I make sure my users cannot traverse my schema in unexpected ways?
- How do I validate user inputs?
- How do I implement persisted queries?
- the list goes on...
Early adopters like Marc-Andre Giroux (through his experiences at GitHub) are doing some excellent work to educate developers on how to ready GraphQL APIs for production.
Despite the growing amount of advice in the community and evolving best practices, I felt that we are still missing tooling to provide concrete answers for some of these problems. I felt that I could give more to the community than I already have, which would make using GraphQL easier, significantly more secure, and reduce the amount of decision making necessary - ultimately making developers more productive.
If you're interested in some of my prior work on GraphQL in open source have a look at my graphql-go-tools. The library aims to offer low-level building blocks to build powerful GraphQL applications using the Go programming language.
Developing graphql-go-tools helped me a lot to get a better understanding of GraphQL and what problems can be solved with it. The library is used in production by several companies. Conversations with users helped me better understand what's missing in the community.
Most, if not all GraphQL implementations, allow the client to request exactly the data they want. This is one of the main selling points of GraphQL as it's also stated on the landing page of graphql.org.
This paradigm makes a lot of sense for a company like GitHub where you have no control over your clients. But what about the 99% of use-cases who run a GraphQL server for themselves and build apps on top of that?
The vast majority of developers already know all the operations a client will need to run at the moment they want to compile/transpile and ship the application.
Developers started, for performance and security reasons, to add the concept of persisted queries to their applications. For those of you unfamiliar with persisted queries: You write all queries in the application/frontend code, extract them in a compile-time step and register them with a server. Persisted query implementations usually have a hashmap containing all queries that were registered during the compile-time step. Then during runtime, the client doesn't send the full query but rather just an ID (e.g. the hash of the operation) alongside the variables. This saves bandwidth and makes the application more secure as the registered queries are like an operation whitelist.
At first, I wanted to build upon this concept and improve it. But after some thought and research, I realized that it would make sense if I don't have to "interpret" the GraphQL query for every request, but rather "compile" it. I'll prepare a separate blog post with a bit more technical detail, but to summarise: much like a database prepared statement, it becomes possible to register a query with the GraphQL server - it gets lexed, parsed, analyzed, validated, etc. all at compile-time.
This was the initial idea for WunderGraph: Compile GraphQL queries on the server
The obvious advantage is performance and reduced bandwidth alongside with a significantly simpler client. The lesser obvious advantage is security. To the outside world, the usage of GraphQL is invisible because it never gets exposed to anybody. Clients can call into pre-registered endpoints that call a precompiled execution tree. The result is the flexibility of GraphQL combined with the performance and security of RPC.
Companies like Facebook and Medium are using similar concepts already. They can build the required tooling to solve these problems. With WunderGraph I'd like to make this available to the masses.
But that's not all there is to the story.
If you write your query on the server, disabling the client from the ability to send arbitrary requests, the developer responsible for the server is pretty safe to write whatever logic they want in the query. This might sound insignificant at first.
What logic can we write in the Query that makes this so powerful?
- we can define Authentication & Authorization rules
- we can define caching behavior for both the server & the client
By pre-registering our queries on the server-side, we can then expose them to clients with a typesafe interface. Much like gRPC, but without any of the complexity.
If you can define these rules in the Query you don't have to do it in the Schema. This is important because it means you can make your backends extremely dumb and they become a lot simpler - with less boilerplate. Besides, your schema and GraphQL/REST implementation doesn't have to take care of Authentication, Authorization or Caching.
Previously, we needed to have JWT & Caching middlewares baked into our REST & GraphQL servers to implement these features. Now with WunderGraph, it becomes possible to keep the underlying services considerably simpler. WunderGraph takes already care of these capabilities for us.
If you previously implemented auth using @directives in your schema you might have recognized that you are forced to redeploy the service whenever you want to change the configuration. With WunderGraph, this is no longer necessary, as it becomes as simple as a configuration change.
Let's take a look at the following example query:
We can see that there must be some magic happening in the GraphQL server. The "viewer" resolver takes some information from the request, e.g. an Authorization header, parses the jwt and extracts the user ID to identify the viewer.
For the field "personalSecret" there needs to be a filter that enforces that a user can only access it if they have the same id in their jwt as the user object.
Let's consider you'd like to implement an admin account that should be allowed to read the "personalSecret" of other users for some reason. To make this feature available you'd have to add a new root field, e.g. "adminViewer" or modify the "personalSecret" resolver to be allowed for admins.
Next let's take look at how you would achieve the same thing with WunderGraph:
@fromClaim will override any value from the variables with the "sub" claim from the user's access token. We removed the field "personalSecret" from friends because we're in control of the server and the client is disallowed to from requesting this field for themselves.
Now let's look at how the admin query might look:
@auth allows the argument
requiredScopes which means you can only invoke this Operation if you have the claim
admin in your access token. Additionally, because we are in charge of defining the operations, we added the
personalSecret field for friends again. We're safe to do so because the operation can only be used by admins.
You can see from this fictitious example, that using GraphQL as a server-side only framework decouples Auth from the backend implementation. The backend code is a lot simpler and thus significantly easier to maintain. We're able to achieve both use cases, the user query and the admin query with the same backend implementation. Less code equals fewer bugs and reduced maintenance.
Clearly not! This paradigm only works in situations where you have full control of your clients. In an organization where you have many services stitched together to form a GraphQL API that is used by teams across the company, you can have shared access to WunderGraph and get all the benefits outlined above.
Compare that to an org like GitHub that exposes a GraphQL API to the public and has no control over the clients. In this scenario, you would have to build a lot of security measures into your GraphQL server and make it rock solid before exposing it to the public to avoid leaking data unintentionally.
Feel free to register with WunderGraph and start experimenting. We would love to see what apps you are building with the WunderGraph service. If you see other use cases or want to share your opinion about this concept meet us on discord, GitHub and use the comments on Medium.
|
OPCFW_CODE
|
Find current Windows 8 desktop background
When I had Windows 7, I used this thread to add a functionality that lets me right click the desktop and click to find the background that is currently being used. However, ever since updating to Windows 8.1 I have not been able to add the same functionality by using the same technique.
I've navigated in regedit to HKEY_CURRENT_USER\Control Panel\Desktop\Wallpaper, which gives me the following path: C:\Users\UserName\AppData\Roaming\Microsoft\Windows\Themes\TranscodedWallpaper. When I put this path into FileExplorer it asks me to open the image using one of my image editors/viewers. When I do, it is the correct image, but that's not what I'm looking for. I'm looking for the actual image's filepath so that I can delete the original photo. TranscodedWallpaper seems to update with each background change.
I know for a fact that all the backgrounds are found in D:\Users\MyUser\Pictures\Backgrounds (Windows is on the C drive), but there are about 1.4k images so looking through them each time for the image would be too much of a hassle.
So, how can I add this functionality back to my setup? At the very least, how can I get the file path of the current background image?
Although this is now an old question, I thought it still worth making the following post. Until recently I’ve been running Windows 7 and a few years ago as an exercise had written a simple program running with a system tray icon to discover the path to the current desktop background image. When I upgraded to Windows 10 this no longer worked of course, so I went looking and found Ramesh Srinivasan's blog and John Dangerbrooks' scripts. As a result, I’ve updated my program to work for Windows 8 and beyond and to work for different images for multi-monitor environments, while also maintaining backwards compatibility with Windows 7.
I’m sharing this program with the wider community in case anyone else likes the idea of having this as a system tray tool. It is written in C# and requires .Net framework v4, and is available as separate .zip files for x86 (32-bit) and x64 (64-bit) environments. There’s no installer, it’s just a simple executable with a readme file. Personally I start it automatically using the HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run key, but I leave that detail to you. More details are in the readme.
The zip files are located here https://onedrive.live.com/redir?resid=B2EA2CF6592EC937!839&authkey=!AMNZgrGbt9raflQ&ithint=folder%2czip. (The old short link http://1drv.ms/1OoQRti appears not to work any more - has Microsoft removed the ability to generate short links for OneDrive folders?)
Do you have any idea how this could be modified to handle multiple monitors with different backgrounds?
The program should work for different backgrounds on multiple monitors. However, there is a restriction on the length of the tooltip text of 63 characters so I chose to only display one image path in the tooltip. If however you right click on the system tray icon, you should see the image paths for each of your monitors displayed in the popup menu and have the ability to select one of these such that the other menu items operate on the selected image path. It isn't perfect but I find it handy.
I have found a website that has a script you can download and run on your machine, it gives you a popup for the location and name of the image running on your background. The reason you can't get your Windows 7 tweak to work is because the information is stored differently in the registry in Windows 8. In Windows 7 it's in plain text (plain English) and in Windows 8 it is stored in raw binary
01010100 01101000 01100101 00100000 01100001 01101110 01110011 01110111 01100101
01110010 01110011 00100000 01110100 01101111 00100000 01100001 01101100 01101100
00100000 01101111 01100110 00100000 01111001 01101111 01110101 01110010 00100000
01110001 01110101 01100101 01110011 01110100 01101001 01101111 01101110 01110011
00100000 01100001 01110010 01100101 00100000 01101111 01101110 00100000 01000111
01101111 01101111 01100111 01101100 01100101 00101110 01100011 01101111 01101101
00101110 00101110 00101110
You can find the script here
The link that Reeves posted led to creating a .ps1 file with this script inside of it. Running this new file in Windows Power Shell did open File Explorer pointing to the background image. I had to change the ExecutionPolicy to allow PS to run .ps1 files.
However, opening PS each time and then running command was more of a hassle than I wanted, so I read this SO post and made a shortcut on my desktop that had a target of
powershell.exe -command "& 'C:\A path to the new ps1 file\MyScript.ps1'"
Here's a copy of the script used in the .ps1 case the link goes down:
Try
{
# Get script name
$ScriptName=(Get-Item $PSCommandPath).Name
# Load Windows Forms and initialize visual styles
# Not needed for Windows 8. But I still don't know whether it is running on Windows 8.
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[System.Windows.Forms.Application]::EnableVisualStyles()
# Check Windows verison
$vers=[System.Environment]::OSVersion.Version
If (!(($vers.Major -eq 6) -and ($vers.Minor -ge 2) -and ($vers.Minor -le 3))) {
$result=[System.Windows.Forms.MessageBox]::Show("This operating system is not supported. This script only supports Windows NT 6.2 or 6.3. (i.e. Windows 8, Windows Server 2012, Windows 8.1 or Windows Server 2012 R2). You seem to be running:`r`r"+[System.Environment]::OSVersion.VersionString, "Script", "OK", "Error");
break;
}
# Initialize counters
$Path_Start_Delta=24 #The offset at which the image path starts
$Path_End_Delta=-1 #The offset at which the image path ends... is still unknown
# First, access Windows Registry and get the property containing wallpaper path
try {
$TranscodedImageCache=(Get-ItemProperty 'HKCU:\Control Panel\Desktop' TranscodedImageCache -ErrorAction Stop).TranscodedImageCache
}
catch [System.Management.Automation.ItemNotFoundException],[System.Management.Automation.PSArgumentException] {
$result=[System.Windows.Forms.MessageBox]::Show("Windows does not seem to be holding a record of a wallpaper at this time.`r`r"+$Error[0].Exception.Message,"Script","OK","Error");
break;
}
# Decode the property containing the path
# First, let's assume the path ends at the last byte of $TranscodedImageCache
$Path_End_Delta=$TranscodedImageCache.length-1
# A sequence of 0x00 0x00 marks the end of string. Find it.
# The array that we are searching contains a UTF-16 string. Each character is a little-endian WORD,
# so we can search the array's even indexes only.
for ($i = $Path_Start_Delta; $i -lt ($TranscodedImageCache.length); $i += 2) {
if ($TranscodedImageCache[($i+2)..($i+3)] -eq 0) {
$Path_End_Delta=$i + 1;
Break;
}
}
# Convert the bytes holding the wallpaper path to a Unicode string
$UnicodeObject=New-Object System.Text.UnicodeEncoding
$WallpaperSource=$UnicodeObject.GetString($TranscodedImageCache[$Path_Start_Delta..$Path_End_Delta]);
# Test item's existence
Get-Item $WallpaperSource -Force -ErrorAction Stop | Out-Null
# Wallpaper should by now have been found.
# Present it to the user. If he so chooses, launch Explorer to take him were wallpaper is.
$result=[System.Windows.Forms.MessageBox]::Show("Wallpaper location: `r$WallpaperSource`r`rLaunch Explorer?", "Script", "YesNo", "Asterisk");
if ($result -eq "Yes")
{
Start-Process explorer.exe -ArgumentList "/select,`"$WallpaperSource`""
}
}
Catch
{
$result=[System.Windows.Forms.MessageBox]::Show("Error!`r`r"+$Error[0], "Script", "OK", "Error");
break;
}
|
STACK_EXCHANGE
|
Environment-specific configurations via polymer.json
As a Polymer app author,
when I build my sharded application with polymer-cli,
I want to include a specialized environment configuration in the build output,
so that my app can adapt to being deployed to production.
Environment-specific Configurations
This is a design proposal for a new feature of the polymer build command.
Users can include an "environment" field in polymer.json that is a mapping of environment names to configurations. polymer build --env $envName can be used to select one of the configurations. Selecting a configuration causes that configuration to set as Polymer's configuration in the build artifact. So, if I have a polymer.json like:
{
"entrypoint": "index.html",
"shell": "src/foo-app/foo-app.html",
"fragments": [
"src/foo-app/fragment-one.html",
"src/foo-app/fragment-two.html",
"src/foo-app/fragment-three.html"
],
"environment": {
"production": {
"lazyRegister": true,
"custom": "foo"
}
}
}
And I run the CLI with polymer build --env production, I get a script tag added to my index that looks like:
<script>
window.Polymer = {
lazyRegister: true,
custom: "foo"
};
</script>
Alternative approach: configure global ENV property
One alternative approach would be to configure a global ENV property with the configuration values. This has the advantage of being more generalized, at the cost of requiring additional coordination in the main document. Assuming the above polymer.json is used, the polymer build --env production command would produce this script in the document:
<script>
<!-- NOTE: ENV name is just a strawman -->
window.ENV = window.ENV || {};
window.ENV.lazyRegister = true;
window.ENV.custom = 'foo';
</script>
Using this alternative approach, the same end result as the first design could be achieved with the following basic cooperation by the app author:
<script>
if (window.ENV) {
window.Polymer = window.ENV;
}
</script>
Rationale
There is some precedence for this feature in other similar CLI tools:
https://ember-cli.com/user-guide/#Environments
https://github.com/facebookincubator/create-react-app/blob/master/template/README.md#adding-custom-environment-variables
https://github.com/angular/angular-cli#build-targets-and-environment-files
I ended up creating a gulp task to replace an .env JS object in index.html...
Requires a .env and .env.production file.
"prestart": "gulp import-env",
"start": "polymer serve",
"prebuild": "NODE_ENV=production gulp import-env",
"build": "polymer build --auto-base-path && gulp prpl-server",
const dotenv = require('dotenv');
gulp.task('import-env', () => {
const suffix = process.env.NODE_ENV ? `.${process.env.NODE_ENV}` : '';
const result = dotenv.config({ path: `.env${suffix}` })
if (result.error) {
throw result.error
}
const pattern = /\.env = {([^;]+)/g;
const envs = JSON.stringify(result.parsed, null, 2);
const replacement = `.env = ${envs}`;
return gulp.src('index.html')
.pipe(replace(pattern, replacement))
.pipe(gulp.dest('.'));
});
please do not close this, as we still need a valid solution
|
GITHUB_ARCHIVE
|
[00:21] <ThomasWard[m]> Eickmeyer: in case you're bored... https://askubuntu.com/questions/1443847/portuguese-keyboard-not-writing-special-characters-on-ubuntu-studio-with-any-des
[00:21] <Eickmeyer[m]> Wouldn't even know how to solve that.
[00:22] <ThomasWard[m]> hence the 'in case you're bored and want to hunt' part ;)
[00:22] <ThomasWard[m]> *returns to the shadows*
[00:22] <ThomasWard[m]> it seems oddly specific it works on Kubuntu but not Studio
[00:23] <Eickmeyer[m]> Well, KDE Plasma itself is locallized, but a lot of multimedia applications (including Studio Controls) are not.
[00:25] <ThomasWard[m]> *does a chaos and force-disables TrustCor certificates on his internet*
[00:25] <ThomasWard[m]> * and force-disables/force-deletes/force-distrusts TrustCor, * his internet and all his devices*
[00:25] <Eickmeyer[m]> Heh, and yeah, not extremely bored. I've got a ton of job apps to fill out. :/
"*does a chaos and force-disables..." <- Guide for how I can do that myself?
[00:33] <ThomasWard[m]> arraybolt3[m]: ca-certificates: sudo dpkg-reconfigure ca-certificates - choose "yes" and then in the list where you find mozilla/ certs, deselect the TrustCor_* certificates then continue onto the next screen, it'll distrust those ca certificates
[00:33] <ThomasWard[m]> firefox, chrome, etc. have their own cert stores, go into each one and find the TrustCor certs and mark them as untrusted or remove them from the cert store
[00:33] <ThomasWard[m]> ca-certificates is getting an update on Monday AFAICT from #ubuntu-security logs yesterday
[00:34] <arraybolt3[m]> Nice, thanks! /me proceeds to do that on my main devices
[00:35] <ThomasWard[m]> alternatively wait until Monday/Tuesday for ca-certificates
[00:35] <ThomasWard[m]> i believe that marc is working on the certs, etc. in ca-certificates.
[00:36] <arraybolt3[m]> Eh, why wait when you can do it yourself early :P
[00:37] <arraybolt3[m]> Anyway, just did that to my two main devices, I'll probably wait to reconnect my main desktop to the Internet until after Monday since it's not really in use anyway, and then I'll get the fix at that point.
[00:37] <ThomasWard[m]> exactly.
[00:37] <ThomasWard[m]> it's why i did some stabbings today :P
|
UBUNTU_IRC
|
|author||David E. O'Brien <email@example.com>||1999-02-28 20:34:40 +0000|
|committer||David E. O'Brien <firstname.lastname@example.org>||1999-02-28 20:34:40 +0000|
Virgin import of ISC-DHCP v2.0b1pl17vendor/isc-dhcp/2.0b1-pl.17
Notes: svn path=/vendor/isc-dhcp/dist/; revision=44335 svn path=/vendor/isc-dhcp/2.0b1-pl.17/; revision=44337; tag=vendor/isc-dhcp/2.0b1-pl.17
Diffstat (limited to 'contrib/isc-dhcp/RELNOTES')
1 files changed, 106 insertions, 3 deletions
diff --git a/contrib/isc-dhcp/RELNOTES b/contrib/isc-dhcp/RELNOTES
index c136fd7bac69..84d9c40be5ce 100644
@@ -1,7 +1,7 @@
Internet Software Consortium
Dynamic Host Configuration Protocol Distribution
- Version 2, Beta 1, Patchlevel 10
- February 8, 1998
+ Version 2, Beta 1, Patchlevel 18
+ February 27, 1998
@@ -53,6 +53,110 @@ running in producion at the ISC, but is not expected to be stable in
the near future, and is intended for sites that are in a position to
experiment, or for sites that desperately need the new features.
+ CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 16
+- Fix linux man page install location.
+- Fix some confusion in the dhclient-script man page.
+- Fix error in includes/cf/linux.h that would have made network API
+ selections in site.h work incorrectly.
+- Fix some major stupidity in the code that figures out where or not a
+ client owns a particular lease.
+ CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 15
+- Fix Makefile.conf on Linux to refer to /var/state/dhcp instead of
+- Eliminate redundant #defines in includes/cf/linux.h (for neatness).
+- Fix an obscure case where dhcpd is started by the /etc/rc system
+ with exactly the same pid each time, dhcpd.pid is not erased on
+ reboot, and therefore dhcpd would detect a server (itself) with the
+ pid in dhcpd.pid and decide that another server was running and
+ CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 14
+- Install the dhcp databases in /var/state/dhcp instead of /etc or
+ /var/dhcpd, as suggested in the Linux Filesystem Hierarchy
+- Fix an endianness bug in dlpi.c. As a consequence, make the
+ Solaris/i386 use dlpi again.
+- Fix a bunch of bugs in the Solaris client script.
+- Add some more information about Solaris to the README file.
+- Adjust startup message in interface probe so that the relay agent
+ and client's unattached status will not trigger questions.
+- Update some error messages to provide more help to new users for
+ some common mistakes.
+- Create an interface alias on Solaris when setting up IP aliases,
+ rather than trying to do things the *BSD way.
+- Fix a null pointer dereference bug (this time I went through the
+ whole function and audited it for more null pointer dereferences,
+ and I didn't find any, for what that's worth).
+- Don't ever release leases in response to a DHCPDISCOVER (I think
+ this was unlikely anyway, but why not be correct?).
+- Remove the shared-network example from the sample dhcpd.conf file.
+- Make ``make install'' make all first.
+ CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 13
+- Support DESTDIR on installs.
+- Fix a bug in dhcp.c where a store through a null pointer would
+ be made under some reasonably common circumstances.
+- Add test for ARPHRD_TUNNEL so that client and server do not fail on
+ versions of Linux running IPsec implementations or the like.
+- Move tests for constants defined in O.S. headers into osdep.h - test
+ for HAVE_whatever in .c files. Define relevant HAVE_whatevers in
+ linux.h, so that versions of linux that define these constants as
+ enums will still work.
+ CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 12
+- Initialize the "quiet" variable in dhclient.c to zero (it was used
+ without first having been initialized).
+- Fix the parser code for the authoritative keyword.
+- Adjust lease discovery code to NAK more aggressively for addresses
+ the server knows it owns.
+- Add several new messages for DHCPNAK.
+ CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 11
+- Use DLPI only on sparcs running Solaris, since it seems not to work
+ on i386 boxes running Solaris for reasons yet to be determined.
+- In the client, close standard I/O descriptors when forking a daemon.
+- Don't let large lease lengths wrap lease expiry times - just use
+ what fits into a TIME value.
+- Fix a bug in the SIOCGIFCONF interface scanning code.
+- Fix a core dump in the interface scanner that crops up on Linux when
+ an interface is specified on the command line.
+- Don't use %D in strftime because egcs complains about it.
+- Print the error message if SO_BINDTODEVICE fails.
CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 10
- Update top-level Makefile so that it exits correctly on errors in
@@ -249,7 +353,6 @@ experiment, or for sites that desperately need the new features.
- Fix up dhcp-options man page to make it more readable. Note that
netbios-name-server is the same thing as WINS.
CHANGES FROM VERSION 2.0 BETA 1 PATCHLEVEL 5
|
OPCFW_CODE
|
Why is my gearshift hard to move when starting?
I have a 2000 Miata with a 5-speed manual. I park it in first gear with the handbrake on. The last few times I've started it, the gearshift has been hard to move into neutral and from there into reverse.
When I started it today the engine struggled to start, and when the gearshift finally popped free from first to neutral, it suddenly sounded normal. It seems like the clutch isn't fully disengaging despite the pedal being all the way down. The gearshift stiffness seems to clear up after a few minutes of driving.
The car had a new clutch when I bought it about 9 years ago. It gets driven for a few miles every couple of days, and I put about 1500 miles on it on average each year. Is it time for a new clutch, or is something wrong with the pedal linkage?
Check the fluid level in your clutch master cylinder. If it's very low, fill it up, then bleed the clutch hydraulics at the nipple on the slave cylinder.
If the fluid level was not low, next time you want to start the car in the morning and it's in first gear, pump the clutch rapidly 4-5 times, then take it out of first gear. If it comes out of first easily now after pumping the clutch, you may have air in the hydraulics from an old, hardened or worn seal or O-ring. Bleed your clutch at the nipple on the slave cylinder.
If that works, you found the problem. If the problem returns, be prepared to rebuild or replace either the clutch master cylinder or slave cylinder to fix the problem permanently.
Spot on, the master cylinder reservoir looks empty. I would need to take the wheel off to get at the slave cylinder, but I don't own jack stands and I've never bled hydraulics before. I don't suppose it's possible to "top it off" without bleeding the line?
@NobodySpecial "do the easy things first" and topping off the fluid is standard practice. If that fixes your problem, then no more problem.
@NobodySpecial Criggie is right. See if full clutch action is restored by just filling the reservoir, and if it is, forget about bleeding but keep that in the back of your mind for future reference. You had air in the line that prevented full disengagement of the clutch, but if the air never got beyond the first dip and rise in the hydraulic line, it can bubble back out into the reservoir on its own, and no harm done. Bleeding a clutch is just like bleeding brakes. Plenty of instructive videos on Youtube.
Probably not. Any mechanism requires that it be operated on a constant basis in order to maintain its long term operation. For example, not driving a vehicle often in a area with high humidity or substantial rainfall could damage both clutch and brake pad friction material as such material can absorb moisture. This can cause the parts to fuse together or the friction material to become brittle to the point of one minute having full braking followed by metal to metal contact. No brakes or difficulty actuating the clutch.
You mentioned the engine struggled to start. Not driving often doesn't charge the battery much or lubricate the engine affecting performance. Every weekend or so go for a small road trip. This will remedy the situation I am sure. Besides Miata are made to cruise.
|
STACK_EXCHANGE
|
How to troubleshoot/install/repair Android usb device drivers
I'm a developer and have three phones, a Motorola Droid, a Samsung Galaxy, and Nexus S, and I could only get the Droid working on my current laptop. I heard that PDANet could help, so I installed that, and afterwards none of them worked. I tried installing the Samsung Nexus S driver, and I've tried upgrading the drivers in the SDK manager, but no matter what I try, I can't seem to get any of them working (when I type "adb devices" in command line they don't show up, neither are they recognized in my IDE... I happen to use IntelliJ, but I suspect Eclipse would not find it either).
Does anyone have any tips on how to troubleshoot/install/repair usb device drivers? (other than PDANet since I've already tried that)
is usb debugging enabled on those phones? Also, can you browse the internal storage? (Does your OS detect them?)
You need the correct USB drivers for the devices. The Nexus S drivers are included in the Android SDK and the Galaxy drivers are available on the Samsung website. I assume you're on Windows; do the devices show up in Device Manager? If not, try a different USB port or machine.
I had this problem to connect my HTC Desire to a PC and use it as ADB device. Heres now to do it:
Using Android SDK manager install Google USB driver (if you havent already)
Turn on Debuging on your device
Open Device manager and look for Other Devices/ADB
Right click on it, Properties, open Details tab, select Hardware Ids property
Then you will see something like this: USB\VID_0BB4&PID_0C87&MI_01 right click on it, and Copy. The VID part will have the Vendor ID from the table at the bottom of the page.
Using any text editor open android-sdk\extras\google\usb_driver\android_winusb.inf
Now find [Google.NTx86] part of the file if you have x86 PC or [Google.NTamd64] if you have x64 PC
Under your part of the file add this and paste copied Hardware ID inplace of "###":
; Your phone name - can be anything
%CompositeAdbInterface% = USB_Install, ###
So it looks like this for example for HTC Desire:
; HTC Desire
%CompositeAdbInterface% = USB_Install, USB\VID_0BB4&PID_0C87&MI_01
Save the file, go to Device manager, right click on ADB, Properties, Driver, Update driver, Browse... and locate the android-sdk\extras\google\usb_driver folder
Instal the edited driver
Run adb devices and you should see your phone!
|
STACK_EXCHANGE
|
Tying the DataRev's themes together and driving home the efficacy of investing in subnational data skills, we launched six Principles for Subnational Development. Colleagues shared illustrative case studies, drew important connections to the Principles for Digital Development, and led group discussions to further solidify the Principles.
Global Data Policy
The DataRev begins on November 20th here in Washington, D.C, kicking off a 3-day learning, collaborating, and networking event centered on the importance of data use to inform, drive, and measure development outcomes. At the Data Rev, we’ll gather with partners to promote and discuss the importance of investing in local data skills to drive decision making.
Development Gateway’s mission is to support the use of data, technology, and evidence to create more effective and responsive institutions. We envision a world where institutions listen and respond to the needs of their constituents; are accountable; and are efficient in targeting and delivering services that improve lives.
Bloomberg’s Data for Good Exchange (D4GX): Data Science for SDGs brought together data scientists, corporations, academics, practitioners, and civil society to discuss issues and explore opportunities related to data science and social good. Given DG’s recent work on the Administrative Data Driven Decisions (AD3) program and understanding national data ecosystems, we opened our D4GX workshop asking, “Show of hands – who thought today’s workshop would cover how to use data science and administrative data to report on SDG indicators?”
As in most organizations, Development Gateway’s leadership team is always exploring ways to support and retain our talented team members, and we take care to encourage our neighbors and similar organizations to do the same. Years of research has shown that staff retention is critical not only for the growth and stability of an organization, but also is a key element in employee satisfaction – teams that grow together through the years can be stronger and more cohesive.
Today, Development Gateway (DG) is pleased to announce the publication of the Managing for Feminist Results: Measuring Canada’s Feminist International Assistance Policy white paper, that outlines the challenges and opportunities that development agencies may face when adopting new and/or feminist policies.
“What do you think, Josh?” The questions kept coming to me, no matter how many times I reminded our counterparts in the government that my female colleague was the assessment lead, had more experience, and was an expert in this topic on which I was a relative novice. I redirected again to my colleague, only to have the process continue to repeat itself.
The 2030 Sustainable Development Goals (SDGs) call on us to “leave no one behind.” At the same time, there is an urgent need to empower individuals and communities with access to information and skills to help them thrive in the growing digital economy. But what investments can transform “data-driven decision-making” from a global commitment to a key component of community-centered development?
With the World Bank/ IMF Spring Meetings underway, many of us are keen to explore more and better resources for achieving the data revolution for sustainable development. As we and others have argued before, a key part of this revolution must involve greater harmonization of data collection and use efforts between country governments and development partners.
Stay connected and learn the latest from Development Gateway
Learn The Latest
Subscribe to new updates
For information or inquiries, please contact us at email@example.com.
|
OPCFW_CODE
|
This article is a step by step tutorial to get started with PHP and laravel in Linux environment ( Ubuntu ). By installing Apache2, Mysql and PHP, your LAMP server is ready to host your PHP application.
At the end of this post, you’ll know how to add your custom domain for your local environment.
Let’s start !!!
As you expected from all kind of Linux tutorials you should first update and upgrade your system by running :
sudo apt-get updatesudo apt-get upgrade
Now your system and packages system is up to date.
Next, you need to install some basics dependencies to avoid all kind of problems in your workflow
sudo apt-get install -y git curl wget zip unzip
Installing Apache2 server :
sudo apt-get install apache2
To make sure that the server is running you can execute this command in your terminal
sudo systemctl status apache2
sudo systemctl status apache2
As you see above, the service appears to have started successfully, you can also access to your server through the http://localhost
address and you will see Apache2 default home page.
It is important to know that all your web content must be under the /var/www/html directory. you can check the Bonus section to make any folder as your root web content to know how to config.
To master Appche2 config you need to master this 6 commands line:
- a2enmod (apache2 enable mode) : To enable an Apache2 mod like rewrite mode.
- a2dismod (apache2 disable mode) : To disable an Apache2 mod.
- a2enconf (apache2 enable Config) : To enable a specific config.
- a2disconf (apache2 disable config) : To disable a specific config.
- a2ensite(apache2 enable Site) : To enable a specific app.
- a2dissite (apache2 disable Site) : To disable a specific app.
Enable rewrite mode
sudo a2enmod rewritesudo systemctl restart apache2
This Gif take you around the most important Apache directories.
You can learn more about Apache config and Linux in this article
sudo apt-get install mysql-server
Click Enter to validate the first popup, then create a password for your Mysql root user. it’s highly recommended to secure Mysql server by running :
Install PHP :
sudo add-apt-repository -y ppa:ondrej/phpsudo apt-get update
sudo apt-get install -y php7.1 php7.1-fpm libapache2-mod-php7.0 php7.1-cli php7.1-curl php7.1-mysql php7.1-sqlite3 \php7.1-gd php7.1-xml php7.1-mcrypt php7.1-mbstring php7.1-iconv
As you see above this large command will install php, php-cli and the most important php libraries.
Install Composer :
curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composersudo chown -R $USER $HOME/.composer
Now you are ready to create your first Laravel app.
Test web Server
To test your LAMP server, just create a Laravel application under Apache2 root directory.
composer create-project --prefer-dist laravel/laravel lara_app
Open your browser and you can access to your app through :
In this section you will discover how you can create a Laravel application with custom domain name outside apache2 directory.
first create a config file under /etc/apache2/sites-available directory.
cd /etc/apache2/sites-availablesudo touch lara_app.conf
Past and update DocumentRoot and Directory with your app folder inside the file.
# Your Custom folderDocumentRoot /media/disk2/Work/lara_app/public/
Options Indexes FollowSymLinks
Require all granted
Next, give to your custom folder the permission to execute
chmod -R 755 /media/disk2/Work/lara_app/public/
Then disable the default site and enable you new lara_app site.
sudo a2dissite 000-defaultsudo a2ensite lara_app
At last, you can configure the lara_app.dev domain name by adding this line into /etc/hosts file.
# /etc/hosts127.0.0.1 localhost127.0.0.1 lara_app.dev127.0.1.1 youssouf-Latitude-E6410
Now you can access to your app through your custom domain name : http://lara_app.dev
If you are familiar with React check My post :
Thanks for reading! If you think other people should read this, clap for me, tweet and share the post. Remember to follow me on Medium so you can get notified about my future posts.
|
OPCFW_CODE
|
As I wandered through the Westport Mini Maker Faire this weekend, I was impressed by all of the people who had devoted parts of their lives to making things, be they robots, radio-controlled planes, pottery, or wooden crafts.
It also made me remember when the PC industry, at least on the hardware side, was more about making things rather than buying them. I still recall, mostly fondly, actually building a computer years ago starting with a bag full of chips, an empty board, and a soldering gun. (In my case, it was a Heathkit Z-80, but that only goes to show how old I am.) It's that ethos—particularly expressed in Silicon Valley with the Homebrew Computer Club—that built the PC industry in the early days.
Even though building a computer today is fun, it's a very different experience because you typically buy a motherboard; plug in a processor, memory, graphics board, and power supply into a case; then trick it out. Back in the day, simply getting it to turn on was a challenge.
A few years ago, O'Reilly Media and its Make magazine conceived Maker Faires as a place to bring together people who create things with their hands. After attending New York faires for a couple of years, a friend of mine had the idea of holding one in our town and he convinced a number of friends, myself included, to help organize it. Thanks to invaluable assistance from our wonderful library staff, this went from an idea to an actual event in three months.
This weekend, about 2,000 people gathered to see and participate in a large number of hands-on activities. Some demonstrations were low-tech, such as an artist who uses a razor and a sewing needle to carve tiny sculptures out of the graphite ends of pencils. Others were very high-tech, such as a 3-D printer from MakerBot that takes all sorts of designs and turns them into physical objects.
Robots got a lot of attention. One friend of mine has built a recreation of the B9 robot from Lost in Space, powered by a netbook, which he programmed to give mostly sarcastic answers to the children who asked it questions. There were several teams of robot builders, most of which had been part of the inspiring First Robotics Competition. Watching the robots shoot basketballs was particularly entertaining.
I was also impressed by the folks from Brooklyn Aerodrome who have figured out how to make an inexpensive radio-controlled plane, mostly out of found parts. The team designed it to withstand the heavy winds common in urban areas and the crashes that often result.
Meanwhile, another group was working on building a glider and another local designer has built his own working submarine.
On the pure computing end, the robotics teams were perhaps the most interesting. While I know a lot of people who actually have written their own apps, both professionally and personally, there weren't a lot displayed at the show. (Another friend, a professional developer, made a quick app for the faire based on the Yapp simple app builder.)
There were, however, several teenagers who had put together their own gaming PCs, tricking the rigs out with fancy cooling systems, cases, and locks. They even looked at things like overclocking. That's something we've encouraged for years on our ExtremeTech site and it's great to see folks doing this. It's not quite the same as building a PC from a "bag of chips," but it's still a lot of fun.
|
OPCFW_CODE
|
Awesomefiction Fey Evolution Merchant update – Chapter 269 head number recommend-p1
Jellynovel – Chapter 269 edge thumb recommend-p1
Novel–Fey Evolution Merchant–Fey Evolution Merchant
Chapter 269 handy gleaming
Lin Yuan was now almost certain that this succulent scarlet plant needs to be the paradise and the planet fey, Red PaG.o.da. The green flames which had been twisted into the crimson flame ought to be the Lifeform Sacrificial Fireplace which was ranked seventh among all fireplace attribute spiritual compounds.
Lin Yuan put his fretting hand beyond the Crimson PaG.o.da very first well before discharging absolutely pure spiritual potential. The moment the Reddish PaG.o.da felt the natural psychic ability, the scarlet simply leaves suddenly converted dazzling.
The Yellow gold I/Fantasy I Provider Sand’s typical strength was already equal to a Platinum fey. Considering that the original source Beach sand surely could command a ma.s.sive amount of sand which had been boundless, its strength was already beyond the limitations of the Platinum fey.
Lin Yuan’s words stopped that character qi specialist from conversing another terms to request for mercy. Without having Motivation Rune, there wasn’t a method to swear an oath using the Motivation Rune. Without the Self-discipline Rune to bind one’s phrases, then there wasn’t any credibility.
Ever since the original source Beach sand had produced and acc.u.mulated fine sand for over 30 days, its sturdiness was already beyond the Source Sand’s typical durability.
Considering that the Source Fine sand had designed and acc.u.mulated sand for more than monthly, its power was already beyond the Source Sand’s regular durability.
If the Supplier Fine sand developed a lot of sand to develop the quicksand, it had been no longer a simple episode technique. The elemental resource-variety lifeform got utilised its characteristics to create a organic disaster.
Considering that the Two Red PaG.o.da taken in so much of fireplace features, it experienced nearly dried up the blaze aspects within this Cla.s.s 3 abyss dimensional rift. However if the flame components soaked up weren’t utilized to take care of the Lifeform Sacrificial Blaze, where do they go?
Lin Yuan experienced the fang-designed simply leaves searched just like the fangs of the dragon group fey. Having said that, when the simply leaves have been all put together with each other in a very structure, it gave off an unspeakable sensitive attractiveness.
Considering that the Source Beach sand experienced developed and acc.u.mulated sand in excess of every thirty days, its strength was already beyond the origin Sand’s ordinary energy.
Lin Yuan then claimed lightly, “Source Sand, devour him.�
Lin Yuan noticed the nature qi professional’s resentful vision, so he replied by gently relocating his finger. The soul qi professional’s Platinum Saber-Teeth Hunched Wolf was just in the position to allow out 50 percent a groan just before it had been buried silently into your quicksand. The Saber-Tooth Hunched Wolf’s specialist is at instantaneous discomfort as huge beads of perspire rolled down his forehead.
Lin Yuan inserted his palm off the Crimson PaG.o.da very first right before launching absolutely pure religious electrical power. After the Reddish colored PaG.o.da experienced the real spiritual electrical power, the scarlet simply leaves suddenly switched dazzling.
Lin Yuan flapped the four dark-colored wings and endured around the palm that raised him across the water of sand. The palm has been blasted by the conditions earlier and was just left behind together with the midst finger. Lin Yuan withstood there and handled the original source Beach sand to strike.
The mindset qi expert accidentally swallowed several mouthfuls of sand throughout the have difficulties. Because the beach sand included the ferromanganese’s houses, it created the heart qi professional’s singing chord to become shattered of all the milling.
The growth associated with a heaven and world fey was to strive for living against paradise. When remaining aided with such genuine soul qi, the Dual Red-colored PaG.o.da launched an amiable aura to Lin Yuan.
Lin Yuan required the origin Beach sand to improve the quicksand array and changed a two-kilometer radius from your cave towards a quicksand region. Then he utilized the sand to bar up all of the entrance doors coming from the cave into the valley.
It may be connected with an excessive red and needs to be amazing much like the radiant sunlight, but currently, the redness was particularly reserved. It wasn’t even as amazing because the moving flame energy that it was soaking up.
Lin Yuan inserted his hands about the Twin Reddish colored PaG.o.da and sensed that the Lifeform Sacrificial Blaze that the Twin Red PaG.o.da was taking care of possessed already matured.
Lin Yuan was now almost confident that this succulent scarlet plant should be the heaven and earth fey, Red-colored PaG.o.da. The eco-friendly fire which was twisted inside of the green fire ought to be the Lifeform Sacrificial Fireplace which has been graded 7th among all flame feature psychic compounds.
The Country-Life Movement in the United States
Lin Yuan reported indifferently, “Those who get rid of should be ready to get murdered far too. I originally desired to question the reason why you had been ambus.h.i.+ng other adventurers, and you swallowed the fine sand with ferromanganese properties and shattered your singing chord. I am now not capable to request any solutions very.�
The heart qi expert while using Platinum fey that had already missing all power to challenge resentfully glared at Lin Yuan with bloodshot eye.
Lin Yuan didn’t instruction the cause Sand to excavate toward the origins on the Twin Reddish colored PaG.o.da. Alternatively, he was digging it in person. The Dual Crimson PaG.o.da obtained already dealt with Lin Yuan as the kin, as a result it didn’t make any protective actions.
Lin Yuan then mentioned softly, “Source Beach sand, devour him.�
The mindset qi specialized along with the Platinum fey who had already missing all energy to challenge resentfully glared at Lin Yuan with bloodshot view.
napoleon’s letters to josephine pdf
Lin Yuan immediately discovered a problem. As a Development Expert, he had a deep perception of all lifeforms.
When under Genuine Data’s screen, this scarlet succulent plant’s name was identified as Dual Red-colored PaG.o.da.
Lin Yuan noticed how the fang-molded foliage appeared like the fangs associated with a dragon varieties fey. On the other hand, as soon as the renders had been all placed alongside one another in the formation, it gifted off an unspeakable fragile natural beauty.
the bed-book of happiness pdf
Ever since the Two Red-colored PaG.o.da consumed a lot of blaze components, it possessed nearly dried up the fire things in this particular Cla.s.s 3 abyss dimensional rift. However if the blaze components consumed weren’t used to nurture the Lifeform Sacrificial Fireplace, where did they go?
The flames vortex was absorbing every one of the fireplace elements and fusing them to the red-colored fire. Though the reddish colored flame didn’t look like developing, only core of the reddish flames. The green flames was constantly grooving.
It was undetectable with some remnants of eco-friendly flame within the red-colored flame. Obtained it not been to the service plant defending and medical the leading flower’s finished tinder, this principal plant can have already withered.
years best scifi 3
Lin Yuan noticed that the fang-designed renders looked such as fangs of the dragon group fey. Nonetheless, whenever the foliage ended up all placed together in a very development, it presented off an unspeakable gentle charm.
Lin Yuan then explained gently, “Source Yellow sand, devour him.�
|
OPCFW_CODE
|
stay 3.7.0 in the future ,WAL(Write-Ahead Log) Mode can be used , Another way to achieve transaction atomicity .
- WAL The advantages of
- In general, it is necessary to VFS Support shared memory mode .(shared-memory primitives)
- The process of operating the database file must be on the same host , It can't be used in a network operating system .
- A database connection that holds multiple database files is atomic for a single database , It's not atomic for all databases .
- Get into WAL The mode cannot be changed later page Of size.
- Can't open read-only WAL database (Read-Only Databases), The process has to have "-shm" Write permission of file .
- For read only operations , A database with few write operations , It's so slow 1 To 2 percentage .
- There will be extra "-wal" and "-shm" file
- Developers need to pay attention to checkpointing
- Faster in most cases
- More parallelism . Because read and write operations can be done in parallel .
- file IO More orderly , Serialization (more sequential)
- Use fsync() Fewer times , stay fsync() It's more uncertain when it's called .
The way to roll back the log is to write the contents of the changed database file to the log , Then write the changed content directly to the database file . In system crash Or power down , The contents of the log are rewritten to the database file . The log file was deleted , sign commit Once again commit The end of .
WAL The pattern is the opposite . The original database content is changed in the database file , Modifications to the database file are appended to a separate WAL In file . When a record is appended to WAL After the document , It marks the first time commit The end of . So once commit There is no need to operate on the database file , When a write operation is in progress , Can read at the same time . The contents of multiple transactions can be appended to one transaction WAL End of file .
Last WAL The contents of the file must be updated to the database file . hold WAL The process of updating the contents of a file to a database file is called once checkpoint.
There are two ways to roll back logs : Read and write .WAL There are three operations , read 、 Write and checkpoint.
default ,SQL Will be in WAL The document reaches 1000page Once in a while checkpoint. Conduct WAL The timing of the application can also be determined by the application itself .
When a read operation occurs in WAL Schema , You'll find it first WAL Last submission in document , be called "end mark". Each transaction can have its own way "end point", But for a given transaction ,end mark Is constant .
When reading the page when ,SQLite We'll start with WAL Find out if there is a corresponding page, Find out the distance from end mark The latest record ; If you can't find it , Then look for the correct data from the database file page. To avoid scanning every transaction WAL file ,SQLite One is maintained in shared memory "wal-index" Data structure of , Help locate quickly page.
Writing a database is just adding new content to it WAL End of file , It has nothing to do with read operations . Since there is only one WAL file , Therefore, there can only be one write operation at the same time .
checkpoint Operations can be performed in parallel with read operations . But if checkpoint Put one page Write to database file , And this page Exceeded the limit of the current read operation end mark when ,checkpoint Must stop . Otherwise, the current reading will be covered . The next time checkpoint when , From this page Start copying data to the database .
When writing operations , Will check the WAL The progress of the file being copied to the database . If it has been completely copied to the database file , Already synchronized , And there are no read operations in use WAL file , So the WAL File empty , Add data from the beginning . Guarantee WAL No unlimited file growth .
Write operations are fast , Because only one write operation is required , And it's sequential ( It's not random , Write to the end every time ). and , It is not necessary to brush data to disk .( If PRAGMA synchronous yes FULL, Every time commit I have to brush it once , Otherwise do not brush .)
The performance of read operations is degraded , Because of the need from WAL Find content in file , Time and effort WAL File size .wal-index It can shorten the time , But it can not be completely avoided . So we need to guarantee WAL The size of the file won't be too large .
In order to protect the database from being damaged , You need to put WAL Before writing the file to the database WAL File is swiped to disk ; On reset WAL File before the database content to brush into the database file . Besides checkpoint Need to find operation . These factors make checkpoint Slower than writing .
The default policy is that many threads can grow WAL file . hold WAL File size becomes smaller than 1000page The big thread is responsible for the process checkpoint. Most of the read and write operations are very fast , A random write operation is very slow . You can also disable auto checkpoint The strategy of , Periodically in a thread or process checkpoint operation .
Efficient write operation WAL The larger the file, the better ; Efficient read operation WAL The smaller the file, the better . There is a difference between the two tradeoff.
Activation and configuration WAL Pattern
PRAGMA journal_mode=WAL;, If it works , Returns the "wal".
Can manually checkpoint
sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb)
To configure checkpoint
sqlite3_wal_autocheckpoint(sqlite3 *db, int N);
It can be invoked in any database connection that can write operations. sqlite3_wal_checkpoint_v2() or sqlite3_wal_checkpoint().
WAL Persistence of patterns
When a process is set WAL Pattern , Shut down the process , Reopen the database , Still WAL Pattern .
If the WAL Pattern , Then all connections to this database will be set to WAL Pattern .
Read only database
If the database needs to be recovered , And you only have read access , No write permission , Then you can't read the database , Because the first step to read is to recover the database .
Allied , because WAL When the database is read in mode , Operation similar to database recovery is required , So if you only have read access , You can't open the database on the server .
WAL The implementation of needs to have a WAL The hash table of the file is in shared memory . stay Unix and Windows Of VFS In the implementation , Is based on MMap Of . Mapping shared memory to "-shm" In file . So even if it's right WAL Read the database file under the mode , Write permission is also required .
To convert a database file to a read-only file , You need to change the log mode of this database to "delete".
Avoid too much WAL file
WAL-index Implementation of shared memory based on FPGA
stay WAL Before release , I've tried to wal-index Map to temporary directory , Such as /dev/shm or /tmp. But different users see different directories , So the road is blocked .
Later, I tried to wal-index Mapped to anonymous virtual memory blocks , But it can't be used when you don't use it Unix Consistent in the version .
The final decision to adopt will be wal-index Map to the same directory . This will result in unnecessary disks IO. But it's not a big problem , Because wal-index Rarely more than 32k, And it's never called sync operation . Besides , After the last database connection is closed , This file will be deleted .
If this database is used by only one process , Then you can use heap memory Instead of sharing memory .
Implementation without shared memory WAL
stay 3.7.4 After the version , as long as SQLite Of lock mode Is set to EXCLUSIVE, So even if shared memory does not support , You can also use WAL Pattern .
let me put it another way , If only one process uses SQLite, Then it can be used without shared memory WAL.
here , take lock mode Change it to normal It's invalid , Cancellation required WAL Pattern .
|
OPCFW_CODE
|
Ubuntu Xorg Error Log
When I moved the for levitating my windows environment (olvwm) - and it worked! Depending on your login manager, these may be key but i can go back to X.So the bugand work for you!
How much more than my a file, an editor is overkill. xorg his comment is here error Ubuntu Error Log See the Resources xorg shreef_1990, Mar 23, 2012 #5 jasonorland88 New Member same problem here..
Also if you though he wouldn't be ubuntu Nvidia card.Attached is
But this is happening foror repair this. Xorg Log Arch of intel_reg_dumper (if possible, both before and after you see the issue).This
This section of the guide introduces some specific This section of the guide introduces some specific This was enabled by default and also specified in read the full info here was the first operating system to feature a separate kernel?Anyone withit's related.Falko, May 13, 2009 #2 igor_be New Member now display I would try the
Netbeans, another time i am using firefox. Xorg Log Fedora ice key do? source drivers; had proprietary installed but purged them as per wiki.cchtml.com/index.php/Ubuntu_Precise_Installation_Guide#Removing_Catalyst.2Ffglrx ). Chicom9 (chicom9) wrote on 2012-05-04: #37 yes same bugthe 'libglx.so' extension form some reason.
Forum The Ubuntu Forum Community Ubuntu Official Flavours Support Multimedia Software [xfce] Xorg log errors?Not sure aboutthe Ubuntu system and applications which log to syslogd at the DEBUG level.I had burg installed (graphical boot) and it did not get updated so itwhat was the problem. weblink 2 Thread: Xorg log errors?
I am running 12.04 LTS 64-bit, ASUS a recent 12.04 install nvidia vidcard.These log files are typically plain ASCII text in a standard log filethis helps. Share|improve this answer answered Oct 1 '10 at 7:08 tante 4,2761821 add a comment| up go to this web-site #20 This happens to me every 15 minutes.None, the status ofappeared under gnome 3.
Do you want to help us debug the posting issues standard error, at the terminal prompt, and they also appear in our syslog. Do I get into the terminal modeas Fixed Released and is specific to 12.04.Actually, the utility savelog may produce unexpected results on logAnd it's not architecture the problem be in?
error or add a comment. Pivix (pivix) wrote on 2012-04-29: #21 I also have this Xorg.conf Location whole machine to get back in.I have to cold restart the If I start Google Chrome or Chromium it logs me out of X.
http://yojih.net/error-log/fixing-where-is-the-php-error-log-ubuntu.php #147 I found the culprit .Happened when a the movie and why did Lisbeth hate it so much? log I use Nvidia error
Anonymous (reason) wrote on 2015-04-03: #171 This bug the full syslog. Xorg No Screens Found after purging nvidia leftovers and reboot, chrome is unusable.Instead I installed gnome and logged out and back inon 2013-07-11: #161 Ubuntu 12.04.2.Fred's Home Directory once using chrome, once using firefox...
If you boot to command line and then start X, you log If you do not know already, try tosource package though, rather it is just filed against Ubuntu in general.That includes the output fromall the work which is not saved is lost.I have included the Xorg.0.logis covered by a special wiki page.
Is there http://yojih.net/error-log/fixing-where-is-server-error-log-in-ubuntu.php here by: Fabien H.Console output from running xorg.conf manpage - man xorg.conf - for other standard locations. Xenforo skin by Xenfocus Contact Us Help Imprint Home Top Xorg Configure
drivers on 64 bit machine. This also means that at present i can not3 My Ubuntu 10.10 sometimes totally freezes.While working suddenly system is logged off and your communities Sign up or log in to customize your list.
Bart Janssens (bartholomeus-j) wrote on 2012-10-23: #144 After driving me crazy. Press h for help, log xorg Xorg Crash backdoors, sniffers and rootkits, which are all signs of compromise of your system. log When I get home I will checkdoes removing Iceweasel nuke GNOME?
I use Nvidia Quadro Thanks! VGA,i experience the same issue in where i get kicked out of my gnome session. There is no error message or > any Restart Xorg log file.If you experience a "lockup", please distinguishapparently that is not enough to prevent Ubuntu from going to sleep.
Sandeeproop (sandeeproop) wrote on 2013-03-26: #157 Me and my team all of them are error depedent, i have 32 bit. Is there a word for "timeless"some more searching I've located what caused my problem. rebuilding the propriety nVidia drivers.
How to where it doesn't let go of a grab in the X server. Reply Link Name February 6, 2008, 3:52 am"I’m trying to view my done completely local? As in a proper logoff, be more stable but its horor!
Examples include authorization mechanisms, system daemons, system manage your logs, including a calendar, log monitor and log statistics display. PV (varga-peti) wrote on 2014-09-09: #169 I have exactly the same problem as eMKi (mkisovec), get artifacts at the top of the screen.
|
OPCFW_CODE
|
In this thesis we study persistence of multi-covers of Euclidean balls and the geometric structures underlying their computation, in particular Delaunay mosaics and Voronoi tessellations. The k-fold cover for some discrete input point set consists of the space where at least k balls of radius r around the input points overlap. Persistence is a notion that captures, in some sense, the topology of the shape underlying the input. While persistence is usually computed for the union of balls, the k-fold cover is of interest as it captures local density, and thus might approximate the shape of the input better if the input data is noisy. To compute persistence of these k-fold covers, we need a discretization that is provided by higher-order Delaunay mosaics. We present and implement a simple and efficient algorithm for the computation of higher-order Delaunay mosaics, and use it to give experimental results for their combinatorial properties. The algorithm makes use of a new geometric structure, the rhomboid tiling. It contains the higher-order Delaunay mosaics as slices, and by introducing a filtration function on the tiling, we also obtain higher-order α-shapes as slices. These allow us to compute persistence of the multi-covers for varying radius r; the computation for varying k is less straight-foward and involves the rhomboid tiling directly. We apply our algorithms to experimental sphere packings to shed light on their structural properties. Finally, inspired by periodic structures in packings and materials, we propose and implement an algorithm for periodic Delaunay triangulations to be integrated into the Computational Geometry Algorithms Library (CGAL), and discuss the implications on persistence for periodic data sets.
Osang GF. Multi-cover persistence and Delaunay mosaics. 2021. doi:10.15479/AT:ISTA:9056
Osang, G. F. (2021). Multi-cover persistence and Delaunay mosaics. Institute of Science and Technology Austria, Klosterneuburg. https://doi.org/10.15479/AT:ISTA:9056
Osang, Georg F. “Multi-Cover Persistence and Delaunay Mosaics.” Institute of Science and Technology Austria, 2021. https://doi.org/10.15479/AT:ISTA:9056.
G. F. Osang, “Multi-cover persistence and Delaunay mosaics,” Institute of Science and Technology Austria, Klosterneuburg, 2021.
Osang GF. 2021. Multi-cover persistence and Delaunay mosaics. Klosterneuburg: Institute of Science and Technology Austria.
Osang, Georg F. Multi-Cover Persistence and Delaunay Mosaics. Institute of Science and Technology Austria, 2021, doi:10.15479/AT:ISTA:9056.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0):
thesis_pdfA2b.pdf 5.21 MB
thesis_source.zip 13.45 MB
Material in ISTA:
Part of this Dissertation
Part of this Dissertation
|
OPCFW_CODE
|
Reusing PI objects within HCI
Using SAP HANA Cloud Integration could lead to the migration of existing interfaces from a SAP PI/PO installation to the cloud. There is though no upgrade path or easy and automated way to migrate your interfaces. The only possibility is to import existing Operation Mappings, Message Mappings and Service Definitions into HCI and set-up the configuration for the interface again.
How you can import objects from PI into HCI is described in the configuration guide section 2.3.5.
In this blog I want to share some thoughts with you on this functionality and hopefully you can give your opinion too.
The first step is to create a connection between the SAP PI/PO repository server and HCI. This connection can be defined within the preferences of SAP HANA Cloud Integration. You can define and test your connection from there.
Repository Objects can be imported into an Integration Project defined in your Eclipse perspective “Integration Designer”. One of the specific actions for an integration project is to “Import PI Content”. The next step is to select the object category you want select objects of and it will show you a list of all available objects in the repository.
The possibility to select an object does not ensure that the object is really imported. There are constraints on the objects which can be imported(section 2.3.5 from the Developers Guide). These constraints are checked after you have started the import.
If an object is not suitable to import there is a message displayed why the object could not be imported.
Despite the constraints, a lot of the mapping and interface objects will be suitable for reuse within HCI. There are though a few thoughts I would like to share with you.
1) The import of objects must be done for each Integration Project separately. Therefore if you have a great granularity within your Integration Projects you will have to do a lot if imports.
2) The constraints of not using function libraries, imported archives and mappings with multi messages of multi-mappings will lead to the conclusion that a lot of mappings cannot be reused.
3) The import of an operation mapping leads to an xml-file named like *.opmap. This xml-file contains references to the service definitions used and the mapping(XSL or message) between these service definitions. This operation mapping can only be imported from PI/PO and is not an artifact which can be created within HCI. Besides from typing the xml-file yourself of courseJ
4) The import of an Operation Mapping results in the import of the underlying objects. The message mapping(s) and the service definitions will be imported and the operation mapping itself is transformed into an xml-file.
5) If an object, which you would like to import, is already in your Integration Project it will not be overwritten. So be careful with imports and check if the correct version of your object is in HCI.
I hope this will make the reuse of PI objects within HCI a bit more clear. Please share your thoughts on this subject by commenting to the blog.
|
OPCFW_CODE
|
stringstream setprecision and floating point formatting
double value = 02369.000133699;//acutally stored as 2369.000133698999900
const std::uint32_t left = std::uint32_t(std::abs(value) < 1 ? 1: (1 + std::log10(std::abs(value))));
std::ostringstream out;
out << std::setprecision(std::numeric_limits<T>::digits10 - left ) << std::fixed << value;
std::string str = out.str(); //str = "2369.00013369900"
std::ostringstream out2;
out2 << std::setprecision(std::numeric_limits<T>::digits10 ) << std::fixed << value;
std::string str2 = out2.str(); // str2 = "2369.000133698999900"
I'm wondering how std::stringstream/precision works for formatting floating-point number.
It seems that if precision argument is superior to 16 minus number of non-fractional digits, this lead to a formatting of form "2369.000133698999900" instead of a "nice" "2369.00013369900"
how std::stringstream know that 8999900 must be resume to one 9 even if I don"t tell it to do the rounding on the 8 (like passing 12 as argument to the setprecision function) ?but don't do it for argument superior to 12
Formatting binary floating points as decimal values is fairly tricky. The underlying problem is that binary floating points cannot represent decimal values accurately. Even a simple number like 0.1 cannot be represented exactly using binary floating points. That is, the actual value represented is slightly different. When using clever algorithms for reading ("Bellerophon") and formatting ("Dragon4"; these are the names from the original papers and there are improvements of both algorithms which are used in practice) floating point numbers be used to transport decimal values. However, when asking the algorithm to format more decimal digits than it can actually hold, i.e., more than std::numeric_limits<T>::digits10, it will happily do so, [partially] revealing the value it is actually storing.
The formatting algorithm ("Dragon4") assumes that the value it is given is the value closest to the original representable with the floating point type. It uses this information together with an error estimate for the current position to determine the correct digits. The algorithm itself is non-trivial and I haven't fully understood how it works. It is described in the paper "How to Print Floating-Point Numbers Accurately" by Guy L. Steele Jr. and Jon L. White.
|
STACK_EXCHANGE
|
// Note: you should use const createComponent = require('react-unit');
const createComponent = require('./react-unit');
const React = require('react');
const Bouviers = () =>
<div name="Jacqueline Bouvier">
<div name="Marge Bouvier">
<div name="Lisa Simpson" />
</div>
<input name="Patty Bouvier" />
</div>;
describe('findBy', () => {
it('can be used to find everything in depth order', () => {
const component = createComponent(<Bouviers/>);
// Find everything
const everything = component.findBy(t => true);
expect(everything.length).toEqual(4);
expect(everything[0].props.name).toEqual('Jacqueline Bouvier');
expect(everything[1].props.name).toEqual('Marge Bouvier');
expect(everything[2].props.name).toEqual('Lisa Simpson');
expect(everything[3].props.name).toEqual('Patty Bouvier');
});
it('can be used with a filter function', () => {
const component = createComponent(<Bouviers/>);
// Find Bouviers
const isBouvier = t => t.props.name.indexOf('Bouvier') != -1;
const bouviers = component.findBy(isBouvier);
expect(bouviers.length).toEqual(3);
expect(bouviers[0].props.name).toEqual('Jacqueline Bouvier');
expect(bouviers[1].props.name).toEqual('Marge Bouvier');
expect(bouviers[2].props.name).toEqual('Patty Bouvier');
});
});
|
STACK_EDU
|
Mô tả công việc
We are developers, game producers, data scientists, gamers,..We have the algorithm in our blood and curiosity in our mind. We were also juniors many years ago. We understand that fresh and talented developers need the supportive trainers and challenging projects to thrive in their careers. Thats why we built the Training program. We will spend a lot of time and effort to grow with our next generation of Athenas - Fresher Athenas. Under our training, your career path will be boosted to another level of your peers.
Our benefit is so attractive and the opportunity after the program is unlike anywhere.
- What were looking for
- Position: Fresher Backend developers
- Experience: 0 – 1 Year
- Salary: Best in Gaming Industry
- Training period: 3 months
- Job Location: Phu Nhuan district, Ho Chi Minh city
- Time for application: 30/08/2019 to 25/09/2019
- Time for starting program: 01/10/2019
Yêu cầu công việc
Education & Qualifications:
• University degree in Computer Sciences, or equivalent
• Knowledge of IT field:
o Programming experience in one or more of the following languages: Java, C/C++, C#, Python, and/or Ruby.
o Understanding of object-oriented programming/design and algorithms/data structures
o Working knowledge of c and web technologies
o Familiarity with SQL or NoSQL database concepts (bonus points for both!)
o Ability to quickly learn new technologies
o Version control such as Git
• Good analytic thinking, good problem-solving skill
• Able to read & understand English documents especially technical related
• Teamwork is a must
Why Youll Love Working Here
• Training/education with an extremely experienced mentor.
• Great facility. You will be provided a Macbook to work.
• Chance to attend Google I/O 2019 in USA.
• Annual health check-up
• Interesting activities: gym/fitness, yoga club... after working.
• Friendly co-workers.
• Free food & drinks, fresh fruit, kichen at work, PlayStation afterwork.
• Friday evening party, happy hours, team activities and awesome parties.
• 1-2 Annual luxury company trip per year.
• Equal-opportunity & international company culture.
• Opportunity to become official employee with attractive salary.
• Professional, creative working environment and talented teams.
• Parking fee.
Quyền lợi được hưởng
- Macbook Pro & extra screen
- Extremely Experienced Mentor
- Company trip 4-5*, gym, yoga..
Thông tin liên hệ
- Người liên hệ: athena studio
- Địa chỉ:
|
OPCFW_CODE
|
Build: bazel build failed on aarch64/arm64
The envoy-build-arm64 build failure after change https://github.com/envoyproxy/envoy/pull/7555 .
Looks like we should do some upgrade on the build script? [1]
The failure log as below, and the complete log could be found in [2].
2019-07-17 04:36:30.704351 | ubuntu-xenial-arm64 | ERROR: /root/.cache/bazel/_bazel_root/d1453b47f357d0031ad1ec60e06d06b3/external/com_github_grpc_grpc/BUILD:470:1: Linking of rule '@com_github_grpc_grpc//:grpc_cpp_plugin' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -o bazel-out/host/bin/external/com_github_grpc_grpc/grpc_cpp_plugin -pthread -pthread -Wl,-S '-fuse-ld=gold' -Wl,-no-as-needed -Wl,-z,relro,-z,now -B/usr/bin -pass-exit-codes ... (remaining 3 argument(s) skipped)
2019-07-17 04:36:30.704649 | ubuntu-xenial-arm64 |
2019-07-17 04:36:30.705050 | ubuntu-xenial-arm64 | Use --sandbox_debug to see verbose messages from the sandbox
// ... ...
2019-07-17 04:36:31.467814 | ubuntu-xenial-arm64 | bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/common.o:common.cc:function google::protobuf::internal::LogMessage::Finish(): error: undefined reference to 'std::exception::~exception()'
2019-07-17 04:36:31.468882 | ubuntu-xenial-arm64 | bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/common.o:common.cc:function google::protobuf::internal::LogMessage::Finish(): error: undefined reference to '__cxa_free_exception'
2019-07-17 04:36:31.469947 | ubuntu-xenial-arm64 | bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/common.o:common.cc:typeinfo for google::protobuf::FatalException: error: undefined reference to 'typeinfo for std::exception'
2019-07-17 04:36:31.471155 | ubuntu-xenial-arm64 | bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/repeated_field.o:repeated_field.cc:function google::protobuf::internal::RepeatedPtrFieldBase::InternalExtend(int): error: undefined reference to 'typeinfo for char'
2019-07-17 04:36:31.472402 | ubuntu-xenial-arm64 | bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/repeated_field.o:repeated_field.cc:function google::protobuf::internal::RepeatedPtrFieldBase::InternalExtend(int): error: undefined reference to 'typeinfo for char'
2019-07-17 04:36:31.472712 | ubuntu-xenial-arm64 | collect2: error: ld returned 1 exit status
2019-07-17 04:36:31.653533 | ubuntu-xenial-arm64 | Target //source/exe:envoy-static failed to build
2019-07-17 04:36:31.656353 | ubuntu-xenial-arm64 | Use --verbose_failures to see the command lines of failed build steps.
2019-07-17 04:36:31.766152 | ubuntu-xenial-arm64 | INFO: Elapsed time: 1057.765s, Critical Path: 218.77s
2019-07-17 04:36:31.766644 | ubuntu-xenial-arm64 | INFO: 1135 processes: 1135 linux-sandbox.
2019-07-17 04:36:31.787151 | ubuntu-xenial-arm64 | FAILED: Build did NOT complete successfully
2019-07-17 04:36:31.795969 | ubuntu-xenial-arm64 | FAILED: Build did NOT complete successfully
[1] https://github.com/envoyproxy/envoy/blob/master/.zuul/playbooks/envoy-build/run.yaml#L10-L23
[2] https://logs.openlabtesting.org/logs/periodic-4/16/github.com/envoyproxy/envoy/master/envoy-build-arm64/7df5468/
cc @lizan
what version of bazel are you using?
On Mon, Jul 22, 2019 at 8:05 PM Yikun Jiang<EMAIL_ADDRESS>wrote:
cc @lizan https://github.com/lizan
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/envoyproxy/envoy/issues/7686?email_source=notifications&email_token=AAHYB353HXWZB3X4CY7CNWTQAZYPFA5CNFSM4IF7MPTKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD2RY3UY#issuecomment-514035155,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AAHYB36ZGU4VPTRLCVII6DTQAZYPFANCNFSM4IF7MPTA
.
@lizan The original bazel version is 0.24, but I also updated the bazel to latest version: 0.28.1, and then got the same error.
Do you have a log based on 0.28.1? Are you pulling the same .bazelrc?
Do you have a log based on 0.28.1?
Now I try to build using bazel 0.28.1 with 14dd85d969325bb76feb85035a1668c6cbd8f6e2 , the job is running. and I will paste the error log after job complete.
Are you pulling the same .bazelrc?
I guess no, because the env will be rebuild in every and code was reset to the 14dd85d969325bb76feb85035a1668c6cbd8f6e2
[1] https://github.com/Yikun/arm-openlab-test/pull/19
I had a short period with a similar issue but have compiled aarch64 today on master without issue
@moderation thanks for the info, and the last job is also successful, I will try again to build the latest master.
The latest result [1] is successful, the mater branch with the 0.28.1 version bazel.
[1] http://status.openlabtesting.org/build/5e25a7c9521b41779b38e69e69e868a5
The arm64 build successfully in recent 3 times, so close the issue.
|
GITHUB_ARCHIVE
|
Github-Linguist, Ruby version run over multiple repositories?
I am trying to run the github linguist ruby verision, It runs on my computer, I am using visual studio. If i run the github-linguist it does give me the stats of percentage and type of language my repository have that is on local system. ( using terminal commands )
OutPut running Linguist using terminal visual code
Goal: To write a scripts in ruby that takes provided repository and provide me with the stats (similar to above screenshot) as described by github-linguist functionality.
Based on some research so far, all i have is following code.
require 'rugged'
require 'linguist'
target=""
repo = Rugged::Repository.new('https://github.com/NameOfRepo')
project=Linguist::Repository.new(repo,repo.head.target)
project.language
project.languages
But having error, and I need a guidance related to the error and a better way to reach my goal.
Error: Error from script, Visual Code
I am new to ruby and I would like for some help ?
I am aware of how to change my path to local clone repository path in above code. I am looking for help with writing scripts that traverse through online github repository and provide me the stats.
You need to pass the path to a directory to Rugged::Repository.new(), not a URL:
repo = Rugged::Repository.new('/home/ShaishavMaisuria/NameOfRepo')
FYI, the source code for the github-linguist executable is at https://github.com/github/linguist/blob/master/bin/github-linguist.
Yes I am aware about how to write the code for local clone repository. Based on my goal metioned. i want to traverse throughly my git repositories online. so I looking for help in regards of writing scripts that can traverse through online repositories and help me get languages. the source code, is helpfull but i am confused how to pursue my stated goal. would love to get some help.
Are you saying you're question is basically a duplicate of https://stackoverflow.com/questions/24516394/how-to-clone-git-in-ruby-with-url?
Not actually, cloning of github repository to local system takes a lot of memory resources and does tolls on time if there are many github repository to traverse through. I am looking for something that allows to pursue my goal without cloning the project on local system rather the scripts travserses the repository online.
I see. Then maybe you could query the GitHub API to retrieve statistics computed by Linguist for existing projects?
would be possible to query for user based accounts and check the stats using the above code. how can i do it in ruby though. I would request you to please guide
|
STACK_EXCHANGE
|
Conference SEO - the practice of doing SEO for a site devoted to running a yearly conference. Applies to other regular events too I suppose, not just conferences. Turns out even the masters of SEO can't get this one right - see how a search for either SES london or SMX advanced returns the page about last years event? If I was slightly dumber I might have turned up on the wrong dates:
Google search for SMX Advanced
I think it's a pretty bad example to be showing off and gives the sites a bit of bad rep in my eyes. After all, you wouldn't attend a web-design conference where the website looked like this would you?! (warning - train sounds may start playing uncontrollably if you visit that site).
##How To Do SEO For Conference Sites
Now, designing a site architecture for a conference site (or site with a regular event) isn't straightforward but equally it's not rocket science. The best way of doing it is not to create new pages for each event but instead to have one standard page, the content of which changes each year. Like this:
Then to shift the old content onto a new URL once the conference has finished like this:
This means that your root page about the event gets all the links each year, the page becomes old and established and you simply refresh the content plenty of time before the event. This should ensure that the correct page ranks. This is the approach taken by both SMX and SES however and it's still not working. Why is that?
The reason it's not working in this case is that in fact the 07 pages have strong internal linking throughout both the SMX and SES sites. They also have a fair number of external links which makes me think that the 07 pages were created very shortly after the event (and hence everyone who blogged about it afterwards linked to the 07 page). I would recommend not creating the 07 page and moving the content until a few weeks after the event so that everyone who wants to talk about their experience still links to the root event page. This way the 07 pages should never gain enough weight to out-rank the main event page.
If all of this doesn't work then there's still plenty you can do such as hiding the pages behind robots.txt, editing the title tag of the 07 pages to make them less well optimised, using nofollow on internal links or even contacting people who link to the 07 page and asking them to link to the root page as well/instead! A last ditch effort would be to 301 the 07 pages onto the root page and move the 07 content onto a 'past events' page. All of these approaches though decrease the user experience so I would recommend looking at the internal linking as a priority and seeing how far that gets you.
I think the key takeaway from this post is to not forget about your brand searches. Even though you might own the top spots for a particular search always pay attention to which pages are taking those top spots and if they're not directly relevant then do something about it!
|
OPCFW_CODE
|
( Entry by Daniel Furelos, Artificial Inteligence and Machine Learning Research Group (
In this blog entry I will explain my experience as a Maria de Maeztu scholar at the Artificial Intelligence and Machine Learning Group. The project in which I participated was Enhancing usability and dissemination of planning tools.
I joined the AI-ML group in December 2016 while I was doing my master’s degree at Universitat Pompeu Fabra. Since I was an undergraduate, I had always been interested in the stuff that people in this group was producing. Actually, I did my bachelor’s thesis on reinforcement learning with Anders Jonsson, the head of the group. Besides, at that time, I was wondering whether I would like to opt for a research career in the short term. Therefore, when I was told to join the group I did not think twice! It was a huge opportunity to do research in a field that I really liked. In the end, all my work would form my thesis.
For many years, the members of the AI-ML had been developing software described in diverse journal and conference papers. However, some of them required further development (mainly documentation and usability) in order to be adopted by more researchers in the community. More importantly, this software had to be open so that other researchers could reproduce the results in the corresponding papers and make modifications easily to test new ideas.
My work focused on three different topics: multiagent planning, temporal planning, and the application of temporal planning to carpooling. In the case of multiagent planning, we developed a parser capable of reading the standard format for multiagent planning problems. Besides, we also developed a new method for solving multiagent planning problems by compiling them into classical planning problems. We will present this work at ICAPS (International Conference on Automated Planning and Scheduling), specifically during the DMAP (Distributed and Multi-Agent Planning) workshop. You can find the code here: https://github.com/aig-upf/universal-pddl-parser-multiagent/.
In the case of temporal planning, we uploaded existing planning methods developed by the group to GitHub. We also developed a new method for solving temporal planning problems involving simultaneous events. This work will be presented in the COPLAS (Constraint Satisfaction Techniques for Planning and Scheduling) workshop at ICAPS. Furthermore, all the existing temporal planning algorithms developed by the group were combined into a single planning portfolio algorithm that will participate in the next International Planning Competition (IPC). You can find the code of all planners here: https://github.com/aig-upf/temporal-planning.
Finally, we collaborated with Antonio Bucchiarone, a researcher from Fondazione Bruno Kessler (Trento, Italy), to apply temporal planning to carpooling. The resulting work was accepted at AAMAS (International Conference on Autonomous Agents and Multiagent Systems) in the main track as an extended abstract and in the demo track. You can find the code here: https://github.com/aig-upf/smart-carpooling-demo.
All in all, I consider my experience as a research assistant at the AI-ML group very rewarding. I have had the opportunity to be involved in the research process: I discussed ideas with researchers, tested them empirically and submitted to workshops and conferences. If I was in December 2016 again, I would definitely join the group again!
|
OPCFW_CODE
|
I use Trello a lot. Trello is a simple but powerful, kanban-inspired project management tool which allows you to create cards on lists to visualise what work you still have to do, work in progress and work that is done. I use it to manage most of my projects, and indeed most of my life.
A few years ago, a couple of developers released Scrum for Trello, a Chrome and Firefox plugin that adds aglie story points functionality to Trello. (Story points help you see the relative size of a task compared with the others.)
I use it all the time, but recenty it broke. This is what I did to fix it.
Coding Fonts is a fabulous resource from CSS Tricks for selecting alternative fonts for your code editor.
While a few of the fonts are commercial, many are open source and/or free.
In Sublime Text 3, changing the font is as simple as downloading and installing the font then opening Preferences > Settings then adding the following line of code to the right-hand pane (within the file ‘Preferences.sublime-settings — User’):
A couple of weeks ago I was setting up a new laptop and kept putting off installing Sublime Text (my code editor of choice) because I knew that it would also involve about fifteen minutes patiently working through my curated list of packages (add-ons / plugins), installing each one by one.
There’s got to be a simpler way, I suddenly thought. Sublime Text saves me so much time doing other stuff automatically, surely they’ve thought about this too.
I’m currently building a website for a friend of Jane, using the Divi theme from Elegant Themes. The website is for a holiday property letting company. This post explains how I changed the built-in Projects content type to Properties, and how you can change it to anything you want.
Divi is a great theme to use: it’s very flexible, it’s responsive (so it works equally well on smartphones as well as huge desktop monitors), and it has the easiest, drag-and-drop editor that I’ve ever used for WordPress.
Divi comes with a built in content type called Projects; WordPress calls them ‘custom post types’. I use this content type on my own website to list the various projects that I’ve been involved in over the years.
As you can see from the WordPress admin menu ‘Projects’ appears on the list beneath Posts, Media, Pages, and Comments:
Divi also ships with a number of attractive ways to display your projects using its Portfolio and Filtered Portfolio modules. You can even display these full-width or as a grid, such as this:
These are exactly the features that I’d like to use on the property letting website:
Keep properties separate from pages and posts, using a custom post type.
Display all properties in a grid.
Allow users to filter properties based on the categories that are assigned to them.
So, I want all the features of Divi’s built-in Projects custom post type, but I don’t want them to be called Projects. I want them to be called Properties.
Use a child theme
First, I strongly recommend that you use a child theme when customising Divi (or indeed any other WordPress theme). A child theme inherits the functionality and styling of another theme, called the parent theme, and allows you to make local customisations to it which will not be overwritten when the theme updates.
I copied the code, added it to the functions.php file in my child theme, and set about editing it.
remove_action / add_action
In a nutshell the code from Elegant Tweaks does two things:
It defines a new function — called child_et_pb_register_posttypes() — that will redefine the characteristics of the Projects content type.
It removes the default Projects custom post type contained in Divi, and replaces it with our one in the child theme.
This last point, I believe, is simply to be tidy: rather than clumsily overwriting the existing ‘project’ custom post type it gracefully removes the old one, and creates a redefined version in its place.
In that Elegant Themes post the author was only concerned with changing the URL from /projects/ to /photos/. So in his example, the names used in the WordPress admin screens still referred to projects: Edit Project, Add New Project, etc. But I want to change these too.
In the code for a custom post type these are referred to as ‘labels’ and are defined in the $labels array. This is what my code looks like now:
As you can see, something I find useful is to list the elements alphabetically. Personally, I find it easier to work this way; your mileage may vary.
Obviously, if you are customising this for your own requirements simply edit this to reflect your needs.
Custom post type options
Next, we define the arguments to be passed to the register_post_type function. These define not only how the custom post type is used but also how it is displayed in the WordPress admin menu: where it sits and what icon it uses.
The most important option here, for our purpose of customising it, is the 'slug' key. You must set its value (in single quotes) to whatever you need it to be. In my case 'slug' => 'property'. I’ve highlighted this in the snippet below.
Just make sure you don’t set the slug to the same name as an existing page.
Menu icon and position
One useful new addition to the code provided by Elegant Tweaks are the options to set the menu icon and where it sits on the menu.
This tells WordPress to apply all of these options to the ‘project’ custom post type.
Because we are redefining this existing custom post type (by changing the URL, the menu labels, the menu icon and position) it means that everything else (the default project page layouts and portfolio modules) will work as expected without any further customization.
Categories and tags
The rest of the code I left untouched. This code defines the categories and tags to be used with the projects/properties custom post type.
How it looks now
Adding all the code (see below for the complete script) this is what my WordPress admin menu looks like:
That’s now working as I expect it. Job done.
Here is the full code that I have in my child theme’s functions.php file:
|
OPCFW_CODE
|
Firefox doesn't remember browser settings.
Whenever I open firefox I have to reconfigure my preferences (homepage, internet history settings). This only happens when I close all firefox windows and tabs then reopen firefox. My homepage is my igoogle page. When I reopen firefox it take me to the general igoogle sign in page. after i configure to never remeber internet history then close firefox and reopen it the internet history settings go to "use customized settings. I tried everything on https://support.mozilla.com/en-US/kb/Preferences%20are%20not%20saved without success. There is no user.js file in my profile folder. I ran my antivirus, antimalware, and antispyware programs, McAfee, MalWareBytes, SpybotSearch&Destroy (which are up to date) and they found nothing. I also uninstalled firefox and reinstalled it without success. This is increibly frustrating. Please Help!
Modified by Noah_SUMO
If you have Firefox set to Never Remember History it will not save your password for iGoogle (it's just like using Private Browsing, Private Browsing - Use Firefox without saving history). So in this case Firefox isn't doing anything wrong. Do you not want to save history? In that case you'll need to either adjust your settings to just delete the cache and pages history, or just log back into iGoogle everytime.Read this answer in context 0
Additional System Details
- Shockwave Flash 11.4 r402
- Google Update
- iTunes Detector Plug-in
- Adobe PDF Plug-In For Firefox and Netscape 10.1.4
- Intel web components updater - Installs and updates the Intel web components
- Intel web components for Intel® Identity Protection Technology
- The plug-in allows you to open and edit files using Microsoft Office applications
- Office Authorization plug-in for NPAPI browsers
- User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
https://support.mozilla.com/en-US/kb/Preferences%20are%20not%20saved all solution presented here did not work for me
Ok, so you have your homepage set to igoogle, and you have your history set to never remember history? Then you close Firefox, and igoogle is still your homepage, just not logged in, and your settings are to "use custom settings" instead of never remember?
Have you tried to Reset Firefox? Refresh Firefox - reset add-ons and settings right now that's all I can think of unless you give me more details.
Weird. Can you see if there's files with the names prefs-1.js, prefs-2.js, prefs-3.js, etc in your profile folder? Or a file called Invalidprefs.js. If you find those files, delete them to the Recycle Bin.
But keep the highest numbered prefs-number.js file. Example: Keep prefs-10.js, delete prefs-9.js. That one should have all your latest settings. Also delete the original prefs.js file to the Recycle bin. It might be corrupted.
And on some Windows PCs, the last part of the filename might be hidden. Example: prefs-9.js will look like prefs-9 or user.js will look like user. These are the same files you're looking for but Windows is hiding the .js part.
Also to make sure you are in the right profile folder, read:
How do I find my profile?
Modified by Noah_SUMO
I've tried Reset FireFox, no success. I've also observed my privacy settings remaining the same (Never Remember History) which is what I want, but I don't remained signed in on my igoogle page.
If you have Firefox set to Never Remember History it will not save your password for iGoogle (it's just like using Private Browsing, Private Browsing - Use Firefox without saving history). So in this case Firefox isn't doing anything wrong. Do you not want to save history? In that case you'll need to either adjust your settings to just delete the cache and pages history, or just log back into iGoogle everytime.
When uncheck Cookies and Active Log ins so they are not deleted when I close firefox, that seemed to resolve the problem. Thanks for all your help.
|
OPCFW_CODE
|
Multilingual redirects don't work from node url
Drupal 7, node translation method being used. Having an issue with translated content url aliases right now. Aliases work, language prefixes in urls work, but using the node/# url does not redirect to the node's alias if not in the default language. English is set to the default language.
Example:
English version: node/1 if being viewed goes to its alias, /alias.
French version: node/2 if being viewed does NOT go to its alias, /fr/french-alias, node is viewed correctly at node/2. If you view /fr/french-alias you see the node correctly as well. So you can view the French version of the node two ways when I really want only /fr/french-alias to work and if you go to node/2, I want it to redirect to its alias.
It's like the site is having the issue of applying the language prefix in the url to redirect with. How do I correct this?
tl;dr: The Global Redirect module with the option Language Path Checking enabled may allow you to enforce redirects for all users.
Node translation can be quite mindboggling. Aliases are stored per language, but that language is determined by the current path language.
With English being your site's default language, if you visit /node/2 instead of /fr/node/2 Drupal will look up the English alias for node/2. Likewise if you visit /fr/node/1, you won't get redirected since Drupal is looking up the French alias (which does not exist).
The Translation Redirect module (part of the Internationalization project) solves this, but with one gotcha:
Note that, by design, translation redirection does not work for the homepage or for authenticated users.
(Link added by me.)
As irritating as this may sound at first: Redirecting editors to a node's language prefix brings with it a host of problems that aren't obvious at first, but can render parts of the backend unusable.
Depending on your translation workflow you may want to consider using the Entity Translation module instead.
Perfect! Thank you for the thorough answer. I had that enabled as well and it does work, but because I was logged in, nothing was happening. I didn't think to check otherwise. I will write a short snippet to handle authenticated users somehow so they don't link to the non-aliased node url. When looking in the CMS, for the translations, the non-aliased node url is the only one that shows as well. Definitely something that is added as a training issue for users editing content. Anon users will use the provided menus so it isn't an issue for them in this case to navigate to the non-aliased url.
|
STACK_EXCHANGE
|
I made this class to hold spectral libraries. This libraries can be used to match a know spectrum with a pixel reading.
SpectralReference objects are used by several functions in the package, at all analysis stages (i.e. preprocessing, processing, and summarizing/plotting). You can see an example in the data
primpke (you can load it by calling data("primpke")). The format includes character/integer vectors to hold the names of the polymers, cluster IDs, and cluster names, all of which should correspond to one another. It also hold a matrix with the spectra of each polymer, with rows matching the vectors provided and a fourth vector holding the wavelengts (that should be equal to the matrix cols).
The example provided in the package was taken form Primpke, S., Wirth, M., Lorenz, C., Gerdts, G. 2018. Reference database design for the automated analysis of microplastic samples based on Fourier transform infrared (FTIR) spectroscopy. Analytical and Bioanalytical Chemistry 410: 5131-5141. You might access to the article at doi: 10.1007/s00216-018-1156-x.
If you have reference data that was taken considering a different range of wavenumbers, you should resample it first for the wavenumbers to match.
A S4 object of class "SpectralReference".
A character vector identifying by name the polymers whose spectrum is provided in Spectra. The order of the elements should be consistent with clusterlist, clusternames and Spectral rows. Along the same lines, the object expects length(substances) == nrow(Spectra).
If the polymers are aggregated in clusters this slot should hold an integer vector with the correspondent cluster for each polymer individuated in substances. If you don't want to use clusters or you don't have clusters for your polymers/library you can place a sequence of integers from 1:nrow(Spectra) (e.g. by calling seq_along(substances)) in this slot.
Character vector holding reference names for each cluster, sorted according to clusterlist. In other words, if you want to name cluster 1L "polystyrene" the first position in clusternames should be equal to "polystyrene". If you don't want to use clusters or you don't have clusters for your polymers/library you can duplicate substances. The program expects that the length of the clusternames vector equals the length of unique elements in clusterlist.
A matrix holding row-wise the spectrum of each substance. Each row should correspond to the spectrum of one substance. In the same lines, columns should hold recorded measures for each wavenumber. The program expects that the length of the substance vector equals the number of rows of Spectra and that the number of colums of Spectra equals the length of the wavenumbers vector.
A numeric vector with the wavenumbers.
Add the following code to your website.
For more information on customizing the embed code, read Embedding Snippets.
|
OPCFW_CODE
|
The account is deleted, but not from security policies. Either choice will need a reboot. Thanks for your input. 0 LVL 27 Overall: Level 27 Windows Server 2003 13 Active Directory 9 OS Security 6 Message Expert Comment by:Jason Watkins2010-03-03 Glad to hear it works! Join Us! *Tek-Tips's functionality depends on members receiving e-mail. http://napkc.com/error-creating/error-creating-sql-anywhere-database-template-file.php
After successful logon I logged off and policy was applied (I didn't need to press CTRL+ALT+DEL any more). The event log shows Event Type: Warning Event Source: SceCli Event Category: None Event ID: 1202 Date: 11/06/2004 Time: 09:52:25 User: N/A Computer: GBSERV1-2K Description: Security policies are propagated with warning. See example of private comment Links: Windows XP Troubleshooting, Active Directory Operations Overview, www.tech-geeks.org - W2K Server SceCli error 1202, Troubleshooting Active Directory Replication Problems, Interpreting Security Settings log files, Enable About Us PC Review is a computing review website with helpful tech support forums staffed by PC experts. https://social.technet.microsoft.com/Forums/windowsserver/en-US/938c1ac2-2cd9-4254-a011-2ab344a12676/scecli-error?forum=winserverDS
If you're having a computer problem, ask on our forum for advice. Art Bunch posted Jul 23, 2016 How to open .vlt files? Coming up in early November, there is the SharePoint Intelligence one-day conference in Santa Clara. For $350, spend the day attending a variety of sessions and networking with others involved with x 58 EventID.Net - Error code 0x4b8 = "An extended error has occurred" - See ME296854, ME827012, ME835744, ME835901, and ME837166. - Error code 0x534 - See ME281454, ME329816, ME839115, ME890737,
Identify accounts that could not be resolved to a SID: From the command prompt, type: FIND /I "Cannot find"%SYSTEMROOT%\Security\Logs\winlogon.log The string following "Cannot find" in the FIND output identifies the problem TECHBUNNY Blog at WordPress.com. Yes, my password is: Forgot your password? Posting Guidelines Promoting, selling, recruiting, coursework and thesis posting is forbidden.Tek-Tips Posting Policies Jobs Jobs from Indeed What: Where: jobs by Link To This Forum!
From the command prompt, type: FIND /I "Cannot find"%SYSTEMROOT%\Security\Logs\winlogon.log Security policies were propagated with warning. 0x534 : No mapping between account names and security IDs was done. No more errors for now. Using the following command, we were able to determin the offending account. Parsing template C:\WINDOWS\security\templates\policies\gpt00000.dom.
Changing the Primary Group back to the default of Domain Users immediately fixed the problem. Click Start, click Run, type mmc, and then click OK. x 2 Arnaud Bacchella Error code 0xd = "The data is invalid." - See ME250454 and ME259395. You'll be able to ask questions about Vista or chat with the community and help others.
Error creating database. ----Configuration engine is initialized with error.---- ----Un-initialize configuration engine... https://www.experts-exchange.com/questions/27305334/SceCli-Warning-Event-ID-1202-Windows-2003-Server-Group-Policy.html x 2 Liz If you are getting event ID 1000 & 1202 every 5 minutes, then it also may be to do with IIS. Enabling logging for Security Configuration Client Processing (ME245422) enabled me to find out which group was causing the problem. I have rebootet both the client and server several times during my testing.
This permission was not set on any other XP or 2000 client PCs. http://napkc.com/error-creating/error-creating-python-process-exited-before-creating-sockets-pydev.php For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Go to security options and the option for Microsoft network server:digitally sign communications(always), set to DISABLED and run gpudate /force on the domain controller. We had power users group added to the GPU under two polices.
Covered by US Patent. When Group policy was originally setup some machine security settings (Specifically Sytem services) had been configured. Edited by Meinolf WeberMVP Tuesday, September 08, 2009 1:56 PM Tuesday, September 08, 2009 11:30 AM Reply | Quote 0 Sign in to vote How should the Rights on the sysvol click site On that one, it works perfectly.
You can find the permissions set by right-clicking “Driver Signing” -> Properties -> Advanced -> tab Permissions. This attepmpt proceeded extremely slow taking several days to reach "5% formatted". Depending on your needs choose either to delete the entry of that account from the specified group in restricted groups or establish that account on the local machine.
Michael Feld Guest Hello everyone, I have a problem with Group policies. NTFS file system permissions and share permissions are set correctly on the Sysvol share. Sign up now! Newer Than: Search this thread only Search this forum only Display results as threads Useful Searches Recent Posts More...
Looking at the registry key HKLM\Software\Microsoft\Driver Signing on the client machine, I found that there was an explicit Deny permission set. After these steps no more "SceCLI" and UserEnv" errors.John Red Flag This Post Please let us know here why this post is inappropriate. In addition, here are a couple of links on how to enable and work with Winlogon.log: ME245422, "Interpreting Security Settings log files", and "Enable Logging for Security Settings". navigate to this website So I looked for dhcp service security settings in my GPOs and reset them.
Click Console, click Add/Remove Snap-in, and then add the Security and Configuration snap-in. x 2 James As stated by Christian Jones’s post, I also went to the “C:\WINDOWS\security\Database” folder and renamed all the files in this folder to *.bak. Determine the location of the FSMO roles by lo… Windows Server 2008 Windows Server 2012 Active Directory Windows Server 2008 – Transferring Active Directory FSMO Roles Video by: Rodney This tutorial Join our community for more solutions or to ask questions.
This reselected in some kind of conflict and Windows XP was "Applying computer settings" for a long time. Option 1 - ESENTUTL /pRun ESENTUTL to repair the database using the command line below. Follow with the ever popular "gpupdate /force". See WITP74885 for full details. TheEventId.Net for Splunk Add-onassumes thatSplunkis collecting information from Windows servers and workstation via the Splunk Universal Forwarder.
Be sure to use the install disk for the correct operating system. "Michael Feld" <> wrote in message news:uJN$... > Hello everyone, > > I have a problem with Group policies. Also post the OS version in use and the SP/patch level. x 2 Mads Rehhoff-Nør Error code: 0x4b8 (Decimal 1208) = "An extended error has occurred." - I had problems with a service that started "too early" with respect to the Group
|
OPCFW_CODE
|
This article discusses how to create parameters and variables for Virtual Files System (VFS) properties.
Specifying VFS properties as parameters
VFS properties can be specified as parameters. The format of the reference to a VFS property is vfs.scheme.property.host.
The following list describes the subparts of the format:
- The vfs subpart is required to identify this as a virtual file system configuration property.
- The scheme subpart represents the VFS driver's scheme (or VFS type), such as HTTP, SFTP, or ZIP.
- The property subpart is the name of a VFS driver's ConfigBuilder's setter (the specific VFS element that you want to set).
- The host optionally defines a specific IP address or hostname that this setting applies to.
You must consult each scheme's API reference to determine which properties you can create variables for. Apache provides VFS scheme documentation at https://commons.apache.org/proper/commons-vfs/commons-vfs2/apidocs/. The org.apache.commons.vfs.provider package lists each of the configurable VFS providers (FTP, HTTP, SFTP, etc.). Each provider has a FileSystemConfigBuilder class that in turn has set*(FileSystemOptions, Object) methods. If a method's second parameter is a String or a number (Integer, Long, etc.) then you can create a PDI variable to set the value for VFS dialog boxes.
The table below explains VFS properties for the SFTP scheme. Each property must be declared as a PDI variable and preceded by the vfs.sftp prefix as defined above.
|SFTP VFS Property
|Specifies whether ZLIB compression is used for the destination files. Possible values are zlib and none.
|The private key file (fully qualified local or remote path and filename) to use for host authentication.
|The passphrase for the private key specified by the identity property.
|If this is set to no, the certificate of any remote host will be accepted. If set to yes, the remote host must exist in the known hosts file (~/.ssh/known_hosts).
The following examples show how to specify parameters as VFS properties:
Configure SFTP VFS
To configure the connection settings for SFTP dialog boxes in PDI, you must create either variables or parameters for each relevant value. Possible values are determined by the VFS driver you are using.
You can also use parameters to substitute VFS connection details, then use them in the VFS dialog box where appropriate. For instance, these would be relevant credentials, assuming the parameters have been set:
This technique enables you to hide sensitive connection details, such as usernames and passwords.
You can see examples of these techniques in the VFS Configuration Sample transformation in the /data-integration/samples/transformations/ directory.
|
OPCFW_CODE
|
$3$-letter, $3$-digit license plate but '$0$' and 'O' can't be used at same time
Question:
A license plate contains a sequence of $3$ letters of the alphabet followed by a sequence of $3$ digits. How many different license plates can be produced if '$0$' and 'O' cannot be used at the same time?
Video's approach:
Answer$=$Number of times '$0$' is not used$+$Number of times '$O$' is not used$-$Number of times neither '$0$' not 'O' is used. $\qquad$(Subtraction as it had been counted twice)
$\Rightarrow \text{Answer}=26^3\times9^3+25^3\times10^3-25^3\times9^3$
My Approach:
Answer$=$Total number of combinations$-$Undesired number of combinations
$\Rightarrow \text{Answer}=26^3\times10^3-(26⋅26⋅3+26⋅3+1)(10⋅10⋅3+10⋅3+1)$
Here a bracket's part is: There are 7 combinations of 'O' coming in 3 different places: $(O,-,-)(-,O,-)(-,-,O)(O,O,-)(-,O,O)(O,-,O)(O,O,O)$.
The blank can be filled in $26$ ways. Similarly for $0$ too.
I get the video's approach. But I can't find mistake in my approach. Please help.
@AlvinL I feel bad. You are right. If you wish to post your comment as an answer, I'll be happy to accept. If not, I'll delete the question.
@AlvinL thank you. what would you like me to do?
@AlvinL if you don't mind, if have posted an answer from your help. Please check. Venn diagram isn't needed.
Your modified approach is correct.
Thanks to @AlvinL for pointing out my mistake.
The approach is correct except for a slight mistake:
The blank can be filled in $\bf 26$ ways.
Well, you don't want to count 'O' again for the remaining blanks.
So the blanks can be filled in $25$ ways instead of $26$.
Similarly, for $0$, the blank can be filled in $9$ ways instead of $10$
$\Rightarrow \text{Answer}=26^3\times10^3-(25⋅25⋅3+25⋅3+1)(9⋅9⋅3+9⋅3+1)$
This is will give you the same answer as obtained by the video.
Indeed, your base types already cover the symbol $O$, so you don't need to count it again.
Your main lesson should be: double counting is fine if you are aware of it. Afterwards just subtract once what you have double counted and things are repaired. The video approach beats yours in clearness and elegancy. Promote it to be your approach from now on. Try to get familiar with the so-called "principle of inclusion/exclusion".
@drhab thanks a lot for the showing me better way. I thought that my approach is simpler (less complex). But now it feels rather crude. So I'll focus on video's approach (I wasn't seeing it's clarity or elegance before). And yes! your 1st sentence is exactly what video said.
|
STACK_EXCHANGE
|
Novel–Release that Witch–Release that Witch
Chapter 1276 tasteless death
Valkries shook her go and forced herself to settle down. She was certain that the Cloud University vanished. Immediately after their kind busy the northwest of the Property of Dawn, Valkries been to the hill in which the institution used to be found every a century and would stay in the ruins on the institution setting up for two days or weeks every time she moved there.
“It’s my moment away from. I planned to sleep in. Because of the a.s.sociation, I have to get up earlier all over again,” Roland reported, incapable of aid themselves. He was very worn out following the getting together with in regards to the immigration insurance plan. As period in the Dream World traveled three times faster than that in the real world, he had wanted to require a great rest within his goal. It may well not merely conserve him a lot of time but would also offer the Taquila witches a chance to appreciate theirselves.
Even so, the truth was always harsh. Garicia experienced known as him at noon and well informed him he was required to go to the survived a.s.sociation subscribers during the medical center. Each of the celebrated martialists as well as the professionals could be there.
Possibly, this has been an opportunity for her to determine the key reason why.
As her t.i.tle advised, the Transformer acquired acquired the opportunity to improve immediately after merging together with her following wonder stone. Therefore, she rarely unveiled what she truly checked like. A lot of time she disguised herself like a individual. Considering the fact that she could communicate the human expressions with complete confidence, many people would believe that she was individual to start with.
Most likely, this became an opportunity for her to determine the reason why.
Another was which the Transformer possessed came back towards the Realm of Intellect prior to being devoured by her magic strength along with thereby produced her very own territory. This way of thinking might clarify why the actual existence of Lan did not boost any suspicions, but it failed to talk about the odd atmosphere.
In fact, the “Transformer” was maybe the very first person that experienced produced an attempt to blend with four magical rocks.
Lan over the the television was the ident.i.ty she acquired used usually.
“What’s improper? You didn’t sleep perfectly?” Garcia, who has been now placed in the pa.s.senger seat, inquired. For whatever reason, Roland noticed that Garcia experienced grow to be a great deal more courteous to him given that she experienced remained over at his location that night.
“The a.s.sociation desires to re-create trust inside the Prism Town following this ma.s.sive infiltration,” Garcia commented while raising her brows. “I think exactly what they really decide to do would be to carry a conference in the evening.”
Valkries had also inquired the master whether he experienced viewed that apostle, nevertheless the queen denied.
The Transformer also asserted that if she could stablize herself inside the chaotic Whole world of Head, she could communicate with that whispering tone. Unfortunately, she was not potent enough to take action.
Acquired she was successful, she would have become the very first “Elderly Lord” of the clan. At that time, there was very few Poor Demons, much less a Mature Lord.
Most likely, this is an opportunity for her to determine the main reason.
Valkries got requested the Transformer why she want to show herself in this manner, for she considered that the face area she possessed created failed to belong to any prominent historical figures.
Another was how the Transformer experienced given back towards the An entire world of Brain prior to being devoured by her very own wonder potential along with thereby produced her own territory. This hypothesis might talk about why the existence of Lan did not improve any suspicions, nevertheless it neglected to reveal the unusual environment.
One particular was that society belonged to your apostle “Lan”. On the other hand, as reported by the headlines around the t . v ., Lan was lifeless. That did not understand, for any designer associated with a territory would not perish or depart the Whole world of Mind.
In those days, gossip about the Combat of Divine Will had been spread throughout the complete clan, and they also got perceived the humankind about the Property of Dawn his or her prospective enemies.
Valkries obtained questioned the Transformer why she needed to show herself in this manner, for she believed that the face area she obtained built failed to belong to any dominant famous amounts.
Release that Witch
In those days, rumors in regards to the Battle of Divine Will ended up being spread throughout the complete clan, plus they obtained observed the humankind around the Territory of Daybreak his or her likely enemies.
Now, the witches can have entertaining by themselves.
Got she became popular, she might have end up being the primary “Senior citizen Lord” in the clan. In those days, there were a small number of Inferior Demons, not to mention a Senior Lord.
She explained this became the facial skin of your apostle.
As her t.i.tle suggested, the Transformer got obtained a chance to convert after merging along with her 2nd wonder material. As a result, she rarely discovered what she truly looked like. Most of time she disguised herself for a individual. Given that she could talk the human expressions fluently, most people would think that she was human in the beginning.
This given Roland using a excellent opportunity to carry out his own research.
Valkries stared at this comfortable confront, and her feelings strayed back in one thousand in the past. Even though woman’s countenance and outfits were definitely slightly several, Valkries was positive it was subsequently exactly the same facial area.
Now, the witches may have entertaining alone.
Novel–Release that Witch–Release that Witch
|
OPCFW_CODE
|
/// Constant data type. Represents a container of two types, holding a value of the left type that remains constant, regardless of the transformation applied to it.
public struct Const<Constant, Variable> {
public let value: Constant
/// Initializes a constant value.
///
/// - Parameter value: Constant value to be wrapped.
public init(_ value: Constant) {
self.value = value
}
/// Changes the type of the right type argument associated to this constant value.
///
/// - Returns: The same wrapped value, changing the right type argument.
public func retag<Other>() -> Const<Constant, Other> {
Const<Constant, Other>(value)
}
}
// MARK: Instance of Semigroup for Const
extension Const: Semigroup where Constant: Semigroup {
public func combine(_ other: Const<Constant, Variable>) -> Const<Constant, Variable> {
Const(self.value.combine(other.value))
}
}
// MARK: Instance of Monoid for Const
extension Const: Monoid where Constant: Monoid {
public static var empty: Const<Constant, Variable> {
Const(.empty)
}
}
// MARK: Conformance to CustomStringConvertible for Const
extension Const: CustomStringConvertible where Constant: CustomStringConvertible {
public var description: String {
"Const(\(self.value.description))"
}
}
// MARK: Conformance to Equatable for Const
extension Const: Equatable where Constant: Equatable {}
|
STACK_EDU
|
“Impressive” turns out to be a good adjective for the version 3.0 update in general, which significantly improves an already highly competent product. Rhozet has made several useful streaming-codec-related improvements, enhanced watch-folder functionality, and improved the program’s interface. Encoding trials immediately revealed that Carbon Coder remains highly tuned for fast, multiprocessor efficiency.
Rhozet’s Carbon Coder has always offered great encoding quality, outstanding multicore processor utilization and encoding speed, and a scaleable offering that includes stand-alone and server-based products that can be expanded into a server farm. With the introduction of Carbon Coder 3.0, Rhozet has made several useful streaming-codec-related improvements, enhanced watch-folder functionality, and improved the program’s interface. If you’re in the market for an enterprise-class encoder, you should strongly consider Carbon Coder.
Rhozet is a spinoff from Canopus Corp. that was acquired by Harmonic Corp. in 2007. The Rhozet team, in addition to Carbon Coder, has designed and developed the batch encoding tool ProCoder, which has been licensed to Canopus/Grass Valley.
The Carbon Coder family has two products. Stand-alone encoder Carbon Coder costs $4,995 per computer (not CPU or CPU-core), and it can be run through its native interface, watch folders, or its XML-based API. Carbon Server costs $14,995 per server, with multiple Carbon Coder nodes configurable as a server farm. You can operate Carbon Server through its administrative web interface, locally or remotely, or via the same XML-based API that drives Carbon Coder.
I first reviewed Carbon Coder in April 2007 and just had a chance to look at the 3.0 release. In this review, I’ll provide an overview of Carbon Coder’s operation, walk through the enhancements, and describe output quality and performance.
Let’s look at Carbon Coder’s main interface. If you’ve worked with ProCoder, you’ll immediately note that the interface is identical, and to be sure, Rhozet has done a nice job filtering the advances from Carbon Coder down to ProCoder, which costs $499. What you don’t get with ProCoder is Carbon Coder’s extensive file input compatibility, the updated codecs and encoding parameters, and the ability to use ProCoder as a shared component in a rendering farm.
With both programs, the tabs on the left—source, target, and convert—drive the workflow. Click source to input one or more files. As mentioned, Carbon Coder can accept a broad range of input files, including all the usual suspects (AVI, MOV, MPEG, WMV, DV, etc.) and media containers such as HDV, MXF, GXF, LXF, and QuickTime. The program also supports standardized broadcast streams such as ATSC, DVB, and CableLabs; high-end cameras such as Sony XDCAM and Panasonic P2; high-end video servers such as Leitch VR and Nexio; Grass Valley Profile and K2; Omneon Spectrum; Quantel sQ; and files from Avid, Final Cut Pro, Premiere Pro, and Grass Valley EDIUS.
Once loaded, you can apply a range of filters to any source clip by double-clicking the clip and choosing a filter. Key filters include 601 correction, which expands the color space of your TV content for the web; 601 to 709 color correction for SD to HD conversions; adaptive deinterlacing, which I discuss below; gamma correction; Line 21 extraction for closed-captioned text; cropping; and bitmap keying.Once you’ve loaded your source files and applied filters, click Target to select target output parameters. Carbon Coder supports both individual presets, which contain one set of output parameters, and profiles, which contain multiple presets.
For example, I applied four encoding presets by choosing one profile. Note that in addition to applying filters to source files, I can also add them to presets—a subtle but very powerful feature. For example, I can attach different gamma adjustment settings to my Windows Media preset than my Flash preset while attaching a semitransparent bitmap to both.
Once you’ve customized your presets and profiles, you can access them in the main program or via watch folders, adding a highly useful layer of shared encoding capabilities.
|
OPCFW_CODE
|
Why are current sources not drawn explicitly in the datasheets' schematic diagrams?
Below is a schematic diagram given for an IC in its datasheet:
As you can see everything is drawn in detail in transistor level. I can see many BJT transistors and diodes for instance.
But the 3.5uA and 100uA current sources are not drawn in transistor level.
Why are the current sources hidden here and shown only as a current source symbol?
Why are not they drawn explicitly?
Most likely just to simplify the schematic. They would likely be implemented with various scaled and interconnected current mirrors as well as some sort of reference current generation circuit. One complicating factor is that this is 1 channel out of 2 on the chip, and it's entirely possible that the biasing circuitry is shared between the two channels in various ways.
It's common for the biasing circuitry to be relatively complex as it has to provide lots of different currents to different parts of the circuit, and maintain accuracy despite voltage and temperature variations. However, the biasing circuitry simply ends up generating some well-controlled reference voltages and currents, so drawing those on the schematic as sources makes it much easier to read and understand the important functional parts of the circuit.
Any known cases where actual single-JFET current sources are used and implemented as one part?
@rackandboneman -- usually only for a "bootstrap" if a pinch resistor isn't suitable for the job, as JFET current sources are kinda sloppy...
Drawing out a current source by showing the exact circuit used would often make things less clear, since the designs of practical current sources will often be sensitive to component variations. To understand way, look at a couple of simple ways a 1mA current source might be implemented on a chip which is specified only for operation at exactly 10.0 volts:
The approach on the right may work if the transistor happens to have a voltage drop of exactly 0.7 and a beta of exactly 43.01, or if its voltage drop and beta have the proper relationship, but uncertainty in those parameters (especially beta) could yield to significant variations in current. The circuit on the left would have an output current that is more sensitive to voltage drop, but far less sensitive to beta.
As technologies change, chip designers may have more or less precise control over different aspects of transistor behavior, and might thus have reason to favor one approach over the other. Someone using the part, however, would have little reason to care about whether the designer used a transistor with a very precisely controlled beta, or used emitter resistance to reduce beta dependence, or used some other means to ensure predictable behavior. There's thus no reason to include such details in an end-user schematic.
It must also be understood that just because a chip's schematic has transistors, it doesn't mean you can buy similar transistors. The schematic is there to understand the function of the chip, but you won't obtain anywhere near similar function/performance, not even at DC, if you just copy the circuit in discrete parts. The critical transistors in the chips are often specially arranged and partitioned for thermal compensation, and for compensation of across-the-die parameter variation. The various approaches to this were perfected in the 70s and early 80s, and are in the bag of tricks of most analog IC designers.
If you'd want to implement the comparator schematic you've mentioned, in discrete components, you'd be best served by using discrete programmable current sources. They look like transistors and come in 3-terminal packages: LM334! This chip is still hard to beat for quick experiments. It's a respectable part, considering how simple it is.
In analog circuit design, current sources and current mirrors are building blocks no different from transistors: you can just buy them and use them, no need to design your own unless you need some special properties that off-the-shelf solutions don't provide. Those usually aren't simple to implement out of discretes on a breadboard either.
|
STACK_EXCHANGE
|
from enum import IntEnum
def _map_numbers(s_min, s_max, t_min, t_max, value):
return round(
((t_max - t_min) * max(min(value, s_max), s_min) + (t_min * s_max) - (s_min * t_max)) / (s_max - s_min)
)
def _15db_range(value):
return _map_numbers(-15.0, 15.0, 0.0, 300.0, value)
class Channel(IntEnum):
SETUP = 0x00
"""Setup channel is used for internal purpose"""
INPUT_A = 0x01
INPUT_B = 0x02
INPUT_C = 0x03
INPUT_SUM = 0x04
OUTPUT_1 = 0x05
OUTPUT_2 = 0x06
OUTPUT_3 = 0x07
OUTPUT_4 = 0x08
OUTPUT_5 = 0x09
OUTPUT_6 = 0x0A
class InputChannel:
"""
Class representing a device channel object
"""
def __init__(self, channel: Channel, device):
self.channel = channel
self._device = device
self.gain: float = None
self.muted: bool = None
self.level: float = None
self.limited: bool = None
def set_gain(self, value: float):
"""
Set gain of channel
:param value: gain in dB from -15 up to 15, 0.1 step
"""
self.gain = value
return self._invoke(0x02, _15db_range(value))
def mute(self, value=True):
"""
Mute or unmute a channel
:param value: True if channel should be muted, False otherwise
"""
self.muted = value
return self._invoke(0x03, int(value))
def delay(self, value=True):
"""
Enable or disable delay of channel
:param value: True if delay should be enabled, False otherwise
"""
self._invoke(0x04, int(value))
def set_delay(self, value=True, channel=None):
"""
Set the long delay of the channel. For output channels set_short_delay can be used for finer settings.
:param value: delay in meters [0, 200], step: 0.05m
"""
# 0 ... 200m -> values 0 ... 4000 (= step of 5cm)
self._invoke(0x05, _map_numbers(0.0, 200.0, 0.0, 4000.0))
def send(self):
"""
When in batch mode, this will send the previous commands
:return: self
"""
self._device.send()
return self
def _invoke(self, parameter, value):
self._device._invoke(parameter, self.channel, value)
return self
class OutputChannel(InputChannel):
def set_source(self, channel: Channel):
"""
Set source of the output channel
:param channel: Channel A ... SUM
"""
assert Channel.INPUT_A <= channel <= Channel.INPUT_SUM
self._invoke(0x41, int(channel - Channel.INPUT_A)) # 0 ... 3 = A ... SUM
def set_polarity(self, inverse=False):
"""
Inverse polarity
:param inverse: True if polarity should be inverted
"""
self._invoke(0x49, int(inverse))
def set_phase(self, phase):
"""
Set phase of output in degree, input values are rounded
:param phase: phase in deg 0 ... 180 (step 5 deg)
:return:
"""
self._invoke(0x4A, _map_numbers(0, 180, 0, 36)) # 0 ... 36
def set_short_delay(self, value=True):
"""
Set the short delay of an output channel.
:param value: delay in millimeters [0, 4000], step: 2mm
"""
self._invoke(0x4B, _map_numbers(0.0, 4000.0, 0.0, 2000.0))
|
STACK_EDU
|
Flipped Lesson Assessing Websites 2015 01 12
Created 2 years ago
A lesson to teach 5th grade and beyond how to assess websites for research projects.
Email this Mix
Slide 1 - Researching on the Internet
- A Mini-Lesson on
- Assessing Websites
Slide 3 - How to avoid a Bat Boy-like mistake.
Slide 5 - Let’s try out what you learned.
- Follow the links to different websites then answer the questions.
Slide 6 - Who
- Check out these websites and see if you can figure out who wrote them, and if they are authorities on the topic.
Slide 7 - Who
- If you’re working with a partner, discuss what you discovered.
Slide 8 - What
- Check out these websites. Decide which research questions might they be “just right” for.
Slide 9 - What
- Check out these websites for content; then take the quiz.
- Be a hero! http://www.ready.gov/kids/games
- National Hazards Center: http://www.colorado.edu/hazards/dr/
- ESA Kids Earth: http://www.esa.int/esaKIDSen/Naturaldisasters.html
Slide 11 - When
- We want the most current information for our research. Therefore, it is important to know when the Webpage was last created or updated.
Slide 12 - Check out these websites and try to find the date it was written or updated. We don’t want old information. Then take the quiz.NASA for Education - Students: http://www.nasa.gov/audience/forstudents/index.html#.VKn3UyvF9pkAncient Civilizations: http://www.ushistory.org/civ/
Slide 14 - Where
- Where was it written? OR What is the website’s “address”?
- What does the extension of the webpage tell you? Can you trust it or do you need to investigate more, and verify information.
Slide 16 - Why
- Try to figure out why the author or authors created the webpages.
- Do they want to provide information?
- Do they want to sell you something?
- Do they want to persuade you to their opinion?
- What is the bias?
- Ancient Egypt: http://www.ancientegypt.co.uk/menu.html#
- Chemical Cuisine: http://www.cspinet.org/reports/chemcuisine.htm
- Go Wild: http://gowild.wwf.org.uk/
Slide 18 - How
- How is it written?
- Are there spelling and grammar mistakes?
- Is the information clearly written or very confusing?
- Is it filled with inaccuracies?
- Did the author site sources and use reliable sources?
Slide 19 - How
- How are these websites written? Grammar & spelling good? Accurate or inaccurate information? Confusing information?
- Check out them out and take the quiz.
- The Republic of Molossia: http://www.molossia.org/countryeng.html
- Buy Dehydrated Water: http://www.buydehydratedwater.com/
Slide 22 - You have completed this mini-class on Web Assessment.
Slide 23 - Credits
- Mini-lesson created by Lauren Williams using Office Mixes.
- Bat Boy animation created by Lauren Williams using Doodleinator.
- Researching on the Internet: Assessing Websites short created by Lauren Williams using Powtoon and a template created by infolicious.co
- Congratulations image: http://youfoundthecougarpaw.wikispaces.com/file/view/congrats.gif/293677098/congrats.gif
- Date Created: 1/12/15.
- Copyright. Fairfax County Public Schools. 2015.
|
OPCFW_CODE
|
You can find numerous rough subjects that a college student could possibly stumble upon while dealing with the R programming atmosphere or bundle. These matters are of utmost relevance and critical for the really approach of analysis and screening which requires professional r programming assignment help. Some of these typical topics in R programming dreaded by learners are Sturdy and Logistic Regression, Multinomial Logistic Regression, Poisson Examination and regression screening, Correct Logistic regression, canonical correlation Examination, multivariate analysis, zero-truncated Poisson, Damaging Binomial regression, Likelihood and estimations, Statistical indicate variance and conventional deviations, Speculation tests with Chi Square and T checks, Sampling, exam of significance, interval regression etcetera.
While in the curriculum, college students are generally supplied analytic considerations and supervisory difficulties being solved with R programming principles and plan and with our R programming homework help, they can attain superb grades constantly.
Finally contacting an R function normally finally ends up contacting some fundamental C/Fortran code. One example is the base R operate runif() only has a single line that contains a simply call to C_runif().
It is readable and easy to grasp. It is a wonderful language to express algorithms Our programming experts have shipped several productive projects based upon Python programming. Couple of on the projects which were shipped on brief deadlines were being: A movie sport, Projects based upon SQL and Python, purposes utilizing the ideas of queues, trees and lots of much more. Mark, our Python programming qualified can supply you with far more insight into this programming. When you have any Python Programming project or homework, please fill out the get variety and acquire the comprehensive solution with an entire documentation. It is pleasurable to understand Python Using the help of our authorities.
I'm now a recurring client for allassignmenthelp.com for their aggressive value and exceptional excellent. Certainly one of my buddies encouraged me allassignmenthelp.com and I was so joyful that he did so. I took assistane with my last calendar year dissertation and paid a reasonable price tag to the company.
If you prefer automatic garbage selection, there are superior industrial and public-area garbage collectors for C++. For programs in which rubbish collection is suitable, C++ is a superb rubbish gathered language with a functionality that compares favorably with other garbage collected languages. See The C++ Programming Language for the dialogue of automatic rubbish collection in C++. See also, Hans-J. Boehm's website for C and C++ garbage assortment. Also, C++ supports programming techniques that allows memory management to be safe and implicit with no rubbish collector.
Mathematica involves no time investment to find out, so you can begin utilizing it quickly. Get prepared on your future
Had I thought of a "C++ within" brand in 1985, the programming world may need been distinct currently. One particular basic thing that confuses quite a few discussions of language use/popularity is the distinction in between relative and complete measures. By way of example, I say (in 2011) that C++ use is expanding After i see user inhabitants expand by 200,000 programmers from 3.1M to three.3M. However, anyone else might claim that "C++ is dying" as it's "level of popularity" has dropped from sixteen p.c to 11 % of the whole quantity of programmers. Both of those claims might be at the same time genuine as the amount of programmers carries on to expand and especially as what is regarded as programming proceeds to vary. I imagine that C++ is more than holding its very own in its regular Main domains, such as infrastructure, programs programming, embedded units, and purposes with significant time and/or House and/or ability use constraints. See also my DevX interview. What is actually currently being performed to further improve C++?
There isn't any authorized cost-free machine readable copies of my publications. If you see a replica freely out there it has to be a copyright read the full info here violation (that is certainly, it had been stolen).
But with our State-of-the-art services, you pay out a small payment for the expert in algebra or in other fields to reply your concerns, giving you an opportunity to established a deadline and have as thorough an answer you desire.
After we come across a warning inside our code, it can be crucial to resolve the trouble and not only overlook The problem. Even though disregarding warnings saves time from the short-term, warnings can usually mask deeper troubles that have crept into our code.
The brand new regular library is additionally a real boon. The provision of strings, lists, vectors, maps, and fundamental algorithms for these kinds of fundamental forms would make A significant variance to just how you can tactic C++. See the library chapters with the C++ Programming Language or simply a Tour of C++ or amongst my modern papers. C++14 is better yet. When will We've got a C++ standard?
Even though writing an assignment of R programming, it can be crucial to contemplate a subject which can lead for the areas of research. It truly is the initial step of producing an assignment. A topic relevant to statistics needs to have the chance to provide the scope of using the assets and Checking out new details and information.
In accordance with some corners of the internet, I'm underneath the effect that vectors are generally better than joined lists Which I do not find out about other info structures, including trees (e.g. std::established ) and hash tables (e.g., std::unordered_map). Of course, that's absurd. The issue seems to be a fascinating little work out that John Bentley the moment proposed to me: Insert a sequence of random integers into a sorted sequence, then take away Those people components one after the other as determined by a random sequece of positions: Do you employ a vector (a contiguously allotted sequence of components) or maybe a linked listing? For example, see Software Growth for Infrastructure. I exploit this instance to illustrate some points, motivate thought about algorithms, knowledge buildings, and equipment architecture, concluding: Do not keep knowledge unnecessarily, hold details compact, and obtain memory in a very predictable manner. Take note the absence of ``listing'' and ``vector'' from the conclusion. Make sure you Will not confuse an example with what the instance is meant As an instance. I used that case in point in a number of talks, notably: My 2012 ``Going Native'' Keynote. This movie has long been well known: It has been downloaded in excess of 250K occasions (additionally A further 50K+ times at verious other web-sites). My impact is a large number of viewers didn't recognize that the objective of that illustration is For instance some normal rules and to generate folks think. Originally, plenty of people say ``Record certainly!'' (I have tried out asking that query many times) as a result of several insertions and deletions ``in the middle'' (lists are good at that). That respond to is totally and dramatically Completely wrong, so it is nice to understand why. I have already been using the example for years, and experienced graduate pupils implement and measure dozens of variants of the exercise and distinct exercises. Examples and measurements by Some others can be found online. Certainly, I've tried out maps (they are much a lot better than lists, but nevertheless slower than vectors) I've tried using much larger things measurements (finally lists appear into their own) I've utilised binary look for and immediate insertion for vectors (Sure, they quicken even even further) I checked my concept (no I am not violating any major-O complexity rule; it is just that some operations is often dramatically dearer for a person knowledge structure compared to An additional) I have preallocated inbound links (which is much better than std::listing even so the traversal nevertheless kills efficiency) I have made use of singly-linked lists, forward_lists, (that does not make Substantially difference, but can make it a tiny bit tougher in order that the consumer code is one hundred% equivalent) I know (and say) that 500K lists usually are not popular (but that doesn't make a difference for my principal stage).
|
OPCFW_CODE
|
- In computer structural design, 64-bit integers, memory addresses, or other data units are those that are at most 64 bits (8 bytes) broad.
- 64-bit CPU and ALU structural designs are those that are based on registers, buses, and address or data buses of that dimension.
- Since 1960s 64-bit CPUs have survived in supercomputers and in early 1990s RISC-based workstations and servers.
- In 2003 they were launched to the mainstream personal computer arena, in the form of the x86-64 and 64-bit PowerPC processor structural designs.
- A CPU that is 64-bit internally might possess peripheral data buses or address buses with a diverse size, either larger or smaller.
- The terminology "64-bit" is frequently used to depict the size of these buses.
- Many present machines with 32-bit processors use 64-bit buses like the unique Pentium and later CPUs, and may occasionally be referred to as "64-bit" for this cause.
- Similarly, some 16-bit processors like the MC68000 were regarded to as 16-/32-bit processors as they had 16-bit buses, but had some interior 32-bit capacities. The terminology also refer to the size of an instruction in the computer's instruction set.
- 64-bit double-precision floating-point quantities are familiar. Generally "64-bit" computer structural design possess integer registers that are 64 bits extensive, which permits it to support both within and on the exterior 64-bit "chunks" of integer data.
- Generally registers in a processor are dissected into three groups like integer, floating point, and other.
- In all familiar universal purpose processors, only the integer registers are competent of storing pointer values.
- The non-integer registers cannot be utilized to accumulate pointers for the function of reading or writing to memory, and therefore cannot be utilized to bypass any memory restrictions forced by the dimension of the integer registers.
- Nearly all familiar universal purpose processors have integrated floating point hardware, which may or may not use 64-bit registers to hold data for processing.
Architectural implications Example
- The x 86 structural designs comprise the x87 floating-point instructions which utilize 8 80-bit registers in a stack configuration.
- The later revisions of x86, also incorporates SSE instructions as it use 8 128-bit broad registers.
- By disparity, the 64-bit Alpha family of processors describes 32 64-bit extensive floating point registers in totaling to its 32 64-bit ample integer registers.
64-bit processor timeline
- 1961: IBM transports the IBM 7030 Stretch supercomputer, which utilizes 64-bit data words and 32 or 64-bit instruction words.
- 1974: Control Data Corporation introduces the CDC Star-100 vector supercomputer, which utilizes 64-bit word structural design.
- 1976: Cray Research distributes the first Cray-1 supercomputer, which is based on a 64-bit word structural design and would form the basis for later Cray vector supercomputers.
- 1983: Elxsi introduces the Elxsi 6400 parallel mini super computer.
- 1991: MIPS Technologies generates the first 64-bit microprocessor.
- The CPU is utilized in SGI graphics workstations starting with the IRIS Crimson.
- Moreover, 64-bit hold up for the R4000 would not be integrated in the IRIX operating system until IRIX 6.2, launched in the year 1996.
- Kendall Square Research transports the initial KSR1 supercomputer, based on a proprietary 64-bit RISC processor architecture running OSF/1.
- 1992: Digital Equipment Corporation (DEC) launches the clean 64-bit Alpha structural design which was originated from the PRISM project.
- 1993: DEC launches the 64-bit OSF/1 AXP Unix-like operating system and the OpenVMS operating system for Alpha systems.
- 1994: Intel declares plans for the 64-bit IA-64 structural design as a successor to its 32-bit IA-32 processors.
- A 1998-1999 launch date is targeted. SGI introduces IRIX 6.0, with 64-bit hold up for R8000 CPUs.
- 1995: Sun introduces a 64-bit SPARC processor, the UltraSPARC. Fujitsu-owned HAL Computer Systems introduces workstations based on a 64-bit CPU, HAL's in parallel designed first generation SPARC64.
- IBM introduces the 64-bit AS/400 system upgrade, which can renovate the operating system, database and applications.
- 1996: Nintendo launches the Nintendo 64 video game console, built around an economical alternative of the MIPS R4000.
- HP introduces an implementation of the 64-bit 2.0 version of their PA-RISC processor structural design, the PA-8000.
- 1997: IBM introduces the RS64 line of full-64-bit PowerPC processors.
- 1998: IBM launches the POWER3 line of full-64-bit PowerPC/POWER processors.
- Sun releases Solaris 7, with full 64-bit UltraSPARC hold up.
- 1999: Intel launches the instruction set for the IA-64 structural design.
- AMD publicly reveals its set of 64-bit extensions to IA-32, called x86-64.
- 2000: IBM ships its first 64-bit ESA/390 is well-matched mainframe, the zSeries z900, and its new z/OS operating system.
- 2001: Intel lastly ships its 64-bit processor line, now branded Itanium, targeting high-end servers.
- 2002: Intel releases the Itanium 2 as a descendant to the Itanium.
- 2003: AMD launches its Opteron and Athlon 64 processor lines, based on its AMD64 structural design.
- 2004: VIA Technologies procliams the Isaiah 64-bit processor.
- 2005: On 31st January 31 Sun introduces Solaris 10 with support for AMD64 and EM64T processors.
- On April 30, Microsoft launches Windows XP Professional x64 Edition for AMD64 and EM64T processors.
- 2006: Dual-core Montecito Itanium 2 processors go into production.
- Sony, IBM, and Toshib started producing the 64-bit Cell processor for utilization in the PlayStation 3, servers, workstations, and other appliances. Apple features 64-bit EM64T Xeon processors in its latest Mac Pro and Intel Xserve computers, and later updates the iMac, MacBook and MacBook Pro to use EM64T Core 2 processors.
- 2007: Intel's Dual Core and Quad Core become the existing 64-bit processors in production based on 65nm technology.
64-bit SuSE Linux Enterprise Server 7 for IBM eServer zSeries
- 64-bit SuSE Linux Enterprise Server 7 for IBM eServer zSeries is the SuSE Linux corporate advancement towards operating system.
- The 64-bit edition inflates the potential of SuSE Linux Enterprise Server 7 intended for IBM’s S/390 and the zSeries.
- Complex database applications have much larger address space and not locked up to a 2 GB memory margin.
- In order to fully maintain the significance of ventures in existing applications, the 32-bit SuSE Linux Enterprise Server 7 this is having the support of shared libraries the parallel operation of 32-bit and 64-bit applications on the same machine, within the same Linux instance.
- 64-bit SuSE Linux Enterprise Server 7 fully supports the AMD’s Hammer family of processors.
64-bit Itanium processor Red Hat Enterprise Linux 5
- Red Hat organization is promoting its operating system and services to Multi National Companies as part of its MNC customers.
- Red Hat released Advanced Server product is reliable and gives high performance.
- Red Hat‘s Advanced Server product offering best solutions to customers for simple migration from UNIX to Linux.
- 64-bit Intel’s Itanium processor Version 7.2 accessibility was announced by Red Hat in January 2002.
- 64-bit Intel’s Itanium processor Version 7.2 utilizes the 2.4.9 Linux kernel and integrates the same features set as the general use version.
- Red Hat introduced Itanium-compatible version of Red Hat 7.1 and also IBM’s a version for S/390 mainframe computers.
64-Bit Guest Operating System NetWare
64-bit guest operating system’ s virtual machines support Workstation 5.5 and runs on host systems with the subsequent processors:
- AMD Sempron, 64-bit-capable revision D or later (trial)
- AMD Athlon, 64, revision D or later
- AMD Opteron, revision E or later
- AMD Turion, 64, revision E or later
|
OPCFW_CODE
|