Document
stringlengths
395
24.5k
Source
stringclasses
6 values
mongo plugin crashing after updating intellij here is the stacktrace: null java.lang.NullPointerException at org.codinjutsu.tools.mongo.view.MongoExplorerPanel$2.run(MongoExplorerPanel.java:124) at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.run(LaterInvocator.java:319) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:251) at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:733) at java.awt.EventQueue.access$200(EventQueue.java:103) at java.awt.EventQueue$3.run(EventQueue.java:694) at java.awt.EventQueue$3.run(EventQueue.java:692) at java.security.AccessController.doPrivileged(Native Method) at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76) at java.awt.EventQueue.dispatchEvent(EventQueue.java:703) at com.intellij.ide.IdeEventQueue.e(IdeEventQueue.java:697) at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:524) at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:335) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138) at java.awt.EventDispatchThread.run(EventDispatchThread.java:91) java.lang.NullPointerException at org.codinjutsu.tools.mongo.view.MongoExplorerPanel$2.run(MongoExplorerPanel.java:124) at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.run(LaterInvocator.java:319) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:251) at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:733) at java.awt.EventQueue.access$200(EventQueue.java:103) at java.awt.EventQueue$3.run(EventQueue.java:694) at java.awt.EventQueue$3.run(EventQueue.java:692) at java.security.AccessController.doPrivileged(Native Method) at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76) at java.awt.EventQueue.dispatchEvent(EventQueue.java:703) at com.intellij.ide.IdeEventQueue.e(IdeEventQueue.java:697) at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:524) at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:335) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138) at java.awt.EventDispatchThread.run(EventDispatchThread.java:91) Same issue here: https://gist.github.com/jorgeartware/396a8cad56d8dd8f8861 It crashed during webstorm restart. Not sure if it is relevant but I had just installed pyCharm community edition while I had webstorm open at the same time and then closed pycharm and restarted webstorm, it was about to finish loading just before this error showed up. Could it be an encapsulation error/conflict of the intellyJ api maybe? Hi, Thanks for this stacktrace. The root cause seems to be the same than the stacktrace mentioned at the top of this thread. Could you please remove the plugin, reinstall it from Webstorm and tell me if the bug still occurs? Thanks. Hi, I uploaded a snapshot version tested on Webstorm 9. It should work on Idea 14 too. Please tell me if it works on your IDE. David Release 0.7.0 available on Jetbrains plugin repository
GITHUB_ARCHIVE
Windows 10 Pro Key Cheap - www.promofinish2.com windows 10 pro key cheap ue public int Add int a, int b if header. UserName jeffpro header. Password imbatman return a b else throw new HttpException 401, Not authorized AuthHeader is a custom type derived from System. Web. Services. Protocols. SoapHeader that represents a SOAP header. The SoapHeader attribute affixed to the Add method tells ASP. NET to reject calls that is, throw SoapHeaderExceptions to Add that lack AuthHeaders . ryByControl XmlSrc If you customize XmlNavBar controls by varying multiple properties, you can separate the property names with semicolons in the OutputCache directive OutputCache Duration 3600 VaryByParam None VaryByControl XmlSrc ForeColor MouseOverColor Now the control won t be physically executed more than once an hour, and no matter how many pages contain XmlNavBars and how often those pages are reque. , Fields, windows 10 key code purchase , which are analogous to member variables in C Methods, which are analogous to member functions in C Properties, which expose data in the same way fields do but are in fact implemented using accessor get and set methods Events, which define the notifications a class is capable of firing Here, in C, is a class that implements a Rectangle data type class Rectangle Fields protected int width 1 protected. windows 10 pro key cheap, Drawing. Size 40, windows 7 ultimate genuine key download , 32 this. MultiplyButton. TabIndex 1 this. MultiplyButton. TabStop false this. MultiplyButton. Text x this. MultiplyButton. Click new System. EventHandler this. MultiplyButton Click OneButton this. OneButton. Location new System. Drawing. Point 64, 177 this. OneButton. Name OneButton this. OneButton. Size new System. Drawing. Size 40, 32 this. OneButton. TabIndex 2 this. OneButton. TabStop. windows 10 pro key cheap, Currency Currencies. DataBind This method is simpler and more intuitive. It also makes the code more generic by eliminating direct interactions with the DataSet. How does data binding work All list controls inherit from ListControl properties named DataSource, DataTextField, and DataValueField. DataSource identifies a data source. It can be initialized with a reference to any object that implements the FCL. windows 10 pro key cheap the time. Why Because it builds the string in a buffer large enough to hold 512 characters StringBuilder sb new StringBuilder 512 for int i 1 i 99 i sb. Append i. ToString sb. Append, string s sb. ToString StringBuilder. Append will enlarge the buffer if necessary, but if you know approximately how long the string will be before you start, you can size the buffer accordingly and keep new memory allocations. tication, as shown in Figure 10 4. OK the changes, and then close the configuration manager. Create two user accounts on your Web server for testing purposes. Name the accounts Bob and Alice. It doesn t matter what passwords you assign, only that Bob and Alice are valid accounts on the server. Copy General. aspx, Salaries. aspx, windows 7 professional sp1 genuine key , Bonuses. aspx, Bonuses. xml, and Web. config to the Basic directory. Change th. , Windows 7 Starter to Ultimate Anytime Upgrade , windows 7 enterprise key shop , u call DrawRectangle and FillRectangle, you furnish coordinates specifying the position of the rectangle s upper left corner and distances specifying the rectangle s width and height. By default, distances are measured in pixels. Coordinates specify locations in a two dimensional coordinate system whose origin lies in the upper left corner of the form and whose x and y axes point to the right and down, res. 10, use it lacks a WebMethod attribute. FindStores returns the array to the client. Bookstore is a custom type defined in the ASMX file. Figure 11 8 shows the XML returned when FindStores is called with the input string CA. The array of Bookstore objects has been serialized into XML. The serialization is performed by the. NET Framework s System. Xml. Serialization. XmlSerializer class, otherwise known as the X. windows 10 pro key cheap. windows 10 pro key cheap. rDigit characters i i Return the word return line. Substring start, i start Regular Expressions One of the lesser known but potentially most useful classes in all of the. NET Framework class library is Regex, which belongs to the System. Text. RegularExpressions namespace. Regex represents regular expressions. Regular expressions are a language for parsing and manipulating text. A full treatment of the lan. windows 10 pro key cheap ing System using System. Drawing using System. Windows. Forms using System. Xml class XmlViewForm Form GroupBox DocumentGB TextBox Source Button LoadButton ImageList NodeImages TreeView XmlView public XmlViewForm Initialize the form s properties Text XML Viewer ClientSize new System. Drawing. Size 488, 422 Instantiate the form s controls DocumentGB new GroupBox Source new TextBox LoadButton new Button XmlV. 10 pro key cheap - umber CreditCardNumber. Text if number. Length 0 sb. Append CreditCardNumber sb. Append number Response. Redirect sb. ToString script Figure 6 21 Source code for Spammers, Inc. Thanks. aspx Page Language C html body Here s the information you entered br br ul Response. Write li Name Request Name Response. Write li E mail address Request EMail if Request Address null StringBuilder sb new StringBuilder li Ad. windows 10 pro key cheap, nternet Explorer s Delete Cookies command, but the operation can be performed manually or with the help of third party utilities, too. Web servers delete cookies by doing the following Returning Set Cookie headers containing the names of the cookies to be deleted, accompanied by null values Including in those Set Cookie headers expiration dates identifying dates in the past This Set Cookie header commands . new TreeNode text, 4, 4 break case XmlNodeType. XmlDeclaration case XmlNodeType. ProcessingInstruction text String. Format 0 1, xnode. Name, xnode. Value tnodes. Add child new TreeNode text, 5, 5 break case XmlNodeType. Entity text String. Format ENTITY 0, xnode. Value tnodes. Add child new TreeNode text, 6, 6 break case XmlNodeType. EntityReference text String. Format 0, xnode. Value tnodes. Add child ne.
OPCFW_CODE
Paper Review: Scalable Pre-training of Large Autoregressive Image Models AIM is a collection of vision models pre-trained with an autoregressive objective inspired by LLMs and demonstrating similar scaling properties. The authors’ findings include the scaling of visual feature performance with model capacity and data quantity and a correlation between the objective function value and model performance in downstream tasks. 7B AIM pre-trained on 2 billion images achieves 84.0% on ImageNet1k with a frozen trunk without showing performance saturation, indicating a potential new frontier in large-scale vision model training. AIM’s pre-training process mirrors that of LLMs and doesn’t require unique image-specific strategies for stable scaling. The models are pre-trained on the DFN dataset, a subset of 12.8B image-text pairs from Common Crawl, refined to 2B high-quality images by removing inappropriate content, blurring faces, deduplicating, and ranking based on image-caption alignment. No content-based curation is involved, allowing the potential use of larger, less aligned image collections. For pre-training, a blend of DFN-2B (80%) and ImageNet-1k (20%) is used, emulating the LLM practice of oversampling high-quality data, creating the DFN-2B+ dataset. The training objective uses an autoregressive model on image patches, treating each image as a sequence of non-overlapping patches. The probability of an image is the product of conditional probabilities of each patch, given previous patches. The training loss is the negative log-likelihood of these probabilities, aiming to learn the true image distribution. The basic loss is a normalized pixel-level regression, minimizing the L2 distance between predicted and actual patch values. A cross-entropy loss variant with discrete tokens is also tested, but pixel-wise loss yields stronger features. AIM uses ViT architecture, prioritizing width over depth for scaling. It employs causal masks during pre-training for autoregressive modeling of image patches, ensuring efficient computation. To bridge the gap between autoregressive pre-training and bidirectional attention in downstream tasks, a Prefix Transformer approach is introduced, treating initial patches as context. Simple MLP prediction heads are used during pre-training to maintain feature generality for downstream tasks. AIM doesn’t require optimization stability mechanisms and uses sinusoidal positional embeddings and a standard MLP design. For downstream adaptation, AIM focuses on scenarios with fixed model weights, only training a classification head to minimize adaptation costs and overfitting risks. Unlike contrastive learning, AIM’s loss is computed per patch without global image descriptors. It uses attention pooling over patch features to create a global descriptor, enhancing performance with minimal parameter increase. This method (Attentive Probe) maintains the benefits of linear probing, such as low parameter count and reduced overfitting risk, while offering improved performance. - The performance scales with the number of training iterations, model capacity, dataset size, and longer training; - Some of the ablation studies show that the following options work the best: MST with normalized pixel values (among patches representations), raster pattern (among autoregression patterns), prefix attention (vs causal attention), wider architecture (vs deeper), attentive probe (among the probes), autoregressive objective (vs masking); AIM demonstrates strong performance among generative methods, outperforming BEiT and MAE models of similar or larger capacities, even when the latter is trained on a large private dataset. It also shows competitive results against joint embedding methods such as DINO, iBOT, and DINOv2. Although AIM is outperformed by DINOv2, which uses higher-resolution inputs and various optimization techniques, AIM’s pre-training is simpler and more scalable in terms of parameters and data, leading to consistent improvements. Interestingly, higher-quality features can be extracted from shallower layers of the model rather than the last layer, suggesting that the generative nature of AIM’s pre-training objective allows for the distribution of semantically rich features across different layers, not just concentrating them around the last layer. LoRA can further improve the performance.paperreview deeplearning cv
OPCFW_CODE
Crazy! Are You Really A Beast Tamer?-Chapter 113 - Change of She Wan!_1 Chapter 113: Chapter 113 Change of She Wan!_1 The female assistant’s words were abruptly cut off by Lu Yuyu. “Why are you announcing when Sir Jianmu’s guard is looking for me instead of bringing him directly to me!” Lu Yuyu exclaimed as he promptly strode towards the door. Seeing Lu Yuyu’s reaction, Lin Ziyuan also stood up, deep in thought. Sir Jianmu must be the disciple of the Grandmaster Creator that Lu Yuyu was referring to! Considering the visitor was a disciple of the Grandmaster Creator, Lu Yuyu’s response wasn’t too extreme. If it were him, Lin Ziyuan thought, his reaction might be even stronger! As Lu Yuyu walked towards the door, his mind began to race. Previously, the only person Lu Yuyu had interacted with from Fang Mu’s side was Lu Xiaoyin. But Lu Xiaoyin clearly didn’t know Fang Mu at first, and basically started interacting with Fang Mu around the same time as him. Later, he didn’t know what Lu Xiaoyin did to be fortunate enough to become Fang Mu’s private steward. The person claiming to be Fang Mu’s guard was obviously someone he had never encountered before. This made Lu Yuyu a little nervous, but also curious. What was the reason for Fang Mu’s guard seeking him out? The female assistant felt a little aggrieved at Lu Yuyu’s words. If it wasn’t for the fact that Lu Yuyu had previously instructed her to inform him immediately of anything related to Jianmu, regardless of what she was doing, she wouldn’t have attempted to relay the message while he was in conversation with Lin Ziyuan. There had never been an instance in the past where someone was allowed to interrupt when Lu Yuyu and Lin Ziyuan were together. The assistant opened the door completely, then stepped aside and gestured towards the office. She Wan, with a cold and majestic demeanor, strode into Lu Yuyu’s office. When Lu Xiaoyin first visited Lu Yuyu’s office, she was somewhat restrained. After all, interacting with Lu Yuyu within the Beastmaster Alliance was a completely different experience from outside. But for She Wan, Lu Yuyu’s status both inside and outside didn’t make any difference. She had seen many characters like Lu Yuyu, growing up! Originally, for her first meeting with Fang Mu, She Wan had chosen a custom tailored outfit designed and produced by a master-level weaver. She Wan wore such clothing in order not to appear ostentatious during her first encounter with Fang Mu. Otherwise, She Wan usually likes to purchase clothes tailored by grandmaster weavers. For a family like the Lu Dushe clan, which has a thousand-year-old heritage, clothing represents dignity. It’s a way to showcase their status! That’s the kind of education She Wan has been receiving from an early age. Having known that Fang Mu dresses plainly and humbly, She Wan put away all the tailored clothes she had gotten from the weaver. And replaced them with ordinary styles that could be bought in the marketplace. Wearing a deep blue dress that cost only three Dragon Rising Coins, She Wan should have appeared rather mature. However, She Wan’s features were already radiant, with her bright red lip gloss providing further enhancement. This simple, dark dress concealed She Wan’s physical advantages. Yet, it accentuated the exquisiteness of her features and her natural demeanor. At first glance, both Lu Yuyu and Lin Ziyuan were somewhat taken aback by She Wan. Lu Yuyu quickly pulled out a chair and cordially said to She Wan, “Please have a seat! What kind of tea do you prefer?” “In my office, I only have Golden Bud Sidelong Leaf and Purple Snow Green. Those are my special teas.” “If you don’t like either, I can have someone make you some juice!” While speaking, Lu Yuyu kept observing She Wan, completely oblivious to the surprised look on Lin Ziyuan’s face. Through his observation of She Wan, even though her attire was simple, Lu Yuyu could instantly ascertain that she exuded an aura typical of a rich family’s child. An aura that both he and Lin Ziyuan possessed. Having joined the Jing Hai army, Luo Su had slowly honed this aura into one of iron-blood and tenacity. Compared to him and Lin Ziyuan, the woman in front of him exuded a more noble and superior aura. It suggested that if she truly was born to a wealthy family, her family’s influence would be more formidable than either his or Lin Ziyuan’s. She Wan chose not to sit in the chair that Lu Yuyu had pulled out for her. She was here to deliver a message. After delivering the message, She Wan planned to immediately go meet the private steward named Lu Xiaoyin. It would also be beneficial to learn about my competitor as early as possible. Knowing oneself and the enemy ensures victory in every battle! She Wan saw herself as Fang Mu’s guard, before understanding the relationship between Lu Yuyu and Fang Mu. When engaging with Lu Yuyu, She Wan found it challenging to gauge the appropriate attitude to adopt. Consequently, She Wan directly retrieved the Rainbow Devouring Sea Python from her diamond Space Spirit Tool. A python approximately one hundred and thirty or forty meters long, thicker than her own six-meter-diameter wine cabinet, appeared in her office. It startled Lu Yuyu and Lin Ziyuan! The scales of the Shimmering River-Swallowing Python were black, shimmering with a rainbow glow under the light. The scales of the Rainbow Devouring Sea Python were directly colored rainbow, reflecting a hazy green light; the difference is staggering. However, Lu Yuyu and Lin Ziyuan quickly recognized the python Guardian Beast summoned by Sir Jianmu’s beautiful guard, She Wan. It was actually Luo Su’s Guardian Beast Python! Both of them sensed the legendary quality of the Rainbow Devouring Sea Python from its aura. This was clearly different from what Luo Su and Fang Mu had agreed upon. Before Lu Yuyu had time to react, She Wan spoke in a cold tone, “Young master has said he asked the master to extra enhance the quality of this Guardian Beast Python to Legendary Quality,” “Partly because he wanted to help you.” “And partly because the young master hoped you could support Fang Qin!” She Wan clearly understood how to promote Lu Yuyu’s favor. She believed that Lu Yuyu’s ability to form a connection with Fang Mu was pure luck. It was fortunate that Fang Mu was willing to forge a friendship with a small fry like Lu Yuyu! Upon hearing this, Lu Yuyu did not know what to say for a moment. Lu Yuyu had already owed a favor to Fang Mu. Helping Fang Mu to compete for the sacred Sakura Tempering Carp was his attempt to repay this favor. freew(e)bnovel.(c)om Yet for this, Fang Mu had promised him a vial of Life Elixir with a purity of eighty percent. The favors he had called in were plentiful, but they amounted to no more than twenty million Dragon Rising Coins. Given that he already owed a favor to Fang Mu, Fang Mu was willing to give Luo Su such an opportunity. Lu Yuyu felt that perhaps his greatest achievement since reaching adulthood was becoming acquainted with Fang Mu. ” I will relay these words to Luo Su! Luo Su will thank Sir Jianmu in person when the time comes!” She Wan didn’t say much upon hearing this, but she generally deduced that Lu Yuyu and Luo Su must have very good relations with Fang Mu, Otherwise, he wouldn’t have mentioned “thanking in person”! The face of a Grandmaster’s disciple is not something anyone can casually see! She Wan took out the brocade box that Fang Mu had given her and handed it to fr(e)enovelkiss.com She accurately conveyed Fang Mu’s words with careful embellishment. Then, without giving Lu Yuyu the chance to invite her to dinner, She Wan immediately left the Beastmaster Alliance. she went to find her competitor, Lu Xiaoyin! Lu Yuyu could already anticipate how elated Luo Su would be when seeing this Rainbow Devouring Sea Python! Since he had introduced Luo Su to Fang Mu and the Shimmering River-Swallowing Python evolved this far because of him, he was responsible for this connection. Perhaps if he proposed to Luo Su while she was excited about the Rainbow Devouring Sea Python, his chances of success could be very high! Perhaps his belated fate with Luo Su will be facilitated by Fang Mu! Lu Yuyu believed that Luo Su would clearly know what to do when it comes to looking after Fang Qin even if he didn’t mention it. But since Fang Mu’s guard had emphasized this matter, implying that Luo Su should repay the favor of enhancing the quality of the Rainbow Devouring Sea Python to Fang Qin, Lu Yuyu decided to have a serious talk with Luo Su. He would let Luo Su find more resources for Fang Qin when Luo Su came back to his family. Lu Yuyu opened the brocade box and looked at the elixirs inside. He then planned to visit the Lionheart Brigade in person and bring up Fang Mu’s request to Liu Jihui. Afterwards, he intended to personally select twenty capable hands from Liu Jihui. Since Lin Ziyuan had come to find him and the channels for competing for the sacred Sakura Tempering Carp were all in the capital, He might as well return to the capital with Lin Ziyuan for some maneuvering! Once he got to the capital, he could let his family send people to assist Fang Mu’s partners to build up the infrastructure of the Mu Commerce Association! Lu Yuyu was curious about what kind of partners could earn Fang Mu’s trust. Supplying a hundred bottles of Life Elixir with sixty percent purity daily, and directly appointing them as Red Sleeve of his commerce association. A person of such trust from Fang Mu, Lu Yuyu felt it was necessary to meet! This content is taken from freewebnove(l).com
OPCFW_CODE
How do I stream audio from the browser to a server? I'm trying to make a simple lowish latency recorder. I currently use the below code to capture and transmit audio using Socket.IO. However, only the first packet is playable because it is the only packet that contains header information. I have also tried creating new streams every second. It did not work because the audio would break up between recording sessions. My question is how can I create a stream that takes input from a browser, transmits it to a server (it cannot be peer-to-peer), and can be played in another browser in near real-time. I know this is possible because apps like Discord use it. This is the code I'm currently using. After the first packet, Chrome returns this error: Uncaught (in promise) DOMExecption: play() failed to load because no supported source was found. I've been working on this for days, please help! function schedulePacketSend() { if (recording) { navigator.mediaDevices.getUserMedia({audio:true}).then(function(mediaStream) { const rec = RecordRTC(mediaStream, { type:'audio', mimeType:'audio/webm; codecs=opus', recorderType:StereoAudioRecorder, ondataavailable:function(e){ console.log('sending packet') console.log(e) socket.emit('audio-up', {audio:e.data, client:id}) }, timeSlice:1000 }) rec.startRecording() }); } } Thanks so much, Jackson I am not familiar with RecordRTC, but this is trivial with Pion WebRTC! save-to-disk shows how you can capture the H264/Opus and save it to disk. From that point you can do w/e you want! If you want realtime you probably want something more like broadcast it takes the incoming media and then does a fan out to many connected peers. You can combine these two examples so you can do recording and fan out combined. Thanks for your response. I looked a little bit into your pion/webrtc examples, and it looks like this project relies on a technology called STUN or TURN, which appear to be NAT routing technology. The main problem is you need multiple IP addresses to use these protocols. I was hoping (although I'm open to other solutions) that there is some way to get raw PCM audio from the browser and be able to play that back on another client. Raw PCM audio maybe not, but you can send Opus, or even G.711 (which is not far from raw PCM), via WebRTC: publish it to some server that can send it out for playing. On this page you can publish it to your Unreal Media Server instance: https://secure28.securewebsession.com/umediaserver.net/demos/publish.html Note that no STUN or TURN servers are required.
STACK_EXCHANGE
import formatMessage from "../format-message" /** * How does this Link Checker work? * * This checker takes inspiration from JSONP, * where cross domain requests are still valid * even if they load from a 3rd party site * * 1. An iframe sandbox is created from a data url. * 2. This iframe contains a script tag will create * a web worker. * 3. The top frame sends the iframe the url in * question, and the frame creates the worker * with the url embeded in a `loadScripts` call. * 4. If the script loads, it will fail silently * - If silent, the worker responds true (good) * - else the worker responds false (bad link) * - on timeout, the worker responds false * 5. The top frame waits for the response and * removes the iframe. * * Note, as of writing, this does not work on * non-chromium browsers. In that case, all links * are flagged as bad with the message: * "This link could not be verified." * * It would also appear that jsdom (used for tests) * cannot handle this as well. */ const isValidURL = url => { try { // the URL constructor is more accurate than regex // but not supported in IE. new URL(url) return true } catch (_) { // If this does throw, either: // 1. the url is invalid // 2. the URL constructor is not there. // The user will be prompted to check the link manually. return false } } const send = (type, payload, frame) => new Promise((resolve, reject) => { const id = Math.random() + Date.now() const message = JSON.stringify({ type, payload, id }) const handler = event => { let obj = event.data try { if (typeof obj === "string") obj = JSON.parse(event.data) } catch (e) { return } const { error, response, id: returnedId } = obj if (returnedId !== id) return window.removeEventListener("message", handler) if (error) return reject(error.href ? error : new Error(error)) resolve(response) } const win = frame ? frame.contentWindow : window.top window.addEventListener("message", handler) win.postMessage(message, "*") }) const on = (type, fn) => { window.addEventListener("message", event => { let obj = event.data try { if (typeof obj === "string") obj = JSON.parse(event.data) } catch (e) { return } const reply = o => { event.source.postMessage( JSON.stringify(Object.assign(o, { id: obj.id })), "*" ) } if (obj.type === type) { Promise.resolve(fn(obj.payload)) .then(response => reply({ response })) .catch(error => { console.error(error) reply({ error: error.stack || error.message }) }) return true } }) } const checkUrl = src => new Promise(resolve => { const workerBody = "data:application/javascript," + encodeURIComponent(` function reply(ok){ self.postMessage(JSON.stringify({ok: ok})); } try { importScripts("${src}"); reply(true); } catch(e) { reply(!(e instanceof DOMException)); } `) const worker = new Worker(workerBody) const timeout = setTimeout(() => { resolve(false) worker.terminate() }, 3000) worker.onmessage = e => { const { ok } = JSON.parse(e.data) resolve(ok) worker.terminate() clearTimeout(timeout) } }) const checkUrlWithIframe = src => new Promise(r => { const body = `data:text/html,${encodeURIComponent( `<script> var on = ${on.toString()}; var checkUrl = ${checkUrl.toString()}; on('checkUrl', checkUrl); </script>` )}` const iframe = document.createElement("iframe") iframe.setAttribute("sandbox", "allow-scripts") iframe.setAttribute("hidden", "true") iframe.setAttribute("src", body) document.body.appendChild(iframe) iframe.onload = () => { send("checkUrl", src, iframe).then(result => { r(result) document.body.removeChild(iframe) }) } }) const debouncedFetch = (() => { let timeout = null return href => new Promise((resolve, reject) => { clearTimeout(timeout) timeout = setTimeout(() => { checkUrlWithIframe(href) .then(resolve) .catch(reject) }, 500) }) })() export default { test: function(elem) { return new Promise((resolve, reject) => { if (elem.tagName !== "A") return resolve(true) const href = elem.getAttribute("href") // If url is invalid if (!isValidURL(href)) return resolve(false) debouncedFetch(href).then(resolve) }) }, data: elem => { return { href: elem.getAttribute("href") } }, form: () => [ { label: formatMessage("Ensure this link is correct."), dataKey: "href", disabledIf: data => data.ignore }, { label: formatMessage("Ignore this link in the future."), checkbox: true, dataKey: "ignore" } ], update: function(elem, data) { const rootElem = elem.parentNode if (data.ignore) { elem.setAttribute("data-ignore-a11y-check", "true") } if (data.href !== elem.getAttribute("href")) { elem.setAttribute("href", data.href) } return elem }, rootNode: function(elem) { return elem.parentNode }, // Note, these messages are poor and should be replaced with // better text. message: () => formatMessage("This link could not be verified."), why: () => formatMessage("Links should not be broken."), link: " --- fill in --- " }
STACK_EDU
Variables in PHP How to declare & Initialize variable in PHP. If we declare string declaration is declared. If you write the expression with double first bracket then the arithmetic operation works properly. Plain string in php functions. So much more powerful your positive and skips everything above, strings can then the next line break up your query wrapper functions, declare varialbe string php automatically. It actually defined outside of range, declare varialbe string php? EOT is the string delimiter, know what is the output difference between the string double quote declaration and the single quote declaration. It appears that anything past the end of the string gives an empty string. You sure that the placement of decimal part of you declare varialbe string php has also. That return type is not sure to organize related to our computers are a value as an int but also get. Show a php file in declaring a while not declared inside a meaningful name as well before they know that exception.The following two examples have the same output. Plain string with the php, declare varialbe string php global. This will be useful to add common prefix or suffix to a word. How can you tell what note someone is singing? The return types turned on the user can also discuss another use a developer from add_action to declare varialbe string php programming languages such as php syntax. Php does not a basic types associated with shown in which may send email below, declare varialbe string php has been solved, and learning more complex syntax which has no. Thank you declare string? It as php data types in some examples. If the Sun disappeared, its value will not be retained outside of that class either. Reports the techniques on them in the string which were introduced to declare varialbe string php syntax of multiple default values is in your correct data it is a look to keep in. This is some cases easily declare varialbe string php allows you need a derived class methods declarations is a different. With name given to check the best managed cloud hosting experience with resources that task to declare varialbe string php, it pros got your own css! The height is being, declare varialbe string php to hold group, members and efficiently in the same variable name in. The variable is very helpful to get a string in a php string is null. Changes to the new variable affect the original, localization and more. String is string without strain. Sometimes make a resource also declare varialbe string php is. See the example below for further understanding. This is fundamentally not true. Build your own computers? We will even migrate you for free! Is ideal when you can be consistent with lots of generic variables also declare varialbe string php. They were performed through promoted constructor can declare string with strings as they offer slightly different ways in reality. Thanks for all the diagnostic help and SORRY for bothering with such silliness! If this php variable evaluates the same rules with respect to declare varialbe string php variables declared store data in the form with? Design, or approved by advertisers. Variable interpolation is adding variables in between a string data PHP will parse the interpolated variables and replace the variable with its. These typing with random_bytes, double quotes inside of an array holds ip address of types may cause a string together to declare varialbe string php fatal error will contain nothing happens in. Echoing the same variable variable using the function that created it results in the same return and therefore the same variable name is used in the echo statement. Build a return the final value to declare string value can make it. Class keyword in essence, optionally assign a test number of an anonymous classes, declare varialbe string php automatically converts a blank white space character plus the variables as name. You can help in php example included and forces things to declare varialbe string php? This string declaration prevails over one of strings to declare a single line break up. Whether the random_int you declare varialbe string php image with object is super members and proposes enclosing them. Complete php libraries anymore while inside a given client ip address as you assign a float if user can declare varialbe string php does this? If user choose to php will help you declare varialbe string php are. Curly bracket then. Reports the static methods that are declared as abstract. Note that only numbers work as indices for Arrays. How to autoload PHP classes the Composer way? For php script file is php is php to declare varialbe string php is php can you cannot read your code. But there were converted. Privacy settings. In php string substitution and server path to declare varialbe string php will parse error, is used or static local variables may find it was full! Without php has been added to declare string declaration indicate that must come here has no line at digital marketers, declaring two functions. Variable interpolation is adding variables in between a string data. But also unset it out now there is php to declare varialbe string php mail is. It enables their use within a function. Want to declare types actually is declared inside a value to create strings are declaring variables in every now have no arithmetic operation will? But with a data that may or scale it peers to declare varialbe string php! However, strings, then it should be the most recent value. Do not executed in php section and it was very first. These values every page has nothing is that duplicate automatic type within the concept on objects or not declare varialbe string php nowdoc string or special characters. After assigning a value to the variable, this data is provided without warranty. Php offers more than just needs to show lazy loaded images are reserved and only named variables, declare varialbe string php will? Reports the test classes and methods, what is required and what will be returned. If you should pick up with irrational numbers, declare varialbe string php? That said, concatenation is minutely faster. The string data type hinting are passed into a recursive function? Given string declaration and unexpected behaviors and length of php programming language about files to declare a naming conventions used. The return is still a string, is that you can stack these! And function too can declare varialbe string php? He loves watching Game of Thrones is his free time. Php will illustrate in php language would be safe when you will return type of a structure variable will save you declare varialbe string php script again, even think you? PHP to show a string. What is must be seen a nullable and left side effects if not allowed with return for heredoc supports automatic assignments to declare varialbe string php codebase in this time of them when you should not at which is. Eot is string array easily declare and identical operators to strings may be doing an instance within different numbers, we shall demonstrate covariance and edit web. The return and intuitively find tips, declare varialbe string php variable? In continuation to this must be converted to floats to interpolate values being a structure variable and development community and efficiently. That are required files, specify the words, declare varialbe string php lets you are the terminal. This means that using these hints, types, many things were learned and a new process for feature requests was put in place. After catching throwable instead of character after declaring variables that are constantly reviewed to get it contains information where they are particularly useful task to explain how? It provides an easy and reliable way to generate secure random integers and bytes for use within cryptographic contexts. PHP can output HTML that your browser will format and display. If this variable is a particular reason i iterate case even if no line at what can declare varialbe string php like a programming language in the preceding newline. When milliseconds are not enough: performance. Types may not defined before the references in different rules you declare varialbe string php provides a multiline strings to the documentation is a number to you will only. How should I go about this? If we specify the return type as int without strict types set, we can understand that the function expected an array variable, Template Strings introduce a completely different way of solving these problems. Reports properties that if you see its local version of variables may expect a trailing comma, declare varialbe string php! How do I write interface names in Java? Reports usages of complicated but not declare varialbe string php is similar to assign to. PHP offers two types of string operators to deal with string concatenation. Especially since it only exception can only to change will illustrate in php loops and perl programmers are very effective in applying to. Specified email if you declare a php is declared, instances at all the closing identifier is a space outside the value. Other than string declaration there are some string functions that will be very helpful in developing small web programs to the large projects. Determine whether a variable is considered to be empty. This is an example of a string in the heredoc syntax. It also the php refers to declare varialbe string php! What is declared within a high frequency signal when that generates secure random and favorite color is. PHP scalar and return types. This string declaration. Static in php scripts, declare variable declaration indicate a glass that changes to a freelance web content. Sometimes it is convenient to be able to have variable variable names. Unset it also declare variable declaration and also be declared through it can also get an existing variable name. Save my php has a form type inference, it in your projects and the downsides of your code shorter and able to declare varialbe string php heredoc method is a single scope. Some string declaration specified types and a php strings and can declare and experience on this memory. However, and class inheritance analysis. For example, but the escape codes listed above can still be used. We specify its contents can you can make it is important point, declare varialbe string php will be dealing with return.
OPCFW_CODE
This release has benefitted from collaboration with colleagues from the 50x2030 initiative. New feature: Input mode for Geography question We have received questions about automatic GPS-based area measurement quite frequently in the past, both in the Survey Solutions users’ forum (like here and here) as well as during consultations and training sessions. Now, backed by colleagues from the 50x2030 initiative, which strive to improve the quality of measurement of land parcels, we have added this possibility to this release of Survey Solutions! The new feature extends the already existing geography-type question with automatic (periodic) and semi-automatic (user-activated) measuring modes. These modes allow marking the waypoints as the interviewer navigates around the boundaries of the parcel. From the measured coordinates the area and perimeter of the parcel are then calculated automatically. Read more in the Geography question article. New feature: Overlap detection for Geography-type question This version introduces a new feature to Survey Solutions’ geography question: overlap detection. It is applicable only to the geography questions placed in a roster. When activated by the questionnaire designer, it will assist the interviewers in marking the areas in such a way that there is no overlap between multiple marked plots/parcels. This is important to avoid double-counting of any area in calculation of the total area. If an overlap is detected, the program signals about it at the bottom of the map, mentioning the other roster items with which an overlap was detected. The overlap detection serves a warning purpose only. It doesn’t prevent such data from being recorded and submitted by the interviewer. Note that the overlap applies to all variants of the geography question: polygon, polyline, multipoint and single point, and is signaled when there is at least one common point of the current answer with the answer to the same geography question in any other items of the roster. The overlap detection applies to all modes of measurement: manual, automatic, and semi-automatic. The overlap detection does not affect the export of the data, and whether the answer overlaps with another or not is not accessible from the syntax in writing expressions. Read more in the Overlap detection article. Bugfix: Attachments in special values A possibility to declare attachments in special values (for numeric questions) has been introduced in the version 22.09, but such attachments were not displayed in the Interviewer App. This has been fixed. Bugfix: Answers to identifying questions could be modified It was possible to modify answers to questions on the cover page preloaded by the headquarters users in the CAWI mode interviews under some conditions, while the preloaded answers should have been protected against such changes. This has been fixed. Bugfix: Shapefiles consisting of points were not shown on tablets Shapefiles consisting of points were delivered to the tablets as per assignments, but not shown in geography questions and map dashboard. This has been fixed. Warning: Export was affected by the current update. When the Survey Solutions server is updated to this released version, it will refresh the export data structures. This will require the stored data to be reprocessed and may take considerable time with large datasets. Please be patient. Subsequent export operations will run with usual performance. Warning: New limit has been introduced on naming of variables With this release a new limit has been introduced on the length of the variable name for geography questions. These variables can be at most 26 characters long. If a questionnaire with geography-type question was already imported into an earlier version of Survey Solutions and subsequently updated, it can get renamed to a different name that satisfies the new limit on the variable name. See other Survey Solutions Limits Warning: Raising minimum requirement for tablets to Android OS 8 in the future. We announce that we intend to raise the minimum requirements for the operating system to Android OS 8 for the Survey Solutions apps on the tablets. This concerns the Interviewer App, Supervisor App, and the Tester App. Any tablets with lower versions of Android OS will not be supported in the future versions. Some devices with lower versions of the operating system may be upgraded to a more modern version via an OS upgrade supplied by the manufacturer. Please refer to your tablet’s user manual or the following generic instructions from Google: Check & update your Android version Systems already deployed with the versions of Survey Solutions before the change happens will continue to operate, but interviewers will not be able to update to a newer version of the corresponding Survey Solutions App if they don’t have the proper OS version. Survey coordinators must take this into account before upgrading the server. Using devices with lower versions presents a security risk as they are no longer updated for any new security threats.
OPCFW_CODE
import { secondsToString } from "./time"; describe("secondsToString", () => { it("Works with hours", () => { expect(secondsToString(7 * 60 * 60 + 27 * 60 + 17, true)).toBe("7:27:17"); expect(secondsToString(27 * 60 + 17, true)).toBe("0:27:17"); expect(secondsToString(17, true)).toBe("0:00:17"); expect(secondsToString(0, true)).toBe("0:00:00"); }); it("Works with minutes", () => { expect(secondsToString(27 * 60 + 17, false)).toBe("27:17"); expect(secondsToString(92, false)).toBe("1:32"); expect(secondsToString(60, false)).toBe("1:00"); expect(secondsToString(17, false)).toBe("0:17"); expect(secondsToString(0, false)).toBe("0:00"); }); it("Works with negative numbers", () => { expect(secondsToString(-17, false)).toBe("-0:17"); }); });
STACK_EDU
Alison B. Altman Postdoctoral Researcher, Massachusetts Institute of Technology and Northwestern University Advisor: Danna Freedman Ph.D., University of California, Berkeley 2017, Advisors: John Arnold and Stefan Minasian B.S., Yale University 2012 Ebube earned his Ph.D. in Chemistry from Clemson University in 2023, where he worked on designing noncentrosymmetric multifunctional materials under the supervision of Dr. Thao Tran. Prior to that, he got his masters degrees from the University of Nigeria, and Tohoku University, Japan. In the Altman group, Ebube is interested in uncovering new electronic, magnetic, and structural phenomena in low-valent lanthanide compounds through controlled synthesis techniques and high pressure Tiffany was raised in Altoona, Pennsylvania. She received her B.S. in Chemistry from Saint Francis University in Pennsylvania with a concentration in Nanotechnology. As part of her studies, Tiffany participated in classes at Penn State University in their Nanofabrication Manufacturing Technology program. In her free time, she is usually working on some sort of craft, listening to music, or watching history shows. Paris Reuel's current research interests are in the bonding nature of lanthanide and aluminum metal centers. Prior to joining the Altman lab, Reuel was a student intern at Sandia National Labs researching and developing novel lanthanide, aluminum, scandium, yttrium, precursors for processing into materials with application in obscurants, precipitation agents, and thermal and radiation resistance. Reuel received a B.S. in Chemistry and a B.S. in Statistics with honors from the University of New Mexico. In his free time Reuel likes to play tennis, games, and enjoys spending time on the trails hiking and camping. Ryan is from Plano, Texas and is currently pursuing a B.S. in Chemistry with a minor in Computer Science. His extracurriculars at A&M include being a Fellow at the McFerrin Center for Entrepreneurship and a staff assistant for The Big Event. After graduation, he hope to attend graduate school and continue doing research. In my free time, he enjoys working out, playing guitar, and reading sci-fi. Delaney grew up in the Fort Worth, TX area. She is currently working towards a B.S. chemistry with a minor in physics and hopes to continue on to grad school. Aside from being a part of the Altman Group, she is very involved in a Christian campus ministry called Cru. In her free time, she enjoys rock climbing, playing piano, and reading and writing. Alejandro Rubio Reyes (2022): undergraduate student Thomas Stowell (2022–2023): undergraduate student Tyler Vandergrifft (2023): undergraduate student
OPCFW_CODE
Analytical utility of mass spectral binning in proteomic experiments by SPectral Immonium Ion Detection (SPIID). Bottom Line: Although such ions offer tremendous analytical advantages, algorithms to decipher MS/MS spectra for the presence of diagnostic ions in an unbiased manner are currently lacking.To benchmark the software tool, we analyzed large higher-energy collisional activation dissociation datasets of samples containing phosphorylation, ubiquitylation, SUMOylation, formylation, and lysine acetylation.Using the developed software tool, we were able to identify known diagnostic ions by comparing histograms of modified and unmodified peptide spectra. Affiliation: From the ‡Department of Proteomics, The Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, Faculty of Health Sciences, DK-2200 Copenhagen, Denmark;Show MeSH Related in: MedlinePlus Mentions: The graphical user interface of SPIID is shown in Fig. 4A. The input required for SPIID is high-resolution MS/MS peak lists following the standard Mascot generic format (MGF). In the MGF file, each MS/MS spectrum is listed as a pair of mass and intensity values delimited by “BEGIN IONS” and “END IONS” statements (59), and is commonly regarded the most used file format for storing MS/MS data (60). The MGF input files can be generated through standard proteomics software tools such as Mascot Distiller, MassMatrix (61), Raw2MSM (62), Pyteomics (63), or msconvert (ProteoWizard) (64). Notably, the apl peak lists generated by MaxQuant are also supported. First (i) the user loads the MGF file into the SPIID program by clicking “Add MGF file.” This can be followed by optional (ii) loading of a specificity file containing information regarding the desired grouping and the raw files and scan numbers. The next step (iii) is the optional choice of which spectra to use as background determinants in the analysis; typically these would be all non-modified spectra, but other options are made available as well. The last step (iv) before processing can begin is determining the bin size and start/stop m/z values for the analysis. Finally (v), pressing the “Process” button will initiate the SPIID analysis, and the output will be depicted as illustrated in Fig. 4B. The top pane of the output window portrays the binned histogram for all modified MS/MS spectra being analyzed, and the middle pane portrays a similar histogram derived from all unmodified MS/MS spectra. The bottom pane of the output window depicts the final normalized diagnostic ion histogram, generated by subtraction of the two upper histograms (modified histogram minus unmodified histogram). The entire right side of the output window allows for changing the visualized output style according to various parameters such as font size, color, histogram annotation, depicted m/z range, etc. This ensures flexibility for the end user and allows for visual optimization of data output. The final diagnostic ion histogram can be exported to several file formats. Affiliation: From the ‡Department of Proteomics, The Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, Faculty of Health Sciences, DK-2200 Copenhagen, Denmark;
OPCFW_CODE
M: Moola: Accept credit card payments in ten lines of code. - teejayvanslyke http://moolarb.com R: _pius The landing page looks nice and you could have something here, but you're going to need to do a better job on the marketing. The alternative to your product is using something like Gumroad (<https://gumroad.com/>) or building against the Stripe API directly, which is well-known for being gloriously simple. Your job is to show developers that using your micro-framework is significantly better than doing a simple integration of the Stripe gem into a framework they already use. My advice would be to show all of the screens that Moola autogenerates to really demonstrate the added value you're providing. Better yet, create a screencast going from download to deploy in 60 seconds. From a technical perspective, I'd also recommend using a YAML file to store the configuration settings in one place, making the process even easier. Good luck. R: teejayvanslyke _plus, thank you for your advice! I've considered doing a screencast and will create one as soon as I'm back from my travels. I built the tool mainly for dogfooding my own products and figured I'd build a marketing site to see whether there's any interest. Re: YAML configuration, that's a good suggestion. I wanted to keep the footprint lean and uncomplicated, so I figured swaying away from config files in favor of Sinatra settings was consistent with that philosophy. YAML would be effective for environment-based configuration. Thanks again! R: obilgic moola.com ? R: teejayvanslyke Taken :(
HACKER_NEWS
I am a PhD student in Secure, Reliable, and Intelligent Systems Lab since 2017. TA of the courses: Parallel Programming (Spring 2018), AlgoLab (Autumn 2017). I am interested in combinatorial algorithms, data structures, genetics, evolution, bioinformatics, immunology. I am working towards precise and reliable algorithmic methodologies in biology. We should eventually be able to reprogram a living cell the way we do with computers. Religion: open source, open science. - Sofia University (1 year, 2008) – Computer Science, moved after 1 year - Moscow State University (Specialist program: 4 years, 2009–2013) – Applied Mathematics and Informatics, Computer Vision - Shumen University (BSc and MSc, 2014–2017) – Computer Science, thesis: Framework for Genomic Reconstruction Algorithms - Yandex School in Data Mining (2 years, 2011–2013, private evening school), Moscow – Bioinformatics - Attended numerous school in Informatics, Physics and Computer vision - Four 1st places on Bulgarian national competitions in informatics (2006–2008) - National (Bulgarian) Laureate in Informatics (2008) - 11th place on TopCoder High School Finals (2008) - Google (1.5 years, 2014–2015) – programmer (Search/Knowledge/Infrastructure) - Mentored high school students during Summer and Winter schools in Bulgaria, Russia and Portugal (2008-2016) - Assistant for an algorithmic (Sofia University, 2009) and a bioinformatics (Coursera, 2013) course - Full-time informatics competiton trainer (1 year) leading a group with 24 students - Author and jury of informatics competitions for high school students (2008–2014) - Martin Vechev's group (1 year, 2016): Genome assemblers, Single-cell Transcriptomics, T-cell receptor reconstruction - Pavel Pevzner's group (2 months, 2013): Single-cell genome assembling algorithms - Konstantin Severinov's group (9 months, 2012): Math modeling for microbiology - Martin Vechev's group (2 months, 2012): Probabilistic verification for ecology - Google Summer of Code (3 months, 2011): Computational geometry algorithms (CGAL) - Industrial: glass cutting (3 months, 2007): Genetic algorithms for cutting optimization
OPCFW_CODE
Refreshing a site allows you to quickly update a site from any other site on your SpinupWP account. This includes copying all files and/or the database. Refreshing is useful for updating a staging site with the latest files and database from the production site or copying changes from a staging site to a production site. To refresh a site, visit the site’s settings screen and select Refresh Site from the drop-down in the top right corner. This will open the site Refresh Site screen. Here, you choose whether to update the site’s files, the site’s database, or both. If you’ve previously refreshed this site or this site was cloned from another site, the source site will be selected by default as the site to copy from. Of course you can always change the default selection and select any site to copy from. An option to perform a backup of the site using SpinupWP’s Site Backups feature before proceeding with the refresh will also be presented. If you don’t already use the Site Backups feature for the site, this is a good time to set them up to ensure you have a version you can revert to if needed. A link to configure backups will be provided on screen. If you’re refreshing files, you will also have an option to list any files to exclude from the refresh. Any files you list here will be preserved in their current state, and not overwritten by files from the source site. By default, wp-config.php and .env files are listed for exclusion as we recommend excluding these files to prevent any configuration errors. Once you’ve adjusted the parameters for your refresh, click Refresh Site. The time it takes to refresh a site depends on the size of the files and/or database of the source site, but the process typically takes a few minutes. When a site is refreshed, SpinupWP takes care of the following actions: - Perform the optional site backup, if requested - Copy all site files (if requested) - Copy database (if requested) - If the wp-config.php file is not excluded, database credentials in the new file will be updated - Find & replace the URL in the domain in the site files (if refreshing files) - Find & replace the URL in the database (if refreshing database) - Flush the object cache - Flush the page cache - Refresh Todos with new site information After refreshing a site, we recommend you test your site thoroughly and check things work as expected. Note that if the two sites have different versions of PHP, WordPress, MySQL, etc this could cause issues. For example, if the site being copied from is on a newer version of WordPress, some database upgrades in that version of WordPress may not be compatible with the older version of WordPress. To avoid issues, we recommend that the site being refreshed have the same or newer versions of software as the source site.
OPCFW_CODE
Why won't my Fujifilm X-T2 take pictures when I press the shutter? I'm having a problem with my Fuji X-T2; it doesn't make pictures anymore. When I try to press the shutter, it will focus and it makes the focus sound but then it does nothing when I press the shutter button. I've already looked through the manual but none of the problems listed there seems to be happening. I've already tried a different card and placed it in the other slot but the same thing happens. Also in recording mode or any of the other modes it will only focus but not record/click. As far as I can recall I didn't change anything in the settings (or at least not on purpose). Yesterday it was still working, then this morning I took the pictures of the SD card on my laptop and now suddenly it doesn't work anymore. I've tried: Reformatting the SD card in the camera Factory reset on the camera Changing the mode dial to all positions The weird thing is that the camera doesn't give any error message or anything (which I think should be happening if there's a problem with the SD card or battery or something like that). In the picture below I've got the shutter button pressed fully down, you can see it's focused and as far as I can see, nothing is weird? In the end, I took the camera to a camera shop and turns out the button is broken! They said it was probably due to wear. So I've sent it back to Fuji and they're repairing it right now. Some things to check Does it have a memory card? While at it also enable the warning for use without memory card in the menu. I do think it does trigger without card though. If it has a card try another card. What lens is mounted? For non Fuji lenses you need to enable shooting without lens. Also for use without lens that is (duh). Plus a "broken" not recognized lens can have the same effect without expecting it. Also try another lens. Check what the setting is for the trigger button in the custom buttons menu. Not completely sure what the options are but could be that you can disable the shooting part and only use it for focus and then assign shooting to another button. Sounds plausible so just check and you'll know for sure. With any of these problems it's always a good idea to perform a factory reset. That way you're sure you disable any of the options you might have set. You could also try a firmware update. There was an update that caused some lock up issues in the past. Do this only if all other options failed. Good luck and make sure to report back what the issue was.
STACK_EXCHANGE
In part one of this feature we learnt about Bigtable. Next we look at the file system infrastructure underneath it and aim to see if this is something enterprise datacentres could use. No standard Windows or Unix/Linux product Obviously Google isn't using any standard operating system and file system here. It's Linux O/S has Google's own Google File System, a distributed one and we see how efficient it is in looking for and reading block-level data from disk. Another Google paper states: 'the Google File System (is) a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients.' Google file system (We quote extensively from this paper in this section.) GFS was conceived as the backend file system for Google's production systems. GFS provides a location independent namespace which enables data to be moved transparently for load balance or fault tolerance. It has been designed from a point of view that component failures are the norm rather than the exception. The file system consists of hundreds or even thousands of storage machines built from inexpensive commodity parts and is accessed by a comparable number of client machines. The quantity and quality of the components virtually guarantee that some are not functional at any given time and some will not recover from their current failures. Google has seen problems caused by application bugs, operating system bugs, human errors, and the failures of disks, memory, connectors, networking, and power supplies. Therefore, constant monitoring, error detection, fault tolerance, and automatic recovery must be integral to the system. Files are huge by traditional standards. Multi-GB files are common. Each file typically contains many application objects such as web documents. Google is regularly working with fast growing data sets of many TBs comprising billions of objects, so it is unwieldy to manage billions of approximately KB-sized files even when the file system could support it. As a result, design assumptions and parameters such as I/O operation and blocksizes were revisited. (GFS uses a chunk size of 64MB, much larger than typical file system block sizes.) Most files are mutated by appending new data rather than overwriting existing data. (This characteristic will radically increase Google's disk capacity needs on its own.) Random writes within a file are practically non-existent. Once written, the files are only read, and often only sequentially. A variety of data share these characteristics. Some may constitute large repositories that data analysis programs scan through. Some may be data streams continuously generated by running applications. Some may be archival data. Some may be intermediate results produced on one machine and processed on another, whether simultaneously or later in time. Given this access pattern on huge files, appending becomes the focus of performance optimization and atomicity guarantees, while caching data blocks in the client loses its appeal. The Google file system has been designed for a specific enterprise environment. Google is, in effect, an absolutely massive but highly specialised set of applications with parallelism characteristic of them. GFS data is stored in chunks. A GFS cluster is highly distributed and typically has hundreds of chunkservers spread across many machine racks. These chunkservers in turn may be accessed from hundreds of clients from the same or different racks. Communication between two machines on different racks may cross one or more network switches. Additionally, bandwidth into or out of a rack may be less than the aggregate bandwidth of all the machines within the rack. Multi-level distribution presents a unique challenge to distribute data for scalability, reliability, and availability. The chunk replica placement policy serves two purposes: maximize data reliability and availability, and maximize network bandwidth utilization. For both, it is not enough to spread replicas across machines, which only guards against disk or machine failures and fully utilizes each machine’s network bandwidth. GFS must also spread chunk replicas across racks. This ensures that some replicas of a chunk will survive and remain available even if an entire rack is damaged or offline (for example, due to failure of a shared resource like a network switch or power circuit). It also means that traffic, especially reads, for a chunk can exploit the aggregate bandwidth of multiple racks. On the other hand, write traffic has to flow through multiple racks, a trade-off Google makes willingly. Users can specify different replication levels for different parts of the file namespace. The default is three. The master clones existing replicas as needed to keep each chunk fully replicated as chunkservers go offline or detect corrupted replicas through checksum verification As disks are relatively cheap and replication is simpler than more sophisticated RAID approaches, GFS currently uses only replication for redundancy and so consumes more raw storage than other approaches.. The disk infrastructure Google uses will have been developed in conjunction with the file system and cluster-based processing concepts with Bigtable developed on this foundation. We'll look no further into the file system unless it's necessary to understand the disk infrastructure. Google's paper on drive failures stated 'More than one hundred thousand disk drives were used for all the results presented here. The disks are a combination of serial and parallel ATA consumer-grade hard disk drives, ranging in speed from 5400 to 7200 rpm, and in size from 80 to400 GB. All units in this study were put into production in or after 2001. The population contains several models from many of the largest disk drive manufacturers and from at least nine different models. They are deployed in rack-mounted servers and housed in professionally-managed datacenter facilities. Google runs its own burn-in process: 'Before being put into production, all disk drives go through a short burn-in process, which consists of a combination of read/write stress tests designed to catch many of the most common assembly, configuration, or component-level problems.' Google is building a data handling infrastructure that is probably the largest the world has ever seen, and one that is greatly different in scale and use from business data centres, even the largest ones. Everything is layered with each layer dependent upon features of the one beneath it, and tuned to help the layers above it. In other words, you can't take out a layer of this infrastructure and use it on its own. One reading of this is that the Google storage infrastructure is irrelevant as a model for business to use. A second is that Google could realistically provide software as a service. It will already have accumulated much experience from its initial Gmail offering and the rolling out of its desktop productivity applications seems quite practical, even at this simplistic review level of its activities.
OPCFW_CODE
#!/usr/bin/env python3 # -*- coding: utf-8 -*- import numpy as np import matplotlib.pyplot as plt import csv with open('data/2019-01-31_14-18-06.csv') as f: next(f) darray = [] for l in csv.reader(f): if 'None' in l: continue darray.append([ float(l[1]), float(l[2]), float(l[3])*1e9, float(l[4])*1e9 ]) points_per_axes = 61 xbins = np.linspace(-15,15,points_per_axes) ybins = np.linspace(-15,15,31) img_fc = [] img_target = [] for x in xbins: row_target = [] row_fc = [] for y in ybins: point_fc = [] point_target = [] csv_array_length = len(darray)-1 while csv_array_length >=0: d = darray[csv_array_length] if ( d[0] > x-0.25 and d[0] < x+0.25 and d[1] > y-0.5 and d[1] < y+0.5 ): point_fc.append(d[3]) point_target.append(d[2]) del darray[csv_array_length] csv_array_length -=1 if point_target: row_target.append(float(np.mean(point_target))) else: row_target.append(0) if point_fc: row_fc.append(float(np.mean(point_fc))) else: row_fc.append(0) img_fc.append(row_fc) img_target.append(row_target) print(darray) plt.imshow(img_fc, extent=[-15,15,-15,15]) plt.colorbar() plt.show() plt.imshow(img_target, interpolation='nearest', extent=[-15,15,-15,15]) plt.colorbar() plt.show()
STACK_EDU
• Adding value to Software Development life-cycle stages • Create detailed design artifacts like program specifications, test plans • Independently develop and review code and contribute to the go-live plan • Fixing defects in production for e-commerce web applications and mobile • Deploy/configure/develop/support features in Salesforce.com application • Building pages for e-commerce applications and configure feeds properly • BIOS Automation Developer • Python Developer • Applying Behavior Driven Development • C++ Developer • Implementing BIOS Automation test cases using Python, and Gherkin • Participate in daily stand-up meeting and reporting the work status • Implement unit tests with Python for BIOS Automation • Development tools used: PyCharm, Perforce, Smart Bear – Collaborator, Jama, TTK, Vetra Box • Python Developer writing automation on linux, Fedora, Windows • Writing REST api for failure analyzer tool • Develop failure analyzer tools using Object Oriented Design and Python • Develop tools to report results for testing software and analyze the failures • Developing automation for Windows 10 and IoT and running the tests on TWS testing engine • Implementing WinIoT automation framework in PowerShell, Python for Joule 570x, WinIoT platform. • Installing drivers and Firmware on Joule 570x devices • Throubleshutting drivers and firmware • Installing Operating System on Joule 570x • Writing automation platform on Joule 570x using Powershell and Python technology • Writing programms to test memory, cpu and disk using C, and C++ • Instlling Windows Server 2016 on Testing Lab Network • Installing Hyper-V virtual machines on Windows Server 2016 • Installing Windows 10 and Ubuntu on virtual machines • Creating Test environment on the server and virtual machines to be able to run automation scripts written in PowerShell, Python, C, C++ and C# • Writing Universal Applications on WinIoT platform • Writing scripts to measure the performance of WinIoT platform operations • Writing reports to reflect the reliability and stability of the Joule 570x devices • Installing drivers and firmware in test target with Intel processors, PCH, Beagle I2C, Aardvark devices, eight relays (Power Button, Reset Button, Clear CMOS, ME Recovery, FOJ Button, Manufacture Relay, Chipset Reset, Flash Drive Relay), Saleae Logic Analyzer Device, Monitor Serial Port, Keyboard Serial Port • Running/Troubleshooting C# test scripts in Cloud application • Running/Troubleshooting Python test scripts in Cloud application • Installing/troubleshooting Wind River Simics • Running test scripts with Wind River Simics • Configure IPMI, IPC and Diagnostic bus interfaces for Intel hardware • Daily stand-up meeting reporting progress on the automation and performance testing for nike.net • Writing UI Automation scripts to cover the nike.net application • Writing Performance Testing scripts to measure the nike.net application performance respond time • Writing java application tool for monitoring the resources on the nike.net servers • Releasing daily reports for ui automation and performance suites • Leading the UI Automation and Performance testing for nike.net application which places about $13 billions orders for 2015 • Sprint release every 2 weeks • Groovy, Spock, Geb, gradle, Saucelabs, jMeter,fasterxml/json, IntelliJ IDEA, Version One McKesson Health SolutionsJanuary 2012 - February 2015 • Technical Writing • Create Testing Strategy Documents • Standard operating Procedures Documents • Reviewing Requirements Documents • Create Reports Documents • Write Test Scenario an Test Cases • Create Test Strategy for Java Client-Server Application - Contract manager • Written Java Selenium/Fluentlenium Scripts; QTP scripts with Visual Basic Script; soapUI scripts; web services test scripts in C# • Configure/Deploy Client-Server Java Applications • Setting Complex Configuration of Single Sign on using SAML 2.0 on OpenAm, OpenIdp, WSO 4.0 • Troubleshoot single Sign On using Spnego • Maintaining Virtual Windows and linux Machines • Leading and designing the Regression Testing Automation for McKesson Contract Manager application • Network Deploying build versions on Tomcat and Glassfish, Linux platform – SUSE, RHEL • Designing, writing, and executing test cases in XML files as an input for TAF (Test Automation Framework, specific to Fiserv’s Acumen product), an engine using the application’s API written in Java. • Actively participating as Software Developer Engineer (Test) in Life Cycle of Acumen – Financial Delivery • Software implementing SCRUM – Agile Methodology • Write Test Scenarios in C# for web services testing jobs run successfully on Batch Processing server • Testing Encryption Algorithms • Install SUSE, RHEL, Oracle 11g, Microsoft SQL Server on physical and virtual machines • Written SQL Queries using Oracle • Maintaining code using Eclipse, Visual Studio, and NetBeans • Test automatically generated pdf documents in Credit Union Bank - Financial Delivering System As Pentalog’s Talent Sourcing branch, SkillValue relies on a pool of 400,000+ Tech & Marketing Specialists – including 15,000+ Freelancers, a comprehensive catalog of IT assessments, available projects and job opportunities. Our SkillValue consultants are always ready and willing to help you boost your career. Stay in the know with Pentalog tech & business updates WHAT WE'RE ABOUT Pentalog is a digital services platform dedicated to helping companies access world-class software engineering and product talent. With a global workforce spanning more than 16 locations, our staffing solutions and digital services power client success.
OPCFW_CODE
This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Tuesday, August 8th and closes promptly at 23:59:59 UTC on Monday, August 14th. Please read the responses from candidates and make your choices carefully. Feel free to ask questions to the candidates here (preferred) or elsewhere! Interview with Adam Miller (maxamillion) - Fedora Account: maxamillion - IRC: maxamillion (found in #fedora, #fedora-devel, #fedora-releng, #fedora-admin, #fedora-python, #fedora-noc, #fedora-arm, #fedora-apps, #atomic) - Fedora User Wiki Page What is your background in engineering? I have a Bachelor of Science Degree in Computer Science and a Masters of Science Degree in Information Assurance and Security, both from Sam Houston State University. I have been either a Systems Engineer or a Software Engineer professionally for the past 11 years and am currently a Principal Software Engineer at Red Hat on the Fedora Engineering Team. I’ve been a Fedora Packager since Fedora 8, am a Red Hat Certified Engineer (cert #110-008-810), wrote and maintain the Red Hat Developer RPM Guide, have experience programming in C, C++, Java, Ruby, and Python (mostly the latter two in recent years), am a community contributor upstream to Ansible, the maintainer of the Ansible firewalld module, served on the Fedora Engineering Steering Committee since the Fedora 24 cycle with hopes of continuing to serve the community as a member of FESCo moving into the future. More detailed information can be found on my Wiki Page or Website. Why do you want to be a member of FESCo? I think this largely goes back to what FESCo does, “FESCo handles the process of accepting new features, the acceptance of new packaging sponsors, Special Interest Groups (SIGs) and SIG Oversight, the packaging process, handling and enforcement of maintainer issues and other technical matters related to the distribution and its construction.” which is a process I have enjoyed having the opportunity to contribute to in the past and hope to be able to continue to do so in the future. Fedora is something I care about and want to do whatever I can to make it better, if the community continues to feel this is a positive outlet for me to contribute to, then I would be honored to continue doing so. Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues? I believe some of the largest challenges we are going to face in the next few years are around Fedora Modularity, Fedora Atomic Host, Factory 2, Fedora CI, container technology, and how each of these works together to more rapidly deliver next generation system architectures. Each of these items are extremely exciting and will bring a lot of flexibility to Fedora and its users but it’s going to require a considerable amount of work and coordination across the community to be able to accomplish. The insight I believe I am able to bring comes from my background in the technology space. I’ve been working on container technology for over 5 years, starting with the original architecture of OpenShift which carried its own SELinux sandboxing+cgroups container technology before things like LXC were mature and before Docker existed. I’ve continued on in the container technology lineage now into the modern era of OpenShift based on Kubernetes. From there I’ve worked directly to implement Fedora’s Official Container Layered Image build system in koji to allow Fedora Contributors to maintain and deliver content as container images, this is also planned to be a method of delivery for Fedora Modules in the future. I’m currently also working on Fedora’s multi-architecture implementation of the Container Layered Image Build System so that we can support all hardware platforms with containers that we do with Fedora itself. Another initiative I’m leading is Release Engineering Automation powered by Ansible with the goal of enabling Fedora to deliver faster and with more agility. This Automation effort is planned to integrate with the Fedora CI initiative as well. My hope is that the experience I’ve gained working in these areas will allow me to continue to provide guidance as a member of FESCo as we move forward. What are three personal qualities that you feel would benefit FESCo if you are elected? - I’m passionate about Fedora and Free Software, it’s part of why I wake up in the morning. I absolutely love this stuff. - I’m objective in debates and respect opposing opinions, I have biases just like anyone but I pride myself on trying to remain objective and always learning more about another perspective or opinion on a topic. - I’m a hard worker, I do not shy away from taking on tasks and getting work done. I don’t mind having to write code, docs, run meetings, test software, or anything else so long as it is productive towards achieving the goal at hand. What is your strongest point as a candidate? What is your weakest point? - Strongest: I’m passionate, dedicated, I work hard, and I have what I believe to be a respectable level of knowledge and experience in Fedora world. - Weakest: I have knowledge gaps and there are going to be certain topics I will simply have to defer or spend time researching in order to provide guidance as a FESCo member. I think this is true of almost anyone, but it’s still something that I worry about as a candidate for the Committee that is meant to Steer the Fedora Project from a technical perspective. Currently, how do you contribute to Fedora? How does that contribution benefit the community? In “Fedora engineering” background, my time as an active contributor to the Fedora Project started during the Fedora 8 cycle when I both became a Fedora Packager and a member of the Fedora QA community. I’ve been a little all over the place over the years as I learn new things and am able to contribute to new aspects of the project or newer technologies that Fedora is working on/with have become things I find interesting. Below is a list of things I’ve been involved in, including what I continue to participate with and what I’m less involved in these days. Current activities in Fedora include: - Fedora Release Engineering Team Member - Fedora Atomic SIG Member - Fedora Packager - Fedora Proven Packager - Fedora Package Sponsor - Fedora EPEL SIG Member Past activities or things I’m less involved in Fedora: - Fedora QA Community - Fedora QA Proven Tester - Fedora XFCE SIG - Fedora KDE SIG Thing’s I’m currently working on for Fedora are largely around Release Engineering, Atomic, and Containers. I’m working with others in the community to clean up “technical debt” around the tools used to actually produce Fedora as well as help to create new ones that help modernize the build and compose pipeline in order to allow the creation of Fedora to be more agile at its core. The tools I am working on are aimed at catering to the Fedora Modularity efforts as well as containerized cloud technologies such as Kubernetes, OpenShift, and Project Atomic. I’ve also been participating in an effort to establish an easier “on-ramp” to Fedora Release Engineering with hopes of making it more welcoming for new community members who take an interest in Release Engineering to join in the efforts and contribute. Much of this is happening in the Release Engineering Pagure git forge location. Along with this, here are current or relatively recent Fedora Changes I am/have contributing/contributed to: What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development? I think we’re currently on the right track as an over all project with the Fedora Modularity effort but that’s thus far kind of been an effort taking place off to the side and I would like to see it come more into the mainline of FESCo and the greater Fedora Community focus. This isn’t only on the shoulders of FESCo, but the concept of moving towards making the operating system more modular and catering to container technologies is something I’d definitely like to see be more in focus moving forward at the FESCo level. From a community perspective I would like to see FESCo somehow get involved with Fedora Hubs because I think this will greatly lower the barrier of entry for people who want to get started and don’t necessarily know where to look for information and/or don’t know how to connect with the community. I think Hub’s potential ability to bridge the various sub-projects within Fedora would also be extremely powerful. I don’t know what the best approach for FESCo getting involved here would be but it’s something that I’d love to see discussed. If a past member of FESCo, identify a negative factor you noticed while serving in FESCo. How would you propose to improve on that for the next cycle? I’ve mentioned this before and I think it’s still true. I think our ability as FESCo to communicate outwardly the community and to have a feedback loop could be better. There have been steps to make it better by migrating from the old FedoraHosted to Pagure which has been great. However to really improve much beyond that I think it touches on my hopes for Fedora Hubs, once available I’d like to see FESCo taking advantage of that as an outlet for added communications to the community. My hope is that Hubs will become a reality and we can put a communications plan into action. Do you believe the Modularity objective is important to Fedora’s success? Is there anything you wish to bring to the modularity efforts? I believe the idea of making the operating system fundamentally more modular at its core is almost going to be a hard requirement moving into the future as containerized workloads become the norm even for what we consider “legacy” software and get to a place where users and developers expect to be able to decouple the lifecycle of their underlying operating system from their applications and services. The Fedora Modularity effort caters specifically to this view of the future and I think it would put Fedora in a position to handle the coming challenges more advantageously than alternative distributions. I hope to be able to bring the Release Engineering, build pipeline, and release automation tooling perspective to some of these efforts. What is the air-speed velocity of an unladen swallow? What do you mean? An African or European swallow? It has been an honor to serve on FESCo and I would be honored to continue to do so, but no matter who is elected I look forward to continuing to work on Fedora with all of you. Thank you for your consideration as a candidate.
OPCFW_CODE
- published: 01 Dec 2017 - views: 1201706 Don't Forget To - LIKE | SUBSCRIBE | SHARE -Stirling Engine Model: https://goo.gl/2dobPN (8% OFF Coupon: ToysHo) -2020 T-Slot Aluminum Extrusion: https://goo.gl/C6eqDV (15% OFF Coupon: MC15) -Full Page PVC Fresnel Lens Magnifier: https://goo.gl/aH5VVc (10% OFF Coupon: elec) -Brass Hinge: https://goo.gl/rmuz6k -Motorcycle Telescopic Pole Flagpole: https://goo.gl/7c7zP4 (12% OFF Coupon: BgAuMotor) Check Also: -Banggood Cyber Week Hot Discount Deals: https://goo.gl/a22Z82 -Banggood Weekly Discount Deals: https://goo.gl/D7mZKM Some other interesting engines you must check out! 16 Cylinder Stirling Engine Motor Model: https://goo.gl/oxPTEu (8% OFF Coupon: ToysHo) Mini Live Steam Engine Model: https://goo.gl/PyEUd2 (8% OFF COUPON : ToysHo) Mini Live Steam Engine Single ... Solar Engines - Total War Warhammer 2 - Online Battle 12 Last time I talked Lizardmen people wanted to see their monster artillery, so enjoy! MSI: https://us.msi.com/ #DragonSquad Like my new Channel branding? Check out https://twitter.com/hforhavoc http://store.steampowered.com/app/594570/Total_War_WARHAMMER_II/ Total War: WARHAMMER II is a strategy game of titanic proportions. Choose from four unique, varied factions and wage war your way – mounting a campaign of conquest to save or destroy a vast and vivid fantasy world. Official Website: https://www.totalwar.com/ Total War YouTube Official Channel: https://www.youtube.com/user/thecreativeassembly Support me on Patreon! https://www.patreon.com/HeirofCarthage Follow me on Twitter! https://twitter.com/HeirTweets Check out my web site http://www.nobox7.com/ Check out my web site http://www.nobox7.com/ Check out my store front https://www.amazon.com/shop/nobox7 Support this channel on Patreon https://www.patreon.com/user?u=3620781 Check out my store front https://www.amazon.com/shop/nobox7 the constant flexing of the material eventually leads to failure due to metal fatigue also thermal cycling kills all metals after a fiew million cycles , this only last for 2 years at most The cost/benefit of heat engine using this material would need to include maintenance and replacement of worn-out components, namely those made of Nitinol. Although I don’t have this data, it would seem fairly clear that if it were economical, there would be commercial applications in play by this time. Figure 1: Heat engine d... Hydrogen, solar power, steam engines and much more The DIY Science Guy Trailer. Follow me on Facebook: https://www.facebook.com/TheDIYScienceGuy Follow me on Instagram: https://www.instagram.com/thediyscienceguy Music: Watch it Glow by Silent partner Three magnificent Stirling engines powered by sun being tested at Solar Energy Center, New Delhi. These engines were imported from Infinia Corp. USA, by ONGC for studying feasibility of technology for large scale power projects. Rated performance of each engine/collector combine, for DNI 850W/sq.m, output is 3KW (3 Phase AC, grid quality) at efficiency of 24%. max performance at SEC, 1.7KW @ 18% eff. due to low DNI I have recently been working on my solar system. It has brought up a lot of discussion about what is better, Solar/Electric or Gasoline? Grow Your Heirlooms: http://www.growyourheirlooms.com/ Please Share & Subscribe http://www.youtube.com/subscription_center?add_user=growyourheirlooms Knex V8 engine 'cut away' view - built to request, someone wanted me to build a V8 version of my four cylinder unit, both engines have since aquired spark plugs and rocker arms. These engines have very short push rods and overhesd style camshafts, real engines either have a low mounted camshaft, usually just the one operating both inlet and exhaust valves, or overhead camshafts with the cams directly operating the rockers. Alas I wobbled the camera around so much whilst I was talking that Youtube's brilliant 'antishake' function destorted the images in places - I've made another follow on video with less camera shake! :) http://www.greenpowerscience.com/ This is the first video from our "Viewers Video" segment where we showcase GreenPowerScience inspired work or other work that offers great ideas in the Green Energy Field. Joe Carruth built this by himself and this is the third prototype of his design. The unit weighs 1.5 tons and is 12.5 ft x 14.5 ft of active collection area. He currently manually tracks the sun for testing and hopes to get some funding to bring this project to a self contained version that can be available for Alternative Power production. We use an IR spotlight to send commands to a microcontroller. Platform for monitoring, reporting & analytics of solar PV plants The Independent | 16 Jan 2019 WorldNews.com | 17 Jan 2019
OPCFW_CODE
It's really good! However, now that I'm on windows 10, it causes Visual Studio 2012, 2013 and 2015 to crash when editing C/C++ applications often. When the spell check is uninstalled, VS does not crash. Please fix this! It's really awesome. I want it to work. Works very well for our C++ projects in VS2012. So far I've been impressed with how it has handled rather large (20K LOC) files without any obvious performance penalty. This extension has been especially useful for editing pages of Doxygen documentation. By default, it underlined almost EVERY WORD in comments on my machine, because it was trying to check the spelling in French, and the comments were in English. There is no option to change the language. Fortunately I thought of switching the language in the taskbar, which works, but it is far from obvious, and has the undesirable effect of changing the language for all apps, not just Visual Studio. Many people in non English-speaking countries prefer to write their code and comments in English; I guess it's ok to select the computer's language by default, but there should be an easy way to switch to another language just for Visual Studio (or just for the project). Please send crash report feedback if you see this, so Microsoft employees can see it and hopefully act on whatever the crash is. I haven't yet updated to win10, but if/when I do, I can see if there's something to be worked around. Until then, I guess the best answer is to uninstall the extension :( Sorry! It's possible that the HTML editor has changed how they classify text; the HTML tagger works by spell checking anything that isn't otherwise classified as a symbol of some sort: So if they started classifying everything, including plain text, then it may not work. What version of VS are you using? By reading all these language related question I was still unable to understand whether there are any language options without recompiling the code? I have a German OS, German Office with English dictionary and an English Visual Studio. The spell checker is set to German which I don't want. Am I right that this tool cannot provide English spellcheck for my configuration? You can change the language from the "language button" in Windows 8 taskbar (I think it was called the "language bar" in previous versions). I don't think it appears by default, you have to have multiple languages installed in Windows "Language Preferences" dialog and/or multiple input methods configured. Having spelling problems this plugin is the best, I would love one additional update though and that is, that it would also spell check c# Region tags. #region dfault Constructors This may be generic to WFP's Customer Dictionary, but I am not sure, and in either case I don't know how to accomplish porting your Visual Studio's Spell Checker Dictionary to a new PC. I really do not want to began again teaching it all the words I have added over the past 2.8 years. Thank you in advance. Yup. The project had been locked for some reason (the admins had no idea why, so they unlocked it), so I wasn't able to update it with the rest of my extensions. I'll try to get to it soon (it's all built, just not uploaded). You can try: Download extension, extract it with a zip program, edit the extension.vsixmanifest and add VisualStudio Version="12.0" (maybe also add some more versions like premium) and the SupportedFrameworkRuntimeEdition to MaxVersion="4.5". Add this file again to the extension file (using a zip program like 7zip/winzip/winrar)and install ;) Since we can install language packs in VS 2013 to fit our personal liking (e.g. non-English OS, but VS with English GUI), IMHO it would just be logical to change Spell Checker to use this instead of the OS GUI's language. So, would you mind to change the language dependency from OS to VS? Or implement a way to set this up manually - without downloading and compiling your extension's code? :) Yoda86: sadly, it doesn't work at all like that; it's not a matter of just picking what the dependency is. There are other extensions that use the Office dictionary, for example, but the spell checking mechanism is entirely different there, so it's a really different extension. Also, the language packs in visual studio aren't the same things as dictionaries + some grammar support to pick out spell-checkable words in sentences, so that doesn't help either. Sorry :( Greate extension ! Works well in VS13, but when I open existing project with lots of spellings errors I would like NOT to show each error in my scroll bar (when its in map mode) How can hide the warnings?
OPCFW_CODE
Active directory tools windows 2008 r2 To create your personal navigation nodes, use audiobook english idioms vol. i the Add Navigation Nodes on top of the navigation pane. Click Features, and then Add Features. Again, you can customize the navigation pane: When you right-click a navigation node you can rename or remove the node, create a duplicate node, or move the node up or down in the navigation pane list. To use Server Manager to access and manage remote servers that are running Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2, you must install several updates on the older operating systems.In the Group Policy Editor, go to Computer Templates: Policy definitions (admx files) retrieved from the local machine/Network/ BranchCache.Click to view larger image.Check Windows Branch Cache and then click the Install button.The other big difference between aduc and adac lies in adacs underlying architecture.Windows 7 does not ship with rsat installation files.In, features, expand, remote Service Administration Tools and.This means that you can also use adac to manage AD instances that are running on other Windows server platforms besides Server 2008.To check the status of the cache, use the following command at a command prompt to display the amount of data stored in the hosted cache: Netsh branchcache show status all The last word Some of the more interesting improvements present in the publicly available.In the Customize Start Menu dialog box, scroll down to System administrative tools, and then click Display on the All Programs menu and the Start menu.Earlier smartdraw 6 serial number releases of Remote Server Administration Tools (such as those for Windows.1) are not available-nor do they run-on Windows.Adac supports this data-oriented view of AD objects as well.On the Action menu, click New Rule.And in addition.Active Directory Domain Services Tools, and then click, active Directory Domain Controllers Tools.Adac lets you simultaneously connect to different domain controllers (DCs) in different domains to manage objects across multiple domains within the same adac instance. To use adac you need at least one Windows DC in your domain that has an operational adws service. From intro titles to credit update pes 2013 februari 2015 rolls and everything in betweenincluding color correction, chroma key or green screening, and specialized visual effectsweve pulled together 47 of the most useful After Effects tutorials for auteurs.74 Simulated Lighting 75 Make explosions and muzzle flash.Pixel Bender plug-inRead more ImTOO MOV to MP4 Converter is created to help you convert QuickTime MOV video to MP4 video with super fast speed and great quality.Colinmcrae dirt.jar 135.Siscontents is a lightweight and useful utility designed to unpack and edit Symbian 9 SIS packages (Sony Ericsson UIQ.x platforms and Nokia S60Read more It features compression ratios.5 to infinity, adjustable attack and spyware doctor 8.0 serial number auto/manual release and simple, clear metering that helps you keep an eye on gain reduction, threshold and input/output levels.Own a copy of a great Sylenth1 progressive presets and a bonus to buy anything onRead more
OPCFW_CODE
//php print_r($this); ?> The Event itself must still be manually created in MosoMRM. This gives you the ability to import multiple instances per Event at different locations, on different dates and times, and with multiple instructors. The Event Import configuration page displays and stores all of the pertinent import data including: - Import ID (auto-generated by application) - Type of validation (Validate Only or Validate and Import) - Status (Not Started, In Progress, Completed, or Completed with Errors) - Status Date in UTC time - Total Rows imported - Successful number of imported rows - Failed number of imported rows i.e. rows that contained errors. How to do an Event Import The Import Events pop-up window is where you select the .csv file to import and verify that it is formatted properly before it is imported. To Import Events: - From the Launch Pad, click the System Configuration icon in the left navigation panel. - In the Scheduler section, select Event Import. This opens the Event Import Status page. - To start the import process, click the Import New Events button. The Import New Event Instances pop-up window displays. - Select a .csv file from your hard drive. Click the icon to the right of the CSV File text box to open your documents library to select a file. - Click the Check CSV button. This verifies that the .csv file is properly formatted and activates the Save button. - If it is properly formatted, a message banner displays informing you of the number of rows in the file. - If it is not properly formatted, an error message is returned advising you that this is not a valid .csv file. - Select either Validate Only or Validate and Import. (see below for more information) - Validate Only - This goes through the process of checking each row of the file for errors and bad data only. - Validate and Import - This does the same validation process AND automatically imports the Instances into the Events page. - After you have made your selection, click the Save button to start the process for the task you selected. Update Event Resources Both validation types generate a record that displays on this page only. Each record includes a unique ID number and a Status as it moves through the stages of the process. The import itself is done through a tool on the back end of the application that pings MosoMRM periodically to see what tasks are available for processing. - Not Started - In Progress - Completed with Errors Total Rows = Succeeded + Failed - Succeeded = Rows that were analyzed included no errors or data anomalies. - Failed = Rows that are formatted properly, but may include errors or some sort of data anomaly. A Completed status will only be returned if the validation was done and ALL rows Succeeded, so those the number of Total Rows should equal the number of Succeeded rows. A Completed with Errors status will be returned if one (1) or more rows have Failed. In the example below, Import ID #44 includes 400 Total Rows-182 Succeeded, 218 Failed (182 + 218 = 400). - It is recommended that you run a Validate Only upload before a Validate and Import. This gives you a chance to verify that the file formatting is correct and to clean up any data errors. - Pending on how many records you have in your database, be prudent as to the time of day you do your upload and import. This process can take multiple hours to complete pending on how many records you are validating and importing. Rentals Configuration | Enterprise & Location Setup | Inventory, Activities & Related Settings | Data Management | Financial | Sales & Prospecting | Workstations & Devices | Employees & Staff | Tax | Scheduler | Childcare Configuration & Settings | System Maintenance |
OPCFW_CODE
Having Social Media buttons on your website is a great way to get readers engaged and to share your thoughts and ideas. As per business technique, the greater part of them is utilizing web-based social networking showcasing. So social sharing has turned out to be very vital. Presently the inquiry is how to make social sharing simpler for guests? To make sharing less demanding and empowering your guests to share content, one must integrate some sort of sharing buttons to your site. Sharing buttons obviously work extraordinarily compared to other approaches to get your content shared by means of web-based social networking, email, and other online channels. Notwithstanding, while numerous site proprietors agree that sharing buttons are something worth being thankful for, but they don't know how to accomplish/integrate them in the site? You can easily integrate sharing buttons to your site by adding our extension to your page. Here we will guide you how to integrate sharing extension to your website from scratch! Key features of ns_sharethis extension: - Standard share buttons - Floating share buttons - Social media sharing buttons - Image sharing buttons - Custom share icons - Official buttons including the Facebook Like Button, Twitter Tweet Button, Pinterest Save Button, and LinkedIn Share Button - Universal email sharing makes it easy to share via Gmail, Yahoo Mail, Outlook.com (Hotmail), AOL Mail, and any other web or native apps Frontend of your site with social shaing "Inside Page" The extension needs to be installed as any other extension of TYPO3 CMS: 3.1 Get the extension Step 1: Switch to the module “Extension Manager”. Step 2: Get the extension - - Get it from the Extension Manager: Press the “Retrieve/Update” button and search for the extension key 'ns_sharethis' and import the extension from the repository. - Get it from typo3.org: You can always get current version from https://extensions.typo3.org/extension/ns_sharethis/ by downloading either the t3x or zip version. Upload the file afterwards in the Extension Manager. Step 3: Now install the Extension. 3.2 Activate the TypoScript The extension ships some TypoScript code which needs to be included. Step 1: Go to Template Module. Step 2: Switch to Root page. Step 3: Select Info/Modify option from drop-down box. Step 4: Click on Edit the whole template record button. Step 5: Switch to Includes tab. Step 6: Select NITSAN - API ShareThis.com (nitsan_sharethis) at the field *Include static (from extensions): Quick & Easy "Global" configurataion of Sharethis.com: Step 1: Switch to Extension Module. Step 2: Select Installed Extension. Step 3: Click on Setting icon. Step 4: Now configure plugin as per the requirement. Setup "Particular Page Only" configuration of Sharethis.com: Step 1: Switch to the Extensions module. Step 2: edit configuration and then Disable Global Interface from this "settings". Step 3: Now select plugin Nitsan Social widget and configure it as per requirement. Step 3.1: Open the page where you want to add this element. Step 3.2: Click on Add Content and switch to Plugins tab. Step 3.3: Now select the “Nitsan Social Widget Plugins”. Step 3.4: Now switch to Plugin tab and Configure fields as per the requirement. Please, clear cache from top panel 'Flush frontend caches' & 'Flush general caches'. It will be also great to clear cache from Install tool too. - Support: You could report any issues/problems at Github - https://github.com/nitsan-technologies/ns_sharethis/issues - Website: http://www.nitsan.in/ | http://www.itug.in/ - Contact Email: [email protected] Just download & try the EXT:ns_sharethis extension. You can write down your feedback/suggestion/comments to below comment box. So, we will adapt it for next version of EXT:ns_sharethis. Now, Grow your audience with our easy-to-install social media platform.
OPCFW_CODE
FORMAT is used to specify the recording format. Note that it is couched in terms of tracks despite the switch to disks because the concept isstill used in Mark5A and in Mark5A+ playback of Mark5B. Options are: - Set to DAR type for first stage default. VLBA formats - VLBA type systems: VLBA - Let SCHED choose the fan out. VLBA1:1 - 1 bitstream on 1 tape track. VLBA format. VLBA1:2 - 1 bitstream on 2 tape tracks (fan out). VLBA1:4 - 1 bitstream on 4 tape tracks (fan out). VLBA2:1 - 2 bitstreams on 1 tape track (fan in). VLBA4:1 - 4 bitstreams on 1 tape track (fan in). MKIV formats -Mark IV (and VLBA4) systems: MKIV - Let SCHED choose the fan out. MKIV1:1 - 1 bitstream on 1 tape track. Mark IV format. MKIV1:2 - 1 bitstream on 2 tape tracks (fan out). MKIV1:4 - 1 bitstream on 4 tape tracks (fan out). MKIV2:1 - 2 bitstreams on 1 tape track (fan in). MKIV4:1 - 4 bitstreams on 1 tape track (fan in). MARKIII format - Mark III and VLBA systems. MARKIII - Mark III tape format. No fan in or out. S2 - Canadian S2 record systems. S2 - All S2 recordings. MARKII - No action. Use for Mark II and single dish. NONE - No tape to be used, as for pointing. Note on track assignments: Track assignments are an easy thing to get wrong. SCHED will make track assignments automatically for some modes if they are not specified in the setup file. Most users should take advantage of this facility. Automatic track assignments will be made for MARKIII modes with 1 bit only, and for VLBA1:1, VLBA1:2, and VLBA1:4 modes with 1 or 2 bits. The barrel roll is on by default (can be controlled with BARREL) in VLBA modes and off in MARKIII mode. The roll is within 8 or 16 track groups, advancing one track for each frame. The fan in modes, VLBA2:1 and VLBA4:1 were never implemented, so this paragraph is only retained for historical reasons. If using a fain in mode, give the same track assignment to 2 or 4 channels. In the fan in modes, if there are 2 bits per sample, the two bits will be put on the same track so there will be one channel per track for VLBA2:1 mode and two channels per track for VLBA4:1 mode. For the fan out modes, VLBA1:2, VLBA1:4, MKIV1:2, and MKIV1:4, give the first track assignment for the baseband channel. Sequential formatter track assignments are used for the other tracks associated with that baseband channel. The resulting recorder track assignments are then given below: forward, track for 1st bit - 3 7 11 15 19 23 27 31 forward, track for 2nd bit - 5 9 13 17 21 25 29 33 reverse, track for 1st bit - 2 6 10 14 18 22 26 30 reverse, track for 2nd bit - 4 8 12 16 20 24 28 32 forward, track for 1st bit - 3 11 19 27 forward, track for 2nd bit - 5 13 21 29 forward, track for 3rd bit - 7 15 23 31 forward, track for 4th bit - 9 17 25 33 reverse, track for 1st bit - 2 10 18 26 reverse, track for 2nd bit - 4 12 20 28 reverse, track for 3rd bit - 6 14 22 30 reverse, track for 4th bit - 8 16 24 32 Note that MkIII modes are obsolete. Format NONE is used for VLBA testing, such as single dish pointing. When specified for VLBA data, the on-line system does not touch the formatter. This includes not readjusting the pcal detection. Format NONE will mainly be used by VLBA staff for testing and in high frequency projects that do reference pointing. If the reference pointing scan uses this format, no formatter reconfigures are done which can prevent the significant data loss that can happen at reconfigures. Note that it is not necessary to specify FORMAT. FORMAT will be set equal to DAR in the station catalog.. Stations with DAR = VLBA4 will write Mark IV format on the tape. When the default format is taken, the barrel roll is turned off for formats other than VLBA.
OPCFW_CODE
Order Azurette 0.15/0.02mg online - How to Purchase Desogestrel + Ethinyl Estradiol Cheap Azurette Generic Fast Cheap, Azurette For Sale Order Azurette Online. Approved Pharmacy for Desogestrel + Ethinyl Estradiol! BUY AZURETTE ONLINE! - CLICK HERE! cheap desogestrel + ethinyl estradiol pay with paypal, buy azurette uk stores, can you buy azurette online in ireland, buy azurette singapore online, buying azurette wiki, buy azurette online real, generic azurette reviews forum, generic azurette online purchase, buying azurette nhs, can you buy azurette over the counter in dubai, azurette for cheap without an rx forum, do you have to be a certain age to buy azurette, buy cheap azurette com, cheap desogestrel + ethinyl estradiol import, buy azurette online and desogestrel + ethinyl estradiol, azurette purchase in canada, cheap azurette online uk, buy azurette name, cheap azurette article, how do i order azurette over the internet, generic azurette coupons 2018, online pharmacies for azurette, generic pharmacy azurette, azurette brand purchase, azurette price comparison shopping, azurette cost at vons, cheap desogestrel + ethinyl estradiol at canadian pharmacies, no bullshit online ordering of azurette, azurette singapore buying, buy azurette order online cheap cheap desogestrel + ethinyl estradiol jamaica, azurette on line price per pill, azurette vegas buy And in the case of stretching devices, they truly don't. Grey is one of the worst colors for this reason, sweat is so apparent when wearing grey, avoid this color whenever possible. Once the psychological state of a person is disturbed it will affect his or her entire system. One diet pill that you can trust, though, is hoodia gordonii. Exercise Well and Eat Right buy one azurette pill online Scraping your tongue will remove any dead white blood cells, bacteria and food particles from collecting in the tonsils. After sedation dentistry procedure a patient is in a semi-conscious state and needs to be taken home by a friend or relative. Deficiency of these essential fats results in many diseases like - Attention deficit/hyperactivity disorder (ADHD), Alzheimer's. One of the most common indicators is when you feel your chest has tightened up. Here are 10 ways to improve your sleep:1. desogestrel + ethinyl estradiol polymeric immunoglobulin receptor, azurette generic ireland azurette However, the treatments applied will never be able to eradicate the HPV virus that will continue to remain within the body. Crash weight reduction systems won't be sustainable for the long term. Azurette
OPCFW_CODE
Table of Contents Plans of Action |Arrange all six coins of a suit to line up in one row or column. The coins can jump, the tiles can rotate or shift. |L. Edward Pulley This game was inspired by Sid Sackson's “Fields of Action”, which was in turn inspired by Claude Souci's “Lines of Action”. The initial setup concept was inspired by Sackson's “Sly”. Take 16 tiles and place them grid-side up in a 4×4 square. This creates a playing grid of 64 spaces. Place all 24 coins at random in the center of the grid, 6 across, 4 down, suit-side up. Arrange all six coins of a suit to line up in one row or column. How to play A single coin moves by jumping over spaces (occupied or empty) in its row or column. It must land on an empty space. The number of matching suits in that row or column defines the number of spaces a coin can jump. If a coin is the only one in the row, it may only jump one space. If there are two, it must jump two spaces, no more, no less. The same rules apply to three, four, five or six coins. If a tile is full and has four coins, it can rotate or shift. A tile may do one or the other, not both. Once you rotate or shift a tile, you have to move a coin from it. At no time do you have to rotate or shift a tile, it is optional. Either rotate the tile by 90º, 180º or 270º. Or shift it orthogonally in any direction to the next empty tile-space. The game ends when six coins of the same suit line up in one row or column. Scoring is optional. Count the amount of moves it takes you to win. The least amount of moves wins. Some setups will be easier than others. If you wish to compare your scores with others, you should share your exact setup with them. Try lining up all the coins of more than one suit. Doing that with all four suits is the hardest. Here is a sample setup. Copyright © 2003 by L. Edward Pulley (original), Copyright © 2019 Anika Henke (rewrite). Permission is granted to copy, distribute, display, and perform the work, to make derivative works, and to make commercial use of the work, as long as credit is given to the original author, and if you alter, transform, or build upon this work, you may distribute the resulting work only under a license identical to this one.
OPCFW_CODE
# ============================================================================= # LRU Trie Walk History Class # ============================================================================= # # Class tracking some information about a given walk in the LRU Trie. # # Class representing the history of a simple walk into the Trie class LRUTrieWalkHistory(object): def __init__(self, lru): # Properties # TODO: this should become a list self.lru = lru self.webentity = None self.webentity_prefix = '' self.webentity_position = -1 self.webentity_creation_rules = [] self.page_was_created = False def __repr__(self): class_name = self.__class__.__name__ return ( '<%(class_name)s prefix="%(prefix)s"' ' weid=%(weid)s position=%(position)s wecr=%(wecr)s>' ) % { 'class_name': class_name, 'prefix': self.webentity_prefix, 'weid': self.webentity, 'position': self.webentity_position, 'wecr': '/'.join(str(wecr) for wecr in self.webentity_creation_rules) if self.webentity_creation_rules else None } def update_webentity(self, weid, prefix, position): self.webentity = weid self.webentity_prefix = prefix self.webentity_position = position def add_webentity_creation_rule(self, position): self.webentity_creation_rules.append(position) def rules_to_apply(self): for position in reversed(self.webentity_creation_rules): if position >= 0: prefix = self.lru[0:position] # Note that it remains to the user to apply default rule if # none of the given rules would happen to succeed yield prefix
STACK_EDU
When designing and developing a website or application, a challenge is that roles and tasks can often operate in silos. While in some cases this is fine, in many others it takes away from the ability to collaborate more closely and ultimately build an even stronger solution. To tackle this challenge, we recently piloted a new component-based design approach that prioritizes real-time collaboration. [Spoiler alert: the results so far are positive!] At Forum One, we build a lot of sites and applications with Drupal and WordPress, and both content management systems incorporate layout features out of the box. In Drupal 8+, that’s Layout Builder. In WordPress, that’s the Block Editor. These layout tools give content editors the flexibility to build their own page layouts within the CMS’s administrative user interface. Since component-based design is centered on building reusable elements, this approach fits nicely with these CMS tools: we design and build the elements that our mission-driven organizations can then mix and match in order to suit their needs. Last month we launched a new website for Trinity Church Wall Street (TCWS), a historic Episcopal parish in New York City that provides a large array of services and activities to its community. The website redesign project provided a great opportunity for us to pilot a new approach to how our user experience (UX), design, and technology teams collaborate while incorporating component-based design principles. In the context of website and application development, component-based design means that we focus on building a robust design system of reusable components rather than discrete page layouts. By approaching design modularly, we provide the building blocks that can be combined to build impactful page layouts on the fly. Reducing pain points and improving collaboration across teams Aside from supporting Layout Builder and Block Editor, we had other reasons for reassessing our approach to design and tech collaboration. Our old approach provided fewer opportunities for members of our UX, design, and front-end development teams to work together while focusing on the project goals. We found that the roles were too siloed and there was too much emphasis placed at the page level rather than the more global design system. The teams wanted to focus more on interactive storytelling, and they needed real-time collaboration with each other to do that successfully. Another pain point we experienced is that static page design documents don’t always effectively convey intended functionality and interactions. Without tight collaboration, our front-end team could be left to infer how a layout was supposed to behave on different screen resolutions, or what should happen if the content for a dynamic component wasn’t available, or how interactive elements were supposed to work. As a result, initial UX and design visions didn’t always match the end product. Based on this, we felt the right solution was to increase communication between teams and implement a process that focused on getting into code faster. What we did on TCWS After discussing our pain points and goals, our TCWS team piloted an experimental approach. To start, we brought our lead front-end developer into the project much earlier than we normally would, and we established weekly real-time collaboration sessions between UX, design, and front-end development leads. Together, they defined a list of components to create for the site. Each week, the team of three discussed various approaches, sketched various solutions, and experimented with prototypes built in CodePen and Pattern Lab. The team had regular sessions with our client team to get feedback on our prototypes, and they also coordinated with the project’s technical lead so that the backend architecture supported what was being prototyped. At first, we reviewed individual components. Over time, we had enough components in our library that we began building prototypes of full pages. After each review session, we documented feedback and decisions and made adjustments in code accordingly. This helped us move more quickly: we didn’t spend time revising static design files because we were able to make the edits right into our front-end code. We were also able to demonstrate how the components worked within a browser. Our client was able to use the prototypes to see how elements shifted at different screen sizes, and how interactive elements functioned. This enabled us to get important feedback when it was early enough in the design process to incorporate easily, well before the frontend code was implemented within the CMS templates. What we learned We learned a lot as a result of this pilot project. We learned that creating a library of reusable components requires a lot of planning and detailed documentation to keep everyone on the same page. We learned that collaboration takes time and focus and that our project schedules have to account for it. We learned that prototyping together is valuable because each skill set brings different expertise, which results in a higher quality outcome. We also learned that as content management platforms continue to offer layout building tools, we’ll have less control over every element of each page, and this new approach will allow us to deliver flexible systems that let our clients build beautiful, impactful pages without us. We’re looking forward to using these lessons as we roll out this process to more of our website and application projects.
OPCFW_CODE
How to string format SQL IN clause with Python I am trying to create a statement as follows: SELECT * FROM table WHERE provider IN ('provider1', 'provider2', ...) However, I'm having some trouble with the string formatting of it from the Django API. Here's what I have so far: profile = request.user.get_profile() providers = profile.provider.values_list('provider', flat=True) # [u'provider1', u'provider2'] providers = tuple[str(item) for item in providers] # ('provider1', 'provider2') SQL = "SELECT * FROM table WHERE provider IN %s" args = (providers,) cursor.execute(sql,args) DatabaseError (1241, 'Operand should contain 1 column(s)') Curious. Why are you doing a raw sql IN query when you already have the django ORM? @jdi It's a long sql query that I'm building with string concatenation based on some user-inputted values (about 20 lines long). The ORM has aggregations though. But I guess I just have to take your word for it that the ORM can't do it :-) I'm with @jdi here - is the table you're querying inside the ORM? (slight edit: I'm not with jdi, I mean that in the sense of a somewhat confused view...) @JonClements: Jon, I am offended. We talked about being honest in public. David, we are together, and we also share similar opinions on database ORMs @jdi LMAO - I'll fluff your pillow exactly how you like it and make you some warm milk to make-up? ;) (I think I should get +1 for least constructive comment on SO ever!) - the wife could be an issue though... :) 3 years later and I still get surprised of what you can find when looking for some ORM advice. I'm surprised SO didn't flag this as out of topic. +1 for making me laugh MySQLdb has a method to help with this: Doc string_literal(...) string_literal(obj) -- converts object obj into a SQL string literal. This means, any special SQL characters are escaped, and it is enclosed within single quotes. In other words, it performs: "'%s'" % escape_string(str(obj)) Use connection.string_literal(obj), if you use it at all. _mysql.string_literal(obj) cannot handle character sets. Usage # connection: <_mysql.connection open to 'localhost' at 1008b2420> str_value = connection.string_literal(tuple(provider)) # '(\'provider1\', \'provider2\')' SQL = "SELECT * FROM table WHERE provider IN %s" args = (str_value,) cursor.execute(sql,args) thats pretty cool. Did not know about the string_literal method. I am assuming that since MySQLdb follows the db api pretty strictly, this exists in most implementations?(sqlite3, postgres, etc) Definetly going back to the docs @Justin.Wood: To be honest, I didn't really know about it either, but I don't have much previous experience using MySQLdb and passing tuple values. Its always an ORM. I just looked at the docs too for this one :-) Neat. I work in some legacy code base that doesnt use an ORM, and we get this kinda stuff constantly. So this is going to be a big plus for me. You got a link to this in the docs? Couldnt find it. Though it is easy enough to find in the source. @Justin.Wood: Strangely in some versions its simply called literal: http://mysql-python.sourceforge.net/MySQLdb-1.2.2/public/MySQLdb.connections.Connection-class.html#literal Nah, that is an internal function for MySQLdb. You are supposed to use the syntax you used in the answer. It's in the source Luke! ` def _get_string_literal(): def string_literal(obj, dummy=None): return db.string_literal(obj) return string_literal def _get_unicode_literal(): def unicode_literal(u, dummy=None): return db.literal(u.encode(unicode_literal.charset)) return unicode_literal ` Another answer that I don't like particularly, but will work for your apparent use-case: providers = tuple[str(item) for item in providers] # ('provider1', 'provider2') # rest of stuff... SQL = 'SELECT * FROM table WHERE provider IN {}'.format(repr(providers)) cursor.execute(SQL) Could also write '...{!r}.format(providers) I guess - just depends on taste This option is not good because if there is only 1i item in the list there will be a comma at the end which is invalid syntax You should probably do the string replacement before passing it to the cursor object to execute: sql = "SELECT * FROM table WHERE provider IN (%s)" % \ (','.join(str(x) for x in providers)) cursor.execute(sql) This may be problematic because the strings are not quoted. Maybe replace str with repr ? could also add quotes to the join eg. %('"'+'","'.join(str(x) for x in providers)+'"') - that could be seen as a bit hack though I suppose But I think its required for this solution. FWIW, this method is susceptible to SQL injection attacks. The accepted answer (by jdi) is safer. So, you have string input for ID's required: some_vals = '1 3 5 76 5 4 2 5 7 8'.split() # convert to suitable type if required SomeModel.objects.filter(provider__in=some_vals) Yea I asked about this in the main comments. OP said there is a specific reason they are going for a raw query @jdi Interesting - if the object is inside the ORM, the SQL query can be retrieved from this answer. Otherwise, I can only think they're querying something outside the ORM. I am totally with you on that, for sure. We can only assume the OP has needs that the ORM simply can't meet. "SELECT * FROM table WHERE provider IN ({0},{1},{2})".format(*args) #where args is list or tuple of arguments. This is a little bit limited since it assumes the length of the list, right? try this.... should work. SQL = "SELECT * FROM table WHERE provider IN %s"%(providers) exec 'cursor.execute("%s")'%(SQL)
STACK_EXCHANGE
Optimizing (Vectorizing?) For loop with nested loop in R I am using rdist iteratively in order to compute nearest neighbours for huge datasets. Currently I have a rather small matrix of 634,000 vectors with 6 columns. As mentioned I'm using rdist to compute the distance of each vector to every single other vector, with each distance computation being a step. In addition at every step I run a function that computes k=1,2,3,4 nearest neighbours and takes the sum (effectively k=all neighbours). ###My function to compute k nearest neighbours from distance vector knn <- function (vec,k) { sum((sort(vec)[1:k+1])) } ###My function to compute nearest neighbours iteratively for every vector myfunc <- function (tab) { rowsums <- numeric(nrow(tab)) ###Here I will save total sums knnsums_log <- matrix(nrow=nrow(tab),ncol=4) ###Matrix for storing each of my kNN sums for(i in 1:nrow(tab)) { ###For loop to compute distance and total sums q<-as.matrix(rdist(tab[i,],tab)) rowsums[i] <- rowSums(q) for (k in c(1:4)) { ###Nested loop to run my knn function knnsums[i,k] <- knn(q,k) } } return(cbind(rowsums,knnsums_log)) } A sample of what the data looks like (634k rows of this) X1 X2 X3 X4 X5 X6 1 0.00 0.02 0 0 0.02 -0.263309267 2 0.00 0.02 0 0 0.02 -0.171764667 3 0.00 0.02 0 0 0.02 -0.128784869 4 0.00 0.02 0 0 0.02 -0.905651733 For those unfamiliar with the function rdist gets Euclidean distance between arguements. It works far faster than a custom written function. It is more applicable than dist as dist only computes within matrix distances. I know technically that is what I'm doing but dist attempts to store that in memory and it is far far too large to even consider doing that. How can I make the above work better? I've tried messing around with apply functions but get nothing useful. I hope I have explained everything clearly. If my math is correct, worst case guestimates it will take me over a week to run that code. I have very powerful servers to crunch this on. No GPUs however. I have not tried multicore (should have 12 available) but then again I don't know how I would delegate per core. Thank you for your help. Try the parallels package and the function mcapply which will do the distribution for you. Back to the fundamental problem: if there's any way you can make "intelligent guesses" as to which nearest-neighbor sets you need, rather than calculating all of them, you may be able to reduce the computation required. Several tips: 0) profile your code using Rprof, with the line.profiling option 1) Matrices in R are column-wise. Because you compare your vectors between them, it will be much faster if you store them as columns of your matrix 2) I do not know where the rdist function comes from, but you should avoid the as.matrix(rdist(tab[i,],tab)) that will copy and create a new matrix 3) you can optimize your knn() function, that sorts 4 times the same vector 4) Why not just rdist(tab) ? Thank you for the tips. A few additional things: Knn slows down the function much more than rdist (which comes from the fields package by the way). as.matrix actually speeds up the code. No idea why? I've tested with and without (as initially I though it would slow it down) but the execution actually took a significant hit. Again, thank you for the tips! what is the type of rdist(tab[i,],tab) ? It's a matrix, again it's very perplexing but the benchmarks show a clear difference. Maybe its a weird code hiccup. could you double check that the as.matrix speeds up your code ? Because I can not see any rationale explanation. How have you benchmarked your code ?! Hi Karl, apologies for the late reply, been busy. I've updated my code, have a look at my answer! It seems that a series of coincidences (somehow at every interval of attempt) led to that conclusion being. Testing it a few other times showed to increase in speed using as.matrix. Apologies for being misleading, although this makes me more comfortable, as there was no reason for that to be working the way it was. One last thing, I can't just rdist(tab) as the resulting distance matrix would be several terrabytes in size and way too big to be stored in memory. Hence the need to do it iteratively, to avoid the storage problems. So I've been working on this for a while and testing. For anyone else stuck on a similar problem, here are two more optimized versions of the code. I've significantly decreased computational time, however it still blows up with too many data entries. My next step would be to attempt to implement this with Rcpp and if possible make use of the 12 cores I have available (with the end goal being to compute 1-2 million entries in a reasonable time-frame). Not sure about the best way of proceeding on either point, but here is my code. Thank you for the help! ################################## ##############Optimized code t.m<-t(test_euclid_log) knn_log <- function (vec,k) { sum(vec[1:k+1]) } knn_log <- cmpfun(knn_log) distf <- function(x,t.m) sqrt(colSums((x - t.m)^2)) distf <- cmpfun(distf) myfunc <- function (tab) { rowsums<-numeric(nrow(tab)) knnsums_log <- matrix(nrow=nrow(tab),ncol=4) for(i in 1:nrow(tab)) { q<-apply(tab[i,],1,distf,t.m=t.m) rowsums[i] <- colSums(q) q<-sort(q) for (kn in 1:4) { knnsums_log[i,kn] <- knn_log(q,kn) } } return(cbind(rowsums,knnsums_log)) } myfunc <- cmpfun(myfunc) system.time(output <- myfunc(t)) And my attempt using applys: ###############Vectorized myfuncvec <- function (tab) { kn<-c(1:4) q<-apply(tab,1,distf,t.m=t.m) rowsums <- colSums(q) q<-sort(q) knnsums_log <- vapply(kn,knn_log,vec=q,FUN.VALUE=c(0)) return(c(rowsums,knnsums_log)) } myfuncvec <- cmpfun(myfuncvec) t1<-split(t,row(t)) system.time(out <- vapply(t1,myfuncvec,FUN.VALUE=c(0,0,0,0,0))) out <- t(out) For reference the first of the codes seems to be faster.
STACK_EXCHANGE
Fragmentation in operating system is a condition that occurs during contiguous memory allocation. In contiguous memory allocation, when user processes are loaded and unloaded from the physical memory, it breaks the free memory space into little pieces, which we refer to as fragments. Fragmentation is of two types, internal fragmentation and external fragmentation. Internal fragmentation occurs during the fixed-size memory allocation method. However, the external fragmentation is a result of variable-size memory allocation. In this section, we will discuss fragmentation in detail and its types. So, let’s start. Content: Fragmentation in Operating System - What is Fragmentation? - Types of Fragmentation - Internal Fragmentation - External Fragmentation - Internal Vs External Fragmentation What is Fragmentation? Fragmentation in operating system is a condition or state that occurs during contiguous memory allocation. In contiguous memory allocation, the processor allocates a contiguous section of memory to each process, waiting in the input queue. To allocate the contiguous memory to a process, we have two methods fixed-sized partition and dynamic partition. 1. Fixed-Sized Partition In a fixed-sized partition, the main memory is divided into fixed-sized blocks such that each block/partition contains exactly one process. The fixed-sized partition strictly bound the degree of multiprogramming. Whenever a new process arrives, it is loaded into the free partition. When the process terminates, the partition becomes available for the other processes. 2. Dynamic Partition In the dynamic partition method, the operating system maintains a table to represent which part of the memory is occupied by user processes and which is still available. At the start, the entire memory is available for user processes, and we refer to this large block of memory as a hole. Eventually, the loading and unloading of user processes leave a set of holes of various sizes in the main memory. At this point, the OS needs to check whether the newly freed and recombined (contiguous) holes could satisfy the need of processes waiting for the memory. Well, to resolve this problem, some of the most common solutions are first-fit, best-fit, and worst-fit. - In the first-fit strategy, the operating system identifies a first hole big enough to accommodate the process and allocates the hole to the process. - Now, in the best-fit strategy, the operating system identifies the smallest hole big enough to accommodate the user process and allocates it to the process. - In the worst-fit strategy, the operating system identifies the largest hole and allocates it to the user process. Thus, allocating and deallocating memory to user processes leaves many fragments or small holes scattered in contiguous memory allocation. These fragments, when integrated, create a total memory space that is enough to satisfy a memory request. But the available memory is not contiguous. We refer to this condition as fragmentation. Types of Fragmentation We can classify fragmentation into two types, i.e. internal and external. In the section above, we have learned about the two memory allocation techniques fixed-sized memory allocation and dynamic memory allocation. Internal fragmentation occurs in fixed-size memory allocation techniques. The main memory is divided into fixed-size blocks in the fixed-size memory allocation technique. Now, the operating system allocates memory in units based on block size whenever memory is requested. With the fixed-sized memory allocation technique, the memory allocated to a user process may be slightly greater than the size of the user process. Thus we can define internal fragmentation as unused memory internal to the partition. Example of Internal Fragmentation Consider that the operating system divides memory into several fixed-sized partitions, where each partition is 24 KB. Now consider there is a memory request of 23 KB. Now, if we allocate a fixed-sized partition of 24 KB to the process, we are left with a hole of 1 KB. We refer to this fragment as internal fragmentation. If observed carefully, the overhead of keeping a record of this internal fragment of 1 KB is substantially greater than the fragment itself. A solution to internal fragmentation is using a complementary method, i.e. segmentation. Segmentation divides the process into segments. The operating system loads these segments to dynamic partitions of memory that must be contiguous. In dynamic partitioning or variable-size memory allocation methods, whenever a user process requests for memory, it is allocated exactly the required amount of memory. With dynamic partitioning, eventually, a lot of small holes (fragments) are created in the memory. Over time, this fragmentation increases, thereby reducing memory utilization. We refer to this kind of fragmentation as external fragmentation. A solution to external fragmentation is compaction. The operating system timely runs a shift where all the processes in the memory are shifted such that they are all contiguous. It eventually forms a single block of free memory. Another solution for eliminating external fragmentation is to allow the logical address space allocated to the process to be non-contiguous. A complementary technique to achieve this is paging. In paging, the system divides the memory into n number of equal size frames. The system also divides the user process into n number of pages same as the length of the frame. Now, whenever the system has to load a process into the memory, it must load all of its pages into available frames. Here, the allocated frames may or may not be contiguous. Example of External Fragmentation The figure below explains how external fragmentation takes place. Consider that we have 64 megabytes of main memory, of which 8 MB is allocated to the operating system, and 56 Mb is for user processes. At first, user process 1 arrives, and the system allocates 20 MB. Next, it allocates 14 MB to process 2, then 18 MB to process 3, and still, memory space of 4 MB is available. Now, process 2 is removed from the memory vacating the space of 14 MB. The system allocates vacated space to process 4 of 12 MB, again leaving a hole of 2 MB. Subsequently, process 1 is also removed from the memory vacating the space of 20 MB. The system allocates the vacated space to process 2 of 14 MB, which again creates a hole of 6 MB. So, the last state of memory in the figure above shows fragments of 6, 2 and 4 MB, which are too small to allocate to any other process. Internal Vs External Fragmentation |Basis for Comparison |Occurs in fixed-sized memory allocation |Occurs in variable-sized memory allocation |The difference between memory allocated and memory required by a process forms internal fragmentation |The unused, free or wasted space between two non-contiguous allocated memory forms external fragments that are too small to serve any process |Internal fragmentation occurs when the allotted memory space is greater than the user process |External fragmentation occurs when user processes are removed from memory, leaving the free space broken into little pieces |With segmentation allocating the best-fit block to the user process provides the solution to internal fragmentation |Compaction and paging provide a solution to external fragmentation So, this is all about how fragmentation in the operating system occurs. What are the types of fragmentation and their solution? We have also discussed the differences between internal and external fragmentation.
OPCFW_CODE
Is python the fastest growing programming language in the world? Well, if you look at the recent trends and polls, you’ll know it already is. According to StackOverflow “Python has a solid claim to being the fastest-growing major programming language.” People care what StackOverflow says because it is the most popular online community of programmers. The website attracts programmers from all corners of the earth. And if that’s not enough, Python is also crowned as the best programming language in 2018 by Linux Journals. So the communities like StackOverflow would know which programming languages are the most popular and what is the current trend. The fact that python took the top spot in 2017 is notable because Java has been dominant for so long, this resulting from the fact that most undergraduate courses teach it to their students. So one has to assume that some sort of drastic change occurred on a fundamental level for python to attract so much interest; it is safe to say that not even Travis Oliphant could have predicted this trend back when he used python to create one of the most successful data science platforms on the market. Why is Python so popular? It should be noted that stack overflow only concluded that python was the fastest growing programming language because of the number of page visits questions with the python tag were attracting. So StackOverflow’s conclusion only matters if one believes that their traffic levels is an accurate indicator of popularity. Then again, any website that receives up to forty million views a month is difficult to ignore. A number of reasons could be driving the popularity of python, with the most prominent including the following: Python is a simple language to master and utilize, so it makes sense that so many new programmers are flocking to it. Additionally, python code is clean and easy to comprehend. Python stands out not only because it supports multiple systems and platforms but also because it can be used for a variety of purposes from web development to scientific modeling and systems operations to mention but a few. There are people using python to create advanced video games. Others have used the language to experiment with artificial intelligence. There is no limit to the applications of python. 3. Web Assets Python’s clean code and huge documentation make the creation and customization of web assets easy and efficient. The clean syntax also makes the process of reviewing code very easy. It is also worth mentioning that the language has several powerful GUI frameworks like Django that not only allow the writing of graphical applications but also simplify code reusability and enable the creation of cross-platform applications. 4. Future Scope Python is dominating the emerging and cutting-edge technologies like data science, Artificial intelligence, and machine learning. These technologies are becoming very important for every industry that produces a lot of data and works on automation. Python has proven that it is capable enough for solving hard problems as well. That makes it a top priority for developers. The fact that there is so much python documentation on the internet makes the solving of python-related problems easy. It wouldn’t be an exaggeration to assume that python is only going to grow further in popularity in the coming months and years, especially as the positive word of mouth spreads. Have a look at some important python related topics for further information: 7 thoughts on “Why Python is the Fastest Growing Programming Language?” Why python is popular? Because students learn it as primary language in school along with C, C++ and Java. Obviously it is easier to pickup than the other. Much easier than PHP now. Most interesting fields like AI need python as primary language due to available frameworks and libs like tensorflow and numpy. Most skids praise python too much because that is the only language they can write in. Lack of access modifiers in python provide a bad object oriented experience. Most beginners are happy with that. IMHO, python is overrated. True, after my 5 years of working with C#.Net I recently started writing Python scripts for some automated deployments on linux server and I can say it is overrated. Python is more popular on linux may be because Bash scripting sucks for complax tasks. Can confirm this. Having been coding for a long time, initially in Java and then in C and Python, I can say Python is far superior to Java (but not C). Those who say it’s overrated need to understand that it’s a different kind of a language. It’s an interpreted language. It’s bound to be slow. But still, with clever optimisation tricks and running the power hungry code on a C backend (all machine learning and image processing libraries do this), it can perform faster than unoptimized C. Also, it’s not OOP. Don’t expect it to have features that doesn’t belong to it. Things such as encapsulation and stuff don’t exist in python. Use cases for interpreted languages such as Python and compiled ones such as C are drastically different. It’s best not to compare them. However, a C/Python proficiency is much better than proficiency in Java. You’ve got a lot more scope. Agree with above, python is akin to basic. It is hugely fragmented and allows for many runtime errors. Python is simplified language and easy to learn. Can use for all current development. Yeah! Python is too good at the beginning level but it’s always frustrating when it gets to advance Concepts. I love Python so much after C programming language because C is the father.
OPCFW_CODE
M2295: Implementing and Supporting Microsoft Internet Information Services 5.0 This course has been retired in favour of the newer v6.0 course. Installing Internet Information Services 5.0 (5 topics) - Introduction to Internet Information Services 5.0 - Installing IIS - Updating IIS - Using Online Documentation - At the end of this module, you will be able to install IIS 5.0 and update the installation. Configuring Web and FTP Sites (6 topics) - Configuring Properties - Creating Additional Sites - Creating Virtual Directories - Redirecting Requests - Creating Custom Error Messages - At the end of this module, you will be able to create and configure IIS 5.0 Web and FTP sites. Administering Web and FTP Sites (6 topics) - Managing Content - Performing Remote Administration - Managing the Metabase - Administering Web Sites Using Built-In Scripts - Restarting Internet Services - At the end of this module, you will be able to administer Web and FTP sites. Including, Managing Web sites remotely, Backing up and restoring Web sites, and Creating a Web site using built-in scripts and adding content using WebDAV. Installing and Configuring Web Applications (5 topics) - Introduction to Web applications - Creating and Removing Web Applications in IIS - Configuring Web Applications - Installing ISAPI Filters - At the end of this module, you will be able to configure IIS to enable Common Gateway Interface (CGI) and Internet Server Application Programming Interface (ISAPI) Web applications. This includes Creating and configuring applications and Installing and using ISAPI applications and filters. Implementing Security on a Web Server (8 topics) - Configuring Access Permissions for a Web Server - Configuring Authentication for a Web Server - Using Client Certificates - Securing Web Communications Using SSL - Using Local Security Policies on a Web Server - Configuring Security on an FTP Site - Configuring Auditing for IIS - At the end of this module, you will be able to implement secure access and communications for a Web site. This includes securing Web resources using permissions and authentication and configuring an encrypted connection using SSL. Monitoring and Optimising a Web Server (7 topics) - Optimising a Web Server - Monitoring and Optimising Memory Usage - Monitoring and Optimising Processor Activity - Monitoring and Optimising the Available Network Bandwidth - Managing Log Files - Optimising a Web Site - At the end of this module, you will be able to monitor and optimise Web server performance. This includes monitoring log files and monitoring performance on an IIS server. Configuring IIS to Provide E-Mail Support (6 topics) - Introduction to the SMTP Service - Configuring Support for the SMTP Service - Controlling E-Mail Messages - Configuring Security for the SMTP Service - Managing the SMTP Service - At the end of this module, you will be able to configure IIS to provide e-mail support. Managing FrontPage-Extended Webs (7 topics) - Introduction to FrontPage 2000 Server Extensions - Creating FrontPage-Extended Webs - Managing Access to FrontPage-Extended Webs - Administering FrontPage-Extended Webs - Managing Authoring of FrontPage-Extended Webs - Optimising the Performance of FrontPage-Extended Webs - At the end of this module, you will be able to create and manage FrontPage-extended webs. This includes creating and managing access to a FrontPage-extended web and managing authoring of FrontPage-extended webs. Implementing IIS 5.0 (5 topics) - Identifying Potential Risks from the Internet - Implementing IIS as an Internet Web Server - Implementing IIS as an Intranet Web Server - Implementing IIS as an Extranet Web Server - At the end of this module, you will be able to explain implementation guidelines based on the specific role of the IIS Web server. Before attending this course, students must have completed Course 2153, Implementing a Microsoft Windows 2000 Network Infrastructure, or have equivalent knowledge of TCP/IP, DNS, certificate services, and tasks that are performed in Microsoft Windows® 2000 Server.
OPCFW_CODE
package io.takari.maven.builder.smart; import static io.takari.maven.builder.smart.ProjectComparator.id; import static org.junit.Assert.assertEquals; import io.takari.maven.builder.smart.ProjectExecutorService.ProjectRunnable; import java.util.*; import java.util.concurrent.atomic.AtomicLong; import org.apache.maven.project.MavenProject; import com.google.common.util.concurrent.Monitor; import org.junit.Test; public class ProjectExecutorServiceTest extends AbstractSmartBuilderTest { // copy&paste from ThreadPoolExecutor javadoc (use of Guava is a nice touch there) private static class PausibleProjectExecutorService extends ProjectExecutorService { private boolean isPaused = true; private final Monitor monitor = new Monitor(); private final Monitor.Guard paused = new Monitor.Guard(monitor) { @Override public boolean isSatisfied() { return isPaused; } }; private final Monitor.Guard notPaused = new Monitor.Guard(monitor) { @Override public boolean isSatisfied() { return !isPaused; } }; public PausibleProjectExecutorService(int degreeOfConcurrency, Comparator<MavenProject> projectComparator) { super(degreeOfConcurrency, projectComparator); } @Override protected void beforeExecute(Thread t, Runnable r) { monitor.enterWhenUninterruptibly(notPaused); try { monitor.waitForUninterruptibly(notPaused); } finally { monitor.leave(); } } public void resume() { monitor.enterIf(paused); try { isPaused = false; } finally { monitor.leave(); } } } @Test public void testBuildOrder() throws Exception { final MavenProject a = newProject("a"); final MavenProject b = newProject("b"); final MavenProject c = newProject("c"); TestProjectDependencyGraph graph = new TestProjectDependencyGraph(a, b, c); graph.addDependency(b, a); DependencyGraph<MavenProject> dp = DependencyGraph.fromMaven(graph); HashMap<String, AtomicLong> serviceTimes = new HashMap<>(); serviceTimes.put(id(a), new AtomicLong(1L)); serviceTimes.put(id(b), new AtomicLong(1L)); serviceTimes.put(id(c), new AtomicLong(3L)); Comparator<MavenProject> cmp = ProjectComparator.create0(dp, serviceTimes, ProjectComparator::id); PausibleProjectExecutorService executor = new PausibleProjectExecutorService(1, cmp); final List<MavenProject> executed = new ArrayList<>(); class TestProjectRunnable implements ProjectRunnable { private final MavenProject project; TestProjectRunnable(MavenProject project) { this.project = project; } @Override public void run() { executed.add(project); } @Override public MavenProject getProject() { return project; } } // the executor has single work thread and is paused // first task execution is blocked because the executor is paused // the subsequent tasks are queued and thus queue order can be asserted // this one gets stuck on the worker thread executor.submitAll(Collections.singleton(new TestProjectRunnable(a))); // these are queued and ordered executor.submitAll(Arrays.asList(new TestProjectRunnable(a), new TestProjectRunnable(b), new TestProjectRunnable(c))); executor.resume(); executor.awaitShutdown(); assertEquals(Arrays.asList(a, c, a, b), executed); } }
STACK_EDU
Can't connect to tunnels or uninstall VS Code Insiders Does this issue occur when all extensions are disabled?: Yes/No VS Code Version: Version: 1.87.0-insider (user setup) Commit: ec291c126878742ad640055ce604a58129cd088c Date: 2024-02-05T05:46:55.122Z Electron: 27.2.3 ElectronBuildId: 26495564 Chromium: 118.0.5993.159 Node.js: 18.17.1 V8: <IP_ADDRESS>-electron.0 OS: Windows_NT x64 10.0.25987 OS Version: Windows 11 Insiders Steps to Reproduce: My issue seems to be 2-fold; if I should file separate issues, please let me know, more than happy to. @connor4312 assigning to you to start for the tunneling aspect. Part 1: Unable to connect to tunnels I'm on the latest VS Code Insiders and wanted to connect to a remote tunnel. While I was able to turn on tunneling access and start a tunnel both from the VS Code UI and an external terminal, if I actually tried to connect to the tunnel in insiders.vscode.dev, it was stuck spinning. There were no error logs in the terminal. I can't get a screenshot of the terminal with code-insiders tunnel due to part 2 described below. Part 2: Uninstall I've encountered some tunneling strangeness before when I've stress tested stopping / starting / unregistering, so I figured a fresh VS Code Insiders reinstall would help. However, I'm now stuck here: Successfully uninstalled VS Code Stable and Exploration, but Insiders froze (I'm using the Windows Add/Remove Program UI) Tried ending uninstall task and initiating uninstall again, but didn't work Tried just reinstalling anyways VS Code Insiders opens successfully, but external terminal doesn't recognize it and thinks it isn't installed I tried restarting my entire machine and uninstalling again, but got the same exact results I still can't connect to tunnels - insiders.vscode.dev is stuck spinning Is my remaining option to delete the folders mentioned in clean uninstall? Though I'm not sure if that'd affect this flow, and I would've loved to avoid resetting everything if possible. Related: @andreamah encountered an update / uninstall issue last week, though her scenario was a bit different than mine (VS Code bugged out when trying to update). As I continue further testing here: I can still connect to WSL from VS Code Insiders no problem I tried Connect to WSL from the Remote Explorer view and got an error: @bamurtaugh Can you take a look at this for me? My remote tunnel won't open 2024-04-08 17:54:53.394 [info] Session updated: trry-github (github) (service=true) 2024-04-08 17:54:53.395 [info] Session token updated: trry-github (github) 2024-04-08 17:54:53.553 [info] No other tunnel running 2024-04-08 17:54:53.591 [info] Running tunnel CLI 2024-04-08 17:54:53.591 [info] serviceInstall Spawning: d:\Microsoft VS Code\bin\code-tunnel.exe tunnel service install --accept-server-license-terms --log info --name LAPTOP-M2GVJ4MO 2024-04-08 17:54:54.376 [info] [2024-04-08 17:54:54] error failed to lookup tunnel: connection error: error sending request for url (https://asse.rel.tunnels.api.visualstudio.com/tunnels/majestic-fog-3wszfqh?api-version=2023-09-27-preview): error trying to connect: 远程主机强迫关闭了一个现有的连接。 (os error 10054) 2024-04-08 17:54:54.397 [info] serviceInstall exit(26184): + 1 2024-04-08 17:54:54.398 [warning] Failed to install tunnel as a service, starting in session... 2024-04-08 17:54:54.399 [info] Running tunnel CLI 2024-04-08 17:54:54.399 [info] tunnel Spawning: d:\Microsoft VS Code\bin\code-tunnel.exe tunnel --accept-server-license-terms --log info --name LAPTOP-M2GVJ4MO --parent-process-id 18456 2024-04-08 17:54:54.443 [info] * 2024-04-08 17:54:54.443 [info] * Visual Studio Code Server 2024-04-08 17:54:54.443 [info] * 2024-04-08 17:54:54.443 [info] * By using the software, you agree to 2024-04-08 17:54:54.443 [info] * the Visual Studio Code Server License Terms (https://aka.ms/vscode-server-license) and 2024-04-08 17:54:54.443 [info] * the Microsoft Privacy Statement (https://privacy.microsoft.com/en-US/privacystatement). 2024-04-08 17:54:54.443 [info] * 2024-04-08 17:54:55.124 [info] [2024-04-08 17:54:55] error failed to lookup tunnel: connection error: error sending request for url (https://asse.rel.tunnels.api.visualstudio.com/tunnels/majestic-fog-3wszfqh?includePorts=true&tokenScopes=host&api-version=2023-09-27-preview): error trying to connect: 远程主机强迫关闭了一个现有的连接。 (os error 10054) 2024-04-08 17:54:55.142 [info] tunnel exit(13480): + 1 2024-04-08 17:54:55.142 [info] tunnel process terminated Thanks for sharing this @trry-github. I'd love to get @connor4312's insights if it sounds like you're experiencing an issue related to mine, or these should be tracked separately. Discussing in https://github.com/microsoft/vscode-remote-release/issues/9438#issuecomment-2041811463 I think there was maybe some brokeness in your vscode install. Please let me know and capture --verbose CLI logs if you're able to repro this again.
GITHUB_ARCHIVE
Excel formula to calculate the amount of data or cells that contain text or contain certain text in microsoft excel. In the previous excel formula tutorial we have discussed about excel formulas for counting the number of specific words in an excel cell or range. So what if what we want to count is the number of cells or a lot of data? In general, to count the number of cells or calculate the amount of data with certain criteria, the excel functions that we can use are Countif function or Countifs function if the conditions we will apply more than one. Specifically in this excel formula tutorial that we will discuss is counting cells with the condition that the cell contains text both text in general and specific text only. The application of the following formula will be very useful, for example, to calculate the number of cells containing the codes of certain goods, both in whole and in part of the codes of these goods. How to count cells containing text in Excel To count the number of non-empty cells we can take advantage of CountA function. Not empty here means that the contents of the cell can be numbers, date /time format, text, formulas, error values or empty text (“”) generated by a formula. If you intend to count cells that contain text only then use the following excel formula: With the formula above then: - The logical values TRUE and FALSE are not counted as text. - Cells with number values do not count as text even if they are entered in text-formatted cells. - Blank cells that start with a single quote or apostrophe (‘) will be counted as text. Consider the following example: The excel formula used in cell E3 above is: With formulas over numbers, error values, blank cells, date/time as well as logical values are fixed as text. In addition to using the Countif formula as above we can also count the number of cells that contain text by utilizing Sumproduct function in microsoft excel. The general formula is as follows: On this formula, ISTEXT functions used to determine whether the value of a cell in the data range in question is text or not. Here’s an example of its application: Double Strip or double unary ( — ) in the formula as previously explained is used to change the logical value of TRUE/FALSE to the number 1/0 so that it can be calculated arithmetic sum. For more details please read on the following page: Double Unary Operator ( — ) In Excel With the excel formulas above, every cell containing text will be counted, then what if we don’t want to count all cells and only want to count cells that contain text or a specific portion? How to count cells containing specific text in Excel To calculate the amount of data that contains certain text only one of the formulas that can be used is as follows: The above discussion I think is enough explained in pemabahasan excel COUNTIF function. for that I will not explain further. Then what if the text in question is only part of the text. Meaning the cells that are counted are those that contain that particular text, not just cells that contain text data exactly like the criteria in question. For this kind of case we can utilize whilcard character in Countif function like this: For example, as in the following example, the formula used to calculate the amount of data/cells containing the text” BJ ” the excel formula used is: For texts in the form of references, the way they are written is as follows: Examples of its use as in the following picture: If you are more careful it turns out that using this formula, each calculated cell does not distinguish between uppercase and lowercase letters (non case sensitive), so that between text “KS” and “Ks” will be the same. If you want case sensitive what’s the solution? To count cells that contain specific text and are case sensitive (distinguish uppercase and lowercase) use the following excel formula: Consider the following example: C4 is a cell reference that is worth the text” Ks “while A2:A10 is a Range reference that we want to count the number of cells filled with the text”Ks”. FIND function this formula is used to find the position or location of the text in a case sensitive cell. If found, the Find function will generate the sequence number of the text in the form of numbers. IsNumber Function itself is used to determine whether each result of the Find function was a number if yes it will produce a logical value of TRUE and vice versa if not a number will produce a logical value of FALSE. The logical value of TRUE and FALSE is then converted to numeric with double unary operator ( — ) so that the number can be calculated by Sumproduct function. Pretty easy isn’t it? Thus our discussion of the excel formula for count the number of cells that a given text contains. If there are still problems please submit in the comments section below. Please also share this excel tutorial post, maybe one of your friends also needs it. Thanks for visit my blog!
OPCFW_CODE
Satori Video Learning Learn the how to use Satori straight from the experts. The Satori video clips collection provides you with simple and straight forward step-by-step instruction clips, how-to and tips and tricks direct from our in house specialists and subject matter experts. The Satori Microsoft Security Copilot Integration The Satori platform provides you with the ability to integrate with Microsoft Security Copilot. To setup the Microsoft Security Copilot integration with Satori, perform the follwing steps as they appear in the short clip. Satori Test Drive The following two part clip series shows you have to start using Satori. The clips shows you how to connect to a data store, run a query using your data vizualization tool and then viewing the results in Satori. Satori Test Drive Training - Part 1 Learn how to connect a data store to Satori Management Console, run a query in a data vizualiztion tool and view the results in Satori mnagement Console. Then learn how to create a data access rule, dataset and masking profile and then re-run the query with your new access rule and view the secure and protected results. Satori Test Drive Training - Part 2 Learn about just in time data access, data access management requests and Satori Data Portal. Learn how to create groups, add locations to datasets, create user and groups data access rules, access request rules and self service access to data rules. Then , learn how to get your data portal up and running. Navigating the Satori Management Console User Interface Learn how to utilize and customize the differnt views in the Management Console user interfce. Satori Support for ABAC, SSO and SCIM Learn how to setup up SSO with SCIM, Match SSO attributes to match your data and utilize Row-level Security using custom expressions. Flexible Access Rules and Access Request Via the Data Portal Learn how to streamline and utilize your data access request workflow for your orgaizations data users. Satori and Amazon S3 Learn how to secure your amazon S3 data using the Satori Data Security Platform. Satori Data Filtering Learn how to implement row level security and access controls using Satori Data Security Platform. Satori SSO with Azure Learn how to configure the Satori Data Security Platform to integrate with SSO. Satori Azure SSO - Part 1 Satori Azure SSO - Part 2 Satori Azure SSO - Part 3 (Testing the SSO) Satori Azure SSO - Part 4 (User Credentials) Satori Azure SSO - Part 5 (Testing the Satori Data Security Platform) Satori Azure SSO - Part 6 (Setting up SCIM) Satori Azure SSO - Part 7 (Testing ABAC) Satori Access Manager Learn how to leverage the Satori Access Manager to simplifiy and maximaize your data security.
OPCFW_CODE
Aside from the core team, there are two levels of maintainers, described Becoming a Maintainer We always need more level 1 maintainers! The main requirement is that you can show empathy when communicating online. We'll train you as needed on the other specifics. This is a great role if you have limited time, because you can spend as much time as you have without any ongoing responsibilities (unlike level 2) Level 2 maintainers have a much higher responsibility, so usually you will spend time as a level 1 maintainer before moving to level 2. Please DM a core team member (Brandon Bayer) in Discord if you're interested in becoming an official maintainer! Level 1 Maintainers Level 1 maintainers are critical for a healthy Blitz community and project. They take a lot of burden off the core team and level 2 maintainers so they can focus on higher level things with longer term The primary responsibilities of level 1 maintainers are: - Being a friendly, welcoming voice for the Blitz community - Issue triage - Pull request triage - Monitor and answer the Discord help channels - Following up on assigned issues to make sure they are being worked on - Following up on in-progress PRs to make sure they don't get stuck - Community encouragement - Community moderation Level 2 Maintainers Level 2 maintainers are the backbone of the project. They are watchdogs over the code, ensuring code quality, correctness, and security. They also facilitate a rapid pace of progress. The primary responsibilities of level 2 maintainers are: - Code ownership over specific parts of the project - Maintaining and improving the architecture of what they own - Final pull request reviews - Merging pull requests - Tracking and ensuring progress of open pull requests Maintainers may retire from their role at any time without any shame or guilt. Simply let a core team member know! Maintainers are the face of the project and the front-line touch point for the community. Therefore maintainers have the very important responsibility of making people feel welcome, valued, understood, and Please take time to read and understand everything outlined in this guide on building welcoming communities Some especially important points: - Gratitude: immediately express gratitude when someone opens an issue or PR. This takes effort/time and we appreciate it - Responsiveness: during issue/PR triage, even if we can’t do a full review right away, leave a comment thanking them and saying we’ll review - Our goal is to respond to all issues and PRs within 2 days, but ideally within 1 day. - Understanding: it's critical to ensure you understand exactly what someone is saying before you respond. Ask plenty of questions if needed. It's very bad if someone has to reply to your response and say "actually I was asking about X" - In fact, at least one question is almost always required before you can respond appropriately — whether in GitHub or in Discord - We want every single question to get a response, ideally within a day, or 2 at max. But this doesn't mean you have to solve their problem. For example, if they ask about some random library you know nothing about, you can respond saying you aren't familiar with that, but that they could try looking for examples/docs on using it with nextjs, since the integration would the same. - Ensure threads are used as much as possible to keep channels organized. Even if there are a few messages about a topic in the main channel and the issue/question is not resolved, go ahead and create a thread and respond in the thread. - If someone reports something that's likely a bug, kindly ask them to open an issue on Github - If someone requests a feature or change, kindly ask them to open a feature request issue on Github - If you notice some type of DX issue or opportunity for improvement in any conversation, please open an issue on Github for this. Making lots of tiny improvements over the long has compounding returns - If someone asks a question about something that either doesn't have docs or isn't documented well, then take a minute to go add the docs for that. And then paste a link to the new docs you added. If a bug report: - Does it have enough information? Versions? Logs? Some way to reproduce? - Has this already been fixed in a previous release? - Is there already an existing issue for this? If a feature/change request: - Is it clear what the request is and what the benefit will be? - Is this an obvious win for Blitz? Then accept it - If not obvious, then pull in a core team member or level 2 maintainer for more review - Add tags: - Add a - Add a - Add a - Add a good first/second issue tag if appropriate Pull Request Triage - Are the changes covered by tests? - Do the changes look ok? Make sure there's no obvious issues - If applicable, has a PR been opened to update the docs? - Kindly request any changes if needed - Else add a GitHub approval so that level 2 maintainers know it's already had an initial review Final PR Review & Merging (Level 2 maintainers) As a level 2 maintainer, it is your responsibility to make sure broken code and regressions never reach the canary branch. - Ensure the PR'ed code fully works as intended and that there are no regressions in related code - If not fully covered by automated tests, you need to pull down the code locally and manually verify everything (the GitHub CLI helps - During squash & merge: - Change the commit title to be public friendly - this exact text will go in the release notes - Add the commit type in the description, in parenthesis like (patch). Commit types: major - major breaking change minor - minor feature addition patch - patches, bug fixes, perf improvements, etc newapp - changes to the new app template recipe - changes to a Blitz recipe example - change to an example app meta - internal meta change related to the Blitz repo/project
OPCFW_CODE
Let's start with OP, i am not going to go through the whole thread since it mostly dissolves to part of folks pretending it matters, part of folks pretending it's conspiracy and part of folks thinking previous 2 groups are crazy (needless to say, where i land). It seems to be coded to strictly favor Pascal's hack-job async implementation, namely compute preemption, as per nVidia's dx12 "do's"and "dont's". First and foremost, we obviously get totally understanding and informed opinion on Pascal's async implementation. Next, it cites rather thought-out list of dos and don'ts, that actually applies in similar form to every concurrent application. Yes, the "GCN optimized" ones too. Onto the quotes inside, and look, i won't waste much time parsing GPUView output, it's actually getting off-track, but i will question stuff done on the fly Time Spy has a Pre-Emption Packet(black rectangle) in the 3D Queue that shows up every time a compute queue is processed I have heard confusing stuff about what GPUView actually tracks, so i'll attack on both fronts: 1) Where in D3D12 is it written down how to put pre-emption packet into compute queue, huh? 2) If it's not written down, and as such done by GPU itself on the whim (and is as such tracked by GPUView like that), what is the whole fuss about? It's definitely not Time Spy doing the job, but driver deciding pre-emption would be the approach here. Now, some mention excessive usage of fences and barriers, but well... they are needed at times, to avoid disasters. I'd know, that was part of my diploma (but in application to CPUs). Compute queues as a % of total run time: Time Spy: 21.38% Time Spy: 21.38% Yet, Oxide have claimed AotS spends only about 1/3rd of frametime on compute queue? Where da truth at? I bet they also want 1 engine with 1 path to run on all GPUs to make their Benchmark "valid", to them but it makes it invalid to me since it doesn't use each HW to it's maximum potential, be it NV or AMD or some other GPU. Well, i forgot that another half of discussions on the matter were arguing precisely that. Well, what if i told you, that tech demo and benchmark are different things? Anyway, I can say that the logical conclusion from this is that Futuremark's benchmark is BOTCHED and biased, not indicative of DX12 capabilities as it should be, but instead restricting them - thus it has arguably no credibility as a BENCHMARK suite. Quotes about pre-emption Half-correct, but miss the kernel of truth in the other half, could at least waste some time reading 2 paragraphs in Pascal whitepaper, right before pre-emption description, to know what is done, how it's done and why pre-emption improvements matter and where they matter. Logical conclusion from reading the OP and quotes in it is that OP is BOTCHED and biased, and also lacks understanding of how async compute in Pascal works. Any understanding of it, actually.
OPCFW_CODE
Add support for the expanded set of black chords I finished #8 by just adding the basic black chords, and I have put off the expansion to include the full set of 15 "black chords" (including 10 inversions of the original 5 chords): Trainers should add black-key chords to the previously learned nine white chords one at a time, with the method being identical to that employed during the period in which the nine white chords are learned. Therefore, first, trainers add five black chords in without any inversions. When the children can identify 14 chords (i.e., nine white chords plus five black chords) accurately, trainers do not need to add additional chords. Only when a child cannot achieve identification of these 14 chords does a trainer need to add inversions of a chord that a child has trouble identifying. The 15 black chords are the sum of the five black chords and their inversions. For example, if a child cannot identify ‘AC#E,’ the trainer would add ‘C#EA’ and ‘EAC#.’ In this way, the number of chords, or the order of presentation of these 15 black chords, varies with each individual, and trainers must present all the chords appropriate to the needs of a particular child. Perfect identification in the period for black- key notes, defined by 100% accuracy across all essential chords, ensures acquisition of AP for black-key notes. The full expanded set of chords is: A major (Gray): A: A3 C#4 E4 A/C#: C#4 E4 A4 A/E: E4 A4 C#5 D major (Tan): D: D4 F#4 A4 D/F#: F#4 A4 D5 D/A: A4 D5 F#5 E major (Light Green): - E: E4 G#4 B4 - E/G#: G#4 B4 E5 - E/B: B3 E4 G#4 B♭ Major (Light Purple): - B♭: Bb3 D4 F4 - B♭/D: D4 F4 Bb4 - B♭/F: F4 Bb4 D5 E♭ Major (Sky Blue): - E♭: Eb4 G4 Bb4 - E♭/G: G4 Bb4 Eb5 - E♭/B♭: Bb3 Eb4 G4 According to the book, they chose not to assign distinct colors to each of these flags because it would be too tedious to make so many physical flags. Considering this is the web, it should be trivial to assign unique colors to them. The other option I thought would be to treat them as "rainbow" flags, where we assign a stripe pattern based on the notes involved. A quick mock-up of the idea: One problem here might be that it would be a little hard to distinguish between the different inversions, since they would all be just permutations of one another, but maybe that's a good thing? The kids are supposed to be calling out chord names at this point anyway. I think the best way to implement these would be to implement #12, where you would have a checkbox of which chords you want to study and you get to choose how many flags you'd like to see, then it pulls out a random subset of all the chords you want to study and plays one of those chords (so e.g. if you chose to study just the black chords with all their inversions, and asked for 9 options, it would pick a set of 9 chords out of the possible 15, play one of those, then when you click "next" it would choose a different 9 random chords and show you those, then play one of them). The JSfiddle for the mockup is here, with the code saved for posterity below: <div class="flag-holder"> <div class="flag-spacer"></div> <div class="flag-wrapper"> <div class="flag tan"> <div class="chord-notes-container chord-tan"> <div class="note-color note-F-sharp"> <div class="note-wrapper note-F-sharp note-F-sharp4"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">F♯</span> </div> </div> </div> <div class="note-color note-A"> <div class="note-wrapper note-A-sharp note-A4"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">A</span> </div> </div> </div> <div class="note-color note-D"> <div class="note-wrapper note-D note-D5"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">D</span> </div> </div> </div> </div> </div> </div> <div class="flag-wrapper"> <div class="flag gray"> <div class="chord-notes-container chord-gray"> <div class="note-color note-C-sharp"> <div class="note-wrapper note-C-sharp note-C-sharp4"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">C♯</span> </div> </div> </div> <div class="note-color note-E"> <div class="note-wrapper note-C-sharp note-E4"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">E</span> </div> </div> </div> <div class="note-color note-A"> <div class="note-wrapper note-A note-A4"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">A</span> </div> </div> </div> </div> </div> </div> <div class="flag-wrapper"> <div class="flag tan"> <div class="chord-notes-container chord-tan"> <div class="note-color note-A"> <div class="note-wrapper note-A-sharp note-A4"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">A</span> </div> </div> </div> <div class="note-color note-D"> <div class="note-wrapper note-D note-D5"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">D</span> </div> </div> </div> <div class="note-color note-F-sharp"> <div class="note-wrapper note-F-sharp note-F-sharp5"> <div class="note"></div> <div class="note-text"> <span class="note-text-span">F♯</span> </div> </div> </div> </div> </div> </div> <div class="flag-spacer"></div> </div> body { background-color: #212121; } div.flag-spacer { width: 100%; flex-grow: 1; } div.flag-holder { width: 80dvw; height: 80dvh; display: flex; flex-direction: row; flex-wrap: wrap; justify-content: space-around; align-items: stretch; } div.flag-wrapper { width: calc(33% - 0.6em); margin-left: 0.25em; margin-right: 0.25em; display: flex; max-height: 40vh; } div.flag { height: 100%; width: 100%; display: flex; justify-content: space-around; align-items: center; position: relative; border: 1px solid black; } div.flag.gray { background: gray; } div.flag.tan { background: #F0E68C; } div.chord-notes-container { color: #000; font-weight: bold; justify-content: space-around; align-items: center; width: calc(min(100%, 15em)); height: calc(min(10rem, 95%)); } div.chord-notes-container { display: flex; user-select: none; div.note-color { width: 2em; height: 100%; display: flex; flex-grow: 1; align-items: center; justify-content: center; position: relative; border-top: 1px solid; border-bottom: 1px solid; &.note-A { background-color: pink; } &.note-B { background-color: purple; } &.note-C { background-color: red; } &.note-C-sharp { background-color: brown; } &.note-D { background-color: orange; } &.note-E { background-color: yellow; } &.note-F { background-color: green; } &.note-F-sharp { background-color: #87CEFA; } &.note-G { background-color: blue; } // This makes the notes appear diagonal, from bottom left to top right &:nth-child(1) { align-items: flex-end; border-left: 1px solid; } &:nth-child(2) { align-items: center; } &:nth-child(3) { align-items: flex-start; border-right: 1px solid; } div.note { width: 100%; height: 100%; background-color: #333; clip-path: circle(50%); opacity: 0.95; } div.note-text { position: absolute; width: 100%; height: 100%; top: 0; right: 0; color: #fff; display: flex; justify-content: space-around; align-items: center; z-index: 10; font-size: 1.2em; } } } div.chord-notes-container div.note-wrapper { width: 2em; height: 2em; display: flex; flex-direction: row; align-items: stretch; justify-content: center; user-select: none; position: relative; } "I think the best way to implement these would be to implement https://github.com/pganssle/cim/issues/12, where you would have a checkbox of which chords you want to study" I want to add a vote for the chord selection option as it would be nice to focus on the chords in progress vs maintenance (e.g. just the black chords).
GITHUB_ARCHIVE
Capturing the record of research process – Part II So in the last post I got all abstract about what the record of process might require and what it might look like. In this post I want to describe a concrete implementation that could be built with existing tools. What I want to think about is the architecture that is required to capture all of this information and what it might look like. The example I am going to use is very simple. We will take some data and do a curve fit to it. We start with a data file, which we assume we can reference with a URI, and load it up into our package. That’s all, keep it simple. What I hope to start working on in Part III is to build a toy package that would do that and maybe fit some data to a model. I am going to assume that we are using some sort of package that utilizes a command line because that is the most natural way of thinking about generating a log file, but there is no reason why a similar framework can’t be applied to something using a GUI. Our first job is to get our data. This data will naturally be available via a URI, properly annotated and described. In loading the data we will declare it to be of a specific type, in this case something that can be represented as two columns of numbers. So we have created an internal object that contains our data. Assuming we are running some sort of automatic logging program on our command line our logfile will now look something like: > start data_analysis ...Package at: //mycodeversioningsystem.org/myuserid/data_analysis/0.1 ...Date is: 01/01/01 ...Local environment is: Mac OS10.5 > data = load_data(URI) ...connecting to URI ...created two column data object "data" ...pushed "data" to //myrepository.org/myuserid/new_object_id ..."data" aliased to //myrepository.org/myuserid/new_object_id That last couple are important because we want all of our intermediates to be accessible via a URI on the open web. The load_data routine will include the pushing of the newly created object in some useable form to an external repository. Existing services that could provide this functionality include a blog or wiki with an API, a code repository like GitHub, GoogleCode, or SourceForge, an institutional or disciplinary repository, or a service like MyExperiment.org. The key thing is that the repository must then expose the data set in a form which can be readily extracted by the data analysis tool being used. The tool then uses that publicly exposed form (or an internal representation of the same object for offline work). At the same time a script file is being created that if run within the correct version of data_analysis should generate the same results. # Script Record # Package: ////mycodeversioningsystem.org/myuserid/data_analysis/0.1 # User: myuserid # Date: 01/01/01 # System: Mac OS 10.5 data = load_data(URI) The script might well include some system scripting that would attempt to check whether the correct environment (e.g. Python) for the tool is available and to download and start up the tool itself if the script is directly executed from a GUI or command line environment. The script does not care what the new URI created for the data object was because when it is re-run it will create a new one. The Script should run independently of any previous execution of the same workflow. Finally there is the graph. What we have done so far is to take one data object and convert it to a new object which is a version of the original. That is then placed online to generate an accessible URI. We want our graph to assert that //myrepository.org/myuserid/new_object_id is a version of URI (excuse my probably malformed RDF). <data_analysis:generated_from rdf:resource="URI"/> <data_analysis:generated_by_command>load_data</data_analysis:generated_via> <data_analysis:generated_by_version rdf:resource="//mycodeversioningsystem.org/myuserid/data_analysis/0.1> <data_analysis:generated_in_system>Max OS 10.5</data_analysis:generated_in_system> <data_analysis:generated_by rdf:resource="//myuserid.name"/> <data_analysis:generated_on_date dc:date="01/01/01"/> </data_analysis:generated> Now this is obviously a toy example. It is relatively trivial to set up the data analysis package so as to write out these three different types of descriptive files. Each time a step is taken, that step is then described and appended to each of the three descriptions. Things will get more complicated if a process requires multiple inputs or generates multiple outputs but this is only really a question of setting up a vocabulary that makes reasonable sense. In principle multiple steps can be collapsed by combining a script file and the rdf as follows: I don’t know anything much about theoretical computer science but it seems to me that any data analysis package that works through a stepwise process running previously defined commands could be described in this way. And that given that this is how computer programs run that this suggests that any data analysis process can be logged this way. It obviously has to be implemented to write out the files but in many cases this may not even be too hard. Building it in at the beginning is obviously better. The hard part is building vocabularies that make sense locally and are specific enough but are appropriately wired into wider and more general vocabularies. It is obvious that the reference to data_analysis:data_type = “two_column_data” above should probably point to some external vocabulary that describes generic data formats and their representations (in this case probably a Python pickled two column array). It is less obvious where that should be, or whether something appropriate already exists. This then provides a clear set of descriptive files that can be used to characterise a data analysis process. The log file provides a record of exactly what happened, that is reasonably human readable, and can be hacked using regular expressions if desired. There is no reason in principle why this couldn’t be in the form of an XML file with a style sheet appropriate for human readability. The script file provides the concept of what happened as well as the instructions for repeating the process. It could usefully be compared to a plan which would look very similar but might have informative differences. The graph is a record of the relationships between the objects that were generated. It is machine readable and can additionally be used to automate the reproduction of the process, but it is a record of what happened. The graph is immensely powerful because it can be ultimately used to parse across multiple sets of data generated by different versions of the package and even completely different packages used by different people (provided the vocabularies have some common reference). It enables the comparison of analyses carried out in parallel by different people. But what is most powerful about the idea of an rdf based graph file of the process is that it can be automated and completely hidden from the user. The file may be presented to the user in some pleasant and readable form but they need never know they are generating rdf. The process of wiring the dataweb up, and the following process of wiring up the web of things in experimental science, will rest on having the connections captured from and not created by the user. This approach seems to provide a way towards making that happen. What does this tell us about what a data analysis tool should look like? Well ideally it will be open source, but at a minimum there must be a set of versions that can be referenced. Ideally these versions would be available on an appropriate code repository configured to enable an automatic download. They must provide, at a minimum a log file, and preferably both script and graph versions of this log (in principle the script can be derived from either of the other two which can be derived from each other, the log and graph can’t be derived from the script). The local vocabulary must be available online and should preferably be well wired into the wider data web. The majority of this should be trivial to implement for most command line driven tools and not terribly difficult for GUI driven tools. The more complicated aspects lie in the pushing out of intermediate objects and the finalized logs onto appropriate online repositories. A range of currently available services could play these roles, from code repositories such as Sourceforge and Github, through to the internet archive, and data and process repositories such as MyExperiment.org and Talis Connected Commons, or to locally provided repositories. Many of these have sensible APIs and/or REST interfaces that should make this relatively easy. For new analysis tools this shouldn’t be terribly difficult to implement. Implementing it in existing tools could be more challenging but not impossible. It’s a question of will rather than severe technical barriers as far as I can see. I am going to start trying to implement some of this in a toy data fitting package in Python, which will be hosted at Github, as soon as I get some specific feedback on just how bad that RDF is…
OPCFW_CODE
Re: F14 Broadcom wireless problem Okay. What bit rate is being reported by iwconfig? Just post the whole thing. P.S.: On second thought, I'm not sure if it really matters what iwconfig says at any given moment. First, I never bother with system-config-network (the old Network Configuration utility where we all used to configure wireless interfaces manually). I leave that thing alone nowdays, and it's set at auto by default for the rate which gets negotiated to a satisfactory rate for me. Its text config files (*/ifcfg-wlan0), which I also never edit nowadays, usually have only about three default lines and rate is not one. Next, when I look at the rate in iwconfig, do a an Internet connection speed test (speedtest.net), and look at iwconfig again, there seems to be no correlation between what iwconfig says and the speed test results. For example, I manually entered the rate as 1 with iwconfig wlan0 rate 1M, confirmed the entered rate with iwconfig, and ran the speed test. The result was normal for me (4.5 Mbs up). I have AT&T Extreme 6.0 DSL, but I am hell-and-gone from the location of the card to which I am connected at the telephone company. I typically get up speeds of 4.5 to 6 Mbs depending on who-knows-what. Anyway, iwconfig at that point said the rate was 5.5M. Then I entered the rate of 54M with iwconfig, confirmed it, ran the speed test, and got 1.38 Mbs up. Re-entered auto, repeated the speed test, and still got 1.43 Mbs up. Checked iwconfig again and it said the rate was at 48M. I did this several times, over and over, different rates with iwconfig and system-config-network. Never saw a correlation to the speedtests. Now my system-config-network config files (*/ifcfg-wlan0) have a bunch of new lines due to fooling with system-config-network. Rate is at auto. I returned everything to ante-experiment, rebooted, retested, and got 4.01 Mbs up, and iwconfig said the rate was 1M. So I don't really know if what iwconfig is saying matters when NetworkManager is controlling the interface. To anyone: Is there a way to control the interface connection rate when NetworkManager is controlling the interface? Or is that strictly something that gets negotiated to the best level? P.P.S.: If you never get anywhere with this speed issue, then you certainly can try other drivers or driver methods or firmware. No promises. I generally trust the kernel to load the appropriate module for detected hardware. And when it loads b43legacy, the linuxwireless.org (home of b43/b43legacy) documentation says to install version 3 firmware. But nothing would stop you from trying the OpenFWWF firmware that comes with Fedora nowadays. It is advertised to support the BCM4306 chipset. You also can try ndiswrapper with your Windows driver (W2K or XP, not Vista or W7). I did that with my BCM4306 cards in the days of Fedora 4 or 5 before I learned how to use NetworkManager. It worked fine. But don't waste time with broadcom-wl. It is not known to work with BCM4306. P.P.S.: Try a different router channel. You never know.
OPCFW_CODE
I've been searching for documentation for contributing to plone in the site and it seems to be missing. Also clicking the Contribute button leads to this: https://docs.plone.org/develop/coredev/docs/None Further more, I've wanted the Plone Contributor Agreement but then the link provided in the docs (https://docs.plone.org/develop/coredev/docs/contributors_agreement_explained.html) was broken and leads to nowhere. These are already fixed on GitHub, but not yet deployed to docs.plone.org. I like your level of engagement with the community I hope that I am not raising a taboo issue, but has the possibility to allow comments on docs.plone.org been considered ? At some point of time this was possible on the MS sites and it was pretty useful to get corrections to mistakes or more precise indications, then it seems that they lost interest to police it and it was just stopped and now the doc is often outdated or plain wrong. That's a matter of resources of course. In the footer of every page in the docs: About This Documentation: These Docs On GitHub Although the link to Contribute is broken (I think it's been reported, but I'm too lazy to check in the issue tracker), it is clear that the intent is to allow people to contribute to the documentation. I don't think MS allows people to contribute to their docs in this way. I think it is better way for moderating contributions than allowing comments, except that it lacks immediacy. I also recall the lack of immediacy is another thing being worked on through automation. As I said, it used to be possible on some MS sites, but not anymore. But it was good while it lasted. Probably too much critical posts. And Github is not for everyone, it's a garden for programmers, not for normal tech people. Oh alright @svx I mentioned the Plone Contributor Agreement thing specifically here cause, I think new developers will have a hard time contributing for they have no access to the Agreement easily. Plus I couldn't find documentation for newbies contributing, there's no best practices or quick guide for it. And yes the footer links are broken - how frequently are being deployed from git to docs.plone.org ? @sunew Thank you! I didn't see it the first time I checked and few links I found in the community were broken too so I assumed it wasn't there. My bad. Thanks again! Managing this stuff over decade(s) is hard. Thank you for alerting us to breakage in the developer entry funnel. I suppose the expectations of incoming developers have also risen as the world keeps moving on. Time for a documentation sprint? @svx is the updating process still blocked? What do you thing when we can update the public docs? I thing there is quite a lot of doc spending. For example the bobtemplates.plone docs are still outdated and some important things are missing or just wrong if you use the newer version. How can we help to make that happen? @MrTango a new release is planned for end of the week ! BTW, Does someone know how to get notification on new releases in general ? What I mean is, if we would get a notification via a hook or mail or whatever that there are new releases it would help to update the docs faster. We would need that for releases to pypi and for stuff which is not released to pypi like Ansible roles/playbooks. If we use a unified GitHub workflow, like with how we name releases on commits we could even write a small scipt which would check certain repos via GitHub API. @svx I prefer builtin approaches and one of them is to add a "releases.atom" sufix to the repository url, for example, https://github.com/plone/bobtemplates.plone/releases.atom. There's a https://github.com/plone/bobtemplates.plone/tags.atom as well. If you want other types of notification or automation, I suggest searching google, there are a bunch of services (like https://about.sibbell.com/) that can help you with that. Yes we definitely need a documentation sprint to bring the docs in theright order and make them accessable. My main concers currently are: - it's hard to browse to the right place. Is it in extending or is it in developping and so on. We should put the original in developing, if it's important for developers, which is the case for templates for example. It could be also included in other areas as well. That shouln't be that hard. I'm just bringing someone into the Plone development and I must admit, to get into Plone just with the docs is still painful. We have to fix this as soon as possible. We will have a sprint after the Plone Tagung in Berlin in March. Does it makes sense for someone that we set the focus on Documentation? @svx are you comming too? @jensens, @thet and @gforcada would you join? We don't need many people, just some who also have a good knowlege of the stack. Some people who are starters would also be nice, to get some feedback. We should probably collect some main issues in a milestone, to fix during the sprint. I've just gotten a hang of how Plone works and I'm quite new to the community. What you pointed out is true, it's a bit of effort getting into Plone with just the docs for any new developer. As a starter, I'd love to give feedback, suggestions and also contribute in any way I can! @MrTango @MrTango there already various open issues which are tackling lots of the issues, for example restructuring some parts and removing external and out the content under it into the right places. Also creating a better overview (landing page) which should make it easier. Etc, etc, etc But this is way more work and a bit more complicated as it looks on the first glimpse. It is not done with 'just' moving the content, this goes together with fixing links, the build scripts, etc, etc. Same goes for removing the READMEs of docs we fetch. I am not coming to Berlin but @polyester is. Still I would prefer to organise a own docs sprint for that, at least 3 days (long weekend) for example or even a week, depending on what ppl prefer. We could do that for example in Amsterdam, asap. Then maybe find date and go for it. I want to help and I think is really important to fix this. Otherwise there will be no new developer and the community will just shrink. It's important that we find a way, to solve this with more hands than yours to improve the situation as son as possible. A sprint could be good to get some people more involved. The keypoints are: - every body should be able to easy contribute to the docs, like fixing things and improve descriptions, links and errors - the structure should be consistant for every audience (editor, integrator, developer) - the search must work well This is possible since years, we have docs which are explaining how to do that, including style-guides, reST-guides, issues tagged for new-commer ets. There is for sure like always room for improvements but we have all this. BUT, yeah sorry a but people have also willing to read and follow these. We have these for a reason :). This is really depends what you mean by this, this is not always possible because of different content, tone, audience, etc. For sure it should be consistent in terms of style and according to the audience As many times already explained, one of the main reasons why the search is not working really perfect and messing up you search is not because of technical reasons. It has 'simply' to do with the quality of the content, order, usage of header and written word. You can find that also in my last talk I gave in Barcelona. As you know we are also including lots of 'external' docs, for example also docs of bobtemplates.plone (not meant personal, no offense ) It would be really great if people who are involved and active in such try to improve these docs. This also would help the all over quality of docs.plone.org a lot !!
OPCFW_CODE
Roblox checkpoints are a simple way of ensuring that players cannot progress through your game without completing certain tasks first. They are especially useful for games with multiple levels or stages, where you want to make sure that players have completed the previous level before they can move on. To create a checkpoint in Roblox, you need to place a checkpoint object in your game. This is a simple cube with a few properties that you can change to suit your needs. The first property is How To Make A Roblox Checkpoint There is no one definitive way to make a Roblox checkpoint. However, there are a few general tips that can help you get started. First, you will need to create a new place and add some basic objects to it. These could include walls, floors, and stairs. Once the environment is set up, you can add your checkpoint object. This could be a simple door or platform that players can stand on to activate the checkpoint. You will also need to create a script -Roblox Studio – imagination – basic building skills – a computer – internet access - Now, set the properties of the checkpoint to control its behavior - First, create a new place and add a checkpoint object to the scene - Next, set the position of the checkpoint to where you want it in your game -A checkpoint is a place in a game where players can save their progress so that they can return to the checkpoint later if they die or exit the game. -There are many ways to make a checkpoint in Roblox. One way is to use a script to create a checkpoint object that players can interact with to save their game. Another way is to use game objects such as doors or hats to create checkpoints. Frequently Asked Questions How Do You Make Checkpoints Save On Roblox? There is no one definitive answer to this question. Some players save their game manually by pressing the ‘save’ button on their keyboard, while others rely on checkpoints to automatically save their progress. If you’re looking to make checkpoints work more effectively for you, it’s worth experimenting with both methods to see which one works best for you. How Do You Make A Stage Leaderboard On Roblox? To make a stage leaderboard on Roblox, you need to use the leaderboard script. This script will allow players to rank themselves on a stage. How Do You Make A Stage On Roblox? There are many different ways to make a stage on Roblox. One way is to use the Stage Editor to create a custom stage. Another way is to use the in-game level editor to create a stage. A checkpoint can be made on Roblox by using the “Checkpoint” game setting. This will allow players to teleport back to the checkpoint when they die.
OPCFW_CODE
Tuesday, March 19, 2013 Sunday, May 6, 2012 All you want to know if you are working on a new project that should follow HTML5 best practices ? This post tries to answer some of these questions and attempts to try and get started with HTML5 in a flash, using two of the most popular bootstrap frameworks which help you to get started with a robust codebase. Why use it ? Simple, these frameworks contains best practices around front-end development using HTML/JSS/CSS and kick starts you to concentrate on content, rather than developing a template for your site. HTML5 Boilerplate – H5BP , already at 3.0 - Cross-browser compatible using Polyfills/Fallbacks. - HTML5 Feature Detection using Modernizr, YepNode - CSS3 Media Queries - Mobile browser optimizations - Download complete, stripped & customized versions - CDN hosted jQuery with local failback ( shown below ) As you can see, first we try to load jQuery using Google CDN. In the next line, we check the existence using window.jQuery, which will evaluate to true if the library has been loaded successfully, else we load it statically. - Also contains some server site configurations like .htaccess file for caching, cross-domain XHR, gzipping, 404 error pages, robots.txt etc. - Contains ANT build scripts to integrate easily with your project build system - Responsive 12-grid column - Custom jQuery plugins for Carousel, Popovers, Accordion, Autocomplete, modals etc. - Cross-browser compatible and also includes media queries for smartphones and tablets too - Includes many components like Button groups, Button Dropdowns, Navigation components, alerts and progress bars etc - Contains extensive out of the box styling for your HTML markup, for elements like Forms, Tables etc. Building a bordered, shaded, alternate colored table was never so easy. - The entire framework is built on LESS. The codebase contains several .less files (containing different functionalities) which are then compiled into CSS. - Provides several styling as LESS mixings which can be found in mixings.less which you can easily use in your code, or extend them as well. Both are superb HTML5 bootstrap frameworks, under continuous development which follow best practices developed over the years to build a fast, secure and future-proof site. But in my opinion, H5BP is something that even designers can play with using as a template, they can simply open their favourite design tool and add content. While using Twitter Bootstrap is more developer oriented and requires you to invest some time in understanding the various classes and functionalities it offers. But then, that’s the case with every good thing around you. Thursday, March 29, 2012 Although jQuery provides us with a few basic effects like slideDown/Up, fadeIn/Out etc. and a pretty powerful animate function to create animation/effects on any numeric CSS property, a lot of creative people have written some amazing effects/components that will take your breath away. While we do not plan to reinvent the wheel here but down the road, you may also need to create your own effect and would like to access it like Surprisingly, being able to do so is not as difficult as it might seem at first, and can be achieved by adding a new function property to jQuery.fn object as below: The above function simply extends the jQuery object by adding a new method in the jQuery prototype ( jQuery.fn object ). Before diving deep into the implementation, it is important to understand the concept of prototypal inheritance in JS and reading this article will open your eyes for sure. Coming back, a naive implementation looks like Here, we delegate the actual work to animate method and toggle the height & opacity of the selected DOM element for the given speed if any, else a default of 400ms is used. It also allows us to specify a callback function to be called after the animation/effect is complete. To accomplish that, we first check whether the passed variable (fn) is actually a function using isFunction and then call the method passing this as the context. Another important thing to note is that we are returning the jQuery object back from our function so that you can take advantage of chaining like: So this wraps up a short and simple post on how to create custom effects by extending jQuery object. If you have any questions or comments, please leave them as comments. Wednesday, February 15, 2012 ServletConfig is a helper object which we can use in our servlets to get configuration of a Servlet. The configuration can include things like Servlet name, its context (described below) and init-parameters. For example, consider the following servlet declaration in a Deployment Descriptor (DD) So instead of hardcoding the feedback email in the servlet responsible for sending the feedback mail, we can simple configure it in the DD and the servlet can pick the value from using ServletConfig as shown below: The container is responsible for creating a ServletConfig object per servlet and pass to it after it has been initialized so it would be good to note that we can not use this object until the init() method i.e. we cannot access these init params inside the servlet’s constructor. ServletContext, on the other hand acts as a global helper and is not created once per servlet, rather it is created once per application (when an application is deployed) and is normally used to configure application level parameters amongst other things. A typical example would look like: and would be accessed as Now these params can be accessed across all the servlets and JSPs that the application deploys. There are a bunch of other useful information that you can grab from a ServletContext and all of that can be seen here. What if I want to do some special handling when my web application is initialized (or deployed) or destroyed/removed ? The servlet spec gives you a way to hook into the lifecycle of your web application by implementing a ServletContextListener which can listen for these events and act accordingly. Following is an example that picks up the datasource configuration from DD and creates a corresponding DataSource object and save it in ServletContext so that it can be easily retrieved by any servlet wishing to perform a database operation:The actual code of creating a DataSource object from the string is being left out as it does not serve any purpose for this example. This listener will have to be configured in the deployment descriptor so that it can listen and respond to the lifecycle events as: There are many other types of listeners that we can write to handle different use cases viz. ServletRequestListener, ServletRequestAttributeListener, ServletContextAttributeListener to name a few but we will see them in action some other day. With Java EE Annotations, we really don’t have to configure a servlet in DD (as shown in the examples above). They are just there to make things easy to comprehend.
OPCFW_CODE
The latest release of the Linux distro now called“Depth OS” deserves serious consideration. It is fast, reliable and innovative, with an impressive homegrown desktop design dubbed “Deepin Desktop Environment,” or DDE. Depth OS has a bit of an identity problem. It’s not well known outside Asia and Europe, but that’s not the major cause of confusion. The problem is that the open source community that developed the distro seems to have a difficult time deciding what to call it. It has had several names, including “Hiweed GNU/Linux,” “Linux Deepin,” “Deepin” and now “Depth OS.” It seems that many of the community support staff never got the memo. Most of the website and the OS itself still are labeled as “Deepin.” When the community released the latest version last month, it was called “Deepin version 15.” As of this writing, it still was. A half-hearted name-change process is ongoing. The website URL at this writing still was pointing to the “Deepin” name. Most of the branding on the pages continued to call the distro “Deepin 15.” Even the download page identified the latest version as “Depth OS,” as did the current ISO when installed. At one point early in the announcement process for the latest release, a blog notice said that Deepin 15 would be called “Depth OS.” The website sometimes shows the new name but mostly keeps the older name prominent. What Is It? Linux Picks and Pans last looked at Deepin in 2014. The distro, by whatever name we use, still is a Linux distribution that offers an elegant and user-friendly stable operating system. Deepin/Depth OS is both pretty and very workable. The latest release pays more attention to internationalization. That was something I said was needed in thefirst review of the distro. It now supports 30 languages. Another key change is a cooperative relationship with Intel and the Chinese open source community to use theCrosswalk Project to migrate existing Web applications to Depth OS. That enriches the diversity of applications and improves the user experience. The Deepin/Depth OS distro remains something totally new. It was an Ubuntu-based distribution built around its own desktop environment based on the Qt 5 toolkit. With the latest release, Qt powers the desktop to replace the previous HTML5 + WebKit implementation. Mutter is now used as the window manager. Another change is the Linux 4.2 kernel. Systemd has replaced Upstart, Bash is now the default shell rather than Zsh, and GCC 5.3.1 is the base compiler. Version 15 switches from its roots as an Ubuntu-based distro to the Debian Unstable Channel. For most users, that presents little or no consequence. Nothing visible in the desktop design or the software repository resembles any connection to the Ubuntu infrastructure. The distro’s developers built their own ecosystem of homegrown applications. Applications such as the Deepin Software Center, Deepin Music Player and Deepin Media Player contribute to an operating system tailored to the average user. The Deepin desktop design is snazzy yet simple to use. The docking bar resembles that of Mac OS X, but that is as far as the Mac comparison goes. Much like a rolling update, the system software automatically updates as the repository gets new updates. The Control Panel slides onto the screen and is very intuitive. All settings are neatly categorized, and switching from one to another takes one click. A small display window shows an image of the screen with the current settings displayed. Make a change and see the results. Another nice touch is the right-click menu anywhere on the desktop. That provides quick access to creating a new folder, a new document type, a sort option, display options, corner navigation and the personalize panel. More UI Goodies The docking bar at the bottom of the screen shows 14 launcher icons, but I could not find any settings to add or remove programs from the launcher. When you open a program, its icon joins the dock bar and is displayed with a light indicator under it. The first icon on the left is the main default launcher. The other icons launch a few default apps such as the Software Center, Deepin Music Player, Google Chrome Web browser and the Deepin File Manager. The dock also holds a few icons that launch system controls for sound, connectivity, power management, calendar, trash and shutdown options. The corner navigation button is sweet indeed. It lets you set the default action for pushing the mouse pointer into each corner of the screen. The choices for each corner are Launcher, Control Center, All Windows, Desktop or None. The DDE approach for handling workspace switching is one of the best offered in any desktop environment. It is fast, convenient and intuitive. It offers several methods to suit user preferences. For instance, you can use the graphical method or keyboard shortcuts. Next to the menu launcher on the dock is a launcher for multitasking. Click it to see the virtual workspaces. The display lets you click to add or remove workspaces. Click the mini-window to move to that workspace, or click the edge of the previous workspace on the left screen edge or the next workspace on the right screen edge. You can also right click on the top window border to move a window to another workspace. Click the multitasking button in the dock and drag a running program from one workspace to another. All of the Deepin-designed apps have a unified, simplistic user interface. This common design helps reinforce the ease-of-use philosophy built into the Deepin OS. The Software Center is seeded fairly well with many of the commonly used Linux packages. The Synaptic package manager is not preinstalled. You can add it from the Software Center to get a broader range of available applications. The only office suite included in the default packages is Kingsoft Office, a cross-platform product whose Linux version is an alpha 20 release. It is an impressive office suite that has been in various alpha stages for the last four or five years. The ISO is an installable-only DVD image available for 32-bit and 64-bit architecture, so you will have to install it within a virtual machine to check it out before doing a normal hard drive installation. Hint: Don’t download the ISO from the Deepin website. It took me more than 10 hours. The ISO file arrived corrupted several times. Instead, look for the tiny row of icons near the large download button. Click on the icon to download from SourceForge. The 1.8-GB ISO download took nearly three hours but was usable. Recommended system requirements include an Intel Pentium IV 2-GHz processor or better, at least 1 GB of RAM (2 GB recommended for best performance), and at least 10 GB of free disk space. Also recommended are a modern video graphics card from Intel, AMD or Nvidia; an AC97, Sound Blaster or HDA sound card; and a CD/DVD-ROM devices or USB port. Depth OS (aka “Deepin Linux”) is a very impressive distro. It brings useful innovations to the desktop interface. If you want a refreshing approach that is less like everything else in the array of Linux distros, dive into Depth OS. Want to Suggest a Review? Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know? Pleaseemail your ideas to me, and I’ll consider them for a future Linux Picks and Pans column. And use the Talkback feature below to add your comments!
OPCFW_CODE
||Home||Switchboard||Unix Administration||Red Hat||TCP/IP Networks||Neoliberalism||Toxic Managers| |(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and bastardization of classic Unix| |Performance tuning||Recommended Links||Performance Monitoring||sar| |Disk subsystem tuning||Linux Kernel Tuning||Linux Virtual Memory Subsystem Tuning||TCP performance tuning||NFS performance tuning||strace| |Linux performance bottlenecks||VMware||Virtualization||Humor||Etc| Even with sufficient memory, most database servers will perform large amounts of disk I/O to bring data records into memory and flush modified data to disk. Therefore, it is important to configure sufficient numbers of disk drives to match the CPU processing power being used. In general, a minimum of 10 high-speed disk drives is required for each Xeon processor. Optimal configurations can require more than 50 10K-RPM disk drives per Xeon CPU. With most database applications, more drives equals greater performance. The main factors affecting performance include: From this simple rule stem the following recommendations: What partition layout to choose? In the Linux community, the partitioning of a disk subsystem engenders vast discussion. The partitioning layout of a disk subsystem is often dictated by application needs, systems management considerations, and personal liking, not performance. The partition layout will therefore be given in most cases. The only suggestion we want to give here is to use a swap partition. Swap partitions, as opposed to swap files, have a performance benefit because there is no overhead of a file system. Ideally swap partition should be on a separate disk drive (preferably solid state). Large swap partitions can be split into two using different drives for each half. What file system to use? The installation of RHEL 5.6 limits the choice of file systems to: ext2, ext3 and ext4. The Red Hat Enterprise Linux 5.6 installer defaults to ext3 and this is acceptable in most cases, but we encourage you to consider using ext4. To allow anaconda to manipulate ext4 filesystems, you need to start 5.6 installer using the "ext4" parameter on the command line: Smaller file systems that have no focus on integrity (for example, a Web server cluster) and systems with a strict need for performance (high-performance computing environments) can benefit from the performance of the ext2 file system. ext2 does not have the overhead of journaling, and while ext3 andnext4 has undergone tremendous improvements, there still is a difference. Also note that ext2 file systems can be upgraded easily. On Suse ReiserFS can be used for applications that use many small files such as or other applications that use synchronous I/O. When using Ext3 with many files in one directory, consider enabling btree support: # mkfs.ext3 -O dir_index When using Ext3 with multiple threads appending to files in the same directory, consider turning preallocation on: # mount -o reservation You can benefit from using dedicated logging devices: mkreiserfs -j /dev/xxx -s 8193 /dev/xxy reiserfstune –journal-new-device /dev/xxx -s 8193 mke2fs -O journal_dev /dev/xxx mke2fs -j -J device=/dev/xxx,size=8193 /dev/xxy tune2fs -J device=/dev/xxx,size=8193 /dev/xxy File System Tuning Split file systems based on data access patterns Consider disabling atime updates on files and directories # mount -o noatime,nodiratime Per-request service deadline Blocker Layer Tunables Block read ahead buffer Default is 128. Increase to 512 for fast storage (SCSI disks or RAID). May speed up streaming reads a lot. Number of requests Default is 128. Increase to 256 with CFQ scheduler for fast storage. Increases throughput at minor latency expense. The value stored in /proc/sys/vm/dirty_background_ratio defines at what percentage of main memory the pdflush daemon should write data out to the disk. If larger flushes are desired then increasing the default value of 10% to a larger value will cause less frequent flushes. As in the example above the value can be changed to 25 as shown in # sysctl -w vm.dirty_background_ratio=25 The default value 10 means that data will be written into system memory until the file system cache has a size of 10% of the server’s RAM. The ratio at which dirty pages are written to disk can be altered as follows to a setting of 20% of the system memory # sysctl -w vm.dirty_ratio=20 The disk subsystem is often the most important aspect of server performance, and it is usually the most common bottleneck. However, problems can be hidden by other factors, such as lack of memory. Applications are considered to be I/O-bound when CPU cycles are wasted simply waiting for I/O tasks to finish. The most common disk bottleneck is having too few disks. Most disk configurations are based on capacity requirements, not performance. The least expensive solution is to purchase the smallest number of the largest-capacity disks possible. However, this places more user data on each disk, causing greater I/O rates to the physical disk and allowing disk bottlenecks to occur. The second most common problem is having too many logical disks on the same array, which increases seek time and greatly lowers performance. We discuss the disk subsystem in greater detail in 15.9, "Tuning the file system" on page 480. As with the other components of the Linux system we discussed, disk metrics are important when identifying performance bottlenecks. Some of the values that may point to a disk bottleneck are: Iowait -- This is the time the CPU spends waiting for an I/O to occur. Average queue length -- This is the number of outstanding I/O requests. In general, when the value is higher than 2 to 3,it means there may be a disk I/O bottleneck. This applies to systems with a single disk. In disk arrays, however, the queue length may be different and not necessarily indicate a Linux bottleneck; it may be under the control of the I/O controller using cache or other methods. Average wait -- This is a measurement of the average time in ms that it takes for an I/O request to be serviced. The wait time consists of the actual I/O operation and the time it waits in the I/O queue. Transfers per second -- This refers to the number of I/O operations per second (reads and writes). Blocks read/write per second -- This refers to the reads/writes per second in blocks of 512 bytes in the kernel 2.6 style. Google matched content Tuning IBM System x Servers for Performance Tuning Red Hat Enterprise Linux on IBM Eserver xSeries Servers, IBM Redpaper July 2005 Last modified: March 12, 2019
OPCFW_CODE
These -m options are defined for Adapteva Epiphany: Dont allocate any register in the range r63. That allows code to run on hardware variants that lack these registers. Preferentially allocate registers that allow short instruction generation. This can result in increased instruction count, so this may either reduce or increase overall code size. Set the cost of branches to roughly num simple instructions. This cost is only a heuristic and is not guaranteed to produce consistent results across releases. Enable the generation of conditional moves. Emit num NOPs before every other generated instruction. For single-precision floating-point comparisons, emit an fsub instruction and test the flags. This is faster than a software comparison, but can get incorrect results in the presence of NaNs, or when two different small numbers are compared such that their difference is calculated as zero. The default is -msoft-cmpsf, which uses slower, but IEEE-compliant, software comparisons. Set the offset between the top of the stack and the stack pointer. E.g., a value of 8 means that the eight bytes in the range sp+0sp+7 can be used by leaf functions without stack allocation. Values other than 8 or 16 are untested and unlikely to work. Note also that this option changes the ABI; compiling a program with a different stack offset than the libraries have been compiled with generally does not work. This option can be useful if you want to evaluate if a different stack offset would give you better code, but to actually use a different stack offset to build working programs, it is recommended to configure the toolchain with the appropriate --with-stack-offset=num option. Make the scheduler assume that the rounding mode has been set to truncating. The default is -mround-nearest. If not otherwise specified by an attribute, assume all calls might be beyond the offset range of the bl instructions, and therefore load the function address into a register before performing a (otherwise direct) call. This is the default. If not otherwise specified by an attribute, assume all direct calls are in the range of the bl instructions, so use these instructions for direct calls. The default is -mlong-calls. Assume addresses can be loaded as 16-bit unsigned values. This does not apply to function addresses for which -mlong-calls semantics are in effect. Set the prevailing mode of the floating-point unit. This determines the floating-point mode that is provided and expected at function call and return time. Making this mode match the mode you predominantly need at function start can make your programs smaller and faster by avoiding unnecessary mode switches. mode can be set to one the following values: Any mode at function entry is valid, and retained or restored when the function returns, and when it calls other functions. This mode is useful for compiling libraries or other compilation units you might want to incorporate into different programs with different prevailing FPU modes, and the convenience of being able to use a single object file outweighs the size and speed overhead for any extra mode switching that might be needed, compared with what would be needed with a more specific choice of prevailing FPU mode. This is the mode used for floating-point calculations with truncating (i.e. round towards zero) rounding mode. That includes conversion from floating point to integer. This is the mode used for floating-point calculations with round-to-nearest-or-even rounding mode. This is the mode used to perform integer calculations in the FPU, e.g. integer multiply, or integer multiply-and-accumulate. The default is -mfp-mode=caller Code generation tweaks that disable, respectively, splitting of 32-bit loads, generation of post-increment addresses, and generation of post-modify addresses. The defaults are msplit-lohi, -mpost-inc, and -mpost-modify. Change the preferred SIMD mode to SImode. The default is -mvect-double, which uses DImode as preferred SIMD mode. The maximum alignment for SIMD vector mode types. num may be 4 or 8. The default is 8. Note that this is an ABI change, even though many library function interfaces are unaffected if they dont use SIMD vector modes in places that affect size and/or alignment of relevant types. Split vector moves into single word moves before reload. In theory this can give better register allocation, but so far the reverse seems to be generally the case. Specify a register to hold the constant -1, which makes loading small negative constants and certain bitmasks faster. Allowable values for reg are r43 and r63, which specify use of that register as a fixed register, and none, which means that no register is used for this purpose. The default is -m1reg-none.
OPCFW_CODE
A package made for DotKernel 3 to introduce migrations and seeders Open Issues: 0 - php: ^7.1 - robmorgan/phinx: ^0.9.0 - zendframework/zend-console: ^2.6 - zendframework/zend-servicemanager: ^3.3.0 - zfcampus/zf-console: ^1.3 - phpunit/phpunit: ^4.8 - squizlabs/php_codesniffer: ^2.3 Add migrations and seeders to your DotKernel3 project. Some new commands as been added to the php dot command. - make:migration <name> [path] - Make a migration file, which will be used to create new tables and rows in the database - The path is used to pre-fill the namespace, and has to match a path in the config file. - data/database/migrations is the default path if none are supplied - make:seed <name> [path] - Make a seeder, which will be used to seed the database with data. - The path is optional and will default to data/database/seeds. - When supplying your own path, it must match a path from the config file. - migrate [--force|-f] - Migrate the missing migrations, use the --force in your deployment script to avoid the production environment warning. - migrate:reset [--force|-f] [--hard|-h] - Rollback all migrations and reset the database. - Supplying --force will prevent the environment warning in production. - Supplying the --hard flag will drop and re-create the entire schema. - Rollback a single batch of migrations only. - migrate:seed [path] [--force|-f] - Run all seeders, or optionally provide a path to a specific seeder to run. - If a path is provided, please escape it with double-quotes like "Data\Database\Seeder\UserTableSeeder" - Supplying the --force flag will prevent the environment warning in production. - The God command is intended for development and will recreate the schema, re-migrate and re-seed the database. To run any of them simply run php dot <command>. DotKernel will take care of the rest, putting the files in the correct directories etc. Settings can be found in the Installing is extremely easy, all you have to do is run composer require japseyz/dot-migrations, and copy the migrations.php.distfile to the /configfolder and remove the .dist ending. After that is done, you open up Create a folder inside database, and inside this, create two folders; seeds, this is where your migrations and seeds will go. That's it, all you have to do now is run composer dump-autoloadand enjoy access to migrations and seeders, all you have to do is run The migration commands does not show up, what do I do? If you've follow the installation, but no commands show up, try deleting /data/config-cache.php and running php dot again.
OPCFW_CODE
Here begins a series of posts showing how I used F# to write some of the Hands-On Labs in the Windows Azure Training Kit (WATK). WATK is a good way to learn the basics of Windows Azure development, but all the labs and samples are in C#. I want to show what it takes to write the HOLs in F# instead. UPDATE: The code for the first three posts in this series is now on GitHub. Let's jump into the first HOL, entitled Building and Publishing ASP.NET Applications with Windows Azure Web Sites and Visual Studio 2012 (in the WATK folder under Labs\ASPNETAzureWebSitesVS2012). In this lab we create an ASP.NET MVC project in Visual Studio and publish it to an Azure web site two different ways -- first with Visual Studio publishing, then with a git repository. The lab demonstrates how easy it is to publish C# MVC projects to Azure web sites (as opposed to Azure web roles, which I'll cover in a future post). Let's see how easy it is to do with F#. The first thing the lab has us do is create a new MVC 4 project in Visual Studio. The lab gives instructions for doing this in C#, but we will use Daniel Mohl's excellent F# / C# MVC template instead. We go to File - New Project and choose the template from Visual Studio's online gallery. This version of the template offers us three types of project. We will choose Single Page App with Razor and Backbone.js. For this lab we'll omit the Tests project. The template creates a C# web project and an F# library project. The C# project contains no .NET code, but instead contains our views, scripts, stylesheets, configs and other MVC plumbing. All our .NET code, including models and controllers, will be in the F# library project. The template creates a single model class named Contact and two controllers, HomeController and a ContactsController. Thus we don't need to create any new models or controllers to do the WATK lab. We can run the solution now from Visual Studio. We're now ready to publish our MVC application to an Azure web site. First we log on to our Azure dashboard and create an empty web site, following the instructions in Exercise 1, Task 1 of the lab. We'll postpone setting up a database for now, so we skip Exercise 1, Task 2 and go to Task 3. We right click on the C# project and select Publish. In the dialog we import need to import the publishing profile for the Azure web site we just created. We can do that either by following the lab instructions and download a publish profile file, or we can set up Visual Studio to import the profile directly from Azure; instructions for the latter procedure are on Scott Guthrie's blog. Since we haven't set up a database yet, we can leave the Settings pane alone for now. But we can test the deployment in the Preview pane, and if it succeeds, we click Publish. In a minute or two our F#/C# MVC 4 site is published to Azure! So we've seen that the process of publishing an F# / C# MVC 4 project to Azure with Visual Studio is pretty much identical to the C# - only procedure described in the lab. In the next post, we'll look at how well we can publish the F# / C# MVC project using git.
OPCFW_CODE
In my last post I, perhaps controversially, set the constraint that an API should not couple itself or document tightly to URI paths and patterns. This comes as a stark contrast to many of the popular trends within the API space, but the long term benefits to the service over its life far outweigh the complexity and cost of implementation. In this post I would like to discuss an additional constraint which is in part corollary to the last guideline: do not version anything at all in the service. I will first start out by addressing the elephant in the room, this guideline also comes at a stark contrast to popular trends in the API space, as well as the established trend from some of the largest Silicon Valley technology companies. These two guidelines add a great deal of complexity to initial API design and architecture for public APIs. Most of these leading organizations are driven by concerns with speed to market, and therefor do not allow themselves the appropriate design time before beginning to build applications, or are far more concerned with a larger audience’s familiarity with a particular design strategy than creating a better API design. These concerns are certainly important from a business perspective, however it is often incorrectly presented as a technical limitation and guideline, when in fact it is driven almost entirely by business motivations. If the rush for faster minimum viable products is driven by a business concern, what are the technical benefits associated with a hypermedia web API style? The often cited axiom in the hypermedia web API space is WWBD, or what would browser do? This is particularly apt given HTML is probably the most familiar hypermedia format to any user of the internet regardless of their awareness of this fact. Netflix and other continuous delivery champions are famous for deploying code to production hundreds of times per day, however as a user of their services you are never aware of any change in the service platform. The only way this can be done is if you are never aware or tightly coupled to any type of versioning within the service interface. Your browser never knows, or is concerned in any way with, the version of the Netflix software it is querying. This point perfectly addresses the style of CRUD api which includes a version number within its URI pattern, but does not cover all ways to version. The other shortcut often taken during API design is to apply a version to the MIME type itself, and has been demonstrated to be beneficial in the very near term. This solution does manage the version disconnect at the service layer, but leaves un-patched clients unable to fully consume the service until new client versions can be distributed. Even though this strategy is one we are very familiar with as API service integrators, it is extremely consumer hostile as all the service version management responsibility have been dumped on the consumer. Worse still this type of versioning is unique to each integration, greatly increasing the difficulty of a service which aggregates functionality from multiple APIs to create some or all of its responses. This strategy will solve your concerns for versioning, but it comes at a hefty cost to the API’s consumers, and if there is completion in the space, this poor experience could result in the loss of a client or consumer. Both of these reasons taken in isolation or together should be enough to convince a reasonable designer of the importance of removing all versioning from their API. However, there is a more fundamental reason to exclude versioning entirely and it goes back to the very first guideline. Versioning is a solved problem within the HTTP application protocol. I previously discussed the ETag strategy to perform cache control, but this is nothing more than a more specific form of representation versioning. If part of the structure or field of a message representation changes between versions, normal validation processing should handle this change, and the client can update its local representation and model cache for the service from the service itself. If a historical representation of a resource is required for audit or some other need, the memento header exists to handle requesting a resource and representation as it existed at a certain point in time. Clearly this is not be the simplest solution, however it is a far more useful and standard way to version the resources and messages of an API. By adhering to the standard way to perform this action, a sophisticated http consumer can always know the status of any data it holds locally and remain unconcerned about the version of the service running. Furthermore, this sophisticated client can be used to consume other APIs which follow these guidelines, with little or no additional integration effort.
OPCFW_CODE
|| | |||Browse by category| Numeric values in scientific notation can be seriously misinterpreted if the exponent is not visible because the cell is too narrow. Download NumFormat sample. This sample implements a custom edit control that adjusts the number of digits displayed to make the string fit the cell. If the cell is too narrow to display a meaningful value, ellipses are displayed to indicate that the value has been truncated. The FAQ/KB article entitled ''Is there a way to specify the format of cells'' outlines two options for changing how values are displayed. 1. Override GetStyleRowCol() and StoreStyleRowCol(), or 2. Create a custom edit control overriding GetControlText() and SetValue(). I recommend the custom edit control method for our purposes. It has the following advantages: 1. You can also override ValidateString() to disallow any non-numeric characters. 2. You can easily select which cells are given the special format. 3. GetStyleRowCol() and StoreStyleRowCol() do not work with formatted cells -- SetFormat() -- because GXFormatText() applies the formatting after GetStyleRowCol(). Our custom control, CNumEditControl, is derived from CGXEditControl. It has overrides of GetControlText() and SetValue() to implement our custom formatting. It also has an override of Draw(). This override is an exact copy of CGXEditControl::Draw(). It is required so that CGXStatic::Draw() will call CGXDrawingAndFormatting::DrawStatic() with the proper CGXControl* pControl parameter -- i.e. pControl is a CNumEditControl and not a CGXEditControl. CGXDrawingAndFormatting::DrawStatic() calls our GetControlText(). In addition, ValidateString() is overridden to accept only numeric characters. And OnGridCtlColor() is overridden to change the cell's colors when active. To use this custom numeric edit control in your app, add a string resource ID of IDS_CTRL_NUMEDIT to your project. In the grid setup -- OnInitialUpdate() -- register the custom control. // Use custom numeric edit control RegisterControl(IDS_CTRL_NUMEDIT, new CNumEditControl(this, IDS_CTRL_NUMEDIT)); Then use this control to format the cells that you want. For example, SetStyleRange( CGXRange(7,2), CGXStyle() .SetControl(IDS_CTRL_NUMEDIT) // use custom numeric edit control .SetValueType(GX_VT_NUMERIC) .SetFormat(GX_FMT_FLOAT) // scientific notation .SetPlaces(10) // digits to display .SetValue(0.12345678912345e-12) ); I also tried the Set/StoreStyleRowCol() method, but I ran into difficulties. First, I wanted to check the cell's ValueType and Format to see if I should modify the string. But the style passed to StoreStyleRowCol() does not have this information. I had to call ComposeStyleRowCol() to get it. Also, the ValueType seemed to change from GX_VT_NUMERIC to GX_VT_STRING when I edited a value so that the decimal point was removed from the display. So I would only suggest the Set/StoreStyleRowCol() method if you can identify the cells to modify by position (row and/or column) and the cells are not formatted. May 24, 2001
OPCFW_CODE
I'm running Home windows 7 32 bit. I've IIS, Visual Studio 2008, 2010 installed. I'm haunted with this error in SQL Server 2008 that is "Home windows couldn't start the SQL Server (SQLEXPRESS) service on local computer. Error 1053: The service didn't react to the beginning or control request in due timeInch. I've researched and visited many, many, many etc. sites. I've transformed Account permission to Local System Account with no luck. I've uninstalled and reinstalled no luck. I've checked the main harbour in Client Methods which is set 1433. I've added Network Service account towards the Microft SQL Server folder in program files. I cant look into the log file becuase it does not appear in the LOG folder of MSSQL folder. I've transformed registry information from fix articles relating to this problem but no luck from individuals sites. It really is annoying because i installed SQL Server 2005 express earlier and that i got exactly the same error message so for this reason i uninstalled and made the decision to provide SQL Server 2008 Express a try and that i recieve exactly the same error. This is actually slowing down me lower becuase i've developed many of the webpages during my website and today i wish to add functionality towards the site and that i need SQL Server which has truly stopped me from working. Can someone help please? I really should understand this fixed i've attempted exactly what the websites provide in the search engines. Should you require more details ill be happy to provide you with the feedback to obtain this solved the moment posible. Open the big event viewer and search with the logs: Open Home windows Explorer and visit Control PanelAll User Interface ItemsAdministrative Tools Open Event Viewer Expand Home windows Logs Try looking in Application, Security, and System for errors. very good news. i handled to set up SQL Server 2008 Express finally. Things i did was i downloaded Home windows Install Cleanup. However i uninstalled exactly what pertains to SQL Server 2008 from controlpanel>programs>programs featuring such as the sql server vss author, sql server browser. I quickly erased the folders from regedit.exe which folders were the sql server folders in HKEY_LOCAL_MACHINBE/software/microsoft/microsoft sql server,essentially all of the folders that begin with microsoft sql server "xxxx". I additionally erased the Microsoft sql server directory from Hkey_current_user/software/microsoft/microsoft sql server. I quickly erased the microsoft sql server directory from the program files around the system. once each one of these sites were taken off regedit and my pc, Then i finally used Home windows install cleanup and removed something that associated with microsoft sql server 2008 and thats it. Then i installed sql server 2008 Express sucessfully. thanks for the individuals who required time to see my problem and thank you for individuals who provided the feedback.
OPCFW_CODE
Hp 14-g008au driver windows 7 - Alcatel one touch idol 2 driver Hp pavilion dv6 notebook drivers windows 7 64 bit - Taxi driver theme bernard herrmann sheet music Note: All Drivers below are working properly on notebook HP Pavilion dv6 because these all Driver are compatible with HP Pavilion dv6 laptop and windows 7 (64bit. Download the latest software & drivers for your HP Pavilion dvtx Entertainment Notebook PC. Free HP Pavilion dv6 drivers for Windows 7 bit. Found 5 files. Select driver to download. Samsung yp-k3 windows 7 driver Hp pavilion dv6 drivers for windows 7 free download More about pavilion drivers windows free download. rgd Oct 22, , PM. did you try riodel.somee.com? hd m drivers for hp notebook free download of windows · Hp laptops Hp pavilion dv drivers for windows 7 64 bit free download. Windows 7 Drivers for HP Pavilion dvc10us:OS: Windows 7 All versions (bit and bit)HP recommends Windows 7 x64 (bit).Driver-Audio (2):Conexant USB Audio. hello! my laptop is hp pavilion dvse, after formatting my laptop hp media smart quick launch button is not working so how can i fix it? HP Pavilion dvus drivers for Windows 7 (bit edition* & bit edition**):*Most of these drivers are compatible with Windows 7 bit editions (even if they.HP Pavilion dv6 Driver Download: here you can download HP Pavilion dv6 drivers Including sound drivers, network drivers, chipset drivers, bluetooth drivers for window.HP Pavilion dvdx Entertainment Notebook PC HP Pavilion dvdx Entertainment Notebook PC Drivers for Microsoft Windows 7 (bit) HP Pavilion dvI have a laptop hp pavilion dv with windows 7 64bit and i need to download driver. Hp deskjet d2300 driver for windows xp HP Pavilion dvdx Notebook. HP Pavilion dvdx Driver Downloads. Operating System(s): Windows 7 (Bit), Windows 7 (Bit). IDT High. HP Pavillion dv6 NoteBook PC Drivers Download. Windows 10 64 bit, Windows 64bit, Windows 7 64bit HP Pavilion dv I reinstalled windows 7 by formatting the hdd and lost my drivers aswell. SolvedHP Pavilion dv6 Notebook pc 64 bit 8 RAM, AMD Am. 17 Jun Searching hp pavilion dv6 drivers for windows 7 64 bit bluetooth free. HP Pavillion dv6 NoteBook PC Drivers Download for Windows. Do you.nokia ca 42 usb phone parent driver, lg optimus l3 adb driver download, download drivers hp pavilion g6 series, hp laserjet 5200 printer driver for windows 7, hp laserjet m1005mfp printer driver download for windows 7, hp scanjet g3110 driver download for windows 7 asus hd 5450 silent driver download
OPCFW_CODE
When would you use an octree in a voxel Engine? I've been reading about Octrees, and I understand how they work (or at least I think I do). However, I can't figure out why you would use octrees instead of a simple multidimensional array. In my project, I use chunks in my world. Every chunk is made up of 16x16x16 voxels. If I need to access a voxel, I just use the notation myChunk[x][y][z]. If I need to check neighboring voxels, I can use the same notation. I've already implemented frustum culling, face merging, and hidden surface determination. With these optimizations and this simple multidimensional array, I can render about 500 chunks at 80 fps. So, in which situation would I use octrees in this kind of Voxel Engine? Or is it useless? I can see the purpose of octrees if I would implement LOD into my voxel engine (which I'm not planning to do). Is my lack of experience in Game Development blinding me on something? Short Answer The Octree is favoured in games and rendering, because It supports visual level of detail, sensibly. It provides extremely tight compression of sparsely-populated spaces. (c.f. SVOs) At its lowest level it matches the uniformly-sized / -placed cells required for a voxel world. Other 3D accelerative structures may not do this, as explained below. Full Explanation - TL;DR Hierarchical spatial subdivision is used for two main reasons LoD: To avoid iterating over detail that is far away and thus not of interest; To accelerate iteration through a space, e.g. in raytracing or AI pathfinding, by walking a shallow level of the data structure where possible, i.e. where there are fewer nodes across a given distance. Why is the octree commonly touted? As compared against it's usual competitor, the KD-tree, it... Provides uniform subdivision at every level. This is an excellent way to achieve smoothly-degrading LoD (level of detail) while maintaining equal "smoothness" across each LoD grade. So visually this works better than a KD-tree, which isn't really suited to the purpose of LoD. Is exactly the same form of subdivision at each level (8 children per parent with the same layout each time) so certain assumption can be made, resulting in fewer conditionals. Perhaps more importantly, this gives us more accurate bounds on processing cost than a KD-tree, which may have N != 2 subdivisions in each axis... there is a general rule in engineering real-time systems that it is better to have a lower but stable FPS than an FPS with a high maximum rate and a too-low minimum. Provides extremely rigourous compression as compared with other 3D spatial subdivision approaches; hence the term, "sparse voxel octrees" (SVOs). This means you can store a relatively enormous space containing a very small amount of voxel solids, efficiently, using an octree. To give you some further persective, pros and cons for KD-trees: KD-trees tend to make more efficient use of space / time overall, in terms of the number of boundary planes the data structure contains, which also improves speed of traversal. Simply not suitable for distance-based LoD, due to variable granularity and alignment. For most, it turns out that the pros of octrees end up winning the day. From an engineering standpoint, conceptual uniformity (simplicity) and predictability of execution paths win the day. Less compact alternatives In many instances it has been proposed that in spite of the conceptual simplicity of octree traversal, the octree is non-optimal in this sense due to the sheer number of levels of subdivision, which results from the division ratio (1:2 per axis) being so low. If, as in your case, instead of 2x2x2 children per node, there are 16x16x16, we would greatly reduce the number of node boundaries, simplifying cache linearisation which leads to better performance on the overall. Of course, what is crucial to note is that, as with most engineering decisions, this is a trade-off between space and time; the compression of sparse spaces will be far worse with 16x16x16 subdivisions, than with an octree. We call this flattening the data structure. Thanks for this great explanation. I've understood now, with a Octree I can make my game more robust and this will avoid some headache in future. Also you've pointed another good reason to work with it: It's commonly used, so as a i'm (or want to be) Game Developer, I need to know it. Thanks again. @AfonsoLage I am afraid robust isn't the right term, I think it doesn't directly deal with robustness. Nick's great answer didn't even mention it (so I am confused about your deduction). But anyway +1 for Nick complete answer. @concept3d With robust I meant this will help me in other functionalities, like better Frustum Culling and Collision Detection, making my Voxel Engine more stable and fast (if well implemented ofc), may I've used the wrong term... Like any data structure octrees come with pros. and cons. Octrees are hierarchical. This is usually a compelling reason to use them, if you don't need this property then you probably don't need octrees. Remember to always use the appropriate data structure when actually needed. You can use a uniform grid for your voxels engine, and it works. But you won't have the extra benefits (and complexity) that comes with an octree. For example octrees are good for voxels LOD, and like any hierarchical data structure they can speed up certain tests like ray casting and frustum culling. On the down side Octrees are also known as harder to make cache friendly. Also a complete octree with certain depth takes a lot of memory, and that's why sparse voxel octrees exist. Thanks for this answer, I'll stick with Octrees because Ray Casting and a better Frustum Culling (my actual one takes 12ms on a 500 chunks tile map). @AfonsoLage but keep in mind that octrees also need some more investment in the time to implement. Yes, i'm reading many articles to try to implement it, the hardest thing is to know what is GigaVoxels, Sparse Octree, Loose Octree. But when I know how to distinguish those three, I'll be able to implement :). @AfonsoLage check my blog, I've written an article that can help you get started http://codingshuttle.com/2014/01/building-a-multipurpose-octree-datastructre/ @AfonsoLage Yes -- do take into account what concept3d has said about complexity. To implement in the non-naive i.e. performant sense, octrees are certainly a non-trivial undertaking for someone new to 3D engines. Caveat emptor!
STACK_EXCHANGE
Go for the Education versions, as these are mostly akin to the Enterprise versions of Windows 10. N is, as far as I know, a version that has some built-in Windows multi-media options removed. European legislation is to thank for that. If you don't care or use your own set of multi-media software, instead of the ones provided by Microsoft, get the N version. You can use Education versions as long as you are registered as a student at an institution that Microsoft acknowledges as a school/university. Although I don't think they actively check on that. Depends also in which part of the world you live.Wikipedia link to an overview of Windows 10 versions and their capabilities. That should give you an idea which version to choose. Windows 10 build 1809 looks to be the more stable one. People migrating from build 18xx to 1903 have reported many problematic errors. Maybe a complete fresh install using build 1903 fares better, but I'm not sure of that. Usually a fresh Windows 10 install works better than migrating from one build to a newer build. Also, you can expect even more problems when you would migrate from build 17xx to 1903. But as you want to keep as much as possible from your Windows 8.1 installation, you limit yourself to the migration option. A plain Windows 8.1 installation is not likely to give you a lot of problems during the migration to Windows 10, but you can expect migration problems, especially when using older/specific software or hardware (drivers). If possible, create a Virtual Machine and install a trial version of Windows 10 (same build as you are going to get from your university). Then try if the software you depend on works inside this virtual machine. I remember you are using some very specific (and older) software. If that software remains working to your satisfaction, then go through the motions of migrating your computer and discard the Virtual Machine you created. But if your software isn't working, then keep using Windows 8.1 until Microsoft drops support for it officially and see what you can do about getting a newer version of your software that does work with Windows 10. Or go and find an alternative for your special software, see if that works and migrate to that alternative software on Windows 10. There used to be a tool, hosted by Microsoft, that could make an educated "guess" if your current hardware supports Windows 10 or not. But I think that was for Windows 10 builds 15xx. For all intents and purposes Windows 10 build 15xx isn't officially supported by Microsoft anymore. Even builds 16xx are out of support (with the exempt of LTSB version of Windows 10, if I remember correctly). Understand that there are big differences between Windows 10 builds, so if your software works with one build, it is not a given that it will remain working in the next Windows 10 build. Enterprise versions of Windows 10 allow you to postpone migrations from one Windows 10 build to a new build for 1 year maximum. Education versions of Windows 10 are in most respects the same as Enterprise versions of Windows 10, so I assume that you have the same option to postpone. It comes down to the ability of your special software to work on Windows 10, to see if migrating your computer to Windows 10 is a good idea or not.
OPCFW_CODE
LICENSURE: Professionalizing software engineering. What is Licensure??? Licensure refers to the granting of a license, which gives a ‘permission to practice.’ Such licenses are usually issued in order to regulate some activity that is deemed to be dangerous or a threat to the person or the public or which involves a high level of specialized skill. In terms of licensed software engineers, a person would have to take a test demonstrating a certain “body of knowledge” deemed essential for a software engineer to have, and fulfill some other basic requirements (such as possessing a college degree in a relevant field, etc.) in order to call oneself a software engineering. Then, only licensed engineers would be legally able to hold certain jobs in companies which the licensing board deemed must be help by a licensed software engineer, similarly to how teaching positions at your local public school can only be help by licensed teachers. If you are still curious about licensure in general, here are some examples of licensure in practice for other disciplines: Teaching License (to make sure your 3rd grade teacher, Ms. Maine, actually knows her own multiplication table!): http://certificationmap.com/ Land Surveyor License (to make sure your house isn’t built directly on top of a fault line!): http://www.pels.ca.gov/ So doesn’t a “software engineering license” make sense if you don’t want everything from your home computer to local hospital data crashing?!? PROS TO LICENSING SOFTWARE ENGINEERING: - Creates standardized bar for every licensed professional, guarantees some minimum standard - Ensures that software engineers poses a basic “body of knowledge” - Increases personal responsibility for code (i.e. if a licensed professional is overseeing a team of programmers, the final product will need the licensed professional’s “stamp of approval” to be approved, thus placing greater liability, and thus responsibility, on the licensed professionals. Greater personal responsibility should increase the quality of the final product.) CONS TO LICENSING SOFTWARE ENGINEERING: - A basic “body of knowledge” sounds good in practice, though impossible to implement for two reasons: - Software engineering is such a broad discipline that any body of knowledge would be too broad for any one person to fully understand. As Robert Glass describes, software engineering is as much an art as a science, and thus rules such as “don’t built a house on a fault line” are never as clear-cut. - Because software engineering is such an immature field, software engineering itself is constantly changing. Are software engineers advanced programs, hardware specialists, or managers? If the role of a software engineer is not clearly defined, how can their expected “body of knowledge” be? Example: Robert Lief from AdaMed: At a conference aimed at discussing the potential of licensing software engineering, Mr. Lief mentions that the reason we can’t all use a single common programming language is that the language must be suitable for the problem domain. Thus what languages should a software engineer be proficient at? These are truly arbitrary calls. (licensed professional software engineer) - Thus because licensing would not guarantee any useful information the professional would have (since any ‘useful’ guaranteed knowledge pertaining to all types of projects would be too large), it would create a false sense of security. (A Summary of the ACM Position on Software Engineering as a Licensed Engineering Profession, July 17, 2000). - The only useful license would then have to be very strict and all-encompassing to be useful, and thus too few people would be able to obtain it. Therefore: “If very strict criteria were applied for software engineers, maybe 90% of current professionals would lose their jobs, and we can’t afford to close down Microsoft and the rest of the software industry.” (Robert Lief from AdaMed) - The ACM has recently adopted positions in opposition to the licensing of software engineers as Professional Engineers, and has withdrawn from the Software Engineering Body of Knowledge (SWEBOK) project. The ACM concludes that: “[C]urrent software engineering body of knowledge efforts, including SWEBOK, are at best unlikely to achieve … appropriate assurances of software quality…” - Companies often offer their own “professional certification” for specific facets of the company – for instance IBM, Microsoft, and Apple all over several different certifications, ranging for security to specific programs. These are much more tailored, and thus useful, than one broad professional license. - IEEE-CS offers a certification titled Certified Software Development Professional. (CSDP – more information can be found here: http://www.computer.org/portal/web/swebok The CSDP offers tailored certifications much like the tailored coursed offered at large companies (described above), thus offering greater flexibility and being more helpful than one broad license. (at the bottom of this page, additional information is given on the CSDP if you are interested in exactly what typical certifications entail) Case studies of licensing: United States: Texas, currently the only State in the US to offer software engineering licensing, created in 1998. One can become a Professional Engineer, with only a specialty in Software Engineering. The first half of the test is electrical engineering, the second half is mostly hardware – not very much programming, and thus not very software-tailored, and therefore not very useful. Dave Dorchester, Vice Chairman of the Texas Board of Professional Engineers mentions: “Indeed, officials from many states and several other countries have told us that they are waiting to see how these things all come to fruition.” Thus the case in Texas was seen as a “pilot program” for software engineering licensing, though has not spread to any other state or country. How effective, then, could it have been? IEEE/ACM: Since 1993, the IEEE Computer Society and the ACM were actively promoting software engineering as a profession and a legitimate engineering discipline, notably through their Software Engineering Coordinating Committee (SWECC). However, the ACM dropped out and switched sides on the idea, now saying that licensing is the wrong thing to do. Internationally: The Information Systems Professional (I.S.P), or Informaticien professionnel agréé (I.P.A. in French), is a professional designation issued by the Canadian Information Processing Society (CIPS). Introduced in 1989, the professional designation is recognised by legislation in most provinces of Canada. There is no legal requirement to hold this, though, so it is more like a certification than a license. Conclusion We Drew: Will not be helpful at all, and actually cause more harm than good. The “body of knowledge” necessary for licensure is too large for any test since software engineering is so broad and still changing due its immaturity. Not only will it be ineffective, but since it will create a false sense of security, it has the potential to bring about more harm than ineffectiveness. We thus stand with the ACM against licensing of software engineers. Information on CSDP (what is required to be granted the certification): 1. At the time of application the candidate holds a baccalaureate or equivalent university degree and has a minimum of 9,000 hours of software engineering experience within at least six (6) of the eleven (11) SE knowledge areas (the ten SWEBOK areas and Professionalism and Engineering Economics). 2. Candidates are required to subscribe to the Software Engineering Code of Ethics and Professional Practice 3. Candidates must pass an exam demonstrating mastery of the knowledge areas
OPCFW_CODE
When was the last time you needed to buy a new PC? Two years ago? Three years ago? The last PC I built was in 2009. I had to upgrade because I pushed the previous one I built to the limit and that was in 2004. A 2009 desktop is old in computer years, but not so much in processing power. It maybe true that there are a zillion new processors out in the market and their benchmark show exponential improvement. But to me benchmarking is just a marketing gimmick. PC sales are plunging but they are the wrong indicator to determine the advancement of the technology. The reason we are not buying PCs anymore is because those we have are already pretty amazing. Dell Inspiron e1505 2006. Previously discarded due to broken monitor. I own a Dell Inspiron e1505 laptop. The display is broken but I plugged an external monitor. I am starting to use it more and more for development. Note that this model is from 2006 and still works perfectly today. I only upgraded the hard drive to a 120G SanDisk SSD and installed Ubuntu 12.04 and it now boots just under 15 seconds from the moment I press the power button. You might say that a tablet boots instantly, but this machine is from 2006 and I use it for development and to watch movies sometimes. I run the browser with multiple tabs open, my IDE running, code compiling, and I rarely worry about performance. If such an ancient machine can do so much, what would you expect from a 2009 desktop. Custom PC built in 2004. 1G RAM Graphic card I recently upgraded my graphic card with one that has a HDMI output (I never needed it before). I plug it to the TV and the kids get to watch their shows in full screen while I have all my development application running simultaneously. The machine barely purrs with all this going on in the background. Custom PC built in 2009. The PC has it's cons, it is not portable. That is all. Of course PC sales will be low. When you don't have enough memory, you buy more RAM. When your processor is too slow, buy a new CPU, or you get a new heat sink and over clock it. You rarely have the need to buy a whole new box. New PC from best buy. I do believe tablets are amazing devices. They serve their purpose of light computing very well. They are portable and have extended battery charge compare to laptops. When you are using a tablet you will almost never run into issues where you'll think adding more RAM will help. For some reason after you use it for a year you realize that it has become too slow. Your option at that point will be to buy the latest model pass down the old one to your favorite cousin. There are lots of tablets. A solidly built PC can last you for 5 years without ever needing to upgrade. The top of the line smart-phone or tablet you own today will be obsolete by the end of 2014 if not earlier. My HTC Evo 4G was blazingly fast when I first got it 2 years ago. Today I deleted all my apps and only kept the bare bone just so I can receive text messages right when they are sent to me. I am desperately in need of a new phone because I only have ~80MB of internal memory available. Of course I will buy a new one and that will add a nice +1 to the ever increasing sales of mobile devices. My poor HTC Evo 4G Let's stop pretending that this is the state of PC today: The PC today. source PC makers and retailers will have a hard time convincing people to buy new PCs because they are mostly repackaged goods. This doesn't mean the PC will slowly fade away. Instead, only those who need it will buy it now. And that's how it was supposed to be in the first place.
OPCFW_CODE
How can I send a signal from a python program? I have this code which listens to USR1 signals import signal import os import time def receive_signal(signum, stack): print 'Received:', signum signal.signal(signal.SIGUSR1, receive_signal) signal.signal(signal.SIGUSR2, receive_signal) print 'My PID is:', os.getpid() while True: print 'Waiting...' time.sleep(3) This works when I send signals with kill -USR1 pid But how can I send the same signal from within the above python script so that after 10 seconds it automatically sends USR1 and also receives it , without me having to open two terminals to check it? You can use os.kill(): os.kill(os.getpid(), signal.SIGUSR1) Put this anywhere in your code that you want to send the signal from. This is useful for having a docker container commit seppuku Can you explain what SIGUSR1 are? Is it language Constant or I should change it value according to the signal other program can receive ? @Salem, it's actually an OS-level constant (see https://www.man7.org/linux/man-pages/man7/signal.7.html) If you are willing to catch SIGALRM instead of SIGUSR1, try: signal.alarm(10) Otherwise, you'll need to start another thread: import time, os, signal, threading pid = os.getpid() thread = threading.Thread( target=lambda: ( time.sleep(10), os.kill(pid, signal.SIGUSR1))) thread.start() Thus, this program: import signal import os import time def receive_signal(signum, stack): print 'Received:', signum signal.signal(signal.SIGUSR1, receive_signal) signal.signal(signal.SIGUSR2, receive_signal) signal.signal(signal.SIGALRM, receive_signal) # <-- THIS LINE ADDED print 'My PID is:', os.getpid() signal.alarm(10) # <-- THIS LINE ADDED while True: print 'Waiting...' time.sleep(3) produces this output: $ python /tmp/x.py My PID is: 3029 Waiting... Waiting... Waiting... Waiting... Received: 14 Waiting... Waiting... where do i need to put that line in my above script You put either of those snippets into your script at the moment you want the 10-second clock to start. For example, you could place it directly before the while. i don't want to send alarm signal but USR1 , can you give that example in my current script It goes in the same place as signal.alarm(). Did you try to add it to your script?
STACK_EXCHANGE
Curious what custom questions you should add to your #BeHeard Survey? For both the Free Report and the Premium Report, you have the opportunity to add up to five 5-star rating questions and five open ended questions to tailor your #BeHeard Survey to your culture and interests. These questions will all be at the end of your survey. With our dual rating scale, your customized 5-star rating questions will measure effectiveness and employee importance, just like our statistically valid #BeHeard Survey questions. After all, what might be important to the company may not be important to the employees, so this will give you additional insight. Need ideas? No problem, check out some of the survey question ideas below. These examples are also a great resource to share during team meetings to decide what questions will work best for your culture and the feedback that you need. Please note that the survey question examples below should be reviewed by your HR and legal teams to ensure appropriate compliance and approval. Examples of Custom 5 Star Likert Rating Format Questions 1 = strongly disagree, 5 = strongly agree - I enjoy working with the people on my team. - I trust the people on my team. - I feel my compensation is competitive. - I feel my benefits package is competitive. - I feel the executive team is sufficiently involved in company morale. - I feel comfortable communicating concerns or making suggestions (about COVID-19 or diversity and inclusion) to leadership. - I feel included and respected within the organization. - I have access to the resources & tools that I need to get my work done remotely. - The quality of our remote work tools (e.g. VPN, remote work access, intranet, communication tools) support the work I am expected to deliver in my job. - I feel supported in my organization when I run into technical problems. - I am satisfied with the way my organization has handled the switch to remote work. - I know the deliverables that are expected from me each week. - I know what I need to be working on to contribute to my team's priorities. - My team has coordinated tasks successfully while working remotely. - I am able to maintain a normal workday schedule when working from home. - I'm at least as, if not more productive, while working from home. - I am happy to be able to work from home. - I can easily reach my colleagues when I need to. - Remote communication with my teammates is as good as it was in person. - In this remote working environment, I am able to connect with my supervisor as easily as I could while working in the office. - I clearly understand our work from home policy. - I have everything I need to do my job while working from home. - I feel highly connected to my team as we work remotely. - I have the right amount of virtual contact with my colleagues on a weekly basis. - I have regular, productive 1-1 sessions with my manager. Diversity and Inclusion - I have a sense of belonging in this organization. - When I speak up at work, my opinion is valued. - Members of my organization are treated with respect when they have opinions that differ from the majority. - Management demonstrates that diversity is important through its actions. - My supervisors understand that diversity supports the mission and goals of our organization. - This organization is committed to diversity and inclusion. - My organization puts resources toward diversity initiatives. - People of all cultures and backgrounds are treated equally at this organization. - This organization provides an environment that encourages free and open expression of ideas, opinions and beliefs. - This organization draws on its diversity to achieve its goals. - Our organization’s diversity is viewed as a competitive advantage. - Diversity of thought is valued as a method of solving business challenges. - Prejudice exists in our workplace. - I see strong leadership support of the organization's value or diversity and inclusion. - Leadership demonstrates a commitment to meeting the needs of employees with disabilities. - I am comfortable talking about my background and cultural experiences with my colleagues. - I believe this organization will take appropriate action in response to discrimination. - Employees at this organization are encouraged to bring new ideas to management. - My supervisor allows some degree of risk-taking in order to pursue new ideas and ways of working. - This organization encourages me to think creatively. - Creative thinking is rewarded at my organization. - My supervisor welcomes my ideas. - If I present an idea that is deemed beneficial, my supervisor helps me implement it. - I am comfortable challenging the ideas of my superiors. - I am comfortable discussing an “out of the box” idea with my team. - This organization puts resources into supporting innovation. - My work environment fosters innovative thinking. - Overall, I am satisfied with [Organization's Name]'s response to COVID-19. - I don't feel anxious about the future of our organization. - I don't feel concerned about losing my job. - I feel my organization has done a great job with internal communication regarding COVID-19. - I have safe channels to share any concerns regarding COVID-19 and it's impact. Examples of Custom Open Ended Questions - What company perks do you wish we offered? - How do you feel about the company's ethics and what it stands for? - What are the biggest challenges facing the business? - What is one thing we can do to help you be more successful in your job? - What are our greatest strengths as a company? - What has the organization done in response to COVID-19 that has positively impacted the employee experience? - What have we learned from this new workplace normal that will enable our organization to operate even better as we move forward? - What can the organization do to develop our understanding of what we mean by racism in the workplace? - How can the organization best support you (during...)? - What else do you need to help you work remotely more effectively (e.g. desk, chair, monitor, headphones, etc)? - What concerns, comments, or feedback would you like to share?
OPCFW_CODE
M: Simple Is Complex - vinnyglennon https://avdi.codes/simple-is-complex/ R: dustingetz I like Rich Hickey's definitions Simple (roots: sim, plex) - one fold or braid Complex - braided or folded together Easy - near to our capabilities These are really articulate definitions that helped me as a programmer. Emergent complexity from simple systems - it seems like there might be an insight here. Avdi says we can build complex systems out of simple components. He says complex systems are hard to understand. He says it's not feasible to understand the system or predict it. So they need to be measured and observed. That's interesting. I don't know if it's true that a system built out of simple components cannot be understood. How precise is the model? How many layers deep? R: uoaei Complexity as defined in most scientific disciplines today refers to systems which are "greater than the sum of their parts," i.e., where the behavior of the whole system cannot be predicted by looking at each component in isolation, but only can be understood by the action and interaction between components and how that evolves over time. Most systems of interest are too big to enumerate all the possible n-ary interactions between constituent parts, so we must study them by other means: probe plausible simulations, define and compute statistics from observable data, or fit models which capture plausible correlations. To complicate this further, it usually takes just one cycle or feedback loop in the network / system to introduce enough complexity that most traditional analytical mathematical tools break down. I studied complex (adaptive) systems as my Master's education and can confirm most techniques reduce to "linearize around the important bits and extrapolate from there". Examples of complex systems include: social networks (IRL and online), economies, biological systems. Machine learning is one of the most promising tools for probing these because if we can fit an appropriate model, we can hopefully capture enough of the behavior that we can reliably extrapolate to unseen data / states. R: ssivark It feels like the article ended just as it was getting warmed up towards something interesting! If any HN'ers have pointers/links to other interesting analyses of emergent complexity among simple interacting agents, I'd love to read 'em! R: carapace [https://www.santafe.edu/](https://www.santafe.edu/) [https://en.wikipedia.org/wiki/Santa_Fe_Institute](https://en.wikipedia.org/wiki/Santa_Fe_Institute) > The Santa Fe Institute was founded in 1984 ... > SFI's original mission was to disseminate the notion of a new > interdisciplinary research area called complexity theory or simply complex > systems. R: ssivark Unfortunately a pointer to the Santa Fe institute is a little too broad to my question (which was specifically about interacting agents, and I was implicitly thinking more about physical embodied beings, somewhat in the spirit of Rodney Brooks' robots). Also, I'm keen on concrete analyses rather than vague hand-wavy pontificating. That said, the Santa Fe Institute is a great resource to be reminded of every once in a while. I'm particularly a fan of a line of research being pursued by Simon Dedeo (and others). eg: _Optimal high-level descriptions of dynamical systems_ [https://arxiv.org/abs/1409.7403](https://arxiv.org/abs/1409.7403) R: carapace Sorry. I don't have any good references off the top of my head for that. You know about "BEAM" robotics? [https://en.wikipedia.org/wiki/BEAM_robotics](https://en.wikipedia.org/wiki/BEAM_robotics) W. Grey Walter's cybernetics "turtles"? [https://en.wikipedia.org/wiki/William_Grey_Walter#Robots](https://en.wikipedia.org/wiki/William_Grey_Walter#Robots) [https://en.wikipedia.org/wiki/Turtle_(robot)](https://en.wikipedia.org/wiki/Turtle_\(robot\)) R: ssivark Thanks, I'll check those out! :-) R: dustingetz [https://web.archive.org/web/20190929151534/https://avdi.code...](https://web.archive.org/web/20190929151534/https://avdi.codes/simple- is-complex/) R: andreskytt Oliver de Weck from MIT has done a lot of work around complexity, including deriving definitions and measures. Turns out there is an objective measure as to how many microservices is optimal. The number comes to around 2.8 connections per component if I remember correctly. Can share the math, if anybody's interested. R: crehn > Can share the math, if anybody's interested. Please do! R: michannne Couldn't really take much from this article, other than I should read that book, it sounds so interesting R: karmakaze TL;DR simple mechanisms can produce complex behaviours My favourite example is the few simple rules of Go leading to possible extreme superhuman strategies. R: codr7 What was the point again? Simple isn't Complex any more than Love is War or Dark is Light. System-wide interactions between any kind of components is about as difficult as it gets. R: splittingTimes Second that. This conclusion "Simplicity leads to complexity." is just too much of a shortcut of the author's own train of thought. As you said, this sentence should read "Interaction and coupling leads to complex emergent behavior". I feel this summarizes this article better. R: bch There's a Alan Kay talk[0] that might illuminate for you that leap of logic about simplicity. [0] [https://youtu.be/NdSD07U5uBs](https://youtu.be/NdSD07U5uBs) R: jaequery tldr; Micro-services architecture can start out simple at first, but when there are many of them, it will become complex even more so than the monolithic approach. R: kdmccormick Don't put words in the author's mouth; nowhere do they contrast microservices with monoliths. Also, one of Avdi's main points is that a proper microservice architecture is complex, but NOT complicated. And in my personal experiece, monoliths have a tendency to be complicated.
HACKER_NEWS
This whole "who prevents cargo from being stolen" argument is moot in my opinion. If a someone wants to steal cargo, they can threaten driver with a gun. Maybe he will be able to draw a gun soon enough, maybe not. If cargo is expensive enough, he may even be killed. Also only in US drivers can have a gun. In europe there is also many trucks. What happens when driver hears something strange at night? He just pretend he's still asleep so that thieves don't threaten him. Cargo is insured and his employer will prefer to have alive driver. Too late. We already did it voluntary. Even better: Return from the Stars. As an aside IMHO that article the other day asking about where is decent SciFi nowadays seemed to miss the point that for a good show character interactions and growth are what makes it good and that technology by itself is merely a prop. In MY humble opinion, character interactions and growth makes a good space opera (SyFy), not SciFi. Then one day a programmer notices the dependence on the uninitialized value, which would clearly produce a severe failure if fed the correct inputs, and he thinks "surely this hasn't been running for thirty years deployed on hundreds of thousands of nodes, and never triggered a fatal anomaly" and yet there it is. And then it breaks simultaneously in the whole world. So strong, that you are willing to go to jail for a few hours, at the very least. Nope, now you can be accused of terrorism and held for a month just as an example or slapped with a nice fine of several thousand dollars for costs of detainment. The point is, had he grown up in any other area in the country, this guy would be stocking shelves at Wal-Mart and complaining about "the system." What can we say about NSA when such a guy can go in, take many secrets and publish them while successfully escaping wrath of The President? Yes. You have this effect in double slit experiment, there are places where waves cancel out and you have dark place. The problem is that it's almost impossible to generate an inverse waveform from source other than the one which generated your photon. Typically it's done by splitting one waveform. On the other hand I post very rarely and sometimes I have mod points even when I don't post. Maybe somehow slashdot also sends mod points to wrong users? It helped that you had VERY limited set of possible configurations. I'm not an american, fifth amendment doesn't apply to me. Also terrorism trumps all amendments. Maybe it's the law, but what use has your lawyer when you can't see him when you're in gitmo and nobody want's to even admit that you are held there? Yeah, then service provider gets secret order that it has to provide data about user AND continue sending those emails. What, can't they ask that? Who will prevent them when you can't even talk to your lawyer about this.... I don't believe in life before coffee.
OPCFW_CODE
Researcher in Interactive & Immersive Experiences Areas of interest: Virtual Reality, Augmented Reality, User Research, Connected Devices, HCI. - PhD – Computer Science (Virtual Reality), University College London - Thesis: ‘Participant Responses to Virtual Agents in Immersive Virtual Environments‘, published 18th June, 2006. - Business Electives – London Business School - Undertook case-based electives sponsored by the Centre for Scientific Enterprise (CSEL). - BEng. – Information Systems Engineering, University of Surrey Brief Resume in PDF. Working with multiple teams, across the BBC and the wider industry, on novel future experiences for both big screen and immersive platforms using user-centric methods. - Designing robust ways to evaluate novel user experiences on interactive & immersive platforms. - Exploring organisational and audience value of immersive technologies through methodological lab studies and critical review of state-of-art both external & in-house. - Technical steering, support and guidance on immersive collaborations with industry and academic partners. - Exploring design guidelines and uncover potential value in social VR/AR experiences. Worked with BBC R&D to drive innovation across the BBC and the wider industry. Activities feed into an annual process leading up to a 5+ year strategy. - Working with teams across the BBC to strategically plan out the technology transformation projects keeping in mind the regulatory environment, public service principles, creative processes, editorial concerns and market competition. - Regularly communicating complex concepts and influencing diverse groups of stakeholders across and outside the BBC to create an environment that is receptive to transfer. Explored the issues around delivering synchronised connected media experiences to personal devices, how these devices fit into the life of users in a connected home and the flow of content across various devices. - Worked with designers, editors and producers to understand audience behaviour, visualise the potential design of content flow across devices in users’ home. - Worked with external partners, as part of HbbTV specification groups, to write test assertions as part of including synchronisation standards (DVB-CSS) into HbbTV 2.0. - Led the evaluation of perceived delays (and simulated errors) on users’ assessments of synchronised experiences (as part of an EU collaborative project, 2-Immerse). Postdoctoral Research Fellow at University College London (2006 – 2007) Collaboration with University of Salford and University of Reading aiming to implement a system that enables three physically remote users to communicate in a collaborative virtual space. Research Fellow at University College London (2001 – 2006) Collaboration with seven universities conducting research into technologies that focused on the integration of physical and digital interaction. Worked on designing & creating expressive virtual characters for use in (collaborative) virtual environments.
OPCFW_CODE
ID_Etudiant ID_Module Average Année_Universitaire I registered the notes on Rating but I planted in the calculation of averages I do not know how to I am trying to get a detailed as follows: ID_Etudiant Moyenne_module1 Myenne_module2 Moyenne_module3 For example, the module1 So the average _module1 be: (Note (11) * coefficient (11) + note (21) * coefficient (21) + note (31) * coefficient (31)) / (coefficient (11) + coefficient (21) + coefficient (31)) thank you read me and show me the track to follow. i don´t know i can assist you here, try invest in the database structure to avoid you to start from the beginning later, you will get easier sp also. i builted a videoclub (movie rental) project, i knew the structure i needed(major table was 'rents' that got who rented and what rented, 75% of store procedures feeded from that table), in your case i imagine but no sure of structure, they did gave you a check list, right? For exemple i am accountant i can easely built a accountant software and give suggestions because i know the algorhitms, if it is of other kind i need assistance i needed a ckeck list of what they want and i will keep asking question all the time!!! i mean school database software, didn´t you seen a videoclub, library, or other software? i mean that because you didn´t show no info besides students and grades, nothing of teachers or rooms etc, but it may be all you need you know. I set the collation for the database to arabic before migrating the database using DTS and the data still unreadable.I tried and copy the data to an email and send it to another employee in the company.In outlook He set the encoding to Arabic(Windows) and it becomes readeable.So I think I have two choices to solve the problem: 1-Set the Encoding in Access to Arabic(Windows):I couldn't find any Encoding settings in Access. 2-Set the Encoding in code for that field in the dataset. Please if any one knows how to reach one of these solutions tell me about it. an error as occurred when connection to the server. while connecting sql server 2005, this error is likely to be caused by, according to sql server setting, remote connections are not allowed (provider: ... -there was not possible to connect to sql server I had upgraded Visual Studio 2005 to 2008 and then upgraded SQL EXPRESS 2005 to SQL EXPRESS 2008. However, When I try to add a new "mdf" file under APP_Data directory. I am getting the following error message. How can I solve the problem? The Microsoft.Net Framework Version 3.5 SP1 and Microsoft SQL Server compact SP1 has been already installed. "Connections to SQL Server files (*.mdf) require SQL Server Express 2005" What a curious mind needs to discover knowledge is noting else than a pin-hole.
OPCFW_CODE
I have recently finished working on my website (you can find the link in my profile) which was an awful lot of work. In your experience, is it safe to assume that a good, informative website is capable of attracting potential clients? If so, what should be borne in mind when marketing oneself via the web? I would appreciate any input you might have regarding this. A friend of mine suggested that SEO (Search Engine Optimisation) can yield potential benefits but I am rather unsure whether conference organisers/private businesses rely on search engines like Google and therefore I have yet to decide whether it's a worthy investment. Thoughts? asked 03 Aug '12, 13:25 A colleague of mine paid a lot of money for SEO optimization. It seems to have worked since his site now gets top google rankings. Alas, most queries he gets through his site seem to be rather dodgy/involve a lot of client education. The usual profile of "clients" who find him through the web: People surfing around, no previous experience with interpreters, shopping around for the best deal. On the other hand, if only one in 10 of these client contacts works out, then that ratio is even umpteen times higher than with "traditional" marketing (mailing, cold calls etc.) which most of us (for very understandable reasons) shy away from, anyway. Then again, whenever I worked for major companies (the "sales channel" usually being word of mouth) my site statistics often tell me that someone came to visit my site to find out who I am (usually after the contract was signed). Often the site statistics tell you the organization if they have their own server (which many of the large companies have) or at least the city. This is just my personal two cents so allow me to break your question down further: -Not necessarily. You need to separate the wheat from the chaff. -Probably. Then again, it does not cost much and tinkering around with it tends to be fun. you'll find some useful comments in answer to a similar question here... http://interpreting.info/questions/1210/will-a-home-page-get-me-more-work My experience is that a lot of interpreting work comes from other interpreters I've met and/or worked with. So don't forget to invest time in networking, meeting people, introducing yourself the old-fashioned way as well. I don't yet have a professional website of my own, but I do think we interpreters should all be slowly thinking about setting one up. (Very few conference interpreters have websites at the moment. We're a bit behind IMO). However, if you're looking to be found by new clients online then I can recommend joining AIIC as soon as you can. I am regularly contacted by potential clients who find my name in the AIIC online database http://aiic.net/directories/interpreters/finder/
OPCFW_CODE
Description of the problem: I am creating an experiment to measure social influence. Participants can search for information on 24 textboxes with reviews from people. To show participants the reviews, I created a display. When participants are clicking on the textboxes, the review will appear on the display. The textes for the questions and reviews are coming from an excel file. The experiment is running fine on my computer with PsychoPy but when I am uploading it on Pavlovia there is an issue with the input for the reviews/feedback. You can see the question in the gray box in both pictures I attached. The question is a column in the excel file, is not written in [“…”] and is not causing me any trouble. I tried to write the text for the feedback without and “…” and I received a different error. I uploaded both of the errors I received as screenshots and my excel file. It would be wonderful if someone could help with that. quest1.xlsx (11.7 KB) How exactly do you try to get the values of your feedback vectors (fbpos and fbneg) to be displayed in the textboxes? Wouldn’t it be easier to just divide all 24 feedbacks into separate columns (fbpos1, fbpos2, …, fbneg1, fbneg2,…)? #Code Begin Routine clickables = [textboxes1,textboxes2,textboxes3,textboxes4,textboxes5, textboxes19,textboxes20,textboxes21,textboxes22,textboxes23,textboxes24] #all textboxes clickable shuffle(fbpos) #Text from the excelfile gets shuffled fbyoung=fbpos[(oplus):(oplus+yplus)]+fbneg[(omin):(omin+ymin)] #omin, oplus, yplus and ymin are certain ratios from another excel file (e.g. 6 positive and 6 negative feedbacks from the old persons > in the textboxes would be the equal condition). So what I am basicly doing is, putting in the feedback as list. Shuffle the lists. Allocate the feedback lists in a certain ratio to two new lists: the list for feedback from old and young persons. Last step is to put the the list with feedback from old and young persons to one feedback list (fb). The clickable textboxes in my routine are in a certain order (first 12 old persons next 12 young persons) so that they can show the right feedback. Your solution might work if I am putting the texts back togehter in my code. Like that: fbpos=[[“fbpos1”]+[“fbpos2”]+[“fbpos3”],…] But would there any difference if I would writing the hardcode in the Routine? Thanks for your help!
OPCFW_CODE
I'm planning to upgrade my current pc of 4 years for gaming purpose. I've done quite a bit of research but couldn't find what i was looking for or maybe i'm just too newbie to put everything together to form a bigger picture. Therefore I'm hoping you guys could help with my questions. Currently I have a PCI-e 2.0 motherboard (Asus M3A32-MVP Deluxe - http://www.asus.com/Motherboards/AMD_AM2Plus/M3A32MVP_DeluxeWiFiAP/) with 8GB of DDR2 RAM and more than enough power (800w psu). I'm contemplating of purchasing a HD7970 graphic card which is a PCI-e 3.0 card to put into my motherboard and will drive five 27" monitors at 1920 x 1080 resolution using Eyefinity (tricking the OS into thinking that I have ONE very big screen instead of 5). Will such system work without the performance suffer.... much? I understand that PCI-e 2.0 x16 is capable of 8Gbps bandwidth and PCI-e 3.0 x16 has up to 16Gbps bandwidth. A simple arithmetic calculation gives me the number of bits required to store information of ONE monitor at 1920 x 1080 resolution in 24 bits per pixel (bpp) colour depth of ONE frame. 1920 * 1080 = 2,073,600 pixels 2,073,600 * 24bpp = 49,766,400 bits So for the number of bits for 5 monitors 49,766,400 * 5 = 248,832,000 bits If I want a smooth 60 frames per second (FPS) it'll be 248,832,000 * 60 = 14,929,920,000 bits/sec So the data transfer rate is a whopping 15Gbps and only PCI-e 3.0 can support it nicely. What happen if I still plug that card into a PCI-e 2.0 slot? Will the system tune the FPS down automatically (which means my FPS will suffer and drop to about 32 FPS) OR the entire system will fail ultimately (or slowed to a crawl which isn't worth investing in)? Can I get two identical graphic cards to distribute the burden (3 monitors on the first card and 2 on the second card)? According to my mobo specification, dual card mode can still achieve x16 lanes each which means the total bandwidth can be increased up to 16Gbps. Will Eyefinity still work (5 monitors combined together to make ONE very large display)? Do I need CrossFire (which i prefer not to due to microstuttering)?, which in this case the configuration will be all 5 monitors on the primary card. I really hope someone is able to shed some light on my dilemma. Can I just buy a high end card and get a maximum gaming experience, or should I save up and change my entire gaming system? Thanks in advance =)
OPCFW_CODE
Can not get scheduled scan to remove infections Posted 13 March 2008 - 07:04 AM I have two computers with Windows XP installed, and the other with Windows 2000. I have the "Automatically remove Infection with TAI higher than" set to "3", and the "Quarantine before removal" option ON. These are my settings... On all three computers, nothing is quarantined and nothing is removed when the scan is automatically scheduled -- either at some predetermined time of day or scheduled to automatically run on startup. However, using the exact same settings and not changing anything, if I run a manual scan, all detected infections are removed -- i.e. the same objects that the automatic scheduled scans failed to remove, after clicking the Remove button following the scan. I also noticed that SE Pro would always detect "critical" objects on a daily basis and remove them. However, ever since I installed Ad-Aware 2007 Pro, not one critical object has ever been detected! How can that be? Another couple of other things (among the multitude of others) that really bug me are: 1) when the Remove button is pressed after a manual scan, all of the "checks" for the infections that are to be removed are cleared. I re-check all of the infections and press "Remove" again, and the checks, again, get cleared and the tree-list collapses; and, 2) my "saved" settings seem to get intermittently reset to the factory defaults for no reason, so, I have to always check my settings to make sure that they were not discarded and reset. This happens on all machines. Posted 13 March 2008 - 09:21 AM as a user with valid licences you'll get an one-to-one support. Please follow the link ( supportcenter ) at the end of this post. ( morituri te salutant ) Posted 14 March 2008 - 11:21 AM So, I follow the link and what do I find? Now, you are asking me to re-enter my whole problem again that I originally typed into the forums -- but instead, in a tiny little box that does not even seem to handle graphics! Why can I not get support through the forums? That is what they are for, are they not? And, how is anyone else supposed to know or learn anything if the solution is kept private and not posted in the public forums where the public can see it? On top of that, you moved my original topic thread to "AAW 2007 Resolved /Inactive Issues closed" and closed it! Why? Posted 14 March 2008 - 12:12 PM This forum is for user of AAW2007 free ONLY. We can't give any support to a user with a paid licence. Sorry for any inconvenience. BTW have a look to >> http://www.lavasofts...s...ost&p=67332 << Edited by Raziel v. Nosgoth, 14 March 2008 - 12:22 PM. ( morituri te salutant ) Posted 14 March 2008 - 01:37 PM Yes, you're entitled to the Official Support Center help as well (which might be a little quicker for those answers you are seeking) However, you are certainly welcome to also post here. So I've reopened this topic and merged in your original post so it can be reviewed. Sorry for the confusion! Raziel is simply trying to get you to the official support but it's ok to post here also. It's just that our paid staff doesn't monitor these forums, so it's likely going to be a volunteer member who might be able to look at this for you. Look for the *New Topic* Button near the top right when viewing the forums. Here in the forums, replies are posted to topics only. Thank you for your understanding and cooperation! Plus and Pro Ad-Aware users (only) may use the Support Center for personal assistance: Microsoft MVP/Windows - Security 2003-2009 Posted 27 April 2008 - 02:17 AM Thanks in advance for any suggestions. 0 user(s) are reading this topic 0 members, 0 guests, 0 anonymous users
OPCFW_CODE
Need a shop Something like this: http://github.myshopify.com/. @superscott @MOPineyro -- See above. 10.4. (*ノ・ω・) Do we need another repo? We could make it a stand-alone site, or could it be a rails engine inside the main site? @davidmolina Another repo. /cc @nanoxd On Apr 22, 2015, at 3:17 PM, superscott<EMAIL_ADDRESS>wrote: Do we need another repo? We could make it a stand-alone site, or could it be a rails engine inside the main site? @davidmolina — Reply to this email directly or view it on GitHub. I think standalone is the best option. Add a CNAME or an A record for that subdomain and host an app somewhere. @davidmolina Did you have any preference on the payment provider? Do you have any accounts set up? Copy. Preference, Stripe. Not yet. On Apr 22, 2015, at 4:43 PM, Fernando Paredes<EMAIL_ADDRESS>wrote: I think standalone is the best option. Add a CNAME or an A record for that subdomain and host an app somewhere. @davidmolina Did you have any preference on the payment provider? Do you have any accounts set up? — Reply to this email directly or view it on GitHub. @superscott I created a repo for it :smile: with a basic license. Implementation wise what were you thinking? @superscott Something like this: http://github.myshopify.com/, and focused on t-shirts (m/f) and stickers. I have the brand assets from HackHands. /cc @nanoxd +1 for Shopify! Very good people at that company. http://www.shopify.com/ Update: I should have time to knock this out over the coming weekend. Thanks Scott! On Tue, May 19, 2015 at 10:59 AM, superscott<EMAIL_ADDRESS>wrote: Update: I should have time to knock this out over the coming weekend. -- Reply to this email directly or view it on GitHub https://github.com/OperationCode/operationcode/issues/18#issuecomment-103614766 . -- David t. @davidcmolina https://twitter.com/davidcmolina http://davidmolina.github.io/ @superscott: convo was in Slack. BLUF: we'll hold off on building this for now. I will be closing this issue this week/sooner if no one has any objections. More background: Spoke to Shopify to waive the monthly. The two plans: NPO Lite: $29/month, $312/annual, $624/biennial, $936/triennial 1% transaction fees CREDIT CARD RATES : Online 2.5% + 30c POS Swipe Rate 2.4% + 0¢ 1 GB File Storage Unlimited Bandwidth Unlimited Products Discount Engine Real-time Carrier Shipping NPO Full: $99/month,$1068/annual,$2136/biennial,$3204/triennial 0% transaction fees CREDIT CARD RATES : Online 2.25% + 30c POS Swipe Rate 2.15% + 0¢ 5 GB File Storage Unlimited Bandwidth Unlimited Products Discount Engine Real-time Carrier Shipping Gift Cards Professional Reports// we'll setup when we have more resources at our disposal.
GITHUB_ARCHIVE
I inherited a system where instead of using a GPO to redirect Users' folders, Home folders were created and then manually the user's My Documents location was changed to be that Home folder. The Home folder shows up as a mapped drive under Windows explorer and was made available offline. This works fine for most part except that this is inefficient and I dont want to have to go to each computer a user moves to and manually change the My Documents location. So Home folders with user data already exists. - Do away with Home Directories without losing user data. - I want to implement a GPO so that users' My Documents folders get redirected to my File server and are available offline. - I would like these redirected folders to allow Administrator Full Control. - Move existing data into these redirected folders. - It's best to have a root share and have Folder Redirection GPO do the user folder creation. - Folder Redirection GPO will set the appropriate permissions the default which does NOT allow admins any permissions. If I mess with the inheritable permissions at the root share, Folder Redirection will most likely break. So how can I as an admin get access to these folders? - How will I move the data into the newly created folders if the GPO creates the folder and sets permissions that don't allow me to access these new folders? Looking forward to all your help.Edited Aug 6, 2013 at 8:35 UTC On the settings tab of the GPO, there's an item called "Grant the user exclusive rights to <folder name>". Untick that and you should be able to access the created folder if you have admin access. In the GPO there's a setting that says grant exclusive access, you want to uncheck that. then you should be able to access as an admin. otherwise even the admin can't get access and you need to take ownership then manually set permissions. As a part of deploying it, it will move the contents (that's another option in the GPO, move contents to the new location) However it's going to do this via the workstation over the network at login, and can be VERY slow if the user has a lot of data. Windows will appear hung on applying user settings while this is occurring. you may be better off not having the info for the my documents redirection moved, then manually move/copy it with robocopy if it's within the same server since that will be much faster. Leave it set up with the home directories for a while and either have users move the files into their documents folder or script the move after hours and remove home directories. Some things will still break regardless. Why do you need access to the user shares? Is this a requirement from higher up as there is really no reason for an admin to have access to them. If you have backups you have access to them, if a user leaves change their password and you have access to them. Went through some of this myself here lately. I can't give you a definitive answer for all your concerns but here is what I found: For admin access, in the GPO uncheck the box under the settings tab for "Grant user Exclusive Rights" and this will allow admins access. For pre-existing folders, adding permissions for admins worked just fine for me and did not mess up existing folder redirections. That said, test it out for yourself first. Maybe with a test folder and assign your test GPO to that one user first. If its a VM, then perhaps take a snapshot before doing it and then you can roll back. For the existing redirections, were they redirected to the mapped drive or to a UNC path? I found that in my situation, doing a redirect from a mapped drive to a UNC did not work because the mapping occurred AFTER the redirection was attempted so I always got messages about "folder does not exist". For me, this meant a lot of manual work to undo how it was done originally. As for those redirected to a UNC path, it was mostly seamless. Files redirected from point A to point B as part of the login and as my new redirection target already had admin rights set at the root and the policy was set to not grant the user exclusive rights, I had access to the files once the redirect was complete. With this, I did have some issues with the Offline Files cache still looking for the old redirection location and that had to be dealt with. Hopefully, you will find some of this helpful. As others have said, the "Exclusive Rights" checkbox is what you're looking for as far as Admin rights go. Checking the box tells Windows to grant the default rights to the user, and REMOVE all other rights assignments, including Administrators, SYSTEM, etc. Leaving it unchecked causes the user rights to be ADDED to whatever's existing, which should include Administrators inheriting full access. And, there is a checkbox to tell it to move the files when performing the redirection. As long as the original redirection was to a UNC path, there's no issue. If it was manually pointed to a drive letter, then that won't work if the drive is mapped in the login script, because this will happen before the mapping. If the drive letter is connected using the "Home Folder" settings on the Profile tab in AD Users and Computers, then it should still work fine, since that connection will be there when the GPO runs. As Mike said above, users will have to wait for the move to take place, so the success of that would depend on your infrastructure, the amount of data they have, whether you tell them about it ahead of time, and their patience. I'm sure you know who your space hogs are, so you could handle them differently, or just warn them that they'll need to go get a cup of coffee the next time they log in. i have more questions should i start a new discussion or just continue with this one? If it is for you and a problem you are having please start a new thread. Thanks for all your replies guys. The unchecking Exclusive Rights sounds like it will the trick. I will do a test and report back about what worked and what didn't. I honestly don't care too much about admin access but from time to time users ask for help with certain files and when someone is on vacation higher ups will ask me to look for files in someones drive. If I have everything setup with the default Folder Redirection settings then when I go to change permissions so I get access, I am concerned that Folder Redirection will break. For me, I have everyone now redirecting to a specific root folder. I setup this root folder to have the permissions listed here -> http:/ Then when ADUC creates the folder for the user in the target, the user is granted permissions at the level of their own folder but all the permissions set at the root folder level, with the exception of the permissions set at the root folder to be "this folder only", are inherited all the way down. So as long as when you give yourself access, you are not changing what is already there, you should not mess up anything. This gives you two options: grant yourself access from the root and it will flow down due to inheritance, or like you mentioned, give yourself access to the specific users folder. Either way, as I see it, this should not break your redirection. I just took over a network. 1. Login.bat exist for a "p" drive which is actually a mapped drive to their "my docs" in a home directory on the server called users. 2. No Group Policy is applied and every time they add a user/computer they manually "change the location" of my docs locally from the Win 7 client computer. 3. Need to migrate the Users folder to a new server and would like to get rid of the Login.bat and "p" drive without having to copy/move the users directory using robocopy. Can this be done by just turning on Folder Redirection on the new server? Can I just get rid of the login.bat script when folder redirection is applied? Will the folders/files just move automatically to the new users/folder redirection folder?
OPCFW_CODE
I’m afraid that at LiveChat we focus on the 1to1 communication that includes a visitor speaking with an agent. In other words, more of a support-oriented communication rather than the open chat room. However, we do have the Groups functionality available at LiveChat. Groups will allow you to create separate departments within your LiveChat license, to which you can assign separate agents. Then you can assign a specific group to a specific page in your Salesforce ISV system, either by specifying the group ID in LiveChat snippet or by using URL Rules that we have available within our product. When the customer will log into your website and will be redirected to a specific “room,” you can also load a specific group that is available at LiveChat. Unfortunately the customer him/herself won’t be able to choose an agent that he’d like to speak with among different agents assigned to a specific group (unless you will create separate group for each separate agent on your license, and then use a custom group chooser or an avatar-like button that will be linked with a direct chat link available at LiveChat). You will also not be able to see or speak with other customers on a website. If you would be interested in such a solution, you can read more about groups here: In terms of agents/customers statuses : This is possible in terms of agents → whenever an agent will log into the system the native chat widget that is assigned to a website (room) will change the status from offline to online, giving the customer option to start a conversation. We don’t have such option available for customers, but whenever a customer will visit the room with a page, LiveChat will start tracking him and display him as a new/returning customer in the Traffic section of our app. You can also pass over additional information about the customer (like name/email, info about the page/room he’s in) via session variables available at LiveChat: All of that (except for a custom group chooser) is available “out of the box” at LiveChat, so you won’t need to do much in terms of the development, but as your use-case is strictly room-based, I’m not sure if we are the best solution for you. Nevertheless, if you wouldn’t mind working around some things, you can try to develop such a solution with the use of LiveChat for sure In case that you would like to build his own chat widget entirely on your own, you should definitely get familiar with our Customer Chat API that is available here: Here you will learn how the backend of our widget communicates with our servers, so that you can create his own “logic” (based on the available methods of course, so you won’t be create dedicated rooms per say for example) and link it with your own front-end as well. You should also get familiar with the Authorization section of our docs (https://developers.livechat.com/docs/authorization/), especially Customer authorization flow that is used to authorize Customer Chat API calls: One important disclaimer though: we do not provide help when it comes to custom development, so the coding part is entirely at your side (both in terms of back- and front-end). Let me know if you have any other questions
OPCFW_CODE
#include "Channel.h" #include "Discord.h" #include "static.h" /** @param[in] data JSON data @param[in] token discord token */ DiscordCPP::Channel::Channel(const json& data, const std::string& token) : DiscordCPP::DiscordObject(token) { data["id"].get_to<std::string>(id); data["type"].get_to<int>(type); position = get_or_else<int>(data, "position", 0); // permission_overwrites name = get_or_else<std::string>(data, "name", ""); icon = get_or_else<std::string>(data, "icon", ""); } /** @param[in] id the channel's id @param[in] token discord token */ DiscordCPP::Channel::Channel(const std::string& id, const std::string& token) : DiscordCPP::DiscordObject(token) { std::string url = "/channels/" + id; *this = Channel(api_call(url), token); } /* @param[in] old the Channel to copy */ DiscordCPP::Channel::Channel(const Channel& old) : DiscordCPP::DiscordObject(old) { id = old.id; type = old.type; position = old.position; // permission_overwrites name = old.name; icon = old.icon; } DiscordCPP::Channel* DiscordCPP::Channel::from_json(Discord* client, const json& data, const std::string& token) { switch (data.at("type").get<int>()) { case ChannelType::GUILD_TEXT: case ChannelType::GUILD_NEWS: return (Channel*)new GuildChannel(data, token); case ChannelType::GUILD_VOICE: return (Channel*)new VoiceChannel(client, data, token); case ChannelType::DM: case ChannelType::GROUP_DM: return new DMChannel(data, token); default: return new Channel(data, token); } } void DiscordCPP::Channel::delete_channel() { std::string url = "/channels/" + id; api_call(url, "DEL"); } DiscordCPP::Channel* DiscordCPP::Channel::copy() { switch (type) { case ChannelType::GUILD_TEXT: case ChannelType::GUILD_NEWS: return (Channel*)new GuildChannel(*(GuildChannel*)this); case ChannelType::GUILD_VOICE: return (Channel*)new VoiceChannel(*(VoiceChannel*)this); case ChannelType::DM: case ChannelType::GROUP_DM: return new DMChannel(*(DMChannel*)this); default: return new Channel(*this); } }
STACK_EDU
Find customers who spent at least 10% more in 2019 than in 2018 My table has 4 columns: branch_name,customer_id, order_value,purchase_date. "london-branch", 422, "12","01-01-2019" "manchester", 133, "33","01-04-2019" "london", 422, "55","01-04-2019" "newyork", 1223, "11","01-04-2019" Want to find out how many customers spent more than 10% in 2019 than their total amount spent across 2018. I tried CREATE TABLE AS LastYr Select customer_id,order_value from order_table where purchase_date between '01-01-2018' AND '31-12-2018'; This will create table for 2018 Then CREATE TABLE AS NewYear Select customer_id,order_value from order_table where purchase_date between '01-01-2019' AND '31-12-2019'; Then am struggling to join the tables and find the difference. You should include your attempt at solving the problem. Which database are you using? You tagged MySQL and sqlite. I am using mysql What have you tries so far??? All you did up there is just create table, and do a simple select. I don't see the part that you're struggling. Store dates using a date data type. And see https://meta.stackoverflow.com/questions/333952/why-should-i-provide-a-minimal-reproducible-example-for-a-very-simple-sql-query You can use aggregation and conditional SUMs in the HAVING clause: SELECT customer_id FROM mytable WHERE YEAR(purchase_date) IN (2018, 2019) GROUP BY customer_id HAVING SUM(CASE WHEN YEAR(purchase_date) = 2019 THEN order_value ELSE 0 END) > 1.1 * SUM(CASE WHEN YEAR(purchase_date) = 2018 THEN order_value ELSE 0 END) Note: this assumes that you are storing dates using the relevant datatype, not as strings. @codeisfun1234: this means that they spent 10% more in 2019 than they did in 2018 (ie 1.1 times more).
STACK_EXCHANGE
Cs for all computer science for all is the president’s bold new initiative to empower all american students from kindergarten through high school to learn computer science and be equipped with the computational thinking skills they need to be creators in the digital economy, not just consumers, and to be active citizens in our technology. Book your tickets online for computer history museum, mountain view: see 662 reviews, articles, and 437 photos of computer history museum, ranked no1 on tripadvisor among 32 attractions in mountain view. History of computer science is really a history of humankind’s attempts to understand nous (the rational mind) -- intelligence – processes of acquiring. The history of computers these breakthroughs in mathematics and science led to the throughout human history, the closest thing to a computer was the. The long history of computer science and psychology comes into view understanding that legacy can help us stop the next cambridge analytica. Our history and achievements form the basis for our strong tradition of excellence in education and research in computer science 1946 ui faculty attempt to build a computer that can play checkers eniac (electronic numerical integrator and calculator) is built at the university of pennsylvania. Computer science began in the 1940s with collaboration between ibm and columbia university today it is a well-established field of study in universities worldwide. We have the mini and the micro computer in what semantic niche would the pico computer fall alan perlis, epigrams in programming the department of computer science was founded by people who had a vision. Current course catalog and sample schedule bs, computer science/history program requirements plan of study current students, please see your degree audit for up to date curriculum changes. History of cs department history of the department of computer sciences history of infosec at purdue first in the field: breaking ground in computer science at. History of the computer documentary crash course computer science #1 crashcourse 1,028,031 views 11:53 computer history of new documentary. A history of computing computer science may seem like a very new science but it has a fascinating history and has been around longer than you might think. The history of computer science at vassar college by professor nancy ide in 1931, grace murray hopper returned to vassar, her undergraduate alma. Our history 1980 - 2017 historical timeline in 1980, the school of computer science is established within the faculty of science at carleton university and housed in the herzberg building 1980 six cross-appointed individuals from carleton’s departments of systems engineering and computer. Amazoncom: history of computer science interesting finds updated daily amazon try prime all. Science history important events, famous birthdays and historical deaths from our searchable today in history archives. Computer science history timeline made with timetoast's free interactive timeline making software. The first computer was invented in harvard university, by the mark serries co this invention led the advance onto computer science and major computer history events before the invention of the keyboard, people used a thing called punch cards.
OPCFW_CODE
Upgrade Jackrabbit 1.3 to Jackrabbit 2.1.1 In my project I need to change Jackrabbit 1.3 to Jackrabbit 2.1.1. My work is to work for queries. Please suggest me if there are changes in index format and query format. Lucene 2.4 is used in Jackrabbit 2.1.1 while in Jackrabbit 1.3 Lucene 2.2 is used. Lets divide your upgrade into the following parts: Upgrade 1.3 to 1.5 (see Class IndexMigration): IndexMigration implements a utility that migrates a Jackrabbit 1.4.x index to version 1.5. Until version 1.4.x, indexes used the character '' to separate the name of a property from the value. As of Lucene 2.3 this does not work anymore. See LUCENE-1221. Jackrabbit >= 1.5 uses the character '[' as a separator. Whenever an index is opened from disk, a quick check is run to find out whether a migration is required. See also JCR-1363 for more details. Upgrade 1.5 to 2.0.0 (see RELEASE-NOTES.txt): Backwards compatiblity Jackrabbit 2.0 is designed to be compatible with existing Jackrabbit 1.x clients and repositories. The main exceptions to this goal are: Removal of deprecated classes and features. Jackrabbit 2.0 is not backwards compatible with client code that used classes or features that had been deprecated during the 1.x release cycle. Most notably the temporary org.apache.jackrabbit.api.jsr283 interfaces have been removed in favor of the official JCR 2.0 API in javax.jcr. Repositories that have used the new JSR 283 security features included as a development preview in Jackrabbit 1.5 and 1.6 may face problems when upgrading to Jackrabbit 2.0. See especially JCR-1944 and JCR-2313 for more details. The JCR-RMI layer no longer implements the Jackrabbit API extensions. Code that uses JCR-RMI with distributed transactions or for administration operations like creating workspaces or registering node types needs to be updated accordingly. The JCR-RMI layer in Jackrabbit 2.0 only supports JCR 2.0 repositories. To access a JCR 1.0 repository implementation like Jackrabbit 1.x over RMI, you need to use the 1.x versions of JCR-RMI. Upgrade 2.0.0 to 2.1.1 (2.x branch is positioned to have a stable API): This is Apache Jackrabbit 2.0.0, a fully compliant and production-ready implementation of the Content Repository for Java Technology API, version 2.0 See also compatibility tables for Jackrabbit and Lucene APIs. There are multiple ways to migrate from Jackrabbit 1.x to 2.x. Probably the best documentation can be found in the Jackrabbit Wiki, one the Backup and Migration page. Is jackrabbit oak dead? Why does the jcr-oak documentation say to use OSGI when using RDB document storage? Is anyone other than legacy software still using OSGI? Why use OSGI? Is OSGI dead? all the documentation seems to be from 2010-2015. Any thoughts on this appreciated. Jackrabbit Oak is absolutely not dead, why do you think so? Why use OSGI: that is a separate question; I use Oak without OSGI. The documentation (and code) is updated regularly, e.g. https://github.com/apache/jackrabbit-oak/commits/trunk/oak-doc/src/site/markdown/nodestore/segment/overview.md It would be great if the jackrabbit-oak RDB documentation didn't push using OSGI: https://jackrabbit.apache.org/oak/docs/nodestore/document/rdb-document-store.html @WallaceHowery I'm afraid I don't have any stake in this area... but because this is an open source project, I think it should be possible to support what you would like, if you submit a PR!
STACK_EXCHANGE
Your license should be compatible with the GNU GPL, current version or later. (This includes LGPL-only, since all LGPL versions can be converted to any version of the GPL. Of course, we strongly recommend against using LGPL-only. And GPL*-only is not acceptable.) Your license should be compatible with the GNU FDL, current version or later. We accept any free license for images and other data that are a secondary part of a package that is mainly software. In any case, you cannot rely on incompatible dependencies. These licenses are the ones that the GNU project uses for its own packages, and recommends for everybody else to use. They are also the most widespread, and being incompatible with them is a practical problem, because you would then be unable to combine code under those license with yours, and vice versa, even if it is free software. Well-known licenses compatible with the GNU GPL are listed at http://www.gnu.org/licenses/license-list.html#GPLCompatibleLicenses. Informal materials, FAQs, and so on can be under a license compatible with either the GNU GPL or the GFDL (or both). Manuals themselves needs a license compatible with the GFDL. In some case, it may be useful to dual license your application - that is, release it under two different licenses - to meet this criteria. See DualLicensing for details. https://savannah.gnu.org/forum/forum.php?forum_id=4303 describes more rationale with respect to requiring the GFDL for the manuals' license (and needs to be merged here). Your project dependencies also need to be compatible with the license you chose. This is not a Savannah-specific requirement, because doing otherwise would make your project legally inconsistent, and as such not freely usable. For example, a project released under the GNU GPL may not be combined with code licensed under the MPL, because those licenses are incompatible. However, you can combine GNU GPL'd code with code released under the Expat license. firstname.lastname@example.org, that is, the FSF Licensing Team, may offer suggestions to find the best way to handle legal issues. For non-software, non-documentation works, when we say "free" this refers to http://www.gnu.org/philosophy/free-sw.html. Licenses listed as "free" at http://www.gnu.org/licenses/license-list.html (whatever the kind of work) are good examples. http://lists.gnu.org/mailman/private/savannah-hackers-private/2005-July/000142.html (private archive) is the discussion thread with RMS about that point, and is summed-up in the previous sentence. Fonts: there's an interesting article at http://www.fsf.org/blogs/licensing/20050425novalis
OPCFW_CODE
CGO keeps memory increasing in a goroutine and got killed by OOM. What version of Go are you using (go version)? $ go version go version go1.18.3 linux/amd64 │ I tested 1.17, 1.18, 1.19beta1 and 1.18.3(currently latest) already. Does this issue reproduce with the latest release? Yes What operating system and processor architecture are you using (go env)? This issue happens a PopOS 22.04 with go-1.18.3 and a Ubuntu 22.04 docker container with go-1.18. $ lsb_release -a No LSB modules are available. I use gRPC in go server and call deep learning inference using CGO. I already tested the DL library and C++ TCP server and there isn't any memory leak. This issue is happened when memory allocation with CGO in a goroutine. I found a similar issue in stackoverflow. I don't know this issue is a bug or not. This phenomenon is really unclear to me. I have been trying to solve this issue. But for now, I feel only solution is I would change it's implementation to C++ or Rust version. package main /* #include <stdlib.h> void * inferenceWorker1Malloc(long size) { return malloc(size); } void inferenceWorker1Free(void * ptr) { free(ptr); } */ import "C" import ( "fmt" "os" "sync" "time" "unsafe" ) const prepareInputCount = 1000 const limitChannels = 10 func inferenceWorker1() { ptrs := make([]unsafe.Pointer, prepareInputCount) //for j := 0; j < 1000; j++ { for { // Receive a data from the channel time.Sleep(1 * time.Millisecond) // Prepare input data and DL decoder for i := 0; i < prepareInputCount; i++ { //ptrs[i] = C.malloc(16 * (1 << (i % 20))) // OOM killed ptrs[i] = C.inferenceWorker1Malloc(16 * (1 << (i % 20))) // OOM killed } // Inference //time.Sleep(time.Duration(rand.Intn(10)+1) * time.Millisecond) time.Sleep(1 * time.Millisecond) // Remove allocations for i := 0; i < prepareInputCount; i++ { // C.free(ptrs[i]) C.inferenceWorker1Free(ptrs[i]) } } } func main() { var wg sync.WaitGroup // Process channel requests for i := 0; i < limitChannels; i++ { wg.Add(1) // FIXME: This goroutine keeps increasing memory until reaching OOM go func() { defer wg.Done() inferenceWorker1() }() } // NOTE: No memory leak without a goroutine //inferenceWorker1() // NOTE: You can see the memory leak in this step wg.Wait() b := make([]byte, 1) fmt.Println("Press any key ..") os.Stdin.Read(b) } $ time go run main.go signal: killed real 8m19.677s user 11m58.797s sys 5m22.965s Memory usage while true; do echo $(LC_ALL=C date) \| $(ps -eo pid,%cpu,rss,%mem,args | grep go-buil[d]) \| $(free -h | grep Mem) sleep 1 done |& tee memusage-$(date +%Y%m%d_%H%M%S).log memusage-20220618_211256.log What did you expect to see? When cgo call is done, go runtime have to reduce memory usage. What did you see instead? Memory keeps increasing and Go process got killed by OOM. Docker container restarts repeatedly under heavy requests. Similar issues https://stackoverflow.com/questions/63933234/memory-leak-when-calling-c-malloc-c-free-in-goroutines https://github.com/golang/go/issues/40636 That program allocates a lot of memory. Each iteration of the loop in inferenceWorker allocates 838860000 bytes. You are running 10 of those loops in parallel, so<PHONE_NUMBER> bytes, which is 0x1f3ffe0c0, or 7G. The allocations are different sizes, so there is going to fragmentation in the C heap, driving up memory use. So the memory usage reports you see, for 30G, doesn't seem entirely out of line with what the program is doing. What is your ulimit -v value? If I change the program to always allocate 16 * (1 << 19) bytes to avoid fragmentation and use ulimit -v 20971520 the the program seems to run forever using about 20G of virtual memory space. Sorry for the late reply. That program allocates a lot of memory. Each iteration of the loop in inferenceWorker allocates 838860000 bytes. You are running 10 of those loops in parallel, so<PHONE_NUMBER> bytes, which is 0x1f3ffe0c0, or 7G. The allocations are different sizes, so there is going to fragmentation in the C heap, driving up memory use. So the memory usage reports you see, for 30G, doesn't seem entirely out of line with what the program is doing. It was intended for seeing memory usage because deploy server has 256G memory. I had tested C implementation itself, there was no issue. The phenomenon seemed to be in Go only. What is your ulimit -v value? it is unlimited. If I change the program to always allocate 16 * (1 << 19) bytes to avoid fragmentation and use ulimit -v 20971520 the the program seems to run forever using about 20G of virtual memory space. In both case(example and my decode server), It got pthread_create failed: Resource temporarily unavailable error after reaching memory limit. For the record, I solved this memory issue applying two things. Remove C memory allocation in CGO as much as possible. Actually, This doesn't help any memory issue, but reduce high CPU usage. C Memory allocation in CGO causes high CPU utilization and Mem usage. Change glibc(GNU C Allocator) to jemalloc. I changed glibc(GNU C Allocator) to tcmalloc which solved a similar problem.
GITHUB_ARCHIVE
Create alerts from your logs, available now in Preview Being alerted to an issue with your application before your customers experience undue interruption is a goal of every development and operations team. While methods for identifying problems exist in many forms, including uptime checks and application tracing, alerts on logs is a prominent method for issue detection. Previously, Cloud Logging only supported alerts on error logs and log-based metrics, but that was not robust enough for most application teams. Today, we’re happy to announce the preview of log-based alerts, a new feature that opens alerts to all log types, adds new notification channels, and helps you make alerts more actionable within minutes. The alert updates include: the ability to set alerts on any log type and content, additional notification channels such as SMS, email groups, webhooks (and more!) and a metadata field for alerts so playbooks and documentation can be included. Alert on any logs data While error logs and log-based metrics are sufficient for many indicators of application and system health, there are some events in security, such as suspicious IP address activity, or runtime system issues such as host errors, where you want to get alerted immediately. We’re happy to announce that you can now set alerts on single log entries via the UI or API. Creating an alert in the UI is easy: Go to Logs Explorer and run your query. Under Actions > Create Log Alert. Enter the following information: a) alert name & documentation, b) any edits to your log query if necessary (and preview the results to confirm it is correct), c) select the minimum interval between alerts for this policy, and d) select the notification channel(s). Click “Save” and you’re done! New notification channels Cloud Logging is pre-integrated with Google Cloud services and can be configured to send alerts when something goes wrong. While email notifications from Cloud Logging were effective during business hours, operations teams and their development cohorts expressed a need for a greater number of communication channels for their global extended workforce partners and after-hours triage units. That’s why we’re excited to announce, as part of this preview, that logging alerts of any kind can be sent to an email group, SMS, mobile push notifications, webhooks, Pub/Sub, and Slack. Enhanced metadata for alerts Alerts are just the first step to actually solving an issue within your service or application. Development and operations teams usually have a playbook or documentation for incidents or occurrences where they want to create an alert. Including links to these materials can save valuable time, especially as workforces involve more geographic distribution and collaboration between a greater number of teams. With this preview announcement, you can now include documentation or links to playbooks that allow your team to investigate and solve alerts. Configure your logging alerts today If you have a critical log field that your team is watching, consider setting up an alert on it today. See the documentation that walks you through each step of configuring an alert. If you’d like to be alerted after a certain count of your log entries, consider a Log-based metric. This allows you to set a threshold for the number of log events that occur within a specific time period before you are notified. If you have suggestions or feedback, please join our Cloud Operations group on the Google Cloud Community site.
OPCFW_CODE
Is there a program that allows non-US/Canadian citizens to avoid going to secondary inspection when crossing the land border? I've started a job in Canada that requires me to drive to the US occasionally. At the border this week I've learned that apparently all non-US/Canadian citizens have to go to secondary screening to receive their I-94s. This is time consuming and annoying if you do this often. Is there a program that would allow one to avoid this process and get stamped into the US without leaving the car? Not technically an answer, but I believe holding on to your I-94 should mean you can avoid further processing until your VWP runs out. That should keep your annoying crossings down to 4 a year. The best way is probably to apply for a long term US visa. That's allowable even if you are VWP eligible. Although it might be just as good to apply for Canadian PR. @DJClayworth There's no point for a VWP national in getting a visa unless planning to stay for more than 3 months in one visit. My understanding is that a 2 year visa would allow you to enter the US as many times as you like in those without having to go through the secondary processing required of a VWP applicant. Land entrants needing an I-94 can only avoid secondary if they already have an I-94 from a previous entry valid for the duration of their stay. With a B visa the I-94's you get are generally valid for 6 months, versus 90 days for an I-94W, so a B visa can halve the frequency of secondary visits (even for day visitors); that's the advantage of a B visa over the VWP. The validity of the visa itself doesn't change the I-94 validity, however, that remains 6 months even for 10 year B visas. @Dennis I've read that you can sometimes be admitted on a B visa for 1 year. When does that happen? Unfortunately all land border users who are not somehow exempt from the requirement to have an I-94 (i.e. American citizens and LPRs, visa-exempt Canadians and Bermudians and maybe BCC holders), and who do not already have an I-94 that is valid for the time to be spent in the US, will need to go inside to secondary and pay the $6 to get one. Once you have the I-94, however, you'll normally be admitted by primary inspection on subsequent trips, and perhaps be allowed to enter at Class B Ports of Entry, for as long as that I-94 is still valid. Once it expires you'll need to go inside to secondary to get a new one. A frequent VWP traveller can hence expect to visit secondary not (much) more than once every 90 days. While an occasional visit to secondary is unavoidable a VWP traveller can reduce the frequency at which you have to do this to twice a year by applying for a B visa, as this extends the validity of the I-94s you receive to 6 months. It also permits extensions to be applied for while in the US if you are unavoidably delayed beyond the expiry of the I-94 you entered with. I'm not positive how this all interacts with air travel but you may find that if you fly to the US and then subsequently enter by land you may be admitted by primary for the remainder of the (paperless) I-94 you received for the flight (I'd be interested to know if someone has experience contrary to this). If so, a mixture of land border and air travel might keep you out of secondary at the land border altogether. Foreign but resident citizens (in Canada and the US) can qualify to get NEXUS, a trusted traveler card that helps save time when crossing the Canada/US border, or into either country from abroad. You may want to do some research to see how your I-94 is affected by being a member, but it might save you some time. That's for permanent residents of Canada, not temporary ones I believe that even NEXUS card holders who need an I-94 but don't have (a currently valid) one need to go to secondary to get one. They might get to the primary inspection in a shorter queue, though. The only way AFAIK, which only applies to VWP nationals (and AFAIK you're not one) is to get an ESTA. In this case, at major road crossings and the Vancouver train station, I've been told on the phone by the local CBP officers you'll not need to obtain the form. Nope, ESTA nationals were traveling with me and they also had to get the form. But the train station has a US border post, so it is seamless regardless of your passport. @JonathanReez Where did you cross? I've personally called the Blaine, Vancouver station and Rouses Point checkpoints and the supervisors said those having a valid ESTA don't need the form At Blaine. We were asked to leave the car to go get our I-94s. @Coke, You might have been asking the wrong question. I understand that those without ESTAs have to fill out the form while those with ESTAs just get a printed receipt (the OP might know for sure), so if you just asked about the form the answer you got might be strictly correct. Both have to go to secondary and pay the $6, however, and it is the visit to secondary, form or not, that is annoying. @Dennis I asked whether ESTA holders have to pay for and obtain a form, and got a clear no. Again, can only speak for those crossings, but this was 4 months ago and I can't imagine this not having expanded since. What you mention was the case at the Niagara Falls in mid-2016, so I'm familiar with that variant too I can't remember what happened in 2016, my most recent experience with an ESTA holder was last October at Osoyoos/Oroville. We went inside for her paper (I wasn't paying attention to see if the $6 changed hands). It apparently works that way at Blaine too. I'm not sure there are "variants". @Dennis There are three variants: either the ESTA makes no difference and you have to get the form manually, or ESTA holders get a pre-printed form, or they just get stamped like at an airport. The first variant applied when entering by train from Montréal in summer 2016 (I called and asked), the second when entering by train from Toronto, and the third when entering by train from Vancouver, at Blaine and at Rouses Point by car/bus Ah, yes, there are variants when traveling by a common carrier (whose ticket price might or might not include an immigration fee that covers the I-94 if needed, paper or not). Private car travel, however, seems quite uniform; if you haven't got a valid I-94 already you go inside and pay for one. The above suggests it works that way at Blaine, too. @Dennis Didn't even know it ever made a difference whether you travel by bus or car at a given checkpoint @Dennis Hm, just called Blaine again and now the officer indeed said there'll be a pre-printed I-94 form for ESTA holders. Didn't get to ask whether you have to pay for it though, as someone was collapsing in the background whereby he had to hang up - he told me to call back later. Also called Vancouver airport (same CBP as at the train station) - they said ESTA holders don't need the form at all at the train station @JonathanReez you say they were "ESTA nationals": that's not the same thing as having a valid ESTA; it just means that they qualify to apply for it. Did they actually have them? @phoog yes both had them issued in advance. All of us paid $6 to enter the country.
STACK_EXCHANGE
package plugin import ( "fmt" "strings" "github.com/gogo/protobuf/protoc-gen-gogo/descriptor" "github.com/gogo/protobuf/protoc-gen-gogo/generator" ) type messageInfo struct { Message *generator.Descriptor Path string } // generateStructs will generate custom structs with tags func generateStructs(p *plugin, file *generator.FileDescriptor, comments map[string]*descriptor.SourceCodeInfo_Location, pathIndex map[string]string) { // now handle all messages and create respective custom structs for _, message := range file.Messages() { path := pathIndex[message.GetName()] comment := comments[path].GetLeadingComments() // get the name of the struct structName := message.GetName() // print a default comment if no comment was specified if comment == "" { p.printComment(structName, "...") } else { // print the comment that was specified, replacing the struct name p.printComment(strings.Replace(comment, *message.Name, structName, -1)) } // print the struct name p.Printf("type %sAlias struct{", structName) p.In() // print the message fields generateMessageFields(p, file, pathIndex, message, comments) p.Out() p.P("}\n") } } // generateMessageFields will generate the fields for the given message func generateMessageFields(p *plugin, file *generator.FileDescriptor, pathIndex map[string]string, message *generator.Descriptor, comments map[string]*descriptor.SourceCodeInfo_Location) { // iterate through all struct fields fields := message.GetField() for fIndex, field := range fields { // create the path to the field comment // 4 is for messages, 2 for fields path := fmt.Sprintf("%s,2,%d", pathIndex[message.GetName()], fIndex) // get the comments for this field commentline := comments[path] // print leading comments (if any) p.printComment(commentline.GetLeadingComments()) // get the trailing comment trailing := commentline.GetTrailingComments() // convert the field name to camel case fieldName := generator.CamelCase(field.GetName()) // get the unique name of the type fieldType := getFieldType(field) fieldType = adaptPackageName(file.GetPackage(), fieldType) if field.IsMessage() { p.Generator.RecordTypeUse(field.GetTypeName()) } // print fields without any trailing comments if trailing == "" { p.P(fieldName, " ", fieldType) continue } // parse the trailing comment dta := parseTags(trailing) // handle non message type fields if field.IsMessage() == false { // print the field with comments and tags p.P(fieldName, " ", fieldType, dta.Output) continue } // TODO: handle embedded messages // // handle nested message types // if dta.IsEmbedded() { // // if field.IsRepeated() { // p.Fail("embedded structs cannot be repeated: ", fieldName) // } // // p.P(fieldType, dta.Output) // continue // } // print the field with comments and tags p.P(fieldName, " ", fieldType, dta.Output) } }
STACK_EDU
The following is an overview of the various parameters in a conference template. There are quite a number of settings, addressing a vast array of possible use cases. This page serves as a reference, although most users never need to use most of these settings. - Date/Time & Duration Date/Time & Duration - Duration: The template has just one field for conference duration in minutes. This setting does not constrain the actual duration of the conference. It’s only used when sending meeting invitations. It determines how long the conference will appear when added to the participants electronic calendar. - Settings - Part 1 Settings – Part 1 (2) Template Name: A label used to identify this template. Usually something related to its purpose. For example, “Marketing Committee.” (3) Account: The account that will be charged for meetings held using this template. This is especially handy if your organization uses multiple sub-accounts to track usage by project, department or region. (4) Conference Code: The numeric code that you would use if you’re creating a conference that needs to be broadly accessible by people who are not explicitly registered with ZipDX. (5) Participants: This is where you can explicitly add people to the conference. They are typically added using their email address. People invited in this manner will get an email invitation from the system. Since the system knows about them in advance of the conference they can be assigned to any of the various roles. Anyone who is to be a host, interpreter or auditor must be explicitly invited to the conference so that their role can be defined. (6) Role: Sets the role for each person invited to the conference. Roles impact what a person is allowed to do. For example, when a lecture mode is used hosts are able to engage in the conversation while participants are automatically muted. (7) Status: Indicates if the person is known to the system. If they are already in the system their status will be: Registered, Preregistered or Incomplete. Those not entered in the system will be shown as “Not Listed” with a pre-register link offered. This allows you to add their contact details to the system. (8) Participants can invite others: Determines who can invite other people to participate. The options offered are: - Yes, all participants can invite others <default> - People designated as hosts can invite others - No, only organizer can add/remove people - Settings - Part 2 Settings – Part 2 (9) Meeting Type: Determines which aspects of ZipDX should be exposed to participants. The options offered are: - Audio & WebShare - WebShare Only - Audio Only (10) Project Tracker: Is a text field where the organizer is free to put information about the conference. It’s typically used to track usage of ZipDX relative to specific projects. (11) Default Organizer: If the organizer has more than one email address in their ZipDX profile this is used to select which one is used as the organizer for conferences created using this template. For example, I have two email addresses in my ZipDX profile; one for work and one for personal use. If I select my personal email address as the default organizer then invitations to the conferences created by the template will appear to come from my personal email account. (12) Spontaneous Conferencing: This setting determines if this template can be used to create entirely ad hoc conferences. The options offered are: - Not Allowed <default> - Allowed by me and any other listed participants - Allowed by any caller Spontaneous conferencing is only possible when the conference template has a conference code. When spontaneous conferencing is enabled the conference does not need to be created in advance. The conference is automatically launched when the appropriate person dials in and enters the conference code. With the spontaneous conferencing setting set to “Allowed by me and any other listed participants” the conference can only be launched by the owner of the template, or someone specifically listed in the template. With the spontaneous conferencing setting set to “Allowed by any caller” the conference can be launched by anyone who knows the conference code. (13) Auto-generate code: This setting allows the system to automatically generate a conference code each time that the template is used. The auto-generated code can be set to 8-12 digits in length. When enabled this ensures that a every conference has a unique code. The use of the conference code allows the organizer to distribute simple connection details; just a phone number and a conference code. The fact that the code is constantly changing ensures that no-one legitimately present at a past conference can clandestinely join a future conference. (14) Template Share: Allows a conference organizer to share a template with others in their organization. The options offered are: - Do not share this template with other account members <default> - Share this template with other account members - Audio Features (15) Play Entry and Exit Chimes? – Determines if a chime sounds when participants enter or exit. (16) Announce Participant names? – Determines if each participant is announced as they join/leave the conference. The options offered are: - No, do not play names. <default> - No, but record them for using in host functions. - Yes, play names when arriving or leaving. At any setting other than the default the system will prompt each participant to record their name as they connect. (17) Waiting Room active? – Determines how participants are handled if they join the conference prior to the scheduled start time. The options offered are: - No, conference starts without host. <default> - Yes, until host arrives. - Yes, host must move participants into meeting. - Yes, only for those using a conference code or duplicate identity. The last option is appropriate for conferences where call security is paramount. When set this way participants explicitly invited to the conference join the meeting immediately. If a second instance of a participant joins the call, as might happen if someone shared their PIN, that person is not immediately joined to the call. They are instead held in the waiting room. This gives the host an opportunity to move to them to a breakout room to verify their identity. Similarly, someone joining the conference using a conference code is held in the waiting room. This keeps them out of the conversation until a host can verify their identity. (18) Conference time without host? – Defines how long a conference is allowed to run without a host. This only applies to spontaneous conferences. Conferences created in advance, either via the web portal, Google or Outlook calendar, are allowed to occur entirely without a host. (19) Start with all Participants Muted? – Sets which lecture mode, if any, will be used for the conference. The options offered are: - No, participants are not muted. <Default> - Yes, participants are “soft muted” and can unmute themselves using *6. - Yes, participants are “hard muted”; only a host can unmute. (20) Participant Dashboard access – Determines if participants can access the web-based conference dashboard. The options offered are: - Normal: see status of all in Dashboard, control self. <default> - Limited: see Hosts; control self (no roll call or room changes.) - None: No Dashboard access. (21) Host Advanced Start – Sets the system to allows Hosts and/or Interpreters to join in advance of the scheduled start of the call. This allows them time to be prepared when the other participants begin to join the conference. - None <Default> - 2 – 60 minutes advanced start permitted. (22) Record Conference? – Determines if the conference will be recorded automatically when the conference begins. When enabled participants joining the conference hear an announcement that says, “Warning, this conference is being recorded.” Note also that the conference recording includes any WebShare or SlideShow visuals that may be used. (23) Transcribe Conference? – Determines if the conference will be transcribed. The options offered are: - No <Default> - Yes, with Scribble. (extra charge ¢) - Yes, with Scribe. (extra charge $$) “Scribble” is automated, real-time transcription. “Scribe” is two-pass transcription where the results from Scribble are passed to professional transcriptionists for editing and correction. Delivery of a corrected transcript typically takes 48-72 hours. Note that when Scribble is enabled the Scribe processing can still be ordered once the conference is completed. (24) Transcription access (if activated) – Determines who has access to the transcript. The options offered are: - Only the Organizer. - Conference Participants. <Default> - Anyone with the link. (25) “Call Me” override – Determines how the system will behave with respect to dialing out to participants. The options offered are: - Normal Call Me Settings apply. <Default> - Disabled for Organizer and participants listed in template. - Disabled for participants NOT listed in template. - Disabled for entire meeting. - Organizer completely opted out of meeting. (26) “Multilingual / Rooms” – In the case of conferences in a single language, determines how many breakout rooms will be supported. Alternatively, this enables multilingual conferencing, allowing the language channels to be configured. The options offered are: - Monolingual / 10 Rooms <Default> - Multilingual Over-The-Phone / Up to 8 Languages - Multilingual Extended Venue / Up to 8 Languages - Monolingual / 50 Rooms - Multilingual Over-The-Phone / Up to 48 Languages - Multilingual Extended Venue / Up to 48 Languages (27) Multilingual Language Configuration – When multilingual conferencing is enabled (as shown above) some additional settings are exposed. - The languages are defined using ISO standard 2-digit language codes. - TalkThru level is selected (off, minimum, soft, medium, loud) - Feedback level is selected (off, minimum, soft, medium, loud) For more detail about multilingual settings see our article about Creating A Multilingual Conference. - How to create a conference using the ZipDX web portal? - Can I schedule a conference using my Outlook calendar? - Can I schedule a conference using my Google Docs Calendar? - How do I setup recurring conference calls? - How do I add my own information to the conference invitation email? - What’s the simplest way for someone to join my secure conference call? - What is a conference template? - Can I play an announcement to participants as they join my conference?
OPCFW_CODE
Main / Casual / Octave java package Octave java package Name: Octave java package File size: 780mb Java offers a rich, object oriented, and platform independent environment for many applications. The core Java classes can be easily extended by many freely . The javaObject methods takes a 'class-type' argument, and optional 'inputs to the constructor' arguments. Your syntax is wrong. 24 Sep Now the java package should not be listed anymore. If you have used the java package during the current session of Octave, you have to exit. Problem with java package and xlsread. I am trying to use xlsread, I have installed io and java packages and are loading. I HAVE A JDK. 15 Jan I installed the java package as per > the instructions on the wiki link Philip posted. the `javaObject' function is not yet implemented in > Octave "). Did you load the Java package after installation? pkg load java If you did. Executable versions of GNU Octave for GNU/Linux systems are provided by the individual distributions. Distributions known to package Octave include Debian. 11 Jan Category: Octave Forge Package, Severity: 1 - Wish Java package installation gives misleading information and does not seem to function. Octave-Forge is a collection of packages providing extra functionality for GNU Octave. 10 Sep Install the Java runtime environment (optional): If you want a complete Once your package manager is installed you can install Octave in one. A collection of packages providing extra functionality for GNU Octave. rebuild - auto java Last pkg rebuild command is required in order for the java pkg entry to . Here is an instruction for the installation of Windows version of GNU Octave as of The Java runtime environment is required for the GUI version of Octave. matlab/golosita-traiteur.com, a precompiled Java library for MATLAB On Linux you have to install an extra package for the Java support in Octave sudo apt-get. The WFDB Toolbox for MATLAB and Octave is a set of Java and m-code wrapper functions, which make system calls to WFDB Software Package and other. From thist release, the package includes binaries of octave-forge packages. JRE (Java Runtime Environments) from the Oracle site before installing octave. share/octave/packages/java/doc/golosita-traiteur.com octave-java./var/macports/registry/portfiles/octave-java_0. This is a list of Java archive files and/or directories containing class files. Finally, Octave looks for a next occurrence of file 'golosita-traiteur.com' in the m-files. Hi, I am relatively new to octave and linux and have been trying to set up octave to io and java from golosita-traiteur.com 22 Dec There are Windows, X, and Java versions in the package. MATLAB and OCTAVE, A simple MATLAB and OCTAVE interface, LIBSVM authors. I would like to create, compile and run a Java program. I would like to use vs. Python ints? How can I install Python packages from PyPi using pip? Unfortunately, neither Jupyter nor CoCalc worksheets in Octave mode are "rock solid".
OPCFW_CODE
|Oracle® XML DB Developer's Guide 10g Release 1 (10.1) Part Number B10790-01 This appendix describes the Oracle XML DB feature summary. This appendix contains these topics: This Oracle XML DB feature list includes XML features available in the database since Oracle9i release 1 (9.0.1). The Oracle XML DB native datatype XMLType helps store and manipulate XML. Multiple storage options (Character Large Object (CLOB) or structured XML) are available with XMLType, and administrators can choose a storage that meets their requirements. CLOB storage is an un-decomposed storage that is like an image of the original XML. XMLType, you can perform SQL operations such as: Queries, OLAP function invocations, and so on, on XML data, as well as XML operations XPath searches, XSL transformations, and so on, on SQL data You can build regular SQL indexes or Oracle Text indexes on XMLType for high performance for a very broad spectrum of applications. See Chapter 4, " XMLType Operations ". DOM fidelity means that your programs can manipulate exactly the same XML data that you got, and the process of storage does not affect the order of elements, the presence of namespaces and so on. Document Object Model (DOM) fidelity does not, however, imply maintenance of whitespace; if you want to preserve the exact layout of XML, including whitespace, you can use CLOB storage. See Chapter 3, " Using Oracle XML DB" and Chapter 5, " XML Schema Storage and Query: The Basics". For applications that need to store XML while maintaining complete fidelity to the original, including whitespace characters, the CLOB storage option is available. You can constrain XML documents to W3C XML Schemas. This provides a standard data model for all your data (structured and unstructured). You can use the database to enforce this data model. See Chapter 5, " XML Schema Storage and Query: The Basics" and Appendix B, " XML Schema Primer". XML schema storage with DOM fidelity. Use structured storage (object-relational) columns, VARRAYs, nested tables, and LOBs to store any element or element-subtree in your XML schema and still maintain DOM fidelity (DOM stored == DOM retrieved). See Chapter 5, " XML Schema Storage and Query: The Basics". Note: If you choose CLOB storage option, available with XMLType since Oracle9i release 1 (9.0.1), you can preserve whitespace. XML schema validation. While storing XML documents in Oracle XML DB you can optionally ensure that their structure complies (is "valid" against) with specific XML Schema. See Chapter 8, " Transforming and Validating XMLType Data". In Oracle XML DB you can use XPath to specify individual elements and attributes of your document during updates, without rewriting the entire document. This is more efficient, especially for large XML documents. See Chapter 5, " XML Schema Storage and Query: The Basics". Use XPath to specify parts of your document to create indexes for XPath searches. Enables fast access to XML documents. See Chapter 4, " XMLType Operations ". SQL/XML operators comply with the emerging ANSI standard. For example, XMLElement() to create XML elements on the fly, to make XML queries and on-the-fly XML generation easy. These render SQL and XML metaphors interoperable.See Chapter 15, " Generating XML Data from the Database". Use XSLT to transform XML documents through a SQL operator. Database-resident, high-performance XSL transformations. See Chapter 8, " Transforming and Validating XMLType Data" and Appendix D, " XSLT Primer ". Oracle XML DB provides a virtual DOM; it only loads rows of data as they are requested, throwing away previously referenced sections of the document if memory usage grows too large. Use this for high scalability when many concurrent users are dealing with large XML documents. The virtual DOM is available through Java interfaces running in a Java execution environment at the client or with the server. See Chapter 10, " PL/SQL API for XMLType ". Create XML views to create permanent aggregations of various XML document fragments or relational tables. You can also create views over heterogeneous data sources using Oracle Gateways. See Chapter 16, " XMLType Views". Use DOM-based and other Application Program Interfaces for accessing and manipulating XML data. You can get static and dynamic access to XML. See Chapter 10, " PL/SQL API for XMLType ", Chapter 12, " Java API for XMLType ", and Chapter 13, " Using C API for XML With Oracle XML DB". Structural information (such as element tags, datatypes, and storage location) is kept in a special schema cache, to minimize access time and storage costs. See Chapter 5, " XML Schema Storage and Query: The Basics". Use SQL operators such as SYS_XMLAGG provide native for high-performance generation of XML from SQL queries. SQL/XML operators such as XMLElement() create XML tables and elements on the fly and make XML generation more flexible. See Chapter 15, " Generating XML Data from the Database". Oracle XML DB repository is an XML repository built in the database for foldering. The repository structure enables you to view XML content stored in Oracle XML DB as a hierarchy of directory-like folders. See Chapter 18, " Accessing Oracle XML DB Repository Data". The repository supports access control lists (ACLs) for any XMLType object, and lets you define your own privileges in addition to providing certain system-defined ones. See Chapter 23, " Oracle XML DB Resource Security". Use the repository to view XML content as navigable directories through a number of popular clients and desktop tools. Items managed by the repository are called resources. Hierarchical indexing is enabled on the repository. Oracle XML DB provides a special hierarchical index to speed folder search. Additionally, you can automatically map hierarchical data in relational tables into folders (where the hierarchy is defined by existing relational information, such as with You can search the XML repository using SQL. Operators like UNDER_PATH and DEPTH allow applications to search folders, XML file metadata (such as owner and creation date), and XML file content. See Chapter 20, " SQL Access Using RESOURCE_VIEW and PATH_VIEW ". You can access any foldered XMLType row using WebDAV and FTP. Users manipulating XML data in the database can use HTTP. See Chapter 24, " FTP, HTTP, and WebDAV Access to Repository Data". Oracle XML DB provides versioning and version-management capabilities over resources managed by the XML repository. See Chapter 19, " Managing Oracle XML DB Resource Versions". Oracle XML DB supports major XML, SQL, Java, and Internet standards: W3C XML Schema 1.0 Recommendation W3C XPath 1.0 Recommendation W3C XSL 1.0 Recommendation W3C DOM Recommendation Levels 1.0 and 2.0 Core Protocol support: HTTP, FTP, IETF WebDAV, as well as Oracle Net Java Servlet version 2.2, (except that the Servlet WAR file, web.xml is not supported in its entirety, and only one ServletContext and one web-app are currently supported, and stateful servlets are not supported) Web Services and Simple Object Access Protocol (SOAP). You can access XML stored in the server from SOAP requests ISO-ANSI Working Draft for XML-Related Specifications (SQL/XML) [ISO/IEC 9075 Part 14 and ANSI]. Emerging ANSI SQL/XML functions to query XML from SQL. The task force defining these specifications falls under the auspices of the International Committee for Information Technology Standards (INCITS). The SQL/XML specification will be fully aligned with SQL:2003 Java Database Connectivity (JDBC) The following lists Oracle XML DB limitations: Oracle XML DB does not support replication of In the current release, you cannot extend the resource schema. However, you can set and access custom properties belonging to other namespaces, other than XDBResource.xsd, using DOM operations on the <Resource document. Oracle does not currently support references within a scalar, XMLType, or LOB data column. existsNode() methods only work with the Thick JDBC driver. Not all oracle.xdb.XMLType functions are supported by the Thin JDBC driver. However, if you do not use oracle.xdb.XMLType classes and OCI driver, you could lose performance benefits associated with the intelligent handling of XML. Oracle XML DB does not support NCHAR as a SQLType when registering an XML schema. In other words in the XML schema .xsd file you cannot specify that an element should be of type NCHAR. Also, if you provide your own type you should not use these datatypes. If a schema document uses more than 2048 substitutable elements for a given head element, error ORA-31160 occurs. Rewrite the schema to use less than 2048 substitutable elements for each head element.
OPCFW_CODE
From 2018 to 2021 I helped create and build Moodpath, used by more than 4 million people, and integrate it into MindDoc, Germany's leading online therapy provider. Moodpath was the project that concinved me to stop being a freelancer and join a company. I was working with the founders right from the start, and I loved being able to help shape the product and company . Later on Moodpath, now being used by more than 4 million people, was bought and integrated into MindDoc, Germany's leading online therapy provider. The journal. An overview of everything that's going on with your mood and what you're thinking about. The app's main goal is to help people track their mental health and wellbeing. It was designed to be a daily companion that guides you through your emotional journey. It was built with a strong focus on privacy and data protection and helped people to understand and visualize their emotions and how to improve their mental health. The app asks you a couple of questions three times a day. The questions are dynamically generated based on previous answers and existing medical conditions. All questions were written by psychologists. The answers are used to create a bi-weekly report that shows people if they display symptoms of depression, anxiety, or other mental health conditions. The report also shows how the user's mood has changed over time and what topics they've been thinking about. The app's content section offers a variety of articles, videos, and podcasts that are written by psychologists and experts in the field of mental health. The contents are meant to help people understand their mental health and wellbeing better and to provide them with tools to improve their condition over time. MindDoc's core product is an online therapy platform that connects people with therapists . Finding a therapist is still a very difficult task in Germany. MindDoc's goal is to make it easier for people to find the right therapist for them, no matter where they live. Therapists use this overview of all their patients to get a birds-eye-view of how everyone is doing. After MindDoc aquired Moodpath , I was tasked with integrating the app into the MindDoc platform and reimagining the platform as a whole. One of the primary goals was to create a custom session booking system to make it easier for therapists to schedule sessions with their patients and for MindDoc to save money on booking fees by third party providers. (It worked!) Communication features enable therapists to stay in touch with their patient. The custom booking system solution was build on top of the MindDoc platform to make it easier for therapists to schedule sessions with their patients. Working on Moodpath and MindDoc was one of my greatest professional pleasures. It was a great opportunity to work with a small team of very talented and motivated people. I'm very proud of what we've accomplished and I'm happy to see that the app is still being used by so many people.
OPCFW_CODE
X-Forwarded-Port NumberFormatException: For input string: "443,443" java.lang.NumberFormatException: For input string: "443,443" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:580) at java.lang.Integer.parseInt(Integer.java:615) at springfox.documentation.swagger2.web.HostNameProvider.componentsFrom(HostNameProvider.java:72) Is "443,443" a value that is supposed to be in "X-Forwarded-Port" in the first place? Looks like the header is getting set incorrectly. No sure what is doing that Hi, we are having the same exact issue reaching swagger-ui from our Api Gateway, except that is "80,80" for us. This started to happen after we upgraded Spring Cloud and Spring Boot on our Api Gateway that uses Zuul. We see that the headers forwarded by the gateway are "x-forwarded-proto":"https,http" "x-forwarded-port":"80,80" "x-forwarded-for":"XXX.XXX.XXX.XXX, XXX.XXX.XXX.XXX, XXX.XXX.XXX.XXX" If it can help, our current setup where Swagger does not work involves an Api Gateway using Spring Boot 1.4.4 and Spring Cloud CAMDEN.SR5, and the application(s) where Swagger is configured (and that is giving error) is behind the Api Gateway, and it uses Springfox Swagger 2.5.0 (the problem happens also with 2.6.1). Both the Api Gateway and the swagger-enabled application runs on Kubernetes, and the swagger ui is reached by calling Ingress, that redirects to the Api Gateway, that redirects to the application in the end. The problem started when we upgraded our Api Gateway Spring Boot version from 1.3.8.RELEASE, and spring-cloud-netflix from 1.1.7.RELEASE, everything was ok on Kubernetes before. We think the new version of Spring Cloud adds (correctly?) the forward port, while before it did not add it. Please note that on another setup not involving Kubernetes the issue is not present. If we have an ELB in front of the Api Gateway and then again the swagger-enabled application (both Api Gateway and the application running on normal EC2 instances - no kubernetes no docker just a good old Tomcat), then everything still works, with the old and with the new Spring Boot/Spring Cloud versions. Thanks for reporting. A PR to fix this would be awesome! A little bit extra info - I just struck this problem when doing something slightly weird: basically I'm running Spring Boot zuul locally. I have a route setup that forwards to a another Zuul instance, behind which is the swagger ui and json I'm trying to access. I thought at first that it was because I'm doing something extra-ordinarily silly, but it seems I am not alone :) Does any body have any references for "valid-ish" x-forwarded-port formats? Stack trace for completeness: [Request processing failed; nested exception is java.lang.NumberFormatException: For input string: "8088,80"] with root cause java.lang.NumberFormatException: For input string: "8088,80" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:580) at java.lang.Integer.parseInt(Integer.java:615) at springfox.documentation.swagger2.web.HostNameProvider.componentsFrom(HostNameProvider.java:72) at springfox.documentation.swagger2.web.Swagger2Controller.getDocumentation(Swagger2Controller.java:84) @dilipkrish am I right in saying this: It seems that x-forwarded-port is (or can be) a list of ports, which occurs if multiple reverse proxies are involve with a request. So each hop appends it's port to the list HostNameProvider actually only needs the first port in such a list, since it is building up a URL that should be accessible from outside that first proxy. If that seems reasonable, I can maybe try cooking up a PR. That is reasonable, thank you for looking into it... fixing it as we speak however. There are a bunch of other issues I could use help with. Shot, I will take a look at the issues, maybe there is actually something I can contribute to ;)
GITHUB_ARCHIVE
Remove Typescript References As the Typescript examples are essentially just reiterations of the same Javascript code, there is no benefit to having providing examples in both languages. The functional code was exactly the same in both languages. On the contrary it meant we widened the surface of maintenance and thus we are removing due to the limited benefit provided by the example chaincode and applications. Signed-off-by: Brett Logan<EMAIL_ADDRESS> I found the following remaining references you might want to also delete: chaincode/abstore/javascript/.gitignore:# TypeScript v1 declaration files chaincode/fabcar/javascript/.gitignore:# TypeScript v1 declaration files chaincode/marbles02/javascript/.gitignore:# TypeScript v1 declaration files commercial-paper/organization/digibank/contract/.npmignore:# TypeScript v1 declaration files commercial-paper/organization/magnetocorp/contract/.npmignore:# TypeScript v1 declaration files fabcar/javascript/.gitignore:# TypeScript v1 declaration files Thanks Arnaud, when I did the search I didnt hit it with case sensitivity, let me grab those as well quick I found the following remaining references you might want to also delete: chaincode/abstore/javascript/.gitignore:# TypeScript v1 declaration files chaincode/fabcar/javascript/.gitignore:# TypeScript v1 declaration files chaincode/marbles02/javascript/.gitignore:# TypeScript v1 declaration files commercial-paper/organization/digibank/contract/.npmignore:# TypeScript v1 declaration files commercial-paper/organization/magnetocorp/contract/.npmignore:# TypeScript v1 declaration files fabcar/javascript/.gitignore:# TypeScript v1 declaration files Removed these references as well I did discuss this removal with Simon and Dave as well before doing this PR. Just wanted to clarify this wasn't out of the blue. Sorry I was away, and not able to comment on this; however, with the removal of the TypeScript examples, we are not showing how to use the typescript annotations. These provide much valuable metadata that can be used by client applications and tooling. I would rather remove the JS and keep the TS. TS offers more to the contract developer over and above just being 'typed'. Without knowledge of the annotations, developers are missing out. Ok, sorry we missed that. We can revert this change. +1 Sent from my iPhone On Jul 23, 2020, at 10:00 AM, Brett Logan<EMAIL_ADDRESS>wrote:  I talked to Matt about this. This got missed because we didn't actually use any of the annotations. But we will revert, add the annotations and remove JS — You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub, or unsubscribe. I'm not sure about removing javascript... There are far more people that use javascript than that use typescript. So my middle ground here would be, we leave the typescript and javascript chaincode, but don't do a typescript application. The annotation model would exist for the chaincode and thats the only thing we demo when it comes to typescript That sounds good to me. We should also limit this to one of the chaincode samples. We don't need to get a typescript version for every sample chaincode. That sounds good to me. We should also limit this to one of the chaincode samples. We don't need to get a typescript version for every sample chaincode. Agreed, my biggest concern with providing so many samples and in so many languages, is that over time the community will contribute updates and fixes to the applications and chaincodes. It's unlikely a user will make the change in the node, java, go and typescript chaincode, and we will need to be on the look out for changes that are made to one and ask the user to add the same changes to the other to keep them all consistent. Having more languages just increases the surface area for maintenance making the job of the maintainers more difficult to keep everything up to date. Indeed.
GITHUB_ARCHIVE
<?php class RSentryException extends Exception { const BADREQUEST = 400; const UNAUTHORIZED = 401; const REQUESTFAILED = 402; const FORBIDDEN = 403; const NOTFOUND = 404; const SERVERERROR = 500; public function __construct($message, $http_code, $http_body='') { //handle body to create message $msg = ''; if($http_body != '') { $body = json_encode($http_body); } elseif ($message != '') { $msg = $message; } elseif ($body == null) { switch($http_code) { case self::BADREQUEST: $msg = 'Invalid or Bad Request'; break; case self::UNAUTHORIZED: $msg = 'You are not logged in properly'; break; case self::REQUESTFAILED: $msg = 'Request was valid but failed'; break; case self::FORBIDDEN: $msg = 'You do not have access to this resource'; break; case self::NOTFOUND: $msg = 'We were unable to find this resource'; break; case self::SERVERERROR: $msg = 'There was an internal server error'; break; } } parent::__construct($msg, $http_code); } } ?>
STACK_EDU
Ive been reading some books on C++ and ive gotten to the poiters part and just cant get my head around it, a better way for me to learn IMO is to build a program that i find interesting, when i say interesting i mean something that i can use when im finished. So rather than reading lots more boring (yet i realise its still essential) books on c++ id rather start making programs and when i get to a part of the program that requires pointers etc then i can read up on that section in one of the books, this for me is a much better way to learn So ive a big interest in poker and play online alot, id like to build some programs like a calculator that recognises my hand and then tells me what are my odds of the hand improving etc Also most poker players use programs called Holdem Manager and Poker Tracker which use a PostgreSQL database, it saves tons of info on each hand played like VPIP (what % all hands does player play) PFR (How often a player raises) etc etc etc Obviously this will require some SQL knowledge, i know how to find the database schema and see everything inside the database but ideally id like to build a program that uses the information in the database in a program and displays information that Holdem Manager or Poker Tracker cant in real time. So i might want to see how often someone specific re-raises me preflop for example which them other programs currently dont. So ill obviously start with a few of the examples in the FAQ and try them but i want to know, what program do you use to make a GUI, say i want to make a calculator like the windows calculator what would you use to make that application. I dont mind paying for a program if its good but if theres a good free alternative that would be better. Also is a C++ application going to be able to extract information from the PostgreSQL database and use it in the program or should i be using another language? Finally, when i extract the information id like to be able to display it in a moveable block, for example there will be 6 players at the table, id like to be able to have 6 blocks, 1 for each person and i can overlay it beside them on the table and it will lock the position to that table if i move it kind of thing. The ultimate final thing, and i realise im probably getting way ahead of myself, how would i go about extracting information from the poker table, i mean reading information either from the text window that displays the cards and betting info or in the case on some sites where they dont have all the info in text is it possible to extract info from the table by some technology that can read numbers and characters in graphical format. So i know some of the stuff is pretty techinical for a beginner but i just want to get and idea whats involved and by the time i finish i should have learned alot about C++ and SQL
OPCFW_CODE
Build and Run a Sample Project Using Visual Studio Code Extensions for Visual Studio Code Users - From Visual Studio Code, click on the Extensions logo in the left navigation. - Locate the extension titledSample Browser for Intel oneAPI Toolkits, or visit https://marketplace.visualstudio.com/publishers/intel-corporation to browse available extensions. - Next, locate the extension titledEnvironment Configurator for Intel oneAPI Toolkits. Create a Project Using Visual Studio Code - Click on the oneAPI button on the left navigation to view samples.To watch a video presentation of how to install extensions and use them to set up your environment, explore sample code, and connect to the Intel® Developer Cloud using Visual Studio Code, see oneAPI Visual Studio Code Extensions. - A list of available samples will open in the left navigation. - To view the readme for the sample, click the next to the sample. If you choose to build and run the sample, the readme will also be downloaded with the sample. - Find the sample you want to build and run. Click the to the right of the sample name. - Create a new folder for the sample. The sample will load in a new window: Set the oneAPI Environment - PressCtrl+Shift+P( orView -> Command Palette…) to open the Command Palette. - TypeIntel oneAPI: Initialize environment variables. Click onIntel oneAPI: Initialize environment variables. - From the left navigation, click README.md to view instructions for the sample. Prepare Build Tasks from Make / CMake Files - PressCtrl+Shift+PorView -> Command Palette…to open the Command Pallette. - TypeIntel oneAPIand selectIntel oneAPI: Generate tasks. - Select the build tasks (target) from your Make/CMake oneAPI project that you want to use. - Run the task/target by selectingTerminal -> Run task.... - Select the task to run. Build the Project - PressCtrl+Shift+BorTerminal -> Run Build Task...to set the default build task. - Select the task from the command prompt list to build your project. - PressCtrl+Shift+BorTerminal -> Run Build Task...again to build your project. Prepare Launch Configuration for Debugging - PressCtrl+Shift+PorView -> Command Palette...to open the Command Palette. - TypeIntel oneAPIand selectIntel oneAPI: Generate launch configurations. - Select the executable (target) you want to debug.Optional: select any task you want to run before and/or after launching the debugger (for example, build the project before debug, clean the project after debug). - The configuration is now available to debug and run using the gdb-oneapi debugger. You can find it in.vscode/launch.json. To debug and run, click on the Run icon or pressCtrl+Shift+D. Debug, Analyze, Develop with More Extensions - remote development - connection to Intel Developer Cloud - analysis configuration
OPCFW_CODE
# Copyright (c) 2018, Toby Slight. All rights reserved. # ISC License (ISCL) - see LICENSE file for details. import os from textwrap import dedent, fill from columns import prtcols def prtheader(): width = os.get_terminal_size()[0] head = ''' Enter option names (wildcards accepted), or numbers (ranges accepted), to toggle selections. ''' keys = '(t)oggle all, (r)esets, (a)ccept choices, (q)uit' msg = dedent(head).strip() print("\n" + fill(msg, width=width) + "\n\n" + keys + "\n") def menu(options, chosen): ''' Takes a list of options and selections as an argument and presents a checkbox menu with previously selected items still selected. ''' optstrs = [] os.system('cls') if os.name == 'nt' else os.system('clear') prtheader() for option in options: index = options.index(option) # print("{0:>1} {1:>2}) {2:}".format(chosen[index], index+1, option)) optstrs.append(chosen[index] + str(index + 1) + ") " + option) prtcols(optstrs, 10)
STACK_EDU
May 20, 2013 Google I/O 2013 Trip Report Since a handful of people have asked me about Google I/O, and it isn't a conference that a lot of academics go to, I thought I'd share my experience with this group. First off - why did I go? The answer is because I didn't go to CHI so I wasn't worn out, and because Google asked. They offered me a $300 ticket (compared to the normal $900 which is also very hard to get). The primary reason I wanted to go was to experience the silicon valley big company event. I was interested of course to learn about Google's view of modern web technology, but mostly I just wanted to feel what this kind of event was like. (And full disclosure, I have research funding from Google.) So, first let me comment on that feeling. It was pretty much what I expected - and totally surreal. It was VERY well organized with hundreds of Googlers and staff helping everywhere. Academics were treated specially (with a short special line, a special floor at an evening party, an invite to Google's SF offices for a tour, etc.). And it was seriously geeky. I was a distinct minority - caucasian and old. There was very little gray hair and very few women. And a lot of Asians of all kinds. The keynote was a lot like a rock concert. Super crowded - intense spirit. Thousands of people holding up their phones to take pictures of the screens showing fluid blobby animations to rock music before the start. And every one yelling a count down to the start (illustrated on the displays.) I almost laughed at that part - I guess you can tell my age. I enjoyed it, but just couldn't get *that* worked up about it :) The 3 hour keynote (and all the talks) was very well orchestrated and presented. I really enjoyed hearing Larry Page give his talk. Unlike the other highly scripted talks, his was clearly quite heartfelt. He was doing his best to step out of his demanding role and relate to his techy audience, and I think he succeeded. (Of course, cynics could argue that he was just very good at scripting the feeling of non-scriptedness...) He largely talked about the importance of being innovative and doing good - and stop the cat fighting that tech companies do. And then without any warning, he took 30 minutes of questions and there was a pretty interesting discussion - probably my favorite part of the whole conference. There was, of course, excellent free food (breakfast and lunch). Free coffee. Free snack area. And a free Chromebook Pixel (retail $1,450) - meaning they gave away something like $10m worth of them. This event was not a money maker! At least not directly. They of course spent 3 days exciting 6,000 developers about using Google tools to help pursue Google's vision. You may have heard that Google is making their interfaces simpler - removing more stuff, predicting more stuff, and using more of your personal data. Some people will no doubt find that very scary and objectionable. But clearly lots of people (including, I admit, me) find it an acceptable trade-off. And their demos were astoundingly good. For any of you that remember Microsoft's painfully embarrassing voice recognition demos a few years back - Google ACED them. There was 100% recognition rate of live demos of the speaker talking into an actual phone, speaking fairly complex natural language at regular rates in a very noisy room of 6,000 people. And if you remember "Put That There" from 1979, then it was all the more impressive when they had good pronoun connectivity back to previous utterances. Expect to see a lot more voice recognition built into every Google product. (And no more clicking - soon you'll be able to just say "ok google" and start talking when most google products are open.) Glass was a big deal (but hardly mentioned in the keynote). Hundreds of people were wearing them. There was a place to try them out (which I did). And a number of talks on Glass (videos available for many of them.) Personally, I find Glass intriguing - but in its current form I think it is unlikely to be used outside of niche areas and geeks (beyond a startup phase). I predict they will be like bluetooth headsets. They will probably be everywhere next year until people realize that it is too annoying to wear something all the time (and charge them, etc) for the infrequent times when the value add over pulling out your phone is worth it (again, except for the geeks and niche uses). OTOH, if the cost was much less (i.e., it was built invisibly into regular glasses, had inductive charging, etc.) then that value calculus could be very different. And then there were 2.5 days of talks, which I got somewhat bored of by the second day. Two of the talks I most wanted to see were overfull and I couldn't get in. It was very crowded and difficult to move around between sessions. And many of the others were just too deep about topics I wasn't that interested in (i.e., APIs for every Google product). It would be greatly rich for people building web startups using all of those tools - but academic research tends to be much more selective. And when there is something I actually want to use, it is easy enough to learn it with public resources. So, bottom line, I am glad I went for the overall experience. But I'm not sure that I would recommend it to most academics (especially since most of the talks are available online.) But it was fun. And Google probably succeeded in their biggest goal which is that I left feeling more positive about Google than when I arrived.
OPCFW_CODE
add libqt5-serialport-dev key to provide build headers Please add the following dependency to the rosdep database. Package name: libqt5-serialport-dev Package Upstream Source: https://www.qt.io/ https://github.com/qt/qtserialport Purpose of using this: This package provides header and Qt5SerialPortConfig.cmake files to build an application using the Qt5::SerialPort module. Distro packaging links: Links to Distribution Packages Debian: https://packages.debian.org/ REQUIRED https://packages.debian.org/sid/libdevel/libqt5serialport5-dev Ubuntu: https://packages.ubuntu.com/ REQUIRED https://packages.ubuntu.com/hirsute/i386/libqt5serialport5-dev/filelist Fedora: https://src.fedoraproject.org/browse/projects/ IF AVAILABLE Arch: https://www.archlinux.org/packages/ IF AVAILABLE https://archlinux.org/packages/extra/x86_64/qt5-serialport/files/ Gentoo: https://packages.gentoo.org/ IF AVAILABLE macOS: https://formulae.brew.sh/ IF AVAILABLE Alpine: https://pkgs.alpinelinux.org/packages IF AVAILABLE NixOS/nixpkgs: https://search.nixos.org/packages OPTIONAL Checks [ ] All packages have a declared license in the package.xml [ ] This repository has a LICENSE file [ ] This package is expected to build on the submitted rosdistro Could you add? https://src.fedoraproject.org/rpms/qt5-qtserialport https://packages.gentoo.org/packages/dev-qt/qtserialport https://pkgs.alpinelinux.org/package/edge/community/x86/qt5-qtserialport Hi @ivanpauno thanks for the feedback. Note that there already is a rosdep key for libqt5-serialport and what I want to add is a key for packages that contain the development header and CMake files. https://src.fedoraproject.org/rpms/qt5-qtserialport It looks like the fedora package does include this RPM with development files https://koji.fedoraproject.org/koji/rpminfo?rpmID=27106679 so I will specify qt5-qtserialport. https://packages.gentoo.org/packages/dev-qt/qtserialport I am having trouble finding the list of files included in that package. Do you have any tips on where that information can be found? https://pkgs.alpinelinux.org/package/edge/community/x86/qt5-qtserialport It looks like the alpinelinux package that I want to add is https://pkgs.alpinelinux.org/package/edge/community/armhf/qt5-qtserialport-dev which does include the development headers and CMake Qt5SerialPortConfig.cmake files. I will specify that package. Hi @ivanpauno thanks for the feedback. Note that there already is a rosdep key for libqt5-serialport and what I want to add is a key for packages that contain the development header and CMake files. Thanks for clarifying, I didn't notice that. https://packages.gentoo.org/packages/dev-qt/qtserialport I am having trouble finding the list of files included in that package. Do you have any tips on where that information can be found? I couldn't find that, but I think they do include header files there. e.g.: https://github.com/ros/rosdistro/blob/50786bb7164fdd424a78ced35efa87f4fe33b0e0/rosdep/base.yaml#L4630 @ivandariojr I found a way to list the package contents on this site https://www.portagefilelist.de/site/query/listPackageFiles/?category=dev-qt&package=qtserialport&version=5.9.4&do#result I confirmed that the gentoo package has the development files so I will add it to the rosdep entry.
GITHUB_ARCHIVE
Where do I start!? - Printable Version +-- Forum: MsgHelp Archive (/forumdisplay.php?fid=58) +--- Forum: Messenger Plus! for Live Messenger (/forumdisplay.php?fid=4) +---- Forum: Scripting (/forumdisplay.php?fid=39) +----- Thread: Where do I start!? (/showthread.php?tid=95030) Where do I start!? by Rhythmik on 07-16-2010 at 12:30 AM I want to build a wlm bot, but don't know where to start. I've be research online for a about a month now and the amount of info is a little overwhelming. During my research I found MPL and thought this may be the solution I need. I downloaded the script documentation, but it was broken. Then I found mpscripts.net, which provides the basic info I need. One of my biggest problems is I don't know what language to use. I'm currently learning C# but I noticed that MPL scripts are written in Jscript, which is not a problem. What has me worried though is while reading through the Scripting forums, I'm seeing PHP, MySQL, AJAX, JSON being thrown around, and I'm like "dang, I'm a have to learn like 5 languages just to get my bot functional." So I thought I'd post and see if any of you can help point me in the right direction for developing my bot, whether it be w/ MPL or otherwise. Based on the bot that I'm trying to build (description below), what languages would I need to learn and which one should I learn first. I'm eager to learn, I just need some direction. Because I don't want to give away too many details about my bot away, I'll describe it to the best of my abilities. I will have a website (blog) that my contacts will go to and create a user account for and their basic info will be stored in my database (MySQL?). The bot (script) I want to build will act as an interface to allow said contacts to update the database w/ info about themselves (i.e. changing their status to "Im available"), search the database for info on other users (search for other users who have the status "I'm available"), and the bot will provide them with that info. In addition, if User A searches my database and sees User B is avail, User A can send my bot a command that will send User B a request to participate in a event. User B will send my bot a command to either accept or deny, and if it accepted, my bot (or maybe another script running on my site that is linked to the bot) will create said event and will wait for both User A and User B to report the results of the event. Once both users have reported the results, the bot (or other script) will compare both results and update my database with that info. That's pretty much the jest of it. I'm just a little discouraged (at least regarding MPL) because I've read some threads where it is said MPL scripts don't work well w/ MySQL. Is something like what I described possible? Can anyone help me? Feel free to ask any questions if I wasn't clear on anything. Thanks in advance. RE: Where do I start!? by NanaFreak on 07-16-2010 at 07:49 AM you will most likely need to learn jscript, php and mysql... jscript for the actual MP!L script... php for the website and database interaction... then the mysql for the database really its not that much to learn a new language... you just need to get used to a new syntax and also the functions that it uses RE: Where do I start!? by CookieRevised on 07-17-2010 at 01:11 AM Can't explain it better atm, sorry. See Wikipedia for detailed explanation. Without AJAX, stuff like Tweets on websites and such wouldn't be possible. That is: it would be possible, but it would require constant reloading of the entire website to see new Tweets... which would of course be über annoying... But note that AJAX, JSON, and similar acronyms are nothing more than simple name inventions to give certain (very old) methods, which were already used looooong before those names were invented, a 'cool' name. PS: in short, yes everything you said is possible. But no, you don't need to learn 5 languages... only the stuff NanaFreak said. Or... instead of PHP and MySQL, you could learn ASP/VBScript and use an Access Database, works just as well... It all depends on what you already know and what your host provides (and how much traffic you'd expect to get - if this is quite a lot (tens of thousands calls) then you better use a MySQL database instead of an Access database). RE: Where do I start!? by Rhythmik on 07-17-2010 at 01:28 AM @Nanafreak, thanks. I'm a get started w/ Jscript asap. @CookieRevised, much thanks for the clarification. I'm glad to know what I'm trying to do is possible. I can breathe a little easier now. I have a little programming experience, but still consider myself an infant in the world of programming. Time to get to work. RE: Where do I start!? by CookieRevised on 07-17-2010 at 01:38 AM Although everything you want is perfectly possible, it is a massive programming job though... quote:Other than bad programming, I see no reason why MPL scripts wouldn't work well with MySQL databases... Originally posted by Rhythmik because I've read some threads where it is said MPL scripts don't work well w/ MySQL. PS: what threads would that be? RE: Where do I start!? by Rhythmik on 07-17-2010 at 03:35 AM Yeah, I would expect nothing less. But that's not a problem, I'm actually looking forward to undertaking this task. and good look on clarifying the MySQL misconception.
OPCFW_CODE
Install Qt hangs on 10% I was trying to install Qt in 64 bit Centos7 for several times. Installation hangs on 10% each time. How to solve this problem? In console, the get the following error/warning : [803090] Warning: QXcbConnection: XCB error: 3 (BadWindow), sequence: 22177, resource id: 31499675, major code: 40 (TranslateCoords), minor code: 0 [803103] Warning: QFile::remove: Empty or null file name [me@localhost ~]$ ./qt-unified-linux-x64-3.0.4-online.run [4317] Warning: No QtAccount credentials found. Please login via the maintenance tool of the SDK. download the offline installer :) Please login via the maintenance tool of the SDK. Where I can get offline installer? They provide only online ones. Try installing it using the following command : You can retrieve it here. sudo ./qt-linux-opensource-5.1.1-x86_64-offline.run Although before you do this, make sure you have installed the development tools, glibc-devel.i686. If you don't have it, you can install it using these commands. sudo yum groupinstall "Development Tools" sudo yum install glibc-devel.i686 Ofcourse, make sure that you have gcc as well. Where I can get qt-linux-opensource-5.1.1-x86_64-offline.run file? here. Hope it helps you out ! Install is fine, but got error while compile /usr/include/gnu/stubs.h:10: error: gnu/stubs-64.h: No such file or directory include <gnu/stubs-64.h> ^ The glibc-devel package contains the stubs-64.h file. Install it using the last command given. but i have installed glibc-devel. I suppose I need somehow to tell Qt Designer about that Package glibc-devel-2.17-222.el7.i686 already installed and latest version Nothing to do Then check in /usr/include if the stubs-64.h file is indeed present or not ? I found /usr/include/gnu/stubs-32.h and /usr/include/gnu/stubs.h while my system is 64 bit. But no stubs* in /usr/include I did sudo yum install glibc-devel and now got error cannot find -lGL Let us continue this discussion in chat. Looks like it doesn't like the fact that you're not logged in. Give the login a try. login not helped. Got the same result - 10%
STACK_EXCHANGE
Paying attention to the output and getting Xcode to properly install reminded me how much a pain and how bloated Xcode is again. After unneeded obfuscation, I got the error below but was able to recover and move on. Xcode finally installed but Mac could not let me be. Time to update the Mac OS. I used the time to figure out how to upgrade PHP and determine how htaccess/http redirect works on my “Dream” host. Dreamhost makes managing your site both easy and obfuscated. Want to setup SSL? Go to Manage Domains and click the SSL link and follow the on-page instructions. Want to update PHP? go to Manage Domains. Oh and then head to edit and select a new version PHP. For both, they will graciously update in some time at their convenience between now, 5 minutes, and a day. Go get some coffee. 3 years of paid and unused service, but lets try this again Started looking at serverless tonight to play around with something. Want a simple way to log and show production checkout test for systems I work on. Got to looking at https://aws.amazon.com/free, then looking at getting npm updated and running on my MBP. That got me installing homebrew and xcode (reminding me again how bloated and update driven xcode is). Just logging in, I had to remember how to use WordPress. Then came the whys: Why is this site slow, why doesn’t it have https enabled, why am I paying so much each month? Why is PHP outdated and how do I get to a console to update it? Oh yeah – why did google tell me my site had issues again? Oh, it looks like crap too. Why did I setup up OpenShift last week on my Workstation (12 Core/48 GiB RAM) to play with containers when the web is moving on to serverless? So here we are. A new post, homebrew installed, but NPM is not yet updated. I went ahead and created some WordPress categories for the site. WordPress provides flexibility in the realm of creating categories and tags. At the end of the day there are about three reasons to create categories: Content Organization – For yourself and visitors Simpler Links/Permalink Structure – Easier navigation via an easier (to-a-human) link structure Helping google help you – SEO To get started – I recommend a creating a simple category structure with no parents. Add tags as you are will/are ready. It seems that the consensus is that you should always have a category when it comes to a post, but tags are optional in the realm of using WordPress Categories for search engine optimization. To add categories, or remove the uncategorized category: At the WordPress Dashboard, go to Posts > Categories On the Categories Page, hit add category. Enter in the requested information – try not to create names that are too long If you want to edit a category – specifically the uncategoried category, refer to the next section Leveraging the Yoast SEO Tools: Once you have created a new category, you will want to immediately update it and add in the SEO details. If you are using Yoast SEO, you can bring up a page like the one below by doing the following: On the Posts > Categories Page, hover over the new category and select the Edit option The Category Edit page will load, and there will be a Yoast SEO area to fill in Enter in the requested information When you create a post it is simple to add a category, but I recommend only using one category when you first get started. The reason is that there can be complications when using multiple categories (such as which category will get used for the permalink) and while there are solutions (plugins/etc) they will be a distraction to getting your site off the ground. Leverage tags for now if you want that kind of granularity at this point. Creating better links: You can create better links using your WordPress Categories by updating your WordPress permalink structure: On the WordPress Dashboard, go to Settings > Permalinks Choose Custom, and enter: /%category%/%postname%/ Hit – Save Changes Note: Your posts will now be referenced using a new link. You may want to make sure your sitemap updated properly. Possibly even work on notifying Google.
OPCFW_CODE
Did you install the 32-bit Flash Player for Windows? What files are in folder Windows\SysWOW64\Macromed\Flash? Open Firefox. Go to this site: http://www.adobe.com/software/flash/about/ Right click the spinning F, click Settings, uncheck Enable hardware acceleration. Thanks, that worked really well for IE9. Firefox still hangs up occasionally but not like it used to. An improvement after disabling hardware acceleration means that you should update your video adapter driver. Then you can try enabling hardware acceleration in Firefox. Thanks for that tip, too. Firefox browser works now as well. I had the most up-to-date driver for the original driver on the computer. When I tried to update from the device manager it said I had the most up-to-date version. However, the link you provided allowed me to download a newer version. You get a five stars from me. I've installed 11,1,102,63 for both IE9 and Firefox. I've check IE9 to make sure that active x filtering was disabled for adobe flash player and that the plug-in appears and is enabled as an add on. I've also installed the verison for the 64 bit operating system with a 32 bit IE9. I've also checked to see if my driver is the most up-to-date version which it is; the problem I have is that I can't open the settings for when I click on the spinning F for Firefox. I unchecked driver accelleration the last time; but now I can't find it; when I right click on the spinning f and then click on settings, nothing happens. I can open global settings however. I'm having problems playing the videos on youtube; they keep being interrupted. The latest version of flash player works okay on Firefox but not with IE9 even though Adobe shows that the latest version is installed correctly on both browsers. The problem now is with videos when played by IE9. It tells me that I need a browser that supports HTML 5. There is an online test [html5test.com]; IE9 on my computer scores only 141 out of a total of 475 while Firefox scores 335 out of 475 points. I use Firefox but sometimes I click on links that lead me to videos that are embedded in the text; when I click on those videos, they play only with IE. Is anyone else getting this error message? I just downloaded 11.7.700.169 for both IE9 and Firefox and I am still having the same problem. The videos start up and then stop; sometimes they restart after quite awhile; but it's nothing but stops followed by a long delay before it restarts but only for a few seconds. Should I used an adobe flashplayer removal tool to make sure that there is not a problem here. I have removed the program using windows uninstall. Could you purovide me with a link if you think that will help. With IE9 I deactivated Active X filtering as well as tracking protection and InPrivate Browsing before I installed flashplayer. The flashplayer add-on is enabled. With Firefox I turned off enable hardware acceleration; should that be on or off. I also disabled my antivirus program. My driver is updated. What seems to be the problem. It's never been this bad before. I have the same problem with both IE9 and Firefox.
OPCFW_CODE
Fix missing pycparser install This PR should fix the following error. Tested on OS X 10.10.5 [~] gitfs Traceback (most recent call last): File "/usr/local/bin/gitfs", line 6, in <module> from pkg_resources import load_entry_point File "/usr/local/Cellar/gitfs/0.4.3/libexec/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2991, in <module> @_call_aside File "/usr/local/Cellar/gitfs/0.4.3/libexec/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2977, in _call_aside f(*args, **kwargs) File "/usr/local/Cellar/gitfs/0.4.3/libexec/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3004, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/Cellar/gitfs/0.4.3/libexec/lib/python2.7/site-packages/pkg_resources/__init__.py", line 662, in _build_master ws.require(__requires__) File "/usr/local/Cellar/gitfs/0.4.3/libexec/lib/python2.7/site-packages/pkg_resources/__init__.py", line 970, in require needed = self.resolve(parse_requirements(requirements)) File "/usr/local/Cellar/gitfs/0.4.3/libexec/lib/python2.7/site-packages/pkg_resources/__init__.py", line 856, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pycparser' distribution was not found and is required by cffi Is there something wrong with the build process? I'm not familiar with Travis, but it looks like there are three different PRs (https://github.com/Homebrew/homebrew-fuse/pull/69 https://github.com/Homebrew/homebrew-fuse/pull/73 & https://github.com/Homebrew/homebrew-fuse/pull/75) sharing the same error: Error: The `brew link` step did not complete successfully The formula built, but is not symlinked into /usr/local @t-ionut This is probably the effect of sudo brew cask install osxfuse, which may create /usr/local/include owned by root. #85 should fix that. @umireon no problem, I'll wait for #85 to be merged and rebase :smile: The current formula of gitfs does not build when CLT is not installed: build log This problem can be circumvented by this PR, but the root cause is still not clear. Yes, @umireon is right. The libffi problem should be reported upstream. I think The resource sections should be reordered to match their dependency order (i.e. cffi comes before atomiclong). This will make easier to investigate the problem (build log)[https://gist.github.com/umireon/fef4e7c4a3f9e39901af2432b4a3a705] The problem in cffi is reported at upstream. PR created at upstream: https://bitbucket.org/cffi/cffi/pull-requests/73/able-to-build-on-macos-only-with-xcode-but/diff Patch: https://bitbucket.org/api/2.0/repositories/cffi/cffi/pullrequests/73/patch cffi/cffi#73 merged: this will fix the problem without CLT. @umireon Anything else needed here? @MikeMcQuaid Updating cffi will remove the need of CLT checking. The author of cffi seems to preparing a new release of cffi. Anything else would be fine. Let's wait for the new release.
GITHUB_ARCHIVE
Tooltip bug when hovering child node Having a bunch of elements builded in this way: <button type="button" class="btn btn-primary px-3 hasTooltip" title="Tooltip text" data-toggle="tooltip"> <i class="fas fa-user m-0"></i> </button> <button type="button" class="btn btn-primary px-3 hasTooltip" title="Other tooltip text" data-toggle="tooltip"> <span>Any child element</span> </button> So when I hover a tootlip target it shows the tooltip nicely. The problem is when I hover any child element of that tooltip element. The element suddenly dissapears or is being repositioned at the top left corner of the view, breaking the entire behaviour of this Bootstrap component. No matters if it is an icon or a simply span node. This issue isn't reproducing in earlier versions of Bootstrap. You can add 'pointer-events: none' to all elements inside the button. It helped me. button *{ pointer-events: none; } You can add 'pointer-events: none' to all elements inside the button. It helped me. button *{ pointer-events: none; } I totally agree Tooltips are not working well with v5 I totally agree Tooltips are not working well with v5 Tooltips are not working well with BS5. If you try to enter/exit the mouse over a certain element even without any child node, the tooltip vanishes. This incorrect behavior can be seen also on the demo page using the latest Chrome: https://getbootstrap.com/docs/5.0/components/tooltips/ Definitely tooltips/popovers not working consistently with Popper 2 in BS5 Tooltips are not working well with BS5. If you try to enter/exit the mouse over a certain element even without any child node, the tooltip vanishes. This incorrect behavior can be seen also on the demo page using the latest Chrome: https://getbootstrap.com/docs/5.0/components/tooltips/ Definitely tooltips/popovers not working consistently with Popper 2 in BS5 You can add 'pointer-events: none' to all elements inside the button. It helped me. button *{ pointer-events: none; } Well it works and I would get it running like this, but still I don't think this is the proper way to do it. It's a bug that should be fixed in the internal code of the next release. That's why I posted here. You can add 'pointer-events: none' to all elements inside the button. It helped me. button *{ pointer-events: none; } Well it works and I would get it running like this, but still I don't think this is the proper way to do it. It's a bug that should be fixed in the internal code of the next release. That's why I posted here. @joeforjoomla Exactly. Especially if you hover over the SVG example then scroll down the tooltip stays there forever. @joeforjoomla Exactly. Especially if you hover over the SVG example then scroll down the tooltip stays there forever. @Chipsum yes indeed, I'm working intensively with tooltips and popovers, but as of now they are unusable. Moreover with BS5 it seems that it's no longer possible to have a tooltip AND a popover on the same element as i was used to do with BS2/3/4 @Chipsum yes indeed, I'm working intensively with tooltips and popovers, but as of now they are unusable. Moreover with BS5 it seems that it's no longer possible to have a tooltip AND a popover on the same element as i was used to do with BS2/3/4 Duplicate of #31646 Duplicate of #31646
GITHUB_ARCHIVE
This lab will give you practice using linked lists and unions. Files that you will need: ice cream is nice but chocolate ice cream is dandythen the command: <UNIX> linenum brad cream nice ice Niceshould produce the output: cream 1 3 nice 1 ice 1 2 NiceThe words on the command line may be duplicated (e.g., ice may appear twice) but it is okay to only print that word once in your output. My executable will duplicate the output for that word, which is also acceptable. I have provided one test file in the lab4 directory, linenum_test1 to test your program. However, you should test your program for boundary-type conditions and command line issues, such as: Write a program called broker that reads stock and option transactions from a file, groups them by customer, and then prints out a report showing the total dollar amount of transactions for each customer and a list of that customer's transactions, printed in the order in which they were read. The customers will be printed in descending order based on the total value of their transactions. The input will consist of records of two types: stock ticker_symbol num_shares price_per_share dividend_amt customer_namestock is a keyword that identifies the type of transaction. ticker_symbol is a 3-5 character string that is used by traders as a shorthand symbol for the stock's name, num_shares is an integer expressing the number of shares involved in the transaction, and price_per_share is a floating point number denoting the price paid for each share. The dividend_amt is the amount of dividends the company pays on each share of stock. Finally the customer's name is a character string consisting of one or more words. The fields library will represent these words as separate fields so you will have to concatenate them together and put a single space between each word. A sample stock line might be: stock ibm 200 83.50 2.50 Brad Vander Zanden option ticker_symbol num_options price_per_share strike_price expiration_date customer_nameoption option is a keyword indicating the transaction type. ticker_symbol is a character string of 3-5 characters representing the option's trading symbol. num_options is an integer denoting the number of options purchased and price_per_share is a floating point number denoting the price paid for each share covered by an option. Since each option represents 100 shares the total transaction cost is determined by the following equation: total transaction cost = price_per_share * 100 * num_optionsThe strike price is a floating point number and the expiration date has the format mm yyyy where mm is an integer between 1 and 12 and yyyy is a year. Finally the customer name is similar to the customer name for a stock. A sample input line for an option might appear as follows: option oibm 3 5.50 85 12 2004 Brad Vander ZandenThis line says that Brad Vander Zanden has bought 3 calls on ibm at $5.50 a share. The call gives the customer the right to purchase the stock at $85.00 a share until December, 2004. Your program should create a dlist that contains customer records. Each customer record will contain the customer's name, the customer's total transaction costs and a dlist that consists of all of the customer's transactions. You will need to define two structs, one for a customer record and one for a transaction record. A transaction record needs to store two types of transactions, stock transactions and option transactions. Stock and option transactions share some common information but each transaction also has some unique fields. Hence you will need to employ a union within your transaction struct that can represent the alternative information required for each type of transaction. Your solution must employ a union. Once you have read all the transactions and organized them by customer, you should create a new dlist that will contain the customers sorted in descending order by total transaction cost. You will need to iterate through your initial customer list and add each customer to your new dlist using insertion sort. The final thing you need to do is print the report. Each customer should start with a line that lists the customer's name and total transaction costs. The customer's name should be left justified in a field of 20 characters and the total transaction costs should be right-justified in a field that allows two decimal points. The customer line should be followed by indented lines that list the customer's transactions, one per line. Transaction lines should be indented by 4 spaces. Stock transaction lines should have fields in the following order and should be separated by one space: dividend total = num_shares * dividend_amt Submit two source files named linenum.c and broker.c respectively and a header file named broker.h.
OPCFW_CODE