Tuesday, December 23, 2014

A man who believed in machines



"A computer would deserve to be called intelligent if it could deceive a human into believing that it was human."

This is one of the most influential quotes of the last century. The man who said it believed that by the year 2000 A.D., everybody would come to believe in the capability of computers to create something new, in an age when most people thought machines could only do whatever they were told to do.
He was highly influential in the development of computer science, providing a formalisation of the concepts of "algorithm" and "computation" with his Turing machine model. He was a mathematician, logician, cryptanalyst, philosopher, pioneering computer scientist, mathematical biologist, and marathon and ultra-distance runner. In 1999, Time magazine named him one of the 100 Most Important People of the 20th century.
He was none other than Alan Turing. Turing is widely considered to be the father of theoretical computer science and artificial intelligence, having paved the way for the concepts of algorithm and computation.
During the Second World War, he worked for the British government decrypting enemy ciphers. His intelligence and diligent efforts were praised even by Winston Churchill. His pivotal role in cracking intercepted coded messages enabled the Allies to defeat the Nazis in several crucial battles. It has been estimated that the work at Bletchley Park shortened the war in Europe by as many as two to four years.
After the war, he worked at the National Physical Laboratory, where he designed the ACE, among the first designs for a stored-program computer. In 1948 Turing joined Max Newman's Computing Laboratory at Manchester University, where he assisted development of the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis, and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.
This famous scientist was born in Paddington, London, while his father, Julius Mathison Turing (1873–1947), was on leave from his position with the Indian Civil Service (ICS) at Chhatrapur, Bihar and Orissa Province, in British India. Very early in life, Turing showed signs of the genius that he was later to display prominently.
From an early age, Turing continued to show remarkable ability in the studies he loved, solving advanced problems in 1927 without having studied even elementary calculus. In 1928, aged 16, Turing encountered Albert Einstein's work; not only did he grasp it, but he extrapolated Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit.
In 1928, German mathematician David Hilbert had called attention to the Entscheidungsproblem (decision problem). In his momentous paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (submitted on 28 May 1936 and delivered 12 November), Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation.
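The flavor of Turing's diagonal argument can be sketched in a few lines of modern code. Suppose someone hands us a claimed halting decider; the construction below (a hypothetical illustration in Python, not Turing's formalism) builds a program that does the opposite of whatever the decider predicts about it, so no such decider can be correct:

```python
def make_contrarian(halts):
    """Given a claimed halting decider halts(program, argument) -> bool,
    build a program D that contradicts the decider on input D itself."""
    def d(program):
        if halts(program, program):
            while True:       # the oracle says "halts", so loop forever
                pass
        return "halted"       # the oracle says "loops", so halt at once
    return d

# Feeding D to itself yields the contradiction: D(D) halts exactly when
# halts(D, D) says it doesn't, so no correct `halts` can exist.
always_no = lambda program, argument: False   # a (necessarily wrong) oracle
d = make_contrarian(always_no)
print(d(d))  # -> halted  (contradicting the oracle's answer of "loops")
```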
In 1948, he was appointed Reader in the Mathematics Department at the University of Manchester. In 1949, he became Deputy Director of the Computing Laboratory there, working on software for one of the earliest stored-program computers—the Manchester Mark 1. During this time he continued to do more abstract work in mathematics, and in "Computing Machinery and Intelligence" (Mind, October 1950), Turing addressed the problem of artificial intelligence and proposed an experiment which became known as the Turing test, an attempt to define a standard for a machine to be called "intelligent". The idea was that a computer could be said to "think" if a human interrogator could not tell it apart, through conversation, from a human being. In the paper, Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one to simulate a child's mind and then subject it to a course of education. A reversed form of the Turing test is widely used on the Internet; the CAPTCHA test is intended to determine whether the user is a human or a computer.
In 1948, Turing, working with his former undergraduate colleague D. G. Champernowne, began writing a chess program for a computer that did not yet exist. By 1950, the program was completed and dubbed the Turochamp. In 1952, he tried to implement it on a Ferranti Mark 1, but the computer lacked enough power to execute the program. Instead, Turing played a game in which he simulated the computer, taking about half an hour per move. The game was recorded. The program lost to Turing's colleague Alick Glennie, although it is said that it won a game against Champernowne's wife. His Turing test was a significant, characteristically provocative and lasting contribution to the debate regarding artificial intelligence, which continues after more than half a century.
He also invented the LU decomposition method in 1948, which is still used today for solving matrix equations.
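The idea behind LU decomposition is to factor a matrix A into a lower-triangular L and an upper-triangular U, so that a system Ax = b can be solved by two cheap triangular solves. A minimal sketch in Python (Doolittle's variant, without pivoting; the 2x2 example matrix is my own, chosen so no row exchanges are needed):

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L @ U,
    with ones on the diagonal of L. Assumes no row exchanges are needed."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # fill row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):      # fill column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
# L = [[1.0, 0.0], [1.5, 1.0]],  U = [[4.0, 3.0], [0.0, -1.5]]
```

Multiplying L by U reproduces A, which is the whole point of the factorization.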
Let me shed some light on the Turing machine. A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
The "Turing" machine was invented in 1936 by Alan Turing who called it an "a-machine" (automatic machine). The Turing machine is not intended as practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation.
In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consisted of:
...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings. (Turing 1948, p. 3)
A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). A more mathematically oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church–Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics, and provide a precise definition of an algorithm or "mechanical procedure". Studying their abstract properties yields many insights into computer science and complexity theory.
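The tape-and-table model Turing describes above is easy to simulate. Below is a minimal sketch in Python (the rule-table encoding, state names, and the bit-flipping example machine are my own invention for illustration, not Turing's notation):

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, scanned_symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay) or +1 (right). The machine stops
    when it reaches the state "halt" (or when max_steps is exhausted).
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)             # the scanned symbol
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol                    # alter the scanned symbol...
        head += move                                # ...then move along the tape
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A tiny rule table that flips every bit and halts on the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_turing_machine(flip, "10110"))  # -> 01001
```

Swapping in a different rule table gives a different machine; a universal machine is one whose rule table interprets an encoding of any other machine's rules from the tape itself.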

For detailed information, readers can visit:
http://en.wikipedia.org/wiki/Turing_machine



Sunday, December 21, 2014

Indian Mars Mission, so cheap!



India’s Mars orbiter mission tells the world that the more technology was denied to it, the more determined the country became to master space technologies. India has created global history by becoming the first Asian nation to reach the Mars orbit in a space mission. The success is sweeter because this has been done in its maiden attempt. No other country that has attempted a mission to Mars has succeeded in reaching the planet on debut. So, the Indian Space Research Organisation (ISRO) can claim that it has done a shade better than accomplished space powers such as the United States and Russia in reaching Mars.
India took a giant step towards making its first manned space mission after it successfully launched its latest rocket with a crew module for astronauts. The testing of its Geosynchronous Satellite Launch Vehicle (GSLV) capped a triumphant year for the Indian Space Research Organisation (ISRO), which completed the cheapest ever mission to Mars in September. It entered the Martian orbit only a day after the American MAVEN mission but was £365 million cheaper.


India’s prime minister Narendra Modi had joked that it was £13 million cheaper than the Hollywood space hit Gravity starring George Clooney and Sandra Bullock. 
The new rocket was substantially more expensive, taking a decade and $400 million (£256 million) to develop, but it marks a significant breakthrough in the race to send Indian astronauts into space and eventually make a lunar landing.
Ajay Lele, a defence researcher and the author of Mission Mars: India’s Quest for the Red Planet, said the successful launch was a key stage towards launching manned missions, but warned it could be another ten years before India achieves its ambition.
“It is a significant development but we are a bit euphoric about it. GSLV is a suborbital launch vehicle and has only passed the liquid and solid engine tests. We will need another two years to test the cryogenic (liquid gas) engine,” he said. “So far we have been able to carry a payload of three to four tons, and to send a manned mission we need higher payload capacity. If things go well, we are still ten years away from the manned mission.”

But What's So Special About It?

Why did everybody lose their mind over this?

On Wednesday, India’s space program signed an agreement with NASA for a joint Earth-observing satellite mission as well as a charter to establish a working group for cooperation on Mars exploration. That comes on the heels of India’s Mars orbiter reaching the red planet’s orbit last week.
The relatively cheap cost of India’s Mars spacecraft, roughly $74 million, has drawn attention.
NPR’s Geoff Brumfiel has reported that there are several reasons. Among them, according to Brumfiel’s article: the spacecraft's lesser sophistication compared to NASA’s MAVEN, which also reached Mars orbit last week; the orbiting path it chose; and much lower labor costs.
From the story, which quotes Earth 2 Orbit’s Amaresh Kollipara: First, the spacecraft itself is a lot less sophisticated than its NASA counterpart, and is not designed to last as long. “It’s essentially buying a Honda Civic versus buying a Mercedes S-Class,” Kollipara says. The Indian craft has fewer cameras and scientific doohickeys.
It is orbiting in a big oval with Mars at one end. The downside of that path is that the Indian spacecraft only gets close to Mars once every few days. But fewer firings of the engine meant the Indian spacecraft would need less fuel. That helped keep the weight down to nearly half that of the NASA mission — and that lighter load made it much cheaper to launch.

Secondly, we clearly won the race.
If the 20th century witnessed a “space race” between the U.S. and the USSR, the 21st century is seeing an Asian space race. In most aspects of space technology, China is way ahead of India. It has larger rockets, bigger satellites and several rocket ports. It even launched its first astronaut in space way back in 2003 and has a space laboratory in the making.
In 2008, when India undertook its first mission to the Moon, Chandrayaan-1, China raced ahead and orbited its Chang’e-1 satellite ahead of India. But in this Martian marathon, India has reached the finish line ahead of China. This now puts India in the pole position as far as Asian Martian exploration goes. In 2012, the first Chinese probe to Mars, Yinghuo-1, failed. It was riding atop a Russian spacecraft called Phobos-Grunt, but the mission failed to even leave Earth orbit. Earlier, in 1998, a Japanese probe to Mars ran out of fuel.
Today, India’s Mars orbiter mission has shown that the Indian elephant has lumbered ahead of the Chinese red dragon. For the record, ISRO’s chairman Dr. K. Radhakrishnan has gone on record by saying, “We are not racing with anybody. We are racing with ourselves. We have to race to reach the next level of excellence.”
Reasons for this:
To hold costs down, India relied on technologies it has used before and kept the size of the payload small, at 15 kilograms. It saved on fuel by using a smaller rocket to put its spacecraft into Earth orbit first to gain enough momentum to slingshot it toward Mars.
And this Vox story says the spacecraft is mainly a “demonstration of the fact that India has the technology to reach Mars,” but adds that some science will be conducted.
In addition to cameras that will photograph Mars’ surface, it’s equipped with a few different instruments that will analyze the planet’s atmosphere, looking for methane in particular. Scientists believe that, if methane is present, it could be a sign of microbial life. Some previous craft have detected traces of methane, but the Curiosity rover has failed to find any.
Conclusion
Many have questioned why India should be sending a robotic mission to Mars when there is so much poverty, malnutrition, death, disaster and disease among its 1.2 billion people. Some have even called this mission a part of India’s “delusional dream” of becoming a superpower in the 21st century. Nothing could be further from the truth. If one analyses the Rs. 450 crore cost of the Mars Orbiter mission, it works out to about Rs. 4 per Indian. Today, a bus ride would cost a lot more.
India’s Mars Orbiter mission has paved the way for cheaper and faster inter-planetary probes. 
Mr. Modi, in his stirring speech to ISRO, spoke of its capabilities and efficiencies. It is an eye-opener that a country which can undertake a mission to Mars is unable to provide electricity to 400 million citizens. What is worse is that 600 million Indians still don’t have access to toilets. It is hoped that Mr. Modi would have learnt a lesson or two from the Indian space agency on how to undertake cost-effective projects with no time or cost overruns. 
The Orbiter mission undoubtedly tells the world that India is a space power to reckon with. The more technology was denied to India, the more determined it became to master these technologies.
I just wish that after this giant leap and such marvelous technological advancement, our countrymen will also look back at the problems spread across the homeland. Development on both fronts is necessary for any country to progress and become a world power. We have reached Mars, but we still lack very basic amenities and still struggle with serious issues like women's safety. Our engineers have once again proved their brilliance to the world and made their mark with this achievement. I wish that the technological capability we have demonstrated can help provide useful inputs to eradicate the basic problems we are dealing with.

Saturday, December 6, 2014

Beauty with Brain...Rachel Haot


Rachel Haot
Rachel Haot is an American businesswoman and entrepreneur who currently serves as the Chief Digital Officer and Deputy Secretary of Technology for New York State under Governor Cuomo's administration. Prior to this role, Rachel served as Chief Digital Officer for the City of New York for three years under Mayor Michael Bloomberg, from January 2011 to December 2013, leading NYC Digital.
She was also featured in Vogue magazine.
Rachel Sterne Haot is the chief digital officer and the deputy secretary for technology for New York State in Governor Cuomo’s executive chamber. Her focus is to realize the governor’s vision for the state by improving the way that government and the public engage online, and supporting collaborative innovation with the technology community.
Prior to this role, Rachel served as chief digital officer for the City of New York for three years under Mayor Bloomberg, from January 2011 to December 2013. At the city, she established the first urban digital roadmap in the country, achieving all initiatives by October 2013. Major milestones included the relaunch of the official city website nyc.gov; tripling the City’s social media audience to over four million; hosting the first hackathons in municipal government; and launching the tech sector initiative and campaign We Are Made in NY to support the digital industry.
Before her role with the city, Rachel served as founder and CEO of GroundReport, a pioneering global citizen journalism platform, from 2006 to 2010. She also launched and ran Upward, a digital strategy consultancy, taught as a Columbia Business School adjunct professor and worked in business development for the consumer web industry.
She has been recognized as a ’40 Under 40’ leader by Crain’s, Forbes and Fortune.
From 2006 to 2010, Haot founded and served as Chief Executive Officer of GroundReport, a global crowdsourced news startup that was one of the earliest examples of citizen journalism. In 2008, Haot founded digital strategy consulting firm Upward, and later served as an adjunct professor of Social Media and Entrepreneurship at Columbia Business School. In 2012, she was named a Young Global Leader by the World Economic Forum, and serves on the digital advisory board of Women@NBCU. She has been recognized as a "30 Under 30" leader by Fortune and Forbes.

In 2011, New York City Mayor Michael Bloomberg named Haot to the post of Chief Digital Officer. Her responsibilities included the development and multi-stakeholder execution of New York City's Digital Roadmap, a plan unveiled in May 2011 to realize the City's digital potential for all New Yorkers, spanning 40 initiatives across the areas of Internet connectivity, STEM education, open government and big data, online engagement and technology industry support.
In October 2013, Mayor Bloomberg and Haot announced that 100% of initiatives had been completed, and introduced new goals submitted via public listening sessions to build on this foundation in the 2013 Digital Roadmap.

A major milestone of the Digital Roadmap was the complete overhaul of the user experience and architecture of the official city website nyc.gov for the first time in a decade, featuring fully responsive design, data-driven information architecture and improved customer service functionality. In addition, Roadmap initiatives included expansion of public Wi-Fi to more than 50 parks, low-cost broadband connectivity and training for over 300,000 low-income New Yorkers, paid technology sector internships for underrepresented minorities, more than 40 digital learning programs that have served over one million New Yorkers, the release of over 2,000 public data sets, the first municipal hackathons in the country, the expansion of 311 on mobile and social media, an official City mobile app store and the launch of We Are Made in NY, an economic development initiative and marketing campaign to support New York City's tech sector. One component of We Are Made in NY is an interactive map of the sector's companies, investors and incubators, which shows the locations of over 2,000 technology companies in New York City and allows users to filter by hiring companies, visit job listings and add new startups to the map.
Haot was interviewed by WNYC in the wake of Superstorm Sandy, where she detailed the efforts her office was undertaking to bring the city back to its feet from a digital infrastructure standpoint, especially in Lower Manhattan. In addition to the City's first official hackathon, Reinvent NYC.GOV, Haot's office hosted the Reinvent Payphones Design Challenge, a competition to promote the re-purposing of New York City’s public pay telephones for the digital age, which garnered over 100 submissions from design firms and universities.

 In February 2013, Haot and Mayor Bloomberg introduced We Are Made in NY, an economic development initiative to support tech sector growth in New York City.


Friday, December 5, 2014

Beauty with Brain...Kira Radinsky

Kira Radinsky

This is no ordinary girl! You may find the face cute and charming, but she is among the top innovators on Earth, on a list that includes names like Mark Zuckerberg.

Kira Radinsky, beauty with brains, is an Israeli prodigy, born in Ukraine, who revolutionized the world with her childhood obsession with simulation and predictive analysis.
When reviewing Kira Radinsky's résumé, it's easy to feel a bit unaccomplished. She started college at just 15 and earned a Ph.D. by the time she was 26. The girl who landed a spot on MIT’s prestigious 35 Innovators Under 35 list this year—previous winners include nerds like Facebook’s Mark Zuckerberg—has figured out a way to forecast natural disasters, disease epidemics, social unrest, and violence outbreaks. Her predictions aren’t vague or ambiguous. They are made of something much more concrete—science. Kira is pioneering predictive data-mining software at the Technion-Israel Institute of Technology.

It was during her studies at the Technion that Radinsky was able to transform her ideas into reality. While enrolled in university, she developed a new prediction method that can foresee events with 80 percent accuracy. For this software, she scanned 500 years' worth of literature, including all the materials published in the New York Times from 1880 onwards, whereupon she found strong correlations between various events and discovered indicators for future cholera outbreaks.
Shortly after she went on to win the coveted Israel Defense Prize, interned at Microsoft, earned a black belt in karate, learned salsa dancing, acquired 10 patents, launched the start-up SalesPredict, and completed her PhD—all by the age of 27, earning her academic recognition from tech giants like Google, Yahoo, and Facebook. Perhaps the “27 Club” curse in Kira’s case is bewitching her with brilliance? But it is her obsession with predicting the future that has catapulted the soft-spoken Radinsky to international fame.
How good can computers get at predicting events?
In 2012, when Cuba suffered its first outbreak of cholera in 130 years, the government and medical experts there were shocked. But software created by Kira Radinsky had predicted it months earlier. Radinsky’s software had essentially read 150 years of news reports and huge amounts of data from sources such as Wikipedia, and spotted a pattern in poor countries: floods that occurred about a year after a drought in the same area often led to cholera outbreaks.
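The core mechanism described here is mining event timelines for recurring "A is followed by B" patterns. A toy sketch of the idea in Python (the invented country timelines and the simple co-occurrence counting are my own simplification for illustration, not Radinsky's actual system):

```python
from collections import Counter

def mine_followups(histories, window=2):
    """Count how often each event type follows another within `window`
    steps, across many per-country event timelines."""
    counts = Counter()
    for timeline in histories:
        for i, cause in enumerate(timeline):
            for effect in timeline[i + 1 : i + 1 + window]:
                counts[(cause, effect)] += 1
    return counts

# Invented toy yearly-event timelines for three low-GDP countries.
histories = [
    ["drought", "flood", "cholera", "calm"],
    ["calm", "drought", "flood", "cholera"],
    ["flood", "calm", "drought", "calm"],
]
counts = mine_followups(histories)
print(counts[("flood", "cholera")])  # -> 2
```

Here "cholera shortly after flood" and "flood after drought" both stand out in the counts, which is exactly the kind of regularity the real system surfaces, at vastly larger scale, from decades of news text.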
Oracle of the Internet
The predictions made by Radinsky’s software are about as accurate as those made by humans. That digital prognostication ability would be extremely useful in automating many kinds of services.
Radinsky was born in Ukraine and immigrated to Israel with her parents as a preschooler. She developed the software with Eric Horvitz, co-director at Microsoft Research in Redmond, Washington, where she spent three months as an intern while studying for her PhD at the Technion-Israel Institute of Technology. Radinsky then started SalesPredict, to advise salespeople on how to identify and handle promising leads.
“My true passion,” she says, “is arming humanity with scientific capabilities to automatically anticipate, and ultimately affect, future outcomes based on lessons from the past.”
But despite her myriad early achievements, Dr. Kira Radinsky could not shake off her decade-old obsession with predicting the future. “At some point I came to realize there is so much untapped data that can be leveraged in amazing ways. I never really stopped to think of how difficult the problem of predicting the future would be. But I thought maybe that’s a common thought for ordinary people trying to achieve extraordinary things,” she says.

Radinsky has developed an algorithm that predicts future global events. It's no wonder that she's known around the tech world as the "web prophet."
“It’s a very sophisticated form of data mining, enabling deep analysis of disparate events and seeing how they repeat themselves time after time,” said David Shamah of the Times of Israel.
What's interesting about the algorithm is the way it connects the "fading" technology of printed news with the onset of digital media: A major resource feeding her algorithms is an archive of The New York Times, along with Twitter feeds and Wikipedia entries. Because Radinsky can now identify cause-and-effect patterns with this system, she can alert us to possible disaster, political events, and even disease outbreak. “If a storm comes two years after a drought, a few weeks [after the storm] the probability of a cholera outbreak is huge, especially in countries with low GDP and low concentration of clean water,” she explains to Fast Company.
How accurate is she? About 70% to 90%. The duo developed software that parses the web, composing a complex algorithm that taps into 22 years' worth of archives from the New York Times and more than 90 other data sources. Basically, they're analyzing today’s and yesterday’s news to predict tomorrow’s. Her algorithm predicted the cholera epidemic in Cuba (the first in decades) as well as the riots that sparked the Arab Spring. While it may seem that a bit of common sense and a lot of research could allow many scientists to foresee something like a cholera outbreak, Fast Company notes the real innovation is in Radinsky's automation of it: "Getting a computer to do it, and to analyze accurately the massive amounts of electronic data present on the web, is another matter."


What's important to remember about Radinsky’s algorithms is that they suggest probability, rather than certainty. When she began fiddling around with Google Trends in 2007, Radinsky quickly realized she could predict what people would search for based on breaking news stories. Perhaps the best illustration of her technique is in her findings on the Arab Spring riots. Though her software successfully forecast the riots, it also predicted the fall of the Sudanese government, which didn't happen.
Now, Radinsky has formed her own start-up, SalesPredict, a sales and marketing prediction organization that dedicates a portion of its research to medical and humanitarian endeavors in collaboration with SparkBeyond. Most recently, her team predicted a cholera epidemic in Zimbabwe that could break out before 2014. The hope is that such warnings can be used to help us better prepare for troubling times, in turn making them more manageable.

“It’s my strong desire to see my ideas implemented in the real world and be personally involved with the implementation. I intend to be the Indiana Jones of Predictive Analytics. Seriously, I really enjoy combining research and practice,” says Radinsky.


Kira now deploys her research for the prediction of cholera outbreaks worldwide and works with medical organizations to bring this into production. Radinsky has even begun working with an organization affiliated with the UN with the goal of predicting genocides and preventing them. She is also currently looking into using her prediction software to identify people with suicidal tendencies.

For more, visit her homepage:

http://tx.technion.ac.il/~kirar/

Beauty with Brain...Kira Radinsky - Part 2

Indiana Jones of Predictive Analytics

Interview with Vice

VICE: Is it possible to predict the future with today’s technology?
Kira Radinsky: We have reached a critical amount of data and computation power to start finding repeating patterns in history systematically. We built a predictive model based on more than 150 years of historical news data that examines past events with similar outcomes. Our system also incorporates related contextual information pulled from LinkedData, a project that finds connections between hundreds of resources. The combination allows the software to extrapolate from news of a cholera outbreak in Angola, for example, to predict a similar outbreak in Rwanda.
So do you believe that history has a tendency to repeat itself?
The probabilities are always changing, but some patterns, if we abstract them correctly, always remain. And if we incorporate the most recent information we can learn about new patterns emerging all the time. Think about how children learn—they receive reinforcement from the environment and learn patterns. This is also how we learn. I would say the work I have done is not about predicting the future, it is more about making deep analysis on probabilities of future outcomes based on what we have seen, just as an expert in the field would do if he had the time to look at all the available data in the world.                       
What spurred this passion to use your computer science capabilities to help people?
I became fascinated with the idea of predicting the future at a very early age. At some point I came to realize there is so much untapped data that can be leveraged in amazing ways. I believe that not often does a person have the opportunity to do something really big that can help many people. My passion is to make big things that can affect people's lives for the better.
With all the focus on the "end of the world," do you see this software being able to predict an end of the world-scale catastrophe like a nuclear holocaust or  zombie apocalypse?
The system is built upon probabilities based on patterns it saw in the past. As far as I know, there haven’t been any zombie apocalypses. So far, the system predicted the first Cholera outbreak in Cuba in 130 years, riots in Turkey and Syria, and recently the ones in Sudan. Many critical decisions are not based on real data, because we didn’t have the means to do this. However, using our software, we can now potentially empower important decision makers in the world with the tools to make better decisions.
Do your methods work for looking into the past as well?
In our first experiments, we utilized 150 years of New York Times data and we already have access to data going back to 1500. The system has no limitation on how far in the future it predicts as long as we have enough data to see the patterns. Some patterns have 20 years of distance between the events.
However, because we only have 150 years of data so far, we are limited to these prediction horizons at the moment.
Can this software be applied in other industries?
The implications of this technology also extend to the business world. My current venture, SalesPredict, pioneers predictive modeling to increase sales. We help companies increase their sales pipelines by 75 percent. The system is still piloting and we only now started working with international organizations. However, the product is already working and companies are enjoying it. I believe that this type of technology will be widespread in a matter of years. The predictive algorithms are general purpose, in the sense they can be used to predict a variety of events given the right data. In the future, I believe predictive analytics can be incorporated into everything we do—including predicting mental and physical diseases based on our search behavior.

Friday, November 14, 2014

Playing with webpages using Powershell...




It has always been fascinating to open and download files from webpages from the command line. For Linux users, the wget command is a heavenly gift. When it comes to Windows, things get a little hectic: using cmd one can open webpages, but playing with the data is time consuming. So I am writing this article about how to open and read webpages from PowerShell.


Steps that are included in this process are :

  1. Open the webpage
  2. Extract HTML Title, Description, Keywords
  3. Avoid URLs Matching Any of a Set of Patterns
  4. Setting a Maximum Response Size
  5. Setting a Maximum URL Length
  6. Using the Disk Cache
  7. Crawling the Web
  8. Get Referenced Domains
  9. GetBaseDomain
  10. Must-Match Patterns
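Several of the steps above can be sketched directly in PowerShell. The following is a minimal illustration, assuming PowerShell 3.0 or later for Invoke-WebRequest; the URL, size limit, length limit, and block patterns are placeholders, and the regexes are simplistic stand-ins for real HTML parsing:

```powershell
# Step 1: open the webpage (placeholder URL)
$url = 'http://example.com'
$response = Invoke-WebRequest -Uri $url

# Step 4: enforce a maximum response size (illustrative 1 MB cap)
if ($response.RawContentLength -gt 1MB) {
    Write-Warning "Response too large, skipping $url"
    return
}

# Step 2: extract the HTML title and meta description with simple regexes
if ($response.Content -match '<title>(.*?)</title>') {
    $title = $Matches[1]
}
if ($response.Content -match '<meta name="description" content="(.*?)"') {
    $description = $Matches[1]
}

# Steps 3 and 5: skip links matching block patterns or exceeding a URL length
$blockPatterns = @('\.pdf$', 'login')
$links = $response.Links.href | Where-Object {
    $link = $_
    $link.Length -le 2048 -and
    -not ($blockPatterns | Where-Object { $link -match $_ })
}
```

The later steps (disk caching, crawling, referenced domains) are what the SharePoint cmdlets below handle for us.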

Now let's start with the commands.
Start --> Fast Search Server 2010 for SharePoint (right click --> Run as Administrator)

The Short Version
Add the SharePoint PowerShell cmdlets
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Create and configure the Content Source (enter a URL that doesn't mind you crawling it. Perhaps your blog page?)
$contentSSA = "FASTContent"
$startaddress = [enter a URL here]
$contentsourcename = "Web site crawl"
$contentsource = New-SPEnterpriseSearchCrawlContentSource -SearchApplication $contentSSA -Type Web -name $contentsourcename -StartAddresses $startaddress -MaxSiteEnumerationDepth 0

Start the crawl
$contentsource.StartFullCrawl()
$contentsource.CrawlStatus

Keep executing $contentsource.CrawlStatus until the status changes to CrawlCompleting and then Idle
Execute a search
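Rather than re-running $contentsource.CrawlStatus by hand, you can poll it in a small loop. This is just a sketch, reusing the $contentsource variable created above; the 10-second interval is arbitrary:

```powershell
# Poll the crawl status every 10 seconds until it settles at Idle
do {
    Start-Sleep -Seconds 10
    $status = $contentsource.CrawlStatus
    Write-Host "Crawl status: $status"
} while ($status -ne 'Idle')
```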

The Long Version
Again, there isn't much reason to go over every step, as they don't really change from run to run. So let's clarify a few things.
$contentsource = New-SPEnterpriseSearchCrawlContentSource -SearchApplication $contentSSA -Type Web -name $contentsourcename -StartAddresses $startaddress -MaxSiteEnumerationDepth 0

It is interesting to note that the New-SPEnterpriseSearchCrawlContentSource cmdlet defaults to the Custom crawl rule, which reads all pages and all links found at the starting URL. We set MaxSiteEnumerationDepth to zero, which makes the crawler read only the content at the site we started at, rather than going into ADD mode, becoming easily distracted and chasing down every car that goes by.

Another method:

(New-Object System.Net.WebClient).DownloadFile($url, $localFileName)
In v3, the Invoke-WebRequest cmdlet:
Invoke-WebRequest -Uri $url -OutFile $localFileName
Another option is with the Start-BitsTransfer cmdlet:
Start-BitsTransfer -Source $source -Destination $destination
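If a script has to run on both older and newer machines, it can try these methods in order. A minimal sketch, with placeholder URL and file name:

```powershell
$url = 'http://example.com/file.zip'   # placeholder
$localFileName = "$env:TEMP\file.zip"  # placeholder

if (Get-Command Invoke-WebRequest -ErrorAction SilentlyContinue) {
    # PowerShell 3.0 and later
    Invoke-WebRequest -Uri $url -OutFile $localFileName
}
else {
    # Fallback that also works on PowerShell 2.0
    (New-Object System.Net.WebClient).DownloadFile($url, $localFileName)
}
```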

There are at least four (not two) ways to open a URL with the default browser in PowerShell.
1. Run the browser executable with our URL as a parameter.
How do we get the executable path of the default browser?

Function Get-DefaultBrowserPath {
    # Read the default browser's launch command from the registry
    New-PSDrive -Name HKCR -PSProvider Registry -Root HKEY_CLASSES_ROOT | Out-Null
    $browserPath = ((Get-ItemProperty 'HKCR:\http\shell\open\command').'(default)').Split('"')[1]
    return $browserPath
}
Call it:
Get-DefaultBrowserPath
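With the function above, launching a URL in the default browser is then a one-liner:

```powershell
Start-Process -FilePath (Get-DefaultBrowserPath) -ArgumentList 'http://www.gurucore.com'
```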

Simplest way:
just type start 'http://www.gurucore.com' in PowerShell or cmd.

Friday, July 25, 2014

Is your private life private!!

 

 
Who’s looking at your pictures right now? If you’ve ever sold a used cellphone, it’s likely a complete stranger has sifted through your most intimate memories.
Recently Prague-based security firm Avast made news when they bought 20 used Android phones off eBay, then used basic recovery software to restore deleted files on them. In the process the analysts found more than 40,000 stored photos, out of which 1,500 were pictures of children. The Avast blog also states that they retrieved “More than 750 photos of women in various stages of undress” and “more than 250 selfies of what appear to be the previous owner’s manhood.”
In this age of smart phones and incessant social networking, where the lines between public and personal lives are blurred, we are unequivocally vulnerable. Ironically, at the same time, we have never been more careless with personal information.
Jaromir Horejsi, a malware analyst at Avast Software, says the experiment began when an employee accidentally erased his phone's memory, then found he was able to resurrect all his files with a "little searching and an inexpensive purchase". That made him wonder how many other people consider their data permanently gone when it is still retrievable by anyone who gets hold of the phone.
Horejsi says, “As the old saying goes, a picture is worth a thousand words. Now add private Facebook messages that include geo-location, Google searches for open job positions in a specific field, media files, and phone contacts. Put all of these pieces together to complete the puzzle and you have a clear picture of who the former smartphone owner was. Stalkers, enemies, and thieves can abuse personal data to stalk, blackmail and steal people’s identities.”
For a generation that's grown up with technology, we know astonishingly little about how it works. "For years lots of bloggers, including me, have been screaming to people that their data is not safe. Especially with Androids, where a factory reset simply deletes the top layer," says Karthik Kamalakannan, a tech writer who also develops Android, iOS and web applications. Avast compares it to deleting the index of a book: the pointers are removed, but the chapters remain. "It's like sticking a clean paper over it," says Karthik, adding, "so information never gets erased. It just gets overwritten."
While Avast’s experiment also uncovered 750 emails, 250 contact names and one completed loan application, the main reaction — from media and the public — was horror at the idea of explicit selfies going public. This, by the way, isn’t the first scare related to cell phone selfies. Over the past couple of years ‘revenge porn’ has been getting attention. This is sexually-explicit media — a majority of which are selfies — shared online along with personal information, without the consent of the pictured individual. (It’s typically uploaded by an ex- partner or hacker).
Fifteen years ago, this might have worried a niche group of people. Today, if you’re under 30 years of age, it’s about 50 per cent of the people you know in the same age bracket. A set of interviews done with college students and young professionals pegged the number higher, with many of the girls saying that at least eight out of 10 of their friends have taken intimate selfies to send to their boyfriends. One student said it’s become especially common with Snapchat. Some said they take at least one a day. (It seems more frequent with people in long-distance relationships.)
Despite the Avast story, all were still fairly casual about taking these pictures, with one respondent saying, “I think to a large extent, when you take pictures of yourself, a part of you knows there is a risk of someone else seeing it or making a copy of it.” The common solution seems to be a “chin and below” code.
All the people interviewed said they change their phone at least once a year, usually by exchanging it for a newer model. Old phones are sold after erasing the pictures and doing a 'factory reset'. Discussing how important it is to keep your information safe, Suresh Jumani, who runs the Chennai-based Mobile Zone store, cautions against simply looking for the cheapest deal, as in many big multi-brand outlets a large number of floating staff handle old phones. "We have seen people hand in phones without even logging out of Gmail, WhatsApp or Facebook," he says, explaining that his staff are instructed to do a proper master reset, ensure the data is overwritten, and then reformat the phone before selling it to another customer.
Androids tend to be more problematic than iPhones which automatically overwrite data with a factory reset. As an additional precaution Jumani keeps a ledger listing names of the customers so they know who bought and sold each phone. “Chennai’s got the maximum churn rate,” says Jumani, “People change their phones once in eight months, on average, as opposed to Mumbai and Delhi where it’s once a year. The old phones are often bought by students, some of whom sell them again.” So in a lifetime a phone can have anything from one to 10 owners.
Robert Siciliano, an identity theft expert with Hotspot Shield, conducted an experiment similar to Avast's in 2012, when he bought 30 mobile phones and laptops from Craigslist and recovered personal data from 15 of the devices. Noting that the "public is blissfully unaware of the risks posed by their personal information leaking," he says he has also been guilty of selling old devices, but will never do it again. If you're selling your phone, he suggests you "seek out software that promises to rid the device of any data beyond a factory reset."
Or, do what he now does to make sure you’re absolutely safe. “Old phones should be destroyed. With a hammer.”

Monday, July 14, 2014

Most secure operating system !!



This article doesn't present hard facts or evidence; it is based solely on the author's opinion.

With the ever-increasing use of the internet and growing privacy concerns, the first thing that comes to mind is 'security'. Is our operating system secure? Is our privacy maintained? I have been using various operating systems for years.
The security of almost anything, operating systems (OS) included, tends to be a difficult and even controversial issue to examine. The only truly secure operating systems are those that have no contact with the outside world. Any other OS will inevitably have some sort of vulnerability or weakness that can be exploited. In fact, any networked OS can be exposed by careful abuse of its configuration, no exceptions. All the same, here are the top five most secure operating systems in use today.
1. OpenBSD: By default, this is the most secure general-purpose operating system out there. The fact that it suffered only two remote-attack vulnerabilities in the last decade serves as solid evidence of its stringent security and strict auditing policy. Moreover, OpenBSD presents too small an attack surface (since it does not run numerous web applications by default) for hackers to exploit.

2. Linux: Linux is another superior operating system. When customized, it can be set up to be extremely secure. Linux has an impressive vulnerability patching policy.

3. Mac OS X: This Apple-made OS handles user permissions better, but it still contains an indecent number of vulnerabilities and remote exploits in its systems. That, coupled with Apple's slow response to many of its security issues, has landed this operating system at the bottom of this list.

4. Windows Server 2008: Say what you will about a Microsoft operating system's security; at the very least, they know how to improve and they've gone through the very worst security threats that the Internet can dish out. This iteration of Windows Server has improved backup and recovery, user account control, web server (IIS) role, and server role security configuration.

5. Windows Server 2000: This operating system is so secure that it took nearly a decade before Microsoft could come up with a better one. This OS for network servers, notebook computers, and corporate workstations continued to receive monthly security patches even nine years after its release.

The list above was based on technical merits. Surveys by government security bodies, however, paint a somewhat different picture.

The Communications-Electronics Security Group (CESG), the group within the UK Government Communications Headquarters (GCHQ) that assesses operating systems and software for security issues, has found that while no end-user operating system is as secure as they'd like it to be, Ubuntu is the best of the lot. In late 2013, the CESG looked at the security of the most popular end-user operating systems for desktops, smartphones, and tablets. These included Android 4.2, Android 4.2 on Samsung devices, iOS 6, BlackBerry 10.1, Google's Chrome OS 26, Ubuntu 12.04, Windows 7 and 8, Windows 8 RT, and Windows Phone 8. They were judged for their suitability for OFFICIAL-level use according to the UK Government Security Classifications, the UK government's lowest security level.

Ubuntu, however, scores the highest in a direct comparison. Ubuntu 14.04 is Ubuntu's latest Long Term Support (LTS) version, and it's the one recommended for business use. The CESG examined each operating system's security on the following grounds:

● Virtual Private Network (VPN)
● Disk Encryption
● Authentication
● Secure Boot
● Platform Integrity and Application Sandboxing
● Application White listing
● Malicious Code Detection and Prevention
● Security Policy Enforcement
● External Interface Protection
● Device Update Policy
● Event Collection for Enterprise Analysis
● Incident Response

Ubuntu has only three problem areas keeping it from a perfect score, where the others had more. Windows Phone 8 has the most "Significant Risk" items, with two, and BlackBerry 10.1 Corporate has the most "Some Risk" areas, with six. Where Ubuntu could stand improvement is in VPN, Disk Encryption and Secure Boot.

Technically Ubuntu's VPN is good enough, but it hasn't been shown to meet the security requirement by an independent third party. Canonical's current position, from Ubuntu 12.10 onwards, is "to adopt Grub2 as the default boot loader, with support for Secure Boot, but with an ability to turn off secure boot to modify the OS, if required. This gives users and enterprises the best compromise between security and ability to customize after sale." Problems aside, the simple truth is that if security is what you want most from a desktop, smartphone, or tablet operating system, then Ubuntu is what you should be using.

True, security is always a moving target, but year in and year out, Linux-based operating systems are more secure than their competition. As Windows XP's support clock ticks down to the end of its supported life, Ubuntu should be considered for your most security-sensitive desktops. Its smartphone and tablet side, Ubuntu Touch, is still a work in progress. The most secure mobile operating system for now is Android on Samsung devices.

Linux-based systems get a lot of press in IT trade publications. A lot of that press relates to its security characteristics. In fact, some claim "Linux is the most secure operating system (OS) of them all." Such statements are, of course, unsupportable hyperbole; while many Linux distributions may outshine both MS Windows and Apple MacOS X by a significant margin, there's evidence to suggest that most Linux distributions are not up to the standards of FreeBSD, for instance -- let alone OpenBSD, with possibly the best security record of any general-purpose operating system.

That's even leaving out special-purpose OSes such as a number of RTOSes, IBM i, OpenVMS, and TrustedBSD. In the sense that many people tend to think first, foremost, and often only of Linux-based systems when they think of open source OSes (and even think of "Linux" as an OS without distinguishing between distributions), however, they have a point: all else being equal, a popular open source OS has definite security advantages over a popular closed source counterpart. Linux distributions are far from the only open source operating systems, though. Just for the sake of argument, insofar as Linux is emblematic of open source OSes, then, and that MS Windows is emblematic of closed source OSes, it may not be so unrealistic to say "Linux is the most secure OS of them all," where "them all" consists of only two choices -- but the world is not that simple.

"Linux" in the abstract, however -- as a stand-in for the average Linux distribution -- is simply not the most secure OS available by a more comprehensive view of OSes. There are, in fact, some Linux distributions that have been created for research purposes that are intentionally as poorly secured as possible in default configuration.
Furthermore, determining a "most secure" OS is not as straightforward as it might at first sound. One of the most common criteria used by people who don't really understand security, and by those who do understand it but want to manipulate those who don't with misdirection and massaged statistics, is vulnerability discovery rates. Those of us who know better are aware that there's a lot more to security than counting vulnerabilities. Other, more credible criteria, may involve factors such as:
● code quality auditing
● default security configuration
● patch quality and response time
● privilege separation architecture
and a whole lot more.
Even if we ignore any OS that won't, for instance, run a popular browser (such as Firefox), a popular email client (such as Thunderbird), and a popular office suite (such as OpenOffice.org) in a WIMP GUI on an Intel x86 architecture computer, the average Linux distribution doesn't beat every other option in all categories by any stretch. Ubuntu Linux, arguably the Linux distribution with the greatest mindshare, certainly doesn't.
Over the last few years, system security has gained a lot of momentum and software professionals are focusing heavily on this aspect. Linux is often treated as a highly secure operating system. However, the reality is that Linux too has its own share of security flaws. But there is no need to worry. Read this to understand how Linux secures your system.
http://www.linuxuser.co.uk/features/security-in-linux

If you're one of those people inclined to say "Linux is the most secure operating system of all," you should probably rethink that. A much stronger case can be made for the security of some other OSes than the average Linux distribution. Even if it couldn't, the variability of Linux distributions in general, and the differing criteria for the security of an OS that may come into play in comparisons, make such a statement quixotic at best.
The long version of the answer to the question "Is Linux the most secure OS?" is that it depends on what OSes you're comparing, or whether you're comparing specific OSes at all (instead of something like "open source vs. closed source"), and for what purposes you mean to evaluate the security of an operating system. If you make claims like that, someone who knows better will have an easy way to discredit your argument. Be more specific, not only in your arguments, but in your thinking -- because it's too easy to form bad habits that may lead to making bad decisions about your own security, and because giving people inaccurate information about security like that can create real problems. If you mean that all else being equal popular open source OSes are more secure than popular closed source OSes, say so. If you mean that Ubuntu's default configuration is more secure than MS Windows Vista's, say so. Just saying "Linux is the most secure operating system of all," on the other hand, is imprecise and inaccurate.