
How to Ping a Host Using PHP

        In this post we will learn to ping a host and check the connectivity between hosts in a network. Ping is usually used to check the connectivity between two hosts and to get the strength of the connection. Ordinarily, its syntax is PING [hostname or IP address] in the command prompt.
Here we will not use any additional plugins.

Simply define a function `getconnection` with the argument `$domain` in any class.

// Ping a host using PHP (opens a TCP connection to port 80 with a 10 second timeout)
function getconnection($domain){
    $starttime = microtime(true);
    // @ suppresses the warning fsockopen() raises when the host is unreachable
    $file      = @fsockopen($domain, 80, $errno, $errstr, 10);
    $stoptime  = microtime(true);
    $status    = 0;

    if (!$file) {
        $status = -1;  // Site is down
    } else {
        fclose($file);
        $status = ($stoptime - $starttime) * 1000;  // round-trip time in milliseconds
        $status = floor($status);
    }
    return $status;
}

Index.php

<?php
 // include the file where getconnection() is defined before calling it
 $domain = 'www.google.com';
 $ping   = getconnection($domain);
 echo $domain . ' - ' . $ping;
?>
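
As a quick usage example, the same function can check several hosts in one loop. This is only a minimal sketch (the host list here is an arbitrary assumption); it treats -1 as "down" and any other value as the response time in milliseconds.

<?php
 // minimal sketch: reuse getconnection() from above for a list of hosts
 $hosts = array('www.google.com', 'www.example.com', 'localhost');

 foreach ($hosts as $host) {
     $ping = getconnection($host);
     if ($ping == -1) {
         echo $host . ' - down' . PHP_EOL;
     } else {
         echo $host . ' - ' . $ping . ' ms' . PHP_EOL;
     }
 }
?>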

To Download Project Source click here





How to Create a Bar Chart Using OPEN FLASH CHART in PHP



 
     Hello friends, here we will learn to create a Flash chart using PHP. For this purpose we will use OPEN FLASH CHART. OPEN FLASH CHART is an
open-source charting library and provides more than 30 different chart types like Bar Chart, Line Chart, Spider Chart, Pie Chart, 3D Chart etc.

      OPEN FLASH CHART uses Flash to display the chart and imports the data and config parameters (chart width, height, legend, values of the X-axis and Y-axis, background etc.), while JavaScript (swfobject.js)
is used to attach the chart to the web page.

We will use OPEN FLASH CHART version 2.
First of all we will create the index.php page as below.

Index.php

<script type="text/javascript" src="js/swfobject.js"></script>
<script type="text/javascript">
  // embed open-flash-chart.swf into the "TOTAL" div and load its data from total.txt
  swfobject.embedSWF(
      "open-flash-chart.swf", "TOTAL",
      "100%", "350", "9.0.0", "expressInstall.swf",
      {"data-file": "total.txt"});
</script>
<body>
  <div id="TOTAL"></div>
</body>


We have referenced total.txt, in which our chart parameters are declared, and embedded our Flash (.swf) file with the given width and height using swfobject.

Our parameters in total.txt are as below.

Total.txt

  {
  "bg_colour":  "#ffffff",
  "title":{
    "text":  "",
    "style": "{font-size: 12px;}"
  },

  "x_legend":{
 "text":   "Year",
 "style": "{color: #736AFF;}"
  },

  "y_legend":{
    "text": "Amount (Millions)",
    "style": "{color: #736AFF;}"
  },

  "elements":[
    {
      "type":      "bar_3d",
      "alpha":     0.99,
      "colour":    "#2D89EF",
      "text":      "Comapny 1",
      "tip": "Comapny 1
Amount: #val#",
      "font-size": 10,
   "on-show": {"type": "grow-up", "cascade":5, "delay":0.5},
      "values" :   [1229.869024,934,1312,567.508,3234]
    },
    {
      "type":      "bar_3d",
      "alpha":     0.99,
      "tip": "Comapny 2
Amount: #val#",
      "colour":    "#CD950C",
      "text":      "Comapny 2",
      "font-size": 10,
   "on-show": {"type": "grow-up", "cascade":5, "delay":1},
      "values" :   [781,995,241,222,344]
    },
   {
      "axis":      "right",
      "type":      "line",
      "colour":    "#736AFF",
      "text":      "Ratio",
      "font-size": 10,
      "width":     1,
   "dot-style": {
  "type":"anchor", "sides":4,
  "alpha":1, "hollow":true,
  "background-colour":"#ffffff", "background-alpha": 0.4,
  "width":1, "tip":"#val# %" },

      "values" :   [22.80,75.87,32.2,71,69]
    }

  ],

  "x_axis":{
    "stroke":      0,
    "tick_height": 9,
    "colour":      "#808080",
    "grid-visible":  false,
    "steps": 5,
    "labels":{
    "steps":1,
    "labels":[    
     "2007-08",
     "2008-09",
     "2009-10",
     "2010-11",
     "2011-12"
     ]
   },
 "3d":         15
   },

  "y_axis":{
    "stroke":      0.2,
    "tick_length": 100,
    "colour":      "#808080",
    "grid-colour":"#ccddcc",
    "offset":      0,
    "steps":    500,
    "max":         2500
  },
  "y_axis_right":{  
    "tick-length":   5,
    "stroke":      0.9,
    "colour":        "#d0d0d0",
    "min":           0,
    "max":           100,
    "steps":    20,
 "grid-visible":  false

  }

}
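
If you would rather generate the data file on the server than maintain a static total.txt, a minimal PHP sketch like the one below could emit the same kind of structure with json_encode. The script name chart_data.php and the trimmed-down parameters are only assumptions for illustration; you would then point "data-file" at this script instead of total.txt.

<?php
// chart_data.php - hypothetical sketch: build the chart data dynamically
header('Content-Type: application/json');

$chart = array(
    'bg_colour' => '#ffffff',
    'title'     => array('text' => '', 'style' => '{font-size: 12px;}'),
    'x_legend'  => array('text' => 'Year', 'style' => '{color: #736AFF;}'),
    'y_legend'  => array('text' => 'Amount (Millions)', 'style' => '{color: #736AFF;}'),
    'elements'  => array(
        array(
            'type'   => 'bar_3d',
            'colour' => '#2D89EF',
            'text'   => 'Company 1',
            'tip'    => 'Company 1<br>Amount: #val#',
            'values' => array(1229.869024, 934, 1312, 567.508, 3234),
        ),
    ),
    'x_axis' => array(
        'labels' => array('labels' => array('2007-08', '2008-09', '2009-10', '2010-11', '2011-12')),
    ),
    'y_axis' => array('min' => 0, 'max' => 2500, 'steps' => 500),
);

echo json_encode($chart);
?>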

Output
We have added two bar series and a single line series plotted against a second Y-axis.

Our Flash chart will look as below.


To Download Source click here









Powerful MySQL Performance Tips



          MySQL is a widely used SQL database server. It makes it easy and secure to manage data, and there are
good GUI interfaces and many additional tools available that help with maintaining the server. Sometimes, when working with millions of records and a huge amount of data, there are important tips that must be followed to improve SQL operations, and developers often ignore them. Users then face problems like slow SQL operations when working with large amounts of data.
So here we have some important tips to improve the performance of SQL operations.

         MySQL is a client/server implementation that consists of a server daemon and many different client programs and libraries. Here are very useful tips for all MySQL DBAs, suggested by MySQL community experts.


1. Kaj (Most Excellent Obvious Facilitator): index stuff.
2. Ronald: don't index everything
3. Use benchmarking
4. Minimize traffic by fetching only what you need.
5. Paging/chunked data retrieval to limit
6. Don't use SELECT *
7. Be wary of lots of small quick queries if a longer query can be more efficient
8. Use EXPLAIN to profile the query execution plan (see the sketch after this list)
9. Use Slow Query Log (always have it on!)
10. LIMIT m,n may not be as fast as it sounds
11. Don't use ORDER BY RAND() if you have > ~2K records
12. Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
13. avoid wildcards at the start of LIKE queries
14. avoid correlated subqueries in SELECT and WHERE clauses (try to avoid IN)
15. config params.
16. no calculated comparisons -- isolate indexed columns
17. innodb_flush_log_at_trx_commit=0 can help slave lag
18. ORDER BY and LIMIT work best with equalities and covered indexes
19. isolate workloads don't let administrative work interfere with customer performance. (ie backups)
20. use optimistic locking, not pessimistic locking. try to use shared lock, not exclusive lock. share mode vs. FOR UPDATE
21. use row-level instead of table-level locking for OLTP workloads
22. Know your storage engines and what performs best for your needs, know that different ones exist.
23. Optimize for data types, use consistent data types. Use PROCEDURE ANALYSE() to help determine if you need less
24. separate text/blobs from metadata, don't put text/blobs in results if you don't need them
25. if you can, compress text/blobs
26. compress static data
27. don't back up static data as often
28. derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs w/out sorting them.
(self-join can speed up a query if 1st part finds the IDs and use it to fetch the rest)
29. enable and increase the query and buffer caches if appropriate
30. ALTER TABLE...ORDER BY can take chronological data and re-order it by a different field
31. InnoDB ALWAYS keeps the primary key as part of each index, so do not make the primary key very
large, be careful of redundant columns in an index, and this can make the query faster
32. Do not duplicate indexes
33. Utilize different storage engines on master/slave ie, if you need fulltext indexing on a table.
34. BLACKHOLE engine and replication is much faster than FEDERATED tables for things like logs.
35. Design sane query schemas. don't be afraid of table joins, often they are faster than denormalization
36. Don't use boolean flags
37. Use a clever key and ORDER BY instead of MAX
38. Keep the database host as clean as possible. Do you really need a windowing system on that server?
39. Utilize the strengths of the OS
40. Hire a MySQL (tm) Certified DBA
41. Know that there are many consulting companies out there that can help, as well as MySQL's
      Professional Services.
42. Config variables & tips:
       1. use one of the supplied config files
       2. key_buffer, unix cache (leave some RAM free), per-connection variables, innodb memory
           variables
       3. be aware of global vs. per-connection variables
       4. check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up)
       5. be aware of swapping, esp. with Linux ("swappiness"); bypass the OS file cache for innodb data files with innodb_flush_method=O_DIRECT if possible (this is also OS specific)
       6. defragment tables, rebuild indexes, do table maintenance
       7. If you use innodb_flush_log_at_trx_commit=1, use a battery-backed write cache controller
       8. more RAM is good, and so is faster disk speed
       9. use 64-bit architectures

43. Know when to split a complex query and join smaller ones
44. Debugging sucks, testing rocks!
45. Delete small amounts at a time if you can
46. Archive old data -- don't be a pack-rat! 2 common engines for this are ARCHIVE tables and MERGE tables.
47. use INET_ATON and INET_NTOA for IP addresses, not char or varchar (see the sketch after this list)
48. make it a habit to REVERSE() email addresses, so you can easily search domains
49. Use LOAD DATA instead of INSERT.
50. increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable)
51. look up memory tuning parameter for on-insert caching
52. increase temp table size in a data warehousing environment (default is 32Mb) so it doesn't write to disk (also constrained by max_heap_table_size, default 16Mb)
53. Normalize first, and denormalize where appropriate.
54. Databases are not spreadsheets, even though Access really really looks like one. Then again, Access isn't a real database.
55. In 5.1 BOOL/BIT NOT NULL type is 1 bit, in previous versions it's 1 byte.
56. A NULL data type can take more room to store than NOT NULL
57. Choose appropriate character sets & collations -- UTF16 will store each character in 2 bytes, whether it needs it or not, latin1 is faster than UTF8.
58. make similar queries consistent so cache is used
59. Have good SQL query standards
60. Don't use deprecated features
61. Use Triggers wisely
62. Run in SQL_MODE=STRICT to help identify warnings
63. Turning OR on multiple index fields (<5.0) into UNION may speed things up (with LIMIT), after 5.0 the index_merge should pick stuff up.
64. /tmp dir on battery-backed write cache
65. consider battery-backed RAM for innodb logfiles reference points can be calculated.
67. as your data grows, indexing may change (cardinality and selectivity change). Structuring may want to change. Make your schema as modular as your code. Make your code able to scale. Plan and embrace change, and get developers to do the same.
68. pare down cron scripts
69. create a test environment
70. try out a few schemas and storage engines in your test environment before picking one.
71. Use HASH indexing for indexing across columns with similar data prefixes
72. Use myisam_pack_keys for int data
73. Don't use COUNT * on Innodb tables for every search, do it a few times and/or summary tables, or if you need it for the total # of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS().
74. use --safe-updates for client
75. Redundant data is redundant
76. Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first (see the sketch after this list)
77. use groupwise maximum instead of subqueries
78. be able to change your schema without ruining functionality of your code
79. source control schema and config files
80. for LVM innodb backups, restore to a different instance of MySQL so Innodb can roll forward
81. use multi_query if appropriate to reduce round-trips
82. partition appropriately
83. partition your database when you have real data
84. Don't use DISTINCT when you have or could use GROUP BY
85. Use proper data partitions
86. For Cluster. Start thinking about Cluster *before* you need them
87. Insert performance
88. Batch INSERT and REPLACE
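
To make a few of these tips concrete, here is a small PHP/mysqli sketch for tip 8 (use EXPLAIN to profile the query execution plan), which also follows tip 6 (don't use SELECT *). It is only an illustrative sketch: the credentials, the testdb database and the orders table are assumptions, not part of the tips above.

<?php
// Hypothetical example for tip 8: check the execution plan before shipping a query.
$mysqli = new mysqli('localhost', 'user', 'password', 'testdb');
if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}

// Tip 6: select only the columns you actually need, not SELECT *
$sql = "SELECT id, total FROM orders WHERE customer_id = 42";

// Ask MySQL how it plans to execute the query
$result = $mysqli->query('EXPLAIN ' . $sql);
while ($row = $result->fetch_assoc()) {
    // key = the index MySQL chose, rows = estimated rows it will examine
    echo $row['table'] . ' | key: ' . $row['key'] . ' | rows: ' . $row['rows'] . PHP_EOL;
}
$result->free();
$mysqli->close();
?>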
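
In the same spirit, a sketch for tip 47 (store IP addresses with INET_ATON and INET_NTOA instead of char or varchar). The visitors table and its columns are assumptions for the example; $mysqli is the connection opened above.

<?php
// Hypothetical example for tip 47: keep IPv4 addresses as INT UNSIGNED.
// Assumed table: CREATE TABLE visitors (id INT AUTO_INCREMENT PRIMARY KEY, ip INT UNSIGNED NOT NULL);
$ip = '192.168.1.25';

// Convert the dotted string to an integer on the way in ...
$stmt = $mysqli->prepare('INSERT INTO visitors (ip) VALUES (INET_ATON(?))');
$stmt->bind_param('s', $ip);
$stmt->execute();
$stmt->close();

// ... and back to a readable string on the way out
$result = $mysqli->query('SELECT INET_NTOA(ip) AS ip FROM visitors ORDER BY id DESC LIMIT 1');
$row = $result->fetch_assoc();
echo 'Last stored IP: ' . $row['ip'] . PHP_EOL;
?>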
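
And finally a sketch for tip 76 (INSERT ... ON DUPLICATE KEY UPDATE saves a separate SELECT). The page_hits table with a primary key on page is again only an assumption for illustration.

<?php
// Hypothetical example for tip 76: upsert a counter without checking first.
// Assumed table: CREATE TABLE page_hits (page VARCHAR(100) PRIMARY KEY, hits INT NOT NULL DEFAULT 0);
$page = '/index.php';

$stmt = $mysqli->prepare(
    'INSERT INTO page_hits (page, hits) VALUES (?, 1)
     ON DUPLICATE KEY UPDATE hits = hits + 1'
);
$stmt->bind_param('s', $page);
$stmt->execute();   // inserts the row the first time, increments the counter afterwards
$stmt->close();
?>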